May 2, 2008
The Cognitive Age
By DAVID BROOKS
If you go into a good library, you will find thousands of books on globalization. Some will laud it. Some will warn about its dangers. But they’ll agree that globalization is the chief process driving our age. Our lives are being transformed by the increasing movement of goods, people and capital across borders.
The globalization paradigm has led, in the political arena, to a certain historical narrative: There were once nation-states like the U.S. and the European powers, whose economies could be secured within borders. But now capital flows freely. Technology has leveled the playing field. Competition is global and fierce.
New dynamos like India and China threaten American dominance thanks to their cheap labor and manipulated currencies. Now, everything is made abroad. American manufacturing is in decline. The rest of the economy is threatened.
Hillary Clinton summarized the narrative this week: “They came for the steel companies and nobody said anything. They came for the auto companies and nobody said anything. They came for the office companies, people who did white-collar service jobs, and no one said anything. And they came for the professional jobs that could be outsourced, and nobody said anything.”
The globalization paradigm has turned out to be very convenient for politicians. It allows them to blame foreigners for economic woes. It allows them to pretend that by rewriting trade deals, they can assuage economic anxiety. It allows them to treat economic and social change as a great mercantilist competition, with various teams competing for global supremacy, and with politicians starring as the commanding generals.
But there’s a problem with the way the globalization paradigm has evolved. It doesn’t really explain most of what is happening in the world.
Globalization is real and important. It’s just not the central force driving economic change. Some Americans have seen their jobs shipped overseas, but global competition has accounted for a small share of job creation and destruction over the past few decades. Capital does indeed flow around the world. But as Pankaj Ghemawat of the Harvard Business School has observed, 90 percent of fixed investment around the world is domestic. Companies open plants overseas, but that’s mainly so their production facilities can be close to local markets.
Nor is the globalization paradigm even accurate when applied to manufacturing. Instead of fleeing to Asia, U.S. manufacturing output is up over recent decades. As Thomas Duesterberg of Manufacturers Alliance/MAPI, a research firm, has pointed out, the U.S.’s share of global manufacturing output has actually increased slightly since 1980.
The chief force reshaping manufacturing is technological change (hastened by competition with other companies in Canada, Germany or down the street). Thanks to innovation, manufacturing productivity has doubled over two decades. Employers now require fewer but more highly skilled workers. Technological change affects China just as it does America. William Overholt of the RAND Corporation has noted that between 1994 and 2004 the Chinese shed 25 million manufacturing jobs, 10 times more than the U.S.
The central process driving this is not globalization. It’s the skills revolution. We’re moving into a more demanding cognitive age. In order to thrive, people are compelled to become better at absorbing, processing and combining information. This is happening in localized and globalized sectors, and it would be happening even if you tore up every free trade deal ever inked.
The globalization paradigm emphasizes the fact that information can now travel 15,000 miles in an instant. But the most important part of information’s journey is the last few inches — the space between a person’s eyes or ears and the various regions of the brain. Does the individual have the capacity to understand the information? Does he or she have the training to exploit it? Are there cultural assumptions that distort the way it is perceived?
The globalization paradigm leads people to see economic development as a form of foreign policy, as a grand competition between nations and civilizations. These abstractions, called “the Chinese” or “the Indians,” are doing this or that. But the cognitive age paradigm emphasizes psychology, culture and pedagogy — the specific processes that foster learning. It emphasizes that different societies are being stressed in similar ways by increased demands on human capital. If you understand that you are living at the beginning of a cognitive age, you’re focusing on the real source of prosperity and understand that your anxiety is not being caused by a foreigner.
It’s not that globalization and the skills revolution are contradictory processes. But which paradigm you embrace determines which facts and remedies you emphasize. Politicians, especially Democratic ones, have fallen in love with the globalization paradigm. It’s time to move beyond it.
May 27, 2008
Curriculum Designed to Unite Art and Science
By NATALIE ANGIER
Senator Barack Obama likes to joke that the battle for the Democratic presidential nomination has been going on so long, babies have been born, and they’re already walking and talking.
That’s nothing. The battle between the sciences and the humanities has been going on for so long, its early participants have stopped walking and talking, because they’re already dead.
It’s been some 50 years since the physicist-turned-novelist C.P. Snow delivered his famous “Two Cultures” lecture at the University of Cambridge, in which he decried the “gulf of mutual incomprehension,” the “hostility and dislike” that divided the world’s “natural scientists,” its chemists, engineers, physicists and biologists, from its “literary intellectuals,” a group that, by Snow’s reckoning, included pretty much everyone who wasn’t a scientist. His critique set off a frenzy of hand-wringing that continues to this day, particularly in the United States, as educators, policymakers and other observers bemoan the Balkanization of knowledge, the scientific illiteracy of the general public and the chronic academic turf wars that are all too easily lampooned.
Yet a few scholars of thick dermis and pep-rally vigor believe that the cultural chasm can be bridged and the sciences and the humanities united into a powerful new discipline that would apply the strengths of both mindsets, the quantitative and qualitative, to a wide array of problems. Among the most ambitious of these exercises in fusion thinking is a program under development at Binghamton University in New York called the New Humanities Initiative.
Jointly conceived by David Sloan Wilson, a professor of biology, and Leslie Heywood, a professor of English, the program is intended to build on some of the themes explored in Dr. Wilson’s evolutionary studies program, which has proved enormously popular with science and nonscience majors alike, and which he describes in the recently published “Evolution for Everyone.” In Dr. Wilson’s view, evolutionary biology is a discipline that, to be done right, demands a crossover approach, the capacity to think in narrative and abstract terms simultaneously, so why not use it as a template for emulsifying the two cultures generally?
“There are more similarities than differences between the humanities and the sciences, and some of the stereotypes have to be altered,” Dr. Wilson said. “Darwin, for example, established his entire evolutionary theory on the basis of his observations of natural history, and most of that information was qualitative, not quantitative.”
As he and Dr. Heywood envision the program, courses under the New Humanities rubric would be offered campuswide, in any number of departments, including history, literature, philosophy, sociology, law and business. The students would be introduced to basic scientific tools like statistics and experimental design and to liberal arts staples like the importance of analyzing specific texts or documents closely, identifying their animating ideas and comparing them with the texts of other times or other immortal minds.
One goal of the initiative is to demystify science by applying its traditional routines and parlance in nontraditional settings — graphing Jane Austen, as the title of an upcoming book felicitously puts it. “If you do statistics in the context of something you’re interested in and are good at, then it becomes an incremental as opposed to a saltational jump,” Dr. Wilson said. “You see that the mechanics are not so hard after all, and once you understand why you’re doing the statistics in the first place, it ends up being simple nuts and bolts stuff, nothing more.”
To illustrate how the New Humanities approach to scholarship might work, Dr. Heywood cited her own recent investigations into the complex symbolism of the wolf, a topic inspired by a pet of hers that was seven-eighths wolf. “He was completely different from a dog,” she said. “He was terrified of things in the human environment that dogs are perfectly at ease with, like the swishing sound of a jogging suit, or somebody wearing a hat, and he kept his reserve with people, even me.”
Dr. Heywood began studying the association between wolves and nature, and how people’s attitudes toward one might affect their regard for the other. “In the standard humanities approach, you compile and interpret images of wolves from folkloric history, and you analyze previously published texts about wolves,” and that’s pretty much it, Dr. Heywood said. Seeking a more full-bodied understanding, she delved into the scientific literature, studying wolf ecology, biology and evolution. She worked with Dr. Wilson and others to design a survey to gauge people’s responses to three images of a wolf: one of a classic beautiful wolf, another of a hunter holding a dead wolf, the third of a snarling, aggressive wolf.
It’s an implicit association test, designed to gauge subliminal attitudes by measuring latency of response between exposure to an image on a screen and the pressing of a button next to words like beautiful, frightening, good, wrong.
“These firsthand responses give me more to work with in understanding how people read wolves, as opposed to seeing things through other filters and published texts,” Dr. Heywood said.
Combining some of her early survey results with the wealth of wolf imagery culled from cultures around the world, Dr. Heywood finds preliminary support for the provocative hypothesis that humans and wolves may have co-evolved.
“They were competing predators that occupied the same ecological niche as we did,” she said, “but it’s possible that we learned some of our social and hunting behaviors from them as well.” Hence, our deeply conflicted feelings toward wolves — as the nurturing mother to Romulus and Remus, as the vicious trickster disguised as Little Red Riding Hood’s grandmother.
In designing the New Humanities initiative, Dr. Wilson is determined to avoid romanticizing science or presenting it as the ultimate arbiter of meaning, as other would-be integrationists and ardent Darwinists have done.
“You can study music, dance, narrative storytelling and artmaking scientifically, and you can conclude that yes, they’re deeply biologically driven, they’re essential to our species, but there would still be something missing,” he said, “and that thing is an appreciation for the work itself, a true understanding of its meaning in its culture and context.”
Other researchers who have reviewed the program prospectus have expressed their enthusiasm, among them George Levine, an emeritus professor of English at Rutgers University, a distinguished scholar in residence at New York University and author of “Darwin Loves You.” Dr. Levine has criticized many recent attempts at so-called Literary Darwinism, the application of evolutionary psychology ideas to the analysis of great novels and plays. What it usually amounts to is reimagining Emma Bovary or Emma Woodhouse as a young, fecund female hunter-gatherer circa 200,000 B.C.
“When you maximize the importance of biological forces and minimize culture, you get something that doesn’t tell you a whole lot about the particularities of literature,” Dr. Levine said. “What you end up with, as far as I’m concerned, is banality.” Reading the New Humanities proposal, by contrast, “I was struck by how it absolutely refused the simple dichotomy,” he said.
“There is a kind of basic illiteracy on both sides,” he added, “and I find it a thrilling idea that people might be made to take pleasure in crossing the border.”
May 30, 2008
Best Is the New Worst
By SUSAN JACOBY
PITY the poor word “elite,” which simply means “the best” as an adjective and “the best of a group” as a noun. What was once an accolade has turned poisonous in American public life over the past 40 years, as both the left and the right have twisted it into a code word meaning “not one of us.” But the newest and most ominous wrinkle in the denigration of all things elite is that the slur is being applied to knowledge itself.
Senator Hillary Clinton’s use of the phrase “elite opinion” to dismiss the near unanimous opposition of economists to her proposal for a gas tax holiday was a landmark in the use of elite to attack expertise supposedly beyond the comprehension of average Americans. One might as well say that there is no point in consulting musicians about music or ichthyologists about fish.
The assault on “elite” did not begin with politicians, although it does have political antecedents in sneers directed at “eggheads” during the anti-Communist crusades of the 1950s. The broader cultural perversion of its meaning dates from the late 1960s, when the academic left pinned the label on faculty members who resisted the establishment of separate departments for what were then called “minority studies.” In this case, two distinct faculty groups were tarred with elitism — those who wanted to incorporate black and women’s studies into the core curriculum, and those who thought that blacks and women had produced nothing worthy of study. Instead of elitist, the former group should have been described as “inclusionary” and the latter as “bigoted.”
The second stage of elite-bashing was conceived by the cultural and political right. Conservative intellectuals who rose to prominence during the Reagan administration managed the neat trick of reversing the ’60s usage of “elite” by applying it as a slur to the left alone. “Elite,” often rendered in the plural, became synonymous with “limousine liberals” who opposed supposedly normative American values. That the right-wing intellectual establishment also constituted a powerful elite was somehow obscured.
“Elite” and “elitist” do not, in a dictionary sense, mean the same thing. An elitist is someone who does believe in government by an elite few — an anti-democratic philosophy that has nothing to do with elite achievement. But the terms have become so conflated that Americans have come to consider both elite and elitist synonyms for snobbish.
All the older forms of elite-bashing have now devolved into a kind of aggressive denial of the threat to American democracy posed by public ignorance.
During the past few months, I have received hundreds of e-mail messages calling me an elitist for drawing attention to America’s knowledge deficit. One of the most memorable came from a man who objected to my citation of a statistic, from a 2006 National Geographic-Roper survey, indicating that nearly two-thirds of Americans age 18 to 24 cannot find Iraq on a map. “Why should I care whether my mechanic knows where Iraq is, as long as he knows how to fix my car?” the man asked.
But what could be more elitist than the idea that a mechanic cannot be expected to know the location of a country where thousands of Americans of his own generation are fighting and dying?
Another peculiar new use of “elitist” (often coupled with “Luddite”) is its application to any caveats about the Internet as a source of knowledge. After listening to one of my lectures, a college student told me that it was elitist to express alarm that one in four Americans, according to the National Constitution Center, cannot name any First Amendment rights or that 62 percent cannot name the three branches of government. “You don’t need to have that in your head,” the student said, “because you can just look it up on the Web.”
True, but how can an information-seeker know what to look for if he or she does not know that the Bill of Rights exists? There is no point-and-click formula for accumulating a body of knowledge needed to make sense of isolated facts.
It is past time to retire the sliming of elite knowledge and education from public discourse. Do we want mediocre schools or the best education for our children? If we need an operation, do we want an ordinary surgeon or the best, most elite surgeon available?
America was never imagined as a democracy of dumbness. The Declaration of Independence and the Constitution were written by an elite group of leaders, and although their dream was limited to white men, it held the seeds of a future in which anyone might aspire to the highest — let us say it out loud, elite — level of achievement.
Susan Jacoby is the author of “The Age of American Unreason.”
June 27, 2008
Your Brain Lies to You
By SAM WANG and SANDRA AAMODT
FALSE beliefs are everywhere. Eighteen percent of Americans think the sun revolves around the earth, one poll has found. Thus it seems slightly less egregious that, according to another poll, 10 percent of us think that Senator Barack Obama, a Christian, is instead a Muslim. The Obama campaign has created a Web site to dispel misinformation. But this effort may be more difficult than it seems, thanks to the quirky way in which our brains store memories — and mislead us along the way.
The brain does not simply gather and stockpile information as a computer’s hard drive does. Facts are stored first in the hippocampus, a structure deep in the brain about the size and shape of a fat man’s curled pinkie finger. But the information does not rest there. Every time we recall it, our brain writes it down again, and during this re-storage, it is also reprocessed. In time, the fact is gradually transferred to the cerebral cortex and is separated from the context in which it was originally learned. For example, you know that the capital of California is Sacramento, but you probably don’t remember how you learned it.
This phenomenon, known as source amnesia, can also lead people to forget whether a statement is true. Even when a lie is presented with a disclaimer, people often later remember it as true.
With time, this misremembering only gets worse. A false statement from a noncredible source that is at first not believed can gain credibility during the months it takes to reprocess memories from short-term hippocampal storage to longer-term cortical storage. As the source is forgotten, the message and its implications gain strength. This could explain why, during the 2004 presidential campaign, it took some weeks for the Swift Boat Veterans for Truth campaign against Senator John Kerry to have an effect on his standing in the polls.
Even if they do not understand the neuroscience behind source amnesia, campaign strategists can exploit it to spread misinformation. They know that if their message is initially memorable, its impression will persist long after it is debunked. In repeating a falsehood, someone may back it up with an opening line like “I think I read somewhere” or even with a reference to a specific source.
In one study, a group of Stanford students was exposed repeatedly to an unsubstantiated claim taken from a Web site that Coca-Cola is an effective paint thinner. Students who read the statement five times were nearly one-third more likely than those who read it only twice to attribute it to Consumer Reports (rather than The National Enquirer, their other choice), giving it a gloss of credibility.
Adding to this innate tendency to mold information we recall is the way our brains fit facts into established mental frameworks. We tend to remember news that accords with our worldview, and discount statements that contradict it.
In another Stanford study, 48 students, half of whom said they favored capital punishment and half of whom said they opposed it, were presented with two pieces of evidence, one supporting and one contradicting the claim that capital punishment deters crime. Both groups were more convinced by the evidence that supported their initial position.
Psychologists have suggested that legends propagate by striking an emotional chord. In the same way, ideas can spread by emotional selection, rather than by their factual merits, encouraging the persistence of falsehoods about Coke — or about a presidential candidate.
Journalists and campaign workers may think they are acting to counter misinformation by pointing out that it is not true. But by repeating a false rumor, they may inadvertently make it stronger. In its concerted effort to “stop the smears,” the Obama campaign may want to keep this in mind. Rather than emphasize that Mr. Obama is not a Muslim, for instance, it may be more effective to stress that he embraced Christianity as a young man.
Consumers of news, for their part, are prone to selectively accept and remember statements that reinforce beliefs they already hold. In a replication of the study of students’ impressions of evidence about the death penalty, researchers found that even when subjects were given a specific instruction to be objective, they were still inclined to reject evidence that disagreed with their beliefs.
In the same study, however, when subjects were asked to imagine their reaction if the evidence had pointed to the opposite conclusion, they were more open-minded to information that contradicted their beliefs. Apparently, it pays for consumers of controversial news to take a moment and consider that the opposite interpretation may be true.
In 1919, Justice Oliver Wendell Holmes of the Supreme Court wrote that “the best test of truth is the power of the thought to get itself accepted in the competition of the market.” Holmes erroneously assumed that ideas are more likely to spread if they are honest. Our brains do not naturally obey this admirable dictum, but by better understanding the mechanisms of memory perhaps we can move closer to Holmes’s ideal.
Sam Wang, an associate professor of molecular biology and neuroscience at Princeton, and Sandra Aamodt, a former editor in chief of Nature Neuroscience, are the authors of “Welcome to Your Brain: Why You Lose Your Car Keys but Never Forget How to Drive and Other Puzzles of Everyday Life.”
Thanks to the Internet, news of the murder and beheading of Greyhound passenger Tim McLean spread around the world in hours.
As a Canwest report details, it wasn't long before copies of the police scanner tape were online, along with allegations of drug use by bus passengers, cannibalism and various other things.
Looked at purely from the perspective of what technology can do, it was quite remarkable.
The only problem is that mixed with the facts was a small tissue of lies. For instance, one product of the blogosphere was a purported link between McLean and his accused killer. It was apparently the work of his grieving friends attempting, as a police spokesman put it, "to rationalize the irrational." Still, it was passed around as the latest revelation.
This is hardly the first time unreliable information went out over the Internet, of course. During the last U.S. presidential election, influential bloggers touted selected exit polls, fantasized a John Kerry win, and according to Bloomberg wires, reversed the U.S. stock market just before markets closed.
The episode is therefore a cautionary tale about the Internet's amazing capability to distribute news. It is a reminder to the reader that, for all the mainstream media's faults, such as its built-in biases and the lumbering pace at which it moves, and for all the spectacular successes that amateur news gatherers occasionally score, the mainstream media does at least scrutinize what comes in for general believability.
It's not a perfect filter. Mistakes get through. Sometimes we accept a government's word that a Middle East dictator is concealing weapons of mass destruction, or perversely refuse to accept that a war is being won, because victory doesn't fit the narrative we thought was unfolding.
But, the mainstream media is also accountable. Mistakes are corrected. In the end, hand-wringing mea culpas follow unwelcome but irrefutable revelations. News hounds may even have their noses so close to the ground that they fail to notice they're on a false trail, but the mainstream media's intention is always to be accurate, and balanced.
And importantly, now that the mainstream media has moved heavily into the Internet (check the Herald's webpage, for instance), readers can be sure those same checks and balances apply to news coverage in the digital age.
In short, there's much value to readers in the judgment of an experienced news team.
However, there is also a good-news side to the Internet.
Sure, there's a problem with bad information. However, not for centuries have individuals had such a podium. For, technology's wheel has turned full circle, in terms of what it takes to publish information.
Reading up on how the Anglo-American tradition of free speech came about, I came across many examples of one-man proprietorships in the early days: The fellow who owned the Gutenberg press was also reporter, editor, publisher, printer, delivery boy and business manager.
No surprise, these people did what they did because they had a point to make.
Generally, it was a religious one. Parenthetically, one of the things least understood now is how large a blood-debt the secular right of free speech owes to stubborn Protestant printers of the 16th and 17th centuries, who fought authorities to present truth as they saw it.
But, it took the later development of the 19th century mass market and the emergence of newspapers as profit-making businesses rather than proselytizing broadsheets, to spark today's aspiration to objectivity, balance and credibility.
This was when the machinery of production went beyond the physical capacity of one man to reach a mass audience. Meanwhile, political pressure made even the most ambitious press barons aware the appearance of telling both sides of the story was a commercial asset.
As the decades went by, there was less and less room for lone rangers.
But, the gift of the Internet and the PC is that once more, one person can do it all and this time, speak to millions. (In theory, anyway. Actually doing it is another matter.)
Yes, it means the reader seeking truth must be careful where he looks for it, and some of the McLean material shows just what kind of dreck there is.
Still, when Orwellian machinery of social control exists, it's good to know there are also ways for individuals to do an end run around it. If that means truth and error must temporarily coexist, well, 'twas ever thus.
And -- here comes the pitch -- it also shows why subscribing to a good newspaper such as the Herald is still worth the buck.
Getting it first is great.
Getting it right is what really matters. This we work very, very hard to accomplish.
September 16, 2008
Gut Instinct’s Surprising Role in Math
By NATALIE ANGIER
You are shopping in a busy supermarket and you’re ready to pay up and go home. You perform a quick visual sweep of the checkout options and immediately start ramming your cart through traffic toward an appealingly unpeopled line halfway across the store. As you wait in line and start reading nutrition labels, you can’t help but calculate that the 529 calories contained in a single slice of your Key lime cheesecake amounts to one-fourth of your recommended daily caloric allowance and will take you 90 minutes on the elliptical to burn off and you’d better just stick the thing behind this stack of Soap Opera Digests and hope a clerk finds it before it melts.
One shopping spree, two distinct number systems in play. Whenever we choose a shorter grocery line over a longer one, or a bustling restaurant over an unpopular one, we rally our approximate number system, an ancient and intuitive sense that we are born with and that we share with many other animals. Rats, pigeons, monkeys, babies — all can tell more from fewer, abundant from stingy. An approximate number sense is essential to brute survival: how else can a bird find the best patch of berries, or two baboons know better than to pick a fight with a gang of six?
When it comes to genuine computation, however, to seeing a self-important number like 529 and panicking when you divide it into 2,200, or realizing that, hey, it’s the square of 23! well, that calls for a very different number system, one that is specific, symbolic and highly abstract. By all evidence, scientists say, the capacity to do mathematics, to manipulate representations of numbers and explore the quantitative texture of our world is a uniquely human and very recent skill. People have been at it only for the last few millennia; it’s not universal to all cultures, and it takes years of education to master. Math-making seems the opposite of automatic, which is why scientists long thought it had nothing to do with our ancient, pre-verbal size-em-up ways.
Yet a host of new studies suggests that the two number systems, the bestial and celestial, may be profoundly related, an insight with potentially broad implications for math education.
One research team has found that how readily people rally their approximate number sense is linked over time to success in even the most advanced and abstruse mathematics courses. Other scientists have shown that preschool children are remarkably good at approximating the impact of adding to or subtracting from large groups of items but are poor at translating the approximate into the specific. Taken together, the new research suggests that math teachers might do well to emphasize the power of the ballpark figure, to focus less on arithmetic precision and more on general reckoning.
“When mathematicians and physicists are left alone in a room, one of the games they’ll play is called a Fermi problem, in which they try to figure out the approximate answer to an arbitrary problem,” said Rebecca Saxe, a cognitive neuroscientist at the Massachusetts Institute of Technology who is married to a physicist. “They’ll ask, how many piano tuners are there in Chicago, or what contribution to the ocean’s temperature do fish make, and they’ll try to come up with a plausible answer.”
“What this suggests to me,” she added, “is that the people whom we think of as being the most involved in the symbolic part of math intuitively know that they have to practice those other, nonsymbolic, approximating skills.”
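For readers who want to see the shape of such an exercise, here is a minimal sketch, in Python, of the piano-tuner estimate. Every number in it is an assumption chosen for illustration, not a figure from the article or from Dr. Saxe; the point of a Fermi problem is the order of magnitude, not the exact answer.

```python
# Back-of-the-envelope Fermi estimate in the spirit of the piano-tuner question.
# Every number below is an illustrative assumption; the goal is a plausible
# order of magnitude, not a precise count.

population = 3_000_000           # assumed population of Chicago
people_per_household = 2.5       # assumed average household size
piano_ownership_rate = 1 / 20    # assume 1 in 20 households owns a piano
tunings_per_piano_per_year = 1   # assume each piano is tuned about once a year

tunings_per_tuner_per_day = 4    # assume a tuner can service about 4 pianos a day
working_days_per_year = 250      # assume a standard working year

pianos = population / people_per_household * piano_ownership_rate
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed / tunings_per_tuner_per_year))  # roughly 60 tuners
```

Doubling or halving any single assumption moves the answer by the same factor, which is why such estimates aim only to land within a power of 10.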
This month in the journal Nature, Justin Halberda and Lisa Feigenson of Johns Hopkins University and Michele Mazzocco of the Kennedy Krieger Institute in Baltimore described their study of 64 14-year-olds who were tested at length on the discriminating power of their approximate number sense. The teenagers sat at a computer as a series of slides with varying numbers of yellow and blue dots flashed on a screen for 200 milliseconds each — barely as long as an eye blink. After each slide, the students pressed a button indicating whether they thought there had been more yellow dots or blue.
Given the antiquity and ubiquity of the nonverbal number sense, the researchers were impressed by how widely it varied in acuity. There were kids with fine powers of discrimination, able to distinguish ratios on the order of 9 blue dots for every 10 yellows, Dr. Feigenson said. “Others performed at a level comparable to a 9-month-old,” barely able to tell if five yellows outgunned three blues. Comparing the acuity scores with other test results that Dr. Mazzocco had collected from the students over the past 10 years, the researchers found a robust correlation between dot-spotting prowess at age 14 and strong performance on a raft of standardized math tests from kindergarten onward. “We can’t draw causal arrows one way or another,” Dr. Feigenson said, “but your evolutionarily endowed sense of approximation is related to how good you are at formal math.”
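As a rough illustration of the kind of task described above, the sketch below simulates a dot-comparison trial with a noisy "approximate" observer. The Weber-fraction model and the noise value are illustrative assumptions, not details taken from the Johns Hopkins study.

```python
import random

# Minimal sketch (not the researchers' code) of a dot-comparison trial: two
# quantities are judged after a brief glance. A simulated observer with a noisy
# "approximate number sense" compares the counts; the Weber fraction w is an
# illustrative assumption.

def noisy_estimate(n, w=0.2):
    # Estimate a count with Gaussian noise proportional to its size.
    return random.gauss(n, w * n)

def proportion_correct(a, b, trials=10_000, w=0.2):
    correct = 0
    for _ in range(trials):
        judged_a_larger = noisy_estimate(a, w) > noisy_estimate(b, w)
        if judged_a_larger == (a > b):
            correct += 1
    return correct / trials

# A hard 9-vs-10 ratio versus an easy 3-vs-5 ratio, echoing the article's examples.
print(proportion_correct(10, 9))  # harder: accuracy well below the easy case
print(proportion_correct(5, 3))   # easier: close to perfect
```

In this toy model, sharpening or coarsening w plays the role of the individual differences in acuity that the study measured.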
The researchers caution that they have no idea yet how the two number systems interact. Brain imaging studies have traced the approximate number sense to a specific neural structure called the intraparietal sulcus, which also helps assess features like an object’s magnitude and distance. Symbolic math, by contrast, operates along a more widely distributed circuitry, activating many of the prefrontal regions of the brain that we associate with being human. Somewhere, local and global must be hooked up to a party line.
Other open questions include how malleable our inborn number sense may be, whether it can be improved with training, and whether those improvements would pay off in a greater appetite and aptitude for math. If children start training with the flashing dot game at age 4, will they be supernumerate by middle school?
Dr. Halberda, who happens to be Dr. Feigenson’s spouse, relishes the work’s philosophical implications. “What’s interesting and surprising in our results is that the same system we spend years trying to acquire in school, and that we use to send a man to the moon, and that has inspired the likes of Plato, Einstein and Stephen Hawking, has something in common with what a rat is doing when it’s out hunting for food,” he said. “I find that deeply moving.”
Behind every great leap of our computational mind lies the pitter-patter of rats’ feet, the little squeak of rodent kind.
Poet Emily Dickinson wrote, "There is no frigate like a book, to take us lands away." Or to link us in our common humanity. Reading is one of life's greatest pleasures and there is no more lasting gift parents can give their children than a love of reading.
"This Traverse may the poorest take; Without oppress of Toll," Dickinson continues, and the truth in her lovely poem is in that line. Reading is something everyone can do and there are no barriers to its enjoyment -- not socioeconomic, demographic, racial, ethnic, religious or any other category by which people define themselves. All it takes is a book, a reader's imagination and the ability to read.
Canwest's Raise-a-Reader campaign, which focuses on improving literacy and helping children to become lifelong lovers of reading, has been such a dazzling success year after year because of that universality to which reading speaks. A quiet space at home or a packed C-Train during rush hour -- reading can be done anywhere, by anyone. It requires no expensive equipment and no journey except the one the mind takes. Just open to page one and enjoy.
Nor could anything be lovelier than the sight of a child, wide-eyed and engrossed in a story, whether read at bedtime by a loving parent, at school or day care, or at story-hour at the library. In this day of electronic games, nothing speaks to a child's mind the way a book can, engaging him or her on so many levels of richness that can never be duplicated by the most state-of-the-art video game. A favourite book roots itself in the child's mind in such a way that passages and scenes from it come back years later, enriching the adult life anew. What greater gift can there be than the gift of literacy?
October 28, 2008
The Behavioral Revolution
By DAVID BROOKS
Roughly speaking, there are four steps to every decision. First, you perceive a situation. Then you think of possible courses of action. Then you calculate which course is in your best interest. Then you take the action.
Over the past few centuries, public policy analysts have assumed that step three is the most important. Economic models and entire social science disciplines are premised on the assumption that people are mostly engaged in rationally calculating and maximizing their self-interest.
But during this financial crisis, that way of thinking has failed spectacularly. As Alan Greenspan noted in his Congressional testimony last week, he was “shocked” that markets did not work as anticipated. “I made a mistake in presuming that the self-interests of organizations, specifically banks and others, were such as that they were best capable of protecting their own shareholders and their equity in the firms.”
So perhaps this will be the moment when we alter our view of decision-making. Perhaps this will be the moment when we shift our focus from step three, rational calculation, to step one, perception.
Perceiving a situation seems, at first glimpse, like a remarkably simple operation. You just look and see what’s around. But the operation that seems most simple is actually the most complex; it’s just that most of the action takes place below the level of awareness. Looking at and perceiving the world is an active process of meaning-making that shapes and biases the rest of the decision-making chain.
Economists and psychologists have been exploring our perceptual biases for four decades now, with the work of Amos Tversky and Daniel Kahneman, and also with work by people like Richard Thaler, Robert Shiller, John Bargh and Dan Ariely.
My sense is that this financial crisis is going to amount to a coming-out party for behavioral economists and others who are bringing sophisticated psychology to the realm of public policy. At least these folks have plausible explanations for why so many people could have been so gigantically wrong about the risks they were taking.
Nassim Nicholas Taleb has been deeply influenced by this stream of research. Taleb not only has an explanation for what’s happening, he saw it coming. His popular books “Fooled by Randomness” and “The Black Swan” were broadsides at the risk-management models used in the financial world and beyond.
In “The Black Swan,” Taleb wrote, “The government-sponsored institution Fannie Mae, when I look at its risks, seems to be sitting on a barrel of dynamite, vulnerable to the slightest hiccup.” Globalization, he noted, “creates interlocking fragility.” He warned that while the growth of giant banks gives the appearance of stability, in reality, it raises the risk of a systemic collapse — “when one fails, they all fail.”
Taleb believes that our brains evolved to suit a world much simpler than the one we now face. His writing is idiosyncratic, but he does touch on many of the perceptual biases that distort our thinking: our tendency to see data that confirm our prejudices more vividly than data that contradict them; our tendency to overvalue recent events when anticipating future possibilities; our tendency to spin concurring facts into a single causal narrative; our tendency to applaud our own supposed skill in circumstances when we’ve actually benefited from dumb luck.
And looking at the financial crisis, it is easy to see dozens of errors of perception. Traders misperceived the possibility of rare events. They got caught in social contagions and reinforced each other’s risk assessments. They failed to perceive how tightly linked global networks can transform small events into big disasters.
Taleb is characteristically vituperative about the quantitative risk models, which try to model something that defies modelization. He subscribes to what he calls the tragic vision of humankind, which “believes in the existence of inherent limitations and flaws in the way we think and act and requires an acknowledgement of this fact as a basis for any individual and collective action.” If recent events don’t underline this worldview, nothing will.
If you start thinking about our faulty perceptions, the first thing you realize is that markets are not perfectly efficient, people are not always good guardians of their own self-interest and there might be limited circumstances when government could usefully slant the decision-making architecture (see “Nudge” by Thaler and Cass Sunstein for proposals). But the second thing you realize is that government officials are probably going to be even worse perceivers of reality than private business types. Their information feedback mechanism is more limited, and, being deeply politicized, they’re even more likely to filter inconvenient facts.
This meltdown is not just a financial event, but also a cultural one. It’s a big, whopping reminder that the human mind is continually trying to perceive things that aren’t true, and not perceiving them takes enormous effort.
Dennis Danielson, a distinguished Miltonist, has just published a translation of “Paradise Lost.” Into what language?, you ask. Into English, is the answer.
Danielson is well aware that it might seem odd to translate a poem into the language in which it is already written. Dryden turned some of “Paradise Lost” into rhymed verse for a libretto while Milton was still alive; but that was an adaptation, not a translation. There are of course the Classic Comics and Cliff Notes precedents; but these are abridgments designed for the students who don’t have time to, or don’t want to, read the book. Danielson’s is a word-for-word translation, probably longer than the original since its prose unpacks a very dense poetry. The value of his edition, he says, is that it “invites more readers than ever before to enjoy the magnificent story — to experience the grandeur, heroism, pathos, beauty and grace of Milton’s inimitable work.”
Danielson borrows the word “inimitable” from John Wesley, who in 1763 was already articulating the justification for a prose translation of the poem. Wesley reports that in the competition for the title of world’s greatest poem, “the preference has generally been given by impartial judges to Milton’s ‘Paradise Lost,’” but, he laments, “this inimitable work amidst all its beauties is unintelligible to [an] abundance of readers.” Two hundred and fifty years later, Harold Bloom made the same observation. Ordinary readers, he said, now “require mediation to read ‘Paradise Lost’ with full appreciation.”
What features of the poem require mediation? Danielson’s answer is the “linguistic obscurity” from which he proposes to “free” the story so that today’s readers can read it “in their own language.” By their own language he doesn’t mean the language of some “with-it” slang, but a language less Latinate in its syntax and less archaic in its diction than the original (which was archaic and stylized when it was written). Milton’s language is not like Chaucer’s — a dialect modern readers must learn; it is our language structured into a syntax more convoluted than the syntax of ordinary speech, but less convoluted or cryptic than the syntax of modern poets like Hart Crane, Wallace Stevens and John Ashbery.
Like Milton, these poets do not make it easy for readers to move through a poem. Roadblocks, in the form of ambiguities, deliberate obscurities, shifting grammatical paths and recondite allusions, are everywhere and one is expected to stop and try to figure things out, make connections or come to terms with an inability to make connections.
The experience of reading poetry like this was well described by the great critic F.R. Leavis, who said of Milton (he did not mean this as a compliment) that his verse “calls pervasively for a kind of attention … toward itself.” That is, when reading the poetry one is not encouraged to see it as a window on some other, more real, world; it is its own world, and when it refers it refers to other parts of itself. Milton, Leavis said, displays “a capacity for words rather than a capacity for feeling through words.” The poetry is not mimetic in the usual sense of representing something prior to it; it creates the facts and significances to which you are continually asked to attend.
It is from this strenuous and often frustrating labor that Danielson wants to free the reader, who, once liberated, will be able to go with the flow and enjoy the pleasures of a powerful narrative. But that is not what Milton had in mind, as Donald Davie, another prominent critic, saw when he observed (again in complaint) that, rather than facilitating forward movement, Milton’s verse tends to “check narrative impetus” and to “provoke interesting and important speculative questions,” the consideration of which interrupts our progress.
Here is an example. When Adam decides to join Eve in sin and eat the apple, the poem says that he was “fondly overcome by female charm.” The word that asks you to pause is “fondly,” which means both foolishly and affectionately. The two meanings have different relationships to the action they characterize. If you do something foolishly, you have no excuse, and it’s a bit of a mystery as to why you did; if you do it prompted by affection and love, the wrongness of it may still be asserted, but something like an explanation or an excuse has at least been suggested.
The ambiguity plays into the poem-length concern with the question of just how culpable Adam and Eve are for the fall. (Given their faculties and emotions, were they capable of standing?) “Fondly” doesn’t resolve the question, but keeps it alive and adds to the work the reader must always be doing when negotiating this poem.
Here is Danielson’s translation of the line: “an infatuated fool overcome by a woman’s charms.” “Infatuated” isn’t right because it redoubles the accusation in “fool” rather than softening it. The judgment is sharp and clear, but it is a clearer judgment than Milton intended or provided. Something has been lost (although as Danielson points out, something is always lost in a translation).
Another example. At an earlier point, the epic narrator comments on the extent of mankind’s susceptibility to the blandishments of the fallen angels — “and devils to adore for deities.” The tone is one of incredulity; how could anyone be so stupid as to be unable to tell the difference? But the line’s assertion that as polar opposites devils and deities should be easily distinguishable is complicated by the fact that as words “devils” and “deities” are close together, beginning and ending with the same letter and sharing an “e” and an “i” in between. The equivalence suggested by sound (although denied by the sense) is reinforced by the mirror-structure of “adore for,” a phrase that separates devils from deities but in fact participates in the subliminal assertion of their likeness.
What, then, is the line saying? It is saying simultaneously that the difference between devils and deities is obvious and perspicuous and that the difference is hard to tell. This is one of those moments Davie has in mind when he talks about the tendency of Milton’s verse to go off the rails of narrative in order to raise speculative questions that have no definitive answer.
When Danielson comes to render “devils to adore for deities,” he turns it into a present participle: “worshiping devils themselves.” Absent are both the tone of scornful wonder the epic voice directs at the erring sinners and the undercutting of that scorn by the dance of vowels and consonants.
One more example. In line 2 of book I, the reference to the fruit of the forbidden tree is followed by “whose mortal taste/ Brought death into the world.” “Mortal,” from the Latin “mors,” means both fatal — there is no recovery from it — and bringing about the condition of mortality, the condition of being human, the taste of mortality. By eating of the forbidden tree, Adam and Eve become capable of death and therefore capable of having a beginning and an end and a middle filled up by successes, failures, losses and recoveries. To say that a “mortal taste” brought death into the world is to say something tautologous; but the tautology is profound when it reminds us of both the costs and the glories of being mortal. If no mortality, then no human struggles, no narrative, no story, no aspiration (in eternity there’s nowhere to go), no “Paradise Lost.”
Danielson translates “whose mortal taste” as “whose lethal taste,” which is accurate, avoids tautology (or at least suppresses it) and gets us into the next line cleanly and without fuss or provoked speculation. But fuss and bother and speculations provoked by etymological puzzles are what makes this verse go (or, rather, not go), and while the reader’s way may be smoothed by a user-friendly prose translation, smoothness is not what Milton is after; it is not a pleasure he wishes to provide.
I have no doubt that Danielson is aware of all of this. He is not making a mistake. He is making a choice. He knows as well as anyone how Milton’s poetry works, but it is his judgment (following Wesley and Bloom) that many modern readers will not take their Milton straight and require some unraveling of the knots before embarking on the journey.
I’m not sure he’s right (I’ve found students of all kinds responsive to the poetry once they give it half a chance), but whether he is or not, he has fashioned a powerful pedagogical tool that is a gift to any teacher of Milton whatever the level of instruction.
The edition is a parallel one — Milton’s original on the left hand page and Danielson’s prose rendering on the right. This means that you can ask students to take a passage and compare the effects and meanings produced by the two texts. You can ask students to compose their own translations and explain or defend the choices they made. You can ask students to look at prose translations in another language and think about the difference, if there is one, between translating into a foreign tongue and translating into a more user-friendly version of English. You can ask students to speculate on the nature of translation and on the relationship between translation and the perennial debate about whether there are linguistic universals.
In short, armed with just this edition which has no editorial apparatus (to have included one would have been to defeat Danielson’s purpose), you can teach a course in Milton and venture into some deep philosophical waters as well. A nice bargain in this holiday season.
[Editor's note: An earlier version of this article misquoted a phrase from Milton's "Paradise Lost." The phrase has been corrected to "and devils to adore for deities."]
December 16, 2008
Lost in the Crowd
By DAVID BROOKS
All day long, you are affected by large forces. Genes influence your intelligence and willingness to take risks. Social dynamics unconsciously shape your choices. Instantaneous perceptions set off neural reactions in your head without you even being aware of them.
Over the past few years, scientists have made a series of exciting discoveries about how these deep patterns influence daily life. Nobody has done more to bring these discoveries to public attention than Malcolm Gladwell.
Gladwell’s important new book, “Outliers,” seems at first glance to be a description of exceptionally talented individuals. But in fact, it’s another book about deep patterns. Exceptionally successful people are not lone pioneers who created their own success, he argues. They are the lucky beneficiaries of social arrangements.
As Gladwell told Jason Zengerle of New York magazine: “The book’s saying, ‘Great people aren’t so great. Their own greatness is not the salient fact about them. It’s the kind of fortunate mix of opportunities they’ve been given.’ ”
Gladwell’s noncontroversial claim is that some people have more opportunities than other people. Bill Gates was lucky to go to a great private school with its own computer at the dawn of the information revolution. Gladwell’s more interesting claim is that social forces largely explain why some people work harder when presented with those opportunities.
Chinese people work hard because they grew up in a culture built around rice farming. Tending a rice paddy required working up to 3,000 hours a year, and it left a cultural legacy that prizes industriousness. Many upper-middle-class American kids are raised in an atmosphere of “concerted cultivation,” which inculcates a fanatical devotion to meritocratic striving.
In Gladwell’s account, individual traits play a smaller role in explaining success while social circumstances play a larger one. As he told Zengerle, “I am explicitly turning my back on, I think, these kind of empty models that say, you know, you can be whatever you want to be. Well, actually, you can’t be whatever you want to be. The world decides what you can and can’t be.”
As usual, Gladwell intelligently captures a larger tendency of thought — the growing appreciation of the power of cultural patterns, social contagions, memes. His book is being received by reviewers as a call to action for the Obama age. It could lead policy makers to finally reject policies built on the assumption that people are coldly rational utility-maximizing individuals. It could cause them to focus more on policies that foster relationships, social bonds and cultures of achievement.
Yet, I can’t help but feel that Gladwell and others who share his emphasis are getting swept away by the coolness of the new discoveries. They’ve lost sight of the point at which the influence of social forces ends and the influence of the self-initiating individual begins.
Most successful people begin with two beliefs: the future can be better than the present, and I have the power to make it so. They were often showered by good fortune, but relied at crucial moments upon achievements of individual will.
Most successful people also have a phenomenal ability to consciously focus their attention. We know from experiments with subjects as diverse as obsessive-compulsive disorder sufferers and Buddhist monks that people who can self-consciously focus attention have the power to rewire their brains.
Control of attention is the ultimate individual power. People who can do that are not prisoners of the stimuli around them. They can choose from the patterns in the world and lengthen their time horizons. This individual power leads to others. It leads to self-control, the ability to formulate strategies in order to resist impulses. If forced to choose, we would all rather our children be poor with self-control than rich without it.
It leads to resilience, the ability to persevere with an idea even when all the influences in the world say it can’t be done. A common story among entrepreneurs is that people told them they were too stupid to do something, and they set out to prove the jerks wrong.
It leads to creativity. Individuals who can focus attention have the ability to hold a subject or problem in their mind long enough to see it anew.
Gladwell’s social determinism is a useful corrective to the Homo economicus view of human nature. It’s also pleasantly egalitarian. The less successful are not less worthy, they’re just less lucky. But it slights the centrality of individual character and individual creativity. And it doesn’t fully explain the genuine greatness of humanity’s outliers. As the classical philosophers understood, examples of individual greatness inspire achievement more reliably than any other form of education. If Gladwell can reduce William Shakespeare to a mere product of social forces, I’ll buy 25 more copies of “Outliers” and give them away in Times Square.
December 27, 2008
Op-Ed Guest Columnist
Living the Off-Label Life
By JUDITH WARNER
What if you could just take a pill and all of a sudden remember to pay your bills on time? What if, thanks to modern neuroscience, you could, simultaneously, make New Year’s Eve plans, pay the mortgage, call the pediatrician, consolidate credit card debt and do your job — well — without forgetting dentist appointments or neglecting to pick up your children at school?
Would you do it? Tune out the distractions of our online, on-call, too-fast A.D.D.-ogenic world with focus and memory-enhancing medications like Ritalin or Adderall? Stay sharp as a knife — no matter how overworked and sleep-deprived — with a mental-alertness-boosting drug like the anti-narcolepsy medication Provigil?
I’ve always said no. Fantasy aside, I’ve always rejected the idea of using drugs meant for people with real neurological disorders to treat the pathologies of everyday life.
Most of us, viscerally, do. Cognitive enhancement — a practice typified by the widely reported abuse of psychostimulants by college students cramming for exams, and by the less reported but apparently growing use of mind-boosters like Provigil among in-the-know scientists and professors — goes against the grain of some of our most basic beliefs about fairness and meritocracy. It seems to many people to be unnatural, inhuman, hubristic, pure cheating.
That’s why when Henry Greely, director of Stanford Law School’s Center for Law and the Biosciences, published an article, with a host of co-authors, in the science journal Nature earlier this month suggesting that we ought to rethink our gut reactions and “accept the benefits of enhancement,” he was deluged with irate responses from readers.
“There were three kinds of e-mail reactions,” he told me in a phone interview last week. “‘How much crack are you smoking? How much money did your friends in pharma give you? How much crack did you get from your friends in pharma?’ ”
As Americans, our default setting on matters of psychotropic drugs — particularly when it comes to medicating those who are not very ill — tends to be, as the psychiatrist Gerald Klerman called it in 1972, something akin to “pharmacological Calvinism.” People should suffer and endure, the thinking goes, accept what hard work and their God-given abilities bring them and hope for no more.
But Greely and his Nature co-authors suggest that such arguments are outdated and intellectually dishonest. We enhance our brain function all the time, they say — by drinking coffee, by eating nutritious food, by getting an education, even by getting a good night’s sleep. Taking brain-enhancing drugs should be viewed as just another step along that continuum, one that’s “morally equivalent” to such “other, more familiar, enhancements,” they write.
Normal life, unlike sports competitions, they argue, isn’t a zero-sum game, where one person’s doped advantage necessarily brings another’s disadvantage. A surgeon whose mind is extra-sharp, a pilot who’s extra alert, a medical researcher whose memory is fine-tuned to make extraordinary connections, is able to work not just to his or her own benefit, but for that of countless numbers of people. “Cognitive enhancement,” they write, “unlike enhancement for sports competitions, could lead to substantive improvements in the world.”
I’m not convinced of that. I’m not sure that pushing for your personal best — all the time — is tantamount to truly being the best person you can be. I have long thought that a life so frenetic and fractured that it drives “neuro-normal” people to distraction, leaving them sleep-deprived and exhausted, demands — indeed, screams for — systemic change.
But now I do wonder: What if the excessive demands of life today are creating ever-larger categories of people who can’t reach their potential due to handicaps that in an easier time were just quirks? (Absent-minded professor-types were, for generations, typically men who didn’t need to be present — organized and on-time — for their kids.) Is it any fairer to saddle a child with a chronically overwhelmed parent than with one suffering from untreated depression?
And, furthermore, how much can most of us, on a truly meaningful scale, change our lives? At a time of widespread layoffs and job anxiety among those still employed, can anyone but the most fortunate afford to cut their hours to give themselves time to breathe? Can working parents really sacrifice on either side of the wage-earning/life-making equation? It’s disturbing to think that we just have to make do with the world we now live in. But to do otherwise is for most people an impossible luxury.
For some of us, saddled with brains ill-adapted to this era, and taxed with way too many demands and distractions, pharmacological Calvinism may now be a luxury, too.
December 27, 2008
Why We’re Still Happy
By SONJA LYUBOMIRSKY
THESE days, bad news about the economy is everywhere.
So why aren’t we panicking? Why aren’t we spending our days dejected about the markets? How is it that we manage to remain mostly preoccupied with the quotidian tasks and concerns of life? Traffic, dinner, homework, deadlines, sharp words, flirtatious glances.
Because the news these days affects everyone.
Research in psychology and economics suggests that when only your salary is cut, or when only you make a foolish investment, or when only you lose your job, you become considerably less satisfied with your life. But when everyone from autoworkers to Wall Street financiers becomes worse off, your life satisfaction remains pretty much the same.
Indeed, humans are remarkably attuned to relative position and status. As the economists David Hemenway and Sara Solnick demonstrated in a study at Harvard, many people would prefer to receive an annual salary of $50,000 when others are making $25,000 than to earn $100,000 a year when others are making $200,000.
Similarly, Daniel Zizzo and Andrew Oswald, economists in Britain, conducted a study that showed that people would give up money if doing so would cause someone else to give up a slightly larger sum. That is, we will make ourselves poorer in order to make someone else poorer, too.
Findings like these reveal an all-too-human truth. We care more about social comparison, status and rank than about the absolute value of our bank accounts or reputations.
For example, Andrew Clark, an economist in France, has recently shown that being laid off hurts less if you live in a community with a high unemployment rate. What’s more, if you are unemployed, you will, on average, be happier if your spouse is unemployed, too.
So in a world in which just about all of us have seen our retirement savings and home values plummet, it’s no wonder that we all feel surprisingly O.K.
Sonja Lyubomirsky, a professor of psychology at the University of California, Riverside, is the author of “The How of Happiness: A Scientific Approach to Getting the Life You Want.”
This seemed like an interesting thread, so I thought I would give a summary of the problems facing contemporary epistemology (I went back through my old university lecture notes). Basically, the problem is skepticism, which traces back, generally, to Descartes.
I don't know how Ismaili philosophy would respond to this problem, but I do know that Descartes' arguments borrowed from Plato and Aristotle, whose philosophies were incorporated into Ismailism.
Anyway, here is food for thought for anyone who is interested.
Problems in Contemporary Epistemology
Skepticism: Holds that no one can know anything about the world around us.
1. You know that O only if you are certain that O. (O is just any ordinary proposition, e.g., I have two hands.)
2. H shows that one cannot be certain of O. (H is a skeptical proposition, e.g., that you are dreaming.)
3. Therefore, you do not know that O.
Argument from Ignorance
1.You know that O, only if you know that H is false. (That you are not dreaming).
2.But you do not know that H is false. (You don't know you are not dreaming).
3.Therefore, you don't know that O.
This is a valid argument; its form is modus tollens:
1. If A, then B.
2. Not B.
3. Therefore, not A.
Here A is the claim that you know that O, and B is the claim that you know that H is false; a minimal formal sketch of this form follows below.
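Purely as an illustration (this is not part of the original notes), here is a minimal sketch in Lean 4 of modus tollens, the rule the Argument from Ignorance instantiates. The names modus_tollens, knowsO, and knowsNotH are placeholders I am introducing for this sketch, not anything from an existing library.

-- Modus tollens: from "if A then B" and "not B", conclude "not A".
theorem modus_tollens {A B : Prop} (h : A → B) (hnb : ¬B) : ¬A :=
  fun ha => hnb (h ha)

-- Illustrative instantiation with placeholder propositions:
--   knowsO    = "you know that O"
--   knowsNotH = "you know that H is false"
example (knowsO knowsNotH : Prop)
    (premise1 : knowsO → knowsNotH)   -- you know O only if you know H is false
    (premise2 : ¬knowsNotH) :         -- you do not know that H is false
    ¬knowsO :=                        -- therefore, you do not know that O
  modus_tollens premise1 premise2

If the two premises hold, the conclusion follows as a matter of logical form alone; the philosophical work is all in defending the premises.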
Generally, contemporary epistemology centers on how to respond to the skeptic. But how did we end up in this predicament in the first place? It all goes back to René Descartes.
Descartes is the father of modern day epistemology. His goal was twofold:
1.To find a secure foundation for all knowledge, and
2.To expose beliefs without foundation.
Foundationalism holds that
1.There are some justified basic beliefs
2.All non-basic justified beliefs depend for their justification ultimately upon basic justified beliefs.
We can know some basic things (we exist, material things exist).
Other things depend upon that for their justification.
He wrote at the dawn of the Enlightenment, during the scientific revolution, a time of growing skepticism and questioning of the Church.
His first Meditation gave the criterion for justification and knowledge: certainty. To know something, you must be certain of it. It is all or nothing.
How can we separate beliefs that are false from those we can be certain are true? Through doubt. And yet Descartes is not a skeptic. He wants to defeat skepticism, and to do this, paradoxical as it may sound, he casts doubt on every single belief. His method was basically to tear everything down and start anew.
1.Beliefs based on sources of information that are sometimes mistaken are uncertain
2.Sense-perception is a source of information on which beliefs are based, and is sometimes mistaken
3.Therefore beliefs based on sense-perception are uncertain.
Dream Argument
1. Sensory experiences in dreams cannot be distinguished from sensory experiences when awake.
2. If so, I can never know for certain whether or not I am dreaming.
3. If so, I cannot know that my specific beliefs about the external world are not false.
4. Therefore, I can never know for certain anything specific about the external world on the basis of sensory experience.
Evil Genius Argument
1.It is possible that an omnipotent evil genius exists who deceives me.
2.If so, it is possible all my beliefs are false.
3.Therefore, I cannot be certain of anything.
The Second Meditation gives the Cogito argument, “I am, I exist”: anything which is thinking must exist.
1.I am thinking
2.Therefore, I exist.
The purpose of his Meditations is to find a base of certain knowledge from which all other knowledge can then be extended (Foundationalism). He is advancing rationalism, the idea that reason is the ultimate source of knowledge, in contrast with empiricism, which says that the senses, not reason, are the ultimate source of knowledge.
Fundamental to his advancement of rationalism is the Wax Argument:
1.All properties of wax that we perceive with the senses change as the wax melts.
2. Whatever aspects of a thing disappear while the thing remains cannot be the essence of the thing. (This idea goes back to Plato and Aristotle: an essence is a necessary feature of something, without which it wouldn't be what it is.)
3.Our senses do not grasp the essence of the wax.
4.The wax can be extended in ways that I cannot accurately imagine.
5.Yet the wax remains the same piece of wax as it melts.
6. Therefore, insofar as I grasp the essence of the wax, I grasp it through my mind and not through my senses or imagination.
7. What I know regarding the wax applies to everything external to me.
8. Therefore, in order to grasp the essence of any external body, you must know that your mind exists, but not vice versa.
9.Therefore, the mind is more easily known than the body.
What this argument demonstrates is that even in the case of material things, our most certain knowledge is derived from the intellect, not from the senses or imagination. Knowledge of the mind comes first in the order of knowing, and from it knowledge of bodies is derived.
Justified True Belief is the modern-day conception of knowledge. It states that a subject S knows that P only if:
1.P must be true. (truth condition)
2.S must believe that P. (belief condition)
3.S must be justified in believing that P. (justification condition)
The Cartesian conception of knowledge is contrasted with justified true belief in the following manner (a short formal sketch of both conceptions follows the list):
S knows that P, if and only if
1.P must be true.
2.S must believe that P without any possible doubt.
3.S must have a justification confirming the truth of P.
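As a purely illustrative aside (not in the original notes), here is a minimal Lean 4 sketch of the two conceptions side by side. Believes, Justified, BelievesBeyondDoubt, and ConclusivelyJustified are placeholder predicates for a fixed subject S, invented for this sketch, not part of any real library.

-- Placeholder predicates for a fixed subject S (illustrative names only).
variable (Believes Justified BelievesBeyondDoubt ConclusivelyJustified : Prop → Prop)

-- Justified true belief: P is true, S believes P, and S is justified in believing P.
def knowsJTB (P : Prop) : Prop :=
  P ∧ Believes P ∧ Justified P

-- Cartesian conception: the belief must be beyond any possible doubt,
-- and the justification must conclusively confirm the truth of P.
def knowsCartesian (P : Prop) : Prop :=
  P ∧ BelievesBeyondDoubt P ∧ ConclusivelyJustified P

The two definitions share the same three-part shape; the difference is only in how strong the belief and justification conditions are required to be.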
So unlike justified true belief, you need absolute certainty. Yet how do we know that P is true? What is the criterion of truth? What Descartes calls clear and distinct ideas. Where do these come from? God. Descartes is going to show that God exists.
Ontological Argument
1. I have an idea of a supremely perfect being.
2. Existence is a perfection.
3. It is part of the nature of a supremely perfect being that it exists.
4. Therefore, a supremely perfect being exists.
This is a purely a priori (rational) argument. It requires no empirical premises or sensory evidence.
Cosmological Argument
(St. Thomas Aquinas put forward a cosmological argument that God exists based on observable facts about the world; Descartes' version argues instead from the cause of his idea of God.)
1.I have an idea of God.
2.This idea must have a cause. (Drawing on the principle of sufficient reason, later associated with Leibniz, that everything has a cause.)
3.There cannot be any less reality (perfection) in the cause than in the effect. This is revealed by the light of nature.
4.If the cause of my idea were anything but God, then 3 would be false.
5.Therefore God exists.
Light of Nature
The light of nature is a rational faculty, and whatever is revealed by it cannot be doubted.
Descartes has thus argued that God exists, and that God cannot deceive, because deception is a sign of imperfection.
1. Deception is an imperfection.
2. God is supremely perfect.
3. Therefore, God is not a deceiver.
If God were not good, He would be lacking something, and God cannot lack anything because He is supremely perfect.
Philosophers generally think something has gone wrong here, but they disagree about where the mistake is.
Descartes needs God to be perfect in order for the rest of his philosophy to work. He needs clear and distinct ideas. How do we know they are true? Because God is perfect and not a deceiver. But how do you know it is the light of nature at work rather than an evil deceiver?
Descartes' reasoning goes in a circle (the so-called Cartesian circle). He needs the existence of God to rule out the evil genius and to guarantee that clear and distinct ideas are true, and yet clear and distinct ideas are needed to demonstrate the existence of God. The light of nature is supposed to be self-evident, an a priori truth.
Skeptics hold that Descartes used a dubious theology to secure certainty. Their principal argument is this: we don't know that we're not dreaming, being deceived, or brains hooked up to vats; therefore, knowledge of the external world is impossible.
The skeptical arguments stem from the rejection of Descartes' proofs of the existence of God. Skeptics leave the proofs aside and adopt the Method of Universal Doubt that Descartes employed.
What I personally think about it, first of all, is that I believe God can never be proved through reason, inasmuch as God is trans-rational (importantly, not sub-rational). You can have a profound experience of God, and that experience is something that "passeth understanding." Miracles likewise do not follow the rules of logic, which is partially why they are miracles. But the experience is universal: whether you are Rumi (Muslim), Meister Eckhart (Christian) or Ramakrishna (Hindu), the essence of your experience will be the same.
And yet, how do you prove it within the criteria the skeptic demands? You cannot, as the skeptic is asking for proof of something that belongs to another domain. It is like asking someone to give you the temperature of music: the question does not apply to that domain.
But if we can accept Descartes' idea of God on its own grounds, then how does his epistemology hold up? That, of course, is something that is never brought up in epistemology class.
March 19, 2009
The Daily Me
By NICHOLAS D. KRISTOF
Some of the obituaries these days aren’t in the newspapers but are for the newspapers. The Seattle Post-Intelligencer is the latest to pass away, save for a remnant that will exist only in cyberspace, and the public is increasingly seeking its news not from mainstream television networks or ink-on-dead-trees but from grazing online.
When we go online, each of us is our own editor, our own gatekeeper. We select the kind of news and opinions that we care most about.
Nicholas Negroponte of M.I.T. has called this emerging news product The Daily Me. And if that’s the trend, God save us from ourselves.
That’s because there’s pretty good evidence that we generally don’t truly want good information — but rather information that confirms our prejudices. We may believe intellectually in the clash of opinions, but in practice we like to embed ourselves in the reassuring womb of an echo chamber.
One classic study sent mailings to Republicans and Democrats, offering them various kinds of political research, ostensibly from a neutral source. Both groups were most eager to receive intelligent arguments that strongly corroborated their pre-existing views.
There was also modest interest in receiving manifestly silly arguments for the other party’s views (we feel good when we can caricature the other guys as dunces). But there was little interest in encountering solid arguments that might undermine one’s own position.
That general finding has been replicated repeatedly, as the essayist and author Farhad Manjoo noted in his terrific book last year: “True Enough: Learning to Live in a Post-Fact Society.”
Let me get one thing out of the way: I’m sometimes guilty myself of selective truth-seeking on the Web. The blog I turn to for insight into Middle East news is often Professor Juan Cole’s, because he’s smart, well-informed and sensible — in other words, I often agree with his take. I’m less likely to peruse the blog of Daniel Pipes, another Middle East expert who is smart and well-informed — but who strikes me as less sensible, partly because I often disagree with him.
The effect of The Daily Me would be to insulate us further in our own hermetically sealed political chambers. One of last year’s more fascinating books was Bill Bishop’s “The Big Sort: Why the Clustering of Like-Minded America is Tearing Us Apart.” He argues that Americans increasingly are segregating themselves into communities, clubs and churches where they are surrounded by people who think the way they do.
Almost half of Americans now live in counties that vote in landslides either for Democrats or for Republicans, he said. In the 1960s and 1970s, in similarly competitive national elections, only about one-third lived in landslide counties.
“The nation grows more politically segregated — and the benefit that ought to come with having a variety of opinions is lost to the righteousness that is the special entitlement of homogeneous groups,” Mr. Bishop writes.
One 12-nation study found Americans the least likely to discuss politics with people of different views, and this was particularly true of the well educated. High school dropouts had the most diverse group of discussion-mates, while college graduates managed to shelter themselves from uncomfortable perspectives.
The result is polarization and intolerance. Cass Sunstein, a Harvard law professor now working for President Obama, has conducted research showing that when liberals or conservatives discuss issues such as affirmative action or climate change with like-minded people, their views quickly become more homogeneous and more extreme than before the discussion. For example, some liberals in one study initially worried that action on climate change might hurt the poor, while some conservatives were sympathetic to affirmative action. But after discussing the issue with like-minded people for only 15 minutes, liberals became more liberal and conservatives more conservative.
The decline of traditional news media will accelerate the rise of The Daily Me, and we’ll be irritated less by what we read and find our wisdom confirmed more often. The danger is that this self-selected “news” acts as a narcotic, lulling us into a self-confident stupor through which we will perceive in blacks and whites a world that typically unfolds in grays.
So what’s the solution? Tax breaks for liberals who watch Bill O’Reilly or conservatives who watch Keith Olbermann? No, until President Obama brings us universal health care, we can’t risk the surge in heart attacks.
So perhaps the only way forward is for each of us to struggle on our own to work out intellectually with sparring partners whose views we deplore. Think of it as a daily mental workout analogous to a trip to the gym; if you don’t work up a sweat, it doesn’t count.
Now excuse me while I go and read The Wall Street Journal’s editorial page.
March 26, 2009
Learning How to Think
By NICHOLAS D. KRISTOF
Ever wonder how financial experts could lead the world over the economic cliff?
One explanation is that so-called experts turn out to be, in many situations, a stunningly poor source of expertise. There’s evidence that what matters in making a sound forecast or decision isn’t so much knowledge or experience as good judgment — or, to be more precise, the way a person’s mind works.
More on that in a moment. First, let’s acknowledge that even very smart people allow themselves to be buffaloed by an apparent “expert” on occasion.
The best example of the awe that an “expert” inspires is the “Dr. Fox effect.” It’s named for a pioneering series of psychology experiments in which an actor was paid to give a meaningless presentation to professional educators.
The actor was introduced as “Dr. Myron L. Fox” (no such real person existed) and was described as an eminent authority on the application of mathematics to human behavior. He then delivered a lecture on “mathematical game theory as applied to physician education” — except that by design it had no point and was completely devoid of substance. However, it was warmly delivered and full of jokes and interesting neologisms.
Afterward, those in attendance were given questionnaires and asked to rate “Dr. Fox.” They were mostly impressed. “Excellent presentation, enjoyed listening,” wrote one. Another protested: “Too intellectual a presentation.”
A different study illustrated the genuflection to “experts” another way. It found that a president who goes on television to make a case moves public opinion only negligibly, by less than a percentage point. But experts who are trotted out on television can move public opinion by more than 3 percentage points, because they seem to be reliable or impartial authorities.
But do experts actually get it right themselves?
The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, “Expert Political Judgment,” is based on two decades of tracking some 82,000 predictions by 284 experts. The experts’ forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about.
The result? The predictions of experts were, on average, only a tiny bit better than random guesses — the equivalent of a chimpanzee throwing darts at a board.
“It made virtually no difference whether participants had doctorates, whether they were economists, political scientists, journalists or historians, whether they had policy experience or access to classified information, or whether they had logged many or few years of experience,” Mr. Tetlock wrote.
Indeed, the only consistent predictor was fame — and it was an inverse relationship. The more famous experts did worse than unknown ones. That had to do with a fault in the media. Talent bookers for television shows and reporters tended to call up experts who provided strong, coherent points of view, who saw things in blacks and whites. People who shouted — like, yes, Jim Cramer!
Mr. Tetlock called experts such as these the “hedgehogs,” after a famous distinction by the late Sir Isaiah Berlin (my favorite philosopher) between hedgehogs and foxes. Hedgehogs tend to have a focused worldview, an ideological leaning, strong convictions; foxes are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance. And it turns out that while foxes don’t give great sound-bites, they are far more likely to get things right.
This was the distinction that mattered most among the forecasters, not whether they had expertise. Over all, the foxes did significantly better, both in areas they knew well and in areas they didn’t.
Other studies have confirmed the general sense that expertise is overrated. In one experiment, clinical psychologists did no better than their secretaries in their diagnoses. In another, a white rat in a maze repeatedly beat groups of Yale undergraduates in understanding the optimal way to get food dropped in the maze. The students overanalyzed and saw patterns that didn’t exist, so they were beaten by the rodent.
The marketplace of ideas for now doesn’t clear out bad pundits and bad ideas partly because there’s no accountability. We trumpet our successes and ignore failures — or else attempt to explain that the failure doesn’t count because the situation changed or that we were basically right but the timing was off.
For example, I boast about having warned in 2002 and 2003 that Iraq would be a violent mess after we invaded. But I tend to make excuses for my own incorrect forecast in early 2007 that the troop “surge” would fail.
So what about a system to evaluate us prognosticators? Professor Tetlock suggests that various foundations might try to create a “trans-ideological Consumer Reports for punditry,” monitoring and evaluating the records of various experts and pundits as a public service. I agree: Hold us accountable!