The article below is about the levels of meaning that exist in life: there is the ordinary meaning, and then there is meaning within a metaphysical context.
November 28, 2011, 3:55 pm
In the Context of No Context
During my recent blogging hiatus, Will Wilkinson penned a withering post criticizing a recent convert to Christianity who had suggested that atheism can’t supply “meaning” in human life. Arguing that questions of meaning are logically independent of questions about metaphysics, he wrote:
"If you ask me, the best reason to think “life is meaningful” is because one’s life seems meaningful. If you can’t stop “acting as if my own life had meaning,” it’s probably because it does have meaning. Indeed, not being able to stop acting as if one’s life is meaningful is probably what it means for life to be meaningful. But why think this has any logical or causal relationship to the scientific facts about our brains or lifespans? The truth of the proposition “life has meaning” is more evident and secure than any proposition about what must be true if life is to have meaning. Epistemic best practices recommend treating “life has meaning” as a more-or-less self-evident, non-conditional proposition. Once we’ve got that squared away, we can go ahead and take the facts about the world as they come. It turns out our lives are infinitesimally short on the scale of cosmic time. We know that to be true. Interesting! So now we know two things: that life has meaning and that our lives are just a blip in the history of the universe.
This is, I’m confident, the right way to do it. Why think the one fact has anything to do with the other?"
I see Wilkinson’s point, but I don’t think he quite sees the point that he’s critiquing. Suppose, by way of analogy, that a group of people find themselves conscripted into a World-War-I-type conflict — they’re thrown together in a platoon and stationed out in no man’s land, where over time a kind of miniature society gets created, with its own loves and hates, hopes and joys, and of course its own grinding, life-threatening routines. Eventually, some people in the platoon begin to wonder about the point of it all: Why are they fighting, who are they fighting, what do they hope to gain, what awaits them at war’s end, will there ever be a war’s end, and for that matter are they even sure that they’re the good guys? (Maybe they’ve been conscripted by the Third Reich! Maybe their forever war is just a kind of virtual reality created by alien intelligences to study the human way of combat! Etc.) They begin to wonder, in other words, about the meaning of it all, and whether there’s any larger context to their often-agonizing struggle for survival. And in the absence of such context, many of them flirt with a kind of existential despair, which makes the everyday duties of the trench seem that much more onerous, and the charnel house of war that much more difficult to bear.
At this point, one of the platoon’s more intellectually sophisticated members speaks up. He thinks his angst-ridden comrades are missing the point: Regardless of the larger context of the conflict, they know the war has meaning because they can’t stop acting like it has meaning. Even in their slough of despond, most of them don’t throw themselves on barbed wire or rush headlong into a wave of poison gas. (And the ones who do usually have something clinically wrong with them.) Instead, they duck when the shells sail over, charge when the commander gives the order, tend the wounded and comfort the dying and feel intuitively invested in the capture of the next hill, the next salient, the next trench. They do so, this clever soldier goes on, because their immediate context — life-and-death battles, wartime loves and friendships, etc — supplies intense feelings of meaningfulness, and so long as it does the big-picture questions that they’re worrying about must be logically separable from the everyday challenges of being a front-line soldier. If some of the soldiers want to worry about these big-picture questions, that’s fair enough. But they shouldn’t pretend that their worries give them a monopoly on a life meaningfully lived (or a war meaningfully fought). Instead, given how much meaningfulness is immediately and obviously available — right here and right now, amid the rocket’s red glare and the bombs bursting in air — the desire to understand the war’s larger context is just a personal choice, with no necessary connection to the question of whether today’s battle is worth the fighting.
This is a very natural way to approach warfare, as it happens. (Many studies of combat have shown that the bonds of affection between soldiers tend to matter more to cohesion and morale than the grand ideological purposes — or lack thereof — that they’re fighting for.) And it’s a very natural way to approach everyday life as well. But part of the point of religion and philosophy is to address questions that lurk beneath these natural rhythms, instead of just taking our feelings of meaningfulness as the alpha and omega of human existence. In the context of the war, of course the battle feels meaningful. In the context of daily life as we experience it, of course our joys and sorrows feel intensely meaningful. But just as it surely makes a (if you will) meaningful difference why the war itself is being waged, it surely makes a rather large difference whether our joys and sorrows take place in, say, C.S. Lewis’s Christian universe or Richard Dawkins’s godless cosmos. Saying that “we know life is meaningful because it feels meaningful” is true for the first level of context, but non-responsive for the second.
December 4, 2011, 5:30 pm
Art and the Limits of Neuroscience
By ALVA NOË
What is art? What does art reveal about human nature? The trend these days is to approach such questions in the key of neuroscience.
“Neuroaesthetics” is a term that has been coined to refer to the project of studying art using the methods of neuroscience. It would be fair to say that neuroaesthetics has become a hot field. It is not unusual for leading scientists and distinguished theorists of art to collaborate on papers that find their way into top scientific journals.
Semir Zeki, a neuroscientist at University College London, likes to say that art is governed by the laws of the brain. It is brains, he says, that see art and it is brains that make art. Champions of the new brain-based approach to art sometimes think of themselves as fighting a battle with scholars in the humanities who may lack the courage (in the words of the art historian John Onians) to acknowledge the ways in which biology constrains cultural activity. Strikingly, it hasn’t been much of a battle. Students of culture, like so many of us, seem all too glad to join in the general enthusiasm for neural approaches to just about everything.
What is striking about neuroaesthetics is not so much the fact that it has failed to produce interesting or surprising results about art, but rather the fact that no one — not the scientists, and not the artists and art historians — seems to have minded, or even noticed. What stands in the way of success in this new field is, first, the fact that neuroscience has yet to frame anything like an adequate biological or “naturalistic” account of human experience — of thought, perception, or consciousness.
The idea that a person is a functioning assembly of brain cells and associated molecules is not something neuroscience has discovered. It is, rather, something it takes for granted. You are your brain. Francis Crick once called this “the astonishing hypothesis,” because, as he claimed, it is so remote from the way most people alive today think about themselves. But what is really astonishing about this supposedly astonishing hypothesis is how astonishing it is not! The idea that there is a thing inside us that thinks and feels — and that we are that thing — is an old one. Descartes thought that the thinking thing inside had to be immaterial; he couldn’t conceive how flesh could perform the job. Scientists today suppose that it is the brain that is the thing inside us that thinks and feels. But the basic idea is the same. And this is not an idle point. However surprising it may seem, the fact is we don’t actually have a better understanding of how the brain might produce consciousness than Descartes did of how the immaterial soul would accomplish this feat; after all, at the present time we lack even the rudimentary outlines of a neural theory of consciousness.
What we do know is that a healthy brain is necessary for normal mental life, and indeed, for any life at all. But of course much else is necessary for mental life. We need roughly normal bodies and a roughly normal environment. We also need the presence and availability of other people if we are to have anything like the sorts of lives that we know and value. So we really ought to say that it is the normally embodied, environmentally- and socially-situated human animal that thinks, feels, decides and is conscious. But once we say this, it would be simpler, and more accurate, to allow that it is people, not their brains, who think and feel and decide. It is people, not their brains, that make and enjoy art. You are not your brain, you are a living human being.
We need finally to break with the dogma that you are something inside of you — whether we think of this as the brain or an immaterial soul — and we need finally to take seriously the possibility that the conscious mind is achieved by persons and other animals thanks to their dynamic exchange with the world around them (a dynamic exchange that no doubt depends on the brain, among other things). Importantly, to break with the Cartesian dogmas of contemporary neuroscience would not be to cave in and give up on a commitment to understanding ourselves as natural. It would be rather to rethink what a biologically adequate conception of our nature would be.
But there is a second obstacle to progress in neuroaesthetics. Neural approaches to art have not yet been able to find a way to bring art into focus in the laboratory. As mentioned, theorists in this field like to say that art is constrained by the laws of the brain. But in practice what this is usually taken to come down to is the humble fact that the brain constrains the experience of art because it constrains all experience. Visual artists, for example, don’t work with ultraviolet light, as Zeki reminds us, because we can’t see ultraviolet light. They do work with shape and form and color because we can see them.
Now it is doubtless correct that visual artists confine themselves to materials and effects that are, well, visible. And likewise, it seems right that our perception of works of art, like our perception of anything, depends on the nature of our perceptual capacities, capacities which, in their turn, are constrained by the brain.
But there is a problem with this: An account of how the brain constrains our ability to perceive has no greater claim to being an account of our ability to perceive art than it has to being an account of how we perceive sports, or how we perceive the man across from us on the subway. In works about neuroaesthetics, art is discussed in the prefaces and touted on the book jackets, but never really manages to show up in the body of the works themselves!
Some of us might wonder whether the relevant question is how we perceive works of art, anyway. What we ought to be asking is: Why do we value some works as art? Why do they move us? Why does art matter? And here again, the closest neural scientists or psychologists come to saying anything about this kind of aesthetic evaluation is to say something about preference. But the class of things we like, or that we prefer as compared to other things, is much wider than the class of things we value as art. And the sorts of reasons we have for valuing one art work over another are not the same kind of reasons we would give for liking one person more than another, or one flavor more than another. And it is no help to appeal to beauty here. Beauty is both too wide and too narrow. Not all art works are beautiful (or pleasing for that matter, even if many are), and not everything we find beautiful (a person, say, or a sunset) is a work of art.
Again we find not that neuroaesthetics takes aim at our target and misses, but that it fails even to bring the target into focus.
Yet it’s early. Neuroaesthetics, like the neuroscience of consciousness itself, is still in its infancy. Is there any reason to doubt that progress will be made? Is there any principled reason to be skeptical that there can be a valuable study of art making use of the methods and tools of neuroscience? I think the answer to these questions must be yes, but not because there is no value in bringing art and empirical science into contact, and not because art does not reflect our human biology.
To begin to see this, consider: engagement with a work of art is a bit like engagement with another person in conversation; and a work of art itself can be usefully compared with a humorous gesture or a joke. Just as getting a joke requires sensitivity to a whole background context, to presuppositions and intended as well as unintended meanings, so “getting” a work of art requires an attunement to problems, questions, attitudes and expectations; it requires an engagement with the context in which the work of art has work to do. We might say that works of art pose questions and encountering a work of art meaningfully requires understanding the relevant questions and getting why they matter, or maybe even, why they don’t matter, or don’t matter any more, or why they would matter in one context but not another. In short, the work of art, whatever its local subject matter or specific concerns — God, life, death, politics, the beautiful, art itself, perceptual consciousness — and whatever its medium, is doing something like philosophical work.
One consequence of this is that it may belong to the very nature of art, as it belongs to the nature of philosophy, that there can be nothing like a settled, once-and-for-all account of what art is, just as there can be no all-purpose account of what happens when people communicate or when they laugh together. Art, even for those who make it and love it, is always a question, a problem for itself. What is art? The question must arise, but it allows no definitive answer.
For these reasons, neuroscience, which looks at events in the brains of individual people and can do no more than describe and analyze them, may just be the wrong kind of empirical science for understanding art.
Far from its being the case that we can apply neuroscience as an intellectual ready-made to understand art, it may be that art, by disclosing the ways in which human experience in general is something we enact together, in exchange, provides new resources for shaping a more plausible, more empirically rigorous account of our human nature.
Alva Noë is a philosopher at CUNY’s Graduate Center. He is the author of “Out of Our Heads: Why You Are Not Your Brain and Other Lessons From The Biology of Consciousness.” He is now writing a book on art and human nature. Noë writes a weekly column for NPR’s 13.7: Cosmos and Culture blog. You can follow him on Twitter and Facebook.
The Stone is a forum for contemporary philosophers on issues both timely and timeless.
For roughly 98 percent of the last 2,500 years of Western intellectual history, philosophy was considered the mother of all knowledge. It generated most of the fields of research still with us today. This is why we continue to call our highest degrees Ph.D.’s, namely, philosophy doctorates. At the same time, we live in an age in which many seem no longer sure what philosophy is or is good for anymore. Most seem to see it as a highly abstracted discipline with little if any bearing on objective reality — something more akin to art, literature or religion. All have plenty to say about reality. But the overarching assumption is that none of it actually qualifies as knowledge until proven scientifically.
Yet philosophy differs in a fundamental way from art, literature or religion, as its etymological meaning is “the love of wisdom,” which implies a significant degree of objective knowledge. And this knowledge must be attained on its own terms. Or else it would be but another branch of science.
So what objective knowledge can philosophy bring that is not already determinable by science? This is a question that has become increasingly fashionable — even in philosophy — to answer with a defiant “none.” For numerous philosophers have come to believe, in concert with the prejudices of our age, that only science holds the potential to solve persistent philosophical mysteries such as the nature of truth, life, mind, meaning, justice, the good and the beautiful.
Thus, myriad contemporary philosophers are perfectly willing to offer themselves up as intellectual servants or ushers of scientific progress. Their research largely functions as a spearhead for scientific exploration and as a balm for making those pursuits more palpable and palatable to the wider population. The philosopher S.M. Liao, for example, argued recently in The Atlantic that we begin voluntarily bioengineering ourselves to lower our carbon footprints and to become generally more virtuous. And Prof. Colin McGinn, writing recently in The Stone, claimed to be so tired of philosophy being disrespected and misunderstood that he urged that philosophers begin referring to themselves as “ontic scientists.”
McGinn takes the moniker of science as broad enough to include philosophy since the dictionary defines it as “any systematically organized body of knowledge on any subject.” But this definition is so vague that it betrays a widespread confusion as to what science actually is. And McGinn’s reminder that its etymology comes from “scientia,” the ancient Latin word for “knowledge,” only adds to the muddle. For by this definition we might well brand every academic discipline as science. “Literary studies” then become “literary sciences” — sounds much more respectable. “Fine arts” become “aesthetic sciences” — that would surely get more parents to let their kids major in art. While we’re at it, let’s replace the Bachelor of Arts degree with the Bachelor of Science. (I hesitate to even mention such options lest enterprising deans get any ideas.) Authors and artists aren’t engaged primarily in any kind of science, as their disciplines have more to do with subjective and qualitative standards than objective and quantitative ones. And that’s of course not to say that only science can bring objective and quantitative knowledge. Philosophy can too.
The intellectual culture of scientism clouds our understanding of science itself. What’s more, it eclipses alternative ways of knowing — chiefly the philosophical — that can actually yield greater certainty than the scientific. While science and philosophy do at times overlap, they are fundamentally different approaches to understanding. So philosophers should not add to the conceptual confusion that subsumes all knowledge into science. Rather, we should underscore the fact that various disciplines we ordinarily treat as science are at least as — if not more — philosophical than scientific. Take for example mathematics, theoretical physics, psychology and economics. These are predominantly rational conceptual disciplines. That is, they are not chiefly reliant on empirical observation. For unlike science, they may be conducted while sitting in an armchair with eyes closed.
Does this mean these fields do not yield objective knowledge? The question is frankly absurd. Indeed if any of their findings count as genuine knowledge, they may actually be more enduring. For unlike empirical observations, which may be mistaken or incomplete, philosophical findings depend primarily on rational and logical principles. As such, whereas science tends to alter and update its findings day to day through trial and error, logical deductions are timeless. This is why Einstein pompously called attempts to empirically confirm his special theory of relativity “the detail work.” Indeed last September, The New York Times reported that scientists at the European Center for Nuclear Research (CERN) thought they had empirically disproved Einstein’s theory that nothing could travel faster than the speed of light, only to find their results could not be reproduced in follow-up experiments last month. Such experimental anomalies are confounding. But as CERN’s research director Sergio Bertolucci plainly put it, “This is how science works.”
However, 5 plus 7 will always equal 12. No amount of further observation will change that. And while mathematics is empirically testable at such rudimentary levels, it stops being so in its purest forms, like analysis and number theory. Proofs in these areas are conducted entirely conceptually. Similarly with logic, certain arguments are proven inexorably valid while others are inexorably invalid. Logically fallacious arguments can be rather sophisticated and persuasive. But they are nevertheless invalid and always will be. Exposing such errors is part of philosophy’s stock in trade. Thus as Socrates pointed out long ago, much of the knowledge gained by doing philosophy consists in realizing what is not the case.
One such example is Thrasymachus’ claim that justice is best defined as the advantage of the stronger, namely, that which is in the competitive interest of the powerful. Socrates reduces this view to absurdity by showing that the wise need not compete with anyone.
Or to take a more positive example, Wittgenstein showed that an ordinary word such as “game” is used consistently in myriad contrasting ways without possessing any essential unifying definition. Though this may seem impossible, the meaning of such terms is actually determined by their contextual usage. For when we look at faces within a nuclear family, we see resemblances from one to the next. Yet no single trait need be present in every face to recognize them all as members of the family. Similarly, divergent uses of “game” form a family. Ultimately as a result of Wittgenstein’s philosophy, we know that natural language is a public phenomenon that cannot logically be invented in isolation.
These are essentially conceptual clarifications. And as such, they are relatively timeless philosophical truths.
This is also why jurisprudence qualifies as an objective body of knowledge without needing to change its name to “judicial science,” as some universities now describe it. Though it is informed by empirical research into human nature and the general workings of society, it relies principally on the cogency of arguments from learned experts as measured by their logical validity and the truth value of their premises. If both of these criteria are present, then the arguments are sound. Hence, Supreme Court justices are not so much scientific as philosophical experts on the nature of justice. And that is not to say their expertise does not count as genuine knowledge. In the best cases, it rises to the loftier level of wisdom — the central objective of philosophy.
Though philosophy does sometimes employ thought experiments, these aren’t actually scientific, for they are conducted entirely in the imagination. For example, judges have imagined what might happen if, say, insider trading were made legal. And they have concluded that while it would lower regulatory costs and promote a degree of investor freedom, legalization would imperil the free market itself by undermining honest securities markets and eroding investor confidence. While this might appear to be an empirical question, it cannot be settled empirically without conducting the experiment, which is naturally beyond the reach of jurisprudence. Only legislatures could conduct the experiment by legalizing insider trading. And even then, one could not conduct it completely scientifically without a separate control-group society in which insider trading remained illegal for comparison. Regardless, judges would likely again forbid legalization essentially on compelling philosophical grounds.
Similarly in ethics, science cannot necessarily tell us what to value. Science has made significant progress in helping to understand human nature. Such research, if accurate, provides very real constraints to philosophical constructs on the nature of the good. Still, evidence of how most people happen to be does not necessarily tell us everything about how we should aspire to be. For how we should aspire to be is a conceptual question, namely, of how we ought to act, as opposed to an empirical question of how we do act. We might administer scientific polls to determine the degree to which people take themselves to be happy and what causes they might attribute to their own levels of happiness. But it’s difficult to know if these self-reports are authoritative since many may not have coherent, consistent or accurate conceptions of happiness to begin with. We might even ask them if they find such and such ethical arguments convincing, namely, if happiness ought to be their only aim in life. But we don’t and shouldn’t take those results as sufficient to determine, say, the ethics standards of the American Medical Association, as those require philosophical analysis.
In sum, philosophy is not science. For it employs the rational tools of logical analysis and conceptual clarification in lieu of empirical measurement. And this approach, when carefully carried out, can yield knowledge at times more reliable and enduring than science, strictly speaking. For scientific measurement is in principle always subject to at least some degree of readjustment based on future observation. Yet sound philosophical argument achieves a measure of immortality.
So if we philosophers want to restore philosophy’s authority in the wider culture, we should not change its name but engage more often with issues of contemporary concern — not so much as scientists but as guardians of reason. This might encourage the wider population to think more critically, that is, to become more philosophical.
Julian Friedland is a visiting assistant professor at Fordham University Gabelli School of Business, Division of Law and Ethics. He is editor of “Doing Well and Good: The Human Face of the New Capitalism.” His research focuses primarily on the nature of positive professional duty.
There is a standard view about language that one finds among philosophers, language departments, pundits and politicians. It is the idea that a language like English is a semi-stable abstract object that we learn to some degree or other and then use in order to communicate or express ideas and perform certain tasks. I call this the static picture of language, because, even though it acknowledges some language change, the pace of change is thought to be slow, and what change there is, is thought to be the hard-fought product of conflict. Thus, even the “revisionist” picture of language sketched by Gary Gutting in a recent Stone column counts as static on my view, because the change is slow and it must overcome resistance.
Recent work in philosophy, psychology and artificial intelligence has suggested an alternative picture that rejects the idea that languages are stable abstract objects that we learn and then use. According to the alternative “dynamic” picture, human languages are one-off things that we build “on the fly” on a conversation-by-conversation basis; we can call these one-off fleeting languages microlanguages. Importantly, this picture rejects the idea that words are relatively stable things with fixed meanings that we come to learn. Rather, word meanings themselves are dynamic — they shift from microlanguage to microlanguage.
Shifts of meaning do not merely occur between conversations; they also occur within conversations — in fact conversations are often designed to help this shifting take place. That is, when we engage in conversation, much of what we say does not involve making claims about the world but involves instructing our communicative partners how to adjust word meanings for the purposes of our conversation.
For example, the linguist Chris Barker has observed that many of the utterances we make play the role of shifting the meaning of a term. To illustrate, suppose I am thinking of applying for academic jobs and I tell my friend that I don’t care where I teach so long as the school is in a city. My friend suggests that I apply to the University of Michigan and I reply “Ann Arbor is not a city.” In doing this, I am not making a claim about the world so much as instructing my friend (for the purposes of our conversation) to adjust the meaning of “city” from official definitions to one in which places like Ann Arbor do not count as cities.
Word meanings are dynamic, but they are also underdetermined. What this means is that there is no complete answer to what does and doesn’t fall within the range of a term like “red” or “city” or “hexagonal.” We may sharpen the meaning and we may get clearer on what falls in the range of these terms, but we never completely sharpen the meaning.
This isn’t just the case for words like “city” but for all words, ranging from words for things, like “person” and “tree,” to words for abstract ideas, like “art” and “freedom,” to words for crimes, like “rape” and “murder.” Indeed, I would argue that this is also the case with mathematical and logical terms like “parallel line” and “entailment.” The meanings of these terms remain open to some degree or other, and are sharpened as needed when we make advances in mathematics and logic.
The dynamic lexicon changes the way we look at problems ranging from human-computer interaction to logic itself, but it also has an application in the political realm. Over the last few decades, some important legal scholars and judges — most notably the United States Supreme Court Justice, Antonin Scalia — have made the case that the Constitution is not a living document, and that we should try to get back to understanding the Constitution as it was originally written by the original framers — sometimes this doctrine is called textualism. Scalia’s doctrine says that we cannot do better than concentrate on what the Constitution actually says — on what the words on paper say. Scalia once put this in the form of a tautology: “Words mean what they mean.” In his more cautious formulation he says that “words do have a limited range of meaning, and no interpretation that goes beyond that range is permissible.”
Pretty clearly Scalia is assuming what I have called the static picture of language. But “words mean what they mean” is not the tautology that Scalia seems to think it is. If word meanings can change dramatically during the course of a single conversation how could they not change over the course of centuries? But more importantly, Scalia’s position seems to assume that the original meanings of the words used in the Constitution are nearly fully determined — that the meaning of a term like “person” or phrase like “due process,” as used in the Constitution is fully fleshed out. But is it really determined whether, for example, the term “person” in the Constitution applies to medically viable fetuses, brain dead humans on life support, and, as we will have to ask in the fullness of time, intelligent robots? The dynamic picture says no.
The words used by lawmakers are just as open ended as words used in day-to-day conversation. Indeed, many laws are specifically written so as to be open-ended. But even if that was not the intent, there is no way to close the gap and have the meanings of words fully fleshed out. Technological advances are notorious for exposing the open-endedness of the language in our laws, even when we thought our definitions were airtight. Lawmakers can’t anticipate everything. Indeed, you could make the case that the whole area of patent law just is the problem of deciding whether some new technology should fall within the range of the language of the patent.
Far from being absurd, the idea that the Constitution is a living organism follows from the fact that the words used in writing the Constitution are underdetermined and dynamic and thus “living organisms” in the metaphorical sense in play here. In this respect there is nothing unique about the Constitution. It is a dynamic object because of the simple reason that word meanings are dynamic. Every written document — indeed every word written or uttered — is a living organism.
By Deepak Chopra, M.D., FACP, and Rudolph E. Tanzi, Ph.D., Joseph P. and Rose F. Kennedy Professor of Neurology, Harvard Medical School; Director, Genetics and Aging at Massachusetts General Hospital (MGH).
Like a personal computer, science needs a recycle bin for ideas that didn't work out as planned. In this bin would go commuter trains riding on frictionless rails using superconductivity, along with interferon, the last AIDS vaccine, and most genetic therapies. These failed promises have two things in common: They looked like the wave of the future but then reality proved too complex to fit the simple model that was being offered.
The next thing to go into the recycle bin might be the brain. We are living in a golden age of brain research, thanks largely to vast improvements in brain scans. Now that functional MRIs can give snapshots of the brain in real time, researchers can see specific areas of the brain light up, indicating increased activity. On the other hand, dark spots in the brain indicate minimal activity or none at all. Thus, we arrive at those familiar maps that compare a normal brain with one that has deviated from the norm. This is obviously a great boon where disease is concerned. Doctors can see precisely where epilepsy or Parkinsonism or a brain tumor has created damage, and with this knowledge new drugs and more precise surgery can target the problem.
But then overreach crept in. We are shown brain scans of repeat felons with pointers to the defective areas of their brains. The same holds for Buddhist monks, only in their case, brain activity is heightened and improved, especially in the prefrontal lobes associated with compassion. By now there is no condition, good or bad, that hasn't been linked to a brain pattern that either "proves" that there is a link between the brain and a certain behavior or exhibits the "cause" of a certain trait. The whole assumption, shared by 99 percent of neuroscientists, is that we are our brains.
In this scheme, the brain is in charge, having evolved to control certain fixed behaviors. Why do men see other men as rivals for a desirable woman? Why do people seek God? Why does snacking in front of the TV become a habit? We are flooded with articles and books reinforcing the same assumption: The brain is using you, not the other way around. Yet it's clear that a faulty premise is leading to gross overreach.
The flaws in current reasoning can be summarized with devastating force:
1. Brain activity isn't the same as thinking, feeling, or seeing.
2. No one has remotely shown how molecules acquire the qualities of the mind.
3. It is impossible to construct a theory of the mind based on material objects that somehow became conscious.
4. When the brain lights up, its activity is like a radio lighting up when music is played. It is an obvious fallacy to say that the radio composed the music. What is being viewed is only a physical correlation, not a cause.
It's a massive struggle to get neuroscientists to see these flaws. They are king of the hill right now, and so long as new discoveries are being made every day, a sense of triumph pervades the field. "Of course" we will solve everything from depression to overeating, crime to religious fanaticism, by tinkering with neurons and the kinks thrown into normal, desirable brain activity. But that's like hearing a really bad performance of "Rhapsody in Blue" and trying to turn it into a good performance by kicking the radio.
We've become excited by a flawless 2008 article published by Donald D. Hoffman, professor of cognitive sciences at the University of California Irvine. It's called "Conscious Realism and the Mind-Body Problem," and its aim is to show, using logic, philosophy, and neuroscience, that we are not our brains. We are "conscious agents" -- Hoffman's term for minds that shape reality, including the reality of the brain. Hoffman is optimistic that the thorny problem of consciousness can be solved, and science can find a testable model for the mind. But future progress depends on researchers abandoning their current premise, that the brain is the mind. We urge you to read the article in its entirety, but for us, the good news is that Hoffman's ideas show that the tide may be turning.
It is degrading to human potential when the brain uses us instead of vice versa. There is no doubt that we can become trapped by faulty wiring in the brain -- this happens in depression, addictions, and phobias, for example. Neural circuits can seemingly take control, and there is much talk of "hard wiring" by which some activity is fixed and preset by nature, such as the fight-or-flight response. But what about people who break bad habits, kick their addictions, or overcome depression? It would be absurd to say that the brain, being stuck in faulty wiring, suddenly and spontaneously fixed the wiring. What actually happens, as anyone knows who has achieved success in these areas, is that the mind takes control. Mind shapes the brain, and when you make up your mind to do something, you return to the natural state of using your brain instead of the other way around.
It's very good news that you are not your brain, because when your mind finds its true power, the result is healing, inspiration, insight, self-awareness, discovery, curiosity, and quantum leaps in personal growth. The brain is totally incapable of such things. After all, if it is a hard-wired machine, there is no room for sudden leaps and renewed inspiration. The machine simply does what it does. A depressed brain can no more heal itself than a car can suddenly decide to fly. Right now the golden age of brain research is brilliantly decoding neural circuitry, and thanks to neuroplasticity, we know that the brain's neural pathways can be changed. The marvels of brain activity grow more astonishing every day. Yet in our astonishment it would be a grave mistake, and a disservice to our humanity, to forget that the real glory of human existence is the mind, not the brain that serves it.
Deepak Chopra and Rudy Tanzi are co-authors of the forthcoming book Superbrain: New Breakthroughs for Maximizing Health, Happiness and Spiritual Well-Being (Harmony Books).
CAMBRIDGE, Mass. — Seated in a cheerfully cramped monitoring room at the Harvard University Laboratory for Developmental Studies, Elizabeth S. Spelke, a professor of psychology and a pre-eminent researcher of the basic ingredient list from which all human knowledge is constructed, looked on expectantly as her students prepared a boisterous 8-month-old girl with dark curly hair for the onerous task of watching cartoons.
The video clips featured simple Keith Haring-type characters jumping, sliding and dancing from one group to another. The researchers’ objective, as with half a dozen similar projects under way in the lab, was to explore what infants understand about social groups and social expectations.
Yet even before the recording began, the 15-pound research subject made plain the scope of her social brain. She tracked conversations, stared at newcomers and burned off adult corneas with the brilliance of her smile. Dr. Spelke, who first came to prominence by delineating how infants learn about objects, numbers, the lay of the land, shook her head in self-mocking astonishment.
“Why did it take me 30 years to start studying this?” she said. “All this time I’ve been giving infants objects to hold, or spinning them around in a room to see how they navigate, when what they really wanted to do was engage with other people!”
Dr. Spelke, 62, is tall and slim, and parts her long hair down the middle, like a college student. She dresses casually, in a corduroy jumper or a cardigan and slacks, and when she talks, she pitches forward and plants forearms on thighs, hands clasped, seeming both deeply engaged and ready to bolt. The lab she founded with her colleague Susan Carey is strewed with toys and festooned with children’s T-shirts, but the Elmo atmospherics belie both the lab’s seriousness of purpose and Dr. Spelke’s towering reputation among her peers in cognitive psychology.
“When people ask Liz, ‘What do you do?’ she tells them, ‘I study babies,’ ” said Steven Pinker, a fellow Harvard professor and the author of “The Better Angels of Our Nature,” among other books. “That’s endearingly self-deprecating, but she sells herself short.”
What Dr. Spelke is really doing, he said, is what Descartes, Kant and Locke tried to do. “She is trying to identify the bedrock categories of human knowledge. She is asking, ‘What is number, space, agency, and how does knowledge in each category develop from its minimal state?’ ”
Dr. Spelke studies babies not because they’re cute but because they’re root. “I’ve always been fascinated by questions about human cognition and the organization of the human mind,” she said, “and why we’re good at some tasks and bad at others.”
But the adult mind is far too complicated, Dr. Spelke said, “too stuffed full of facts” to make sense of it. In her view, the best way to determine what, if anything, humans are born knowing, is to go straight to the source, and consult the recently born.
Decoding Infants’ Gaze
Dr. Spelke is a pioneer in the use of the infant gaze as a key to the infant mind — that is, identifying the inherent expectations of babies as young as a week or two by measuring how long they stare at a scene in which those presumptions are upended or unmet. “More than any scientist I know, Liz combines theoretical acumen with experimental genius,” Dr. Carey said. Nancy Kanwisher, a neuroscientist at M.I.T., put it this way: “Liz developed the infant gaze idea into a powerful experimental paradigm that radically changed our view of infant cognition.”
Here, according to the Spelke lab, are some of the things that babies know, generally before the age of 1:
They know what an object is: a discrete physical unit in which all sides move roughly as one, and with some independence from other objects.
“If I reach for a corner of a book and grasp it, I expect the rest of the book to come with me, but not a chunk of the table,” said Phil Kellman, Dr. Spelke’s first graduate student, now at the University of California, Los Angeles.
A baby has the same expectation. If you show the baby a trick sequence in which a rod that appears to be solid moves back and forth behind another object, the baby will gape in astonishment when that object is removed and the rod turns out to be two fragments.
“The visual system comes equipped to partition a scene into functional units we need to know about for survival,” Dr. Kellman said. Wondering whether your bag of four oranges puts you over the limit for the supermarket express lane? A baby would say, “You pick up the bag, the parts hang together, that makes it one item, so please get in line.”
Babies know, too, that objects can’t go through solid boundaries or occupy the same position as other objects, and that objects generally travel through space in a continuous trajectory. If you claimed to have invented a transporter device like the one in “Star Trek,” a baby would scoff.
Babies are born accountants. They can estimate quantities and distinguish between more and less. Show infants arrays of, say, 4 or 12 dots and they will match each number to an accompanying sound, looking longer at the 4 dots when they hear 4 sounds than when they hear 12 sounds, even if each of the 4 sounds is played comparatively longer. Babies also can perform a kind of addition and subtraction, anticipating the relative abundance of groups of dots that are being pushed together or pulled apart, and looking longer when the wrong number of dots appears.
Babies are born Euclideans. Infants and toddlers use geometric clues to orient themselves in three-dimensional space, navigate through rooms and locate hidden treasures. Is the room square or rectangular? Did the nice cardigan lady put the Slinky in a corner whose left wall is long or short?
At the same time, the Spelke lab discovered, young children are quite bad at using landmarks or décor to find their way. Not until age 5 or 6 do they begin augmenting search strategies with cues like “She hid my toy in a corner whose left wall is red rather than white.”
“That was a deep surprise to me,” Dr. Spelke said. “My intuition was, a little kid would never make the mistake of ignoring information like the color of a wall.” Nowadays, she continued, “I don’t place much faith in my intuitions, except as a starting place for designing experiments.”
These core mental modules — object representation, approximate number sense and geometric navigation — are all ancient systems shared at least in part with other animals; for example, rats also navigate through a maze by way of shape but not color. The modules amount to baby’s first crib sheet to the physical world.
“The job of the baby,” Dr. Spelke said, “is to learn.”
Role of Language
More recently, she and her colleagues have begun identifying some of the baseline settings of infant social intelligence. Katherine D. Kinzler, now of the University of Chicago, and Kristin Shutts, now at the University of Wisconsin, have found that infants just a few weeks old show a clear liking for people who use speech patterns the babies have already been exposed to, and that includes the regional accents, twangs, and R’s or lack thereof. A baby from Boston not only gazes longer at somebody speaking English than at somebody speaking French; the baby gazes longest at a person who sounds like Click and Clack of the radio show “Car Talk.”
In guiding early social leanings, accent trumps race. A white American baby would rather accept food from a black English-speaking adult than from a white Parisian, and a 5-year-old would rather befriend a child of another race who sounds like a local than one of the same race who has a foreign accent.
Other researchers in the Spelke lab are studying whether babies expect behavioral conformity among members of a group (hey, the blue character is supposed to be jumping like the rest of the blues, not sliding like the yellow characters); whether they expect other people to behave sensibly (if you’re going to reach for a toy, will you please do it efficiently rather than let your hand meander all over the place?); and how babies decide whether a novel object has “agency” (is this small, fuzzy blob active or inert?).
Dr. Spelke is also seeking to understand how the core domains of the human mind interact to yield our uniquely restless and creative intelligence — able to master calculus, probe the cosmos and play a Bach toccata as no bonobo or New Caledonian crow can. Even though “our core systems are fundamental yet limited,” as she put it, “we manage to get beyond them.”
Dr. Spelke has proposed that human language is the secret ingredient, the cognitive catalyst that allows our numeric, architectonic and social modules to join forces, swap ideas and take us to far horizons. “What’s special about language is its productive combinatorial power,” she said. “We can use it to combine anything with anything.”
She points out that children start integrating what they know about the shape of the environment, their navigational sense, with what they know about its landmarks — object recognition — at just the age when they begin to master spatial language and words like “left” and “right.” Yet, she acknowledges, her ideas about language as the central consolidator of human intelligence remain unproved and contentious.
Whatever their aim, the studies in her lab are difficult, each requiring scores of parentally volunteered participants. Babies don’t follow instructions and often “fuss out” mid-test, taking their data points with them.
Yet Dr. Spelke herself never fusses out or turns rote. She prowls the lab from a knee-high perspective, fretting the details of an experiment like Steve Jobs worrying over iPhone pixel density. “Is this car seat angled a little too far back?” she asked her students, poking the little velveteen chair every which way. “I’m concerned that a baby will have to strain too much to see the screen and decide it’s not worth the trouble.”
Should a student or colleague disagree with her, Dr. Spelke skips the defensive bristling, perhaps in part because she is serenely self-confident about her intellectual powers. “It was all easy for me,” she said of her early school years. “I don’t think I had to work hard until I got to college, or even graduate school.”
So, Radcliffe Phi Beta Kappa, ho hum. “My mother is absolutely brilliant, not just in science, but in everything,” said her daughter, Bridget, a medical student. “There’s a joke in my family that my mother and brother are the geniuses, and Dad and I are the grunts.” (“I hate this joke,” Dr. Spelke commented by e-mail, “and utterly reject this distinction!”)
Above all, Dr. Spelke relishes a good debate. “She welcomes people disagreeing with her,” said her husband, Elliott M. Blass, an emeritus professor of psychology at the University of Massachusetts. “She says it’s not about being right, it’s about getting it right.”
When Lawrence H. Summers, then president of Harvard, notoriously suggested in 2005 that the shortage of women in the physical sciences might be partly due to possible innate shortcomings in math, Dr. Spelke zestily entered the fray. She combed through results from her lab and elsewhere on basic number skills, seeking evidence of early differences between girls and boys. She found none.
“My position is that the null hypothesis is correct,” she said. “There is no cognitive difference and nothing to say about it.”
Dr. Spelke laid out her case in an acclaimed debate with her old friend Dr. Pinker, who defended the Summers camp.
“I have enormous respect for Steve, and I think he’s great,” Dr. Spelke said. “But when he argues that it makes sense that so many women are going into biology and medicine because those are the ‘helping’ professions, well, I remember when being a doctor was considered far too full of blood and gore for women and their uncontrollable emotions to handle.”
Raising Her Babies
For her part, Dr. Spelke has passionately combined science and motherhood. Her mother studied piano at Juilliard but gave it up when Elizabeth was born. “I felt terribly guilty about that,” Dr. Spelke said. “I never wanted my children to go through the same thing.”
When her children were young, Dr. Spelke often took them to the lab or held meetings at home. The whole family traveled together — France, Spain, Sweden, Egypt, Turkey — never reserving lodgings but finding accommodations as they could. (The best, Dr. Blass said, was a casbah in the Moroccan desert.)
Scaling the academic ranks, Dr. Spelke still found time to supplement her children’s public school education with a home-schooled version of the rigorous French curriculum. She baked their birthday cakes from scratch, staged elaborate treasure hunts and spent many days each year creating their Halloween costumes: Bridget as a cave girl or her favorite ballet bird; her younger brother, Joey, as a drawbridge.
“Growing up in my house was a constant adventure,” Bridget said. “As a new mother myself,” she added, “I don’t know how my mom did it.”
Is Dr. Spelke the master of every domain? It’s enough to make the average mother fuss out.
May 10, 2012, 9:00 pm
Can Physics and Philosophy Get Along?
By GARY GUTTING
Physicists have been giving philosophers a hard time lately. Stephen Hawking claimed in a speech last year that philosophy is “dead” because philosophers haven’t kept up with science. More recently, Lawrence Krauss, in his book, “A Universe From Nothing: Why There Is Something Rather Than Nothing,” has insisted that “philosophy and theology are incapable of addressing by themselves the truly fundamental questions that perplex us about our existence.” David Albert, a distinguished philosopher of science, dismissively reviewed Krauss’s book: “all there is to say about this [Krauss’s claim that the universe may have come from nothing], as far as I can see, is that Krauss is dead wrong and his religious and philosophical critics are absolutely right.” Krauss — ignoring Albert’s Ph.D. in theoretical physics — retorted in an interview that Albert is a “moronic philosopher.” (Krauss somewhat moderates his views in a recent Scientific American article.)
I’d like to see if I can raise the level of the discussion a bit. Despite some nasty asides, Krauss doesn’t deny that philosophers may have something to contribute to our understanding of “fundamental questions” (his “by themselves” in the above quotation is a typical qualification). And almost all philosophers of science — certainly Albert — would agree that an intimate knowledge of science is essential for their discipline. So it should be possible to at least start a line of thought that incorporates both the physicist’s and the philosopher’s sensibilities.
There is a long tradition of philosophers’ arguing for the existence of God on the grounds that the material (physical) universe as a whole requires an immaterial explanation. Otherwise, they maintain, the universe would have to originate from nothing, and it’s not possible that something come from nothing. (One response to the argument is that the universe may have always existed and so never came into being, but the Big Bang, well established by contemporary cosmology, is often said to exclude this possibility.)
Krauss is totally unimpressed by this line of argument, since, he says, its force depends on the meaning of “nothing” and, in the context of cosmology, this meaning depends on what sense science can make of the term. For example, one plausible scientific meaning for “nothing” is “empty space”: space with no elementary particles in it. But quantum mechanics shows that particles can emerge from empty space, and so seems to show that the universe (that is, all elementary particles and so the things they make up) could come from nothing.
But, Krauss admits, particles can emerge from empty space because empty space, despite its name, does contain virtual fields that fluctuate and can give empty space properties even in the absence of particles. These fields are governed by laws allowing for the “spontaneous” production of particles. Virtual fields, the philosopher will urge, are the “something” from which the particles come. All right, says Krauss, but there is the further possibility that the long-sought quantum theory of gravity, uniting quantum mechanics and general relativity, will allow for the spontaneous production of empty space itself, simply in virtue of the theory’s laws. Then we would have everything — space, fields and particles — coming from nothing.
But, the philosopher says, What about the laws of physics? They are something, not nothing — and where do they come from? Well, says Krauss — trying to be patient — there’s another promising theoretical approach that plausibly posits a “multiverse”: a possibly infinite collection of self-contained, non-interacting universes, each with its own laws of nature. In fact, it might well be that the multiverse contains universes with every possible set of laws. We have the laws we do simply because of the particular universe we’re in. But, of course, the philosopher can respond that the multiverse itself is governed by higher-level laws.
At every turn, the philosopher concludes, there are laws of nature, and the laws always apply to some physical “stuff” (particles, fields, whatever) that is governed by the laws. In no case, then, does something really come from nothing.
It seems to me, however, that this is a case of the philosopher’s winning the battle but losing the war. There is an absolute use of “nothing” that excludes literally everything that exists. In one sense, Krauss is just obstinately ignoring this use. But if Krauss knew more philosophy, he could readily cite many philosophers who find this absolute use — and the corresponding principle that something cannot come from nothing — unintelligible. For an excellent survey of arguments along this line, see Roy Sorensen’s Stanford Encyclopedia article, “Nothingness.”
But even if the question survives the many philosophical critiques of its intelligibility, there have been strong objections to applying “something cannot come from nothing” to the universe as a whole. David Hume, for example, argued that it is only from experience that we know that individual things don’t just spring into existence (there is no logical contradiction in their doing so). Since we have no experience of the universe coming into existence, we have no reason to say that if it has come to be, it must have a cause. Hume and his followers would be entirely happy with leaving the question of a cause of the universe up to empirical science.
While Krauss could appeal to philosophy to strengthen his case against “something cannot come from nothing,” he opens himself to philosophical criticism by simply assuming that scientific experiment is, as he puts it, the “ultimate arbiter of truth” about the world. The success of science gives us every reason to continue to pursue its experimental method in search of further truths. But science itself is incapable of establishing that all truths about the world are discoverable by its methods.
Precisely because science deals with only what can be known, directly or indirectly, by sense experience, it cannot answer the question of whether there is anything — for example, consciousness, morality, beauty or God — that is not entirely knowable by sense experience. To show that there is nothing beyond sense experience, we would need philosophical arguments, not scientific experiments.
Krauss may well be right that philosophers should leave questions about the nature of the world to scientists. But, without philosophy, his claim can only be a matter of faith, not knowledge.
Here is an idea many philosophers and logicians have about the function of logic in our cognitive life, our inquiries and debates. It isn’t a player. Rather, it’s an umpire, a neutral arbitrator between opposing theories, imposing some basic rules on all sides in a dispute. The picture is that logic has no substantive content, for otherwise the correctness of that content could itself be debated, which would impugn the neutrality of logic. One way to develop this idea is by saying that logic supplies no information of its own, because the point of information is to rule out possibilities, whereas logic only rules out inconsistencies, which are not genuine possibilities. On this view, logic in itself is totally uninformative, although it may help us extract and handle non-logical information from other sources.
The idea that logic is uninformative strikes me as deeply mistaken, and I’m going to explain why. But it may not seem crazy when one looks at elementary examples of the cognitive value of logic, such as when we extend our knowledge by deducing logical consequences of what we already know. If you know that either Mary or Mark did the murder (only they had access to the crime scene at the right time), and then Mary produces a rock-solid alibi, so you know she didn’t do it, you can deduce that Mark did it. Logic also helps us recognize our mistakes, when our beliefs turn out to contain inconsistencies. If I believe that no politicians are honest, and that John is a politician, and that he is honest, at least one of those three beliefs must be false, although logic doesn’t tell me which one.
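To make those elementary steps explicit, here is a rough formal rendering, with the letters serving merely as my shorthand for the sentences above. The murder case is an instance of what logicians call disjunctive syllogism:

\[
M \lor K,\quad \neg M \ \vdash\ K
\]

where $M$ stands for “Mary did it” and $K$ for “Mark did it.” The three beliefs about politicians form an inconsistent set,

\[
\{\,\forall x\,(Px \rightarrow \neg Hx),\ \ Pj,\ \ Hj\,\},
\]

with $Px$ for “x is a politician,” $Hx$ for “x is honest” and $j$ for John; logic guarantees that the three cannot all be true together, though it does not say which to give up.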
The power of logic becomes increasingly clear when we chain together such elementary steps into longer and longer chains of reasoning, and the idea of logic as uninformative becomes correspondingly less and less plausible. Mathematics provides the most striking examples, since all its theorems are ultimately derived from a few simple axioms by chains of logical reasoning, some of them hundreds of pages long, even though mathematicians usually don’t bother to analyze their proofs into the most elementary steps.
For instance, Fermat’s Last Theorem was finally proved by Andrew Wiles and others after it had tortured mathematicians as an unsolved problem for more than three centuries. Exactly which mathematical axioms are indispensable for the proof is only gradually becoming clear, but for present purposes what matters is that together the accepted axioms suffice. One thing the proof showed is that it is a truth of pure logic that those axioms imply Fermat’s Last Theorem. If logic is uninformative, shouldn’t it be uninformative to be told that the accepted axioms of mathematics imply Fermat’s Last Theorem? But it wasn’t uninformative; it was one of the most exciting discoveries in decades. If the idea of information as ruling out possibilities can’t handle the informativeness of logic, that is a problem for that idea of information, not for the informativeness of logic.
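Put schematically, and glossing over exactly which axioms the proof requires: if $A_1, A_2, \ldots, A_n$ are the accepted axioms the proof draws on, the discovery was that the conditional

\[
(A_1 \land A_2 \land \cdots \land A_n) \rightarrow \mathrm{FLT}
\]

is a truth of pure logic, where $\mathrm{FLT}$ abbreviates Fermat’s Last Theorem. Its truth is a matter of logic alone, yet learning it was anything but trivial.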
The conception of logic as a neutral umpire of debate also fails to withstand scrutiny, for similar reasons. Principles of logic can themselves be debated, and often are, just like principles of any other science. For example, one principle of standard logic is the law of excluded middle, which says that something either is the case, or it isn’t. Either it’s raining, or it’s not. Many philosophers and others have rejected the law of excluded middle, on various grounds. Some think it fails in borderline cases, for instance when very few drops of rain are falling, and avoid it by adopting fuzzy logic. Others think the law fails when applied to future contingencies, such as whether you will be in the same job this time next year. On the other side, many philosophers — including me — argue that the law withstands these challenges. Whichever side is right, logical theories are players in these debates, not neutral umpires.
Another debate in which logical theories are players concerns the ban on contradictions. Most logicians accept the ban but some, known as dialetheists, reject it. They treat some paradoxes as black holes in logical space, where even contradictions are true (and false).
A different dispute in logic concerns “quantum logic.” Standard logic includes the “distributive” law, by which a statement of the form “X and either Y or Z” is equivalent to the corresponding statement of the form “Either X and Y or X and Z.” On one highly controversial view of the phenomenon of complementarity in quantum mechanics, it involves counterexamples to the distributive law: for example, since we can’t simultaneously observe both which way a particle is moving and where it is, the particle may be moving left and either in a given region or not, without either moving left and being in that region or moving left and not being in that region. Although that idea hasn’t done what its advocates originally hoped, namely solve the puzzles of quantum mechanics, it is yet another case where logical theories were players, not neutral umpires.
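In symbols, with $L$ for “the particle is moving left” and $R$ for “the particle is in the given region,” the law at issue is

\[
X \land (Y \lor Z) \;\equiv\; (X \land Y) \lor (X \land Z),
\]

and the quantum-logic proposal is that an instance such as $L \land (R \lor \neg R)$ can be true while $(L \land R) \lor (L \land \neg R)$ is not.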
As it happens, I think that standard logic can resist all these challenges. The point is that each of them has been seriously proposed by (a minority of) expert logicians, and rationally debated. Although attempts were made to reinterpret the debates as misunderstandings in which the two sides spoke different languages, those attempts underestimated the capacity of our language to function as a forum for debate in which profound theoretical disagreements can be expressed. Logic is just not a controversy-free zone. If we restricted it to uncontroversial principles, nothing would be left. As in the rest of science, no principle is above challenge. That does not imply that nothing is known. The fact that you know something does not mean that nobody else is allowed to challenge it.
Of course, we’d be in trouble if we could never agree on anything in logic. Fortunately, we can secure enough agreement in logic for most purposes, but nothing in the nature of logic guarantees those agreements. Perhaps the methodological privilege of logic is not that its principles are so weak, but that they are so strong. They are formulated at such a high level of generality that, typically, if they crash, they crash so badly that we easily notice, because the counterexamples to them are simple. If we want to identify what is genuinely distinctive of logic, we should stop overlooking its close similarities to the rest of science.
Read previous posts by Timothy Williamson.
Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007) and, most recently, “Modal Logic as Metaphysics,” which will be published next year.
Adopting the reductionism that equates humans with other animals or computers has a serious downside: it wipes out the meaning of your own life.
In a recent essay for The Stone, I claimed that humans are “something more than other animals, and essentially more than any computer.” Some readers found the claim importantly or trivially true; others found it partially or totally false; still others reacted as if I’d said that we’re not animals at all, or that there are no resemblances between our brains and computers. Some pointed out, rightly, that plenty of people do fine research in biology or computer science without reducing the human to the subhuman.
But reductionism is also afoot, often not within science itself but in the way scientific findings get interpreted. John Gray writes in his 2002 British best seller, “Straw Dogs,” “Humans think they are free, conscious beings, when in truth they are deluded animals.” The neurologist-philosopher Raymond Tallis lambastes such notions in his 2011 book, “Aping Mankind,” where he cites many more examples of reductionism from all corners of contemporary culture.
Now, what do I mean by reductionism, and what’s wrong with it? Every thinking person tries to reduce some things to others; if you attribute your cousin’s political outburst to his indigestion, you’ve reduced the rant to the reflux. But the reductionism that’s at stake here is a much broader habit of thinking that tries to flatten reality down and allow only certain kinds of explanations. Here I’ll provide a little historical perspective on this kind of thinking and explain why adopting it is a bad bargain: it wipes out the meaning of your own life.
Over 2,300 years ago, Aristotle argued in his “Physics” that we should try to explain natural phenomena in four different but compatible ways, traditionally known as the four causes. We can identify a moving cause, or what initiates a change: the impact of a cue stick on a billiard ball is the moving cause of the ball’s motion. We can account for some properties of things in terms of what they’re made of (material cause), as when we explain why a balloon is stretchy by pointing out that it’s made of rubber. We can understand the nature or kind of a phenomenon (formal cause), as when we define a cumulus cloud. And we can understand a thing’s function or end (final cause), as when we say that eyes are for seeing.
You’ll notice that the first two kinds of cause sound more modern than the others. Since Galileo, we have increasingly been living in a post-Aristotelian world where talk of “natures” and “ends” strikes us as unscientific jargon — although it hasn’t disappeared altogether. Aristotle thought that final causality applied to all natural things, but many of his final-cause explanations now seem naïve — say, the idea that heavy things fall because their natural end is to reach the earth. Final cause plays no part in our physics. In biology and medicine, though, it’s still at least convenient to use final-cause language and say, for instance, that a function of the liver is to aid in digestion. As for formal cause, every science works with some notion of what kind of thing it studies — such as what an organism is, what an economy is, or what language is.
But do things really come in a profusion of different kinds? For example, are living things irreducibly different from nonliving things? Reductionists would answer that a horse isn’t ultimately different in kind from a chunk of granite; the horse is just a more complicated effect of the moving and material causes that physics investigates. This view flattens life down to more general facts about patterns of matter and energy.
Likewise, reductionists will say that human beings aren’t irreducibly different from horses: politics, music, money and romance are just complex effects of biological phenomena, and these are just effects of the phenomena we observe in nonliving things. Humans get flattened down along with the rest of nature.
Reductionism, then, tries to limit reality to as few kinds as possible. For reductionists, as things combine into more complicated structures, they don’t turn into something that they really weren’t before, or reach any qualitatively new level. In this sense, reality is flat.
Notice that in this world view, since modern physics doesn’t use final causes and physics is the master science, ends or purposes play no role in reality, although talk of such things may be a convenient figure of speech. The questions “How did we get here?” and “What are we made of?” make sense for a reductionist, but questions such as “What is human nature?” and “How should we live?” — if they have any meaning at all — have to be reframed as questions about moving or material physical causes.
Now let’s consider a nonreductionist alternative: there are a great many different kinds of beings, with different natures. Reality is messy and diverse, with lumps and gaps, peaks and valleys.
But what would account for these differences in kind? The traditional Western answer is that there is a highest being who is responsible for giving created beings their natures and their very existence.
Today this traditional answer doesn’t seem as convincing as it once did. As Nietzsche complains in “Twilight of the Idols” (1889), in the traditional view “the higher is not allowed to develop from the lower, is not allowed to have developed at all.” But Darwin has helped us see that new species can develop from simpler ones. Nietzsche abandoned not just traditional creationism but God as well; others find evolution compatible with monotheism. The point for our present purposes is that Nietzsche is opposing not only the view that things require a top-down act of creation, but also reductionists who flatten everything down to the same level; he suggests that reality has peaks and valleys, and the higher emerges from the lower. Some call such a view emergentism.
An emergentist account of reality could go something like this. Over billions of years, increasingly complex beings have evolved from simpler ones. But there isn’t just greater complexity — new kinds of beings emerge, living beings, and new capacities: feeling pleasure and pain, instead of just interacting chemically and physically with other things; becoming aware of other things and oneself; and eventually, human love, freedom and reason. Reality isn’t flat.
Higher beings continue to have lower dimensions. People are still animals, and animals are still physical things — throw me out a window and I’ll follow the law of gravity, with deleterious consequences for my freedom and reason. So we can certainly study ourselves as biological, chemical and physical beings. We can correctly reconstruct the moving causes that brought us about, and analyze our material causes.
However, these findings aren’t enough for a full understanding of what humans are. We must also understand our formal cause — what’s distinctive about us in our evolved state. Thanks to the process of emergence, we have become something more than other animals.
That doesn’t mean we’re all morally excellent (we can become heroic or vile); it doesn’t give us the right to abuse and exterminate other species; and it doesn’t mean humans can do everything better (a cheetah will outrun me and a bloodhound will outsniff me every time). But we’ve developed a wealth of irreducibly human abilities, desires, responsibilities, predicaments, insights and questions that other species, as far as we can tell, approximate only vaguely.
In particular, recognizing our connections to other animals isn’t enough for us to understand ethics and politics. As incomplete, open-ended, partially self-determining animals, we must deliberate on how to live, acting in the light of what we understand about human virtue and vice, human justice and injustice. We will often go astray in our choices, but the realm of human freedom and purposes is irreducible and real: we really do envision and debate possibilities, we really do take decisions, and we really do reach better or worse goals.
As for our computing devices, who knows? Maybe we’ll find a way to jump-start them into reaching a higher level, so that they become conscious actors instead of the blind, indifferent electron pushers they’ve been so far — although, like anyone who’s seen a few science fiction movies, I’m not sure this project is particularly wise.
So is something like this emergentist view right, or is reductionism the way to go?
One thing is clear: a totally flattened-out explanation of reality far exceeds our current scientific ability. Our knowledge of general physics has to be enriched with new concepts when we study complex systems such as a muddy stream or a viral infection, not to mention human phenomena such as the Arab Spring, Twitter or “Glee.” We have to develop new ideas when we look at new kinds of reality.
In principle, though, is reductionism ultimately true? Serious thinkers have given serious arguments on both sides of this metaphysical question. For great philosophers with a reductionist cast of mind, read Spinoza or Hobbes. For brilliant emergentists, read John Dewey or Maurice Merleau-Ponty. Such issues can’t be settled in a single essay.
But make no mistake, reductionism comes at a very steep price: it asks you to hammer your own life flat. If you believe that love, freedom, reason and human purpose have no distinctive nature of their own, you’ll have to regard many of your own pursuits as phantasms and view yourself as a “deluded animal.”
Everything you feel that you’re choosing because you affirm it as good — your career, your marriage, reading The New York Times today, or even espousing reductionism — you’ll have to regard intellectually as just an effect of moving and material causes. You’ll have to abandon trust in your own experience for the sake of trust in the metaphysical principle of reductionism.
Richard Polt is a professor of philosophy at Xavier University in Cincinnati. His books include “Heidegger: An Introduction.”
Stone Links: Consider the Octopus
By MARK DE SILVA
Scientific American discusses the claim of a group of prominent researchers on consciousness who recently convened at Cambridge: a neocortex is not a precondition of conscious experience. In fact, these researchers believe there are good reasons to suppose that the neural substrates of experiential states, and of emotive states as well, are present in creatures with brains structured very differently from our own. Most interestingly, even some invertebrates — specifically, octopuses — appear to show signs of being conscious. “That does not necessarily mean that you could have a distraught octopus or an elated cuttlefish on your hands,” Katherine Harmon writes. But it does mean we need to think of consciousness as being spread across a wide range of species, and being realizable by a number of different neurological structures.