Not long after the Apollo landing, a prevalent cliché for a few years was: "If we can put humans on the moon, why can't we... [insert prominent social problem such as starvation, epidemic, radical inequality]?" In 1980, in his book Critical Path, Buckminster Fuller wrote:

"We are blessed with technology that would be indescribable to our forefathers. We have the wherewithal, the know-it-all to feed everybody, clothe everybody, and give every human on Earth a chance. We know now what we could never have known before-that we now have the option for all humanity to "make it" successfully on this planet in this lifetime."

In the contemporary zeitgeist, Fuller's claims seem naively utopian. The past century saw too much misery resulting from the attempts to build utopias. But without the belief that human civilization can improve, how could we have arrived at the point where we can formulate questions like these and exchange them around the world at the speed of light?

There are several obvious answers to the question of why this question isn't asked anymore:

1. We might have been able to put humans on the moon in 1969, but not today. Good point. And the reason for this circumstance — a lack of political will and the dispersal of the community of know-how it took NASA a decade to assemble, not a lack of technical capability — is instructive.

2. Technology actually has solved enormous social problems — antibiotics, hygienic plumbing, immunization, the green revolution. I agree with this, and see it as evidence that it is possible to relieve twice as much, even a thousand times as much, human misery as previous inventions have.

3. Human use of technologies has created even greater social problems — antibiotics are misused and supergerms have evolved; nuclear wastes and weapons are threats, not enhancements; the green revolution swelled the slums of the world as agricultural productivity rose and global agribiz emerged.

4. There is no market for solving social problems, and it isn't the business of government to get into the technology business or any other kind of business. This is the fallacy of the excluded middle. Some technologies, such as the digital computer and the Internet, were jump-started by governments, evolved through grassroots enthusiasms, and later became industries and "new economies."

5. Throwing technology at problems can be helpful, but the fundamental problems are political and economic and rooted in human nature. This answer should not be ignored. A tool is not the task, and often the invisible, social, non-physical aspects of a technological regime make all the difference.

There's some truth to each of these answers, yet they all fall short because they all assume that we know how to think about technology. Just because we know how to make things doesn't guarantee that we know what those things will do to us. Or what kind of things we ought to make.

What if we don't know how to think about the tools we are so skilled at creating? What if we could learn?

Perhaps knowing how to think about technology is a skill we will have to teach ourselves the way we taught ourselves previous new ways of thinking such as mathematics, logic, and science.

A few centuries ago, a few people began questioning the assumption that people knew how to think about the physical world. Neither philosophy nor religion seemed able to stave off famine and epidemic. The Enlightenment was about a new method for thinking.

Part of that new method was the way of asking and testing questions known as science, which provided the knowledge needed to create new medicines, new tools, new weapons, new economic systems.

We learned how to think very well about the physical world, and how to unleash the power in that knowledge. But perhaps we have yet to learn how to think about what to do with our tools.

HOWARD RHEINGOLD is author of The Virtual Community, Virtual Reality, Tools for Thought. Founder of Electric Minds, named by Time magazine one of the ten best web sites of 1996. Editor of The Millennium Whole Earth Catalog.

Until this summer, this was the most common question at Silicon Valley parties, bus stops, conferences, and grocery stores. Everyone had a business model, no one planned to make money, and everyone focused on the exit strategy.

The euphoria attracted a despicable kind of carpetbagger, one who wanted nothing more than money and had a disdain for technology. All of a sudden, technology was out of fashion in Silicon Valley.

Now that the dotcom crash seems permanent, entrepreneurs are looking for real ways to make money. No more vacuous "business models." VCs are hunkering down for the long haul. The average IQ of Silicon Valley entrepreneurs is zooming back to its former stratospheric levels. There's a genuine excitement here now, but if you ask what the business model is, you're going to get a boring answer.

Silicon Valley goes in cycles. Downturns are a perfect time to dig in, listen to users, learn what they want, create the technology that scratches the itch, and plan on selling it for money.

In the 1960's and 70's it seemed just a matter of time before antiquated notions of god, heaven, and divine intervention would disappear from the intellectual spectrum, at least in the US. Instead, we find ourselves in an era when God appears to be on the lips of all politicians, creationism is rampant in our schools, and the separation of church and state seems more fragile than ever. What is the cause of this regression, and what can we do to combat it? Surely, one of the legacies of science is to learn to accept the Universe for what it is, rather than imposing our own belief systems on it. We should be prepared to offend any sensibilities, even religious ones, when they disagree with the evidence of experiment. Should scientists be more vocal in order to combat the born-again evangelists who are propagating ill-founded notions about the cosmos?

LAWRENCE M. KRAUSS is Ambrose Swasey Professor of Physics, Professor of Astronomy, and Chair of the Physics Department at Case Western Reserve University. He is the recipient of the AAAS Award for Public Understanding of Science, and this year's Lilienfeld Prize from the American Physical Society. He is the author of numerous books, including The Physics of Star Trek.

It may have been uttered as often in caricature as in anger, but the voice from the crowd asked a question that was accepted as reasonable even by those who winced at the jeering tone. Until fifteen or twenty years ago, the interest-earning and brain-working classes generally felt that they owed something to the workers, for doing the drudgery needed to keep an industrialised society going. And it was taken for granted that the workers were a class, with collective interests of their own. In some instances — British miners, for example — they enjoyed considerable respect and a romantic aura. Even in the United States, where perhaps the question was not put quite the same way, the sentiments were there.

Now there is an underclass of dangerous and hopeless folk, an elite of the fabulous and beautiful, and everybody in between is middle class. Meritocracy is taken for granted, bringing with it a perspective that sees only individuals, not groups. There are no working classes, only low-grade employees. In a meritocracy, respect is due according to the rank that an individual has attained. And since achievement is an individual matter, those at the upper levels see no reason to feel they owe anything to those at lower ones. This state of affairs will probably endure until such time as people cease to think of their society as a meritocracy, with its upbeat tone of progress and fairness, and start to feel that they are living in a Red Queen world, where they have to run ever faster just to stay in the same place.

MAREK KOHN's most recent book, published last year, is As We Know It: Coming to Terms with an Evolved Mind. His other books include The Race Gallery: The Return of Racial Science and Dope Girls: The Birth of the British Drug Underground. He writes a weekly column on digital culture, Second Site, for the London Independent on Sunday.

To our "Sex and the City" generation, these three questions sound shamefully Victorian and bourgeois. Yet they were not unique to 19th century England: they obsessed the families of eligible young men and women in every agricultural and industrial civilization. Only with our socially-atomized, late-capitalist society have these questions become tasteless, if not taboo. Worried parents ask them only in the privacy of their own consciences, in the sleepless nights before a son or daughter's ill-considered marriage.

The "good family" question always concerned genetic inheritance as much as financial inheritance. Since humans evolved in bands of closely-related kin, we probably evolved an intuitive appreciation of the genetics relevant to mate choice — taking into account the heritable strengths and weakness that we could observe in each potential mate's relatives, as well as their own qualities. Recent findings in medical genetics and behavior genetics demonstrate the wisdom of taking a keen interest in such relatives: one can tell a lot about a young person's likely future personality, achievements, beliefs, parenting style, and mental and physical health by observing their parents, siblings, uncles, and aunts. Yet the current American anti-genetic ideology demands that we ignore such cues of genetic quality — God forbid anyone should accuse us of eugenics. Consider the possible reactions a woman might have to hearing that a potential husband was beaten as a child by parents who were alcoholic, aggressive religious fundamentalists. Twin and adoption studies show that alcoholism, aggressiveness, and religiousity are moderately heritable, so such a man is likely to become a rather unpleasant father. Yet our therapy cures-all culture says the woman should offer only non-judgmental sympathy to the man, ignoring the inner warning bells that may be going off about his family and thus his genes. Arguably, our culture alienates women and men from their own genetic intuitions, and thereby puts their children at risk.

The question "What are their accomplishments?" refers not to career success, but to the constellation of hobbies, interests, and skills that would have adorned most educated young people in previous centuries. Things like playing pianos, painting portraits, singing hymns, riding horses, and planning dinner parties. Such accomplishments have been lost through time pressures, squeezed out between the hyper-competitive domain of school and work, and the narcissistic domain of leisure and entertainment. It is rare to find a young person who does anything in the evening that requires practice (as opposed to study or work) — anything that builds skills and self-esteem, anything that creates a satisfying, productive "flow" state, anything that can be displayed with pride in public. Parental hot-housing of young children is not the same: after the child's resentment builds throughout the French and ballet lessons, the budding skills are abandoned with the rebelliousness of puberty — or continued perfunctorily only because they will look good on college applications. The result is a cohort of young people whose only possible source of self-esteem is the school/work domain — an increasingly winner-take-all contest where only the brightest and most motivated feel good about themselves. (And we wonder why suicidal depression among adolescents has doubled in one generation.) This situation is convenient for corporate recruiting — it channels human instincts for self-display and status into an extremely narrow range of economically productive activities. Yet it denies young people the breadth of skills that would make their own lives more fulfilling, and their potential lovers more impressed. 
Their identities grow one-dimensionally, shooting straight up towards career success without branching out into the variegated skill sets which could soak up the sunlight of respect from flirtations and friendships, and which could offer shelter, and alternative directions for growth, should the central shoot snap.

The question "Was their money and status acquired ethically?" sounds even quainter, but its loss is even more insidious. As the maximization of share-holder value guides every decision in contemporary business, individual moral principles are exiled to the leisure realm. They can be manifest only in the Greenpeace membership that reduces one's guilt about working for Starbucks or Nike. Just as hip young consumers justify the purchase of immorally manufactured products as "ironic" consumption, they justify working for immoral businesses as "ironic" careerism. They aren't "really" working in an ad agency that handles the Phillip Morris account for China; they're just interning for the experience, or they're really an aspiring screen-writer or dot-com entrepreneur. The explosion in part-time, underpaid, high-turnover service industry jobs encourages this sort of amoral, ironic detachment on the lower rungs of the corporate ladder. At the upper end, most executives assume that shareholder value trumps their own personal values. And in the middle, managers dare not raise issues of corporate ethics for fear of being down-sized. The dating scene is complicit in this corporate amorality. The idea that Carrie Bradshaw or Ally McBeal would stop seeing a guy just because he works for an unethical company doesn't even compute. The only relevant morality is personal — whether he is kind, honest, and faithful to them. Who cares about the effect his company is having on the Phillipino girls working for his sub-contractors? "Sisterhood" is so Seventies. Conversely, men who question the ethics of a woman's career choice risk sounding sexist: how dare he ask her to handicap herself with a conscience, when her gender is already enough of a handicap in getting past the glass ceiling?

In place of these biologically, psychologically, and ethically grounded questions, marketers encourage young people to ask questions only about each other's branded identities. Armani or J. Crew clothes? Stanford or U.C.L.A. degree? Democrat or Republican? Prefer "The Matrix" or "You've Got Mail"? Eminem or Sophie B. Hawkins? Been to Ibiza or Cool Britannia? Taking Prozac or Wellbutrin for the depression? Any taste that doesn't lead to a purchase, any skill that doesn't require equipment, any belief that doesn't lead to supporting a non-profit group with an aggressive P.R. department, doesn't make any sense in the current mating market. We are supposed to consume our way into an identity, and into our most intimate relationships. But after all the shopping is done, we have to face, for the rest of our lives, the answers that the Victorians sought: what genetic propensities, fulfilling skills, and moral values do our sexual partners have? We might not have bothered to ask, but our children will find out sooner or later.

GEOFFREY MILLER is an evolutionary psychologist at the London School of Economics and at U.C.L.A. His first book was The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature.

Wrapped like hotdogs full of mustard, snorting in search of air to breathe from beneath the blanket — like dendrites searching for the first time for new contacts — the skull plunged into a floppy pillow and the eyes allowed only to stare at the grey sky, most of the time too flat and low to enjoy a more diversified life in three dimensions. What has been the impact on the newborn brain, positioned mummy-like and tight in the pram for generations, of this upside-down perception of the Universe?

We do have a point of reference to imagine what life was like during the first eight or nine months after birth, before the invention of the anatomically shaped infant car seat that makes our youngest travel and look around from their earliest age. I'll come to that later.

First, let me insist, for those unaware of radical innovations in evolutionary psychology, that no baby has ever been found — there are plenty of very reliable tests for that — who, after having experienced the glamour of looking at the Universe face to face, right and left, backwards and forwards, has regretted the odd way of being carried around by previous generations. Not only that: no newborn would now accept looking at the Universe from any vantage point other than the high-tech pushchairs, carriages, and travelling systems for children aged birth to four years, developed in the mid-80s out of the original baby car seat invented in America.

Just as monkeys quickly become aware of new inventions and adopt them without second thoughts, our youngest no longer accept being carried in prams where they lay flat and dormant. They have suddenly become aware that they can be taken around in efficiently designed travelling engines, from which they can look at the world in movement practically as soon as they open their eyes.

If somebody thinks that the end of looking upside-down at the Universe during the first eight or nine months of life is not important enough to be cited as the end of anything, consider what neuroscientists are discovering about what happens to the unborn during the first five months after conception.

Professor Beckman at Würzburg University (Germany) has at last convinced his fellow psychiatrists that neurons' mistakes in their migration from the limbic system to the upper layers of the brain of the unborn are responsible, to a very large extent, for the 1% of epileptics and schizophrenics in the world's population. By the way, that 1% is fixed, no matter how many neuroscientists join the battle against mental illness — like a sort of cosmic background radiation. The only exception shows up whenever deep malnutrition or feverish influenza in expectant mothers pushes the rate significantly up.

Likewise, very few scientists today would refuse to acknowledge that what happens during the embryo's first five months is relevant not only to malformations and mental disorders, but also to levels of intelligence and other behavior patterns. How, then, could anybody dismiss the tremendous impact on the newborn brain of interacting with the Universe face to face during the first eight or nine months?

Surely, if we continue searching for the missing link between a single gene and a bark — and I deeply hope that we do, now that molecular biology and genetics have joined forces — everybody should care about the end of the upside-down perception of the Universe, and about the silent revolution led by babies nurtured in the interactive culture of the latest high-tech travelling systems.

Professor EDUARDO PUNSET teaches Economics at the Sarriá Chemical Institute of Ramon Llull University (Barcelona). He is Chairman of Planetary Agency, an audiovisual concern for the public understanding of Science. He was IMF Representative in the Caribbean, Professor of Innovation & Technology at Madrid University, and Minister for Relations with the EU.

The ubiquity of upscale coffee houses has eliminated the need to ask "Where can I get a cup of coffee?" But I suspect that the question "What questions have disappeared?" is meant to elicit even deeper and [perhaps] more meaningful responses.

The coffee house glut has been accompanied — though not necessarily by a causal link — by an avalanche of information [or, at least, of data], and in the rush to obtain [or "to access" — groan] that information we've stopped asking "What does it all mean?" It is as though raw data, in and of itself, has real value, indeed all of the value, and thus there is no need to stop, to assimilate, to ponder. We grasp for faster computers, greater bandwidth, non-stop connectivity. We put computers in every classroom, rewire schools. But, with the exception of a great deal of concern about the business and marketing uses of the new "information age," we pay precious little attention to how the information can be used to change, or improve, our lives, nor do we seem to take the time to slow down and deliberate upon its meaning.

We wire the schools, but never ask what all those computers in classrooms will be used for, or whether teachers know what to do with them, or whether we can devise ways to employ the technology to help people learn in new or better ways. We get cable modems, or high-speed telephone lines, but don't think about what we can do with them beyond getting more information faster. [Really, does being able to watch the trailer for "Chicken Run" in a 2-inch-square window on the computer screen, after a several-minute-long download, constitute a major advance — and if we could cut the download time to several seconds, would that qualify?]

Most insidious, I think, is that the rush to get more information faster almost forces people to avoid the act of thinking. Why stop and try to make sense of the information we’ve obtained when we can click on that icon and get still more data? And more.

RAPHAEL KASPER, a physicist, is Associate Vice Provost for Research at Columbia University and was Associate Director of the Superconducting Super Collider Laboratory.

I do not, of course, mean any particular Great Central Theory. I am referring to the once-pervasive habit of relating everything that had human scale — Chinese history, the Odyssey, your mother's fear of heights — to an all-explaining principle. This principle was set forth in a short shelf of classic works and then worked to a fine filigree by closed-minded people masquerading as open-minded people. The precise Great Central Theory might be, as it was in my childhood milieu, the theories of Freud. It might be Marx. It might be Lévi-Strauss or, more recently, Foucault. At the turn of the last century, there was a Darwinist version going, promulgated by Francis Galton, Herbert Spencer and their ilk.

These monolithic growths had begun, I suppose, as the answers to specific questions, but then they metastasized; their adherents would expect the great central theory to answer any question. Commitment to a Great Central Theory thus became more a religious act than an intellectual one. And, as with all religions, the worldview of the devout crept into popular culture. (When I was in high school we'd say So-and-So was really anal about his locker or that What's-his-name's parents were really bourgeois.) For decades, this was what intellectual life appeared to be: Commit to an overarching explanation, relate it to everything you experienced, defend it against infidels. Die disillusioned, or, worse, die smug.

So why has this sort of question vanished? My guess is that, broadly speaking, it was a product of the Manichean worldview of the last century. Depression, dictators, war, genocide, nuclear terror — all of these lend themselves to a Yes-or-No, With-Us-or-With-Them, Federation vs. Klingons mindset. We were, to put it simply, all a little paranoid. And paranoids love a Great Key: Use this and see the single underlying cause for what seems to be unrelated and random!

Nowadays the world, though no less dangerous, seems to demand attention to the separateness of things, the distinctiveness of questions. "Theories of everything" are terms physicists use to describe their near-theological concerns, but at the human scale most people care about, where we ask questions like "Why can't we dump the Electoral College?" or "How come Mom likes my sister better?", the Great Central Theory question has vanished with the black-or-white arrangement of the human world.

What's next?

Three possibilities.

One, some new Great Central Theory slouches in; some of the Darwinians think they've got the candidate, and they certainly evince signs of quasi-religious commitment. (For example, as a Freudian would say you doubted Freud because of your neuroses, I have heard Darwinians say I doubted their theories because of an evolved predisposition not to believe the truth. I call this quasi-religious because this move makes the theory impregnable to evidence or new ideas.)

Two, the notion that overarching theory is impossible becomes, itself, a new dogma. I lean toward this prejudice myself but I recognize its dangers. An intellectual life that was all boutiques could be, in its way, as stultifying as a giant one-product factory.

Three, we learn from the mistakes of the last two centuries and insist that our answers always match our questions, and that the distinction between theory and religious belief be maintained.

DAVID BERREBY'S writing about science and culture has appeared in The New York Times Magazine, The New Republic, Slate, The Sciences and many other publications.

The greatest idea that's disappeared from mainstream science this past 400 years is surely that of God. The greats who laid the foundations of modern science in the 17th century (Galileo, Newton, Leibniz, Descartes) and the significant-but-not-quite-so-greats (Robert Boyle, John Ray, etc.) were theologians as much as they were scientists and philosophers. They wanted to know how things are, of course — but also what God had in mind when he made them this way. They took it for granted, or contrived to prove to their own satisfaction, that unless there is a God, omniscient and mindful, there could be no Universe at all.

Although David Hume did much to erode such arguments, they persisted well into the 19th century. Recently I have been intrigued to find James Hutton — who, as one of the founders of modern geology, is one of the boldest and most imaginative of all scientists — earnestly wondering in a late-18th-century essay what God could possibly have intended when he made volcanoes. The notion that there could be no complex and adapted beings at all without a God to create them was effectively the default position in orthodox biology until (as Dan Dennett has so succinctly explained) Charles Darwin showed how natural selection could produce complexity out of simplicity, and adaptation out of mere juxtaposition. Today, very obviously, no Hutton-style musing would find its way into a refereed journal. In Nature, God features only as the subject of (generally rather feeble) sociological and sometimes evolutionary speculation.

Religion obviously flourishes still, but are religion and science now condemned to mortal conflict? Fundamentalist atheists would have it so, but I think not. The greatest ideas in philosophy and science never really go away, even if they change their form or go out of fashion, but they do take a very long time to unfold. For at least 300 years — from the 16th to the 19th centuries — emergent science and post-medieval theology were deliberately intertwined, in many ingenious ways. Through the past 150 years, they have been just as assiduously disentangled. But the game is far from over. Cosmologists and metaphysicians continue to eye and circle each other. Epistemology — how we know what's true — is of equal interest to scientists and theologians, and each would be foolish to suppose that the other has nothing to offer. How distant is the religious notion of revelation from Dirac's — or Keats's — perception of truth as beauty? Most intriguingly of all, serious theologians are now discussing the role of religion in shaping emotional response, while modern aficionados of artificial intelligence acknowledge (as Hume did) that emotion is an essential component of thought itself. Lastly, the ethics of science and technology — how we should use our new-found power — are the key discussions of our age, and it is destructive to write religion out of the act, even if the priests, rabbis and mullahs who have so far been invited to take part have often proved disappointing.

I don't share the modern enthusiasm for over-extended life but I would like to see how the dialogue unfolds in the centuries to come.

COLIN TUDGE is a Research Fellow at the Centre for Philosophy, London School of Economics. His two latest books are The Variety of Life and In Mendel's Footnotes.

The detection of "questions that are no longer asked" is difficult. Old questions, like MacArthur's old soldiers, just fade away. Scientists and scholars in hot pursuit of new questions neither note nor mourn their passing. I regularly face a modest form of the disappearing question challenge when a textbook used in one of my classes is revised. Deletions are hard to find; they leave no voids and are more stealthy than black holes, not even affecting their surrounds. New text content stands out, while missing material must be established through careful line-by-line reading. Whether in textbooks or in life, we don't think much about what is no longer relevant.

My response to the inquiry about questions that are no longer asked is to reframe it and suggest instead a common class of missing questions: those associated with obsolete and inappropriate metaphors. Metaphor is a powerful cognitive tool which, like all models, clarifies thinking when appropriate but constrains it when inappropriate. Science is full of them. My professional specialties of neuroscience and biopsychology have mind/brain metaphors ranging from Locke's ancient blank slate (tabula rasa), to the more technologically advanced switchboard, to the metaphor du jour, the computer. None do justice to the brain as a soggy lump of wetware, but they linger as cognitive/linguistic models. Natural selection in the realm of metaphors is slow and imperfect. Witness the reference to DNA as a "blueprint" for an organism, when Dawkins' "recipe" metaphor more accurately reflects DNA's encoding of instructions for organismic assembly.

ROBERT R. PROVINE is Professor of Psychology and Neuroscience at the University of Maryland, Baltimore County, and author of Laughter: A Scientific Investigation.

Some questions are so rarely asked that we are astonished anyone would ask them at all. The entire world seems to agree that knowing mathematics is the key to something important; it just forgets what. Benjamin Franklin asked this question in 1749 while thinking about what American schools should be like, and concluded that only practical mathematics should be taught. The famous mathematician G.H. Hardy asked this question (in A Mathematician's Apology) and concluded that while he loved the beauty of mathematics, there was no real point in teaching it to children.

Today, we worry about the Koreans and Lithuanians doing better than us on math tests, and every "education president" asserts that we will raise math scores, but no one asks why this matters. Vague utterances about how math teaches reasoning belie the fact that mathematicians do not reason about everyday matters any better than anyone else. To anyone who reads this and is still skeptical, I ask: what is the Quadratic Formula? You learned it in ninth grade; you couldn't graduate from high school without it. When was the last time you used it? What was the point of learning it?
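(For the record, the formula in question gives the roots of the equation $ax^2 + bx + c = 0$:

```latex
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```

If you had to look, that rather proves the point.)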

ROGER SCHANK is Chairman and Chief Technology Officer of Cognitive Arts and has been Director of the Institute for the Learning Sciences at Northwestern University since its founding in 1989. One of the world's leading Artificial Intelligence researchers, his books include Dynamic Memory: A Theory of Learning in Computers and People, Tell Me a Story: A New Look at Real and Artificial Memory, The Connoisseur's Guide to the Mind, and Engines for Education.

The twentieth century will be remembered as one of the most violent in history. There were two world wars, numerous genocides, and millions of murders conducted in the name of progress. Driving this violence was the urge to find truth or purity. The violence was lit by the refining fire of belief. The redemptive ideal was called national socialism, communism, stalinism, maoism.

Today, these gods have feet of clay, and we mock their pretensions. Global consumerism is the new world order, but global consumerism is not a god. Market capitalism does not ask questions about transcendent meaning. Western democracies, nodding into the sleep of reason, have grown numb with self-congratulation about having won the hot, the cold, and, now, the star wars.

The questions that have disappeared are eschatological. But they have not really disappeared. They are a chthonic force, waiting underground, searching for a new language in which to express themselves. This observation sprang to mind while I was standing in the Place de la Revolution, now known as the Place de la Concorde, awaiting the arrival of the third millennium. During the French revolution this square was so soaked in blood that oxen refused to cross it. On New Year's Eve it was soaked in rain and champagne, as we counted down to a display of fireworks that never materialized. Instead, there was a Ferris wheel, lit alternately in mauve and chartreuse, and some lasers illuminating the Luxor obelisk which today is the square's secular center. No one staring at it knew how to read the hieroglyphics carved on its face, but this obelisk was once a transcendent object, infused with meaning, and so, too, was the guillotine that formerly stood in its place.

THOMAS A. BASS, who currently lives in Paris, is the author of The Eudaemonic Pie, Vietnamerica, The Predictors, and other books.

This question haunted serious writers in the early 20th century, when critics sought a product that measured up to the European standard. Now it is dead, and the underlying notion is in ICU. What happened?

Well, the idea itself was never a very good one. It had breathtakingly hazy contours. It ignored the work of authors like Melville, Hawthorne, Wharton, and Twain. And it seemed to assume that a single novel could sum up this vast and complex nation. I'd like to think its disappearance reflects these problems.

But technology also helped shelve the question. As media proliferated, literature grew less central. If the Great American Novel appeared tomorrow, how many people would actually read it? My guess: Most would wait for the movie.

DANIEL McNEILL is the author of The Face, and principal author of the best-selling Fuzzy Logic, which won the Los Angeles Times Book Prize in Science and Technology, and was a New York Times "Notable Book of the Year".

I am taking the liberty of sending two questions. (After all, the people on the list like to push the boundaries.)

"Have We Seen the End of Science?"

John Horgan announced the end of science in his book of the same title. Yet almost weekly the most spectacular advances are announced and intriguing questions are asked in fields such as biology and physics. The answer was always a resounding no; now nobody even asks the question.

"Will the Internet Stock Bubble Burst?"

We certainly know the answer to that one now.

On February 22, 2000, I gave a talk at the Century Association in NYC titled "Modern Finance and Computers". One of the topics I covered was "Will the internet stock bubble burst?" I said it was a classic bubble and would end in the usual way. I cited the example of the fall 1999 IPO of VA Linux, a company with a market capitalization of 10 billion dollars at the end of its first day of trading, even though it had never shown a profit and was up against competitors such as Dell and IBM.

The NASDAQ reached its high on March 10, 2000, and the internet sector collapsed a couple of weeks later. The high for VA Linux in 2000 was $247; yesterday it closed below $10.

JOSEPH F. TRAUB is the Edwin Howard Armstrong Professor of Computer Science at Columbia University and External Professor at the Santa Fe Institute. He was founding Chairman of the Computer Science Department at Columbia University from 1979 to 1989, and founding chair of the Computer Science and Telecommunications Board of the National Academy of Sciences from 1986 to 1992. From 1971 to 1979 he was Head of the Computer Science Department at Carnegie-Mellon University. Traub is the founding editor of the Journal of Complexity and an associate editor of Complexity. A Festschrift in celebration of his sixtieth birthday was recently published. He is the author of nine books including the recently published Complexity and Information.

The sudden increase in digital information, or bits, in our everyday lives has destroyed any question of permanence or scarcity of those bits. Just consider the example of e-mail.

Years ago when you first got online, you were excited to get e-mail, right? So every day, the big question when you logged in was, Will I have any e-mail? The chirpy announcement that "You've got mail!" actually meant something, since sometimes you didn't have mail.

Today there's no question. There's no such thing as no mail. You don't have to ask; you DO have mail. If it's not the mail you want (from friends or family), it's work-related mail or, worse, spam. Our inboxes may soon be so flooded with spam that we look for entirely different ways to use e-mail.

The death of that question, "Do I have e-mail?", has brought us a new, more interesting question: "What do I do with all this e-mail?" More generally, what do we do with all these bits (e-mail, wireless messages, websites, Palm Pilot files, Napster downloads)? This is the question that will define our relationship with digital technology in coming years.

MARK HURST, founder of Internet consulting firm Creative Good, is widely credited for popularizing the term "customer experience" and the methodology around it. Hurst has worked since the birth of the Web to make Internet technology easier and more relevant to its "average" users. In 1999, InfoWorld magazine named Hurst "Netrepreneur of the Year", saying that "Mark Hurst has done more than any other individual to make Web-commerce sites easier to use." Over 39,000 people subscribe to his Good Experience newsletter, available for free at goodexperience.com or update@goodexperience.com.

Or at least, certainly not the ones that have so far been submitted to this list, since the questions posted are proof positive that they have not disappeared at all, or at least not altogether. Sure, some questions have their heyday for a while, and then they may disappear for many a moon. But the great question you posed -- what questions have disappeared? -- shows that they were just waiting for a question like this to remind us how much emptier our existence would be without certain questions.

But I also think that some questions certainly have gone by the wayside for a long time, though not necessarily the ones that so far have been posed. We may ask, for instance, questions like, Has history ended?, and then go on to offer up a response of one sort or another. But when is the last time we asked, what *is* history? What different types of history are there? What makes history history, regardless of which type it is?

Or we may ask: Why have certain questions been discarded? But when's the last time anyone has asked, What is a question? What does a question do? What does a question do to us, and what do we do to it?

We may ask: How do people differ in how they think and learn? But do we still ask: What is thinking? What is learning?

Instead, we seem to take for granted that we know what history is, that we know what thinking is, that we know what learning is, when in fact if we delved a little more into these questions, we may well find that none of us hold the same views on what these rich concepts mean and how they function. Which leads me to this perspective: What *has* all but disappeared, I think, is a way of answering questions, regardless of which one is being posed, regardless of how seemingly profound or off-beat or mundane it is. I'm speaking of the kind of rigorous, exhaustive, methodical yet highly imaginative scrutiny of a Socrates or a Plato that challenged all assumptions embedded in a question, and that revealed breathtakingly new vistas and hidden likenesses between seemingly disparate entities.

Who these days takes the time and effort, much less has the critical and creative acumen, to answer questions like those I've already posed, or questions such as "What is human good?" or "What is a good human?", in the soul-stirringly visionary yet at the same time down-to-earth way they did? We need a new generation of questioners in the mold of Plato and Socrates, people who dare to think a bit outside the lines, who take nothing for granted when a question is posed, and who subject their scrutiny to continual examination and consideration of cogent objections and alternative ways of seeing.

CHRISTOPHER PHILLIPS is the author of "Socrates Cafe: A Fresh Taste of Philosophy", and founder-executive director of the nonprofit Society for Philosophical Inquiry.

The moral, intellectual, physical and social improvement of the human race was a hot topic of the Enlightenment. It helped shape the American and French revolutions. Creating the "New Soviet Man" was at the heart of the Russian revolution — that's what justified the violence.

A central theme of Lyndon Johnson's Great Society was not just that human misery could be alleviated. It was that core human problems like crime could be fixed by the government eliminating root causes like want.

That's all gone.

We now barely trust government to teach kids to read.

JOEL GARREAU, the cultural revolution correspondent of The Washington Post, is a student of global culture, values, and change whose current interests range from human networks and the transmission of ideas to the hypothesis that the '90s — like the '50s — set the stage for a social revolution to come. He is the author of the best-selling books Edge City: Life on the New Frontier and The Nine Nations of North America, and a principal of The Edge City Group, which is dedicated to the creation of more liveable and profitable urban areas worldwide.

The distinctive Amnesty International arched sticker, with a burning candle surrounded by a swoosh of barbed wire, seemed to adorn every college dorm-room door, beat-up Honda Accord, and office bulletin board when I started college in the late '80s at Fordham University. Human rights was the "in" cause. So, we all joined Amnesty and watched our heroes, including Bruce Springsteen, Sting, and Peter Gabriel, sing on the "Human Rights Now" tour (brought to you, of course, by Reebok).

As quickly as it took center stage, however, human rights seemed to fall off the map. Somewhere in the mid-'90s, something stole our fire and free time; perhaps it was the gold rush years of the Internet or the end of the Cold War. The wild spread of entrepreneurship and capitalism may have carried some democracy along with it. Yet just because people are starting companies and economic markets are opening up doesn't mean that there are fewer tortures, rapes, and murders for political beliefs. (These kinds of false perceptions may stem from giving places like China "Most Favored Nation" status.)

Youth inspired by artists created the foundation of Amnesty's success in the '80s, so maybe a vacuum of activist artists is to blame for human rights disappearing from the collective consciousness. Would a homophobic, misogynistic, and violent artist like Eminem ever take a stand for anyone other than himself? Could anyone take him seriously if he did? Britney Spears' fans might not have a problem with her dressing in a thong at the MTV Music Awards, but how comfortable would they be if she addressed the issue of the rape, kidnapping, and torture of young women in Sierra Leone?

Of course, you don't have to look around the world to find human-rights abuses. Rodney King and Abner Louima taught us that human rights is an important and pressing issue right in our backyard. (Because of these examples, some narrow-minded individuals may see it as only a race-specific issue.) One bright spot in all of this, however, is that the technology that was supposed to create a Big Brother state, like video cameras, is now being used to police Big Brother himself. (Check out witness.org and send them a check — or a video camera — if you have the means.)

Eleanor Roosevelt considered her fight to create the Universal Declaration of Human Rights her greatest accomplishment. How ashamed would she be that 50 years have elapsed since her battle, and now, no one seems to care.

Different people on the Edge list seem to have chosen to understand 'questions that have disappeared' in three very different senses:

1. Questions that were once popular but have now been answered
2. Questions that should never have been asked in the first place
3. Questions that have disappeared although they never received a satisfactory answer.

This third meaning is, I suspect, the one intended by the organizer of the forum. It is the most interesting of the three since it suggests real science that we should now be doing, rather than just raking over the historical coals.

The three meanings are too disparate to bring together easily, but I'll try. The popular question 'Has there been enough time for evolution to take place?' can now confidently be answered in the affirmative. It should never have been put in the first place since, self-evidently, we are here. But what is more interesting is that the real question that faces us is almost the exact opposite. Why is evolution so slow, given that natural selection is so powerful? Far from there being too little time for evolution to play with, there seems to be too much.

Ledyard Stebbins did a theoretical calculation about an extremely weak selection pressure, acting on a population of mouse-sized animals to favor the largest individuals. His hypothetical selection pressure was so weak as to be below the threshold of detectability in field sampling studies. Yet the calculated time to evolve elephant-sized descendants from mouse-sized ancestors was only a few tens of thousands of generations: too short to be detected under most circumstances in the fossil record. To exaggerate somewhat, evolution could be yo-yo-ing from mouse to elephant, and back again, so fast that the changes could seem instantaneous in the fossil record.

Worse, Stebbins's calculation assumed an exceedingly weak selection pressure. The real selection pressures measured in the field by Ford and his colleagues on lepidoptera and snails, by Endler and his colleagues on guppies, and by the Grants and their colleagues on the Galapagos finches, are orders of magnitude stronger. If we fed into the Stebbins calculation a selection pressure as strong as the Grants have measured in the field, it is positively worrying to contemplate how fast evolution could go. The same conclusion is indirectly suggested by domestic breeding. We have gone from wolf to Yorkshire terrier in a few centuries, and could presumably go back to something like a wolf in as short a time.
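The arithmetic behind a Stebbins-style estimate is simple enough to sketch. The masses and the per-generation gain below are illustrative assumptions, not Stebbins's actual figures; the point is only that even a minuscule, undetectable selection differential compounds to an elephant in tens of thousands of generations.

```python
import math

# Illustrative body masses (assumptions, not Stebbins's actual figures).
mouse_mass_g = 25.0
elephant_mass_g = 2_500_000.0

# A hypothetical selection pressure raising mean body mass by 0.02% per
# generation -- far too weak to detect in any field sampling study.
per_generation_gain = 0.0002

# Generations needed for compound growth to close a 100,000-fold gap.
generations = math.log(elephant_mass_g / mouse_mass_g) / math.log(1.0 + per_generation_gain)
print(round(generations))  # on the order of tens of thousands of generations
```

With these assumed numbers the answer comes out near 58,000 generations: an eye-blink in the fossil record, which is exactly the puzzle Stebbins's calculation poses.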

It is indeed the case that evolution on the Galapagos archipelago has been pretty fast, though still nothing like as fast as the measured selection pressures might project. The islands have been in existence for five million years at the outside, and the whole of their famous endemic fauna has evolved during that time. But even the Galapagos islands are old compared to Lake Victoria. In the less than one million years of the lake's brief lifetime, more than 170 species of the genus Haplochromis alone have evolved.

Yet the Coelacanth Latimeria, and the three genera of lungfish, have scarcely changed in hundreds of millions of years. Surviving Lingula ('lamp shells') are classified in the same genus as their ancestors of 400 million years ago, and could conceivably interbreed with them if introduced through a time machine. The question that still faces us is this. How can evolution be both so fast and so leadenly slow? How can there be so much variance in rates of evolution? Is stasis just due to stabilizing selection and lack of directional selection? Or is there something remarkably special going on in the (non) evolution of living fossils? As William Blake might have written to a coelacanth: Did he who made the haplochromids make thee?

RICHARD DAWKINS is an evolutionary biologist and the Charles Simonyi Professor For The Understanding Of Science at Oxford University; Fellow of New College; author of The Selfish Gene, The Extended Phenotype , The Blind Watchmaker, River out of Eden (ScienceMasters Series), Climbing Mount Improbable, and Unweaving the Rainbow.

Two journalists once ranked the discovery of lost Atlantis as potentially the most spectacular sensation of all time. Now the question of what or where Atlantis might have been has disappeared. Why?

The Greek philosopher Plato, the only source for Atlantis, incorporated an extensive description of this legendary city into a mundane summary of contemporary (4th century BC) scientific achievements and knowledge of prehistory. Nobody paid much attention to the account during subsequent centuries. In Medieval times, scholarly interest focussed on Aristotle, while Plato was neglected. When archaeology and history finally assumed the shape of scientific disciplines, after the middle of the 18th century AD, science still was under the influence of Christian theology, its Medieval mother discipline. The first art historians, who were brought up in a creationist world, consequently interpreted western culture as an almost divine concept which first materialized in ancient Greece, without having had any noticeable predecessors. Accordingly, any ancient texts referring to high civilizations much older than Classical Greece had to be fictitious by definition.

During the 20th century, dozens of palaces dating to a golden age a thousand years older than Plato's Athens have been excavated around the eastern Mediterranean. Atlantis can now be placed in a historical context. It is an Egyptian recollection of Bronze Age Troy and its awe-inspiring war against the Greek kingdoms. Plato's account and the end of the Bronze Age around 1200 BC can now be seen in a new light. Why was this connection not made earlier? Four Egyptian words, describing location and size, were mistranslated, because at the time Egypt and Greece used different calendars and scales. And, in contrast to biology, where, after Darwin, the idea of creationism was dropped in favor of evolutionism, Aegean prehistory has never questioned its basic premises.

Geoarchaeologist EBERHARD ZANGGER is Director of Corporate Communications at KPNQwest (Switzerland) and the author of The Flood from Heaven: Deciphering the Atlantis Legend and Geoarchaeology of the Argolid. Zangger has written a monograph, published by the German Archaeological Institute, as well as more than seventy scholarly articles, which have appeared in the American Journal of Archaeology, Hesperia, the Oxford Journal of Archaeology, and the Journal of Field Archaeology.

Questions disappear when they seem to be answered or unanswerable. The interesting missing questions are the apparently answered ones that are not, and the apparently unanswerable ones that are.

One of life's most profound questions has been thought to be unanswerable. That question is, "Why is life so full of suffering?" Impatience with centuries of theological and philosophical speculation has led many to give up on the big question, and to ask instead only how brain mechanisms work, and why people differ in their experiences of suffering. But the larger question has an answer, an evolutionary answer. The capacities for suffering — pain, hunger, cough, anxiety, sadness, boredom and all the rest — have been shaped by natural selection. They seem to be problems because they are so painful and because they are aroused only in adverse circumstances, but they are, in fact, solutions.

The illusion that they are problems is further fostered by the smoke-detector principle — selection has shaped control mechanisms that express defensive responses whenever their costs are less than the protection they provide. Defenses are therefore expressed often, indeed much more often than is absolutely necessary. Thus, while the capacities for suffering are useful and generally well-regulated, most individual instances are excessive or entirely unnecessary. It has not escaped notice that this principle has profound implications for the power and limits of pharmacology to relieve human suffering.
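The smoke-detector principle is, at bottom, a one-line expected-value comparison. A minimal sketch, with hypothetical costs and probabilities chosen only for illustration:

```python
def should_defend(p_danger: float, cost_of_harm: float, cost_of_defense: float) -> bool:
    """Express the defensive response whenever its cost is below the expected harm."""
    return cost_of_defense < p_danger * cost_of_harm

# A cough is cheap; pneumonia from an uncleared airway is expensive.
# Even a 1% chance of serious harm justifies the cheap response...
print(should_defend(p_danger=0.01, cost_of_harm=1000.0, cost_of_defense=1.0))  # True
# ...which means an optimally tuned regulator coughs "unnecessarily"
# in roughly 99 out of 100 episodes.
```

This is why most individual defensive episodes look excessive even when the regulation itself is working exactly as selection shaped it to.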

RANDOLPH M. NESSE is Professor of Psychiatry and Director of the Evolution and Human Adaptation Program at the University of Michigan. He is the author, with George Williams, of Why We Get Sick: The New Science of Darwinian Medicine.

Is enlightenment a myth or a reality? I mean the enlightenment of the east, not west, the state of supreme mystical awareness also known as nirvana, satori, cosmic consciousness, awakening. Enlightenment is the telos of the great Eastern religions, Buddhism and Hinduism, and it crops up occasionally in western religions, too, although in a more marginal fashion. Enlightenment once preoccupied such prominent western intellectuals as William James, Aldous Huxley and Joseph Campbell, and there was a surge of scientific interest in mysticism in the 1960s and 1970s. Then mysticism became tainted by its association with the human potential and New Age movements and the psychedelic counterculture, and for the last few decades it has for the most part been banished from serious scientific and intellectual discourse. Recently a few scholars have written excellent books that examine mysticism in the light of modern psychology and neuroscience — Zen and the Brain by the neurologist James Austin; Mysticism, Mind, Consciousness by the philosopher Robert Forman; The Mystical Mind by the late psychiatrist Eugene d'Aquili and the radiologist Andrew Newberg — but their work has received scant attention in the scientific mainstream. My impression is that many scientists are privately fascinated by mysticism but fear being branded as fuzzy-headed by disclosing their interest. If more scientists revealed their interest in mystical consciousness, perhaps it could become a legitimate subject for investigation once again.

JOHN HORGAN is a freelance writer and author of The End of Science and The Undiscovered Mind. A senior writer at Scientific American from 1986 to 1997, he has also written for the New York Times, Washington Post, New Republic, Slate, London Times, Times Literary Supplement and other publications.

By the middle of the twentieth century, scientists and doctors were sure that it was just a matter of time, and not much time at that, before most diseases would be wiped from the face of the Earth. Antibiotics would get rid of bacterial infections; vaccines would get rid of viruses; DDT would get rid of malaria. Now one drug after another is becoming useless against resistant parasites, and new plagues such as AIDS are sweeping through our species. Except for a handful of diseases like smallpox and Guinea worm, eradication now looks like a fantasy. There are three primary reasons this question is no longer asked. First, parasite evolution is far faster and more sophisticated than anyone previously appreciated. Second, scientists don't yet understand the complexities of the immune system well enough to design effective vaccines for many diseases. Third, the cures that have been discovered are often useless because the global public-health system is a mess. The arrogant dream of eradication has been replaced by the much more modest goal of trying to keep diseases in check.

CARL ZIMMER is the author of Parasite Rex and writes a column about evolution for Natural History.

As we all use the Internet, the global medium, the people running it behind the scenes are making the decisions about how to run it. So far so good. But not anymore.

With the whole ICANN process, the commercialization of domain-name registration, and the expansion of new gTLDs, one can ask: who is entitled to make these decisions, and by what authority do they make them?

Despite the growing digital divide, the number of people who use the Net is still exploding, even in the developing parts of the world. What is fair, what is democratic, and what principles can all of us, from all corners of the world, agree on for this single global complex system? That is my question for the year to come.

IZUMI AIZU, a researcher and promoter of the Net in Asia since the mid-'80s, is principal of Asia Network Research and Senior Research Fellow at GLOCOM (Center for Global Communications) at the International University of Japan.

For all of history, humans traded objects, then traded currency for objects, with the idea that when you buy some thing, you own it. This most fundamental human right — the right to own — is under attack again. Only this time, the software industry, not the followers of Karl Marx, is responsible.

In the last 15 years of the information age, we discovered that every object really has three separable components: the informational content; the physical medium on which it is delivered; and the license, a social or legal contract governing the rights to use the thing. It used to be that the tangible media token (the thing) both held the content and, via possession, enforced the common understanding of the "ownership" license. Owners have rights to a thing: to trade it, sell it, loan it, rent it, destroy it, donate it, paint it, photograph it, or even chop it up to sell in pieces.

For a book, the content is the sequence of words themselves, which may be rendered as ink scratches on a medium of paper bound inside cardboard covers. For a song, it is a reproduction of the audio pattern pressed into vinyl or plastic to be released by a reading device. The license — to own all rights but copy rights — was enforced simply by possession of the media token, the physical copy of the book or disk itself. If you wanted to, you could buy a book, rip it into separate pages, and sell each page individually. You could slice a vinyl record into individual song rings for trade.

For software, content is the evolving bits of program and data arranged to operate on some computer. Software can be delivered in paper, magnetic, optical, or silicon form, or can be downloaded from the internet, even wirelessly. Software publishers know the medium is completely irrelevant, except to give the consumer the feeling of a purchase.

Even though you had the feeling of trading money for something, you really don't own the software you paid for. The license clearly states that you don't. You are merely granted a right to use the information, and the real owner can terminate your license at will, for example if you criticize him in public. Moreover, you cannot resell the software, and you cannot take a "suite" apart into working products to sell each one separately. You cannot rent it or loan it to a friend. You cannot look under the hood to try to fix it when it breaks. You don't actually own anything; you have merely exchanged your money for a "right to use", and those rights can be arbitrarily dictated, and then rendered worthless, by the very monopoly you got them from, forcing you to pay again for something you felt you had acquired last year.

There is no fundamental difference between software, recordings, and books. E-books are not sold but licensed, and Secure Music will be available in a pay-per-download format. Driven inexorably by the more lucrative profits of rental, publishers will, I predict, within a couple of decades no longer let you "buy" a new book or record. You will not be able to "own" copies. This may not seem so nefarious, as long as you have easy access to the "celestial jukebox" and can temporarily download a "read-once" license to any entertainment from private satellites. Your children will have more room in their homes and offices without the weight of a lifetime collection of books and recordings.

What are humans when stripped of our libraries? And it won't stop with books.

For an automobile, the content is the blueprint, the organization of mechanisms into stylistic and functional patterns which move, built out of media such as metals, pipes, hoses, leather, plastic, rubber, and fluids. Because of the great expense of cloning a car, Ford doesn't have to spell out the licensing agreement: You own it until it is lost, sold, or stolen. You can rent it, loan it, sell it, take it apart, and sell the radio, tires, engine, carburetor, etc. individually.

But the license agreement can be changed! And when Ford discovers the power of UCITA, you will have to pay an annual fee for a car you don't own, which will blow up if you fail to bring it in or pay your renewal fee. And you will find that you cannot resell your car on an open secondary market, but can only trade it in to the automobile publisher for an upgrade.

Without an effort to protect the right to own, we may wake up to find that there is nothing left to buy.

JORDAN POLLACK, a computer science and complex systems professor at Brandeis, works on AI, Artificial Life, Neural Networks, Evolution, Dynamical Systems, Games, Robotics, Machine Learning, and Educational Technology. He is a prolific inventor, advises several startup companies and incubators, and in his spare time runs Thinmail, a service designed to enhance the usefulness of wireless email.

Many neuroscientists, myself included, went into brain research because of an interest in the fact that our brains make us who we are. But the topics we end up working on are typically more mundane. It's much easier to research the neural basis of perception, memory, or emotion than the way perceptual, memory, and emotion systems are integrated in the process of encoding who we are. Questions about the neural basis of personhood, the self, have never been at the forefront of brain science, and so are not, strictly speaking, lost questions to the field. But they are lost questions for those of us who were drawn to neuroscience by an interest in them and then settled for less when overcome with frustration over the magnitude of the problem relative to the means we have for solving it. But questions about the self and the brain may not be as hard to address as they seem. A simple shift in emphasis, from the way the brain typically works in all of us to the way it works in individuals, would be an important entry point. This would then necessitate that research on cognitive processes, like perception or memory, take subjects' motivations and emotions into consideration, rather than doing everything possible to eliminate them. Eventually, researchers would study perception, memory, or emotion less as isolated brain functions than as activities that, when integrated, contribute to the real function of the brain: the creation and maintenance of the self.

JOSEPH LEDOUX is a Professor of Neural Science at New York University. He is author of The Emotional Brain.

This question (or pair of questions) was on everyone's lips in the 1970s, following the oil shortage and lines at gas stations. It stimulated a lot of good thinking and good work on alternative energy sources, renewable energy sources, and energy efficiency. Although this question is still asked by many knowledgeable and concerned people, it has disappeared from the public's radar screen (or, better, television screen). Even the recent escalation of fuel prices and the electricity shortage in California have not lent urgency to thinking ahead about energy.

But we should be asking, we should be worrying, and we should be planning. A real energy crisis is closer now than it was when the question had high currency. The energy-crisis question is only part of a larger question: How is humankind going to deal in the long term with its impact on the physical world we inhabit (of which the exhaustion of fossil fuels is only a part)? Another way to phrase the larger question: Are we going to manage more or less gracefully a transition to a sustainable world, or will eventual sustainability be what's left, willy nilly, after the chaos of unplanned, unanticipated change?

Science will provide no miracles, despite what the Wall Street Journal, in its justification of inaction, would have us believe, but science can do a lot to ameliorate the dislocations that this century will bring. We need to encourage our public figures to lift their eyes beyond the two-, four-, and six-year time horizons of their jobs.

KENNETH FORD is a retired physicist who teaches at Germantown Friends School in Philadelphia. He is the co-author, with John Wheeler, of Geons, Black Holes, and Quantum Foam: A Life in Physics.

The question disappeared in most of Europe and North America, of course, because of the great movement toward women's employment and career advancement even after marrying and bearing children. Feminist historians have long documented how the "story" of the female heroine used to end with marriage; indeed, this story was so set in stone as late as the 1950's and early 60's in this country that Sylvia Plath's heroine in The Bell Jar had to flirt with suicide in order to try to find a way out of it. Betty Friedan noted in The Feminine Mystique that women (meaning middle class white women; the narrative was always different for women of color and working class women) couldn't "think their way past" marriage and family in terms of imagining a future that had greater dimension. But the narrative shifted and it's safe to say that the female sense of identity in the West, for the first time ever, no longer hinges on the identity of her mate — which is a truly new story in terms of our larger history.

NAOMI WOLF, author, feminist, and social critic, is an outspoken and influential voice for women's rights and empowerment. She is the author of The Beauty Myth, Fire with Fire, and Promiscuities.

Another question that has fallen into the dustbin of history is this: Is human nature innately good or evil? This became a gripping topic in the late 17th century, as Enlightenment thinkers began to challenge the Christian assumption that man was born a fallen creature. It was a great debate while it lasted: original sin vs. tabula rasa and the perfectibility of man; Edmund Burke vs. Tom Paine; Dostoyevsky vs. the Russian reformers. But Darwin and Freud undermined the foundations of both sides, by discrediting the very possibility of discussing human nature in moral or teleological terms. Now the debate has been recast as "nature vs. nurture," and in secular scientific circles at least, man is a higher primate: a beast with distinctly mixed potential.

ANN CRITTENDEN is an award-winning journalist and author. She was a reporter for The New York Times from 1975 to 1983, where her work on a broad range of economic issues was nominated for the Pulitzer Prize. She is the author of several books including The Price of Motherhood: Why the Most Important Job in the World is Still the Least Valued. Her articles have appeared in numerous magazines, including The Nation, Foreign Affairs, McCall's, Lear's, and Working Woman.

The road of knowledge is littered with old questions, but by their very nature, none of them stands out above all others. The diversity of thoughtful responses given on the Edge forum, which just begin to scratch the surface, illustrates how progress happens. The evolution of knowledge is a Schumpeterian process of creative destruction, in which weeding out the questions that no longer merit attention is an integral part of formulating better questions that do. Forgetting is a vital part of creation.

Maxwell once worried that the second law of thermodynamics could be violated by a demon who could measure the velocity of individual particles, separate the fast ones from the slow ones, and use this to do work. Charles Bennett showed that this is impossible, because to make a measurement the demon has to first put her instruments in a known state.

This involves erasing information. The energy needed to do this is more than can be gained. Thus, the fact that forgetting takes work is essential to the second law of thermodynamics. Why is this relevant? As Gregory Bateson once said, the second law of thermodynamics is the reason that it is easier to mess up a room than it is to clean it. Forgetting is an essential part of the process of creating order. Asking the right questions is the most important part of the creative process. There are lots of people who are good at solving problems, fewer who are good at asking questions.
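The energy accounting behind Bennett's argument can be made explicit via Landauer's principle, a standard result I am supplying here rather than one stated in the text: erasing information has a minimum thermodynamic cost,

```latex
W_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \quad \text{per bit erased},
```

where $k_B$ is Boltzmann's constant and $T$ is the temperature of the environment. Since the demon must reset a bit of memory for every measurement, the work it pays out in erasure at least cancels the work it extracts by sorting fast molecules from slow ones, and the second law survives.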

Around the time I took my qualifying examination in physics, someone showed me the test that Lord Rayleigh took when he graduated as senior wrangler from Cambridge in 1865. I would have failed it. There were no questions on thermodynamics, statistical mechanics, quantum mechanics, nuclear physics, particle physics, condensed matter, or relativity, i.e. no questions covering most of what I had learned.

However, the classical mechanics questions, which comprised most of the test, were diabolically hard. Their solution involved techniques that are no longer taught, and that a modern physicist would have to work hard to recreate. Of course, in a field like philosophy this would not have surprised me — it just hadn't occurred to me that this was true for physics as well. The physicists in Rayleigh's generation presumably worked just as hard, and knew just as many things. They just knew different things. After overcoming the shock of how much had seemingly been lost, I rationalized my ignorance with the belief that what I was taught was more useful than what Rayleigh was taught. Whether as a culture or as individuals, to learn new things, we have to forget old things. The notion of what is useful is constantly evolving.

The most important questions evolve through time as people understand little bits and pieces, and view them from different angles in the attempt to solve them. Each question is replaced by a new one that is (hopefully) better framed than its antecedent. Reflecting on those that have been cast aside is like sifting through flotsam on a beach, and asking what it tells us. Is there a common thread that might give us a clue to posing better questions in the future?

When we examine questions such as "What is a vital force?", "How fast is the earth moving?", "Does God exist?", "Have we seen the end of science?", "Has history ended?", "Can machines think?", there are some common threads. One is that we never really understood what these questions meant in the first place. But these questions (to varying degrees) have been useful in helping us to formulate better, more focused questions. We just have to turn loose of our pet ideas, and make a careful distinction between what we know and what we only think we know, and try to be more precise about what we are really asking.

I would be curious to hear more discussion about the common patterns and the conclusions to be drawn from the questions that have disappeared.

J. DOYNE FARMER, one of the pioneers of what has come to be called chaos theory, is McKinsey Professor at the Santa Fe Institute, and the co-founder and former co-president of Prediction Company in Santa Fe, New Mexico.

What does science have to say about the origins of love in the scheme of things? Not a lot. In fact, it is still virtually a taboo subject, just as consciousness was until very recently. However, since feelings are a major component of consciousness, it seems likely that the ontology of love will now emerge as a significant question in science.

Within Christian culture, as in many other religious traditions, love has its origin as a primal quality of God and so is co-eternal with Him. His creation is an outpouring of this love in shared relationship with beings that participate in the essential creativity of the cosmos. As in the world of Shakespeare and the Renaissance Magi, it is love that makes the world go round and animates all relationships.

This magical view of the world did not satisfy the emerging perspective of Galilean science, which saw relationships in nature as law-like, obeying self-consistent logical principles of order. God may well have created the world, but he did so according to intelligible principles. It is the job of the scientist to identify these and describe them in mathematical form. And so with Newton, love turned into gravity. The rotation of the earth around the sun, and the moon around the earth, was a result of the inverse square law of gravitational attraction. It was not a manifestation of love as an attractive principle between animated beings, however much humanity remained attached to romantic feelings about the full moon. Love was henceforth banished from scientific discourse and the mechanical world-view took over.

Now science itself is changing and mechanical principles are being replaced by more subtle notions of interaction and relationships. Quantum mechanics was the first harbinger of a new holistic world of non-local connectedness in which causality operates in a much more intricate way than conventional mechanism. We now have complexity theory as well, which seeks to understand how emergent properties arise in complex systems such as developing organisms, colonies of social insects, and human brains. Often these properties are not reducible to the behavior of their component parts and their interactions, though there is always consistency between levels: that is, there are no contradictions between the properties of the parts of a complex system and the order that emerges from them. Consciousness appears to be one of these emergent properties. With this recognition, science enters a new realm.

Consciousness involves feelings, or more generally what are called qualia, the experience of qualities such as pain, pleasure, beauty, and... love. This presents us with a major challenge. The scientific principle of consistency between levels in systems requires that feelings emerge from some property of the component parts (e.g., neurones) that is consistent with feeling, experience. But if matter is 'dead', without any feeling, and neurones are just made of this dead matter, even though organized in a complex way, then where do feelings come from? This is the crunch question which presents us with a hard choice. We can either say that feelings are epiphenomena, illusions that evolution has invented because they are useful for survival. Or we can change our view of matter and ascribe to the basic stuff of reality some elementary component of feeling, sentience, however rudimentary. Of course, we could also take the view that nature is not self-consistent and that miracles are possible; that something can come from nothing, such as feeling from dead, insentient matter, thus returning to the magical world-view of the early Renaissance. But if we are to remain scientific, then the choice is between the other two alternatives.

The notion that evolution has invented feelings because they are useful for survival is not a scientific explanation, because it gives no account of how feelings are possible as properties that emerge in the complex systems we call organisms (i.e., consistent emergent properties of life). So we are left with the other hard choice: matter must have some rudimentary property of sentience. This is the conclusion that the mathematician/philosopher A.N. Whitehead came to in his classic, Process and Reality, and it is being proposed as a solution to the Cartesian separation of mind and matter by some contemporary philosophers and scientists. It involves a radical reappraisal of what we call 'reality'. But it does suggest a world in which love exists as something real, in accord with most people's experience. And goodness knows, we could do with a little more of it in our fragmented world.

BRIAN GOODWIN is a professor of biology at the Schumacher College, Milton Keynes, and the author of Temporal Organization in Cells and Analytical Physiology, How The Leopard Changed Its Spots: The Evolution of Complexity, and (with Gerry Webster) Form and Transformation: Generative and Relational Principles in Biology. Dr. Goodwin is a member of the Board of Directors of the Santa Fe Institute.

This question bit the dust after a brief but busy life; it is entirely a second-half-of-the-20th-century question. Had it been asked before the 20th century, it would have been phrased differently: "heredity" instead of "genes." But it wasn't asked back then, because the answer was obvious to everyone. Unfortunately, the answer everyone gave — yes! — was based on erroneous reasoning about ambiguous evidence: the difference in behavior between the pauper and the prince was attributed entirely to heredity. The fact that the two had been reared in very different circumstances, and hence had had very different experiences, was overlooked.

Around the middle of the 20th century, it became politically incorrect and academically unpopular to use the word "heredity"; if the topic came up at all, a euphemism, "nature," was used in its place. The fact that the pauper and the prince had been reared in very different circumstances now came to the fore, and the behavioral differences between them were now attributed entirely to the differences in their experiences. The observation that the prince had many of the same quirks as the king was now blamed entirely on his upbringing. Unfortunately, this answer, too, was based on erroneous reasoning about ambiguous evidence.

That children tend to resemble their biological parents is ambiguous evidence; the fact that such evidence is plentiful — agreeable parents tend to have agreeable kids, aggressive parents tend to have aggressive kids, and so on — does not make it any less ambiguous. The problem is that most kids are reared by their biological parents. The parents have provided both the genes and the home environment, so the kids' heredity and environment are correlated. The prince has inherited not only his father's genes but also his father's palace, his father's footmen, and his father's Lord High Executioner (no reference to living political figures is intended).

To disambiguate the evidence, special techniques are required — ways of teasing apart heredity and environment by controlling the one and varying the other. Such techniques didn't begin to be widely used until the 1970s; their results didn't become widely known and widely accepted until the 1990s. By then so much evidence had piled up that the conclusion (which should have been obvious all along) was incontrovertible: yes, genes do influence human behavior, and so do the experiences children have while growing up.

(I should point out, in response to David Deutsch's contribution to the World Question Center, that no one study, and no one method, can provide an answer to a question of this sort. In the case of genetic influences on behavior, we have converging evidence — studies using a variety of methods all led to the same conclusion and even agreed pretty well on the quantitative details.)

Though the question has been answered, it has left behind a cloud of confusion that might not disappear for some time. The biases of the second half of the 20th century persist: when "dysfunctional" parents are found to have dysfunctional kids, the tendency is still to blame the environment provided by the parents and to overlook the fact that the parents also provided the genes.

Some would argue that this bias makes sense. After all, they say, we know how the environment influences behavior. How the genes influence behavior is still a mystery — a question for the 21st century to solve. But they are wrong. They know much less than they think they know about how the environment influences behavior.

The 21st century has two important questions to answer. How do genes influence human behavior? How is human behavior influenced by the experiences a child has while growing up?

JUDITH RICH HARRIS is a writer and developmental psychologist; co-author of The Child: A Contemporary View Of Development; winner of the 1997 George A. Miller Award for an outstanding article in general psychology, and author of The Nurture Assumption: Why Children Turn Out The Way They Do.

In 1957, a few years after he co-discovered the double helix, Francis Crick proposed a very famous hypothesis. It states that "once 'information' has passed into protein it cannot get out again. In more detail, the transfer of information from nucleic acid to nucleic acid, or from nucleic acid to protein may be possible, but transfer from protein to protein, or from protein to nucleic acid is impossible." After it had proven to form the foundation of molecular biology, he later called this hypothesis the "Central Dogma" of biology.

In the last years of the last millennium, Crick's dogma fell. The reason? Direct protein-to-protein information transfer was found to be possible in a class of proteins called "prions." With the aid of a catalyst, prions (short for "proteinaceous infectious particles") cause another molecule of the same class to adopt an infectious shape like their own simply through contact. Thus, prions are an important and only recently discovered mechanism for the inheritance of information through means other than DNA. Such an important discovery merited a recent Nobel Prize for Stanley Prusiner, who doggedly pursued the possibility of a rogue biological entity replicating without the assistance of genes against a backdrop of resistance and disbelief among most of his colleagues. Further testimony to the significance of prions comes from the current BSE crisis in Europe. Now that we know how they work, prions — and the diseases they cause — may begin popping up all over the place.

ROBERT AUNGER is an anthropologist studying cultural evolution, both through the now much-maligned method of fieldwork in nonwestern societies, and the application of theory adapted from evolutionary biology. He is at the Department of Biological Anthropology at the University of Cambridge, and the editor of Darwinizing Culture: The Status of Memetics as a Science.

"What Questions Have Disappeared...And Why?" Funny you should ask that. "And why? " could itself be the most important question that has disappeared from many fields.

"And why?": in other words, "what is the explanation for what we see happening?" "What is it in reality that brings about the outcome that we predict?" Whenever we fail to take that question seriously enough, we are blinded to gaps in our favoured explanation. And so, when we use that explanation to interpret regularities that we may observe, instead of understanding that the explanation was an assumption in our analysis, we regard it as the inescapable implication of our observations.

"I just can't feel myself split", complained Bryce DeWitt when he first encountered the many-universes interpretation of quantum theory. Then Hugh Everett convinced him that this was the same circular reasoning that Galileo rejected when he explained how the Earth can be in motion even though we observe it to be at rest. The point is, both theories are consistent with that observation. Thanks to Everett, DeWitt and others, the "and why" question began gradually to return to quantum theory, whence it had largely disappeared during the 1930s. I believe that its absence did great harm both in impeding progress and in encouraging all sorts of mystical fads and pseudo-science. But elsewhere, especially in the human philosophies (generally known as social sciences), it is still largely missing. Although behaviourism — the principled refusal to ask "and why?" — is no longer dominant as an explicit ideology, it is still widespread as a psychological attitude in the human philosophies.

Suppose you identified a gene G, and a human behaviour B, and you undertook a study with 1000 randomly chosen people, and the result was that of the 500 people who had G in their genome, 499 did B, while of the 500 who lacked G, 499 failed to do B. You'd conclude, wouldn't you, that G is the predominant cause of B? Obviously there must be other mechanisms involved, but they have little influence on whether a person does B or not. You'd inform the press that all those once-trendy theories that tried to explain B through people's upbringing or culture, or attributed it to the exercise of free will or the logic of the situation or any combination of such factors — were just wrong. You've proved that when people choose to do B, they are at the very least responding to a powerful influence from their genes. And if someone points out that your results are perfectly consistent with B being 100% caused by something other than G (or any other gene), or with G exerting an influence in the direction of not doing B, you will shrug momentarily, and then forget that possibility. Won't you?
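Deutsch's trap can be made concrete with a few lines of code. The sketch below is my own illustration; only the 499-out-of-500 figures come from his hypothetical study. It builds two toy populations — one in which G really does cause B, and one in which a hidden factor C causes both G and B while G itself does nothing — and shows that the study's statistics cannot tell them apart:

```python
def study(population):
    """Return (P(B | G), P(B | not G)) for a list of (has_G, does_B) pairs."""
    with_g = [does_b for has_g, does_b in population if has_g]
    without_g = [does_b for has_g, does_b in population if not has_g]
    return sum(with_g) / len(with_g), sum(without_g) / len(without_g)

# Model 1: G directly causes B, with one exceptional person in each group.
causal = ([(True, True)] * 499 + [(True, False)]
          + [(False, False)] * 499 + [(False, True)])

# Model 2: a hidden factor C causes both G and B; G is causally inert.
# Each person inherits both traits from C, with the same two exceptions.
confounded = ([(c, c) for c in [True] * 499 + [False] * 499]
              + [(True, False), (False, True)])

# Both models reproduce the table exactly: 499/500 with G do B,
# and 499/500 without G do not -- so the study cannot distinguish them.
assert study(causal) == study(confounded) == (499 / 500, 1 / 500)
```

The correlation is real in both worlds; the causal reading of it is an assumption the researcher brings to the data rather than a conclusion extracted from it — which is exactly the missing "and why?".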

DAVID DEUTSCH's research in quantum physics has been influential and highly acclaimed. His papers on quantum computation laid the foundations for that field, breaking new ground in the theory of computation as well as physics, and have triggered an explosion of research efforts worldwide. He is a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University and the author of The Fabric of Reality.

My vanished question is: "How does a slide rule work?" Slide rules were once ubiquitous in labs, classrooms, and the pockets of engineers. They are now as common as dinosaurs, totally replaced by electronic calculators and computers. The interesting question to ponder is: what is it that in the future will do to computers what computers did to slide rules?
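For anyone who never held one, the vanished answer can be sketched in a few lines (my own illustration, not Barrow's): a slide rule multiplies by adding lengths proportional to logarithms, since log(ab) = log a + log b.

```python
import math

def slide_rule_multiply(a, b, ticks_per_decade=5000):
    """Multiply two numbers the way a slide rule does: by adding distances
    on logarithmic scales. Rounding to a finite number of tick positions
    mimics the limited reading precision of a physical rule."""
    position = lambda x: round(math.log10(x) * ticks_per_decade)
    total_distance = position(a) + position(b)
    return 10 ** (total_distance / ticks_per_decade)

# Good to three or four significant figures, like a real slide rule.
print(slide_rule_multiply(2.0, 3.0))  # close to 6, not exactly 6
```

The design choice is the whole device: once multiplication is reduced to addition of distances, two sliding rulers do the arithmetic, which is why the instrument needed no battery and no transistor.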

JOHN BARROW is a physicist at Cambridge University. He is the author of The World Within the World, Pi in the Sky, Theories of Everything, The Origins of the Universe (Science Masters Series), The Left Hand of Creation, The Artful Universe, and Impossibility: The Limits of Science and the Science of Limits.

This question is no longer asked, not because the question has been answered (although I happen to believe the answer is e to the i pi) but because the search for knowledge about the spiritual world has shifted focus as a result of science and maths cornering all the physical and numerical answers. Along with "Did Adam have a navel?" and "Did Jesus' mother give birth parthenogenetically?", this question is no longer asked by anyone of reasonable intelligence. Those who would in the past have searched for scientific support for their spiritual ideas have finally been persuaded that this is a demeaning use for human brainpower and that by moving questions about the reality of spiritual and religious ideas into the same category as questions about mind (as opposed to brain) they will retain the respect of unbelievers and actually get nearer to an understanding of the sources of their preoccupations.

As an addendum, although I wasn't asked I would like also to answer the question "What questions should disappear and why?"

The one question that should disappear as soon as possible — and to a certain extent scientists are to blame for the fact that it is still asked — is: What is the explanation for astrology/UFOs/clairvoyance/telepathy/any other 'paranormal' phenomenon you care to name?

This question is still asked because scientists and science educators have failed to get over to the public the fact that there is only one method of explaining phenomena — the scientific method. Therefore, anything that people are puzzled by that has not been explained either doesn't exist or there isn't yet enough evidence to prove that it does. But still I get the impression that for believers in these phenomena there are two types of explanatory system — science and nonscience (you can pronounce the latter 'nonsense' if you like). When you try to argue with these people by pointing out that there isn't sufficient repeatable evidence even to begin to attempt an explanation in scientific terms, they just say that this particular phenomenon doesn't require that degree of stringency. When the evidence is strong enough to puzzle scientists as well as nonscientists, they'll begin to devise explanations — scientific explanations.

There's a good example of how this works currently with the interest taken in St John's wort as a possible treatment for depression. Once there was enough consistent evidence to suggest that there might be an effect, clinical trials were planned and are now under way. Interestingly, an indication that there might be a genuine effect comes from a substantial body of information suggesting that there are adverse drug interactions between St John's wort and immunosuppressive drugs taken by transplant patients. Once an 'alternative' remedy actually causes harm as well as having alleged benefits, its claimed effects are more likely to be genuine. One argument against most of the quack remedies around, such as homoeopathy, is that they are entirely safe (although this is seen as a recommendation by the gullible).

The demand that phenomena that are not explainable in scientific terms should be accepted on the basis of some other explanation is similar to the argument you might care to use with your bank manager that there is more than one type of arithmetic. Using his conventional accounting methods he might think your account is overdrawn, but you would argue that, although the evidence isn't as strong as his method might require, you believe you still have lots of money in your account and therefore will continue writing cheques. (As like as not, this belief in a positive balance in your account will be based on some erroneous assumption — for example, that you still have a lot of blank cheques left in your chequebook.)

KARL SABBAGH is a television producer who has turned to writing. Among his television programs are "Skyscraper" — a four-hour series about the design and construction of a New York skyscraper; "Race for the Top" — a documentary about the hunt for the top quark; and "21st Century Jet" — a five-part series following Boeing's new 777 airliner from computer design to entry into passenger service. He is the author of six books including Skyscraper, 21st Century Jet, and A Rum Affair.

This can be elaborated by the following anecdote, from an interview (2.99) with Herbert York:

"Donald Hornig, who was head of PSAC [President's Science Advisory Committee, during the Johnson Administration] was not imaginative. I can give you an example of this. I was very enthusiastic about getting a picture of the other side of the moon. And there were various ways of doing it, sooner or later. And I argued with Hornig about it and he said, 'Why? It looks just like this side.' And it turned out it didn't. But nevertheless, that was it, and that's the real Hornig. 'Why are you so enthused about the other side of the moon? The other side of the moon looks just like this side, why would you be so interested to see it?'"

GEORGE DYSON, a historian among futurists, has been excavating the history and prehistory of the digital revolution going back 300 years. His most recent book is Darwin Among the Machines.

...about our species, our gender, our friends, lovers and ourselves, the mystery of who each of us is and where in the ant hill the intelligence lies, if each ant has no clue.

We search for the variations amongst a set, try to define the set in its limits and borders against other sets, look for analogies, anomalies and statistical outliers....

There is in fact an almost universal algorithm, like cats stalking their prey, to make sense of our nature by boundary conditions, alas compiled with spotty statistics and messy heuristics, gullible souls, political machinations, cheats, lies and video tape, in short: human nature. We search and probe, the literate digerati confer virtually, each wondering about the other, each looking at their unique sets of parents, and the impossibility of imagining them in the act of procreation.

In other words, we still have no idea whatsoever who we really are, what mankind as a whole is all about. We have mild inclinations on where we have been, sort of, and contradictory intentions on where we may be headed, kind of, but all in all, we are remarkably clue-free.

But that question at least need no longer be asked and has indeed vanished after this:

Even when they had way too much to drink, pigs don't turn into men.

KAI KRAUSE is currently building a research lab dubbed "Byteburg" in a thousand-year-old castle above the Rhein river in the geometric center of Europe. He asked not to be summed up by previous accomplishments, titles or awards.

Despite Seattle and the French farmers, free market advocates of globalization have largely won — even China is signing up to be a major player in the international trading and growth-oriented global political economy. So it is rare to hear this question anymore, even from so-called "enterprise institutes" dedicated to protecting property rights.

The problem is, what has been won? My concern is not with the question no longer asked in this context, but rather with the companion question not often enough asked: "Is there any such thing as a free market?"

To be sure, markets are generally efficient ways of allocating resources and accomplishing economic goals. However, markets are notorious for leaving out much of what people really value. In different words, the market price of doing business simply excludes much of the full costs or benefits of doing business because many effects aren't measured in traditional monetary units. For example, the cost of a ton of coal isn't just the extraction costs plus transportation costs plus profit, but also real expenses to real people (or real creatures) who happen to be external to the energy market. Such "externalities" are very real to coastal dwellers trying to cope with sea level rises likely to be induced from the global warming driven by massive coal burning.
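The coal example reduces to one line of accounting. A minimal sketch, with every number invented purely for illustration:

```python
# Hypothetical per-ton cost accounting for coal (all figures made up).
extraction, transport, profit = 30.0, 10.0, 5.0
market_price = extraction + transport + profit  # what the buyer actually pays

# The "externality": damage borne by people outside the transaction,
# e.g. coastal dwellers coping with sea level rise. Invisible to the market.
external_damage = 40.0

social_cost = market_price + external_damage  # the full cost per ton

# Whenever the externality is positive, the market price understates
# the true social cost -- the sense in which no market is fully "free".
assert social_cost > market_price
```

Getting price signals to "reflect all the costs and benefits to society," in Schneider's phrase, means moving terms like the last one inside the price rather than leaving them off the books.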

I recall a discussion at the recent international negotiations to limit emissions of greenhouse gases in which a chieftain from the tiny Pacific island of Kiribati was being told by an OPEC supporter opposed to international controls on emissions from fossil fuels that the summed economies of all the small island states were only a trivial fraction of the global GDP, and thus even if sea level rise were to drive them out of national existence, this was "not sufficient reason to hold back the economic progress of the planet by constricting the free use of energy markets".

"We are not ungenerous", he said, so in the "unlikely event" that you were a victim of sea level rise, "we'll just pay to relocate all of you and your people to even better homes and jobs than you have now", and this, he went on, will be much cheaper than to "halt industrial growth" (THis isn't the forum to refute the nonsense that controls on emissions will halt industrial growth.) After hearing this offer, the aging and stately chieftain paused, scratched his flowing hair, politely thanked the OPEC man for his thoughtfulness and simply said, "we may be able to move, but what do I do with the buried bones of my grandfather?"

Economists refer to the units of value in cost-benefit analyses as "numeraires" — dollars per ton of carbon emitted, in the climate example, is the numeraire of choice for "free market" advocates. But what of lives lost per ton of emissions from intensified hurricanes, or species driven off mountain tops to extinction per ton, or heritage sites lost per ton? Or what if global GDP indeed goes up fastest by free markets but 25% of the world gets left further behind as globally economically efficient markets expand? Is equity a legitimate numeraire too?

Therefore, while market systems seem indeed to have triumphed, it is time to phase in a new, multi-part question: "How can free markets be adjusted to value what is left out of private cost-benefit calculus but represents real value so we can get the price signals in markets to reflect all the costs and benefits to society across all the numeraires, and not simply have market prices rigged to preserve the status quo in which monetary costs to private parties are the primary condition?"

I hope the new US president soon transcends all that obligatory free market rhetoric of the campaign and learns much more about what constitutes a full market price. It is very likely he'll get an earful as he jet-sets about the planet in Air Force One, catching up on the landscapes — political and physical — of the vastly diverse countries in the world that it is time for him to visit. Many world leaders are quite worried about just what we will have won as currently defined free markets triumph.

STEPHEN H. SCHNEIDER is Professor in the Biological Sciences Department at Stanford University and former Department Director and Head of the Advanced Study Project at the National Center for Atmospheric Research in Boulder. He is internationally recognized as one of the world's leading experts in atmospheric research and its implications for environment and society. Dr. Schneider's books include The Genesis Strategy: Climate Change and Global Survival; The Coevolution of Climate and Life; Global Warming: Are We Entering the Greenhouse Century?; and Laboratory Earth.

The disappearance of this question isn't only a trace of the deletion of the left. It is also a measure of our loss of faith in secular redemption. We don't look forward anymore to radical transformation.

Perhaps it's a result of a century of disappointments: from the revolution of 1917 to Stalin and the fall of communism; from the Spanish Civil War to Franco; from Mao's Long March to Deng's proclamation that to get rich is glorious. Perhaps it's a result of political history. But more of it had to do with psychological transformation. Remember Norman O. Brown's essay, "The place of apocalypse in the life of the mind"? Remember R. D. Laing's take on breakdown as breakthrough? Remember the fascination with words like 'metamorphosis' and 'metanoia'?

Maybe we're just getting older and all too used to being the people we are. But I'd like to think we're getting wiser and less naive about the possibility of shedding our pasts overnight.

It's important to distinguish between political liberalism on the one hand and a faith in discontinuous transformation on the other. If we fail to make this distinction, then forgetting about the revolution turns (metanoically) into the familiar swing to the right. Old radicals turn reactionary. If we're less dramatic about our beliefs, if we're more cautious about distinguishing between revolutionary politics and evolutionary psychology, then we'll retain our faith in the dream that we can do better. Just not overnight.

p.s. Part of the passion for paradigms and their shiftings may derive from displaced revolutionary fervor. If you yearn for transfiguration, but can't find it in religion or politics, then you'll seek it elsewhere, like the history of science.

p.p.s. There is one place where talk of transformation is alive and kicking, if not well: the executive suite. The business press is full of books about corporate transformation, re-engineering from a blank sheet of paper, reinvention from scratch. Yes, corporate America is feeling the influence of the sixties as boomers reach the board room. And this is not a bad thing. For, just as the wisdom to distinguish between revolutionary politics and evolutionary psychology can help us keep the faith in marginal improvements in the human condition, so the tension between greying warriors for change and youthful stalwarts of the status quo will keep us from lurching left or right.

JAMES OGILVY is co-founder and managing director of Global Business Network; taught philosophy at Yale and Williams; served as director of research for the Values and Lifestyles Program at SRI International; and is the author of Many Dimensional Man and Living Without a Goal.

It seems to me we’ve surrendered the notion of the sacred to those who only mean to halt the evolution of culture. Things we call "sacred" are simply ideologies and truths so successfully institutionalized that they seem unquestionable. For example, the notion that sexual imagery is bad for young people to see — a fact never established by any psychological or anthropological study I’ve come across — is accepted as God-ordained fact, and used as a fundamental building block to justify censorship. (Meanwhile, countless sitcoms in which parents lie to one another are considered wholesome enough to earn "G" television ratings.)

A politician’s claim to be "God-fearing" is meant to signify that he has priorities greater than short-term political gain. What most people don’t realize is that, in the Bible anyway, God-fearing is a distant second to God-loving. People who were God-fearing only behaved ethically because they were afraid of the Hebrew God’s wrath. This wasn’t a sacred relationship at all, but the self-interested avoidance of retaliation.

Today, it seems that no place, and — more importantly — no time is truly sacred. Our mediating technologies render us available to our business associates at any hour, day or night. Any moment spent thinking instead of spending, or laughing instead of working, is an opportunity missed. And the more time we sacrifice to production and consumption, the less any alternative seems available to us.

One radical proposal to combat the contraction of sacred time was suggested in the book of Exodus, and it's called the Sabbath. What if we all decided that for one day each week, we would refrain from buying or selling anything? Would it throw America into a recession? Maybe the ancients didn't pick the number seven out of a hat. Perhaps they understood that human beings can only immerse themselves in commerce for six days at a stretch before losing touch with anything approaching the civil, social, or sacred.

DOUGLAS RUSHKOFF is the author of Coercion, Media Virus, Playing the Future, and Ecstasy Club. He is Professor of Virtual Culture at New York University.

Most Americans, even (or, perhaps, especially) educated Americans, seem to believe that all people are basically the same: we have the same innate abilities and capacities, and only hard work and luck separate those who are highly skilled from those who are not. But this idea is highly implausible. People differ along every other dimension, from the size of their stomachs and shoes to the length of their toes and tibias. They even differ in the sizes of their brains. So why shouldn't they also differ in their abilities and capacities? Of course, the answer is that they do. It's time to acknowledge this fact and take advantage of it.

In my view, the 21st century is going to be the "Century of Personalization." No more off-the-rack drugs: Gene and proteomic chips will give readouts for each person, allowing drugs to be tailored to individual physiologies. No more off-the-rack clothes: For example, you'll stick your feet in a box, lasers will measure every aspect of them, and shoes will be custom-made according to your preferred style. Similarly, no more off-the-rack teaching.

Specifically, the first step is to diagnose individual differences in cognitive abilities and capacities, so we can play to a given person's strengths and avoid falling prey to his or her weaknesses. But in order to characterize these differences, we first need to understand at least the broad outlines of general mechanisms that are common to the species.

All of us have biceps and triceps, but these muscles differ in their strength. So too with our mental muscles. All of us have a short-term memory, for example (in spite of how it may sometimes feel at the end of the day), and all of us are capable of storing information in long-term memory. Differences among people in part reflect differences in the efficacy of such mechanisms. For example, there are at least four distinct ways that visual/spatial information can be processed (which I'm not going to go into here), and people differ in their relative abilities on each one. Presenting the same content in different ways will invite different sorts of processing, which will be more or less congenial for a given person.

But there's more to it than specifying mechanisms and figuring out how well people can use them (as daunting as that is). Many of the differences in cognitive abilities and capacities probably reflect how mechanisms work together and when they are recruited. Understanding such differences will tell us how to organize material so that it goes down smoothly. For example, how, for a given person, should examples and general principles be intermixed?

And, yet more. We aren't bloodless brains floating in vats, soaking up information pumped into us. Rather, it's up to us to decide what to pay attention to, and what to think about. Thus, it's no surprise that people learn better when they are motivated. We need to know how a particular student should be led to use the information during learning. For example, some people may "get" physics only when it's taught in the context of auto mechanics.

All of this implies that methods of teaching in the 21st Century will be tightly tied to research in cognitive psychology and cognitive neuroscience. At present, the study of individual differences is almost entirely divorced from research on general mechanisms. Even if this is remedied, it's going to be a challenge to penetrate the educational establishment and have this information put to use. So, the smart move will probably be to do an end-run around this establishment, using computers to tutor children individually outside of school. This in turn raises the specter of another kind of Digital Divide. Some of us may in fact still get off-the-rack education.

Finally, I'll leave aside another set of questions no one seems to be seriously asking: What should be taught? And should the same material be taught to everyone? You can imagine why this second question isn't being asked, but it's high time we seriously considered making the curriculum relevant for the 21st Century.

STEPHEN M. KOSSLYN, a full professor of psychology at Harvard at age 34, is a researcher focusing primarily on the nature of visual mental imagery. His books include Image and Mind, Ghosts in the Mind's Machine, Wet Mind: The New Cognitive Neuroscience, Image and Brain: The Resolution of the Imagery Debate, and Psychology: The Brain, the Person, the World.

The detection of "questions that are no longer asked" is difficult. Old questions, like MacArthur's old soldiers, just fade away. Scientists and scholars in hot pursuit of new questions neither note nor mourn their passing. I regularly face a modest form of the disappearing question challenge when a textbook used in one of my classes is revised. Deletions are hard to find; they leave no voids and are more stealthy than black holes, not even affecting their surrounds. New text content stands out, while missing material must be established through careful line-by-line reading. Whether in textbooks or in life, we don't think much about what is no longer relevant.

My response to the inquiry about questions that are no longer asked is to reframe it and suggest instead a common class of missing questions, those associated with obsolete and inappropriate metaphors. Metaphor is a powerful cognitive tool, which, like all models, clarifies thinking when appropriate, but constrains it when inappropriate. Science is full of them. My professional specialties of neuroscience and biopsychology have mind/brain metaphors ranging from Locke's ancient blank slate (tabula rasa), to the more technologically advanced switchboard, to the metaphor du jour, the computer. None does justice to the brain as a soggy lump of wetware, but all linger as cognitive/linguistic models. Natural selection in the realm of metaphors is slow and imperfect. Witness the reference to DNA as a "blueprint" for an organism, when Dawkins' "recipe" metaphor more accurately reflects DNA's encoding of instructions for organismic assembly.

ROBERT R. PROVINE is Professor of Psychology and Neuroscience at the University of Maryland, Baltimore County, and author of Laughter: A Scientific Investigation.

My colleagues in the fashionable fields of string theory and quantum gravity advertise themselves as searching desperately for the "Theory of Everything", while their experimental colleagues are gravid with the "God Particle", the marvelous Higgson, which is the somewhat misattributed source of all mass. (They are also after an understanding of the earliest few microseconds of the Big Bang.) As Bill Clinton might remark, it depends on what the meaning of "everything" is. To these savants, "everything" means a list of some two dozen numbers which are the parameters of the Standard Model. This is a set of equations which already exists and does describe very well what you and I would be willing to settle for as "everything". This is why, following Bob Laughlin, I make the distinction between "everything" and "every thing". Every thing that you and I have encountered in our real lives, or are likely to interact with in the future, is no longer outside the realm of a physics which is transparent to us: relativity, special and general; electromagnetism; the quantum theory of ordinary, usually condensed, matter; and, for a few remote phenomena, hopefully rare here on earth, our almost equally cut-and-dried understanding of nuclear physics. [Two parenthetic remarks: 1) I don't mention statistical mechanics only because it is a powerful technique, not a body of facts; 2) our colleagues have done only a sloppy job so far of deriving nuclear physics from the Standard Model, but no one really doubts that they can.]

I am not arguing that the search for the meaning of those two dozen parameters isn't exciting, interesting, and worthwhile: yes, it's not boring to wonder why the electron is so much lighter than the proton, or why the proton is stable for at least another 35 powers of ten years, or whether quintessence exists. But learning why can have no real effect on our lives, spiritually inspiring as it would indeed be, even to a hardened old atheist like myself.

When I was learning physics, half a century ago, the motivation for much of what was being done was still "is quantum theory really right?" Not just QED, though the solution of that was important, but there were still great mysteries in the behavior of ordinary matter — like superconductivity, for instance. It was only some twenty years later that I woke up to the fact that the battle had been won, probably long before, and that my motivation was no longer to test the underlying equations and ideas, but to understand what is going on. Within the same few years, the molecular biology pioneers convinced us we needed no mysterious "life force" to bring all of life under the same umbrella. Revolutions in geology, in astrophysics, and the remarkable success of the Standard Model in sorting out the fundamental forces and fields, leave us in the enviable position I described above: given any problematic phenomenon, we know where to start, at least. And nothing uncovered in string theory or quantum gravity will make any difference to that starting point.

Is this Horgan's End of Science? Absolutely not. It's just that the most exciting frontier of science no longer lies at the somewhat sophomoric — or quasi-religious — level of the most "fundamental" questions of "what are we made of?" and the like; what needs to be asked is "how did all this delightful complexity arise from the stark simplicity of the fundamental theory?" We have the theory of every thing in any field of science you care to name, and that's about as far as it gets us. If you like, science is now almost universally at the "software" level; the fundamental physicists have given us all the hardware we need, but that doesn't solve the problem, in physics as in every other field. It's a different game, probably a much harder one in fact, as it has often been in the past; but the game is only begun.

PHILIP W. ANDERSON is a Nobel laureate physicist at Princeton and one of the leading theorists on superconductivity. He is the author of A Career in Theoretical Physics and The Economy as an Evolving Complex System.

A hundred years ago, one of the most fundamental questions in physical science was: How fast is the Earth moving? Many experiments had been performed to measure the speed of the Earth through space as it orbits the sun, and as the solar system orbits the galaxy. The most famous was conducted in 1887 by Albert Michelson and Edward Morley using an optical interferometer. The result they obtained was... zero. Today, scientists regard the question of the Earth's speed through space as meaningless and misconceived, although many non-scientists still refer to the concept.

Why has the question disappeared? Einstein's theory of relativity, published in 1905, denied any absolute frame of rest in the universe; speed is meaningful only relative to other bodies or physical systems. Ironically, some decades later, it was discovered that there is a special frame of reference in the universe defined by the cosmic microwave background radiation, the fading afterglow of the big bang. The Earth sweeps through this radiation at roughly 600 km per second (over a million miles per hour) in the direction of the constellation Leo. This is the closest that modern astronomy gets to the notion of an absolute cosmic velocity.
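As a sanity check on the figures quoted above, the kilometers-per-second to miles-per-hour conversion can be sketched in a few lines (the 600 km/s value is the one given in the paragraph; the conversion factor is the standard definition of the international mile):

```python
# Convert the Earth's drift through the cosmic microwave background
# (~600 km/s, as quoted above) into miles per hour.
KM_PER_MILE = 1.609344  # exact definition of the international mile

speed_km_per_s = 600
speed_mph = speed_km_per_s / KM_PER_MILE * 3600  # km/s -> mi/s -> mi/h

print(f"{speed_mph:,.0f} mph")  # about 1.3 million miles per hour
```

So "over a million miles per hour" is, if anything, an understatement by a third.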

PAUL DAVIES is an internationally acclaimed physicist, writer and broadcaster, now based in South Australia. Professor Davies is the author of some twenty books, including Other Worlds, God and the New Physics, The Edge of Infinity, The Mind of God, The Cosmic Blueprint, Are We Alone? and About Time. He is the recipient of a Glaxo Science Writers' Fellowship, an Advance Australia Award and a Eureka prize for his contributions to Australian science, and in 1995 he won the prestigious Templeton Prize for his work on the deeper meaning of science.

The question that has appeared this year is "What questions (plural) have disappeared and why?" Countless questions have disappeared, of course, but for relatively few reasons.

The most obvious vanishings are connected to the passing of time. No one asks anymore "Who's pitching tomorrow for the Brooklyn Dodgers?" or "Who is Princess Diana dating now?"

Other disappearances are related to the advance of science and mathematics. People no longer seriously inquire whether Jupiter has moons, whether DNA has two or three helical strands, or whether there might be positive integers a, b, and c such that a^3 + b^3 = c^3.
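That last question is the exponent-3 case of Fermat's Last Theorem, settled by Euler centuries before the general proof. A small brute-force search (with a bound of 200 chosen here purely for illustration) shows the emptiness one expects:

```python
# Look for positive integers a <= b <= limit with a^3 + b^3 a perfect cube.
def find_cube_triple(limit):
    # c^3 = a^3 + b^3 <= 2 * limit^3, so c < 2 * limit is a safe ceiling.
    cubes = {n ** 3: n for n in range(1, 2 * limit)}
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            c = cubes.get(a ** 3 + b ** 3)
            if c is not None:
                return (a, b, c)
    return None  # no solution below the bound

print(find_cube_triple(200))  # prints None
```

No finite search proves the theorem, of course; the point is only that the question has moved from open inquiry to settled fact.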

Still other vanished queries are the result of changes in our ontology, scientific or otherwise. We've stopped wondering, "What happened to the phlogiston?" or "How many witches live in this valley?"

The most interesting lacunae in the erotetic landscape, however, derive from lapsed assumptions, untenable distinctions, incommensurable mindsets, or superannuated worldviews that in one way or another engender questions that are senseless or, at least, much less compelling than formerly. "What are the election's exact vote totals?" comes to mind.

Now that I've clarified to myself the meaning of "What questions have disappeared and why?" I have to confess that I don't have any particularly telling examples. (Reminds me of the joke about the farmer endlessly elucidating the lost tourist's query about the correct road to some hamlet before admitting, "Don't reckon that I know.")

JOHN ALLEN PAULOS, bestselling author, mathematician, and public speaker, is professor of mathematics at Temple University in Philadelphia. In addition to a number of scholarly papers on mathematical logic, probability, and the philosophy of science, Dr. Paulos has written Innumeracy: Mathematical Illiteracy and Its Consequences, A Mathematician Reads the Newspaper, and Once Upon a Number.

One question that has almost disappeared, but which I think should not, is the old question about whether our categories of reality are discovered or constructed. In medieval times this was the debate about realism versus nominalism. Earlier this century the question flared up again in the debates about the relativistic nature of knowledge and has more recently given rise to the whole "science wars" debacle, but reading the science press today one would think the question had finally been resolved — on the side of realism. Reading the science press now one gets a strong impression that for most scientists our categories of reality are Platonic, almost God-given entities just waiting for the right mind to uncover them. This hard-nosed realist trend is evident across the spectrum of the sciences, but is particularly strong in physics, where the search is currently on for the supposed "ultimate" category of reality — strings being a favored candidate. What gets lost in all this is any analysis of the role that language plays in our pictures of reality. We literally cannot see things that we have no words for. As Einstein once said, "we can only see what our theories allow us to see." I would argue that the question of what role language plays in shaping our picture of reality is one of the most critical questions in science today — and one that should be back on the agenda of every thoughtful scientist.

Just one example should suffice to illustrate what is at stake here: MIT philosopher of science Evelyn Fox Keller has shown in her book Secrets of Life, Secrets of Death (and elsewhere) the primary role played by language in shaping theories of genetics. Earlier this century physicists like Max Delbruck and Erwin Schrodinger started to have a philosophical impact on the biological sciences, which henceforth became increasingly "physicized." Just as atoms were seen as the ultimate constituents of matter, so genes came to be seen as the ultimate constituents of life — the entities in which all power and control over living organisms resided. What this metaphor of the "master molecule" obscured was the role played by the cytoplasm in regulating the function and activation of genes. For half a century study of the cytoplasm was virtually ignored because geneticists were so fixed on the idea of the gene as the "master molecule." Sure, much good work on genetics was done, but important areas of biological function were also ignored. And they are still being ignored by the current "master molecule" camp — the evolutionary psychologists, who cannot seem to see anything but through the prism of genes.

Scientists (like all other humans) can only see reality as their language and their metaphors allow them to see it. This is not to say that scientists "make up" their discoveries, only to point out that language plays a critical role in shaping the way we categorize, and hence theorize, the world around us. Revolutions in science are not just the result of revolutions in the laboratory or at theorists' blackboards; they are also linguistic revolutions. Think of words like inertia, energy, momentum — words which did not have any scientific meaning before the seventeenth century. Or words like quantum, spin, charm and strange, which have only had scientific meaning since the quantum revolution of the early twentieth century. Categories of reality are not merely discovered — they are also constructed by the words we use. Understanding more deeply the interplay between the physical world and human language is, I believe, one of the major tasks for the future of science.

MARGARET WERTHEIM is the author of Pythagoras' Trousers, a history of the relationship between physics and religion, and The Pearly Gates of Cyberspace: A History of Space from Dante to the Internet. She is a research associate at the American Museum of Natural History in New York and a fellow of the Los Angeles Institute for the Humanities. She is currently working on a documentary about "outsider physics."

This question was once entertained by the educated and non-educated alike, but is now out of fashion among the learned, except in two small corners of intellectual life. One corner is religious theology, which many scientists would hardly consider a legitimate form of inquiry at this time. In fact it would not be an exaggeration to say that modern thinking considers this question as fit only for the religious, and that it has no part in the realm of science at all. But even among the religious this question has lost favor because, to be honest, theology hasn't provided very many satisfactory answers for modern sensibilities, and almost no new answers in recent times. It feels like a dead end. A question that cannot be asked merely by musing in a book-lined room.

The other corner where this question is asked — but only indirectly — is in particle physics and cosmology. We get hints of answers here and there, mainly as by-products of other, more scientifically specific questions, but very few scientists set out to answer this question primarily. The problem here is that because the question of the nature of our creator is dismissed as a religious question, and both of these sciences require some of the most expensive equipment in the world, paid for by democracies committed to the separation of church and state, it won't do to address the question directly.

But there is a third way of thinking emerging that may provide a better way to ask this question. This is the third culture of technology. Instead of asking this question starting from the human mind contemplating the mysteries of God, as humanists and theologians do, or starting from experiment, observation, and testing, as scientists do, the third way investigates the nature of our creator by creating creations. This is the approach of nerds and technologists. Technologists are busy creating artificial worlds, virtual realities, artificial life, and eventually perhaps, parallel universes, and in this process they explore the nature of godhood. When we make worlds, what are the various styles of being god? What is the relation between the creator and the created? How does one make laws that unfold creatively? How much of what is created can be created without a god? Where is god essential? Sometimes there are theories (theology) but more often this inquiry is driven by pure pragmatic engineering: "We are as gods and may as well get good at it," to quote Stewart Brand.

While the third way offers a potential for new answers, more than the ways of the humanities or science, the truth is that even here this question — of the nature of our creator — is not asked directly very much. This really is a question that has disappeared from public discourse, although of course, it is asked every day by billions of people silently.

KEVIN KELLY is a founding editor of Wired magazine. In 1993 and 1996, under his co-authorship, Wired won its industry's Oscar — the National Magazine Award for General Excellence. Prior to the launch of Wired, Kelly was editor/publisher of the Whole Earth Review, a journal of unorthodox technical and cultural news. He is the author of New Rules for the New Economy and Out of Control: The New Biology of Machines, Social Systems, and the Economic World.

All those questions that were asked in extinct languages for which there is no written record.

DAVID HAIG is an evolutionary geneticist/theorist at Harvard who is interested in conflicts and conflict resolution within the genome, with a particular interest in genomic imprinting and relations between parents and offspring. His current interests include the evolution of linkage groups and the evolution of viviparity.

Darwinism is alive and well in academic discussions and in pop thinking. Natural selection is a key element in explaining just about everything we encounter today, from the origin and spread of AIDS to the realization that our parents didn't "make us do it," our ancestors did. Ironically, though, Darwinism has disappeared from the area where it was first and most firmly seated: the evolution of life, and especially the evolution of humanity. Human evolution was once pictured as a series of responses to changing environments coordinated by differences in reproduction and survivorship, as opportunistic changes taking advantage of the new possibilities opened up by the cultural inheritance of social information, as the triumph of technology over brute force, as the organization of intelligence by language. Evolutionary psychologists and other behavioralists still view it this way, but this is no longer presented as the mainstream view of human paleontologists and of geneticists who address paleodemographic problems.

Human evolution is now commonly depicted as the consequence of species replacements, in which a series of species emanating from different, but usually African, homelands each sooner or later replaces the earlier ones. It is not the selection process that provides the source of human superiority in each successive replacement, but the random accidents that take place when new species are formed from small populations of old ones. The process is seen as being driven by random extinctions, opening up unexpected opportunities for those new species lucky enough to be in the right place at the right time.

The origin and evolution of human species are now also addressed by geneticists studying the variation and distribution of human genes today (and, in a few cases, ancient genes from Neandertals). They use this information to estimate the history of human population size and the related questions of when the human population might have been small, where it might have originated, and when it might have been expanding. It is possible to do this if one can assume that mutation and genetic drift are the only driving forces of genetic change, because the effect of drift depends on population size. But this assumption means that Darwinian selection did not play any significant role in genetic evolution. Similarly, interpreting the distribution of ancient DNA as reflecting population history (rather than the history of the genes studied; the histories are not necessarily the same) also assumes that selection on the DNA studied did not play a role in its evolution. In fact, the absence of Darwinian selection is the underlying assumption for these types of genetic studies.

Human paleontology has taken a giant step away from Darwin. Will it have the courage to follow the lead of evolutionary behavior and step back?

MILFORD H. WOLPOFF is Professor of Anthropology and Adjunct Associate Research Scientist, Museum of Anthropology, at the University of Michigan. His work and theories on a "multiregional" model of human development challenge the popular "Eve" theory. His work has been covered in The New York Times, New Scientist, Discover, and Newsweek, among other publications. He is the author (with Rachel Caspari) of Race and Human Evolution: A Fatal Attraction.

Contemporary linguists tend to assume in their work that subordinate clauses, such as "the boy that I saw yesterday" or "I knew what happened when she came down the steps," are an integral part of the innate linguistic endowment, and/or central features of "human speech" writ large. Most laymen would assume the same thing. However, the fact is that when we analyze a great many strictly spoken languages with no written tradition, subordinate clauses are rare to nonexistent. In many Native American languages, for example, the only way to express something like "the men who were members" is a clause which parses approximately as "the 'membering' men"; the facts are similar in thousands of other languages largely used orally.

In fact, even in the earlier documents of today's "tall building" literary languages, one generally finds a preference for stringing simple main clauses together ("she came down the steps, and I knew what happened") rather than embedding them in one another along the lines of "when she came down the steps, I knew what happened." The guilty sense we often have when reading English of the first half of the last millennium, that the writing is stylistically somewhat "clunky," is due largely to the marginality of the subordinate clause. Here is Thomas Malory in the late fifteenth century:

And thenne they putte on their helmes and departed
and recommaunded them all wholly unto the Quene
and there was wepynge and grete sorowe
Thenne the Quene departed in to her chamber
and helde her
that no man shold perceyue here grete sorowes

Early Russian parses similarly, and crucially, so do the Hebrew Bible and the Greek of Homer.

At the time that these documents were written, writing conventions had yet to develop, and thus written language hewed closer to the way language is actually spoken on the ground. Over time, subordinate clauses, a sometime thing in speech, developed into central features of written language, their economy being aesthetically pleasing and more easily manipulated via the conscious activity of writing than the spontaneous "on-line" activity of speaking. Educated people, richly exposed to written language via education, tended to incorporate the subordinate clause mania into their spoken varieties. Hence today we think of subordinate clauses as "English", as the French do "French", and so on — even though if we listen to a tape recording of ourselves speaking casually, even we tend to favor main clauses strung together over the layered sentential constructions of Cicero.

But the "natural" state of language persists in the many which have had no written tradition. In the 1800s, various linguists casually speculated as to whether subordinate clauses were largely artifactual rather than integral to human language, with one (Karl Brugmann) even going as far as to assert that originally, humans spoke only with main clauses.

Today, however, linguistics operates under the sway of our enlightened valuation of "undeveloped" cultures, which has, healthily, included an acknowledgment of the fact that the languages of "primitive" peoples are as richly complex as written Western languages. (In fact, the more National Geographic the culture, the more fearsomely complex the language tends to be overall.) However, this sense has discouraged most linguists from treading into the realm of noting that one aspect of "complexity", subordinate clauses, is in fact not central to expression in unwritten languages and is most copiously represented in languages with a long written tradition. In general, the idea that First World written languages might exhibit certain complexities atypical of languages spoken by preliterate cultures has largely been tacitly taboo for decades in linguistics, generally only treated in passing in obscure venues.

The problem is that this could be argued to bode ill for investigations of the precise nature of Universal Grammar, which will certainly require a rigorous separation of the cultural and contingent from the encoded.

JOHN H. MCWHORTER is Assistant Professor of Linguistics at the University of California at Berkeley. He taught at Cornell University before entering his current position at Berkeley. He specializes in pidgin and creole languages, particularly of the Caribbean, and is the author of Toward a New Model of Creole Genesis and The Word on the Street: Fact and Fable About American English. He also teaches black musical theater history at Berkeley and is currently writing a musical biography of Adam Clayton Powell, Jr.

Did Fermat's question, "is it true that there are no positive integers x, y, z, and n, with n greater than 2, such that x^n + y^n = z^n?", F? for short, raised in the 17th century, disappear when Andrew Wiles answered it affirmatively by a proof of Fermat's theorem F in 1995?

The answer is no.

The question F? can be explained to every child, but the proof of F is extremely sophisticated, requiring techniques and results far beyond the reach of elementary arithmetic, thus raising the quest for conceptually simpler proofs. What is going on here? Why do such elementary theorems require such intricate machinery for their proof? The fact of the truth of F itself is hardly of vital interest. But, in the wake of Goedel's incompleteness proof of 1931, F? finds its place in a sequence of elementary number theoretic questions for which there provably cannot exist any algorithmic proof procedure!

Or take the question D? raised by the gut feeling that there are more points on a straight line segment than there are integers in the infinite sequence 1, 2, 3, 4, .... Before it can be answered, the question of what is meant by "more" must be dealt with. Once this was done, by way of the 19th century's progress in the foundations of mathematics, D? became amenable to Cantor's diagonal argument, establishing theorem D. But this was by no means the end of the question!

The proof gave rise to new fields of investigation and new ideas. In particular, the Continuum hypothesis C?, a direct descendant of D?, was shown to be "independent" of the accepted formal system of set theory. A whole new realm of questions sprang up: questions X? that are answered by proofs of independence, bluntly by "that depends" — on what you are talking about, what system you are using, on your definition of the word "is", and so forth. With this they give rise to comparative studies of systems without as well as with the assumption X added. Euclid's parallel axiom in geometry is the most popular early example.

What about the question as to the nature of infinitesimals, a question that has plagued us ever since Leibniz? Euler and his colleagues had used them with remarkable success, boldly following their intuition. But in the 19th century mathematicians became self-conscious. By the time we were teaching our calculus classes by means of epsilons, deltas, and Dedekind cuts, some of us might have thought that Cauchy, Weierstrass, and Dedekind had chased the question away. But then along came logicians like Abraham Robinson with a new take on it, with so-called nonstandard quantities — another favorite of the popular science press.

Finally, turning to a controversial issue: the question of the existence of God can neither be dismissed by a rational "No" nor by a politically expedient "Yes". Actually, as a plain yes-or-no question it ought to have disappeared long ago. Nietzsche, in particular, did his very best over a hundred years ago to make it go away. But the concept of God persists and keeps a maze of questions afloat, such as "Who means what by Him?", "Do we need a bogeyman to keep us in line?", "Do we need a crutch to hold despair at bay?" and so forth, all questions concerning human nature.

Good questions do not disappear; they mature, mutate, and spawn new questions.

VERENA HUBER-DYSON is a mathematician who taught at UC Berkeley in the early sixties, then at the University of Illinois at Chicago Circle, before retiring from the University of Calgary. Her research papers on the interface between logic and algebra concern decision problems in group theory. Her monograph Goedel's Theorem: A Workbook on Formalization is an attempt at a self-contained interdisciplinary introduction to logic and the foundations of mathematics.

1. If Barbour's theory of Platonia is even roughly correct, then everything exists in a timeless universe, and therefore doesn't actually "disappear". Therefore, all questions are always asked, as everything is actually happening at once. I know that doesn't help much, and it dodges the main thrust of the question, but it's one support for my answer, if oblique.

2. Other than forgotten questions that disappear of their own accord, or are in some dead language, or are too personal/particular/atomised (i.e., What did you think of the latest excretion from Hollywood? Is it snowing now? Why is that weirdo across the library reading room looking at me?!?! When will I lose these 35 "friends" who are perched on my belt buckle? etc.), questions don't really disappear. They are asked again and again and are answered again and again, and this is a very good thing. Three-year-olds will always ask "Daddy, where do the stars come fwum?" And daddies will always answer as best they can. Eventually, some little three-year-old will grow into an adult astronomer and might find even better answers than their daddy supplied them on a cold Christmas night. And they will answer the same simple question with a long involved answer, or possibly, a better and simpler answer. In this way, questions come up again and again, but over time they spin out in new directions with new answers.

3. It's important to not let questions disappear. By doubting the obvious, examining the same ground with fresh ideas, and questioning received ideas, great strides in the collected knowledge of this human project can be (and historically, have been) gained. When we consign a question to the scrap heap of history we run many risks — risks of blind arrogance, deaf self-righteousness, and finally choking on the bland pablum of unquestioned dogma.

4. It's important to question the questions. It keeps the question alive, as it refines the question. Question the questions, and then reverse the process - question the questioning of questions. Permit the mind everything, even if it seems repetitive. If you spin your wheels long enough you'll blow a bearing or snap a spring, and the question is re-invented, re-asked, and re-known, but in a way not previously understood. In this way, questions don't disappear, they evolve into other questions. For a while they might bloat up in the sun and smell really weird, but it's all part of the process...

HENRY WARWICK sometimes works as a scientist in the computer industry. He always works as an artist, composer, and writer. He lives in San Francisco, California.

Such a simple question. Many of you might think "Has that question really disappeared?" Some questions disappear for ever because they have been answered. Some questions go extinct because they were bad questions to begin with. But there are others that appear to vanish but then we find that they are back with us again in a slightly different guise. They are questions that are just too close to our hearts for us to let them die completely.

For millennia, human superiority was taken for granted. From the lowest forms of life up to humans and then on to the angels and God, all living things were seen as arranged in the Great Chain of Being. Ascend the chain and perfection grows. It is a hierarchical philosophy that conveniently allows for the exploitation of dumber beasts — of other species or races — as a right by their superiors. We dispose of them as God disposes of us.

The idea of human superiority should have died when Darwin came on the scene.

Unfortunately, the full implications of what he said have been difficult to take in: there is no Great Chain of Being, no higher and no lower. All creatures have adapted effectively to their own environments in their own way. Human "smartness" is just a particular survival strategy among many others, not the top of a long ladder.

It took a surprisingly long time for scientists to grasp this. For decades, comparative psychologists tried to work out the learning abilities of different species so that they could be arranged on a single scale. Animal equivalents of intelligence tests were used and people seriously asked whether fish were smarter than birds. It took the new science of ethology, created by Nobel-prize winners Konrad Lorenz, Niko Tinbergen and Karl von Frisch, to show that each species had the abilities it needed for its own lifestyle, and that these could not be arranged on a universal scale. Human smartness is no smarter than anyone else's smartness. The question should have died for good.

Artificial intelligence researchers came along later but they too could not easily part from medieval thinking. The most important problems to tackle were agreed to be those that represented our "highest" abilities. Solve them and everything else would be easy. As a result, we have ended up with computer programs that can play chess as well as a grandmaster. But unfortunately we have none that can make a robot walk as well as a 2-year-old, let alone run like a cat. The really hard problems turn out to be those that we share with "lower" animals.

Strangely enough, even evolutionary biologists still get caught up with the notion that humans stand at the apex of existence. There are endless books from evolutionary biologists speculating on the reasons why humans evolved such wonderful big brains, but a complete absence of those which ask whether a big brain is really a useful organ to have. The evidence is far from persuasive. If you look at a wide range of organisms, those with bigger brains are generally no more successful than those with smaller brains — they go extinct just as fast.

Of course, it would be really nice to sample a large range of different planets where life is to be found and see if big-brained creatures do better over really long time scales (the Earth is quite a young place). Unfortunately, we cannot yet do that, although the fact that we have never been contacted by any intelligent life from older parts of the Universe suggests that it usually comes to a bad end.

Still, as we are humans it's just so hard not to be seduced by the question "What makes us so special?", which is just the same as the question above but in a different form. When you switch on a kitchen light and see a cockroach scuttle for safety you can't help seeing it as a lower form of life. Unfortunately, there are a lot more of them than there are of us and they have been around far, far longer. Cockroach philosophers doubtless entertain their six-legged friends by asking "What makes us so special?"

Einstein's theory of gravity — general relativity — transcended Newton's by offering deeper insights. It accounted naturally, in a way that Newton didn't, for why everything falls at the same speed, and why the force obeys an inverse square law. The theory dates from 1916, and was famously corroborated by the measured deflection of starlight during eclipses, and by the anomalies in Mercury's orbit. But it took more than 50 years before there were any tests that could measure the distinctive effects of the theory with better than 10 percent accuracy. In the 1960s and 1970s, there was serious interest in tests that could decide between general relativity and alternative theories that were still in the running. But now these tests have improved so much, and yielded such comprehensive and precise support for Einstein, that it would require very compelling evidence indeed to shake our belief that general relativity is the correct "classical" theory of gravity.

New and different experiments are nonetheless currently being planned. But the expectation that they'll corroborate the theory is now so strong that we'd demand a high burden of proof before accepting a contrary result. For instance, NASA plans to launch an ultra-precise gyroscope ("Gravity Probe B") to measure tiny precession effects. If the results confirm Einstein, nobody will be surprised or excited — though they would have been if the experiment had flown 30 years ago, when it was first devised. On the other hand, if this very technically challenging experiment revealed seeming discrepancies, I suspect that most scientists would suspend judgment until it had been corroborated. So the most exciting result of Gravity Probe B would be a request to NASA for another vast sum, in order to repeat it.

But Einstein himself raised other deep questions that are likely to attract more interest in the 21st century than they ever did in the 20th. He spent his last 30 years in a vain (and, as we now recognize, premature) quest for a unified theory. Will such a theory — reconciling gravity with the quantum principle, and transforming our conception of space and time — be achieved in coming decades? And, if it is, what answer will it offer to another of Einstein's questions: "Did God have any choice in the creation of the world?" Is our universe — and the physical laws that govern it — the unique outcome of a fundamental theory, or are the underlying laws more "permissive", in the sense that they could allow other very different universes as well?

MARTIN REES is Royal Society Professor at Cambridge University and a leading researcher in astrophysics and cosmology. His books include Before the Beginning, Gravity's Fatal Attraction and (most recently) Just Six Numbers.

No doubt, there are differences between women and men, some obvious and others more contentious. But arguments for inequality of worth or rights between the sexes have wholly lost intellectual respectability. Why? Because they were grounded in biologically evolved dispositions and culturally transmitted prejudices that, however strongly entrenched, could not withstand the kind of rational scrutiny to which they have been submitted in the past two centuries. Also because, more recently, the Feminist movement has given so many of us the motivation and the means to look into ourselves and recognize and fight lingering biases. Still, the battle against sexism is not over — and it may never be.

DAN SPERBER is a social and cognitive scientist at the French Centre National de la Recherche Scientifique (CNRS) in Paris. His books include Rethinking Symbolism, On Anthropological Knowledge, Explaining Culture: A Naturalistic Approach, and, with Deirdre Wilson, Relevance: Communication and Cognition.

"... can there really be fossil sea-shells in the mountains of Kentucky, hundreds of miles from the Atlantic coast? "

This question about questions may be a useful way to differentiate "science" from "not-science"; questions really do disappear in the former in a way, or at least at a rate, that they don't in the latter.

A question that has disappeared: can there really be fossil sea-shells in the mountains of Kentucky, hundreds of miles from the Atlantic coast?

I came across this particular question recently when reading Thomas Jefferson's 'Notes on the State of Virginia'; he devotes several pages to speculation about whether the finds in Kentucky really were sea-shells, and, if so, how they could have ended up there. Geologists could, today, tell him.

"...from what source do governments get their legitimate power?"

Perhaps another question dear to Jefferson's heart has also disappeared: from what source do governments get their legitimate power? In 1780, this was a real question, concerning which reasonable people gave different answers: 'God,' or 'the divine right of Kings,' or 'heredity,' or 'the need to protect its citizens.' By declaring as 'self evident' the 'truth' that 'governments derive their just power from the consent of the governed,' Jefferson was trying to declare that this question had, in fact, disappeared. I think he may have been right.

DAVID POST is Professor of Law at Temple University, and Senior Fellow at The Tech Center at George Mason University, with an interest in questions of (and inter-connections between) Internet law, complexity theory, and the ideas of Thomas Jefferson.

Psychology for much of the 20th century was dominated by the view that men and women were psychologically identical. So pervasive was this assumption that research articles in psychology journals prior to the 1970s rarely bothered to report the sex of their study participants. Women and men were understood to be interchangeable. Findings for one sex were presumed to be applicable to the other. Once the American Psychological Association required sex of participants to be reported in published experiments, controversy erupted over whether men and women were psychologically different. The past three decades of empirical research have resolved this issue, at least in delimited domains. Although women and men show great psychological similarity, they also differ in profound ways. They diverge in the sexual desires they express and mating strategies they pursue. They differ in the time they allocate to friends and the relentlessness with which they pursue status. They display distinct abilities in reading others' minds, feeling others' feelings, and responding emotionally to specific traumas in their lives. Men opt for a wider range of risky activities, are more prone to violence against others, make sharper in-group versus out-group distinctions, and commit the vast majority of homicides worldwide. The question 'Do men and women differ psychologically?' has been replaced with more interesting questions. In what ways do these sex differences create conflict between men and women? Have the selection pressures that created these differences vanished in the modern world? How can societies premised on equality grapple with the profound psychological divergences of the sexes?

DAVID M. BUSS is Professor of Psychology at the University of Texas, Austin, and author of several books, most recently The Dangerous Passion: Why Jealousy is as Necessary as Love and Evolutionary Psychology: The New Science of the Mind , and The Evolution of Desire: Strategies of Human Mating.

At a particular point (yet to be clearly defined) in human cultural evolution, a specific idea took hold that there were two, partially separable, elements present in a living creature: the material body and the force that animated it. On the death of the body the animating force would, naturally, desire the continuation of this-worldly action and struggle to reassert itself (just as one might strive to retrieve a flint axe one had accidentally dropped). If the soul (or spirit) succeeded, it would also seek to repossess its property, including its spouse, and reassert its material appetites.

The desire of the disembodied soul was viewed as dangerous by the living, who had by all means to enchant, cajole, fight off, sedate, or otherwise distract and disable it. This requirement to keep the soul from the body after death did not last forever, only so long as the flesh lay on the bones. For the progress of the body's decomposition was seen as analogous to the slow progress the soul made toward the threshold of the Otherworld. When the bones were white (or were sent up in smoke, or whatever the rite in that community was), then it was deemed that the person had finally left this life and was no longer a danger to the living. Thus it was that for most of recent human history (roughly the last 35,000 years) funerary rites were twofold: the primary rites zoned off the freshly dead and instantiated the delicate ritual powers designed to keep the unquiet soul at bay; the secondary rites, occurring after weeks or months (or, sometimes — in the case of people who had wielded tremendous worldly power — years), firmly and finally incorporated the deceased into the realm of the ancestors.

Since the rise of science and scepticism, the idea of the danger of the disembodied soul has, for an increasing number of communities, simply evaporated. But there is a law of conservation of questions. "How can I stop the soul of the deceased reanimating the body?" is now being replaced with "How can I live so long that my life becomes indefinite?", a question previously only asked by the most arrogant pharaohs and emperors.

TIMOTHY TAYLOR lectures in the Department of Archaeological Sciences, University of Bradford, UK. He is the author of The Prehistory of Sex.

The moment of birth used to be attended by an answer to a nine-month mystery: girl or boy? Now, to anyone with the slightest curiosity and no mystical scruples, simple, non-invasive technology can provide the answer from an early stage of pregnancy. With both of our children, we chose to know the answer (in the UK about half of parents want to know), and I suspect the likelihood of the question continuing to be asked will diminish rapidly.

What's interesting is that this is the first of many questions about the anticipated child that will soon not be asked. These will range from the trivial (eye colour, mature height) to the important (propensity to certain diseases and illnesses). The uneasiness many people still have about knowing the sex of the child suggests that society is vastly unprepared for the pre-birth answers to a wide range of questions.

LANCE KNOBEL is a managing director of Vesta Group, an Internet and wireless investment company based in London. He was formerly head of the programme of the World Economic Forum's Annual Meeting in Davos and Editor-in-Chief of World Link.

How should adult education work? How do we educate the masses? (That's right, The Masses.) How do we widen the circle of people who love and support great art, great music, great literature? How do we widen the circle of adults who understand the science and engineering that our modern world is built on? How do we rear good American citizens? Or for that matter good German citizens, or Israeli or Danish or Chilean? And if this is the information age, why does the population at large grow worse-informed every year? (Sorry — that last one isn't a question people have stopped asking; they never started.)

These questions have disappeared because in 2001, the "educated elite" never goes anywhere without its quote-marks. Here in America's fancy universities, we used to believe that everyone deserved and ought to have the blessings of education. Today we believe our children should have them — and to make up for that fact, to even the score, we have abolished the phrase. No more "blessings of education." That makes us feel better. Many of us can't say "truth and beauty" without snickering like 10-year-old boys.

But the situation will change, as soon as we regain the presence of mind to start asking these questions again. We have the raw materials on hand for the greatest cultural rebirth in history. We have the money and the technical means. We tend to tell our children nowadays (implicitly) that their goal in life is to get rich, get famous and lord it over the world. We are ashamed to tell them that what they really ought to be is good, brave and true. (In fact I am almost ashamed to type it.) This terrible crisis of confidence we're going through was probably inevitable; at any rate it's temporary, and if we can't summon the courage to tell our children what's right, my guess is that they will figure it out for themselves, and tell us. I'm optimistic.

DAVID GELERNTER is Professor of Computer Science at Yale University and author of Mirror Worlds, The Muse in the Machine, 1939: The Lost World of the Fair, and Drawing Life: Surviving the Unabomber.

Metaphysical answers haven't disappeared: the new agers are full of them, and so are the old religionists.

Cosmological questions haven't disappeared: but scientists press them as real questions about the very physical universe.

But the old Platonic questions about the nature of the good and the form of beauty — they went away when we weren't looking. They won't be back.

JAMES J. O'DONNELL, Professor of Classical Studies and Vice Provost for Information Systems and Computing at the University of Pennsylvania, is the author of Avatars of the Word: From Papyrus to Cyberspace.

During the early 1980s, I had the wonderful fortune to spend a great deal of time with Richard Feynman, and our innumerable conversations extended over a very broad range of topics (not always physics!). At that time, I had just finished re-reading his wonderful book, The Character of Physical Law, and wanted to discuss an interesting question with him, not directly addressed by his book:

Why is our sense of beauty and elegance such a useful tool for discriminating between a good theory and a bad theory?

And a related question:

Why are the fundamental laws of the universe self-similar?

Over lunch, I put the questions to him.

"It's goddam useless to discuss these things. It's a waste of time," was Dick's initial response. Dick always had an immediate gut-wrenching approach to philosophical questions. Nevertheless, I persisted, because it certainly was to be admitted that he had a strong intuitive sense of the elegance of fundamental theories, and might be able to provide some insight rather than just philosophizing. It was also true that this notion was a successful guiding principle for many great physicists of the twentieth century including Einstein, Bohr, Dirac, Gell-Mann, etc. Why this was so, was interesting to me.

We spent several hours trying to get at the heart of the problem and, indeed, trying to determine if it was even a true notion rather than some romantic representation of science.

We did agree that it was impossible to explain honestly the beauties of the laws of nature in a way that people can feel, without their having some deep understanding of mathematics. It wasn't that mathematics was just another language for physicists; it was a tool for reasoning by which you could connect one statement with another. For the physicist, every phrase has meaning: his words need a connection to the real world.

Certainly, a beautiful theory meant being able to describe it very simply in terms of fundamental mathematical quantities. "Simply" meant compression into a small mathematical expression with tremendous explanatory powers, which required only a finite amount of interpretation. In other words, a huge number of relationships between data are concisely fit into a single statement. Later, Murray Gell-Mann expressed this point well, when he wrote, "The complexity of what you have to learn in order to be able to read the statement of the law is not really very great compared to the apparent complexity of the data that are being summarized by that law. That apparent complexity is partly removed when the law is formed."

Another driving principle was that the laws of the universe are self similar, in that there are connections between two sets of phenomena previously thought to be distinct. There seemed to be a beauty in the inter-relationships fed by perhaps a prejudice that at the bottom of it all was a simple unifying law.

It was easy to find numerous examples from the history of modern science that fit within this framework (Maxwell's equations for electromagnetism, Einstein's general-relativistic equations for gravitation, Dirac's relativistic quantum mechanics, etc.), but Dick and I were still working away at the fringes of the problem. So far, all we could do was describe the problem and find numerous examples; we could not answer what provided the feeling for great intuitive guesses.

Perhaps our love of symmetries and patterns is an integral part of why we embrace certain theories and not others. For example, for every conservation law there was a corresponding symmetry, albeit sometimes these symmetries would be broken. But this led us to another question: Is symmetry inherent in nature or is it something we create? When we spoke of symmetries, we were referring to the symmetry of the mathematical laws of physics, not to the symmetry of objects commonly found in nature. We felt that symmetry was inherent in nature, because it was not something that we expected to find in physics. Another psychological prejudice was our love for patterns. The simplicity of the patterns in physics was beautiful. This does not mean simple in action — the motion of the planets and of atoms can be very complex — but the basic patterns underneath are simple. This is what is common to all of our fundamental laws.

It should be noted that we could also come up with numerous examples where one's sense of elegance and beauty led to beautiful theories that were wrong. A perfect example of a mathematically elegant theory that turned out to be wrong is Francis Crick's 1957 attempt at working out the genetic coding problem (Codes without Commas). It was also true that there were many examples of physical theories that were pursued on the basis of lovely symmetries and patterns, and that these also turned out to be false. Usually, these were false because of some logical inconsistency or the crude fact that they did not agree with experiment.

The best that Dick and I could come up with was an unscientific response: given our fondness for patterns and symmetry, we have a prejudice — that nature is simple and therefore beautiful.

Since that time, the question has disappeared from my mind, and it is fun thinking about it again, but in doing scientific research, I now have to concern myself with more pragmatic questions.

AL SECKEL is acknowledged as one of the world's leading authorities on illusions. He has given invited lectures on illusions at Caltech, Harvard, MIT, Berkeley, Oxford University, University of Cambridge, UCLA, UCSD, University of Lund, University of Utrecht, and many other fine institutions. Seckel is currently under contract with the Brain and Cognitive Division of the MIT Press to author a comprehensive treatise on illusions, perception, and cognitive science.

Progress in the domain that Marvin Minsky once characterized as "making machines do things that would be considered intelligent if done by people" has not been as dramatic as its founders might once have hoped, but the penetration of machine cognition into everyday life (from the computer that plays chess to the computer that determines if your toast is done) has been broad and deep. We now use the term "intelligent" to refer to the kind of helpful smartness embedded in such objects. So the language has shifted and the question has disappeared. But until recently, there was a tendency to limit appreciation of machine mental prowess to the realm of the cognitive. In other words, acceptance of artificial intelligence came with a certain "romantic reaction." People were willing to accept that simulated thinking might well be deemed thinking, but simulated feeling was not feeling. Simulated love could never be love.

These days, however, the realm of machine emotion has become a contested terrain. There is research in "affective computing" and in robotics which produces virtual pets and digital dolls — objects that present themselves as experiencing subjects. In artificial intelligence's "essentialist" past, researchers tried to argue that the machines they had built were "really" intelligent. In the current business of building machines that self-present as "creatures," the work of inferring emotion is left in large part to the user. The new artificial creatures are designed to push our evolutionary buttons to respond to their speech, their gestures, and their demands for nurturance by experiencing them as sentient, even emotional. And people are indeed inclined to respond to creatures they teach and nurture by caring about them, often in spite of themselves. People tell themselves that the robot dog is a program embodied in plastic, but they become fond of it all the same. They want to care for it and they want it to care for them.

In cultural terms, old questions about machine intelligence have given way to a question not about the machines but about us: What kind of relationships is it appropriate to have with a machine? It is significant that this question has become relevant in a day-to-day sense during a period of unprecedented human redefinition through genomics and psychopharmacology, fields that, along with robotics, encourage us to ask not only whether machines will be able to think like people, but whether people have always thought like machines.

SHERRY TURKLE is a professor of the sociology of science at MIT. She is the author of Life on the Screen: Identity in the Age of the Internet; The Second Self: Computers and the Human Spirit; and Psychoanalytic Politics: Jacques Lacan and Freud's French Revolution.

On April 8, 1966, the cover of Time Magazine asked "Is God Dead?" in bold red letters on a jet black background. This is an arresting question that no one asks anymore, but back in 1966 it was a hot issue that received serious comment. In 1882 Friedrich Nietzsche in The Gay Science had a character called "the madman" running through the marketplace shouting "God is dead!", but in the book, no one took the madman seriously.

The Time Magazine article reported that a group of young theologians calling themselves Christian atheists, led by Thomas J. J. Altizer at Emory University, had claimed God was dead. This hit a cultural nerve and in an appearance on "The Merv Griffin Show" Altizer was greeted by shouts of "Kill him! Kill him!" Today Altizer continues to develop an increasingly apocalyptic theology but has not received a grant or much attention since 1966.

The lesson here is that the impact of a question very much depends on the cultural moment. Questions disappear not because they are answered but because they are no longer interesting.

TERRENCE J. SEJNOWSKI, a pioneer in computational neurobiology, is regarded by many as one of the world's foremost theoretical brain scientists. In 1988, he moved from Johns Hopkins University to the Salk Institute, where he is a Howard Hughes Medical Investigator and the director of the Computational Neurobiology Laboratory. In addition to co-authoring The Computational Brain, he has published over 250 scientific articles.

This is a question that the ancients asked, and one that crops up a few times in 20th century philosophical discussions. When it is mentioned, it is usually as an example of a problem that looks to be both deep and in principle insoluble. Unsurprisingly, then, it seems to have fallen by the scientific, cosmological and philosophical waysides. But sometimes I wonder whether it really is insoluble (or senseless), or whether science may one day surprise us by finding an answer.

ANDY CLARK is Professor of Philosophy and Cognitive Science at the University of Sussex, UK. He was previously Director of the Philosophy/Neuroscience/Psychology Program at Washington University in St. Louis. He is the author of Microcognition: Philosophy, Cognitive Science and Parallel Distributed Processing, Associative Engines, and Being There: Putting Brain, Body and World Together Again.

They cordoned off the area and brought in disposal experts to defuse the bomb, but it turned out to be full of — sawdust. The Population Bomb is truly a dud, although this news and its implications have yet fully to sink into the general consciousness.

Ideas can become so embedded in our outlook that they are hard to shake by rational argument. As a Peace Corps Volunteer working in rural India in the 1960s, I vividly remember being faced with multiple uncertainties about what might work for the modernization of India. There was only one thing I and my fellow development workers could all agree on: India unquestionably would experience mass famine by the 1980s at the latest. For us at the time this notion was an eschatological inevitability and an article of faith.

For 35 years since those days, India has grown in population by over a million souls a month, never failing to feed itself or earn enough to buy the food it needs (sporadic famine conditions in isolated areas, which still happen in India, are always a matter of communications and distribution breakdown).

Like so many of the doomsayers of the twentieth century, we left crucial factors out of our glib calculations. First, we failed to appreciate that people in developing countries will behave exactly like people in the rest of the world: as they improve their standard of living, they have fewer children. In India, the rate of population increase began to turn around in the 1970s, and it has declined since. More importantly, we underestimated the capacity of human intelligence to adapt to changing situations.

Broadly speaking, instead of a world population of 25 or 30 billion, which some prophets of the 1960s were predicting, it now looks as though the peak of world population growth might be reached within 25 to 40 years at a maximum of 8.5 billion (just 2.5 billion above the present world population). Even without advances in food technology, the areas of land currently out of agricultural production in the United States and elsewhere will prevent starvation. But genetic technologies will increase the quantities and healthfulness of food, while at the same time making food production much more environmentally friendly. For example, combining gene modification with herbicides will make it possible to produce crops that induce no soil erosion. New varieties will require less intensive application of nitrogen fertilizers and pesticides. If genetic techniques can control endemic pests, vast areas of Africa could be brought into productive cultivation.

There will be no way to add 2.5 billion people to the planet without environmental costs. Some present difficulties, such as limited supplies of fresh water in Third World localities, will only get worse. But these problems will not be insoluble. Moreover, there is not the slightest chance that population growth will in itself cause famine. What will be fascinating to watch, for those who live long enough to witness it, will be how the world copes with an aging, declining population, once the high-point has been reached.

The steady evaporation of the question, "When will overpopulation create worldwide starvation?", has left a gaping hole in the mental universe of the doomsayers. They have been quick to fill it with anxieties about global warming, cellphones, the ozone hole, and McDonaldization. There appears to be a hard-wired human propensity to invent threats where they cannot clearly be discovered. Historically, this has been applied to foreign ethnic groups or odd individuals in a small-scale society (the old woman whose witchcraft must have caused village children to die). Today's anxieties focus on broader threats to mankind, where activism can mix fashionable politics with dubious science. In this respect alone, the human race is not about to run out of problems. Fortunately, it also shows no sign of running out of solutions.

DENIS DUTTON, founder and editor of the innovative Web page Arts & Letters Daily (www.cybereditions.com/aldaily/), teaches the philosophy of art at the University of Canterbury, New Zealand, and writes widely on aesthetics. He is editor of the journal Philosophy and Literature, published by the Johns Hopkins University Press. Professor Dutton is a director of Radio New Zealand, Inc.

This question bit the dust after a brief but busy life; it is entirely a second-half-of the-20th-century question. Had it been asked before the 20th century, it would have been phrased differently: "heredity" instead of "genes." But it wasn't asked back then, because the answer was obvious to everyone. Unfortunately, the answer everyone gave — yes! — was based on erroneous reasoning about ambiguous evidence: the difference in behavior between the pauper and the prince was attributed entirely to heredity. The fact that the two had been reared in very different circumstances, and hence had had very different experiences, was overlooked.

Around the middle of the 20th century, it became politically incorrect and academically unpopular to use the word "heredity"; if the topic came up at all, a euphemism, "nature," was used in its place. The fact that the pauper and the prince had been reared in very different circumstances now came to the fore, and the behavioral differences between them were now attributed entirely to the differences in their experiences. The observation that the prince had many of the same quirks as the king was now blamed entirely on his upbringing. Unfortunately, this answer, too, was based on erroneous reasoning about ambiguous evidence.

That children tend to resemble their biological parents is ambiguous evidence; the fact that such evidence is plentiful — agreeable parents tend to have agreeable kids, aggressive parents tend to have aggressive kids, and so on — does not make it any less ambiguous. The problem is that most kids are reared by their biological parents. The parents have provided both the genes and the home environment, so the kids' heredity and environment are correlated. The prince has inherited not only his father's genes but also his father's palace, his father's footmen, and his father's Lord High Executioner (no reference to living political figures is intended).

To disambiguate the evidence, special techniques are required — ways of teasing apart heredity and environment by controlling the one and varying the other. Such techniques didn't begin to be widely used until the 1970s; their results didn't become widely known and widely accepted until the 1990s. By then so much evidence had piled up that the conclusion (which should have been obvious all along) was incontrovertible: yes, genes do influence human behavior, and so do the experiences children have while growing up.

(I should point out, in response to David Deutsch's contribution to the World Question Center, that no one study, and no one method, can provide an answer to a question of this sort. In the case of genetic influences on behavior, we have converging evidence — studies using a variety of methods all led to the same conclusion and even agreed pretty well on the quantitative details.)

Though the question has been answered, it has left behind a cloud of confusion that might not disappear for some time. The biases of the second half of the 20th century persist: when "dysfunctional" parents are found to have dysfunctional kids, the tendency is still to blame the environment provided by the parents and to overlook the fact that the parents also provided the genes.

Some would argue that this bias makes sense. After all, they say, we know how the environment influences behavior. How the genes influence behavior is still a mystery — a question for the 21st century to solve. But they are wrong. They know much less than they think they know about how the environment influences behavior.

The 21st century has two important questions to answer. How do genes influence human behavior? How is human behavior influenced by the experiences a child has while growing up?

JUDITH RICH HARRIS is a writer and developmental psychologist; co-author of The Child: A Contemporary View of Development; winner of the 1997 George A. Miller Award for an outstanding article in general psychology; and author of The Nurture Assumption: Why Children Turn Out the Way They Do.

From the time of the Enlightenment, philosophers have speculated that the remarkable advances of science would one day spill over into the realm of moral philosophy, and that scientists would be able to discover answers to previously insoluble moral dilemmas and ethical conundrums. One of the reasons Ed Wilson's book Consilience was so successful was that he attempted to revive this Enlightenment dream. Alas, we seem no closer than we were when Voltaire, Diderot, and company first encouraged scientists to go after moral and ethical questions. Are such matters truly insoluble and thus out of the realm of science (since, as Peter Medawar noted, "science is the art of the soluble")? Should we abandon Ed Wilson's Enlightenment dream of applying evolutionary biology to the moral realm? Most scientists agree that moral questions are scientifically insoluble, and they have abandoned the Enlightenment dream. But not all. We shall see.

MICHAEL SHERMER is the founding publisher of Skeptic magazine, the host of the acclaimed public science lecture series at Caltech, and a monthly columnist for Scientific American. His books include Why People Believe Weird Things, How We Believe, and Denying History.

Three in four entering collegians today deem it "very important" or "essential" that they become "very well-off financially." Most adults believe "more money" would boost their quality of life. And today's "luxury fever" suggests that affluent Americans and Europeans are putting their money where their hearts are. "Whoever said money can't buy happiness isn't spending it right," proclaimed a Lexus ad. But the facts of life have revealed otherwise.

Although poverty and powerlessness often bode ill for body and spirit, wealth fails to elevate well-being. Surveys reveal that even lottery winners and the super rich soon adapt to their affluence. Moreover, those who strive most for wealth tend, ironically, to live with lower well-being than those focused on intimacy and communal bonds. And consider post-1960 American history: Average real income has doubled, so we own twice the cars per person, eat out two and a half times as often, and live and work in air conditioned spaces. Yet, paradoxically, we are a bit less likely to say we're "very happy." We are more often seriously depressed. And we are just now, thankfully, beginning to pull out of a serious social recession that was marked by doubled divorce, tripled teen suicide, quadrupled juvenile violence, quintupled prison population, and a sextupled proportion of babies born to unmarried parents. The bottom line: Economic growth has not improved psychological morale or communal health.

DAVID G. MYERS is a social psychologist at Hope College (Michigan) and author, most recently, of The American Paradox: Spiritual Hunger in an Age of Plenty and of A Quiet World: Living with Hearing Loss.

Some scientific questions cannot be resolved, but rather are dissolved, and vanish once we begin to better understand their terms.

This is often the case for "definitional questions". For instance, what is the definition of life? Can we trace a sharp boundary between what is living and what is not living? Is a virus living? Is the entire earth a living organism? It seems that our brain predisposes us to ask questions that require a yes or no answer. Moreover, as scientists, we'd like to keep our mental categories straight and, therefore, we would like to have neat and tidy definitions of the terms we use. However, especially in the biological sciences, the objects of reality do not conform nicely to our categorical expectations. As we delve into research, we begin to realize that what we naively conceived of as an essential category is, in fact, a cluster of loosely bound properties that each need to be considered in turn (in the case of life: metabolism, reproduction, autonomy, homeostasis, etc.). Thus, what was initially considered a simple question, requiring a straightforward answer, becomes a complex issue or even a whole domain of research. We begin to realize that there is no single answer, but many different answers depending on how one frames the terms of the question. And eventually, the question is simply dropped. It is no longer relevant.

I strongly suspect that one of today's hottest scientific questions, the definition of consciousness, is of this kind. Some scientists seem to believe that what we call consciousness is an essence of reality, a single coherent phenomenon that can be reduced to a single level such as a quantum property of microtubules. Another possibility, however, is that consciousness is a cluster of properties that, most of the time, cohere together in awake adult humans. A minimal list probably includes the ability to attend to sensory inputs or internal thoughts, to make them available broadly to multiple cerebral systems, to store them in working memory and in episodic memory, to manipulate them mentally, to act intentionally based on them, and in particular to report them verbally. As we explore the issue empirically, we begin to find many situations (such as visual masking or specific brain lesions) in which those properties break down. The neat question "what is consciousness" dissolves into a myriad of more precise and more fruitful research avenues.

Any biological theory of consciousness, which assumes that consciousness has evolved, implies that "having consciousness" is not an all-or-none property. The biological substrates of consciousness in human adults are probably also present, but only in partial form, in other species, in young children or brain-lesioned patients. It is therefore a partially arbitrary question whether we want to extend the use of the term "consciousness" to them. For instance, several mammals, and even very young human children, show intentional behavior, partially reportable mental states, some working memory ability — but perhaps no theory of mind, and more "encapsulated" mental processes that cannot be reported verbally or even non-verbally. Do they have consciousness, then? My bet is that once a detailed cognitive and neural theory of the various aspects of consciousness is available, the vacuity of this question will become obvious.

STANISLAS DEHAENE, researcher at the Institut National de la Santé, studies cognitive neuropsychology of language and number processing in the human brain; author of The Number Sense: How Mathematical Knowledge Is Embedded In Our Brains.

Nobody asks what "vital force" is anymore. Organisms still have just as much vital force as they had before, but as understanding of biological mechanisms increased, the idea of a single essence evaporated. Hopefully the same will happen with "consciousness".

GEOFFREY HINTON is an AI researcher at the Gatsby Computational Neuroscience Unit, University College London, where he does research on ways of using neural networks for learning, memory, perception and symbol processing and has over 100 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that is now widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, and Helmholtz machines. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.

That was a brow-furrower in the late '50s and early '60s for social observers and forecasters. Whole books addressed the problem, most of them opining that Americans would have to become very interested in the arts. Turned out the problem never got around to existing, and the same kind of people are worrying now about how Americans will survive the stress of endless multi-tasking.

"Can the threat of recombinant DNA possibly be contained?"

That was the brand new bogey of the mid-'70s. At a famous self-regulating conference at the Asilomar conference center in California, genetic researchers debated the question and imposed rules (but not "relinquishment") on the lab work. The question was answered: the threat was handily contained, and it was not as much of a threat as feared anyway. Most people retrospectively applaud the original caution. Similar fears and debate now accompany the introduction of genetically modified foods and organisms. Maybe it's the same question rephrased, and it will keep being rephrased as long as biotech is making news. Can the threat of frankenfoods possibly be contained? Can the threat of gene-modified children possibly be contained? Can the threat of bioweapons possibly be contained? Can the threat of human life extension possibly be contained?

It won't be a new question until it reaches reflexivity: "Are GM humans really human?"

STEWART BRAND is founder of the Whole Earth Catalog, cofounder of The Well, cofounder of Global Business Network, and cofounder and president of The Long Now Foundation. He is the original editor of The Whole Earth Catalog, author of The Media Lab: Inventing the Future at MIT, How Buildings Learn, and The Clock of the Long Now: Time and Responsibility (MasterMinds Series).

It burned through the sixties, seventies and even eighties, until the answer was, Of course. It was replaced with a different, less emotionally fraught question: How can we make them think smarter/better/deeper?

The central issue is the social, not scientific, definition of "thinking". A generation of Western intellectuals who took their identity mainly from their intelligence has grown too old to ask the question with any conviction, and anyway, machines are all around them thinking up a storm. Machines don't yet think like Einstein, but then neither do most people, and we don't question their humanity on that account.

PAMELA McCORDUCK is the author or coauthor of seven books, among them Machines Who Think, and coauthor with Nancy Ramsey of The Futures of Women: Scenarios for the 21st Century.

People who interpret the Bible literally may believe that a man named Noah collected all species of Earthly organisms on his ark. However, scientists no longer ask this question. Let me put the problem in a modern perspective by considering what it means to have animals from every species on an ark. Consider that siphonapterologists (experts in fleas) recognize 1,830 varieties of fleas. Incredible as it may seem, there are around 300,000 species of beetles, making beetles one of the most diverse groups of organisms on earth. When biologist J.B.S. Haldane was asked by a religious person what message the Lord conveyed through His creations, he responded, "an inordinate fondness for beetles."

One of my favorite books on beetles is Ilkka Hanski's Dung Beetle Ecology, which points out that a large number (about 7,000 species) of the 300,000 species of beetles live off animal dung. Did Noah bring these species on the ark? If he did, did he concern himself with the fact that animal dung is often fiercely contested? On the African savanna, up to 4,000 beetles have been observed to converge on 500 grams of fresh elephant dung within 15 minutes after it is deposited.

Did Noah or his family also take kleptoparasitic beetles on the ark? These are dung beetles known to steal dung from others. Did Noah need to take into consideration that insect dung communities involve hundreds of complex ecological interactions between coprophagous flies and their parasites, insects, mites, and nematodes (an ecology probably difficult to manage on the ark)? In South Africa, more than 100 species of dung beetle occur together in a single cow pat. One gigantic species, Heliocopris dilloni, resides exclusively in elephant dung. A few species of beetles are so specialized that they live close to the source of dung, in the hairs near an animal's anus.

You get my point! It's quite a mystery as to what the Biblical authors meant when they called for Noah to take pairs of every animal on the Earth. Incidentally, scientists very roughly estimate the weight of the animals in the hypothetical ark to be 1,000 tons. You can use a value of 10 million for the number of species and assume an average mass of 100 grams. (Insects decrease this figure for average mass because of the huge number of insect species.) There would be some increase in mass if plants were used in the computation. (How would this change if extinct species were included?)
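The back-of-envelope arithmetic above can be checked in a few lines. The species count and the average mass are the rough assumptions stated in the text, not measured values:

```python
# Rough estimate of the total mass of one individual from every species,
# using the text's assumptions: ~10 million species, ~100 g average mass
# (the huge number of insect species pulls the average down).

NUM_SPECIES = 10_000_000   # assumed number of species
AVG_MASS_KG = 0.1          # assumed average mass per individual (100 grams)

total_mass_kg = NUM_SPECIES * AVG_MASS_KG
total_mass_tons = total_mass_kg / 1000  # metric tons

print(f"{total_mass_tons:,.0f} tons")  # prints "1,000 tons"
```

Taking a pair of each species would simply double the figure, and it would still be dwarfed by the logistical problems described above.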

Even if Noah took ten or twenty of each kind of mammal, very few would be alive after a thousand years because approximately 50 individuals of a single species are needed to sustain genetic health. Any small population is subject to extinction from disease, environmental changes, and genetic risks — the gradual accumulation of traits with small but harmful effects. There is also the additional problem of ensuring that both male and female offspring survive. Today, species are considered endangered well before their numbers drop below fifty. (Interestingly, there's a conflicting Biblical description in the story of Noah that indicated God wanted Noah to take "seven pairs of clean animals... and a pair of the animals that are not clean... and seven pairs of the birds of the air also.")

The Biblical flood would probably kill most of the plant life on Earth. Even if the waters were to recede, the resultant salt deposits would prevent plants from growing for many years. Of additional concern is the ecological effect of the numerous dead carcasses caused by the initial flood.

Various authors have noted that if the highest mountains on Earth were covered in forty days and nights, rain would have had to fall at the incredible rate of fifteen feet per hour, which would sink the ark. All of these cogitations lead me to believe that most scientifically trained people no longer ask whether an actual man named Noah collected all species of Earthly organisms on his ark. By extension, most scientifically trained people no longer ask if the Bible is literal truth.

CLIFF PICKOVER is the author of over 20 books, his latest being Wonders of Numbers: Adventures in Math, Mind, and Meaning. His web site, www.pickover.com, has received over 300,000 visits.

Recently, I was relaxing in my hotel room with a biography of Queen Elizabeth I. Her biographer noted that when Elizabeth R wasn't feeling quite herself she would call for a good "bleeding." I wondered about this practice which now seems so destructive and dangerous, especially given the hygienic possibilities of 16th-century Britain. Even for the rich and famous. But Elizabeth R survived numerous bleedings and, I imagine, lots of other strange treatments that were designed to make her look and feel like her very best self — by the standards of her time. (Did she have a great immune system? Probably.)

As dotty and unclean as "bleedings" now seem to a 21st century New Yorker, I realized with a jolt that Elizabeth was pampering, not punishing, herself — and I was going to be late for my reflexology appointment. I had scheduled a two-hour orgy of relaxation and detoxification at a spa.

I imagine that the ladies at court asked each other, in the manner of ladies who lunch, "Who does your bleeding?" — trading notes on price, ambiance and service, just as ladies today discuss their facials, massages and other personal treatments.

Some skeptics assume that the beauty and spa treatments of today are as ineffective or dangerous as those of the Renaissance period. In fact, there has been real progress. Germ theory helped — as did a host of other developments, including a fascination in the West with things Eastern. The kind of people who would once have gone in for bleeding now go in for things like reflexology and shiatsu. The urge to cleanse and detoxify the body has long been around, but we've actually figured out how to do it because we better understand the body.

The pampered are prettier and healthier today than were their 16th century European counterparts. I wonder whether, another thousand or so years into the future, we will all look prettier and healthier in ways that we can't yet fathom. This kind of query might seem irresponsible, shallow, even immoral — given the real health crises facing human beings in 2001. But the way we look has everything to do with how we live and how we think.

And I'm glad that bleedings are no longer the rage.

TRACY QUAN, a writer and working girl living in New York, is the author of Nancy Chan: Diary of a Manhattan Call Girl, a serial novel about the life and loves of Nancy Chan, a turn-of-the-millennium call girl. Excerpts from the novel, which began running in July 2000 in the online magazine Salon, have attracted a wide readership as well as the attention of The New York Times and other publications.

This question was long considered metaphysical, briefly became a scientific question, and has now disappeared again. Victorian intellectuals such as Frederic Myers, Henry Sidgwick and Edmund Gurney founded the Society for Psychical Research in 1882 partly because they realised that the dramatic claims of spiritualist mediums could be empirically tested. They hoped to prove "survival" and thus overturn the growing materialism of the day. Some, like Faraday, convinced themselves by experiment that the claims were false, and lost interest. Others, like Myers, devoted their entire lives to ultimately inconclusive research. The Society continues to this day, but survival research has all but ceased.

I suggest that no one asks the question any more because the answer seems too obvious. To most scientists it is obviously "No", while to most New Agers and religious people it is obviously "Yes". But perhaps we should. The answer may be obvious (it's "No" — I'm an unreligious scientist) but its implications for living our lives and dealing compassionately with other people are profound.

SUSAN BLACKMORE is a psychologist and ex-parapsychologist, who — when she found no evidence of psychic phenomena — turned her attention to why people believe in them. She is author of several skeptical books on the paranormal and, more recently, The Meme Machine.

It has become unfashionable to ask about the structure of reality without already having chosen a framework in which to ponder the answer, be it scientific, religious or sceptical. A sense of wonder at the sheer appearance of the world, moment by moment, has been lost.

To look at the world in wonder, and to stay with that sense of wonder without jumping straight past it, has become almost impossible for someone taking science seriously. The three dominant reactions are: to see science as the only way to get at the truth, at what is really real; to accept science but to postulate a more encompassing reality around or next to it, based on an existing religion; or to accept science as one useful approach among a plurality of approaches, none of which has anything to say about reality in any ultimate way.

The first reaction scales the sense of wonder down to wonder about the underlying mathematical equations of physics, their interpretation, and the complexity of the phenomena found at the level of chemistry and biology. The second reaction tends to allow wonder to occur only within the particular religious framework that is accepted on faith. The third reaction allows no room for wonder about reality, since there is no ultimate reality to wonder about.

Having lost our ability to ask what reality is like means having lost our innocence. The challenge is to regain a new form of innocence, by accepting all that we can learn from science, while simultaneously daring to ask 'what else is true?' In each period of history, the greatest philosophers struggled with the question of how to confront skepticism and cynicism, from Socrates and Descartes to Kant and Husserl in Europe, and Nagarjuna and many others in Asia and elsewhere. I hope that the question "What is Reality?" will reappear soon, as a viable intellectual question and at the same time as an invitation to try to put all our beliefs and frameworks on hold. Looking at reality without any filter may or may not be possible, but without at least trying to do so we will have given up too soon.

PIET HUT is professor of astrophysics at the Institute for Advanced Study, in Princeton. He is involved in the project of building GRAPEs, the world's fastest special-purpose computers, at Tokyo University, and he is also a founding member of the Kira Institute.

People in the Western world assume women have it all: education, job opportunities, birth control, love control, and financial freedom. But women still lack the essential freedom — equality — they lacked a century ago. Women are minorities in every sector of our government and economy, and women are still expected to raise families while at the same time earning incomes lower than those of comparable men. And in our culture, women are still depicted as whores, bimbos, or bloodsuckers by advertisers to sell everything from computers to cars.

Will it take another century or another millennium before the biological differences between men and women are no longer taken as a carte blanche justification for the unequal treatment of women?

SYLVIA PAULL is the founder of Gracenet (www.gracenet.net), which serves women in high-tech and business media.

My colleagues in the fashionable fields of string theory and quantum gravity advertise themselves as searching desperately for the "Theory of Everything", while their experimental colleagues are gravid with the "God Particle", the marvelous Higgson which is the somewhat misattributed source of all mass. (They are also after an understanding of the earliest few microseconds of the Big Bang.) As Bill Clinton might remark, it depends on what the meaning of "everything" is. To these savants, "everything" means a list of some two dozen numbers which are the parameters of the Standard Model. This is a set of equations which already exists and does describe very well what you and I would be willing to settle for as "everything". This is why, following Bob Laughlin, I make the distinction between "everything" and "every thing". Every thing that you and I have encountered in our real lives, or are likely to interact with in the future, is no longer outside of the realm of a physics which is transparent to us: relativity, special and general; electromagnetism; the quantum theory of ordinary, usually condensed, matter; and, for a few remote phenomena, hopefully rare here on earth, our almost equally cut-and-dried understanding of nuclear physics. [Two parenthetic remarks: 1) I don't mention statistical mechanics only because it is a powerful technique, not a body of facts; 2) our colleagues have done only a sloppy job so far of deriving nuclear physics from the Standard Model, but no one really doubts that they can.]

I am not arguing that the search for the meaning of those two dozen parameters isn't exciting, interesting, and worthwhile: yes, it's not boring to wonder why the electron is so much lighter than the proton, or why the proton is stable at least for another 35 powers of ten years, or whether quintessence exists. But learning why can have no real effect on our lives, spiritually inspiring as it would indeed be, even to a hardened old atheist like myself.

When I was learning physics, half a century ago, the motivation for much of what was being done was still "is quantum theory really right?" Not just QED, though the solution of that was important, but there were still great mysteries in the behavior of ordinary matter — like superconductivity, for instance. It was only some twenty years later that I woke up to the fact that the battle had been won, probably long before, and that my motivation was no longer to test the underlying equations and ideas, but to understand what is going on. Within the same few years, the molecular biology pioneers convinced us we needed no mysterious "life force" to bring all of life under the same umbrella. Revolutions in geology, in astrophysics, and the remarkable success of the Standard Model in sorting out the fundamental forces and fields, leave us in the enviable position I described above: given any problematic phenomenon, we know where to start, at least. And nothing uncovered in string theory or quantum gravity will make any difference to that starting point.

Is this Horgan's End of Science? Absolutely not. It's just that the most exciting frontier of science no longer lies at the somewhat sophomoric — or quasi-religious — level of the most "fundamental" questions of "what are we made of?" and the like; what needs to be asked is "how did all this delightful complexity arise from the stark simplicity of the fundamental theory?" We have the theory of every thing in any field of science you care to name, and that's about as far as it gets us. If you like, science is now almost universally at the "software" level; the fundamental physicists have given us all the hardware we need, but that doesn't solve the problem, in physics as in every other field. It's a different game, probably a much harder one in fact, as it has often been in the past; but the game is only begun.

PHILIP W. ANDERSON is a Nobel laureate physicist at Princeton and one of the leading theorists on superconductivity. He is the author of A Career in Theoretical Physics, and Economy as a Complex Evolving System.

A question no longer being asked is how to make the next step in the evolution of a democratic society. Until very recently it was widely understood that democracy was a project with many steps, whose goal was the eventual construction of a perfectly just and egalitarian society. But recently, with the well-deserved collapse of Marxism, it has begun to seem that the highest stage of civilization we humans can aspire to is global capitalism leavened by some version of a bureaucratic welfare state, all governed badly by an unwieldy and corrupt representative democracy. This is better than many of the alternatives, but it is hardly egalitarian and often unjust; those of us who care about these values must hope that human ingenuity is up to the task of inventing something still better.

It is proper that the nineteenth-century idea of utopia has finally been put to rest, for it was based on a paradox: any predetermined blueprint for an ideal society could only be imposed by force. It is now almost universally acknowledged that there is no workable alternative to the democratic ideal that governments get their authority by winning the consent of the governed. This means that if we are to change society, it must be by a process of evolution rather than revolution. But why should this mean that big changes are impossible? What is missing are new ideas, and a context in which to debate them.

There are at least four issues facing the future of the democratic project. First, while democracy in the world's most powerful country is perceived by many of its citizens as corrupted, there is little prospect for serious reform. The result is alienation so severe that around half of our citizens do not participate in politics. At what point, we may ask, will so few vote that the government of the United States ceases to have a valid claim to have won the consent of the governed? As the political and journalistic classes have largely lost the trust of the population, where will leadership to begin the reform that is so obviously needed come from?

A second point of crisis and opportunity is in the newly democratized states. In many of these countries intellectuals played a major role in the recent establishment of democracy. These people are not likely to go to sleep and let the World Bank tell them what democracy is.

The third opportunity is in Europe, where a rather successful integration of capitalism and democratic socialism has been achieved. These societies suffer much less from poverty and the other social and economic ills that appear so unsolvable in the US context. (And it is not coincidental that the major means of funding political campaigns in the US are illegal in most of Europe.) Walking the streets in Denmark or Holland it is possible to wonder what a democratic society that evolved beyond social democracy might look like. European integration may be only the first step towards a new kind of nation state which will give much of its sovereignty up to multinational entities, a kind of nation-as-local-government.

Another challenge for democracy is the spread of the bureaucratic mode of organization, which in most countries has taken over the administration of education, science, health and other vital areas of public interest. As anyone who works for a modern university or hospital can attest, bureaucratic organizations are inherently undemocratic. Debate amongst knowledgeable, responsible individuals is replaced by the management of perceptions and the manipulation of supposedly objective indices. As the politics of the academy begins to look more like nineteenth-century Russia than fifth-century BC Athens, we intellectuals need to do some serious work to invent more democratic modes of organization for ourselves and for others who work in the public interest.

Is it not then time we "third culture intellectuals" began to attack the problem of democracy, both in our workplaces and in our societies? Perhaps, with all of our independence, creativity, intelligence and edginess, we may find we really have something of value to contribute.

LEE SMOLIN is a theoretical physicist; professor of physics and member of the Center for Gravitational Physics and Geometry at Pennsylvania State University; author of The Life of The Cosmos.

When, every few years, you see a bite taken out of the sun or moon, you ought to remember just how frightening that question used to be. It became clockwork when the right viewpoint was eventually discovered by science (imagining yourself high above the north pole, looking at the shadows cast by the earth and the moon). But there was an intermediate stage of empirical knowledge, when the shaman discovered that the sixth full moon after a prior eclipse had a two-thirds chance of being associated with another eclipse. And so when the shaman told people to pray hard the night before, he was soon seen as being on speaking terms with whoever ran the heavens. This helped convert part-time shamans into full-time priests, supported by the community. This can be seen as the entry-level job for philosophers and scientists, who prize the discoveries they can pass on to the next generation, allowing us to see farther, always opening up new questions while retiring old ones. It's like climbing a mountain that keeps providing an ever better viewpoint.

WILLIAM H. CALVIN is a neurobiologist at the University of Washington, who writes about brains, evolution, and climate. His recent books are The Cerebral Code, How Brains Think, and (with the linguist Derek Bickerton) Lingua ex Machina.

Like mathematics, whose symbols can represent physical properties when applied to some scientific problem, God is a convenient symbol for nature, for the way the world works. Einstein's reaction of utter incredulity to the quantum theory, from its development in the late 1920s until his death in 1955, was echoed by colleagues who had participated in the early creation of the quantum revolution, which Richard Feynman had termed the most radical theory ever.

Well, does she?

The simplest example of what sure looks like God playing dice happens when you walk past a store window on a sunny day. Of course you are not just admiring your posture and checking your attire; you are probably watching the guy undressing the manikin, but that is another story.

So how do you see yourself, albeit dimly, while the manikin abuser sees you very clearly? Everyone knows that light is a stream of photons, here from the sun, some striking your nose and then reflected in all directions. We focus on two photons heading for the window. We'll need thousands to get a good picture, but two will do for a start. One penetrates the window and impacts the eye of the manikin dresser. The second is reflected from the store window and hits your eye: a fine picture of a good-looking pedestrian! What determines what the photons will do? The photons are identical...trust me. Philosophers of science assure us that identical experiments give identical results.

Not so!

The only rational conclusion would seem to be that she plays dice at each impact of the photon. Using a die with 10 faces, good enough for managing this bit of the world, numbers one to nine determine that the photon goes through; a ten, and the photon is reflected. It's random...a matter of probability.
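Lederman's ten-sided die lends itself to a toy simulation. The sketch below is only a bookkeeping of outcomes under his illustrative 10% reflection figure, not a quantum calculation; the variable names are mine:

```python
import random

rng = random.Random(42)   # fixed seed so the tally is reproducible
n = 100_000               # identical photons aimed at the window

# Each photon is an independent roll of the ten-sided die:
# faces 1-9 (probability 0.9) mean it goes through the glass,
# face 10 (probability 0.1) means it is reflected back at you.
reflected = sum(rng.random() < 0.1 for _ in range(n))
transmitted = n - reflected

print(f"transmitted: {transmitted}  reflected: {reflected}")
print(f"reflected fraction: {reflected / n:.4f}")  # hovers near 0.10
```

With enough photons the overall fraction settles at the die's 10%, yet no amount of bookkeeping predicts any single photon's fate.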

Dress this concept up in shiny mathematics and we have quantum science, which underlies physics, most of chemistry, and molecular biology. It now accounts for 43.7% of our GNP. (This is consistent with 87.1% of all numbers being made up.)

So what was wrong with Einstein and his friends? Probabilistic nature, which applies to the world of atoms and smaller, has implications that are bizarre, spooky, weird. Granting that it works, Einstein could not accept it and hoped for a deeper explanation. Today, many really smart physicists are seeking a kinder, gentler formulation, but 99.3% of working physicists go along with the notion that she is one hell of a crap shooter.

LEON M. LEDERMAN, the director emeritus of Fermi National Accelerator Laboratory, has received the Wolf Prize in Physics (1982), and the Nobel Prize in Physics (1988). In 1993 he was awarded the Enrico Fermi Prize by President Clinton. He is the author of several books, including (with David Schramm) From Quarks to the Cosmos: Tools of Discovery, and (with Dick Teresi) The God Particle: If the Universe Is the Answer, What Is the Question?

Speculation about the possibility of a "plurality of worlds" goes back at least as far as Epicurus in the fourth century BC. Admittedly, Epicurus' definition of a "world" was closer to what we would currently regard as a solar system — but he imagined innumerable such spheres, each containing a system of planets, packed together. "There are," he declared, "infinite worlds both like and unlike this world [i.e., solar system] of ours."

The same question was subsequently considered by astronomers and philosophers over the course of many centuries; within the past half century, the idea of the existence of other planets has become a science fiction staple, and people have started looking for evidence of extraterrestrial civilizations. Like Epicurus, many people have concluded that there must be other solar systems out there, consisting of planets orbiting other stars. But they didn't know for sure. Today, we do.

The first "extrasolar" planet (i.e., beyond the solar system) was found in 1995 by two Swiss astronomers, and since then another 48 planets have been found orbiting dozens of nearby sun-like stars. This figure is subject to change, because planets are now being found at an average rate of more than one per month; more planets are now known to exist outside the solar system than within it. Furthermore, one star is known to have at least two planets, and another has at least three. We can, in other words, now draw maps of alien solar systems — maps that were previously restricted to the realm of science fiction.

The discovery that there are other planets out there has not, however, caused as much of a fuss as might have been expected, for two reasons. First, decades of Star Trek and its ilk meant that the existence of other worlds was assumed; the discovery has merely confirmed what has lately become a widely held belief. And second, none of these new planets has actually been seen. Instead, their existence has been inferred through the tiny wobbles that they cause in the motion of their parent stars. The first picture of an extrasolar planet is, however, probably just a few years away. Like the first picture of Earth from space, it is likely to become an iconic image that once again redefines the way we as humans think about our place in the universe.

Incidentally, none of these new planets has a name yet, because the International Astronomical Union, the body which handles astronomical naming, has yet to rule on the matter. But the two astronomers who found the first extrasolar planet have proposed a name for it anyway, and one that seems highly appropriate: they think it should be called Epicurus.

TOM STANDAGE is technology correspondent at The Economist in London and author of the books The Victorian Internet and The Neptune File, both of which draw parallels between episodes in the history of science and modern events. He has also written for the Daily Telegraph, The Guardian, Prospect, and Wired. He is married and lives in Greenwich, England, just down the hill from the Old Royal Observatory.

In the not-so-distant future we will have to revive the question of who and what we are.

We will have to, not because we choose to do so, but because the question will be posed to us by Others or Otherness: Aliens, robots, mutants, and the like.

New phenomena like information-processing artifacts, computational life forms, bioengineered humans, upgraded animals and pen-pals in space will force us to consider ourselves and our situation: Why didn't we end hunger on this planet? Are we evil or just idiots? Why do we really want to rebuild ourselves? Do we regain our soul when the TV set is turned off?

It's going to happen like this: We build a robot. It turns towards us and says: "If technology is the answer then what was the question?"

TOR NORRETRANDERS is a science writer, consultant, lecturer and organizer based in Copenhagen, Denmark. He was recently appointed Chairman of the National Council for Competency.

The reason this question is dead is that traditional Skinnerianism, which viewed rats and pigeons as furry and feathered black boxes guided by simple principles of reinforcement and punishment, is theoretically kaput. It can no longer account for the extraordinary things that animals do, spontaneously.

Thus, we now know that animals form cognitive maps of their environment, compute numerosities, represent the relationships among individuals in their social group, and most recently, have some understanding of what others know.

The question for the future, then, is not "Do animals think?" but "What precisely do they think about, and to what extent do their thoughts differ from our own?"

MARC D. HAUSER is an evolutionary psychologist and a professor at Harvard University, where he is a fellow of the Mind, Brain, and Behavior Program. He is a professor in the departments of Anthropology and Psychology, as well as the Program in Neurosciences. He is the author of The Evolution of Communication and Wild Minds: What Animals Think.

This question was at the heart of heated debates for decades during the century just past, and it was at the ambitious origins of the Artificial Intelligence adventure. It had profound implications not only for science, but also for philosophy, technology, business, and even theology. In the 1950s and '60s, for instance, it made a lot of sense to ask whether one day a computer could defeat an international chess master, and it was assumed that if one did, we would learn a great deal about how human thought works. Today we know that building such a machine is possible, but the reach of the issue has dramatically changed. Nowadays not many would claim that building such a computer actually informs us in an interesting way about what human thought is and how it works. Beyond the (indeed impressive) engineering achievements involved in building such machines, we got from them little (if any) insight into the mysteries, variability, depth, plasticity, and richness of human thought. Today, the question "do computers think?" has become completely uninteresting; it has disappeared from the cutting-edge academic circus, remaining mainly in the realm of pop science, Hollywood films, and video games.

And why did it disappear?

It disappeared because it was finally answered with categorical responses that stopped generating fruitful work. The question became useless and uninspiring ... boring. What is interesting, however, is that the question disappeared with no single definitive answer! It disappeared with categorical "of-course-yes" and "of-course-not" responses. Of-course-yes people, in general motivated by a technological goal (i.e., "to design and to build something") and implicitly based on functionalist views, built their arguments on the amazing ongoing improvement in the design and development of hardware and software technologies. For them the question became uninteresting because it no longer helped to design or to build anything. What became relevant for of-course-yes people was mainly the engineering challenge, that is, to actually design and build computers capable of processing algorithms in a faster, cheaper, and more flexible manner. (And also, for many, what became relevant was to build computers for human activities and purposes.) Now when of-course-yes people are presented with serious problems that challenge their view, they provide the usual response: "just wait until we get better computers" (once known as the wait-until-the-year-2000 argument). On the other hand there were the of-course-not people, who were mainly motivated by a scientific task (i.e., "to describe, explain, and predict a phenomenon"), which was not necessarily technology-driven. They mainly dealt with real-time and real-world biological, psychological, and cultural realities. These people understood that most of the arrogant predictions made by Artificial Intelligence researchers in the '60s and '70s hadn't been realized because of fundamental theoretical problems, not because of a lack of powerful enough machines. They observed that even the simplest everyday aspects of human thought, such as common sense, sense of humor, spontaneous metaphorical thought, and the use of counterfactuals in natural language, to mention only a few, were in fact intractable for the most sophisticated machines. They also observed that the brain and the other bodily mechanisms that make thinking and the mind possible were, by several orders of magnitude, more complex than was thought during the heyday of Artificial Intelligence. Thus for of-course-not people the question whether computers think became uninteresting, since it didn't provide insights into a genuine understanding of the intricacies of human thinking. Today the question is dead. The answer has become a matter of faith.

RAFAEL E. NÚÑEZ, currently at the Department of Psychology of the University of Freiburg, is a research associate of the University of California, Berkeley. He has worked for more than a decade on the foundations of embodied cognition, with special research into the nature and origin of mathematical concepts. He has published in several languages in a variety of areas, and has taught in leading academic institutions in Europe, the United States, and South America. He is the author (with George Lakoff) of Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being; and co-editor (with Walter Freeman) of Reclaiming Cognition: The Primacy of Action, Intention, and Emotion.

Heavens, I take a couple of days off from reading email over Christmas and when I next log on there are already over twenty responses to the Edge question! Maybe the question we should all be asking is "Doesn't anyone take time off any more?"

As to questions that have disappeared, as a mathematician I hope we've seen the last of the question "Why can't girls/women do math?" With women now outnumbering men in mathematics programs in most US colleges and universities, that old wives' tale (old husbands' tale?) has surely been consigned to the garbage can. Some recent research at Brown University confirmed what most of us had long suspected: that past (and any remaining present) performance differences were based on cultural stereotyping. (The researchers found that women students performed worse at math tests when they were given in a mixed gender class than when no men were present. No communication was necessary to cause the difference. The sheer presence of men was enough.)

While I was enjoying my offline Christmas, Roger Schank already raised the other big math question: Why do we make such a big deal of math performance and of teaching math to everyone in the first place? But with the educational math wars still raging, I doubt we've seen the last of that one!

KEITH DEVLIN is a mathematician, writer, and broadcaster living in California. His latest book is The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip.

I am going to take slight liberty with your question. With the publication a decade ago of Francis Fukuyama's justly acclaimed article "The End of History?", many pundits and non-pundits assumed that historical forces and trends had been spent. The era of the "isms" was at an end; liberal democracy, market forces, and globalization had triumphed; the heavy weight of the past was attenuating around the globe.

At the start of 2001, we are no longer asking "Has History Ended?" History seems all too alive. The events of Seattle challenged the globalization behemoth; the world is no longer beating a path to internet startups; Communist and fascist revivals have emerged in several countries; the historical legacies in areas like the Balkans and the Middle East are as vivid as ever; and, as I noted in response to last year's question, much of Africa is at war. As if to remind us of our naivete, Fidel Castro and Saddam Hussein have been in "office" as long as most Americans can remember. If George II is ignorant of this history, he is likely to see it repeated.

HOWARD GARDNER, the major proponent of the theory of multiple intelligences, is Professor of Education at Harvard University and author of numerous books including The Mind's New Science and Extraordinary Minds: Portraits of Four Exceptional Individuals.

The first of two great periods of open debate about human nature was the Enlightenment. Hobbes claimed that the brutishness of man in a state of nature called for a governmental Leviathan. Rousseau's concept of the noble savage led him to call for the abolition of property and the predominance of the "general will." Adam Smith justified market capitalism by saying that it is not the generosity but the self-interest of the baker that leads him to give us bread. Madison justified constitutional government by saying that if people were angels, no government would be necessary, and if angels were to govern people, no controls on government would be necessary. The young Marx's notion of a "species character" for creativity and self-expression led to "From each according to his ability"; his later belief that human nature is transformed throughout history justified revolutionary social change.

The second period was the 1960s and its immediate aftermath, when Enlightenment romanticism was revived. Here is an argument from the US Attorney General Ramsey Clark against criminal punishment: "Healthy, rational people will not injure others ... they will understand that the individual and his society are best served by conduct that does not inflict injury. ... Rehabilitated, an individual will not have the capacity — cannot bring himself — to injure another or take or destroy property." This is, of course, an empirical claim about human nature, with significant consequences for policy.

The discussion came to an end in the 1970s, when even the mildest non-romantic statements about human nature were met with angry denunciations and accusations of Nazism. At the century's turn we have an unprecedented wealth of data from social psychology, ethnography, behavioral economics, criminology, behavioral genetics, cognitive neuroscience, and so on, that could inform (though of course not dictate) policies in law, political decision-making, welfare, and so on. But they are seldom brought to bear on the issues. In part this is a good thing, because academics have been known to shoot off their mouths with half-baked or crackpot policy proposals. But since all policy decisions presuppose some hypothesis about human nature, wouldn't it make sense to bring the presuppositions into the open so they can be scrutinized in the light of our best data?

STEVEN PINKER is professor in the Department of Brain and Cognitive Sciences at MIT; director of the McDonnell-Pew Center for Cognitive Neuroscience at MIT; author of Language Learnability and Language Development, Learnability and Cognition, The Language Instinct, How the Mind Works, and Words and Rules.

There is a set of questions that ought to have disappeared, but — given human psychology — probably never will do: questions that seek reasons for patterns that in reality are due to chance.

Why is "one plus twelve" an anagram of "two plus eleven"? Why do the moon and the sun take up exactly the same areas in the sky as seen from Earth? Why did my friend telephone me just as I was going to telephone her? Whose face is it in the clouds?
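Humphrey's first example is, at least, easy to confirm mechanically; a quick Python check (the variable names are mine):

```python
from collections import Counter

a, b = "one plus twelve", "two plus eleven"

# An anagram means the same multiset of letters, spaces aside.
print(Counter(a.replace(" ", "")) == Counter(b.replace(" ", "")))  # True
print(1 + 12 == 2 + 11)  # True -- both phrases also evaluate to 13
```

Which is exactly the point: the check succeeds, and still means nothing.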

The truth is that not everything has a reason behind it. We should not assume there is someone or something to be blamed for every pattern that strikes us as significant.

But we have evolved to have what the psychologist Bartlett called an "effort after meaning". We have always done better to find meaning where there was none than to miss meaning where there was.

We're human. When we win the lottery by betting on the numbers of our birthday, the question Why? will spring up, no matter what.

NICHOLAS HUMPHREY is a theoretical psychologist at the Centre for Philosophy of Natural and Social Sciences, London School of Economics, and the author of Consciousness Regained, The Inner Eye, A History of the Mind, and Leaps of Faith: Science, Miracles, and the Search for Supernatural Consolation.

With the success of molecular biology in explaining the mechanisms of life, we have lost sight of the question one level up. We do not have any good answers at a more systems level to what it takes for something to be alive. We can list general necessities for a system to be alive, but we cannot predict whether a given configuration of molecules will be alive or not. As evidence that we really do not understand what it takes for something to be alive, we have not been able to build machines that are alive.

Everything else that we understand leads to machines that capitalize on that understanding — machines that fly, machines that run, machines that calculate, machines that make polymers, machines that communicate, machines that listen, machines that play games. We have not built any machines that live.

RODNEY A. BROOKS is director of the MIT Artificial Intelligence Laboratory and Chairman of iRobot Corporation. He builds robots.