
“It’s insulting to imply that only a system of rewards and punishments can keep you a decent human being.”

Isaac Asimov was an extraordinary mind and spirit — the author of more than 400 science and science fiction books and a tireless advocate of space exploration, he also took great joy in the humanities (and once annotated Lord Byron’s epic poem “Don Juan”), championed humanism over religion, and celebrated the human spirit itself (he even wrote young Carl Sagan fan mail). Like many of the best science fiction writers, he was as exceptional at predicting the future as he was at illuminating some of the most timeless predicaments of the human condition. In a 1988 interview with Bill Moyers, found in Bill Moyers: A World of Ideas (public library) — the same remarkable tome that gave us philosopher Martha Nussbaum on how to live with our human fragility — Asimov explores several subjects that still stir enormous cultural concern and friction. With his characteristic eloquence and sensitivity to the various dimensions of these issues, he presages computer-powered lifelong learning and online education decades before it existed, weighs the question of how authors will make a living in a world of free information, bemoans the extant attempts of religious fundamentalism to drown out science and rational thought, and considers the role of science fiction as a beacon of the future.

The conversation begins with a discussion of Asimov’s passionate belief that when given the right tools, we can accomplish far more than what we can with the typical offerings of formal education:

MOYERS: Do you think we can educate ourselves, that any one of us, at any time, can be educated in any subject that strikes our fancy?

ASIMOV: The key words here are “that strikes our fancy.” There are some things that simply don’t strike my fancy, and I doubt that I can force myself to be educated in them. On the other hand, when there’s a subject I’m ferociously interested in, then it is easy for me to learn about it. I take it in gladly and cheerfully…

[What’s exciting is] the actual process of broadening yourself, of knowing there’s now a little extra facet of the universe you know about and can think about and can understand. It seems to me that when it’s time to die, there would be a certain pleasure in thinking that you had utilized your life well, learned as much as you could, gathered in as much as possible of the universe, and enjoyed it. There’s only this one universe and only this one lifetime to try to grasp it. And while it is inconceivable that anyone can grasp more than a tiny portion of it, at least you can do that much. What a tragedy just to pass through and get nothing out of it.

MOYERS: When I learn something new — and it happens every day — I feel a little more at home in this universe, a little more comfortable in the nest. I’m afraid that by the time I begin to feel really at home, it’ll all be over.

ASIMOV: I used to worry about that. I said, “I’m gradually managing to cram more and more things into my mind. I’ve got this beautiful mind, and it’s going to die, and it’ll all be gone.” And then I thought, “No, not in my case. Every idea I’ve ever had I’ve written down, and it’s all there on paper. I won’t be gone. It’ll be there.”

Page from 'Charley Harper: An Illustrated Life'

Asimov then considers how computers would usher in this profound change in learning and paints the outline of a concept that Clay Shirky would detail and term “cognitive surplus” two decades later:

MOYERS: Is it possible that this passion for learning can be spread to ordinary folks out there? Can we have a revolution in learning?

ASIMOV: Yes, I think not only that we can but that we must. As computers take over more and more of the work that human beings shouldn’t be doing in the first place — because it doesn’t utilize their brains, it stifles and bores them to death — there’s going to be nothing left for human beings to do but the more creative types of endeavor. The only way we can indulge in the more creative types of endeavor is to have brains that aim at that from the start.

You can’t take a human being and put him to work at a job that underuses the brain and keep him working at it for decades and decades, and then say, “Well, that job isn’t there, go do something more creative.” You have beaten the creativity out of him. But if from the start children are educated into appreciating their own creativity, then probably almost all of us can be creative. In the olden days, very few people could read and write. Literacy was a very novel sort of thing, and it was felt that most people just didn’t have it in them. But with mass education, it turned out that most people could be taught to read and write. In the same way, once we have computer outlets in every home, each of them hooked up to enormous libraries, where you can ask any question and be given answers, you can look up something you’re interested in knowing, however silly it might seem to someone else.

Today, what people call learning is forced on you. Everyone is forced to learn the same thing on the same day at the same speed in class. But everyone is different. For some, class goes too fast, for some too slow, for some in the wrong direction. But give everyone a chance, in addition to school, to follow up their own bent from the start, to find out about whatever they’re interested in by looking it up in their own homes, at their own speed, in their own time, and everyone will enjoy learning.

Later, in agreeing with Moyers that this revolution in learning isn’t merely for the young, Asimov adds:

That’s another trouble with education as we now have it. People think of education as something that they can finish. And what’s more, when they finish, it’s a rite of passage. You’re finished with school. You’re no more a child, and therefore anything that reminds you of school — reading books, having ideas, asking questions — that’s kid’s stuff. Now you’re an adult, you don’t do that sort of thing anymore…

Every kid knows the only reason he’s in school is because he’s a kid and little and weak, and if he manages to get out early, if he drops out, why he’s just a premature man.

Embroidered map of the infant Internet in 1983 by Debbie Millman

Speaking at a time when the Internet as we know it today was still an infant, and two decades before the golden age of online education, Asimov offers a remarkably prescient vision for how computer-powered public access to information would spark the very movement of lifelong learning that we’ve witnessed in the past decade:

You have everybody looking forward to no longer learning, and you make them ashamed afterward of going back to learning. If you have a system of education using computers, then anyone, any age, can learn by himself, can continue to be interested. If you enjoy learning, there’s no reason why you should stop at a given age. People don’t stop things they enjoy doing just because they reach a certain age. They don’t stop playing tennis just because they’ve turned forty. They don’t stop with sex just because they’ve turned forty. They keep it up as long as they can if they enjoy it, and learning will be the same thing. The trouble with learning is that most people don’t enjoy it because of the circumstances. Make it possible for them to enjoy learning, and they’ll keep it up.

When Moyers asks him to describe what such a teaching machine would look like — again, in 1988, when personal computers had only just begun to appear in homes — Asimov envisions a kind of Siri-like artificial intelligence, combined with the functionality of a discovery engine:

I suppose that one essential thing would be a screen on which you could display things… And you’ll have to have a keyboard on which you ask your questions, although ideally I would like to see one that could be activated by voice. You could actually talk to it, and perhaps it could talk to you too, and say, “I have something here that may interest you. Would you like to have me print it out for you?” And you’d say, “Well, what is it exactly?” And it would tell you, and you might say, “Oh all right, I’ll take a look at it.”

But one of his most prescient remarks actually has to do not with the mechanics of freely available information but with the ethics and economics of it. Long before our present conundrum of how to make online publishing both in the public interest and financially sustainable for publishers, Asimov shares with Moyers the all too familiar question he has been asking himself — “How do you arrange to pay the author for the use of the material?” — and addresses it with equal parts realism and idealism:

After all, if a person writes something, and this then becomes available to everybody, you deprive him of the economic reason for writing. A person like myself, if he was assured of a livelihood, might write anyway, just because he enjoyed it, but most people would want to do it in return for something. I imagine how they must have felt when free libraries were first instituted. “What? My book in a free library? Anyone can come in and read it for free?” Then you realize that there are some books that wouldn’t be sold at all if you didn’t have libraries.

(A century earlier, Schopenhauer had issued a much sterner admonition against the cultural malady of writing solely for material rewards.)

Painting of hell by William Blake from John Milton's 'Paradise Lost'

I’d like to think that people who are given a chance to learn facts and broaden their knowledge of the universe wouldn’t seek so avidly after mysticism.

[…]

It isn’t right to sell a person phony stock, and take money for it, and this is what mystics are doing. They’re selling people phony knowledge and taking money for it. Even if people feel good about it, I can well imagine that a person who really believes in astrology is going to have a feeling of security because he knows that this is a bad day, so he’ll stay at home, just as a guy who’s got phony stock may look at it and feel rich. But he still has phony stock, and the person who buys mysticism still has phony knowledge.

Science doesn’t purvey absolute truth. Science is a mechanism, a way of trying to improve your knowledge of nature. It’s a system for testing your thoughts against the universe and seeing whether they match. This works not just for the ordinary aspects of science, but for all of life.

MOYERS: You wrote a few years ago that the decline in America’s world power is in part brought about by our diminishing status as a world science leader. Why have we neglected science?

ASIMOV: Partly because of success. The most damaging statement that the United States has ever been subjected to is the phrase “Yankee know-how.” You get the feeling somehow that Americans — just by the fact that they’re American — are somehow smarter and more ingenious than other people, which really is not so. Actually, the phrase was first used in connection with the atomic bomb, which was invented and brought to fruition by a bunch of European refugees. That’s “Yankee know-how.”

MOYERS: There’s long been a bias in this country against science. When Benjamin Franklin was experimenting with the lightning rod, a lot of good folk said, “You don’t need a lightning rod. If you want to prevent lightning from striking, you just have to pray about it.”

ASIMOV: The bias against science is part of being a pioneer society. You somehow feel the city life is decadent. American history is full of fables of the noble virtuous farmer and the vicious city slicker. The city slicker is an automatic villain. Unfortunately, such stereotypes can do damage. A noble ignoramus is not necessarily what the country needs.

(What might Asimov, who in 1980 voiced fears that the fundamentalists coming into power with President Reagan would turn the country even more against science by demanding that biblical creationism be given an equal footing with evolution in the classroom, have said if he knew that a contemporary television station can edit out Neil deGrasse Tyson’s mention of evolution?)

'The Expulsion of Adam and Eve from the Garden of Eden' by William Blake from John Milton's 'Paradise Lost'

But when Moyers asks the writer whether he considers himself an enemy of religion, Asimov answers in the negative and offers this beautifully thoughtful elaboration on the difference between the blind faith of religion and the critical thinking at the heart of science:

My objection to fundamentalism is not that they are fundamentalists but that essentially they want me to be a fundamentalist, too. Now, they may say that I believe evolution is true and I want everyone to believe that evolution is true. But I don’t want everyone to believe that evolution is true, I want them to study what we say about evolution and to decide for themselves. Fundamentalists say they want to treat creationism on an equal basis. But they can’t. It’s not a science. You can teach creationism in churches and in courses on religion. They would be horrified if I were to suggest that in churches they should teach secular humanism as an alternate way of looking at the universe or evolution as an alternate way of considering how life may have started. In the church they teach only what they believe, and rightly so, I suppose. But on the other hand, in schools, in science courses, we’ve got to teach what scientists think is the way the universe works.

That is really the glory of science — that science is tentative, that it is not certain, that it is subject to change. What is really disgraceful is to have a set of beliefs that you think is absolute and has been so from the start and can’t change, where you simply won’t listen to evidence. You say, “If the evidence agrees with me, it’s not necessary, and if it doesn’t agree with me, it’s false.” This is the legendary remark of Omar when they captured Alexandria and asked him what to do with the library. He said, “If the books agree with the Koran, they are not necessary and may be burned. If they disagree with the Koran, they are pernicious and must be burned.” Well, there are still these Omar-like thinkers who think all of knowledge will fit into one book called the Bible, and who refuse to allow it is possible ever to conceive of an error there. To my way of thinking, that is much more dangerous than a system of knowledge that is tentative and uncertain.

Riffing off the famous and rather ominous Dostoevsky line that “if God is dead, everything is permitted,” Asimov revisits the notion of intrinsic vs. extrinsic rewards — similarly to his earlier remark that good writing is motivated by intrinsic motives rather than external incentives, he argues that good-personhood can’t be steered by dogma but by one’s own conscience:

It’s insulting to imply that only a system of rewards and punishments can keep you a decent human being. Isn’t it conceivable a person wants to be a decent human being because that way he feels better?

I don’t believe that I’m ever going to heaven or hell. I think that when I die, there will be nothingness. That’s what I firmly believe. That’s not to mean that I have the impulse to go out and rob and steal and rape and everything else because I don’t fear punishment. For one thing, I fear worldly punishment. And for a second thing, I fear the punishment of my own conscience. I have a conscience. It doesn’t depend on religion. And I think that’s so with other people, too.

'The Rout of the Rebel Angels' by William Blake from John Milton's 'Paradise Lost'

He goes on to extend this conscience-driven behavior to the domain of science, which he argues is strongly motivated by morality and a generosity of spirit uncommon in most other disciplines, where ego consumes goodwill. (Mark Twain memorably argued that no domain was more susceptible to human egotism than religion.) Asimov offers a heartening example:

I think it’s amazing how many saints there have been among scientists. I’ll give you an example. In 1900, De Vries studied mutations. He found a patch of evening primrose of different types, and he studied how they inherited their characteristics. He worked out the laws of genetics. Two other guys worked out the laws of genetics at the same time, a guy called Karl Correns, who was a German, and Erich Tschermak von Seysenegg, who was an Austrian. All three worked out the laws of genetics in 1900, and having done so, all three looked through the literature, just to see what had been done before. All three discovered that in the 1860s Gregor Mendel had worked out the laws of genetics, and people hadn’t paid any attention then. All three reported their findings as confirmation of what Mendel had found. Not one of the three attempted to say that it was original with him. And you know what it meant. It meant that two of them, Correns and Tschermak von Seysenegg, lived in obscurity. De Vries is known only because he was also the first to work out the theory of mutations. But as far as discovering genetics is concerned, Mendel gets all the credit. They knew at the time that this would happen. That’s the sort of thing you just don’t find outside of science.

Moyers, in his typical perceptive fashion, then asks Asimov why, given how much the truth of science excites him, he is best-known for writing science fiction, and Asimov responds with equal insight and outlines the difference, both cultural and creative, between fiction in general and science fiction:

In serious fiction, fiction where the writer feels he’s accomplishing something besides simply amusing people — although there’s nothing wrong with simply amusing people — the writer is holding up a mirror to the human species, making it possible for you to understand people better because you’ve read the novel or story, and maybe making it possible for you to understand yourself better. That’s an important thing.

Now science fiction uses a different method. It works up an artificial society, one which doesn’t exist, or one that may possibly exist in the future, but not necessarily. And it portrays events against the background of this society in the hope that you will be able to see yourself in relation to the present society… That’s why I write science fiction — because it’s a way of writing fiction in a style that enables me to make points I can’t make otherwise.

Painting by Rowena Morrill

But perhaps the greatest benefit of science fiction, Moyers intimates and Asimov agrees, is its capacity to warm people up to changes that are inevitable but that seem inconceivable at the present time — after all, science fiction writers do have a remarkable record of getting the future right. Asimov continues:

Society is always changing, but the rate of change has been accelerating all through history for a variety of reasons. One, the change is cumulative. The very changes you make now make it easier to make further changes. Until the Industrial Revolution came along, people weren’t aware of change or a future. They assumed the future would be exactly like it had always been, just with different people… It was only with the coming of the Industrial Revolution that the rate of change became fast enough to be visible in a single lifetime. People were suddenly aware that not only were things changing, but that they would continue to change after they died. That was when science fiction came into being as opposed to fantasy and adventure tales. Because people knew that they would die before they could see the changes that would happen in the next century, they thought it would be nice to imagine what they might be.

As time goes on and the rate of change still continues to accelerate, it becomes more and more important to adjust what you do today to the fact of change in the future. It’s ridiculous to make your plans now on the assumption that things will continue as they are now. You have to assume that if something you’re doing is going to reach fruition in ten years, that in those ten years changes will take place, and perhaps what you’re doing will have no meaning then… Science fiction is important because it fights the natural notion that there’s something permanent about things the way they are right now.

Painting by William Blake from Dante's 'Divine Comedy'

Given that accepting impermanence doesn’t come easily to us, that stubborn resistance to progress and the inevitability of change is perhaps also what Asimov sees in the religious fundamentalism he condemns — dogma, after all, is based on the premise that truth is absolute and permanent, never mind that the cultural context is always changing. Though he doesn’t draw the link directly, in another part of the interview he revisits the problem with fundamentalism with words that illuminate the stark contrast between the cultural role of religion and that of science fiction:

Fundamentalists take a statement that made sense at the time it was made, and because they refuse to consider that the statement may not be an absolute, eternal truth, they continue following it under conditions where to do so is deadly.

Indeed, Asimov ends the conversation on a related note as he considers what it would take to transcend the intolerance that such fundamentalism breeds:

MOYERS: You’ve lived through much of this century. Have you ever known human beings to think with the perspective you’re calling on them to think with now?

ASIMOV: It’s perhaps not important that every human being think so. But how about the leaders and opinion-makers thinking so? Ordinary people might follow them. It would help if we didn’t have leaders who were thinking in exactly the opposite way, if we didn’t have people who were shouting hatred and suspicion of foreigners, if we didn’t have people who were shouting that it’s more important to be unfriendly than to be friendly, if we didn’t have people shouting that the people inside the country who don’t look exactly the way the rest of us look have something wrong with them. It’s almost not necessary for us to do good; it’s only necessary for us to stop doing evil, for goodness’ sake.


In 1962, Buckminster Fuller delivered a prophetic lecture at Southern Illinois University on the future of education aimed at “solving [educational] problems by design competence instead of by political reform.” It was eventually published as Education Automation: Comprehensive Learning for Emergent Humanity (public library) — a prescient vision for online education decades before the web as we know it, and half a century before the golden age of MOOCs, with elements of TED and Pandora mixed in.

Fuller begins by tracing his own rocky start in traditional education and his eventual disillusionment with the establishment:

I am a New Englander, and I entered Harvard immaturely. I was too puerilely in love with a special, romantic, mythical Harvard of my own conjuring — an Olympian world of super athletes and alluring, grown-up, worldly heroes. I was the fifth generation of a direct line of fathers and their sons attending Harvard College. I arrived there in 1913 before World War I and found myself primarily involved in phases of Harvard that were completely irrelevant to Harvard’s educational system. For instance, because I had been quarterback on a preparatory school team whose quarterbacks before me had frequently become quarterbacks of the Harvard football team, I had hoped that I too might follow that precedent, but I broke my knee, and that ambition was frustrated.

[…]

Though I had entered Harvard with honor grades I obtained only “good” to “passing” marks in my college work, which I adolescently looked upon as a chore done only to earn the right to live in the Harvard community. But above all, I was confronted with social problems of clubs and so forth.

[…]

The problems they generated were solved by the great [Greek] House system that was inaugurated after World War I. My father died when I was quite young, and though my family was relatively poor I had come to Harvard from a preparatory school for quite well-to-do families. I soon saw that I wasn’t going to be included in the clubs as I might have been if I had been very wealthy or had a father looking out for me, for much of the clubs’ membership was prearranged by the clubs’ graduate committees. I was shockingly surprised by the looming situation. I hadn’t anticipated these social developments. I suddenly saw a class system existing in Harvard of which I had never dreamed. I was not aware up to that moment that there was a social class system and that there were different grades of citizens. My thoughts had been idealistically democratic. Some people had good luck and others bad, but not because they were not equal. I considered myself about to be ostracized or compassionately tolerated by the boys I had grown up with. … I became panicky about that disintegration of my idealistic Harvard world, went on a pretended “lark,” cut classes, and was “fired.”

Out of college, I went to work and worked hard. In no time at all, reports went to Harvard that I was a good and able boy and that I really ought to go back to college; so Harvard took me back. However, I was now considered a social maverick, and I saw none of my old friends; it hurt too much. Again I cut classes, spent all my year’s allowance, and once more was “fired.” After my second “firing” I again worked very hard. If World War I hadn’t come along, I am sure the university would have taken me back again, and I am sure I would have been “fired” again. Each time I returned to Harvard I entered a world of gnawing apprehensions, not an educational institution, and that was the problem.

But Fuller, known for his public distaste for specialization, managed to get an education anyway, “in due and slow course” — one of “[his] own inquiring, experimenting, and self-disciplining.” In the thirty years following his “Harvard fiasco,” he had been invited as a “lecturer, critic, or experimental seminarist” to 106 universities around the world, including nine times at Princeton, eight at MIT, and four at Cornell.

Then, forty-seven years after he had been expelled from Harvard, the university’s Dean Bundy, at the time one of JFK’s White House advisors, invited him to return to Harvard as the Charles Eliot Norton Professor of Poetry — a prestigious and unusual position created to emphasize the value of cross-pollinating ideas from different fields, lending the concept of “poetry” the most expansive possible definition. The chair has also been held by such luminaries as Umberto Eco, T.S. Eliot, E.E. Cummings, Jorge Luis Borges, Italo Calvino, and Leonard Bernstein. Fuller, who wasn’t a “poet” in the traditional sense despite his poetic scientific revision of The Lord’s Prayer, writes:

The chair was founded because its donor felt that the university needed to bring in individuals who on their own initiative have long undertaken objective realizations reflecting the wisdom harvested by the educators, which realizations might tend to regenerate the vigor of the university world. Harvard fills this professorship with men* who are artists, playwrights, authors, architects, and poets. The word poet in this professorship of poetry is a very general term for a person who puts things together in an era of great specialization wherein most people are differentiating or “taking” things apart. Demonstrated capability in the integration of ideas is the general qualification for this professorship.

(* Alas, this isn’t merely a reflection of the era’s gender-biased language, wherein “men” really means people. “Men” really does mean men here, as Harvard didn’t appoint a female Norton poet until 1979 — Helen Gardner — and as of this writing, has only had two other women hold the position since its inception in 1925 — Nadine Gordimer in 1994–1995 and Linda Nochlin in 2003–2004.)

Fuller considers what made him qualified for the position:

By my own rules, I may not profess any special preoccupation or capability. I am a random element. … There is nothing even mildly extraordinary about me except that I think I am durable and inquisitive in a comprehensive pattern. I have learned much; but I don’t know very much; but what I have learned, I have learned by trial and error. And I have great confidence in the meager store of wisdom that I have secured.

He admonishes that traditional education incentivizes and measures precisely the opposite — not the ability to be “inquisitive in a comprehensive pattern,” but to memorize with a narrow focus, which in turn stifles innovation:

I am convinced that humanity is characterized by extraordinary love for its new life and yet has been misinforming its new life to such an extent that the new life is continually at a greater disadvantage than it would be if abandoned in the wilderness by the parents.

[…]

The kind of examination procedure that our science foundations and other science leaders have developed is one in which they explore to discover whether [a] capable student is able to unlearn everything he has learned, because experience has shown that that is what he is going to have to do if he is to become a front-rank scientist. The frontiers of science are such that almost every morning many of our hypotheses of yesterday are found inadequate or in error. So great is the frontier acceleration that now in a year of such events much of yesterday’s conceptioning becomes obsolete.

[…]

I am quite confident that humanity is born with its total intellectual capability already on inventory and that human beings do not add anything to any other human being in the way of faculties and capacities. What usually happens in the educational process is that the faculties are dulled, overloaded, stuffed and paralyzed, so that by the time that most people are mature they have lost use of many of their innate capabilities. My long-time hope is that we may soon begin to realize what we are doing and may alter the “education” process in such a way as only to help the new life to demonstrate some of its very powerful innate capabilities.

Writing several years before the historic moon landing, Fuller argues that while such cosmic feats might be admirable aspirations for scientific, technological, and cultural progress, “nothing is going to be quite so surprising or abrupt in the forward history of man as the forward evolution in the educational processes.” He goes on to present a prescient vision for the future of mobile, time-shifted, tele-commuted education:

Today we are extraordinarily mobile… Comprehensively, the world is going from a Newtonian static norm to an Einsteinian all-motion norm. That is the biggest thing that is happening at this moment in history. We are becoming “quick” and the graveyards of the dead become progressively less logical. [Educational planners] will have to be serving the children of the mobile people who really, in a sense, don’t have a base…

And yet, noting that the world’s population is increasing at exponential rates and greater and greater numbers of people will need to be educated, he prefaces his vision for the future of education with a cautionary note about the importance of remaining rooted in history and connected with the great minds who have come before. (Massimo Vignelli’s famous contention that “a designer without a sense of history is worth nothing” is equally true, after all, of any field.) Fuller writes:

The new life needs to be inspired with the realization that it has all kinds of new advantages that have been gained through great dedications of unknown, unsung heroes of intellectual exploration and great intuitively faithful integrities of men groping in the dark. Unless the new life is highly appreciative of those who have gone before, it won’t be able to take effective advantage of its heritage. It will not be as regenerated and inspired as it might be if it appreciated the comprehensive love invested in that heritage.

He goes on to outline the technological advances that would give our world’s “new life” access to universal education, describing a paradigm that presages the concept of network-based online education, combined with the high production value of TED talks:

I have taken photographs of my grandchildren looking at television. Without consideration of the “value,” the actual concentration of a child on the message which is coming to him is fabulous. They really “latch on.” Given the chance to get accurate, logical, and lucid information at the time when they want and need to get it, they will go after it and imbibe it in a most effective manner. I am quite certain that we are soon going to begin to do the following: At our universities we will take the men who are the faculty leaders in research or in teaching. We are not going to ask them to give the same lectures over and over each year from their curriculum cards, finding themselves confronted with another roomful of people and asking themselves, “What was it I said last year?” This is a routine which deadens the faculty member. We are going to select, instead, the people who are authorities on various subjects — the men who are most respected by other men within their respective departments and fields. They will give their basic lecture course just once to a group of human beings, including both the experts in their own subject and bright children and adults without special training in their field. This lecture will be recorded. . . . They will make moving picture footage of the lecture as well as hi-fi tape recording. Then the professor and his faculty associates will listen to this recording time and again.

“What you say is very good,” his associates may comment, “but we have heard you say it a little better at other times.” The professor then dubs in a better statement. Thus begins complete reworking of the tape, cleaned up, and cleaned up some more, as in the moving picture cutting, and new illustrative “footage” will be added on. The whole of a university department will work on improving the message and conceptioning of a picture for many months, sometimes for years. The graduate students who want to be present in the university and who also qualify to be with the men who have great powers and intellectual capability together with the faculty may spend a year getting a documentary ready. They will not even depend upon the diction of the original lecturer, because the diction of that person may be very inadequate to his really fundamental conceptioning and information, which should be superb. His knowledge may be very great, but he may be a poor lecturer because of poor speaking habits or false teeth. Another voice will take over the task of getting his exact words across. Others will gradually process the tape and moving picture footage, using communications specialists, psychologists, etc.

For instance, I am quite certain that some day we will take a subject such as Einstein’s Theory of Relativity, and with the “Einstein” of the subject and his colleagues working on it for a year, we will finally get it reduced down to what is “net” in the subject and enthusiastically approved by the “Einstein” who gave the original lecture. What is net will become communicated so well that any child can turn on a documentary device, a TV, and get the Einstein lucidity of thinking and get it quickly and firmly. I am quite sure that we are going to get research and development laboratories of education where the faculty will become producers of extraordinary moving-picture documentaries. That is going to be the big, new educational trend.

Noting that these “documentaries” will be distributed in various ways, Fuller even presages the feedback loop of content recommendation algorithms, envisioning a sort of Pandora of education:

There is a direct, fixed, wireless connection, an actual direct linkage to individuals; and it works in both directions. Therefore, the receiving individual can beam back, “I don’t like it.” He may and can say “yes” or “no.” This “yes” or “no” is the basis of a binary mathematical system, and immediately brings in the “language” of the modern electronic computers. With two-way TV, constant referendum of democracy will be manifest, and democracy will become the most practical form of industrial and space-age government by all people, for all people.

It will be possible not only for an individual to say, “I don’t like it,” on his two-way TV but he can also beam-dial (without having to know mathematics), “I want number so and so.” It is also possible with this kind of two-way TV linkage with individuals’ homes to send out many different programs simultaneously; in fact, as many as there are two-way beamed-up receiving sets and programs. It would be possible to have large central storages of documentaries — great libraries. A child could call for special program information locally over the TV set.

While Fuller acknowledges the “general baby-sitting function” of traditional schools and the value of these social experiences for kids, he argues that this new model of long-distance education also provides the additional benefit of greater capacity for solitary contemplation of materials. He points to Einstein, whom he had met a few years earlier, to illustrate his point:

Einstein, when he wanted to study, didn’t sit in the middle of a school room. That is probably the poorest place he could have gone to study. When an individual is really thinking, he is tremendously isolated. He may manage to isolate himself in Grand Central Station, but it is despite the environment rather than because of it. The place to study is not in a school room.

This, Fuller argues, wouldn’t threaten traditional universities but, rather, fortify them and amplify their power by allowing education to serve a broader purpose in human culture — not mere memorization, but a lens on how to live. He concludes optimistically:

Education will then be concerned primarily with exploring to discover not only more about the universe and its history but about what the universe is trying to do, about why man is part of it, and about how man can, and may, best function in universal evolution.

[…]

The universities are going to be wonderful places. Scholars will stay there for a long, long time — the rest of their lives — while they are developing more and more knowledge about the whole experience of man. All men will be going around the world in due process as everyday routine search and exploration, and the world experiencing patterning will be everywhere — all students from everywhere all over the world. That is all part of the new pattern that is rushing upon us.

Public domain photographs courtesy of the State Archives of North Carolina via Flickr Commons

Donating = Loving

Bringing you (ad-free) Brain Pickings takes hundreds of hours each month. If you find any joy and stimulation here, please consider becoming a Supporting Member with a recurring monthly donation of your choosing, between a cup of tea and a good dinner.

You can also become a one-time patron with a single donation in any amount.

Brain Pickings has a free weekly newsletter. It comes out on Sundays and offers the week’s best articles. Here’s what to expect. Like? Sign up.

One right prediction in any one body of work would be lucky, but this many right answers can’t be luck — clearly, something sets these people apart. Many of the greatest sci-fi writers also had serious scientific training: Isaac Asimov had a Ph.D. in biochemistry, and Arthur C. Clarke had degrees in math and physics; H.G. Wells had a degree in biology…

At its core, good science fiction must rest on good science…

How far can we see into the future? Well, it depends on what we’re looking for — Isaac Asimov said that when we look at stars or galaxies or DNA, we’re looking at simple things, things that follow nice, neat rules and equations; but when we look at human history, it’s chaotic and unpredictable, and our vision is limited. Science transforms the complex into the simple — that’s how we explain the chaos. Science is how we see farther, and science fiction is where we write down what we see.



Brain Pickings participates in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn commissions by linking to Amazon. In more human terms, this means that whenever you buy a book on Amazon from a link on here, I get a small percentage of its price. That helps support Brain Pickings by offsetting a fraction of what it takes to maintain the site, and is very much appreciated.