EVENTS

So there you are in a fast food drive-through, waiting for the people in the car ahead of you to place their order. They do so and move on, and you slowly move up to the speaker. It takes about 10 seconds for this shifting of cars to take place. Haven’t you wondered what the person at the other end of the speaker is doing with that 10 seconds of downtime? Me, neither.

But the good folks at fast food corporate headquarters care. They worry that the employee may be goofing off, perhaps drinking some water, thinking about their children or friends, what to make for dinner later, perhaps even thinking about how they can climb out of this kind of dead-end job. Committed as the corporate suits are to maximizing employee productivity, they feel that those 10 seconds between cars could be put to better use than to allow idle thoughts. But how?

Enter the internet. What if you outsourced the order taking to someone at a central location, who then enters the order into a computer and sends it back via the internet to the store location where you are? The beauty of such a situation is that the person at the central location could be taking an order from another store somewhere else in the country in the 10 second interval that was previously wasted. Genius, no?

Sound bizarre? This is exactly what McDonald’s is experimenting with in California. The New York Times of April 11, 2006 reports on the way the process works and on one such call center worker, 17-year-old Julissa Vargas:

Ms. Vargas works not in a restaurant but in a busy call center in this town [Santa Maria], 150 miles from Los Angeles. She and as many as 35 others take orders remotely from 40 McDonald’s outlets around the country. The orders are then sent back to the restaurants by Internet, to be filled a few yards from where they were placed.

The people behind this setup expect it to save just a few seconds on each order. But that can add up to extra sales over the course of a busy day at the drive-through.

What is interesting about the way this story was reported is that it focused almost entirely on the technology that made such a thing possible, the possible benefits to customers (saving a few seconds on an order), and the extra profits to be made by the company. “Saving seconds to make millions,” as one call center executive put it.

There was no discussion of the possible long-term effects on the workers, or the fact that the seconds are taken from the workers’ lives while the millions are made by the corporation and its top executives and shareholders. This is typical of the way the media tend to underreport the perspective of the workers, especially low-paid ones.

Consider the conditions under which the call center employees work, all of which are reported as if they were nifty innovations in the business world, with no hint that there is anything negative about these practices:

Software tracks [Ms. Vargas’] productivity and speed, and every so often a red box pops up on her screen to test whether she is paying attention. She is expected to click on it within 1.75 seconds. In the break room, a computer screen lets employees know just how many minutes have elapsed since they left their workstations. . . The call-center system allows employees to be monitored and tracked much more closely than would be possible if they were in restaurants. Mr. King’s [the chief executive of the call center operation] computer screen gives him constant updates as to which workers are not meeting standards. “You’ve got to measure everything,” he said. “When fractions of seconds count, the environment needs to be controlled.”

This is the brave new world of worker exploitation. But in many ways it is not new. It is merely an updated version of what Charlie Chaplin satirized in his 1936 film Modern Times, where workers are given highly repetitious tasks and closely monitored so that they can be made to work faster and faster.

The call center workers are paid barely above minimum wage ($6.75 an hour) and do not get health benefits. But not to worry, there are perks! They do not have to wear uniforms, and “Ms. Vargas, who recently finished high school, wore jeans and a baggy white sweatshirt as she took orders last week.” And another plus, she says, is that after work “I don’t smell like hamburgers.”

Nowhere in the article was there any sense of whether it is a good thing to push workers to the limit like this, to squeeze every second out of their lives to increase corporate profit. Nowhere was there any sign that the journalist asked anyone whether it is ethical or even healthy for employees to be under such tight scrutiny, where literally every second of their work life is monitored. This is an example of how the media have internalized the notion that what is good for corporate interests must be good for everyone. Just because you work for a company, does that mean it owns every moment of your workday? Clearly, what these call centers want are people who are facsimiles of machines. They are not treating workers as human beings who have needs other than to earn money.

In many ways, all of us are complicit in the creation of this kind of awful working situation, by demanding low prices for goods and unreasonably quick service and not looking closely at how those prices are driven down and speed arrived at. How far are we willing to go in squeezing every bit of productivity from workers at the low end of the employment scale just so that the rest of us can save a few cents and a few seconds on a hamburger and also help push up corporate profits? As Voltaire said many years ago, “The comfort of the rich depends upon the abundance of the poor.”

The upbeat article did not totally ignore what the workers thought about this, but even here things were just peachy. “Ms. Vargas seems unfazed by her job, even though it involves being subjected to constant electronic scrutiny.” Yes, a 17-year-old woman straight out of high school may not be worn out by this routine yet. In fact, the novelty of the job may even be appealing. Working with computers may seem a step up from flipping hamburgers at the store. But I would like to hear what she says after a year of this kind of work.

This kind of story, with its cheery focus on the benefits accruing to everyone except the worker, and its callous disregard for what the long-term effects on the workers might be, infuriates me.

I have been fortunate to always work in jobs where I had a great deal of autonomy and where the luxury of just thinking and even day-dreaming are important parts of work, because that is how ideas get generated, plans are formulated, and programs are envisaged. But even if people’s jobs do not require much creativity, that is not a reason to deny them their moments of free thought.


So why do people sometimes end up plagiarizing? There are many reasons. Apart from the few who deliberately set out to do it because they are too lazy to do any actual writing of their own and lack any compunction about plagiarizing, I believe most end up doing it out of fear: they feel they are expected to say something that is interesting, original, and well written, usually (in the case of classroom assignments) about topics in which they have little or no interest.

This is a highly inflated and unrealistic expectation. I doubt that more than a few college or high school teachers really expect a high level of originality in response to classroom assignments, though that does not mean one should not try to achieve it.

A misplaced emphasis on originality creates unrealistic expectations that can cause insecure writers to plagiarize. I think that students who end up plagiarizing make the mistake of thinking that they must start by coming up with an original idea. Few people (let alone students who usually have very little writing experience) can reach such a high standard of originality. This is why they immediately hit a wall, lose a lot of time trying to get an idea, and in desperation end up plagiarizing by finding others who have said something interesting or relevant and “borrowing” their work. But since they want the reader to think that they have done the writing, they sometimes hide the borrowing by means of the ‘pointless paraphrase’ I wrote about previously.

Originality in ideas is often something that emerges from the writing and is not prior to the writing. A blindingly original idea may sometimes strike you, but this will be rare even for the most gifted and original writers. Instead, what you will usually find is a kind of incremental originality that emerges naturally out of the act of writing, where you are seemingly doing the mundane task of putting together a clear piece of writing using other people’s (cited) ideas. If you are writing about things that interest you, then you will be surprised to find that the very act of writing brings about something original, where you discover new relationships between old ideas.

As an instructor, what I am really looking for in student writing is something that just meets the single criterion of being well written. As for being interesting, all I want is to see that at least the writer is interested in the topic, and the evidence for that takes the form of the writer making the effort to try and convince the reader of the writer’s point of view. This seems like a modest goal but if followed can lead to pretty good writing.

In my experience, the most important thing is for writers to be interested enough in the topic that they want to say something about it, so the first condition for good writing is that the writer must care about the topic. The second condition is that the writer cares enough about it to want to make the reader care too. Once these two factors are in place, originality (to a greater or lesser degree) follows almost automatically from them.

It took me a long time to understand this. I had never written much in the earlier stages of my career (apart from scientific papers) because I was waiting for great new ideas to strike me, ideas that never came. But there came a time when I felt that a topic I cared a lot about (the nature of science) was one in which the point of view I held was not being articulated clearly enough by others. I began writing about it, not because I had an original idea, but because I felt a need to synthesize the ideas of many others into a simpler, more clearly articulated, position that I felt was missing from the discussion. In the process of creating that synthesis, some papers and my first book Quest for Truth: Scientific Progress and Religious Beliefs emerged. What turned out to be original (at least slightly) in them was the application of the ideas of certain classical philosophers and historians of science to the contemporary science-religion debate, something that I had not had in mind when I started writing. That feature emerged from the writing.

My second book The Achievement Gap in US Education: Canaries in the Mine followed that same pattern. I was very concerned about what I felt were great misunderstandings about the causes of the achievement gap between black and white students in the US and how to deal with it. I felt that my experience and interests in science, education, politics, and learning theory put me in a good position to bring ideas from these areas together. I did not have anything really original in mind when I started writing, but whatever is original in the book emerged from the act of writing, the attempt to create a synthesis.

The same applies to these blog entries. I write about the things I care about, trying to make my point clear, without seeking to be original. After all, who can come up with original ideas five times per week? But very often I find that I have written things that I had not thought about prior to the writing.

To be continued. . .

POST SCRIPT: Is there no end to the deception?

One of the amazing things about the current administration is how brazen it is about misleading the public. The latest is that President Bush rushed to declare that “We have found [Iraq’s] weapons of mass destruction” in the form of mobile biological weapons laboratories, even while some intelligence investigators were finding that there was nothing to that charge.

The defense being offered by the administration’s spokespersons that these negative findings had not reached the president makes no sense. Before making a serious charge, it is the President and his staff’s responsibility to check what information is being gathered and processed. To shoot off his mouth when there was no urgency to do so is to be irresponsible at best and deceitful at worst.

Kevin Drum of Washington Monthly is maintaining a list of the more egregious examples of things the administration knew were not true or for which there were serious doubts, but went ahead and declared them as ‘facts’ anyway, to justify decisions that they had already made about attacking Iraq.

He is up to #8 and there is no reason to think that the list will not keep growing.


In an interview, Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” explains why he uses that term (thanks to onegoodmove):

I think I use the term radical rather loosely, just for emphasis. If you describe yourself as “Atheist,” some people will say, “Don’t you mean ‘Agnostic’?” I have to reply that I really do mean Atheist. I really do not believe that there is a god – in fact I am convinced that there is not a god (a subtle difference). I see not a shred of evidence to suggest that there is one. It’s easier to say that I am a radical Atheist, just to signal that I really mean it, have thought about it a great deal, and that it’s an opinion I hold seriously…

People will then often say “But surely it’s better to remain an Agnostic just in case?” This, to me, suggests such a level of silliness and muddle that I usually edge out of the conversation rather than get sucked into it. (If it turns out that I’ve been wrong all along, and there is in fact a god, and if it further turned out that this kind of legalistic, cross-your-fingers-behind-your-back, Clintonian hair-splitting impressed him, then I think I would choose not to worship him anyway.) . . .

And making the move from Agnosticism to Atheism takes, I think, much more commitment to intellectual effort than most people are ready to put in. (italics in original)

I think Adams is exactly right. When I tell people that I am an atheist, they also tend to suggest that surely I must really mean that I am an agnostic. (See here for an earlier discussion of the distinction between the two terms.) After all, how can I be sure that there is no god? In that purely logical sense they are right, of course. You cannot prove a negative, so there is always the chance not only that a god exists but that, if you take radical clerics Pat Robertson and Jerry Falwell seriously, he has a petty, spiteful, vengeful, and cruel personality.

When I say that I am an atheist, I am not making that assertion based on logical or evidentiary proofs of non-existence. It is that I have been convinced that the case for no god is far stronger than the case for god. It is the same reasoning that makes me convinced that quantum mechanics is the theory to use for understanding sub-atomic phenomena, or that natural selection is the theory to be preferred for understanding the diversity of life. There is always the possibility that these theories are ‘wrong’ in some sense and will be superseded by other theories, but those theories will have to have convincing evidence in their favor.

If, on the other hand, I ask myself what evidence there is for the existence of a god, I come up empty. All I have are the assurances of clergy and assertions in certain books. I have no personal experience of it and there is no scientific evidence for it.

Of course, as long time readers of this blog are aware, I used to be quite religious for most of my life, even an ordained lay preacher of the Methodist Church. How could I have switched? It turns out that my experience is remarkably similar to that of Adams, who describes why he switched from Christianity to atheism.

As a teenager I was a committed Christian. It was in my background. I used to work for the school chapel in fact. Then one day when I was about eighteen I was walking down the street when I heard a street evangelist and, dutifully, stopped to listen. As I listened it began to be borne in on me that he was talking complete nonsense, and that I had better have a bit of a think about it.

I’ve put that a bit glibly. When I say I realized he was talking nonsense, what I mean is this. In the years I’d spent learning History, Physics, Latin, Math, I’d learnt (the hard way) something about standards of argument, standards of proof, standards of logic, etc. In fact we had just been learning how to spot the different types of logical fallacy, and it suddenly became apparent to me that these standards simply didn’t seem to apply in religious matters. In religious education we were asked to listen respectfully to arguments which, if they had been put forward in support of a view of, say, why the Corn Laws came to be abolished when they were, would have been laughed at as silly and childish and – in terms of logic and proof -just plain wrong. Why was this?
. . .
I was already familiar with and (I’m afraid) accepting of, the view that you couldn’t apply the logic of physics to religion, that they were dealing with different types of ‘truth’. (I now think this is baloney, but to continue…) What astonished me, however, was the realization that the arguments in favor of religious ideas were so feeble and silly next to the robust arguments of something as interpretative and opinionated as history. In fact they were embarrassingly childish. They were never subject to the kind of outright challenge which was the normal stock in trade of any other area of intellectual endeavor whatsoever. Why not? Because they wouldn’t stand up to it.
. . .
Sometime around my early thirties I stumbled upon evolutionary biology, particularly in the form of Richard Dawkins’s books The Selfish Gene and then The Blind Watchmaker and suddenly (on, I think the second reading of The Selfish Gene) it all fell into place. It was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I’d take the awe of understanding over the awe of ignorance any day.

What Adams is describing is the conversion experience that I described earlier, when suddenly switching your perspective seems to make everything fall into place and make sense.

For me, like Adams, I realized that I was applying completely different standards for religious beliefs than I was for every other aspect of my life. And I could not explain why I should do so. Once I jettisoned the need for that kind of distinction, atheism just naturally emerged as the preferred explanation. Belief in a god required much more explaining away of inconvenient facts than not believing in a god.

POST SCRIPT: The Gospel According to Judas

There was a time in my life when I would have been all a-twitter over the discovery of a new manuscript that sheds a dramatically different light on the standard Gospel story of Jesus and Judas. I would have wondered how it affected my view of Jesus and god and my faith.

Now this kind of news strikes me as an interesting curiosity, but one that does not affect my life or thinking at all. Strange.


Just last week, it was reported that twenty-one Ohio University engineering graduates had plagiarized their master’s theses. Why would they do that?

I think it is rare that people deliberately set out to use other people’s words and ideas while hiding the source. Timothy Noah in his Chatterbox column has a good article in Slate where he points to Harvard’s guidelines to students which state that unintentional plagiarism is a frequent culprit:

Most often . . . the plagiarist has started out with good intentions but hasn’t left enough time to do the reading and thinking that the assignment requires, has become desperate, and just wants the whole thing done with. At this point, in one common scenario, the student gets careless while taking notes on a source or incorporating notes into a draft, so the source’s words and ideas blur into those of the student.

But lack of intent is not a valid defense against the charge of plagiarism. That has not prevented even eminent scholars like Doris Kearns Goodwin from trying to invoke it. But as Noah writes, the American Historical Association’s (AHA) and the Organization of American Historians’ (OAH) statement on plagiarism is quite clear on this point:

The plagiarist’s standard defense-that he or she was misled by hastily taken and imperfect notes-is plausible only in the context of a wider tolerance of shoddy work. . . . Faced with charges of failing to acknowledge dependence on certain sources, a historian usually pleads that the lapse was inadvertent. This excuse will be easily disposed of if scholars take seriously the injunction to check their manuscripts against the underlying texts prior to publication.

Noah cites many authorities that say that citing the source does not always absolve you from the charge of plagiarism either.

Here’s the MLA Guide:

Presenting an author’s exact wording without marking it as a quotation is plagiarism, even if you cite the source [italics Chatterbox’s].

Here’s the AHA and the OAH:

Plagiarism includes more subtle and perhaps more pernicious abuses than simply expropriating the exact wording of another author without attribution. Plagiarism also includes the limited borrowing, without attribution, of another person’s distinctive and significant research findings, hypotheses, theories, rhetorical strategies, or interpretations, or an extended borrowing even with attribution [italics Chatterbox’s].

Noah gives an example of this. In the original FDR, My Boss, the author Grace Tully writes:

Near the end of the dinner Missy arose from her chair to tell me she felt ill and very tired. I urged her to excuse herself and go upstairs to bed but she insisted she would stay until the Boss left. He did so about 9:30 and within a few minutes Missy suddenly wavered and fell to the floor unconscious.

Doris Kearns Goodwin in her book No Ordinary Time writes:

Near the end of the dinner, Grace Tully recalled, Missy arose from her chair, saying she felt ill and very tired. Tully urged her to excuse herself and retire to her room, but she insisted on staying until the president left. He did so at 9:30 p.m. and, moments later, Missy let out a piercing scream, wavered and fell to the floor unconscious.

Is this plagiarism? After all, she cites the original author in the text itself, and the wording has been changed slightly. Yes, plagiarism has occurred says Noah, citing Harvard’s guidelines:

If your own sentences follow the source so closely in idea and sentence structure that the result is really closer to quotation than to paraphrase . . . you are plagiarizing, even if you have cited the source [italics Chatterbox’s].

The whole point of a paraphrase is to make a point more clearly, to emphasize or clarify something that may be hidden or obscure in the original text. Russ Hunt gives a good example of the wrongful use of the paraphrase, which he takes from Northwestern University’s website The Writing Place:

Original

But Frida’s outlook was vastly different from that of the Surrealists. Her art was not the product of a disillusioned European culture searching for an escape from the limits of logic by plumbing the subconscious. Instead, her fantasy was a product of her temperament, life, and place; it was a way of coming to terms with reality, not of passing beyond reality into another realm. Hayden Herrera, Frida: A Biography of Frida Kahlo (258)

Paraphrase

As Herrera explains, Frida’s surrealistic vision was unlike that of the European Surrealists. While their art grew out of their disenchantment with society and their desire to explore the subconscious mind as a refuge from rational thinking, Frida’s vision was an outgrowth of her own personality and life experiences in Mexico. She used her surrealistic images to understand better her actual life, not to create a dreamworld (258).

As Hunt says:

What is clearest about this is that the writer of the second paragraph has no motive for rephrasing the passage other than to put it into different words. Had she really needed the entire passage as part of an argument or explanation she was offering, she would have been far better advised to quote it directly. The paraphrase neither clarifies nor renders newly pointed; it’s merely designed to demonstrate to a sceptical reader that the writer actually understands the phrases she is using in her text.

I think that this kind of common excuse, that the authors did not know they were plagiarizing because they had used the ‘pointless paraphrase’ or because they cited the source, is disingenuous. While they may not have been aware that this kind of paraphrasing technically does constitute plagiarism, it is hard to imagine that the perpetrators were not aware that they were doing something wrong.

The lesson, as I see it, is to always prefer the direct quote with citation to the ‘pointless paraphrase.’ Changing wording here and there purely for the sake of thinking that doing so makes the passage one’s own should be avoided.

POST SCRIPT: Discussing controversial ideas

Chris Weigold, who is a reader of this blog and also a Resident Assistant in one of Case’s dorms, has invited me to a free-wheeling discussion about some controversial propositions that I have discussed previously in my blog as well as those that I will probably address in the future, such as:

Should military service be mandatory for all citizens?

Should everyone be required to work in a service-oriented job for two years?

Is torture warranted in some situations?

Why shouldn’t Iran be allowed to become a nuclear power?

Should hospitals be allowed to refuse to keep a patient on life-support if the patient cannot pay?

Is patriotism a bad thing?

Are atheists more moral than religious people?

Why is killing innocent people in war not considered wrong?

If we can experiment on non-human animals, why not on humans?

How do people decide which religion is right?

or any other topic that people might raise.

The discussion takes place in the Clarke Tower lobby from 8:00-9:30pm on Wednesday, April 12, 2006. All are welcome.


I believe that V for Vendetta will go down in film history as a classic of political cinema. Just as Dr. Strangelove captured the zeitgeist of the cold war, this film does it for the perpetual war on terrorism.

The claim that this film is so significant may sound a little strange, considering that the film’s premise is based on a comic book series written two decades ago and set in a futuristic Britain. Let me explain why I think that this is something well worth seeing.


I don’t normally post on the weekends but last night I saw the film V for Vendetta and it blew me away. It is a brilliant political thriller with disturbing parallels to what is currently going on in the US. It kept me completely absorbed.

I’ll write more about it next week but this is just to urge people to see it before it ends its run.

Several well-preserved skeletons of the fossil fish were uncovered in sediments of former stream beds in the Canadian Arctic, 600 miles from the North Pole, it is being reported on Thursday in the journal Nature. The skeletons have the fins and scales and other attributes of a giant fish, four to nine feet long.

But on closer examination, scientists found telling anatomical traits of a transitional creature, a fish that is still a fish but exhibiting changes that anticipate the emergence of land animals – a predecessor thus of amphibians, reptiles and dinosaurs, mammals and eventually humans. . .

The scientists described evidence in the forward fins of limbs in the making. There are the beginnings of digits, proto-wrists, elbows and shoulders. The fish also had a flat skull resembling a crocodile’s, a neck, ribs and other parts that were similar to four-legged land animals known as tetrapods. . .

Embedded in the pectoral fins were bones that compare to the upper arm, forearm and primitive parts of the hand of land-living animals. The scientists said the joints of the fins appeared to be capable of functioning for movement on land, a case of a fish improvising with its evolved anatomy. In all likelihood, they said, Tiktaalik flexed its proto-limbs primarily on the floor of streams and may have pulled itself up on the shore for brief stretches.

In their journal report, the scientists concluded that Tiktaalik is an intermediate between the fish Panderichthys, which lived 385 million years ago, and early tetrapods.

For those of us who have long accepted natural selection and evolution as the theoretical prism through which to understand how the diversity of life came about, this discovery comes as a welcome, but not revolutionary, development since it seems to be one more confirmation of a major theory.

But what of those people who reject evolution and think that each species was an act of special creation? Should they treat this new discovery as a counter-example to their model and thus lead to its rejection?

Early (‘naïve’) versions of falsificationist theories of scientific development would argue that they should. In that model, advocated by philosopher of science Karl Popper, while no number of confirming instances can prove a theory right, a single counterexample can prove a theory wrong and warrant its elimination from the scientific canon.

Some people will argue that Tiktaalik is just that kind of counterexample and that it should serve as the death knell of creationism. Michael J. Novacek, a paleontologist at the American Museum of Natural History in Manhattan is quoted as saying: “We’ve got Archaeopteryx, an early whale that lived on land and now this animal showing the transition from fish to tetrapod. What more do we need from the fossil record to show that the creationists are flatly wrong?”

But he misunderstands how these arguments work, because the naïve falsificationist model, while having an appealing intellectual simplicity, was soon shown not to describe how science actually progresses. It turns out that there are many ways in which a theory can survive a single, or even several, counter-examples. I expect that the reaction to the Tiktaalik discovery will provide us with a ringside seat, in real time, to see these defenses brought out.

Committed creationists will take one of two tacks. One argument is to assert that this new fossil is “really” just a fish or “really” just a land animal, thus forcing it into an existing category and leaving the ‘gap’ unfilled. For example, this is how the creationist website Answers in Genesis dismisses the claim that the earlier Archaeopteryx was a transitional form between reptile and bird, saying: “Archaeopteryx was genuine. . . as shown by anatomical studies and close analysis of the fossil slab. It was a true bird, not a ‘missing link’.”

This problem is inevitable because of the way we classify things, requiring that they fit into discrete and identifiable boxes that can be labeled and treated as distinct categories. But in reality we are dealing with a continuum, and we have to choose how to label each item; the pressure is to put it into a pre-existing box rather than create a new one. For example, is the newly discovered fossil a fish? Or a land animal? It is actually neither, but the way we structure our evolutionary scheme and our language seems to pressure us into making that kind of choice.

If the attempt to put the new discovery into a pre-existing category does not work and a fossil is widely accepted as being transitional, creationists can take a different tack and argue that now there are new gaps that require new features and no fossils exist that have them. This is what has been done in the past with previous finds. As the New York Times article points out:

One creationist Web site (emporium.turnpike.net/C/cs/evid1.htm) declares that “there are no transitional forms,” adding: “For example, not a single fossil with part fins part feet has been found. And this is true between every major plant and animal kind.”

I think it was Ernst Mayr who said that trying to satisfy people who demand missing links before they will accept that evolution has occurred is to pursue a chimera, because when you find a link to fill the ‘gap’ between two species, your opponents now have two new ‘gaps’ that they can ask you to fill, where they had only one before. After all, if your theory predicts that species A evolved from species Z, and you discover a transitional fossil M, critics can now ask where the transitional fossils are between A and M and between M and Z. And so on.

The new fossil Tiktaalik is called a fishapod because it is both fish and tetrapod. But creationists can soon begin to ask “What about the gap between fish and fishapod? Or between fishapod and tetrapod?” So paradoxically, the more intermediate fossils that are found, the more ‘gaps’ in the fossil record that will be created.

It is a little like colors. We know that the colors of pigments form a continuum, going smoothly from one to another depending on how the primary pigments are mixed. But historically, and out of a need to be able to communicate with one another, we have classified them into distinct colors: red, green, blue, yellow, brown, black, etc., suggesting that colors are discrete and separable. When we encounter colors that do not fit into existing categories, we have ‘gaps.’ We sometimes invent new names like magenta, cyan, taupe, and beige to describe these transitional colors. But that simply creates new gaps. What is the shade of color between magenta and red? Between magenta and blue? We can never eliminate all the gaps in a continuum. Trying to do so only creates an increasing number of gaps.

Scientific evidence alone can never prove which theory is true. But what it can do is convince some people that one side is more plausible or fruitful or useful than the other. Thomas Kuhn, in his book The Structure of Scientific Revolutions, argues that switching allegiance from one scientific theory to another is often not a reasoned decision but something akin to a conversion experience. But the scientific conversion experience, unlike Paul’s conversion to Christianity on the road to Damascus, may not have a dramatic cause.

Scientific conversions often occur because of incremental changes in the available evidence. As new evidence comes to light, holding on to an old theory becomes more difficult. At some point, one reaches a tipping point that causes one to completely switch one’s perspective. It is like the way a see-saw or teeter-totter works when it is near balance. Even a small change can cause it to change its orientation completely. While the process leading up to the change may be gradual, the change itself is sudden.

So it is with individuals and scientific theories. A single new element added to the mix can cause the switch. Suddenly you see things in a new light and cannot imagine why the old idea ever appealed to you, and defending it seems pointless. And when that happens, you go back and re-evaluate all that you had believed before in the light of this new viewpoint.

Tiktaalik will not sway those who are deeply committed to creationist ideas because they have many ways with which to justify retaining their beliefs. But somewhere, there are people who are reading about it and saying to themselves, “Hmmm. . . You know, maybe I should look into this evolution business more closely.” And it is those people who will eventually switch.


Some time ago, a commenter to this blog sent me a private email expressing this view:

Have you ever noticed people say “Do you believe in evolution?” just as you would ask “Do you believe in God?” as if both schools of thought have equal footing? I respect others’ religious beliefs as I realize I cannot disprove God just as anyone cannot prove His existence, but given the amount of evidence for evolution, shouldn’t we insist on asking “Do you accept evolution?”

It may just be semantics, but I feel that the latter wording carries an implied affirmation just as “Do you accept that 2+2=4?” carries a different meaning than “Do you believe 2+2=4?”

I guess the point I’m trying to make is that by stating something as a belief, it opens the debate to the possibility that something is untrue. While this may be fine for discussions of religion, shouldn’t the scientific community be more insistent that a theory well supported by physical evidence, such as evolution, is not up for debate?

It’s a good point. To be fair, scientists themselves are partly responsible for this confusion because we also say that we “believe” in this or that scientific theory, and one cannot blame the general public for picking up on that terminology. What is important to realize, though, is that the word ‘believe’ is being used by scientists in a different sense from the way it is used in religion.

The late and deeply lamented Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” puts it nicely (thanks to onegoodmove):

First of all I do not believe-that-there-is-not-a-god. I don’t see what belief has got to do with it. I believe or don’t believe my four-year old daughter when she tells me that she didn’t make that mess on the floor. I believe in justice and fair play (though I don’t know exactly how we achieve them, other than by continually trying against all possible odds of success). I also believe that England should enter the European Monetary Union. I am not remotely enough of an economist to argue the issue vigorously with someone who is, but what little I do know, reinforced with a hefty dollop of gut feeling, strongly suggests to me that it’s the right course. I could very easily turn out to be wrong, and I know that. These seem to me to be legitimate uses for the word believe. As a carapace for the protection of irrational notions from legitimate questions, however, I think that the word has a lot of mischief to answer for. So, I do not believe-that-there-is-no-god. I am, however, convinced that there is no god, which is a totally different stance. . .

There is such a thing as the burden of proof, and in the case of god, as in the case of the composition of the moon, this has shifted radically. God used to be the best explanation we’d got, and we’ve now got vastly better ones. God is no longer an explanation of anything, but has instead become something that would itself need an insurmountable amount of explaining…

Well, in history, even though the understanding of events, of cause and effect, is a matter of interpretation, and even though interpretation is in many ways a matter of opinion, nevertheless those opinions and interpretations are honed to within an inch of their lives in the withering crossfire of argument and counterargument, and those that are still standing are then subjected to a whole new round of challenges of fact and logic from the next generation of historians – and so on. All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.

When someone says that they believe in god, they mean that they believe something in the absence of, or even counter to, the evidence, and even to reason and logic. When scientists say they believe a particular theory, they mean that they believe that theory because of the evidence and reason and logic, and the more evidence there is, and the better the reasoning behind it, the more strongly they believe it. Scientists use the word ‘belief’ the way Adams says, as a kind of synonym for ‘convinced,’ because we know that no scientific theory can be proven with 100% certainty and so we have to accept things even in the face of this remaining doubt. But the word ‘believe’ definitely does not carry the same meaning in the two contexts.

This can lead to just the kind of confusion the commenter warns about, but what can we do about it? One option, as was suggested, is to use different words, with scientists avoiding the word ‘believe.’ I would have agreed with this some years ago, but I am becoming increasingly doubtful that we can control the way words are used.

For example, there was a time when I used to be on a crusade against the erroneous use of the word ‘unique’. The Oxford English Dictionary is pretty clear about what this word means:

Of which there is only one; one and no other; single, sole, solitary.

That is or forms the only one of its kind; having no like or equal; standing alone in comparison with others, freq. by reason of superior excellence; unequalled, unparalleled, unrivalled.

Formed or consisting of one or a single thing

A thing of which there is only one example, copy, or specimen; esp., in early use, a coin or medal of this class.

A thing, fact, or circumstance which by reason of exceptional or special qualities stands alone and is without equal or parallel in its kind.

It means, in short, one of a kind, so something is either unique or it is not. There are no in-betweens. And yet, you often find people saying things like “quite unique” or “very unique” or “almost unique.” I used to try and correct this but have given up. Clearly, people in general think that unique means something like “rare” and I don’t know that we can ever change this even if we all become annoying pedants, correcting people all the time, avoided at parties because of our pursuit of linguistic purity.

Some battles, such as the one over the word unique, are, I believe, lost for good, and I expect the OED to add the new meaning of ‘rare’ some time in the near future. It is a pity because we would then be left with no word carrying the unique meaning of ‘unique,’ but there we are. We would have to say something like ‘absolutely unique’ to convey the meaning once reserved for just ‘unique.’

In science too we often use words with precise operational meanings while the same words are used in everyday language with much looser meanings. For example, in physics the word ‘velocity’ is defined operationally by the situation when you have an object moving along a ruler and, at two points along its motion, you take ruler readings and clock readings, where the clocks are located at the points where the ruler readings are taken, and have been previously synchronized. Then the velocity of the moving object is the number you get when you take the difference between the two ruler readings and divide by the difference between the two clock readings.
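The operational recipe above reduces to a simple ratio. If the two ruler readings are $x_1$ and $x_2$, and the synchronized clocks at those two points read $t_1$ and $t_2$ as the object passes, then

```latex
v = \frac{x_2 - x_1}{t_2 - t_1}
```

Speed, by contrast, is just the magnitude of this quantity, and acceleration is the analogous ratio of velocity differences to time differences, which is why using the three words interchangeably muddles three distinct operational definitions.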

Most people (especially sports commentators) have no idea of this precise meaning when they use the word velocity in everyday language, and often use the word synonymously with speed or, even worse, acceleration, although those concepts have different operational meanings. Even students who have taken physics courses find it hard to use the word in its strict operational sense.

Take, for another example, the word ‘theory’. By now, as a result of the intelligent design creationism (IDC) controversy, everyone should be aware that the way this word is used by scientists is quite different from its everyday use. In science, a theory is a powerful explanatory construct. Science depends crucially on its theories because they are the things that give it its predictive power. “There is nothing so practical as a good theory,” as Kurt Lewin famously said. But in everyday language, the word theory is used to mean ‘not factual,’ something that can be false or ignored.

I don’t think that we can solve this problem by putting constraints on how words can be used. English is a wonderful language precisely because it grows and evolves and trying to fix the meanings of words too rigidly would perhaps be stultifying. I now think that we need to change our tactics.

I think that once the meanings of words enter mainstream consciousness we will not be successful in trying to restrict their meanings beyond their generally accepted usage. What we can do is to make people aware that all words have varying meanings depending on the context, and that scientific and other academic contexts tend to require very precise meanings in order to minimize ambiguity.

Heidi Cool has a nice entry where she talks about the importance of being aware of when you are using specialized vocabulary, and the need to know your audience when speaking or writing, so that some of the pitfalls arising from the imprecise use of words can be avoided.

We have to realize though that despite our best efforts, we can never be sure that the meaning that we intend to convey by our words is the same as the meaning constructed in the minds of the reader or listener. Words always contain an inherent ambiguity that allows the ideas expressed by them to be interpreted differently.

I used to be surprised when people read the stuff I wrote and got a different meaning than I had intended. No longer. I now realize that there is always some residual ambiguity in words that cannot be overcome. While we can and should strive for maximum precision, we can never be totally unambiguous.

I agree with philosopher Karl Popper when he said, “It is impossible to speak in such a way that you cannot be misunderstood.” The best we can hope for is to have some sort or negotiated consensus on the meanings of ideas.


In the previous post on this topic, I discussed the plagiarism case of Ben Domenech, who had lifted entire chunks of other people’s writings and had passed them off as his own.

How could he have done such a thing? After all, all high school and college students get the standard lecture on plagiarism and why it is bad. And even though Domenech was home schooled, it seems unlikely that he thought this was acceptable practice. When he was confronted with his plagiarism, his defense was not one of surprise that it was considered wrong but merely that he had been ‘young’ when he did it or that he had got permission from the author to use their words or that the offending words had been inserted by his editors.

The cautionary lectures that students receive about plagiarism are usually cast in a moralistic way, that plagiarism is a form of stealing, that taking someone else’s words or ideas without proper attribution is as morally reprehensible as taking their money.

What is often overlooked in this kind of approach is that there are many other reasons why writers and academics cite other people’s works when appropriate. By focusing too much on this stealing aspect, we tend to not give students an important insight into how scholarship and research works.

Russ Hunt at St. Thomas University argues that writers cite others for a whole complex of reasons that have little to do with avoiding charges of plagiarism:

[P]ublished scholarly literature is full of examples of writers using the texts, words and ideas of others to serve their own immediate purposes. Here’s an example of the way two researchers opened their discussion of the context of their work in 1984:

To say that listeners attempt to construct points is not, however, to make clear just what sort of thing a ‘point’ actually is. Despite recent interest in the pragmatics of oral stories (Polanyi 1979, 1982; Robinson 1981), conversations (Schank et al. 1982), and narrative discourse generally (Prince 1983), definitions of point are hard to come by. Those that do exist are usually couched in negative terms: apparently it is easier to indicate what a point is not than to be clear about what it is. Perhaps the most memorable (negative) definition of point was that of Labov (1972: 366), who observed that a narrative without one is met with the “withering” rejoinder, “So what?” (Vipond & Hunt, 1984)

It is clear here that the motives of the writers do not include prevention of charges of plagiarism; moreover, it’s equally clear that they are not. . .attempting to “cite every piece of information that is not a) the result of your own research, or b) common knowledge.” What they are doing is more complex. The bouquet of citations offered in this paragraph is informing the reader that the writers know, and are comfortable with, the literature their article is addressing; they are moving to place their argument in an already existing written conversation about the pragmatics of stories; they are advertising to the readers of their article, likely to be interested in psychology or literature, that there is an area of inquiry — the sociology of discourse — that is relevant to studies in the psychology of literature; and they are establishing a tone of comfortable authority in that conversation by the acknowledgement of Labov’s contribution and by using his language –“withering” is picked out of Labov’s article because it is often cited as conveying the power of pointlessness to humiliate (I believe I speak with some authority for the authors’ motives, since I was one of them).

Scholars — writers generally — use citations for many things: they establish their own bona fides and currency, they advertise their alliances, they bring work to the attention of their reader, they assert ties of collegiality, they exemplify contending positions or define nuances of difference among competing theories or ideas. They do not use them to defend themselves against potential allegations of plagiarism.

The clearest difference between the way undergraduate students, writing essays, cite and quote and the way scholars do it in public is this: typically, the scholars are achieving something positive; the students are avoiding something negative. (my italics)

I think that Hunt has hit exactly the right note.

When you cite the works of others, you are strengthening your own argument because you are making them (and their allies) into your allies, and people who challenge what you say have to take on this entire army. When you cite reputable sources or credible authorities for facts or ideas, you become more credible because you are no longer alone and thus not easily dismissed, even if you personally are not famous or a recognized authority.

It seems like idiotic statements attributing natural events to supernatural causes are not restricted to Christian radical clerics like Pat Robertson. Some Sri Lankan Buddhist clergy are challenging him for the title of Religious Doofus.

Since Sri Lanka sits very close to the equator, the length of the day is the same all year round, not requiring the ‘spring-forward-fall-back’ biannual adjusting of the US. Sri Lankan time used to be 5.5 hours ahead of Universal Time (UT), but in 1996 the government made a one-time shift to 6.5 hours in order to have sunset arrive later and save energy. But the influential Buddhist clergy were not happy with the change, and as a compromise the clocks were adjusted again to just 6.0 hours ahead of UT. Now the government is thinking of going back to the original 5.5 hours.

Some of the country’s Buddhist clergy are rejoicing at the prospect of a change because they say Sri Lanka’s “old” time fitted better with their rituals.

They believe a decade living in the “wrong” time has upset the country’s natural order with terrible effect.

The Venerable Gnanawimala says the change moved the country to a spiritual plane 500 miles east of where it should be.

“After this change I feel that many troubles have been caused to Sri Lanka. Tsunamis and other natural disasters have been taking place,” he says.

This is what happens when you mix religion and the state. You now have to worry about what your actions are doing to the longitudinal coordinates of your nation’s spiritual plane.
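Curiously, the monk’s figure of 500 miles is about what the arithmetic gives, if you grant the premise that a clock offset displaces a spiritual plane the same way it displaces solar time. The half-hour difference between 6.0 and 5.5 hours ahead of UT corresponds to

```latex
\Delta\lambda = \frac{0.5\ \mathrm{h}}{24\ \mathrm{h}} \times 360^\circ = 7.5^\circ,
\qquad
d \approx 7.5 \times 69\ \mathrm{miles} \times \cos 7^\circ \approx 510\ \mathrm{miles}
```

using roughly 69 miles per degree of longitude at the equator, reduced slightly at Sri Lanka’s latitude of about 7 degrees north. The theology may be dubious, but the arithmetic is sound.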


Evan Hunter, who was the screenwriter on Alfred Hitchcock’s 1963 film The Birds, recalled an incident that occurred when he was discussing the screenplay with the director.

I don’t know if you recall the movie. There’s a scene where after this massive bird attack on the house Mitch, the male character, is asleep in a chair and Melanie hears something. She takes a flashlight and she goes up to investigate, and this leads to the big scene in the attic where all the birds attack her. I was telling [Hitchcock] about this scene and he was listening very intently, and then he said, “Let me see if I understand this correctly. There has been a massive attack on the house and they have boarded it up and Mitch is asleep and she hears a sound and she goes to investigate?” I said, “Well, yes,” and he said, “Is she daft? Why doesn’t she wake him up?”

I remembered this story when I was watching the film The Interpreter with Nicole Kidman and Sean Penn. The Kidman character accidentally overhears something at the UN that puts her life at risk. After she complains to government agent Penn that no one seems to be bothered about protecting her from harm, Penn puts her on round-the-clock surveillance. So then what does Kidman do? She sneaks around, giving the slip to the very people assigned to protect her, and refuses to tell Penn where she went, to whom she spoke, and about what, putting herself and others at risk; people even die because of her actions. Hitchcock would have said, “Is she daft?”

This is one of my pet peeves about films, where the female character insists on doing something incredibly stupid that puts her and other people at peril. Surely in this day and age we have gone beyond the stale plot device of otherwise smart women behaving stupidly in order to create drama? Surely writers have more imagination than that? Do directors really think that viewers won’t notice how absurd that is?

According to Hunter, Hitchcock was always exploring the motivations of characters, trying to make their actions plausible. Hunter says:

[Hitchcock] would ask surprising questions. I would be in the middle of telling the story so far and he would say, “Has she called her father yet?” I’d say, “What?” “The girl, has she called her father?” And I’d say, “No.” “Well, she’s been away from San Francisco overnight. Does he know where she is? Has she called to tell him she’s staying in this town?” I said, “No.” And he said, “Don’t you think she should call him?” I said, “Yes.” “You know it’s not a difficult thing to have a person pick up the phone.” Questions like that.

(Incidentally, the above link has three screenwriters, Arthur Laurents (Rope, 1948), Joseph Stefano (Psycho, 1960), and Evan Hunter, reminiscing about working with Hitchcock. It is a fascinating glimpse behind the scenes of how a great director envisages and sets about creating films. The last quote actually reads in the original: “Yes, you know it’s not a difficult thing to have a person pick up the phone.” I changed it because my version makes more sense, and the original is a verbatim transcript of a panel discussion, in which such punctuation errors can easily occur.)

More generally, I hate it when characters in films and books behave in ways that are unbelievable. The problem is not with an implausible premise, which is often necessary to create a central core for the story. I can even accept the violation of a few laws of physics. For example, I can accept the premise of Superman that a baby with super powers (but susceptible to kryptonite) arrives on Earth from another planet and is adopted by a family and needs to keep his identity secret. I can accept the premise of Batman that a millionaire like Bruce Wayne adopts a secret identity in order to fight crime.

What I cannot stand is when they and the other people act implausibly, when the stories built on this premise have logical holes that you can drive a Batmobile through. The Batmobile itself, for example, is a flashy vehicle, to say the least, easily picked out in traffic. And yet nobody in Gotham thinks of following it back to the Batcave to see who this mysterious hero is. Is the entire population of that city daft?

And how exactly is the Bat-Signal that the Police Commissioner lights up the sky with supposed to work? You don’t need a physics degree to realize that shining a light, however bright, into the sky is not going to create a sharp image there. And what if it’s daytime? And what if there are no clouds? (It’s been a long time since I read these comics. Maybe the later editions fixed these problems. But even as a child these things annoyed me.)

And don’t get me started on Spiderman going in and out of his apartment window in a building in the middle of a big city in broad daylight without anyone noticing.

As a fan of films, it really bugs me when filmmakers don’t take the trouble to write plots that make sense, and have characters who don’t behave the way that you would expect normal people to behave. How hard can it be to ensure this, especially when you have the budget to hire writers to create believable characters and a plausible storyline?

If any directors are reading this, I am willing to offer my services to identify and fix plot holes.

So please, no more daft women! No more ditzy damsels in distress! No more Perils of Pauline!

POST SCRIPT: CSA: Confederate States of America

I saw this film last week (see the post script to an earlier posting), just before it ended its very short run in Cleveland. It looks at what history would have been like if the south had won the civil war. Imagine, if you will, an America very much like what we have now except that owning black slaves is as commonplace as owning a dishwasher.

What was troubling is that although this is an imagined alternate history presented in a faux documentary format, much of it is plausible based on what we have now. Most disturbing for me was seeing in the film racist images and acts that I took to be the screenwriter’s over-the-top imaginings of what might have happened in this alternate history, and then finding out that they actually happened in the real history.

Although the film is a clever satire in the style of This is Spinal Tap, I could not really laugh because the topic itself is so appalling. It is easy to laugh at the preening and pretensions of a rock band. It is hard to laugh at people in shackles.