Friday, September 30, 2011

I read fiction in sequential order, starting with the front cover and progressing page by page until I reach the back cover. Why not jump to random places in the book as we read, as we do with computer programs or mathematical proofs? Because fiction represents life, and life is lived in sequential order, each day following the previous one, starting with the day of our birth and progressing until the day of our death.

Computer games also force us to sequentially progress from level to level, and we usually cannot progress to the next level until we have succeeded in the current level. Why can we not skip ahead? Is it because they're supposed to model life?

In fact, in most computer games the character keeps getting stronger and stronger as they accumulate experience. Are there no computer games in which the character, after a certain point, starts diminishing in strength and has to practice compensating for decreased abilities? For example, if they have to throw objects at a target, then beyond some level a random perturbation might be added to their aim, making them shoot not so straight and miss the target with increasing probability. I wonder how unpopular such a game might be.
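Such a mechanic is easy to prototype. Here is a toy sketch in Python (all names and parameters are my own invention, not taken from any actual game): up to a peak level the character's aim is perfect, and past the peak a random perturbation of growing magnitude is added to each throw, so the hit rate declines.

```python
import random

def throw_lands_on_target(level, peak=10, spread_per_level=0.15,
                          target_radius=1.0):
    """Toy model: past a peak level, each throw gets a random
    perturbation whose magnitude grows with every further level."""
    # The perturbation is zero until the peak, then grows linearly.
    wobble = max(0, level - peak) * spread_per_level
    # Distance of the throw from the target center, arbitrary units.
    offset = abs(random.gauss(0.0, wobble)) if wobble > 0 else 0.0
    return offset <= target_radius

def hit_rate(level, trials=10000):
    """Estimate the probability of hitting the target at a given level."""
    return sum(throw_lands_on_target(level) for _ in range(trials)) / trials
```

At or below the peak level `hit_rate` returns 1.0; beyond it, the estimated hit rate drops steadily, which is exactly the "diminishing abilities" regime the post wonders about.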

Thursday, September 29, 2011

On Tuesday MSR had a panel about privacy. Alessandro Acquisti asked a probing question: "Why is it that people care so much about privacy?" He was not referring to people protecting their social security or credit card number, but to the near-universally expressed desire to keep one's private life away from the glare of public eyes. He suggested, without further developing it, that there was an issue of control: when someone has information about you, it gives them the means to control you.

That reminded me of Exodus chapter 3, when Moses asks God for his name. I once heard a commentary explain that, in some way, by giving his name to him, God lets Moses have some power over him. (As today is Rosh Hashanah, it also brings to mind the story of the creation, with Adam naming the creatures of the earth, thus also signaling his power over them.) Today I found a whole Wikipedia article on "True name", about the power of knowing somebody's true name, with mentions of the German folk tale "Rumpelstiltskin" and of many other old references.

The instinct to avoid revealing personal information goes far back in time, and is more deeply ingrained, more primal than the mere avoidance of manipulative marketing ploys!

Wednesday, September 28, 2011

Yesterday there was a celebration of the 20th anniversary of Microsoft Research. MSR New England is actually only 3 years old - but those events were happening on the same day at every MSR site. Here are some highlights.

The enforcer. There was a dense schedule of 8-minute talks, enforced by Christian Borgs. Madhu Sudan's presentation ended with: "I am sticking to the time limit because Christian is a little bit taller than me."

The setting. The talks were professionally filmed and the speakers were bathed in projector lights. At one point I saw Christian, as he was looking around for questions, raising his hand to shield his eyes from the light. That kind of setup, I think, makes it difficult to engage the audience. Every series of talks was preceded by a short movie that was basically a commercial for MSR, and there was music as each speaker came forward, as though they were presenters of a TV show. Weird! A couple of researchers slipped right into the role as if it was natural for them: "Hi, everybody!", cheerfully shouted Kate Crawford into the microphone as she arrived.

Tuesday, September 27, 2011

Yesterday Brown had a mandatory training in ethics for students funded by NSF. By coincidence, in the last few days I have been reading a book by Kierkegaard that also talks about ethics. It says: "The ethical as such is the universal, and as the universal it applies to everyone, which may be expressed from another point of view by saying that it applies every instant." He then proposes the Greek hero as a model that is intelligible for us -- Agamemnon sacrificing his daughter Iphigenia to do his higher duty: "the tragic hero renounces himself in order to express the universal". Perhaps there is a microscopic version of this whenever someone chooses to do what they recognize as their duty even if it is at significant personal cost.

K. then moves on to examine the behavior of Abraham getting ready to sacrifice his son Isaac. It is beyond heroic and becomes a lived paradox. Abraham cannot talk about it because no one would understand him. They would think he is out of his mind. He can only go forward in solitude, fear and trembling. From what I understood so far, K. finds that Abraham's crazy mixture of fatherly love, resignation to do as he is asked, and faith in a miracle, is the ultimate ethical behavior. But I bet that Abraham would not have fared well in Brown's ethical training session!

There is one obvious difference, as I have experienced, between ethical judgments among the French and the Americans: how to react when witnessing actions against the rules or against the common good. In the US, the natural ethical reaction (courageously ignoring the risk of retaliation against oneself) is to warn the proper authorities, so as to help maintain order and justice by having the offender found out and properly dealt with. In France, the natural ethical reaction (courageously ignoring the risk of personally becoming a target of the law) is to hide the offender and help him or her, in solidarity, escape from the untrusted official authorities. The American would judge the French to be abetting criminal actions; the French would judge the American to be a dreadful collabo (a collaborator, in the World War II sense). Both sides think that their perspective is obviously the right one.

Monday, September 26, 2011

Watching a movie on a laptop is not without risks. Suppose you set a laptop on a low table, getting ready to watch, and set down some glasses of sweet soda next to it. If someone gets up to adjust the lighting and trips on the mechanism of the Lazyboy chair, then what happens? The front of the chair suddenly springs up and hits the table. The table tilts forward and falls on the floor. As it falls, the glasses of soda spill onto the open laptop. The wet laptop hits the floor and closes.

The table can be set back up, the floor and laptop can be wiped dry (although still a bit sticky), the laptop can be rebooted, and then you can discover something completely new: what a cracked computer screen looks like. It's not without aesthetics!

But this raises a new challenge: how does one work with a screen that is only functional on about 40% of its surface area?

I think that designers of slick environments should have a version for sick computers. There ought to be simplified settings suitable for each type of damage that can happen to a laptop. That would be progress.

Last week I had a conference deadline. The last few days were busy trying to tighten or strengthen some results and tidy up some proofs. As always, there was a little bit too much to do for the time allotted. I get caught every time. Why do computer scientists always have to wait until the last minute? It seems almost impossible to avoid: a draft is prepared well ahead of time; then, when the deadline approaches, proofreading opens up new possibilities, and then there is a mad rush to explore those possibilities before the deadline.

My students are already learning the pattern, with assignments routinely submitted in the last few minutes before the deadline. I try to scold them, but to no effect: there is a culture of doing things in a rush.

The office that checks grant proposals says that computer scientists are among the worst offenders, in terms of giving them sudden spikes of workload right before a deadline. Other disciplines are not as bad. I have often had the chance to work with mathematicians, and have seen them watch, bemused, as our work style shifts to crisis mode when a deadline comes close. What really perplexes them is that it is not an accidental occurrence but almost the norm. Mathematicians do not work in that way. Some years ago I prepared an application for an award and discussed it with a physicist friend. He said in passing: "When my application was supposed to be about ready, as the deadline was getting close, two weeks before the due date ..." - I thought: two weeks? That's a really long amount of time in my field! How come those last-day marathons do not happen to scientists in other fields?

One might ask: what is wrong with those projects completed at the last minute? Well, even if, with experience, one learns to (usually) submit results that are correct and readable, there are still a lot of negatives to this way of working. It disrupts the usual routine, and other things fall by the wayside: friends and family are temporarily ignored, lectures are poorly prepared, sleep and exercise take a setback, emails stay unattended, etc. It's stressful and it surely cannot be good for us.

Sometimes, if I happen to pass by the department late at night, I peek into one of the computer labs. I see young men (and women, but, really, mostly men), their faces unshaven, their unwashed bodies covered with torn jeans and stained T-shirts, either with their eyes fixated on the screen, or talking animatedly about arcane programming topics. I disapprove of the way in which they let their interest in programming take precedence over things that, in most of society, are considered basic requirements of a normal social life. But at the same time, I cannot say that I do not understand them.

Sunday, September 25, 2011

1. Have ready excuses for being late coming home or making dinner. For example: traffic; forgetfulness; quantum time jumps ("I was just about to leave. I had just one more quick thing to do before calling it a day; but when I next looked at the clock, somehow an hour had gone by.") If you do it frequently enough, you can train the people around you to accept it, and they won't mind it any more when you're late: either that, or they disappear from your life. Either way frees your time.

2. Wait before recharging your cell phone, until someone points out that they tried to reach you and couldn't get a hold of you. Eventually people stop calling you. That also frees your time.

3. As much as possible, keep your eyes focused on the paper or the screen that you are currently working with. Briefly glance at the human shapes coming home, just long enough to check that they are not aliens, and immediately bring your eyes back to the target of your attention.

4. Have a well-designed set of questions that you can go through without using any part of your brain, so that you can continue working at the same time as you are holding a conversation. For example:

"- How was your day?- Ok. The econ teacher didn't come, so we had study instead.- Excellent. And how was your day?- You just asked me that question.- Oh, sorry... have you had dinner yet?- No, not yet.- Oh. I think there's something in the fridge. Maybe. Just help yourself, ok?- Actually, could I go out with friends instead? - Great idea. Do you need money?- Yes. For dinner, and maybe for condoms. Can I take $20?- Sure, go ahead. I'm just going to stay here and work while you're gone. Don't come back too early -- um, I mean, don't come back too late!"

Saturday, September 24, 2011

The other day I was working with colleagues on the analysis of a streaming algorithm. We got to the stage where it seemed plausible that we had gathered all the ideas needed to analyze that algorithm (or some variant of it, if necessary), using a hierarchy of cases with some predicates that we had roughly outlined. I had the impression that we might have all the ideas necessary to build a proof, but things were hazy in my mind. I thought there was a proof lurking in there, waiting to be revealed, but I was not sure of it. It was as if we were trying to bake a cake, had gathered what we hoped was a sufficient collection of ingredients and utensils, and the only work that remained was to mix and bake them the right way.

At that point, what is required is no longer brainstorming but a more careful, structured attempt to execute a proof, and in my experience at that stage it is more efficient to work alone. I decided to give it a go. But, after two hours of editing a LaTeX file (that already contained a proof of a related but weaker result), I gave up in frustration. The distraction of text editing, and the inability to have at hand the shifting pictures that had been trying to form in my mind, prevented me from making any progress. I kept losing track of where I was headed, and was becoming less confident that there wasn't a major hole somewhere. Working with the file was just a waste of time.

Then I picked up my tablet and stylus, and gave it another try.

I slowly went through a proof attempt, drawing small sketches, writing definitions and calculations next to them. For my figures, there were as many colors available as I wished. When I made a mistake, erasing and modifying was easy and did not create a mess, as it would have if I had been working on paper. When I had to consider several similar cases, I could copy-paste the same figure several times, instead of having to redraw it, as I would have had to if I had been working on the white board. With no such side concerns to distract me, I was able to focus exclusively on the Ariadne's thread of my reasoning, and brought the rough proof to what I thought was a successful conclusion after filling a dozen tablet slides. On the last slide, using copy-paste, I gathered all the inequalities scattered throughout the text, then stared at them and verified that the proof was complete. At that point, I knew I had a proof. But nobody else would have been able to tell, because of various inconsistencies that only I would know were minor. In fact, even I, had I let it sit for a few days before picking it up again, would have been puzzled, would have wondered whether I really did have a valid proof, and would have had to work through it again, wasting time on a needless repetition of side trips into dead-end reasoning.

So I went through the proof again to clean it up: between the first and the last slide, notations had evolved, and some separations into sub-cases had turned out to be unnecessary. I went through it slowly, going back and forth to check notations and conditions and to ensure consistency. Along the way, I underlined the main steps in the calculation - now that I had done it once, I knew better which ones would matter in the end. With the fresh knowledge of how they fit together in the final wrap-up argument, I re-ordered them so that they were proved, as much as possible, in the order in which they would be needed in the end -- checking, as I did so, that the re-organization did not conflict with the timing of the definitions and notations.

I was pleased with the final result on the tablet. All I had to do was quickly type a LaTeX version of the slides (omitting the figures, which are too much work to redo), which turned out to be about a page long, and I was done.

In just a few hours, I had gone from a collection of slightly fuzzy ideas to a proof that was maybe not yet a model of readability, but that was complete, structured, and coherent - in other words, a proof that, for a patient reader, was verifiable. I could now claim the result.

Without a tablet, I don't think that I could have done it so fast. With the tablet, everything that I try to picture in my mind can be laid down immediately. I am free to focus exclusively on the mathematical reasoning, without any unwelcome distractions: there is no technological barrier. It is for such reasons that I am a big fan of tablets with pen input.

This data gives some idea of what it means to have a paper accepted at one of those conferences.

For example I have always thought of CONCUR, COCOON, COCOA as conferences of roughly similar prestige: I have never published there and have never attended one, but I know people who do go there, and the names have so many letters in common that they're almost fused in my mind. Now I see that a paper at CONCUR is much more meaningful than at the other conferences with similar names.

The biggest surprise for me is the contrast between RANDOM and APPROX. I've always thought of them as twin conferences, but actually, RANDOM has much more impact - perhaps because it is better targeted. APPROX is broader and its scope is harder to distinguish from, say, mainstream SODA papers, so it only gets the leftovers after SODA has been served, whereas RANDOM, perhaps, gets some specialized papers that are sometimes a little bit marginal for SODA but that are still very nice papers that will be well cited by people in the area.

If you hit a niche and have the leading conference in a particular subtopic, then your conference can have high impact even if the number of papers and of attendees is relatively small.

Wednesday, September 21, 2011

Last week in class I was talking about comments: "In our programming language, a comment must be preceded by a double semi-colon." Then the mathematical meaning of those words struck me and I added: "But beware that it is not the same as a colon." I was disappointed that my wit earned me no laughs. Why not? I thought it was funny! But perhaps the students were too busy listening to the content of what I was saying to pay attention to the actual words.

So, this week, I decided to try again. I reminded them: "Recall that a comment must be preceded by a double semi-colon. But beware! A double semi-colon is not the same as a colon!" I spoke slowly, carefully articulating the words and emphasizing the "double semi-colon". Still no reaction. The students must have thought that the reason why I was speaking with such emphasis was that I was saying something particularly important about the programming language. Instead of getting some chuckles, I then saw one of the TAs speak up: "Actually, in this course we ask you to use two semi-colons, but technically, even if you use a single one it is already parsed as a comment." I answered lamely: "Oh. I didn't know. I never tried putting a single semi-colon!" and moved on to the next topic of the lecture.

Making new jokes requires real skill! And identifying the word "double" with the number 2 and the word "semi" with the number 1/2 might be more natural in French than it is in English. Perhaps it is better that I stay away from jokes that involve the English language. There are some nuances that I will never acquire, no matter how many years I spend speaking English. Language jokes are for native speakers only!

Tuesday, September 20, 2011

A couple of weeks ago I taught "pair programming" in class, portraying it as a setting in which one person (the "driver") writes and the other one (the "navigator") advises and comments.

A few days ago, with a colleague wrapping up their visit, we decided to write our preliminary findings in a draft paper.
- "Since we have a few hours left, let's start it together."
- "Ok. Let me get my laptop out. Do you want to be the one typing, or shall I?"

Then I had an idea: let's do it simultaneously! We opened a Google document and, each of us working on our own computer, sitting in the same room, simultaneously edited the manuscript, occasionally exchanging brief comments.

It's a very different way of working. There are no formal roles. Each of us can switch from driver to navigator at will, alternately looking at what the other person has been doing, possibly editing some of it at the same time, and writing down our own ideas.

The result of that experiment? Too early to tell, but I definitely want to try it again. When we discuss research together, each has a view of how the other person is reasoning about problems; it's a wonderful way to learn more about how to do research. When we draft a paper together in this manner, it gives us a view of how the other person goes about composing a text: do they go top-down, linearly, is there a lot of backtracking, do they stop to think every few words or every sentence or every paragraph? We can see it all. It might be a wonderful opportunity to learn more about how to write a paper, if we exploited it.

Monday, September 19, 2011

Proof-reading a preliminary manuscript. Correcting typos. Adding in some details. Finding small, or, perhaps, not so small errors. Spending more and more time on it. Delving deeper and deeper into a proof. Suddenly having some ideas for a new result. Shifting into "DEAR" mode, as they say around here in elementary school: Drop Everything And Research...

... And that is why being organized and on-time is an elusive, never quite reachable goal in research jobs.

Sunday, September 18, 2011

As I am teaching CS 17, a freshman course with a large enrollment, this Fall I have a whole army of TAs: more than a dozen, plus two head TAs. Between them, they manage the course and take care of pretty much everything other than lecturing: organization, labs, homework, projects, grading, office hours, etc.

What is the role of the professor in this situation?

One role is to provide a reassuring presence vested with some status that comes with the position. Another role is to take responsibility if anything goes awry. Another role is to have decisional power for the final grades at the end of the semester. Another role is to update the course content according to my vast (ahem) knowledge of computer science at large. Another role is to help resolve difficulties that may come up, by providing the superior (ahem) counsel that comes from experience - all the TAs are themselves undergraduates, so this is an example of students teaching students.

Without the TAs, even if I spent 100 percent of my time working on the course, it would be simply impossible to teach the course in the way in which it is now done. But without me, the TAs could get organized to add lecturing to their already long list of tasks, and the course would go fine. However it would be difficult for them to assign final grades, and as the course evolves from year to year, I think that it would be difficult to maintain its coherence. In some ways, I am the head that officially represents the course, and one of my tasks is to provide both the continuity and the reforms that should take place from year to year, as we analyze what went well or not so well and brainstorm for modifications for the following year.

I am the Pope of CS17. The head TAs are the Roman Curia, the TAs are the dedicated priests, and the students are the Faithful.

Saturday, September 17, 2011

Here are the SODA papers that have an arXiv version - not quite one third of them. I probably missed a few (I searched for the exact title, and got a few syntax errors in my search); I will complete the list if people send me corrections. This is part of my desire to make conferences less important in our field. One of their purposes used to be fast dissemination of new research, but now that purpose may be served better, faster, and more cheaply by arXiv.

This is an ad for Brown Algorithms. The list of papers accepted to SODA is out. I noticed the high number of papers with authors from Brown. Among those, the PhD students are: David Eisenstat, Shay Mozes, and Olga Ohrimenko. Here is the list:

Friday, September 16, 2011

Recently I was looking at the paper "Finding hidden Hamiltonian cycles" by Broder, Frieze and Shamir. Take a random sparse graph (edge probability p=O(1)/n), and add a (hidden) Hamiltonian cycle to create the input graph. Can you then reconstruct some Hamiltonian cycle?
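The input model can be sketched in a few lines of Python (my reading of the setup, stdlib only; the constant `c` and the function interface are my own choices, not the paper's): draw a sparse background graph G(n, p) with p = c/n, then union in a Hamiltonian cycle along a random permutation of the vertices.

```python
import random

def planted_cycle_instance(n, c=3.0, seed=None):
    """Sample an instance of the hidden-Hamiltonian-cycle model:
    a sparse random graph G(n, p) with p = c/n, unioned with a
    hidden Hamiltonian cycle on a random permutation of [n]."""
    rng = random.Random(seed)
    p = c / n
    edges = set()
    # Background noise: each pair appears independently with probability p.
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                edges.add((u, v))
    # Hidden cycle along a random permutation of the vertices.
    perm = list(range(n))
    rng.shuffle(perm)
    cycle = [tuple(sorted((perm[i], perm[(i + 1) % n]))) for i in range(n)]
    edges.update(cycle)
    return edges, cycle
```

Reconstruction means finding some Hamiltonian cycle in `edges` without being told `cycle`; the sampler returns the hidden cycle only so that an experiment can check its answers.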

I found and downloaded the paper (a 2006 Random Structures and Algorithms paper, the child of a 1991 STOC paper) from Frieze's home page at CMU. I started reading it, and on page 6, surprise! I saw the following margin note: "Are we off by one here? There can be no two triangles with a common endpoint, but [...] since the corresponding edges are not independent." I immediately scanned the rest of the paper and found two other margin notes. Page 16, by Remark 2: "Are we off by one here? See the side note on page 6", and again on page 18: "If the premise is true (see my previous side note) then it actually follows that [...]. Why? At first blush it seems that it should be [...] Is there a better argument?"

Thus we get a glimpse of how their collaboration works. The mystery question: who wrote that comment? I am betting on Alan Frieze. What a lovely expression, "at first blush", to describe a mathematical estimate!

Thursday, September 15, 2011

In the US, professors enjoy the support of administrative assistants who help them work more efficiently by doing some of their tasks for them. In France, administrative support is scarce, and the general attitude is that people manage by themselves and only go to the department's administrative staff when they hit a roadblock, when there is something that they cannot do by themselves.

Here in the US, when I do specific tasks for the department (such as PhD admissions or recruiting), I have lots of precious help from the department administrative staff. But I am thinking about the many day-to-day tasks that go with being a professor, mostly having to do with my individual work, not with department service. For those tasks, what do theoreticians use their staff help for?

For example, does everybody other than me use them to book their travel? I feel funny having my assistant choose the time and cost of my tickets, or the location/cost tradeoff for lodging, so I do it myself; but even the simplest trip takes at least a couple of hours to arrange, I have found.

Does anyone use them to manage email? Emails to me mix personal and general content, and I find it hard to imagine my assistant dealing with that, so a different approach to email might be required. Does anyone use them for journal editor work, NSF panel work, or conference program committee work? For advising students? For setting up visits? In all those cases, although a significant fraction of my time is spent doing things that could in principle be delegated ("How do I download those papers?" "What are the degree requirements for a Master's, track 2?" "Will this particular course be taught next year?", etc.), they are inextricably intertwined with questions that are impossible to delegate, and I do not know how to separate the two. As a result, I am hardly using my administrative help at all for matters other than department committee service. How do other people do it?

Wednesday, September 14, 2011

Claire: Why are your papers so long? They're sophisticated and groundbreaking, but I don't have the energy or time to read such long, complex papers.
Dan: I don't want them to be that way. It's accidental that they are so long. I just don't know when to give up. I keep going further and further into the problem until some solution comes, and sometimes it takes a hundred pages and years of work.

Claire: But if many people are like me, then, having papers that are much longer than the norm, do you really think that that's a good marketing strategy?
Dan: Probably not. But enough people read them to satisfy me. Anyway, the way to go if you want to learn about the work I have done is not to read my papers. I find that the lecture notes format is much more satisfying. When I write a paper, I feel that somehow I have to state and prove the strongest result I have. When I teach, I give a weaker version that has a much simpler, shorter, more understandable proof, and that's what I put in my lecture notes, so that's what you should read. Also, later versions are usually quite a bit simpler.

Claire: How do you choose research problems?
Dan: I find that I am attracted to problems that are hard. It's difficult for me to get motivated unless I have a sense that the problem is important; that's what keeps me going.

Claire: But among all hard and important open problems, how do you know which one is a "good" problem?
Dan: It has to be a problem that gives me a lot of ideas, where I see a lot of possible connections with other areas, other subjects, where there are many angles that I want to explore, many papers to read and new things to learn. Then, along the way, I find that I am proving results on the side that are of interest in themselves. That's how I get a strong sense that it's a good problem to be working on.

Claire: That seems to require a lot of experience. Is that how you picked problems when you were a graduate student?
Dan: No, you're right, of course that's not how I worked when I was a student. Then, I did the usual thing, you know, look around for hot topics on which there had recently been partial progress and new results had come out that were begging for improvements, and then I was able to find those improvements before professors did, just because I had a lot more time to devote to research than those professors.

Claire: Do you think that you are a better researcher now than then?
Dan: Yes, I think I am better at research. Although, I must admit that I have a lot less time for research, and, in particular, I no longer have time to keep up and read an insane number of papers, as I did when I was a student.

Claire: At this point you're about mid-career. What do you want to do for the next half of your career?
Dan: That's a scary thought. I'm not really thinking about it. Right now I have projects that will keep me quite busy for the next few years, and by then, presumably I will have learned new things that will drive my research for the following few years, and so on. I do predict that I will be doing more and more programming.

Claire: Do you see yourself as a mathematician or as a computer scientist?
Dan: I must say that there are many things mathematicians do that I do not get, in the sense that I have no clue why they work on those particular questions. I can read a math paper and like its content very much, yet not have any idea why the author worked on that question. Maybe because someone else asked it before? It's a mystery to me. As for myself, I come to problems very much from the perspective of a computer science motivation.

Tuesday, September 13, 2011

Some years ago, Dick Karp told me that he only worked on "easy" problems. Last week Dan Spielman told me that he likes to work on "hard" problems.

Some years ago, Dick Karp told me that he almost never got stuck on a problem; he never worked on a problem unless he had a way forward. Last week Dan Spielman told me that he was very stubborn, having a hard time abandoning a problem.

Monday, September 12, 2011

Teaching a functional programming language to first year students, I tell them: "We can define constants by writing statements such as: (define myconstant 3.14). In the rest of the program, whenever the term "myconstant" is encountered, the quantity 3.14 will be substituted for it."

I explain the rules. Questions come. Then, suddenly, there was an avalanche of questions offering crazier and crazier possibilities:
- "What if you write (define 3 4)?"
- "How about (define + 8)?"
- "What about (define define 2)?" (I had to make the student repeat that question before I understood it.)
- "How about (define define define)?"

I had never thought of doing any of those things. Very creative! They're teenagers: they like to see how far they can push the rules before being hit back with an error message. Suddenly a boring lecture turned into an exciting challenge: just what crazy bits of code does the language let you get away with? That does not encourage good programming habits, but it's an exploration of what constant definition really means, and it may help understand it in more depth. Maybe some other year there should be an exercise around that!
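The same rule-probing can be scripted in another language. Here is a rough Python analogue (Python is not the course's language, and the parallels with Scheme's define are loose): rebinding a built-in name like `abs` is accepted, much like `(define + 8)`, while assigning to a literal or to a keyword is rejected by the parser.

```python
def accepts(snippet):
    """Return True if the parser lets you get away with this snippet."""
    try:
        compile(snippet, "<probe>", "exec")
        return True
    except SyntaxError:
        return False

# Rough analogues of the students' probes:
print(accepts("x = 4"))    # ordinary definition: True
print(accepts("3 = 4"))    # like (define 3 4): False, cannot assign to a literal
print(accepts("abs = 8"))  # like (define + 8): True, builtins can be shadowed
print(accepts("def = 2"))  # like (define define 2): False, 'def' is a keyword
```

Which redefinitions are syntax errors, which are legal but dangerous shadowing, and which are harmless is exactly the kind of boundary the students were mapping out.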

Sunday, September 11, 2011

Today, September 11, is the 10-year anniversary of the hijacking of four airplanes by terrorists, resulting in the death of all passengers, the destruction of the Twin Towers in Manhattan, and damage to the Pentagon. Nearly 3000 people died in the attacks. The attacks caused the US to go to war in Afghanistan and, by a leap of logic, also caused the US to invade and occupy Iraq. They also led to the opening of the Guantanamo Bay detention camp. So far there have been 1700 US casualties in (and around) Afghanistan and 4400 US casualties in Iraq. Estimates of the total number of deaths, all nationalities included, in Afghanistan and Iraq vary wildly, but the bulk (100000 to 800000) consists of Iraqi civilians.

Today, September 11, is also the six-month anniversary of the Great East Japan earthquake, tsunami, and nuclear meltdown. The earthquake shifted the Earth on its axis by an estimated 10 cm (4 in) to 25 cm (10 in). The speed of the Earth's rotation increased, shortening the day by 1.8 microseconds due to the redistribution of Earth's mass. (This affects the way that spacecraft being sent to Mars are navigated.) It triggered tsunami waves of up to 40.5 metres (133 ft) in places. The earthquake and tsunami caused 125000 buildings to be damaged or destroyed, as well as the meltdown of three nuclear reactors in Fukushima. 20000 people are dead or missing, more than 140000 people living near Fukushima were evacuated and will not be returning in the foreseeable future, and the consequences of the meltdowns are still unfolding.

Radioactive particles keep popping up in more and more places. Almost every day there is a measurement somewhere showing some radioactivity. The amounts are sometimes small, sometimes with sudden local spikes, but they show an irresistible spread. What are the long term consequences of relatively low-level radioactivity in the environment, the food, the water? Time will tell.

Progress in Fukushima: if you look at the raw data, it is not easy to spot signs of progress; however, for the last couple of weeks the temperature of the first nuclear reactor has been below 100 degrees Celsius (http://atmc.jp/plant/temperature/?n=1), so there is some slow, slow progress.

Saturday, September 10, 2011

This week, the first week of classes at Brown, I tried to pay more attention than usual to how I spend my time. Monday was a holiday. From Tuesday to Friday, I arrived at work between 8:30 and 9 am, and left at 5, 7, 5, and 8 pm respectively. On the two days when I left early, I spent two hours working at home one evening and four hours the other. Discounting lunches, in total I spent 40 hours working over four days.

How was that time spent? Primarily, meeting advisees, students, teaching assistants, etc. In second position, dealing with various aspects of course organization. In third position comes actual lecture preparation and delivery. Then there was all the other stuff.

In those 40 hours, how many hours did I spend doing basic research activities?
- attending seminars: 1
- discussing research questions: 2
- thinking, by myself, about research questions: 0
- reading papers: 0
- reading titles and abstracts of new results on arxiv: 0
- working on my own papers: 1
Total: 4 hours, that is, 10 percent of my work time. What a minuscule amount of time! At this rate I'm not going to prove any breakthrough result any time soon...

Today, since (unsurprisingly) I have already fallen behind in my research schedule, I have set aside 3 hours to be spent working on the first draft of a paper. Bliss! (Of course it means that it will be the turn of home chores to fall behind, but who cares?)

Friday, September 9, 2011

The ice in the Arctic is melting. The total ice area has been decreasing steadily for years. There were many articles about it in 2007, when there was a sudden plunge to previously unseen values. The minimum of the year is usually reached some time in September. The 2007 minimum has just been passed, but I have not seen many articles about it.

http://neven1.typepad.com/blog/2011/09/new-area-record.html

The consequences:
- local: loss of habitat for various animals
- changing wind patterns, and therefore changing climate worldwide
- great savings in transportation, by having boats, in late summer, go through the Bering Strait and the Northwest Passage to transport goods from one side of the world to the other
- possible exploitation of oil reserves in the area
- unlocking of methane in the thawing arctic seabed, possibly triggering other mechanisms that might accelerate global warming

If I were a college student now and were interested in the sciences, I would go into climatology. The scope of the changes will make the next 50 years really "interesting"!

Meanwhile I fly across continents and oceans many times every year, grumbling about travel woes, suffering jet lag, and contributing a hefty individual amount of CO2 to the upper atmosphere, large enough that commuting by bike instead of by car is a negligible effort to curb my emissions by comparison.

Thursday, September 8, 2011

During my first lecture this semester, the students were split into two groups in two different classrooms, and I taught the same half-lecture twice, once in each room. I made a few jokes, and observed that, in the first classroom, students laughed and seemed to appreciate them, but in the second classroom, the exact same jokes elicited, at best, a very lukewarm response.

Same jokes, same kind of audience, same context. Why do people sometimes laugh and sometimes not?

Wednesday, September 7, 2011

There are many more students than planned for my course. How can I teach interactively to such a large crowd? My idea for today: provide the questions and answers myself. That lets me be ruder than I would dare to be in real life.

I've always felt stupid reminding students that cheating was forbidden, as if they didn't know it already. Last year I took the moral high ground: "one should not cheat, not because of the threat of penalties, but because it's intrinsically wrong; and it's wrong for the following various reasons..." But that felt rather silly.

So this year, instead, I have planned the following dialogue between the fake student and the fake teacher who appear on my slides:
Student: "What's Brown's policy regarding cheating?"
Teacher: "It is not allowed."
Student: [disappointed] "Oh."
Teacher: "Duh."
How satisfying to be able to show my true thoughts about the need to remind students of the no-cheating rules.

Tuesday, September 6, 2011

Marek Chrobak sent me the following message: "FOCS'11 registration is now open, at http://www.cs.ucr.edu/~marek/FOCS11/. If you could mention this at your blog, we would appreciate it. In particular, you may want to mention the tutorials, generous student support, and the Piore award talk."

Generous student support: there will be a total of nearly $22,000 to support travel and student registration for FOCS 2011. Apply by September 22.

I note that the hotel has splendid pictures of a billiard table and of a swimming pool. It has a list of things to do: I don't think golfing, shopping, and art museums will be too popular with FOCS attendees, but the winery visits might appeal to some. Long-distance travelers could fly in on Thursday night to give themselves an extra day to deal with travel mishaps, recover from jet lag, prepare their talk, and get drunk.

Monday, September 5, 2011

There is a discussion in climate change online forums about the resignation of a journal editor who accepted a paper that he now decided should not have been published.

BBC: “The editor of a science journal has resigned after admitting that a recent paper casting doubt on man-made climate change should not have been published. The paper, by US scientists Roy Spencer and William Braswell, claimed that computer models of climate inflated projections of temperature increase. It was seized on by “sceptic” bloggers, but attacked by mainstream scientists. Wolfgang Wagner, editor of Remote Sensing journal, says he agrees with their criticisms and is stepping down.”

This gave me some perverse ideas. When you do bad work, how do you get it published? Here are the two key points:
(1) Choose a journal that is not specialized in the topic, and for which the subject of your paper is only marginally relevant. This avoids confrontation with experts.
(2) Carefully put together an incomplete bibliography, so as to make sure that the editor doesn't accidentally send the paper for review to people who would shoot it down.

So simple!

One may imagine other helpful steps:
(3) Get a reputed scientist to sign on as a coauthor.
(4) Apply pressure behind the scenes: name-dropping, allusions to large grants.
(5) Spell-check, and carefully avoid those irritating details that so annoy reviewers.
(6) Use your own terminology, to make sure that the editor doesn't stumble upon relevant papers when using a search engine to look for researchers who have worked on the subject.

So much of the field depends on people's integrity! How can we guard against such techniques?

Sunday, September 4, 2011

"This discovery reveals a large but overlooked source of methane gas escaping from permafrost underwater, rather than on land," the study said. "More widespread emissions could have dramatic effects on global warming in the future."[...] More than 80 percent of the deep water and more than half of surface water had methane levels around 8 times higher than found in normal seawater, according to the study published in the journal Science. The researchers warned that the release of even a fraction of the methane stored in the shelf could trigger abrupt climate warming.

In more detail: http://www.nsf.gov/news/news_summ.jsp?cntn_id=116532

September 2, 2011. "Russian, US scientists set to study methane release in Arctic". -- http://en.rian.ru/science/20110902/166364635.html

"This expedition was organized on a short notice by the Russian Fund of Fundamental Research and the U.S. National Science Foundation following the discovery of a dramatic increase in the leakage of methane gas from the seabed in the eastern part of the Arctic, said Professor Igor Semiletov, the head of the expedition."

(boldface emphasis mine)

I know that the media love a dramatic story, since nothing sells better than undertones of catastrophic risk, but I must say that this makes me wonder: what's going on? Is the "short notice" expedition a response to something new, or is it a consequence of the 2010 Science paper (based on measurements from previous years)? After searching the web fruitlessly, I conclude that the "dramatic increase" mentioned is the one discussed in the 2010 Science paper, not a further dramatic increase on top of the previous, already dramatic one. As for the "short notice", every article picks up the same quote, but none says anything more about it, so it must be that the scientist just happened to drop those scary words; it does not imply anything newly ominous. In other words, the terms used reflect the style of those scientists (I think) rather than newly dire events.

In any case, these people are fantastically good at P.R.: Science, Scientific American, National Geographic, and short movies on youtube, all apparently referring to that same (important) piece of research published in Science.