Friday, May 14, 2010

The Value of Experiments

I'm not sure who he meant by "We" -- I guess complexity theorists -- but I found the statement very strange at the time. I do experiments all the time. (Note: Here I am considering computer simulations of various sorts as "experiments". Some people may quibble with this, but since it seems ALL THE REST OF SCIENCE is moving toward increasing use of computers, including and perhaps especially computer simulation, I think it would be odd not to count a computer scientist running a simulation as doing an experiment.)

For example, a couple of days ago, I thought I had proved something about random sequences for a project I'm working on. I asked the graduate student on the project to code it up to check my work -- I often sanity check proofs with simulation code when I can -- and he shipped me some results that seemed surprising. They were consistent with my proof, but effectively showed that things behaved even better than I had proven (or expected).

So on the car ride home, I thought about it, and came up with what I think is a nice proof that explains what the student found in the simulation experiments. This improved proof will end up in the eventual paper, I'm sure.

While I wouldn't call myself a complexity theorist, it seems to me the results I'm working on here are in the class of complexity results -- I'm trying to show that certain permutations have certain properties with high probability, and I'm showing it by developing an algorithm that allows me to prove what I want. Perhaps that's sufficiently far removed from "complexity theory" that some people think it doesn't count, but then it seems you'd have to throw the whole probabilistic method out of complexity theory, which seems strange to me.
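A sanity check of the sort described above might look like the following sketch. The actual property and permutation model from the project aren't given in the post, so a classic stand-in is used here: estimating the probability that a uniformly random permutation has no fixed point, which provably tends to 1/e. The function names and parameters are my own illustration, not the project's code.

```python
# Hypothetical stand-in for the kind of simulation-based sanity check
# described in the post: estimate the probability that a uniformly random
# permutation is a derangement (no fixed points), and compare to the
# known limit 1/e. The real project's property is not stated in the post.
import math
import random

def is_derangement(perm):
    """True if no element of perm is mapped to its own index."""
    return all(p != i for i, p in enumerate(perm))

def estimate_derangement_prob(n, trials, seed=0):
    """Monte Carlo estimate over uniformly random permutations of size n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        hits += is_derangement(perm)
    return hits / trials

est = estimate_derangement_prob(n=50, trials=20000)
print(est, 1 / math.e)  # the estimate should land close to 1/e ~ 0.3679
```

If the estimate disagreed sharply with what a proof predicted, that would be exactly the kind of surprise worth chasing on the car ride home.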

My point is that the actual use of computers -- for example to simulate processes in order to sanity check proofs or develop insights and conjectures -- is part of how I do my theoretical research. While certainly that approach might not be for everyone, I have deep concerns when a major-blogging-theorist says something like "We don't do experiments..." I worry that, increasingly, computer science graduate students in theory are loath to actually use computers, and indeed that this is part of a broader problem: theory students specialize so narrowly, so early, that they don't get exposure to and an understanding of the rest of computer science.

31 comments:

Anonymous said...

A fascinating question, which provokes all kinds of responses.

An experimental systems person, who builds and measures systems, would not call a simulation an experiment.

This is because your simulation is built by you, with your assumptions implicitly or explicitly built in. It also, by definition, doesn't capture phenomena that you are not aware of (or, more cynically, phenomena that mess up your model). The real world, in contrast, has phenomena that manifest themselves whether you like it or not.

That's not to say a simulation is not useful, because it does help you examine, debug, and optimize your thinking. But it's not an experiment, and it's not science.

The definition of science is, of course, a debate unto itself. I would define science as the process of modeling and understanding the world, be it plants, planets, or PlanetLab. Central to that process is validating (i.e., testing) the models via experiments (measurements) of the actual system. An actual system is something physical, which includes a piece of hardware running software.

When the model doesn't work, you refine it until it does, where "works" means it explains behavior to within some threshold.

Hi, Anonymous - As an experimental systems person, I must politely disagree and stand up for the reasonableness of myself and my brethren. :) I completely agree with you about the potential pitfalls of simulation in some contexts, but at the same time, I would call simulation an experiment. In many experiments, the experimenter must make decisions about conditions, parameters, what to measure, etc.; simulation represents a particular choice of those. But it is an experiment. The real issue with simulation is that, like any experiment, you must understand what conclusions you can and cannot validly draw from the experiment.

It may just be a matter of words, but I often hear physicists draw a distinction between "experimental researchers" and "theorists", the latter only doing simulations on their computers. In areas where actual experiments exist, computer-based simulations do not seem to be considered experiments.

I would not go so far as to say that TOC cannot benefit from simulation (it does, and I have seen colleagues who proved major theoretical results but got a lot of help by solving, say, SDPs on small instances). However, I think it's pretty safe to say that complexity theory does not usually use a lot of simulation tools. In particular, in most cases you are trying to prove something about asymptotics with huge constants involved, which will make sense only when the instance size becomes huge and intractable for even small clusters to solve. I think the same problem holds w.r.t. the probabilistic method as well.

For example, maybe you want to prove that some construction of a graph is an expander, but the spectral gap may be only 1/10^6. So even for reasonably large graphs, you won't get any sense of it by computing their spectral gap.

I consider myself a systems person, and I DO simulations to quickly run lots of experiments (with certain assumptions, like zero overhead, etc.) to get a sense of the effectiveness of my algorithm/system. They provide great insight into corner cases, which are difficult to reproduce in real systems. Of course, I am most satisfied when I can prove that my approach works in a real system. I feel simulations are an integral part of any approach to solving CS problems.

Simulation, calculation and computation are synonyms that are unambiguous to the rest of the community. If you're "experimenting" with an idea by running a simulation, no one's going to stop you from calling that an experiment.

However, when somebody says "We (CS'ers) don't do experiments", she/he definitely means experiment in the commonly understood sense and she/he'd be right. You can't refute that by saying that you happen to call your simulation an experiment.

This is anon #1. Dave, I guess I should have said, "as an experimental systems person" rather than just "an experimental systems person" because of course not everyone is going to agree with me and I don't speak for all experimental systems people. No insinuation of unreasonableness intended. But I still stand by my definition. Your point is well taken about drawing conclusions, because even in the physical world things are happening you may not be aware of (e.g., non-relativistic effects, temperature).

Pradeep, I did say simulations are useful. But if you *only* did simulations and never built anything, would/could you call yourself an experimental systems person?

Anon #1 yet again. I do consider astronomy a science, since it is coming up with models of behavior and performing measurements to confirm/deny/support them. I guess the difference from measuring a simulation is that there is something "real" being measured.

I realize this hinges on what the definition of "real" is, which of course is hard to do (Matrix anyone?). But to me, a Web server is real, but an OPNET simulation is not.

Thank you for expressing so eloquently what I would have expressed much less well (and politely).

Various Anonymi --

I don't really get how you're interpreting "experiment" if you're not including simulations. You seem to be saying it has to involve the "real world". One thing I do is build hash tables; they're used in the real world, and my simulations are accurate analogs of what they'll do in real-world situations. So aren't they experiments? But I guess I agree, if you want to interpret "experiment" as "using a test tube, or a large flame of some sort", then I'd happily agree I don't do much of that sort of experiment. That's just not what I understand an experiment to be.

One Anonymous said: "However, I think its pretty safe to say that Complexity theory does not usually use a lot of simulation tools..." Exactly what I am saying -- and I don't think we'd disagree here -- is that there's probably more room for complexity theorists to use such tools to gain insight into their work. My results are almost always asymptotic; that doesn't mean that simulations don't help me understand what's going on. Indeed, they're often key to understanding how to prove what's going on -- which leads to the bound on asymptotic behavior.

You have to understand that Lance's blog represents the "old school" of theoretical computer science.

There was just a _serious_ post there about examples where knowing something "slightly outside your area" can help you prove a result. This is an insane kind of thinking that ignores one of the biggest tools we have at our disposal: the internet and its ability to facilitate the widespread dissemination of knowledge. The majority of fundamental TCS results in the past 10 years have begun with the understanding of a new mathematical concept.

Likewise, if a complexity theorist is not doing experimental math (with the likes of MATLAB, Mathematica, etc.), then they are (probably due to laziness) ignoring a major tool. Complexity theorists also play with small examples for intuition, some of which fail to be illuminating because asymptotic phenomena have not yet become apparent at that scale. And of course computer simulations can expand the scale of such examples by orders of magnitude.

There seems to be a misunderstanding here. Computation is not an experiment. In principle you can run your simulations/computations on a Turing machine on paper, or in your head. Thus, what you are doing is just shortening the way you compute things. This can be done by any mathematician. It still does not say that mathematics, and complexity theory for that matter, is an experimental science. The latter was Lance's question.

P.S., I disagree that Lance represents "old school" complexity theory. This statement has no justification.

A trial or special observation, made to confirm or disprove something uncertain; esp., one under controlled conditions determined by the experimenter; an act or operation undertaken in order to discover some unknown principle or effect, or to test, establish, or illustrate some hypothesis, theory, or known truth; practical test; proof. [1913 Webster]

Sounds like simulations (particularly repeated simulations of random processes...) fall into this definition to me. Your definition is "Computation is not an experiment." What, exactly, then, is?

In principle astronomy should be an observational science (like paleontology). In practice, making the observations requires building devices that may or may not work, and otherwise have their own quirks, i.e., are experimental. In other words, to observe astronomical-scale phenomena, they need to experiment on detecting devices. Take LIGO for instance.

It doesn't make sense to restrict "experiment" away from questions that can be answered by a Turing machine, in principle. In principle, everything can be simulated by a Turing machine. In practice, there are even physical systems where all the relevant physics are believed to be understood, and yet even still real-world physical experiments are required because all known classical simulations require exponential time. It is becoming more common to construct physical devices in order to run "analog" simulations of purely mathematical quantum models. As technology advances, these will eventually become general-purpose quantum computers, but I don't know that there will be a clear before-and-after dividing line.

If these are experiments, then so are simulations run on your laptop, just at a different point on the continuum. However, I admit that I am a mathematical realist. A good theorem is as solid as a good desk, and experiments on bits are as valid as experiments on atoms.

Multiplying two numbers certainly can be "an act or operation undertaken in order to discover some unknown principle or effect," so the answer is trivially yes: this is an experiment under the definition you proposed.

Compare your post with the following two hypothetical ones:

Blogpost 1: "I do experiments all the time...For example, a couple of days ago, I thought I had proved something about the roots of the Riemann zeta function...I asked the graduate student on the project to code it up to check my work -- I often sanity check proofs with code when I can -- and he shipped me some results that seemed surprising. They were consistent with my proof, but effectively showed that things behaved even better than I had proven (or expected).

So on the car ride home, I thought about it, and came up with what I think is a nice proof that explains what the student found..."

Blogpost 2: "I do experiments all the time...For example, a couple of days ago, I thought I had proved something about the Riemann zeta function...

I spent several days performing tedious computations in my head -- I often sanity check proofs with computation when I can -- and the results seemed surprising. They were consistent with my proof, but effectively showed that things behaved even better than I had proven (or expected).

So on the car ride home, I thought about it, and came up with what I think is a nice proof that explains what I found..."

Your post has negligible differences with the first one, which in turn has only negligible differences with the second one.

And anyway, I'm not the first to make this point. An earlier anonymous pointed out that you can do all your simulations/computations on a Turing machine on paper, or in your head.

Alex, you are pushing the (inevitably) loose definition of an ordinary English word to the very extreme.

Michael's post simply claims that computation (either mental or mechanical) *can* be considered experiment, in any reasonable interpretation of the dictionary defn, even if the ultimate goal is to prove a theorem.

Lance's original post refers to “experiment” in the narrow sense, where the ultimate purpose is to infer (properties of a complex system, eg Nature, a computer network, a society), not to prove.

As a concrete example, you say "I experimented with different ways to construct a gadget, to prove that A can be reduced to B".

30 years ago people did this mentally; now they instruct the computer. Nothing essential changed, except now theorists can focus on more creative parts of the proof process, and find ways to “minimize” these creative parts.

There are two separate issues - one is using computers, and the other is doing experiments. There is a difference between simulations and calculations, and I believe simulation is indeed an experiment. (In fact, in contrast to Anon 1, I'd say it's much more an experiment than building a real world system, exactly because it is controlled and you are verifying or refuting a concrete prediction. But personally I find debates on the meaning of words such as "experiment" or "science" to be rather boring, so am happy to concede this point.)

Ryan Williams had a very nice survey on "applying practice to theory" - http://arxiv.org/abs/0811.1305 - though much of what he describes is using computers to do calculations whose outcome is a rigorous proof.

I used computers in complexity a couple of times, in particular I once tried to use them to get insight on a lower bound question. The main obstacle was that the algorithm I needed to run was exponential time, and I just couldn't get much insight on the small input sizes that were feasible to run it on.

I think many theorists used computer experiments much more than I, in particular I believe Feige had done quite a lot of simulations on planted clique and random 3SAT.

Lance probably meant that proofs rather than experiments are currently the most common tool in complexity. But I hope students realize that they shouldn't take these kinds of assertions made in blogs too seriously.

"Using a computer to multiply 16872934*34313411" is an experiment if you don't know precisely how the computer represents numbers, handles overflow or might have a strange error similar to what happened with Intel's floating point 16 years ago. Computers are full of surprises. Sometimes the experiment you thought you were running gets sidetracked by a reality shift.

In the Science podcast of 21 May [transcript], Craig Venter talks about biology in computer science terms: "cell operating system," "booting up" chromosomes, "debugging" the system, "slow growth" over a six week "cycle time," etc. This six week cycle time severely limits the number of experiments they can run in a year or a human lifetime. It would be nice if a valid computer simulation could be created that would help produce answers faster.

Continuing with the interview of Craig Venter where he asked, "What would be a minimal genome for a cell operating system that could go through self-replication?", it struck me how much what he was doing sounded like the slow process of assembly language programming we did in the 1970s, only now using biological systems. And that makes me wonder about the biological compilers and expressive languages this century will produce. Gerry Sussman is working on a biology book to follow SICM. Who else will blaze a trail?

More on Craig Venter in this On Point 25 May interview. I've just never heard a biologist speak to a computer person like me so clearly before. He makes me want to pick up a genome and start programming, except that's an instruction set I don't know.

Perhaps what makes "science" science is that we do not know everything about the variables involved and the interaction among the variables. If we did, then we would not need to experiment.

Similarly, what makes "computer science" science is that we do not know everything about the variables involved (storage choices, user preferences, behavior patterns) and the interaction among the variables.

In this manner, simple addition and subtraction could be considered science to a first grader who does not yet understand the variables involved (logical integers) and the interaction among the variables (logical addition or subtraction).

In Vladimir Arnold's NYT obituary (10-Jun-2010), he is quoted saying, in 1997, "Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap."