An interesting recollection by Robert Weisbrot of Edward Witten's way to physics:

"I am reminded of a friend from the early 1970s, Edward Witten. I liked Ed, but felt sorry for him, too, because, for all his potential, he lacked focus. He had been a history major in college, and a linguistics minor. On graduating, though, he concluded that, as rewarding as these fields had been, he was not really cut out to make a living at them. He decided that what he was really meant to do was study economics. And so, he applied to graduate school, and was accepted at the University of Wisconsin. And, after only a semester, he dropped out of the program. Not for him. So, history was out; linguistics, out; economics, out. What to do? This was a time of widespread political activism, and Ed became an aide to Senator George McGovern, then running for the presidency on an anti-war platform. He also wrote articles for political journals like the Nation and the New Republic. After some months, Ed realized that politics was not for him, because, in his words, it demanded qualities he did not have, foremost among them common sense. All right, then: history, linguistics, economics, politics, were all out as career choices. What to do? Ed suddenly realized that he was really suited to study mathematics. So he applied to graduate school, and was accepted at Princeton. I met him midway through his first year there--just after he had dropped out of the mathematics department. He realized, he said, that what he was really meant to do was study physics; he applied to the physics department, and was accepted.

I was happy for him. But I lamented all the false starts he had made, and how his career opportunities appeared to be passing him by. Many years later, in 1987, I was reading the New York Times magazine and saw a full-page picture akin to a mug shot, of a thin man with a large head staring out of thick glasses. It was Ed Witten! I was stunned. What was he doing in the Times magazine? Well, he was being profiled as the Einstein of his age, a pioneer of a revolution in physics called "String Theory." Colleagues at Harvard and Princeton, who marvelled at his use of bizarre mathematics to solve physics problems, claimed that his ideas, popularly called a "theory of everything," might at last explain the origins and nature of the cosmos. Ed said modestly of his theories that it was really much easier to solve problems when you analyzed them in at least ten dimensions. Perhaps. Much clearer to me was an observation Ed made that appeared near the end of this article: every one of us has talent; the great challenge in life is finding an outlet to express it. I thought, he has truly earned the right to say that. And I realized that, for all my earlier concerns that he had squandered his time, in fact his entire career path--the ventures in history, linguistics, economics, politics, math, as well as physics--had been rewarding: a time of hard work, self-discovery, and new insight into his potential based on growing experience."

Some months ago I was sent a link to an April Fools' Day paper, funny-haha, physicist style. That paper has now resurfaced on my desk: Schrödinger's Cat is not Alone. It's a humorous take on the interpretation of quantum mechanics and cat dynamics. Not the sort of humor that deepens my laugh wrinkles, but I thought some of you might find it amusing.

Saturday, October 23, 2010

Science, especially fundamental research, used to be a pastime of the rich. Within the last century its potential for innovation has been discovered, and today fundamental research is widely recognized as an investment our societies make into the future. While this societal support and appreciation has opened the stage for everybody to participate, it came with a side-effect: research is increasingly confined and run by the same rules that have proven efficient for the producing and service-providing parts of our economies - the standards used by corporations and companies, the framework that policy makers are used to thinking in. While this is not a complete disaster - after all, science still works remarkably well - the problem is that it is not an approach that works for all sorts of research.

I have discussed on this blog many times the differences and similarities between the "Marketplace of Ideas" and the free marketplace of products. The most relevant difference is the property the system should optimize. For our economies it is profit, and - if you believe the standard theory - this ideally results in a most efficient use of resources. One can debate how well the details work, but by and large it has indeed worked remarkably well. In the academic system, however, the property to optimize is "good research" - a vague notion with subjective value. Before nature's judgement on a research proposal is available, what does or doesn't constitute good research is fluid and determined by the scientific community, which is also the first consumer of that research. Problems occur when one tries to impose fixed criteria for the quality of research, some measure of success. Such criteria set incentives that can only divert the process of scientific discovery (or invention?) from its original goal.

That is, as I see it, the main problem: setting wrong incentives. Here, I want to focus on a particular example, that of accountability and advance planning. In many areas of science, projects can be planned ahead and laid out in advance in details that will please funding agencies. But everybody who works in fundamental research knows that attempting the same in this area is a complete farce. You don't know where your research will take you. You might have an idea of where to start, but then you'll have to see what you find. Forced to come up with a 3-year, 5-point plan, some researchers, I've found, apply for grants after a project has already been finished, just not yet published, and then spend the grant on what is actually their next project. Of course that reduces the whole system to absurdity, and few can afford the luxury of delaying publication.

The side-effect of such 3-year pre-planned grants is that researchers adapt to the requirements and think in 3-year pre-plannable projects. Talk about setting incentives. The rest is good old natural selection. The same is true for the 2- or 3-year postdoc positions that, just this month, thousands of promising young researchers are applying for. If you sow short-term commitment, you reap short-term thinking. And that's disastrous for fundamental research, because the questions we really need answers to will remain untouched, except by those courageous few scientists who willingly risk their future.

Let us look at where the trends are going: The share of researchers in the USA holding faculty positions 7 years after obtaining their degree has dropped from 90% in 1973 to 60% in 2006 (NSF statistics, see figure below). The share of full-time faculty declined from 88% in the early 1970s to 72% in 2006. Meanwhile, postdocs and others in full-time nonfaculty positions constitute an increasing percentage of those doing research at academic institutions, having grown from 13% in 1973 to 27% in 2006.

The American Association of University Professors (AAUP) has compiled similar data showing the same trend, see the figure below depicting the share of tenured (black), tenure-track (grey), non-tenured (stripes) and part-time (dots) faculty for the years 1975, 1989, 1995 and 2007 [source] (click to enlarge).

In their summary of the situation, the AAUP speaks plainly: "The past four decades have seen a failure of the social contract in faculty employment... Today the tenure system [in the USA] has all but collapsed... the majority of faculty work in subprofessional conditions, often without basic protections for academic freedom."

In their report, the AAUP is more concerned with the quality of teaching, but these numbers also mean that more and more research is done by people on temporary contracts, who at the time they start their job already have to think about applying for the next one. Been there, done that. And I am afraid, this shifting of weight towards short-term thinking will have disastrous consequences for the fundamental research that gets accomplished, if it doesn't already have them.

In the context of setting wrong incentives and short-term thinking, another interesting piece of data is Pierre Azoulay et al.'s study:

In their paper, the authors compared the success of researchers in the life sciences funded under two different programs: the Howard Hughes Medical Institute (HHMI), which "tolerates early failure, rewards long-term success, and gives its appointees great freedom to experiment," and the National Institutes of Health (NIH), with "short review cycles, pre-defined deliverables, and renewal policies unforgiving of failure." Of course the interpretation of the results depends on how appropriate you find the measure used for scientific success, the number of high-impact papers produced under the grant. Nevertheless, I find it telling that, after a suitable adjustment for researchers' average qualifications, the HHMI program, funding 5 years with good chances of renewal, produces better high-impact output than the NIH's 3-year grants.

And speaking of telling tales, let me quote for you from the introduction of Azoulay et al.'s paper, which contains the following nice anecdote:

"In 1980, a scientist from the University of Utah, Mario Capecchi, applied for a grant at the National Institutes of Health (NIH). The application contained three projects. The NIH peer-reviewers liked the first two projects, which were building on Capecchi's past research effeorts, but they were unanimously negative in their appraisal of the third project, in which he proposed to develop gene targeting in mammalian cells. They deemed the probability that the newly introduced DNA would ever fi nd its matching sequence within the host genome vanishingly small, and the experiments not worthy of pursuit.

The NIH funded the grant despite this misgiving, but strongly recommended that Capecchi drop the third project. In his retelling of the story, the scientist writes that despite this unambiguous advice, he chose to put almost all his efforts into the third project: "It was a big gamble. Had I failed to obtain strong supporting data within the designated time frame, our NIH funding would have come to an abrupt end and we would not be talking about gene targeting today." Fortunately, within four years, Capecchi and his team obtained strong evidence for the feasibility of gene targeting in mammalian cells, and in 1984 the grant was renewed enthusiastically. Dispelling any doubt that he had misinterpreted the feedback from reviewers in 1980, the critique for the 1984 competitive renewal started, "We are glad that you didn't follow our advice."

The story does not stop there. In September 2007, Capecchi shared the Nobel prize for developing the techniques to make knockout mice with Oliver Smithies and Martin Evans. Such mice have allowed scientists to learn the roles of thousands of mammalian genes and provided laboratory models of human afflictions in which to test potential therapies."

Tuesday, October 19, 2010

It's not very technical, so don't hesitate to have a look. It's basically a summary of interesting developments and hopefully explains why I like working in the area. If you're not from the field, you might stumble over one or the other expression, but I think you'll still get a pretty good impression of what it's all about.

"The astronomical community did not believe we would ever really make the data public," says Mr. Szalay. The typical practice in the mid-1990s was to guard data because it was so difficult to get telescope time, and scholars did not want to get scooped on an analysis of something they gathered.

One incident demonstrates the mood at the time. A young astronomer saw a data set in a published journal and wanted to reanalyze it, so he asked his colleague for the numbers. The scholar who published the paper refused, so the junior scholar took the published scatterplot, guessed the numbers, and published his own analysis. The original scholar was so upset that he called for the second journal to retract the young scholar's paper.

Mr. Szalay said that astronomers changed their minds once the first big data sets hit the Web, starting with some images from NASA, followed by the official release of the first Sloan survey results in 2000.

I was surprised by that anecdote, but then I only started working in physics in '97. I do recall converting the occasional figure into a table to be able to reuse the data - an extremely annoying procedure, even with the use of suitable software. However, these were figures from decade-old textbooks, the data of which I needed in order to check whether a code I had written produced a sufficiently good fit. And 5 years back or so, when I had a phase of sudden interest in neutrino physics, I noticed that while one finds plenty of papers on the results of Monte Carlo simulations fit to neutrino experiments, the underlying data is not listed for all experiments. In one case, I ended up browsing a stack of Japanese PhD theses (luckily in English) till I found the tables in the appendix of one, and then I had to type them off. Not sure how much the situation in that area has changed since. But change is inevitably on its way...
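For the curious, the figure-to-table chore mentioned above boils down to a small calibration exercise: given the pixel positions of two known tick marks per axis, any point on the plot can be mapped back to data coordinates. Here is a minimal sketch (all pixel values and tick values are invented for illustration):

```python
import math

def make_axis_map(pix_a, val_a, pix_b, val_b, log=False):
    """Return a function mapping a pixel coordinate to a data value,
    calibrated from the pixel positions of two known axis ticks."""
    if log:
        # Logarithmic axis: interpolate linearly in log10 of the data values.
        la, lb = math.log10(val_a), math.log10(val_b)
        slope = (lb - la) / (pix_b - pix_a)
        return lambda p: 10 ** (la + slope * (p - pix_a))
    # Linear axis: plain linear interpolation between the two ticks.
    slope = (val_b - val_a) / (pix_b - pix_a)
    return lambda p: val_a + slope * (p - pix_a)

# Hypothetical calibration: x-axis tick "0" sits at pixel 50, tick "10" at
# pixel 450; y-axis (log scale, growing upward) tick "1" at pixel 400,
# tick "100" at pixel 40.
x_of = make_axis_map(50, 0.0, 450, 10.0)
y_of = make_axis_map(400, 1.0, 40, 100.0, log=True)

print(x_of(250), y_of(220))  # pixel (250, 220) maps to roughly (5.0, 10.0)
```

The same two-point calibration is what dedicated plot-digitizing tools do under the hood; doing it by hand just means reading off the tick pixels yourself.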

Wednesday, October 13, 2010

I recently came across a study in the sociology of science and have been wondering how to interpret the results:

Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data, by Daniele Fanelli, PLoS ONE 5(4): e10271.

There are many previous studies showing that papers are more likely to get published and cited if they report "positive results." Fanelli has now found a correlation between the likelihood of reporting positive results and the total number of papers published, in a sample of papers with a corresponding author in the USA, published in the years 2000 - 2007, across all disciplines. The papers were sampled by searching the Essential Science Indicators database with the query "test* the hypothes*", and the sample was then separated into positive and negative results by individual examination (both by the author and by an assistant). The result was as follows:

In a random sample of 1316 papers that declared to have “tested a hypothesis” in all disciplines, outcomes could be significantly predicted by knowing the addresses of the corresponding authors: those based in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline, methodology and research expenditure... [T]hese results support the hypothesis that competitive academic environments increase not only the productivity of researchers, but also their bias against “negative” results.
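To make the shape of this kind of analysis concrete, here is a toy simulation (entirely invented numbers, not Fanelli's data, and a bare correlation rather than his actual regression, which controls for discipline, methodology, and R&D expenditure). The bias is built in by hand, and the correlation then recovers it:

```python
import random

random.seed(42)

# Hypothetical papers-per-capita "productivity" score for 20 fictitious states.
productivity = {f"state{i:02d}": random.uniform(0.5, 2.0) for i in range(20)}

def simulate_paper(prod):
    """Pretend the chance of reporting a positive result grows with state
    productivity -- the bias hypothesis, inserted by construction."""
    p_positive = 0.55 + 0.15 * (prod - 0.5) / 1.5  # ranges from 0.55 to 0.70
    return random.random() < p_positive

# For each state, sample 200 papers and record the positive-result share.
points = []
for state, prod in productivity.items():
    outcomes = [simulate_paper(prod) for _ in range(200)]
    points.append((prod, sum(outcomes) / len(outcomes)))

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

r = pearson(points)
print(f"Pearson r between productivity and positive-result share: {r:.2f}")
```

Of course, as the discussion below makes clear, finding such a correlation in real data does not tell you which causal story produced it - that is exactly the point of contention.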

When I read that, I was somewhat surprised about the conclusion. Sure, such a result would "support" the named hypothesis in the sense that it doesn't contradict it. But it seems to me like jumping to conclusions. How many other hypotheses can you come up with that are also supported by the results? I'll admit that I hadn't even read the whole paper when I made up the following ones:

Authors who publish negative results are sad and depressed people and generally less productive.

A scientist who finds a negative result wants more evidence to convince himself that his original hypothesis was wrong; thus the study takes longer and, in toto, fewer papers are published.

Stefan suggested that the folks who publish more papers are the sort who hand out a dozen shallow hypotheses to their students to be tested, hypotheses that are likely to be confirmed. (Stefan used the, unfortunately untranslatable, German expression "Dünnbrettbohrer," which literally means "thin-board driller.")

After I had read the paper, it turned out Fanelli had something to say about Stefan's alternative hypothesis. Before I come to that, however, I have to say that I have an issue with the term "positive result." Fanelli writes that he uses it to "indicate all results that support the experimental hypothesis." That doesn't make a lot of sense to me, as one could simply negate the hypothesis and thereby turn any result into a positive one. If it were that easy to circumvent a harder-to-publish, less-likely-to-be-cited summary of one's research results, nobody would ever publish a result that's "negative" in that sense. I think that in most cases a positive result should be understood as one that confirms a hypothesis that "finds something" (say, an effect or a correlation), rather than one that "finds nothing" (we've generated/analyzed loads of data and found noise). I would agree that this isn't well-defined, but I think in most cases there would be broad agreement on what "finding something" means, and a negation of the hypothesis wouldn't make the reader buy it as a "positive result." (Here is a counter-example.) The problem is then of course that studies which "find nothing" are just as important as the ones that "find something," so the question of whether there's a bias in which ones get published matters.

Sticking with his own interpretation, Fanelli considers that researchers who come to a positive result, and in that sense show themselves correct, are just the smarter ones, who are also more productive. He further assumes that the more productive ones are more likely to be found at elite institutions. Under his own interpretation this alternative hypothesis doesn't make a lot of sense, because once the paper is out, who knows what the original hypothesis was anyway? You don't need to be particularly smart to just reformulate it. That reformulation, however, doesn't turn a non-effect into an effect, so let's instead consider my interpretation of "positive result." Fanelli argues that the explanation - that people smart enough to do an experiment where something is to be found are also the ones who generally publish more papers - doesn't explain the correlation, for two reasons. First, since he assumes these people will be at elite institutions, there should be a correlation with R&D expenditure, which he didn't find. Second, this explanation alone (without any bias) would mean that in states where 95% - 100% of published results were positive, the smart researchers hardly ever misjudged the outcome of an experiment in advance, and the experiment was always such that the result was statistically significant, even though other studies have shown that this is not generally the case.

To the alternative hypothesis that Stefan suggested, Fanelli writes:

A possibility that needs to be considered in all regression analyses is whether the cause-effect relationship could be reversed: could some states be more productive precisely because their researchers tend to do many cheap and non-explorative studies (i.e. many simple experiments that test relatively trivial hypotheses)? This appears unlikely, because it would contradict the observation that the most productive institutions are also the more prestigious, and therefore the ones where the most important research tends to be done.

Note that he first speaks about "states" (which is what actually went into his study) and only later about "institutions." Is it indeed the case that the most productive states (that would be DC, AZ, MD, CA, IL) are also the ones where the most important research is done? It's not that I entirely disagree with this argument, but I don't find it particularly convincing without a clarification of what "most important research" means. Is it maybe research that is well cited? And didn't we learn earlier that positive results tend to get cited better? Seems a little circular, doesn't it?

In the end, I wasn't really convinced by Fanelli's argument that the correlation he finds results from systematic bias - though it does sound plausible, and he did, after all, verify his own hypothesis.

Let me then remark on the sample he used. While Fanelli has good arguments that the sample is representative for the US states, it is not clear to me that it is also representative for "all disciplines." The phrase "test the hypothesis" might simply be more commonly used in some fields, e.g. medicine, than in others, e.g. physics. The thing is that in physics, what is actually a negative result often comes in the form of a bound on some parameter or a higher-precision confirmation of some theory. Think of experiments that "test the hypothesis" that Lorentz-invariance is broken. There's an abundance of papers that do nothing but report negative results and more negative results (no effect, nothing new, Lorentz-invariance still alive). Yet I doubt these papers would have shown up in the keyword search, simply because the exact phrase is rarely used. More commonly it would be formulated as "constraining parameters for deviations from Lorentz-invariance" or something similar.

That is not to say, however, that I think there's no bias for positive results in physics. There almost certainly is one, though I suspect you find more of it in theoretical than in experimental physics, and again the phrase "testing the hypothesis" would probably not be used. The thing is, I suspect that a great many attempts to come up with an explanation or a model that fails when confronted with the data never get published. And if they do, it's highly plausible that these papers don't get cited much, because few people will invest further time into a model that was already shown not to work. However, I would argue that such papers should have their own place. At present it very likely happens that many people try the same ideas and all find them to fail; they could save time and effort if the failure were explained and documented once and for all. So I'd be all in favor of a journal for "models that didn't work."

Sunday, October 10, 2010

The appeal of string theory is in the simplicity of the idea. The devil, as usual, is in the details that follow. But one-dimensional objects are common in physical systems, and sometimes have little to do with string theory as the candidate theory of everything. The Lund string model, for example, is an effective description for the fragmentation of color flux-tubes resulting in hadronization. And then there are cosmic strings.

Cosmic strings are stable, macroscopic, one-dimensional objects of high energy density that might have been created in the early universe. It was originally suggested by Kibble in 1976 that such objects could form in symmetry-breaking phase transitions in quantum field theory that would have taken place when the universe was young and hot. These strings then form a network of (infinitely) long strings and loops that evolves with the expansion of the universe. It was thought for a while that strings might seed the density perturbations leading to the large-scale structures we see today, but this turned out not to be consistent with the increasingly better data. While we now know that cosmic strings cannot have dominated in the early universe, some of them might still have been present, and might still be present today.

The topic rose to new attention when it was found that cosmic strings might alternatively be created in a string-theory scenario in the early universe and then grow to macroscopic sizes. That is interesting because cosmic strings have a bunch of possibly observable consequences. For the purposes of testing string theory, the question is of course whether one could distinguish a cosmic string created by ordinary quantum field theory from a cosmic super-string-theory-string.

Two of the most striking signatures are that cosmic strings create peculiar gravitational lensing effects and can, as they move around, develop cusps that release bursts of gravitational waves. There are other, more subtle signatures, such as the creation of small non-Gaussianities in the cosmic microwave background (CMB) and some influence on the CMB tensor modes, but gravitational lensing and gravitational wave bursts have so far gotten the most attention, due to the already good experimental prospects of detecting them.
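To put a scale on the lensing signature, recall the standard textbook result (not specific to any particular paper discussed here): a long straight string of tension $\mu$ leaves the surrounding space locally flat but conical, with a deficit angle

```latex
\delta = \frac{8\pi G \mu}{c^2} ,
```

so a source directly behind the string appears as a pair of images separated by an angle of order $\delta$. For field-theory strings formed around the GUT scale, $G\mu/c^2 \sim 10^{-6}$, this gives image splittings of a few arcseconds.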

There are, however, differences between fundamental and non-fundamental cosmic strings that have been pointed out in recent years. They stem from the presence of additional spatial dimensions in superstring theory, which alter the evolution of the string network, resulting in a denser network today - raising the hope that bursts of gravitational radiation are more likely to occur. Recently though, a more detailed study has been done, examining the motion of the string and the gravitational radiation it emits while taking the additional dimensions into account:

In their analysis, the researchers found that the presence of compactified extra dimensions larger than the width of the string damps the gravitational wave emission. The effect depends on the number of extra dimensions, and the damping can amount to several orders of magnitude. While this is interesting in the sense that the signal carries information about the sort of string one is dealing with, it unfortunately means that the signal is also far less likely to be detected at all. The strength of the damping also depends on the ratio of the width of the string to the size of the extra dimensions, though this dependence is hidden within the model and not obvious from the results. I wrote to one of the authors of the above paper, Ruth Gregory, who explained that simulating the dynamics of a thick string was quite a challenge, which is why they had to resort to an empirical model.

A signal of cosmic strings would be tremendously exciting either way. But so far the prospects of being able to unambiguously assign such a signal to string theory seem slim.

Today, ten years later, he has been awarded the Nobel Prize in Physics for 2010, together with Konstantin Novoselov, for "groundbreaking experiments regarding the two-dimensional material graphene".

Graphene, as this chicken-wire single-atom carbon layer is called, is a cool material for theorists and experimentalists alike - just have a look at Google to see how popular and important this stuff has become.

It seems to me that the way Geim and Novoselov discovered graphene in 2004 - using adhesive tape to peel single layers of carbon atoms off a piece of graphite, the "Scotch tape method" - and the levitating frog clearly show the same playful attitude towards physics. A great way to do science!

Monday, October 04, 2010

“But you have correctly grasped the drawback that the continuum brings. If the molecular view of matter is the correct (appropriate) one, i.e., if a part of the universe is to be represented by a finite number of moving points, then the continuum of the present theory contains too great a manifold of possibilities. I also believe that this too great is responsible for the fact that our present means of description miscarry with the quantum theory. The problem seems to me how one can formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a supplementary construction not justified by the essence of the problem, which corresponds to nothing “real”. But we still lack the mathematical structure unfortunately. How much have I already plagued myself in this way!”

It's from a 1916 letter to Hans Walter Dällenbach, a former student of Einstein. (Unfortunately the letter is not available online.) I hadn't been aware Einstein thought (at least then) that a continuous space-time is not “real.” It's an interesting piece of history.

The phenomenology of quantum gravity is a still fairly young research field, and it is good to see it attracting more interest and effort every year. Experimental tests, also in the form of constraints, are an important guide in our search for a theory of quantum gravity. The challenge is that gravity is such a weak force compared to the other interactions that quantum effects of gravity are extremely difficult to detect - they become important only at the Planck scale, at energies 16 orders of magnitude above what the Large Hadron Collider (LHC) will reach. During the last decade, however, proposals have been put forward for how quantum gravity could nevertheless be testable.

To that end, a number of models have been developed that arguably are at different levels of sophistication and plausibility, not to mention man-hours. As you can guess, this makes the field very lively, with many controversies still waiting to be settled. So far, none of these models have actually been rigorously derived from a candidate theory of quantum gravity. Instead, they are means to capture specific features that the fundamental theory has been argued to have. Such phenomenological models should thus be understood as simplifications, and one would expect them to be incomplete, leaving questions open for the fundamental theory to be answered.

The best place to look for quantum-gravitational effects is in regions of strong curvature, that is, towards the center of black holes or towards the first moments of the universe. Since black hole interiors are hidden from our observation by the horizon, this leaves the early universe as the best place to look. It is thus not surprising that the bulk of effort has been invested into cosmology, most notably in the form of String Cosmology and Loop Quantum Cosmology. The typical observables to look for are the amplitudes of tensor modes in the cosmic microwave background (CMB) and non-Gaussianities.

The other area of quantum gravity phenomenology that has attracted a lot of attention is violations and deformations of Lorentz-invariance. These have been argued to appear in many approaches to quantum gravity, including Loop Quantum Gravity (LQG), String Theory, non-commutative geometry, and emergent gravity - hence the large interest in the subject. However, the details are subtle. As I mentioned, no actual derivation exists from either LQG or string theory, so don't jump to conclusions. Violations of Lorentz-invariance, which single out a preferred rest frame, can be captured in an effective field theory and are testable to extremely high precision with particle physics experiments (both collider and astrophysics), which allows us to tightly constrain them despite the smallness of the Planck scale. Deformations of Lorentz-invariance have no preferred frame and have been argued not to be expressible as effective field theories, thus evading the tight constraints on Lorentz-invariance violations. Deformations of Lorentz-invariance generically lead to a modification of the dispersion relation and an energy-dependent speed of light, which may be observable in gamma-ray-burst events. As you know from my earlier writing, there's some discussion at the moment about the consistency of these models, and Lee Smolin gave a nice talk on that. Giovanni Amelino-Camelia summarized some of the recent work in the field, and added an interesting new proposal.
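For concreteness, the generic first-order parameterization used in this literature (a phenomenological ansatz, not a derivation from any candidate theory) modifies the massless dispersion relation by a term suppressed by the Planck energy $E_{\rm Pl} \approx 10^{19}\,$GeV:

```latex
E^2 \simeq p^2 c^2 \left( 1 - \eta \, \frac{E}{E_{\rm Pl}} \right),
\qquad
v_{\rm group} = \frac{\mathrm{d}E}{\mathrm{d}p} \approx c \left( 1 - \eta \, \frac{E}{E_{\rm Pl}} \right),
```

so two photons with energy difference $\Delta E$, emitted simultaneously from a source at distance $D$, arrive with a relative delay

```latex
\Delta t \approx \eta \, \frac{\Delta E}{E_{\rm Pl}} \, \frac{D}{c} .
```

Neglecting cosmological expansion (which modifies $D/c$ by a redshift integral of order one), a gamma-ray burst at $D/c \sim 10^{17}\,$s with $\Delta E \sim 10\,$GeV gives $\Delta t \sim 0.1\,\eta\,$seconds, which is why burst timing can probe $\eta$ of order one.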

Besides these areas, into which most of the work has been invested, there's a number of interesting models based on ideas about the fundamental structure of space-time. There is, for example, the causal sets approach, which is Lorentz-invariant yet results in diffusion, aspects of which may be observable in the CMB polarization; Fay Dowker spoke about this at the workshop. Again, note however that the diffusion equation is motivated by, though not yet actually derived from, the causal sets approach. Then there are the quantum graphity models, which I personally find very promising. Unfortunately, Fotini Markopoulou could not make it to our meeting, but I am reasonably sure we'll hear more about that model and its phenomenological implications in the future. And there are models of space-time foam leading to decoherence and/or CPT violation, and models of space-time granularity leading to modifications of the Eötvös experiment (preprint here) - I won't attempt to make this a complete listing, because I'd inevitably forget somebody's pet model.

A class of models that should be discussed separately are those with a lowered Planck scale. In scenarios with large extra dimensions it can happen that quantum-gravitational effects are not actually as feeble as we think they are from extrapolating the strength of gravity over 16 orders of magnitude. (For details, see my earlier post on such models.) It might instead be that the Planck scale is just around the corner, making it accessible to collider experiments. A lot of work has been done in this area, and these models are now up for testing at the LHC. Thomas Rizzo gave a great talk on these prospects, and Marco Cavaglia spoke about the production of mini black holes in particular.

Then there's the possibility that we do already have observational evidence for quantum gravity, we just haven't recognized it for what it is. Stephon Alexander talked about a model that generates the neutrino masses, the cosmological constant, and makes additional predictions. Can you ask for more? (Preprint here.) And Greg Landsberg gave a talk about his recent work, trying out the idea that on short scales space-time is not higher- but lower-dimensional (preprint here). This idea has been around for some years now (even New Scientist noticed), but in my impression it so far lacks a really good phenomenological model.

We had three discussion sessions during the week: one on the question of what principles might be violated by quantum gravity, one on experiments and thought experiments, and one on the future of particle physics. Unfortunately the recording of the last one, which was the liveliest, failed, but check out the other two. The discussions went very well, and I think they served their purpose of letting people get to know each other and exchange opinions about the central questions of the field.

All together, I am very pleased with the workshop. Despite a number of organizational glitches, it went very smoothly. The experimentalists mixed well with the theorists, we covered a fair share of the relevant topics, and it didn't rain on the BBQ. To offer some self-criticism, this year we lacked string phenomenology. Some may want to count Mavromatos as "stringy," but we didn't have anybody speaking on string cosmology, for instance. That was not by design but by chance, since, as usual, some of the people we invited declined or could in the end not make it. One of the lessons I personally have drawn from this workshop is that there is some degeneracy in the predictions of various models that should be sorted out by combining several predictions. This has been done well in the case of extra-dimensional models, where a lot of effort has been invested into clearly distinguishing the signatures of different scenarios. Similar studies are, however, missing when it comes, for example, to quantum gravity phenomenology in the early universe as predicted by different models.

In any case, I hope that we will have more workshops in this series in the future. I'll keep you posted. And I'm sure, one day the workshop will come when we'll actually have evidence to discuss...