Tuesday, August 31, 2010

Einstein considered that quantum mechanics must only be an approximate theory which was derivable from a "classical" theory which did not have the same philosophical problems. In a 1949 essay, Reply to Criticisms published in response to the essays in Albert Einstein: Philosopher-Scientist he wrote

...Within the framework of statistical quantum theory there is no such thing as a complete description of the individual system. ....The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems. .... For if the statistical quantum theory does not pretend to describe the individual system (and its development in time) completely, it appears unavoidable to look elsewhere for a complete description of the individual system; in doing so it would be clear from the very beginning that the elements of such a description are not contained within the conceptual scheme of the statistical quantum theory....this scheme could not serve as the basis of theoretical physics. Assuming the success of efforts to accomplish a complete physical description, the statistical quantum theory would, within the framework of future physics, take an approximately analogous position to the statistical mechanics within the framework of classical mechanics. I am rather firmly convinced that the development of theoretical physics will be of this type; but the path will be lengthy and difficult.

Saturday, August 28, 2010

I find it interesting that there are some physicists who won't even entertain this question. I remember raising it in a "round table" discussion at a conference and people just laughed and did not want to engage with the question. Hence, it is nice that last year Science published a short piece, Is Quantum Theory Exact? by Stephen Adler and Angelo Bassi.

They discuss a physical collapse model called the continuous spontaneous localization (CSL) model which involves adding noise terms to the Schrodinger equation in order to produce spontaneous wave function collapse on "macroscopic" scales. It should be stressed that this is a radical proposal involving fundamental new "forces" in the universe. They discuss physical bounds from known experiments for the parameters in the model.

Unfortunately, there are several significant issues that the short article does not mention.

Most of the dynamical equations and corresponding experimental signatures that these physical collapse models produce are identical to those for decoherence models. Hence, it will be difficult to experimentally distinguish the CSL model from the ubiquitous effects of decoherence due to the environment.

Given that decoherence does not solve the measurement problem, because of the problem of definite outcomes, the physical collapse models seem to me to do only slightly better; they invoke the "gambler's ruin" problem to derive the Born rule.
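The "gambler's ruin" mechanism can be illustrated with a toy simulation (my own sketch, not the actual CSL stochastic dynamics): treat the squared amplitude of one outcome as a gambler's capital performing an unbiased random walk between absorbing barriers. The fraction of runs absorbed at the upper barrier then reproduces the initial weight, i.e., the Born rule.

```python
import random

def collapse_trial(p0_units, total_units):
    """One 'gambler's ruin' run: the squared amplitude, in integer units,
    performs an unbiased random walk until absorbed at 0 (outcome B)
    or at total_units (outcome A)."""
    w = p0_units
    while 0 < w < total_units:
        w += 1 if random.random() < 0.5 else -1
    return w == total_units

random.seed(0)
total = 20       # resolution of the walk; initial weight |c_A|^2 = 6/20 = 0.3
n_trials = 5000
freq = sum(collapse_trial(6, total) for _ in range(n_trials)) / n_trials
print(f"fraction collapsing to A: {freq:.3f}  (Born rule predicts 0.30)")
```

The key point is the martingale property of the unbiased walk: the probability of absorption at the upper barrier equals the starting fraction of "capital," with no extra input needed.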

Decoherence has the distinct advantage of being derived directly from the laws of standard quantum mechanics, whereas current collapse models are required to postulate their reduction mechanism as a new fundamental law of nature. On the other hand, collapse models yield, at least for all practical purposes, proper mixtures, so they are capable of providing an “objective” solution to the measurement problem. The formal similarity between the time evolution equations of the collapse and decoherence models nourishes hopes that the postulated reduction mechanisms of collapse models could possibly be derived from the ubiquitous and inevitable interaction of every physical system with its environment and the resulting decoherence effects. We may therefore regard collapse models and decoherence not as mutually exclusive alternatives for a solution to the measurement problem, but rather as potential candidates for a fruitful unification.

Friday, August 27, 2010

If you are applying for something it is really worth proofreading your application a couple of times, and getting someone else to do so as well. This is a nice and helpful thing that students and postdocs can do for each other. Don't rely on your busy supervisor.

At times I have to review large numbers of applications (jobs, grants, Ph.D. proposals, or papers to referee ...). One thing I have noticed that quickly irritates me and some of my senior colleagues is the number of typos and the amount of incomplete information in many applications. Maybe it should not matter, but it does have an effect on how people perceive your application.

Also if you are reapplying (or resubmitting) do NOT assume that people will remember the last version and why they asked you to revise for re-submission or re-application. It really helps your case if people can see that you have taken on board the feedback given. Ignoring it can be the kiss of death....

Again, you may not like this or agree with it. But, that is the way things are ....

Thursday, August 26, 2010

This is a subtle question with subtle answers. I have hesitated on posting on this because the more I read the less sure I am of what the answer is. Basically, it seems there are a few key (distinct but related) aspects to the problem:

how does a measurement convert a coherent state undergoing unitary dynamics to a "classical" mixed state for which we can talk about probabilities of outcomes?

why is the outcome of an individual measurement definite for the "pointer states" of the measuring apparatus?

can one derive the Born rule, which gives the probability of a particular outcome?

It seems that decoherence only solves the first problem, but not the last two.

Fullerenes .... showing quantum interference in two-slit experiments whereas they can be seen in a tunnelling electron microscope, for instance, at classically well-defined locations. This shifting boundary is confirmed by the decoherence mechanism. But to argue that this is evidence against the Copenhagen interpretation, as the author does, is unjustified: the Copenhagen interpretation itself says that whether an object is classical or quantum is a function of the chosen experimental set-up.

Decoherence is, to follow physicist John Bell, for all practical purposes sufficient to describe the loss of quantum features for large systems. There are still unanswered questions. It is well known, which Schlosshauer also stresses, that the interference terms never strictly vanish, so decoherence can tell us only that the interference terms disappear effectively but not rigorously. Even after accepting that approximation, we are still left with the system represented as a mixture of various possibilities, like being in two places at once. In the classical world, we know that the system is always at this place or at that place. To explain the two as equivalent is again, for all practical purposes, sufficient. Yet it involves, as Bell points out, another interpretive leap.

Monday, August 23, 2010

Seth Olsen brought to my attention an Editorial in the American Chemical Society journal Chemical Biology, Deep Impact: Scientific Evaluation by the Numbers. It discusses some of the problems associated with the metric Impact Factors for journals:

Acta Crystallographica—Section A had an impact factor of 2.0 in 2008, which vaulted up to 49.9 in 2009 .... What is even more remarkable is that this rise can be predominantly attributed to citations to a single article published in 2008....

Nature calculated that 25% of published articles contributed to 89% of the journal’s 2005 impact factor.

While on the subject of research metrics the Wikipedia page on the h-index is worth reading.
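For the record, the h-index is simple to compute: sort citation counts in decreasing order and find the largest h such that the h-th paper has at least h citations. A quick sketch, which also shows that, unlike the impact factor, a single hugely cited paper barely moves it:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # -> 4
print(h_index([25, 8, 5, 3, 3]))   # -> 3 (the 25-citation outlier doesn't help much)
```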

Nobody is silly enough to think that an elephant will only fall under gravity if its genes tell it to do so, but the same underlying error can easily be made in less obvious circumstances. So [we must] distinguish between how much behavior, and what part, has a genetic origin, and how much comes solely because an organism lives in the physical universe and is therefore bound by physical laws.

– Ian Stewart, Life’s Other Secret

As with each chapter Nelson begins with a Biological question and a Physical idea:

Biological question: Why do bacteria swim differently from fish?

Physical idea: The equations of motion appropriate to the nanoworld behave differently under time reversal from those of the macroworld.

Figure 5.1 is a picture showing the peculiar character of laminar flow characteristic of a Reynolds number less than one. A really cool video of the same experiment is here.

I also enjoyed a video on Reynolds Number from Sixty Symbols which includes the image above of vortex-antivortex pairs created after a volcano eruption, taken by NASA.
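The bacterium/fish contrast can be made quantitative with the Reynolds number Re = ρvL/η. A back-of-envelope sketch (the sizes and speeds are order-of-magnitude values I have assumed, not Nelson's):

```python
def reynolds(density, speed, length, viscosity):
    """Reynolds number Re = rho * v * L / eta (dimensionless)."""
    return density * speed * length / viscosity

rho, eta = 1000.0, 1.0e-3   # water: density in kg/m^3, viscosity in Pa.s

# A ~1 micron bacterium swimming at ~30 microns/s versus a ~10 cm fish at ~1 m/s:
print(f"bacterium: Re ~ {reynolds(rho, 30e-6, 1e-6, eta):.1e}")
print(f"fish:      Re ~ {reynolds(rho, 1.0, 0.1, eta):.1e}")
```

Ten orders of magnitude separate the two, which is why inertia is irrelevant in the nanoworld and a bacterium's swimming stroke must break time-reversal symmetry.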

Thursday, August 19, 2010

I have been asked to give a talk at a Symposium on this topic. All the speakers are meant to come up with one BIG question. Later, I will post some of the questions other speakers have proposed. I am struggling to come up with my question. I want it to be focussed and ultimately addressable by experiment in the next decade. Here are a few possibilities:

Is there some scale (energy, length, and time) on which the linear superposition principle breaks down?

Is the quantum-classical "transition" a crossover or a phase transition?

What are the distinct signatures of quantum complexity?

Which interpretations of quantum theory are not experimentally falsifiable?

Are there any biological processes which require/involve quantum coherence beyond the nanometer length scale and picosecond time scale?

Are there any experiments which should cause one to abandon a philosophical position of critical realism?

Wednesday, August 18, 2010

I am helping teach BIPH3001 Frontiers in Biophysics which is running as a reading course (with a blog) based on Philip Nelson's brilliant text, Biological Physics: Energy, Information, and Life. This week we are looking at chapter 4, Random Walks, Friction, and Diffusion. Here are a few highlights. As I have noted previously he has great section headings which often summarise the main point.

4.1.2 Random walks lead to diffusive behavior

4.1.3 The diffusion law is model independent

4.1.4 Friction is quantitatively related to diffusion

Einstein showed that

friction * diffusion constant = kB T

This is the first example of a fluctuation-dissipation relation.
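The claim of 4.1.2, that random walks lead to diffusive behaviour, is easy to check numerically: the mean squared displacement of an unbiased walk grows linearly with the number of steps. A minimal sketch:

```python
import random

def msd(n_steps, n_walkers, seed=1):
    """Mean squared displacement of 1D unbiased random walks (unit steps)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        total += x * x
    return total / n_walkers

# Diffusive scaling: <x^2> grows linearly with the number of steps,
# regardless of microscopic details - the model independence of 4.1.3.
for n in (100, 400, 1600):
    print(n, msd(n, 2000))
```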

4.2. Excursion: Einstein's role

Einstein saw that the fluctuation-dissipation relation gave a falsifiable quantitative hypothesis. The experiments he proposed and Perrin performed (his actual data is in the Figure above) finally convinced people that atoms really did exist.
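Combining the relation above with the Stokes friction ζ = 6πηa for a sphere gives the Stokes-Einstein estimate that Perrin's experiments could test. A sketch for a micron-sized colloidal particle in water at room temperature (illustrative numbers, not Perrin's):

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # room temperature, K
eta = 1.0e-3        # viscosity of water, Pa.s
a = 0.5e-6          # particle radius, m

# Einstein relation, friction * D = kB T, with Stokes friction 6*pi*eta*a:
D = kB * T / (6 * math.pi * eta * a)
print(f"D = {D:.2e} m^2/s  (~{D * 1e12:.2f} um^2/s)")
```

A fraction of a square micron per second: small, but exactly the scale Perrin could track under a microscope, which is what made the hypothesis falsifiable.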

4.3.1 The conformation of polymers

The diffusion constant of a polymer scales inversely with the square root of the length (which is proportional to the mass of the polymer)

4.4.1 Diffusion rules the subcellular world

4.6.1 The permeability of artificial membranes is diffusive.

Note how the experimental data below extends over about 6 orders of magnitude!

Monday, August 16, 2010

I am working on my public lecture for tomorrow night, "van der Waals: his legacy 100 years later," for the UQ Physics Museum, marking the centenary of his Nobel Prize.

Reading his Prize speech the following quote is particularly interesting:

in all my studies I was quite convinced of the real existence of molecules, that I never regarded them as a figment of my imagination, nor even as mere centres of force effects. I considered them to be the actual bodies.... We do not know the nature of a molecule consisting of a single chemical atom. It would be premature to seek to answer this question but to admit this ignorance in no way impairs the belief in its real existence. When I began my studies I had the feeling that I was almost alone in holding that view. And when, as occurred already in my 1873 treatise, I determined their number in one gram-mol, their size and the nature of their action, I was strengthened in my opinion, yet still there often arose within me the question whether in the final analysis a molecule is a figment of the imagination and the entire molecular theory too. And now I do not think it any exaggeration to state that the real existence of molecules is universally assumed by physicists. Many of those who opposed it most have ultimately been won over, and my theory may have been a contributory factor.

It is easy for us to forget (and for students to not fully appreciate) that there was a time when the existence of atoms and molecules was a contentious position.

Previously I posted about the importance of written reports before and after weekly meetings with Ph.D students and postdocs. I think it is worth re-reading.

One thing students sometimes struggle with is: "But, I don't have anything to report. I have been working on problem X and made no progress."

I think this is when a detailed and specific report is even more important. The difficult process of writing down the specifics of what you have been trying to do and why it has not worked can really focus your attention and clarify the way forward. Even if you don't see it your supervisor may. But, if they don't have the concrete specifics it is hard to help.

A superfluid has no viscosity. But turbulence is still possible. Feynman suggested in 1955 that this could arise as a disordered tangle of vortices.

Three features of quantum turbulence

1. dynamics is described by a quantum dynamical equation (e.g., a non-linear Schrodinger equation) rather than the Navier-Stokes equation.

2. Kolmogorov scaling (this was observed in 1998)

3. disordered tangled arrangement of vortices

BECs have "high potential" for step-by-step construction of a quantum turbulent state.

There are only a million atoms in the BECs studied here.

[But isn't this just 100^3? What is the maximum number of vortices one could put in such a small system, 100?]

Spontaneous vortices can be produced with a temperature quench.

It was claimed that dissociation of vortex-antivortex pairs is related to quantum turbulence. However, in two dimensions this dissociation is just the Kosterlitz-Thouless transition, and I doubt it has anything to do with quantum turbulence.

However, I failed to see these advances from the talk. The experiments are beautiful and fascinating. But, I could not see how the experiments or simulations have led to any new insights or advances beyond those from Kolmogorov in 1941 and Feynman in 1955. To me this is another example of how people in the BEC community oversell the significance of their work. Potential advances and hoped for insights are not the same as real advances and insights.

For an example of a real advance in a difficult problem which spread across disciplines consider the case of the Hopfield net, which was influenced by ideas from spin glasses in condensed matter physics. This had a large influence on neural networks in computer science and biology. The fact that Hopfield is now a Professor of Molecular Biology at Princeton is a testimony to the advances he made.

Chemical Engineering departments now regularly hire faculty who do research using density functional theory (DFT). This is testimony to the advances that have been made in modelling real materials and chemical processes using quantum chemical methods.

When departments of Aeronautical and Mechanical Engineering hire people to work on quantum turbulence, that will be a real sign of a significant contribution.

Friday, August 13, 2010

On Complex Matters, Steve Simon has a post claiming that two public lectures given by Shankar at Aspen are the "best physics lectures ever". I have not watched them yet but would be curious to hear what others think. The lectures are on quantum physics and relativity.

Thursday, August 12, 2010

Quantum oscillations such as the de Haas - van Alphen and Shubnikov - de Haas oscillations are generally viewed as a distinct signature of a well-defined Fermi surface. These oscillations have proven to be a powerful probe of the electronic properties of metals: in elemental metals they have provided information about the geometry of the Fermi surface, effective masses, and scattering times. This information is extracted by making use of the expressions of Lifshitz and Kosevich (LK) for the dependence of the amplitude of the oscillations on the temperature and magnetic field. They derived their results for non-interacting electrons in three dimensions, but their results have since been extended to two dimensions and quasi-two dimensions. Kartsovnik has given a nice review of how magnetic oscillations have also provided significant information about layered organic metals. Most experiments are consistent with the LK form, but there are exceptions, e.g., in α-(BEDT-TTF)2NH4Hg(SCN)4 and β″-(BEDT-TTF)2SF5CH2CF2SO3.
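For reference, the LK thermal damping of the oscillation amplitude has the simple closed form R_T = X/sinh(X) with X = 2 pi^2 kB T m*/(hbar e B), equivalently X ≈ 14.69 (m*/m_e) T[K]/B[T]. A quick sketch of how the amplitude falls with effective mass (the T and B values are just illustrative):

```python
import math

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J.s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

def lk_thermal_factor(T, B, m_star=m_e):
    """LK thermal damping R_T = X/sinh(X), X = 2 pi^2 kB T m* / (hbar e B)."""
    X = 2 * math.pi ** 2 * kB * T * m_star / (hbar * e * B)
    return X / math.sinh(X) if X > 0 else 1.0

# Heavier quasiparticles damp the oscillations faster; fitting this
# temperature dependence is how effective masses are extracted.
for ratio in (1.0, 2.0, 4.0):
    print(ratio, round(lk_thermal_factor(T=1.0, B=10.0, m_star=ratio * m_e), 3))
```

Note that R_T is a strictly monotonic function of temperature; a non-monotonic amplitude, as in the paper discussed below, is therefore a clear departure from LK.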

The effect of many-body interactions on the LK form has been considered and reviewed by Wasserman and Springford. But what happens in a non-Fermi liquid has been an open question. Long ago Pelzer considered the case of a marginal Fermi liquid. [We were postdocs together with John Wilkins]. Unfortunately, his paper is rarely referenced (e.g., in the paper below).

Given all of the above I was very interested to see a recent paper Generalized Lifshitz-Kosevich scaling at quantum criticality from the holographic correspondence. One of their many results is that the amplitude of the oscillations can be a non-monotonic function of temperature, as is observed in β″-(BEDT-TTF)2SF5CH2CF2SO3. Before I read this paper I was rather skeptical about attempts of some string theorists to obtain results of relevance and use to condensed matter physics, but this paper certainly has some very concrete results which can be compared to experimental results. It is certainly worth digesting.

Tuesday, August 10, 2010

Anderson gives several concrete examples to stress the importance of non-perturbative effects in quantum many-body theory, whether it is the standard model of particle physics or molecular bonding. Bound states are a prime example of a non-perturbative effect. He also mentions the highly non-trivial (and counter-intuitive) case of the chiral anomaly where the whole Fermi sea matters. Anderson cites Roman Jackiw's Dirac Prize lecture as a beautiful exposition of the relevant physics.

As an aside I think the title is a bit unfair to Feynman. Feynman would have been the last person to "brainwash" anyone and he certainly appreciated the importance of non-perturbative effects. The problem is really people who are enamoured by the success of Feynman diagrams (in their appropriate context) and then apply the formalism when one can't necessarily expect it to work.

Monday, August 9, 2010

In Seattle, I had a really interesting and helpful discussion with Charlie Campbell about doped rare earth oxides.

Cerium oxides have attracted a lot of industrial attention because they have an amazing ability to reversibly release and uptake oxygen. [Just like hemoglobin in your blood!]. Hence, along with many others I thought this was a fundamental issue about pure cerium oxide. However, it turns out all the industrial materials (such as solid oxide fuel cells) are doped with transition metal ions. So the fundamental problem is the following: mixed alloys of ceria and zirconia (ZrO2) have this large uptake-release capacity; it is much larger than pure zirconia or pure ceria.

This paper [which my Indian colleagues made me aware of when I visited Bangalore earlier this year] examines the corresponding question for titania-ceria alloys. [A paper on zirconia-ceria is here.] They find that in the alloys there is a significant relaxation of the oxygen sublattice. In particular four of the metal-oxygen bonds become much longer, reflecting weak bonding of oxygen.

I wonder whether:

- thinking about a Jahn-Teller distortion could be helpful here?

- there are high-resolution crystal structure data that are amenable to a bond valence sum analysis similar to that performed here.
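For context, a bond valence sum estimates an ion's valence from its measured bond lengths via the Brese-O'Keeffe expression V = sum_i exp((R0 - Ri)/b). A sketch for ideal fluorite CeO2 (the Ce(IV)-O parameter R0 = 2.028 A and the 2.34 A bond length are values I am assuming here; check against tabulated parameters before using):

```python
import math

def bond_valence_sum(bond_lengths, r0, b=0.37):
    """Brese-O'Keeffe bond valence sum: V = sum_i exp((r0 - R_i) / b),
    with lengths in Angstroms and the usual universal b = 0.37 A."""
    return sum(math.exp((r0 - r) / b) for r in bond_lengths)

# Ideal fluorite CeO2: Ce coordinated by 8 oxygens at ~2.34 A (assumed).
v = bond_valence_sum([2.34] * 8, r0=2.028)
print(f"BVS for 8-fold Ce in CeO2: {v:.2f} (formal valence 4)")
```

The point for the alloys is that lengthened metal-oxygen bonds show up directly as a reduced contribution to the sum, quantifying the "weak bonding of oxygen" found in the relaxed structures.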

Sunday, August 8, 2010

I wish I had bought a copy years ago. I knew of the book's existence but assumed it was just another introductory book on quantum chemistry. However, the book is very different. There is significant focus on dynamics, condensed phase effects, and density matrices. Second quantisation is used extensively [something many theoretical chemists are not comfortable with but physicists are]. Much of the material covers things that over the last few years I have struggled to learn myself or to find clearly written down. It has nice problems and chemical applications.

This is a rather definitive study using a high powered perturbative continuous unitary transformation which allows them to derive an effective spin Hamiltonian in the Mott insulating phase.

They find that as U/t decreases there is a first-order phase transition from the 120 degree Neel ordered phase to a spin liquid phase (no net magnetic moment) with large numbers of singlet excitations below the lowest lying triplet excitation. The Figure to the right shows the excitation spectrum as a function of U/t, with red (empty) dots denoting magnetic (non-magnetic) excitations.

The Neel phase is destroyed by higher order spin interactions such as ring exchange which become more important as t/U increases.

They identify this spin liquid state with the "spin Bose metal" proposed by Motrunich.

The first-order transition from the magnetically ordered state to the spin liquid is associated with a small jump in the double site occupancy.

They also predict that the transition to the metallic state does not occur until U/t decreases to about 6-8. Hence, there is a significant range of U/t for which the Mott insulator is a spin liquid.

Sorry. This is about the TV show, The Big Bang Theory. I just flew on Air New Zealand and they showed two episodes I had not seen (we don't have TV) from Season 3. I thought The Creepy Candy Coating was one of the funniest ever. (Although it is disappointing there wasn't any real physics...) Here is one great scene.

Wednesday, August 4, 2010

When is it time to mothball Density functional theory? [sorry for the second pun..]

Yesterday I had a nice meeting with Charlie Campbell and members of his group at the University of Washington. It was fascinating to see their lab (as a theorist it is always a reality check!) . Over the past decade they have developed several high resolution microcalorimeters which allow accurate determination of the binding energies of different atoms and molecules to specific surfaces. These results present a significant challenge/benchmark for electronic structure methods (such as density functional theory) which claim to be able to calculate accurately such quantities. The results are also of fundamental importance for understanding mechanisms of heterogeneous catalysis.

I found the results for adsorption of benzene and naphthalene on Pt particularly interesting. They are summarised in the Figure above. DFT gives a binding energy which is too small by almost a factor of three.

I discovered my friends Jeff Reimers and Noel Hush are co-authors of a paper, Adsorption of Benzene on Copper, Gold, and Silver surfaces, where they do a systematic comparison of DFT with higher level quantum chemistry [complete-active-space self-consistent field theory with second-order Møller−Plesset perturbation corrections (CASPT2) for the interaction of benzene with a Cu13 cluster model of the Cu(110) surface].

A couple of other interesting things they state:

For all systems, the bonding is found to be purely dispersive in nature with minimal covalent character.....

this cluster [of 13 Cu atoms] is actually too reactive and provides a poor chemical model for the system.

A fundamental question arises as to whether DFT gets the wrong results just because it is bad at dispersion forces, or whether there are other strong correlation effects at play. I would have thought the latter: since benzene is strongly correlated and Pt is moderately correlated, there may be more to it than dispersion forces.

Below are the measured adsorption energies of different alkanes on three different surfaces. The straight lines suggest that there is also a well defined binding energy per carbon atom.
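The "binding energy per carbon atom" is just the slope of a least-squares line through such data. A sketch with illustrative numbers (invented to mimic the linear trend in the figure, not the measured values):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical adsorption energies (kJ/mol) versus alkane carbon number,
# chosen only to illustrate the analysis:
n_carbons = [1, 2, 3, 4, 6, 8]
energies = [25, 32, 40, 47, 62, 77]
a, b = linear_fit(n_carbons, energies)
print(f"intercept ~ {a:.1f} kJ/mol, per-carbon increment ~ {b:.1f} kJ/mol")
```

The slope is the well-defined binding energy per carbon atom; a different surface shifts the intercept and slope but preserves the linearity.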

Last year I helped run a reading course PHYS3170 Intermediate Biophysics where we made use of a course blog. Some of my posts related to the course can be seen here.

This course has now morphed into BIPH3001 Frontiers in Biophysics. Again we are running it as a reading course due to low enrolments. The course blog has got off to a good start. Students have to post 2 items per week and comment on other student posts. It is worth looking at to see what is possible with this medium.

Tuesday, August 3, 2010

Here is the current version of the slides for a talk, "Charge redistribution near oxygen vacancies in cerium oxides", that I am giving tomorrow in the Chemistry Department at University of Washington.

The main point of the talk is that the standard model of charge localisation (pictured above) is incorrect. A detailed discussion is contained in a review co-authored with Elvis Shoko and Michael Smith.

VIABLE nuclear fusion has been only 30 years away since the idea was first mooted in the 1950s. Its latest three-decade incarnation is ITER,...

It is worrying how the EU is considering diverting substantial amounts of other scientific funding towards ITER. The article ends

it is far from clear whether the best way of countering this trend in energy funding is to plough yet more money into the fusion project, with its vested political interests, at the expense of less prominent scientific endeavours.

It never ceases to amaze me how these big projects have such large cost over-runs and that the initial budgets were based on wishful thinking. Coincidentally, the same issue of The Economist has an article about the spiralling costs of the London Olympics.

Sunday, August 1, 2010

A recent PRB (and an Editors Choice) by Starykh, Katsura, and Balents makes a similar point in great detail, for the specific case of Cs2CuCl4, which as a first approximation can be modelled by a Heisenberg model on an anisotropic triangular lattice. Here is some of their abstract:

First, we find that when the magnetic field is oriented within the triangular layer, spins are actually most strongly correlated within planes perpendicular to the triangular layers. This is despite the fact that the interlayer exchange coupling in Cs2CuCl4 is about an order of magnitude smaller than the weakest (diagonal) exchange in the triangular planes themselves.

Second, the phase diagram in such orientations is exquisitely sensitive to tiny interactions, heretofore neglected, of order a few percent or less of the largest exchange couplings. These interactions, ....induce entirely new phases

What are the general implications of this?

There is both very good and bad news for theory and modelling such materials.

The good news is there is lots of rich and beautiful physics to understand.

The bad news is that most materials (and their residual interactions) are not as well characterised as Cs2CuCl4, so it is going to be extremely difficult to model and describe such materials in a definitive way.


About Me

I have fun at work trying to use quantum many-body theory to understand electronic properties of complex materials.
I am married to the lovely Robin and have two adult children and a dog, Priya (in the photo). I also write an even more personal blog Soli Deo Gloria [thoughts on theology, science, and culture]


Disclaimer

Although I am employed by the University of Queensland and funded by the Australian Research Council all views expressed on this blog are solely my own. They do not reflect the views of any present or past employers, funding agencies, colleagues, organisations, family members, churches, insurance companies, or lawyers I currently have or in the past have had some affiliation with.

I make no money from this blog. Any book or product endorsements will be based solely on my enthusiasm for the product. If I am reviewing a copy of a book and I have received a complimentary copy from the publisher I will state that in the review.