No inconsistency has been proved. It is simply unknown whether QED or the standard model is consistent. Settling this might have important consequences for the methods to squeeze out predictions from QFT, and hence is at least as important as finding a unified theory of the standard model plus gravity - which is unlikely to have any significant experimental consequences, as you say yourself:

So far, all observations concerning gravity are in accord with GR, but that's likely to be the case, because all we can observe concerning gravity is about its action on macroscopic systems, and there the classical theory is very accurate (in close analogy to our everyday experience that classical electrodynamics/optics is very accurate although here we know QED as the underlying quantum theory). So it's very hard to find specific phenomena where and how GR (or maybe some other classical field theory describing gravitation better, although I don't know of any clear empirical hint that this might be the case) has to be joined with QT.

I'm glad that you think so. But I would like to stress that I arrived at those ideas by starting from philosophical questions. The point is that philosophy may lead to something that is more than philosophy, so one should not ignore ideas which at first look like mere philosophy.

What is the nature of this indeterminacy you mention? Is it that the system is physically/ontologically smeared across its possible values, so the value exists in an unsharp way? Is it that the value is sharply in an eigenstate, and we just can't say which until measurement? Or is the value truly non-existent until it is measured?

What do you mean by this question? If somebody starts a question with "what's the nature/mechanism...", usually he or she has a conceptual misunderstanding of what the natural sciences are methodologically aiming at. First of all they are empirical sciences, i.e., a phenomenon of nature is investigated by making quantitative observations about it. If the phenomenon is reproducible and shows a clear regularity, one can compare it with the predictions of existing models/theories, as far as one is able to apply the formal, mathematical structure of the model/theory to it (and that's all you need to make a mathematical system a physical theory, and that's it concerning interpretation as far as natural science is concerned). If the observation then agrees, within the accuracy of the observation, you have found "an explanation" in the sense that you can understand the phenomenon in terms of the existing models/theories.

In some sense, that's the "boring" case, because then we haven't learnt anything new about nature. So an experiment and its theoretical analysis become interesting if the observation deviates from what's expected from the existing theories. Then usually a careful reexamination of the experimental setup starts, and one might figure out errors in this setup, or one tries variations of the measurement to see whether everything is reproducible with other methods. If the deviation from theory withstands all this careful testing, a new model/theory is needed and is then developed, with the caveat that the new theory has to work in all the cases where the old theory worked before. If this is established, one can usually understand both the success of the old theory and its failure in "explaining" the new phenomenon, in the sense that the old theory can be understood as an approximation of the new theory, valid only in a certain realm and not applicable where the old theory failed.

This is of course a quite complicated mutual process, i.e., theories not only "explain" phenomena in the above-summarized sense, or have to be modified due to a newly discovered phenomenon where they fail, but they also provide ideas for more experiments. E.g., quantum theory all of a sudden made it interesting to investigate how electrons behave when shot at a double slit. Before, when everybody thought electrons were just little classical particles, this experiment might never have been realized, because nobody would have thought that it might lead to interesting new phenomena. With de Broglie's hypothesis of "wave-particle duality" this became interesting, and indeed it's among the most interesting early experiments leading to the development of modern quantum mechanics, which finally made the quite inconsistent idea of "wave-particle duality" obsolete and led to the probabilistic standard interpretation.

As with any mathematical model, physical theories also start with some postulates, which cannot be reduced further to "simpler" postulates. Here, of course, simplicity is a subjective idea, but what theoretical physicists aim at is a minimal set of fundamental postulates for a theory, from which as many phenomena as possible can be "explained" in the above sense. The "fundamental postulates" themselves cannot be "explained" in this sense, but are just compact summaries of the results of sometimes decades-long efforts of experimentalists and theorists.

I don't think that there's anything wrong with QT in the realm where we really need it to describe the phenomena. It does not consistently describe the gravitational interactions, and that's imho the only clear physical hint of a failure of QT, or rather of its incompleteness.

I'm not sure I understand your point here. There are no gravitational interactions according to GR. Inertia just bends spacetime. GR describes that. Its predictions agree with observations. What else do you want? Any aesthetic considerations should not be relevant to physics, right?

No inconsistency has been proved. It is simply unknown whether QED or the standard model is consistent. Settling this might have important consequences for the methods to squeeze out predictions from QFT, and hence is at least as important as finding a unified theory of the standard model plus gravity - which is unlikely to have any significant experimental consequences, as you say yourself:

Well, I have some hope that solving the problem of a QT description of gravity might also solve the mathematical problems of QFT as a whole. E.g., for some decades the (failed) hope was that string theory and its various relatives might solve both the problem of quantum gravity and provide a mathematically consistent description of QFT, with the standard model as an approximation in the sense of an "effective theory". As such the standard model works pretty well, one could even say too well, because no clearly contradicting observation has been made yet, despite its mathematical inconsistency.

Which preparation procedure determines the quantum state of the solar system? How is the result of a measurement of some observables (say the mass of the Sun and the major planets) of this quantum state described from first principles (assuming Newtonian gravity, which is fully adequate for this situation)?

The solar system is not coupled to an external measurement device as in the usual analysis of measurements; the measurement is done from within. Without an explanation of how this works, even ordinary quantum mechanics is an incompletely understood (and indeed incomplete) theory.

The preparation procedure of the solar system is not too well known, but the common idea is that stars and planets etc. around them form out of clouds of some material which is denser at some location than on average, and then gravity does its job. All this is of course well described by classical (even Newtonian) physics. A complete microscopic description is neither possible nor necessary.

A simpler example is the coffee in the cup on my desk, which I prepared just some minutes ago. It's pretty well described as a system in local thermal equilibrium, slowly equilibrating further until it finally reaches the temperature of my office, which is also pretty well described as being in local thermal equilibrium, providing a "heat bath" within which the coffee sits and exchanges energy and water molecules. I.e., it cries out to be described as a grand canonical ensemble close to thermal equilibrium, and thus with classical physics like (viscous) hydrodynamics with heat conduction, etc.

So preparation procedures need not be very "artificial" as, e.g., at a collider like the LHC, where proton bunches are accelerated and kept with high precision at a certain energy to get something accurately prepared for experiments in which much more microscopic detail is resolved. Already substituting Pb nuclei for the protons and performing heavy-ion collisions changes this completely, and again there's no better way to understand what's going on than to use (semi-)classical methods like kinetic theory, relativistic viscous hydro, Langevin processes, etc. to describe the system. Even in p-Pb and pp collisions one observes quite some "collectivity", at least in "high-multiplicity events".

It's a gift and a curse of nature at once that macroscopic, or even "mesoscopic", systems of some thousands of particles tend to behave according to classical or semi-classical models. It's a gift, because we have the chance to understand more by describing these systems approximately with simpler models, but also a curse, because we cannot so easily observe the (maybe) interesting quantum phenomena we are after.

The preparation procedure of the solar system is not too well known, but the common idea is that stars and planets etc. around them form out of clouds of some material which is denser at some location than on average, and then gravity does its job. All this is of course well described by classical (even Newtonian) physics. A complete microscopic description is neither possible nor necessary.

But to say that a complete microscopic description is not possible is to say that quantum theory does not apply to the solar system as a whole. If quantum theory is universally valid, a complete microscopic quantum description should therefore be possible in principle, even though we may never know the exact details. And such a microscopic quantum description would represent a single system only, which cannot be interpreted by a purely statistical interpretation. This is the reason why interpretation questions are still open, and why they may shed light on what is missing for a fundamental description of all of Nature.

I have the opposite hope: that solving the mathematical problems of QFT might also solve the problem of a QT description of gravity. Mathematical problems always point to a lack of theoretical understanding, and progress comes through fixing these conceptual issues.

But to say that a complete microscopic description is not possible is to say that quantum theory does not apply to the solar system as a whole. If quantum theory is universally valid, a complete microscopic quantum description should therefore be possible in principle, even though we may never know the exact details. And such a microscopic quantum description would represent a single system only, which cannot be interpreted by a purely statistical interpretation. This is the reason why interpretation questions are still open, and why they may shed light on what is missing for a fundamental description of all of Nature.

Well, why do you say it's not possible in principle? The single system is then described in terms of probabilities. The more I think about it, the less I see anything problematic in this. Even on the classical level, the many-body system is described in terms of statistical physics and thus with probabilities, including fluctuations and all that, and that classical description can be derived, at least to some extent, from quantum many-body theory.

I'm glad that you think so. But I would like to stress that I arrived at those ideas by starting from philosophical questions. The point is that philosophy may lead to something that is more than philosophy, so one should not ignore ideas which at first look like mere philosophy.

I'm not sure I understand your point here. There are no gravitational interactions according to GR. Inertia just bends spacetime. GR describes that. Its predictions agree with observations. What else do you want? Any aesthetic considerations should not be relevant to physics, right?

Well, GR has built in its own failure, namely the unavoidable singularities of all physically relevant solutions. When it comes to the singularities of cosmology (the "big bang") and of black holes (the Schwarzschild and Kerr solutions describing very compact objects), the physical laws of GR break down, and it's expected that an appropriate quantum treatment "cures" these deficiencies of the classical theory.

Further, there's nothing in GR that forbids thinking about gravity as an interaction. The geometrization can even be derived from this ansatz (e.g., due to Weinberg; see also Feynman's book "The Feynman Lectures on Gravitation", which is a brilliant textbook on the subject).

At the same time, there are several reasons pilot wave theory is not entirely convincing as a true theory of nature. One is the empty ghost branches, which are parts of the wave function which have flowed far (in the configuration space) from where the particle is and so likely will never again play a role in guiding the particle. These proliferate as a consequence of Rule 1, but play no role in explaining anything we’ve actually observed in nature. Because the wave function never collapses, we are stuck with a world full of ghost branches. There is one distinguished branch, which is the one guiding the particle, which we may call the occupied branch. Nonetheless, the unoccupied ghost branches are also real. The wave function of which they are branches is a beable.

The ghost branches of pilot wave theory are the same as the branches in the Many Worlds Interpretation. In both cases they are a consequence of having only Rule 1. Unlike the Many Worlds Interpretation, pilot wave theory requires no exotic ontology in terms of many universes, or a splitting of observers, because there is always a single occupied branch where the particle resides. So there is no problem of principle, nor is there a problem of defining what we mean by probabilities. But if one finds it inelegant to have every possible history of the world represented as an actuality, that sin is common to Many Worlds and pilot wave theory.

(Note: Rule 1 is simply "Given the quantum state of an isolated system at one time, there is a law that will predict the precise quantum state of that system at any other time." Smolin calls this law Rule 1: "It is also sometimes called the Schrödinger equation. The principle that there is such a law is called unitarity.")
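Rule 1 can be made concrete numerically: Schrödinger evolution is unitary, so the total probability carried by any state is conserved. A minimal sketch (the 2x2 Hamiltonian here is a made-up example, not anything from the thread; units with hbar = 1):

```python
import numpy as np

# Hypothetical 2x2 Hermitian "Hamiltonian" (hbar = 1); the numbers are made up
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
t = 0.7

# Build the evolution operator U = exp(-i H t) from the eigendecomposition of H;
# for Hermitian H this U is automatically unitary
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state, norm 1
psi_t = U @ psi0                            # state at time t

# Unitarity: the norm (total probability) is the same before and after
print(np.linalg.norm(psi0), np.linalg.norm(psi_t))
```

Any Hermitian H yields a unitary U this way; that norm conservation is the formal content of "unitarity" in Smolin's Rule 1.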

Why didn't Demystifier worry about the ghost branches? Are there many kinds of Bohmians with regard to how they treat the wave function? How do Demystifier and other Bohmians treat it compared to Smolin?

But the quote is obviously wrong, because we very well can use quantum theory to describe real-world experiments, and there's the notion of the state, described by the statistical operator; its deterministic (!) time evolution, given the Hamiltonian of the system; and its probabilistic interpretation.

I think that what David Wallace was saying is obviously right. He didn't say that we can't use quantum theory. He was saying that in practice we treat macroscopic quantities differently from microscopic ones, in an ad hoc way. That's true. It doesn't make quantum theory useless, but it makes it "softly inconsistent", to use my own phrase.

The state is determined by a preparation procedure, and it implies that not all observables of the system take determinate values; a measurement of these observables gives random results with probabilities given by the state. There's no contradiction in the sense of logic.

Well, GR has built in its own failure, namely the unavoidable singularities of all physically relevant solutions. When it comes to the singularities of cosmology ("big bang") and black holes ("Schwarzschild, Kerr" of very compact objects), the physical laws of GR break down, and it's expected that an appropriate quantum treatment "cures" these deficiencies of the classical theory.

Further, there's nothing in GR that forbids to think about gravity as an interaction. The geometrization even can be derived from this ansatz (e.g., due to Weinberg; see also Feynman's book, "The Feynman lectures on gravitation", which is a brillant textbook on the subject).

Of course, just because there is one valid theory does not mean there can't be other theories describing the same phenomena. But we have a theory that correctly predicts observable phenomena. So why bother?

What do you mean by this question? If somebody starts a question with "what's the nature/mechanism...", usually he or she has a conceptual misunderstanding of what the natural sciences are methodologically aiming at.

I am asking if you think the wavefunction of, say, an electron, describes 1) a new sort of spatially extended object, 2) a 0D classical point whose position is simply unknown, or 3) nothing but the probability of a classical detector click. I believe from your other comments your view is 3, in which case the measurement problem is akin to wondering: where did all these classical detectors come from in the first place? Why do we say they are made of electrons, if electrons on the quantum scale have no purchase as objectively existing objects?

My sense is your answer is going to be "who cares, the theory works." That's fine, but don't confuse not being interested in a problem with whether others are wrong to identify the problem as legitimate. Math works very well in practice, but Gödel's incompleteness theorem is still an issue to contend with. The MP is similar in form.

I think that what David Wallace was saying is obviously right. He didn't say that we can't use quantum theory. He was saying that in practice we treat macroscopic quantities different from microscopic in an ad-hoc way. That's true. It doesn't make quantum theory useless, but it makes it "softly inconsistent", to use my own phrase.

I think it is a contradiction. To me, the following two claims are just logically inconsistent (together with the rest of the quantum formalism):

A measurement always produces an eigenvalue of the quantity being measured.

Measurement devices and observers are themselves described by quantum mechanics.

Why is there a contradiction? Is there any empirical evidence that 1. is wrong? If that were the case, there'd be a big crisis of QT as a whole, and every theorist would struggle to find an alternative theory ;-).

Concerning 2., it's to some degree a matter of taste whether you accept the standard quantum-statistical arguments as a "description" of the measurement devices or not. So again, about this you can fight eternally without coming to any conclusion either.

That there's any necessity to also describe us as quantum systems to solve the "measurement problem" is somewhat exotic to me, because there's really no direct interaction between us and the measured system (except in the case that you consider our own senses as measurement devices for quantum systems, like the very interesting possibility of using our eyes directly as single-photon detectors, which seems to be possible in principle according to new studies on the subject).

These hypothetical singularities cannot be observed. Why should we care about them?

Of course, just because there is one valid theory does not mean there can't be other theories describing the same phenomena. But we have a theory that correctly predicts observable phenomena. So why bother?

What other theory are you talking about? GR is GR, no matter whether you describe gravity as an interaction or insist on the quite common interpretation that it's entirely a kinematical effect of curved spacetime. For me the interpretation of gravity as an interaction like all the other fundamental interactions (i.e., the electroweak and strong interactions) has some attraction, because it simplifies and unifies the picture, but that's again just a matter of personal taste, of little importance in the sense of science.

I am asking if you think the wavefunction of, say, an electron, describes 1) a new sort of spatially extended object, 2) a 0D classical point whose position is simply unknown, or 3) nothing but the probability of a classical detector click. I believe from your other comments your view is 3, in which case the measurement problem is akin to wondering: where did all these classical detectors come from in the first place? Why do we say they are made of electrons, if electrons on the quantum scale have no purchase as objectively existing objects?

My sense is your answer is going to be "who cares, the theory works." That's fine, but don't confuse not being interested in a problem with whether others are wrong to identify the problem as legitimate. Math works very well in practice, but Gödel's incompleteness theorem is still an issue to contend with. The MP is similar in form.

The wave function describes probabilities for measurement results, no more, no less. It's wrong to say an electron is the wave function (all the more since at the most fundamental level wave functions do not make much sense but are themselves quantized). So indeed I think 3) describes my point of view best.

The classical detectors come from the physicists' curiosity to learn more about nature. That's why, with some effort, they build ever better ones (and that's often pretty expensive, and we can count ourselves lucky to get them financed by taxpayers' money).

That matter around us, and thus also measurement devices, are made of atomic nuclei and electrons is the conclusion from the fact that we understand their properties very well as many-body systems with atomic nuclei and electrons as the relevant degrees of freedom. That's also known as condensed-matter physics, a very successful application of quantum (field) theory. It's so successful that we have all the funny gadgets like the laptop I'm writing this text on, and it lets us construct ever better measurement devices for all kinds of measurements on quantum systems, down to the most fundamental building blocks as far as we know them, perhaps one day helping us to find even new ones.

The wave function describes probabilities for measurement results, no more, no less. It's wrong to say an electron is the wave function (all the more since at the most fundamental level wave functions do not make much sense but are themselves quantized). So indeed I think 3) describes my point of view best.

That matter around us, and thus also measurement devices, are made of atomic nuclei and electrons is the conclusion from the fact that we understand their properties very well as many-body systems with atomic nuclei and electrons as the relevant degrees of freedom.

So you claim free electrons do not exist - neither as an extended object nor as a classical point, as in options 1 and 2 in post #42. But you also claim electrons suddenly do start to exist when composing macroscopic, many-body systems.

Why is there a contradiction? Is there any empirical evidence that 1. is wrong? If that were the case, there'd be a big crises of QT as a whole, and every theorist would struggle to find an alternative theory ;-).

For a while, maybe. But if no satisfactory alternative theory was found, many of them would shift to pretending that the evidence doesn't REALLY show what it seems to show. In other words, many theorists would just live in denial about it. And that's exactly what we do have.

Concerning 2., it's to some degree a matter of taste whether you accept the standard quantum-statistical arguments as a "description" of the measurement devices or not. So again, about this you can fight eternally without coming to any conclusion either.

The reason there is eternal fighting about it is because 1 & 2 are contradictory, and there is no agreement about how to fix it. On the other hand, there is a "rule of thumb" that allows us to get past the contradiction, which is that we have heuristics for when to treat a system as a measurement device obeying 1, and when to treat it as a quantum mechanical system. You can't consistently do both.

Suppose that you have a Stern-Gerlach type situation in which an electron that is spin-up in the z-direction is deflected left, and makes a black spot on the left side of a photographic plate. An electron that is spin-down makes a black spot on the right side.

Now suppose that we set up initial conditions that are precisely left-right symmetric. We send an electron through the Stern-Gerlach device that is spin-up in the x-direction.

According to rule 1, eventually the system evolves into a final state that is not left-right symmetric. Either the electron goes left and makes a spot on the left, or goes right and makes a spot on the right. So the final state does not satisfy the left-right symmetry of the initial state.

That is not possible if everything obeys the laws of quantum mechanics. If the initial state is left-right symmetric, and the Hamiltonian is similarly left-right symmetric, then the final state will be left-right symmetric.
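The symmetry argument can be stated compactly. If the parity (left-right reflection) operator $\hat{\Pi}$ commutes with the Hamiltonian, it also commutes with the unitary time evolution,
$$[\hat{\Pi},\hat{H}]=0 \;\Rightarrow\; [\hat{\Pi},\hat{U}(t)]=0, \qquad \hat{U}(t)=\exp(-\mathrm{i}\hat{H}t/\hbar),$$
so from a symmetric initial state $\hat{\Pi}|\psi(0)\rangle=|\psi(0)\rangle$ it follows that
$$\hat{\Pi}|\psi(t)\rangle=\hat{\Pi}\hat{U}(t)|\psi(0)\rangle=\hat{U}(t)\hat{\Pi}|\psi(0)\rangle=|\psi(t)\rangle,$$
i.e., unitary evolution alone can never break the symmetry of the state.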

I think, as with the question about a consistent QT of gravity, we'd need some empirical evidence clearly indicating that there's a real problem: an unambiguously observed phenomenon which contradicts QT.

Your quote was specifically about QT of gravity (I think), but I'd like to say that in my opinion the best thing that could happen to quantum physics overall is empirical evidence that contradicts QT. If that happened, things might start to go in new, interesting directions.

So you claim free electrons do not exist - neither as an extended object nor as a classical point, as in options 1 and 2 in post #42. But you also claim electrons suddenly do start to exist when composing macroscopic, many-body systems.

I do not claim that free electrons do not exist. Where do you get this from? Of course free electrons exist. They are, of course, neither well described as classical point particles (there's no consistent description of a classical point particle anyway) nor as classical extended objects. That's why we use Q(F)T to describe them. According to the standard model they are described as charged leptons (spin-1/2 Dirac fermions), thus carrying electric and weak-isospin charge but no color charge.

Measurement devices are composed of atomic nuclei (protons and neutrons, which are themselves bound states of quarks and gluons) and electrons. That's what you asked about, not the existence or non-existence of free electrons. Why should my statement above imply such obvious nonsense as the non-existence of free electrons?

For a while, maybe. But if no satisfactory alternative theory was found, many of them would shift to pretending that the evidence doesn't REALLY show what it seems to show. In other words, many theorists would just live in denial about it. And that's exactly what we do have.

The reason there is eternal fighting about it is because 1 & 2 are contradictory, and there is no agreement about how to fix it. On the other hand, there is a "rule of thumb" that allows us to get past the contradiction, which is that we have heuristics for when to treat a system as a measurement device obeying 1, and when to treat it as a quantum mechanical system. You can't consistently do both.

Suppose that you have a Stern-Gerlach type situation in which an electron that is spin-up in the z-direction is deflected left, and makes a black spot on the left side of a photographic plate. An electron that is spin-down makes a black spot on the right side.

Now suppose that we set up initial conditions that are precisely left-right symmetric. We send an electron through the Stern-Gerlach device that is spin-up in the x-direction.

According to rule 1, eventually the system evolves into a final state that is not left-right symmetric. Either the electron goes left and makes a spot on the left, or goes right and makes a spot on the right. So the final state does not satisfy the left-right symmetry of the initial state.

That is not possible if everything obeys the laws of quantum mechanics. If the initial state is left-right symmetric, and the Hamiltonian is similarly left-right symmetric, then the final state will be left-right symmetric.

Obviously I'm too stupid to understand why there's a contradiction between 1 and 2. In particular, your example of an apparent paradox is completely incomprehensible to me, because it contradicts the standard interpretation of QT. Of course, as Bohr already analyzed, you cannot perform the SG experiment with free electrons in practice. So let me put an Ag atom in place of the electron (because that was the atom Stern and Gerlach used at the time).

If you send an Ag atom with indeterminate spin-z component through a Stern-Gerlach apparatus, it is deflected, with the probabilities given by the corresponding (pure or mixed) state, either to the left or to the right. That's the reason why Born introduced the probability interpretation of the quantum state in the first place: you cannot split a single Ag atom into pieces. You never find a smeared classical-field-like entity distributed according to the wave function squared; you always find a single spot on the screen after the atom has run through the SG magnet. It ends up either "to the left" or "to the right", and thus, through the entanglement between position and spin-z component established by the run through the magnet, we conclude that the spin component is either up or down, depending on where the atom landed. You can even prepare pure spin-z eigenstates by just keeping the corresponding partial beam (filtering away all atoms running into the other region of space). I.e., here you have a paradigmatic example of a von Neumann filter measurement. Of course, for the single atom there's no way to predict which value will be found. You can only say that it will be found with the probabilities given by the state the Ag atom is prepared in.

In the case of the original experiment it was a beam of Ag atoms from an oven running through a little opening, so it's some mixed state roughly given by (written as a product of the spatial/momentum part and the spin part)
$$\hat{\rho} \propto \int_{\text{aperture}} \mathrm{d}^3 p \exp[-\vec{p}^2/(2m k T)] |\vec{p} \rangle \langle \vec{p}| \otimes \hat{1}_{\text{spin}}/2.$$
For each Ag atom the probability for either spin-z component, up or down, is 1/2, i.e., the symmetric situation you assume. Of course, each single atom will go in either the one or the other direction, and for the single atom the situation is not symmetric. Only the probability distribution is symmetric, and that's also what's predicted by QM. This can even be calculated in good approximation analytically (I've still not found the time to write this up completely, but it's really not too complicated).

The final state of the Ag atom is a quantum state again and only describes probabilities, and this distribution obeys the left-right symmetry you rightly assume.

In experimental terms: What's symmetric is the distribution of many Ag atoms, all prepared in the same initial state, running through the setup. Each single atom "breaks" the symmetry of course, landing either "left" or "right" from the symmetry plane.

It's the same with any random experiment. Although a perfectly fair die is symmetric, any single outcome shows one of its six faces and thus breaks the cubic symmetry. Only "statistically", i.e., in the "average" over many outcomes, is the situation symmetric: the probability for each outcome is the same, 1/6.