Specifically, Judge Wells wrote, “I agree with a quote by John Allen Paulos, a professor of mathematics at Temple University, when he wrote that, ‘the margin of error in this election is far greater than the margin of victory, no matter who wins.’ Further judicial process will not change this self-evident fact and will only result in confusion and disorder.” ...

I believed and still believe that the statistical tie in the Florida election supported a conclusion opposite to the one Wells drew. The tie seemed to lend greater weight to the fact that Al Gore received almost half a million more popular votes nationally than did Bush. If anything, the dead heat in Florida could be seen as giving Gore’s national plurality the status of a moral tiebreaker. At the very least the decision of the rest of the court to allow for a manual recount should have been honored since Florida’s vote was pivotal in the Electoral College. Even flipping a commemorative Gore-Bush coin in the capitol in Tallahassee would have been justified since the vote totals were essentially indistinguishable.

Historical counterfactuals are always dubious undertakings, but I doubt very much that the United States would have gone to war in Iraq had Gore been president. I also think strong environmental legislation would have been pursued and implemented under him.

This guy is giving mathematicians a bad name.

His argument is essentially this: The Bush-Gore Florida ballot count was like tossing a coin. So we should have kept tossing the coin until it came up for Gore, because he would have advanced the cause of global warming politics, or some such nonsense.

Or maybe some judges should have decided to use the dispute as an excuse to invalidate the Electoral College.

Florida did have a manual recount, in accordance with Florida statutes. The Gore argument was that certain select counties should be recounted in secret by judges' clerks until there was a more favorable outcome.

The US Supreme Court ultimately decided that Florida law had to be followed, and the court could not keep inventing new recount procedures.

Saying that the Florida count was like a coin toss should mean we should stick to the procedure that was agreed to before the election. It does not mean to keep tossing the coin.

Accepting Paulos's procedure would be to tell citizens that their individual votes do not count. If an election appeared to be decided by one vote, then he would declare the vote to be statistically insignificant, and suggest flipping a coin instead.

Besides being fundamentally anti-democratic, this would create all sorts of new problems. Who gets to decide what is statistically insignificant? And what if a vote difference misses that cutoff by an amount that is statistically insignificant?

And Paulos suggests that Florida decide its electoral votes by how the rest of the country voted? That is crazy.

That is, I think it is crazy. There is actually some serious support for the National Popular Vote Interstate Compact, under which most of the states would decide their electoral votes based on what the other states do. Maybe Paulos would like it.

Saturday, March 26, 2016

Tom: Hi, Bill. Tom, from Western Australia. If quantum entanglement or quantum spookiness can allow us to transmit information instantaneously, that is faster than the speed of light, how do you think this could, dare I say it, change the world?

Bill Nye: Tom, I love you man. Thanks for the tip of the hat there, the turn of phrase. Will quantum entanglement change the world? If this turns out to be a real thing, well, or if we can take advantage of it, it seems to me the first thing that will change is computing. We’ll be able to make computers that work extraordinarily fast. But it carries with it, for me, this belief that we’ll be able to go back in time; that we’ll be able to harness energy somehow from black holes and other astrophysical phenomenon that we observe in the cosmos but not so readily here on earth. We’ll see. Tom, in Western Australia, maybe you’ll be the physicist that figures quantum entanglement out at its next level and create practical applications. But for now, I’m not counting on it to change the world.

I am not going to pile on. Nye gives dopey explanations of everything.

Furthermore, this is a reasonable layman's interpretation of the usual physics hype on quantum computers, entanglement, and the black hole paradox. The physicists throw around some impressive buzzwords, but Nye is right that none of this stuff is going to change the world.

You could say that quantum entanglement has already changed the world. But they are not talking about the concepts that were developed in 1925-30. They are talking about cats that are alive and dead, spooky action at a distance, experiments that reverse cause and effect, quantum gates that are simultaneously on and off, splittings into parallel universes, and info that leaks out of black holes.

We cannot expect Nye to see through all this when our leading physicists keep reciting this nonsense. At least he understands that it is not changing the world.

Tuesday, March 22, 2016

I am not a positivist. Positivism states that what cannot be observed does not exist. This conception is scientifically indefensible, for it is impossible to make valid affirmations of what people ‘can’ or ‘cannot’ observe. One would have to say that ‘only what we observe exists’, which is obviously false.

Positivism does not say "what cannot be observed does not exist." Positivism does not try to describe things that are unobservable or nonexistent. Those are metaphysical issues that positivism was invented to avoid.

Einstein's biggest influence was to turn theoretical physicists from positivists to anti-positivists. The anti-positivists are always making pronouncements about things that cannot be observed or tested. It is the modern equivalent of arguing about "How many angels can dance on the head of a pin?".

Science is all about explaining what can be observed or tested. Once you start debating the existence of things that can never be observed anyway, you have left science. Such questions are meaningless, in the sense that no one can prove you right or wrong.

The positivist conception is the core of science, so of course it is scientifically defensible. Einstein's view is not. We can make all sorts of valid affirmations about what we can or cannot observe. Most of science is about what we can observe. There are also lots of examples of what we cannot observe, with famous examples being Heisenberg uncertainty (cannot simultaneously observe position and momentum of an electron precisely), and black holes (cannot observe any dynamics inside the event horizon). A simpler example is that you can burn a letter to make it unreadable.

Special relativity is the main reason Einstein is considered a great genius, and the main reason he gets credit is that he denied the existence of the aether. He did not really deny the aether any more than Lorentz did, but people think he did.

Poincare had the more correct posture. He said that the aether was unobservable, but it is convenient for some purposes.

So Einstein is more famous because he said that the aether does not exist, but he later denied that it is possible to ever say that something is or is not observable!

I would have thought that Einstein's bad philosophy would be abandoned by now, but the opposite is true. More and more, physicists insist on making big claims about parallel universes, strings, and other unobservables.

Tuesday, March 15, 2016

Certain calculations could take even the most advanced supercomputers billions of years to solve. But quantum computers, says Microsoft researcher Krysta Svore, might be able to dispatch what she calls “lifetime-of-the-universe” problems in a matter of hours or days.

She implied that Microsoft would be solving these problems for the public in its cloud services in 5 or 10 years.

I have repeatedly posted skepticism about the achievability of quantum computers, and how the supposed progress on problems like 3x5=15 is no progress at all.

But even if Microsoft successfully builds a quantum computer, it would have no practical value to the public. The main theoretical advantage is factoring large numbers and breaking secure internet connections.

That would not be good for society. It would be a disaster. All of the billions of computers in the world would have to be re-engineered to use different cryptographic methods that are slower, more expensive, less reliable, and are of more dubious security. Only thieves and criminals would be buying those Microsoft cloud services.

Some quantum computer advocates will argue that a quantum computer has other uses, such as doing unindexed database searches, and simulating chemical reactions. While there is some theoretical complexity advantage in artificial scenarios, there is no practical advantage for any real-world problem that anyone has found. Databases would still be better searched by indexing them and using conventional computers. Likewise, there are methods for simulating chemical reactions that are far better than using a quantum computer.

She explained quantum computers as the ultimate parallel processors because they can process qubits as being 0 and 1 at the same time, thereby giving an exponential speedup. Scott Aaronson posts that this view is inaccurate, and leads to an inflated view of what quantum computers are all about. He is writing a whole book on this subject, but I doubt that it will slow down the quantum computer hype machine.

Science Friday also had this, in celebration of Pi Day, 3-14 (yesterday):

Mathematician James Grime of the YouTube channel Numberphile has determined that 39 digits of pi—3.14159265358979323846264338327950288420 — would suffice to calculate the circumference of the known universe to the width of a hydrogen atom.

No, this is nonsense, and Grime gives mathematicians a bad name.

The radius of the known universe is only known to about one significant figure, so using Pi=3 is sufficient for any such calculation. Furthermore, the universe is not Euclidean on that scale, so Pi is not the circumference/diameter ratio.
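Here is a rough version of the arithmetic behind such claims, with figures that are my own assumptions (observable universe diameter about 8.8e26 m, hydrogen atom width about 1e-10 m). The point stands either way: a few dozen digits, not thousands.

```python
import math

# Back-of-the-envelope figures (my assumptions, not Grime's):
# diameter of the observable universe ~ 8.8e26 m,
# width of a hydrogen atom ~ 1e-10 m.
diameter = 8.8e26
atom = 1e-10

# An error of delta in the value of pi produces an error of about
# delta * diameter in the computed circumference. We want that error
# to be smaller than one atom width.
delta = atom / diameter
digits = math.ceil(-math.log10(delta))
print(digits)  # 37: a few dozen decimal digits of pi already suffice
```

Of course, since the radius itself is uncertain to a factor of 2 or so, the extra digits of pi buy nothing.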

I do not know any physical situation where more than about 6 decimal places are needed. I would be very surprised if more than 10 were ever needed.

Monday, March 14, 2016

Why special relativity should not be a template for a fundamental reformulation of quantum mechanics ...

It is argued that the methodology of Einstein’s 1905 theory represents a victory of pragmatism over explanatory depth; and that its adoption only made sense in the context of the chaotic state of physics at the start of the 20th century — as Einstein well knew.

Whenever someone discusses the history or foundations of relativity, they either tell the Einstein story as if he did it all, or they acknowledge that others had all the parts to the theory before Einstein, but go out of their way to argue that some subtle nuance means that Einstein should get all the credit. This paper is no exception.

Einstein is famous for claiming in 1905, on the basis of his relativity principle, that all laws of physics, including those of electrodynamics, take the same form in all inertial reference frames, so happily Maxwell’s equations can be used just as well in the moving laboratory frame. But this conclusion, or something very close to it, had already been anticipated by several great ether theorists, including H. A. Lorentz, Joseph Larmor and particularly Henri Poincaré. This was largely because there had been from the middle of the nineteenth century all the way to 1905 a series of experiments involving optical and electromagnetic effects that failed to show any sign of the ether wind rushing through the laboratory: it was indeed as if the earth was always at rest relative to the ether. (The most famous of these, and the most surprising, was of course the 1887 Michelson-Morley experiment.) Like the above-mentioned ether theorists, Einstein realized that the covariance of Maxwell’s equations — the form invariance of the equations — is achieved when the relevant coordinate transformations take a very special form, but Einstein was unique in his understanding that these transformations, properly understood, encode new predictions as to the behaviour of rigid bodies and clocks in motion. That is why, in Einstein’s mind, a new understanding of space and time themselves was in the offing.

Both the mathematical form of the transformations, and at least the nonclassical distortion of moving rigid bodies were already known to Lorentz, Larmor and Poincaré — indeed a family of possible deformation effects was originally suggested independently by Lorentz and G. F. FitzGerald to explain the Michelson-Morley result. It was the connection between them, i.e. between the coordinate transformations and motion-induced deformation, that had not been fully appreciated before Einstein. In the first (“kinematical”) part of his 1905 relativity paper, Einstein established the operational meaning of the so-called Lorentz coordinate transformations and showed that they lead not just to a special case of FitzGerald-Lorentz deformation (longitudinal contraction), but also to the “slowing down” of clocks in motion — the phenomenon of time dilation. Now it is still not well known that Larmor and Lorentz had come tantalizingly close to predicting this phenomenon; they had independently seen just before the turn of the century how it must hold in certain very special cases. But as a general effect that does not depend on the constitution of a clock, its discovery was Einstein’s own.

I don't know how they can say that it was Einstein's own discovery that time dilation does not depend on the constitution of a clock. Here is Einstein's famous 1905 paper:

Thence we conclude that a balance-clock7 at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.

[footnote] 7. Not a pendulum-clock, which is physically a system to which the Earth belongs. This case had to be excluded.

There is no such nonsense in the papers of Lorentz and Poincare, as they applied the time transformation to everything. Poincare was one of the nominators for Lorentz's 1902 Nobel Prize, and specifically praised his ingenious application of local time to all clocks.

No, Einstein was not unique in his understanding that the Lorentz transformations encode predictions about rigid bodies and clocks. Lorentz and Poincare were adamant that they explain the Michelson-Morley experiment, and Poincare went so far as to apply them to planetary orbits. Einstein did not even claim that the scope of his theory went beyond Lorentz's, and never objected to those calling it the Lorentz-Einstein theory. Poincare and Minkowski explicitly claimed to have a new theory of space and time, but not Einstein. (Einstein comes closest here.)

The paper does rebut the idea that Einstein's theory was superior because it was kinematical, while Lorentz's was dynamical.

Second, consider the criticism Abraham Pais made of H. A. Lorentz in his acclaimed 1982 biography of Einstein: “Lorentz never fully made the transition from the old dynamics to the new kinematics.” As late as 1915 Lorentz thought that the relativistic contraction of bodies in motion can be explained if the known property of distortion of the electrostatic field surrounding a moving charge is supposed to obtain for all the other forces that are involved in the cohesion of matter. In other words, Lorentz viewed such kinematical effects as length contraction as having a dynamical origin, and it is this notion that Pais found reprehensible. Yet, when Einstein appeals to the nature of rods and clocks as “moving atomic configurations”, it seems that not even he ever fully accepted the distinction between dynamics and kinematics. For to say that length contraction is intrinsically kinematical would be like saying that energy or entropy are intrinsically thermodynamical, not mechanical — something Einstein would never have accepted.

Pais's argument is stupid for other reasons anyway. Lorentz deduced his transformations from experiments, and then gave a constructive explanation of what is physically really going on. Einstein distilled a couple of postulates from Lorentz's reasoning, gave an alternate derivation of those same transformations, and wanted to give a constructive explanation, but could not get it to work.

Lorentz's 1915 explanation was completely correct, as was later proved using quantum field theory. It is not the preferred explanation, as that has been the Poincare-Minkowski non-Euclidean spacetime explanation. Einstein has nothing to do with that.

The point of the above paper is to rebut the argument that we should be happy with an instrumentalist interpretation of quantum mechanics because Einstein was happy with an instrumentalist interpretation of special relativity. Einstein was not so happy with his 1905 interpretation, the paper argues. That's right, and others were not so happy with it either, as the geometric interpretation was already more popular in 1908.

The paper does not mention it, but I believe that in the Bohr-Einstein debates, Bohr (or maybe Pauli or someone else) expressed surprise to Einstein that he was so negative about instrumentalism, when that was his chief claim to fame from his 1905 relativity paper. Einstein disavowed the view that relativity was just something to accept from postulates.

FitzGerald-Lorentz (1889-1892) Logical consequence of interpreting Michelson-Morley as a constant speed of light for all observers.

Lorentz (1895) Solids contract because a moving electromagnetic field distorts space and time parameters.

Einstein (1905) Logical consequence of postulating a constant speed of light for all observers.

Poincare-Minkowski (1905-1908) Non-Euclidean geometry of spacetime.

I consider the geometry a satisfactory explanation, but you might consider it mathematical trickery. I guess Brown considers the Lorentz electromagnetic theory a more constructive physical explanation.

But what is the lesson for quantum mechanics? It is nice to have some explanations that give an intuitive feel for what is really going on, but it is not essential. The trend among anti-positivist philosophers is to latch on to some goofy interpretation of quantum mechanics because it is more realist, whatever that means. It usually does not mean a more intuitive theory to me.

Poincare's 1905 explanation was that there were two interpretations: the one given by Lorentz, and saying that relativity is a theory about “something which would be due to our methods of measurement.” 25 years later Bohr would explain quantum mechanics as being about our methods of measurement, and uncertainty about what we cannot measure. That seems reasonable to me.

"All events are causally ordered such that for every pair of events you can say that one event is cause or effect of the other," says quantum physicist Caslav Brukner. Causality is so ingrained in the texture of our lives, and our brains, that it is hard to imagine letting go of it. Yet this is exactly what Brukner addressed with the help of an FQXi grant of over $63,000. The implications of this idea could be enormous: we might find that space, time and causality are not the basic building blocks of nature. It would have practical consequences too—potentially helping us to build quantum computers to outperform today’s devices. ...

Removing causality is more than just an interesting conceptual shift; it may also have benefits for those attempting to build super fast computers that exploit quantum laws to outperform today’s machines. While our personal computers store information as bits that can either be 1 or 0, quantum computers — which were first proposed in the 1980s — use quantum bits, or qubits, that can represent a 1, 0, or any superposition of these states. The ability of qubits to hold multiple states at once would allow quantum computers, in theory, to solve problems more quickly than classical ones.

Here’s where causality (or a lack of it) comes in: Calculations in a quantum computer follow a fixed sequence of quantum logic gates, at the end of which one number drops out. But quantum information without causal order could, in theory, allow for an even bigger speed-up because it would no longer be necessary for the calculations to follow a sequential route. The effect would be amplified, explains Brukner, when the question is scaled up from just two ports of call — Alice and Bob in Brukner’s research — to numerous others.

Without causality, how do you ever know when the computation has started, and when it has finished?

If you drop causality, then you can do time travel, and get another speedup. Instead of doing a computation iteratively a million times, you could just go back in time a million times, and get a huge speedup.

Then there are paranormal and psychic influences. If mental energy can direct the computer towards a solution, you could get another speedup.

Of course there is not a shred of evidence for causality violations, or for time travel.

And there is no proof that a quantum computer can get a speedup from qubits holding multiple states at once.

There is plenty of evidence for quantum mechanics, and there are interpretations of it that suggest all sorts of weird things. I say to reject those interpretations.

Speaking of interpretations, Lumo seems to accept collapse of the wave function, but denies that it is a unitarity violation. This seems peculiar, because collapses are given by a Hilbert space projection, and projections are not unitary.
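A two-dimensional toy example makes the point about projections concrete; this is just standard linear algebra, nothing specific to Lumo's argument:

```python
import numpy as np

# Projection onto the first basis vector of C^2, modeling "collapse"
# onto a measured outcome.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# Projections are idempotent: applying the collapse twice changes nothing.
assert np.allclose(P @ P, P)

# But P is not unitary: P†P is not the identity.
print(np.allclose(P.conj().T @ P, np.eye(2)))  # False

# Contrast an ordinary rotation, which is unitary.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```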

He wants to hang onto unitarity, because it is associated with probabilities adding up to one. Probabilities adding up to one seems like a mathematical necessity, and hence a physical law, in the view of some.

I am not sure of his argument, but I do agree that abstract considerations of mathematical probability do not forbid collapse of the wave function.

If I toss a coin, and keep the result covered, then there is probability of 0.5 that it is heads, and 0.5 that it is tails. If I then show the coin to be heads, then the possibility of tails disappears. But no one would argue that discarding that probability of tails violates the law that probabilities add up to one.

Likewise, collapse of the wave function discards some possibilities.
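The coin analogy can be spelled out in a few lines; this is just ordinary probabilistic conditioning, nothing quantum:

```python
# Prior over the covered coin.
prior = {"heads": 0.5, "tails": 0.5}
assert abs(sum(prior.values()) - 1.0) < 1e-12

# Uncover the coin and see heads: discard the tails branch and renormalize.
posterior = {k: v for k, v in prior.items() if k == "heads"}
total = sum(posterior.values())
posterior = {k: v / total for k, v in posterior.items()}

print(posterior)  # {'heads': 1.0}
assert abs(sum(posterior.values()) - 1.0) < 1e-12
```

Discarding a branch and renormalizing keeps the total probability at one; nobody calls that a violation of probability theory.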

The people who promote the many-worlds interpretation (such as Sean M. Carroll) would say that unitarity requires that all those possibilities stay alive in some parallel universe. So if I toss a coin and see heads, I will have split into two men with the other seeing tails.

There isn't really any physics behind many-worlds. There is just a mathematical belief that probabilities cannot be eliminated, so they must be shoved into a parallel universe. Unitarity is just a fancy way of saying that.

Friday, March 11, 2016

Misuse of the P value — a common test for judging the strength of scientific evidence — is contributing to the number of research findings that cannot be reproduced, the American Statistical Association (ASA) warns in a statement released today. The group has taken the unusual step of issuing principles to guide use of the P value, which it says cannot determine whether a hypothesis is true or whether results are important.

This is the first time that the 177-year-old ASA has made explicit recommendations on such a foundational matter in statistics, says executive director Ron Wasserstein. The society’s members had become increasingly concerned that the P value was being misapplied in ways that cast doubt on statistics generally, he adds.

In its statement, the ASA advises researchers to avoid drawing scientific conclusions or making policy decisions based on P values alone. ...

The statement’s six principles, many of which address misconceptions and misuse of the p-value, are the following:

1. P-values can indicate how incompatible the data are with a specified statistical model.

2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.

4. Proper inference requires full reporting and transparency.

5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.

6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.

In the medical and social sciences, p-values rule. Every paper cites them. The p-value determines whether the paper is publishable or not.
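A quick simulation illustrates principles 2 and 3: even when the null hypothesis is true by construction, about 5% of experiments will clear the usual p < 0.05 threshold. The two-sided z-test with known unit variance is my own choice for illustration:

```python
import math
import random

random.seed(0)

def z_test_p(sample):
    # Two-sided z-test of "mean = 0" for a sample with known unit variance.
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Standard normal two-sided tail probability via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 10,000 experiments in which there is no real effect at all.
trials = 10_000
hits = sum(1 for _ in range(trials)
           if z_test_p([random.gauss(0, 1) for _ in range(100)]) < 0.05)

print(hits / trials)  # close to 0.05: "significant" findings with no effect
```

With enough studies chasing the threshold, plenty of "significant" findings are guaranteed, effect or no effect.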

Tuesday, March 8, 2016

Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5. Now, one might object that the “15=3×5 theorem” has by now been demonstrated many times using quantum computing ...

Nevertheless, as far as I can tell, the new work is a genuine milestone in experimental QC, because it dispenses with most of the precompilation tricks that previous demonstrations of Shor’s algorithm used. “Precompilation tricks” are a fancier term for “cheating”: i.e., optimizing a quantum circuit in ways that would only make sense if you already assumed that 15 was, indeed, 3×5. So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before.

Of course, as I’m sure the authors would acknowledge, the word “scalable” in their title admits multiple interpretations, rather like the word “possible.” ...

In conclusion, let me suggest the following principle:

I will not assign a nonzero probability to something like 3×5=7, for which there’s no explanatory theory telling me how it possibly could be true. That doesn’t mean that I’ll assign a zero probability — i.e., that I’ll accept a bet with infinite odds on anything that strikes me (perhaps mistakenly) as a logical or metaphysical impossibility. It just means that I’ll refuse to bet at all.

It is still cheating to say they factored 15 "scalably", because the method does not scale up to larger numbers.

In the make-believe quantum world, anything is possible. Everything is just probabilities. If it doesn't happen here, then it happens in some parallel universe.

Scott admits that no one has demonstrated quantum computing, but he likes to study it, so he likes to believe it is possible. And even if it is not possible, it is a rewarding thing to study. And he cannot positively rule it out, just as he cannot rule out 3×5=7.

Speaking of possibilities, it is possible that the elusive dark matter is black holes. Until the LIGO discovery, we only knew about star-sized black holes and giant million-Sun black holes. The star-sized ones show up as part of a binary star where the other star is visible. The big ones are at the nuclei of galaxies, and could be essential for galaxy formation.

Common sense would indicate that there must be some intermediate sizes. The LIGO team claims it saw two 30-Sun black holes collide. This could be a freak event, or there could be billions of them scattered all over the place, and we would never notice. If the latter, they could be the dark matter whose gravity is essential to holding galaxies together.

I posted some skepticism about LIGO, but we could soon have a lot more data to settle the question. Maybe another country will build a LIGO, and not build in the capability to fake results.

It has not seemed to me appropriate, nor would there be time, nor should I be able, to enter into an exhaustive study of the life-work of a master-mind like Jules Henri Poincaré. Indeed, to analyze his contributions to astronomy needs a Darwin; to report on his investigations in mathematical physics needs a Planck; to expound his philosophy of science needs a Royce; to exhibit his mathematical creations in all their fullness needs Poincaré. Let it suffice that he was the pride of France, not only of the aristocracy of scholars, but of the nation. He was inspired by the genius of France, with its keen discernment, its eternal search for exact truth, its haunting love of beauty. The mathematical world has lost its incomparable leader, and its admiration for the magnitude of his achievements will be tempered only by the vain desire to know what visions he had not yet given expression to. Investigators of brilliant power for years to come will fill out the outlines of what he had time only to sketch. His vision penetrated the universe from the electron to the galaxy, from instants of time to the sweep of space, from the fundamentals of thought to its most delicate propositions. ...

Poincaré's conception of science can be summed up in these terms: Science consists of the invariants of human thought. ...

We are witnesses too of an evolution in science and mathematics from the continuous to the discontinuous. In mathematics it has produced the function defined over a range rather than a line — a chaos, as it were, of elements — and the calculable numbers of Borel. In physics it has produced the electron, the magneton, and the theory of quanta, about which Poincaré said shortly before his death:

A physical system is capable of only a finite number of distinct states; it abruptly jumps from one state to another without passing through the intermediate states.

Do all the processes of the universe, which appear to go on smoothly and continuously, gliding from one state to another, really take place with a series of infinitesimal jerks? Does a ball, when thrown into the air, move with a series of tiny leaps so close together than they blend to the eye? This is precisely what takes place in a moving picture. Is nature, in this respect, one vast cinematograph? This would appear to be the result of a striking and almost revolutionary theory propounded first in Germany, but elucidated and extended in the Revue Scientifique (Paris, February 24) by Henri Poincaré, an eminent French physicist. According to this theory, energy consists of discontinuous portions just as matter does. There are "atoms" of energy as well as of matter, and possibly also "atoms" of time, causing all duration to be jerky instead of smooth, as it appears to be.

Poincare's last essay, in 1912, was on The New Conceptions of Matter. He tells how he was finally persuaded of atomism. Previously he had been a proponent of continuity, and referred to the "atomic hypothesis" as if it were just a convention.

Wednesday, March 2, 2016

How has the mathematics community reacted? With Gödel, at first there was a lot of shock. As a student in the late 1950s and early 1960s, I would read essays by Weyl, by John von Neumann, and by other mathematicians. Gödel really disturbed them. He took away their belief in the Platonic world of ideas, the principle that mathematical truth is black or white and provides absolute certainty.

But now, strangely enough, the mathematics community ignores Gödel incompleteness and goes on exactly as before, in what I would call a Hilbertian spirit, or following the Bourbaki tradition, the French school inspired by Hilbert. Formal axiomatic theories are still the official religion in the mathematics community. ...

Gödel incompleteness is even unpopular among logicians. They are ambivalent. On the one hand, Gödel is the most famous logician ever. But, on the other hand, the incompleteness theorem says that logic is a failure.

Yes, of course mathematicians use axiomatic theories. They have for over two millennia. That is what mathematics is.

Godel certainly did not prove that logic is a failure. He would have vehemently resisted interpreting his work that way.

I am surprised at this, because Rudolf Carnap and Hempel were logical positivists, and today's philosophers say that was wrong. Popper was an anti-positivist, but people like him because they think he was a positivist. Kuhn was even more anti-positivist, and popularized denying objective truth.

Update: Here is a philosopher whining about philosophers not getting respect from science popularizer Bill Nye. For example, Pigliucci complains that Nye falls for the fallacy that dropping a hammer on his foot convinces him that the hammer is real. I don't know; this seems as good an argument as anything else that the hammer is real. But I guess philosophers like to live in a pretend-world where nothing is real.