A link to the page sits permanently in the menu bar at the top of the site, at the far left. Look for Uncertainty Book.

Comments are welcome on this post, the one you’re reading now. But if your comments are meant to be permanent, or ask fundamental questions, or point out a rare typo placed by my enemies after the several rounds of editing, don’t leave them here: comments on this page will eventually get lost. Leave them on the permanent book page instead.

I had an old book page (with my previous two, now-free books), but it was plain HTML. I’ll (soon) have it redirect to the new book page. This is not that page.

I’m forever honing, tweaking, and muscling a pithy preview, of which the best I have so far is this (repeated on the real book page):

The Big Gist

All probability is conditional;

Probability is not decision.

From those simple and proved truths flow these consequences:

Probability cannot discern cause;

Therefore no hypothesis test, by wee p-value or Bayes factor, should ever be used;

Therefore parameters are of no interest to man or beast;

Therefore verified probability models should be used in a predictive sense only;

Therefore to understand cause and provide explanation we must look to nature, essence, and power.

Therefore buy the book and be the first on your block to come to a wondrous, penetrating understanding of probability & statistics. Out with the new and in with the old! The older, better, and true understanding of cause and probability, that is. Eschew mathematics for the sake of mathematics, flee ad hocery in all its forms and wiles, and put probability to its intended genuine use!

This includes you, too, computer scientists, with your big deep data neural net machine “learning” fuzzy algorithms which are all probability models by (admittedly) cuter and more precious names.

Favor To Ask

For those who have read the book, could I beg of you to review it at Amazon? The book is, as of this writing, still the #1 New Release in Statistics, and Epistemology, and Logic. That happy state won’t last. But because of the way Amazon’s book recommendations work, the more the book sells, and the more reviews it has, the more it is put in front of browsers, and thus the more it sells, and so on. A positive feedback loop.

In this case a positively positive positivity. Let’s get the world to abandon hypothesis tests and parameter estimation and return probability to its roots!

Classwork

We already agree every working scientist who creates models of any kind should read the book, most especially those who create probability models. But can the book be used for students? You know it can.

Can the book be used for a class in Philosophy of Probability? Yes. How about an intro class of probability for those not focused on math? Sure (there are only a few rough passages requiring advanced techniques, and these can be skipped). Can it be used to teach Philosophy of Science? Yes. Can it be used as a survey course for the purposes and goals of Statistics or Modeling of any kind? Yes. Can it be used as an adjunct to any course in probability models or statistical methods? Absolutely.

Can it be used as a methods cookbook? No. I really have only one running example, and a simple one. The point is understanding what any method means, not serving up examples.

Methods are less of a concern. The beauty is that customers ask for Pr(Y|X), where X is some set of premises or conditions (possibly including those about an approximate infinitary model) and Y some proposition of interest. Uncertainty says give them that and nothing else! Classical methods, frequentist or Bayesian, answer every question but Pr(Y|X). No: Bayes factors, p-values, and parameter estimates are not Pr(Y|X).
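To make the contrast concrete, here is a minimal sketch (my illustration, not a method taken from the book) of a genuine Pr(Y|X): the predictive probability of an observable, computed from past observations plus a uniform-prior premise, via Laplace’s rule of succession. Note what comes out is a probability of Y itself, not a parameter estimate or a p-value.

```python
# Sketch of a predictive Pr(Y|X). Here X = "s successes were observed
# in n past trials, and we accept a uniform prior on the chance of
# success"; Y = "the next trial is a success". (Illustrative only.)

def pr_next_success(s: int, n: int) -> float:
    """Laplace's rule of succession: Pr(Y | s successes in n trials,
    uniform prior)."""
    return (s + 1) / (n + 2)

# Example: 7 successes in 10 past trials.
p = pr_next_success(7, 10)
print(round(p, 3))  # 8/12 = 0.667
```

The answer is directly usable by the customer: a probability for the thing he actually cares about.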

On the other hand, those wanting to discover new methods (those looking to write papers, that is) will find a wealth, a veritable plethora of ideas! Be the first on your block to discover the Pr(Y|X)-way of SNP analysis! Or discrete-value (like all values are) regression!

Or do sample size calculations. I’m working with a guy on a cool experiment (details later) for which I had to figure out the sample size. Unlike in classical methods, in the probability-as-probability method espoused in Uncertainty, skill is a necessary component. Skill is detailed in Uncertainty, and it means “beating” a simpler (or another) model. I’ll put up an arXiv paper on how to do the calculations for this kind of simple, very common model.
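As a hedged illustration of the idea (my own sketch; the book’s treatment of skill is more careful), skill can be measured as beating a naive base-rate model on a proper score such as the Brier score. The model has skill only if its predictive probabilities out-score the simpler model’s.

```python
# Toy skill calculation (illustrative numbers, not from the book):
# compare a candidate model's predictive probabilities against a
# naive model that always predicts the observed base rate.

def brier(probs, outcomes):
    """Mean squared error of probabilistic predictions of 0/1 events."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0, 1, 0, 1]                     # observed events
model_p = [0.9, 0.2, 0.8, 0.7, 0.3, 0.9, 0.1, 0.8]      # candidate model
base = sum(outcomes) / len(outcomes)                     # base rate
naive_p = [base] * len(outcomes)                         # simpler model

# Positive skill means the candidate beats the naive model.
skill = 1 - brier(model_p, outcomes) / brier(naive_p, outcomes)
print(skill > 0)  # True for these illustrative numbers
```

A sample-size calculation in this spirit asks how much data is needed before a model can reliably demonstrate such skill.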

Memetic compliments

For those who like goofy pictures with quirky quips, I got ’em for ya. Suggestions for more are welcome. Click the pics for larger images; re-size at leisure.

UPDATE There are two reviews! God bless my readers!

5.0 out of 5 stars
The end of the p-value?
By Peter Houlding on July 25, 2016

Let me (try to) be the first. So far I’ve had a lot of enjoyment from this book, which, like most of Briggs’ writing, tackles weighty subjects in language as plain as the subject allows and which presents them with his good humour shining through. It’s a book which I would gladly set for my Philosophy students, and for the mathematicians drifting towards any form of modelling activity. There are gems everywhere, sufficient to excite a class and to generate debate. And how does Flippenbergerology relate to climatology? Briggs doesn’t say, but it’s worthy of discussion.

Look out for Section 9.7: “Die, p-Value, Die, Die, Die”!

————————-

5.0 out of 5 stars
But I think that interpretation better accounts for the state of the world as I’ve …
By Mark Miller on July 25, 2016

Audacious. If he’s right – and the argument of the book seems to me sound – it would drastically alter the way any number of scientific pursuits should be carried out. I’m reasonably well read in philosophy, but weak in mathematics, so I was apprehensive about picking it up, but I found it entirely comprehensible. He seems practiced in the art of teaching. I don’t think the lack of math detracted from the argument. It’s listed on Amazon under “Science & Math › Mathematics” but this really is a work of epistemology. The title is appropriate. There may be a whole lot more uncertainty going around than we are inclined to admit, but I think that interpretation better accounts for the state of the world as I’ve experienced it.

Miller’s right that if I’m right (and I am), it should alter the way science is pursued. Get your copy today and discover why.


6 Comments

I’m into chapter two so this scattershot “review” (scarequotes intended) is only for chapter one. I found four incursions by your enemies, but they’re rather minor and easily can be missed, even by fresh eyes. The text reads quite clearly although the vocabulary is advanced for what passes as undergraduates these days. A glossary would help with some terms. I found the last section more dense than the earlier parts of the chapter. It could use some editing. Good to see our old friend George the Martian in a hat, again. I found the type exceedingly small for these old eyes, but I suppose the page limitation on so much material forced the compression. Overall, a nice start at laying the foundation with just enough breeziness here and there to keep it readable.

neural net machine … which are all probability models by (admittedly) cuter and more precious names.

FYI: The weight settings may be probabilistic in nature but are not arrived at by using probability. Instead the gradient of the training error surface is used to find the settings yielding the minimum error (at least locally). Conceptually the same as the Newton-Raphson method if not identical.
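For instance, a minimal gradient-descent sketch (an assumed toy example, not code from any particular library) of the procedure described: follow the negative gradient of the squared training error until it (locally) bottoms out.

```python
# Toy gradient descent on a training-error surface: fit y ≈ w*x by
# repeatedly stepping the weight against the gradient of the mean
# squared error. No probability is used to arrive at the weight.

def train_weight(xs, ys, lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # data generated by y = 2x
print(round(train_weight(xs, ys), 3))        # converges near 2.0
```

The weight comes out of pure error minimization; any probability attached to the model’s predictions is imposed afterwards.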

The predictions of a neural network have a probability but that is determined outside of the network. The same is true of regressions. It’s their use which involves a probability model which is not inherent in (or used by) the computed model. (Most anyway, there are a few that do. For example, there is a Probabilistic Neural Network which uses probability to evaluate class membership. The PNN is really a fuzzy model but doesn’t use fuzzy math.)

Any model, however it makes predictions and however it is constructed, is considered in Uncertainty. I consider the whole panoply of models, from causal, to deterministic (not the same as causal), to probability, to mixed. I’m not interested per se in how a neural net, or any comp-sci model, is made. But in the end it can be used in a predictive sense. And so what do those predictions mean? How are they evaluated? Do they help us understand cause? Or is the model purely correlative? And so on.

Sorry for the terse comment. I was in a “we have to talk” moment.
The message was just that one buys electronic books so that one has control of the font size, which means one does not have to hunt down those expletive-deleted reading glasses.