This is what I get for skimming an entertainment website for a momentary diversion.

So, everybody’s seen the cool new video, “‘Cantina Theme’ played by a pencil and a girl with too much time on her hands,” right?

And we’ve heard the claim, via Mashable and thence The AV Club, that the formula “can actually be used to determine the speed of light,” yes?

It’s a joke. The “proof” is words thrown into a box and filled with numbers so that nobody reads it too carefully. The algebra isn’t even right — hell, it does FOIL wrong — but that’s just a detail. I tried to think of a way to use it as a hook to explain some real science, as I’ve tried before, upon occasion, but there just wasn’t any there there. The whole thing is goofing off.

Obvious goofing off, I would have thought. Somewhere south of a Star Trek: Voyager technobabble speech. But no, never underestimate the ability of numbers to make a brain shut down.

A few years ago, I found a sentence in a Wikipedia page that irritated me so much, I wrote a 25-page article about it. Eventually, I got that article published in the Philosophical Transactions of the Royal Society. On account of all this, friends and colleagues sometimes send me news about Wikipedia, or point me to strange things they’ve found there. A couple such items have recently led me to Have Thoughts, which I share below.

This op-ed on the incomprehensibility of Wikipedia science articles puts a finger on a real problem, but its attempt at explanation assumes malice rather than incompetence. Yes, Virginia, the science and mathematics articles are often baffling and opaque. The Vice essay argues that the writers of Wikipedia’s science articles use the incomprehensibility of their prose as a shield to keep out the riffraff and maintain the “elite” status of their subject. I don’t buy it. In my opinion, this hypothesis does not account for the intrinsic difficulty of explaining science, nor for the incentive structures at work. Wikipedia pages grow by bricolage, small pieces of cruft accumulating over time. “Oh, this thing says [citation needed]. I’ll go find a citation to fill it in, while my coffee is brewing.” This is not conducive to clean pedagogy, or to a smooth transition from general-audience to specialist interest.

Have no doubt that a great many scientists are terrible at communication, but we can also imagine a world in which Wikipedia would attract the scientists that actually are good at communication.

There’s communication, and then there’s communication. (We scientists usually get formal training in neither.) I know quite a few scientists who are good at outreach. They work hard at it, because they believe it matters and they know that’s what it takes. Almost none of them have ever mentioned editing Wikipedia (even the one who used his science blog in his tenure portfolio). Thanks to the pressures of academia, the calculation always favors a mode of outreach where it’s easier to point to what you did, so you can get appropriate credit for it.

Thus, there might be a momentary impulse to make small-scale improvements, but there’s almost no incentive to effect changes that are structured on a larger scale — paragraphs, sections, organization among articles. This is a good incentive system for filling articles with technical minutiae, like jelly babies into a bag, but it’s not a way to plan a curriculum.

The piece in Vice says of a certain physics article,

I have no idea who the article exists for because I’m not sure that person actually exists: someone with enough knowledge to comprehend dense physics formulations that doesn’t also already understand the electroweak interaction or that doesn’t already have, like, access to a textbook about it.

You’d be surprised. It’s fairly common to remember the broad strokes of a subject but need a reference for the fiddly little details.

Writers don’t just dip in, produce some Wikipedia copy, and bounce.

I’m pretty sure this is … actually not borne out by the data? Like, many contributors just add little bits when they are strongly motivated, while a smaller active core of persistent editors cleans up the content, gets involved in article-improvement drives, wrangles behind the scenes, etc.

[EDIT TO ADD (24 November): To say it another way, both the distribution of edits per article and edits per editor are “fat tailed, which implies that even editors and articles with small numbers of edits should not be neglected.” Furthermore, most edits do not change an article’s length, or change it by only a small amount. The seeming tendency for “fewer editors gaining an ever more dominant role” is a real concern, but I doubt the opacity of technical articles is itself a tool of oligarchy. Indeed, I suspect that other factors contribute to the “core editor” group becoming more insular, one being the ease with which policies originally devised for good reasons can be weaponized.]

If you want “elitism,” you shouldn’t look in the technical prose on the project’s front end. Instead, you should go into the backroom. From what I’ve seen and heard, it’s very easy to run afoul of an editor who wants to lord over their tiny domain, and who will sling around policies and abbreviations and local jargon to get their way. Any transgression, or perceived transgression, is an excuse to revert.

Just take a look at “WP:PROF” — the “notability guideline” for evaluating whether a scholar merits a Wikipedia page. It’s almost 3500 words, laying out criteria and then expounding upon their curlicues. And if you create an article and someone else decides it should be deleted, you had better be familiar with the Guide to deletion (roughly 6700 words), which overlaps with the Deletion process documentation (another 4700 words). More than enough regulations for anyone to petulantly sling around until they get their way!

If Bechly’s article was originally introduced due to his scientific work, it was deleted due to his having become a poster child for the creationist movement.

I strongly suspect that it would have been deleted if it had been brought to anyone’s attention for any other reason, even if Bechly hadn’t gone creationist. His scientific work just doesn’t add up to what Wikipedia considers “notability,” the standard codified by the WP:PROF rulebook mentioned above. Nor were there adequate sources to write about his career in Wikipedia’s regulation flat, footnoted way. The project is clearly willing to have articles on creationists, if the claims in them can be sourced to their standards of propriety: Just look at their category of creationists! Bechly’s problem was that he was only mentioned in passing or written up in niche sources that were deemed unreliable.

If you poke around that deletion discussion for Bechly’s page, you’ll find it links to a rolling list of such discussions for “Academics and educators,” many of whom seem to be using Wikipedia as a LinkedIn substitute. It’s a mundane occurrence for the project.

And another thing about the Haaretz article. It mentions sockpuppets arriving to speak up in support of keeping Bechly’s page:

These one-time editors’ lack of experience became clear when they began voting in favor of keeping the article on Wikipedia – a practice not employed in the English version of Wikipedia since 2016, when editors voted to exchange the way articles are deleted for a process of consensus-based decision through discussion.

Uh, that’s been the rule since 2005 at least. Not the most impressive example of Journalisming.

I did start drafting an essay I call “To Thems That Have, Shall Be Given More”. There are a sizable number of examples where Feynman gets credit for an idea that somebody else discovered first. It’s the rich-get-richer of science.

There are other people named Blake Stacey around the United States. I know this because (a) I came across their records when opting myself out of person-search websites, and (b) sometimes they use my GMail address when signing up for things. (Or, to be fair, perhaps they write their address in a form and someone else types it incorrectly.) I keep getting customer satisfaction surveys and even credit-card receipts from an auto dealership in a state I haven’t even visited in years.

I finally gave up on Twitter. It had been descending into mediocrity and worse for a long time. The provocation that gave me the nudge I needed was dropping in after a few days away and finding my timeline cluttered into uselessness, because their Algorithm (in its ineffable Algorithmhood) had decided to interpret “likes” as retweets. This is a feature they decided the world needed, and they decided that it was so beneficial that there would be no way to turn it off. What’s more, it comes and goes, so one cannot plan around it or adapt one’s habits to it, and when it is present, it is applied stochastically.

Consequently, the meaning of clicking the “like” icon is not constant over time. If you care at all about what your followers experience, you cannot expect taking the same action to have the same result. The software demands, by definition, insanity.

So, now I fill my subway-riding time with paperback books that I’d bought at the Harvard Bookstore warehouse sale and never gotten around to reading.

The big scandal this weekend: Peter Boghossian and James Lindsay pulled a hoax on a social-science journal by getting a deliberately nonsensical paper published there, and then crowed that this demonstrates the field of gender studies to be “crippled academically.” However, when people with a measure of sense examined B&L’s stunt, they found it to be instead evidence that you can get any crap published if you lower your standards far enough, particularly if you’re willing to pay for the privilege and you find a journal whose raison d’être is to rip people off. Indeed, B&L’s paper (“The conceptual penis as a social construct”) was rejected from the first journal they sent it to, and it got bounced down the line to a new and essentially obscure venue of dubious ethical standing. Specifically, I can’t find anybody who had even heard of Cogent Social Sciences apart from spam emails inviting them to publish there. This kind of bottom-feeding practice has proliferated in the years since Open Access publishing became a thing, to unclear effect. It hasn’t seemed in practice to tarnish the reputation of serious Open Access journals (the PLOS family, Scientific Reports, Physical Review X, Discrete Analysis, etc.). Arguably, once the infrastructure of the Web existed, some variety of pay-to-publish scam was inevitable, since there will always be academics angling for the appearance of success—as long as there are tenure committees.

Boghossian and Lindsay made the triumphant announcement of their hoax in Skeptic, a magazine edited by Michael Shermer. And if you think that I’ll use this as an occasion to voice my grievances at Capital-S Skepticism being a garbage fire of a movement, you’re absolutely correct. I agree with the thesis of Ketan Joshi here:

The article in Skeptic Magazine highlights how regularly people will vastly lower their standards of skepticism and rationality if a piece of information is seen as confirmation of a pre-existing belief – in this instance, the belief that gender studies is fatally compromised by seething man-hate. The standard machinery of rationality would have triggered a moment of doubt – ‘perhaps we’ve not put in enough work to separate the signal from the noise’, or ‘perhaps we need to tease apart the factors more carefully’.

That slow, deliberative mechanism of self-assessment is non-existent in the authorship and sharing of this piece. It seems quite likely that this is due largely to a pre-existing hostility towards gender studies, ‘identity politics’ and the general focus of contemporary progressive America.

Boghossian and Lindsay see themselves as the second coming of Alan Sokal, who successfully fooled Social Text into publishing a parody of postmodern theory-babble back in 1996. But after the fact, Sokal said the publication of his hoax itself didn’t prove much at all, just that a few people happened to be asleep at the wheel. (His words: “From the mere fact of publication of my parody I think that not much can be deduced.”) Then he wrote two books of footnotes and caveats to show that he had lampooned some views he himself held in more moderate form.

A few weeks back, I reflected on why mathematical biology can be so hard to learn—much harder, indeed, than the mathematics itself would warrant.

The application of mathematics to biological evolution is rooted, historically, in statistics rather than in dynamics. Consequently, a lot of model-building starts with tools that belong, essentially, to descriptive statistics (e.g., linear regression). This is fine, but then people turn around and discuss those models in language that implies they have constructed a dynamical system. This makes life quite difficult for the student trying to learn the subject by reading papers! The problem is not the algebra, but the assumptions; not the derivations, but the discourse.

Recently, a colleague of mine, Ben Allen, coauthored a paper that clears up one of the more confusing points.

Hamilton’s rule asserts that a trait is favored by natural selection if the benefit to others, $B$, multiplied by relatedness, $R$, exceeds the cost to self, $C$. Specifically, Hamilton’s rule states that the change in average trait value in a population is proportional to $BR – C$. This rule is commonly believed to be a natural law making important predictions in biology, and its influence has spread from evolutionary biology to other fields including the social sciences. Whereas many feel that Hamilton’s rule provides valuable intuition, there is disagreement even among experts as to how the quantities $B$, $R$, and $C$ should be defined for a given system. Here, we investigate a widely endorsed formulation of Hamilton’s rule, which is said to be as general as natural selection itself. We show that, in this formulation, Hamilton’s rule does not make predictions and cannot be tested empirically. It turns out that the parameters $B$ and $C$ depend on the change in average trait value and therefore cannot predict that change. In this formulation, which has been called “exact and general” by its proponents, Hamilton’s rule can “predict” only the data that have already been given.
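The circularity the abstract describes is easy to see numerically. In the “exact and general” formulation, $B$ and $C$ are partial regression coefficients of fitness on partner and own trait values, and $R$ is the regression of partner trait on own trait — all computed from the same data that determine the trait change. The following sketch (my illustration, not code from the paper; all variable names are mine) shows that $\mathrm{Cov}(w, g) = \mathrm{Var}(g)\,(BR - C)$ holds as an algebraic identity of least squares, even for fitness values that are pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

g = rng.normal(size=n)             # each individual's trait value
gp = 0.4 * g + rng.normal(size=n)  # partner's trait, correlated with g
w = 1.0 + rng.normal(size=n)       # fitness: pure noise, no real B or C

# Least-squares regression of fitness on own and partner trait:
#   w ≈ alpha + beta1 * g + beta2 * gp
X = np.column_stack([np.ones(n), g, gp])
alpha, beta1, beta2 = np.linalg.lstsq(X, w, rcond=None)[0]

C = -beta1                                    # "cost"
B = beta2                                     # "benefit"
R = np.cov(g, gp)[0, 1] / np.var(g, ddof=1)   # "relatedness"

# By the Price equation, the change in mean trait is proportional to
# Cov(w, g); least-squares orthogonality makes this an identity:
lhs = np.cov(w, g)[0, 1]
rhs = np.var(g, ddof=1) * (B * R - C)
print(np.isclose(lhs, rhs))  # True for any data whatsoever
```

Because the identity holds for arbitrary data, the sign of $BR - C$ always agrees with the sign of the trait change — which is exactly why, as the abstract says, the rule in this formulation can “predict” only data it has already been handed.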

When I face a writing task, my two big failure modes are either not starting at all and dragging my feet indefinitely, or writing far too much and having to cut it down to size later. In the latter case, my problem isn’t just that I go off on tangents. I try to answer every conceivable objection, including those that only I would think of. As a result, I end up fighting a rhetorical battle that only I know about, and the prose that emerges is not just overlong, but arcane and obscure. Furthermore, if the existing literature on a subject is confusing to me, I write a lot in the course of figuring it out, and so I end up with great big expository globs that I feel obligated to include with my reporting on what I myself actually did. That’s why my PhD thesis set the length record for my department by a factor of about three.

I have been experimenting with writing scientific pieces that are deliberately bite-sized to begin with. The first such experiment that I presented to the world, “Sporadic SICs and the Normed Division Algebras,” was exactly two pages long in its original form. The version that appeared in a peer-reviewed journal was slightly longer; I added a paragraph of context and a few references.

My latest attempt at a mini-paper (articlet?) is based on a blog post from a few months back. I polished it up, added some mathematical details, and worked in a comparison with other research that was published since I posted that blog item. The result is still fairly short:

It seems the best way to explain Mastodon to an old person (like me) is that it’s halfway between social networking, the way big companies do it, and email. You create an account on one server (or “instance”), and from there, you can interact with people whose accounts live on other servers. Different instances can have different policies about what kinds of content they allow, depending for example on what type of community the administrators of the instance want to cater to.

If I ever administer a Mastodon instance, I think I’ll make “content warnings” mandatory, but I’ll change the interface so that they’re called “subject lines.”

Recent years have seen significant advances in the study of symmetric informationally complete (SIC) quantum measurements, also known as maximal sets of complex equiangular lines. Previously, the published record contained solutions up to dimension 67, and was with high confidence complete up through dimension 50. Computer calculations have now furnished solutions in all dimensions up to 151, and in several cases beyond that, as large as dimension 323. These new solutions exhibit an additional type of symmetry beyond the basic definition of a SIC, and so verify a conjecture of Zauner in many new cases. The solutions in dimensions 68 through 121 were obtained by Andrew Scott, and his catalogue of distinct solutions is, with high confidence, complete up to dimension 90. Additional results in dimensions 122 through 151 were calculated by the authors using Scott’s code. We recap the history of the problem, outline how the numerical searches were done, and pose some conjectures on how the search technique could be improved. In order to facilitate communication across disciplinary boundaries, we also present a comprehensive bibliography of SIC research.
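For the curious, the defining condition is small enough to check by hand in the lowest dimension. A SIC in dimension $d$ is a set of $d^2$ quantum states whose pairwise overlaps all equal $1/(d+1)$; in $d = 2$, the four states form a regular tetrahedron in the Bloch sphere. Here is a quick sanity check of that case (my illustration, unrelated to the paper’s numerical searches):

```python
import itertools
import numpy as np

# Bloch vectors of a regular tetrahedron, normalized to unit length
bloch = np.array([[1, 1, 1], [1, -1, -1],
                  [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = np.stack([sx, sy, sz])

# Density matrices rho = (I + r . sigma) / 2 for each Bloch vector r
states = [(np.eye(2) + np.tensordot(r, paulis, axes=1)) / 2 for r in bloch]

# SIC condition in d = 2: Tr(rho_j rho_k) = 1/(d+1) = 1/3 for all j != k
overlaps = [np.trace(a @ b).real
            for a, b in itertools.combinations(states, 2)]
print(np.allclose(overlaps, 1 / 3))  # True
```

The heavy lifting in higher dimensions is another matter entirely, but the condition being tested is always this one: $d^2$ states, every distinct pair overlapping at exactly $1/(d+1)$.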

“This is probably what it felt like to be a British foreign service officer after World War II, when you realize, no, the sun actually does set on your empire,” said the mid-level officer. “America is over. And being part of that, when it’s happening for no reason, is traumatic.”

While I was writing Multiscale Structure in Eco-Evolutionary Dynamics, I found myself having a frustrating time reading through big chunks of the relevant literature. The mathematics in the mathematical biology was easier than a lot of what I’d had to deal with in physics, but the arguments were hard to follow. At times, it was even difficult to tell what was being argued about. A blog post by John Baez, on “biology as information dynamics,” called this frustration back to mind—not because it was unclear itself, but rather because it touched on the source of the fog.

I think the basic cause of the trouble is the following:

The application of mathematics to biological evolution is rooted, historically, in statistics rather than in dynamics. Consequently, a lot of model-building starts with tools that belong, essentially, to descriptive statistics (e.g., linear regression). This is fine, but then people turn around and discuss those models in language that implies they have constructed a dynamical system. This makes life quite difficult for the student trying to learn the subject by reading papers! The problem is not the algebra, but the assumptions. And that always makes for a thorny situation.

The APS, my professional organization, has made some dunderheaded moves of late, but this is more encouraging. An email from the APS president and CEO, broadcast today to the membership at large, begins thusly:

We share the concerns expressed by many APS members about recent U.S. government actions that will harm the open environment that is essential for a successful global scientific enterprise. The recent executive order regarding immigration, and in particular, its implementation, would reduce participation of international scientists and students in U.S. research, industry, education, and conference activities, and sends a chilling message to scientists internationally.