Dedicated to the mathematical arts.

Justin Curry has written an excellent introduction to cosheaves. Cosheaves are the dual notion to sheaves, but many specific properties of sheaves of sets do not dualize, so they have a somewhat different flavor. The introduction includes some applications of cosheaves in networks.

As part of my new program of bringing 2009's internet to you today, I was fiddling around with Translation Party, which repeatedly translates a sentence in English into Japanese and back, until it finds a fixed point. Once I got tired of song lyrics, I tried various mathematical statements paraphrased into plain English. Most of the time, it converges right away on something close to the original sentence, or on gibberish, but I did find one example where it converted a true mathematical statement into an intelligible, but false, mathematical statement: that a matrix group is a group algebra.
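Translation Party's round-trip loop is just fixed-point iteration on strings: apply the translation round trip until the sentence stops changing, with an iteration cap since the sequence may cycle instead of converging. A minimal sketch in Python, with a toy `round_trip` function standing in for the real translation service (both names are my own, not part of Translation Party):

```python
def find_fixed_point(f, x, max_iters=50):
    """Apply f repeatedly until the value stops changing.

    Returns (value, converged). The iteration cap matters because
    repeated translation can cycle rather than converge.
    """
    for _ in range(max_iters):
        fx = f(x)
        if fx == x:
            return x, True
        x = fx
    return x, False

# Toy stand-in for an English -> Japanese -> English round trip:
# lowercases and collapses whitespace, so it settles quickly.
def round_trip(sentence):
    return " ".join(sentence.lower().split())

result, converged = find_fixed_point(round_trip, "A  Matrix Group IS a Group Algebra")
# result == "a matrix group is a group algebra", converged is True
```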

I might be the last person in the world to find out, but I just found a website, Detexify, that clearly works on magic. You draw a symbol, and it looks for the closest LaTeX symbol. It works surprisingly well. I just drew a terrible approximation of the Weierstrass p symbol, and the actual Weierstrass p symbol was the 4th hit.

I like to read the Low-Dimensional Topology blog, despite the fact that I know almost nothing about the subject. (It’s possible I like to read it because I know nothing about the subject.)

Over the past year, several posts have conveyed palpable excitement over a series of preprints that prove two conjectures: the virtually Haken conjecture and its generalization, the virtually fibered conjecture. These were apparently the outstanding open conjectures after the proof of geometrization. This post in particular describes the techniques involved in the proof. To see how fast things changed over the past year, this post on the Wise conjecture (an important ingredient of the proof) makes it clear that, as of March of this year, it was very much an open question which way the result would turn out.

I’d been meaning to learn more about the subject, just to have a better idea of what happened. (For example, I still don’t really understand what a Haken manifold is, even though I’ve read the definition.) Fortuitously, Erica Klarreich has written a long general-audience article that gives at least some of the flavor of what’s going on.

I was never really sure I believed Weibel’s famous footnote that a proof of the Snake Lemma appeared in a 1980 romantic comedy, It’s My Turn, but Oliver Knill has put together a gallery of math clips from movies and TV shows, and it’s there.

What’s interesting about the clip is that it’s clear to a math audience that the student who keeps interrupting is a blowhard who has no idea what he’s talking about. While it would be clear to any audience that the student is arrogant, I don’t know if it would be clear that the student doesn’t know what he’s talking about.

Nate Eldridge has written a program, Mathgen, to randomly generate a nonsense math paper. (It’s based on an older program, SCIgen, that generates random computer science papers.) While the papers don’t make any sense, they capture the typical style of mathematical writing pretty well. The main quirk that gives them away is that a real math paper would repeat terminology, while Mathgen invents new mathematical terms in every sentence. (This is an inevitable consequence of the fact that the underlying algorithm is context-free.)
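The context-free quirk is easy to see in miniature: a generator that expands a grammar by choosing productions at random has no memory of what it produced before, so terminology is minted fresh at every expansion and never deliberately reused. A toy sketch of the idea (this little grammar is made up for illustration; it is not Mathgen's actual grammar):

```python
import random

# A tiny invented grammar in Mathgen's spirit. Uppercase keys are
# nonterminals; everything else is a terminal token.
GRAMMAR = {
    "SENTENCE": [
        ["Clearly,", "NOUN", "is", "ADJ", "."],
        ["It is well known that", "NOUN", "is", "ADJ", "."],
    ],
    "NOUN": [["every", "ADJ", "functor"], ["the", "ADJ", "subgroup"]],
    "ADJ": [["quasi-affine"], ["almost", "ADJ"], ["co-compact"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol. Each choice is independent of all
    previous choices -- the context-free flaw that makes terminology
    never repeat on purpose."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    words = []
    for sym in production:
        words.extend(expand(sym, rng))
    return words

# Tokens are joined naively with spaces, so the spacing is rough.
rng = random.Random(0)
sentence = " ".join(expand("SENTENCE", rng))
```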

Apparently it doesn’t give it away for everyone, though. A Mathgen-generated paper was submitted to a journal, Advances in Pure Mathematics, where it was accepted with revisions. I’ve never heard of this journal, so I would assume that it’s the mathematical version of a vanity publisher, making money from publication fees. But what’s amazing is that the paper was peer reviewed! The suggested revisions are of the form “please make this make sense”, but still, out there somewhere there’s a person who read this paper and tried to make constructive comments. Who was this person?

As probably most of you have heard, Shinichi Mochizuki has announced a proof of the abc conjecture. At some point I decided to stop posting about announcements of solutions to famous unsolved problems, after several high-profile retractions. This time, it’s been long enough to wonder if the proof will hold up. The papers are of daunting technical complexity, so it sounds like it will be some time before we hear the verdict.

I find typing rules in theoretical computer science hard to read, and I just realized that it’s for a completely trivial reason: I subconsciously read “:” as having lower precedence than ⊢ and “,”, which is completely wrong, so I have to concentrate to group everything the right way.

The only reason I can think of for this is that, way back when, I learned Pascal, which allows declarations like “x, y : integer”, which treats “,” as binding more tightly than “:”.
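Concretely, the grouping question is about a judgment like the one below; the bracketing shown is the standard convention (spelled out here as a reminder, not something from the original post):

```latex
% ":" binds tightest, then ",", then the turnstile:
\Gamma,\; x : \sigma \;\vdash\; e : \tau
\qquad \text{parses as} \qquad
\bigl(\Gamma,\ (x : \sigma)\bigr) \;\vdash\; (e : \tau)
```

That is, a context is a comma-separated list of `variable : type` bindings, and the turnstile separates the whole context from the typed conclusion.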

This article from the New York Times has a startling statistic: sales of the print edition of the Encyclopedia Britannica have dropped from 120,000 in 1990 to 12,000 today. (The article says 8,000, but a later article says the whole print run of 12,000 sold out.) I knew that Wikipedia had seriously hurt the sales of encyclopedias, but I had no idea it was on the order of 90%.

When I was a kid, I had the Funk & Wagnalls encyclopedia. (They sold it at the supermarket, one letter at a time.) I remember vividly reading the article on algebra, where it had a big table of axioms, like “the commutative axiom,” and “the associative axiom.” I was fascinated to find out that someone had isolated a list of properties of numbers, and that these properties had names.

David Li’s Gaussian copula formula has been called the formula that killed Wall Street, because of how it was used in pricing mortgage-backed securities.
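For reference, the formula usually meant by this phrase, from Li’s 2000 paper “On Default Correlation: A Copula Function Approach”, couples the default times of two credits through a bivariate normal. I’m quoting it from memory, so consult the paper for the exact notation:

```latex
\Pr[T_A < 1,\ T_B < 1]
  \;=\; \Phi_2\bigl(\Phi^{-1}(F_A(1)),\ \Phi^{-1}(F_B(1));\ \gamma\bigr)
```

Here $F_A$ and $F_B$ are the marginal distributions of the default times, $\Phi$ is the standard normal CDF, $\Phi_2$ is the bivariate normal CDF, and $\gamma$ is the single correlation parameter that ended up doing so much work.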

Sociologists Donald MacKenzie and Taylor Spears have a new paper on the history of the Gaussian copula model, based on detailed interviews with quants before and after the crisis. They find that the limitations of the Li model were well understood by financial modelers on Wall Street, and that none of them took the model as literally true. The formula became widely used for institutional reasons outside of the quant community — for example, once the industry had a standard model, it could be used in evaluating profit and loss. This is further evidence for the thesis that mathematics is only dangerous when it falls into the wrong hands.

Cathy O’Neil highlights the parts of the paper that address the questions of institutional politics and blame.