Most search engines, including Google’s, mainly sort pages to see which come closest to some set of keywords (or their synonyms), but they do relatively little to integrate information across pages. If you want a list of all the books written by members of Congress in 2007, you can do a search, but you’ll end up lost. Unless someone has already compiled that information into a single page, you are likely to be directed to a series of individual pages, many with little relevance. It would be left to you to consolidate information across many, many pages; worse, you would have to start from scratch to get the same data for 2008.

But, in theory at least, Facebook Graph Search consolidates information over time and space (albeit in very limited ways). In effect, each user can now use Facebook as if it were a giant, custom-tailored database, not just a librarian that gives a list of documents that are most relevant to your query. Although the ideas behind Facebook Graph Search aren’t entirely original—Google can do similar things in limited domains, such as shopping, and Wolfram Alpha can do math (and draw graphs) based on data in its archives—it really is likely to change the way many people think about search…

Forget search engines. The real revolution will come when we have research engines, intelligent web helpers that can find out new things, not just what’s already been written. Facebook Graph Search isn’t anywhere near that good, but it’s a nice hint at greater things to come.

A nice hint at greater things to come? Or, like Wolfram Alpha, another case where some of the best programmers in the world, given massive amounts of resources and time, fail to bring us appreciably closer to the dream of the research engine? In other words, a hint that maybe there are not greater things to come, at least in this direction?

I’m not part of Facebook Graph Search’s “slow rollout,” but from the coverage I’ve read it sounds like it’s good at handling canned combinations of Boolean searches. That’s no joke, but does it really represent progress towards the goal that Marcus has in mind?

Wolfram Alpha, of course, has no idea what books members of Congress wrote in 2007, but that’s not quite fair, because Wolfram Alpha isn’t supposed to know about books. What does Wolfram Alpha know about? Well, a query for “Missouri Senate election 2010” gives you the results of that election, so we know it has state-level results for those elections. But it can’t put these together to answer “How many Republican senators were elected in 2010?” “Senators elected in 2010”, which you might think would give you a list, doesn’t; it does, though, tell you that 24 seats went to Republicans and 10 to Democrats, along with the meaningless figure of the total votes cast in the US for GOP and Democratic Senate candidates. “List of senators elected in 2010” gives the same result. WA obviously has access, state by state, to the names of the Republican senators who won elections in 2010; but it apparently can’t put that information together into a single list. Given that, I think gathering their book credits is pretty far off.

Were any of those books written in 2007? Who knows? More to the point — who cares? That’s the genius of the Google approach. You know how they tell you, if you’re confused about something in class and you want to know the answer, you should raise your hand and ask, because probably other people have the same question? That’s the Google principle, except they take it one step further; if you need an answer, not only do other people have the same question, but one such person has already found the answer and put it on the web. Google can’t tell you which states that entered the Union after 1875 have public universities with animals as their mascots, or which Congressional district ranks 10th by percentage of area covered by water, which is the kind of thing Wolfram Alpha is ace at; but that’s because no one has ever asked those questions, and no one ever will.

To the contrary, there will surely be a new secretary of state visiting you next year with the umpteenth road map for “confidence-building measures” between Israelis and Palestinians. He or she may even tell you that “this is the year of decision.” Be careful. We’ve been there before. If you Google “Year of decision in the Middle East,” you’ll get more than 100,000,000 links.

Can this really be true? Nope. In fact if you Google that phrase you get fewer than 12,000 links.

The problem here is that Thomas Friedman apparently doesn’t know that when you search Google for a phrase you need to put quotes around it. Without the quotes, you do indeed get more than 100,000,000 results. That’s because a lot of web pages mention years, decisions, and things located either in the middle or to the east.

It seems plausible that long-time New York Times columnists might not know how to use Google, but it’s appalling if the people who edit and fact-check the columns don’t know how to use Google.


Google+ may not have killed Facebook, but it is developing into a nice place for tearoom style chats about math; less formal than MathOverflow, more characters than FB. This thread Allen Knutson started about circle packing is a case in point. If I’m reading the thread and I say to myself “Matt Kahle should be weighing in on this,” I can just type in his name with a + prepended to it — and he’s summoned! That’s a functionality that really doesn’t exist elsewhere.

Things quickly went blooey. Google’s purported answer — fiercely argued for by lots of Landsburg’s readers — is 1/2. Landsburg said the right answer was less. A huge comment thread and many follow-up posts ensued. Lubos Motl took time out from his busy schedule of yelling at mathematicians about string theory to yell at Landsburg about probability theory. Landsburg offered to bet Motl, or anybody else, $15,000 that a computer simulation would demonstrate the correctness of his answer.

What’s going on here? How could a simple probability question have stirred up such a ruckus?

Here’s Landsburg’s explanation of the question:

What fraction of the population should we expect to be female? That is, in a large number of similar countries, what would be the average proportion of females?

If G is the number of girls, and B the number of boys, Landsburg is asking for the expected value E(G/(G+B)). And let’s get one thing straight: Landsburg is absolutely right about this expected value. For any finite number of families, it is strictly less than 1/2. (See the related Math Overflow thread for a good explanation.) Landsburg has very patiently knocked down the many wrong arguments to the contrary in his comments section. Anybody who bets against him, on his terms, is going to lose.

Nonetheless, I’m about to explain why Landsburg is wrong.

You see, Google’s version of the question doesn’t specify anything about expectation. They might just as well have meant: “What is the proportion of the expected number of females in the expected population?” Which is to say, “What is E(G)/(E(G) + E(B))?” And the answer to that question is 1/2. Just to emphasize the subtlety involved here:

On average, the number of boys and the number of girls are the same. And yet the proportion of girls is, on average, less than 1/2.

Weird, right? E(G)/(E(G) + E(B)) isn’t what Landsburg was asking for — but, if Google’s answer was 1/2, it’s presumably the question they had in mind. To accuse them of getting their own question “wrong” is a bit rich.
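The distinction is easy to check numerically. Here is a quick Monte Carlo sketch (my own illustration, not Landsburg's proposed simulation), under the standard reading of the puzzle: each child is a girl with probability 1/2, and each couple keeps having children until its first boy. A one-family country makes the gap between the two quantities largest:

```python
import random

def one_country(num_families, rng):
    """Simulate one country: each family has children until its first boy."""
    girls = boys = 0
    for _ in range(num_families):
        while rng.random() < 0.5:  # each child is a girl with probability 1/2
            girls += 1
        boys += 1                  # the family stops at its first boy
    return girls, boys

rng = random.Random(0)
trials = 200_000
ratio_sum = 0.0        # accumulates G/(G+B) per country: estimates E(G/(G+B))
total_g = total_b = 0  # pools across countries: estimates E(G)/(E(G)+E(B))
for _ in range(trials):
    g, b = one_country(1, rng)  # a one-family country, for maximal effect
    ratio_sum += g / (g + b)
    total_g += g
    total_b += b

mean_of_ratios = ratio_sum / trials             # near 1 - ln 2, about 0.307
ratio_of_means = total_g / (total_g + total_b)  # near 0.5
```

The mean of the per-country ratios comes out near 1 − ln 2 ≈ 0.307, while pooling girls and boys across all the simulated countries gives a ratio near 1/2: two different answers to two different questions.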

But let me go all in — I actually think Landsburg’s interpretation of the question is not only different from Google’s, but in some ways inferior! Because averaging ratios with widely ranging denominators is kind of a weird thing to do. You can certainly compute the average population density of all the U.S. states — but should you? What meaning or use would the result have?
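To see why, try the computation with some made-up numbers (invented for illustration, not real state data): the mean of the per-state densities and the density of the combined region can be wildly different when the denominators vary.

```python
# Invented (population, area in square miles) pairs for three hypothetical states.
states = [
    (600_000, 600),         # tiny and dense: density 1000
    (40_000_000, 160_000),  # huge population: density 250
    (700_000, 140_000),     # big and nearly empty: density 5
]

# Average of the ratios: every state counts equally, however small its area.
mean_of_densities = sum(pop / area for pop, area in states) / len(states)

# Ratio of the sums: the density of the three states merged into one region.
overall_density = sum(pop for pop, _ in states) / sum(area for _, area in states)
```

The tiny dense state dominates the first number (about 418 people per square mile) while barely affecting the second (about 137), which is why "average density of the states" is rarely the quantity anyone actually wants.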

I had a really pungent example ready to deploy, which illustrates the perils of averaging ratios and explains why Landsburg’s version of the question was a little weird. Then I went to the Joint Meetings before getting around to writing this post. And when I got back, I discovered that Landsburg had posted the same example on his own blog — in support of his point of view! Awesome. Here it is:

There’s a certain country where everybody wants to have a son. Therefore each couple keeps having children until they have a boy; then they stop. In expectation, what is the ratio of boys to girls?

The answer to this question is, of course, infinity; in a finite population there might be no girls, so B/G is infinite with some positive probability, so E(B/G) is infinite as well.

But the correctness of that answer surely tells us this is a terrible question! Averaging is a terribly cruel thing to do to a bunch of ratios. One zero denominator and you’ve wiped out your entire dataset.
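Concretely: in a country with k families, all k first children are boys with probability (1/2)^k, and that single outcome is enough to make E(B/G) infinite. A small simulation sketch (same modeling assumptions as before: each child is a girl with probability 1/2, each family stops at its first boy) shows how often the zero-denominator case actually turns up:

```python
import random

def country_counts(num_families, rng):
    """Each family has children until its first boy; each child is a girl w.p. 1/2."""
    girls = boys = 0
    for _ in range(num_families):
        while rng.random() < 0.5:
            girls += 1
        boys += 1
    return girls, boys

rng = random.Random(1)
trials = 100_000
no_girl_countries = sum(
    1 for _ in range(trials) if country_counts(3, rng)[0] == 0
)
frac_no_girls = no_girl_countries / trials  # should land near (1/2)**3 = 0.125
```

Roughly one three-family country in eight has no girls at all, and every one of those contributes an infinite B/G to the average.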

What if Landsburg had phrased his new question along the lines of Google’s original puzzle?

There’s a certain country where everybody wants to have a son. Therefore each couple keeps having children until they have a boy; then they stop. What is the ratio of boys to girls in this country?

Honest question: does Landsburg truly think that infinity is the only “right answer” to this question? Does he think infinity is a good answer? Would he hire a person who gave that answer? Would you?


I thought I’d never see a definitive answer to this one, but thanks to the brand-new Google NGrams Viewer, the facts are clear:

It is “another think coming,” and it has always been “another think coming.”

A lot of words and phrases (though not these) show a dip starting in 2000 or so. I wonder if the nature of the corpus changes at that point to include more words? You see the same effects with name frequencies — the frequency of any given name has been decreasing over the last twenty years, just because names are getting more and more widely distributed; the most popular names today take up a smaller share of namespace than much lower-ranked names did in the 1950s. A quick and dirty thing to check would be the entropy of the word distribution; is it going up with time?
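By "entropy" here I mean ordinary Shannon entropy, H = -Σ p log2 p, which rises as the frequency mass spreads over more words (or names). A toy computation with invented name counts shows the effect described above:

```python
from math import log2

def entropy(freqs):
    """Shannon entropy, in bits, of the distribution given by a list of counts."""
    total = sum(freqs)
    return -sum((f / total) * log2(f / total) for f in freqs if f > 0)

# Invented counts: a 1950s-style top-heavy name distribution...
concentrated = [500, 300, 100, 50, 50]
# ...versus a modern, more evenly spread one over the same total.
spread = [200, 200, 200, 200, 200]

# Spreading the same mass over the names raises the entropy.
assert entropy(spread) > entropy(concentrated)
```

If the post's guess is right, the same statistic computed year by year on the Ngrams corpus should trend upward.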


I woke up the other morning thinking to myself, you know what would be funny? To go from Toni Morrison’s depiction of Bill Clinton as the first black president to the observation that Barack Obama, having missed his chance to be the first black president, could still be the first Jewish president: child of immigrants, excels in school, good at basketball, bad at bowling, subject to whispers that his religious commitments might bind him to America’s enemies, etc. etc.

But nowadays you’ve got to Google a gag before you deploy it. And you quickly find that Harold Pollack got there first at Huffington Post, back in January — which didn’t stop Howard Fineman from using the gag in March in Newsweek, or Josh Gerstein from bringing it back in the New York Sun this week.

You have to figure that Google has a certain chilling effect on gag-based feature writing. Of course ten people are going to come up with the same joke. And if that produces ten different columns, then maybe the funniest one has a chance to get popular. Is it really better if the first person to post the gag online salts the field for everybody else?

In this case, it’s for the best — Pollack’s piece is better than its successors, and better than what I would have written. But can we shed a single tear for the gag-based features that, thanks to Google, never tasted life?

Reader challenge: come up with a “Barack Obama is Jewish” gag that doesn’t appear in Google. “Obamulke” and “Baruch Obama” have both been done, but I think I can claim priority on “Barak Mitzvah.” For what that’s worth.


When it started up, Google Books had spotty coverage for literary fiction. But I’m happy to report that they now offer The Grasshopper King – well, not the full text, but all of the first chapter, and enough of the rest to get a sense of the book.