Soft snow, fuzzy sweaters, cozy nights by the fireplace… Winter is my favorite season! And with the excitement of the holidays approaching, it’s hard not to get swept up in it all. Whether you look forward to the delicious food, the great company, or all the holiday traditions, this time of year is worth the bitter cold that comes with it. (Well, for the Northern Hemisphere, at least!)

I was a raging sugar-holic as a kid. (Let’s face it, who wasn’t?) So, naturally, Halloween was a glorious, much-anticipated, high-energy free-for-all. My brother and I used to return from trick-or-treating dragging heavy pumpkin buckets and overstuffed pillowcases behind us—the candy wrappers would make that crinkle-squish sound as we dumped out our riches to sort, trade, construct giant candy pyramids… and then devour.

This blog post is the continuation of my last two posts (1, 2) about formulas for curves. So far, we have discussed how to make plane curves that are sketches of animals, faces, fictional characters, and more. In this post, we will discuss the construction of some filled curves (laminae).

My family lives all over, with varying worldviews and equally varied career choices, from video game producers to truck drivers. So reconnecting with family, both nuclear and extended, can certainly be a daunting holiday experience. But don’t fret—Wolfram|Alpha is here to sort you out with the perfect ice breakers.

This is an edited version of a short talk I gave last weekend at The Nantucket Project—a fascinatingly eclectic event held on an island that I happen to have been visiting every summer for the past dozen years.

Lots of things have happened in the world in the past 100 years. But I think in the long view of history one thing will end up standing out among all others: this has been the century when the idea of computation emerged.

If you’re a chemist, a student, a mad scientist bent on destroying the earth for irrational reasons, an enlightened scientist bent on saving the population from the mad one, or as understandably enthused about isotopes and radiation as we are at Wolfram, then you’ll love the new Wolfram Isotopes Reference App and Wolfram Radiation Protection Reference App. Released for $4.99 and $9.99, respectively, for the iPod touch, iPhone, iPad, and PC, the apps are arguably the most comprehensive set of tools for the topics so far.

Wolfram|Alpha’s goal is to cover all things computational, from mathematics and the sciences to movies and sports. But the set of all things computable encompasses areas outside of the real world as well. With the 25th anniversary of Star Trek: The Next Generation coming up, we can now compute the relationship between warp factors and the speed of light.
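If you’d like to play with the relationship yourself, here is a minimal Python sketch. It uses one commonly cited approximation of the TNG-era warp scale—speed equals the warp factor raised to the 10/3 power, in multiples of the speed of light, for warp factors below 9. That scale is an assumption on our part for illustration, not necessarily the exact formula Wolfram|Alpha uses:

```python
# One commonly cited approximation of the TNG-era warp scale (an
# assumption for illustration): for warp factors below 9,
#   speed = (warp factor)^(10/3)  in multiples of the speed of light.
def warp_to_speed(warp: float) -> float:
    """Return speed as a multiple of c for a TNG warp factor below 9."""
    if not 0 < warp < 9:
        raise ValueError("this simple formula only covers 0 < warp < 9")
    return warp ** (10 / 3)

print(warp_to_speed(1))  # 1.0 -- warp 1 is exactly the speed of light
print(warp_to_speed(5))  # about 213.7 times the speed of light
```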

In our first post on American Community Survey estimates in Wolfram|Alpha, we showed you how Wolfram|Alpha could answer questions about the age and sex of the population in practically any town or region in the United States. But that’s only a small fraction of what we can do with this wealth of detailed demographic data. Over the next few weeks, we’ll also share some examples of how Wolfram|Alpha can help you find and analyze information about education, income, and more.

But first, let’s take a look at two of the most frequently asked-about demographic topics in Wolfram|Alpha: race and Hispanic origin. If you’ve never done so before, it’s worth taking a moment to brush up on the difference between these two concepts in Census terminology. Although people often lump them together, race and Hispanic origin are two completely separate attributes in Census data: a person can be of any race and also be of either Hispanic or non-Hispanic origin. Even with the basic data we’ve had in Wolfram|Alpha since its launch, people have regularly complained that our numbers “don’t add up”—and it’s always because they’ve added Hispanic population estimates to figures for the population by race and ended up with a figure larger than the country’s total population.
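A toy example makes the pitfall concrete. The figures below are entirely made up for illustration—the point is that every person is counted once by race and once again by Hispanic origin, so the two tabulations must never be added together:

```python
# Hypothetical (made-up) population figures for one town. Each person
# appears once in the race tabulation AND once in the origin tabulation.
population_by_race = {"white": 700, "black": 200, "asian": 60, "other": 40}
population_by_origin = {"hispanic": 150, "not_hispanic": 850}

total = sum(population_by_race.values())          # 1000 -- the real total
wrong = total + population_by_origin["hispanic"]  # 1150 -- double-counts
print(total, wrong)  # prints "1000 1150"
```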

The precursors of what we’re trying to do with computable data in Wolfram|Alpha in many ways stretch back to the very dawn of human history—and in fact their development has been fascinatingly tied to the whole progress of civilization.

Last year we invited the leaders of today’s great data repositories to our Wolfram Data Summit—and as a conversation piece we assembled a timeline of the historical development of systematic data and computable knowledge.

The story the timeline tells is a fascinating one: of how, in a multitude of steps, our civilization has systematized more and more areas of knowledge—collected the data associated with them, and gradually made them amenable to automation.

An algorithm is, in essence, a procedure given by a finite description that solves some computational problem. The field of computational complexity deals with questions of the efficiency of algorithms, i.e., “For a computational problem X, how many steps does the best algorithm perform in solving X?” You might think that questions in this field would be confined to the realm of computer science, except for the fact that computational complexity theory contains the mathematical problem of the century! Currently, many mathematicians around the world are attempting to solve the famed open problem P vs. NP, a problem so important that it is one of the seven Millennium Prize Problems of the Clay Mathematics Institute and carries a million-dollar prize. In fact, according to our logs, many of you tried to ask Wolfram|Alpha this same question before this new functionality was available! But before we talk about how Wolfram|Alpha can help you become a millionaire, let us begin with a historical overview of the subject.
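The heart of the P vs. NP question—that *checking* a proposed answer can be far easier than *finding* one—is easy to illustrate in code. Here is a small Python sketch using subset sum, a classic NP-complete problem: verifying a claimed solution takes time polynomial in the input size, while the obvious search examines exponentially many subsets.

```python
# Subset sum: given numbers and a target, is there a subset that sums
# to the target? Verifying a certificate is fast; the naive search
# below tries up to 2^n subsets.
from itertools import combinations

def verify(numbers, target, subset):
    """Polynomial-time check of a claimed solution (a 'certificate')."""
    return set(subset) <= set(numbers) and sum(subset) == target

def search(numbers, target):
    """Brute force: examines subsets of every size until one works."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = search(nums, 9)
print(solution, verify(nums, 9, solution))
```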

Wolfram|Alpha is making possible a whole new, very interesting, and very powerful kind of computing. And with the release today of version 2.0 of the Wolfram|Alpha API, it’s going to be considerably easier for a broad range of software developers to take advantage of it.

I’m happy to say that it seems as if Wolfram|Alpha is pretty useful to humans—for example through the wolframalpha.com website. But it also turns out that Wolfram|Alpha is extremely useful to programs. And in fact, even today, the number of requests coming to Wolfram|Alpha each second from programs often exceeds by some margin all the requests coming directly from humans.

The reason for this popularity is really pretty simple: Wolfram|Alpha completely changes the economics of a lot of programming. You see, these days a remarkable number of programs rely on having some kind of knowledge. And traditionally, the only way to get knowledge into a program was for the programmer to painstakingly put it there.

But with Wolfram|Alpha in the picture, it’s a different story. Because built into Wolfram|Alpha is already a huge amount of computable knowledge. And if a program is connected to Wolfram|Alpha, then it can immediately make use of all that knowledge.

Whether one’s building a website or a mobile app or desktop software or an enterprise application, the point is that one can use Wolfram|Alpha as a “knowledge-based computing” platform—so that having all sorts of computable knowledge becomes effectively free from an engineering point of view.

How does a program communicate with Wolfram|Alpha? It uses the Wolfram|Alpha API. (These days, API is pretty much a term on its own, but it comes from “Application Program Interface”.)
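For the curious, here is roughly what a call looks like from Python. This is a minimal sketch: the query goes to the version 2 endpoint with an `appid` and an `input` parameter, and the result comes back as an XML document. `"YOUR_APP_ID"` is a placeholder—a real AppID has to be obtained from the Wolfram|Alpha developer site.

```python
# Minimal sketch of calling the Wolfram|Alpha API (version 2).
# "YOUR_APP_ID" is a placeholder for a real AppID.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "http://api.wolframalpha.com/v2/query"

def build_query_url(question: str, app_id: str = "YOUR_APP_ID") -> str:
    """Construct the query URL for a natural-language question."""
    return BASE_URL + "?" + urlencode({"appid": app_id, "input": question})

def query_alpha(question: str, app_id: str) -> str:
    """Send the query and return the raw XML response as a string."""
    with urlopen(build_query_url(question, app_id)) as response:
        return response.read().decode("utf-8")
```

A call like `query_alpha("distance from earth to moon", my_app_id)` then returns XML whose "pods" correspond to the sections you see on the website.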

As we bid adieu to 2010, we want to say thank you to all of our loyal blog readers and commenters. Today we’re taking a look back at some of 2010’s most popular Wolfram|Alpha Blog posts. 2010 was a year full of product releases, such as Wolfram|Alpha Widgets and new data for everything from movies to taxes.

These selections are only highlights of the topics we’ve covered in 2010. If you’re feeling really nostalgic, or if you’re new to the Wolfram|Alpha Blog, we invite you to read more in the archives.

We recently hosted the inaugural Wolfram Data Summit 2010 in Washington, DC. The summit brought together key people responsible for the world’s great data repositories to exchange ideas, learn from each other’s experiences, and develop innovative data management strategies for the future.

The summit officially opened with a keynote address from Stephen Wolfram, Wolfram Research CEO and creator of Wolfram|Alpha. In his talk, Stephen discussed the complex nature of gathering systematic knowledge and data, explained how Mathematica helps with the challenges of making all data computable, and hinted at some new technologies you can expect from us in the near future. You can read more in the transcript of Stephen’s talk below.

I spent a decade of my life writing A New Kind of Science. Most of that time was devoted to discovering the science in the book. But another part was spent figuring out how to present the science in the best possible way—using words and pictures.

It took a lot of technology to do that presentation. On the software side, the biggest part was using Mathematica to create elaborate algorithmic diagrams—thousands of them. But then came the question of how to actually deliver everything. And back in 2002 when A New Kind of Science was published, the only real possibility was to print a book on paper, using the very best printing technology of the time.

The actual print production process was quite an adventure—going right to the edge of what was possible. But in the end we got many compliments on the object we produced. And from that time to this, that 5.5 lb (2.5 kg) lump of paper has been the definitive representation of my decade-plus of intellectual work.

But today I’m excited to be able to say that there’s something new and in some ways even better: a full version on the iPad.

At the recent London Computational Knowledge Summit, Wolfram|Alpha content manager C. Alan Joyce gave attendees an insider’s look into Wolfram|Alpha. He shared how Wolfram|Alpha’s teams of Mathematica programmers, knowledge-domain experts, and data and linguistics curators have been able to transform raw data from public and private sources into “computable knowledge” that can be accessed and manipulated through natural-language input. Click the image below to view the video of his presentation:

As a scientist and a technology CEO, Stephen Wolfram often thinks about the future—both near-term and long-term. On June 12 he gave an unusual keynote talk at the 2010 H+ Summit @ Harvard, titled “Computation and the Future of the Human Condition”.

Check out the transcript to find Stephen’s latest thoughts on our future…

Today (June 23, 2010) would have been Alan Turing’s 98th birthday—if he had not died in 1954, at the age of 41.

I never met Alan Turing; he died five years before I was born. But somehow I feel I know him well—not least because many of my own intellectual interests have had an almost eerie parallel with his.

And by a strange coincidence, Mathematica’s “birthday” (June 23, 1988) is aligned with Turing’s—so that today is also the celebration of Mathematica’s 22nd birthday.

I think I first heard about Alan Turing when I was about eleven years old, right around the time I saw my first computer. Through a friend of my parents, I had gotten to know a rather eccentric old classics professor, who, knowing my interest in science, mentioned to me this “bright young chap named Turing” whom he had known during the Second World War.

One of the classics professor’s eccentricities was that whenever the word “ultra” came up in a Latin text, he would repeat it over and over again, and make comments about remembering it. At the time, I didn’t think much of it—though I did remember it. Only years later did I realize that “Ultra” was the codename for the British cryptanalysis effort at Bletchley Park during the war. In a very British way, the classics professor wanted to tell me something about it, without breaking any secrets. And presumably it was at Bletchley Park that he had met Alan Turing.

A few years later, I heard scattered mentions of Alan Turing in various British academic circles. I heard that he had done mysterious but important work in breaking German codes during the war. And I heard it claimed that after the war, he had been killed by British Intelligence. At the time, at least some of the British wartime cryptography effort was still secret, including Turing’s role in it. I wondered why. So I asked around, and started hearing that perhaps Turing had invented codes that were still being used.

I’m not sure where I next encountered Alan Turing. Probably it was when I decided to learn all I could about computer science—and saw all sorts of mentions of “Turing machines”. But I have a distinct memory from around 1979 of going to the library, and finding a little book about Alan Turing written by his mother, Sara Turing.

And gradually I built up quite a picture of Alan Turing and his work. And over the 30 years that have followed, I have kept on running into Alan Turing, often in unexpected places.

The creation of large data repositories has been a key historical indicator of social and intellectual development—and indeed perhaps one of the defining characteristics of the whole progress of civilization.

And through our work on Wolfram|Alpha—with its insatiable appetite for systematic data—we have gained a uniquely broad view of the many great data repositories that exist in the world today.

Some of these repositories are maintained by national or international agencies, some by companies and other organizations, and some by individuals. A few of the repositories are quite new, but many date back 40 or more years, and some well over a century. But there is one thing in common across essentially every great data repository: a core of diligent and committed people who have carefully shepherded its development.

Curiously, though, few of these people have ever met their counterparts in other domains of data. And in our work on Wolfram|Alpha we are almost certainly the first group ever to have had the pleasure of getting to know such a broad range of leaders of great data repositories.

And one of the things that we have discovered is that there is much in common in both the methods used and the issues faced by these data repositories. So as part of our contribution to the worldwide data community we have decided to sponsor a data summit to bring together for the first time the leaders of today’s great data repositories.

Years ago I wondered if it would ever be possible to systematically make human knowledge computable. And today, one year after the official launch of Wolfram|Alpha, I think I can say for sure: it is possible.

It takes a stack of technology and ideas that I’ve been assembling for nearly 30 years. And in many ways it’s a profoundly difficult project. But this year has shown that it is possible.

Wolfram|Alpha is of course a very long-term undertaking. But much has been built, the direction is set, and things are moving with accelerating speed.

Over the past year, we’ve roughly doubled the amount that Wolfram|Alpha knows. We’ve doubled the number of domains it handles, and the number of algorithms it can use. And we’ve actually much more than doubled the amount of raw data in it.

Things seem to be scaling better and better. The more we put into Wolfram|Alpha, the easier it becomes to add still more. We’ve honed both our automated and human processes, progressively building on what Wolfram|Alpha already does.

When we launched Wolfram|Alpha a year ago, about 2/3 of all queries generated a response. Now over 90% do.

So, what are some of the things we’ve learned over the past year?

If you’re a regular reader of Boing Boing you might have seen this beautiful homemade Turing machine that tinkerer Mike Davey put together (it’s definitely worth watching the video). For those who don’t know, Turing machines are theoretical idealizations of computers. While not intended to be practical, they do allow mathematicians to construct rigorous proofs about what can be computed and what cannot. And now, you can experiment with them directly on Wolfram|Alpha!

To begin with, let’s ask Wolfram|Alpha to simulate the program that Mike Davey used in his video, a binary counter. Using Stephen Wolfram’s notation, this one is the 2-state, 3-color machine number 1317953. It “counts” in binary, and marks each successive integer when the machine’s head returns to the initial position. We can more easily see how it computes the sequence 1, 2, 3, 4, 5… by only showing the steps when it returns to the center column.

Next we can try a Turing machine at random from the infinite “universe” of possible machines. Let’s say we find this particular Turing machine, and want to see how it behaves on different input tapes. We can try a tape filled with random colors, or a finite tape that wraps around, or a tape with an infinite pattern on it, or even a combination of the above. We can also try a random Turing machine that operates with many colors, such as “random 7-color Turing machine”.
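If you’d like to tinker offline as well, a Turing machine simulator fits in a few lines of Python. The rule table below is a hypothetical example of our own—a simple binary incrementer—not the rules of Wolfram’s machine number 1317953, which are determined by his numbering scheme and aren’t spelled out in this post:

```python
# A minimal Turing machine simulator. The tape is a dict from cell
# index to symbol; rules map (state, symbol) -> (write, move, state).
def run_tm(rules, tape, head=0, state="inc", blank="_", max_steps=1000):
    """Run until the machine halts (or max_steps); return the tape."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells)

# Hypothetical rule table: increment a binary number, starting with the
# head on the rightmost bit. Trailing 1s become 0s; the first 0 (or
# blank) becomes 1.
rules = {
    ("inc", "1"): ("0", "L", "inc"),
    ("inc", "0"): ("1", "L", "halt"),
    ("inc", "_"): ("1", "L", "halt"),
}
print(run_tm(rules, "1011", head=3))  # "1100" : binary 11 + 1 = 12
```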

We use this blog as a vehicle to highlight many of our big ideas and discoveries. Today we’re pleased to share with you Stephen Wolfram’s talk from the 2010 TED Conference in Long Beach, California, where he talked about the tools and methods he’s spent the last 30 years developing in his quest to explore computational knowledge.

TED, an organization devoted to bringing together the technology, entertainment, and design industries’ most innovative thinkers to present “Ideas Worth Sharing”, recently shared Stephen’s ideas with the world as a “TED Talk of the Day”. In the signature 18-minute video, Stephen discusses how his lifelong scientific pursuits led to the development of Mathematica, A New Kind of Science, and the computational knowledge engine Wolfram|Alpha. He continues, asking new questions and proposing a fourth project—discovering our physical universe through our computational universe.

“Will we find the whole of physics? I don’t know for sure. But I think at this point it’s sort of almost embarrassing not to at least try.” —Stephen Wolfram

When Wolfram|Alpha was introduced, Stephen Wolfram blogged about it being the first “killer app” that resulted from his work on A New Kind of Science (NKS). We can now use this application of NKS to further our exploration and study within the NKS field. For example, one class of systems discussed in NKS is that of substitution systems. Now that a host of string substitution systems have been integrated into Wolfram|Alpha, we can explore a variety of these systems—not just the ones that are well known.

A string substitution system is composed of two parts: a string and a set of rules. The string looks like a series of numbers, say “0” and “1”. The rules describe what happens to each number in the string; for example, “1” -> “0” and “0” -> “10”. Under our rules, our example string, “1”, transforms to “0”. In true NKS fashion, repeated iteration of these simple rules can give interesting behavior. Our example, which seems deceptively simple, can model the Fibonacci numbers. We simply record the length of the string each time we apply the rules to find that the series of lengths obtained at the end of each substitution corresponds to the Fibonacci series: {1, 1, 2, 3, 5, …}. We see this in the following result:

Similarly, there is a string substitution system that models the Cantor set. The rules that define this substitution system are 1->101 and 0->000:
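Both systems are easy to reproduce in a few lines of code. Here is a minimal Python sketch using exactly the rules described above—the string lengths trace out the Fibonacci series, and the Cantor rules carve out the familiar middle-thirds pattern:

```python
# A string substitution system: rewrite every character in parallel
# according to the rules, repeatedly.
def substitute(string, rules, steps):
    """Apply the rewrite rules to every character, `steps` times."""
    for _ in range(steps):
        string = "".join(rules[c] for c in string)
    return string

# Fibonacci lengths: rules 1 -> 0 and 0 -> 10, starting from "1".
fib_rules = {"1": "0", "0": "10"}
lengths = [len(substitute("1", fib_rules, n)) for n in range(6)]
print(lengths)  # [1, 1, 2, 3, 5, 8]

# Cantor set: rules 1 -> 101 and 0 -> 000, starting from "1".
cantor_rules = {"1": "101", "0": "000"}
print(substitute("1", cantor_rules, 2))  # "101000101"
```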

When we launched Wolfram|Alpha in May 2009, it already contained trillions of pieces of information—the result of nearly five years of sustained data-gathering, on top of more than two decades of formula and algorithm development in Mathematica. Since then, we’ve successfully released a new build of Wolfram|Alpha’s codebase each week, incorporating not only hundreds of minor behind-the-scenes enhancements and bug fixes, but also a steady stream of major new features and datasets.

We’ve highlighted some of these new additions in this blog, but many more have entered the system with little fanfare. As we near the end of 2009, we wanted to look back at seven months of new Wolfram|Alpha features and functionality.

Stephen Wolfram recently received an award for his contributions to computer science. The following is a slightly edited transcript of the speech he gave on that occasion.

I want to talk about a big topic here today: the quest for computable knowledge. It’s a topic that spans a lot of history, and that I’ve personally spent a long time working on. I want to talk about the history. I want to talk about my own efforts in this direction. And I want to talk about what I think the future holds.

If you’ve been following the launch of Wolfram|Alpha, then you have probably heard that two supercomputer-class systems are a big part of what is behind the scenes. One of them is the R Smarr system, belonging to our good friends at R Systems, which is featured in this video. The other is our custom Dell system, highlighted in the Rack ‘n’ Roll video. (That’s me in the blue shirt and the crazy blond hair.) Between the two of them, we can handle around 1800 queries per second (qps). Many people have asked about how we pulled together all of this infrastructure.

First, some background.

Back in mid-March our development team was intensely focused on building Wolfram|Alpha. As each day went by, the pace of development was accelerating and the further we progressed, the faster Wolfram|Alpha was growing in both content and functionality. On the infrastructure side, we had put in place a prudent plan. We knew the rollout would have an audience of early adopters amongst the professional audiences that our company is very familiar with, and we had planned accordingly for a capacity of 200 queries per second. A few colocations spread throughout the United States should do the job; we were well on track to set them up in plenty of time. And we thought that our “I’m sorry, Dave, I’m afraid I can’t do that” message would be seen occasionally in the first few weeks if there was overflow beyond our capacity.

Wolfram|Alpha went live in test mode at 8:48pm CST on Friday. Our teams worked intensely through the weekend to complete load testing, fix bugs, and begin to address the feedback you have provided—over 22,000 feedback messages. During testing, Wolfram|Alpha processed nearly 23 million queries; by our estimates, approximately 3 out of 4 gave satisfactory results.

By late Sunday night, we were able to test all compute clusters at full capacity.