Remember those Where's Waldo books? (They even made a TV show about them.) Slate ran a story back in 2013 showing you the location of Waldo in all seven books in the series, but the interesting part of that story was that the authors claimed to have devised the best search strategy for finding the character. AI researcher and data visualization geek Randy Olson took Slate's data one step further, putting Waldo's locations through a genetic algorithm to devise an optimal search path. Of course, you and I know that the point of those books was never really to find Waldo. It was to admire the art of illustrator Martin Handford. Handford, like cross-section artist Stephen Biesty, worked in a uniquely dense visual style that I most associate with '90s children's books. They were the Quentin Blake for millennials. (h/t Laughingsquid)

Numberphile interviews Stanford professor Persi Diaconis about the factors that make a coin toss seem completely random, and how he and his colleagues determined which factors most affect the outcome. And some of you may already know this, but theoretical analysis of the physics of coin flipping shows that the odds aren't actually exactly 50-50. There's an ever-so-slight bias toward the side that was facing up during the toss.

Ready to give your brain a little workout? See if you can follow along in this Numberphile video as Tony DeRose of Pixar Research explains some of the mathematics behind the rendering and animation of characters in modern CG films. It went over my head at around the five-minute mark, but the gist is that certain math principles and algorithms let rendering programs repeatedly subdivide a model's geometry to produce smooth curves and surfaces. Computer scientists know the technique as Catmull-Clark subdivision.
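The core trick is easier to see in one dimension. The surface algorithm's 1D cousin splits every edge of a polygon at its midpoint, then averages neighboring points; repeat the process and the jagged outline converges toward a smooth curve. A minimal sketch of that split-and-average step (illustrative only, not Pixar's code):

```python
def subdivide(points):
    """One round of split-and-average subdivision on a closed polygon."""
    n = len(points)
    # Split: insert the midpoint of every edge.
    split = []
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        split.append((x0, y0))
        split.append(((x0 + x1) / 2, (y0 + y1) / 2))
    # Average: replace each point with the midpoint of it and its successor.
    m = len(split)
    return [((split[i][0] + split[(i + 1) % m][0]) / 2,
             (split[i][1] + split[(i + 1) % m][1]) / 2)
            for i in range(m)]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
rounded = square
for _ in range(3):
    rounded = subdivide(rounded)  # each round doubles the point count
```

Catmull-Clark applies the same split-and-average idea to 3D meshes of arbitrary topology, which is what lets a blocky control cage render as a smooth character.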

Inventern champ Sean Charlesworth joins us in the Tested office this week to share one of his prized possessions: a Curta mechanical calculator. Designed in the 1940s, before electronic calculators existed, this hand-cranked device was considered the most precise pocket calculator available, and was used by rally car drivers and aviators.

In the new Planet of the Apes movie, Keri Russell's character briefly talks about how she had a young daughter who died of the simian flu virus. As the character was telling the story, my friend--who had not seen the film--leaned over to me and said "I bet her daughter's name was Sarah." And indeed, just a second later, that's what was uttered on screen. This prediction led to a post-screening discussion about why Sarah was such a suitable (and predictable) name to evoke the image of a child never seen in the film. Why is Sarah evocative of a young child, and not a name like Bessie or Helen? Earlier this year, Nate Silver's FiveThirtyEight did a statistical analysis of the popularity of names, based on public data from the Social Security Administration. We've seen websites and apps that show how popular names are over time, but Silver's team went a step further to calculate the median age for every common and uncommon name, both male and female. Of all living Sarahs, for example, the median age is 26. Meet a Helen in person, though, and it's more likely that she's older: the median age of Helens still alive is 73. And the names with the youngest median age? For girls, it's Ava, and for boys, it's Liam, with Jayden a close second. Thanks, Will and Jada.
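The median-age calculation itself is just a weighted median over the estimated living population for each birth year. A sketch with invented numbers (the `helen` and `sarah` tables below are illustrative, not real SSA data):

```python
def median_birth_year(counts):
    """counts maps birth_year -> estimated people with the name still living.
    Returns the birth year that splits the living population in half."""
    total = sum(counts.values())
    running = 0
    for year in sorted(counts):
        running += counts[year]
        if running >= total / 2:
            return year

# Invented illustrative counts -- not real Social Security data.
helen = {1940: 50, 1950: 30, 1960: 10, 1990: 10}
sarah = {1960: 10, 1980: 40, 1990: 50}
```

Subtract the median birth year from the current year and you get the median age figures the FiveThirtyEight piece reports for each name.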

MIT Technology Review has details about a recent study carried out at Zhejiang University in China on rock-paper-scissors strategy. Conventional thinking was that the best strategy for not losing in the long run was to choose your play at random--as defined by the Nash equilibrium mixed-strategy solution of game theory. But the research, done with 360 students at the university, indicated that play choices were conditional and patterns emerged. Specifically, winners of one round tend to stick with the same action, while losers switch to the next action in a sequence (in the order of rock, paper, scissors). The researchers are preparing future studies to determine whether this type of conditional response is a basic decision-making mechanism or a byproduct of more fundamental neural mechanisms. Gambit play was not taken into consideration.
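One takeaway: a win-stay/lose-shift player is badly exploitable by anyone who knows the rule. A toy simulation (my construction, not the study's; it assumes the conditional player picks at random after a tie, a case the summary doesn't pin down):

```python
import random

def beats(a, b):
    # 0=rock, 1=paper, 2=scissors; a beats b in the cycle rock<paper<scissors<rock
    return (a - b) % 3 == 1

def play(rounds=10000, seed=42):
    """Win rate of an exploiter against a win-stay/lose-shift player."""
    rng = random.Random(seed)
    cond_last, outcome = None, None  # conditional player's previous move/result
    wins = 0
    for _ in range(rounds):
        # Conditional player: repeat after a win, shift rock->paper->scissors
        # after a loss, play randomly on the first round or after a tie.
        if outcome == "win":
            cond = cond_last
        elif outcome == "loss":
            cond = (cond_last + 1) % 3
        else:
            cond = rng.randrange(3)
        # Exploiter: counter the predicted move (the rule is deterministic
        # after a win or loss); guess randomly after a tie.
        if outcome in ("win", "loss"):
            exploiter = (cond + 1) % 3
        else:
            exploiter = rng.randrange(3)
        if beats(exploiter, cond):
            wins += 1
            outcome = "loss"   # from the conditional player's perspective
        elif beats(cond, exploiter):
            outcome = "win"
        else:
            outcome = None     # tie resets the pattern
        cond_last = cond
    return wins / rounds
```

Once the pattern locks in, the exploiter counters the predicted move every round; the conditional player only gets breathing room during tie-driven resets.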

"The Infinite Hotel, a thought experiment created by German mathematician David Hilbert, is a hotel with an infinite number of rooms. Easy to comprehend, right? Wrong. What if it's completely booked but one person wants to check in? What about 40? Or an infinitely full bus of people?" A fun thought experiment to visualize the concept of infinity. Your brain starts to hurt at the two-and-a-half-minute mark. The full TED-Ed lesson is here.
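Each of the puzzles resolves to a reassignment function from old room numbers to new ones (the function names here are mine, not Hilbert's):

```python
# Room reassignments that free up space in the fully booked Infinite Hotel.
def one_new_guest(room):
    return room + 1       # everyone shifts over; room 1 opens up

def forty_new_guests(room):
    return room + 40      # everyone shifts by 40; rooms 1-40 open up

def infinite_bus(room):
    return 2 * room       # current guests move to the even rooms...

def bus_seat_to_room(seat):
    return 2 * seat - 1   # ...and the bus passengers fill the odd rooms
```

Every map is one-to-one, so no two guests collide, yet infinitely many rooms open up; that's the brain-hurting part.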

A poignant interview with the late Benoît Mandelbrot, in which the legendary mathematician talks about the way a simple formula can create a universe of complexity. "IBM celebrates the life of Benoit B. Mandelbrot, IBM Fellow Emeritus and Fractal Pioneer. In this final interview shot by filmmaker Errol Morris, Mandelbrot shares his love for mathematics and how it led him to his wondrous discovery of fractals. His work lives on today in many innovations in science, design, telecommunications, medicine, renewable energy, film special effects, video game graphics, and more." Learn more about Mandelbrot here, and watch the 2010 TED talk he gave on fractals here.
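The "simple formula" really is simple: iterate z → z² + c starting from zero, and ask whether the sequence stays bounded. The set of complex values c for which it does is the Mandelbrot set. A minimal escape-time check:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z=0; c is (approximately) in the
    Mandelbrot set if |z| never exceeds 2 within max_iter steps."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # once |z| passes 2, the sequence escapes to infinity
            return False
    return True
```

Color each pixel of the complex plane by how quickly its c escapes and you get the famous infinitely detailed fractal boundary.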

In casinos, the odds always favor the house; it’s just math. But every once in a while, ambitious gamblers will try to skew those odds to break the system. We’ve hunted down ten stories of hustlers who managed to bring down the house in ways that would make Hollywood proud. Some used science, some used skill, and some straight-up cheated, but they all walked away with tons of cash in their pockets.

Ask a film critic or movie buff about the golden age of film, and you'll probably get an answer that includes the 1960s and 1970s. The 1940s and 1950s gave us incredible performances and scripts from the Hollywood studio system, but the later decades gave rise to an incredible array of filmmakers like Jean-Luc Godard, Stanley Kubrick, and Francis Ford Coppola, who pushed forward the cinematography and psychology of movies.

Turns out there are actually numbers to back up the wisdom of movie fanatics. Wired picked up on a study that analyzed IMDB tags to determine how creative movies have been decade-by-decade. Granted, novelty--especially novelty as defined by tags on IMDB--isn't going to be the most universally accurate measurement of film quality. But the numbers show that more new concepts and creative films came from the industry in the 1960s.

Image credit: Warner Bros. Pictures

The study's authors used "crowdsourced keywords from the Internet Movie Database as a window into the contents of films, and prescribe[d] novelty scores for each film based on occurrence probabilities of individual keywords and keyword-pairs." The study focused on keywords used as tags on IMDB, which describe specific story elements and locations, genres, and other movie trends.

So how does novelty, and by association creativity, come into play? Analyzing those keyword tags. "We devise[d] a method to assign a novelty score to each film on the basis of the keywords associated with it and the keywords appearing in all films that were released prior to it," the study explains. They collected data from the years 1929 through 1998, then ran everything through some equations to deduce novelty. Sure enough, the novelty scores peak with the films of the 1960s.
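The paper's actual scoring uses occurrence probabilities of keywords and keyword-pairs; as a rough illustration of the idea, here's a toy novelty score defined as the fraction of a film's keywords and keyword-pairs never seen in any earlier release (my simplification, not the authors' formula):

```python
from itertools import combinations

def novelty_scores(films):
    """films: list of (title, set_of_keywords) in release order.
    Toy score: fraction of a film's keywords and keyword-pairs
    that never appeared in any earlier film."""
    seen = set()
    scores = {}
    for title, keywords in films:
        units = set(keywords) | {frozenset(p)
                                 for p in combinations(sorted(keywords), 2)}
        new = [u for u in units if u not in seen]
        scores[title] = len(new) / len(units) if units else 0.0
        seen |= units
    return scores
```

A film that only recombines well-worn tags scores near zero, while one introducing concepts or pairings no prior film used scores near one; aggregate by decade and you get a novelty timeline.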

Ultimate Tic-tac-toe is a game math nerd Ben Orlin recently discovered at a mathematician's picnic. The game is played with one giant tic-tac-toe grid, with each of the nine squares filled with another, smaller tic-tac-toe board. The rules are a little more sophisticated than regular tic-tac-toe, though. The object is to win the big board by winning the right combination of small boards, but each turn takes place in a different small board, which is determined by where your opponent last played. So players have to play a meta-game of strategically placing X's and O's in the small boards to direct where their opponent will get to play next. Orlin says that when he plays it, strategies surface where players make intentionally bad moves in the small boards to avoid sending the other player into good positions in the larger grid. Yo dawg, it's tic-tac-toe Inception. (h/t Boingboing, Kottke, Andy Baio)
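The forcing rule is the whole game. Under the commonly played rule set (variants differ on what happens when you're sent to a finished board), the boards legal for the next move look like this:

```python
def legal_boards(last_move, finished):
    """Which of the nine small boards (indexed 0-8) the next player may use.
    last_move: (board, cell) of the opponent's move, or None at game start.
    finished: set of small-board indices already won or drawn."""
    if last_move is None:
        return set(range(9)) - finished
    _, cell = last_move
    # You are sent to the small board matching your opponent's last cell...
    if cell in finished:
        # ...unless that board is done, in which case you may play anywhere open.
        return set(range(9)) - finished
    return {cell}
```

Because each cell you pick dictates your opponent's next board, a "bad" cell locally can be a great move globally; that's the meta-game Orlin describes.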

Stanford student (and World Cube Association member) Ravi Fernando uploaded this video of his famous Rubik's Cube juggling feat from a first-person perspective. The cubes are solved at the 1:38, 4:15, and 5:55 marks. There's even a near drop!

If math is the only universal language, as the saying goes, then it's a language more or less like any other. And approaching it like a language makes us think about elements of mathematics that we normally take for granted. For example: When and how did the symbols for addition and subtraction originate? Astrophysicist Mario Livio was curious and decided to find out the answers for himself, and the resulting blog post is an interesting mathematics history lesson.

Though mathematics has been around for more than two thousand years--famous mathematician Pythagoras lived in the 6th century BC--Livio traced the + sign back to the 1300s.

"There is little doubt that our + sign has its roots in one of the forms of the word 'et,' meaning 'and' in Latin," writes Livio. "The first person who may have used the + sign as an abbreviation for et was the astronomer Nicole d’Oresme (author of The Book of the Sky and the World) at the middle of the fourteenth century. A manuscript from 1417 also has the + symbol (although the downward stroke is not quite vertical) as a descendent of one of the forms of et."

The - sign, meanwhile, hasn't been around as long--Livio writes that it first appeared in 1481 in a German algebra manuscript. Neither the + nor the - symbol appeared in English writing on math until 1551. And, like any other language, the writing of mathematics has evolved over the years. Livio notes a few examples of how the symbols have changed into the forms we now know:

Mirrors are old. Thousands of years old. They don't exactly seem like the trickiest bit of technology to invent--once you've spotted your reflection in a shiny piece of stone or metal, you're going to figure it out pretty quickly. And once glassmaking came around, well, the leap to glass mirrors seems only natural. The silvered-glass mirrors that we know and love today are relatively young, comparatively--they were invented in the early 1800s. Since then, inventors have discovered convex glass can provide a wider field of view of the world, and glass surfaces with both concave and convex segments (aka carnival mirrors) can create crazy distorted reflections of reality.

The work of mathematics professor R. Andrew Hicks may represent the most significant evolution in mirror technology since...well, glass. Hicks has been using math for years to design mirrors that reflect light in just the right ways--they're essentially finely-tuned versions of the carnival mirrors that make everything look all wacky--and has come up with some impressive reflective surfaces.

For example, he's developed a curved mirror that reflects the world without reversing its image. It's one smooth piece of glass, not a pair of mirrors connected at a 90-degree angle like a traditional non-reversing mirror. As you'd expect from a mathematics professor, algorithms made it all possible--Hicks worked out equations to represent the kind of reflection he wanted to create, then used those to develop the coordinates for thousands of tilted points on the mirror's surface.
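The constraint at each surface point is just the law of reflection: the facet's normal must bisect the reversed incoming ray and the desired outgoing ray. A sketch of that per-point geometry (the core constraint only, not Hicks's actual design procedure):

```python
import math

def reflect(d, n):
    """Reflect unit direction d off a surface with unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def required_normal(d, r):
    """Unit normal a mirror facet needs so that an incoming ray with unit
    direction d leaves with unit direction r. By the law of reflection,
    the normal points along r - d (the bisector of -d and r)."""
    diff = [ri - di for di, ri in zip(d, r)]
    length = math.sqrt(sum(x * x for x in diff))
    return tuple(x / length for x in diff)
```

Solve for a normal like this at thousands of points, tilt each point of the surface accordingly, and the mirror steers light wherever the designer's equations say it should go.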

Hicks has invented and patented a driver-side mirror for cars that eliminates blind spots.

When those coordinates are fed to a machine that grinds them into the surface with a diamond tip, they can create all sorts of mirror variations--another example, which Hicks calls the vampire mirror, doesn't even create a true mirror image. If you look into the mirror and wave your left hand around, it'll look like you're actually moving your right.

You stand poised for action in the lobby, eyes darting back and forth between the doors in front of you. Any moment now one of them will open. But which one? You wait to hear that wonderful Ding! that means arrival, doors opening, salvation from the interminable boredom of a 30-second wait. When you hear it you'll spring into action, rushing into the elevator and jamming on the button for your floor. The doors close. And then you wait again.

It's a ritual we all know by heart, but it's amazing how much math and planning go into the 18 billion elevator rides taken annually in the United States alone. When everything goes right, you won't even have to wait 30 seconds for the doors to open. According to Theresa Christy, a mathematician and researcher at Otis Elevator Company, 20 seconds is the magic number for an elevator wait. And that number hasn't changed in about 50 years.

You'd think we'd have faster elevators five decades after 20 seconds became the target waiting time, but speed isn't really the issue. It's all about the number of stops elevators have to make and juggling the wait times for people on every floor of a building. A recent profile of Christy in the Wall Street Journal reveals just how much math goes into every imaginable elevator use scenario.

When Christy programs elevators, she has to take into account the size and weight of elevators and how many people can fit in them. Building owners want to install as few elevators as possible, since they take up a great deal of space. Passengers in various countries prefer different amounts of personal space. So, for example, more Japanese riders will crowd into elevators than Americans, but they want to know in advance which elevator they'll be getting into, so they can line up in front of the right set of doors.

The elevator code has to strike a balance between convenience for riders and convenience for the people still waiting. If an elevator has already made three stops, should it make a fourth to pick up someone who's been waiting for 30 seconds, and inconvenience its current passengers? Christy runs simulations to analyze the decisions elevators make according to their programming, then tweaks that programming to better her score.
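A dispatch score can be as simple as the average wait a policy produces over a batch of hall calls. A deliberately toy version using nearest-car assignment (my illustration; real controllers also weigh direction, load, and stops already committed):

```python
def average_wait(calls, cars, seconds_per_floor=2.0):
    """Toy dispatch score: assign each hall call (a floor number, in order)
    to the nearest car and return the mean wait until a car arrives."""
    waits = []
    positions = list(cars)  # current floor of each car
    for floor in calls:
        car = min(range(len(positions)),
                  key=lambda i: abs(positions[i] - floor))
        waits.append(abs(positions[car] - floor) * seconds_per_floor)
        positions[car] = floor  # the car is now parked at the call floor
    return sum(waits) / len(waits)
```

Tweak the assignment rule, rerun thousands of simulated mornings, and compare the scores; that loop is the gist of the job the article describes.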

She compares it to a video game; we hope she's never played Mass Effect. American Public Media's Marketplace calls her work an art. We think either label works--it's an underappreciated, endlessly challenging job that will never have a perfect solution. Christy's short Marketplace interview is an interesting look into a job that most of us would never think about, despite how much it affects our daily lives.

What do we know about coins? They're legal tender, they usually depict dead men on one side, and they're the go-to tiebreakers for problems big and small. Thing is, coins aren't perfectly suited to that role. A study on coin tosses reveals that the "randomness" of a toss is actually weighted ever so slightly towards the side of the coin that's facing upwards when a flip begins. "For natural flips, the chance of coming up as started is about .51," the study concludes.

The paper, written by statistics and math professors from Stanford and UC Santa Cruz, also points out that a perfect coin toss can reproduce the same result 100 percent of the time. Of course, the perfect flip was performed by a machine, not a person. And the results that lean ever-so-slightly in favor of flipped-side-up don't take into account flipping a coin after catching it or letting it bounce around on a floor or table. In practical usage, the .51 bias is so slight that you'll never notice.
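How unnoticeable is a .51 bias? A standard power calculation says you'd need on the order of 20,000 careful flips before the bias reliably shows up in a test. A sketch using the usual normal approximation (this framing is mine, not the study's; the z-values are hardcoded for a 0.05 significance level and 80% power):

```python
import math

def flips_to_detect(p=0.51):
    """Approximate number of flips needed for a two-sided test to
    distinguish a coin with heads-probability p from a fair coin,
    at significance 0.05 with 80% power (z-values 1.96 and 0.84)."""
    return math.ceil((1.96 + 0.84) ** 2 * p * (1 - p) / (p - 0.5) ** 2)
```

For p = 0.51 that works out to roughly 20,000 flips, which is why no playground coin toss will ever feel unfair.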

If, like me, you'd always heard that coins tend to land tails-up because the heads side is heavier, there's some science available for you, too. Spinning, rather than flipping, an old penny will leave it tails-up something like 80 percent of the time. Lincoln's head is heavier than the Lincoln Memorial on the reverse, so the heavy side tends to fall face-down, leaving tails facing up more often than not. Unless the penny has accrued enough dirt or oil to throw the weight off.

And let's be honest--how often do you come across an old, clean penny?

With the help of a high-speed camera, Will and Norm test the theory that cracking a leather bullwhip breaks the speed of sound and creates a mini sonic boom. Take note--slow motion silliness, maths, and bath towels within.

When it comes to high-speed data transfers, most of the technological breakthroughs we catch wind of use some specialized, experimental hardware. We'd all love to be able to send friends HD video files at 26 terabits per second, but who has the equipment lying around to encode data into 300 beams of light? Researchers at MIT (who else?) have worked out an alternative that apparently has real potential--companies have already licensed their technology, which increased Internet bandwidth from one to 16 megabits per second in a recent test.

How's it work? With math, naturally. Specifically, algebra, which the researchers hope to use to eliminate or drastically reduce packet loss. When packets are dropped due to interference or clogged airwaves, devices have to re-request the missing information, and that information has to be sent again, contributing to the congestion problem.

The technology transforms the way packets of data are sent. Instead of sending packets, it sends algebraic equations that describe series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible.
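The simplest possible instance of the idea uses XOR, which is addition over GF(2): send one extra packet that's the XOR of the others, and the receiver can solve for any single lost packet itself. (Coded TCP uses many random linear combinations over larger fields; this is just the one-equation toy case.)

```python
def xor_code(packets):
    """Toy coded transmission: alongside the packets, send one extra
    'equation' -- the XOR of them all (a linear combination over GF(2))."""
    parity = 0
    for p in packets:
        parity ^= p
    return packets + [parity]

def recover(received):
    """With exactly one packet lost in transit (marked None), XOR everything
    that did arrive to solve for the missing one -- no retransmission needed."""
    missing = 0
    for p in received:
        if p is not None:
            missing ^= p
    return missing
```

One redundant packet per batch buys immunity to one loss; more independent combinations buy immunity to more losses, which is where the real system's algebra comes in.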

Coded TCP, as it's called, has already increased the bandwidth of sluggish 1 Mbps and 0.5 Mbps connections to 16 Mbps and 13.5 Mbps. Those were lab tests, so it's hard to judge how well Coded TCP would work in real-world situations. But it's currently operating with a proxy server stashed in Amazon's cloud, which makes the technology especially exciting. You may need to download an app for your phone to turn algebraic equations into bits of usable data, but you won't need new hardware to make use of Coded TCP.

New hardware could help, as well: the researchers claim Coded TCP can seamlessly merge data from Wi-Fi and cellular connections without switching between the two.

Want to read a whole lot more about a technology that could be driving faster network throughput in a few years? Dig into this Coded TCP white paper.

Webcomic xkcd regularly revolves around jokes that require a degree in math or physics to appreciate--which makes sense, because author Randall Munroe has an undergraduate degree in physics. He recently started up a weekly blog called What If? that answers reader questions by putting his past career as a physicist to use. And the first one is awesome: "What would happen if you tried to hit a baseball pitched at 90% the speed of light?"

As Munroe explains, things wouldn't go well for the batter. Or the pitcher. Or anyone within a square mile, really.

At 90 percent the speed of light, or about 604,000,000 miles per hour, the ball would be traveling so much faster than the air molecules around it that they wouldn't have time to get out of its way. The resulting collisions would release gamma rays and tear apart the air molecules, creating an expanding bubble of plasma that arrives at the plate before the ball itself.

Image credit: XKCD.com via Creative Commons.

Well, the ball doesn't even get there at all, really: in the 70 nanoseconds it takes to arrive, it's turned into a cloud of debris. That's about the time the batter is swept backwards into the backstop and disintegrates, even though he hasn't even seen the pitcher release the ball yet. Within a microsecond everything else disintegrates, too.
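Munroe's headline numbers are easy to sanity-check. The pitching distance and unit conversion below are standard figures, not taken from his post:

```python
C = 299_792_458.0          # speed of light, m/s
MPH_PER_MS = 2.236936      # 1 m/s expressed in miles per hour
MOUND_TO_PLATE = 18.44     # pitching distance (60 ft 6 in) in meters

v = 0.9 * C                       # the pitch, in m/s
speed_mph = v * MPH_PER_MS        # roughly 6.04e8 mph
travel_s = MOUND_TO_PLATE / v     # roughly 68 nanoseconds to the plate
```

That's about 604 million miles per hour and just under 70 nanoseconds of flight time, matching the rounded figures in the post.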

The whole thing's more fun with Munroe's illustrations, so check out the original post. Still, the lesson's pretty clear: steer clear of lightspeed baseballs.