Hitchbot might have made it across Canada, but it appears that the US wasn't quite so kind to this mechanical traveler. The hitchhiking robot's American journey has ended after a mere two weeks thanks to a vandal attack in Philadelphia. While the team behind Hitchbot vows that its experiments with artificial intelligence and human interaction are "not over," it's clear that this nomad isn't about to resume its cross-America trek all that quickly. You'll hear more details on August 5th -- here's hoping that this includes plans for Hitchbot to bum rides once again, whether it's in the States or abroad.

If you don't like the thought of autonomous robots brandishing weapons, you're far from alone. A slew of researchers and tech dignitaries (including Elon Musk, Stephen Hawking and Steve Wozniak) have backed an open letter calling for a ban on any robotic weapon where there's no human input involved. They're concerned that there could be an "AI arms race" which makes it all too easy to not only build robotic armies, but conduct particularly heinous acts like assassinations, authoritarian oppression, terrorism and genocide. Moreover, these killing machines could give artificial intelligence a bad name. You don't want people to dismiss the potentially life-saving benefits of robotic technology just because it's associated with death and destruction, after all.

It's bad enough that robots are writing professionally (albeit badly), but now they're criticizing, too? IBM has unveiled the Watson Tone Analyzer, the latest tool in its "cognitive computing" suite of cooking, health, shopping and other apps. Once you input a piece of text, the system will perform a "tone check" to analyze three aspects of it: emotional tone, social tone and writing style. Each of those is divided into further categories -- for instance, it can tell you if your writing style is confident or tentative, and whether the emotional tone is cheerful, angry or negative. From there, it can give you a breakdown of the overall tone and suggest new words to "fix" it.
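To get a feel for what a "tone check" breakdown looks like, here's a minimal keyword-counting sketch. This is purely illustrative -- the cue words and categories are made up, and Watson's actual analyzer uses far more sophisticated linguistic models:

```python
# Toy tone checker: counts cue words per tone category.
# Hypothetical cue lists -- not IBM's actual categories or lexicons.
TONE_CUES = {
    "confident": {"definitely", "certainly", "clearly", "always"},
    "tentative": {"maybe", "perhaps", "possibly", "might"},
    "negative": {"bad", "awful", "hate", "angry"},
}

def tone_check(text):
    """Return a {category: cue-word count} breakdown for the text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return {tone: sum(w in cues for w in words)
            for tone, cues in TONE_CUES.items()}

breakdown = tone_check("Maybe this draft is bad. Perhaps it might improve.")
```

A real system would score each category on a continuous scale and map low-confidence words to suggested replacements, but the breakdown-by-category output shape is the same idea.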

As clever as learning computers may be, they only have as much potential as their software. What if you don't have the know-how to program one of these smart systems yourself? That's where Microsoft Research thinks it can help: it's developing a machine teaching tool that will let most anyone show computers how to learn. So long as you're knowledgeable about your field, you'd just have to plug in the right parameters. A chef could tell a computer how to create tasty recipes, for example, while a doctor could get software to sift through medical records and find data relevant to a new patient.

Spam is always annoying, but it can occasionally be disastrous. Google has now deployed its artificial neural network to stop more of it from arriving in your Gmail inbox, something it hinted at earlier. It's designed to "detect and block the especially sneaky spam -- the kind that could actually pass for wanted mail," according to the company. The system also uses machine learning to track your usage patterns and figure out if you want certain kinds of mail, like newsletters or promos. Most critically, Google said that Gmail is now better at catching impersonation -- when emails appear to be from a known contact, but were sent by someone who is definitely not your friend.
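The learn-from-examples principle behind this kind of filter can be shown with a toy perceptron over bag-of-words features. This is a bare-bones sketch with invented training data -- Google's production system is a vastly larger neural network -- but the core loop of adjusting weights from labeled mail is the same:

```python
# Toy perceptron spam filter over a tiny hand-picked vocabulary.
def featurize(text, vocab):
    words = set(text.lower().split())
    return [1.0 if w in words else 0.0 for w in vocab]

def train(examples, vocab, epochs=20, lr=0.5):
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, label in examples:  # label: 1 = spam, 0 = wanted mail
            x = featurize(text, vocab)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred        # perceptron update on mistakes only
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

VOCAB = ["free", "winner", "pills", "meeting", "newsletter", "invoice"]
TRAIN = [("free pills winner", 1), ("you are a winner free", 1),
         ("meeting about the invoice", 0), ("weekly newsletter", 0)]
w, b = train(TRAIN, VOCAB)

def is_spam(text):
    x = featurize(text, VOCAB)
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```

The "sneaky spam" problem Google describes is exactly where a linear toy like this fails and deep networks earn their keep: convincing spam scores almost identically to wanted mail on simple word features.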

It's easy to find computer vision technology that detects objects in photos, but it's still tough to sift through photos... and that's a big challenge for the military, where finding the right picture could mean taking out a target or spotting a terrorist threat. Thankfully, the US armed forces may soon have a way to not only spot items in large image libraries, but help human observers find them. DARPA's upcoming, artificial intelligence-backed Visual Media Reasoning system both detects what's in a shot and presents it in a simple interface that bunches photos and videos together based on patterns. If you want to know where a distinctive-looking car has been, for example, you might only need to look in a single group.

After Elon Musk donated $10 million to the Future of Life Institute (FLI) to finance studies aiming to keep AIs safe and beneficial (i.e., prevent them from going down Skynet's path), almost 300 teams submitted their research proposals. Now, the institute is finally done reviewing them all and has decided to grant $7 million from Musk and the Open Philanthropy Project to 37 projects over the next three years. Some of the studies want to teach AI what humans prefer based on body language; one aims to develop a system that can explain its decisions to humans; and another vows to figure out how to make sure robots and other intelligent weapons are always kept under human control.

Chatbots are pretty common these days -- a simple search can surface numerous variants you can talk to on a lonely Friday night. The one Google is developing, however, isn't your run-of-the-mill chatbot: it wasn't programmed to respond to questions a specific way. Instead, it uses neural networks (a collection of machines that mimic the neurons in the human brain) to learn from existing conversations and conjure up its own answers. Mountain View, along with Facebook and Microsoft, already uses neural networks for other purposes, such as to create works of art, to identify objects in images and to recognize spoken words.

Early on in AMC's newest sci-fi show, Humans, a teenager wonders aloud if there's any point in going to college and spending years training to be a neurosurgeon. After all, why invest all that time and work when advanced androids, which are commonplace in the show's world, can be programmed with those skills almost instantly? Call it the death of human expertise. Meanwhile, her mother is worried that her family's new "synth" (the show's term for androids) might replace her; her father hopes it can bring her family back together; and her teenaged brother is having sexually confused feelings about their attractive new robot helper. In Humans, the problems of the near future are practically indistinguishable from the issues we're facing today. And that's a big part of why the show works so well.

Photography reached the mainstream early on; Kodak's Brownie made daily snapshots accessible and Polaroid's pioneering cameras provided instant gratification. Now we can capture and share moments on a whim with smartphones packing high-resolution optics. Over the years, though, we've been treated to some incredible imaging hacks that've allowed our eyes to travel into the exotic -- far beyond what you had for dinner last night. Technological leaps in the field have been spurred by bets, accidents and imagination, providing both scientific insight and artistic experimentation. Our eyes have been opened wider than ever before and we've collected just a few moments in imaging's history to help grasp the bigger picture.

A number of companies have developed photo software for facial recognition, but what happens when your face is partially hidden? What if it's completely covered up? Facebook's artificial intelligence lab developed an algorithm that remedies the issue by picking out folks with other clues. Instead of using facial features, the software can identify people using things like hair style, pose, clothing and body type. Of course, a tool like this could lend a hand in a photo app like Facebook Moments or even Google's revamped Photos software. However, it also raises privacy questions when you can be identified in a snapshot even if your face is concealed, especially if you're trying to remain hidden on purpose. Facebook's algorithm is pretty good too, identifying people with an 83 percent success rate in tests, so we'll be curious to see if it makes its way into the social network's photo galleries in the future.

For Facebook and Google, it's not enough for computers to recognize images... they should create images, too. Both tech firms have just shown off neural networks that automatically generate pictures based on their understanding of what objects look like. Facebook's approach uses two of these networks to produce tiny thumbnail images. The technique is much like what you'd experience if you learned painting from a harsh (if not especially daring) critic. The first algorithm creates pictures based on a random vector, while the second checks them for realistic objects and rejects the fake-looking shots; over time, you're left with the most convincing results. The current output is good enough that 40 percent of pictures fooled human viewers, and there's a chance that they'll become more realistic with further refinements.
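The propose-and-reject loop described above can be caricatured in a few lines. This hypothetical toy skips the hard part -- in the real system both the generator and the critic are trained networks that improve together -- and just shows how random proposals plus a critic's score yield the most convincing candidates:

```python
import random

# Toy generate-and-critique loop. The "generator" is pure noise and the
# "critic" is a fixed hand-written rule -- both are stand-ins for trained
# neural networks in the real adversarial setup.
def generator():
    """Propose a candidate 'image': here, just 4 random pixel values."""
    return [random.random() for _ in range(4)]

def critic(sample):
    """Score realism. Our invented rule: 'realistic' images have smooth
    neighboring pixels (small differences between adjacent values)."""
    return -sum(abs(a - b) for a, b in zip(sample, sample[1:]))

def generate_convincing(n=5, rounds=200):
    """Keep the n candidates the critic finds most realistic."""
    candidates = [generator() for _ in range(rounds)]
    return sorted(candidates, key=critic, reverse=True)[:n]

best = generate_convincing()
```

What makes the real technique powerful is the feedback loop this sketch omits: the generator learns to produce samples the critic can't reject, while the critic learns to catch increasingly subtle fakes.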

Let's be blunt: Amazon's reviews sometimes suck. Many of them are hasty day-one reactions, others are horribly misinformed and a few are out-and-out fakes. The internet shopping giant thinks it knows how to sort the wheat from the chaff, however. It just launched a new machine learning system that understands which reviews are likely to be the most helpful, and floats them to the top. The artificial intelligence typically prefers reviews that are recent, receive a lot of up-votes or come from verified buyers. Amazon hopes that this will show you opinions that are not only more trustworthy, but reflect any fixes. In other words, you'll see reviews for the product you're actually likely to get.
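As a rough illustration of ranking reviews by those signals, here's a hypothetical scoring function. The weights and formula are entirely made up (Amazon hasn't published its model); the point is just that recency, up-votes and verified-purchase status can be folded into one sortable score:

```python
# Hypothetical review-ranking score: newer, well-upvoted, verified
# reviews float to the top. All weights are invented for illustration.
def review_score(days_old, upvotes, verified):
    recency = 1.0 / (1 + days_old / 30)   # decays as the review ages
    votes = upvotes ** 0.5                # diminishing returns on votes
    return recency + 0.2 * votes + (0.5 if verified else 0.0)

reviews = [
    {"id": "day-one rant", "days_old": 400, "upvotes": 3, "verified": False},
    {"id": "recent verified", "days_old": 10, "upvotes": 12, "verified": True},
]
ranked = sorted(reviews, key=lambda r: review_score(
    r["days_old"], r["upvotes"], r["verified"]), reverse=True)
```

A machine-learned version would learn those weights from data on which reviews shoppers actually found helpful, rather than hard-coding them.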

Twitter thrives on its ability to understand both your tweets and the hot topic of the day, and it needs every bit of help it can get -- including from computers. Accordingly, the social network just snapped up Whetlab, a startup that makes it easier to implement machine learning (aka a form of artificial intelligence). The two companies are shy about what the acquisition means besides an improvement to Twitter's "internal machine learning efforts." However, the likely focus is on highlighting the content that's most relevant to you based on your activity and who you follow, as well as hiding abusive tweets before you have to reach for the "block" option. Whetlab's technology could get the ball rolling on these robotic discovery techniques much faster than before, and give you a custom-tailored Twitter experience that requires little effort on your part.

Perhaps it's that all the levels have simple, left-to-right objectives, or maybe it's just that they're so iconic, but for some reason older Mario games have long been a target for those interested in AI and machine learning. The latest effort is called MarI/O (get it?), and it learned an entire level of Super Mario World in 34 tries.
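The spirit of that approach -- evolve behavior, keep whatever gets Mario further -- fits in a toy. MarI/O actually evolves neural network topologies with the NEAT algorithm; this hypothetical version just evolves a flat action sequence for a one-line "level":

```python
import random
random.seed(0)

# Toy evolutionary learner for a one-line level: '#' is a pit where the
# agent must 'J'ump; on '.' it must 'R'un. Fitness = distance survived.
LEVEL = "....#....#..."

def fitness(actions):
    """Distance travelled before the first wrong move."""
    for pos, (tile, act) in enumerate(zip(LEVEL, actions)):
        if (tile == "#") != (act == "J"):
            return pos
    return len(LEVEL)

def evolve(generations=200, mutation_rate=0.2):
    best = [random.choice("RJ") for _ in LEVEL]
    for _ in range(generations):
        child = [a if random.random() > mutation_rate else random.choice("RJ")
                 for a in best]
        if fitness(child) >= fitness(best):   # keep non-worse mutants
            best = child
    return best

winner = evolve()
```

The real thing is far messier (sensor inputs, evolving network structure, a full physics engine), but the core loop -- mutate, score by distance, keep the winner -- is the same.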

Tired of your $10,000 anatomically-correct sex doll just lying there? Well, now RealDoll, purveyors of alarmingly lifelike silicone sex partners -- and apparently not movie buffs -- plans to give them personalities. According to the New York Times, RealDoll founder and CEO Matt McMullen has hired a team away from Hanson Robotics for the new project, dubbed Realbotix, for the express purpose of animating these dolls. The team is reportedly developing an artificial intelligence system capable both of following commands and verbally responding to its user. What's more, RealDoll is also working on an animatronic head (complete with blinking eyes and movable mouth).

Remember Baymax's pain scale in Big Hero 6? In the real world, machines might not even need to ask whether or not you're hurting -- they'll already know. UC San Diego researchers have developed a computer vision algorithm that can gauge your pain levels by looking at your facial expressions. If you're wincing, for example, you're probably in more agony than you are if you're just furrowing your brow. The code isn't as good at detecting your pain as your parents (who've had years of experience), but it's up to the level of an astute nurse.

Be careful about snapping pictures of your obscenely tasty meals -- one day, your phone might judge you for them. Google recently took the wraps off Im2Calories, a research project that uses deep learning algorithms to count the calories in food photos. The software spots the individual items on your plate and creates a final tally based on the calorie info available for those dishes. If it doesn't properly guess what you're eating, you can correct it yourself and improve the system over time. Ideally, Google will also draw from the collective wisdom of foodies to create a truly smart dietary tool -- enough experience and it could give you a solid estimate of how much energy you'll have to burn off at the gym.
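Once the vision model has labeled what's on the plate (the genuinely hard deep-learning part), the tallying step is conceptually simple. Here's a hypothetical sketch -- the food names, calorie values and correction flow are invented, not Google's:

```python
# Toy calorie tally: look up each detected item and sum, flagging
# anything unknown so the user can correct it (invented data).
CALORIES = {"burger": 500, "fries": 365, "salad": 150, "soda": 140}

def estimate_meal(detected_items):
    """Sum per-item calories; unknown items are flagged for review."""
    total = sum(CALORIES.get(item, 0) for item in detected_items)
    unknown = [item for item in detected_items if item not in CALORIES]
    return total, unknown

total, needs_review = estimate_meal(["burger", "fries", "mystery stew"])
```

The user-correction path is what lets the system improve: each fixed label becomes a new training example for the detector.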

Robots are getting pretty good at carrying on after taking a knock, but what if they lose a limb? Scientists from the US and France have given a six-legged 'bot the smarts to keep going even if two of its legs are disabled by, say, a Sarah Connor shotgun blast. The team created and then rated a number of simulations for how its robot could keep moving forward despite losing a leg or two. Once that information was programmed into the robot, it was able to rapidly evaluate the options and use the one that worked best in the real world.
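That simulate-first, adapt-later idea can be sketched simply: precompute a map of gaits with predicted performance, then after damage, trial the most promising ones on the real robot until one works well enough. The gaits, numbers and damage model below are all invented for illustration; the actual research built a map of thousands of simulated behaviors:

```python
# Hypothetical precomputed gait map: predicted speed from simulation.
SIMULATED = {"tripod": 0.9, "crawl": 0.6, "hop": 0.4}

def real_world_speed(gait, broken_legs):
    """Toy stand-in for a physical trial: the fast tripod gait degrades
    badly with missing legs, slower gaits degrade gracefully."""
    penalty = 0.4 * broken_legs if gait == "tripod" else 0.1 * broken_legs
    return max(0.0, SIMULATED[gait] - penalty)

def adapt(broken_legs, good_enough=0.3):
    """Trial gaits in order of predicted speed; keep the first that works."""
    for gait in sorted(SIMULATED, key=SIMULATED.get, reverse=True):
        if real_world_speed(gait, broken_legs) >= good_enough:
            return gait
    return None

chosen = adapt(broken_legs=2)
```

The payoff of precomputing the map is speed: instead of re-learning to walk from scratch, the damaged robot only needs a handful of real-world trials to find a gait that still works.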

As a rule, robots have to learn through explicit instruction, whether it's new programming, demonstration videos or a human physically guiding their hands. UC Berkeley's BRETT (Berkeley Robot for the Elimination of Tedious Tasks) isn't nearly that dependent, however. The machine uses neural network-based deep learning algorithms to master tasks through trial and error, much like humans do. Ask it to assemble a toy and it'll keep trying until it understands what works. In theory, you'd rarely need to give the robot new code -- you'd just make requests and give the automaton enough time to figure things out.
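The learn-from-reward loop can be shown with a toy stand-in: BRETT actually uses deep reinforcement learning over camera images and joint torques, but at its simplest, trial-and-error learning looks like this invented peg-in-hole example, where the agent discovers the right action purely from success and failure feedback:

```python
import random
random.seed(1)

# Toy trial-and-error learner: which peg fits the hole? The answer is
# hidden from the learner, which only sees a success/failure reward.
PEGS = ["square", "round", "star"]
CORRECT = "round"

def attempt(peg):
    return 1.0 if peg == CORRECT else 0.0   # reward signal

def learn(trials=200, epsilon=0.3):
    value = {p: 0.0 for p in PEGS}          # estimated reward per action
    counts = {p: 0 for p in PEGS}
    for _ in range(trials):
        if random.random() < epsilon:        # explore: try something random
            peg = random.choice(PEGS)
        else:                                # exploit: use the best estimate
            peg = max(value, key=value.get)
        counts[peg] += 1
        value[peg] += (attempt(peg) - value[peg]) / counts[peg]
    return max(value, key=value.get)

best = learn()
```

The explore/exploit balance is the crux: a learner that only exploits its current best guess may never stumble onto the action that actually works.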

Robotic news editors promise to save the trouble of picking and writing news stories (and might put people like me out of work), but are they really ready to replace human writers? Yes and no, if you ask NPR. The outlet held a showdown between Automated Insights' WordSmith news generator and a seasoned reporter to see which of the two could not only finish an earnings story the quickest, but produce something you'd want to read. The results? WordSmith was much faster, producing its piece in two minutes versus seven, but the writing was more than a little stiff -- it lacked the colorful expressions that made NPR's version easy to digest. With that said, newsies might not want to relax just yet. It's technically possible for software to adapt to a given style, so flesh-and-bone writers may still want to update their resumés... y'know, just in case.
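Automated Insights hasn't published WordSmith's internals, but template filling with simple conditional phrasing is the standard approach to automated earnings stories, and it explains both the speed and the stiffness. A hypothetical sketch (company and figures invented):

```python
# Toy template-based earnings-story generator: structured numbers in,
# conditional phrasing out. Invented example, not WordSmith's templates.
def earnings_story(company, eps, eps_expected, revenue_m):
    verb = "beat" if eps >= eps_expected else "missed"
    return (f"{company} {verb} analyst expectations, reporting earnings of "
            f"${eps:.2f} per share against a forecast of ${eps_expected:.2f}, "
            f"on revenue of ${revenue_m:.0f} million.")

story = earnings_story("Acme Corp", 1.25, 1.10, 830)
```

Two minutes per story is easy when the facts arrive as a structured feed; the hard part, as NPR's showdown demonstrated, is the colorful phrasing that templates don't capture.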

Talking into a smartwatch still isn't the most socially acceptable thing to do, but a pair of startups is hell-bent on at least making it worthwhile. Fetch and Expect Labs -- a personal shopping service and a purveyor of a voice-driven AI, respectively -- have teamed up to make shopping on your Apple Watch a little less tedious with an improved concierge that works from your wrist.

Wolfram Research can already do some pretty cool things, like answer Twitter questions and spot overhead flights. Now, the maker of the Mathematica programming language and Alpha knowledge engine can perform another trick: figuring out what's in a photo. The Wolfram Language Image Identification Project can make out about 10,000 common things, including animal species, gadgets and household objects. It uses a database of around ten million images to perform the trick, which Stephen Wolfram figures "is comparable to the number of distinct views of objects that humans get in their first couple of years of life."

With Ex Machina, the directorial debut of 28 Days Later and Sunshine writer Alex Garland, we can finally put the Turing test to rest. You've likely heard of it -- developed by legendary computer scientist Alan Turing (recently featured in The Imitation Game), it's a test meant to prove artificial intelligence in machines. But, given just how easy it is to trick, as well as the existence of more rigorous alternatives for proving consciousness, passing a test developed in the '50s isn't much of a feat to AI researchers today. Ex Machina isn't the first film to expose the limits of the Turing test, but it's one of the most successful. And, like the films 2001 and Primer, it's a work of science fiction that might end up giving you a case of philosophical whiplash.

We haven't talked about Numenta since an HP exec left to join the company in 2011, because, well, it's been keeping a pretty low-profile existence. Now, a big-name tech company is reigniting interest in Numenta and its artificial intelligence software. According to MIT's Technology Review, IBM has recently started testing Numenta's algorithms for practical tasks, such as analyzing satellite imagery of crops and spotting early signs of malfunctioning field machinery. Numenta's technology caught IBM's eye because it works more like the human brain than other AI software does. The 100-person IBM team that's testing the algorithms is led by veteran researcher Winfried Wilcke, who had great things to say about the technology during a conference talk back in February.