
The National Geographic Explorers Journal blog brought to our attention the OpenROV Project, which was recently funded by a successful Kickstarter campaign that raised over $100,000 USD. The project was founded by friends Eric and David, who wanted to build an ROV from low-cost, off-the-shelf parts. The OpenROV can be used for educational purposes or for actual underwater exploration. The current version of the OpenROV is limited to a depth of 100 meters, but the design is open source and you're invited to modify and improve it. In addition to the open source hardware, this little underwater robot relies on open source software running on a GNU/Linux-based embedded processor. A USB HD video camera and LED light arrays are on board too, so you can see where you're going. At present the OpenROV is strictly a DIY project that you build from the designs and source code available on the OpenROV wiki, but kits for about $750 and even fully assembled ROVs should be available soon. Read on to see the original Kickstarter video that describes the ROV and a more recent video of the ROV in action.

Circuit Cellar magazine recently posted in full their two-part article by Tom Kibalo on the construction of his subsumption-based mobile robot, called TOMBOT. Part 1 of the article covers construction of the hardware and Part 2 covers the subsumption software and basic behaviors for obstacle avoidance, collision recovery, and light tracking. The robot is a differential-drive design using continuous-rotation RC servos as motors; it has no wheel encoders. The robot sports an XBee radio, a PIC32 CPU, and a small LCD. It's a good basic introduction to behavior-based robots and well worth a read.
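The subsumption pattern itself is easy to sketch in code: behaviors are checked in priority order, and the first one that applies takes control, suppressing everything below it. The toy Python below illustrates that pattern with three behaviors echoing the article's themes; the behavior names, sensor fields, and motor commands are all invented for illustration and are not taken from Kibalo's PIC32 code.

```python
# Minimal sketch of a subsumption-style arbiter in the spirit of
# behavior-based robots like TOMBOT. Behavior names and sensor fields
# are hypothetical, not taken from Kibalo's article.

from dataclasses import dataclass

@dataclass
class Sensors:
    bumper_hit: bool       # collision switch triggered
    obstacle_near: bool    # IR/sonar sees something close
    light_direction: int   # -1 = light to the left, 0 = ahead, 1 = right

def collide(s):
    """Highest priority: back away after a collision."""
    if s.bumper_hit:
        return ("reverse", "reverse")   # (left wheel, right wheel)
    return None

def avoid(s):
    """Middle priority: spin away from nearby obstacles."""
    if s.obstacle_near:
        return ("forward", "reverse")
    return None

def track_light(s):
    """Lowest priority: steer toward the brightest light."""
    if s.light_direction < 0:
        return ("stop", "forward")
    if s.light_direction > 0:
        return ("forward", "stop")
    return ("forward", "forward")

# Behaviors in priority order; the first one that fires wins and
# "subsumes" everything below it.
BEHAVIORS = [collide, avoid, track_light]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return ("stop", "stop")

print(arbitrate(Sensors(bumper_hit=False, obstacle_near=True, light_direction=0)))
# -> ('forward', 'reverse'): the avoid behavior overrides light tracking
```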

Just as researchers today struggle to find working definitions for words like consciousness and intelligence, they struggled to find a standardized meaning for the word information in the early 1900s. Ralph Hartley, a researcher at Bell Laboratories, first introduced a theory of information based on the idea that information consists of strings of symbols, a reasonable idea in the age of the telegraph, telephone, and radio. Shannon and Weaver moved things along in the 1940s, resulting in Shannon-Weaver Information Theory (SWIT). While Hartley's theory was concerned primarily with sets of symbols, SWIT was concerned with the probability or uncertainty of events (the likelihood that a particular structure or sequence of symbols is meaningful). Both theories fall far short of describing what a modern cognitive scientist or AI researcher means when they talk about information. A newer theory, Representational Information Theory (RIT), was developed in the field of psychological research. The idea behind RIT is that communication between animals and their environment is mediated by concepts, and it looks at information in terms of complexity rather than uncertainty as SWIT does. Its main drawback is that it supports only binary dimensions. In a new paper published in the journal Information, researchers describe a generalized version of RIT, called GRIT, that may be useful in the fields of AI and robotics:

"concepts live in the mental space of organisms ranging from aplasia to insects and from dolphins to humans. Some may argue that they also live in the mental spaces of intelligent robots and expert systems. Regardless, the point is that only by using concepts as mediators can information as a measurable quantity reflect human intuitions as to what is informative."

This week's edition of Best Robot Photos of the Week is a special holiday collection of Christmas robots submitted by our readers. We also received one holiday photo made by Hanukkah nanobots. No one posted photos of Kwanzaa bots or Festivus droids this year. Whatever your preferred winter holiday, just remember that Axial Tilt is the reason for the season and enjoy these photos of holiday robots. Want to see your robot photo here? Post it to flickr and add it to the robots.net flickr group. If you're not a flickr member yet, it's free and easy to sign up. Read on to see the best robot photos of the week!

Yet another brain mapping project has announced some pretty amazing new findings. Researchers at UC Berkeley's Gallant Lab have succeeded in decoding the semantic space in which the brain stores all the information we take in. They've mapped the space both as abstract, multi-dimensional graphics and as the actual locations in the physical brain where the information is stored. They've learned all sorts of new things about how the brain categorizes things. For example, one semantic dimension (abbreviated PC, for principal component) of our brain space categorizes things by whether they move: cars, motorcycles, and people versus buildings, cities, and the sky. Another dimension distinguishes between things involved in social interaction (people, verbs, furniture) and things involved in less interactive outdoor activities (geological formations, animals, vehicles). They've identified four semantic dimensions so far but believe that with higher resolution scans and more work, many more will be revealed.

"Across the cortex, semantic representation is organized along smooth gradients that seem to be distributed systematically. Functional areas defined using classical contrast methods are merely peaks or nodal points within these broad semantic gradients. Furthermore, cortical maps based on the group semantic space are significantly smoother than expected by chance. These results suggest that semantic representation is analogous to retinotopic representation, in which many smooth gradients of visual eccentricity and angle selectivity tile the cortex (Engel, Glover, & Wandell, 1997; Hansen, Kay, & Gallant, 2007). Unlike retinotopy, however, the relevant dimensions of the space underlying semantic representation are not known a priori, and so must be derived empirically"

The mapping of the semantic space onto the brain reveals that as much as 20% of the brain, including parts of the somatosensory and frontal cortices, is devoted to storing these highly organized semantic maps. Less surprisingly, the maps confirm the locations of previously established specialized areas. Information about humans, for example, overlaps the fusiform face area (FFA) of the brain, which is known to be involved in face recognition. For more, see the paper "A continuous semantic space describes the representation of thousands of object and action categories across the human brain" (PDF format), which will appear in Neuron, Vol. 76, Issue 6. If you're using a browser that supports WebGL graphics, such as Google's Chrome, you can explore an interactive version of the researchers' semantic brain map. And read on to see examples of the semantic space mapped onto the physical brain as well as a short video describing the research.
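The "PC" labels suggest the semantic dimensions were recovered with principal component analysis of per-voxel model weights. The toy sketch below shows that general recipe only: fit a weight per (voxel, category), then factor out the shared structure. The shapes and data are invented, and this is not the Gallant Lab's actual pipeline.

```python
# Toy sketch of the general recipe behind a semantic-space analysis.
# Shapes and data are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_categories = 500, 40

# Pretend we've already regressed each voxel's response onto category
# labels, giving one weight per (voxel, category).
weights = rng.standard_normal((n_voxels, n_categories))

# PCA via SVD of the mean-centered weight matrix: each right singular
# vector is a candidate "semantic dimension" over the categories.
centered = weights - weights.mean(axis=0)
_, singular_values, dims = np.linalg.svd(centered, full_matrices=False)

variance_explained = singular_values**2 / np.sum(singular_values**2)
print("variance explained by first 4 dims:", variance_explained[:4].round(3))
# dims[0] would be "PC 1": the category weighting shared by the most voxels.
```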

For the past two weeks the Boston Dynamics LS3 (Legged Squad Support System) robot has been undergoing field tests in the woods of central Virginia with personnel from the Marine Corps Warfighting Lab. DARPA issued a news release with video of LS3 following a marine through real-world terrain and responding to voice commands. The four-legged robot is designed to carry up to 400 lbs of gear anywhere a squad can go. The robot is semi-autonomous and designed to look out for itself while keeping up with the marines. The tests seem to have produced positive results. Lt. Col. Joseph Hitt of DARPA reports:

"This was the first time DARPA and MCWL were able to get LS3 out on the testing grounds together to simulate military-relevant training conditions. The robot’s performance in the field expanded on our expectations, demonstrating, for example, how voice commands and 'follow the leader' capability would enhance the robot’s ability to interact with warfighters. We were able to put the robot through difficult natural terrain and test its ability to right itself with minimal interaction from humans.”

Read on to see the video, which includes shots of the robot following a marine through the woods and being intentionally forced into situations where it will stumble. At times the robot has to run to keep up while climbing hills, slogging through mud, and following a marine through a maze of shipping containers.

Berkeley Lab reports the creation of a powerful microscale actuator that can deliver three orders of magnitude greater force per unit weight than human muscle. The tiny actuators are only about 100 microns in size and are made from vanadium dioxide. They could potentially replace less powerful piezoelectric actuators, which are complicated to make and require toxic materials. From the abstract of the researchers' report:

"Here we demonstrate a set of microactuators fabricated by a simple microfabrication process, showing simultaneously high performance by these metrics, operated on the structural phase transition in vanadium dioxide responding to diverse stimuli of heat, electric current, and light. In both ambient and aqueous conditions, the actuators bend with exceedingly high displacement-to-length ratios up to 1 in the sub-100 μm length scale, work densities over 0.63 J/cm3, and at frequencies up to 6 kHz. The functionalities of actuation can be further enriched with integrated designs of planar as well as three-dimensional geometries. Combining the superior performance, high durability, diversity in responsive stimuli, versatile working environments, and microscale manufacturability, these actuators offer potential applications in microelectromechanical systems, microfluidics, robotics, drug delivery, and artificial muscles."
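To put those figures in context, a little back-of-envelope arithmetic helps; the assumptions here are ours, not the paper's (a cubic 100-micron actuator that delivers its full work density on every cycle).

```python
# Back-of-envelope numbers from the abstract. The cube geometry and the
# idea that every stroke delivers the full work density are our own
# simplifying assumptions, for illustration only.

side_cm = 100e-4          # 100 micron actuator, expressed in cm
volume_cm3 = side_cm**3   # ~1e-6 cm^3 assuming a cube

work_density = 0.63       # J/cm^3, from the abstract
frequency = 6000.0        # Hz, upper end quoted in the abstract

work_per_stroke = work_density * volume_cm3   # joules per actuation
mean_power = work_per_stroke * frequency      # watts at full-work strokes

print(f"work per stroke: {work_per_stroke*1e6:.2f} microjoules")   # ~0.63 uJ
print(f"power at 6 kHz:  {mean_power*1e3:.1f} milliwatts")         # ~3.8 mW
```

A few milliwatts from a speck 100 microns across is what makes these attractive as artificial-muscle candidates.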

Jan Scheuermann was diagnosed with spinocerebellar degeneration in 1996. As the connections between her brain and muscles degenerated, she lost the ability to move. In 2011 she saw a video of a UPMC research study that interfaced a robot arm to the brain of Tim Hemmes, another quadriplegic. She called immediately and said, "sign me up!" On Monday, UPMC issued a news release with her results.

"Before three months had passed, she also could flex the wrist back and forth, move it from side to side and rotate it clockwise and counter-clockwise, as well as grip objects, adding up to what scientists call 7D control. In a study task called the Action Research Arm Test, Ms. Scheuermann guided the arm from a position four inches above a table to pick up blocks and tubes of different sizes, a ball and a stone and put them down on a nearby tray. She also picked up cones from one base to restack them on another a foot away, another task requiring grasping, transporting and positioning of objects with precision."

Two 96-channel intracortical microelectrode arrays were implanted to provide the brain-computer interface with the 7-DoF robot arm. More technical details can be found in the paper, "High performance neuroprosthetic control by an individual with tetraplegia" (pay-walled) and in the NIH study description. See also the UPMC photo gallery. Jan herself was less interested in the technical details than in the pleasure of being able to move a limb for the first time in eight years. And she had a goal in mind. "I’m going to feed myself chocolate before this is over," she said when the study started. Read on to see the video of her eating chocolate.
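The news release doesn't spell out the decoding algorithm, but intracortical BCIs of this kind commonly map a vector of unit firing rates to a movement command through a trained linear decoder. The sketch below shows that generic idea with invented data and dimensions; it is not UPMC's decoder.

```python
# Generic linear-decoder sketch for an intracortical BCI: firing rates in,
# a 7-dimensional command out (e.g., 3D translation, 3D orientation, grasp).
# All data and dimensions are invented; this is not UPMC's decoder.

import numpy as np

rng = np.random.default_rng(1)
n_channels = 192   # two 96-channel arrays
n_dims = 7         # the "7D control" of the news release

# Calibration: observed firing rates paired with intended movement vectors
# (in the clinic, intent comes from watching or imagining arm movements).
n_samples = 1000
rates = rng.poisson(5.0, size=(n_samples, n_channels)).astype(float)
true_decoder = rng.standard_normal((n_channels, n_dims)) * 0.1
intents = rates @ true_decoder + rng.standard_normal((n_samples, n_dims))

# Fit the decoder by least squares: intents ~= rates @ W.
W, *_ = np.linalg.lstsq(rates, intents, rcond=None)

# At run time, each new rate vector becomes a 7D command for the arm.
command = rates[0] @ W
print("7D command:", command.round(2))
```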

Every week we post a collection of the best robot photos submitted by our readers to our robots.net flickr group. Why? Because everyone likes to see cool new robots! This week's collection includes several well-known robots such as Asimo, iCub, and Nao, and even a miraculous appearance of Crow T. Robot. There's a blurrycam photo of a mysterious legged planetary robot prototype, plus a few art robots too! Want to see your robot here? Post it to flickr and add it to the robots.net flickr group. It's easy! If you're not already a flickr member, it's free and easy to sign up. Read on to see the best robot photos of the week!