Category: robotics

Engineers at MIT have fabricated transparent, gel-based robots that move when water is pumped in and out of them. The bots can perform a number of fast, forceful tasks, including kicking a ball underwater, and grabbing and releasing a live fish.

The robots are made entirely of a hydrogel, a tough, rubbery, nearly transparent material that’s composed mostly of water. Each robot is an assemblage of hollow, precisely designed hydrogel structures, connected to rubbery tubes. When the researchers pump water into the hydrogel robots, the structures quickly inflate in orientations that enable the bots to curl up or stretch out.

The team fashioned several hydrogel robots, including a finlike structure that flaps back and forth, an articulated appendage that makes kicking motions, and a soft, hand-shaped robot that can squeeze and relax.

Because the robots are both powered by and made almost entirely of water, their visual and acoustic properties closely match those of water. The researchers propose that these robots, if designed for underwater applications, may be virtually invisible.

The group, led by Xuanhe Zhao, associate professor of mechanical engineering and civil and environmental engineering at MIT, and graduate student Hyunwoo Yuk, is currently looking to adapt hydrogel robots for medical applications.

“Hydrogels are soft, wet, biocompatible, and can form more friendly interfaces with human organs,” Zhao says. “We are actively collaborating with medical groups to translate this system into soft manipulators such as hydrogel ‘hands,’ which could potentially apply more gentle manipulations to tissues and organs in surgical operations.”

Zhao and Yuk have published their results this week in the journal Nature Communications. Their co-authors include MIT graduate students Shaoting Lin and Chu Ma, postdoc Mahdi Takaffoli, and associate professor of mechanical engineering Nicholas X. Fang.

Robots are increasingly able to outperform humans in a variety of mundane tasks (and even in more technical areas like surgery), but for all that efficiency they are still, by and large, one-trick ponies. Most robots are designed to perform one very specific task, and part of taking robotics to the next level is designing a multifunctional robot that can move with enough speed to make multifunctionality a desirable feature.

(Article by Motherboard)

To overcome this design problem, engineers have begun studying Mother Nature’s own solutions to multifunctionality, which has given rise to the field of bio-inspired robotics. One of the newest areas of bio-bot research involves the creation of soft robots, where the idea is to take a cue from animals like the octopus and starfish and build a robot entirely from soft components. Soft robotics is, in essence, the art and science of designing artificial muscles.

Just in the last five years, engineers have made enormous breakthroughs in soft robotics, but a fundamental problem remains: these robots still move at starfish-like speeds. This is why a new approach to engineering robot muscles, pioneered by researchers at Harvard’s School of Engineering and Applied Sciences and allowing for flexible, efficient circuitry, is being heralded by soft roboticists as “the holy grail” of the field.

The technical term for the artificial muscles that make a soft robot move is “actuators,” and historically these actuators have relied on hydraulic or pneumatic components (which make use of liquids or compressed gases, respectively) to function. The downside of pneumatic and hydraulic actuators is that they are slow to respond and rigid—which kind of defeats the whole point of soft robotics. Some engineers have looked at using soft, insulating materials called dielectric elastomers as an alternative to pneumatic actuators, but they also require rigid components and high voltage to deal with their complex and inefficient circuitry.
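The voltage problem described above follows from the standard Maxwell-stress relation for dielectric elastomers, p = ε0·εr·(V/d)²: for a given actuation pressure, the required voltage grows linearly with film thickness. A short illustrative calculation (the specific numbers below are assumptions, not values from the paper):

```python
# Why dielectric elastomer actuators need high voltage: the actuation
# (Maxwell) pressure scales as p = eps0 * eps_r * (V / d)**2, so for a
# target pressure the required voltage grows linearly with thickness d.
# Standard textbook relation; the numbers below are purely illustrative.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def required_voltage(pressure_pa, eps_r, thickness_m):
    """Voltage needed to reach a target Maxwell pressure across the film."""
    return thickness_m * (pressure_pa / (EPS0 * eps_r)) ** 0.5

# Assumed numbers: 100 kPa actuation pressure, relative permittivity ~4.
thick = required_voltage(1e5, 4.0, 100e-6)  # 100-micron film
thin = required_voltage(1e5, 4.0, 20e-6)    # 20-micron film
print(round(thick), round(thin))  # the thinner film needs 5x less voltage
```

This is why thinner films and higher-permittivity elastomers are the usual routes to lowering actuation voltage.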

In this sense, the latest development out of Harvard is something of a revolution for dielectric elastomers. The research, published this week in Advanced Materials, culminated in the development of a dielectric elastomer that has a broad range of motion and hyper-efficient circuitry, thus requiring relatively low voltage to function.

“Electricity is easy to store and deliver, but until now the electric fields required to power actuators in soft robots has been too high,” said Mishu Duduta, a Harvard engineering graduate student and the paper’s lead author. “This research solves a lot of the challenges in soft actuation by reducing actuation voltage and increasing energy density, while eliminating rigid components.”

To make their paper-thin device, Duduta and his colleagues made use of a new type of elastomer developed at UCLA which doesn’t need to be pre-stretched over a rigid frame like other elastomers. For the device’s electrode, they used carbon nanotubes developed at Harvard instead of the typical carbon grease.

According to the team, this breakthrough could find use in everything from minimally invasive surgical tools to the artificial muscles for more complex and traditional robots.

“Actuation is one of the most difficult challenges in robotics,” said Robert Wood, a Harvard professor of engineering and co-author of the new paper. “This breakthrough in electrically-controlled soft actuators brings us much closer to muscle-like performance in an engineered system and opens the door for countless applications in soft robotics.”

The Russian space agency Roscosmos launched a robotic cargo ship early Wednesday (Feb. 22) on a mission to deliver fresh supplies to the International Space Station.

The autonomous Progress 66 resupply ship launched into orbit atop a Soyuz rocket at 12:58 a.m. EST (0558 GMT), lifting off from a pad at the Baikonur Cosmodrome in Kazakhstan. The cargo ship will arrive at the space station early Friday (Feb. 24).

The new spacecraft is due to dock itself at the station on Friday at 3:34 a.m. EST (0834 GMT). You can watch the Progress 66 docking live online, courtesy of NASA TV, beginning at 2:45 a.m. EST (0745 GMT).

A Russian Soyuz rocket launches the automated Progress 66 cargo ship toward the International Space Station from Baikonur Cosmodrome, Kazakhstan on Feb. 22, 2017, in this still from a NASA TV broadcast.
Credit: NASA TV
Progress 66 is Russia’s first resupply mission to the space station since the loss of the Progress 65 cargo ship shortly after its launch on Dec. 1, 2016.

Wednesday’s launch occurred just hours before another cargo ship, a SpaceX Dragon capsule, was due to arrive at the International Space Station. But the Dragon aborted its approach at a range of seven-tenths of a mile due to an incorrect value in the global positioning system software used to pinpoint the spacecraft’s position relative to the space station, NASA officials said.

The International Space Station is currently resupplied by a fleet of robotic spacecraft. In addition to Russia’s Progress vehicles and SpaceX’s Dragon capsules, the station is also resupplied by Orbital ATK’s Cygnus spacecraft and Japan’s H-2 Transfer Vehicles. SpaceX’s Dragon and Orbital ATK’s Cygnus are privately built spacecraft that resupply the space station under contracts with NASA.

Developing an unmanned aircraft is a complex and expensive process, and even retrofitting manned aircraft for autonomous operation can be tricky. At KAIST in South Korea, researchers are testing a humanoid robot that’s designed to operate a regular aircraft by sitting in the pilot’s seat and using the controls just like a human would. Pilot Robot demonstrated its skills on a flight simulator at IROS 2016.

It can manage all aspects of a flight: turning the engine on, taxiing, taking off, flying, and even landing. The robot relies on input from the simulator to determine the location and state of the aircraft and lands successfully 80 percent of the time.

At MIT’s 2016 Open House last spring, more than 100 visitors took rides on an autonomous mobility scooter in a trial of software designed by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the National University of Singapore, and the Singapore-MIT Alliance for Research and Technology (SMART).

The researchers had previously used the same sensor configuration and software in trials of autonomous cars and golf carts, so the new trial completes the demonstration of a comprehensive autonomous mobility system. A mobility-impaired user could, in principle, use a scooter to get down the hall and through the lobby of an apartment building, take a golf cart across the building’s parking lot, and pick up an autonomous car on the public roads.

The new trial establishes that the researchers’ control algorithms work indoors as well as out. “We were testing them in tighter spaces,” says Scott Pendleton, a graduate student in mechanical engineering at the National University of Singapore (NUS) and a research fellow at SMART. “One of the spaces that we tested in was the Infinite Corridor of MIT, which is a very difficult localization problem, being a long corridor without very many distinctive features. You can lose your place along the corridor. But our algorithms proved to work very well in this new environment.”

The researchers’ system includes several layers of software: low-level control algorithms that enable a vehicle to respond immediately to changes in its environment, such as a pedestrian darting across its path; route-planning algorithms; localization algorithms that the vehicle uses to determine its location on a map; map-building algorithms that it uses to construct the map in the first place; a scheduling algorithm that allocates fleet resources; and an online booking system that allows users to schedule rides.
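The key point of that stack is that every vehicle type runs the same layers. A minimal sketch of how such a uniform stack might be wired together (all class and method names here are hypothetical, not the team's actual code):

```python
# Hypothetical sketch of a layered autonomy stack shared across vehicle
# types. Each layer exposes one interface, so scooters, golf carts, and
# city cars can swap in without changing the layers above.

class Localizer:
    """Estimates the vehicle's pose on a prebuilt map."""
    def locate(self, sensor_scan, vehicle_map):
        # A real system would match the scan against the map
        # (e.g. with a particle filter); this placeholder just
        # returns the map's stored starting pose.
        return vehicle_map.get("start", (0.0, 0.0))

class RoutePlanner:
    """Plans a coarse route from the current pose to a goal."""
    def plan(self, pose, goal):
        return [pose, goal]  # straight-line placeholder

class LowLevelController:
    """Reacts immediately to obstacles while tracking the route."""
    def step(self, pose, route, obstacle_ahead):
        if obstacle_ahead:      # e.g. a pedestrian darting across the path
            return "brake"
        return "follow_route"

class Vehicle:
    """Any vehicle type: same software stack, different hardware."""
    def __init__(self, name):
        self.name = name
        self.localizer = Localizer()
        self.planner = RoutePlanner()
        self.controller = LowLevelController()

    def drive(self, vehicle_map, goal, obstacle_ahead=False):
        pose = self.localizer.locate(None, vehicle_map)
        route = self.planner.plan(pose, goal)
        return self.controller.step(pose, route, obstacle_ahead)

scooter = Vehicle("scooter")
golf_cart = Vehicle("golf_cart")
shared_map = {"start": (0.0, 0.0)}  # a map built by one vehicle, reused by another
print(scooter.drive(shared_map, goal=(5.0, 2.0)))                         # follow_route
print(golf_cart.drive(shared_map, goal=(5.0, 2.0), obstacle_ahead=True))  # brake
```

Because both vehicles share `Localizer` and its map format, handing the golf cart's map to the scooter is a one-line change, which is the transferability benefit described below.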

Uniformity

Using the same control algorithms for all types of vehicles — scooters, golf carts, and city cars — has several advantages. One is that it becomes much more practical to perform reliable analyses of the system’s overall performance.

“If you have a uniform system where all the algorithms are the same, the complexity is much lower than if you have a heterogeneous system where each vehicle does something different,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the project’s leaders. “That’s useful for verifying that this multilayer complexity is correct.”

Furthermore, with software uniformity, information that one vehicle acquires can easily be transferred to another. Before the scooter was shipped to MIT, for instance, it was tested in Singapore, where it used maps that had been created by the autonomous golf cart.

Similarly, says Marcelo Ang, an associate professor of mechanical engineering at NUS who co-leads the project with Rus, in ongoing work the researchers are equipping their vehicles with machine-learning systems, so that interactions with the environment will improve the performance of their navigation and control algorithms. “Once you have a better driver, you can easily transplant that to another vehicle,” says Ang. “That’s the same across different platforms.”

Finally, software uniformity means that the scheduling algorithm has more flexibility in its allocation of system resources. If an autonomous golf cart isn’t available to take a user across a public park, a scooter could fill in; if a city car isn’t available for a short trip on back roads, a golf cart might be.

“I can see its usefulness in large indoor shopping malls and amusement parks to take [mobility-impaired] people from one spot to another,” says Dan Ding, an associate professor of rehabilitation science and technology at the University of Pittsburgh, about the system.

Changing perceptions

The scooter trial at MIT also demonstrated the ease with which the researchers could deploy their modular hardware and software system in a new context. “It’s extraordinary to me, because it’s a project that the team conducted in about two months,” Rus says. MIT’s Open House was at the end of April, and “the scooter didn’t exist on February 1st,” Rus says.

The researchers described the design of the scooter system and the results of the trial in a paper they presented last week at the IEEE International Conference on Intelligent Transportation Systems. Joining Rus, Pendleton, and Ang on the paper are You Hong Eng, who leads the SMART autonomous-vehicle project, and four other researchers from both NUS and SMART.

The paper also reports the results of a short user survey that the researchers conducted during the trial. Before riding the scooter, users were asked how safe they considered autonomous vehicles to be, on a scale from one to five; after their rides, they were asked the same question again. Experience with the scooter brought the average safety score up, from 3.5 to 4.6.

In the near future, the package that you ordered online may be deposited at your doorstep by a drone: Last December, online retailer Amazon announced plans to explore drone-based delivery, suggesting that fleets of flying robots might serve as autonomous messengers that shuttle packages to customers within 30 minutes of an order.

To ensure safe, timely, and accurate delivery, drones would need to deal with a degree of uncertainty in responding to factors such as high winds, sensor measurement errors, or drops in fuel. But such “what-if” planning typically requires massive computation, which can be difficult to perform on the fly.

Now MIT researchers have come up with a two-pronged approach that significantly reduces the computation associated with lengthy delivery missions. The team first developed an algorithm that enables a drone to monitor aspects of its “health” in real time. With the algorithm, a drone can predict its fuel level and the condition of its propellers, cameras, and other sensors throughout a mission, and take proactive measures — for example, rerouting to a charging station — if needed.
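The proactive logic can be sketched as a simple predict-then-decide loop: before each leg, predict the fuel remaining at the destination and reroute if the prediction dips below a safety margin. This is a toy sketch with made-up numbers, not the paper's actual algorithm:

```python
# Hypothetical sketch of proactive health monitoring for a delivery drone:
# before committing to the next leg, predict fuel at the destination and
# reroute to a charging station if the prediction falls below a margin.
# All constants are assumptions for illustration.

FUEL_PER_KM = 2.0      # assumed consumption (units per km)
SAFETY_MARGIN = 10.0   # reserve the drone must never dip below

def predict_fuel(current_fuel, distance_km, headwind_factor=1.0):
    """Predicted fuel after flying distance_km; headwind increases burn."""
    return current_fuel - FUEL_PER_KM * distance_km * headwind_factor

def next_action(current_fuel, dist_to_goal_km, dist_to_charger_km,
                headwind_factor=1.0):
    """Continue to the goal only if the predicted reserve stays safe."""
    if predict_fuel(current_fuel, dist_to_goal_km, headwind_factor) >= SAFETY_MARGIN:
        return "continue_to_goal"
    if predict_fuel(current_fuel, dist_to_charger_km, headwind_factor) >= SAFETY_MARGIN:
        return "reroute_to_charger"
    return "land_now"  # neither destination is safely reachable

print(next_action(100.0, 20.0, 5.0))                       # continue_to_goal
print(next_action(100.0, 50.0, 5.0))                       # reroute_to_charger
print(next_action(100.0, 50.0, 5.0, headwind_factor=2.0))  # reroute_to_charger
```

Note how a strong headwind (the third call) changes the prediction, and hence the decision, without any change to the rule itself.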

The researchers also devised a method for a drone to efficiently compute its possible future locations offline, before it takes off. The method simplifies all potential routes a drone may take to reach a destination without colliding with obstacles.

In simulations involving multiple deliveries under various environmental conditions, the researchers found that their drones delivered as many packages as those that lacked health-monitoring algorithms — but with far fewer failures or breakdowns.

“With something like package delivery, which needs to be done persistently over hours, you need to take into account the health of the system,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Department of Aeronautics and Astronautics. “Interestingly, in our simulations, we found that, even in harsh environments, out of 100 drones, we only had a few failures.”

Agha-mohammadi will present details of the group’s approach in September at the IEEE/RSJ International Conference on Intelligent Robots and Systems, in Chicago. His co-authors are MIT graduate student Kemal Ure; Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics; and John Vian of Boeing.

Tree of possibilities

Planning an autonomous vehicle’s course often involves an approach called Markov Decision Process (MDP), a sequential decision-making framework that resembles a “tree” of possible actions. Each node along a tree can branch into several potential actions — each of which, if taken, may result in even more possibilities. As Agha-mohammadi explains it, MDP is “the process of reasoning about the future” to determine the best sequence of policies to minimize risk.
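The tree-of-actions idea can be made concrete with a toy MDP solved by value iteration, the standard textbook method (this is an illustration, not the paper's algorithm): a drone moves toward a goal, and each "go" action only succeeds with some probability because of wind.

```python
# Toy MDP solved by value iteration (textbook method; illustration only).
# States are positions 0..3 with the goal at 3. Action "go" advances the
# drone, but a gust may hold it in place; each step costs -1.

GAMMA = 0.9
STATES = [0, 1, 2, 3]
GOAL = 3

def transitions(state, action):
    """Returns [(probability, next_state, reward), ...]."""
    if state == GOAL:
        return [(1.0, state, 0.0)]          # absorbing goal
    if action == "go":
        # 80% advance, 20% held in place by wind.
        return [(0.8, state + 1, -1.0), (0.2, state, -1.0)]
    return [(1.0, state, -1.0)]             # "wait" just burns a step

values = {s: 0.0 for s in STATES}
for _ in range(100):  # iterate until the values settle
    values = {
        s: max(
            sum(p * (r + GAMMA * values[s2]) for p, s2, r in transitions(s, a))
            for a in ("go", "wait")
        )
        for s in STATES
    }

# The expected cost-to-go shrinks as the drone nears the goal.
print(round(values[0], 2), round(values[2], 2), values[GOAL])
```

Each entry of `values` is exactly the "reasoning about the future" Agha-mohammadi describes: the best expected outcome achievable from that node of the tree.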

MDP, he says, works reasonably well in environments with perfect measurements, where the result of one action will be observed perfectly. But in real-life scenarios, where there is uncertainty in measurements, such sequential reasoning is less reliable. For example, even if a command is given to turn 90 degrees, a strong wind may prevent that command from being carried out.

Instead, the researchers chose to work with a more general framework of Partially Observable Markov Decision Processes (POMDP). This approach generates a similar tree of possibilities, although each node represents a probability distribution, or the likelihood of a given outcome. Planning a vehicle’s route over any length of time, therefore, can result in an exponential growth of probable outcomes, which can be a monumental task in computing.
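A minimal example of the belief bookkeeping a POMDP requires (illustrative only): after a noisy "turn 90 degrees" command, the drone cannot observe directly whether the turn succeeded, so it carries a probability distribution over headings and refines it with Bayes' rule when a sensor reading arrives.

```python
# Minimal POMDP-style belief update: the drone maintains a probability
# distribution over headings and updates it with Bayes' rule after a
# noisy compass reading. Probabilities here are made-up illustrations.

def normalize(belief):
    total = sum(belief.values())
    return {h: p / total for h, p in belief.items()}

# Prior: after commanding a 90-degree turn, the drone believes it now
# faces east, but a gust leaves a 20% chance it still faces north.
belief = {"east": 0.8, "north": 0.2}

# Sensor model P(reading="east" | heading): the compass reports "east"
# 90% of the time when truly facing east, 10% when facing north.
likelihood = {"east": 0.9, "north": 0.1}

# Bayes update after observing a compass reading of "east".
posterior = normalize({h: belief[h] * likelihood[h] for h in belief})
print(posterior)  # belief in "east" rises well above the 0.8 prior
```

Every node of the POMDP tree carries a distribution like `posterior`, which is exactly why the tree's size explodes with planning horizon.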

Agha-mohammadi chose to simplify the problem by splitting the computation into two parts: vehicle-level planning, such as a vehicle’s location at any given time; and mission-level, or health planning, such as the condition of a vehicle’s propellers, cameras, and fuel levels.

For vehicle-level planning, he developed a computational approach to POMDP that essentially funnels multiple possible outcomes into a few most-likely outcomes.

“Imagine a huge tree of possibilities, and a large chunk of leaves collapses to one leaf, and you end up with maybe 10 leaves instead of a million leaves,” Agha-mohammadi says. “Then you can … let this run offline for say, half an hour, and map a large environment, and accurately predict the collision and failure probabilities on different routes.”
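The "million leaves down to ten" idea resembles beam-style pruning: at each planning depth, keep only the K most probable outcomes and renormalize. A sketch under that simplification (the paper's actual method is more sophisticated than this):

```python
# Sketch of collapsing a huge outcome tree via beam-style pruning: at each
# depth, keep only the K most probable leaves and renormalize. Illustrative
# only; branch labels and probabilities are made up.

import heapq

BEAM_WIDTH = 3

def expand(outcome):
    """Each outcome (label, prob) branches into three children."""
    label, prob = outcome
    return [(label + "L", prob * 0.6),
            (label + "C", prob * 0.3),
            (label + "R", prob * 0.1)]

frontier = [("", 1.0)]
for depth in range(6):  # unpruned, this would grow to 3**6 = 729 leaves
    children = [child for outcome in frontier for child in expand(outcome)]
    frontier = heapq.nlargest(BEAM_WIDTH, children, key=lambda c: c[1])
    total = sum(p for _, p in frontier)            # renormalize the kept mass
    frontier = [(lbl, p / total) for lbl, p in frontier]

print(len(frontier))   # 3 leaves carried forward instead of 729
print(frontier[0][0])  # the most likely branch: all-"L" steps
```

Keeping the frontier small is what makes the half-hour offline pass over a large map feasible.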

He says that planning out a vehicle’s possible positions ahead of time frees up a significant amount of computational energy, which can then be spent on mission-level planning in real time. In this regard, he and his colleagues used POMDP to generate a tree of possible health outcomes, including fuel levels and the status of sensors and propellers.

Proactive delivery

The researchers combined the two computational approaches, and ran simulations in which drones were tasked with delivering multiple packages to different addresses under various wind conditions and with limited fuel. They found that drones operating under the two-pronged approach were more proactive in preserving their health, rerouting to a recharge station midmission to keep from running out of fuel. Even with these interruptions, the team found that these drones were able to deliver just as many packages as those that were programmed to simply make deliveries without considering health.

Going forward, the team plans to test the route-planning approach in actual experiments. The researchers have attached electromagnets to small drones, or quadrotors, enabling them to pick up and drop off small parcels. The team has also programmed the drones to land on custom-engineered recharge stations.

“We believe in the near future, in a lab setting, we can show what we’re gaining with this framework by delivering as many packages as we can while preserving health,” Agha-mohammadi says. “Not only the drone, but the package might be important, and if you fail, it could be a big loss.”

The Lego bot can move each limb independently of the others thanks to its magnetically controlled screws placed in a layered magnetic field.

Magnetically controlled swarms of microscopic robots might one day help fight cancer inside the body, new research suggests.

Over the past decade, scientists have shown they can manipulate magnetic forces to guide medical devices within the human body, as these fields can apply forces to remotely control objects. For instance, prior work used magnetic fields to maneuver a catheter inside the heart and steer video capsules in the gut.

Previous research also used magnetic fields to simultaneously control swarms of tiny magnets. In principle, these objects could work together on large problems such as fighting cancers. However, individually guiding members of a team of microscopic devices so that each moves in its own direction and at its own speed remains a challenge. This is because identical magnetic items under the control of the same magnetic field usually behave identically to each other.

Now, scientists have developed a way to magnetically control each member of a swarm of magnetic devices to perform specific, unique tasks, researchers in the new study said.

First, the scientists created a number of tiny identical magnetic screws. The researchers next used a strong, uniform magnetic field to freeze groups of these magnetic screws in place. In small, weak spots within this powerful magnetic field, the microscopic screws are free to move. Superimposing a relatively weak rotating magnetic field could make these free screws spin, the researchers said.
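The selection principle described above, a strong field that locks screws everywhere except near its field-free spot, where a weak rotating field can still spin them, can be sketched in a toy 1-D model (the thresholds and geometry here are made up for illustration):

```python
# Toy 1-D model of magnetic selection: a strong gradient field locks the
# screws everywhere except near its field-free point, where a weak
# rotating drive field can still spin them. All numbers are illustrative.

LOCK_THRESHOLD = 5.0   # field magnitude (mT) above which a screw is frozen
GRADIENT = 2.0         # locking field grows 2 mT per mm away from its zero

def selection_field(x_mm, zero_at_mm):
    """Locking field: zero at zero_at_mm, growing linearly away from it."""
    return GRADIENT * abs(x_mm - zero_at_mm)

def free_screws(screw_positions_mm, zero_at_mm):
    """Screws respond to the weak rotating field only where the
    locking field is below threshold."""
    return [x for x in screw_positions_mm
            if selection_field(x, zero_at_mm) < LOCK_THRESHOLD]

screws = [0.0, 10.0, 20.0, 30.0]
# Park the field-free point near the screw at 10 mm: only it can spin.
print(free_screws(screws, zero_at_mm=10.0))   # [10.0]
# Move the field-free point: a different screw is addressed.
print(free_screws(screws, zero_at_mm=30.0))   # [30.0]
```

Steering the field-free point around is thus what lets identical screws, all exposed to the same drive field, behave differently.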

In experiments, the researchers could make several magnetic screws whirl in different directions at the same time with pinpoint accuracy. In principle, the scientists noted, they could manipulate hundreds of microscopic robots at once.

“One could think of screw-driven mechanisms that perform tasks inside the human body without the need for batteries or motors,” study lead author Jürgen Rahmer, a physicist at Philips Research in Hamburg, Germany, told Live Science.

One application for these magnetic swarms could involve magnetic screws embedded within injectable microscopic pills. Doctors could use magnetic fields to make certain screws spin to open the pills, the researchers said. This could help doctors make sure that cancer-killing radioactive “seeds” within the pills target and damage only tumors rather than healthy tissues, cutting down on harmful side effects, the researchers said. Once the pills deliver a therapeutic dose of radiation, physicians could then use magnets to essentially switch the pills off. (The pills would be made of metallic material that would otherwise keep radiation from leaking out.)

Another potential application could be medical implants that change over time, the researchers said. For instance, as people heal, magnetic fields could help alter the shape of implants to better adjust to the bodies of patients, Rahmer said.

In the future, researchers could develop compact and magnetic field applicators to control tiny magnetic robots, and use imaging technologies such as X-ray machines or ultrasound scanners to show where those devices are located in the body, Rahmer suggested.

The scientists detailed their findings online Feb. 15 in the journal Science Robotics.