The Science Museum Blog


Category Archives: Robots

On the anniversary of Venera 7’s launch – the first spacecraft to successfully land on Venus – curator Doug Millard reflects on the challenge of exploring other worlds.

Over a 20-year period from the mid-1960s, Soviet scientists and engineers conducted one of the most successful interplanetary exploration programmes ever.

They launched a flotilla of spacecraft far beyond Earth and its Moon. Some failed, but others set a remarkable record of space firsts: first spacecraft to impact another planet, first controlled landing on another planet and the first photographs from its surface. The planet in question was not Mars – it was Venus.

Our knowledge of Venus at the time had been patchy. But as the Soviet probes journeyed down through the Venusian atmosphere it became clear that this planet – named after the Roman goddess of love – was a supremely hostile world. The spacecraft were named Venera (Russian for Venus) and the early probes succumbed to the planet’s immense atmospheric pressure, crushed and distorted as if made of paper.

Venera 3 did make it to the surface – the first craft ever to do so – but was dead by the time it impacted, destroyed by the weight of the air. Venera 4 was also shattered on the way down, but it survived long enough to return the first data from within another planet’s atmosphere. The engineers realised, though, they would have to reinforce still further the spacecraft’s titanium structures and silica-based heat shield.

The information coming in from the Venera probes was supplemented with readings from American spacecraft and ground-based observatories on Earth. Each added to an emerging picture of a hellish planet with temperatures of over 400 °C on the surface and an atmospheric pressure at ground level 90 times greater than Earth’s.

Spacecraft can only be launched towards Venus during a ‘window of opportunity’ that lasts a few days every 19 months. Only then do the relative positions of Earth and Venus in the Solar System allow for a viable mission. The Soviets therefore usually launched a pair of spacecraft at each opportunity. Venera 5 and 6 were launched on 5 and 10 January 1969, both arriving at Venus four months later.

There had not been time to strengthen these spacecraft against the unforgiving atmosphere, so instead the mission designers modified their parachutes so that they would descend faster and reach lower altitudes, sending back new data before their inevitable destruction.

This Venera 7 descent module (engineering model) with parachute lanyards clearly visible, was used for drop tests on Earth in 1970. Credit: Lavochkin Association/Photo: State Museum and Exhibition center, ROSIZO

Launched on 17 August 1970, Venera 7 made it intact to the surface of Venus on 15 December 1970 – the first probe ever to soft land on another planet. Its instruments measured a temperature of 465 °C on the ground. It continued to transmit for 23 minutes before its batteries were exhausted.

Venera 8 carried more scientific instruments which revealed that it had landed in sunlight. It survived for another 50 minutes. Venera 9, the first of a far stronger spacecraft design, touched down on 22 October 1975 and returned the first pictures from the surface of another planet. It too showed sunny conditions – comparable, the scientists reckoned, to a Moscow day in June.

This photograph, the first taken from the surface of another planet, was taken by the camera on board the Venera 9 descent module shortly after it landed on Venus on 22 October 1975. Credit: NSSDC Photo Library

The surface was shown to be mostly level and made up of flat, irregularly shaped rocks. The camera could see clearly to the horizon – there was no dust in the atmosphere, but the atmosphere’s thickness refracted the light, playing tricks and making the horizon appear nearer than it actually was. The clouds were high – about 50 km overhead.

The Soviet Union now had a winning spacecraft design that could withstand the worst that Venus could do. More missions followed, but then in the early 1980s the designers started making plans for the most challenging interplanetary mission ever attempted.

This photograph was taken by the Venera 13 camera using colour filters. It shows the serrated edge of the Venera 13 descent module gripping the soil on the rocky surface of Venus. Credit: NASA History Office

Scientists around the world were keen to send spacecraft to Halley’s Comet, which was returning to ‘our’ part of the Solar System on its 75-year orbit of the Sun. America, Europe and Japan all launched missions, but the Soviets’ pair of Vega spacecraft were the most ambitious, combining as they did a sequence of astonishing manoeuvres, first at Venus and then at Halley’s Comet.

Both craft were international in their own right, with many nations contributing to their array of scientific instruments. They arrived at Venus in June 1985.

Each released a descent probe into the Venusian atmosphere. One part of each probe released a lander that parachuted down to the surface, while the other part deployed a balloon with a package of scientific instruments suspended underneath. The balloon first dropped and then rose through the atmosphere, to be carried around the planet by winds blowing at well over 200 miles per hour.

Meanwhile, the main part of each Vega spacecraft continued on past Venus, using the planet’s gravity to slingshot itself towards an encounter with Halley.

A little under a year later both arrived a few million kilometres distant from the comet. Both were battered and damaged by its dust, but their instruments and cameras returned plenty of information on the ancient, icy and primordial heavenly body.

A golden age of Russian planetary exploration had come to an end.

Russia plans to return to Venus, but meanwhile its Vega spacecraft, their instruments long dead, continue to patrol the outer reaches of the Solar System, relics of the nation’s pioneering days of space exploration.

Alan Winfield is Professor of Electronic Engineering and Director of the Science Communication Unit at the University of the West of England, Bristol. Alan will be on hand to discuss the cultural relevance and impact of swarm robotics at Robotville.

How did you become involved in robotic research?

Like many things in life there was a lot of luck involved. Although I have always been fascinated by robots I didn’t actively study them until I came to Bristol 20 years ago. I was lucky then because firstly I had a chance to set up a new research group, and secondly I met two other people who were also interested in robots. Together we started the robotics lab. We were also lucky because we managed to win the money (grants) to do robot research projects – without the funding the lab would have been very short-lived.

Then, during the last 20 years my interest in robotics has changed, so that now I’m much more interested in basic scientific questions, like what is intelligence, how do animals evolve, how does culture emerge and so on, and use robots to try and answer (in a small way) those questions. So what makes me want to study robots now is a deep interest in some of the big questions of life.

What are swarm robots and what is it about them that fascinates you?

A robot swarm is a collection of relatively simple robots that interact with each other and the environment in ways that are inspired by the behaviour of social insects. Complex group behaviours, like flocking, foraging for food, or nest building can emerge from the micro-interactions of lots of individuals.

I’m fascinated by swarm robotics for two reasons. Firstly, future real-world applications for tasks as wide ranging as robotic agriculture, waste processing and recycling, search and rescue, or planetary exploration and colonisation are likely to use swarms of robots. And secondly, by building swarms of robots we can start to understand how the processes of emergence and self-organisation work, and how to engineer systems using these mechanisms.

Do you see a direct relationship between the swarm behaviour of robots and the herd like relationships of humans?

Yes, social insects (and the robots inspired by them) are not the only animals that show swarm intelligence. The flocking of birds, shoaling of fish and herding of mammals are all examples of the same kind of group behaviour. Humans are complicated, of course, but some aspects of human crowd behaviour are almost certainly swarm-like.

What are the ultimate goals and objectives attributed to swarm robotics?

The ultimate goal of swarm robotics is to be able to engineer safe and reliable robotic swarms for real-world applications. As I mentioned in the answer above there are many challenging real-world (and off-world) tasks that would benefit from a swarm robotics approach. Basically, any task that is distributed in physical space, where multiple robots would do the job better than a single robot, is a candidate.

Reaching this goal requires the solution of a number of difficult technical problems. One is how to design the behaviours of the individual robots so that when you put all the robots together in their working environment, they actually self-organise to complete the required task. The second challenge is how to design a human-swarm interface – in other words how would a human operator control and monitor a large swarm of robots. The third challenge is how to prove that the swarm of robots will always do the right thing, and never the wrong thing! And the final challenge is getting this all working in real-world applications so that people become confident in swarm robotics technology.

Our view of robots is shaped by books and film – do you think this is helpful or misleading?

Good question! I think robots in science fiction are both helpful and misleading. Helpful because many roboticists, myself included, were inspired by science fiction, and also because SF provides us with some great examples of ‘thought experiments’ for what future robots might perhaps be like – think of the movie AI, or Data from Star Trek. (Of course there are some terrible examples as well!)

But robots in science fiction are misleading too. They have created an expectation of what robots are (or should be) like that means that many people are disappointed by real-world robots. This is a great shame because real-world robots are – in many ways – much more exciting than the fantasy robots in the movies. And the misleading impression of robots from SF makes being a roboticist harder, because we sometimes have to explain why robotics has ‘failed’ – which of course it hasn’t!

Meet roboticist Peter McOwan, Professor of Computer Science in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. At Robotville Peter will be showcasing software he has built that helps robots understand our emotions.

How did you become involved in robotic research?

My interest started in artificial intelligence: understanding the brain using maths, then seeing how I could use that maths in a robot to help give it human-like abilities. Still a long way to go!

Your software teaches robots to understand emotions, but how long until a robot shows preference, or “falls in love” with someone?

Ah well ‘what is love’? Love shows in the way you feel and act, it’s how you take in information about the object of your desire, and react to them. Arguably then, love is really just a special type of brain information processing. Deep inside our brains when we fall in love the nerve cells change the way they connect and signal to each other. Perhaps in the future we will understand how this happens and perhaps be able to build a machine that can perform this complex information processing task. At present our robot can read your expressions and it can be programmed to respond to them, smiling back at you when you smile at it - awww bless. The robot can even ‘know’ who you are and pull an especially sweet looking face when you look and smile its way, but would that be love? Not quite yet I expect.

Would that ever lead them to reprioritise or even change their programming?

If falling in love changes the way a human acts, the way their brain cells interact, then mimicking this in a computer could cause the same type of effect. But robots are basically complicated mechanical and electronic tools. Would it be useful for your vacuum cleaner to recognise you? Possibly yes, so it can’t be stolen. But would it be useful for your vacuum cleaner to have a crush on you? Probably not.

In 2008, Nokia developed an anthropomimetic robot with an ‘imagination’. What further developments have there been in this field and are we any closer, or have we achieved a robot with a stream of consciousness? How does your emotion software sit within this field? Submitted by Luke

Our system detects the changes in human faces when we make expressions. Often we make expressions to signal our emotions to other humans – it’s a kind of useful social signalling code. Our robots then take the facial expression and react to it according to a set of rules we have programmed. At present it’s as simple as that: that’s all our robots need to do to be useful, ‘slightly socially aware’ tools.
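The rule-based layer Peter describes can be sketched very simply. This is an illustrative toy, not the actual Queen Mary/LIREC software: the expression labels and response names are invented for the example.

```python
# Hypothetical sketch of a rule-based expression-to-response mapping:
# a detected facial expression is looked up in a table of programmed
# reactions. Labels and responses are illustrative only.

RESPONSE_RULES = {
    "smile": "smile_back",
    "frown": "look_concerned",
    "surprise": "raise_eyebrows",
}

def react(detected_expression: str) -> str:
    """Return the robot's programmed response to a detected expression."""
    # Anything the robot doesn't recognise falls back to a neutral face.
    return RESPONSE_RULES.get(detected_expression, "neutral")
```

The point of the sketch is how little machinery is needed for a ‘slightly socially aware’ tool: the hard part in practice is the vision system that produces the expression label, not the response rules themselves.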

Of course looking at the outside of the robot it’s easy for humans to believe there is a lot more going on, our brains love to process faces and social signals and build stories, making things seem more ‘human’ than they are. We all assume that there is a stream of consciousness going on in others when we observe them, and that imagination is in there too playing a part. All of this comes from the electrochemical signals swirling in our brains, and no one really understands how that all works yet, so it would be difficult to build a robot to mimic it directly.

Of course we can build part of the computer program that takes input patterns and looks for similar patterns in stored or previously learned patterns, or just creates new random patterns to output and call it ‘imagination’, and in some way it is doing a bit of what our imagination does. These sorts of experiments are useful because it lets us test our understanding, and perhaps over time as new facts about the brain emerge we can refine these to make them more human. But it’s a long road ahead.

Apple’s recent iPhone update includes what some are claiming to be the first consumer robot. How relevant is this and do you think this idea of robots in our pockets will have a wider impact on robotic research and developments?

Robots = tools. If it’s useful for us to have robots in our pockets, or on our phones, even if they have limited abilities, they can be developed. What we can cram onto the processors in phones today is limited, but as the technology improves we will be able to do more. But in the end what will drive it will be the need for that tool to be able to help us do something that’s useful for us.

What are the ongoing cultural implications of your latest research and how do you think they will affect everybody’s day to day lives?

The view of robots differs in different parts of the world. In Japan, for example, where robots are big business, they are seen as a positive thing. In the West there are more mixed feelings, perhaps coloured by the frequent portrayal of robots as sinister baddies in movies and TV shows. As part of our research project, LIREC, we are looking at people’s concerns, hopes and fears for the technology that we are developing, and that’s important. This way we can design robots so that they become beneficial technological aids to improve human life, tools to make things better.

How did you become involved in robotic research?

When I was at school I didn’t really know what I wanted to do with my life, so I chose to study Artificial Intelligence (at the University of Birmingham) because I didn’t really believe that it was a real subject. I quickly fell in love with the idea of building intelligent systems, so stayed in Birmingham to do a PhD in AI for video games. It was only a few years later when I joined an ambitious project that wanted to use some of the ideas from my PhD to help build a cognitive robot that I moved away from virtual worlds and built things that ran in the real world. From then on I was hooked on robotic research.

How did you become involved in using robots for exploration?

We were interested in studying how robots could automatically extend their own knowledge about their worlds. We already had a robot that could build maps of its environment, but it had to be directed by a human using a joystick. Therefore it seemed like a natural extension to see if we could get the robot to take over from the human and guide the exploration itself.

What can Dora do?

The Dora robot can explore space to build up a map. It can choose where to explore based on how likely it is that a particular direction of exploration will produce interesting results (like previously unknown rooms). Dora can also search for objects, either in order to tell a human where they are, or to work out what kind of rooms are in the building (e.g. if it sees a kettle it will realise it is in a kitchen). Dora can also interact with humans in a limited way, by asking questions about the location of objects.
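Dora’s choice of where to explore next can be sketched as a scoring problem over candidate ‘frontiers’ (boundaries between mapped and unmapped space). This is an illustrative sketch, not Dora’s actual code: the field names, probability estimates and weighting are all invented for the example.

```python
# Illustrative sketch of curiosity-driven frontier selection: each candidate
# direction of exploration is scored by its expected payoff (chance of
# discovering something new, minus travel cost), and the robot picks the best.

def choose_frontier(frontiers):
    """Pick the frontier with the highest expected exploration payoff.

    `frontiers` is a list of dicts with hypothetical fields:
      - "p_new_room": estimated probability the frontier leads to an
        unknown room
      - "cost": travel cost to reach the frontier
    """
    def score(f):
        # Reward likely discoveries, penalise long detours.
        return f["p_new_room"] - 0.1 * f["cost"]
    return max(frontiers, key=score)

frontiers = [
    {"name": "doorway", "p_new_room": 0.8, "cost": 3.0},
    {"name": "corridor_end", "p_new_room": 0.4, "cost": 1.0},
]
# The doorway wins: 0.8 - 0.3 = 0.5 beats 0.4 - 0.1 = 0.3.
best = choose_frontier(frontiers)
```

The same scoring idea extends naturally to Dora’s object search: a frontier near a likely kettle location scores higher when the robot is trying to decide whether it is in a kitchen.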

Do you think robots like Dora will become a household necessity?

I’m not sure they will become a necessity (unless we all become unable to care for ourselves!), but they will certainly become a luxury item in the future. It will be possible to order them from an online electronics store one day and have them arrive and start work the next. The big problem will then be training your robot to understand the layout of your home and the routines of your household. This is where we hope our work on exploration and curiosity will make people’s lives easier, as the robot will be motivated to learn things for itself, rather than wait to be taught everything.

Are the costs of transporting robots too prohibitive to send one to Mars? Submitted by Steve

The answer has to be no, as NASA has already sent two robots to Mars: Spirit and Opportunity. This has cost in excess of $900 million! One interesting question is not the cost of transport, but the difficulties of controlling robots once they get to Mars. The delay of a radio signal between Earth and Mars can be over half an hour, so this precludes any direct control by a human (just imagine playing a game where every command you send takes 30 minutes to have any effect!). Therefore these robots must be equipped with forms of Artificial Intelligence to allow them to safely make decisions for themselves with only limited human intervention.
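The delay Nick mentions is easy to check from first principles: radio signals travel at the speed of light, and the Earth–Mars distance varies between roughly 55 million and 400 million km over their orbits, so the one-way delay ranges from a few minutes to over twenty (the distances here are round figures, not mission data).

```python
# Back-of-the-envelope check of the Earth-Mars communication delay:
# one-way light travel time at the closest and farthest approximate
# distances between the two planets.

SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way radio delay in minutes over the given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

near = one_way_delay_minutes(55e6)   # closest approach: about 3 minutes
far = one_way_delay_minutes(400e6)   # farthest: about 22 minutes
```

Doubling the far-side figure for a command-and-acknowledge round trip gives well over 40 minutes, which is why Mars rovers need enough on-board autonomy to avoid hazards on their own.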

Does the software that is tasked with being curious continuously increase its capacity to explore / make connections – is it getting more creative? Submitted by Charlie and Jake

I would say that it doesn’t really increase its capacity to explore (except beyond software that isn’t curious at all), but it does have a limitless appetite for exploration. Unless you artificially restrict its mobility, Dora will just keep trying to explore until its batteries run out or it gets stuck somewhere. A few years ago we were demonstrating Dora in a corridor that led to some toilets. Someone had left the door to the women’s toilets open, and before we knew it Dora had wandered off in there. Unfortunately, as our research team was all male at the time, we had to wait for someone female to come along to retrieve Dora!

Our view of robots is shaped by books and film – do you think this is helpful or misleading?

I think it’s mostly a good thing, as fiction (books, films and increasingly video games) motivate and inspire a large number of people to work in science and engineering to produce exciting and useful technologies such as robots. The main downside is that the reality of robotics research is a long way away from the images portrayed in fiction, so we risk disappointing people when we are unable to provide them with the robots that they imagine are possible. However, as you’ll see at Robotville, the robots that exist now are all really exciting in their own ways, and will only improve as more time and effort is dedicated to them.

Apple’s recent iPhone update includes what some are claiming to be the first consumer robot. How relevant is this and do you think this idea of robots in our pockets will have a wider impact on robotic research and developments?

Whilst there have been consumer robots before Siri (including vacuum cleaners, lawn mowers and toys), the importance of Siri is that it demonstrates that it is possible to produce software that consumers can interact with about a range of tasks in a natural fashion (even if the results are not always perfect). Perhaps the biggest impact of the development of computers in our pockets will be the expectation that all our devices should share information to make our lives easier. For example my phone should know when I’m nearly home so that it can tell a future version of Dora to put the kettle on.

What are the ongoing cultural implications of your latest research?

Robots will change the way we live and work. The change will start from limited, well-defined tasks (such as robots that clean the floor and cars that park themselves) but will gradually become more noticeable across the whole of society. Some of the larger cultural questions we will ultimately need to face include whether we are happy to let robots care for our sick and elderly, and who is responsible when a robot that can learn and make decisions for itself causes some kind of problem.

Next week over 20 robots will be arriving at the Museum for the Robotville Festival. Over the next few days we will be posting interviews with some of the roboticists to find out more about their research projects and the cultural implications of these latest developments in robotics.

This is your chance to submit any questions you would like them to answer. Have a read of their biographies below and submit your questions via the comments.

Alan Winfield:

Alan Winfield is Professor of Electronic Engineering and Director of the Science Communication Unit at the University of the West of England, Bristol. He conducts research in swarm robotics in the Bristol Robotics Laboratory.

A robot swarm is a collection of relatively simple robots that interact with each other and with their environment, in ways that are inspired by the behaviour of social insects. Even though the individual robot behaviours are simple we see fascinating group behaviours, such as flocking, emerge.

At Robotville Alan Winfield will demonstrate a swarm of miniature two wheeled mobile robots called e-pucks. Within the swarm, one group of robots are programmed with three simple rules which allow them to flock as a group. Another group of robots artificially ‘evolve’ their movement behaviours by both simulating and sharing solutions with one another whilst they move around.
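The ‘three simple rules’ behind flocking are classically separation (don’t crowd neighbours), alignment (match their heading) and cohesion (move towards their centre). A minimal sketch of one update step follows; the actual e-puck controllers at Robotville may differ in detail, and the radius and weights here are arbitrary illustrative values.

```python
# Minimal sketch of the three classic flocking rules. Each robot only looks
# at neighbours within radius r, yet coherent group motion can emerge.

import math

def flock_step(positions, velocities, i, r=1.0):
    """Compute a new velocity for robot i from its neighbours within radius r."""
    px, py = positions[i]
    sep = [0.0, 0.0]   # separation: steer away from very close neighbours
    ali = [0.0, 0.0]   # alignment: match neighbours' average velocity
    coh = [0.0, 0.0]   # cohesion: steer towards neighbours' centre of mass
    n = 0
    for j, (qx, qy) in enumerate(positions):
        if j == i:
            continue
        d = math.hypot(qx - px, qy - py)
        if d < r:
            n += 1
            sep[0] += (px - qx) / (d + 1e-9)   # push away, stronger when close
            sep[1] += (py - qy) / (d + 1e-9)
            ali[0] += velocities[j][0]
            ali[1] += velocities[j][1]
            coh[0] += qx
            coh[1] += qy
    if n == 0:
        return velocities[i]  # no neighbours: keep current velocity
    # Blend the three steering terms into the current velocity
    # (the 0.05/0.01 gains are arbitrary illustrative weights).
    vx = velocities[i][0] + 0.05 * sep[0] + 0.05 * (ali[0] / n) + 0.01 * (coh[0] / n - px)
    vy = velocities[i][1] + 0.05 * sep[1] + 0.05 * (ali[1] / n) + 0.01 * (coh[1] / n - py)
    return (vx, vy)
```

Notice that no robot knows anything about the flock as a whole – each rule uses only local neighbour information, which is exactly the sense in which the group behaviour ‘emerges’.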

Nick Hawes:

Dr Nick Hawes, is a lecturer in Intelligent Robotics at the University of Birmingham and will be bringing Dora The Explorer to Robotville. Dora is a mobile robot with a sense of curiosity and a drive to explore the world. Given an incomplete tour of an indoor environment, Dora is driven by internal motivations to probe the gaps in her spatial knowledge.

Dora will actively explore regions of space which she hasn’t previously visited but which she expects will lead her to further unexplored space. She will also attempt to determine the categories of rooms through active visual search for functionally important objects, and through ontology-driven inference on the results of this search.

Peter McOwan:

Peter McOwan is currently a Professor of Computer Science in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. His research interests are in visual perception and mathematical models for visual processing.

Peter will be showcasing software he has built that helps robots understand our emotions. This software will allow us to programme robots to understand how we feel, enabling them to respond to our various moods.

Robots are taking over the Museum! From Thursday 1 until Sunday 4 December, 23 robots will be setting up home in the Museum as part of our four day event – Robotville.

Come and meet Concept, a robot created to study how people react to a robotic face and help us understand why we’re drawn to lifelike technology. Watch his expression change as he learns from you and copies your movements.

Or test out Dora the Explorer’s skills. She could be the solution to those misplaced house keys! You can see her in action below.

Many of the robots on display have just come out of European research labs and will be on show to the British public for the first time. Check them out on our website.

The exhibition is divided into six zones which will include an area for domestic robots, swarming robots, humanoid robots and more. And if that wasn’t enough robot geekery for you, the roboticists will also be there to demonstrate their work and answer your questions.

The four day event explores the cultural impact robots have on our day to day life. The idea of robots in the home is familiar through science fiction, but the reality has yet to truly materialise. Until now…

To find out more about the event visit our website for more information and keep an eye on our blog. We will be posting interviews with some of the roboticists in the run up to the four day event and we will be asking you to send over some questions you may want them to answer.