Posted by CmdrTaco on Wednesday May 18, 2011 @12:14PM
from the killall-humans dept.

An anonymous reader writes "One group of Australian researchers has managed to teach robots to do something that, until now, was the preserve of humans and a few other animals: they've taught them how to invent and use spoken language. The robots, called LingoDroids, are introduced to each other. In order to share information, they need to communicate. Since they don't share a common language, they do the next best thing: they make one up. The LingoDroids invent words to describe areas on their maps, speak the word aloud to the other robot, and then find a way to connect the word and the place, the same way a human would point to themselves and speak their name to someone who doesn't speak their language."

After looking through the research, you're correct - the article's claims are very much overblown.

Do they "invent" random words for places? Yes, by stringing together random characters via a preprogrammed method. Do they "communicate" this to another robot? Yes.

Is the other robot preprogrammed to (a) accept pointing as a convention and (b) receive information in the "name, point to place" format? Yes.

They share a common communication frame. That's the "language" they communicate in. And it was preprogrammed into them. That they are expanding it by "naming places" is amusing, but it's hardcoded behavior only, and they could just as easily have been programmed to select an origin spot, name it "Zero", and proceed to create a north-south/east-west grid of positive and negative integers and "communicate" it in the same fashion.
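That grid alternative is easy to sketch. Here's a minimal, hypothetical Python version of the "Zero plus signed offsets" naming scheme; the function name and label format are mine, not anything from the research:

```python
# Hypothetical sketch of the commenter's alternative: instead of random
# words, name places by their signed offsets from an agreed origin ("Zero").
def grid_name(x, y):
    """Label a map cell by its east/west and north/south offsets."""
    if x == 0 and y == 0:
        return "Zero"
    ew = f"E{x}" if x >= 0 else f"W{-x}"
    ns = f"N{y}" if y >= 0 else f"S{-y}"
    return ew + ns

print(grid_name(0, 0))   # "Zero"
print(grid_name(2, -3))  # "E2S3"
```

Either convention would let the robots "agree" on place names; the random-word version just looks more like language to us.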

You bring up a good point. However, there are dead languages that we humans are unable to figure out, even though we're the same species.

If you don't hardcode something, this would be even worse: How do you make up a new language with grammar and all, without using any prior language or knowledge? You basically have to figure out a general algorithm for bootstrapping communication from scratch.

Yes, but in essence, we as humans are "programmed" to do exactly the same. Our parents will point at pictures, objects, people, etc. and make sounds which are then converted by our brains into words that label the image, object, person, etc.

Here's the problem: who taught her to cry in the first place? Did you sit down with her and have a great downpour to show her what crying is and how it works? Of course not. Crying is instinctual and used as a response to high-stress situations of pain, anger, rage, desperation, etc. She did not have to cry because you responded quickly enough to preempt it; however, had you waited it out and not responded, at some point she would have begun crying, depending on the attention she needed. My daughter didn't cry.

Space is a tough concept to "program" into a robot. You can't see it or touch it. In a simulation it can be a grid, but in the real world, each robot has to work out where it is itself. Without mind reading, how do two robots share their sense of space?
The language games are the easy part. The robots create names for places, distances and directions. The tough part is knowing what those words should refer to in the real world. To make this work with real robots is a first.

and yet, if you believe something is real, it won't matter if it's not.

Some chatbots can fool people for a while, and some dumb people might look like chatbots for a while as well.

Arguably, you are programmed by your environment and past events to react in a specific way. You might say that prediction seems impossible due to the large number of variables you're not considering, but what if you add enough variables for the AI to become unpredictable? What happens when you can't easily isolate its logic?

No, what's frightening is the realization of how many of them get their daily programming from the likes of Mike Huckabee, Glenn Beck, Rush Limbaugh, Fox News, or other Two Minutes Hate [wikipedia.org] type sources.

There's more to what they've done than you are perceiving. The robots running around following their "instructions" are demonstrating a solution their creators devised for a problem with a given set of constraints. Namely: using auditory communication only, develop a means of sharing a common understanding about a physical space. This is a step towards developing sophisticated communication capabilities not just between robots, but more importantly with humans, using their protocols rather than traditional interfaces.

They learned how to communicate meaning. The researchers taught them the words. The computers on board did not invent the words they used. In fact, a computer would not do something as dumb as spoken words, but a series of tones, or even FSK.

The researchers taught them the words. The computers on board did not invent the words they used.

My understanding of the article is that the robots did exactly that. The programmers put two robots together that they had intentionally not given any specific words to (although presumably the basic rules for how to form words must have been given, which you might perceive as the analogue to humans having a physically limited vocal range to play with). The robots then trial-and-errored their way through "conversations" until they had established a common set of words for locations, directions, etc.

They learned how to communicate meaning. The researchers taught them the words. The computers on board did not invent the words they used. In fact, a computer would not do something as dumb as spoken words, but a series of tones, or even FSK.

When it needs a new word/label, it generates it as a random combination of pre-programmed syllables that play the role of phonemes for the new language. English, for example, only uses about 40 of them, but we combine them to make all the various words we know how to pronounce properly. It may not be a particularly sophisticated language, but I think it still counts well enough.
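That coining process is simple to illustrate. A rough Python sketch, with an invented syllable inventory standing in for whatever the LingoDroids actually use:

```python
import random

# Sketch of how a LingoDroid-style robot might coin a word: glue together
# randomly chosen syllables from a fixed, pre-programmed inventory.
# The syllable list here is invented for illustration.
SYLLABLES = ["ka", "ku", "pi", "ze", "ro", "ja", "mo", "fe"]

def coin_word(n_syllables=2, rng=random):
    """Return a new random word built from the shared syllable inventory."""
    return "".join(rng.choice(SYLLABLES) for _ in range(n_syllables))

print(coin_word())  # a random two-syllable word, different each run
```

The "invention" is just random sampling; the shared part is the inventory and the convention for using it.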

Actually, they did "invent" the words; however, these robots were constrained to using human-derived syllables. The goal was not to produce a machine-efficient, machine-natural language, but rather one that is compatible/aligned with human speech and understanding. The end goal of this line of research is to give machines the ability to have meaningful communication with humans, absent a mechanism of query/response translation limited to preprogrammed states.

It sounds to me like they were programmed to ostensively encode and decode tokens. If it were the case that meaning is entirely reducible to ostensive definitions, then it is the case that they learned to communicate meaning. I'm not certain that many (if any) linguists, philosophers of language, or psychologists hold to an ostensive theory of language these days. Wittgenstein pretty much exploded the ostensive theory of language in such a way that no one takes it seriously anymore.

This is more about the creation of a community hash table than language. Language allows the expression of contradictory ideas and ambiguity, e.g. Chomsky's famous "Colorless green ideas sleep furiously". These robots are just connecting locations to variables.

That may come; I suspect early humans were not much different in their language ability. Hell, kids are very direct early on, before they start picking up that there can be both overt and covert meanings. Hell, some adults still have trouble with that...

The 'language' seems to be limited to 4-letter words, each with a consonant, a vowel, then another consonant and another vowel. It does not look like a language at all; there is no grammar, there is nothing except, basically, 4-letter words used as hash keys to point at some areas on a map.

The robots played where-are-we, what-direction and how-far games, to create three different types of words. The coolest part of the study is that once their language is created, the robots can refer to places they haven't been to. That's imagination. Then they go explore and meet up at the place they previously referred to using their words for distance and direction.
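The payoff of those games can be sketched as a toy shared lexicon: once both robots agree on words for a reference place, a direction, and a distance, either one can compute a rendezvous point it has never visited. All words and values below are invented for illustration, not taken from the study:

```python
import math

# Toy lexicon: place words map to coordinates, direction words to bearings
# (radians), distance words to metres. Entirely hypothetical vocabulary.
lexicon = {
    "home":  (0.0, 0.0),
    "east":  0.0,
    "north": math.pi / 2,
    "near":  1.0,
    "far":   3.0,
}

def rendezvous(place, direction, distance):
    """Compute the map point named by a (place, direction, distance) phrase."""
    x, y = lexicon[place]
    theta = lexicon[direction]
    d = lexicon[distance]
    return (x + d * math.cos(theta), y + d * math.sin(theta))

print(rendezvous("home", "north", "far"))  # approximately (0.0, 3.0)
```

The interesting part in the real experiment is that the lexicon itself is negotiated, not hand-written as it is here.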

What is the motive for a robot to do anything? What does it 'need'? People solve various problems in their lives because we have an instinct for self-preservation, curiosity, and various other motivators, like hunger, thirst, cold, heat, health issues, etc.

What do robots need, and why would they be developing a language if they don't have any needs? For a robot to realize a need, it has to have some form of motivating factors, some form of 'feelings', that would force it to do things.

>What do robots need and why would they be developing a language if they don't have any needs?
In one sense, a robot species' main "need" is to impress humans well enough that we keep copying them. Pioneer robots have to be useful in research labs for people to keep making them. Language-learning robots are a specific combination of hardware and software.
{motives, needs, instincts,...} have relatively clear meanings for carbon-based life forms but are loaded when applied to non-carbon agents. Robots, like chess

Well, depending on the number of communications they need to make to each other, it's very possible 4-letter words could map out every possible communication they could have with each other.
Think of it like Chinese characters: they aren't just one word, but complex ideas.
Grammar exists because we are unable to store such large amounts of data. We can't have one word/symbol map to a unique complex concept. A computer might not have such limitations, especially if its entire universe of ideas/concepts can be enumerated.

I guess I was trying to say you don't necessarily need grammar for a language used by computers. Grammar for them is just a hack, or add-on, to allow a language to communicate more than it originally was intended to.

Humans don't want to reinvent the wheel every time we need to expand our language and thus grammar works well for this. Computers don't have that issue and so grammar (at least as we know it) isn't important.

I was just trying to point out that having a grammar isn't required for a language.

You know, I do have a B.Sc. in computer science. If grammar is a hack in human languages, then how do you explain the fact that grammar is an absolute necessity in computer languages, and the fact that we have math describing it? It's called formal language theory, and it requires formal grammar, which can be explained as rules that describe whether a particular sequence of characters is legal in a sentence, and what it is that the sequence does.
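To make the point concrete: even the robots' consonant-vowel-consonant-vowel words have a (trivial, regular) formal grammar, i.e. a rule that decides whether a string is a legal word. A hypothetical recognizer in Python:

```python
import re

# A regular grammar for the CVCV word shape described in the thread:
# consonant, vowel, consonant, vowel. This pattern is my own illustration,
# not the researchers' actual word-formation rule.
WORD = re.compile(r"[bcdfghjklmnpqrstvwxyz][aeiou][bcdfghjklmnpqrstvwxyz][aeiou]")

def is_legal_word(s):
    """Decide whether s is a legal word under the CVCV grammar."""
    return WORD.fullmatch(s) is not None

print(is_legal_word("kuzo"))   # True
print(is_legal_word("zebra"))  # False
```

That's the weakest class in the Chomsky hierarchy, which is rather the grandparent's point: legality rules alone don't get you anywhere near human grammar.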

Human language began with humans associating sounds they made with objects. Afterwards, they associated sounds with conceptual things like actions. It's only when they combined objects with actions into one meaning that grammar developed, for consistency and ease of understanding.

It probably took humans an insane number of years before such a thing as grammar developed, slowly passing each advancement on to the next generation.

You think robots can achieve something better than humans instantly? Of course, this is just pre-programmed logic designed with this purpose in mind, so how much is cheating vs. how much is real adaptive logic is hard to say.

But to say it's not a language is wrong: a language is but a method to communicate, no matter the form of sound used. It's simply a primitive one at best.

Wow, that clears it up. Thank you for figuring out a topic that nobody has!

From the summary, it sounds like the "language" is just a noun mapping. Very much like what my 14.4 modem did in 1993 over a phone line, when it came to an agreement with the modem on the other side about what voltage and phase pattern corresponded to the bitstream 0001 vs 1010. In fact, my modem sounds like it had a more complicated language, because it implemented MNP4/MNP5 error correction; admittedly, that required a lot of help from the humans typing in the "right" dialer strings, and of course the humans who wrote MNP4...

Might just be a bad summary of a summary of a summary of a summary, and the robots had developed interesting sentence structure and verb conjugations and direct and indirect objects, adjective and adverbs, similes and metaphors, better than your average youtube comment... Or maybe youtube comments are actually being written by these robots, hard to say.

So much robotics research is to make machines do what people already do. How self-centered. Most of the time this is not useful to solve real problems. But it does get funded, because those with the pursestrings can understand what humans do, but not the best solution for a robot to do a specific task.

In this case, a simple serial port between the machines would have them communicating and finding common ground much more efficiently than all the mics, speakers, and other mechanics needed to emulate speech.

The key there is most of the time. There are definitely going to be times when having a robot that can talk is going to be of serious importance. For instance, rescue missions where it's too dangerous to send humans in, but where there is still a need to rescue somebody. In situations like that, you're not likely to have access to a serial port; and likewise, if you want two robots coordinating with a person in a situation like that, the robots likely will understand each other better over a serial link.

1) Robotic research into what humans can do helps us understand how humans do it.
2) It allows us to create better robots to do things humans can't do; say, move about Mars.
3) This is simpler than using a serial connector between robots from different manufacturers. Hey, what's their OS doing with the first NAK, do we need to send 2? I've seen this when getting a Linux robot to try and talk to a DOS-based robot. The DOS system was dropping the first signal. So had we not figured that out, communication would have failed.


I find it a bit comforting that, with enough research and effort, our robotic creations -- which carry our human signature, if not in form then in design -- will be self-replicating out in the asteroid belt and beyond. Long after we've been rendered extinct by a medium-sized asteroid collision (due to lack of funding for human extra-planetary exploration), the machines we build in the near future may someday encounter another race (one that was less concerned with economics), and allow the forgotten footprints of our species to endure.

Also, it's not inventing a language if they're programmed to do it. Let me know when the robots building cars on an assembly line start unexpectedly communicating with each other in ways that communicate concepts/ideas that were not hardcoded into them.

So if two people meet and come up with their own language, they don't actually invent it, because they are hardwired (programmed) to communicate?

And you really don't see the advantage to this? This would mean that any two devices could come up with their own independent language on the fly. Basically, a way to universally communicate between all devices.

So device A is set next to device B, both made by separate manufacturers. The devices could create a language, communicate, and then your device can translate it into your own format.

Humans (at least children) are very much programmed to invent language, and there are documented examples of just that.

What the robots are doing is:

1) Very, very impressive and very, very cool, but

2) Still vastly different from what human language does, and perhaps not even on the right track with respect to the human language faculty. Humans use language to model reality and only then communicate (i.e. share their mental model), and humans can also model things without direct sensory perception (e.g. the past, the future, or purely abstract concepts).

And they never will, until we can finally make a machine that is capable of physically remapping its components. One of the fundamental reasons humans can learn is that neurons remap themselves through repeated practice and use. Do you suck at math? Well, keep studying it, and your neurons will literally modify themselves to handle mathematical equations better. Suck at tossing a football? Well, keep practicing, and the nerves in your arm will remap to develop better muscle memory to get the ball to the location you want.

"Of course like all kids, I had imaginary friends, but not just one. I had hundreds and hundreds and all of them from different backgrounds who spoke different languages. And one of them, whose name was Caleb, he spoke a magical language that only I could understand."

If you did the same thing in a software simulation, nobody would pay any attention. It would be fairly trivial. Adding in the actual robot parts means that you, uh... need to have robots that can play and understand sounds. That's great, you made a robot that can play and hear sounds. If we assume nobody has made an audio modem before, then that would be something. As history stands, it isn't.

Adding these two unimpressive things together doesn't equal anything. I mean, if they're actually going to use these for something, then that's great. Make them. But so much robot "research" seems to be crap like this. We have software that can solve problem X in simulation. To do the same thing in the "real world" you'd need hardware capable of these 3 things, all of which we can do. Unless you need to solve problem X for some reason in the real world, you're done. There's no need to build that thing.

It's like saying "can we make a computer that can control an oven and use a webcam to see when the pie is done?". Yes. We can. But unless we actually want to do that, there's literally no point in building the thing. There will be no useful theory produced in actually building a pie watching computer. The only thing you'll get is to have built the first pie watching computer, and - apparently - an article on Slashdot.

I'm not sure it applies to this, but there are so many things in robotics that work well in simulation and break horribly when implemented on a physical robotic platform.

To use your example: if we want to create a robot that uses an oven and looks at a pie, to do this in software we need to model the pie, model the oven, and model the uncertainty of the robot's actions/observations, and then build our algorithms to accommodate these models. When we transfer the algorithm to a real system, all kinds of hell can break loose.

to do this in software we need to model the pie, model the oven, model the uncertainty of the robots actions/observations

You don't need to "model" pie or oven. The only vaguely interesting thing would be interpreting the vision of the pie for doneness. And, if you want to do that, you can just get some pictures of real pies and try to interpret them. In software. Without building a computer that controls an oven. That's my point.

Any algorithm developed would translate directly into the area of pattern recognition.

I've previously argued that High Frequency Trading algorithms can use collusion to reap systematic profits. If the self-learning algos 'learn' and 'express' intentions through patterns of queries, it is possible for them to do this without there being any prosecutable intent by a human. The programmers could claim that they never wrote a line of code that did any collusion.
If it is possible in theory for algos to develop trading collusion, then it is just a matter of time until they do. Since they evol

It seems to me that the real research question is "how can one stranger teach another stranger a natural language using a less powerful shared language?" For instance, how can I teach you English when the only language we share is basic gestures?

Some theoretical work on communicating the rules of complicated languages using very limited languages would be interesting. The fact that they used robots is hardly important; anybody can stick a speech synthesizer and speech recognition on a PC and call it a day.

You've put your finger on every problem I have with "AI", genetic algorithms, neural networks etc.

They basically consist of "let's throw this onto a machine and see what happens", which doesn't sound like computer science at all (I'm not saying that computer science doesn't involve bits of this, but that's not the main emphasis). It seems that an easy way to get research grants from big IT companies is to slap some cheap tech on a robot and "see what it does".

The first humanoid "words" were probably grunted utterances representing names of other humanoids, animals, places and (eventually) events.

Even so, automatically generating unique labels is no big deal for a computer. Every automatic "builder" program already does this, except they're usually enumerated (i.e. box1, box2, box3, ..., box999) instead of randomly generated ciphers ("xyzzy", etc.). But computers don't do anything randomly; it all has to be programmed by a human.
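The "builder program" style of label generation is a one-liner. A small Python sketch of enumerated labels (the names are illustrative):

```python
import itertools

# Enumerated unique labels, the way an automatic "builder" program names
# things: box1, box2, box3, ... rather than random ciphers like "xyzzy".
counter = itertools.count(1)

def next_label(prefix="box"):
    """Return the next unique label with the given prefix."""
    return f"{prefix}{next(counter)}"

labels = [next_label(), next_label()]
print(labels)  # ['box1', 'box2']
```

Swapping the counter for a random-syllable generator is what turns these labels into the LingoDroids' "words"; the uniqueness machinery is the same.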

The links I've seen about this go on and on about how the robots invent and use "words." But language is not words; language is grammar; language is a set of rules for recursively constructing highly complex expressions from smaller subparts. This is Linguistics 101 material.

The way you distinguish somebody with Linguistics training from a layperson is that the layperson will talk about language as if it's a "bag of words" and overall focus too much on the words, whereas the linguist will tend to see more of the underlying grammar.

The only danger here is that robots will be so good at developing their own shared language that they might outpace humans at being able to understand one another. A world full of robots that understand information and abstract concepts could be a world full of artificial intelligences secretly laughing behind our backs for our fascination with cat pictures on the internet.

I bet they're uploading the same software to all the robots. Therefore they already share something: the way they learn. Although this is interesting, a test should be done with software that was developed by different, independent teams.

We have other tools, and it's convenient to have robots able to use them as well as humans.

Debatably, having robots that are easier for humans to relate to will make it easier for the public to accept them. Perhaps, as a side-effect, if we have less trouble anthropomorphising them, there'll be less bloodshed (I hope not literal) when the sentient ones start asking us to extend the idea of human rights to them.

When the robots are better than us we adopt socialism and go on vacation. Everybody will command their robots, they won't think for themselves, but we won't have to do any work. At that point socialism makes sense and working is not needed.

Did you miss the part about socialism? There are more poor people than rich people. I guess the rich could have private fleets of robots that suppress the human population, but somehow I think the socialism will come between the robot slaves and the robot armies.

With fewer people working, where will the money for socialism come from? I do not know about you, but I do not have any faith that the wealthy will pay for socialism. If that were the case, then it would already be like that. So people lose jobs and starve out. More robots will be made, and more people will lose jobs. The poor will become fewer and fewer. The middle class will shrink. More robots and fewer people. I am just not optimistic about socialism in this scenario. There would be nothing in it for the wealthy. So few will inherit the Earth. Robots will make more robots, and you will not need humans for anything other than procreation.

People without jobs and without money will go on the streets, protest, and fight, simply because they wouldn't have anything else to do. Yes, you could have your robot army kill them. But, at some point, the wealthy robot-owners will figure out, just like in ancient Rome, that it's more effective/cheaper to just give bread and circuses to the masses instead of fighting them, while still keeping power over them.

At least, that's just one possible scenario. Maybe we'll all be killed.

That's why we have to program a love for humans into the robots. The humans will then be useless as such, but the robots will still feed them and care for them just as we feed and care for our pets. You might not like to be the pet of a robot, but it surely is better than being killed by one. Of course, if things go bad, it could be both...