Below this are shown:

- the calculated weights,
- the calculated threshold,
- the actual calculated (suggested) output,
- and below that you can enter the correct output that is to be taught (left button: increase, right button: decrease, orange button: ENTER).

If learning is correct and complete, the training mode stops automatically; to interrupt the training mode manually, press [ESC] (dark grey key). Then you can test the trained patterns by pressing the touch sensors.

To return to learning mode, press [ESC] (dark grey key) again. This lets you retrain new patterns.

You may train, e.g.: a OR b, a AND b, a NOR b, a NAND b, a AND (NOT b), and even more...

...and the following is a feed-forward net with 3 inputs (touch sensors) and 2 outputs. The 3 inputs are wired to both of the 2 neurons, and each of these neurons can be trained to a different output behavior.

I'm just very interested in neural nets, although I've only started to read some material about them. Ford, I wonder whether your net would work with continuous inputs (such as ultrasonic sensors). Furthermore, I guess the feedback layer allows your outputs to take past input values into account. So suppose your net models a collision-avoidance behavior and is properly trained: what would the robot do if it went into a dead end? Would it be able to go backward, extract itself from the dead end, and then continue its cruise? One more question: suppose the robot can track its position (with odometry and/or a compass, gyro sensor, etc.). What kind of architecture would I have to implement to make my robot able to reach a goal position while avoiding obstacles? Should I add another input (like the relative angle to the goal heading), or a second net and a "merging layer" for the output computation?

Thank you in advance

(I'm gonna buy a French/German dictionary )

Tue Jul 07, 2009 5:38 am

Ford Prefect

Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030

Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)

Hi XTinX, thanks for your interest in the NN. 1st, for me it was just a sort of "academic" interest, implementing AI on an NXT.

2nd, consider that backpropagation nets need hundreds or even thousands of learning cycles (e.g., teaching an XOR condition like in my example pattern that Xander tried out). This takes HOURS of automated training, or maybe WEEKS of conditioning and self-training "by doing" (refer to "Skinner Box"). Sometimes the starting initializations lead to dead ends and the learning function diverges with each learning step. Then you'll have to initialize anew and start a new training (for hours or weeks ;) )

3rd, an Elman net surely can learn from properly experienced conditions - but the training effort is exponentially larger.

4th, for an NXT-based NN the effort-to-success ratio is best with feed-forward nets. Training is quick (10-20 cycles), though not every possible condition can be trained (e.g., XOR conditions can never be learned by them). But they can be approximated, which is mostly good enough.

5th, an application like training a labyrinth run may take hundreds of thousands of virtual neurons, but the NXT memory limits the NNs to a maximum of 30 neurons. :P

And 6th, yes, analog sensors like ultrasonic, light, or gyro sensors can be used as inputs.

Now that the miscalculations by RobotC finally seem to have been fixed, I'll try to figure out some useful applications for my NXT NNs.

But now the next problem is: I'll need powerful sensor and motor multiplexers for maybe 30 sensors (NN inputs) and 10 motors (NN outputs), and the best way would be an NXT RS485 network with 4 NXTs and up to ten 4x muxers (one at every NXT I/O port) communicating with the master NXT which runs the neural net. Unfortunately, RobotC hasn't got the C commands you need for such a network (unlike NXC).

I didn't understand why you want so many sensors and motors (what kind of application or behavior are they for?). But anyway, regarding a path-finding-while-obstacle-avoiding behavior, why don't you build a training file (pairs of desired outputs versus inputs) by controlling the robot remotely (like an R/C car) and sampling the I/O value pairs? Then you would run the optimization algorithm that calibrates your net. And finally, since the training file won't be perfect, you could still correct the behavior of the net remotely. Of course the position and heading of the robot would be fed into the net, as well as the goal position. The whole learning mechanism would be supervised, but does that matter?

Cheers

Wed Jul 08, 2009 5:42 am

Ford Prefect

Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030

Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)

Of course my project is something like you mentioned, but recognizing and perceiving the environment is not possible for my robot with just 3 or 4 sensors:

Sounds so sweet and ambitious! Surely the NXT is a little too weak to run such a program. What kind of net topology have you imagined? I wonder how to design such a net (number of layers, where to set up closed loops, etc.)

Sorry Ford, but I meant "topology" instead of "technology", and I was talking about the neuron net, not the communication net. I don't get it - you said that the number of neurons in the net is limited to 30 on the NXT. By the way, I looked at your communication problem with the different NXT bricks. I think one way to exchange data is to send it in real time via Bluetooth. Each slave NXT sends the states of its sensors in a single data packet every 50 ms (for example), without acknowledgement. You just have to take care of the Bluetooth latencies.
