Wow! That's an incredibly interesting concept. That's what I'd like to create eventually. Can you take detailed notes and pictures so that the community can learn from this project? That would be awesome!

From webpage: I'm hoping that MicroRaptor will be able to take longer strides without falling over because it will use a 5-axis IMU to help maintain balance.

I especially want info on that IMU

I'm planning on making my own biped run. In my head it makes sense how to do it . . . but it might be harder than I imagine. What speed are you aiming for? One of the problems I'm having is isolating the IMU from the vibration and shock caused by running . . . I'll probably add some energy-absorbing springs to the legs, not really sure yet . . .

I'll seriously start my biped in May when I have time, but for now I'll dream.

Absolutely, I will be taking detailed notes and pictures, and publishing on my site...

Admin,

I don't know if MicroRaptor will be able to run - I hope it will. In terms of using the IMU, my plan is to let it learn how to use it. The IMU will present a certain pattern while the robot is moving, and if the movement is successful, the expected IMU pattern becomes part of that movement sequence.
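A minimal sketch of that learn-the-pattern idea, in Python with invented names (the real brain code is Smalltalk, and this is only an illustration of the concept): a movement sequence records the IMU trace from a successful run, and later runs are judged by how closely their trace matches it.

```python
# Hypothetical sketch: a movement sequence "learns" the IMU pattern that
# accompanied a successful execution, then checks later runs against it.

def pattern_distance(expected, observed):
    """Mean absolute difference between two equal-length IMU traces."""
    return sum(abs(e - o) for e, o in zip(expected, observed)) / len(expected)

class MovementSequence:
    def __init__(self, name, servo_frames):
        self.name = name
        self.servo_frames = servo_frames   # list of servo goal positions
        self.expected_imu = None           # learned after a successful run

    def record_success(self, imu_trace):
        # Store the IMU trace from a successful execution as the
        # "expected" pattern for this movement.
        self.expected_imu = list(imu_trace)

    def looks_successful(self, imu_trace, tolerance=0.1):
        # Compare a new run's IMU trace against the learned pattern.
        if self.expected_imu is None:
            return False  # nothing learned yet
        return pattern_distance(self.expected_imu, imu_trace) <= tolerance
```

Once a few sequences carry learned patterns like this, a bad match is also a natural trigger for re-tweaking the sequence.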

Exactly. In my mind, there is no reason to "program" in responses to sensors - the robot should be able to learn that stuff by (controlled) experimentation. Eventually, once it gets past a certain point, it should be able to learn independently.

The only limitation on whether MicroRaptor will be able to run versus walk will be due to limitations in the actuators.

I see you redesigned the neck . . . decided not to have vertical head/tail motion?

Any thoughts on how to fix that serious wobble/vibration issue?

How long did it take for you to tweak the motions for that first step to work?

Just a thought . . . you don't have the cam and battery and electronics on it yet . . . you should put 'simulation weights' in the respective places on your walker, otherwise you may find yourself redoing the walking sequences when you finally do load the weights onto it . . .

I removed the pitch servo for the neck, because it was having a hard time holding up the head sticking straight out, and I haven't even added the cameras yet, so there was no chance.

The vibration comes from the servos attempting to keep a specific position. I have been fiddling with the compliance parameters and torque settings to find a happy medium between having them squeal and having them chatter.

The motion tweaking only took a few minutes, maybe 10. Writing the tool that let me do that took several hours.

I'm just working on the software tools right now, so I'm not worried about the dynamics of the system changing once I add the batteries/electronics/cameras. Everything I'm doing right now with respect to movement sequences is throw-away, not the least because eventually it will have an IMU onboard, and it will use that to tweak the sequences itself to maintain balance.

Awwwww, I can't watch Windows Media files . . . So you are using an ATmega128 connected over wifi, and doing your processing on a remote desktop? Are you using some kind of relational database (SQL or the like) to keep your learning data? And on your sensor query concept, are you implementing this as a kind of controller, where when certain conditions exist you start up a thread that does something about the sensor query until resolved, then overlay the outputs of these threads in a subsumption-like architecture? Or am I way off base?

I don't have the wifi set up yet - the wifi module is on its way out west to my brother, who is going to write the SPI interface to it. However, I'm still doing my processing on my laptop, and it's connected to my ATmega128 interface board via USB for now.

Right now, I keep the robot definition in a sort-of object database on disk - eventually, when it gets complicated enough, I'll start using a real OO database, but for now I can just serialize the robot into a file without any trouble.

The sensor query will be implemented pretty much as you describe. Each query will get execution time every cycle to match against the current sensor readings, and if there is a match, it will execute the trigger associated with that event. The event handler will overlay the outputs of the "standard" movement controller in some cases, and completely override them in others.

The nice thing about this mechanism is that you can implement a wide range of interesting behaviors with it, including balance, landmark-based navigation, and goal satisfaction.
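A toy Python sketch of that query cycle (all names invented, not the actual implementation): each registered query is matched against the current readings, and a matching trigger either overlays or completely overrides the standard controller's output.

```python
# Illustrative sensor-query cycle: queries are matched every cycle, and a
# matching trigger's output is merged into (or replaces) the output of
# the "standard" movement controller.

class SensorQuery:
    def __init__(self, matches, trigger, override=False):
        self.matches = matches      # readings -> bool
        self.trigger = trigger      # readings -> dict of actuator outputs
        self.override = override    # True: replace the controller output

def run_cycle(queries, readings, controller_output):
    output = dict(controller_output)
    for q in queries:
        if q.matches(readings):
            result = q.trigger(readings)
            if q.override:
                output = result        # completely override
            else:
                output.update(result)  # overlay on top
    return output
```

A balance behavior, for example, would be a query matching "pitch past some threshold" whose trigger overlays a hip correction on whatever the walking sequence is doing.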

I think you could work out a pretty good architecture if you talked to your brother.

Here's my idea. (I'm a gumstix nut because they are small, but you could do this with any embedded PC.)

So you do the same thing with the ATmega128 but feed it to a gumstix (which means you switch the ATmega128 from Sparkfun to a robostix, but it should be fairly portable). As for the IMU processing and the balance, you can leave that on the gumstix because it would be faster there. For the vision part and higher AI, you can add straight-up 802.11b/g with the gumstix, which should get you faster high-level AI while keeping the lower-level AI functions running without the latency of the SPI interface.

Another pro is that you can run your vision through the gumstix, which will allow the ATmega to react to commands much quicker and not have to push as much data through.

Well, the original plan was to have a gumstix onboard and use that as the "brain". However, after talking to some people, and playing around with my gumstix/robostix, I decided to switch to a PC-based brain with a high-speed wifi connection to the robot. The robostix can talk to the Bioloid bus at 1.0 Mbps, but unfortunately when you're running at that speed, you can't also run the second UART at 921 Kbps, which is as fast as most serial wifi links will go. So we looked at the DPAC wifi module, which optionally comes with an SPI port instead of a UART. The SPI port on the DPAC module can talk at 1.0 Mbps, but because it's a slave SPI device, the microcontroller you talk to it with has to be a master SPI device.

My brother, who gave a lot of design input into the robostix, told me that the robostix can't do master SPI, so we looked at another ATmega128 module, and quickly settled on the one from Sparkfun because it is small, cheap, and brings out all the pins.

The gumstix does not have anywhere near the memory, processing power, or storage to do the level of "brain" control that I want to do. Once my brother gets the wifi module code done, I will have a straight-through, high-speed, low-latency wireless connection to the Bioloid bus, running at 1.0 Mbps. I will be using wireless cameras for stereo vision, which will feed back into separate USB digitizers for video processing. Since my laptop is a Core Duo, I've got an extra core sitting there doing nothing while my Smalltalk brain code runs, so it might as well be doing vision processing.

The ATmega128 will only be handling the actuators on the bus (servos, plus any custom actuators I build) and simple low-rate sensors like IMUs and touch/pressure. Video will be going back to my PC on its own wireless channel.

Wow, that is amazing. Are you doing that for your job/university or just for fun? Are you self-funded, or have you got some big company sponsoring you? And are those servos special ones? They don't look like the ones I normally see. Anyway, that is amazing.

This is a hobby for me, so I'm just doing it for fun, although I treat it like a research project. I am a professional software developer though, and get paid for doing software development (not robotics software, unfortunately).

The AX-12 can switch from a 300-degree positional servo to a full-rotation gearmotor by setting a software parameter.

It uses a half-duplex, one-wire serial daisy-chained bus to communicate with the controller, so each servo is plugged into the one beside it. The bus runs at 1.0 Mbps, which as it turns out is doable by any of the ATmega chips. Because it is a one-wire bus, you have to cross the Tx & Rx pins, but as long as your software is doing the right thing, that isn't a problem. The servos are powered by 9.6 volts, and each has two 3-pin connectors (signal, ground, power).

Each AX-12 has a unique ID on the bus, and you can send messages (commands) to individual servos, or all the servos at once (broadcast). Each servo has a table of RAM and FLASH locations that correspond to parameters, including: sensed position, sensed speed, sensed load, desired position, desired speed, maximum torque, minimum torque, and various compliance parameters.

You can also set many settings in each servo, including temperature limits, rotation limits, and a bunch of other things.
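For concreteness, here is a Python sketch of building an AX-12 instruction packet, following the published Dynamixel packet format (header, ID, length, instruction, parameters, inverted-sum checksum); the goal-position helper writes into the register table mentioned above. Treat the details as a sketch, not a drop-in driver.

```python
# Sketch of an AX-12 instruction packet (Dynamixel protocol):
# 0xFF 0xFF <id> <length> <instruction> <params...> <checksum>
# where length = number of params + 2, and the checksum is the inverted
# low byte of the sum of everything after the two 0xFF header bytes.

WRITE_DATA = 0x03
GOAL_POSITION_ADDR = 30  # low byte of goal position; high byte is at 31

def ax12_packet(servo_id, instruction, params):
    body = [servo_id, len(params) + 2, instruction] + list(params)
    checksum = (~sum(body)) & 0xFF
    return bytes([0xFF, 0xFF] + body + [checksum])

def set_goal_position(servo_id, position):
    # Position is 0..1023 across the 300-degree range, sent little-endian.
    lo, hi = position & 0xFF, (position >> 8) & 0xFF
    return ax12_packet(servo_id, WRITE_DATA, [GOAL_POSITION_ADDR, lo, hi])
```

Reading sensed position, speed, or load is the same framing with the READ_DATA instruction and the appropriate register address.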

The real kicker is that the AX-12 has 222 oz-in (16kg-cm) of torque at 9.6V.

I've got software (freely available) that runs on an ATmega128, and acts as a pass-through from your PC's serial port at 115,200 baud to the AX-12 bus, running at 1.0 Mbps. My brother is working on a piece that interfaces an ATmega128 (or a 168) to both the AX-12 bus and to an SPI wifi module, so we will be able to get a full 1.0 Mbps wireless connection from a PC to a 20-servo robot.
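The pass-through itself is conceptually just byte copying between the two sides, with the AVR's UARTs handling the two baud rates. A toy Python model of that loop, with queues standing in for the UART buffers (names invented, and the real timing concerns of a 1.0 Mbps bus are ignored here):

```python
# Toy model of the serial pass-through: pending bytes from the PC side
# are copied to the AX-12 bus side, and vice versa. On the ATmega128
# this is two UARTs at different baud rates; here they are byte queues.
from collections import deque

def pass_through(pc_rx, bus_tx, bus_rx, pc_tx):
    while pc_rx:
        bus_tx.append(pc_rx.popleft())   # PC -> AX-12 bus
    while bus_rx:
        pc_tx.append(bus_rx.popleft())   # AX-12 bus -> PC
```

On the real hardware this runs continuously, so the PC effectively sees the servo bus as a local serial port.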