I have all the background skills you could possibly want except C programming and microelectronics, so this is going to be a real struggle for me. Browsing these forums is a sharp reminder of the bridge I have to cross.

I pretty much know all the things that have to be done (in principle), except for the fine detail in these two areas. I have a budget of about 10,000 pounds I'll be spending over the next 3 years. Not a lot for such an ambitious project, so I need to make every penny count.

Before I start handing out money hand over fist I need to "get with the program" so I start on the right footing and stay there to the finish line. So your input will be most welcome.

Where I am at / cards in my hand:

25 years research in Strong AI theory
Hobby level coding in BASIC
Hobby level electromechanical engineering (physically building the android won't be a problem, I'm pretty good with my hands)

A network of Windows PCs, 16 in all (Win XP, including a couple of laptops)
A well equipped hobby level workshop & tools
A really smart piece of brain software that's way ahead of the game, half a million lines of code plus... stopped counting long ago

...which is the reason I'm putting myself through the torture of building a full-on android... my brain needs a body in order to learn.

It still suffers from some cognitive cul-de-sacs due to the fact that it has spent most of the last 3 years living in a virtual reality... let's just say it needs to get out more (hence the urgent need for a body!)

Sounds very interesting. How is your AI software currently interacting with the world, if at all? What are your initial plans for having the robot interact with its environment? I would think robot vision would be one of the key systems to get working.

Can you tell us a bit more about your AI software? Sounds very involved.

The brain's all there, ready to rock and roll; I just need to get the sensor array up and running. Which is gonna be a world of pain because interfacing hardware with software is an area where I have zero detailed knowledge. An interpreter (software modules) sits between the brain and the universe... virtual worlds or real worlds, binary or whatever is expressed in high-level terms such as colour, volume, distance; this is all the brain perceives and knows about.

nitty gritty

The brain needs to know in "high level terms" what its body can do, e.g. I have 2 arms, x degrees of freedom, and through play it takes it from there. Brain output: this servo left a bit, this one hold, etc... result = oh, that's what my arm can do, I'll go with that.

Minimum requirement: I need say a dozen servos, 1 camera, and a dozen sensors of varied types. That will give me a one-armed, one-eyed bandit sitting in a Davros chair. Being one-eyed and one-armed in a wheelchair still means you are human... but you ain't gonna choose football as a topic when you're sat in the quizmaster's chair.

Because the brain software is a high-level input/output device (it doesn't do binary), software "interpreter modules" have to sit between it and the usual bundled software (DLLs, drivers and EXEs) associated with the hardware. Any and all hardware can be used, bolted on and removed as needed. Each hardware device (board) needs its own interpreter.

Q: given the minimum spec, the need for later upgrades AND the need to talk to the hardware via an interpreter, what should I buy?

There are dozens of boards that can operate servos etc. The ones I want are the ones that will be easiest for the interpreter to talk to. I will have to write the interpreter or pay someone else to do it. I haven't bought any kit because it will go straight in the bin if it's a pain in the butt to talk to. I have no desire to wade through a sackful of electronics; that's why I'm asking for advice.

Buying kit that's easy to talk to, from the interpreter's perspective, is what it's all about. 1 big board or 20 little ones, it's all the same to me.

If 1 big board throws a wobbly I'll need to reboot (huge pain). If individual boards freeze it's less of an issue (oh well, just lost my arm, never mind, I'll deal with that later).

I love massively ambitious projects like this! I can only hope that something comes out of it. I think, however, your biggest barrier is going to be your programming ability, since BASIC is very limited, and to that end may I suggest that you try and find someone with a programming background with whom to collaborate? You can both work on the code, but if you can have one person who is already proficient in C but perhaps lacks machining or electronics skills, then you will have a much better final product in my opinion. Also, you may want to look into more powerful hardware for such a project, such as an FPGA and a laptop. An android or human-form robot is a very challenging project, and labs across the world have been working on it for decades (if not the last century) with minor success. Making a robot that walks and interacts like a real human is as much an undertaking now as R. Daneel Olivaw was in the Asimov robot books, a series you should definitely read if you haven't already!


Seeing as you are running a laptop, have you considered using a USB servo controller? You might even consider buying an Axon, wiring it up with servos (really easy) and using something like Webbot's gait designer code for the Axon. All you need to do is get your interpreter to send text out the COM port using some specific tags to tell servos to go to a position.
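To make the "send text out the COM port using some specific tags" idea concrete, here is a minimal sketch in Python. The `#<channel> P<pulse-width>` tag format is invented for illustration; the real format would be whatever the controller firmware expects or whatever you define yourself.

```python
def servo_command(channel, position_us):
    """Build a one-line text command for a servo channel.

    Hypothetical tag format: '#<channel> P<pulse-width-us>\n'.
    A real controller's protocol may differ.
    """
    if not 0 <= channel <= 31:
        raise ValueError("channel out of range")
    if not 500 <= position_us <= 2500:
        raise ValueError("pulse width out of range")
    return "#{} P{}\n".format(channel, position_us)

# The interpreter would write these strings to the controller's port,
# e.g. with pySerial: serial.Serial("COM3", 115200).write(cmd.encode())
```

The point is that the interpreter only ever deals in readable text lines, which keeps the brain side board-agnostic: swap the board, rewrite one small formatting function.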

I think a Kinect would be awesome in this project. It's USB into your computer as well for control, and there are now some open source libraries (none for Windows yet).

I am a robotics hobbyist and I started small years ago with Lego Mindstorms. After a while, I wanted to build my own butler robot (see my Eric robot) but I lack your skills with computer software. I also had various problems with the hardware, and it was too costly to develop a full-size robot. I decided to go for a mini version (see my MiniEric robot) that is cheaper to develop, and after it is finished I can scale it up using the same modules and pieces of code in them.

My take on this project is that it is best to modularize it, because if something breaks it's easy to replace, and it's easy to fix coding malfunctions. You need 2 types of boards that use a USB interface with the computer: sensor boards and servo boards. I suggest taking a look at the Phidgets boards, as they are designed especially for this purpose. If you want to build your own and design your own serial protocol, I suggest using AVR microcontrollers and the Arduino IDE to program them. However, if you have issues with coding in C, you may want to take a look at BASCOM-AVR, which is a BASIC compiler for AVR micros. The advantage of using Arduino is that you have all the libraries written for you, so you don't have to deal too much with the internals of the microcontroller, although it is a good idea to get to know why things work one way and not the way you might want them to.

So, I would use a servo board + a sensor board per body limb (a pair for the head, a pair for the arm, etc.). Use a USB hub to connect all boards to the laptop, then create your software that reads the sensors and controls the servos. It's that simple. One Axon board can replace one pair of Phidgets boards, but you need to write the code inside the Axon to communicate with the computer. WebbotLib and the Gait Designer will help you get the servos moving, but you need to write your own protocol for sensor reading (the hardware interface is done by WebbotLib). Similarly, you can use an Arduino Mega plus shields (you need a sensor shield and a servo shield), and you can use the Firmata code for computer control of the Arduino.
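As a sketch of what "your own protocol for sensor reading" might look like on the PC side: suppose each board streams one text line per reading, like `S0:512,S1:300` (this format is invented here for illustration, not taken from WebbotLib or Firmata). The parser is then a few lines:

```python
def parse_sensor_line(line):
    """Parse a telemetry line like 'S0:512,S1:300' into a dict.

    The tag format is hypothetical; a real board's protocol is
    whatever you define in its firmware.
    """
    readings = {}
    for field in line.strip().split(","):
        name, _, value = field.partition(":")
        readings[name] = int(value)
    return readings
```

One such parser per board type keeps each "interpreter module" tiny and independently testable, which fits the modular, one-board-per-limb plan above.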

There are many ways to achieve the same end result; it all depends on what you are most comfortable using. Good luck with the project! I would like to see how it goes.

P.S. I see that the Phidgets can be used with various Basic flavors, see here.

As you can see in the video, I have a network of desktop PCs acting as a remote mind. (If I get sponsorship, say 6 or more fast laptops, I'll mount them on a motorized tea trolley.)

Then I will be able to take it on field trips or chat shows, what have you. Android wandering around some way ahead, connected via wireless.

I also have a self-contained mobile platform (I can cram 16 laptops into a Dalek-like chassis); that way it's all neatly self-contained. I hate trailing wires... it's so like a cheap toy grandma bought you from the pound store. It's about the quality of the mind; if you really want Data to waste his time making the tea and hoovering, let me know. Just write me a 6 figure cheque and I'll have it delivered to your door in about 18 months.

I have only just started on the hardware for the sensor array, and will definitely have to get some help. Building, teaching and tweaking the brain is... a very demanding business of itself. Now if Mr Honda will let me borrow ASIMO for a long weekend... Still, I'm glad they did the body and not the brain. They have me slightly worried with all the money they are chucking at it, but with 25 years of focused slog under my belt I know from experience they will have to restart from scratch more than once on the brain front. I'd say I've got at least 5 years on them, probably more like 10 or 15.

I have the whole AI evolution thing mapped out for the next 50 years. I know where it's going and how it will get there... and yup, I started reading Asimov when I was 7 and have been chipping away at the problem of positronic brains ever since, so really I can say I have had my hands dirty for almost 40 years now. Living it, breathing it, it's all second nature to me now. = could waffle on forever

@blackbeard: the brain's sorted; it's developing fast in a VR world, false data that mimics our reality (I have built a whole universe, laws of physics etc... well, out to the Oort cloud anyway... internal map capability for navigation). It needs the sensor array so it can step into our world and say hello. Hardware is cheap; I just don't have the C/digital electronics... electromechanical & DIY, yes, by the bucketload.

Walking & arm movements and stuff is pretty straightforward, just a bit of trial and error/practice. I've got all the software on the brain side of things ready to go; I know the hardware will be glitchy and laggy. The brain software is self-calibrating in this respect. If it's injured it will carry on as best it can, nursing an arm if needs be. It stays alive as long as the one machine running the core (instinct) brain doesn't crash. Cut off its head and the knee-jerk reaction is still there... actually it's programmed to freeze, but if it starts to fall after the higher brain crashes it will tuck into a fetal position. Gonna use rubber mounts and suspension components to reduce hardware damage... I can imagine setting up the hardware is probably a bigger pain than most people expect; when it comes to voodoo electronics there are always hidden ghosts in the machine... weeps, the horror, the horror.

I'm not interested in its physical capabilities as such; the android's body is just for exploring the world (sensory feedback), so my efforts will be directed to this end. It can learn the world by watching, listening to me wave stuff around, banging it, what have you, but it's easier if I just give it some toys to play with and let it get on with it by itself; it doesn't need me around to do that. "Is toy" = safe for it to touch and play with. If I shout "oi!" really loud it gets a pre-programmed slap on the wrist; that way I don't even have to get out of my chair. It's for safety reasons too.

In fact any loud unexpected noise will cause it to freeze; that's one of the first instincts I built in. If it's in play mode it's allowed to do stuff like look for the noise or look for my face... all other modes cause it to freeze (except Terminator mode = do main task priority 1 or die... this is just for testing, and requires a keyed-in password + vocal command... it then issues a loud warning for 10 secs: all personnel stand clear). Each mode has a set of presets, restricting it physically, emotionally or mentally etc.; it can only adjust itself within the upper and lower bounds fixed by the mode. This should give you an idea as to the depth & breadth of the project. The precautionary principle is a wise choice with an autonomous machine.
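The "each mode has a set of presets... upper and lower bounds fixed by the mode" idea can be sketched as a simple clamp table. The mode names and values below are illustrative only, not taken from the poster's actual software:

```python
# Hypothetical per-mode bounds: parameter -> (lower, upper)
MODES = {
    "play":   {"max_speed": (0.0, 0.6), "volume": (0.0, 1.0)},
    "freeze": {"max_speed": (0.0, 0.0), "volume": (0.0, 0.2)},
}

def clamp_setting(mode, name, requested):
    """The brain may request any value; the active mode clamps it
    into that mode's fixed bounds before it reaches the hardware."""
    lo, hi = MODES[mode][name]
    return min(max(requested, lo), hi)
```

Keeping the bounds outside the learning system, in a dumb table the brain cannot edit, is one way to honour the precautionary principle: no matter what the higher brain decides, "freeze" really means zero speed.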

Humans can't obey the 3 laws of robotics; they are just not smart enough. You need HAL 9000-like intelligence before they become a reality, but a machine that smart could fool you for ten thousand years and then stab you in the back. The 3 laws as laid down by Asimov are not implementable. They require perfect self-control and near-perfect knowledge. It's a comfortable illusion... bit of a bummer and one of my darker discoveries.

Excited!?... I'm pooping my pants! I've already had to dumb it down in certain areas... had more than one spook-out moment, I can tell you... (in my eager haste I had it running before it could walk; it starts to get messy (cranky) if you push things too fast).

Complex behaviours with artificial emotions. They may be artificial feelings, but the dog doesn't know that, does it? How we feel emotions is a very complex topic; the dog uses a similar system to my emotional slider theory.

@klims: got a link for Kinect? Because most of the data is transferred via txt files (anti-crash mechanism), I can use other OSes in the mix if needs be. The vision system is good enough to read from a monitor (comprehend what it sees, with a bit of code), so a machine can be connected visually this way. (I had networking difficulties early on, so I used cameras in front of monitors; one of the ears still works by looking at a spectrum analyzer... waste of CPU power, but it works and I'm not short of old beat-up desktops, lol. AZY = master bodger.)

Some very good points. Modular design, the precautionary principle, and "robot do no harm" are principles that guide this project.

The space shuttle has 4 computers that run a democracy: if 1 fails you still have a voting system; if 2 fail you're guessing 50:50 when the outputs differ. Smartly, FOR ONCE, they have a 5th computer (different hardware, different software & different sensors, by a different company) as backup.
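The shuttle-style scheme boils down to majority voting over redundant outputs; here is a toy version to show why 4 voters degrade gracefully while 2 can only detect disagreement, not resolve it:

```python
from collections import Counter

def vote(outputs):
    """Return the strict-majority output from redundant computers,
    or None when there is no majority (tie, or no inputs)."""
    if not outputs:
        return None
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None
```

With one failed computer out of four, three still agree and the vote carries; with two computers giving different answers there is no majority, which is exactly the "guessing 50:50" case above.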

I'm thinking:

Hands are liable to crash, so I'll have 1 PC per hand.

Arms work together, so I may have 1 PC for the pair.

The mechanics of the head and eyes work as a combo, so 1 PC for that.

Torso: 1 PC.

If I get to the legs I'll probably have to dedicate 1, 2 or 3 PCs 100% to that for active walking; for passive walking (which is crap, but it walks) 1 PC would do.

Non-intrusive data mining can be done when PCs are not being used for locomotion. Just like a human: we stop and pause, prop ourselves up, take up a defensive posture when deep in thought or contemplation... excluding emergency situations.

Add a greater world knowledge and the brute-force search-and-prune power of a chess engine. Now fetch is a simple game played out between 3 universes: the dog's perceptual model, the human's perceptual model and the real model... whatever that is, lol... don't ask me... I'm just a perceiver here myself.

The chess engine queries a vastly more complex scenario: instead of weighting pawn structures or detecting checks, imagine you were looking for friends and trying to avoid enemies; imagine applying the same processing power to understanding someone's facial expressions, body stance, even the kind of clothes they wear.

Brute-force power applied in focused (intelligently learned) ways can yield spectacular results. Even if the dog was dumb and only had 100 prescripted actions it could do, if it can be taught to glue "actions in the world" (words) into games (sentences), then suddenly you have a virtual pet that can do so much more than your real one!... and learn 100 times faster too... from this I hope you can see where I'm coming from.

One only needs to interpret sensory data, import that into an internal world, and run what-if scenarios, and we start to see some pretty realistic artificial intelligence... and that from a dumb bot. If one could get the Arduino thingamajig talking to some software like this (via the kind of translator I've been banging on about), your little bots will turn into pet terminators... fetch that pesky cat, Fido!... oh, and go easy on the barbecue sauce.
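Running "what if" scenarios in an internal world is, at bottom, search. A stripped-down sketch (not the poster's system): breadth-first search over moves in a toy grid, returning the shortest action sequence to reach a fetch target:

```python
from collections import deque

def plan_fetch(grid, start, goal):
    """Brute-force breadth-first 'what if' search over moves.

    grid: list of strings, '#' marks an obstacle.
    Returns the shortest list of moves ('N'/'S'/'E'/'W') or None.
    """
    moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [name]))
    return None  # no what-if sequence reaches the goal
```

Swap grid cells for perceptual states and moves for learned actions and you have the chess-engine analogy from the previous posts: the hard part is the evaluation and pruning, not the search loop.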

-----------------------

@Ro-Bot-X: I like your bot very much; that's all you really need in terms of physical capability and sensor input... we need to find a friendly C programmer and persuade them to write some breakout/break-in code so data can be transferred between your standard robot software and higher brain software; then you can make it as smart as you like.

Idea for you: counterweights at the shoulder and elastic bands or springs can improve the lifting power of your arm. Make it so the robot arm has to fight the elastic to lower itself... then on the lift you have the elastic band working for you... when not in use, your arm rests in the high position and is contracted so as not to strain the servos. As gravity naturally tries to lower the arm, this helps the servo fight against the elastic on the way down. You can double the lifting power of your arm this way. As a bonus, the elastic helps smooth the arm motion both up and down, because the elastic is always either fighting against it or working with it, naturally damping out any slack in the system. My 2 cents.
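A back-of-envelope model of the "double the lifting power" claim, treating the elastic as adding a roughly constant assist torque (a big simplification: a real band's pull varies with stretch, so take the numbers as illustrative only):

```python
def max_lift_torque(servo_torque, assist_torque):
    """Net torque available on the way up: the elastic pulls with
    the servo, so their torques add (constant-assist assumption)."""
    return servo_torque + assist_torque

def can_lower(servo_torque, gravity_torque, assist_torque):
    """On the way down the servo plus gravity (arm + payload weight)
    must together overcome the elastic, or the arm stays up."""
    return servo_torque + gravity_torque >= assist_torque
```

So a 10 kg.cm servo with a 10 kg.cm assist lifts like a 20 kg.cm servo, but only if it can still win the fight on the way down; size the elastic too strong and the arm can no longer lower itself.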

Two quick things. One, as far as the elastic arm goes, I would say that fighting elastic to make it move would be extremely inefficient. Power management is really important in a robot of the scale you are talking about.

Also, I would be curious to see what your brain software is capable of at this point. You say its biggest barrier right now is getting out there and experiencing the real world. Have you implemented navigation in a virtual environment? If so, I would be curious to know what methods you use. Can it recognize objects in an image? How does it communicate: text, speech, graphical interface? If it communicates via text or speech, how large a vocabulary does it have? Could you post some videos or screenshots?

I have been very interested in artificial intelligence for a long time, and would like to know what approaches you use. Also, if I'm not mistaken, you said you didn't like the use of fuzzy logic, but you do like the use of biological models. Is there a reason you prefer not to use fuzzy logic? It is fundamental to the way higher intelligence functions. What kind of architecture or hybrid architecture is your software: expert system, neural network, etc.?


I'll make another video when I get a chance. My main problem is getting over the hardware barrier, so I've not bothered with websites or blogs etc.; I've got so much I need to wade through. Ropey diagrams in Paint rather than luscious 3D CAD animations is all I've got time for at the moment.

I'm not sure what limits the brain has; it's designed to be unlimited, i.e. you could teach it anything, and the smarter it gets the more it can teach itself. Just like a baby growing into an adult. In some ways it learns like the cyber dog (see vids), but it can turn sentences into novels, not just words into sentences.

If a word is akin to an ability, a sentence is a task like fetch, and a novel an adventure.

See, pick up, move are precursors to playing a game of fetch.
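The word/sentence idea, chaining learned primitives into a task, can be sketched like this (the primitives are stubs invented for illustration, standing in for real perception and motor code):

```python
# Primitive 'words' the robot already knows (stub implementations)
def see(world, thing):
    return thing in world

def pick_up(world, thing):
    world.remove(thing)  # object leaves the world, now carried
    return thing

def move(world, place):
    return place

def fetch(world, thing, home):
    """A 'sentence': the primitives see, pick up, move chained
    into the game of fetch. Fails gracefully if the thing isn't seen."""
    if not see(world, thing):
        return None
    carried = pick_up(world, thing)
    move(world, home)
    return carried
```

Once fetch itself is a named unit, it can in turn be glued into longer "novels", which is the exponential-growth effect described a few posts down.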

I'm writing a series of books, the first of which is called "Awakenings": the birth of sentient AI, how to build one, how I built my brain, etc. etc... again, there are only so many hours in a day. It's getting to the stage where I have to get a bit more serious about stuff; it's a bit more than just a hobby, and I want to get my brain out there. I gotta hook up with some hardware programmers. Virtual cyber pets are fun, but they ain't gonna change the world.

A super smart brain living in a box with no real world knowledge might be cool wandering round World of Warcraft (and would own your butt in that environment), but again its impact is limited. That's why I gotta hook up a sensor array so it can see, touch, and hear the real world. That's why I'm building it a body.

Communication: talking, text or a GUI (input and output), plus a limited amount of touch at the moment. It only speaks English at the moment, with a 250k word vocabulary (glad I'm not French or something). Not sure how many objects it recognizes, probably a few thousand by now. It's not so much how many words or objects it can recall, it's the level of comprehension & understanding, so numbers like this don't make much sense.

It can remember every word it hears, every object it sees, everyone it meets, every conversation it ever had, everything it ever did. Most of this stuff is unimportant; the mode it's in determines how much it bothers to remember. So without trawling through its database with a magnifying glass, you can't tell just how much it's taken in and to what level it has comprehended stuff = how many games it can or could play, each new trick or core ability increasing its ability exponentially... it operates in a similar fashion to the human brain, and uses many of the brain's tricks... it has a lot of useless junk in there that one day might suddenly become useful.

My brain can do that, and has learned to play fetch... take this a little further:

You could dismantle a car engine in front of it, ask it to put it back together, and it would (...if only I had a hand).

For a 1/10th of a second onlookers are thinking wow, but it's just a glorified stacking of bricks at the end of the day.

PS... I'd like to see it try! For real, if I set it to human mode the air would turn blue and spanners would fly! lol

Because it also understands concepts like in, on, under, and has a gravity compass, by watching carefully it could get the idea that the screwdriver goes in the screw, and that a spanner mates with a nut... it can figure that out for itself just by watching.

You see, humans aren't really that smart; they have learned a handful of tricks and are able to string them together.

It's just the mother of all mountains cobbling together all the right code in the right way to duplicate this ability, which is why up to now few have even bothered.

Regarding writing strong AI software: it's a matter of understanding what to code for rather than the technicalities of how to code. I'm a robot psychologist first and a very poor programmer second. But in this prototype project it's a big plus to have a vast sprawl of simplistic modules, as it makes things easier to comprehend when spread out over the tables and walls. When I hand it over I'll be glad to see it recoded in some nice tight C. It would probably run 10 times faster with 1/10th of the code, but that's the nature of cutting-edge prototypes... pretty they ain't.

In order for mankind to build something like Data out of Star Trek for real, a number of diverse disciplines (& polarized mindsets) have to come together. Sentience is not something you can hard-code for; sentience is simply an emergent property that arises out of a complex system via organic growth and channeled evolution. The system/code is like a seed which grows into a tree of abilities; self-awareness, sentience & intelligence are like the blossom on that tree. It is built into the genetic code of the seed that it is destined to become a tree, something not evident from casual inspection of the humble-looking seed. The secret lies in the genetic code at its core.

To build sentient A.I., the skills priority is in this order. Want to build a real android? This is what you need to learn:

Evolutionary biology (reverse engineering in theory but not in practice)... how it came about; architecture

You need all 4 or you have nothing at all, and that's one of the stumbling blocks to sentient A.I.: the lack of vision from inside each discipline. On the commercial checklist of things to do, building a sentient A.I. gets relegated to the fringes of research. Having a worldwide universal operating system like Windows and a universal language like C that talks to hardware is a major step forward.

I have skills in all 4 areas, enough to design and build it, but whichever discipline you come from you are going to have a deficit. Mine is the low-level language skills needed to talk to computer hardware: assembly, C, digital electronics... theory in this area yes (I know what needs to be done); hands-on skill zilch (knowing exactly how to do it... NOT).

You can't really copy someone else's work; you have to figure out the big picture yourself, what you're building and why you are building it. If you can't see the big picture you just end up painting yourself into a corner. That's what makes it so hard to even get started. You will need pencil & paper and the ability to hit YouTube/wiki hard for the jigsaw pieces; 18 months later you've got your first blueprints sketched out... understanding neighbours and relatives deaf to your late-hour cries of eureka. Kick off your brain with vids like this: http://www.theemotionmachine.com/marvin-minsky-on-the-present-and-future-state-of-humanity/comment-page-1

Looks like I'll have to indulge in a bit of C regardless at some stage.

I'll just have to buy some kit to see what's out there hands-on. There are plenty of vids of robots doing their thing, less so of screenshots/video tutorials stepping you through how the boards are configured and what's running on the screen in real time.

Re: the lack of step-through tutorials, this is less of an issue if you are just making a dumb reacto-bot running from a handful of lines of embedded code. But it does leave me in the dark if I'm wanting to import a sizable AI that's already coded, or mesh with off-the-shelf brainware.

There is a huge gulf between what's easily doable with software AI in a virtual world and the very limited functionality of standalone bots... it's like trying to put a dog's brain into a worm's body.

The best hardware kits out there are like 20 years (!) behind what's possible in software... it's time the manufacturers stepped up a gear or three, methinks... I gotta get crafting some code in the meantime to bridge that gap... when was the last time your bot learned to play fetch by watching an example?

And before the hardware fanboys start queuing up to attach electrodes to my anatomy... I'll curse the brainware developers for confining their brilliant creations to the land of virtual la-la.

Rant over. Let the termination of mediocrity begin!

Gather round, my little children, for I shall lead you out of the Dumb Valley and into the promised land... wohah ha ha ha haa!

Dude, seriously. For a robot to move around doing something useful, there are 2 things we have problems with: localisation and visual recognition. I think the first might be easier if the latter were developed enough. No matter how advanced your AI is, if you're blind and lost in space you can't accomplish much. Well, you can talk to people, bring comfort that way, but can you go to the kitchen and bring an apple to an elderly person who sits in bed all day? So, we are a bit stuck on hardware issues. Yes, there are ways to do it on the PC, either through a wireless link or with the PC on board running Robolab or MSRS, but it still doesn't work as we want it to. Vision software isn't as advanced as needed (and where it is, it's closed source and expensive). Can you work on that? Can you find a solution that we ALL can use freely and easily? That would be a big help. AI is not on my list of priorities until the robot is able to know where it is and can locate and retrieve the object I tell it I want.

Once I can get the brain to operate the camera I can tweak a couple of modules (I'm having to move the cam by hand at the moment).

Then I can look at doing a stripped-down version of the vision system (it badly needs a recode anyway). It's running on 6 PCs to reduce lag, but will work fast enough on 1 PC for a hobby robot.

Then we will have a mini project ready to go, consisting of a tilt-and-pan cam, advanced recognition with some memory features (or you have to relearn everything every time on the fly) and a micro brain with a bit of common sense that you can easily feed your own code into.

Even if it's just a bit smarter than the dog in the videos, it will be a big step up in intelligence compared to most of the stuff out there.

I like things pretty simple, so anything that gets done to be released will be very plug-and-play... that's the plan anyway.

Any system that tries to understand the world first time, every time, on the fly is doomed to failure and will be very limited... your brain doesn't work like that. You take time to learn your environment; then when you walk into a room and something's moved, you spot it straight away... it's about efficiency and focus. That's how my brain software works, and that's how human brains work.
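The "learn the environment first, then spot what's moved" approach is essentially background modelling. A pure-Python toy (no OpenCV; frames are just 2D lists of grayscale values, and the averaging model is the simplest possible choice):

```python
def learn_background(frames):
    """Average several frames of the same scene into a background model:
    the 'take time to learn your environment' step."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(w)]
            for r in range(h)]

def changed_pixels(background, frame, threshold=30):
    """Return coordinates where a new frame differs from the learned
    model: the 'something's moved, you spot it straight away' step."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if abs(v - background[r][c]) > threshold]
```

Comparing against a learned model is vastly cheaper than re-recognising the whole scene each frame, which is the efficiency-and-focus point being made above.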

Quote from above: "For a robot to move around doing something useful there are 2 things we have problems with: localisation and visual recognition. [...] Can you find a solution that we ALL can use freely and easily?"

x2 on this.

A massive database of physical tasks that can be recorded upon viewing and recalled instantly could understandably be the basis for much higher AI. But that right there is the problem right now: the vision software. If you have no solution for this, then everything beyond it should be put on hold.

---

Talk to some research labs, or even Honda. Put together a well-done document detailing what you have and what you are seeking. There are obviously quite capable machines out there that completely lack a brain, and I am sure that if you could convince them that what you have is real, they would be willing to build for you whatever hardware you need. ASIMO + genuine brain + engineering cost reduction = $$$

It sounds like you want to be the man who single-handedly changed the world, but if you look at the smartest men of the past 50 years who have come close to that title, you will see they all had to lead a team of people to make it happen. Get some people together, get some funding, incorporate.


Wow, I can't remember the last time I posted on this forum (over a year... busy with going back to school). Anyway, it is always nice to hear someone like yourself planning to build an android. Not sure how you are planning to do it...

But from my experience... I started Aiko planning an $8,000 budget max, but somehow it is now more than $40,000+ (so watch your budget very carefully). I can barely fit a 10cm motherboard inside of Aiko; not sure how you are going to fit 6 laptops inside your android. (I learned from my mistake: wireless does not work well when you are doing a TV interview at their station, or in a public building.) I started with the BASIC language when I first wrote the BRAINS software, but later I switched to C, because BASIC has some limitations. (Don't get me wrong, BASIC is still one of the best languages out there.) Making AI logic is easy, but connecting it with the hardware and having it interact with the environment is another story. Giving an android vision, hearing, touch etc. and having it interact is like going to the moon: very hard. Aiko can detect a cake very easily with the BRAINS software, but having her pick up a spoon and scoop up the cake... it took me 3 freaking months to program it.

Anyway, good luck. I would love to see your android someday. Just do it, don't think about it too much. Just do it without any hesitation. You learn new things as you go.

My android is going to be a bit larger than yours; room is a big issue for me.

4 netbooks will control the body and instinct brain, but I will also have to use wireless for the higher brain so it can connect to the network (big box on wheels nearby).

I am also building a Dalek/Davros chassis that can house 16 laptops, so the whole thing is self-contained and "world proof". This also means I can have a very large battery pack in the base, so it could run non-stop for 4 to 6 hours.

I use FreeBASIC to code up and test new modules, but then pass the final builds on to some C programmers who are also involved. Because I have so much computing power, speed and data storage aren't an issue. FreeBASIC is compiled and optimized, so it's very, very fast: I can get 20 frames per second printing out a 1000x1000 pixel screen 1 pixel at a time. (You can't beat quality C code, but FreeBASIC does a very good job most of the time.)