It seems no one's brought it up before, but will there be any form of hardware acceleration using graphics chips? Some motherboards do have integrated chips, and I believe NVIDIA has just put out the OpenCL SDK and has had CUDA for a while. Nvidia seems to have quite a few examples of CUDA being used to accelerate AI.

shadow wrote:It seems no one's brought it up before, but will there be any form of hardware acceleration using graphics chips? Some motherboards do have integrated chips, and I believe NVIDIA has just put out the OpenCL SDK and has had CUDA for a while. Nvidia seems to have quite a few examples of CUDA being used to accelerate AI.

Yes. The newer Intel and AMD chips come with an integrated GPU: certain i5 and i7 models have one, while the i3 lacks it, and one i5 variant also lacks the on-chip GPU. Windows 7 supports this with ''DirectCompute'', which ships with DirectX 10 and 11. Apple has OpenCL. The latest Java environment supports this as well. This approach is called heterogeneous computing: it lets your code decide whether some of the work can be done on another processor. And that's very cool; I've waited for this for years. I tried it on my first dual PIII system and it gave me a headache: forks, threads, etc.
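To make the "let your code decide" part concrete, here is a minimal sketch of that dispatch decision. The threshold value and device names are hypothetical, just to illustrate the idea: tiny jobs stay on the CPU (launch and transfer overhead would dominate), while large data-parallel jobs are worth shipping to the GPU.

```python
# Hypothetical crossover point, in elements; real values would come
# from benchmarking the actual CPU/GPU pair.
GPU_THRESHOLD = 100_000

def choose_device(n_elements, gpu_available):
    """Pick a compute device for an n-element data-parallel job."""
    if gpu_available and n_elements >= GPU_THRESHOLD:
        return "gpu"
    return "cpu"  # safe fallback when there is no GPU or the job is small
```

Frameworks like OpenCL expose the device list and let you make exactly this kind of choice at runtime.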

Microsoft even has an experimental operating system named ''Singularity''; it should support this from the core, but it's difficult to read up on or get into. I think Linux supports this somehow, through OpenCL or maybe the Mono platform, but I don't know for sure. Keep watching this thread to find out more about the subject.

Such a board would need to be a standalone system, like the Tesla servers. In that situation it would be about the size of a small to medium graphics card (a 96-CUDA-core GT240, low profile). For direct use, I have no idea if it's even possible without a host CPU; at the very least it would need a lot of reprogramming of the card's BIOS itself to do I/O with all the components you need.

Things about Ion platforms: although the Intel GMA HD does not support OpenCL, I would think the Ion platform is capable of general computation and OpenCL computation on the GPU; Ion and Ion 2 have a very small form factor. Atom CPU power is very limited, though; a dual-core N330 is roughly on par with an old 2 GHz Celeron.

Choosing hardware would be much simpler if motherboards could fold. Flexible PCIe extension cables just make it look bad.

If any motherboard is to be fitted in Aiko's head, it has to be the size of a mini-ITX board folded in half.

Also, another idea: why not use a basic CPU and motherboard combination for Aiko's body and have it communicate with a more powerful computer in the house over a wireless connection? That way you could use GPU acceleration, multi-CPU-socket motherboards, whatever you want to run the AI on, really.

She couldn't leave the range of the wireless signal, but it will probably be a while before the necessary processing power is available for a self-contained android.
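A rough sketch of that body-to-house-PC link, using a made-up one-line protocol over a plain TCP socket: the robot's low-power board sends a sensor reading, and the powerful machine in the house (standing in for the AI) replies with an answer. The message format and the "AI" reply are purely illustrative.

```python
import socket
import threading

# The "house PC" listens on a free local port (port 0 = let the OS pick).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def house_pc():
    """Stands in for the powerful AI machine in the house."""
    conn, _ = srv.accept()
    reading = conn.recv(1024).decode()
    # Hypothetical "AI": just acknowledge the sensor reading.
    conn.sendall(("ack:" + reading).encode())
    conn.close()

t = threading.Thread(target=house_pc)
t.start()

# The low-power board in Aiko's body offloads one request.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"temp=21")
    reply = sock.recv(1024).decode()

t.join()
srv.close()
```

In a real setup the house PC side would feed the reading into whatever heavy AI stack it runs (GPU-accelerated or multi-socket) and send back an action.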

shadow wrote:Also, another idea: why not use a basic CPU and motherboard combination for Aiko's body and have it communicate with a more powerful computer in the house over a wireless connection? That way you could use GPU acceleration, multi-CPU-socket motherboards, whatever you want to run the AI on, really.

She couldn't leave the range of the wireless signal, but it will probably be a while before the necessary processing power is available for a self-contained android.

I'm sure wireless will bring its own problems related to latency, depending on how extreme the scenario gets.
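The latency concern can be made into a simple back-of-the-envelope rule, with made-up numbers: offloading over wireless only pays off when the house PC's compute time plus the network round trip still beats running the task locally on the robot's board.

```python
def should_offload(local_ms, remote_ms, rtt_ms):
    """True when the remote round trip is faster than local execution.

    local_ms  -- estimated time to run the task on the robot's own board
    remote_ms -- estimated time on the powerful house PC
    rtt_ms    -- wireless round-trip time, including payload transfer
    """
    return remote_ms + rtt_ms < local_ms
```

For reflexes (balance, obstacle avoidance) the local path would win almost always; for heavy reasoning, the remote path wins despite the Wi-Fi hop.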