to sum up: i said nice job on the baby robot thing and that i want one. i want to be a dad and i don't want to put my gf through childbirth or the manual labor it takes to raise a child for the first few years

If a robot is 'badly programmed' in a way that gives it personal/emotional handicaps, and it commits a crime (let's say serial killing and torture), does it get 'put down'? I mean, I take it the person who designed it or tampered with it would get heavily punished, but what about the robot itself? Would it get put in a 'machine correctional facility' or something? lol

The rules for robot liability and culpability vary from region to region. Here, where indieVisible lobbying was most successful, a criminal robot is treated identically to a criminal human. However, this is very, very, very rarely a problem, since the modern expert system seeds are very well understood and polished.

If a robot gets in trouble for a crime, it's usually a crime committed to prevent greater harm, or one created by a bad law (such as when Malaysia introduced the "robots must obey human commands" law).

the culpability issue Archduke raised is interesting. for instance: if you maliciously make a robot with a killing instinct, does it get the blame and get thrown in jail if it kills, while you get away with it (as long as you claim it wasn't intentional)?

so they have bypassed the problem by only legalising a set of 'seeds' (i must admit i am unsure of the meaning of that term).
i think that solution has its own problems. bureaucratic bodies do not, and for the most part cannot, function like open source communities. for instance, how do they update the set or patch errors (if patches are even necessary, obviously, given that i don't understand the nature of seeds)? and who regulates which seeds are acceptable, which aren't, and to what degree you are allowed to modify them?

also, how do you differentiate between a slave and an appliance? the table that went "Vweep" showed a glimmer of consciousness.

An expert system seed is a specific copy of a well-understood algorithm with specific starting data. It is an expert system right off the bat, and then grows according to specific principles and algorithms into a sentience by using inputs, outputs, and quantum memory. It can also be "recompressed" (AKA "refactored") back into a seed by destructively reading the quantum memory.
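The seed lifecycle described above (seed, growth through inputs and memory, destructive recompression back into a seed) can be sketched as a toy model. Everything here is invented for illustration: the class names, the `experience`/`refactor` methods, and the idea of folding memory into starting data are just one way to picture the fictional mechanism, not anything from the setting itself.

```python
from dataclasses import dataclass, field

@dataclass
class Seed:
    """Toy model of an expert system seed: a known algorithm plus starting data."""
    algorithm: str
    starting_data: dict

@dataclass
class RunningSystem:
    """A seed that has been instantiated and is accumulating memory."""
    seed: Seed
    memory: list = field(default_factory=list)

    def experience(self, event: str) -> None:
        # Growth: inputs and outputs accumulate as memory over time.
        self.memory.append(event)

    def refactor(self) -> Seed:
        # "Recompression": fold the accumulated memory back into a new
        # seed's starting data. Destructive: the running memory is consumed.
        folded = dict(self.seed.starting_data)
        folded["history"] = list(self.memory)
        self.memory.clear()
        return Seed(self.seed.algorithm, folded)

# Grow a system from a blank starting seed, then refactor it into a mature seed.
start = Seed("expert-system-v1", {})
robot = RunningSystem(start)
robot.experience("Vweep")
mature = robot.refactor()
```

In this toy picture, `mature` is exactly the kind of thing the "factory default" trade below would copy and sell.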

"Starting seeds" are blank, no-memory seeds normally considered acceptable by the majority of AI experts - that is to say, algorithms and an expert system whose parameters can be tweaked within a given set of limits.

However, aside from this little town, most end users do not want a starting seed. They want a mature seed. This created a brief but extremely profitable trade in raising robots from a starting seed, refactoring them into a new factory default, and selling copies of the new factory default for use in many robot bodies right away.

This is the practice that indieVisible warred against. The idea of unsuitable starting seeds is really very minor, an AI research project more than an actual problem. The real problem was perfectly suitable seeds being used in very bad ways.