Is BabyX the Future of Silicon-Based Life Forms?


In the admittedly futuristic discipline of simulated artificial life forms, or animats, BabyX is perhaps the most ambitious research project to date. Its creator, Mark Sagar, has chosen no less a goal than simulating the neural machinery of a human infant in silico, a milestone many pundits had predicted to be still decades away. The advent of BabyX brings a host of moral and philosophical questions regarding artificial life: What are our duties and obligations to silicon-based life forms? Do they have rights akin to our own? And what legal status, if any, will they possess?

Questions like these are now less farfetched than they appeared a decade ago. Could one be arrested, for instance, for trafficking animats across state lines? You laugh, but if the suffering of animats could be multiplied exponentially simply by copying and pasting an erroneous piece of source code, the prospect of something akin to an artificial holocaust of astronomical proportions is not unthinkable, a topic explored at some length by both Nick Bostrom and Yuval Harari in their exemplary expositions on artificial intelligence.

While these thought experiments are still perhaps a little ahead of their time, the window for making meaningful progress on them is fast diminishing. But first — what exactly is BabyX, and what, if any, sentience does it possess?

The problems involved in answering this question turn out to be numerous. The University of Auckland, which heads up the research on BabyX, describes it on its site as a computer-generated psychobiological simulation. But despite some deep digging into what this means, the exact algorithms behind BabyX remain mysterious. While we know some form of reinforcement learning is used for acquiring skills like playing the piano, the depth and breadth of these networks remain sketchy. It’s unclear, for instance, whether BabyX displays “superstitious behavior,” an artifact of some instrumental-learning algorithms that many mammals also exhibit: when reward arrives on a schedule unrelated to the learner’s actions, the learner can still come to credit, and compulsively repeat, whatever action happened to precede the reward.
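Nothing published about BabyX tells us whether its learners are vulnerable to this, but the effect itself is easy to reproduce. The following is a minimal, hypothetical sketch, not BabyX’s actual code: a two-action agent receives reward on a fixed timer regardless of what it does, yet a standard incremental value-update rule locks it onto whichever action happened by chance to precede the first reward.

```python
import random

def run_superstition_sim(steps=1000, alpha=0.1, seed=0):
    """Toy value-learning agent with two arbitrary actions.

    Reward arrives every 10th step no matter what the agent does
    (noncontingent reward), yet the agent develops a strong,
    arbitrary action preference -- a "ritual" with no causal link
    to the reward.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]      # value estimates for the two actions
    counts = [0, 0]     # how often each action was chosen
    for t in range(1, steps + 1):
        # Greedy choice with random tie-breaking.
        if q[0] == q[1]:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] > q[1] else 1
        counts[a] += 1
        r = 1.0 if t % 10 == 0 else 0.0   # reward ignores the action
        q[a] += alpha * (r - q[a])        # standard incremental update
    return counts

counts = run_superstition_sim()
```

Whichever action the agent happened to take on the first rewarded step gains a positive value estimate, and the greedy rule then repeats it forever, so the agent ends up choosing one arbitrary action on well over 90 percent of steps. The point of the sketch is only that superstition falls out of ordinary credit assignment; whether BabyX’s far richer networks show the same artifact is exactly the kind of detail the published material leaves open.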


It’s also unclear whether BabyX displays anything akin to intentionality. The recent literature on BabyX makes no mention of the beliefs, motivations, and desires that underpin its cognitive abilities, or of whether these are on par with those of higher-order sentient creatures such as humans.

Moreover, the high-end graphics used for modeling animats like BabyX can be so spellbinding that the difficult mathematics behind their brain circuitry gets brushed aside. Which is not to say the graphics aren’t top shelf. But they should not take precedence over the more fundamental questions regarding brain architecture. From snippets of the accompanying video on BabyX, it appears the animat possesses many of the neural correlates of a human, including an artificial dopamine system and other pleasure-related brain structures.

From a technical standpoint, though, there’s precious little to draw conclusions from. Herein lies the problem, both moral and philosophical: if BabyX experiences pleasure, can she likewise experience pain, and is that pain in any way akin to our own? Do we have any means of ensuring Mark Sagar hasn’t created a creature living inside a virtual nightmare with no means of escape? Given the recent speculation that we ourselves may be living in a simulation, it seems a timely question for investigation.