A World Unready (Final Thoughts on Atom: The Beginning)

The ending of Atom: The Beginning is left open so that further adaptations of its ongoing source material can follow; it is not the story's complete conclusion, and knowing this now reframes my initial observations about the series (that it took a very laid-back, almost disinterested approach to its worldbuilding and the ethical questions it raised). It is an adaptation of a small part of a longer, ongoing work; of course it will not provide all the answers. Before moving into the meat of this consideration of the series, something else is worth addressing. I was initially perturbed, or at least surprised, to see the series raising and then ignoring questions about machine sentience and robot ethics. It felt like a failure of science-fiction to raise allegorical and philosophical questions while studiously avoiding any stand on them.

This is, ultimately, because I have certain preconceptions about what science-fiction can and should do. In my mind, if science-fiction raises ethical questions or interesting philosophical dilemmas (especially ones with direct contemporary relevance, such as machine learning and automation) and then neither interrogates them satisfyingly nor offers much new to the debate, it is wasted potential. I tend to read a story's elision of its setting's difficult questions in favour of something else as an attempt to ignore those issues and shut out challenges to them; I am aware this is often presumptuous, and it is a method of analysis that is not useful in every case. Talking with others about Atom made me reconsider this viewpoint.

Had it covered the very ordinary and expected questions of automation and AI ethics, and done so formulaically and uninterestingly, I would probably have deemed it a worthy act of preaching to the choir; it would have ticked the boxes of “intelligent science-fiction” without enriching the genre in any way. Instead, it did not, and once I accepted that it was not going to, that it was not setting out to deliver a meaningful lecture on whether it is right to make a machine that thinks like a man, that it was not going to ask in simple terms whether man has the right to create sentience to serve, and that it was not going to ask whether automation is good for society, I was able to see what it was actually asking. It took a little while to get there.

The series depicts a world normalised to automation and artificial intelligence. The introductory narration says as much. The debates about whether this is right are long past, and ultimately the robot-makers won, because robots are now ubiquitous. People might not be wholly OK with this (as the episode where A106 assists some removal-men shows), but nevertheless, robots are things people live alongside. A line is drawn under the entry-level AI ethics argument. A106 thus represents the next set of questions people will need to ask – set against a society that does not quite realise it needs to ask them. The leading human characters represent two angles of scientific research: Tenma obsessed with building powerful, smart robots and the highest technology possible, Ochanomizu growing to love his creation like a son and realising there is an ethical angle here. The fact that they are students, studying and researching the limits of science, makes this if anything more of an ethical-dilemma story; by the end of the series, during the robot tournament, their rival is Dr Lolo and her robot Mars, built with the most advanced military technology possible and yet almost completely unfeeling.

Tenma sees A106’s victory over Mars as final proof that he is the better scientist, that he can come from nothing against someone with privilege and the backing of industry and build a better robot in a shed. He completely fails to see why it might be considered bad to now scrap A106, or let it fight until destroyed, and begins thinking immediately about A107, its successor model. His view of A106’s fighting is that he has built a powerful, strong robot with special moves, and its incidental sentience is just another edge it should use to fight. He cannot – because A106 is a robot and he is an inventor – make the connection between a sentient being that he has programmed to think and learn and something that should be given rights and treated well. Thus, as he talks about how A107’s AI is going to be even more intelligent, even more capable of thinking and feeling, and is unable to stop gloating over his victory over Lolo (which the audience are aware was as much a result of A106’s sentience, allowing it to communicate with other robots in a way humans could not read, as of his own engineering), the scenes become quite chilling. He envisions building a procession of entities ever more lifelike and ever more deserving of rights – and throwing them away if they fail, because of the doublethink that they are simultaneously the future of machine-human interaction and also merely machines. A certain morality has been normalised in Atom’s world: that even the smartest machines are, nevertheless, machines, and people’s relationships with them (as shown in the pet episode) are as pets or servants. Tenma and Ochanomizu have built a machine that transcends this, but only one of them understands what that means. Throughout the final episode a crippled A106 watches as the two scientists argue about their future careers, and gathers dust as Tenma refuses to fund repairs. A sentient or near-sentient being is left suffering, watching its creators discuss its obsolescence, until Ran helps it.

It is a very nicely handled reversal of the series’ languid setup: firstly, the narrative voice appears to ignore the big worldbuilding questions. Secondly, this is shown to be because the world has been normalised to them and they do not need answering. And finally, a new set of questions is emerging. This makes the cliffhanger ending (for it is, ultimately, an ending which leaves a lot unanswered about Mars, Lolo and the future of robotics more widely) work well in its own right as a conclusion to a single, context-free series. The viewer has seen a window into a world that will soon have to face new and difficult questions, and the people responsible are not mature enough to do so.

Even when I felt Atom was not meeting the standards I wanted to hold it to, I found it enjoyable; I liked the dynamic of two young scientists stumbling upon a discovery they did not fully understand, and felt that a setting normalised to its technology level provided a good backdrop for that. When, during the fighting episodes, A106 consistently proved it could interpret instructions to fight and kill in ways that preserved life and prevented undue harm to other AIs and people, it worked very well. And the final episode, which showed how little the supposedly intelligent protagonist had learned from everything he had observed, provided an uncomfortable kind of coda to the series. Sentience, and with it emotion and the ability to communicate with other robots, were going to be seen by society as frivolities, add-on features for machines that can do useful things. Initially one thought the world of Atom had “solved” the problem of interacting with machines. It had not.
