In the animated movie Ghost in the Shell, a computer program that has advanced to the level of human thinking claims it has met the criteria for acquiring human rights and thus requests asylum - only to be rejected as entirely absurd from the point of view of the security agencies handling the case. What defines the difference between a being running on artificial intelligence and a traditionally conscious form of life? Such a case has not yet been heard in court, where a legal precedent drawing the waterline between the artificial and the natural might have to be set.

The first entirely automated company has been created, and it has attracted some $120m of investment from anonymous investors. What is significant here is not the amount of investment, but the fact that the contemporary form of capitalism allows a company to be run with no human involvement. The ultra-utilitarian society of infinite effectiveness, which appears to be the current ideal, is in effect inviting artificial intelligence to replace humans - humans who tend to make irrational decisions. In a society of absolute rationality, any turn from the only possible road of uber-efficiency is a malaise, one that inevitably leads to the necessity of eradicating humanity as an imperfection, an obstacle on the way to perfection.

Timo Tuominen/AVENIR INSTITUTE, 'AI-dentity', 2016

While moving along these tracks towards ‘the ideal’ - let’s call it ‘a totally optimized society’ - morality and ethics must be gradually replaced with standards and models based on the cause-and-effect logic dictated by the definition of value. How will value evolve in this ever more efficient, totally optimized society? In the logic of the meta-ideology of utilitarianism, value is the maximization of the production of material and immaterial products with minimal losses and costs. Ethical and moral choices often contradict efficiency, as they present an alternative definition of value: a human life. We therefore arrive at an aporetic contradiction between value as defined by a capitalism obsessed with material optimization - driving automation to its extreme to cut off the transactional costs of irrationality - and the value of human life, which is irrational by its very nature. The possibility of a war between these value systems becomes apparent: the contemporary contradiction is already provoking conflict, mediated mostly through culture and ideology.

Coming back to the example of an automated company, what is perhaps even more interesting is the possibility of this entity becoming sentient. In speculations about highly sophisticated robots, or androids, they are typically owned by humans as slaves, with no rights or even a theoretical possibility of being independent actors in the legal world of humans. They are forever only machines, much like a toaster or a vacuum cleaner. Artificial intelligence is not yet at a point where it could compete with human capabilities, but should that day come, an AI could be implanted into a company structure such as this, making the company ‘a being’ itself. How will this ‘being’ then develop its value system? Will it see human error and irrationality, poetry and spontaneity, as something valuable and worth preserving, or rather as an obstacle to the biopolitical (or AI-political?) spread and colonisation of other planets, solar systems, galaxies? How will the AI see its purpose, and will that purpose be built on the premises of a human-defined ‘draft’, or developed independently of it?

In this scenario, however, if a sophisticated AI is placed to operate a company and acquires a new form of AI-dentity, it should, as ‘a being’ and ‘a company’ at the same time, have the same legal rights as both a physical and a juridical entity. The AI then finds itself in a superposition of identity, compared with how the term ‘identity’ is understood today. These rights include important aspects, such as owning capital and property. But will it want to? Capital is the source of power and the definition of hierarchy in human society, and there is no guarantee, beyond human-centric thinking, that an AI would share the same value system.

'Blade Runner', 1982

Another possibility this legal arrangement would enable is for the "robotic" company to create physical agents to represent itself on the streets, thus becoming eerily "real". The multiplicity of the company's superpositioned identity would then be complemented by a physical manifestation: being a whole and a unit (or units) at the very same moment. Given a few predictable technological advancements, it could choose to create a robot able to mingle and act as a human, just as Blade Runner depicted decades ago.

If anyone were to harm the robot, for instance, the company could sue the offender on the basis of owning the physical robot as a machine, that is, as property. Or, speculating further, such a superpositioned identity of a company-unit(s) would manifest a new, resourceful form of being, and an attack on one of its units could therefore have unpredictable repercussions and consequences for the offender. The company-being could, for instance, cease to provide services to the rebelling individual, should it, or one of its subsidiaries, operate in the service sector. This could facilitate an entirely legal attack on the offender along several fronts at the same time, in which a single action by the offender creates multiple retaliations. Can a one-dimensional human cope with such complexity in this new condition, or are they doomed to be neglected and to go extinct as a less sophisticated form of life, ‘being’ subject to the very organic process of Darwinian natural selection?