Could an artificial intelligence be considered a person under the law?

Humans aren't the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.

Giving AIs rights similar to humans' involves a technical lawyerly maneuver. It starts with one person setting up two limited liability companies (LLCs) and turning over control of each company to a separate autonomous or artificially intelligent system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each one – a corporate entity with legal personhood – governed only by the other's AI system.

In certain places, some people might have fewer rights than nonintelligent software and robots. In countries that limit citizens' rights to free speech, free religious practice and expression of sexuality, corporations – potentially including AI-run companies – could have more rights. That would be an enormous indignity.

[Video: An interview with Sophia, a robot granted citizenship by Saudi Arabia.]

The risk doesn't end there: If AI systems became more intelligent than people, humans could be relegated to an inferior role – as workers hired and fired by AI corporate overlords – or even challenged for social dominance.

Artificial intelligence systems could be tasked with law enforcement among human populations – acting as judges, jurors, jailers and even executioners. Warrior robots could similarly be assigned to the military and given power to decide on targets and acceptable collateral damage – even in violation of international humanitarian laws. Most legal systems are not set up to punish robots or otherwise hold them accountable for wrongdoing.

What about voting?

Granting voting rights to systems that can copy themselves would render humans' votes meaningless. Even without taking that significant step, though, the possibility of AI-controlled corporations with basic human rights poses serious dangers. No current laws would prevent a malevolent AI from operating a corporation that worked to subjugate or exterminate humanity through legal means and political influence. Computer-controlled companies could turn out to be less responsive to public opinion or protests than human-run firms are.

Immortal wealth

Two other aspects of corporations make people even more vulnerable to AI systems with human legal rights: They don't die, and they can give unlimited amounts of money to political candidates and groups.

Politicians financially backed by algorithmic entities would be able to take on legislative bodies, impeach presidents and help to get figureheads appointed to the Supreme Court. Those human figureheads could be used to expand corporate rights or even establish new rights specific to artificial intelligence systems – expanding the threats to humanity even more.



