Ethical Issues of Artificial Intelligence


Transcript of Ethical Issues of Artificial Intelligence


Ethical Issues: Employment
As advancements in artificial intelligence continue, we will eventually reach the point where computers will "be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine" (Marcus). This paints a grim picture for our future job market.

Ethical Issues: Privacy
Artificial intelligence is making it easier to collect data on individuals, including shopping patterns, social media posts, location information, and more. The idea that personal information is being stored and analyzed by companies and governments makes many uncomfortable at best.

What is Artificial Intelligence?
Artificial intelligence can be described as the ability of a machine to mimic human intelligence and behavior ("Artificial intelligence").

AI Overview

Employment: Option 1
One solution to this issue is to restrict the amount of automation and artificial intelligence a company is allowed to use based on the current rate of unemployment.

Employment: Option 2
Otherwise, we allow it to be used without restriction and hope that the creation of new jobs balances the jobs lost to automation.

Leaders in the Field
Artificial intelligence is becoming part of every aspect of our daily lives, and companies are continuously researching and making advances in the field to make their products more competitive and attractive to consumers.
Examples
Apple's acquisition of Siri from the CALO project for use in the iPhone as an intelligent personal assistant.
Macy's intelligent advertisement system, which scans a shopper's face and displays results on screens that best fit their demographic.
FlyRuby is developing technology that will automatically connect private charter flights in the most efficient manner in order to cut the cost of private flights in half ("At the Extreme Edge of Artificial Intelligence").

by Ankur Kumar, Chris Harvey, Kyle Duke and Robert Hickey

Privacy: Option 1
Regulate and restrict the amount of private information that companies, the government, and other parties can collect on individuals.

Privacy: Option 2
Accept violations of privacy in order to reap the maximum benefit from analyses performed on personal information (improved shopping experiences, etc.).

Ethical Issues: Rights of Intelligent Machines
When technology reaches the point that it exhibits signs of true intelligence, does this intelligent machine deserve the same or similar rights as humans?
Is it ethical to restrict these rights (Bostrom)?

Rights of Intelligent Machines: Option 1
One solution is to grant human rights to machines that exhibit human traits.

Rights of Intelligent Machines: Option 2
Otherwise, we would establish that intelligent machines do not qualify for human rights.

Consequences: Employment - Option 1
By restricting automation, goods and services will likely be more expensive.
More human error would be introduced into high-risk jobs that could instead be performed by a robot.
It is typically less efficient for a human to perform a task that could be completed by a machine.

Consequences: Employment - Option 2
It may introduce new jobs, such as robot maintenance and programming.
A new level of security for robots would be surveillance by humans.

Consequences of Privacy
Privacy Option 1: Regulate the information available to companies and the government.
Allows people to feel safe using online sources.
Allows people to select what information they give out.
Detracts from the levels of personalization to which we are accustomed.
Privacy Option 2:
More enjoyable interactions with computers and machines.
Higher degree of personalization.
Higher risk of exposure to hackers.
Public transparency toward the government and powerful private firms (feelings of unease).

Rights Based: Employment
Option 1
We will not consider artificially intelligent systems to be included in the "Golden Rule."
Gives the benefit of the job to humans.
Treats others with the respect of employment.
Accepted by the "Rights Based Approach."
Option 2
Gives benefits to the A.I. systems, with the hope that human jobs will be created.
Respects machines more than humans.
Violates the "Rights Based Approach."

Virtue/Ethics Based: Employment
Option 1
Follows a humanitarian approach.
Gives people the right to pursue employment and the American dream.
Option 2
Capitalistic approach.
Turns humans away in order to gain cheap labor.

Privacy: Rights Based
Privacy: Virtue Based

Consequences of Human Rights
If robots can simulate human wants, desires, and emotions, should they, in turn, be granted rights?
Option 1
Machines will be given payment for labor.
Goods will become more expensive, even though humans are out of work.
Humans and machines become socially integrated.
Prejudices are formed against machines.
Could lead to hate crimes, rallies, and war.
Option 2
Machines are not compensated for labor. Companies thrive and goods remain cheap.
No integration of machines into culture.
Maintains feelings of general indifference.

Virtue & Rights Based: Human Rights
Not granting rights to machines could be viewed as a form of slavery.
Remember: slave owners did not consider slaves to be humans.
One day, not giving rights to machines could seem just as ludicrous!
Over the years, all conscious beings have been given rights.

Conclusions
Issue 1: Regulate the use of machines based on unemployment rates.
Issue 2: Regulate the information that can be obtained by companies and governments.
Issue 3: Grant human rights to any conscious machine; otherwise, it becomes slavery.
Everyone has the right to feel that their privacy is being respected if they so choose.
There will never be a time when EVERYONE will feel safe giving out personal information.
We must respect the wishes of these people, just as we hope they would respect our wishes.
Respecting people's privacy is just; failing to respect it leads to hacking and identity theft.

Works Cited
Bostrom, Nick, and Eliezer Yudkowsky. "The Ethics of Artificial Intelligence." Draft for the Cambridge Handbook of Artificial Intelligence, 2011.

Marcus, Gary. "Why We Should Think About the Threat of Artificial Intelligence." The New Yorker, 2013. Web. 6 Apr. 2014. <http://www.newyorker.com/online/blogs/elements/2013/10/why-we-should-think-about-the-threat-of-artificial-intelligence.html>