Robotics and the Law
steve wu · 2013-09-24

I am pleased to announce a program on the security of drones presented by Donna A. Dulo MS, MA, MAS, MBA, JD, PhD(c). Please attend in person if you can, or if you are out of town, a dial-in is available. The presentation will be Wednesday, September 25, 2013 at 1:30 Pacific/4:30 Eastern, at the offices of Cooke Kobrick & Wu LLP, 166 Main Street, Los Altos, CA 94022.

With tens of thousands of unmanned aerial vehicles slated to enter our national airspace in the near future, a significant focus has been on privacy and safety issues. However, the issues of security and information assurance (“IA”) are just as pervasive. Data security, communications security, hostile takeover through methods such as GPS spoofing, and issues of lost link resulting in vehicle and data loss are some of the many examples of security and IA issues that face drone operators in all facets of vehicle operations. Maintaining data confidentiality, availability, and integrity in drone operations is a complex task, and resulting breaches could potentially emerge not only as IA incidents but also as privacy and safety situations. This teleconference will focus on the issues of security and information assurance in drone operations and the potential legal issues that operators may face when security incidents occur during vehicle operations. It will also cover the compound issues that may emerge between security, privacy, and safety in drone operations.

Bio for Donna Dulo:

Donna A. Dulo is a senior mathematician, computer scientist, and software/systems engineer for the US Department of Defense where she has worked for over 25 years in both military and civilian capacities. She is also a systems, software, and safety engineer for Icarus Interstellar where she designs spacecraft and integrates safety and resilience into spacecraft systems. Donna is an adjunct faculty member at the Embry-Riddle Aeronautical University Worldwide Campus where she teaches computer security and information assurance law. Donna provides consulting services to various entities such as NASA and the Monterey Institute for Research in Astronomy, as well as to various universities. Donna is the author of several articles on law, drones, and software engineering and is writing the ABA’s book on unmanned aerial systems law which is due for publication in the summer of 2014.

Donna did her undergraduate work at the US Coast Guard Academy and the University of the State of New York in Management Economics. Her graduate work includes an MS in Computer Science specializing in computer security, wireless autonomous systems, and artificial intelligence from the US Naval Postgraduate School, an MS in Systems Engineering focusing on aerospace systems from Johns Hopkins University, an MAS in Aeronautics and Aerospace Safety Systems from Embry-Riddle Aeronautical University, an MBA in Engineering and Technology Management from City University of Seattle, an MA in National Security and Strategic Studies from the US Naval War College, an MS in Computer Information Systems from the University of Phoenix, and a Doctor of Jurisprudence from the Monterey College of Law. She is currently a PhD candidate in Aerospace Software Engineering at the US Naval Postgraduate School where she is researching the safety impact of software on manned and unmanned systems.

steve wu · 2013-06-22

For robot manufacturers, one of the top legal issues is product liability. No manufacturer wants to face a company-ending product liability verdict. On Wednesday, June 26, I will be presenting a program on managing the risk of product liability in commercializing robots. The Association for Unmanned Vehicle Systems International will be hosting the program as a webinar. To view a program description and register, click here.

This program follows my presentation at the We Robot Conference at Stanford Law School in April. The white paper I presented at the program appears here. I focused on the root causes of large-dollar verdicts against manufacturers in product liability suits. Why did these verdicts take place? What are the legal theories under which manufacturers face liability? How can manufacturers limit and manage their risk?

I also considered some topics that product liability presentations don’t usually cover. These topics concern sources of risk other than design, manufacturing, and other engineering risks. In particular, I talked about information security risks and supply chain risk. I also talked about effective records management practices that will help a manufacturer prepare today to win suits that won’t be filed until perhaps decades from now.

If you missed the We Robot Conference, you have a chance to hear the presentation this coming Wednesday. I will also add an additional thought about a non-traditional source of risk: liability arising from faulty data sources. For instance, in the driverless car context, what happens if a manufacturer obtains map data from a vendor that provides faulty data? We will discuss this and many other questions.

“One of the most significant obstacles to the proliferation of autonomous cars is the fact that they are illegal on most public roads.” That’s what Wikipedia tells us—at least until I change it. I can’t change a New York Times op-ed that declared “driverless cars” to be “illegal in all 50 states” or the many articles that have repeated this claim.

To the extent that such pronouncements of illegality reflect assumption rather than analysis, they are inconsistent with our nation’s entrepreneurial narrative: An invention is not illegal simply because it is new, and a novel activity is not prohibited just because it has not been affirmatively permitted. So to determine the actual legal status of the automated vehicles that may someday roam our roads, I reviewed relevant law at the international, national, and state levels. While my 100-page study raises a number of questions about both the ultimate design of these vehicles and the duties of their human operators, it finds no law that categorically prohibits automated driving. In short, even without specific legislation, automated vehicles are probably legal in the United States.

A striking corollary of this finding is that Nevada, Florida, and California (the three states that have already enacted pertinent legislation) did not really “legalize” automated vehicles, as has been popularly reported. Instead, those recent laws primarily regulate these technologies. In Nevada, for example, both an automated vehicle and its operator must be specially registered with the state, but across the border in Arizona, where a similar bill failed to pass, no such requirements exist.

The laws are significant for other reasons as well: They endorse the potential of, catalyze important discussions about, and establish basic safety requirements for these long-term technologies. To a more limited extent, these laws also reduce legal uncertainty: “Definitely legal” sounds very different than “probably legal.”

Curiously, however, one of the stronger challenges to the legality of automated vehicles is actually a law that no state can repeal. After World War II, thousands of Americans began shipping their cars across the Atlantic to motor through Europe, where they encountered a variety of drivers—of horses, pack animals, and livestock in addition to cars and bikes—who were following a variety of road customs. National governments, including the United States, sought to harmonize these customs through the 1949 Geneva Convention on Road Traffic. One of the rules of the road specified in this international agreement is that every kind of road vehicle “shall have a driver” who is “at all times … able to control” it. Because the treaty is federal law—domestically comparable to a statute enacted by Congress—no state government or federal administrative agency can lawfully contravene it.

Fortunately for Nevada and its early-adopting brethren, this treaty provision is not necessarily inconsistent with automated driving. Human operators are able to control today’s research vehicles by starting them, stopping them, and intervening at any point along the way. Even a vehicle without a human behind the wheel would probably satisfy this requirement if it performs at least as safely, reasonably, and lawfully as a human driver would. A vehicle that operates within these bounds would essentially be under control, regardless of whether its legal driver is a human, a computer, or a company. The upshot: Emerging technologies are much more likely to shape the future interpretation of this treaty language than the language is to shape the future development of these technologies.

Nonetheless, significant legal uncertainty does remain, even in Nevada, Florida, and California. Take two examples. First, the human who operates or otherwise uses an automated vehicle may need to participate more actively in that operation than the particular technology itself may demand. New York rather uniquely requires a driver to keep one hand on the steering wheel—though it does not require her to actually steer the vehicle to which the wheel is attached. The District of Columbia, among others, prohibits “distracted driving” and mandates “full time and attention” during operation—requirements that the Autonomous Vehicle Act recently passed by its council will not change. And in state tort law, even driver behavior that is not expressly illegal might nonetheless be civilly negligent.

Second, current rules of the road reflect the fact that human drivers necessarily make real-time decisions that are generally judged, if at all, only afterward. Automated driving still requires human decisions, but they are the anticipatory decisions of human designers rather than or in addition to the reactive decisions of human drivers. At the state and local levels, how and to whom will laws that prescribe “reasonable,” “prudent,” “practicable,” and “safe” driving apply? And at the federal level, what will constitute the kind of “unreasonable risk” that triggers a vehicle recall? Do standards like these merely require an automated vehicle to perform as well as a reasonable human driver—or will governments, courts, and consumers expect something more? In particular, when crashes inevitably occur, how will legal responsibility be divided among manufacturers, designers, data providers, owners, operators, passengers, and other potential parties?

These are just some of the important questions that will emerge as particular automation technologies are further developed, tested, and ultimately commercialized. Governments may not be able to answer them yet (and perhaps they shouldn’t yet try), but this does not mean that automated vehicles are illegal. To the contrary, on this threshold question of legality, my analysis suggests that while the road may be curvy, the lights are not all red.

wendy m. grossman · 2012-07-23

First: hi. The idea is that I will post sporadically on topics relating to Robots, Freedom, and Privacy.

At the east coast hacker conference HOPE 9, held the weekend of July 13-15, 2012, Pacific Social Architecting Corporation’s Tim Hwang reported on experiments the company has been conducting with socialbots: bot accounts deployed on social networks like Twitter and Facebook for the purpose of studying how they can be used to influence and alter the behavior and social landscapes of other users. Their November 2011 paper (PDF) gives some of the background.

The highlights:
– Early in 2011, Hwang conducted a competition to study socialbots. Teams scored points by getting their bot-controlled Twitter accounts (and any number of supporting bots) to make connections with and elicit social behavior from an unsuspecting cluster of 500 online users. Teams got +1 point for each mutual follow, +3 points for each social response, and -15 if an account was detected and killed by Twitter. The New Zealand team won with bland, encouraging statements; no AI was involved, but the bot’s responses were encouraging enough for people to talk to it. A second entrant used Amazon’s Mechanical Turk: a user could ask the bot a direct question, and it would forward the question to the Mechanical Turk workers and return their answer. A third effort redirected tweets randomly between unconnected groups of users talking about the same topics.

– A bot can get good, human responses to “Are you a bot?” by asking that question of human users and reusing the responses.

– In the interests of making bots more credible (as inhabited by humans), it helped for them to take enough hours off to appear to sleep like humans.

– Many bot personalities tend to fall apart in one-to-one communication, so they wouldn’t fare well in traditional AI/Turing test conditions – but online norms help them seem more credible.

– Governments are beginning to get into this. The researchers found bots actively promoting both sides of the most recent Mexican election. Newt Gingrich claimed that the number of Twitter followers he had showed he had a grassroots following on the Internet; however, an aide who had quit disclosed that most of his followers were fakes, boosted by blank accounts created by a company hired for the purpose. Experienced users are pretty quick to spot fake accounts; will we need crowd-based systems to protect less sophisticated users (like collaborative spam-reporting systems)? But this is only true of the rather crude bots we have so far. What about more sophisticated ones? Hwang believes the bigger problem will come when governments adopt the much more difficult-to-spot strategy of using bots to “shape the social universe around them” rather than to censor.
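The scoring scheme in the first highlight above is simple arithmetic; here is a minimal sketch (the function and the example numbers are mine, purely illustrative, not the contest's actual code):

```python
# Hypothetical tally of the socialbot contest's point scheme:
# +1 per mutual follow, +3 per social response, -15 per bot
# account detected and suspended by Twitter.

def score(mutual_follows, social_responses, suspensions):
    """Return a team's total score under the contest's rules."""
    return 1 * mutual_follows + 3 * social_responses - 15 * suspensions

# A team with 40 mutual follows and 25 social responses that loses
# one bot to detection: 40 + 75 - 15 = 100 points.
print(score(40, 25, 1))  # 100
```

One reading of the steep -15 penalty is that it rewarded exactly the winning strategy: bland, inoffensive chatter that avoids detection matters more than any single interaction.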

Hwang noted the ethical quandary raised by people beginning to flirt with the bot: how long should the bot go on? Should it shut down? What if the human feels rejected? I think the ethical quandary ought to have started much earlier; although the experiment was framed in terms of experimenting with bots, in reality the teams were experimenting on real people, even if only for two weeks and on Twitter.

Hwang is on the right track when he asks, “Does it presage a world in which people design systems to influence networks this way?” It’s a good question, as is the question of how to defend against this kind of thing. But it seems to me typical of the constant reinvention of the computer industry that Hwang had not read – or heard of – Andrew Leonard’s 1997 book Bots: The Origin of New Species, which reports on the prior art in this field: experiments with software bots interacting with people through the late 1990s (I need to reread it myself). So perhaps one of the first Robots, Freedom, and Privacy dangers is the failure to study past experiments in the interests of avoiding the obvious ethical issues that have already been uncovered.

wg

huttunen · 2012-03-16

Technological development has faced criticism. The efficiency brought by industrialization and computer technology is expected to lead, eventually, to an unpleasant outcome. Critics have developed these themes through science-fiction stories about a future filled with technology: in one scenario, people stagnate and indulge only their animal desires; in another, they become insensitive robots relying on pure reason. (Steve Fuller, New Frontiers in Science and Technology Studies (Polity, Cambridge 2007) 232 p)

Herbert Marcuse was of the opinion that the logos of technology equals the logos of slavery. People have become tools, even though it was thought that technology would liberate them. In his book One-Dimensional Man, published in 1964, Marcuse argues that in the historical continuum man has been, and will be, the master of other men. This is a social reality that societal changes do not affect. The basis for domination, however, has changed over the ages: personal dependence has been replaced by an objective order of dependency, so that the slave’s dependence on the master has become dependence on economic laws and the market. According to Marcuse, this higher form of rationality exploits natural and spiritual resources more efficiently and distributes profits in a new way. Man can be seen as a slave within the production machinery, where a struggle for existence plays out; that struggle exerts its destructive power on the machinery and its parts, including its builders and users.

The title of the blog, Legal Futurology, contains a certain degree of deliberate ambiguity. You, Dear Reader, may wonder what kind of a future we are talking about and what the law has to do with it. At this point we don’t expect we will be writing about futures (the financial instruments) or about future developments in the law in general, say, regarding the resolution of the financial crisis or the next EU treaty or planned directive this or statute that or what the court will (or should) decide in Rubber v. Glue or whatever. While we cannot promise to avoid such topics altogether (classic evasive move there), what we have in mind are some very specific aspects of the future and the law.

The future we are referring to is that of the William Gibson quotation ‘The future is already here — it’s just not very evenly distributed.’ It is also that of Richard Susskind’s book The Future of Law. And since at least one of us is a board-certified Legal Realist, there might be the odd dash of future in the sense of Prediction Theory thrown in as well.

More concretely, we both are researchers at the University of Helsinki working at the intersection of law and artificial intelligence. Our perspectives are quite different: one of us (Anniina) studies AI as the object of legal regulation, whereas the other (Anna) studies AI as a tool to facilitate legal information retrieval or even do legal reasoning by itself. These complementary perspectives should open up a broader range of topics than either of us could cover by herself. We are also planning to take advantage of this in more traditional fora through co-authored publications (stay tuned!).

ryan calo · 2011-09-24

Not many people in the legal academy study artificial intelligence or robotics. One fellow enthusiast, Kenneth Anderson at American University, posed a provocative question over at the Volokh Conspiracy yesterday: will the Nobel Prize for literature ever go to a software engineer who writes a program that writes a novel?

What I like about Ken’s question is its basic plausibility. Software has already composed original music and helped invent a new type of toothbrush. It does the majority of stock trading. Software could one day write a book. A focus on the achievable is also what I find compelling about Larry Solum’s exploration of whether AI might serve as an executor of a trust or Ian Kerr’s discussion of the effects of software agents on commerce.

I commute back and forth to Stanford from San Francisco and, to pass the time, I listen to the occasional audio book. A few weeks ago I finished Daniel Wilson’s Robopocalypse, slated to become a Steven Spielberg movie in 2013. The book was entertaining. It was also technically quite specific. Wilson is a roboticist with a PhD from Carnegie Mellon and was able to lend a certain realism to his doomsday scenario. Many of the robots he described exist in prototype and some of the ethical issues flow from contemporary human-robot interaction literature.

But like most scary robot stories, Wilson’s depiction of a robot revolution helped itself to a quixotic key ingredient: a sentient machine. The villain in Robopocalypse is a self-aware computer program called Archos that, in what must be a nod to Milo of Microsoft’s project Natal, presents itself as a soft spoken little boy. This psychotic, artificial toddler decides it would be a good idea to prune the human race by a few billion and therefore sets about coordinating a massive robot assault.

Strong AI, meaning general intelligence of the sort we might expect from a conscious being, is a common feature of movies involving robots, killer or otherwise. Think Terminator or 2001: A Space Odyssey. But machine sentience, let alone malice toward people, is not plausible in anything like the short run. A friend in robotics at the University of Sydney described the state of the art this way: we have been doing AI since at least the 1950s, when the term was coined at Dartmouth College. Sixty years later, robots are about as smart as insects.

In a lovely essay, Northwestern’s John McGinnis acknowledges the hurdles we would have to overcome to achieve strong AI. One is vastly increased computational power. I agree with McGinnis that gains of this sort are likely in light of the unchecked, exponential growth we have seen to date. The second, however, is software capable of leveraging that computational power into a form of intelligence. Here I think the case is thin. Time will tell, of course, and I should note that AI is but one of the technologies McGinnis examines in what promises to be a fascinating book, Accelerating Democracy.

Weak or “narrow” AI, in contrast, is a present-day reality. Software controls many facets of daily life and, in some cases, this control presents real issues. One example is the May 2010 “flash crash” that caused a temporary but enormous dip in the market. A subsequent report on the crash placed much of the blame on high-frequency trading algorithms. Danielle Keats Citron has written about the problematic role of autonomous software programs deployed by the government.

One of my favorite works of fiction to discuss AI’s potential impact on society is Daemon, a recent novel by Daniel Suarez. Suarez’s vision is of a series of relatively simple software programs set into motion by a game designer and able to act on the world. Suarez is a more gifted writer than Wilson, in my view, but the book’s real appeal comes from the fact that most everything in the narrative could happen today. And, importantly, the book’s villain is a really clever person—one who uses software to manipulate and harm others. The result is eye-opening, the implications for law and society arguably immediate.

I would recommend any of these works. I am also happy to report that Solum, McGinnis, Kerr, and an AI expert are coming to Stanford Law School this October to discuss AI and the law on a panel. We hope to record it and post it on the Center for Internet and Society’s website. But in my view, our first priority should be thinking through the negative ramifications of the many computer programs already capable of acting upon the world. Worrying that robots will become self-aware and hurt people feels a little like worrying that mops and brooms will become enchanted and ruin the sorcerer’s house.

“Michigan held off Iowa for a 7-5 win on Saturday. The Hawkeyes (16-21) were unable to overcome a four-run sixth inning deficit. The Hawkeyes clawed back in the eighth inning, putting up one run.” This piece of sports news was generated by an intelligent system: it was written by Narrative Science’s computers in the United States. It was neither created nor edited by a human, which means that it is completely computer generated. This particular text is likely not protected by copyright, as it is not sufficiently original and creative. However, when the software evolves and becomes able to create writings that fulfill the prerequisites for copyright protection, the question of authorship becomes relevant. As lawyers, we will then face the question of how to approach this issue under the copyright laws. Another example: Google’s intelligent car was involved in an accident in August 2011. This time, a human was responsible for the accident, but what if the autonomous vehicle had been considered responsible? How would we fit the case within the legal regime governing liability? Further, what if an intruder breaks into the video camera of a robot and spies on children in their bedroom? How would this problem be approached from the perspective of the privacy laws?
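Data-to-text systems like the one quoted generally fill narrative templates from structured game statistics; here is a deliberately crude sketch (the template and field names are mine, not Narrative Science’s actual pipeline):

```python
# Simplified template-based sports recap, in the spirit of (but far
# cruder than) data-to-text systems such as Narrative Science's.

def recap(game):
    """Render a one-sentence recap from structured game data."""
    return (f"{game['winner']} held off {game['loser']} for a "
            f"{game['winner_runs']}-{game['loser_runs']} win on {game['day']}.")

print(recap({
    "winner": "Michigan", "loser": "Iowa",
    "winner_runs": 7, "loser_runs": 5, "day": "Saturday",
}))
# Michigan held off Iowa for a 7-5 win on Saturday.
```

Even this toy version raises the copyright point in the post: the sentence is fully determined by the data and the template, leaving little room for the originality that copyright protection presupposes.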

Over the past 60 years, information technology has become an integral part of our everyday life. The number of transistors contained in processors has doubled roughly every two years in accordance with Moore’s Law, and this trend is expected to continue well into the future. Computers and other technical equipment have become networked, thus increasing the overall technical capacity. As the next step in this process, machines are becoming more and more intelligent. For example, Bill Gates, the leading figure of the computer revolution, predicts that robotics is the next revolutionary technology; according to him, every household will have at least one robot. Gates compares the current development phase of robotics to where the computer industry stood in the mid-1970s.
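The compounding behind Moore’s Law is easy to make concrete; a small sketch with notional numbers (the starting count is illustrative, not actual chip data):

```python
# Illustrative compounding under Moore's Law: a count that doubles
# once every two-year period.

def projected_count(start_count, years, doubling_period=2):
    """Project a transistor count forward under periodic doubling."""
    return start_count * 2 ** (years // doubling_period)

# Starting from a notional 10,000 transistors, two decades of
# doubling every two years is ten doublings: a 1,024-fold increase.
print(projected_count(10_000, 20))  # 10240000
```

The point of the exercise is the exponent: over the 60-year span the post describes, the same rule implies roughly thirty doublings, a factor of about a billion.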

The properties involved in the operation of robots are usually called artificial intelligence. Artificial intelligence can be defined in several ways. According to John McCarthy, artificial intelligence is the science and engineering of making intelligent machines; intelligence, in turn, refers to the ability to achieve goals. In a recent bill on autonomous vehicles adopted by the state of Nevada in the US, artificial intelligence is defined as the use of computers and similar devices to enable a machine to mimic and reproduce human behavior.

Legally, artificial intelligence can be approached from at least two different angles. First, one can explore how applications of artificial intelligence are used in legal decision-making. Second, one can examine what challenges artificial intelligence poses to jurisprudence. My research will focus on the latter set of questions. I will study artificial intelligence systems in light of the current system of norms. This new technology raises new legal issues that can be partially solved by means of traditional jurisprudence. However, there will also be problems and challenges for which the current system offers few sustainable solutions.

An intelligent system can be, for example, an intelligent agent or a robot. An intelligent agent is a computer program that contains artificial intelligence; intelligent agents can modify their own code and learn from their mistakes. According to Peter Singer’s classic definition, robots are made up of three parts: sensors, processors (or artificial intelligence), and actuators. Sensors monitor the environment and detect changes in it; processors decide how to respond to those changes; and actuators carry out the processors’ decisions, creating changes in the world around the robot. According to Maja Mataric’s much-used definition, a robot is an autonomous system that exists in the physical world, senses its environment, and can act in its environment in order to achieve particular goals.
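Singer’s three-part decomposition corresponds to the classic sense-think-act control loop; a minimal sketch (the class and the thermostat example are mine, purely illustrative):

```python
# Minimal sense-think-act loop illustrating the three robot parts:
# sensors observe, the processor decides, actuators change the world.

class Robot:
    def __init__(self, sensors, processor, actuators):
        self.sensors = sensors        # callables returning observations
        self.processor = processor    # maps observations to a decision
        self.actuators = actuators    # applies a decision to the world

    def step(self, world):
        observation = [sense(world) for sense in self.sensors]  # sense
        decision = self.processor(observation)                  # think
        return self.actuators(decision, world)                  # act

# Toy example: a thermostat "robot" that heats when it is cold.
robot = Robot(
    sensors=[lambda w: w["temp"]],
    processor=lambda obs: "heat" if obs[0] < 20 else "idle",
    actuators=lambda d, w: {**w, "temp": w["temp"] + (1 if d == "heat" else 0)},
)
print(robot.step({"temp": 18}))  # {'temp': 19}
```

Each legal actor the post goes on to discuss maps onto one of these stages: sensor suppliers, the programmers behind the processor, and the manufacturers of the actuated hardware.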

The intelligence of machines can be divided into three classes. First, the agent can be autonomously intelligent: a machine agent implements intelligent functions independently, without need for human intervention. Second, the machine can augment human intelligence, acting in close interplay with a human; in this case intelligence is both borrowed from the human and created through human-robot interaction. Third, intelligence can be analogous to swarm intelligence: multiple robots can elicit complex and intelligent behavior when interacting with each other, even if any one of the robots could safely be considered “stupid” upon individual examination.

2) Objectives and Rationale

I explore the development of intelligent systems and the feasibility of the legal framework, in particular in the consumer environment. I look into copyright, legal liability, and privacy issues in a problem-oriented manner. I look for the criteria by which liability issues should be resolved, and I search for points of reference combining copyright, privacy, and tort-based judicial review. My research will focus, for example, on intelligent agents that create news, music, and literary works. I will also examine household robots, such as the Personal Robot 2 (“PR2”) developed by Willow Garage in Silicon Valley, a robot that knows how to fold laundry and how to pick up goods from the refrigerator.

I will consider regulation in Finland, the United Kingdom, the European Union and the US. The United Kingdom and the US have been chosen for review because the current information technology laws are largely based on practices evolved in these two countries.

My research problems are, in particular, the following: Who is responsible for damages caused by intelligent systems, and who holds the copyright to works created by artificial intelligence? The actors under review are the producers of intelligent machines, the programmers of the software run on such machines, the users of the machines, their owners, and the intelligent systems themselves. My purpose is to outline the various legal doctrines from among which the legislator can choose a policy to pursue de lege ferenda. My goal is for this thesis to contribute to the technological development process, as I find that the legal aspects of robotics should be taken into account in the development of services and products.

In the final section of the thesis I will analyze how intellectual property, tort, and privacy laws have responded to the development of technologies. I will also examine how these laws now seem to apply to technologies involving artificial intelligence, and what the current legal regime reveals about the relationship between technological and social developments. I will start by examining how copyright was once applied to the development of photography, and compare this to how intellectual property now regulates intelligent systems. Then I will examine public limited companies and the manner in which legal liability related to software took shape, and compare this to legal liability related to robots. I will also analyze intelligent systems under privacy laws, comparing this to the way privacy policy has responded to, for example, RFID technology.

Finally, I will consider whether the issues related to robots are so different from those present in other fields of technology that they require their own approach, or whether robots can be examined within the general conceptual systems of various areas of law as they currently exist. In other words, the research question is how to respond to the challenges of new technologies. Can the new policy questions be solved by the traditional means of legal interpretation, or is a new kind of approach required?

3) Methods

This is a problem-oriented study. The viewpoint is legal, comprising both domestic and comparative law. However, despite the importance of an international perspective, the focus is on the European and the Finnish systems.

Finally, I also approach the research problem from the viewpoints of technology, sociology, and history, and draw on the theory of Science and Technology Studies (STS). The study provides an overview of, for example, Bruno Latour’s Actor-Network Theory, and discusses what kinds of actors can be found behind intellectual property law, tort law, and the privacy laws. The sociological and historical study is carried out in the latter part of the thesis as an adjunct to the legal analysis.