Patrick Lin

Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also an associate philosophy professor. He has published several books and papers in technology ethics, especially on nanotechnology, human enhancement, robotics, cyberwarfare, and space exploration, among other areas. He teaches courses in ethics, political philosophy, philosophy of technology, and philosophy of law. Dr. Lin has appeared in international media, including the BBC, Forbes, National Public Radio (US), Popular Mechanics, Popular Science, Reuters, Science Channel, Slate, The Atlantic, The Christian Science Monitor, The Times (UK), and Wired.

Dr. Lin is or has been affiliated with several other leading organizations, including Stanford Law School's Center for Internet and Society, Stanford's School of Engineering (CARS), the New America Foundation, the UN Institute for Disarmament Research, the University of Notre Dame, the US Naval Academy, and Dartmouth College. He earned his BA from the University of California, Berkeley, and his MA and PhD from the University of California, Santa Barbara.

In the year 2025, a rogue state, long suspected of developing biological weapons, now seems intent on using them against U.S. allies and interests. Anticipating such an event, we have developed a secret "counter-virus" that could infect and destroy their stockpile of bioweapons. Should we use it?


Cyberattacks are the new normal, but, when they come from abroad, they can raise panic about an invisible cyberwar. If international conflicts are unavoidable, isn’t a cyberwar better than a physical war with bombs and bullets?

Sure, cyberwar is better than a kinetic or physical war in many ways, but it could also make war worse. Unless it’s very carefully designed, a cyberattack could be a war crime.

In the first of this two-article series, we saw how augmented reality (AR) is causing friction between individual liberty and the public interest. AR app makers are being required by some parks to obtain a permit before they can "put" virtual objects in those public spaces, given the sudden crowds the apps can cause.

This article looks at the same core dilemma with another technology: automated driving.

With very rare exceptions, automakers are famously coy about crash dilemmas. They don’t want to answer questions about how their self-driving cars would respond to weird, no-win emergencies. This is understandable, since any answer can be criticized—there’s no obvious solution to a true dilemma, so why play that losing game?

When cyberattacks come from abroad, there’s special panic. We often imagine them to be the opening volleys of a cyberwar that could escalate into a kinetic war. For that reason, hacking back—or cyber-counterattacking—is presumed to be too dangerous to allow.

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Last week, the Dallas police killed a suspected gunman with a bomb-delivering robot. It was a desperate measure for desperate times: five law enforcement officers were killed and several more wounded before the shooter was finally cornered.


Robots are unquestionably growing more sophisticated by the year and, as a result, are becoming an integral part of our daily lives. But as our interactions with and dependence on robots increase, an important question needs to be asked: What would happen if a robot actually committed a crime, or even hurt someone, whether deliberately or by mistake?

In an interesting recent essay in the Atlantic – ‘Is it Possible to Wage a Just Cyberwar?’ – Patrick Lin, Fritz Allhoff, and Neil Rowe argue that events such as the Stuxnet cyberattack on Iran suggest that the way we fight wars is changing, as well as the rules that govern them. It is indeed easy to see how nations may be tempted to use cyberweapons to attack anonymously, from a distance, and without the usual financial and personnel costs of conventional warfare. (See also Mariarosaria Taddeo’s interesting recent post on this blog.)

The Atlantic has published a fascinating article about how the ongoing digital revolution is changing the face of war, and how military and government leaders are failing to adopt a new ethics to match. Written by cyberwar and emerging technology experts Patrick Lin, Fritz Allhoff, and Neil Rowe, the essay makes the case that just-war theory still applies – even when the battlefield is digital.


Attendees will hear leading speakers, participate in interactive breakout sessions, and network with key innovators in this exciting field. Don't miss what's in store for the Automated Vehicles Symposium 2016.

Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can't avoid accidents altogether. How should a car be programmed if it encounters an unavoidable crash? Patrick Lin navigates the murky ethics of self-driving cars.