LAST OCTOBER, MILLIONS of interconnected devices infected with malware mounted a “denial-of-service” cyberattack on Dyn, a company
that operates part of the Internet’s
directory service. Such attacks require us to up our technical game in
Internet security and safety. They also
expose the need to frame and enforce
social and ethical behavior, privacy,
and appropriate use in Internet environments.

Social behavior and appropriate
use become even more crucial as we
build out the “Internet of Things”
(IoT)—an increasingly interconnected cyber-physical-biological environment that links devices, systems,
data, and people. At its best, the IoT
has the potential to create an integrated ecosystem that can respond
to a spectrum of needs, increasing
efficiency and opportunity, and empowering people through technology, and technology through intelligence. At its worst, the IoT can open
a Pandora’s Box of inappropriate and
unsafe behavior, unintended consequences, and intrusiveness.

The difference between an IoT
that enhances society and one that
diminishes it will be determined by
our ability to create an effective model for IoT governance. This model
must guide social behavior and ethical use of IoT technologies while promoting effective security and safety.

While we should not limit technology innovation too early with overly
restrictive policy, neither should we
leave the policy and governance discussion until the IoT is so mature
that it cannot easily incorporate protections.

What Policy Will Be Needed for the IoT?

Although much of the policy needed for
the IoT may evolve from Internet governance, the scale, heterogeneity, complexity, and degree of technological autonomy within the IoT will require new
thinking about regulation and policy
and force new interpretations of current
law. As an example of the complexity
of the governance challenge, consider
three key areas critical to ensure the
positive potential of the IoT:

1. What are your rights to privacy in the IoT? The IoT will sharpen the tension between individual privacy and the use of personal information to promote effectiveness, safety, and security. Who should control information about you? Who should access it? Who can use it? The answer is not always clear-cut. Consider medical monitoring devices and the information they accumulate. Should your personal health information be shared when the Centers for Disease Control want to track a potential epidemic? When biomedical researchers want to model potential treatment strategies on a richer dataset? When an employer is considering you for a job?

At present, policy and laws about
online privacy and rights to information are challenging to interpret and
difficult to enforce. As IoT technologies become more pervasive, personal
information will become more valuable to a diverse set of actors that include organizations, individuals, and
autonomous systems with the capacity to make decisions about you.

Some have suggested that individuals should have a basic right to opt
out, delete, or mask their information
from systems in the IoT, providing
one tenet of a potential IoT “Bill of
Rights.” However, it may be infeasible
or impossible for an individual to control all the data generated about them
by IoT systems.

Interestingly, strong individual
privacy rights may also mean less social benefit. Too many “opt-outs” may
erode the public and private value of
IoT datasets,3 negatively impacting
their social benefit—imagine a Google
map where locations come and go.

Moreover, useful services subject to dynamic participation and evolving individual preferences may be extraordinarily complex to develop and administer.

2. Who is accountable for decisions made by autonomous systems?

As autonomous systems replace some
human activities, we face the challenge of when and how these systems
should be deployed, and who is responsible and accountable for their
behavior. When your “smart” system