Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots.

Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture suspects? This would avoid a serious ethical
conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the
interrogated. A robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.

The ethics of military robots is quickly marching ahead, judging by news coverage and academic research. Yet there's little discussion about robots in
the service of national intelligence and espionage, which are omnipresent activities in the background. This is surprising, because most military
robots are used for surveillance and reconnaissance, and their most controversial uses are traced back to the Central Intelligence Agency (CIA) in
targeted strikes against suspected terrorists. Just this month, a CIA drone--an RQ-170 Sentinel--crash-landed intact into the hands of the Iranians, exposing the secret US spy program in the volatile region.

The US intelligence community, to be sure, is very much interested in robot ethics. At the least, they
don't want to be ambushed by public criticism or worse, since that could derail programs, waste resources, and erode international support. Many in
government and policy also have a genuine concern about "doing the right thing" and the impact of war technologies on society. To those ends, In-Q-Tel--the CIA's technology venture-capital arm (the "Q" is a nod to the technology-gadget genius in the
James Bond spy movies)--had invited me to give a briefing to the intelligence community on ethical surprises in their line of work, beyond familiar
concerns over possible privacy violations and illegal assassinations. This article is based on that briefing, and while I refer mainly to the US
intelligence community, this discussion could apply just as well to intelligence programs abroad.

BACKGROUND

Robotics is a game-changer in national security. We now find military robots in just about every environment: land, sea, air, and even outer space.
They have a full range of form-factors from tiny robots that look like insects to aerial drones with wingspans greater than a Boeing 737 airliner. Some
are fixed onto battleships, while others patrol borders in Israel and South Korea; these have fully-auto modes and can make their own targeting and
attack decisions. There's interesting work going on now with micro robots, swarm robots, humanoids, chemical bots, and biological-machine integrations.
As you'd expect, military robots have fierce names: TALON SWORDS, Crusher, BEAR, Big Dog, Predator, Reaper, Harpy, Raven, Global Hawk, Vulture,
Switchblade, and so on. But not all are weapons--for instance, BEAR is designed to retrieve wounded soldiers on an active battlefield.

The usual reason why we'd want robots in the service of national security and intelligence is that they can do jobs known as the three "D"s: dull jobs, such as extended reconnaissance or patrol beyond the limits of human endurance, and standing guard over perimeters; dirty jobs, such as work with hazardous materials, after nuclear or biochemical attacks, and in environments unsuitable for humans, such as underwater and outer space; and dangerous jobs, such as tunneling into terrorist caves, controlling hostile crowds, or clearing improvised explosive devices (IEDs).

But there's a new, fourth "D" that's worth considering, and that's the ability to act with dispassion. (This is motivated by Prof. Ronald Arkin's work at Georgia Tech, though others remain skeptical, such as Prof. Noel Sharkey at University of Sheffield in the UK.) Robots wouldn't act
with malice or hatred or other emotions that may lead to war crimes and other abuses, such as rape. They're unaffected by emotion and adrenaline and
hunger. They're immune to sleep deprivation, low morale, fatigue, etc. that would cloud our judgment. They can see through the "fog of war", to reduce
unlawful and accidental killings. And they can be objective, unblinking observers to ensure ethical conduct in wartime. So robots can do many of our
jobs better than we can, and maybe even act more ethically, at least in the high-stress environment of war.

SCENARIOS

With that background, let's look at some current and future scenarios. These go beyond the obvious intelligence, surveillance, and reconnaissance (ISR), strike, and sentry applications for which most robots are used today. I'll limit these scenarios to a time horizon of about 10-15 years from now.

Military surveillance applications are well known, but there are also important civilian applications, such as robots that patrol playgrounds
for pedophiles (for instance, in South Korea) and major sporting events for suspicious activity (such as the 2002 World Cup in Seoul and the 2008 Beijing
Olympics). Current and future biometric capabilities may enable robots to detect faces, drugs, and weapons at a distance and underneath clothing. In
the future, robot swarms and "smart dust" (sometimes called nanosensors) may be used in this role.

Robots can be used for alerting purposes, such as a humanoid police robot in China that gives out information, and a Russian police robot that
recites laws and issues warnings. So there's potential for educational or communication roles and on-the-spot community reporting, as related to
intelligence gathering.

In delivery applications, SWAT police teams already use robots to interact with hostage-takers and in other dangerous situations. So robots
could be used to deliver other items or plant surveillance devices in inaccessible places. Likewise, they can be used for extractions too. As
mentioned earlier, the BEAR robot can retrieve wounded soldiers from the battlefield, as well as handle hazardous or heavy materials. In the future, an
autonomous car or helicopter might be deployed to extract or transport suspects and assets, to limit US personnel inside hostile or foreign borders.

In detention applications, robots could be used to guard not just buildings but also people. One advantage here would be the elimination of prison abuses like those we saw at Guantanamo Bay Naval Base in Cuba and Abu Ghraib prison in Iraq. This speaks to the dispassionate way robots can operate. Relatedly--and I'm not advocating any of these scenarios, just speculating on possible uses--robots could solve the dilemma of using physicians in interrogations and torture. These activities conflict with their duty to care and the Hippocratic oath to do no harm. Robots could monitor the vital signs of interrogated suspects as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from malice and prejudices that might take things too far (or much further than they already go).

And robots could act as Trojan horses, or gifts with a hidden surprise. I'll talk more about these scenarios and others as we discuss possible
ethical surprises next.

ETHICAL AND POLICY SURPRISES

Limitations

While robots can be seen as replacements for humans, in most situations, humans will still be in the loop, or at least on the loop--either in
significant control of the robot, or able to veto a robot's course of action. And robots will likely be interacting with humans. This points to a
possible weak link in applications: the human factor.

For instance, unmanned aerial vehicles (UAVs), such as Predator and Global Hawk, may be able to fly the skies for longer than a normal human can
endure, but there are still human operators who must stay awake to monitor activities. Some military UAV operators may be overworked and fatigued,
which may lead to errors in judgment. Even without fatigue, humans may still make bad decisions, so errors and even mischief are always a possibility
and may include friendly-fire deaths and crashes.

Some critics have worried that UAV operators--controlling drones from half a world away--could become detached and less caring about killing, given the
distance, and this may lead to more unjustified strikes and collateral damage. But other reports seem to indicate an opposite effect: These controllers
have an intimate view of their targets by video streaming, following them for hours and days, and they can also see the aftermath of a strike, which
may include strewn body parts of nearby children. So there's a real risk of post-traumatic stress disorder (PTSD) with these operators.

Another source of liability is how we frame our use of robots to the public and international communities. In a recent broadcast interview, one US
military officer was responding to a concern that drones are making war easier to wage, given that we can safely strike from longer distances with
these drones. He compared our use of drones with the biblical David's use of a sling against Goliath: both are about using missile or long-range
weapons and presumably have righteousness on their side. Now, whether or not you're Christian, it's clear that our adversaries might not be. So
rhetoric like this might inflame or exacerbate tensions, and this reflects badly on our use of technology.

One more human weak link: robots may well have better situational awareness than we do, if they're outfitted with sensors that let them see in the dark and through walls, network with other computers, and so on. This raises the following problem: Could a robot ever refuse a human order, if it knows
better? For instance, if a human orders a robot to shoot a target or destroy a safehouse, but it turns out that the robot identifies the target as a
child or a safehouse full of noncombatants, could it refuse that order? Does having the technical ability to collect better intelligence before we
conduct a strike obligate us to do everything we can to collect that data? That is, would we be liable for not knowing things that we might have known
by deploying intelligence-gathering robots? Similarly, given that UAVs can enable more precise strikes, are we obligated to use them to minimize
collateral damage?

On the other hand, robots themselves could be the weak link. While they can replace us in physical tasks like heavy lifting or working with dangerous
materials, it doesn't seem likely that they can take over psychological jobs such as gaining the confidence of an agent, which involves humor,
mirroring, and other social tricks. So human intelligence, or HUMINT, will still be necessary in the foreseeable future.

Relatedly, we already hear criticisms that the use of technology in war or peacekeeping missions isn't helping to win the hearts and minds of local foreign populations. For instance, sending robot patrols into Baghdad to keep the peace would send the wrong message about our willingness to connect with the residents; we will still need human diplomacy for that. In war, this could backfire against us, as our enemies mark us as dishonorable and cowardly for being unwilling to engage them man to man. This serves to make them more resolute in fighting us; it fuels their propaganda and recruitment efforts; and this leads to a new crop of determined terrorists.

Also, robots might not be taken seriously by humans interacting with them. We tend to disrespect machines more than humans, abusing them more often,
for instance, beating up printers and computers that annoy us. So we could be impatient with robots, as well as distrustful--and this reduces their
effectiveness.

Without defenses, robots could be easy targets for capture, yet they may contain critical technologies and classified data that we don't want to fall
into the wrong hands. Robotic self-destruct measures could go off at the wrong time and place, injuring people and creating an international crisis. So
do we give them defensive capabilities, such as evasive maneuvers or maybe nonlethal weapons like repellent spray or Taser guns or rubber bullets?
Well, any of these "nonlethal" measures could turn deadly too. In running away, a robot could mow down a small child or enemy combatant, which would
escalate a crisis. And we see news reports all too often about unintended deaths caused by Tasers and other supposedly nonlethal weapons.

International humanitarian law (IHL)

What if we designed robots with lethal defenses or offensive capabilities? We already do that with some robots, like the Predator, Reaper, CIWS, and
others. And there, we run into familiar concerns that robots might not comply with international humanitarian law, that is, the laws of war. For
instance, critics have noted that we shouldn't allow robots to make their own attack decisions (as some do now), because they don't have the technical
ability to distinguish combatants from noncombatants, that is, to satisfy the principle of distinction, which is found in various places such as the
Geneva Conventions and the underlying just-war tradition. This principle requires that we never target noncombatants. But a robot already has a hard
time distinguishing a terrorist pointing a gun at it from, say, a girl pointing an ice cream cone at it. These days, even humans have a hard time with
this principle, since a terrorist might look exactly like an Afghani shepherd with an AK-47 who's just protecting his flock of goats.

Another worry is that the use of lethal robots represents a disproportionate use of force, relative to the military objective. This speaks to the
collateral damage, or unintended death of nearby innocent civilians, caused by, say, a Hellfire missile launched by a Reaper UAV. What's an acceptable
rate of innocents killed for every bad guy killed: 2:1, 10:1, 50:1? That number hasn't been nailed down and continues to be a source of criticism. It's
conceivable that there might be a target of such high value that even a 1,000:1 collateral-damage rate, or greater, would be acceptable to us.

Even if we could solve these problems, there may be another one we'd then have to worry about. Let's say we were able to create a robot that targets
only combatants and that leaves no collateral damage--an armed robot with a perfectly accurate targeting system. Well, oddly enough, this may violate a
rule by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than 25% field mortality and 5% hospital mortality.
ICRC is the only institution named as a controlling authority in IHL, so we comply with their rules. A robot that kills most everything it aims at
could have a mortality rate approaching 100%, well over ICRC's 25% threshold. And this may be possible given the superhuman accuracy of machines, again
assuming we can eventually solve the distinction problem. Such a robot would be so fearsome, inhumane, and devastating that it threatens an implicit
value of a fair fight, even in war. For instance, poison is also banned for being inhumane and too effective. This notion of a fair fight comes from
just-war theory, which is the basis for IHL. Further, this kind of robot would force questions about the ethics of creating machines that kill people
on their own.

Other conventions in IHL may be relevant to robotics too. As we develop human enhancements for soldiers, whether pharmaceutical or robotic
integrations, it's unclear whether we've just created a biological weapon. The Biological Weapons Convention (BWC) doesn't specify that bioweapons need
to be microbial or a pathogen. So, in theory and without explicit clarification, a cyborg with super-strength or super-endurance could count as a
biological weapon. Of course, the intent of the BWC was to prohibit indiscriminate weapons of mass destruction (again, related to the issue of humane
weapons). But the vague language of the BWC could open the door for this criticism.

Speaking of cyborgs, there are many issues related to these enhanced warfighters, for instance: If a soldier could resist pain through robotics or
genetic engineering or drugs, are we still prohibited from torturing that person? Would taking a hammer to a robotic limb count as torture? Soldiers
don't sign away all their rights at the recruitment door: what kind of consent, if any, is needed to perform biomedical experiments on soldiers, such
as cybernetics research? (This echoes past controversies related to mandatory anthrax vaccinations and, even now, required amphetamine use by some
military pilots.) Do enhancements justify treating soldiers differently, either in terms of duties, promotion, or length of service? How does it affect
unit cohesion if enhanced soldiers, who may take more risks, work alongside normal soldiers? Back more squarely to robotics: How does it affect unit
cohesion if humans work alongside robots that might be equipped with cameras to record their every action?

And back more squarely to the intelligence community, the line between war and espionage is getting fuzzier all the time. Historically, espionage isn't
considered to be casus belli or a good cause for going to war. War is traditionally defined as armed, physical conflict between political
communities. But because so many of our assets are digital or information-based, we can attack--and be attacked--by nonkinetic means now, namely by
cyberweapons that take down computer systems or steal information. Indeed, earlier this year, the US declared as part of its cyberpolicy that we may
retaliate kinetically to a nonkinetic attack. Or as one US Department of Defense official said, "If you shut down our power grid, maybe we'll put a
missile down one of your smokestacks."

As it applies to our focus here: if the line between espionage and war is becoming more blurry, and a robot is used for espionage, under what
conditions could that count as an act of war? What if the spy robot, while trying to evade capture, accidentally harmed a foreign national: could that
be a flashpoint for armed conflict? (What if the CIA drone in Iran recently had crashed into a school or military base, killing children or soldiers?)

Law & responsibility

Accidents are entirely plausible and have happened elsewhere: In September 2011, an RQ-7 Shadow UAV crashed into a military cargo plane in Afghanistan,
forcing an emergency landing. Last summer, test-flight operators of a MQ-8B Fire Scout helicopter UAV lost control of the drone for about half an hour,
which traveled for over 20 miles towards restricted airspace over Washington DC. A few years ago in South Africa, a robotic cannon went haywire and
killed 9 friendly soldiers and wounded 14 more.

Errors and accidents happen all the time with our technologies, so it would be naïve to think that anything as complex as a robot would be immune to
these problems. Further, a robot with a certain degree of autonomy may raise questions of who (or what) is responsible for harm caused by the robot,
either accidental or intentional: could it be the robot itself, or its operator, or the programmer? Will manufacturers insist on a release of
liability, like the EULA or end-user licensing agreements we agree to when we use software--or should we insist that those products should be thoroughly
tested and proven safe? (Imagine if buying a car required signing a EULA that covers a car's mechanical or digital malfunctions.)

We're seeing more robotics in society, from Roombas at home to robotics on factory floors. In Japan, about 1 in 25 workers is a robot, given their
labor shortage. So it's plausible that robots in the service of national intelligence may interact with society at large, such as autonomous cars or
domestic surveillance robots or rescue robots. If so, they need to comply with society's laws too, such as rules of the road or sharing airspace and
waterways.

But, to the extent that robots can replace humans, what about complying with something like a legal obligation to assist others in need, such as
required by a Good Samaritan Law or basic international laws that require ships to assist other naval vessels in distress? Would an unmanned surface
vehicle, or robotic boat, be obligated to stop and save a crew of a sinking ship? This was a highly contested issue in World War 2--the Laconia
incident--when submarine commanders refused to save stranded sailors at sea, as required by the governing laws of war at the time. It's not unreasonable
to say that this obligation shouldn't apply to a submarine, since surfacing to rescue would give away its position, and stealth is its primary
advantage. Could we therefore release unmanned underwater vehicles (UUVs) and unmanned surface vehicles (USVs) from this obligation for similar
reasons?

We also need to keep in mind environmental, health, and safety issues. Microbots and disposable robots could be deployed in swarms, but we need to
think about the end of that product lifecycle. How do we clean up after them? If we don't, and they're tiny--for instance, nanosensors--then they could
then be ingested or inhaled by animals or people. (Think about all the natural allergens that affect our health, never mind engineered stuff.) They may
contain hazardous materials, like mercury or other chemicals in their battery, that can leak into the environment. Not just on land, but we also need
to think about underwater and even space environments, at least with respect to space litter.

For the sake of completeness, I'll also mention privacy concerns, though these are familiar in current discussions. The worry is not just with microbots--which may look like harmless insects or birds and can peek into your window or crawl into your house--but also with the increasing biometric capabilities that robots could be outfitted with. The ability to detect faces from a distance, as well as drugs or weapons under clothing or
inside a house from the outside blurs the distinction between a surveillance and a search. The difference is that a search requires a judicial warrant.
As technology allows intelligence-gathering to be more intrusive, we'll certainly hear more from these critics.

Finally, we need to be aware of the temptation to use technology in ways we otherwise wouldn't, especially activities that are legally
questionable--we'll always get called out for that. For instance, this charge has already been made against our use of UAVs to hunt down terrorists.
Some call it "targeted killing", while others maintain that it's an "assassination." This is still very much an open question, because "assassination"
has not been clearly defined in international law or domestic law, e.g., Executive Order 12333. And the problem is exacerbated in asymmetrical warfare,
where enemy combatants don't wear uniforms: Singling them out by name may be permitted when it otherwise wouldn't be; but others argue that it amounts
to declaring targets as outlaws without due process, especially if it's not clearly a military action (and the CIA is not formally a military agency).

Beyond this familiar charge, the risk of committing other legally-controversial acts still exists. For instance, we could be tempted to use robots in
extraditions, torture, actual assassinations, transport of guns and drugs, and so on, in some of the scenarios described earlier. Even if not illegal,
there are some things that seem very unwise to do, such as a recent fake-vaccination operation in Pakistan to get DNA samples that might help to find Osama bin Laden. In this case, perhaps robotic mosquitoes could have been deployed, avoiding the suspicion and backlash that humanitarian workers subsequently suffered.

Deception

Had the fake-vaccination program been done in the context of an actual military conflict, then it could be illegal under Geneva and Hague Conventions,
which prohibit perfidy or treacherous deceit. Posing as a humanitarian or Red Cross worker to gain access behind enemy lines is an example of perfidy:
it breaches what little mutual trust we have with our adversaries, and this is counterproductive to arriving at a lasting peace. But even when we're not
acting illegally, we can still act in bad faith and need to be mindful of that risk.

The same concern about perfidy could arise with robot insects and animals, for instance. Animals and insects are typically not considered to be
combatants or anything of concern to our enemies, like Red Cross workers. Yet we would be trading on that faith to gain deep access to our enemy. By
the way, such a program could also get the attention of animal-rights activists, if it involves experimentation on animals.

More broadly, the public could be worried about whether we should be creating machines that intentionally deceive, manipulate, or coerce people. That's
just disconcerting to a lot of folks, and the ethics of that would be challenged. One example might be this: Consider that we've been paying off
Afghani warlords with Viagra, which is a less-obvious bribe than money. Sex is one of the most basic incentives for human beings, so some informants
might want a sex-robot--and these exist today. Without getting into the ethics of sex-robots here, let's point out that these robots could also
have secret surveillance and strike capabilities--a femme fatale of sorts.

The same deception could work with other robots, not just the pleasure models, as it were. We could think of these as Trojan horses. Imagine that we
captured an enemy robot, hacked into it or implanted a surveillance device, and sent it back home: How is this different from masquerading as the enemy
in their own uniform, which is another perfidious ruse? Other questionable scenarios include commandeering robotic cars or planes owned by others, and
creating robots with back-door chips that allow us to hijack the machine while in someone else's possession.

Broader effects

This point about deception and bad faith is related to a criticism we're already hearing about military robots, which I mentioned earlier: that the US
is afraid to send people to fight its battles; we're afraid to meet the enemy face to face, and that makes us cowards and dishonorable. Terrorists
would use that resentment to recruit more supporters and terrorists.

But what about our side: do we need to think about how the use of robotics might affect recruitment in our own intelligence community? If we increasingly
rely on robots in national intelligence--as the US Air Force is relying on UAVs--that could hurt or disrupt efforts to bring in good people. After
all, a robotic spy doesn't have the same allure as a James Bond.

And if we are relying on robots more in the intelligence community, there's a concern about technology dependency and a resulting loss of human skill.
For instance, even inventions we love have this effect: we don't remember as well because of the printing press, which immortalizes our stories on
paper; we can't do math as well because of calculators; we can't recognize spelling errors as well because of word-processing programs with
spell-check; and we don't remember phone numbers because they're stored in our mobile phones. With medical robots, some worry that human surgeons
will lose their skill at performing difficult procedures if we outsource the job to machines. What happens when we don't have access to those robots,
either in a remote location or during a power outage? So it's conceivable that robots in the service of our intelligence community, whatever those scenarios may
be, could also have similar effects.

Even if the scenarios we've been considering end up being unworkable, the mere plausibility of their existence may put our enemies on alert and drive
their conversations deeper underground. It's not crazy for people living in caves and huts to think that we're so technologically advanced that we
already have robotic spy-bugs deployed in the field. (Maybe we do, but I'm not privy to that information.) Anyway, this all could drive an
intelligence arms race--an evolution of hunter and prey, as spy satellites had done to force our adversaries to build underground bunkers, even for
nuclear testing. And what about us? How do we process and analyze all the extra information we're collecting from our drones and digital networks? If
we can't handle the data flood, and something there could have prevented a disaster, then the intelligence community may be blamed, rightly or wrongly.

Related to this is the all-too-real worry about proliferation: that our adversaries will develop or acquire the same technologies and use them against
us. This has already been borne out with every military technology we have, from tanks to nuclear bombs to stealth technologies. Already, over 50 nations
have or are developing military robots like we have, including China, Iran, Libyan rebels, and others.

CONCLUSION

The issues above--from inherent limitations, to specific laws and ethical principles, to big-picture effects--give us much to consider, and consider
them we must. These are critical not only for self-interest, such as avoiding international controversies, but also as a matter of sound and just policy. For either
reason, it's encouraging that the intelligence and defense communities are engaging ethical issues in robotics and other emerging technologies.
Integrating ethics may be more cautious and less agile than a "do first, think later" (or worse "do first, apologize later") approach, but it helps us
win the moral high ground--perhaps the most strategic of battlefields.

Most Popular

Should you drink more coffee? Should you take melatonin? Can you train yourself to need less sleep? A physician’s guide to sleep in a stressful age.

During residency, Iworked hospital shifts that could last 36 hours, without sleep, often without breaks of more than a few minutes. Even writing this now, it sounds to me like I’m bragging or laying claim to some fortitude of character. I can’t think of another type of self-injury that might be similarly lauded, except maybe binge drinking. Technically the shifts were 30 hours, the mandatory limit imposed by the Accreditation Council for Graduate Medical Education, but we stayed longer because people kept getting sick. Being a doctor is supposed to be about putting other people’s needs before your own. Our job was to power through.

The shifts usually felt shorter than they were, because they were so hectic. There was always a new patient in the emergency room who needed to be admitted, or a staff member on the eighth floor (which was full of late-stage terminally ill people) who needed me to fill out a death certificate. Sleep deprivation manifested as bouts of anger and despair mixed in with some euphoria, along with other sensations I’ve not had before or since. I remember once sitting with the family of a patient in critical condition, discussing an advance directive—the terms defining what the patient would want done were his heart to stop, which seemed likely to happen at any minute. Would he want to have chest compressions, electrical shocks, a breathing tube? In the middle of this, I had to look straight down at the chart in my lap, because I was laughing. This was the least funny scenario possible. I was experiencing a physical reaction unrelated to anything I knew to be happening in my mind. There is a type of seizure, called a gelastic seizure, during which the seizing person appears to be laughing—but I don’t think that was it. I think it was plain old delirium. It was mortifying, though no one seemed to notice.

Why the ingrained expectation that women should desire to become parents is unhealthy

In 2008, Nebraska decriminalized child abandonment. The move was part of a "safe haven" law designed to address increased rates of infanticide in the state. Like other safe-haven laws, parents in Nebraska who felt unprepared to care for their babies could drop them off in a designated location without fear of arrest and prosecution. But legislators made a major logistical error: They failed to implement an age limitation for dropped-off children.

Within weeks of the law's passage, parents started dropping off their kids. But here's the rub: None of them were infants. A couple of months in, 36 children had been left in state hospitals and police stations. Twenty-two of them were over 13 years old. A 51-year-old grandmother dropped off a 12-year-old boy. One father dropped off his entire family -- nine children, ages one to 17. Others drove in from neighboring states once they heard that they could abandon their children without repercussion.

His paranoid style paved the road for Trumpism. Now he fears what’s been unleashed.

Glenn Beck looks like the dad in a Disney movie. He’s earnest, geeky, pink, and slightly bulbous. His idea of salty language is “bullcrap.”

The atmosphere at Beck’s Mercury Studios, outside Dallas, is similarly soothing, provided you ignore the references to genocide and civilizational collapse. In October, when most commentators considered a Donald Trump presidency a remote possibility, I followed audience members onto the set of The Glenn Beck Program, which airs on Beck’s website, theblaze.com. On the way, we passed through a life-size replica of the Oval Office as it might look if inhabited by a President Beck, complete with a portrait of Ronald Reagan and a large Norman Rockwell print of a Boy Scout.

Since the end of World War II, the most crucial underpinning of freedom in the world has been the vigor of the advanced liberal democracies and the alliances that bound them together. Through the Cold War, the key multilateral anchors were NATO, the expanding European Union, and the U.S.-Japan security alliance. With the end of the Cold War and the expansion of NATO and the EU to virtually all of Central and Eastern Europe, liberal democracy seemed ascendant and secure as never before in history.

Under the shrewd and relentless assault of a resurgent Russian authoritarian state, all of this has come under strain with a speed and scope that few in the West have fully comprehended, and that puts the future of liberal democracy in the world squarely where Vladimir Putin wants it: in doubt and on the defensive.

The same part of the brain that allows us to step into the shoes of others also helps us restrain ourselves.

You’ve likely seen the video before: a stream of kids, confronted with a single, alluring marshmallow. If they can resist eating it for 15 minutes, they’ll get two. Some do. Others cave almost immediately.

This “Marshmallow Test,” first conducted in the 1960s, perfectly illustrates the ongoing war between impulsivity and self-control. The kids have to tamp down their immediate desires and focus on long-term goals—an ability that correlates with their later health, wealth, and academic success, and that is supposedly controlled by the front part of the brain. But a new study by Alexander Soutschek at the University of Zurich suggests that self-control is also influenced by another brain region—and one that casts this ability in a different light.

Modern slot machines develop an unbreakable hold on many players—some of whom wind up losing their jobs, their families, and even, as in the case of Scott Stevens, their lives.

On the morning of Monday, August 13, 2012, Scott Stevens loaded a brown hunting bag into his Jeep Grand Cherokee, then went to the master bedroom, where he hugged Stacy, his wife of 23 years. “I love you,” he told her.

Stacy thought that her husband was off to a job interview followed by an appointment with his therapist. Instead, he drove the 22 miles from their home in Steubenville, Ohio, to the Mountaineer Casino, just outside New Cumberland, West Virginia. He used the casino ATM to check his bank-account balance: $13,400. He walked across the casino floor to his favorite slot machine in the high-limit area: Triple Stars, a three-reel game that cost $10 a spin. Maybe this time it would pay out enough to save him.

“Well, you’re just special. You’re American,” remarked my colleague, smirking from across the coffee table. My other Finnish coworkers, from the school in Helsinki where I teach, nodded in agreement. They had just finished critiquing one of my habits, and they could see that I was on the defensive.

I threw my hands up and snapped, “You’re accusing me of being too friendly? Is that really such a bad thing?”

“Well, when I greet a colleague, I keep track,” she retorted, “so I don’t greet them again during the day!” Another chimed in, “That’s the same for me, too!”

Unbelievable, I thought. According to them, I’m too generous with my hellos.

When I told them I would do my best to greet them just once every day, they told me not to change my ways. They said they understood me. But the thing is, now that I’ve viewed myself from their perspective, I’m not sure I want to remain the same. Change isn’t a bad thing. And since moving to Finland two years ago, I’ve kicked a few bad American habits.

A report will be shared with lawmakers before Trump’s inauguration, a top advisor said Friday.

Updated at 2:20 p.m.

President Obama asked intelligence officials this week to perform a “full review” of election-related hacking, and the White House plans to share a report of the findings with lawmakers before he leaves office on January 20, 2017.

Deputy White House Press Secretary Eric Schultz said Friday that the investigation will reach all the way back to 2008, and will examine patterns of “malicious cyber-activity timed to election cycles.” He emphasized that the White House is not questioning the results of the November election.

Asked whether a sweeping investigation could be completed in the time left in Obama’s final term—just six weeks—Schultz replied that intelligence agencies will work quickly, because preparing the report is “a major priority for the president of the United States.”

A professor of cognitive science argues that the world is nothing like the one we experience through our senses.

As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

We can all agree that Millennials are the worst. But what is a Millennial? A fight between The New York Times and Slate inspired us to try and figure that out.

After the Times ran a column giving employers tips on how to deal with Millennials (for example, they need regular naps) (I didn't read the article; that's from my experience), Slate's Amanda Hess pointed out that the examples the Times used to demonstrate their points weren't actually Millennials. Some of the people quoted in the article were as old as 37, which was considered elderly only 5,000 short years ago.

The age of employees of The Wire, the humble website you are currently reading, varies widely, meaning that we too have in the past wondered where the boundaries for the various generations were drawn. Is a 37-year-old who gets text-message condolences from her friends a Millennial by virtue of her behavior? Or is she some other generation, because she was born super long ago? (Sorry, 37-year-old Rebecca Soffer who is a friend of a friend of mine and who I met once! You're not actually that old!) Since The Wire is committed to Broadening Human Understanding™, I decided to find out where generational boundaries are drawn.