A.I. Can Improve Health Care. It Also Can Be Duped.

Last year, the Food and Drug Administration approved a device that can capture an image of your retina and automatically detect signs of diabetic blindness.

This new breed of artificial intelligence technology is rapidly spreading across the medical field, as scientists develop systems that can identify signs of illness and disease in a wide variety of images, from X-rays of the lungs to CAT scans of the brain. These systems promise to help doctors evaluate patients more efficiently, and less expensively, than in the past.

Similar forms of artificial intelligence are likely to move beyond hospitals into the computer systems used by health care regulators, billing companies and insurance providers. Just as A.I. will help doctors check your eyes, lungs and other organs, it will help insurance providers determine reimbursement payments and policy fees.

Ideally, such systems would improve the efficiency of the health care system. But they may carry unintended consequences, a group of researchers at Harvard and M.I.T. warns.

Samuel Finlayson, a researcher at Harvard Medical School and M.I.T. and one of the authors of the paper, warned that because so much money changes hands across the health care industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track health care visits. A.I. could exacerbate the problem.

"The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information," he said.

The new paper adds to a growing sense of concern about the possibility of such attacks, which could be aimed at everything from face recognition services and driverless cars to iris scanners and fingerprint readers.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This "machine learning" happens on such an enormous scale – human behavior is defined by countless disparate pieces of data – that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also shown that they can fool self-driving cars. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.

Late last year, a team at NYU's Tandon School of Engineering created virtual fingerprints capable of fooling fingerprint readers 22 percent of the time. In other words, 22 percent of all phones or PCs that use such readers potentially could be unlocked.

The implications are profound, given the increasing prevalence of biometric security devices and other A.I. systems. India has implemented the world's largest fingerprint-based identity system, to distribute government stipends and services. Banks are introducing face recognition access to A.T.M.s. Companies such as Waymo, which is owned by the same parent company as Google, are testing self-driving cars on public roads.

Now, Mr. Finlayson and his colleagues have raised the same alarm in the medical field: As regulators, insurance providers and billing companies start using A.I. in their software systems, businesses could learn to game the underlying algorithms.

If an insurance company uses A.I. to evaluate medical scans, for instance, a hospital could manipulate scans in an effort to boost payouts. If regulators build A.I. systems to evaluate new technology, device makers could alter images and other data in an effort to trick the system into granting regulatory approval.

In their paper, the researchers demonstrated that, by changing a small number of pixels in an image of a benign skin lesion, a diagnostic A.I. system could be tricked into identifying the lesion as malignant. Simply rotating the image could also have the same effect, they found.
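The basic mechanics of a pixel-level attack can be seen in a toy model. The sketch below is not the researchers' system: the "classifier," its weights and the image are all invented for illustration. It applies a fast-gradient-sign-style perturbation, nudging every pixel slightly in the direction that raises the malignant score.

```python
import math

# Toy illustration, not the paper's system: a logistic "lesion classifier"
# over a 3x3 grayscale image, flattened to 9 pixels. Weights are invented.
WEIGHTS = [1.2, -0.8, 0.5, -1.1, 0.9, -0.6, 1.0, -0.7, 0.4]
BIAS = -0.4

def malignant_probability(image):
    """Logistic model: probability that the image shows a malignant lesion."""
    z = sum(w * x for w, x in zip(WEIGHTS, image)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def perturb(image, epsilon=0.1):
    """Fast-gradient-sign-style attack: shift each pixel by at most epsilon
    in the direction that increases the malignant score. For a logistic
    model, that direction is simply the sign of the pixel's weight."""
    return [x + epsilon * (1.0 if w > 0 else -1.0)
            for x, w in zip(image, WEIGHTS)]

benign_image = [0.1] * 9                # scored benign (probability < 0.5)
attacked_image = perturb(benign_image)  # each pixel changed by only 0.1

for name, img in [("original", benign_image), ("perturbed", attacked_image)]:
    label = "malignant" if malignant_probability(img) > 0.5 else "benign"
    print(name, label)
```

No single pixel moves by more than a tenth of the gray scale, yet the predicted label flips, which is the essence of the manipulation the researchers describe.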

Changing a few words in a description of a patient's condition could also alter a diagnosis: "Alcohol abuse" could produce a different diagnosis than "alcohol dependence," and "lumbago" could produce a different diagnosis than "back pain."

If changes like these can benefit whoever makes them, and A.I. becomes deeply rooted in the health care system, the researchers argue, businesses will gradually adopt the behavior that makes the most money.

The end result could harm patients, Mr. Finlayson said. Changes that doctors make to medical scans or other patient data in an effort to satisfy the A.I. used by insurance companies could end up on a patient's permanent record and affect decisions down the road.

Hamsa Bastani, an assistant professor at the Wharton School of the University of Pennsylvania, who has studied the manipulation of health care systems, believes it is a significant problem. "Some of the behavior is unintentional, but not all of it," she said.

As a specialist in machine learning systems, she questioned whether the introduction of A.I. would make the problem worse. Carrying out an adversarial attack in the real world is difficult, and it is still unclear whether regulators and insurance companies will adopt the kind of machine-learning algorithms that are vulnerable to such attacks.