Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects.

That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.

“Using facial recognition to help the visually impaired or as a tool to identify and combat cyber harassment is notable, because the positive uses of facial recognition technology are pretty limited to fun and maybe authentication,” says Woodrow Hartzog, a law and computer science professor at Northeastern University who studies privacy and data protection. “It's interesting now to see different uses. We collectively need to watch that to see how it plays out.”

“For many of these systems, the inclusion of real-time face recognition is just a software update away,” said Harlan Yu, co-author of a report on body camera policies for Upturn, a technology think tank.

“In this particular case, it's not clear that Snapchat even used face recognition technology in a way that would implicate BIPA,” Yana Welinder, a lawyer and legal fellow with Stanford University, said in an e-mail. “It's possible that they are simply using face detection technology. If so, that would be similar to what many digital cameras do to identify a face in an image to focus the lens on the face.”

Meanwhile, Hartzog echoed this sentiment, noting that facial recognition tech is “problematic for a number of reasons.”

“The first is that facial recognition technologies require a database of images to be checked against,” he wrote in an e-mail to Ars.