Facial recognition software used by law enforcement agencies across the country has captured information on more than 117 million Americans, according to a report that calls for greater oversight and reviews for possible racial bias.

The report, “The Perpetual Line-up,” found that roughly half of all US adults are included in facial recognition databases and that 16 states allow law enforcement officials to run searches against driver’s license photo databases without warrants — a “highly problematic” finding, according to the report released Tuesday by Georgetown Law’s Center on Privacy & Technology.

“We know very little about these systems,” said the report. “We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems — local, state, or federal — affect racial and ethnic minorities.”

The report, billed as the most comprehensive of its kind to date, is based on more than 100 police records requests.

Overall, it found that a few agencies have put “meaningful protections” in place to prevent misuse. But in many more cases, facial recognition software “is out of control,” according to the study, drawing criticism from privacy advocates who say the technology has a disproportionate impact on minorities.

“We need to stop the widespread use of face recognition technology by police until meaningful safeguards are in place,” Neema Singh Guliani, legislative counsel for the American Civil Liberties Union, said in a statement. “Half of all adults in the country are in government face recognition databases, yet the vast majority of law enforcement agencies using this technology lack clear policies, audits to ensure accuracy, and transparency.”

In a letter to the Department of Justice’s Civil Rights Division, 52 civil and human rights organizations called on DOJ officials to investigate the “disparate impact on communities of color,” citing a 2012 study co-authored by an FBI expert that found several leading facial recognition algorithms were up to 10 percent less accurate with images of African-Americans.

“Despite these findings, there is no regular, independent testing regime for racial bias in face recognition algorithms,” the letter reads. “In fact, the Georgetown report found that two major face recognition vendors did not conduct internal tests for bias either.”

The report also found that no state has laws on the books comprehensively regulating use of facial recognition software by police.

“We are not aware of any agency that requires warrants for searches or limits them to serious crime,” according to the report. “This has consequences.”

For example, the Maricopa County Sheriff’s Office in Arizona — where Sheriff Joe Arpaio is facing federal contempt of court charges — entered all Honduran driver’s licenses and mug shots into its database. The Pinellas County Sheriff’s Office in Florida, meanwhile, runs 8,000 searches per month on the faces of 7 million drivers in the state without requiring officers to have probable cause before executing a search, the report found.

Of 52 agencies found to use facial recognition technology, only one — the Ohio Bureau of Criminal Investigation — has a policy that clearly bans officers from using it to track people on the basis of political, religious or other protected forms of speech.

Among major metropolitan police departments, the NYPD denied records requests “entirely,” according to the report, and the LAPD — which recently announced a new “smart car” equipped with real-time facial recognition cameras — claimed to have “no records responsive” to requests by the report’s authors.

The report also found that nearly all major facial recognition companies offer real-time software, allowing a growing number of law enforcement agencies to monitor people simply walking to work.

“It may seem like science fiction,” the report reads. “It is real.”

Contract documents and agency statements from at least five major police departments — including in Chicago, Dallas and Los Angeles — either claimed to run real-time software using street cameras, purchased that technology or expressed written interest in buying it.

Messages seeking comment from DOJ officials were not immediately returned Wednesday. FBI officials, while not addressing the new report directly, said the agency has made “privacy and civil liberties integral to every decision” since the inception of facial recognition technology.

“The FBI has successfully used face recognition technology to further its mission to protect the American public from harm, building privacy and civil liberties protections into the technology and providing the public with transparency,” the statement to The Post read. “It is crucial that members of the law enforcement community have access to advanced biometric technologies to accurately investigate, identify, apprehend, and prosecute terrorists and criminals.”

Facial recognition algorithms, according to the FBI, are developed in the “computer vision field” and do not actually compare faces. Nor do they consider skin color, sex, age or any other biographic characteristic.

“Rather, they compare the patterns that a face image represents,” the statement continued. “Algorithms, to include those utilized in the [Next Generation Identification] facial recognition system, do not identify matches, and therefore cannot be said to misidentify any particular individual or groups of individuals.”

The FBI requires two layers of “human review” before any candidate can be provided to an investigator.

“A candidate is sent back in approximately 12% of requests, and then only as an investigative lead, not positive identification,” the statement concluded. “The investigative lead requires additional investigation to determine whether the candidate is indeed the person being sought.”