Lawmakers in Massachusetts are considering a bill that would change how police can use facial recognition technology, amid concerns about inaccuracy and the potential for racial discrimination.

Right now, law enforcement officers can use facial recognition technology to analyze images such as surveillance video, then search for suspects based on their physical characteristics.

That can bring up a slew of issues, said UMass Amherst computer science professor Erik Learned-Miller.

“There's a lot of talk about accuracy in face recognition. How often does it make errors? How often does it get a correct match?” Learned-Miller said. “You might hear a vendor say something like, ‘Our software is 99% accurate,’ or something like that. And I want to let people know that those numbers are very hard to interpret.”

That 99% figure, he said, could come from testing a database of something highly standardized, like passport photos, in which images are well-lit and subjects are looking directly at the camera.

“However, in practice, the police might be using something to look at a grainy surveillance video in which the subject's face is a bit blurry, the lighting is poor, and maybe they're looking slightly away from the camera,” Learned-Miller said. “Just because a piece of software does well on a certain laboratory test does not mean it's going to do the same in practice, and it almost never does as well.”

Proposed changes now before the Massachusetts Legislature include creating a centralized statewide office, staffed by people trained in the software’s vulnerabilities, to which local police departments could submit facial recognition requests.

Another proposed change: requiring police officers to obtain a warrant before they run an image through facial recognition software.

“We want to avoid fishing expeditions in which you put in an image of somebody and start searching for a match,” he said. “You want to have a warrant that says there's probable cause to believe that this particular individual may be involved in a crime. … Those kinds of basic rights that would prevent this from being overused, just the way we don't allow surveillance to be done on people that are not suspected of a crime.”

Learned-Miller said he hoped more checks on facial recognition would help prevent cases like that of Robert Williams, a Black man in Detroit who was accused of stealing watches from a jewelry store and arrested based only on a false positive match from facial recognition software. Williams was nowhere near the scene of the crime, but police arrested him in front of his two young daughters.

“The police report on that possible match said in big, bold letters, 'Do not use this match. To arrest this person, you have to have corroborating evidence,'” Learned-Miller said. “Unfortunately, the department ignored its own rules and went out and arrested him anyway with no additional corroborating evidence.”

Williams is now suing the police department over the wrongful arrest.

“This has happened about five or six times,” Learned-Miller said. “Every single time that it's known to have happened, it's been to a Black person. So that definitely reinforces the idea that software may work less well for dark-skinned people.”

Learned-Miller said people who grew up with simple, reliable technology tend to trust it more — even as technology becomes more complex and more fallible.

“They grew up using calculators that were always correct when they'd multiply two big numbers together, and they got used to the idea that technology is accurate and precise all the time,” he said. “But face recognition and other artificial intelligence applications are much more uncertain and much more difficult. And frankly, they make errors all the time.”