
Maryland Policy Will Dictate Police Use of Facial Recognition

A law passed this year requires the state police to create a model policy for other Maryland departments, a guideline that some advocates hope will further limit facial recognition’s use as a policing tool.

Facial recognition applied to a crowd of people. (Shutterstock/varuna)
(TNS) — The Maryland State Police plans to release a statewide policy Tuesday that would govern how law enforcement agencies across the state use facial recognition technology.

A state law passed earlier this year that limited law enforcement use of the technology required the state police to create a model policy for other Maryland police departments, a guideline that some advocates hope will further limit facial recognition’s use as a policing tool.

That legislation stipulates that results from facial recognition can’t be “the sole basis” for probable cause, limits its use to solving more serious crimes and requires prosecutors to tell defense attorneys if the technology was used in a criminal case. The law also bars the use of facial recognition for real-time identification, such as from a live video feed.

State police spokesperson Elena Russo said in a statement to The Baltimore Sun in August that the agency is “committed to ensuring the policy will reflect the values and expectation of our communities while protecting the constitutional rights of the citizens we serve and protect.”

A spokesperson confirmed Monday that the policy will become “available” Tuesday.

Jeramie D. Scott, senior counsel and director of the Project on Surveillance Oversight at the Electronic Privacy Information Center, said he’ll be looking for four qualities in the model policy: transparency, expertise, thoroughness and narrowness.

“Facial recognition is a very dangerous surveillance technology, and consequently the best policy is to ban its use by law enforcement,” Scott said. “To the extent that it’s being used by law enforcement, the policies around its use need to be thorough and specific in order to best reduce the potential harm that facial recognition can cause.”

For digital privacy advocates and civil rights groups, those potential harms are largely twofold: the potential for mass surveillance, including of people engaging in First Amendment activities such as protests, and the danger that the technology can be used to falsely identify people as criminal suspects, leading them to be wrongly accused or convicted.

That last fear isn’t unfounded — last year, Baltimore County State’s Attorney Scott Shellenberger said facial recognition technology was used to identify a man who was wrongly arrested in an assault and spent several days in jail. In that case, a potential match obtained using the technology was shown to the man’s probation officer, who misidentified him. His wife eventually convinced the police of his innocence.

Shellenberger participated in a working group that hashed out the state facial recognition legislation, a compromise that he said struck a “good balance” between privacy and what he described as the modern version of flipping through a book of mugshots.

“For decades, we were using mugshots and we didn’t have laws telling you how you had to do it,” Shellenberger said. “Here we are using the modern equivalent and we’ve gone ahead and worked very hard to strike a balance between individuals’ privacy and making sure we’re using technology in a fair way to the person.”

In August, the ACLU of Maryland sent a letter calling on the state police to implement particular safeguards in their model policy.

Specifically, the civil rights group asked that the policy clarify that a “lineup” generated using facial recognition shouldn’t be counted as a form of independent evidence for establishing probable cause or identifying a person. It also wants the policy to forbid using facial recognition to find people on recorded video footage and to prevent agencies from contracting with companies that use images collected illegally or without individuals’ consent.

The ACLU of Maryland’s letter cited wrongful arrests and false matches attributed to facial recognition technology and said the tool poses higher risks for people of color and women, who are more likely to be misidentified.

Nate Freed Wessler, the deputy director of the ACLU’s Speech, Privacy and Technology Project, who co-authored the letter, said Monday that using a potentially incorrect facial recognition match to assemble a police lineup or photographic array “taints” the resulting witness identification.

“This technology ends up spitting out an innocent person who is a lookalike,” Wessler said. “It’s generating an innocent person who still looks a lot like the suspect.”

Clarence Otoh, a senior attorney at the Center on Privacy & Technology at the Georgetown University Law Center, said policies also should prevent police from using the technology to identify minors.

Facial recognition technology is less accurate at identifying children’s faces, which change as they grow, and children, unlike adults, can’t consent to having their data used, Otoh said.

“They’re not at the legal age of consent, so to take someone’s data without their consent to use it in ways they and their parents haven’t allowed is deeply unethical,” he said, pointing to federal laws that protect young people’s data.

The state law dictates that images will be drawn from the Maryland Motor Vehicle Administration and law enforcement mugshot databases, which likely would omit children and younger teenagers because of rules protecting juvenile records.

© 2024 Baltimore Sun. Distributed by Tribune Content Agency, LLC.