Public Safety Commissioner Michael Kopy is set to appear before the Rye City Council next week to answer questions about the police department’s controversial new facial recognition technology from Clearview AI.
City Manager Greg Usry told council members that the appearance would be scheduled for the council meeting on June 12. “Just to give a briefing on what they’re actually using the software for — it’s not spyware,” he said.
Councilwoman Sarah Goddard replied that Kopy’s appearance would be “good.”
“There is a lot of interest around it,” she said.
Usry clarified that the technology was “to assist the police department in their investigations, only a tool, not the way in which [Police] make a case.”
Usry said the commissioner would give background and an overview of what the police department is using the technology for and how it has been used in past county investigations.
When asked by The Record, Kopy declined to detail how Rye police have used the technology, saying only: “I make every effort to solve crimes and keep people safe in the City of Rye that I can.”
Kopy said it was unusual to appear before the City Council to discuss “operational matters,” but he would do so “at the direction of the city manager.” He did so most recently to discuss new procedures and the acquisition of new technology that could be used in the case of an active shooter.
Detective Lt. Mike Anfuso said the facial recognition software could be used on an image “with just a quarter of the face, but it’s more accurate when it’s straight on and there’s a clear view.”
Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy and Technology, said facial recognition technology is “highly pervasive” and “should only be allowed if it’s subject to rigorous safeguards, such as warrants, limiting use to investigating serious crimes, disclosure to affected individuals, and independent testing and accuracy.
“Clearview AI is especially alarming and invasive, because it scrapes billions of photos from social media in violation of those websites’ rules,” he said.
He also said Clearview AI has a “notorious history” of dodging independent testing of its technology and exaggerating its accuracy with “self-made” figures. In particular, Laperruque said the 98.6 percent accuracy figure cited in the company’s marketing materials has never been verified by an independent third party.
“The civil rights and civil liberties risks from facial recognition are far too high to deploy tech from such an untrustworthy company,” he said.
Clearview AI has not returned calls for comment. The company’s CEO, Hoan Ton-That, has written: “Clearview AI continues to achieve the highest level of third-party verification for our data security, cybersecurity, and internal security policies and procedures.”