Facial Recognition Debate Lessons from Ukraine


By Matthew Feeney and Rachel Chiu


According to Reuters, Ukrainian officials are using the facial recognition search engine Clearview AI to “uncover Russian assailants, combat misinformation and identify the dead.” In the United States, Clearview AI has made headlines for its work with law enforcement, with civil liberties experts raising well-founded concerns about the proliferation of facial recognition technology (FRT) in police departments. These concerns have prompted calls for an outright ban on facial recognition. Yet the Reuters article serves as a reminder that FRT has many applications beyond policing, and that those concerned about FRT should focus on regulations governing deployment rather than seeking to ban the technology.


In the weeks since Russia invaded Ukraine, the American government has responded with economic sanctions on Russia and military assistance to Ukraine. But pressure on Russia has come from private companies as well as governments. Apple has suspended sales in Russia and limited Russian use of its Apple Pay and Apple Maps software. Google has suspended advertising in Russia and halted Google Pay payments there. A host of other U.S.-based companies have also taken steps to limit Russian access to their goods and services. Yet the recent news about Clearview AI shows another way that private companies can involve themselves in the ongoing war: by assisting the Ukrainian government.


Clearview AI scrapes billions of images from social media sites such as Twitter, Facebook, and Instagram to build a search engine for facial images. A user can upload an image, which Clearview AI’s FRT then compares against the billions of images in its database to confirm an identity. In the U.S., civil liberties activists have protested police use of Clearview AI, which has generated thousands of searches.


American police use of Clearview AI prompted Google and Facebook to send cease-and-desist letters to the company, urging it to stop using photos from their platforms. Clearview AI responded that its use of publicly available photos was speech protected by the First Amendment.


Clearview AI made the same First Amendment argument in seeking to dismiss a lawsuit filed by Illinois residents, claiming that the state’s biometric privacy law, the Biometric Information Privacy Act, violated the First Amendment. The presiding judge found that claim unpersuasive.


Amidst civil liberties controversies at home, Clearview AI is now seeking to help the Ukrainian government. News from the Ukraine-Russia war has showcased the use of weaponized disinformation and misinformation. Russia invaded a country where a majority of adults have smartphones, so social media platforms are awash with videos and photos of the conflict. Predictably, fake content associated with the conflict has been spread both by those seeking to confuse social media users and by those who incorrectly believe what they are sharing is accurate.


Civil liberties and surveillance concerns are important to debates over FRT, but they should not be the only considerations. As the Reuters article demonstrates, facial recognition can help countries defending themselves in wartime. But it can also be valuable in peacetime. For example, in 2018 police in New Delhi scanned images of 45,000 children in orphanages and care facilities. Within four days, they were able to identify 2,930 missing children, a feat that would have been exceedingly difficult without this technology.


FRT can be applied in many circumstances: It can help refugees locate their families, strengthen commercial security, and prevent fraud. There are also use cases unrelated to law enforcement and safety. For example, CaliBurger patrons can pay for their meal with a face scan. Similar payment systems have been trialed around the world, in countries such as Denmark, Nigeria, South Korea, and Finland.


Broad bans often overlook these valuable applications, focusing solely on potential misuse by law enforcement.


While the deployment of facial recognition in Ukraine highlights positive potential, it also underscores the jurisdictional challenges associated with this controversial technology. In recent months, Clearview AI has been the subject of international investigations, with some governments claiming that its proprietary facial recognition system runs afoul of national data protection laws. Regulators in Sweden, France, and Australia have ordered the company to delete all data relating to their citizens, while the United Kingdom and Italy have imposed large fines.


As lawmakers and regulators around the world continue to grapple with FRT policy, they should consider its benefits as well as its costs. It is possible to craft policies that allow public officials to use FRT while also protecting civil liberties. The recent use of Clearview AI in Ukraine does not mean that we should ignore the potential for FRT to be used for surveillance. Rather, it should serve as a reminder that FRT policy should focus on uses of the technology rather than the technology itself.


From cato.org
