The controversial face recognition app by Clearview AI reportedly saw a 26% spike in usage after the Capitol Hill attack. The app has made headlines before because of the way it collects its data.

To recap: on January 6th, the world witnessed a spine-chilling incident when an angry mob of Trump supporters stormed the US Capitol to obstruct the certification of the election results declaring Joseph Robinette Biden Jr. the 46th president of the United States. This led to a 15-day declaration of emergency in Washington, DC, and leaders across the globe expressed concern over the matter.

Image: Assault on the U.S. Capitol. Source: Reuters

Following the reported 26% jump in usage, law enforcement agencies have been using the app to track down the mob that stormed the Capitol on certification day.

The incident was captured in multiple pictures and live streams, which has surely helped law enforcement units track down the perpetrators. Live cable news broadcasts showed the faces of rioters entering the building, and many rioters posted selfies and other content that the respective social media platforms later traced. The FBI and other agencies have also asked for the public's help in identifying the offenders.

According to the Times, the Miami Police Department is using Clearview to identify some of the rioters, sending possible matches to the FBI Joint Terrorism Task Force. The Wall Street Journal added that local police forces, including an Alabama police department, were also using Clearview AI's software to locate offenders. The company says "over 2,400 police agencies" use its facial recognition software, and it has reported instances of its software being used in federal investigations as well.

While the software has been impactful and helpful in this instance, that does not clear it of privacy hazards. Federal organisations typically run facial recognition against certified data such as driver's license photos, but Clearview AI scrapes data from platforms like Facebook, YouTube, and Venmo to build its database. This raises privacy concerns in many ways, and the company has drawn scrutiny from civil liberties activists, who argue that the users of those platforms never consented to the collection of their private data and pictures.
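As background on how such systems generally work (Clearview's actual implementation is proprietary, and the names and numbers below are invented for illustration): each scraped face photo is converted into a numerical embedding vector, and a probe image is identified by finding the most similar stored embedding. A minimal sketch of that matching step:

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embedding vectors, in [-1, 1]; higher means
    # the faces are more alike according to the embedding model.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    # Return the identity whose stored embedding is most similar to
    # the probe, or None if nothing clears the similarity threshold.
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy database of embeddings from scraped photos (hypothetical values;
# real embeddings have hundreds of dimensions).
db = {
    "user_a": [0.9, 0.1, 0.0],
    "user_b": [0.1, 0.9, 0.2],
}

probe = [0.88, 0.12, 0.01]  # embedding of a face from riot footage
print(best_match(probe, db))
```

In practice the database holds billions of vectors, so exhaustive comparison is replaced by approximate nearest-neighbour search, but the principle is the same: a match is a similarity score, not a certainty.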

Clearview AI said in May it would stop selling its technology to private companies and instead provide it for use by law enforcement only. Even there, facial recognition matches are not conclusive legal evidence: they are heavily used to generate leads in investigations, but a person cannot be charged on the basis of a match alone.