The source code for Clearview AI's software, along with its apps, was exposed by a cybersecurity lapse. The company maintains that only law enforcement agencies have access to its software. How true is that?
What's The Scoop
A security oversight at facial recognition startup Clearview AI left its source code, secret keys, cloud storage credentials, and copies of its apps publicly accessible.
TechCrunch reported that the exposed server was discovered by Mossab Hussein, Chief Security Officer at cybersecurity firm SpiderSilk, who found it was configured to allow anyone to register as a new user and log in. The discovery added yet another headline to Clearview AI's growing list.
Clearview AI first made headlines back in January, when the New York Times reported that its facial recognition software draws on billions of images scraped from websites, social media platforms, and anywhere else the company could get its hands on them.
Users upload a picture of a person of interest, and Clearview AI's software attempts to match it against similar images collected in its database, potentially revealing the person's identity from a single clear picture.
Since the story broke, Clearview AI has defended itself by saying the software is accessible only to law enforcement agencies. There have been reports, however, that Clearview has also marketed its system to private businesses, including Macy's and Best Buy.
According to TechCrunch, the exposed server contained the source code of the company's facial recognition software, along with secret keys and credentials that granted access to cloud storage holding copies of its Windows, Mac, Android, and iOS apps.
Apple also recently blocked the iOS app for violating its rules. Even the company's Slack tokens were publicly accessible, which could have exposed private internal communications. Hussein said he also found 70,000 videos in the company's cloud storage, taken from a camera installed in a residential building.
Hoan Ton-That, founder of Clearview AI, told TechCrunch that the footage was captured with the permission of the building's management, as part of an attempt to prototype a security camera. The building is located in Manhattan.
Clearview A.I.'s Response
Ton-That said the lapse "did not expose any personally identifiable information, search history, or biometric identifiers," and added that the company had "done a full forensic audit of the host to confirm no other unauthorized access occurred."
This suggests that Hussein was the only one to access the misconfigured server. The secret keys it exposed have since been changed and no longer work.
Clearview AI's system has also drawn heated criticism from other tech firms as well as United States authorities since the news became public. Facebook, Twitter, and YouTube have told Clearview to stop scraping their images, and police departments have been told not to use the software.