While mass tort and class action lawsuits often take root following claims of physical harm, the multidistrict litigation brought against the facial recognition company Clearview AI signals an expanding view of what counts as legally actionable harm in an era of rapid technological growth.
Clearview AI, a New York-based software start-up, has found itself at the center of legal and ethical debate over its facial recognition technology, which many argue violates privacy laws and seriously threatens the constitutionally protected liberties of the American public.
Clearview AI: The Software
Founded in 2017, Clearview AI is an American tech company that developed a program designed to aggregate internet images – scraped from social networks, including Facebook, YouTube, Twitter, Instagram, and Venmo, and from news, employment, and educational sites – and to use a facial recognition algorithm to sort the faces from those images into “neighborhoods,” as described in the New York Times’ January 2020 piece covering the company. Users can take a picture of a person, upload it to the Clearview program, and retrieve public photos of that person along with links to where they appeared on the internet. Clearview claims its database has amassed more than three billion images taken from millions of websites, and it markets itself as a law enforcement tool “[empowering] agencies to quickly, accurately, and efficiently identify suspects, persons of interest, and victims of crime,” according to the Clearview website.
Since Clearview AI’s 2017 launch, more than 1,800 law enforcement agencies and 10 federal agencies – including the Secret Service and the F.B.I. – have used Clearview’s software, according to the New York Times’ follow-up coverage from July 2021. While Clearview has found continued support from investors and reported an uptick in law enforcement usage after the software proved helpful in identifying participants in the January 2021 Capitol riot, the company has encountered significant scrutiny in the form of lawsuits, such as those brought in Illinois and California by plaintiffs claiming Clearview violated their respective state privacy laws.
The Claims Against Clearview AI
Clearview has publicly emphasized the legality of its software under the First Amendment, with founder and CEO Hoan Ton-That asserting his right to collect public photos and highlighting First Amendment protections for the creation and dissemination of information (such as the publicly available images in Clearview’s database). However, the company – whose product has been accused of discriminatory practices due to racial bias, has been deemed illegal in the EU and Canada, and is under investigation in Britain and Australia – faces significant legal action domestically on privacy law and constitutional grounds.
The class action suit was filed in Illinois, alleging seven violations of the Illinois Biometric Information Privacy Act (BIPA); violations of Virginia Code Section 8.01-40 and the Virginia Computer Crimes Act; violations of California’s Unfair Competition Law, California Civil Code Section 3344(a), California’s common law right of publicity, and California’s constitutional right to privacy; violation of New York’s Civil Rights Law; unjust enrichment; and a claim for relief under the Declaratory Judgment Act.
As to the claim brought under BIPA, which regulates the use, storage, and sale of biometric data, Clearview asserted that the individuals in the photos accumulated in its database had no reasonable expectation of privacy, given that the images were publicly available online. The company further argued that BIPA violates its First Amendment protections by imposing a content-based restriction on its speech that cannot survive strict scrutiny. The litigation is ongoing: the plaintiffs recently opposed Clearview’s motion to dismiss, in which the company argued that even if BIPA applied to its conduct, the Act would violate the dormant Commerce Clause, which precludes “the application of a state statute that has the effect of regulating conduct in another state.”
Numerous consumer privacy groups, including the Electronic Frontier Foundation, filed amicus briefs in support of the plaintiffs, asserting BIPA’s aim to protect Illinois citizens’ right to privacy and freedom of expression, which Clearview’s facial recognition technology is argued to harm.
Outside the MDL
While numerous complaints have subsequently been filed in federal courts across the country, some states are considering an alternative approach to confronting Clearview’s contentious practices: legislation. New York, Maine, and Massachusetts have passed legislation banning the use of facial recognition technology, and states such as Maryland and cities such as San Francisco, Boston, and Portland, Oregon, are considering following suit.
While the Clearview MDL remains pending in Illinois, the ongoing debate within the litigation highlights the complexity of current data privacy law and calls its efficacy into question. Moreover, legislative changes to data privacy policies in cities and states across the country signal a common policy interest in protecting citizens against harmful violations of their right to privacy by companies like Clearview AI. Finally, while the legality of Clearview’s practices remains in question, both the domestic and global responses to its software demonstrate the intensity with which data privacy issues are debated and decided in today’s increasingly complex world.
Written by Suzanne McGrath
About the Author
Suzanne is a J.D. candidate at Brooklyn Law School, where she will soon begin her 2L year. Prior to law school, she graduated from Vanderbilt University with a B.A. in Political Science and a minor in Spanish. After graduating, she moved to New York, where she worked as a paralegal for the Capital Markets Group at Cadwalader, Wickersham & Taft, assisting attorneys with transactional work in the CMBS space. She is a junior member of the Board of Directors at Adam J. Lewis Academy, an independent co-educational day school dedicated to providing an enriched and nurturing education to underserved families in the greater Bridgeport, CT community. She grew up in Fairfield, Connecticut, and enjoys music, tennis, and spending time with friends and family.
The Mass Tort Institute is a consortium of industry leaders dedicated to providing education, training, and networking opportunities for those advocating on behalf of mass tort victims.