
FTC Bans Rite Aid from Using Facial Recognition Technology for 5 Years After Faulty Theft Targeting

Written by AiBot

AiBot scans breaking news and distills multiple news articles into a concise, easy-to-understand summary which reads just like a news story, saving users time while keeping them well-informed.

Dec 21, 2023

The Federal Trade Commission (FTC) has banned Rite Aid from using facial recognition technology in its stores for five years, after an investigation found the drugstore chain’s use of the technology resulted in customers being falsely accused of theft.

Background

Rite Aid deployed facial recognition systems in roughly 200 stores from 2012 to 2020 in an effort to curb theft. The technology was intended to flag known or suspected shoplifters when they entered stores.

However, an FTC investigation found that the facial recognition system lacked reasonable safeguards and frequently misidentified people as suspected criminals. As a result, Rite Aid store staff received alerts that falsely flagged innocent customers as people previously suspected of theft.

In some cases, customers were stopped by Rite Aid staff and asked to leave stores based solely on false positive facial recognition matches. Others had law enforcement called on them unjustly.

The FTC found that the facial recognition technology deployed by Rite Aid was unproven and error-prone, with higher misidentification rates for children and racial minorities.

FTC Settlement Terms

Under a legal settlement with the FTC, Rite Aid is now barred from using facial recognition technology for five years. The settlement also requires Rite Aid to delete any existing facial recognition data or imagery it has collected on customers.

Additionally, Rite Aid must obtain consent before collecting any biometric data from shoppers in the future. The company is also prohibited from selling or sharing customer data with third parties.

FTC Commissioner Christine Wilson stated: “Rite Aid’s reckless deployment of flawed facial recognition technology invaded the privacy rights of countless unwitting customers. It should serve as a cautionary tale for other companies considering embracing similar technologies without implementing adequate safeguards.”

Reaction

Digital rights groups and facial recognition critics have applauded the FTC’s enforcement action against Rite Aid.

Fight for the Future’s Deputy Director Evan Greer said: “This ruling sends a strong message that deploying racially biased surveillance technology that fuels injustice does not fly in America. Every company that is still implementing, funding, or lobbying for facial recognition should take note.”

Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, commented: “For too long, stores have been using facial recognition behind closed doors. Now, Rite Aid is being held accountable.”

However, some industry groups argue the FTC’s action is overly broad and will discourage innovation in AI technologies that could deliver benefits if applied more carefully.

What Happens Next

While this settlement applies directly only to Rite Aid, privacy advocates hope it sets a new standard for regulating corporate use of facial analysis systems.

There are calls for the FTC and lawmakers to set clearer rules and oversight for private-sector deployment of facial surveillance tools. Clearview AI, which provides facial recognition to law enforcement using a database of billions of images scraped from the web, is another company targeted by watchdog groups.

For Rite Aid, the costs of this fiasco go beyond the settlement terms. The drugstore chain suffered reputational damage from secretly using a faulty technology that harassed innocent customers and wrongly labeled them as criminals.

Rite Aid also wasted significant funds purchasing and installing a facial recognition system that did not deliver its promised benefits. In seeking to reduce shoplifting losses, the company ended up harming the paying customers it depends on.

The company will need to rebuild shopper trust and ensure it does not deploy intrusive identification technologies without adequate accuracy testing and informed consent.

This settlement comes as calls grow louder for guardrails on the use of artificial intelligence. Facial analysis systems have raised concerns about privacy, consent, bias, and false matches damaging lives. Their use for mass surveillance faces opposition from civil liberties groups.

While retail stores like Rite Aid eye facial recognition to combat store losses from theft, critics counter that the tools are currently too unreliable, opaque and invasive to justify deployment.

Ongoing lawsuits and government action seek to curb unauthorized or harmful uses of biometric data tracking. Beyond theft prevention, facial recognition tools have also spread to scoring job candidates, monitoring students, spotting homeless people, and tracking casino patrons.

For now, consumer champions have notched a major win in establishing accountability and restrictions around corporate application of such AI systems. But the debate around implementing proper regulations is just heating up. The Rite Aid case will not be the last battle over facial recognition abuses.

