Meta is ramping up its efforts to combat the rise of celebrity deepfake scams on its platforms, deploying facial recognition technology to tackle fake ads impersonating high-profile figures. As AI-powered deepfakes continue to flood social media, Meta’s move has sparked both hope and concern. While the initiative aims to protect celebrities and users from malicious ads, privacy advocates warn that the technology raises critical ethical questions.
Meta’s Fight Against Deepfake Scams
Starting in December 2024, Meta will launch a trial using facial recognition technology to identify fraudulent advertisements featuring the likeness of celebrities. Some 50,000 public figures will be included in the trial, with the system comparing faces in suspect ads against the celebrities’ official Facebook and Instagram profile pictures. If a match is found and the ad is determined to be a scam, Meta will promptly remove it.
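Meta has not published the internals of its detection system, but the general shape of such a check is well understood: extract face embeddings from the ad creative and measure their distance to embeddings of the celebrity’s verified photos. The sketch below is a minimal illustration using the open-source face_recognition library as a stand-in; the function name, threshold, and structure are assumptions for clarity, not Meta’s actual pipeline.

```python
# Illustrative sketch only: compares faces in a suspect ad image against a
# public figure's verified profile photos. Meta's real models, thresholds,
# and infrastructure are not public; the open-source face_recognition
# library stands in for them here.
import face_recognition

def ad_matches_celebrity(ad_image_path: str,
                         profile_photo_paths: list[str],
                         tolerance: float = 0.6) -> bool:
    """Return True if any face in the ad matches the reference photos."""
    # Build reference encodings from the celebrity's official profile photos.
    references = []
    for path in profile_photo_paths:
        photo = face_recognition.load_image_file(path)
        references.extend(face_recognition.face_encodings(photo))

    # Encode every face detected in the ad creative and test each one.
    ad_image = face_recognition.load_image_file(ad_image_path)
    for face in face_recognition.face_encodings(ad_image):
        # compare_faces returns one boolean per reference encoding;
        # tolerance is the maximum embedding distance counted as a match.
        if any(face_recognition.compare_faces(references, face, tolerance)):
            return True
    return False
```

Note that a face match alone would not justify removal: plenty of legitimate ads feature celebrities, which is why Meta pairs the comparison with a separate determination that the ad is a scam.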
Monika Bickert, Meta’s Vice President of Content Policy, announced that early trials with a small group of celebrities had yielded promising results. The technology significantly accelerated Meta’s ability to detect and remove deepfake ads, which have plagued its platforms in recent years. From fabricated endorsements featuring Brad Pitt to crypto schemes fronted by a fake Cristiano Ronaldo, these scams undermine user trust and put the social media giant in hot water with lawmakers.
Privacy and Ethical Concerns
Despite the promising results, critics are sounding the alarm on the privacy risks tied to facial recognition technology. Facial data is far more sensitive than other forms of personal information, like PINs or passwords, as it cannot be easily changed once compromised. Keiichi Nakata, a Professor of Social Informatics at Henley Business School, expressed concern over how Meta will manage the collection and storage of this highly sensitive data.
“Facial recognition technology uses personal data that cannot be altered,” Nakata noted. “The ethical concerns lie in how the data is collected, managed, and stored, and whether it’s used in ways that are acceptable and responsible.”
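One concrete answer to the storage question is to keep biometric data ephemeral: derive the embedding, run the single comparison, and discard it without ever writing it to disk. The snippet below sketches that pattern in broad strokes; it is purely illustrative and says nothing about how Meta actually handles the data.

```python
# Illustrative pattern for ephemeral biometric data: embeddings exist only
# for the duration of one comparison and are never logged or persisted.
# This does not describe Meta's internal data handling, which is not public.
import numpy as np

def one_time_match(ad_embedding: np.ndarray,
                   reference_embedding: np.ndarray,
                   threshold: float = 0.6) -> bool:
    try:
        # Single in-memory comparison; the raw vectors are never stored.
        distance = np.linalg.norm(ad_embedding - reference_embedding)
        return bool(distance <= threshold)
    finally:
        # Best-effort scrubbing: overwrite the buffers before they are freed.
        # (Advisory in Python; a hardened system would enforce this at a
        # lower level with explicit memory control.)
        ad_embedding.fill(0.0)
        reference_embedding.fill(0.0)
```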
Meta is no stranger to these concerns. In 2021, the company scaled back its use of facial recognition technology after facing backlash from privacy advocates, acknowledging the regulatory uncertainty surrounding the technology and saying its use should be limited to a narrow set of cases. The deepfake epidemic has become exactly such a case.
A Growing Market Amid Privacy Fears
Despite these concerns, the global facial recognition market is booming. According to a report from The Insight Partners, the market is expected to grow from $5.01 billion in 2021 to $12.67 billion by 2028, driven by advances in AI and rising demand for security applications.
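Those two endpoints imply annual growth of roughly 14 percent, as a quick back-of-the-envelope calculation shows:

```python
# Implied compound annual growth rate (CAGR) from the cited forecast:
# $5.01B in 2021 to $12.67B in 2028, i.e. seven annual compounding periods.
cagr = (12.67 / 5.01) ** (1 / 7) - 1
print(f"{cagr:.1%}")  # -> 14.2%
```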
However, Meta’s history with privacy issues complicates its current efforts. A Harvard University project demonstrated how easily the technology can be abused: two students paired Meta’s smart glasses with public databases to identify strangers in real time, exposing just how readily facial recognition lends itself to misuse.
As Meta embarks on its mission to combat deepfakes, it faces the daunting challenge of balancing security with user privacy. The company’s reliance on facial recognition may offer a temporary solution to the deepfake crisis, but the broader implications for data security and ethical responsibility are still unclear.
With regulators still crafting rules around the use of facial recognition technology, Meta’s latest move will likely spark further debate about the role of this powerful tool in an increasingly digital world. Whether this trial will protect celebrities and users without crossing ethical boundaries remains to be seen.