Meta, the social network giant with nearly 4 billion users, is introducing facial recognition technology in an effort to combat the growing issue of fake celebrity scam ads that have plagued its platforms. In an October 21 statement, the company revealed that early testing with a small group of celebrities has yielded encouraging results. Meta plans to expand the trials to a wider group of 50,000 celebrities and public figures over the coming weeks.
The new system works by comparing images from the advertisements with the official Facebook and Instagram profile pictures of the celebrities in question. If the system identifies a match and determines the ad to be a scam, Meta will block it. This is part of Meta’s larger effort to curb the rise of scams targeting users with impersonations of well-known public figures. Celebrities like Tesla CEO Elon Musk, American TV host Oprah Winfrey, and Australian billionaires Andrew Forrest and Gina Rinehart have previously been used as bait in fraudulent ads.
Meta acknowledges that these so-called “celeb-bait” scams are a serious issue, not only for the individuals impersonated but also for users of its platforms. Scammers often use these ads to trick people into providing personal information or handing over money. Meta emphasised that this type of scam violates its policies and is detrimental to its user base. As part of its efforts to enhance protection, Meta will soon begin sending in-app notifications to targeted celebrities, informing them that they have been enrolled in the new protection system, with the option to opt out.
This development comes at a time when Meta must tread carefully. The company recently reached a $1.4 billion settlement with the state of Texas after being accused of using biometric data from its residents without proper consent. To address concerns about privacy, Meta has stated that it will immediately delete any facial data generated during the process of determining whether an ad is a scam.
In addition to addressing celebrity impersonation scams, Meta is looking to extend the use of its facial recognition technology to help users verify their identities and regain access to compromised accounts. While this technology holds potential for bolstering security, Meta’s previous run-ins with data privacy issues have made some wary of its approach.
Meanwhile, amid a surge in cryptocurrency scam ads on Facebook, Meta recently disputed a claim from Australia’s consumer regulator that nearly 60% of crypto investment ads on the platform in August were fraudulent. Many of these scams reportedly use AI-generated deepfakes, a newer and more sophisticated method for luring victims into bogus cryptocurrency ventures.
Meta’s latest initiative demonstrates its intention to ramp up security measures and take action against increasingly advanced online scams. However, the company’s approach to facial recognition will likely be scrutinised, particularly in light of recent privacy concerns. As the technology continues to evolve, Meta must balance its commitment to protecting users from scams with the need to ensure ethical and transparent use of biometric data.