Meta, the social network giant with nearly 4 billion users, is introducing facial recognition technology in an effort to combat the growing issue of fake celebrity scam ads that have plagued its platforms. In an October 21 statement, the company revealed that early testing with a small group of celebrities has yielded encouraging results. Meta plans to expand the trials to a wider group of 50,000 celebrities and public figures over the coming weeks.
The new system works by comparing images from the advertisements with the official Facebook and Instagram profile pictures of the celebrities in question. If the system identifies a match and determines the ad to be a scam, Meta will block it. This is part of Meta’s larger effort to curb the rise of scams targeting users with impersonations of well-known public figures. Celebrities like Tesla CEO Elon Musk, American TV host Oprah Winfrey, and Australian billionaires Andrew Forrest and Gina Rinehart have previously been used as bait in fraudulent ads.
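To make the described workflow concrete, here is a minimal Python sketch of what such a check might look like, assuming face embeddings are extracted from the ad creative and from the public figure's official profile photos. The dataclass, function names, and similarity threshold are illustrative assumptions, not Meta's actual pipeline.

```python
# Hypothetical celeb-bait check: flag an ad for blocking when a face in the
# ad creative matches a public figure's official profile photos AND a separate
# classifier has already judged the ad to be a likely scam.
from dataclasses import dataclass
from typing import Sequence
import numpy as np


@dataclass
class AdUnderReview:
    ad_id: str
    face_embedding: np.ndarray   # embedding extracted from the ad image (assumed)
    looks_like_scam: bool        # verdict from a separate scam classifier (assumed)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def should_block(ad: AdUnderReview,
                 profile_embeddings: Sequence[np.ndarray],
                 match_threshold: float = 0.85) -> bool:
    """Block only when the ad both impersonates the celebrity and looks like a scam."""
    impersonates_celebrity = any(
        cosine_similarity(ad.face_embedding, ref) >= match_threshold
        for ref in profile_embeddings
    )
    return impersonates_celebrity and ad.looks_like_scam
```

Requiring both signals matters: a legitimate ad that genuinely features the celebrity would match the profile photos but should not trip the scam classifier, so it would not be blocked.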
Meta acknowledges that these so-called “celeb-bait” scams are a serious issue, not only for the individuals impersonated but also for users of its platforms. Scammers often use these ads to trick people into providing personal information or handing over money. Meta emphasized that this type of scam violates its policies and is detrimental to its user base. As part of its efforts to enhance protection, Meta will soon begin sending in-app notifications to targeted celebrities, informing them that they have been enrolled in the new protection system. Celebrities will be able to opt out if they wish.
This development comes at a time when Meta must tread carefully. The company recently reached a $1.4 billion settlement with the state of Texas after being accused of using biometric data from the state’s residents without proper consent. To address concerns about privacy, Meta has stated that it will immediately delete any facial data generated during the process of determining whether an ad is a scam.
In addition to addressing celebrity impersonation scams, Meta is looking to extend the use of its facial recognition technology to help users verify their identities and regain access to compromised accounts. While this technology holds potential for bolstering security, Meta’s previous run-ins with data privacy issues have made some wary of its approach.
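As a rough illustration of how such an account-recovery check could be wired together, the sketch below compares a freshly submitted selfie against the account's existing profile photos and discards the derived facial data as soon as the check completes, in line with the deletion pledge described above. This mirrors Meta's description only loosely; the `embed_face` callable and the 0.9 threshold are assumptions for illustration.

```python
# Hypothetical facial-recognition-assisted account recovery: a user locked out
# of a compromised account submits a fresh selfie, which is compared against
# embeddings of their existing profile photos; the derived facial data is
# dropped as soon as the check finishes.
from typing import Callable, Sequence
import numpy as np

Embedder = Callable[[bytes], np.ndarray]  # image bytes -> face embedding (assumed)


def verify_account_owner(selfie_image: bytes,
                         profile_images: Sequence[bytes],
                         embed_face: Embedder,
                         threshold: float = 0.9) -> bool:
    """Return True if the selfie plausibly matches any of the account's profile photos."""
    selfie_vec = embed_face(selfie_image)
    try:
        for photo in profile_images:
            ref_vec = embed_face(photo)
            similarity = float(
                np.dot(selfie_vec, ref_vec)
                / (np.linalg.norm(selfie_vec) * np.linalg.norm(ref_vec))
            )
            if similarity >= threshold:
                return True
        return False
    finally:
        # Discard the derived facial data immediately, matching the stated
        # policy of not retaining it beyond this single verification.
        del selfie_vec
```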
Even as cryptocurrency scam ads surge on Facebook, Meta recently disputed claims from Australia’s consumer regulator that nearly 60% of crypto investment schemes on the platform in August were fraudulent. Many of these scams reportedly rely on AI-generated deepfakes, a newer and more sophisticated method for luring victims into bogus cryptocurrency ventures.
Meta’s latest initiative demonstrates its intention to ramp up security measures and take action against increasingly advanced online scams. However, the company’s approach to facial recognition will likely be scrutinised, particularly in light of recent privacy concerns. As the technology continues to evolve, Meta must balance its commitment to protecting users from scams with the need to ensure ethical and transparent use of biometric data.