When you open Face Recognition on Facebook’s Settings page, a link to more information says that switching it on can “help protect you from strangers using a photo of you as their profile picture.” That refers to a feature Facebook launched in March that’s supposed to tip you off if somebody is impersonating you. According to a new report by The Washington Post, though, the technology won’t quite solve Facebook’s problem with fake accounts: the social network admitted to the publication that it mostly looks for impostors only among your friends and friends of friends.
Facebook said that it does compare profile pics against those of millions of other users, but it didn’t reveal a specific number, nor did it disclose how it chooses which accounts to compare against. Even so, “millions” is still a tiny fraction of the website’s 2 billion users. And when it does find fakes, it doesn’t always penalize the right person: the Post says that in some cases, it disables people’s real accounts instead.
In addition to comparing profile pics against only a small pool of users, Facebook reportedly said that it reviews just the new accounts created since the feature launched, because comparing billions of profile photos against each other would take too much computing power. Given that the social network’s problem with fake accounts, which it calls “undesirables,” has been going on for years, the technology as it stands won’t be able to solve the issue completely.
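To see why that scaling claim is plausible, consider a rough back-of-the-envelope calculation. The sketch below is a minimal illustration using hypothetical figures (the comparison pool size and signup rate are assumptions, not numbers Facebook has disclosed): comparing every photo against every other grows quadratically with the user count, while screening only new photos against a fixed pool grows linearly.

```python
# Back-of-the-envelope: why all-pairs photo comparison doesn't scale.
# All figures are illustrative assumptions, not numbers Facebook disclosed.
TOTAL_USERS = 2_000_000_000      # roughly the reported user base
COMPARISON_POOL = 10_000_000     # "millions" of accounts, per the report
NEW_ACCOUNTS_PER_DAY = 500_000   # hypothetical signup rate

# Comparing every photo against every other: n * (n - 1) / 2 pairs.
all_pairs = TOTAL_USERS * (TOTAL_USERS - 1) // 2
print(f"All-pairs comparisons: {all_pairs:.1e}")              # ~2.0e+18

# Screening only new accounts against a fixed pool grows linearly.
daily_checks = NEW_ACCOUNTS_PER_DAY * COMPARISON_POOL
print(f"Daily new-account checks: {daily_checks:.1e}")        # ~5.0e+12

# Even a pool of millions covers only a sliver of all accounts.
print(f"Pool coverage: {COMPARISON_POOL / TOTAL_USERS:.2%}")  # 0.50%
```

On those assumed numbers, the one-time all-pairs job is several hundred thousand times larger than a full day’s worth of new-account checks, which is consistent with Facebook’s stated reason for limiting the feature to accounts created after launch.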
The company believes there were as many as 87 million undesirables as of last quarter, almost five times the 18 million fakes on the website back in 2016. Those fakes were linked to Russia’s efforts to influence the 2016 presidential election: the Russian troll farm Internet Research Agency apparently created fake American personas on social networks like Facebook to post anti-Clinton sentiments.
Even Sen. Christopher A. Coons fell victim to a Facebook impostor who copied his name, photos and info, and whose account had a lot of Russian friends. He raised the issue with Mark Zuckerberg when the Facebook chief appeared before the Senate to answer questions about the Cambridge Analytica fiasco. Asked why Facebook shifts “the burden to users to flag inappropriate content and make sure it’s taken down,” Zuckerberg replied:
“…it’s clear that this is an area… we need to do a lot better on. Over time, we’re going to shift increasingly to a method where more of this content is flagged up front by AI tools that we develop.”