Facebook Tries to Improve Transparency in India
Social Media Giant Attempts to Win Back Confidence, But More Needs to Be Done
In the wake of the news that the U.K. fined Facebook $645,000, the maximum penalty allowed, for violating the nation's rules on processing personal data in the Cambridge Analytica incident, the social media giant is taking steps to be more transparent about its content policies.
In a rare meeting with the Indian news media, Facebook public policy managers Sheen Handoo and Varun Reddy gave a glimpse of how the company decides whether to remove content and what steps the social media giant is taking to curb the spread of fake news (see: Facebook's Zuckerberg Takes First Drubbing in D.C.).
The public policy managers gave an overview of how the Facebook team categorizes content as "hate speech" that should be removed. It was the first time Facebook has opened up to the public in India about its content policy standards.
"At Facebook we are trying to be as transparent as we can," Handoo said. "We have published our first-ever community standards enforcement report. We want to be absolutely clear that there is no space for harmful languages, images and videos on our platform."
Handoo says that Facebook believes that "having these conversations and opening up community standards to the public will form the basis of an increased dialogue from our community and partners across the world."
With the help of artificial intelligence and machine learning, the company disabled 583 million fake accounts and removed 1.9 million pieces of terrorist propaganda during January, February and March, Handoo said. Facebook's tools, however, have not been very successful in removing hate speech, she acknowledged.
"While we removed 2.5 million hate speeches during January-March 2018, only 38 percent of it was detected by AI and machine learning tools," she told reporters. "The reason is that hate speech tends to be context heavy. We need to improve our internal tools to be able to detect the kind of hate speech going on our platform."
It's heartening to see Facebook share these numbers as well as its content policy. Earlier, the content policies gave only a very broad overview of hate speech. If you look at Facebook's community standards, there is now a clear definition, along with examples, of what content is fit for the social media platform and what is not.
The company is clearly making strides toward being more transparent after the backlash following the Cambridge Analytica incident.
There is no denying the fact that Facebook is witnessing a slowdown in its growth numbers. In its 2018 third quarter earnings report released on Oct. 31, the social network confirmed that the number of daily active users in the U.S. and Canada remained flat at 185 million, while the number of European users slipped from 279 million to 278 million.
While Facebook is making efforts to win back public trust, it still has a lot of work to do to curb fake news.
It's well known that fake news has impacted elections in the U.S. and Brazil. With the Indian election coming up next year, Facebook engaged BoomLive, a news fact-checking company, six months ago to keep a check on fake news. But apparently there has been relatively little progress, because fake news sites such as Newspost are still up and running and posting content on Facebook.
"We have a misinformation and harm policy," Reddy said at the press briefing. "We have partnered with trusted partners across countries. So when the partner flags us, we make a call on whether or not to take down or reduce distribution of a particular content."
Engaging with one third-party fact-checking company to crack down on fake news is an inadequate step, and Facebook needs to take approaches tailored to each region.
Four months ago, Katie Harbath, Facebook's global manager for politics and government outreach, personally assured O.P. Rawat, chief election commissioner of India, that Facebook would track false campaigns and fake news mass-circulated through its network by voluntarily running a fact check. Content found to be fake and based on twisted facts would be blocked or prevented from being forwarded.
But cyber lawyer Vicky Shah, who's based in Mumbai, contends that so far, the effort has come up short.
"The engagement with BoomLive in India has so far not been successful," she says. "Of course nobody expects 100 percent curb on fake news, but there should have been at least 60 percent to 70 percent improvement. I don't see any."
Facebook informed ISMG it soon plans to come out with a detailed strategy for curbing fake news tied to the election in India. Hopefully it will take adequate measures.
In the meantime, kudos to Facebook for taking steps toward being more transparent.