
Is Facebook’s Fake News “War Room” Really Blocking Fake Content?


Despite a stock price plunge that erased a record-breaking $119 billion in market value, the tech giant continues to dominate the internet by holding onto the top spot among online social media networks.

Facebook has an estimated 1.47 billion daily and 2.23 billion monthly active users, which means the platform reaches about one-third of the world's population. And remember, Facebook also owns WhatsApp and Instagram, the third and sixth most popular social networks worldwide, meaning that their global reach is unparalleled.

Of course, those numbers aren't quite as black-and-white as they might seem.

Facebook disabled close to 1.3 billion accounts in Q4 of 2017 and Q1 of 2018. The company claimed the accounts were largely auto-generated "using scripts or bots, with the intent of spreading spam or conducting illicit activities such as scams." The crackdown relied heavily upon Facebook's automated fraud detection systems.

Even after this crackdown, Facebook admits that somewhere between three and four percent of all monthly active users on the platform, roughly 66 to 88 million accounts, are still "likely fakes."

Facebook's Response to Fake News: “The War Room”

Facebook has taken a lot of flak for the amount of propaganda, misinformation, and fake news spread through its platform. In response, Facebook pulled out all the stops just in time for both the Brazilian national elections and the 2018 U.S. midterms.

Deep inside Facebook's Menlo Park, California headquarters is a room that's been transformed into a cross between a stock exchange trading floor and mission control at NASA. Dubbed the "War Room," it's the temporary home of a team of 20 to 30 people made up of Facebook's best and brightest leaders along with representatives from WhatsApp and Instagram.

 

Facebook's War Room (Source: Wired)

 

On October 17, 2018, Facebook invited reporters inside the “War Room” for a tour led by Samidh Chakrabarti, the head of Facebook's civic engagement team. Chakrabarti encouraged his guests to snap photos of the clusters of standing desks, the screens broadcasting non-stop cable news feeds, and the prominent U.S. flag displayed high above all the action.

And no, the title "War Room" isn't a tag created by the media. Facebook has numerous posters throughout the room featuring the words in big, bold red letters.

Is It Really About Blocking Fake News or Gaining Trust?

According to Facebook, their “War Room” team includes representatives from their legal, threat intelligence, moderation, and engineering teams. These operations specialists, data scientists, and communications experts worked 24/7 in the lead-up to Brazil's national election (October 28), presumably to hone their skills before the November 6 U.S. midterms.

"We've been doing all this work virtually for two years. But when stuff needs to be done fast, there is no substitute for face-to-face contact,"  said Chakrabarti. “The reps in the War Room "represent and are supported by the more than 20,000 people working on safety and security across Facebook," he explained. This gives the social media giant the ability to react immediately to any threats identified by its systems.

Facebook claims that their “War Room” dashboards provide real-time monitoring of key election issues that could impact the democratic process. The company cited a false, viral message claiming that Brazil's Election Day had been moved ahead in response to nationwide protests. Within an hour the post was flagged, reviewed, and removed. Similar posts were also purged from Facebook within two hours.
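Facebook hasn't published how its War Room tooling actually works, but the workflow it described, flag a suspicious post, route it to a human reviewer, then purge near-duplicates, maps onto a fairly conventional moderation pipeline. The sketch below is purely illustrative: the scoring signals, thresholds, and function names are assumptions, not anything Facebook has disclosed.

```python
# Illustrative sketch of a flag -> review -> purge moderation pipeline.
# None of this reflects Facebook's actual systems; the signals, thresholds,
# and similarity check are hypothetical stand-ins.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Post:
    post_id: int
    text: str


def misinformation_score(post: Post) -> float:
    """Hypothetical score in [0, 1]; a real system would call a trained classifier."""
    suspicious_phrases = ["election day has been moved", "voting is postponed"]
    hits = sum(phrase in post.text.lower() for phrase in suspicious_phrases)
    return min(1.0, 0.5 * hits)


def flag_for_review(post: Post, threshold: float = 0.5) -> bool:
    """Flag posts whose score crosses the (assumed) human-review threshold."""
    return misinformation_score(post) >= threshold


def purge_similar(confirmed_bad: Post, candidates: list[Post], min_similarity: float = 0.8) -> list[Post]:
    """After review confirms a post is false, remove near-duplicate copies of it."""
    removed = []
    for candidate in candidates:
        similarity = SequenceMatcher(None, confirmed_bad.text.lower(), candidate.text.lower()).ratio()
        if similarity >= min_similarity:
            removed.append(candidate)
    return removed


if __name__ == "__main__":
    viral = Post(1, "BREAKING: Election Day has been moved due to nationwide protests!")
    copies = [Post(2, "Breaking: election day has been moved due to nationwide protests"),
              Post(3, "Here is my cat photo")]
    if flag_for_review(viral):                   # automated flag
        removed = purge_similar(viral, copies)   # after a human reviewer confirms
        print([p.post_id for p in removed])      # -> [2]
```

Even in this toy version, the weak link is obvious: the system only reacts after the false post is already live and spreading, which is exactly the problem with an after-the-fact approach.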

What Facebook isn't saying is how much money they've invested in their anti-fraud efforts to date, or even what the "War Room" will cost. They are making a point of demonstrating that they are doing something - even if that 'something' involves some highly questionable actions, including the outright censoring of content by both AI and human censors.

Who Decides What to Block on Facebook?

While it's clear that Facebook's latest fake news efforts will help to keep some propaganda at bay, there are more questions than answers about the effect Facebook's “War Room” and ongoing AI censorship will have on free speech.

The current approach of selective censorship puts the power squarely in the hands of those who work for Facebook. It has many industry watchers asking whether the end justifies the means, or whether all this unbridled censorship simply plays directly into the hands of the very entities Facebook is waging war against.

What Gives Facebook the Right to Censor Content?

Granted, as a private company, Facebook (along with every other social media platform) has no legal obligation to permit unbridled free speech in the way the government does... yet. Discussing Facebook's removal of posts by Infowars' Alex Jones earlier this year, Lata Nott, executive director of the First Amendment Center, stated, "As private companies, Apple, Facebook, and Spotify can decide what content appears on their platforms, so I wouldn't call (the tech sites' actions) a violation of speech."

On the other hand, the reach of the First Amendment is entering the social media realm, as confirmed in the case of Packingham v. North Carolina, which reached the U.S. Supreme Court in 2017. In a rare unanimous decision, the court ruled that North Carolina's law banning convicted sex offenders from using social networking sites like Facebook was a violation of the First Amendment, and in doing so, reignited the debate over free speech, censorship, and social networks.

Facebook Could Get Ahead Of The Issue... If They Really Wanted To

To date, everything Facebook has done, and is currently doing, in the war against fake news, hate speech, election interference, and other posts that violate their policies amounts to a big virtual game of cat-and-mouse. Facebook, by their own admission, is always on the defensive - waiting for the bad guys to strike, and then scrambling to stop the damage after the fact.

The thing is, it doesn't have to be this way.

For the past 13 years, we've been working to develop stable, effective code that accurately identifies and blocks bot-based, malware-based, and human fraud attacks in real time, before these threats ever impact your web assets.

At Anura, we're one step ahead. We've dramatically minimized the need for content censorship and "war rooms" - along with the inevitable human error that is involved. We can stop threats at the source - including the kind of threats Facebook is scrambling to react to - long before the content is actually posted online.
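Our detection logic is proprietary, so the sketch below is only a conceptual illustration of the "stop it at the source" idea: score a visitor in real time and reject their submission before it is ever published, instead of cleaning up afterward. The signals, weights, and threshold shown are simplified, hypothetical stand-ins, not our production logic.

```python
# Conceptual sketch of blocking a suspect visitor before their content or click
# is accepted, rather than moderating it after the fact. The signals and weights
# here are hypothetical examples only.
from dataclasses import dataclass


@dataclass
class Visitor:
    user_agent: str
    requests_last_minute: int
    headless_browser: bool = False


def fraud_score(v: Visitor) -> float:
    """Combine a few hypothetical signals into a 0-1 risk score."""
    score = 0.0
    if v.headless_browser:
        score += 0.6                      # automation fingerprint
    if v.requests_last_minute > 60:
        score += 0.3                      # inhuman request rate
    if "python-requests" in v.user_agent.lower():
        score += 0.3                      # scripted client
    return min(score, 1.0)


def publish(content: str) -> None:
    print(f"published: {content}")


def accept_submission(v: Visitor, content: str, block_at: float = 0.7) -> bool:
    """Gate content at submission time instead of purging it after it spreads."""
    if fraud_score(v) >= block_at:
        return False                      # rejected before it is ever posted
    publish(content)
    return True


if __name__ == "__main__":
    bot = Visitor(user_agent="python-requests/2.31", requests_last_minute=200, headless_browser=True)
    human = Visitor(user_agent="Mozilla/5.0", requests_last_minute=3)
    print(accept_submission(bot, "fake news blast"))    # False, nothing published
    print(accept_submission(human, "vacation photos"))  # True
```

The design choice is the whole point: when the bad traffic is scored and turned away at the door, there's nothing viral to flag, review, and purge an hour later.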

So, while Facebook's War Room makes for a great media story, the real solution is already here.

Hey Zuckerberg, whenever you want to level up your game and get out of the defensive zone, give us a call. We've already played in the big leagues - and we're winning the war.
