Jenny Wolfram · 3/1/18, 3:06 PM · 5 min read

How Silicon Valley Is Stepping Up in the Online Terror Takedown

This article was originally published by BrandBastion CEO Jenny Wolfram in Adweek.

In the aftermath of the Charlottesville, Va., white-supremacist rally, hatred and violence spilled from the streets into the online world. Neo-Nazi site The Daily Stormer shamelessly trolled victim Heather Heyer after her death, in turn provoking the digital powers that be.

Google, Facebook, Twitter, DreamHost, GoDaddy and more blocked the site, banishing it to the depths of the dark web.

Newly surfaced, The Daily Stormer now clings to Icelandic “free speech” web host OrangeWebsite as it bounces between international domains and social networks.

Meanwhile, the titans of tech are working together to quash the many forms of digital hatred. Facebook, Twitter, YouTube and Microsoft are building a growing army of community moderators and artificial intelligence tools to expose and eradicate dangerous online materials.

As extremist propaganda meets live-streamed violence and aggressive terrorist recruitment campaigns, global lawmakers and intergovernmental organizations are turning up the heat. They want iron-clad content moderation and real-time responses, and they’re threatening sanctions and legal action. Across the globe, all eyes are on Silicon Valley as it steps up in the new war on aggression.

From hatred to terror

Last June, the tech world was united in a pledge to clean up the internet. Signatories of the European Union code of conduct, Facebook, Twitter, YouTube and Microsoft vowed to remove illegal hate speech published on their properties in under 24 hours. But as hatred proliferates, new formats like livestreaming video create fresh channels for violence—we’ve seen murder, gang rape, revenge porn and suicide.

Facebook has some 7,500 moderators on top of community flagging tools and AI technologies such as automated text, image and audio recognition. This form of intelligence, known as “computer vision,” processes far more images than human reviewers ever could, removing offensive posts and relegating violent material to a growing repository of horror. In collaboration with Microsoft, Twitter and YouTube, all terrorist imagery and recruitment materials are collected in a shared industry hash database.
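To see how a shared hash database works in principle: each platform computes a digital fingerprint of a flagged image and contributes it to the pool, and every new upload is checked against the pooled fingerprints before it spreads. The sketch below is a minimal illustration using exact SHA-256 fingerprints; the real systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that also match resized or lightly edited copies, and every name here is hypothetical.

```python
import hashlib

# Hypothetical shared pool: fingerprints of images already confirmed
# as terrorist propaganda by any participating platform.
shared_hash_db = set()

def fingerprint(image_bytes: bytes) -> str:
    """Exact SHA-256 fingerprint of an image file. (Production systems
    use perceptual hashes, which also catch altered copies.)"""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_and_share(image_bytes: bytes) -> None:
    """One platform adds a confirmed violation to the shared pool."""
    shared_hash_db.add(fingerprint(image_bytes))

def is_known_violation(image_bytes: bytes) -> bool:
    """Another platform checks a new upload before it goes live."""
    return fingerprint(image_bytes) in shared_hash_db
```

The appeal of the design is that once any one company identifies a violation, every other participant can block re-uploads of the same file without re-reviewing it.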

One year down the line, Facebook was cheered for achieving compliance, but a despondent EU Justice Commission warned that Twitter and YouTube are failing. The body seeks legislative action to enforce faster changes, greater transparency and regulation for all online platforms. Engadget’s Jon Fingas explained that while the code of conduct demands that parties take action, new measures “would dictate how.”

Regulating censorship

The EU evaluation of Facebook’s advances against online hate shows that, contrary to the claims of OrangeWebsite, a safe environment free from harassment relies on community controls and moderation. A breakdown of the logic behind these safety checks reveals a labyrinth of conflicting rules and seemingly bizarre decisions.

As reported by The Guardian, Facebook’s “internal training bible” categorizes offensive content with complex abuse standards.

In a post, the text, “Someone shoot Trump,” would be removed, as the president is a “public figure” (with more than 100,000 followers), but the threat, “I hope someone kills you,” directed at an individual who is not defined as “protected,” would bypass these checks.
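As a rough illustration of how such rules interact, the hypothetical sketch below encodes only the two reported examples: a threat is removed when its target falls into a “protected” category, and otherwise passes.

```python
# Hypothetical reconstruction of the reported rule, for illustration only.
def is_protected(target: dict) -> bool:
    """'Protected' targets include heads of state and public figures
    with more than 100,000 followers."""
    return target.get("head_of_state", False) or target.get("followers", 0) > 100_000

def should_remove_threat(text: str, target: dict) -> bool:
    """Remove a threatening post only if its target is protected."""
    is_threat = any(w in text.lower() for w in ("shoot", "kill"))
    return is_threat and is_protected(target)

print(should_remove_threat("Someone shoot Trump", {"head_of_state": True}))  # True: removed
print(should_remove_threat("I hope someone kills you", {"followers": 250}))  # False: bypasses the check
```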

The inconsistencies multiply when you expand this to photos and videos, and fully autonomous AI decisioning could wreak havoc here. (Remember, Twitter users taught Microsoft’s chatbot Tay to be racist in just 12 hours.)

This means the process of classifying and removing hate and aggression is not yet ready for strict regulations and censorship laws. Facebook users generate 1.3 million posts per minute, and there are 2.4 million Google searches traversing trillions of web pages every minute. This sheer mass of content can’t be entrusted to either man or machine.

Playing it safe won’t work, either: If tech firms err on the side of caution and suppress any questionable content, not only are they gagging free speech, but they could also put lives at risk.

Facebook’s rule book states that some potentially offensive content can help to raise awareness of “mental illness, war crimes and other important issues.” A community of more than 2 billion is also an asset to authorities, sharing and exposing crimes and getting help to victims. While governments want a great deal of content removed, some of this information is useful, and the Facebook hive is bursting with big data that’s attracting global attention.

Local nuances and international challenges

International demands from local authorities are gaining momentum in line with the uptick in Facebook Live violence and a recent spate of terrorist attacks on European cities.

In the U.K., one London Bridge attacker was inspired by radical YouTube videos that are popular with ISIS fighters. YouTube is working with Jigsaw to control this type of footage, showing anti-terrorist videos to potential recruits instead.

Google has also partnered with more than 100 non-governmental associations and organizations and is working with police and civil groups. Facebook, meanwhile, is working with authorities in Vietnam, Germany, Austria and many other countries.

Think tank Policy Exchange reported that the U.K. sees the most clicks on jihadist content, and it has also suggested new laws that would criminalize its consumption.

In this way, governments and organizations across the globe are uncovering a Pandora’s box of hatred, applying differing regional context and individual demands to universal moderation rules. They want back-door access to data that would diminish user privacy and full compliance from the biggest names in tech and their tools.

So, while the alt-right marches against Google, and smaller businesses like OrangeWebsite and Patreon wrangle with supposedly more liberal approaches to internet expression, what does it all mean for the future of free speech?

Technology will decide. AI will be central to these developments, recognizing real-time triggers, understanding behaviors and content, and mapping metadata patterns that point to risk. It can uncover dangerous incidents as they happen and suppress malicious content, but there’s danger in automation at scale.

A one-size-fits-all solution won’t work. It requires a deeper understanding to define hate and aggression in myriad scenarios and contexts, evaluating the real impact when content of this nature reaches people online.

The largest Silicon Valley internet companies are the gatekeepers to the digital world, commanding audiences of billions and masses of data. Grouping together, they’re a force to be reckoned with. As The Daily Stormer can attest, there’s power in numbers.

Image courtesy of the-lightwriter/iStock.
