The internet has allowed us to be more connected than ever. But the anonymity and distance it creates alongside that connectivity have often brought out the worst in human behavior, hidden behind a screen and an anonymous username.
Multiple studies show that more than half of all internet users have experienced online bullying, and for about 25% of users the bullying is constant and persistent. Online bullying is a serious problem, and greater collective action is starting to take place to address it.
Yesterday, Sheryl Sandberg announced on Instagram:
@instagram is a place where people can have positive experiences and express themselves safely. This week the IG team is sharing updates to help limit bullying and spread kindness, including: using AI to find bullying in photos and captions and send them to the teams to review, adding the bullying comment filter - which automatically hides comments meant to harass or upset people - to live videos, and creating a new camera effect to inspire kindness in Stories (see my Story for more). Thanks to everyone who helps keep Instagram a kind and safe space. #nationalbullyingpreventionmonth
This announcement is the latest in a movement to tackle the pervasive issue of cyberbullying. The urgency of this issue has escalated over the past few years, especially as high-profile suicide cases resulting from cyberbullying began to make their way into the public consciousness. In recent years, Instagram has launched a global offline campaign called #KindComments, and Facebook has translated its Bullying Prevention Hub into over 55 languages.
These efforts by social media platforms are commendable. And it is not only the platforms: the advertisers on these platforms are following suit. Many large brands, such as P&G with its "The Talk" campaign and Diageo with Smirnoff's #ChooseLove campaign, have moved from standing on the sidelines to taking a clear stance against discrimination, harassment, bullying, and hate, using their advertising as a platform to stand up for values of respect, equality, and diversity.
Damon Jones, P&G vice-president of communications and advocacy, explains that when P&G launched "The Talk" campaign, most responses were positive, but some were very negative:
"...some were angrily questioning why companies are involving themselves in social and political issues. We have an answer to that: If not us, then who? If not now, then when? We didn’t stop, we increased both advertising spend and PR with the key message that this film has an important purpose: to promote conversation.”
Starting conversations is great, but it's important to keep the conversations on the right track, or they can easily spiral out of control. Most brands have clearly defined Community Guidelines for people who post on their brand Pages or other brand-owned forums. Organic social media and community management teams work to ensure that trolls and bullies do not violate these guidelines, and swift action is usually taken against those who break the rules, with hateful comments being removed and repeat offenders being blocked.
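The enforcement workflow described above (hide a violating comment, block repeat offenders) can be pictured as a simple rule-based sketch. Everything here is illustrative: the banned-term list, the comment format, and the three-strikes threshold are hypothetical placeholders, and real moderation systems combine AI classifiers with human review rather than keyword matching.

```python
# Illustrative sketch of guideline enforcement: hide violating comments,
# block users after repeated violations. All rules here are hypothetical.
from collections import Counter

BLOCK_THRESHOLD = 3  # hypothetical: block after 3 violations
BANNED_TERMS = {"slur1", "slur2"}  # placeholder terms, not a real lexicon


def violates_guidelines(text: str) -> bool:
    """Flag a comment if it contains any banned term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BANNED_TERMS.isdisjoint(words)


class Moderator:
    def __init__(self):
        self.violations = Counter()  # violation count per user
        self.blocked = set()         # users who may no longer comment

    def review(self, user: str, text: str) -> str:
        """Return the action taken: 'allow', 'hide', or 'block'."""
        if user in self.blocked:
            return "block"
        if violates_guidelines(text):
            self.violations[user] += 1
            if self.violations[user] >= BLOCK_THRESHOLD:
                self.blocked.add(user)
                return "block"
            return "hide"
        return "allow"
```

In practice the interesting design choice is the escalation path: hiding a comment keeps it invisible to the public while avoiding confrontation, and blocking is reserved for persistent offenders.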
However, the one area of social media that brands often neglect in their attempt to protect their communities is the discussions that take place in the comments of social media ads. These ads are not listed on a Page/Profile newsfeed, so when comment volumes are high, advertisers don't get notifications about each and every one of them. When ads are run from multiple ad accounts or use dynamic creative or dynamic product catalog formats, the complexity multiplies (read more about why it is difficult to manage comments on ads in our guide here).
In our experience, across virtually every industry, brands are leaving highly negative and harmful comments unattended. These comments remain visible to the large audiences that these high-budget ads reach.
To underscore the severity of this issue, here are some recent statistics from BrandBastion's 2018 data, pulled from our work as a Facebook Marketing Partner helping brands manage social media engagement. Trigger warning: The following paragraphs contain explicit content.
1. In the apparel industry, ads featuring models are a frequent target for bullying comments. 56% of the 63 ads analyzed received sexist or otherwise inappropriate comments. (Based on a sample of over 9,000 comments)
Let's get these two dumb sticks, fill them with drugs and film them dancing like the dumbasses they are. - Real comment example from a live ad
Underneath it all she’s a super hoe ready to be naked any chance she gets. - Real comment example from a live ad
2. In the gaming industry, 2-3% of all comments posted on brands' social media posts consist of extreme profanity and discriminatory comments, which adds up to thousands of comments for a single brand every year. Facebook automatically hides around 35% of extreme profanity, but it hid only around 16% of the discriminatory comments that BrandBastion was able to detect. (Based on a sample of 1 million comments)
users are gay as f*ck. I whoop their *ss but it’s just like huh you have absolutely zero skill - Redacted comment example moderated from a live post
3. Ads from publishers attract some of the highest rates of harmful comments: 1 in 25 comments is harmful. 31% of these harmful comments contain defamatory language and online bullying, while another 20% consist of hate speech. (Based on a sample of 40,000 comments)
F*cking Muslim u dimwit scumbags to those who voted for him. - Redacted comment example moderated from a live post
Immigrant trash.... - Real comment example from a live ad
We believe there is a tremendous opportunity for advertisers to shift the conversation away from negativity on these ads, which reach large audiences and impact public perception of a brand.
BrandBastion's AI + human solution can take care of this for brands running ads on interactive social media platforms, where anyone and everyone can jump in and leave any kind of comment they like.
Facebook and Instagram have some auto-moderation tools in place, but they are not fail-safe. Brands that truly care about protecting their communities from harassment and bullying should be able to monitor and control all conversations on their owned properties around the clock, and step in immediately if things get out of hand. The good news is, they can.
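Round-the-clock monitoring of this kind ultimately reduces to a loop: fetch new comments, classify them, and act on the harmful ones within one polling interval. The sketch below is a minimal illustration under stated assumptions: `fetch`, `is_harmful`, and `hide` are hypothetical stand-ins for a platform's comment API, a harm classifier, and a moderation call, not real library functions.

```python
# Minimal sketch of an always-on comment monitor. The fetch/classify/hide
# callables are hypothetical placeholders for real platform and ML services.
import time
from typing import Callable, Dict, List


def process_batch(comments: List[Dict[str, str]],
                  is_harmful: Callable[[str], bool]) -> List[str]:
    """Return the IDs of comments that should be hidden."""
    return [c["id"] for c in comments if is_harmful(c["text"])]


def monitor(fetch: Callable[[], List[Dict[str, str]]],
            is_harmful: Callable[[str], bool],
            hide: Callable[[str], None],
            poll_seconds: int = 60) -> None:
    # Poll continuously so a harmful comment is actioned within one interval.
    while True:
        for comment_id in process_batch(fetch(), is_harmful):
            hide(comment_id)
        time.sleep(poll_seconds)
```

The polling interval is the key trade-off here: a shorter interval shrinks the window during which a harmful comment stays publicly visible, at the cost of more API calls.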