With the announcement of Facebook’s core focus area for 2018, the question of the quality of social media content, campaigns, and comments is more relevant than ever. One of the easiest ways brands can contribute to Facebook’s wellness promise of 2018 is to take control of the interactions on their social properties. This means ensuring that both users and the brand itself are protected from harmful content.
Almost every type of damaging content can be found in the comments on ads from some of the most popular brands. Spreading like a parasite, these comments can completely block real interactions with customers and leave them wondering why the company is not doing anything. Any advertising manager should ask themselves: is this what I aim to promote?
So what kind of content is truly harmful? The short answer is, it depends on how you want to approach things.
For the longer answer, we invite you to read on. This blog post illustrates the most common types of negative content we have identified, based on the millions of comments we have encountered over the years while managing social engagement for big companies.
Some of this content is inarguably, universally harmful: spam, scams, violent comments and imagery, and notoriously hard-to-define hate speech. Apart from often being offensive to users, this type of content is unmatched in its ability to stifle any meaningful interaction among them.
Spam and scams are sadly still common problems across the web, and the phenomenon does not seem to be dying out anytime soon. While the once-in-a-lifetime opportunity to earn money by helping a foreign prince in need has become rarer, you can still pay for a spell to win back your lover, or chat up a love-deprived bot. We think advertisers should be able to control whether they want to pay to promote this kind of content.
Sometimes social media accounts are used simply for the shock value of violent imagery posted on various pages, whether written or in picture form. As this content is most assuredly not safe for work, we will refrain from going into too much detail. Let’s just say that whether it appears in an argument between users or is directed at the brand, we believe no one should be exposed to wishes and depictions of suicide, violence, mutilation, or sexual harassment online.
Hate speech was one of the buzzwords of 2017, and it seems it will remain prevalent through 2018 as well. While courts and social media giants are struggling to find a definitive answer to what should be done about it on a global scale, the tools to monitor blatant racism, sexism, and other discriminatory content are already there. The relationship between free speech and hate speech is a complex and largely unsettled question, but most would agree with us that a brand’s ad spend should not go toward promoting this type of content.
Scrolling through the wastelands of spam chains makes most users quickly click away, while discriminatory and hateful language alienates potential customers and existing community members alike.
The reality is that all pages receive this type of content; what matters is how it is dealt with, and whether it is dealt with at all.
However, while removing the types of content above goes a long way toward nurturing and encouraging meaningful experiences and engagement online, there are other types of content that, in themselves, may not be offensive but can still harm your brand.
We have written before about the reputational damage that harmful comments can cause to your ads, and we know the content described above covers only some of the issues brands encounter on social media. The other part of the damaging content consists of comments we call contextually harmful.
Imagine that you run a family-friendly brand or market a specific product. Maybe a video game, in an industry struggling with notoriously toxic communities, or a children’s clothing line. You might want to be stricter and enforce a clean-language policy suitable for little eyes. Suddenly, profanity becomes harmful content for your page.
Perhaps you offer a media service online with region-locked content. Now the harmful content you should be screening for includes messages giving other users tips on account misuse and piracy.
Pharma brands should not allow users to discuss folk remedies or other medications with each other, as these conversations can cause serious physical harm to users and serious legal trouble for the brand.
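The examples above share a common pattern: the same comment can be harmless on one page and contextually harmful on another, so moderation rules need to be defined per brand. As a rough illustration, here is a minimal sketch in Python of brand-specific screening. All brand names, categories, and keyword patterns are invented for this example; a real moderation system would rely on far richer rules or machine-learned classifiers rather than a short keyword list.

```python
import re

# Hypothetical per-brand policies: each brand enables only the categories
# that are contextually harmful for its own page.
POLICIES = {
    "family_brand": ["profanity"],
    "media_service": ["profanity", "piracy"],
    "pharma_brand": ["medical_advice"],
}

# Illustrative keyword patterns only -- not a real moderation rule set.
PATTERNS = {
    "profanity": re.compile(r"\b(damn|hell)\b", re.IGNORECASE),
    "piracy": re.compile(r"\b(vpn trick|account sharing)\b", re.IGNORECASE),
    "medical_advice": re.compile(r"\b(folk remedy|skip your meds)\b", re.IGNORECASE),
}

def flag_comment(comment: str, brand: str) -> list:
    """Return the policy categories a comment violates for a given brand."""
    return [
        category
        for category in POLICIES.get(brand, [])
        if PATTERNS[category].search(comment)
    ]
```

The same comment yields different results depending on the brand: a piracy tip is flagged on the media service’s page but passes untouched on the children’s clothing page, which is exactly the "contextual" part of contextually harmful content.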
A retailer may receive an extremely negative customer review on their social media page or under their ad campaign. While legitimate concerns naturally need to be addressed, the cold fact is that users rarely bother to edit their post after their issue is resolved. These stories will be dragged along with your ad campaign, forever. On the other hand, positive stories may go unnoticed in the comments, drowned out by upset customers or spammers.
A beauty brand might wish to tackle the controversial topic of animal testing and combat any false information that could do irreparable damage to its image. At the very least, most brands would like to be informed when the topic surfaces on their social properties.
The list goes on.
What these topics have in common is that none of them is what people would typically consider harmful content. They are topics and discussions that nevertheless can, and do, harm brands and their campaigns.
And even if Facebook, Instagram, and the global community manage to find a way to combat hate speech, the measures they take will not address these issues.
That is why we are here to help. We already monitor, act on, and analyse all of the content types mentioned above. We want you to know what people are saying on your properties, and to give you full control over what kind of content can stay on your page and what happens to the rest.
Want a free analysis of your brand's social properties identifying whether your brand is at risk of the threats mentioned above?