Crystal Cha · 11/1/22, 6:26 AM · 9 min read

How Brands Can Actively Fight Racism and Discrimination on Social Media

As a marketer, you will not be surprised to hear that the time people spend online has been rising, especially during the pandemic. As people were forced indoors, screen time increased dramatically, by 60 to 80 percent, according to a literature review published by Frontiers Media.

A large portion of that screen time is taken up by social media, one of the most popular online activities today. In 2021, some 4.26 billion people were actively using social media worldwide, a number projected to grow to almost six billion by 2027, according to Statista.

Yet the more social media and online exposure users have, the higher their risk of facing cyberbullying, harassment, and racist and discriminatory behavior. The risk increases for vulnerable groups, such as children, teenagers, and minority groups. 

For brands seeking to establish and maintain brand trust and brand safety online, it’s critical to be aware of the issues and challenges of having an online presence in an always-on environment. While the Internet and social media have made it easier than ever for brands to reach customers, and for customers to connect with each other and form online communities, there are inherent risks that come along with the new opportunities. 

Read on to learn more about the issues and challenges of racism and discrimination on social media, and what you can do as a brand marketer to effectively address these issues.  

What Are the Consequences of Cyberbullying?

According to Pew Research, 59 percent of teens have experienced cyberbullying at some point in their lives. In Canada, a recent study by Leger and the Association for Canadian Studies found that over half of young people under the age of 35 see racist content online.

This rampant online bullying, harassment, and racism has real consequences for the mental and physical health of the individuals affected. And those most severely affected are also the most vulnerable: children, youth, and minority groups.

Multiple studies have shown that bullying victimization, whether in person, online, or both, is associated with a higher risk of sadness, depression, anxiety, and suicidal tendencies among teens. Youth under 25, who have not yet developed the emotional maturity and tools to navigate this kind of harassment, are twice as likely as older cyberbullying victims to attempt suicide and self-harm.

Research has also shown that the mental health toll on victims from racial minority groups is greater, and that they experience more loneliness when discriminated against. Other minority groups, including sexual minorities, are likewise more vulnerable and face a higher risk of online bullying.

Even the popular and famous are not spared the effects of online discrimination and hate. Influencers who don't fit traditional Eurocentric standards of beauty, such as Alicia McCarvell, the body-positive TikTok influencer with 5.5 million followers, face persistent cyberattacks and harassment.

What Can Be Done About Online Racism and Bullying?

As awareness and urgency grow around these issues, platforms have been taking action and enforcing strict bans on hate speech and discrimination on social media. Every mainstream platform today has clear policies banning hate speech. 

Meta (formerly known as Facebook) even publishes a quarterly report in its Transparency Center on how these community standards have been enforced. However, the sheer volume of such content and the ever-evolving nature of language make it nearly impossible for platforms to detect every piece of harmful and discriminatory content.

According to a report published by UNESCO:

“Using automated detection tools based on methods available today, Twitter, Facebook, Instagram, and YouTube have increasingly reported flagged and/or removed content. Between January and March 2021, YouTube removed 85,247 videos that violated its hate speech policy. Its two previous reports show similar figures. For the same quarter, Facebook reported a total of 25.2 million pieces of content actioned, whilst Instagram reported 6.3 million pieces of content. According to Twitter’s last transparency report, the company removed 1,628,281 pieces of content deemed to violate their hate speech policy between July and December 2020.”

What is clear is that brands and individuals also need to play a role in upholding safe online spaces for all of us.

What Can Brands Do To Fight Racism and Bullying Online?

If you are a brand owner or marketer who cares about brand safety, no doubt this question has crossed your mind. With so much hate and discrimination proliferating on social media, and platforms finding it challenging to stem the tide completely, what action can companies and brands take to safeguard their brand reputation and the mental health and well-being of their audiences and customers, especially the most vulnerable?

To address the dilemmas of regulation and moderation of online content, UN Human Rights has proposed five actions for States and companies to consider. At BrandBastion, we champion a more inclusive, safe online environment for everyone, and our solutions align closely with several of these ‘five actions’ suggested by the UN. 

Improve content moderation processes

The first action that UN Human Rights urges is that the focus of regulation should be on “improving content moderation processes, rather than adding content-specific restrictions.” Algorithms and automation alone are often too simplistic, blanket-banning any content that might be harmful without taking context into account. For instance, many social media platforms have very strict filters around all forms of nudity, which leads to perfectly useful and educational content being censored, including medical content and content about childbirth and breastfeeding.

We live in a complex world, and when faced with complex decisions about what is and isn't appropriate, people should work alongside the technology, continually training the algorithms to make better decisions as more data becomes available.

To further underscore the importance of human involvement in decision-making and avoiding bias, a 2021 study on racism on social media found that much of the racism experienced there was covert, often disguised as banter or conveyed through emojis that bots failed to detect as harmful, such as the use of a monkey emoji to racially abuse a Black person.

These bots are often unable to pick up contextual details or apply the rules flexibly in cases where a human moderator would clearly understand the context and intent behind the content.

Our moderation solution, BrandBastion Safety, addresses this with AI plus human moderation that goes beyond social media platform algorithms. We have developed our own proprietary AI, trained by linguistic specialists, to detect and categorize harmful comments, but we go one step further - our technology is supervised and supported by our human content specialists 24/7 to ensure accuracy at scale. 
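To make the general pattern concrete, a hybrid pipeline typically acts automatically only on high-confidence classifications and escalates ambiguous cases, such as coded emojis or banter, to human reviewers. The sketch below is a minimal, hypothetical illustration, not BrandBastion's actual system: the `classify` function stands in for any toxicity model, and the thresholds are illustrative.

```python
# Minimal sketch of a hybrid AI + human moderation pipeline.
# `classify` is a hypothetical stand-in for any toxicity model;
# thresholds and categories are illustrative, not BrandBastion's.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.50  # ambiguous scores go to a human queue

@dataclass
class Verdict:
    action: str      # "remove", "human_review", or "keep"
    category: str    # e.g. "hate_speech", "harassment"
    score: float     # model confidence

def classify(comment: str) -> tuple[str, float]:
    """Placeholder for a real model; returns (category, confidence)."""
    raise NotImplementedError

def moderate(comment: str) -> Verdict:
    category, score = classify(comment)
    if score >= AUTO_REMOVE_THRESHOLD:
        return Verdict("remove", category, score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        # Covert cases (banter, coded emojis) tend to land here,
        # where human reviewers judge context and intent.
        return Verdict("human_review", category, score)
    return Verdict("keep", category, score)
```

The key design choice is the middle band: rather than forcing the model to make every call, uncertain content is handed to people who can weigh context, which is exactly where automated systems fail on covert racism.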

Be transparent about how content is curated and moderated

The second action that UN Human Rights urges is for companies to be transparent about how they curate and moderate content and how they share information. 

As a brand, having clear and transparent community guidelines available on your website is important. For some examples of brands with well-written community guidelines and what to consider when putting together your own, read our blog post:

How To Set Up Community Guidelines On Social Media

Enforcing those community guidelines is equally important. BrandBastion helps you understand exactly how your content is being moderated by giving you and your team full transparency into all moderated content by category. Everything we moderate and flag can be viewed in a live dashboard, as well as during review and reporting sessions with a dedicated account manager. You can turn moderation categories on and off and decide how strict you want moderation to be, based on your brand's needs and the audiences you serve (for instance, family-friendly brands might choose to take a stricter stance on swear words, which BrandBastion allows you to adjust for).
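By way of illustration only, per-brand settings like these could be represented as a simple configuration; the field names below are hypothetical and not BrandBastion's actual schema.

```python
# Hypothetical per-brand moderation settings; the field names are
# illustrative, not BrandBastion's actual configuration schema.
moderation_settings = {
    "brand": "ExampleFamilyBrand",
    "categories": {
        "hate_speech": {"enabled": True, "strictness": "high"},
        "harassment": {"enabled": True, "strictness": "high"},
        "profanity": {"enabled": True, "strictness": "high"},  # stricter for family-friendly brands
        "spam": {"enabled": True, "strictness": "medium"},
    },
}
```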

Other approaches proposed by UN Human Rights include: having clear state laws governing restrictions on harmful content, providing users with avenues to appeal against decisions they consider to be unfair, and granting independent courts the final say over the lawfulness of content, with civil society and experts involved in the design and evaluation of legal regulations.

Safeguarding Your Brand With BrandBastion

Beyond the aspects of BrandBastion’s solutions mentioned above, other benefits of working with us include receiving real-time alerts. This allows you as a marketer to respond to sudden spikes in harmful content, such as discrimination. You can also access detailed reports and analytics, which allow you to better understand how your audience is engaging and the brand risks you’re facing across your social media accounts. 

Additionally, you’ll get support from a dedicated Account Manager who will help you refine your community guidelines, considering your brand's industry and current social climate, and frequently share updated recommendations to keep your brand safe.

Case Study: How BetterHelp Prioritized Their Customers’ Well-Being While Increasing Positive Sentiment by +116%

Social media is a powerful tool for companies in the health sector, like BetterHelp, a platform that provides clients with access to mental health services through a network of over 20,000 therapists. However, online interactions with clients often require special attention and care, due to the sensitive nature of the industry.

To create a safe, non-triggering space for healthy dialogue, BetterHelp wanted to steer all comments related to mental health issues and self-harm to a separate, dedicated channel where they could be properly managed. With the BrandBastion Safety & Care solutions, the brand was able to do this, increase its response rates, and see a +116% increase in positive sentiment. BetterHelp now has peace of mind knowing that its clients are treated with the utmost care and are safe on social media.
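In general terms, this kind of routing can be as simple as topic-flagging incoming comments and forwarding sensitive ones to a dedicated care queue staffed by trained responders. The sketch below is a hypothetical illustration, not BetterHelp's or BrandBastion's implementation; `detect_topics` stands in for a real topic classifier.

```python
# Hypothetical sketch of routing sensitive comments to a dedicated
# care channel; detect_topics is a stand-in for a real topic model.

SENSITIVE_TOPICS = {"mental_health", "self_harm"}

def detect_topics(comment: str) -> set[str]:
    """Placeholder for a real topic classifier."""
    raise NotImplementedError

def route(comment: str) -> str:
    topics = detect_topics(comment)
    if topics & SENSITIVE_TOPICS:
        # Sensitive conversations go to trained responders rather
        # than staying in the public comment thread.
        return "care_queue"
    return "standard_queue"
```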

Read the full case study → 

In conclusion, while social platforms do have an obligation to keep their users safe, brands and companies can and should go beyond algorithmic, automated moderation toward more nuanced, contextual filtering of content.

You should look at your comment section the same way you look at your branding, public relations, and corporate social responsibility efforts - conversations are an extension of your message and will create negative associations if you don’t take action.

A Harris poll found that 87% of consumers feel that it is the brand’s responsibility to ensure their ads appear in brand-safe environments. Customers expect more from brands, as they believe brands to be complicit if they do nothing.

Above all, doing the right thing matters, and it’s vital that your brand removes hate speech and other harmful comments to demonstrate that this behavior is not tolerated on your social media accounts.

Ultimately, what's at stake is both the mental health of your audience and your brand's reputation - by protecting your communities online, you protect your brand as well.

Find out more about how BrandBastion can help you safeguard your brand. Book a free consultation today!

