This article was originally published by BrandBastion CEO Jenny Wolfram in Adweek.
Under the cover of night, the OurMine hackers laid a trap for Hootsuite CEO Ryan Holmes's unsuspecting contacts, posting "Hey, it's OurMine Team, we are just testing your security, please send us a message." IT experts confirmed that they'd entered through a Foursquare hack, crossing into Twitter because Holmes had connected the application in the past.
The fact that the chief of a social media management company could fall victim to these ploys shows how adept hackers have become. He joins the ranks of Twitter co-founder Evan Williams and Facebook co-founder and CEO Mark Zuckerberg himself, who was hacked for the second time last year.
In the digital world, no one is safe. And these hackers are just one offender in a motley crew of rampant social media villains.
As increased eyeballs and brand dollars turn to social media (eMarketer predicted that ad spending on social media will hit $36 billion this year), businesses open to new interactions with their customers fall prey to the darker characters lurking on the web.
There are hackers, pirates, spammers, trolls and more. They’re breaking into accounts, taking over paid ad comments and hijacking chats with a new wave of cybercrime.
Last year, we conducted a range of studies, exploring the biggest dangers to brands on Facebook and Instagram by analyzing nearly 500,000 comments on brand ads. Here’s a rundown of the biggest current threats and what’s being done.
Social bots and fake accounts
Problem: The fake like business is booming, with earnings estimated at more than $200 million per year. These click farms sell likes and interactions to brands looking to bolster their social presence, and this is all powered by false users. In order to appear legitimate, fake accounts will follow and like respected brands, helping to create the illusion of a real person.
Brands complain about fake likes from false accounts because they put the brand's own image at risk, and weeding out the fakes is nearly impossible without the help of smart technology.
More advanced tools are also employing social bot technology, spreading messages to other real users. President Donald Trump's army of Twitter bots was reportedly responsible for one-third of pro-Trump tweets in the lead-up to the election.
Bots operating with an agenda on social media pose a big threat to brands. And when false accounts and bots are employed by spammers and scammers, they act as a powerful tool for spreading harmful content.
Solution: While there is no definitive answer to the issue of Twitter bots, some users have been fighting fire with fire, developing their own troll-hunting Twitter bots.
Pirates ambush brand campaigns
Problem: Drifting through the backwaters of the web, pirates hijack brands' paid advertising, posting links to free streaming channels, illegal download sites and counterfeit items.
Online piracy directly impacts entertainment-industry pre-release promotions and infringes the copyright of licensed digital content. These activities effectively halve video-gaming industry revenues, channeling ad dollars to the game hackers, modders and pirates commandeering brand conversations.
In our investigation of major movie studios' Facebook ad campaigns, we found that 31 percent of total comments beneath the ads lured users with promises of free pirated versions.
More often than not, these links act as fronts (just 3 percent led to real streams or downloads) seeking to obtain users’ credit-card information; planting malware, adware and spyware viruses; or driving users to clickbait sites.
Solution: Developers have started to fight back, banning modders and hackers, opting for different paying models and modifying or blocking content in the game.
Also, emerging technologies using machine learning and natural language processing can help moderators identify and remove this harmful content.
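As a rough illustration of what automated screening can look like at its simplest, the sketch below flags comments that match common piracy-lure phrasing. The patterns and function name are hypothetical examples for this article, not part of any real moderation product; production systems rely on trained language models rather than fixed rules.

```python
import re

# Illustrative patterns only; a real moderation tool would use a trained
# classifier, not a hand-written rule list.
PIRACY_PATTERNS = [
    re.compile(r"free\s+(full\s+)?(movie|stream|download)", re.IGNORECASE),
    re.compile(r"watch\s+.+\s+online\s+free", re.IGNORECASE),
    re.compile(r"https?://\S*(stream|torrent)\S*", re.IGNORECASE),
]

def flag_piracy_comment(comment: str) -> bool:
    """Return True if the comment matches any piracy-lure pattern."""
    return any(p.search(comment) for p in PIRACY_PATTERNS)
```

A moderator's queue could then surface only the flagged comments for review, e.g. `flag_piracy_comment("Free movie download here!")` returns True, while an ordinary fan comment passes through untouched.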
Scammers and hackers plug phishing hoaxes
Problem: Hackers and viruses spread malicious content online not only through hostile takeovers like what happened to Holmes, but also through those fake accounts, posting links directly onto brand pages and in individual post comments.
Cisco has also reported that Facebook scams are the largest threat to organizations, spreading malicious redirectors.
We found that the most prominent threat to brands was spam littered across comments in the form of chain letters, pornography, competitor promotions and more dangerous online scams that generate money from personal data, subscription payments, referral links and malware.
Our investigations of Facebook comments found that in the beauty category, 10 percent of total engagement was spam, including unauthorized selling.
Mashable reported that these types of posts, typically built around keywords like "click here," "free," "wow" and "join," are powering an underground industry. The more likes a page has, the more money a scammer can make through dodgy redirects.
Scammers often pay up to $200 to post their links on popular Facebook pages. With the reach of brand advertisers, they get this exposure for free, costing just the reputation of a brand and the customer experience.
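To make the keyword pattern concrete, here is a minimal scoring sketch built on the exact markers Mashable cited. The weights and threshold are illustrative assumptions of mine, not figures from the report; any real filter would tune them on labeled data.

```python
# Keywords come from the reported spam markers; the weights and the
# threshold below are illustrative assumptions, not a real product rule.
SPAM_KEYWORDS = {"click here": 2, "free": 1, "wow": 1, "join": 1}

def spam_score(comment: str) -> int:
    """Sum the weights of every spam keyword found in the comment."""
    text = comment.lower()
    return sum(w for phrase, w in SPAM_KEYWORDS.items() if phrase in text)

def is_likely_spam(comment: str, threshold: int = 2) -> bool:
    """Flag a comment once its keyword score reaches the threshold."""
    return spam_score(comment) >= threshold
```

Note that naive substring matching like this also fires on words such as "freedom"; it is a sketch of the idea, which is why production tools layer in natural language processing rather than bare keyword counts.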
Demanding fans put pressure on moderators
Problem: If you thought that Instagram’s estimated 24 million spam bots were a problem, then wait until you meet its general public.
Instagram boasts the highest levels of brand engagement—10 times that of Facebook and 84 times higher than Twitter.
These interactions make Instagram an ideal channel for fan communications, initiating new conversations with consumers and gaining valuable feedback. But community moderators that fail to respond to comments risk the wrath of demanding fans and potential public shaming.
In our exploration of beauty brands on Instagram, we found that 4.93 percent of comments were direct inquiries, of which just 14.54 percent received replies.
Questions surrounding product details and availability that go unanswered put brand reputation at risk and hamper potential new sales. It's a big job for social media teams, but one with big returns. A popular account can become a poisoned chalice in the presence of these villains—be they neglected customers or scheming cybercriminals.
Solution: Facebook is adding 3,000 more people to its content reviewing team, bringing to 7,500 the total number of moderators it employs globally to review content being posted by its more than 2 billion monthly users.
As Facebook becomes the de facto portal for digital exploration and brand management, Zuckerberg and team are making strides to curb these dangers. Facebook has been busy creating community flagging tools, forming strategic partnerships and employing advanced AI in its algorithms to fight the proliferation of social media crooks.
Having identified these threats, businesses can be better positioned to fight them and continue to thrive in the burgeoning social media landscape.
Image courtesy of BrianAJackson/iStock.