How Can Social Media Bots Influence Public Opinion?
The “court of public opinion” isn’t just a popular colloquialism for how people perceive a given topic—it’s an existential threat to celebrities, businesses, and any individual who finds themselves thrust into the limelight for any reason. Part of the problem is that this “court” can judge an individual or organization guilty of heinous crimes based purely on hearsay and accusations despite any evidence to the contrary.
In some cases, all it takes to swing the court of public opinion is an especially persistent rumor or accusation. Take, for example, the infamous Duke lacrosse case, in which several students were expelled, the team’s coach resigned, and the lacrosse season was canceled. However, as reported later: “it turned out the claims were false and the prosecutor had gone rogue in his tactics… there was no evidence.”
For the expelled students and the coach who resigned, it was too late. The court of public opinion had already convened, condemned them, and sentenced them to public shunning—prompting Duke University’s leadership to engage in hasty damage control that had long-term consequences for the people who had been wrongly accused.
Social media bots are especially good at generating and sustaining outrage, stirring up negative public opinion that harms the innocent. Let’s go over what social media bots are, how they can influence the court of public opinion, and how to counter bot-based attacks on your own reputation.
What Are Social Media Bots?
Social media bots (a.k.a. social bots) are automated programs designed to carry out specific tasks on social media platforms, such as managing social media profiles, sharing canned posts across different platforms and profiles, and liking or viewing specific pieces of content.
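To make the mechanics concrete, here is a minimal sketch of the kind of automation a “canned post” bot relies on. It is illustrative only: publish_post, the account handles, and the messages are hypothetical placeholders, not a real platform API or any particular bot’s code.

```python
import time
import itertools

# Hypothetical stand-in for a platform posting call; a real bot would use a
# platform API or browser automation here. Not a real SDK function.
def publish_post(account: str, message: str) -> None:
    print(f"[{account}] posted: {message}")

CANNED_MESSAGES = [
    "You won't believe what this brand just did! #exposed",
    "Everyone is talking about this story right now...",
]

BOT_ACCOUNTS = ["acct_001", "acct_002", "acct_003"]  # hypothetical profile handles

def run_canned_post_bot(rounds: int = 2, delay_seconds: float = 1.0) -> None:
    """Rotate the same canned messages across a pool of accounts on a schedule."""
    messages = itertools.cycle(CANNED_MESSAGES)
    for _ in range(rounds):
        message = next(messages)
        for account in BOT_ACCOUNTS:
            publish_post(account, message)
        time.sleep(delay_seconds)

if __name__ == "__main__":
    run_canned_post_bot()
```

A handful of lines like these, pointed at hundreds of accounts, is all it takes to flood a platform with identical content.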
Fraudsters use social media bots to spread misinformation, incite flame wars on social media, and artificially inflate their “influencer” profiles in order to commit affiliate fraud.
As with many kinds of bots, social media bots are often arranged into large botnets—collections of dozens, hundreds, or thousands of bots operating in a loose network. These bots are frequently installed on devices without the owner’s knowledge using “zombie bot” malware that lets the fraudster run bots on other people’s hardware.
How Social Bots Influence Public Opinion
So, how can social media bots influence public opinion? There are a few different ways in which bots can be used to sway the perceptions of the general public, such as:
- Repeatedly Sharing Misinformation and Fake News Links. One way that fraudsters and cybercriminals use bots to change public opinion is to have them repeatedly share misinformation and fake news links on social media. These bots may find articles or videos on different platforms and leave a fabricated story about some wrong supposedly committed by the bot controller’s intended victim in order to vilify them. Alternatively, they may leave links to false “news” articles under different posts to lend support to the claims being made. This behavior has infamously been seen during election years as special interest groups attempt to influence election results. (A simple sketch of how this kind of coordinated resharing can be flagged appears after this list.)
- Leaving Bad Reviews on Websites. Another way that “social bots” can negatively impact public opinion is by leaving negative reviews on sites like Yelp or under product listings on retail sites like Amazon. Because your company doesn’t control these sites, it’s often hard to get fake reviews removed. Some fraudsters do this to bury a competitor’s products, since many consumers choose goods and services based on how positively they’re reviewed. For example, say one of your competitors was launching a new product in the same niche as your flagship product or service. They could review-bomb your product on different sites to drive consumers away from it and toward the alternative that they sell.
- Creating Artificially Inflated Support for Extreme Views. Cybercriminals might use bot-controlled social media accounts to make it look like a fringe opinion has vastly more widespread support than it actually does. For example, a fraudster might create a social media group for a controversial opinion and pack it with thousands of bot accounts that repeatedly share fake news posts and other content to make the opinion appear widely held. This can influence fringe elements and attract people to the view who might otherwise have ignored it; in extreme cases, it can end up radicalizing real people.
- Skewing Poll Data That Is Reported by the News. Another way that fraudsters might try to artificially influence public perception is by skewing poll results from news organizations. To do this, they use specialized poll bots (a type of form bot) to repeatedly submit a poll with specific answers. This can change the public perception of the existing political climate, influence voting behavior, or even make a brand look more or less popular than it actually is; it all depends on the specific poll being bombarded with bot activity.
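The coordinated resharing described in the first bullet does leave a statistical fingerprint: many ostensibly unrelated accounts pushing the same text within minutes of each other. The sketch below is a minimal, assumption-heavy illustration of one way to flag that fingerprint; the post fields, thresholds, and account names are hypothetical and not drawn from any particular platform or detection product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def normalize(text: str) -> str:
    # Collapse whitespace and casing so trivially re-worded copies still match.
    return " ".join(text.lower().split())

def flag_coordinated_posts(posts, min_accounts=20, window=timedelta(hours=1)):
    """posts: iterable of {"account": str, "text": str, "time": datetime}.

    Flags any normalized message shared by at least `min_accounts` distinct
    accounts within `window` -- a common signature of bot amplification.
    """
    by_text = defaultdict(list)
    for post in posts:
        by_text[normalize(post["text"])].append(post)

    suspicious = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["time"])
        accounts = {p["account"] for p in group}
        span = group[-1]["time"] - group[0]["time"]
        if len(accounts) >= min_accounts and span <= window:
            suspicious.append({"text": text, "accounts": len(accounts), "span": span})
    return suspicious

# Example: 25 "different" accounts post the same claim within minutes.
now = datetime(2024, 1, 1, 12, 0)
posts = [{"account": f"user_{i}", "text": "This brand LIED to you!",
          "time": now + timedelta(minutes=i)} for i in range(25)]
print(flag_coordinated_posts(posts))
```

Real bot operators vary wording and timing to dodge exactly this kind of check, which is why simple duplicate matching is only a starting point.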
Another part of the problem is that, although many people are now aware of social media bots thanks in part to news coverage, very few can reliably identify them.
According to the Pew Research Center, only 7% of those who have heard of bots are “very confident” that they can identify them, and self-reported confidence often outstrips people’s actual performance on basic knowledge-based or practical tests. Pew also reported that “eight-in-ten of those who have heard of bots (81%) think that at least a fair amount of the news people get from social media comes from these [bot] accounts.”
So, while many acknowledge the existence of bots and the likelihood of fake news being shared by social bots, they aren’t always able to identify fake news when they come across it.
The Depp vs. Heard Trial: An Example of Social Media Bots Influencing Opinions and Outcomes
Now that we’ve covered some of the potential ways that social bots could be used to influence public opinion, what’s a real-world example of how they’ve been used to stir up conflict between groups? One infamous example would be the court case between Johnny Depp and Amber Heard.
While the case was still being argued in court, the court of public opinion was already in session and, apparently, wildly divided between supporters of the two Hollywood celebrities. Although some of the vitriol on each side could be attributed to the devotion of each celebrity’s fans, it wasn’t purely a matter of fan dedication.
Social media posts about the trial on sites like Facebook, Reddit, Tumblr, Twitter, and others were filled with extreme rhetoric vilifying one side of the conflict or the other. Stories were frequently fabricated outright or, if they contained any truth, exaggerated to ridiculous extremes. While many of these posts were originally supplied by bots, real human users picked up on them over time and started sharing them as proof that their favorite celebrity was on the right side of the conflict.
How pervasive were bots in the Depp vs. Heard trial? Some researchers described a veritable “army of bots spreading rhetoric” during the trial. Some bots were favorable to Depp, while others were favorable to Heard.
Why did bots interfere with the public discourse surrounding the trial? One reason might have been to capitalize on the notoriety of the trial itself. By promoting the vitriolic discourse, some third-party apps and organizations could increase interactions on their preferred platforms—generating potential interest and revenue for their apps.
Regardless of the intent behind the interference, the effects were clear: there was an enormous uptick in negative attention on the trial as fans of both celebrities hurled insults, false stories, and rhetoric at one another. While this (ostensibly) didn’t affect the trial itself, the mere existence of such virulent rhetoric could have an indirect effect on the opinions of potential jurors, judges, or other court officials before the trial and after the fact. People don’t live in a vacuum, after all.
What You Can Do about Social Media Bots Affecting Public Opinion
Because of how dangerous social media bots can be to organizations—potentially spurring negative public opinion or even encouraging extreme actions by those who fall for the vitriolic stories bots share—it’s more important than ever to be able to reliably expose and counter social bots and the fake stories they spread.
But, what can you do to uncover bots on social media or other platforms? Unfortunately, manually investigating every negative review or vitriolic social media post on the internet would be massively impractical. It would take dedicated expertise and an incalculable number of labor hours to positively identify every post as being real or fake using purely manual methods.
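To give a sense of what automating even a first pass looks like, here is a deliberately crude triage sketch built on a few commonly cited bot signals. The field names and thresholds are hypothetical assumptions for illustration, not a validated methodology or any vendor’s actual detection logic.

```python
# Rough triage sketch: score an account on a few commonly discussed bot signals.
# All fields and thresholds below are hypothetical placeholders.

def bot_likelihood_score(account: dict) -> int:
    """account: {"posts_per_day": float, "age_days": int, "duplicate_ratio": float}"""
    score = 0
    if account["posts_per_day"] > 100:    # inhumanly high posting rate
        score += 2
    if account["age_days"] < 30:          # very new account
        score += 1
    if account["duplicate_ratio"] > 0.8:  # mostly copy-pasted content
        score += 2
    return score

# Example: a weeks-old account posting hundreds of near-identical messages per day.
print(bot_likelihood_score({"posts_per_day": 400, "age_days": 12, "duplicate_ratio": 0.95}))  # -> 5
```

Heuristics this simple are easy for bot operators to evade and quick to produce false positives, which is one reason purpose-built detection tooling exists.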
Instead, it would be better to use a dedicated anti-fraud solution with proven bot identification capabilities to track down bot activity on social media and proactively protect your social media accounts from fraud. Anura is an ad fraud solution that is certified to identify bot activity in real time—flagging it so you can combat social media bots more efficiently and effectively.
Protect your business’ reputation from the effects of social media bots now by adding Anura to your arsenal!