Social Media and Divisiveness: An Explanation From a Data Geek
Humans are fundamentally social by nature. We evolved to depend on each other for everything from gathering food to making clothing. This is one of our greatest strengths as a species, but in certain contexts, it becomes a weakness.
Just as most of us lack the time and resources to farm and harvest our food, we also lack the individual capacity to become experts in all of the pressing issues facing the planet. It simply takes too much time to become versed in every subject. As such, we often look to others to analyze information and help us form our thoughts. We socially source our opinions.
We take this more efficient collaborative route not because we are lazy, but because modern life’s structure makes it difficult to adequately analyze the enormous amount of information available on political and social issues. Relying on group consensus and consulting others for information has been a part of human life since our species’ existence began.
But in a world where social media enables anonymous manipulation, socially sourcing information can be complicated and dangerous.
Imagine we posed the following question to an audience of 1,000 randomly selected people:
“Should Lithium-Ion or Nickel Metal Hydride batteries be used to power the next wave of electric cars?”
Chances are that the average person knows very little about the substance of this issue. Most would struggle to give an informed answer. If we surveyed the group immediately after asking the question, most would likely say they’re uncertain which type of battery is preferable.
Now let’s change the experiment to see if we can engineer a particular outcome.
This time, we’ll ask the same group the same question. But instead of immediately surveying the group for their answer, we’ll ask them to first spend an hour in a room with a group of people discussing the issue. To make things interesting, we’ll plant several people in the group with a strong interest in seeing Nickel Metal Hydride batteries win the debate. We’ll ask these “plants” to boldly and confidently expound on the virtues of Nickel-based batteries over Lithium.
With “plants” in the group, especially if their voices are confident and amplified, we’ll likely see an increase in the number of individuals that form an opinion in line with our engineered outcome. And research shows that once people form an opinion on something, it’s challenging to get them to change their minds, especially if others reinforce that opinion.
Manipulation in Social Media
Now let’s apply this concept to social media. Facebook and Twitter are, in many ways, giant versions of the experiment described above. They are platforms where groups of people gather to, among other things, discuss and argue about political and social issues.
Unfortunately, in the same way that we planted people in our groups to sway opinions toward an engineered outcome in the above hypothetical experiment, external forces do the same on social media.
Plant the right number of clicks or likes and you can easily portray a philosophy or idea as having broad support. Plants can sway large groups of people toward a desired outcome by bolstering an idea’s social credibility. A recent documentary called “The Social Dilemma” explored this idea. Create a fake news story. Pay for bots or troll accounts to support it. Suddenly, you control the conversation. And the more you control the conversation, the more people you can sway toward your desired belief.
Controlling the Conversation
This tactic has impacts ranging from relatively benign (such as a business promoting its product over competitors) to severe. Foreign interference in social media conversations surrounding elections and contentious issues has become a persistent crisis.
To highlight two notable examples, the Russian and Chinese governments actively work to weaken democracies via social media, and it's working. They succeed because very few people know, care about, or understand how powerful a weapon social media manipulation can be. These tactics have been employed in Spain, the United Kingdom, Ukraine, the USA, and many other countries around the world.
Foreign entities with ill intent create fake groups, false articles, and misleading posts on divisive issues. Then, they use paid advertising to jump-start the conversation. Paid troll farms then boost and promote the content to create division and steer the conversation toward desired outcomes. This interference poisons the conversation and leads observers to fall in with what appear to be credible, popular ideas with social proof. In reality, they are sometimes falsehoods deliberately engineered to divide and enrage.
How can data help solve this problem?
Though the problem initially seems vast and insurmountable, it’s possible to identify manipulation and interference with a data-informed approach.
Within the activity data that Twitter and Facebook already hold, foreign adversaries and their troll farms can be identified by tracking patterns that deviate from baseline user behavior. These anomalies trace back to specific networks of accounts that post simultaneously on the same topics. Clustered posting with particular hashtags, or coordinated promotion and liking of particular stories, reveals patterns that algorithms can flag as suspicious. By looking deeper into the data, algorithms can even distinguish human trolls from bots.
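To make the idea concrete, here is a minimal sketch of one such pattern check: flagging pairs of accounts that repeatedly post the same hashtag within seconds of each other. The account names, post records, window size, and threshold below are all hypothetical illustrations, not real platform data or any platform's actual algorithm.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical post records: (account, hashtag, timestamp in seconds).
posts = [
    ("troll_a", "#batterygate", 100),
    ("troll_b", "#batterygate", 101),
    ("troll_c", "#batterygate", 103),
    ("troll_a", "#rigged", 500),
    ("troll_b", "#rigged", 502),
    ("troll_c", "#rigged", 505),
    ("organic_1", "#batterygate", 4000),
    ("organic_2", "#weekend", 9000),
]

WINDOW = 60      # posts within the same 60-second window count as "simultaneous"
MIN_BURSTS = 2   # pairs co-posting in 2+ separate bursts look coordinated

def coordinated_pairs(posts, window=WINDOW, min_bursts=MIN_BURSTS):
    # Bucket posts by (hashtag, coarse time window).
    buckets = defaultdict(set)
    for account, hashtag, ts in posts:
        buckets[(hashtag, ts // window)].add(account)
    # Count how many bursts each pair of accounts shares.
    pair_counts = defaultdict(int)
    for accounts in buckets.values():
        for pair in combinations(sorted(accounts), 2):
            pair_counts[pair] += 1
    # Flag pairs that co-posted in enough distinct bursts.
    return {pair for pair, n in pair_counts.items() if n >= min_bursts}

flagged = coordinated_pairs(posts)
```

Here the three troll accounts post the same hashtags within seconds of one another twice, so every pair among them is flagged, while the organic accounts, which post at unrelated times, are not. Real detection systems are of course far more sophisticated, but the core idea is the same: coordinated behavior stands out statistically from organic behavior.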
The algorithms required to identify these patterns are relatively simple and are well within Facebook and Twitter’s capacity to implement. Forcing more substantial implementation will require public advocacy and pressure. It is incumbent on us to do our part and advocate for these measures to be taken to preserve the national discourse and prevent manipulation.