How Foreign Powers Take Advantage of Our Social Media

With over 240 million social media users in the U.S., social media advertising has become one of the main ways to reach voters in the digital space.

Political social media strategists have encouraged politicians to start “digital grassroots campaigns” to gain support. These efforts range from collaborating with social media influencers to reach young voters to hiring a social media marketing agency to craft and launch targeted ads.

However, politicians are not the only ones using social media platforms to influence voters.

Foreign countries are crafting their own social media strategies to sway public opinion in their favor. For instance, critics of the Chinese government have pointed to its use of Facebook, Twitter and YouTube to control the narrative surrounding protests in the country.

Following the 2016 U.S. presidential election, numerous reports surfaced regarding Russia’s use of social media marketing to influence voters. Media outlets analyzed Russia’s intricate strategy to spread misinformation on the most popular social media platforms. According to experts, the objective was to exploit political tensions in order to stoke anger and disagreement.

With the upcoming 2020 presidential election, the conversation has come back around. Will foreign powers influence this election? And what measures are being taken to protect American voters?

Common Political Tactics for Social Media Marketing

Foreign powers use numerous strategies to spread misinformation on social media platforms, and news literacy (the ability to tell the difference between reliable and untrustworthy information) plays a major role in how far that misinformation spreads.

Now, with the advancement of technology, it has become even harder to tell what is true and what is fake news. Through the use of artificial intelligence (AI) and memes, Russia attempted to take advantage of these weaknesses and affect the outcome of the 2016 presidential election.

Read on to learn about the tactics that have reached more than 125 million users through social media engagement.

Memes

Experts performing social media monitoring found that politically charged memes were rampant during the months leading up to the 2016 presidential election.

Memes, which are images accompanied by a caption, are used as a quick, punchy way to present an idea online. They are generally humorous and light-hearted, tapping into popular culture as well as daily life. Many become instantly popular, often going viral within a day of being created.

That viral quality is what makes memes a great social media marketing tool to exploit. Creating them does not require paid promotion, and they grow by virtual word of mouth.

While political memes employ the same visual components, they are typically used to criticize or ridicule politicians, law enforcement, government entities and other institutions. Russia made wide use of political memes during the 2016 election.

Back in 2016, Democratic candidate Hillary Clinton was the primary target of Russia’s meme-related social media strategy. Political memes featuring Clinton with disparaging statements spread like wildfire and contributed to a negative perception of the candidate.

Fake Accounts Across All Social Media Platforms

Investigations into Russian interference revealed that Russia carried out a three-year plan to disseminate and promote divisive statements. Russian operatives built a network of fake social media accounts to publish controversial posts that would stoke political tensions.

After reviewing social media analytics, The New York Times reported that before, during and after the 2016 election, thousands of fake accounts, often bots, on Twitter and Facebook promoted anti-Clinton sentiment.

For the average social media user, these profiles looked like real American voters who were simply vocal about politics. Upon closer examination, the FBI, cybersecurity firms and the social media companies themselves found that the accounts led back to the Russian government.

Targeted Political Ads

Back in 2017, Facebook announced that it had discovered around 3,000 politically charged ads linked back to Russia. These ads, purchased for more than $100,000, ran for three years: before, during and after the 2016 election.

While most did not feature candidates, they targeted divisive issues such as race, immigration and gun control in an attempt to exploit existing tensions and keep Americans divided.

Facebook, one of the most popular social media platforms, came under fire for its inaction regarding this and other foreign interference.

Initially, Mark Zuckerberg dismissed claims of interference on his site. Later, Facebook said it was unaware of the interference until after the election. However, a lawmaker from the United Kingdom uncovered a 2014 internal email between Facebook employees suggesting that the company was aware of Russian activity.

In October 2017, Facebook executives appeared before a Senate Judiciary subcommittee on Capitol Hill to discuss their role in the interference and explain the extent of their knowledge. U.S. lawmakers criticized them for waiting close to a year to disclose details surrounding inflammatory posts and ads on the platform.

In response, Facebook vowed to hire more employees to improve its content monitoring systems and prevent future election meddling.

Preparing for the 2020 Presidential Election

Russia’s interference in the last presidential election was a calculated plot that attacked several areas of the political system in the U.S. Intelligence officials warn that Russia will continue its attempts to tamper with U.S. elections through social media.

According to FBI Director Christopher A. Wray, the bureau has established permanent task forces to monitor social media platforms for suspicious activity. Social media companies are also following suit, taking measures to prevent another foreign infiltration.

Facebook recently announced a public contest for technology that can detect deepfake media. The social media network will award $10 million in grants and prizes to qualifying participants through the end of 2020.

The term “deepfake” refers to the use of AI and facial-recognition techniques to superimpose one person’s likeness onto another and alter video footage. Originally, this practice was mostly used in academic research to test AI capabilities. It then became a tool adopted by amateurs in online communities for obscene purposes.

More recently, deepfake videos have been used on social media to misrepresent current events and fabricate political footage. Experts expect them to be a major issue during the 2020 election.

A study by the Stern Center for Business and Human Rights at New York University reported that Instagram will likely be the main “vehicle of choice” for the spread of misinformation, just as Facebook was in 2016.

Instagram has since implemented several safeguards against this type of content. First, the platform now allows users to flag posts that spread misinformation. Second, the app has begun blocking hashtags that promote false claims, such as #vaccinesCauseAIDS.
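To make the hashtag-blocking idea concrete, here is a minimal sketch in Python of how a blocklist-based filter could work. It is purely illustrative, assumes a simple caption-scanning approach, and uses hypothetical names; it is not Instagram’s actual system.

# Hypothetical illustration: a toy filter that flags posts whose captions
# contain a hashtag from a misinformation blocklist. Real platform systems
# are far more sophisticated than this sketch.

BLOCKED_HASHTAGS = {"#vaccinescauseaids"}  # example entry mentioned above

def extract_hashtags(caption: str) -> set:
    """Return every hashtag in a caption, lowercased for comparison."""
    return {word.lower() for word in caption.split() if word.startswith("#")}

def should_block(caption: str) -> bool:
    """Flag the post if any of its hashtags appears on the blocklist."""
    return bool(extract_hashtags(caption) & BLOCKED_HASHTAGS)

print(should_block("Natural living tips #vaccinesCauseAIDS"))  # True
print(should_block("Sunset at the beach #nofilter"))           # False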

As for users, the best way to protect against these dangerous social media marketing tactics is to conduct research, verify sources and be selective about what information they share on social media.