The rise of the digital media ecosystem – with internet search engines, over-the-top video services, social media networks, and web-based news outlets all simultaneously vying for our collective attention – has dramatically changed the way the average American consumes information. This new media regime increasingly influences every aspect of our society, from how we educate our kids to which products we choose to buy. Critically, one feature of our modern information diet is a practice known as microtargeting, which, among its many commercial and noncommercial uses, is changing the way American politics works.
What is microtargeting?
Microtargeting is a marketing strategy that uses people’s data — about what they like, who they’re connected to, what their demographics are, what they’ve purchased, and more — to segment them into small groups for content targeting. It’s the reason that if you typically shop at Whole Foods, you may be served an advertisement for organic sunscreen during the summer. And while it can help deliver content that is interesting and helpful to you, it also has a dark side — especially if it delivers information that’s inaccurate or biased and meant to sway your vote.
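To make the mechanics concrete, here is a minimal, hypothetical sketch (in Python) of how an advertiser might slice user profiles into small targeting segments. The attributes, rules, and segment names are illustrative assumptions, not any platform’s actual criteria.

```python
# Hypothetical example: slicing user profiles into narrow ad-targeting segments.
# The attributes, rules, and segment names are illustrative, not real platform criteria.
from collections import defaultdict

users = [
    {"id": 1, "age": 34, "interests": {"organic food", "hiking"}, "purchases": {"sunscreen"}},
    {"id": 2, "age": 61, "interests": {"classic cars"}, "purchases": {"motor oil"}},
    {"id": 3, "age": 29, "interests": {"organic food", "yoga"}, "purchases": {"yoga mat"}},
]

def segment(user):
    """Assign a user to a narrow targeting segment based on profile data."""
    if "organic food" in user["interests"] and user["age"] < 40:
        return "young-organic-shoppers"
    if "classic cars" in user["interests"]:
        return "auto-enthusiasts"
    return "general-audience"

# Group users by segment so each group can be served different ads.
segments = defaultdict(list)
for user in users:
    segments[segment(user)].append(user["id"])

print(dict(segments))  # e.g. {'young-organic-shoppers': [1, 3], 'auto-enthusiasts': [2]}
```

The point of the sketch is simply that once profile data exists, dividing people into ever-narrower groups is trivial; the same logic applies whether the segments are built for sunscreen ads or political messages.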
How people and organizations collect and use data (for advertisements and otherwise) is a largely unregulated arena of market activity in the United States. It is thus squarely on our shoulders – as consumers, citizens, and voters – to understand the media regime’s current nature and take care to protect ourselves from its rough edges. This piece will attempt to illustrate the technologies underpinning it today, and suggest ways individuals can prepare as the U.S. midterm elections fast approach.
A look back: the 2016 U.S. presidential election
A stark illustration of the use of microtargeting lies under our noses: the 2016 U.S. presidential election. Comments made by the Trump campaign’s head digital strategist, Brad Parscale, suggest the potential of political communications pushed over digital media platforms like Facebook and Twitter using microtargeting as a practice. Earlier this year, he tweeted that the campaign enjoyed “100x to 200x” greater efficiency than the Clinton campaign in social media advertising disseminated on Facebook.
I bet we were 100x to 200x her. We had CPMs that were pennies in some cases. This is why @realDonaldTrump was a perfect candidate for FaceBook.
— Brad Parscale (@parscale) February 24, 2018
Did microtargeting have any impact on the final result in November 2016? When swing states are won by small margins, one gets the feeling that anything could have tipped the scales. We can point to microtargeting, online echo chambers and the proliferation of fake news on social networks as building blocks of Trump’s victory.
Traditional media versus digital media today
It is worth juxtaposing digital media with the traditional media landscape, which for decades has been dominated by television, radio and print news. There are two key facts to consider about traditional media:
- Traditional media platforms push content (e.g., political ads on a TV network) to very large, broad audiences.
- Political ads disseminated over traditional media outlets receive tremendous public scrutiny for compliance with federal election regulations, which helps ensure that ads appearing on traditional media usually do not contain falsehoods.
Those facts do not hold true when it comes to microtargeting on digital media. With microtargeting, advertisers can curate the content predicted to be most relevant to specific groups of people, reinforcing the personalized information environments known as “filter bubbles.” And large, broad audiences typically do not see the same ads.
Instead of reaching all the national viewers of Fox or NBC, or all the regional listeners of a radio or television newscast, a digital advertiser’s audience for a given ad can be narrower, more targeted, and ultimately more cost-effective to reach.
Does microtargeting work?
Microtargeting might seem subtle and inconsequential, but if executed effectively, it can generate political shockwaves.
How does it work? The answer lies in the commercial practices at play behind many digital platforms. The leading social networks – Facebook, YouTube, and Twitter, each of which testified before the U.S. Congress because of malicious Russian activity on their services during the 2016 election cycle – share the same business model. It is three-pronged:
- The first component is to engineer a universe of tremendously compelling, borderline-addictive services that hold the user’s attention, whether through the ever-updating News Feed or messaging apps with push notifications. (More on the attention economy here.)
- The second component is to collect as much data on the individual user as possible through those digital services – all to the end of constructing comprehensive behavioral tracking profiles on each and every one of us.
- And the last component is the development of algorithms designed to do two things: predict which content will keep us scrolling, watching, and clicking; and target the ads interspersed throughout that content that we’re likeliest to click on for optimal revenue. (A simplified sketch of this kind of ad ranking follows this list.)
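To illustrate that third component, here is a simplified, hypothetical Python sketch of ranking candidate ads for a user by predicted engagement and expected revenue. The features, weights, bids, and scoring rule are invented for illustration; real platforms use large machine-learned models over far richer behavioral profiles.

```python
# Hypothetical sketch: ranking ads for a user by a predicted engagement score.
# All features, weights, and ads below are invented for illustration only.

user_profile = {
    "interests": {"politics", "organic food"},
    "clicked_topics": {"politics": 12, "sports": 1},  # past clicks per topic
}

candidate_ads = [
    {"id": "ad-1", "topic": "politics", "bid": 0.50},
    {"id": "ad-2", "topic": "sports", "bid": 0.80},
    {"id": "ad-3", "topic": "organic food", "bid": 0.30},
]

def predicted_click_probability(user, ad):
    """Toy stand-in for a learned click-prediction model."""
    score = 0.02  # baseline click probability
    if ad["topic"] in user["interests"]:
        score += 0.05  # declared interest in the ad's topic
    # Past engagement with the topic raises the prediction further.
    score += 0.01 * user["clicked_topics"].get(ad["topic"], 0)
    return min(score, 1.0)

def expected_revenue(user, ad):
    # Platforms roughly optimize predicted clicks times what the advertiser pays.
    return predicted_click_probability(user, ad) * ad["bid"]

best_ad = max(candidate_ads, key=lambda ad: expected_revenue(user_profile, ad))
print(best_ad["id"])  # 'ad-1' wins here: strong interest plus heavy past clicks on politics
```

The design choice worth noticing is that the system does not ask whether an ad is true or good for you; it simply ranks whatever content is predicted to hold your attention and generate revenue.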
This business model begets microtargeting among political communicators. The more digital platforms can curate your experience and compel your attention through the data they hold about you and the profiles they have created from that data, the more effectively they can craft filter bubbles that lend themselves to political targeting.
Segmentation and microtargeting are valuable for commercial purposes, and the systems built to maximize that commercial value can be used just as easily to divide us all up into politically opposed groups and feed us the ads and content that they predict we’ll wish to see. And it’s our personal data – the information derived from our use of digital platforms, drawn from our voter files, and purchased from non-transparent brokers – that fuels this activity. It’s an explicit harvesting of our likes, dislikes, interests, preferences, behaviors and beliefs.
Microtargeting and the spread of misinformation
Political communicators who master microtargeting and content curation over social media can also tap the gold mine of modern politics: “organic” shares and reshares of content pushed by unpaid users who appreciate what they see — whether or not they know it is true — and wish to spread it around their networks. This amounts to free distribution for the political campaign. It is this concept – the viral spread of “unpaid” or “organic” content – that further encourages the success of misinformation campaigns.
Misinformation is false content that is spread – knowingly or unknowingly – by users (whereas disinformation is false content that is spread knowingly). If someone shared an article claiming “The earth is flat” because they truly believed it, that would be considered misinformation. False information is more dangerous still when it concerns politics and the future of our democracy. For example, despite viral claims to the contrary, the Pope did not endorse Donald Trump. And, of note, researchers recently concluded that we consume and re-spread fake news much faster and farther than the truth (Science, March 2018).
Voters need to rely on themselves
Revisiting Parscale’s boast, it becomes clearer how he and the Trump campaign might have accomplished their tremendous successes over social media. And we cannot rely on the industry to keep its platforms free of disinformation or to detect all malicious activity.
We must remain vigilant and employ our best intuition and judgment in the face of online political advertising. Here are a few things that we should try to remember.
- Examine political content with great vigilance. At a bare minimum, we must treat with great circumspection any content – paid or unpaid – that we see on the internet involving a political candidate. This has always been the case even for television advertising, but in the digital world, where accountability is less of a given, we must be triply cautious.
- Use all the information that the platforms give you. Some of the major social media platforms are developing mechanisms to flag for the user that certain content is misinformation, disinformation, or some other sort of policy-violating content. This information might appear as a disclaimer or warning near the content itself, or the content’s panel might carry a visual indicator noting that it is dangerous.
- Report content that violates your personal standards of integrity. The next time you see an ad for a candidate – whether they are competing for local office, Congress, or the White House – consider where the money behind it could be coming from, what the ad says, and whom it targets. Then consider whether it could realistically be content meant to misinform – and if so, report it to the platform on which you saw it. Most major social media platforms, including Facebook, YouTube and Twitter, have ways to report specific posts, pages and videos.
- Use free online tools to understand the political environment. Some of the major platforms now offer tools to help inform your understanding of the information you see. Examine some of the disinformation that Russia disseminated during the 2016 cycle so that you know what to look for. Should you wish to explore further, look at the entire database of 3,500 Russian disinformation Facebook posts released by Congress. You can also use free online tools like ProPublica’s Facebook political ad collector to examine what other users have reported and shared with the online community.
- Check your information diet. Many argue that the modern information diet of the average American is not a healthy one, particularly because of social media algorithms that have created the filter bubble problem. We should try to diversify the media through which we consume content — including by consuming more traditional media — so as to stay as informed as possible.
It is a tough thing to ask voters to remain cautious in this way. For the most part, we don’t have the time or energy to think critically about the information we see on a daily basis; we have busy lives and many other things to do besides worry about the veracity and earnestness of political communications.
But for now — until the industry does a better job to protect us and the government steps up its enforcement mechanisms to protect our information ecosystem — being cautious is indeed the best weapon the American body politic possesses in the face of harmful disinformation.
The time has come for us to take things into our own hands. Nothing short of the honesty and integrity of our democracy is at stake.
Dipayan Ghosh, Ph.D. (@ghoshd7) is the Pozen Fellow at the Shorenstein Center at the Harvard Kennedy School. He was a technology and economic policy advisor in the Obama White House, and until recently, served as a privacy and public policy advisor at Facebook. Opinions are his own.
If you enjoyed this content, you may also like:
- 5 Things You can Do to Counter Misinformation
- Why Fair Elections Require Responsible Tech
- Candidate Y, speculative fiction by Malka Older
- Hello, I’m Your Election, speculative fiction by Genevieve Valentine
- What to Expect When You’re Electing, an IRL podcast episode