Just What We Need: An Algorithm to Help Politicians Pander

A Northeastern University researcher has developed an algorithm that could make it even easier for politicians to know what to say to make us love them.

There's a reason Republicans love to name-drop Ronald Reagan. It's not because their policies are always in line with Reagan's, as many opponents have pointed out. And it's not because they're trying to get us drunk during debate drinking games. They talk about Reagan because Reagan is popular in polls. Whether or not their platforms sync up with the Gipper's, they talk about Reagan because, well, Reagan sells.

It's no secret that politicians pander. They cling to trite concepts and overused buzzwords because they've got polls, focus groups, and an ever-growing deluge of data from social media sites telling them that those terms are the ones we want to hear. It's a tried and true method, but it's far from precise. Figuring out the right things to say still requires plenty of trial and error on the part of the campaigns.

But in the future, says Northeastern University researcher Nick Beauchamp, machine learning technology could change that. He's developed an algorithm that could make it even easier for politicians to know exactly what to say to make us love them and hate their rivals. It's a future that's as fascinating as it is terrifying.

As an assistant professor in Northeastern's department of political science, Beauchamp studies the way political arguments can change political opinion. When he began developing this algorithm, he says, he wasn't looking for a way to make it easier for politicians to manipulate the masses. Instead, he wanted a deeper understanding of what makes people support the issues they support and oppose the issues they oppose. He wanted to break apart standard political discourse to figure out which elements of a given issue are most likely to be viewed favorably or unfavorably, and, most importantly, how tweaking the way we talk about that issue can affect public support.

Beauchamp began work on an algorithm that he hoped could crack the code. First, he needed to pick an issue. He settled on Obamacare because, he says, it's an issue on which many Americans still have fluid opinions. He then skimmed 2,000 sentences from a pro-Obamacare website called ObamaCareFacts.com and fed them to a machine learning model. The system grouped those 2,000 sentences into topics, such as costs or health care exchanges, and began mixing and matching.
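To make the grouping step concrete, here is a toy sketch of bucketing sentences into topic pools. The topic names, keyword lists, and sentences are invented for illustration; Beauchamp's actual model presumably inferred topics statistically rather than by keyword matching.

```python
# Toy sketch: assign each sentence to the topic whose keywords it
# mentions most, then bucket sentences into per-topic pools.
# Keywords and sentences are invented; the real system used a
# statistical topic model over 2,000 scraped sentences.

TOPIC_KEYWORDS = {
    "costs": {"cost", "costs", "premium", "premiums", "afford"},
    "exchanges": {"exchange", "exchanges", "marketplace", "enroll"},
    "preexisting": {"pre-existing", "condition", "conditions", "denied"},
}

def assign_topic(sentence):
    """Return the topic whose keyword set best overlaps the sentence."""
    words = set(sentence.lower().replace(",", "").replace(".", "").split())
    scores = {t: len(words & kw) for t, kw in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

def group_by_topic(sentences):
    """Bucket sentences into topic pools for later mixing and matching."""
    pools = {}
    for s in sentences:
        pools.setdefault(assign_topic(s), []).append(s)
    return pools

sentences = [
    "Premiums and costs vary by plan.",
    "You can enroll through your state exchange.",
    "Insurers cannot deny coverage for a pre-existing condition.",
]
pools = group_by_topic(sentences)
```

Once sentences sit in pools like these, the system can draw one or two from each pool to build candidate combinations.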

After the machines took a swing at political discourse, Beauchamp turned to the human brains on Mechanical Turk, Amazon's online community for crowdsourcing tasks. Using the formulations developed by the model, Beauchamp sent hundreds of Turkers in the United States various combinations of sentences, then asked them to rate, on a scale of 1 to 9, how strongly they approved or disapproved of Obamacare. Based on their answers, the system would go back to the topic pools to find more favorable sentence combinations and send them out to a new group of Turkers.

"The goal is: Can you combine better and better collections of sentences such that after people read them they’re more disposed toward Obamacare?" Beauchamp says.
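The loop Beauchamp describes amounts to an iterative search: sample sentence combinations, collect ratings, and keep the best-scoring ones. Below is a minimal sketch of that idea. The sentence pools and the `rate()` function are invented stand-ins, with fixed per-sentence weights simulating what live Mechanical Turk rounds would measure.

```python
import random

# Toy sketch of the optimization loop: repeatedly sample sentence
# combinations from the topic pools, "survey" each one, and keep the
# highest-rated combination found so far. rate() simulates 1-to-9
# human ratings with invented weights; the real loop used fresh
# groups of Turkers each round.

POOLS = {
    "costs": ["Subsidies lower premiums.", "Costs vary by plan."],
    "preexisting": ["Coverage cannot be denied for pre-existing conditions."],
    "legal": ["States may challenge the mandate in court."],
}

# Hidden "true" persuasiveness of each sentence (stand-in for raters).
WEIGHTS = {
    "Subsidies lower premiums.": 6.0,
    "Costs vary by plan.": 5.0,
    "Coverage cannot be denied for pre-existing conditions.": 8.0,
    "States may challenge the mandate in court.": 2.0,
}

def rate(combo):
    """Simulated average 1-to-9 approval rating for a combination."""
    return sum(WEIGHTS[s] for s in combo) / len(combo)

def optimize(rounds=50, combo_size=2, seed=0):
    """Search for the sentence combination with the best rating."""
    rng = random.Random(seed)
    sentences = [s for pool in POOLS.values() for s in pool]
    best_combo, best_score = None, float("-inf")
    for _ in range(rounds):
        combo = rng.sample(sentences, combo_size)
        score = rate(combo)
        if score > best_score:
            best_combo, best_score = combo, score
    return best_combo, best_score
```

In this toy version the loop converges on the high-weight sentences about costs and pre-existing conditions, mirroring the pattern Beauchamp observed in his real data.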

Within an hour and a half, Beauchamp was left with a collection of text that had a 30 percent higher approval rating than the original text. He discovered that sentences about pre-existing conditions and employer-employee relationships tended to be viewed most favorably, while sentences about legal rights and state and federal rights were viewed least favorably.

"All of these sentences theoretically are in favor of Obamacare," he says. "So it’s interesting some of them tend to backfire or be less persuasive." While Beauchamp's system was looking for text that would persuade people to support Obamacare, he says, it could just as easily do the opposite, assembling collections of text that garner more and more disapproval.

The Art of Manipulation

There are countless ways to use a model like this one, Beauchamp says. A campaign could, for instance, feed the model sentences from a speech to figure out what to keep and what to cut. It could feed it everything the candidate has said to learn what people like most and least. It could even help candidates figure out what makes people love Donald Trump, by combining Trump's speeches with their own to see which Trump quotes rise to the top.

Social media already gives campaigns a good sense of which topics are most correlated with favorable or unfavorable conversation about a candidate. But Beauchamp says it's tough to prove causality in those instances, to pin down what, exactly, caused a favorable reaction. An experiment like this one can.

Beauchamp says the model is still a work in progress, but already, he's keenly aware of how this power could be abused by politicians. After all, it turns the already unscrupulous art of manipulation into a science. And yet, it also calls attention to a central issue of democracy, particularly in a world in which data on public opinion is so plentiful.

"Democracy has this inherent problem where if you do it right, you're perfectly pandering to the audience," he says. "We’re all worried by that, but we also, at the same time, all believe in democracy."

If we're more aware of how easily we can be manipulated, perhaps we'll be more willing to question those who are trying to manipulate us.