Tech Policy

Your own devices will give the next Cambridge Analytica far more power to influence your vote

Greater connectivity, more data, and auto-generated content will make today’s manipulation techniques look primitive.

Apr 2, 2018

As the scale and scope of the Facebook personal data scandal grow, there are questions galore: why Facebook took so long to act, whether the company should be held liable, and just how much trouble Cambridge Analytica’s executives are in across multiple jurisdictions. The FTC has opened an official investigation into Facebook; Palantir, billionaire Trump supporter Peter Thiel’s data company, has been implicated in the scandal; and so has Trump’s new national security advisor, John Bolton.

But what’s most important are the implications for the future.

Though it’s not clear whether Cambridge Analytica’s behavioral profiling and microtargeting had any measurable effect on the 2016 US election, these technologies are advancing quickly: faster than academics can study their effects, and certainly faster than policymakers can respond. The next generation of such firms will almost certainly deliver on the promise.

Research points to where the field is headed. At an event that NYC Media Lab hosted in 2015, Alexander Tuzhilin, professor of information systems at the NYU Stern School of Business, pointed out that most of the targeting applications we see today represent the second generation of these technologies. They draw on contextual awareness, spatiotemporal and mobile data, multi-criteria ratings, social-media activity, conversational recommendations, and more. These are the standard tools of the trade for internet marketers, and they were the tools Cambridge Analytica used in 2016.
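To make that concrete, here is a minimal, hypothetical sketch of how such a system might collapse several of those signals into a single relevance score per user. Every feature name, weight, and profile field below is invented for illustration and drawn from no real platform.

```python
# Hypothetical sketch: ranking ad variants against a behavioral profile.
# Every feature, weight, and threshold here is invented for illustration.
from dataclasses import dataclass


@dataclass
class UserProfile:
    interests: dict[str, float]      # topic -> affinity score in [0, 1]
    location: str                    # coarse location from mobile data
    hour_of_day: int                 # 0-23, the spatiotemporal context
    traits: dict[str, float]        # psychographic signals from social data


@dataclass
class AdVariant:
    topic: str
    target_location: str | None      # None means no geographic targeting
    active_hours: range              # hours when this creative should run
    trait_weights: dict[str, float]  # traits the creative appeals to


def relevance_score(user: UserProfile, ad: AdVariant) -> float:
    """Collapse heterogeneous signals into one score per (user, ad) pair."""
    score = user.interests.get(ad.topic, 0.0)
    if user.hour_of_day in ad.active_hours:   # contextual match
        score += 0.2
    if ad.target_location == user.location:   # spatiotemporal match
        score += 0.3
    # Psychographic match: weighted sum of trait signals.
    score += sum(user.traits.get(t, 0.0) * w
                 for t, w in ad.trait_weights.items())
    return score


def pick_variant(user: UserProfile, variants: list[AdVariant]) -> AdVariant:
    """Show each user whichever creative scores highest for them."""
    return max(variants, key=lambda ad: relevance_score(user, ad))
```

A real platform would learn those weights from behavioral data rather than hand-coding them; the point is only that many heterogeneous signals reduce to a ranking that decides, person by person, which message appears.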

But the third generation of targeting is already upon us. In the next few years, experts like Tuzhilin predict, we’ll see the convergence of multiple disciplines, including data mining, artificial intelligence, psychology, marketing, economics, and experiential design theory. These methods will combine with an exponential increase in the number of surveillance sensors we introduce into our homes and communities, from voice assistants to internet-of-things devices that track people as they move throughout the day. Our devices will get better at detecting facial expressions, interpreting speech, and analyzing physiological signals.

In other words, the machines will know us better tomorrow than they do today. They will certainly have the data. While the General Data Protection Regulation is about to take effect in the European Union, the US is headed in the opposite direction. Facebook may have clamped down on access to its data, but there is more information about citizens on the market than ever before. Last year, Congress repealed Federal Communications Commission privacy rules, freeing internet service providers to sell data on your web-browsing behavior. That’s just one example of what will be available through legitimate means, not to mention all the data sloshing around thanks to hacks and misuse.

The worst-case scenario is that advanced targeting technologies fed by all this data will combine with new methods for automatically generating convincing content, not just text but also images, video, and audio, as a report on online political manipulation from the European Data Protection Supervisor warned last week. In his testimony to the British Parliament, now-suspended Cambridge Analytica CEO Alexander Nix said the company had generated thousands of pieces of creative content for the Trump campaign, tailored to its various targets. Most of that was presumably produced by hand, since the company offered creative services. Now imagine future campaigns deploying millions, if not billions, of precisely tailored messages, including synthetic video, all machine-generated. The tools to do so are available now.
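To see why scale is the crux, consider a rough, hypothetical illustration: even naive template expansion multiplies a few inputs into many distinct messages. The templates, issues, and places below are invented; generative models would replace the templates with synthesized text, images, or video.

```python
# Hypothetical sketch: template expansion multiplies a few inputs into
# many distinct targeted messages. All slot values are invented.
from itertools import product

TEMPLATES = [
    "Worried about {issue}? {candidate} has a plan for {place}.",
    "{place} deserves better on {issue}. See what {candidate} will do.",
    "They ignored {issue} in {place} for years. {candidate} won't.",
]
ISSUES = ["jobs", "crime", "health care", "immigration", "taxes"]
PLACES = ["Erie County", "Macomb County", "Kenosha"]  # invented target areas
CANDIDATES = ["Candidate A"]

messages = [
    template.format(issue=issue, place=place, candidate=candidate)
    for template, issue, place, candidate
    in product(TEMPLATES, ISSUES, PLACES, CANDIDATES)
]

# 3 templates x 5 issues x 3 places = 45 variants from a dozen lines of
# input; add audiences, images, and phrasing variants and the count
# compounds multiplicatively.
print(len(messages))  # -> 45
```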

True, the jury is still out on whether these technologies influence people as effectively as Nix claimed in his sales pitches. Cambridge Analytica’s services were certainly hyped, and we won’t know their real impact without full disclosure in the British courts. In a February paper in the Utrecht Law Review, “Online Political Microtargeting: Promises and Threats for Democracy,” researchers concluded that far more data and transparency are needed to understand the effects of these technologies. In particular, regulation is needed so that governments and citizens can see what goes on inside the “black box” systems at companies like Facebook, YouTube, Snap, and Twitter.

In new industries, from energy to cars to food production, it takes time for society to recognize harmful effects and put controls in place to preserve the things that matter. The same is true of the information ecosystem and the public sphere. The future of our democracy requires us to imagine these technologies as potential threats and to recognize that unfettered innovation in social media has had its heyday. If today’s Facebook data scandal proves a tipping point, it may yet portend a brighter future.

Justin Hendrix is executive director of NYC Media Lab. He started a website, RegulateSocialMedia.org, to foster discussion about appropriate regulations for social-media platforms.

David Carroll is a professor at the Parsons School of Design at the New School in New York. He filed a legal claim against Cambridge Analytica and its SCL affiliates to force the company to disclose how it arrived at the psychographic targeting profile it holds on him.