Was one of the news headlines you scrolled past on your social media feed today shared by a bot? Depending on the platform and the country you’re in, the odds are surprisingly high that automated computer scripts – political bots – are shaping the content you see online. In doing so, these bots accomplish their programmers’ objective: populating the social media space with a specific type of messaging. Given the increasingly central role social media platforms play in the circulation of news, the mounting use of vast networks of political bots to circulate content has profound implications for the composition of our collective media diet and, in turn, for the way public opinion is shaped.
Last week the Political Bots project at the Oxford Internet Institute released a series of case studies that examine how governments and politicians are using computational propaganda to mold public opinion. Governments of authoritarian regimes are using bots to promote the official line, or to sow disinformation that weakens political opponents. In democratic countries, bots are the latest marketing tool politicians and political parties use to amplify their messages. In both cases, the public is largely unaware of how software programs are shaping the news and information they see. Furthermore, improvements in artificial intelligence (AI) mean that, in addition to merely sharing pre-programmed messages, these bots are now able to convincingly mimic human conversation. Even our exchanges with other social media users, therefore, are potentially shaped by political actors trying to frame the debate online.
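To make the mechanics concrete, the sketch below shows the kind of pre-programmed amplification loop the reports describe. Everything here is hypothetical: the candidate, the message pool, and the post() stand-in (which only prints to the console) are invented for illustration; an actual bot would authenticate against a platform API and run across a coordinated network of accounts.

```python
import random
import time

# Hypothetical sketch of a pre-programmed amplification loop.
# All names and messages are invented; a real bot would call a
# platform API here instead of printing.
MESSAGES = [
    "Candidate X is surging in the polls! #ElectionDay",
    "Don't believe the critics, Candidate X delivers. #ElectionDay",
    "RT if you stand with Candidate X! #ElectionDay",
]

def post(text: str) -> None:
    """Stand-in for a platform API call; this sketch only prints."""
    print(f"[bot] {text}")

def run_bot(num_posts: int = 5) -> None:
    for _ in range(num_posts):
        post(random.choice(MESSAGES))         # pre-programmed talking points
        time.sleep(random.uniform(0.5, 2.0))  # jittered delay to appear human

if __name__ == "__main__":
    run_bot()
```

The jittered delay is the telling detail: even simple bots randomize their timing so the output looks less machine-like, which is part of why the public rarely notices them.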
The nine national case studies (Russia, Ukraine, China, Taiwan, Poland, Germany, Brazil, USA, Canada) represent the first collection of reports on computational propaganda that examines its use in different contexts. Indeed, one of the primary takeaways is that governments and other political actors are using bots for a variety of purposes and in quite distinct ways around the world. What makes these reports so rich is that they include both quantitative social network analyses of bot activity and qualitative interviews with the bot programmers and political consultants who ultimately determine how bots will be used and what messages they should amplify. It is important to remember that humans carefully engineer bot interventions with specific intentions in mind.
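On the quantitative side, one simple signal researchers use to spot automated accounts is posting frequency, a simpler cousin of the network analyses the reports rely on. The sketch below is illustrative only: the account names and records are invented, and the 50-posts-per-day cutoff is an assumption, mirroring a threshold the Oxford team has used as a rough proxy for heavy automation.

```python
from collections import Counter

# Illustrative heuristic: flag accounts posting at inhuman rates.
# The records are invented; the 50-posts-per-day default is an assumed
# proxy for heavy automation, not a definitive classifier.
posts = [
    ("@daily_reader", "2017-06-20"),
    ("@amplifier_007", "2017-06-20"),
    ("@amplifier_007", "2017-06-20"),
    # ... in a real dataset, one record per post collected from a hashtag stream
]

def flag_suspected_bots(records, threshold=50):
    """Return accounts whose post count on any single day meets the threshold."""
    per_account_day = Counter(records)  # counts each (account, day) pair
    return {account for (account, _day), n in per_account_day.items()
            if n >= threshold}

print(flag_suspected_bots(posts, threshold=2))  # -> {'@amplifier_007'}
```

Real analyses layer additional signals on top of frequency (account age, retweet ratios, network structure), but posting rate alone already separates out the most aggressive amplifier accounts.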
The implications of political bots for the news media are profound, especially considering that more people are turning to social media platforms as their primary source of news. According to the Pew Research Center, 62% of adults in the US got news on social media in 2016, and younger generations are much more likely to rely on social media for news. The role of bots in populating the content of social media platforms undercuts producers of news who want to get their content to readers: they must now try to reach readers in an environment cluttered with content, much of it amplified by bots. The rise of bots thus represents one more way in which the role of media outlets as gatekeepers of news and information is being diminished.
The use of computational propaganda, particularly by authoritarian regimes, should be of great concern to supporters of press freedom. The lead researchers of the Political Bots project, Phil Howard and Samuel Woolley, assert that “Computational propaganda is now one of the most powerful tools against democracy.” Political bots allow these regimes to organize complex disinformation campaigns and drown out critical voices online. As the authors of the Ukraine case study note, when bots are used to harass and troll specific users, their targets are often journalists, most likely in an attempt to silence their reporting and to make them think twice before writing future articles that challenge those in power. The use of bots in this way threatens citizens’ ability to access high-quality, reliable news. In the long run, the toxic mix of computational propaganda and coordinated disinformation campaigns will undermine overall trust in the media, which could fundamentally damage the vibrant media ecosystems on which democracy relies.
Social media platforms have competing interests when it comes to responding to the rise of political bots. On the one hand, they likely stand to benefit financially from the increased activity and engagement that political bots generate. For this reason, critics argue that platforms are partially to blame for the spread of misinformation by bots and are thus morally obligated to do more than shut down fraudulent accounts once they are identified. On the other hand, social media platforms need, in the long run, to be inviting spaces for users if they are to succeed. When a platform becomes overrun with content populated by political bots, users are unlikely to enjoy spending time there, which will deprive the platforms of advertising revenue. According to the authors of the Russia case study, 45% of highly active Twitter accounts are currently bots. No doubt executives and engineers at social media platforms are concerned about this problem, even if they have not yet determined the best response.
Research on computational propaganda is still in its early stages, and the speed with which social media platforms change means that how bots are used right now may look quite different in a couple of years’ time. Indeed, the dominant platforms of today may, in the future, be overtaken by networks that haven’t even been imagined yet. That said, the role social media plays in circulating news is probably here to stay. And given that framing the debate online has such powerful impacts offline, political actors are unlikely to stop trying to do so via computational propaganda. For those concerned about the type of news people have access to, research on political bots adds another layer to the complex and unfolding story of how digital platforms are restructuring our news ecosystems globally.
Daniel O’Maley is the Associate Editor at the Center for International Media Assistance.