"Social bots" are supposed to be human-like "opinion robots" programmed to influence political discussions in the social media space. They have been blamed for the outcome of the Brexit vote, Trump's election and, in Germany, for the opposition to the UN migration pact. Academic supporters of this theory, such as Dirk Helbing, professor at ETH Zurich, did not grow tired of warning the public of supposed threats associated with this phenomenon. As a result, in November, Germany’s Federal Council (Bundesrat) demanded that “a labelling obligation for so-called social bots should be imposed immediately”.
In December, a survey conducted by a company called Botswatch received wide media attention in Germany. The company claimed that 28 percent of the tweets on the UN migration pact came from "social bots". A few weeks later, researchers at the University of Duisburg-Essen published a methodologically bizarre simulation to demonstrate that just a few "social bots" could be enough to influence political opinions on the Internet.
Scientists: up to 15 percent of Twitter accounts are “social bots”
A publication by the University of Zurich and the FU Berlin about the 2017 Bundestag elections even claimed that 9.9 percent of the approximately 838,000 followers of the Twitter accounts of seven German parties during the election campaign had been “social bots”, i.e. almost 83,000. In the Tagesspiegel, the co-author of this study - Ulrike Klinger, junior professor at the FU Berlin and at the Weizenbaum Institute - recently stated that it could be generally assumed that nine to 15 percent of all Twitter accounts were controlled by algorithms. In addition to “passive bots”, these include “active bots” that share links and engage in discussions, and that are “often difficult to recognize as computer programs”.
The idea of a computer program that can take part in political discussions about current topics while being hardly distinguishable from a human being seems rather surprising, given the current state of technology in the field of so-called “artificial intelligence”. Even far simpler tasks, such as making an appointment, remain challenging for today's chatbot technology. Apparently, the creators of “social bots” have a huge technological lead over researchers and technology companies all over the world.
The question is: Who has ever experienced a "social bot" in action?
Social Bots: A question of definition
The company Botswatch does not want to give any details about the "social bots" it found, citing “data privacy”. The authors of the “social bot" study at the University of Duisburg-Essen cannot name a single account that they would call a “social bot". And when we asked him on Twitter, Dirk Helbing of ETH Zurich was likewise unable to name a single Twitter account that could be considered a “social bot”.
This leaves us with the study by the University of Zurich and the FU Berlin, in which 83,000 “social bots” had supposedly been identified. Upon request, the study’s first author Tobias Keller gave us four hand-selected examples of Twitter accounts from his study that he considered "social bots". These are the accounts @ebookguide, @ciofinance, @bioltec and @Opticlean24. All four are operated by German commercial enterprises and have nothing to do with politics. None of the accounts appears to be automated or to resemble an "opinion robot".
A detector that turns humans into bots
This should make you think. Keller and Klinger used the so-called “Botometer” as the basis for their study. It was programmed by developers working with the American researcher Emilio Ferrara and has considerable methodological shortcomings. After the American presidential election, the same program was used to claim that bots had played a decisive role in the election campaign. This claim, too, has already given rise to debate.
The idea of the “Botometer” is to apply a machine learning algorithm to the problem of differentiating between "social bots" and real people. However, this requires representative training data, which is obviously a problem in the case of "social bots".
Ferrara decided to use an old sample of spam accounts as training data instead. He hides this problem by simply calling these spam accounts bots. His second trick is to always accept the number his “Botometer” spits out as the absolute truth. Every account that has a “Botometer score” of more than 2.5 (on a scale from zero to five) is considered a bot and consistently referred to as such.
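The hard cutoff described above can be illustrated in a few lines of Python. The 2.5 threshold and the zero-to-five scale are taken from the text; the function name and the example scores are ours, for illustration only:

```python
# Illustrative sketch of the Botometer's hard cutoff: any account whose
# score exceeds 2.5 (on a zero-to-five scale) is labelled a bot outright,
# with no notion of uncertainty in between.
BOT_THRESHOLD = 2.5

def classify(score: float) -> str:
    """Apply the binary threshold rule to a score between 0 and 5."""
    return "bot" if score > BOT_THRESHOLD else "human"

# Hypothetical example scores:
print(classify(4.8))  # prints: bot
print(classify(1.2))  # prints: human
```

A rule like this converts every scoring error above 2.5 directly into a confidently asserted "bot", which is exactly the false-positive problem discussed below.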
Professors and journalists classified as bots
Not surprisingly, the results provided by the “Botometer” are ridiculous. We took the liberty of contacting the three Twitter followers of the Weizenbaum Institute with the highest Botometer scores, 4.9 and 4.8. All three kindly answered us and turned out to be very human: a professor from Israel, a scientist from England, and an editor at a German publishing house. None of the three accounts showed even the slightest sign of automation.
When we evaluated the “Botometer” results more systematically, we observed that, almost regardless of the group of people inspected, it misclassified around ten percent of human accounts as bots. Of the 1,355 Twitter accounts followed by the Tagesspiegel, 93 are declared bots (seven percent), including, for example, the journalist Anna Graefe with a Botometer score of 4.8. Out of 515 accounts of members of the German Parliament, 60 are falsely identified as bots (12 percent). And even among 68 accounts of Nobel laureates, seven are classified as “bots” (10 percent). However, a list of 396 accounts of employees of the German Press Agency dpa is the front-runner of our survey: here, the alleged "bot rate" is a stunning 33 percent.
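The percentages above follow directly from the raw counts. A short Python sketch reproduces them; the group sizes and flagged counts are taken from the figures in this article (the dpa list is omitted because only its rate, not its raw count, is given):

```python
# Flagged-account rates for the all-human lists discussed above,
# with sizes and flagged counts as reported in this article.
groups = {
    "Tagesspiegel follows": (1355, 93),
    "Bundestag members": (515, 60),
    "Nobel laureates": (68, 7),
}

for name, (size, flagged) in groups.items():
    rate = flagged / size
    print(f"{name}: {flagged}/{size} flagged = {rate:.0%}")
# Tagesspiegel follows: 93/1355 flagged = 7%
# Bundestag members: 60/515 flagged = 12%
# Nobel laureates: 7/68 flagged = 10%
```

That the rates cluster around ten percent across unrelated, plainly human groups is what one would expect from a roughly constant false-positive rate, not from actual bot populations.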
As we’ve said before, the “Botometer” is the basis for the latest study by Keller and Klinger. Instead of investigating “social bots” as claimed, they are dealing with false positives. Their findings about alleged "social bots" can be fully explained with false positives from the obscure “Botometer”.
Simple explanation for political developments
In just a few years, so-called "social bot research" has grown into a field of research with considerable funding pools, but it has completely decoupled itself from reality. Politicians and funding organizations should take note of this development.
The idea of a sinister "botmaster" who pulls the strings in manipulating public opinion and clouds the minds of citizens on the Internet may seem all too attractive to many as a simple explanation for undesired political developments. But political discourse would gain a lot if politicians and journalists stopped denying people on social media their humanity on the basis of a bizarre conspiracy theory.
Florian Gallwitz is a professor of computer science and media at the Nuremberg Institute of Technology. Michael Kreil is a data journalist at the Infographics Group in Berlin.
Note: We have published the raw data and the program code used to obtain the results presented in this article on GitHub.