“Online trolls” are alive and well in the V4
Political Capital and its regional partners – the Czech PSSI, the Slovak IVO, and Sastre Consulting – researched the prevalence and effects of inauthentic online behavior concerning Russia’s war in Ukraine between 21 February and 31 March 2022. Our research revealed the existence of potentially coordinated “online troll networks” engaged in efforts to artificially distort Facebook discussions in all V4 member states, although their methods and goals differ from country to country. The main findings of our research are the following:
- The vast majority of the messages being spread in the V4 were in line with the Kremlin’s interests, although the situation in Slovakia was completely different. We can differentiate multiple groups of users – some likely fake, some real – engaged in inauthentic online behavior, which we defined as attempts to derail social media discussions by copying (nearly) the exact same text under several Facebook posts, with a message having to appear at least 100 times altogether to be counted (see the first sketch after this list). The three main user groups are:
- The “professionals” are likely fake profiles representing the interests of international actors (mainly the Kremlin) in coordinated campaigns.
- The “enthusiastic activists” are seeking to convince others of the validity of their views and are highly active. Their work is potentially coordinated via closed Facebook or Telegram groups.
- The “average users” likely just see something they agree with and share the same text a few times.
- There is reason to believe that the actions of some of the users engaged in inauthentic online activity are coordinated, beyond the mere presence of fake profiles. First, there seems to be no other logical explanation for dozens, sometimes hundreds, of profiles copying the same text under the posts of various Facebook pages. Second, some profiles were involved in spreading multiple repetitive comments. Third, on the most active days of dissemination, the comments arrived in a steady stream throughout the day, in some cases including working hours, and they often arrived in batches of 8-10 within an hour (see the second sketch after this list). Fourth, the texts were often “introduced” by one or two unique-looking sentences, but even these were repeated by multiple different users, suggesting the existence of some sort of “guidelines” for “personalizing” the messages.
- Hungary saw the highest level of inauthentic online activity, largely because two election-related narratives – one pro-government, one pro-opposition – were being spread during the campaign period. The pro-government narrative, warning Hungarians to vote for Fidesz-KDNP to preserve peace in the country, was the most frequently repeated of the six messages we found in Hungarian Facebook discussions.
- In Hungary and Czechia, most users engaged in spreading repetitive messages tried to push well-known pro-Kremlin narratives into mainstream Facebook discussions. The messages claimed that (1) Ukraine committed genocide in the Donbas, (2) the US wanted to open NATO bases in Crimea, (3) neo-Nazis took over Ukraine, (4) Ukraine does not exist, or (5) Ukraine was being led by a theatre company. The false allegations about genocide in the Donbas and “neo-Nazi Ukraine” were spread in both countries. One of the repetitive messages about Ukraine being taken over by neo-Nazis spread in Czechia was a Facebook post by a member of the right-wing, Eurosceptic Tricolor party. The manipulative claims about NATO bases in Crimea and the theatre company were exclusive to Hungary (in terms of repetitive comments).
- Spreading openly pro-Kremlin messages would be counter-productive in Poland. Thus, the repetitive messages there (mainly spread by two extremely active commenters) focused on sowing discord and generating a sense of insecurity over the war. The repetitive texts suggested that the ruling PiS had mismanaged the country’s national security efforts, while NATO cooperation could drag Poland into the war. This strategy elicited responses mainly at times when Poles’ sense of security was lower, such as the early days of the war or when the idea of sending Polish peacekeepers to Ukraine first emerged.
- The Slovak situation is entirely different, as most repetitive comments were pro-West. One constantly repeated message – a quote from the Slovak Penal Code concerning the crime of disrupting the peace – was neutral in content, but it was used against both pro-Kremlin and pro-West commenters. This is probably the result of the perceived strength of pro-Kremlin public opinion and its materialization in social media discussions. Countering pro-Kremlin disinformation narratives online can be a valid goal, but if the message being spread is itself factually inaccurate (as was seen in one case in Slovakia), it might backfire.
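To illustrate the repetition criterion above, here is a minimal Python sketch of how such repeated messages could be surfaced. It is not the study’s actual tooling: the comment schema (“user”, “post_id”, “text”), the normalization step, and the exact-match grouping are assumptions made for the example.

```python
# Hypothetical sketch, not the study's pipeline: group (nearly) identical
# comment texts across posts and keep those appearing at least 100 times,
# per the repetition criterion described in the findings above.
import re
from collections import defaultdict

REPEAT_THRESHOLD = 100  # the study's cut-off for a "repetitive message"

def normalize(text: str) -> str:
    """Lower-case, strip punctuation, and collapse whitespace so near-identical
    copies of the same message map to the same key."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def find_repetitive_messages(comments):
    """comments: an iterable of dicts such as
    {"user": ..., "post_id": ..., "text": ...} (an assumed schema).
    Returns the normalized texts repeated at least REPEAT_THRESHOLD times,
    along with the distinct users and posts involved."""
    stats = defaultdict(lambda: {"count": 0, "users": set(), "posts": set()})
    for c in comments:
        entry = stats[normalize(c["text"])]
        entry["count"] += 1
        entry["users"].add(c["user"])
        entry["posts"].add(c["post_id"])
    return {text: s for text, s in stats.items()
            if s["count"] >= REPEAT_THRESHOLD}
```

A real pipeline would need fuzzier matching, since the study counted “nearly” identical texts as well; exact matching on normalized text only captures verbatim copy-paste.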
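The temporal indicator above – batches of 8-10 copies of the same text within an hour – can be sketched in the same spirit. This too is an assumed, illustrative approach rather than the study’s method: it buckets the timestamps of one repeated message by calendar hour and flags the dense hours.

```python
# Hypothetical sketch: flag the one-hour windows in which a single repeated
# message arrived in a batch, per the 8-10-per-hour pattern described above.
from collections import Counter
from datetime import datetime

MIN_BATCH = 8  # lower bound of the observed 8-10 comments per hour

def hourly_batches(timestamps):
    """timestamps: datetimes at which copies of one repeated message appeared.
    Returns {hour: count} for each calendar hour with MIN_BATCH+ copies."""
    per_hour = Counter(t.replace(minute=0, second=0, microsecond=0)
                       for t in timestamps)
    return {hour: n for hour, n in per_hour.items() if n >= MIN_BATCH}

# Usage with made-up timestamps, purely for illustration:
# hourly_batches([datetime(2022, 3, 1, 10, m) for m in range(0, 50, 5)])
# -> {datetime(2022, 3, 1, 10, 0): 10}
```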
In the V4, inauthentic online behavior in the context of the war can be considered a severe challenge, as the profiles we found can be remarkably effective at introducing pro-Kremlin narratives to relatively broad segments of the population. Since these profiles often target mainstream media pages, pro-Kremlin actors can “bring” disinformation to citizens who might not otherwise encounter such claims, as they do not follow fringe pro-Kremlin media.
It is clear that Facebook is not doing enough to stop inauthentic behavior on its platform, at least when it occurs in smaller markets such as the V4 nations. More often, it is the admins of social media pages themselves who act against users they consider “trolls,” but Facebook’s cooperation is definitely required to lower the risks posed by inauthentic behavior.
The full study is available in English here.