At the beginning of June, the Osavul team participated in the Riga Stratcom Dialogue 2024. We were among the panelists who talked about AI and its role in virtual manipulation. Our CEO, Dmytro Plieshakov, shared our vision with the audience, discussing malicious actors, their capabilities with generative AI, and how they use it to undermine democracies and organizations such as NATO.
One of the main findings we presented is a NATO Virtual Manipulation Brief titled “Hijacking Reality: the Increased Role of Generative AI in Russian Propaganda.” In a world shaped by algorithms, the line between truth and fiction is increasingly blurred.
A recent report by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE) and Osavul shows how sophisticated generative AI has become as a tool for spreading Russian propaganda and points to a growing danger to democracies around the world. The “Virtual Manipulation Brief” explores this new digital battleground, where algorithms shape public perception.
The Mechanics of Misinformation
At the core of this new disinformation wave is generative AI, particularly Large Language Models (LLMs), which generate content that resembles human communication. This content ranges from fake news articles to social media comments. For instance, during one campaign, AI-generated comments on Twitter and Facebook went viral, simulating political discussions between real users. This not only amplifies false narratives but also lends an air of authenticity to the content, making it harder for the average person to discern reality from fiction.
Coordinated Efforts Across Platforms
The report identifies 17 coordinated groups operating through 344 sources as systematic disseminators of disinformation. These groups use networks of bots and fake accounts to drive their narratives home. In some cases, AI-generated images and videos depicted fabricated events or distorted realities, as exemplified by a deepfake video showing a NATO official allegedly making threats against Russia. The entirely fabricated video spread widely on Telegram and VKontakte (VK), demonstrating once again how efficient and far-reaching these efforts are.
Economic Incentives and Propaganda
The report also provides insights into the economics driving these disinformation campaigns. Telegram channel owners can earn a share of ad revenue through monetization, which rewards controversial posts on their channels. Channels propagating pro-Russian narratives and targeting NATO reportedly generated millions of views during the research period, translating into significant sums of money. This financial dimension adds another layer of complexity, because it gives individuals a direct incentive to create and distribute disinformation.
Key Narratives and Strategic Messaging
These groups push several different narratives, all seeking to undermine NATO and its allies. One major theme portrays NATO as an aggressor. To support this narrative, the campaigns rely on manipulated content, such as AI-generated news articles claiming that NATO was preparing unprovoked attacks on Russian territory. Another common theme is the portrayal of Russian military technology as superior to Western alternatives. Such stories are rarely grounded in fact; their purpose is to breed doubt and cultivate pro-Russian sentiment among a global audience.
The Challenges Ahead
The “Virtual Manipulation Brief” demonstrates not only what AI can do for disinformation, but also how current countermeasures fall short. The sheer volume and sophistication of AI-generated content leave platforms unable to keep up. The report calls for improved detection technologies, greater transparency in platform policies, and broader public awareness of the hazards of digital misinformation. As artificial intelligence advances, the landscape of disinformation evolves with it.
The report from NATO StratCom COE and Osavul is a grim reminder of the challenges of maintaining truth in the digital age. With another presidential race approaching in the United States, the stakes have never been higher.
Our team was happy to share these findings and our vision of what information security may look like. The Riga Stratcom Dialogue was a great platform to discuss these challenges and the ones ahead. If you want to learn more, contact our team.