
In her latest article on the use of artificial intelligence (AI) in the context of the 2024 elections, Angelica Alecu-Ciocîrlan – Associate, STOICA & Asociații – examines the legal challenges and the justified concerns about the impact of new AI-enabled tools for large-scale disinformation and manipulation, such as the "deepfake" phenomenon.

The year 2024 sets a historic record, bringing some four billion citizens to the polls to choose their leaders and representatives. Elections for various offices will be held in over 70 countries, and in 23 of them (including the United States of America and Russia) presidential elections will take place. Romania is no exception, joining this list with all four types of elections: local, parliamentary, European Parliament, and presidential.

Attempts to manipulate the electorate through various forms of propaganda are nothing new; they are almost an intrinsic part of the rules of the game. What is new, and far more challenging for the 2024 elections, is the generation of large-scale disinformation tools enabled by artificial intelligence. With the help of these technologies, phenomena such as deepfakes have reached alarming proportions, with long-term effects that cannot yet even be anticipated.

In 2021 we were impressed by the image generator DALL-E, and in 2022 by the text generator ChatGPT; this month, OpenAI (the company behind both) publicly announced a new project: Sora, a video content generator. Currently available only to a limited group of testers, Sora is an AI model that can create realistic or fictional footage from text instructions (a text-to-video model). Google and Meta (Facebook) have, in turn, shown interest in developing similar tools.

In this context, concerns about the impact of these new technical capabilities on the conduct of the 2024 elections are justified. Whereas influencing the electorate once required considerable resources and specialized niche knowledge, creating deepfake content today takes nothing more than an internet connection and access to a prompt-driven content-generation tool. Compared to the current and future risks, the Cambridge Analytica scandal may come to seem a mere faint precursor.

And these are not merely hypothetical situations; concrete examples already exist. Last month in New Hampshire (USA), robocalls impersonating the voice of President Joe Biden were reported, attempting to discourage citizens from voting in the state's primary. Similarly, just days before Slovakia's parliamentary elections in October 2023, a recording circulated online purportedly capturing a conversation between a journalist and Michal Šimečka (leader of the pro-European party Progresívne Slovensko and a vice-president of the European Parliament), in which the latter appeared to discuss committing electoral fraud. His party was subsequently defeated, and the recording turned out to be a deepfake. Although no causal relationship between this event and the election outcome can be established, the example shows how dangerous such practices are, especially when deployed in the final stretch of an electoral campaign: with insufficient time for official verification and denial, the greatest harm – doubt – has already been done.

Lastly, another recent deepfake portrays Frederick Christ Trump Sr. (who passed away 25 years ago), father of former President Donald Trump, sharply criticizing his son in a video. Notably, in this last case, the video states at the end that it was generated with the help of artificial intelligence and that The Lincoln Project (a self-proclaimed pro-democracy organization) paid for the content and took responsibility for it.

Faced with these increasingly serious challenges, major tech companies (including Google, Meta, Microsoft, OpenAI, TikTok, and X) signed an agreement last week at the Munich Security Conference: "A Tech Accord to Combat Deceptive Use of AI in 2024 Elections." The accord sets out a series of common objectives and steps to prevent and minimize the risk of artificial intelligence being used to mislead the electorate.

As a soft-law instrument – a declaration of common intent with no binding effect on its signatories – this agreement is a good start in the fight against large-scale manipulation of citizens, but it cannot replace a solid legislative framework with clear rules and concrete sanctions. As technology evolves rapidly, it becomes increasingly difficult for legislators to keep pace. At the European level, despite considerable efforts, the most anticipated regulation in this field (the AI Act) is expected to receive final approval in the European Parliament in April 2024, which would mean it takes effect in 2026 (barring further changes).

At the national level, although debates on banning deepfakes have been ongoing since last year, the Chamber of Deputies decided last week to refer the bill back to the Committees on Culture, Arts and Mass Media, and on Information Technology and Communications. The main criticisms concerned the imprecise terminology used, as well as the proposed system of penalties: specifically, whether the bill should provide for a prison sentence of six months to two years or retain only pecuniary sanctions. Although a flawed law can often cause more harm than no law at all, in the current context the legislature's passivity or slowness in designing a coherent regulatory framework for AI use could leave the door open to fraud whose very subtlety will make it difficult to prove and quantify.

Thus, for now, our only weapon against disinformation techniques is critical thinking: the careful analysis and evaluation of the information flooding every communication channel. At least for the moment, details such as imperfect synchronization between a speaker's lip movements and the audio, flat or missing voice inflections, and unnatural hand movements are telltale signs that video and/or audio content was generated with the help of AI.

It is highly likely that these imperfections will eventually be corrected and AI-generated content will become ever more realistic; when that happens, training the ability to discern truth from fiction will have to become (if it is not already) a top priority for the educational system.