Just two days before Slovakia’s elections, an audio recording was posted to Facebook. On it were two voices: allegedly, Michal Šimečka, who leads the liberal Progressive Slovakia party, and Monika Tódová from the daily newspaper Denník N. They appeared to be discussing how to rig the election, partly by buying votes from the country’s marginalized Roma minority.
Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of the news agency AFP said the audio showed signs of being manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia's election rules, the post was difficult to debunk widely. And because the post was audio, it exploited a loophole in Meta's manipulated-media policy, which dictates that only faked videos, in which a person has been edited to say words they never said, go against its rules.
The election was a tight race between two frontrunners with opposing visions for Slovakia. On Sunday it was announced that the pro-NATO party, Progressive Slovakia, had lost to SMER, which campaigned to withdraw military support for its neighbor, Ukraine.
Before the vote, the EU’s digital chief, Věra Jourová, said Slovakia’s election would be a test case of how vulnerable European elections are to the “multimillion-euro weapon of mass manipulation” used by Moscow to meddle in elections. Now, in its aftermath, countries around the world will be poring over what happened in Slovakia for clues about the challenges they too could face. Nearby Poland, which a recent EU study suggested was particularly at risk of being targeted by disinformation, goes to the polls in two weeks’ time. Next year, the UK, India, the EU, and the US are set to hold elections. The fact-checkers trying to hold the line against disinformation on social media in Slovakia say their experience shows AI is already advanced enough to disrupt elections, while they lack the tools to fight back.
“We’re not as ready for it as we should be,” says Veronika Hincová Frankovská, project manager at the fact-checking organization Demagog.
During the elections, Hincová Frankovská’s team worked long hours, dividing their time between fact-checking claims made during TV debates and monitoring social media platforms. Demagog is a fact-checking partner for Meta, which means it works with the social media company to write fact-check labels for suspected disinformation spreading on platforms like Facebook.
AI has added a new, challenging dimension to their work. Three days before the election, Meta notified the Demagog team that an audio recording in which Šimečka appeared to propose doubling the price of beer if he won was gaining traction. Šimečka called the recording a fake. "But of course the fact-checking can't be based just on what politicians say," says Hincová Frankovská.
Proving the audio had been manipulated was hard. Hincová Frankovská had heard about AI-generated posts, but her team had never actually had to fact-check one. They traced the recording to its source, discovering that it had first been posted on an anonymous Instagram account. Then they started calling around to experts, asking whether they considered the recording likely to be fake or manipulated. Finally, they tried out an AI speech classifier made by the American company Eleven Labs.
After a few hours, they were ready to confirm that they believed the recording had been altered. Their label, which still appears on Slovak-language Facebook when visitors come across the post, says: "Independent fact-checkers say that the photo or image has been edited in a way that could mislead people." Facebook users can then choose whether they want to see the post anyway.
Both the beer and vote-rigging audios remain visible on Facebook, with the fact-check label. “When content is fact-checked, we label it and down-rank it in feed, so fewer people see it—as has happened with both of these examples,” says Ben Walter, a spokesperson for Meta. “Our Community Standards apply to all content, regardless of whether it is created by AI or a person, and we will take action against content that violates these policies.”
This election was one of the first consequential votes to take place after the EU's Digital Services Act came into force in August. The act, designed to better protect human rights online, introduced new rules that were supposed to force platforms to be more proactive and transparent in their efforts to moderate disinformation.
“Slovakia was a test case to see what works and where some improvements are needed,” says Richard Kuchta, analyst at Reset, a research group that focuses on technology’s impact on democracy. “In my view, [the new law] put pressure on platforms to increase the capacities in content moderation or fact-checking. We know that Meta hired more fact-checkers for the Slovak election, but we will see if that was enough.”
Alongside the two deepfake audio recordings, Kuchta also saw two other videos featuring AI audio impersonations posted on social media by the far-right party Republika. One impersonated Michal Šimečka, and the other the president, Zuzana Čaputová. These did include disclaimers that the voices were fake: "These voices are fictitious and their resemblance to real people is purely coincidental." However, that statement does not appear until 15 seconds into the 20-second video, says Kuchta, in what he felt was an attempt to trick listeners.
The Slovakian election was being watched closely in Poland. “Of course, AI-generated disinformation is something we are very scared of, because it’s very hard to react to it fast,” says Jakub Śliż, president of Polish fact-checking group the Pravda Association. Śliż says he is also worried by the trend in Slovakia for disinformation to be packaged into audio recordings, as opposed to video or images, because voice cloning is so difficult to identify.
Like Hincová Frankovská in Slovakia, Śliż also lacks tools to reliably help him identify what’s been created or manipulated using AI. “Tools that are available, they give you a probability score,” he says. But these tools suffer from a black box problem. He doesn’t know how they decide a post is likely to be fake. “If I have a tool that uses another AI to somehow magically tell me this is 87 percent AI generated, how am I supposed to convey this message to my audience?” he says.
There has not been a lot of AI-generated content circulating in Poland yet, says Śliż. "But people are using the fact that something can be AI generated to discredit real sources." In two weeks, Polish voters will decide whether the ruling conservative Law and Justice party should stay in government for an unprecedented third term. This weekend, a giant crowd gathered in Warsaw in support of the opposition, with the opposition-controlled city government estimating the crowd reached 1 million people at its peak. But on X, formerly known as Twitter, users suggested videos of the march had been doctored using AI to make the crowd look bigger.
Śliż believes this type of content is easy to fact-check by cross-referencing different sources. But if AI-generated audio recordings start circulating in Poland in the final hours before the vote, as they did in Slovakia, that would be much harder. "As a fact-checking organization, we don't have a concrete plan of how to deal with it," he says. "So if something like this happens, it's going to be painful."