Reflection: Assessing AI-borne Risks to the Integrity of the 2024 US Election
Recently, I had the privilege of participating in a dynamic forum organized by the Digital Forensic Research Lab (DFRLab), “Assessing AI-borne Risks to the Integrity of the 2024 US Election.”
The forum gathered experts across disciplines to examine AI’s role in the upcoming US and global elections. In small groups, we worked through seven GAI-driven scenarios, an exercise that fostered collaborative, proactive, and creative thinking well suited to scenarios spanning diverse sectors, including election administration, platform policies, and efforts to combat disinformation.
I’m now looking forward to the DFRLab publishing their report on AI-borne risks to the integrity of the US election, which will be informed in part by insights from this very forum. In the meantime, here are a few reflections and thought-starters.
During one of our breakout sessions, the topic of authentication gaps in GenAI content came up, particularly in audio formats. Opinions varied on whether fabricated audio clips would have a small or large impact on an election, depending largely on whose voice was being imitated and what message was being conveyed. Regardless, the group quickly agreed that audio-first content is the most challenging to authenticate, especially under time constraints and in a world that relishes a “he said, she said” debate. And once we acknowledged that bombarding people with information is a tactic used to confuse and divert attention, we all agreed that this authentication gap is a larger breeding ground for further disinformation and real-world escalation of tensions than originally thought.
It was then that a question of time and credibility surfaced: should governments and media authenticate content and report on it simultaneously, especially when the content comes in a difficult-to-verify format such as audio? Especially mere weeks or days away from ballot casting? Especially when a political party or its representatives are making definitive statements for or against the authenticity of a clip?
Further, will the public forgive journalists and their outlets for backtracking on reports once a conclusion is finalized? What if the debate over a clip’s authenticity continues for months and overlaps with the conclusion of a vote? It’s no secret that AI safety and the responsible development of AI are hot-button topics right now, leaving people anxious even when the news in question may not amount to a significant development. So, can global citizens be trusted to understand and stay flexible with conflicting information as the facts develop? Does this approach enhance or hinder the credibility of our media, our journalists, and even our political leaders?
At this point in the conversation, we confronted a familiar debate: does discussing what is and isn’t disinformation inadvertently contribute to its spread? Does reporting on a false narrative, even while stating its falseness, contribute to its dissemination, given short attention spans, missing context, or plain indifference? How big an impact is realistic? Could a fabrication swing results, only for us to realize afterward that a vote was based on it? These questions, combined with the speed at which information spreads ahead of a wave of looming global elections, present unprecedented challenges. And while it’s easy to discuss worst-case scenarios, is there a silver bullet?
Short answer: no.
Long answer: the use of large language models (LLMs) for content moderation is a potential game-changer; IBM’s Watson and Jigsaw’s Perspective API are early examples of the kind of tooling involved. Intercepting and slowing down the spread of disinformation before it reaches the masses can mitigate the impact, whether the content is AI-generated or human-generated. And while voluntary codes exist that outline what the private sector needs to disclose, what it can’t build tools for, and so on, should we dive into mandates on what these tools SHOULD do, not just what they SHOULDN’T?
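To make the “intercept before it spreads” idea a bit more concrete, here is a minimal sketch of a pre-publication moderation hook that scores a post with Jigsaw’s Perspective API before it goes live. The API key, the 0.8 threshold, and the review-queue handoff are illustrative assumptions, not a description of how any real platform works, and Perspective scores attributes like toxicity rather than factual accuracy, so this is only a crude stand-in for the interception layer discussed above.

```python
# Minimal sketch: score a post with Jigsaw's Perspective API before publishing it.
# ASSUMPTIONS: API_KEY is a valid Perspective API key, the 0.8 threshold is
# arbitrary, and the "hold for review" path is a hypothetical human-review queue.
from googleapiclient import discovery

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def should_hold(post_text: str, threshold: float = 0.8) -> bool:
    """Return True if the post scores above the threshold and should be slowed
    down for human review instead of being published immediately."""
    request = {
        "comment": {"text": post_text},
        "requestedAttributes": {"TOXICITY": {}},  # Perspective's flagship attribute
    }
    response = client.comments().analyze(body=request).execute()
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

# Usage: gate publication on the score rather than publishing first and correcting later.
if should_hold("Example post text goes here."):
    print("Held for review")  # hand off to a human moderation queue
else:
    print("Published")
```

Even a gate this crude shifts the default from “publish, then correct” to “pause, then publish,” which is the heart of the interception argument.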
The idea of making it mandatory for private entities to demonstrate how their tools combat disinformation, ensure content authentication, and contribute to Trust & Safety might be slightly too idealistic, but it could serve as a starting point. It’s worth pondering whether it’s time for these kinds of tools to become near-mandatory across all organizations in the LLM space.
A special thank you to Meghan Conroy for sharing this opportunity with me and for being just as excited (?) about the upcoming 2025 global elections as I am. We’re in for a ride.