
An Election Year with GenAI

[Image: The White House overlaid with line graphics representing AI]

The impact of generative AI on the 2024 election has people worried. There is no way to tiptoe around that fact. 

Barely a month into the year, we began to see how the technology could be abused for political purposes. In January, an AI-generated robocall imitating President Biden urged New Hampshire Democrats not to vote in the state's primary. Who created it remains a mystery. 

From AI-generated robocalls to deepfake images and videos, the impact of generative AI on the election will undeniably be one for the books. 

So, what's being done? And until industry-wide standards arrive, how can we, the everyday users, tell whether something is genuine or AI-generated content being used to manipulate us? 

Regulation, Regulation, Regulation

We can’t emphasize enough the importance of establishing guardrails and regulations around generative AI to help mitigate any potential misuse. 

At the end of October 2023, the Biden administration issued an Executive Order outlining its plans for AI regulation, but that simply laid the groundwork for a process that can take a year or more. Nevertheless, it was a start. 

Since then, a bipartisan group of senators has been drafting additional AI regulations, and the Federal Election Commission is looking at potential amendments to existing rules to prohibit the deceptive use of generative AI within campaign ads. 

But the question remains: Is it enough? Or, more important, is it happening fast enough? 

Transparency around AI-generated content is crucial, but while the technology exists to identify AI-generated text with high accuracy, it is still being fine-tuned when it comes to identifying images, audio, and video. Considering the aforementioned AI-generated Biden call and the recent deepfake images of Taylor Swift, technology that can recognize audio, video, and images with high accuracy is urgently needed. 

Fortunately, major tech companies, including OpenAI, the creator of ChatGPT, have begun rolling out their own rules to prevent the misuse of AI technology for political purposes. Meta, the owner of Facebook and Instagram, announced on February 6 that it will add labels to AI-generated images created with third-party tools, such as those from OpenAI and Midjourney. 

Yet even with these measures, significant loopholes show how difficult it is to fully regulate quickly evolving AI technology. For example, while Meta's announcement is a crucial step, for now it still relies on users who upload AI-generated images, video, or audio to label the content themselves or face penalties. Industry standards are being developed to automate that labeling rather than relying on users, but they remain a work in progress. Until then, chances are that by the time Meta discovers AI-generated content that hasn't been properly labeled, it will have already spread to millions of users. 

All of this means that, even as regulation rolls out across governments and companies, it will still be up to everyday users to know whether the content they are taking in is AI-generated. 

But how?

Staying Aware of Generative AI

Again, transparency around AI is crucial. Still, while those rules and regulations get sorted out, individuals will need to make an effort to stay aware of what is AI-generated and what isn't. 

First, the good news: most leading AI companies have begun embedding invisible watermarks in the content their models create, and there are tools on the market that can identify those watermarks, such as Google's SynthID. 
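
To make the watermarking idea concrete, here is a toy sketch of how a statistical text watermark can work. To be clear, this is not Google's actual SynthID algorithm; it is a simplified illustration in the spirit of published schemes (such as Kirchenbauer et al., 2023), and the vocabulary, secret key, and scoring below are all invented for the demo. A secret key nudges word choices toward a "green list," and a detector holding the same key tests whether green words appear more often than chance would predict.

```python
# Toy statistical text watermark -- NOT SynthID itself, just the core idea.
# A keyed hash splits the vocabulary into "green" and "red" words at each
# step; the generator leans green, and a detector holding the same key
# measures whether green words are over-represented.

import hashlib
import math
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under",
         "mat", "rug", "big", "small", "red", "blue", "fast", "slow"]
SECRET_KEY = "demo-key"  # shared by generator and detector (invented for this demo)

def is_green(prev_word: str, word: str) -> bool:
    """Hash (key, previous word, candidate) to split the vocabulary roughly
    in half; without the key, the split looks random."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def generate(n_words: int, watermark: bool, rng: random.Random) -> list[str]:
    """Pick words uniformly; when watermarking, allow one re-draw toward green."""
    words, prev = [], "<start>"
    for _ in range(n_words):
        choice = rng.choice(VOCAB)
        if watermark and not is_green(prev, choice):
            choice = rng.choice(VOCAB)  # soft bias, so the text stays natural
        words.append(choice)
        prev = choice
    return words

def detect(words: list[str]) -> float:
    """z-score for 'more green words than the 50% chance rate would predict'."""
    greens = sum(is_green(p, w) for p, w in zip(["<start>"] + words, words))
    n = len(words)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

rng = random.Random(42)
print(f"unwatermarked z-score: {detect(generate(200, False, rng)):+.2f}")  # near 0
print(f"watermarked z-score:   {detect(generate(200, True, rng)):+.2f}")   # clearly positive
```

Because the bias is statistical, no single word gives the watermark away; it only becomes visible when the key holder scores enough text at once. That is also why such watermarks can survive light editing but fade if the text is heavily rewritten.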

Furthermore, while the technology to recognize AI-generated audio, video, and images is still being fine-tuned, the tools available for AI-generated text, such as the AI Content Detector, can be crucial in helping provide transparency. These tools can offer a safeguard to inform you if the article you’re reading on a news site or a social media post was written by a human or is AI-generated. Considering how fast misinformation spread via social media during the last election, guardrails such as these are vital for staying aware of the presence of AI. 
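
For anyone who wants to build that kind of check into their own reading or publishing workflow, the sketch below shows the general shape of calling a text-detection service. The endpoint URL, authentication scheme, and response field here are hypothetical placeholders, not the actual API of the AI Content Detector mentioned above; consult your detector's documentation for the real details.

```python
# Minimal sketch of querying an AI-text-detection service.
# Everything service-specific below (URL, auth header, response field)
# is a hypothetical placeholder, not a real API.

import requests

DETECTOR_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "your-api-key"  # hypothetical credential

def ai_probability(text: str) -> float:
    """Send text to the (hypothetical) detector and return the probability
    that it is AI-generated, as a value between 0 and 1."""
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["ai_probability"]  # hypothetical response field

article = "Paste the article or social post you want to check here."
score = ai_probability(article)
if score > 0.8:
    print(f"Likely AI-generated ({score:.0%}). Verify before sharing.")
else:
    print(f"Likely human-written ({score:.0%}). Stay skeptical anyway.")
```

Whatever service you use, treat the score as one signal among several rather than a verdict; detectors have error rates, which is why the cross-checking habits described below still matter.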

Nevertheless, even if the technology existed to recognize every form of AI-generated content, there would still be a need for all of us to remain skeptical. Applying critical thinking to anything we read, hear, or see that states specific facts or makes declarative statements can go a long way. Verifying anything you read online against a reliable source (or two, for that matter) should be a standard step in our day-to-day interactions, especially now that AI-generated content is rapidly filling the online space. 

It's easy to get swept up in the idea that AI is everywhere, even scary, and yes, sometimes it can feel like we're in an episode of Black Mirror or The Twilight Zone. But it's essential to keep those fears in check. In the end, AI manipulation is only as powerful as we allow it to be. All of us must take some responsibility for the rapid spread of misinformation and empower ourselves. Instead of reacting immediately and taking headlines, news stories, and social posts at face value, we need to pause and look deeper into what is being claimed. Ultimately, as scary as AI might seem, with a little added effort we can all stay aware and make informed decisions based on accurate, human-verified information. 

Ironically, all it takes is a bit of old-school human brain power.
