ChatGPT and other generative artificial intelligence tools have flooded college campuses in recent months, challenging pedagogical practices and raising ethical concerns over the role of AI in higher education.
For students, papers that previously took days or weeks to write can now be generated with a few keystrokes. The University of Wisconsin administration has responded to these technological developments with a set of guidelines for AI usage meant to increase transparency and ensure academic integrity. In classrooms, professors have adopted widely varying approaches: some allow AI tools with proper citation, while others prohibit them entirely.
Generative AI, however, has more sinister potential beyond freshman plagiarism in English 100. In the realm of local, state and federal elections, the use of AI in political communication remains something of a wild west, with very little regulation, as characterized by Bloomberg.
A bipartisan effort in the state legislature now seeks to establish a regulatory framework governing generative AI in political advertisements ahead of this fall's elections.
In mid-February, state Republicans and Democrats crossed party lines to pass a bill that would require disclaimers on campaign ads that use AI-generated content. The high level of cooperation signals a shared understanding among politicians of both parties that AI represents a fundamental threat to honest and fair political campaigning.
As the 2024 election cycle heats up, AI has already begun to shape campaigns and appear in political advertisements. Last summer, the since-suspended campaign of Republican presidential candidate Ron DeSantis used an AI-generated version of Donald Trump's voice in a television ad.
Beyond voice generation, AI can also be used to create deepfakes, highly convincing yet fabricated images and videos, according to the University of Virginia. In the political realm, these deepfakes often carry malicious intent, aiming to suppress voters or spread disinformation, as described by the University of New South Wales.
Many of the same issues present in traditional, non-AI political ads are now being exacerbated and exploited with the help of AI technologies. Federal courts have interpreted the First Amendment as sharply limiting the regulation of political speech, making it difficult to restrict content even when it is verifiably false. With AI, the stakes are now much higher.
While the bill passed by the Wisconsin State Legislature was sorely needed, it stops at simple content disclaimers and does not go far enough in regulating AI usage in political communications.
Standards concerning veracity, timeliness and transparency are noticeably absent from the new legislation.
Political ads in Wisconsin can now legally broadcast AI-generated disinformation, albeit with the addition of a content warning. With an inundation of disinformation and deepfakes, whether acknowledged as AI-generated or not, it seems doubtful disclaimers can prevent AI from substantially influencing elections.
These dangers came to fruition in Slovakia's parliamentary election last year. Two days before voters went to the polls, an AI-generated audio recording cast one of the leading candidates in a highly controversial light. Analysts quickly labeled the audio a fake, but the widely circulated recording is still believed to have had a substantial impact on the election results, according to Brookings.
Without regulations addressing fabricated disinformation or limits on how close to an election such content can run, Wisconsin's elections face similarly high-stakes risks.
Nearly three-fourths of all U.S. adults report hearing little or no information about AI, according to a 2023 poll from Pew Research Center. Older populations, especially, are less informed.
Given voters' limited exposure to and education about AI, further legislative overhaul becomes a necessity.
Current campaign laws are rather confusing, and it is not clear how AI fits into existing frameworks. Wisconsin State Statute 12.05, enacted in 1973, prohibits deliberate false representations of candidates made to influence elections.
In practice, these standards are vaguer still. They certainly didn't stop Sen. Ron Johnson (R-WI) from attacking his Democratic opponent, Mandela Barnes, with verifiably false claims about Barnes' alleged support for "defunding the police." Johnson went on to win the election.
The development of AI technologies now allows these attacks to reach new heights. Rather than lying about political opponents in traditional ads, candidates can deploy subtler, more believable tactics, using AI to fake an opponent's voice or image to sway public opinion.
Elections for state and local governments are even more at risk from deceptive AI-generated campaign ads than elections with national attention, according to the Brennan Center for Justice. Outside of the state's urban areas, local newspapers are in rapid decline. Fewer fact-checkers exist to monitor local campaign races, raising concerns that AI usage will go largely unchecked.
The 2024 elections are quickly approaching, and AI development is outpacing relevant legislation. Requiring content disclaimers for campaign ads featuring AI-generated content is a necessary first step, but laws must become more comprehensive to secure the integrity of local and state elections.
If traditional political ads are messy, a campaign landscape dominated by AI is even more threatening. Broader reforms to campaign laws are in order, and state lawmakers need to act fast.
Jack Rogers ([email protected]) is a sophomore studying Chinese, economics and political science.