TLDR:
- Trump shared AI-generated images falsely suggesting Taylor Swift’s endorsement
- The images included Swift in Uncle Sam attire and fans wearing “Swifties for Trump” shirts
- This incident highlights concerns about AI’s potential to spread misinformation in elections
- Social media platforms struggle to effectively moderate AI-generated content
- Experts warn of increasing challenges as AI technology advances
Former President Donald Trump recently shared a series of images on his Truth Social platform, some of which were generated by artificial intelligence (AI), falsely suggesting an endorsement from pop star Taylor Swift.
The posts, made on Sunday, August 18, 2024, included a doctored image of Swift dressed as Uncle Sam with the caption “Taylor Swift Wants You To Vote For Donald Trump,” as well as AI-generated pictures of fans wearing “Swifties for Trump” t-shirts.
The incident has drawn widespread media attention and criticism, with many outlets highlighting the potential dangers of using AI-generated content in political campaigns. Trump’s campaign spokesman, Steven Cheung, defended the posts, stating, “Swifties for Trump is a massive movement that grows bigger every single day.”
Swift has not endorsed any candidate for the 2024 presidential election. In fact, the singer has been critical of Trump in the past, expressing regret in a 2020 documentary for not speaking out against him during the 2016 election.
The use of AI-generated images in this context has raised significant concerns among experts about the potential for misinformation in the upcoming election.
Emilio Ferrara, a computer science professor at USC Viterbi School of Engineering, warned,
“I’m worried as we move closer to the election, this is going to explode. It’s going to get much worse than it is now.”
Social media platforms have struggled to effectively moderate AI-generated content, despite having rules against manipulated images, audio, and videos.
Platforms like Facebook and X (formerly Twitter) have focused on labeling and fact-checking content rather than removing posts, citing concerns about censoring political speech.
The incident also highlights the broader challenges facing social media companies and lawmakers in addressing the rapid advancement of AI technology. Hany Farid, a UC Berkeley professor focusing on misinformation and digital forensics, noted,
“We have all the problems of the past, all the myths and disagreements and general stupidity, that we’ve been dealing with for 10 years. Now we have it being supercharged with generative AI and we are really, really partisan.”
Legislators are working to address the issue by proposing bills that would require social media companies to take down unauthorized deepfakes. California Governor Gavin Newsom has expressed support for legislation that would make it illegal to alter a person's voice using AI in campaign ads.
The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) is among the groups advocating for laws addressing deepfakes.
Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator, emphasized the potential consequences of such misinformation:
“Especially with elections being decided in many cases by narrow margins and through complex, arcane systems like the electoral college, these deepfake-fueled lies can have devastating real-world consequences.”
As the 2024 presidential race intensifies, both the Trump and Harris campaigns are preparing for AI's potential effects on the election.
The Harris campaign has established an interdepartmental team to address the threat of malicious deepfakes, while the Trump campaign has not responded to requests for comment on their AI strategy.