Key Points:
- X’s new AI image generator, available through its Grok chatbot, has few restrictions on content creation
- Users have generated controversial images like political figures with weapons and explicit celebrity images
- The tool can produce images related to election misinformation, raising concerns ahead of the 2024 US election
- Unlike other major AI image tools, Grok appears to lack many content safeguards
- Regulators in Europe are already investigating X for potential content moderation issues
- The lack of restrictions aligns with Elon Musk’s stance against content moderation but may deter advertisers
X, the social media platform formerly known as Twitter, has launched a new AI image generation feature through its Grok chatbot. The tool, available to paid X Premium subscribers, allows users to create images from text prompts with noticeably fewer restrictions than other major AI image generators.
Since its release, users have generated and shared a wide range of controversial images on the platform. These include depictions of political figures like Donald Trump and Kamala Harris holding firearms, celebrities in explicit scenarios, and copyrighted characters like Mickey Mouse in inappropriate situations.
The tool’s lack of safeguards is raising particular concerns about its potential misuse ahead of the 2024 US presidential election. Users have successfully generated images that appear to show ballot drop boxes being stuffed and security camera footage of supposed election fraud. When asked to depict the current US president, the chatbot has also produced images that resemble former President Donald Trump rather than Joe Biden.
Eddie Perez, a former Twitter employee and current board member at the OSET Institute, expressed alarm at the timing of the release. “Why on earth would somebody roll something out like this? Precisely two and a half months before an incredibly major election?” Perez said. He added that he was “very uncomfortable” with such powerful and seemingly untested technology being made available to the public at such a crucial time.
Unlike other major AI image generators, Grok appears to lack many standard content restrictions. While the chatbot claims to have certain limitations when asked directly, these do not seem to be consistently enforced. Requests for violent, sexual, or potentially misleading content that would be blocked by other services are often fulfilled by Grok.
The tool’s loose restrictions align with X owner Elon Musk’s stance against content moderation. Musk has publicly praised the new feature and seems amused by some of its more controversial outputs, stating that it allows people “to have some fun.”
However, the lack of safeguards may create regulatory challenges for X, particularly in Europe. The European Commission is already investigating the platform for potential violations of the Digital Services Act, which governs content moderation on large online platforms. The Commission has also requested information from X and other companies about how they mitigate AI-related risks.
In the UK, regulator Ofcom is preparing to enforce the Online Safety Act, which includes risk-mitigation requirements that could cover AI-generated content. While the US has broader speech protections, legislators are exploring ways to regulate AI-generated impersonation and disinformation.
The controversial nature of Grok’s image generator may further deter high-profile users and advertisers from the platform, even as Musk attempts to win them back, including through litigation.
Despite these concerns, the images produced by Grok still show signs of artificial generation, such as garbled text and unnatural lighting. The tool also struggles with accurately rendering some faces and complex scenes. However, as AI technology rapidly improves, these telltale signs may become less noticeable over time.
X has not responded to requests for comment on the image generator or its lack of restrictions. As the 2024 election approaches, the potential impact of this powerful but largely unrestricted tool on public discourse and information integrity remains to be seen.