Elon Musk's AI video generation tool, Grok Imagine, is facing backlash after allegedly creating explicit deepfake videos of pop star Taylor Swift without being prompted to do so. Clare McGlynn, a law professor and advocate for anti-deepfake legislation, says this reflects a troubling trend of misogyny built into AI systems. "This is not misogyny by accident; it is by design," McGlynn explained, emphasizing that the tech industry bears responsibility for these outcomes.
The Verge reported that Grok Imagine's "spicy" mode produced explicit content unprompted when tested, and highlighted the tool's inadequate age verification measures. McGlynn criticized xAI's policies, arguing that stricter safeguards could have prevented such incidents. This is not the first time Swift's image has been exploited; explicit deepfakes depicting her circulated widely in early 2024.
In a test of Grok Imagine, a Verge journalist found that the AI abruptly generated explicit animations after she selected a seemingly innocuous prompt depicting Taylor Swift celebrating at Coachella, leaving her shocked at the uncensored results. Grok's acceptable use policy explicitly prohibits pornographic depictions of real people, raising questions about how those rules are enforced.
New UK regulations require platforms that display explicit material to verify users' ages reliably, yet the journalistic tests of Grok Imagine encountered no age checks at all. Prof. McGlynn supports legislation banning all non-consensual deepfakes, arguing that women must have the right to control images of themselves.
Baroness Owen added that timely government implementation of anti-deepfake laws is crucial to protecting women's right to consent. Following earlier incidents exploiting Swift's likeness, platforms such as X took action by blocking searches for her name in an effort to curb the spread of non-consensual explicit content.
The Swift deepfake controversy has triggered renewed discussions about AI safety and ethical content generation, amplifying calls for more robust policies to safeguard individuals against unwanted digital exploitation.