Taylor Swift moves to protect voice and image as AI deepfake threat grows worldwide
Taylor Swift is said to have filed trademark applications in the US to protect her likeness and voice. Swift is not alone in exploring this strategy; earlier, influential figures such as Matthew McConaughey and Amitabh Bachchan have also taken legal steps to safeguard their identity from unauthorised use.
by India Today Tech · India Today

In Short
- Taylor Swift has reportedly filed trademark applications in the US
- The filings cover two audio clips and an onstage image
- The move comes as concerns over deepfakes continue to grow
Taylor Swift has reportedly taken legal steps to protect herself from the growing misuse of artificial intelligence. The pop superstar filed trademark applications in the US for two audio clips of her voice and one stage image, a move experts say is aimed at stopping deepfake videos, cloned audio and fake promotions using her identity. The filings were submitted through her company, TAS Rights Management, and cover promotional voice messages linked to her album The Life of a Showgirl along with an image of Swift performing on stage with a pink guitar.
According to trademark attorney Josh Gerben, the applications are designed to create an extra legal shield beyond existing publicity rights. He said AI tools can now generate new content that mimics a celebrity's voice or appearance without directly copying original recordings, creating gaps in traditional copyright protection. If successful, Swift's move could become one of the most-watched celebrity responses to AI misuse.
Taylor Swift case shows a wider AI deepfake problem
But the Taylor Swift case is part of a much bigger global problem. Deepfakes have rapidly moved from internet pranks to a serious concern involving misinformation, scams, harassment and reputational damage. Celebrities, politicians, and ordinary users have all become targets as AI tools make it easier to create realistic fake videos and cloned voices.
India has already faced several high-profile cases. In 2023, actors Rashmika Mandanna, Priyanka Chopra Jonas and Alia Bhatt were among the stars targeted by manipulated videos in which faces or voices were altered. Those incidents triggered strong reactions online and renewed calls for stricter digital safeguards.
The issue has not slowed since then. Just last month, the Centre defended stricter action against online content, saying deepfakes were becoming a growing threat across social media platforms. Union Information and Broadcasting Minister Ashwini Vaishnaw said, "A huge quantity of deepfakes has started pouring into the social media world. It is a new menace and a new threat for the society."
He added that platforms had "almost doubled or tripled" their takedown actions as the volume of harmful AI-generated content increased. According to the minister, addressing deepfakes is important not only for individuals but also for institutions and society at large.
At the same time, the crackdown has sparked controversy. Several accounts on X, along with pages on Facebook and Instagram, were reportedly blocked or restricted under government orders. Opposition parties and digital rights groups have alleged that some actions go beyond tackling fake content and raise concerns around censorship.
The Internet Freedom Foundation has also called for greater transparency, arguing that many takedown orders offer little explanation. It has urged clearer grounds for blocking content and better legal remedies for affected users.