Taylor Swift applies for audio and image trademarks to prevent the spread of AI impersonation content

MarketWhisper


According to a BBC report on April 28, American pop singer Taylor Swift has filed three trademark applications in the United States, covering audio snippets and a stage image, with the aim of protecting her voice and appearance from AI impersonation. Trademark lawyer Josh Gerben first disclosed the details of the applications on his blog.

Details of the Three Trademark Applications

According to the trademark application documents disclosed by Josh Gerben on his blog, the three applications filed by Taylor Swift are as follows:

Image trademark: Based on a stage photo from the Eras Tour concert; the image shows her holding a pink guitar with a black strap, wearing a multicolored rainbow bodysuit and silver boots. The photo had previously been used as one of the official promotional images for the Disney+ film Taylor Swift: The Eras Tour.

Audio trademark 1: An audio snippet in which Taylor Swift says “Hey, I’m Taylor”

Audio trademark 2: An audio snippet in which Taylor Swift says “Hey, I’m Taylor Swift”

According to the BBC report, the two audio clips were recorded by Taylor Swift last autumn to promote the album The Tortured Poets Department on Spotify and Amazon Music.

Legal Protection Scope: Trademark Lawyer Josh Gerben’s Analysis

According to the BBC report citing Josh Gerben’s analysis, even if the original photos and audio clips are not directly copied, the registered trademarks would still give Taylor Swift broader legal tools to stop AI from using her image and voice.

Josh Gerben said in his blog: “By registering specific phrases related to her voice, Swift can not only challenge copies that are completely identical, but can also challenge impersonations that are ‘confusingly similar,’ which is a key standard in trademark law.”

Josh Gerben further pointed out that, in a lawsuit against AI use of Taylor Swift’s voice, any voice usage that sounds similar to the registered trademark could support a trademark infringement claim. The same logic applies to the image trademark: AI-generated stage images that are similar could trigger federal trademark protection.

Background: Celebrity Trademark Strategy to Counter AI Impersonation

According to the BBC report, AI-generated impersonation content of Taylor Swift has appeared in multiple forms in recent years, including explicit images and fake advertisements falsely claiming that she endorsed specific candidates in an election. The BBC report also pointed out that actor Matthew McConaughey became the first celebrity to use trademark applications to protect his voice and image from AI abuse earlier in 2026. Taylor Swift’s filing is one of the latest examples of celebrities adopting a trademark strategy against AI impersonation, an approach the BBC described as relatively new.

Frequently Asked Questions

What does Taylor Swift’s trademark application cover? Which source disclosed it?

According to the BBC report on April 28, 2026, Taylor Swift filed three trademark applications in the United States, covering one Eras Tour stage image and two audio snippets (“Hey, I’m Taylor” and “Hey, I’m Taylor Swift”); trademark lawyer Josh Gerben first disclosed the application details on his blog.

How do trademark applications legally protect celebrities from AI impersonation?

According to the BBC report citing Josh Gerben’s analysis, beyond protecting against exact copies, registered trademarks can also be used to challenge AI-generated impersonations that are “confusingly similar” under trademark law, giving the trademark holder a basis to assert infringement at the federal level.

Before Taylor Swift, which celebrity adopted the same trademark protection strategy?

According to the BBC report, actor Matthew McConaughey became the first celebrity to use trademark applications to protect his voice and image from AI abuse earlier in 2026.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.

Related Articles

Ant Group's Ling-2.6-flash Model Open-Sourced: 104B Parameters With 7.4B Active, Achieves Multiple SOTA Benchmarks

Gate News message, April 29 — Ant Group's Ling-2.6-flash model weights are now open-sourced, having previously been available only via API. The model features 104 billion total parameters with 7.4 billion activated per inference, a 256K context window, and MIT licensing. BF16, FP8, and INT4

GateNews6m ago

Sam Altman posted screenshots of the Codex dual-mode, with office and programming functions officially split.

OpenAI CEO Sam Altman shared screenshots and a statement on X on April 29 announcing that Codex is rolling out a new guided interface. On first launch, users must choose between two modes: Excelmogging and Codemaxxing. Codex’s weekly active users have already exceeded 4 million, and its use cases have expanded from code generation to non-technical applications.

MarketWhisper18m ago

OpenAI's Codex Rolls Out Dual-Mode Interface: Excelmogging for Office Work, Codemaxxing for Coding

Gate News message, April 29 — OpenAI CEO Sam Altman announced a redesigned Codex interface on X today, introducing two distinct modes for users. "Excelmogging" targets everyday office tasks with a simplified interface and the tagline "Same tools, simpler interface," featuring example tasks like

GateNews1h ago

US media: A White House draft executive order would allow Anthropic Mythos models to enter government

According to an Axios report on April 28 citing an insider, the White House is drafting guidance that would allow federal agencies to waive the Supply Chain Risk Determination (SCRD) for Anthropic and introduce new models for government use, including Anthropic’s Mythos model. In response, the White House issued an official statement saying that any policy statements would be released directly by the President, and that any other claims are purely speculation.

MarketWhisper1h ago

White House Drafts Guidance to Allow Anthropic Use, Waive Supply Chain Risk Determinations

Gate News message, April 29 — The White House is drafting executive guidance that would allow government agencies to waive supply chain risk determinations for Anthropic and introduce new AI models including Mythos, according to sources familiar with the matter. The proposed administrative measure c

GateNews2h ago

GPT-5.4 Pro Solves the 60-Year Erdős Conjecture #1196

According to reports from OpenAI and Scientific American, 23-year-old amateur Liam Price, with help from GPT-5.4 Pro, solved Erdős problem #1196, an open set problem that had gone unsolved for 60 years. The work took about 80 minutes of reasoning and 30 minutes to organize the results into LaTeX before submission to erdosproblems.com for review. The key step links integer structures with Markov processes; Tao and Lichtman have expressed approval. The result is still in the community verification stage and has not yet completed peer review. Details are available from ABMedia and OpenAI’s April 28 podcast.

ChainNewsAbmedia2h ago