AI Voice Cloning & Artist Remedies

I came across this Instagram reel made by the producer @itsyoungcheetah. In it, he got AI to make him a topline. He made the beat (he's a producer after all), wrote the initial lyrics for the chorus, and recorded himself performing it. However, because he is not an artist in the vocal sense, he got AI to replace his voice with that of a female artist and to fill in the verses. The results were pretty extraordinary.

So what might the implications of this be? 

Believe it or not, there are already fully AI-generated artists gaining traction on major streaming platforms, like the psych-rock band The Velvet Sundown. The band caused major controversy a few months ago: it had built a huge Spotify following and released two albums before anyone realized it was AI-generated.

It seems that producers no longer really have to collaborate with artists to get their tracks out. If they prefer anonymity, they can continue to create in the background, and AI makes it easier and more accessible than ever to bring their musical visions to life without relying on artists. I am by no means saying that the producer-artist relationship is dead; it's far too early to draw a conclusion like that. However, it is easy to see the attraction of opting for AI on grounds of efficiency and cost-effectiveness, especially for a producer who is just starting out. Nevertheless, that is perhaps a topic to dive into another day.

What I really found interesting about @itsyoungcheetah's video is that he specifically asked the AI to use a voice that sounded like the artist Annatoria. Did the AI succeed in making a perfect impression? Perhaps not. But the technology is getting better by the day. It would not be surprising if, in the years ahead, publicly available AI programs gained the ability to perfectly emulate the voice of any artist.

It's important to note that @itsyoungcheetah didn't have exploitative motives in making this reel. Rather, he had written a song for Annatoria and used AI as a tool to create a topline he could pitch to her, in the hope that they could collaborate on the song one day. AI is a powerful tool: it can help artists who lack resources to create demos and songs they wouldn't otherwise have been able to make. But like any tool, AI can also be dangerous in the wrong hands.

We know that AI tools are generally not allowed to recreate copyrighted materials, so you couldn't use one to generate the song "In the Room" ft. Annatoria, for instance. But what if you simply used an AI-generated emulation of an artist's voice to perform a song you created with a simple prompt, so that you could make money off their signature sound? A voice is not typically copyrightable. What rights would the artist have over the use of their voice if this happened?

The scenario raises issues similar to those in the 1988 US appellate court case Midler v. Ford Motor Co. Ford had wanted to use the voices of some nostalgic 70s artists in a commercial. When one such artist, Bette Midler, refused to participate, Ford hired an impersonator and went ahead with the project regardless. Midler sued, arguing that her voice was a unique part of her person and that by impersonating her, Ford had committed an infringement. When she lost at trial, she appealed, and the appellate court held the following:

“A voice is as distinctive and personal as a face. The human voice is one of the most palpable ways identity is manifested. We are all aware that a friend is at once known by a few words on the phone… A fortiori, these observations hold true of singing, especially singing by a singer of renown. The singer manifests herself in the song. To impersonate her voice is to pirate her identity… We need not and do not go so far as to hold that every imitation of a voice to advertise merchandise is actionable. We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California.” (see full case here)

Would this case have had similar outcomes in Canada? According to this McMillan article on the “Legal Considerations in Canada related to ‘Voice Cloning’”, the answer is likely yes. 

In 2024, a song called NostalgIA went viral as a supposed collaboration between Justin Bieber, Bad Bunny and Daddy Yankee. It turned out, however, that the song had been made using AI and that the artists were in no way involved. Their voices had essentially been cloned, just like in my hypothetical above.

The article mentions three potential causes of action in this scenario had the events taken place in Canada: 

  • Violation of Privacy (under the applicable provincial Privacy Act)
  • The Tort of Appropriation of Personality
  • The Common Law Tort of False Light

As Professor Festinger has told us in class, personality rights in Canada are typically found in the realm of privacy law rather than intellectual property law, and this only confirms it. While these causes of action may provide reasonable recourse for artists with the means to pursue litigation, I would imagine that smaller artists remain at great risk of having their voices exploited.

Disclaimer: The image above was generated using ChatGPT.

One response to “AI Voice Cloning & Artist Remedies”

  1. mlongman

Fascinating article Elsie! I was not expecting the song by @itsyoungcheetah to be so catchy. Since this post was written there has also been a new AI vocalist controversy with "Solomon Ray", a fully AI-generated singer-songwriter who recently became the #1 iTunes artist for Christian music (source: https://www.christianitytoday.com/2025/11/solomon-ray-ai-christian-music-soul-singer/ ).

    I wonder to what degree a real artist working in the same genre would be able to claim any remedies if the AI voice samples used to create the new “singer” were plainly and obviously their own.