By: Cynthia Mia (CYNERGY)
The legal system is a leading source of protection for voice talents and their rights. However, it is important to acknowledge that the system may initially struggle to respond to the redesigned landscape brought about by AI and its automated processes. Countries like Denmark are working to draft legislation to give individuals copyright over their image, voice, and facial features — specifically targeting deepfakes and AI-manipulated content.

So, while the law does protect you, and the courts will try to uphold this protective stance, the fact remains that AI is a new frontier.
It would not be too hyperbolic to say that AI is the new arms race. These AI products, in their varying flavours, are constantly flooding the market. Thus, legislative responses and judicial experience will need time to develop the necessary capabilities to combat the threats they pose.
Speaking of threats, these AI companies sometimes seem to role-play the cruel, multi-tentacled Ursula (from Disney’s The Little Mermaid), deceiving the unsuspecting Ariel out of her voice with sketchy contracts and muddied terms — only to create a clone of her voice by means that seem utterly… magical.
Therefore, as the law adjusts, it is important that voice artists and little “Ariels” everywhere learn other ways to protect their intellectual property and their voices:
HOW TO PROTECT YOUR RIGHTS
1. Acknowledge Your Responsibility
In the ensuing battle with the artificial, the fact remains that AI harnesses materials; in this case, voices from real people. Humans are the source, therefore the supplier. So, voice talents retain some responsibility and power to wield in this fight. Although that power feels limited sometimes, because of the sketchy ways these companies go about harvesting voices, there are steps to take and workarounds to explore.
The voice is a key part of identity. The onus is on the voice artist to recognise that and protect their vocal identity as if it were an ID card or password. And for those who make a career of their voice, there is an added need for responsibility on a professional level.
2. Education and Awareness
It is important to stay informed about AI news, trends, and recent advancements (as well as the tricks employed by AI companies).
Research and exposure to information are essential. The more voice talents know about an issue, the more effective the existing protections will be. It is said that weapons (or shields) are only as effective as their users.
3. Know the Risks
Know the risks and be cautious of voice farming/harvesting and voice phishing, also known as “vishing.” These are current realities.
These companies that harvest and clone voices are in the farming business. Not knowing this would be as unwise as cows living next to an ice-cream shop failing to watch out for their milk.
The U.S. Copyright Office guidance makes it clear that while using copyrighted works for research could qualify as “fair use”, mass or commercial scraping (i.e., data extraction) without permission is prohibited.
Over the years, voice artists have raised major concerns about being exploited by companies that use their recordings to train AI models without informed consent or proper compensation. For example, contracts from platforms like Findaway Voices allowed Apple to use the voices of audiobook narrators for AI training. After some well-deserved backlash, Apple and Findaway halted that practice for union members.
And with regard to voice phishing, alarms have been raised over what are termed “vishing” or “voice phishing” scams — where scammers use AI to replicate voices for fake emergencies, kidnappings, or financial demands. Some of these scams even clone and utilise familiar voices to extort money from friends and unsuspecting relatives, particularly the elderly, who are easier to trick with these AI manipulations.
AI and the exploitation of voices are not just casting a shadow on career prospects; there are also personal and safety risks.
4. Implementable Protective Measures
- Beware of AI training offers
Voice artists are advised to be watchful when offers come in to train AI voice models. These offers are often coated with lucrative price tags, some as high as $5,000 (others as moderate as $250), but in the long run, pocketing those pennies could cost you pounds.
Key lures are: high pay, random invites, low threshold for submissions, generalised casting, and insistent emails.
- If you must sign, check/read the fine print
If a voice artist is considering these offers, they should remember that the devil is often in the details (so the saying goes). It is best to read the terms thoroughly. And should the contract ever include any of the following… run, Ariel!
Here are the terms to flee from:
(i) Oral Agreement: The law is better able to protect a voice artist’s rights if things are more explicitly stated. Oral agreements leave too much room for exploitation.
(ii) In Perpetuity: this means the voice can be used forever, even if the company folds, sells, winds up, or is acquired. The law gives voice artists perpetual rights (rights that extend even beyond their lifetime) — DO NOT sign that away!
Talents must retain knowledge (if not control) over reach, application, and timeframe for usage of their voices.
(iii) Exclusive Rights: this prevents a talent from working with the client’s competitors, or voicing work in a similar field, or even in a similar geographic location. A term like this limits where you can use your voice. However, it is your voice, your right, and the choice should remain yours.
(iv) In any media now known or hereafter devised: this phrase or anything similar is non-specific licensing language. Please, swim away, Ariel.
(v) For all purposes, including commercial and non-commercial: Again, the language casts a wide net, and it also makes it difficult to determine whether the compensation offered is appropriate.
(vi) Irrevocable: NO!!!
(vii) The Absence of Vital Statements, like:
- No information on reach: You should know how far your voice will travel, so that if there is misuse, or you hear your voice in Scotland, you can check whether the contract stipulated that reach.
- No provision of a right to review or revoke: Note that even an AI version of your voice is linked to you, so you should reserve the right to review usage, and to call for revocation in instances of misuse. Ariel’s voice was fraudulently used by Ursula, and was even applied to deceive Eric (her loved one), all because she signed away her rights.
- Unclear compensation or payment time/structure: A promise of compensation is not enough; there must be clarity and specificity of when and how payments will be made.
- No provision for voice attribution and ownership: every talent owns their voice and should be duly credited, even in AI form. An absence of such attribution leaves it to the company to claim ownership of the resulting AI model.
- Inclusion of waivers of moral, personality, or publicity rights: such a waiver weakens your right to object in instances of voice misuse, misrepresentation, or reputational harm.
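For the more technically inclined who review many offers, the red-flag terms above can even be screened for mechanically before a careful human read. The short sketch below is purely illustrative (the phrase list and the `scan_contract` helper are this author's assumptions, not legal advice, and no scanner replaces reading the fine print):

```python
# Illustrative sketch: flag a few of the red-flag phrases from the
# checklist above in a contract's text. Not legal advice.
RED_FLAGS = [
    "in perpetuity",
    "exclusive rights",
    "now known or hereafter devised",
    "all purposes, including commercial and non-commercial",
    "irrevocable",
    "waive",  # catches waivers of moral/personality/publicity rights
]

def scan_contract(text: str) -> list[str]:
    """Return the red-flag phrases found in the contract text."""
    lowered = text.lower()
    return [flag for flag in RED_FLAGS if flag in lowered]

sample = (
    "Talent grants Company irrevocable, exclusive rights to use the "
    "recordings in perpetuity, in any media now known or hereafter devised."
)
print(scan_contract(sample))
# → ['in perpetuity', 'exclusive rights', 'now known or hereafter devised', 'irrevocable']
```

A simple substring match like this will miss creative rewordings, which is exactly why the fine print still needs human eyes.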
This list continues, but not in this upload, and I’m sure you need time to digest this first. There is so much more to discuss, from the inclusion of necessary clauses to the perks of unionising. Swim on over there, Ariel, and check for Numbers 5-10 in the second (and concluding) part of this article. See ya!
Follow @voiceverseng on Instagram and across other socials for more updates from the voiceover world.