By: Cynthia Mia (CYNERGY)

Here is the continuation of the discussion we started in the first part of this article series. Below are more avenues of protection available to voice artists:
(5) Learn to Draft an Indemnifying or Licensing Clause:
A good indemnity/licensing clause should cover the following points:
(a) State that the license granted by the voice artist is "limited, non-exclusive, non-transferable, and revocable".
(b) Explicitly state the purpose for which it's granted: "for the sole purpose of training an AI voice model".
(c) Indicate the period the license is valid for: "for a period of XYZ months/years".
(d) Restate your retention of your full rights:
“All rights to ‘Ariel’s’ voice, vocal likeness, and identity remain the exclusive property of ‘Ariel’, and may not be reproduced, distributed, or used in any commercial, political, sensitive, or unauthorised context without prior written consent.”
(e) Expressly specify examples of prohibited use:
“This license explicitly prohibits the use of the voice for impersonation, misleading or inappropriate content, and may be revoked at any time should terms be violated. Any AI outputs must not imply ‘Ariel’s’ endorsement, and appropriate attribution shall be maintained.”
(f) Add a provision for compensation and revisions:
“Compensation shall be as agreed, with options for a flat (one-time) fee or (recurrent) royalties, and ‘Ariel’ reserves full rights to audit usage and demand deletion of all related data upon termination.”
(g) Add a provision for dispute resolution (or other miscellaneous concerns):
It is standard legal practice to establish a course of action in the event of any violation, breach, or disagreement: “In the event of a breach of this contract, the parties shall seek recourse at a court of appropriate jurisdiction.” If the parties wish to fully exhaust mediation options before involving the courts, that can also be stipulated.
(6) Learn to Draft an Anti-AI Cloning Clause:
Note that the state of Tennessee, USA, has the ELVIS Act (Ensuring Likeness, Voice, and Image Security Act). Signed into law by Governor Bill Lee on March 21, 2024 (effective July 1, 2024), it is the first state law criminalising unauthorised AI voice cloning of musicians, thereby setting a legal precedent. It also updates Tennessee’s law on the Protection of Personal Rights by explicitly including voice as a protected property right.
So, while we wait for national laws to fully accommodate these needed updates, these anti-cloning clauses can serve as personal advocacy to protect these rights.
SUCH CLAUSES SHOULD CLEARLY STIPULATE:
*That the client expressly agrees that the talent’s voice work, samples or recording files, and vocal characteristics captured pursuant to the agreement SHALL NOT be used, in whole or in part, for training, development, synthesis, simulation, cloning, or replication by AI, machine-learning systems, or any other similar technologies or software without the talent’s prior written and signed consent. It helps to give further examples of prohibited-use scenarios.
*Finally, a statement on breach of this provision, specifying that any breach SHALL constitute a “material violation” of the agreement and may subject the client to legal liability.
(7) Know about Opt-Out Mechanisms:
Platforms like Google and OpenAI now allow some creators to opt out of AI training.
The EU AI Act (whose obligations for general-purpose AI models took effect in August 2025) requires model providers to respect copyright opt-outs for datasets used in commercial AI and to be transparent about their training data.
(8) Voice tracking or Digital Watermarking/Fingerprinting:
Voice tracking is an innovative response whereby a unique watermark is imprinted and hidden within a voice recording. It works similarly to YouTube’s Content ID or watermarking for images. Companies like Audible Magic, Veritone’s Voice-guard, and Respeecher offer such services.
These watermarks or “fingerprints” are embedded into the audio by these systems in a way that doesn’t affect its quality, length, format, or performance. It simply allows the voice to be traceable, despite any alterations.
This technology ensures the “fingerprinted” files can be monitored across the internet, podcasts, apps, games, and even AI training datasets. If a talent’s voice is detected in an unauthorised context, an alert can be triggered.
Advancements in these tracking services also allow suspected samples to be flagged even when the audio has been altered or mixed with other material.
Voice tracking services often offer dashboards or reports showing where, when, and how voice recordings are being used. This helps talents know if their voice is being exploited, and in the long run it helps voice artists build a case for any legal claim, with evidence of such violations.
With the rise of generative AI, voice tracking is becoming a useful layer of protection, and a watch-guard against exploitation.
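To make the watermarking idea concrete, here is a minimal sketch of the spread-spectrum principle that many audio watermarking systems build on: a secret key generates a pseudo-random noise sequence that is added to the recording at low amplitude, and later detected by correlation. This is purely illustrative (the commercial systems named above use proprietary, far more robust methods), and the embedding strength here is exaggerated for clarity rather than tuned for inaudibility:

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude pseudo-random noise sequence derived from a secret key."""
    rng = np.random.default_rng(key)
    return audio + strength * rng.standard_normal(audio.size)

def detect_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> bool:
    """Correlate the audio against the keyed noise sequence.

    Marked audio scores near `strength`; unmarked audio scores near zero.
    """
    rng = np.random.default_rng(key)
    noise = rng.standard_normal(audio.size)
    score = float(np.dot(audio, noise)) / audio.size
    return score > strength / 2

# Ten seconds of a 220 Hz tone at 16 kHz stands in for a voice recording.
sr = 16_000
t = np.arange(10 * sr) / sr
voice = 0.5 * np.sin(2 * np.pi * 220.0 * t)

marked = embed_watermark(voice, key=42)
print(detect_watermark(marked, key=42))   # True: watermark detected
print(detect_watermark(voice, key=42))    # False: clean recording
```

Because only the holder of the key can regenerate the noise sequence, a third party cannot reliably locate or strip the mark, which is what makes this family of techniques resilient to re-encoding and mixing.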
(9) Use of Verification Codes or Avoiding Over-sharing:
As the aforementioned “vishing” scams increase in scale, experts advise using verification codes, avoiding oversharing unnecessary audio online, and reporting incidents to the appropriate bodies.
(10) Union Protections:
It would be difficult for one person to stand against whole companies, so the voiceover industry needs to band together and form “companies” of its own, aka unions.
Nigeria and Africa are rallying in this regard, with the establishment of bodies like the Association of Voice over Artists (AVOA, Nigeria) and the Association of African Podcasters and Voice Artists (APVA).
The power of unionisation becomes evident in the example of the American actors’ union SAG-AFTRA, which on the 14th of July ended its year-long strike after achieving AI protections for its members.
This SAG-AFTRA strike by video-game voice actors resulted in the new 2025 Interactive Media Agreement, which yielded the following fruits:
*An immediate +15.17% wage increase, plus 3% annual raises through 2027;
*Enhanced health and retirement benefits;
*Groundbreaking AI protections: AI usage now requires informed consent and can be revoked during strikes.
In conclusion:
Understandably, some of these solutions will be easier to implement than others, but it’s important to take comfort in the fact that several protections exist. There are actions every voice talent can take, legally and personally, to preserve the integrity of their rights. Knowing there is an arsenal at one’s disposal should be empowering, leading to liberation from “Ursula’s” grasp/tentacles.
AI (much like Ursula) is an amalgamation of human and non-human parts, so it requires extra vigilance, adaptation, and firmly set boundaries.
Reading the terms and selectively signing licensing agreements (ones that explicitly define how AI corporations intend to use your work) are key takeaways.
Hopefully, this article helps to combat some of the fears around AI’s impact on the voice industry.
Follow @voiceverseng on Instagram and across other social platforms for news and updates in the voiceover industry.