In today’s digital age, the unauthorized use of voices, especially through AI deepfake technology, poses significant risks to individuals and brands alike. As AI advances, creating realistic voice replicas has become easier, leading to potential misuse and exploitation. To counteract this, there is growing interest in protecting the unique biometric characteristics of a voice as a trademark. But the question is: can this be done?
The deepfake dilemma
Deepfakes use sophisticated AI algorithms to create realistic replicas of voices and appearances. This technology can be weaponized to produce fake advertisements, misleading information, and even malicious impersonations.
Imagine a deepfake using the voice of James Earl Jones or George Clooney to fraudulently endorse a product. Such misuse can severely damage the reputation of the individuals concerned and mislead the public.
In some instances, the voices of famous people have been used to promote questionable services, such as pornographic websites.
The problem is not confined to artificial intelligence. Real-life voice imitators can just as easily be used to create an advertisement that sounds like it was read by a famous person. Ford, for example, used an impersonation of Bette Midler’s voice in its advertising, and Frito-Lay an imitation of Tom Waits’s unique voice; both faced successful lawsuits as a result.
The rise of AI technology is now making the unauthorized use of famous voices far more commonplace.
Sound as trademark
In most countries, it is possible to register sounds as trademarks. Unlike traditional trademarks that protect logos, names, or slogans, sound trademarks encompass specific audio elements that are uniquely associated with a brand. These auditory trademarks must be distinctive and serve as a source identifier to qualify for trademark protection.
Examples of registered sound trademarks include the Nokia tune, Metro-Goldwyn-Mayer’s lion roar, 20th Century Fox’s fanfare, and McDonald’s “I’m Lovin’ It” jingle.
Sound trademarks can be highly memorable and effective branding elements. Provided that they are distinctive, it is only right and fair that they get the same legal protection as other types of trademarks.
If a sound is very simple, very common, or very long, it will typically not be perceived as a trademark. For example, a “shhh” sound would not be distinctive for carbonated drinks, because that is the sound a bottle makes when it is opened. Jingles, by contrast, are typically considered distinctive. A whole song, however, would not be considered a trademark: it is simply too long to be understood as an indicator of commercial origin.
The limits of conventional sound trademarks
The challenge with conventional sound trademarks is that they protect a particular word, phrase, or other concrete sound. Consider, for example, the slogan “This is CNN”, spoken by James Earl Jones. It could be registered as a sound trademark, and the registration would protect the slogan “This is CNN” spoken in James Earl Jones’s highly distinctive voice.
The protection would not, however, extend to the unique voice of James Earl Jones as such, i.e. the voice itself. If a deepfake replicated James Earl Jones’s voice but used different words or phrases, it would fall outside the protection offered by the sound trademark. This is the challenge of using trademarks to fight deepfakes.
Currently, it is not possible to obtain trademark protection for a type of sound as such, only for concrete manifestations of that sound, such as spoken words or slogans. This is the case even where the voice is highly distinctive, like that of James Earl Jones. In other words, it is not possible to register a trademark covering the voice of James Earl Jones regardless of what is being said.
Other legal tools to fight deepfakes
In many countries, including the United States, courts have also held that a voice per se is not protected by copyright. Although everybody’s voice has a particular sound, it is difficult to see how a voice could meet the threshold of original creation: a person’s voice simply is what it is, not the result of purposeful creative effort.
At least for now, legal tools other than trademarks are better equipped to deal with deepfakes. Here are some that might be available.
The right of publicity. The right of publicity protects individuals against unauthorized commercial use of their name, likeness, and other identifiable aspects of their persona, including their voice. The doctrine is especially relevant for celebrities and public figures, whose voices are easily recognizable and often misused in deepfakes. It is probably the most common ground for tackling deepfakes, at least where celebrities are involved.
Unfair competition and business practices. These laws are designed to prevent companies from engaging in false or misleading advertising and from otherwise acting contrary to honest business practices. The use of deepfakes can fall under these rules. Many countries also have regulatory bodies authorised to take action against companies that fall short of their obligation to trade in accordance with honest business practices.
Defamation and libel laws. These could be relevant especially where a deepfake is used in a sensitive context, such as adult content, criminal activity, or politics. Using deepfakes in such areas can severely harm a person’s reputation. Imagine, for example, an oil company using the voice and image of Greta Thunberg in its advertising, or a left-wing politician’s voice or image being used to spread a right-wing message.
EU’s Artificial Intelligence Act. Finally, the EU is passing legislation regulating the use of artificial intelligence. It defines deepfakes as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful”.
Rather than banning deepfakes, the Act imposes specific transparency obligations on their use: notably, the user must disclose that the content has been artificially created or generated.
Some practices are prohibited under the AI Act, such as those that exploit vulnerabilities of individuals or groups based on factors like age, disability, or socioeconomic situation to distort behaviour and cause harm.
The AI Act also prohibits “the placing on the market, the putting into service or the use of an AI system that deploys… purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm”.
One way that AI can be deemed manipulative is when it is used “to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices”. Deepfakes are often used in exactly this manner, that is, to nudge consumers into buying something.
Considering that the EU is enacting a comprehensive regulatory framework for artificial intelligence, it is strange that the issue of deepfakes has not been spelled out and regulated more clearly. It would have been easy to prohibit the use of deepfakes to create an illusion of, reference to, or association with real persons without their consent.
Towards the future
The future of deepfake regulation remains to be seen. While the EU’s AI Act establishes a framework, it doesn’t go far enough. Expanding trademark protection to encompass unique voices is a possibility, but it is unlikely and poses many difficult questions.
Alternatively, specific deepfake laws could be drafted, requiring disclaimers or banning malicious uses. Continued collaboration between policymakers, legal experts, and technology companies will be crucial in crafting effective solutions to combat the evolving threat of deepfakes.
One clear example of deepfake-specific legislation is the “ELVIS Act” (Ensuring Likeness Voice and Image Security Act), a Tennessee law passed in March 2024. It protects voices from unauthorized commercial use. This is particularly significant in the fight against deepfakes that use a person’s voice without consent for commercial purposes such as advertising. The model could be adopted elsewhere as well.
Conclusion
Currently, the legal framework to deal with deepfakes is lacking and piecemeal at best. As digital technology advances, creating realistic voice replicas and other forms of deepfakes has become easier, leading to significant potential for misuse and exploitation.
While several legal tools and doctrines exist, they still don’t offer comprehensive protection against the unauthorised use of deepfakes.