When I was asked about the impact that artificial intelligence (AI) is having on influencer marketing, my first thought was to talk to the people involved. Since I don't know any influencers personally, I decided to turn to an AI chatbot, writes Steph Innes.
“Please write a short article on the legal issues surrounding the use of AI in influencer marketing…” I ventured, hoping that my manners would improve my results.
First, the response pointed out that AI is emerging as a “transformative force in the dynamic world of influencer marketing,” which is fair enough and so far so good. From my own reading, the content creator economy is estimated to be worth US$21 billion.
This was followed by an overview of various areas of law, including consumer protection, intellectual property, data protection, and discrimination. All very relevant and correct. But rather than restating it all here, what are the key points?
In fact, AI is disrupting the influencer space through the creation of hyper-realistic “virtual influencers.” One example is Lil Miquela, who was active for two years before it was revealed that “she” wasn't actually human; her associated Instagram account currently has over 2.6 million followers. Similarly, an analysis of an H&M Instagram ad featuring the virtual influencer Kuri found that it reached 11 times more people than a traditional ad, at costs that were 91% lower. The appeal is obvious: greater exposure at a lower cost sounds good. But can the lack of transparency around the use of AI in this field continue? And what are the legal implications?
There are many uses for AI in influencer marketing, from dynamic ad targeting, which uses AI to ensure ads reach the most relevant audience, to content creation itself, which may be supported or directed by humans to varying degrees (or not at all). The legal issues that arise will vary accordingly.
The UK's Advertising Standards Authority has so far resisted introducing AI-specific rules. Its CAP Code is media neutral, focusing on the impact of advertising rather than how it is created or distributed. So what does the use of AI mean in this area?
Some stakeholders want advertisers to make commitments on ethics and transparency, while others are calling for legislation that would require AI-generated influencer content to be watermarked, making it clear that what you are seeing is not a real person. Such measures have already been taken in some jurisdictions, with rules in India requiring virtual influencers to clearly disclose their virtual nature.
In the UK, even in the absence of AI-specific rules, there are obvious legal and commercial benefits to ensuring that advertising is transparent and not misleading. Advertisers remain responsible for their advertising and cannot hide behind cries of “AI did it and we don't know how it works…” The main risks for advertisers using AI in the virtual influencer space are:
- Misleading advertising breaches the CAP Code, even though the Code has not been amended to refer specifically to AI.
- AI can perpetuate bias and discrimination. AI trained on limited datasets may associate certain characteristics with certain groups of people, resulting in discriminatory content.
- If AI-generated content includes third-party trademarks or copyrighted works, there is a risk of intellectual property infringement.
- AI-generated ads can also “infringe” publicity rights where images of real individuals are used. This is not a standalone intellectual property right per se, but rests on a combination of copyright, privacy, trademark, confidentiality, data protection, and defamation law.
Advertisers are responsible regardless of the extent of their use of AI or their understanding of how it works. Breaching the existing rules could result in reputational damage to advertisers, among other sanctions. The landscape will change with the expected passage of the Digital Markets, Competition and Consumers Bill, which will give the UK Competition and Markets Authority greater enforcement powers.
So, essentially, although there is currently no clear obligation to watermark AI-generated content in the UK, advertisers should still consider how to ensure their ads are not misleading. For some, that may effectively amount to the same thing. Consideration should also be given to how well the AI products used in this field are understood and, therefore, how potential risks such as intellectual property infringement and discrimination can be mitigated.
Steph Innes is a partner at Dentons