What Can Players And Clubs Do About 'AI Slop'?



By Dale Johnson


Football news reporter


2 March 2026




You do not have to look far on social media to discover images and videos of footballers in unlikely or bizarre scenarios.


Scroll through TikTok and you may quickly stumble across Lionel Messi and Cristiano Ronaldo cutting each other's hair, or boarding the Titanic in Edwardian dress. You may even see Kylian Mbappe on a ski-lift with a turtle.


This is the result of the rapid growth of artificial intelligence (AI). Or, more accurately, AI 'slop'.


AI can be asked to produce pretty much anything. By anybody. The tools are becoming ever more sophisticated and easily accessible.


It will only become harder to identify what is genuine and what is, in AI terms, a deepfake.


It may seem, for the most part, like harmless fun. After all, who really thinks Messi and Ronaldo have been serving hamburgers?


But is there a point at which players and clubs will try to draw a line?


Options are limited for players to act


As football has become a commercial juggernaut, players and clubs have had to learn how to protect their brands.


That might mean safeguarding the club crest, or challenging the use of a player's name in unauthorised marketing material.


Take Chelsea midfielder Cole Palmer, who has trademarked the term 'Cold Palmer' with the UK government's Intellectual Property Office. The 23-year-old did the same with his name, autograph and signature 'shivering' celebration.


Creating protections is one thing. Being able to tackle this new AI world of relentless content is another.


In the UK there is minimal legislation covering someone's likeness. Or, as it is called in football, image rights.


Jonty Cowan, legal director at law firm Wiggin LLP, told BBC Sport that AI was presenting "lots of novel challenges".


"Various governments around the world are trying to work out ... how do we respond to AI?" said Cowan.


AI is being used to put players into real-life situations, as well as those more obviously fake.


Take the unveilings of Antoine Semenyo and Marc Guehi by Manchester City in January.


The club's official photos show each player with director of football Hugo Viana. Yet before those pictures had even been taken, you could find AI images of Semenyo and Guehi signing a contract alongside manager Pep Guardiola.


There was another of Semenyo being welcomed at the training centre by former player Yaya Toure, whose old squad number - 42 - he was expected to take.


None of these events took place, but it was hard to tell the images were fake.


Last month, an image appeared of Manchester United head coach Michael Carrick with Frank Ilett - the supporter who will not cut his hair until the Red Devils win five games in a row.


Once again, it did not happen but looks so realistic.


And Cowan said it was difficult for there to be any recourse when content is presented "in a non-contentious manner".


Unless a person has suffered commercial or reputational damage, options are limited.


"It's always been quite difficult for an individual to enforce IP rights," Cowan said. "If it is a deepfake that is showing them in a compromising position, let's say, that's different."


The Data (Use and Access) Act came into force last month, making it a criminal offence to create, share or request a sexually explicit deepfake.


But then you have AI-generated videos such as Celtic's Luke McCowan punching an assistant referee. Could it harm his reputation, or is it simply not believable?


A more important concern for players may be 'passing off'. This is where someone unfairly associates their own products and services with the reputation and goodwill of an established brand or business - or player.


It is intended to mislead consumers into believing they are connected to it - to the detriment of the established brand.


Cowan explained that in December 2024, as part of an AI-related consultation, the UK government said it was considering "introducing some form of personality right".


That would give a player more scope to take action.


Clubs, for their part, have a few more options open to them.


Social media accounts putting players in the shirts of their new team - or any team - is nothing new.


But what if a club wanted to object?


"Where you've got, for example, the Man City kit, they could look at other IP rights," Cowan said.


"Have they infringed the trademark in their crest? Or design rights in their shirt? For that sort of image, that's what a club or an individual would likely be looking at."


BBC Sport understands City believe fans know official channels remain the only places to go for any genuine news, images or videos.


But as AI develops further, will clubs keep that stance?


Tackling platforms more realistic than court action


While clubs and players may consider taking the creators of AI images to court, it is a long and expensive battle.


Cowan says there is a quicker and cheaper route: challenge the platforms directly.


"The Online Safety Act has been introduced in the UK recently, and that is putting an obligation on platforms to tackle illegal content," he added.


"It may well be that we will see more mechanisms that platforms will introduce to have that content removed. Often, that is the easiest and quickest way to tackle these images."


This could lead to a growth in companies looking after the digital rights of clubs and players.


Those that already exist scrape websites and apps - using AI, of course - to identify where a company's intellectual property or a person's image may have been used.


They can request takedowns, effectively tackling the use of AI without the affected parties getting directly involved.


Bad actors may use AI for nefarious means


AI presents opportunities as well as problems. Adverts and marketing material can be created without players even needing to leave their homes.


But alongside the legitimate AI-generated adverts, it is easy for unauthorised parties to take a player's likeness and use it to promote their business.


Last year the oversight board that runs Meta's appeals process banned an advert for a betting app on Facebook that was created using AI.


It featured a manipulated video of former Brazil striker Ronaldo which mimicked his voice. It was not picked up by Meta's automated detection tools.


Meta was told to create "easily recognisable indicators that distinguish AI content" to prevent "significant amounts of scam content".


It was a prime example of a platform being challenged and forced to act.


The Football Association has had to deal with controversy, too.


England head coach Gareth Southgate was targeted during Euro 2024. Fake AI-generated interviews showed Southgate making negative remarks about his players.


The videos were reported and removed. They were found to have breached TikTok's AI-generated content policy, which prohibits content that "falsely shows public figures in particular contexts".


But by that point, the videos had been seen and shared by millions of people.


Should users be forced to declare they have used AI?


Scrolling through apps today, it is rare for anybody to disclose that AI has been used.


That is despite TikTok's community guidelines asking users to "label realistic AI-generated content" and prohibiting content deemed to "harmfully mislead or impersonate others".


Cowan thinks there is unlikely to be any significant change to legislation, but platforms could face tougher rules.


"There are transparency requirements under the EU AI Act," Cowan explained, though the act does not cover the UK.


"Under advertising regulations, influencers have to disclose where a video they produce has been sponsored.


"I suspect we may end up with similar transparency requirements. A little '#AI generated' or similar label in the corner."


The issue will be whether creators care, and how easy enforcement is for platforms.


Cowan added: "If you've got those egregious videos, where someone's putting out a hideous deepfake, they're not going to worry about adding that label."


For now, at least, it seems clubs are not too concerned - that AI is just something happening on social media.


There may come a point when they decide more action is required.