Author: Sue Roberts
(The Conversation)
The UK government plans to crack down on explicit deepfakes, in which images or videos of people are blended with pornographic material using artificial intelligence (AI) to make them look like authentic content. While it is already an offence to share this kind of material, it's not illegal to create it.
Where children are concerned, however, most of the changes being proposed don't apply. It's already an offence to create explicit deepfakes of under-18s, courtesy of the Coroners and Justice Act 2009, which anticipated the way technology has progressed by outlawing computer-generated indecent images of children.
Hugh Nelson (photo: Greater Manchester Police).
This was confirmed in a landmark case in October 2024, in which Bolton-based student Hugh Nelson was jailed for 18 years for creating and sharing such deepfakes for customers who supplied him with the original innocent images.
The same law could almost certainly also be used to prosecute someone using AI to generate paedophilic images without drawing on images of “real” children at all. Such images can increase the risk of offenders progressing to sexually abusing children. Nelson himself admitted to encouraging his customers to abuse the children in the photographs they had sent him.
Having said all this, it's still a struggle to keep up with the ways in which advances in technology are being used to facilitate child abuse, both in terms of the law and the practicalities of upholding it. A 2024 report by the Internet Watch Foundation, a UK-based charity focused on this area, found that people are creating explicit AI child images at a “frightening rate”.
Legal problems
The government's plans will close one loophole around images of children that featured in the Nelson case: those who obtain AI tools with the intention of creating depraved images will automatically be committing an offence – even if they don't go on to create or share such images.
Beyond this, however, the technology still creates lots of challenges for the law. For one thing, such images or videos can be copied and shared many times over. Many of these can never be deleted, particularly if they are outside UK jurisdiction. The children involved in a case like Nelson's will grow up and the images will still be in the digital world, ready to be shared again and again.
This speaks to the challenges involved in legislating for a technology that crosses borders. Making the creation of such images illegal is one thing, but the UK authorities can't track and prosecute everywhere. They can only hope to do that in partnership with other countries. Reciprocal arrangements do exist, but the government clearly needs to be doing everything it can to extend them.
Meanwhile, it's not illegal for software companies to train an algorithm to produce child deepfakes in the first place, and perpetrators can hide where they are based by using proxy servers or third-party software. The government could certainly consider legislating against software providers, even if the international dimension again makes these things more difficult.
Then there are the online platforms. The Online Safety Act 2023 placed the responsibility for curbing harmful content on their shoulders, which arguably gives them more power than is wise.
In fairness, Ofcom, the communications industry regulator, is talking tough. It has given the platforms until March 2025 to carry out risk assessments or face penalties of as much as 10% of revenues. Some campaigners fear this won't lead to harmful material being removed, but time will tell. Certainly, arguing that the internet is ungovernable and that AI evolves faster than regulation can keep up will not suffice when the UK government has a legal responsibility to protect vulnerable people such as children.
Beyond legislation
Another issue is the lack of understanding of AI and its applications – and the fear surrounding them – among people in the public sector. I see this through being in regular contact with numerous senior policymakers and police officers in my teaching and research. Many don't really understand the threats posed by deepfakes, or even the digital footprint these images can have.
This chimes with a March 2024 report by the National Audit Office, which suggested that the British public sector is largely not equipped to respond to, or use, AI in the delivery of public services. The report found that 70% of staff didn't have the necessary skills to handle these issues – a gap the government needs to tackle by educating staff.
UK policymakers need to be more technologically savvy (photo: Shutterstock).
Decision-makers in government also tend to reflect an older demographic. Though even younger people can be poorly informed, part of the solution has to be ensuring age diversity in the pool of people shaping policies around AI and deepfakes.
Finally, there is the issue of police resourcing. My police contacts tell me how hard it is to stay on top of the latest shifts in technology in this area, not to mention the international dimension. It's difficult at a time when public funding is under such pressure, but the government has to look at increasing resources in this area.
It is vital that the future of AI-assisted imagery is not allowed to take precedence over child protection. Unless the UK tackles its legislative gaps and the skills shortage in the public sector, there will be more Hugh Nelsons. The speed of technological change and the international nature of these problems make them especially difficult, but much more can still be done to help.