
LCN Blogs


AI-generated images - a silent crisis

Matthew Biggerstaff

18/03/2024

Reading time: three minutes

In recent months, AI-generated imagery has become a huge topic of discussion, alongside the rise of AI bots and other AI technologies. As I covered in my previous post, AI has been making its way into the legal world. While that post looked at how AI is being used by legal professionals and how it may be used in the future, there's also been a rise in individuals using AI image generators to create almost anything they can conjure. This creates a real legal grey area when it comes to potential 'evidence' of a crime or the creation of non-consensual indecent images.

The first memory I have of AI image generation becoming a publicly available tool is from around 2021, when DALL-E was released by OpenAI. The images it created were little more than ludicrous jumbles of colours and textures, which only reflected the prompt you'd entered if you closed your eyes and really used your imagination. However, in 2024 OpenAI shared videos from its latest text-to-video software, Sora, which can create stunningly realistic footage. How far this software has come in such a short time is nothing less than terrifying. The public reaction echoed this, with people voicing concern that the already huge amount of misinformation spread across the internet could now be supported by powerful video-creation software.

A widely publicised example of AI images causing legal problems came in the form of indecent images of Taylor Swift created and shared online. Some extremely graphic indecent images were created using AI image-generation software and were quickly shared all over X. The likeness to Taylor Swift was clear, and the images would surely have appeared real to your not-so-tech-savvy grandparent. Swift was rumoured to be considering legal action over the images, but no such action has been taken as yet.

Meta has been pushing to detect and label all AI-generated imagery on its social media platforms Facebook, Instagram and Threads. Meta even offers its own AI image-generation software; the difference is that its imagery is tagged as AI-generated with invisible watermarks, while many other examples of this software don't take such precautions. With Meta owning multiple social media platforms on which these images can spread, it's no surprise that it's looking to get ahead of the issue before it lands in hot water in the same way that X did following the Taylor Swift AI images.

Many have also voiced concerns over the evidential issues such powerful AI tools create. Given the rapid development of image-generating software, there's no telling how quickly this technology will improve or how far it can go. If I went onto an AI image generator and asked it to create a CCTV image of a man committing a robbery, it'd certainly be able to produce an image you'd have to look at twice. As things stand, members of the general public, who may or may not use social media or be up to date on the development of AI, are unlikely to be able to tell the difference between AI imagery and real photos and videos. If that's the case, what would happen if a jury was given evidence created by AI? Would they be able to tell, especially as the technology gets more advanced?

As an experiment, I showed my partner a video circulating online of an AI-generated dog playing in the snow – she couldn't tell that the video wasn't genuine. This is someone fully aware of the power of AI generation software, so how can someone without this knowledge be expected to tell the difference?

The phrase ‘don’t believe everything you see online’ has been around for many years, but it has never been more relevant.