When is your photo not your photo? When it’s taken by a monkey! At least that was the debate that raged over the last decade, after British nature photographer David Slater succeeded in tempting wild Celebes crested macaques to take selfies with his camera in 2011. Unfortunately for David, back in December 2014, the United States Copyright Office stated that works created by a non-human are not copyrightable. It doesn’t matter who owned the materials or tools used (in this case a camera, lens, memory card etc.), only who triggered the shutter. But this article isn’t about the oblivious actions of monkeys, but the oblivious actions of humans.
Not a day goes by without seeing someone on Facebook or a forum sharing ‘their’ latest stunning photo, and extolling the ‘genius’ of new software that has ‘rescued detail’ from their blurry or low resolution photo. Even the software developers themselves market it on their home pages as being able to ‘Magically improve your photo and video quality with cutting-edge image enhancement technology.’ What am I talking about? I’m talking about software with machine learning and artificial intelligence (or ML and AI if you like your acronyms) at its core.
So what’s the problem? Post processing has always existed, hasn’t it? Absolutely, and I am not against it. Post processing - the art of tweaking the composition, colour, tone, sharpness, noise etc. of an image after it’s been taken - has indeed been around since forever. I don’t believe there’s any form of post processing today that hasn’t been possible in the past. Even ‘photoshopped’ composites, where multiple photos are blended together or parts removed, have been possible in the darkroom. They have been made easier and more accessible in recent years via software, but fundamentally they have always been controlled by the user. First in the darkroom, and lately on the computer.
So what’s my problem? Aren’t AI and ML just new tools to help photographers? The problem here is that these are the first tools that are not just editing the data, but creating new data. And that new data isn’t the property of the photographer making the edits. All tools and equipment used to edit raw images throughout history - be that in a physical or digital darkroom - have only used data the photographer has brought to the table. Primarily that’s the image itself, but potentially combined with other images or materials to build on and adapt the image, fully in the control of the photographer. I am no expert in this topic, and so I am happy to be proven wrong, but my understanding is that machine learning and artificial intelligence are founded on training data. You feed lots of example data into the machine, which empowers it to act in an informed manner when it’s applied. AI powered sharpening tools have effectively been trained on the work of hundreds, thousands, millions of other photographers to learn what images should look like. So when you feed your crappy, unsharp, out of focus picture of a kingfisher into such software, my understanding is that it recognises the rough colours and shapes of the kingfisher, and fills in the details of what you probably meant to photograph.
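To make that distinction concrete, here is a minimal sketch (in Python with NumPy; the function name and parameters are my own, purely for illustration) of classical unsharp masking - the kind of sharpening that has always been available in the darkroom or in editing software. Note that every number in the output is derived solely from the photographer’s own pixels; an ML sharpener, by contrast, also draws on weights learned from other people’s images.

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Classical sharpening: boost edges using only the image's own pixels.

    No external data is involved - the result is derived entirely from
    `image`, which is assumed to be a 2D array of values in [0, 1].
    """
    # Simple 3x3 box blur via edge padding and neighbour averaging.
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    # The "detail" layer is the difference between the image and its blur:
    # information already present in the capture, merely re-weighted.
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# A flat grey frame contains no edge information, so classical sharpening
# cannot invent any detail - the output equals the input.
flat = np.full((5, 5), 0.5)
print(np.allclose(unsharp_mask(flat), flat))  # True
```

That last line is the crux: where there is no detail in the capture, a classical tool can find none, whereas an AI tool will happily supply some from its training data.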
So why is that a problem? It’s a problem for two reasons. Primarily, and surely obviously, it’s an issue because photography, like most art, is founded in the skill of the photographer: capturing raw information and then editing that information in a controlled manner, choosing how to tweak and adapt it. Instead, as in the kingfisher example, it is no longer a photograph if the image is made up of information that wasn’t captured but guessed based on similar images. But secondarily, and perhaps of even more concern to me at present, is that people are using these tools oblivious to what is actually happening. These sharpening tools aren’t magically finding detail in your image; they’re taking detail and information from better images and applying it to yours.
I don’t think it’s an overstatement to say this is a major turning point in photography, but at present the majority of the photographic community are blindly wandering around that corner completely unaware. Perhaps we all need to embrace it. Do we all decide that as long as the image ends up as we envisaged it would look, then we’re happy? But I can’t help but feel it’s cheating. And if some do and some don’t use these tools, is that a level playing field? Or maybe photography is the pursuit of something beautiful to look at, and AI is simply enabling that? My main objective in writing this article, though, is to hopefully trigger the conversation that desperately needs to take place about these tools. Everyone using them needs to understand what is happening and why - and be transparent about when they’re used.