
Many people turn to AI to improve their writing, their photos, or their videos. YouTube has been doing the same, quietly enhancing the look of users’ uploaded videos through the use of artificial intelligence.
So, what is the problem? Many creators are upset not so much that their videos have been “enhanced” (although there is debate over whether what YouTube has done can actually be seen as an improvement) but that it was done without consent.
This is not new. YouTube creators have been complaining for quite some time that the look of their uploaded videos has been changing without them having done anything. For months there has been debate about whether videos have been hit with a dose of AI, and now Google has confirmed that people's suspicions were correct: this is exactly what has been happening.
Theories have been circulating that some form of AI-powered upscaling has been used on videos, but YouTube insists that this is not the case. The difference between users' fears and YouTube's claimed reality is pretty nuanced, however.
AI-enhanced YouTube
Rhett Shull posted a video looking into the issue, introducing it by saying:
Is YouTube secretly applying an AI filter to Shorts without telling creators? I recently noticed my videos looked strange and smeary on YouTube compared to Instagram, almost like a cheap deep fake. In this video, I investigate what’s going on and why I believe it’s a massive problem for everyone on this platform.
After talking with Rick Beato and seeing discussions on Reddit about the same “oil painting” effect on videos from creators like Hank Green, it’s clear there is some kind of non-consensual AI upscaling being applied to our content. For me, this is a huge issue that threatens to erode the most important thing a creator has: the trust of their audience.
You can check out the video here:
The issue feels a little complicated. While it is fair to say that there has been nothing nefarious going on, the lack of transparency is worrying. Yes, YouTube has been “enhancing” videos, presumably with good intentions, but that is not really the point.
What matters here is that people's videos have been altered without any form of notification.
YouTube has responded to user complaints by saying:
No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video). YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features.
Google is just one of a legion of companies pushing their own brands of AI, and normally this is not an issue. But things are different when people are subjected to artificial intelligence without being told it is happening, and different again when AI is used to alter user content without consent, even if the tweaks are well-intentioned.
As well as undermining trust in Google, there is also the question of the impact on creators. Many people pride themselves on avoiding AI in their work, but with so many viewers spotting that videos had been enhanced with AI, creators have been accused of “lying” about not using it. This undermines trust in individual creators, and the blame lies firmly with Google.
That the company appears to be unconcerned about the level of upset this has caused is another worry – but what do you think of it all?
Image credit: bilalulker / depositphotos