Recently, YouTube rolled out a change to Shorts - the short-form videos meant to compete with TikTok - wherein they automatically apply an AI filter, presumably for the sake of upscaling them. They didn't ask anyone first and, as of writing, there's no way to turn it off. Beyond the issues of artist intent and consent, this has the side effect of making any and every Short look AI-generated, even when it isn't. This has made a lot of people very angry, and YouTube hasn't done anything about it or, to my knowledge, even widely acknowledged what's happening.
I read a conspiracy theory that YouTube (well, Google [well, Alphabet]) was doing this specifically to make human-made work indistinguishable from AI-generated slop. That would be gaslighting in the truest sense of the term: users would no longer be able to tell truth from fiction, at least where the authenticity of YouTube videos is concerned. I don't necessarily believe this is intentional infrastructure-laying to benefit AI slop creators, but those creators are the only beneficiaries of the decision. While real filmmakers and animators struggle to prove their work is legitimate, AI sloppers reap the rewards of the ambiguity; if you can't tell fact from fiction, it's much easier for the fiction writer to pass fiction off as fact.
A particular science communicator made a video announcing that he was taking down several of his videos. The reason? Once uploaded to YouTube, they exhibited previously nonexistent - and dangerously seizure-triggering - visual glitches. Not Shorts, either - full-length videos. The culprit? He suspects YouTube has begun integrating the AI upscaling into full-format videos, and that it's not playing nice with his resolution choices.
I was cautious about this explanation, because it's easy to fling blame like this over a one-off event... and then YouTube straight-up announced they were starting to do exactly that. They claim it will be opt-out - clearly marked - but they've already shown they're willing to just do it anyway.
This raises the possibility of a YouTube wherein all content is generated, in part or in whole, by AI. Even work that is human-made in conception and execution becomes "AI-assisted" through system processing.
Now, I'm no conspiracy theorist... but the amount of money YouTube son of Google son of Alphabet has pumped into their misguided bet on generative AI leads me to believe this is a sort of infrastructure-laying. These tech-gamblers always need to convince stockholders that their bets are paying off, and this smells to me a whole lot like a pretense for telling investors "Look! Look how the masses love AI-generated content! Look how much money we're making from all this AI!" as they share their regular viewership and income reports.
Though, if you think about it, the next step would be to modify all videos via AI to keep the line going up, especially if viewership declines in response to that first decision. So maybe that random YouTube commenter was on to something. Maybe the future of YouTube is one giant gaslight factory, where reality and simulacrum are indistinguishable; and then, maybe, there's a future after that, where YouTube goes dead.