Archive: https://archive.is/2025.03.17-153712/https://www.404media.co/ai-slop-is-a-brute-force-attack-on-the-algorithms-that-control-reality/

The best way to think of the slop and spam that generative AI enables is as a brute force attack on the algorithms that control the internet and which govern how a large segment of the public interprets the nature of reality. It is not just that people making AI slop are spamming the internet, it’s that the intended “audience” of AI slop is social media and search algorithms, not human beings.

What this means, and what I have already seen on my own timelines, is that human-created content is getting almost entirely drowned out by AI-generated content because of the sheer amount of it. On top of the quantity of AI slop, because AI-generated content can be easily tailored to whatever is performing on a platform at any given moment, there is a near-total collapse of the information ecosystem and thus of “reality” online. I see almost nothing real on my Instagram Reels anymore, and, as I have often reported, many users seem to have completely lost the ability to tell what is real and what is fake, or simply do not care anymore.

  • IceAgeTower@feddit.it
    14 hours ago

    social media that specifically tries to avoid AI-generated content

    Which is more or less what many instances in the Fediverse are trying to do. But I wonder: how long will it last? If (or when) Lemmy blows up, will it even be possible to prevent an AI flood relentlessly backed by bots?

    • Megaman_EXE@beehaw.org
      2 hours ago

      I’m not sure if we can prevent it entirely. I suppose all it would take is one bad actor to get a bot set up, and it would be off to the races.

      I would hope that we would be able to figure out ways of identifying real people online. Surely, there’s gotta be some way we can alleviate the issue?
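As a rough illustration of the kind of mitigation the commenters are gesturing at, here is a minimal sketch that scores an account on posting rate and duplicate content. It is a hypothetical example: the Account class and looks_like_flood function are invented for this note and are not part of Lemmy, ActivityPub, or any real moderation tooling, and heuristics like this only alleviate a bot flood rather than reliably identify real people.

```python
# Hypothetical sketch: a naive "is this account flooding us?" heuristic for a
# small federated instance. All names (Account, looks_like_flood) are made up
# for illustration and are not any real platform's API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Account:
    created_at: datetime
    post_times: list[datetime] = field(default_factory=list)
    post_bodies: list[str] = field(default_factory=list)


def looks_like_flood(account: Account, now: datetime) -> bool:
    """Flag accounts that are very new and post at machine-like rates,
    or that repeat near-identical bodies. Crude and easy to evade,
    which is part of the commenters' point."""
    age = now - account.created_at
    recent = [t for t in account.post_times if now - t < timedelta(hours=1)]
    duplicates = len(account.post_bodies) - len(set(account.post_bodies))

    if age < timedelta(days=1) and len(recent) > 20:
        return True  # brand-new account posting 20+ times in the last hour
    if duplicates > 5:
        return True  # the same body posted over and over
    return False


# Example: a half-day-old account that posted the same text 30 times in an hour.
now = datetime(2025, 3, 17, 15, 0)
bot = Account(
    created_at=now - timedelta(hours=12),
    post_times=[now - timedelta(minutes=i) for i in range(30)],
    post_bodies=["Check out this amazing video!"] * 30,
)
print(looks_like_flood(bot, now))  # True
```

The thresholds (20 posts an hour, 5 repeated bodies) are arbitrary placeholders; an instance that actually relied on something like this would tune them and combine them with registration applications, captchas, and human review.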