The US dictionary Merriam-Webster’s word of the year for 2025 was “slop”, which it defines as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence”. The choice underlined the fact that while AI is being widely embraced, not least by corporate bosses keen to cut payroll costs, its downsides are also becoming obvious. In 2026, a reckoning with reality for AI represents a growing economic risk.

Ed Zitron, the foul-mouthed figurehead of AI scepticism, argues pretty convincingly that, as things stand, the “unit economics” of the entire industry – the cost of servicing the requests of a single customer against the price companies are able to charge them – just don’t add up. In typically colourful language, he calls them “dogshit”.
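Zitron's "unit economics" point can be made concrete with a toy calculation. The figures below are invented purely for illustration, not drawn from any company's accounts:

```python
# Hypothetical unit-economics sketch: all numbers are made up for illustration.
def monthly_margin(price_per_user, requests_per_user, cost_per_request):
    """Subscription revenue minus inference cost for one subscriber per month."""
    return price_per_user - requests_per_user * cost_per_request

# A $20/month subscriber making 1,000 requests at an assumed $0.03 per request
# costs more to serve than they pay:
print(monthly_margin(20.0, 1000, 0.03))  # -10.0
```

If the cost of serving a heavy user exceeds their subscription fee, growth makes losses bigger, not smaller, which is the core of the sceptics' argument.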

Revenues from AI are rising rapidly as more paying clients sign up, but so far not by enough to cover the wild levels of investment under way: $400bn (£297bn) in 2025, with much more forecast in the next 12 months.

Another vehement sceptic, Cory Doctorow, argues: “These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire.”

  • Kaiserschmarrn@feddit.org · +11 · 3 months ago

    Two of my friends are paying for it. One works as a developer and one in DevOps. Currently, both of them have a ChatGPT subscription. The first one now shares a lot of DALL·E images of his dog, and the other one recently showed us, proudly, how he could tell ChatGPT about our D&D session so that it would generate a summary for us. That took nearly forever and the summary had a lot of funny errors in it.

    I really don’t get why people are paying over 20€/month for this shit.

    • prodigalsorcerer@lemmy.ca · +3/−9 · 3 months ago

      20 bucks a month is basically nothing for a developer who’s making $100 an hour.

      My employer pays for copilot, and yeah, it makes mistakes, but if you pretend it’s a junior developer and double check its code, it can easily save time on a lot of tedious work, and will turn hours of typing into fewer hours of reading.

      • CeeBee_Eh@lemmy.world · +15/−1 · 3 months ago

        You have no idea what long-term impact a tool like that has on a codebase. The more it generates, the less you understand, regardless of how much you “check” the output.

        I work as a senior dev, and I’ve tested just about all the foundational models (and many local ones through Ollama) for both professional and personal projects. In 90% of the cases I’ve tested, it has come back to “if I had just done the work myself from the beginning, I would have had a working result that’s cleaner and functions better, in less time”.

        Generated code can work for a few lines, for some boilerplate, or for some refactoring, but anything beyond that is just asking for trouble.

        • shalafi@lemmy.world · +3 · 3 months ago

          can work for a few lines, for some boilerplate, or for some refactoring

          I highly doubt the person you’re replying to meant anything else. We’re all kinda on the same page here.

          • CeeBee_Eh@lemmy.world · +5/−1 · 3 months ago

            I hope so, but you’d be surprised. I know some devs who basically think LLMs can do their work for them, and treat them as such. They get them to do multi-hundred-line edits with a single prompt.