The US dictionary Merriam-Webster’s word of the year for 2025 was “slop”, which it defines as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence”. The choice underlined the fact that while AI is being widely embraced, not least by corporate bosses keen to cut payroll costs, its downsides are also becoming obvious. In 2026, a reckoning with reality for AI represents a growing economic risk.

Ed Zitron, the foul-mouthed figurehead of AI scepticism, argues pretty convincingly that, as things stand, the “unit economics” of the entire industry – the cost of servicing the requests of a single customer against the price companies are able to charge them – just don’t add up. In typically colourful language, he calls them “dogshit”.

Revenues from AI are rising rapidly as more paying clients sign up, but so far not by enough to cover the wild levels of investment under way: $400bn (£297bn) in 2025, with much more forecast in the next 12 months.

Another vehement sceptic, Cory Doctorow, argues: “These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire.”

  • FudgyMcTubbs@lemmy.world · 3 months ago

    I wouldn’t pay money for access to AI. The convenience is not worth a single cent to me. But am I the average person? Is the average person sold on this nonsense enough to subscribe to it? The first hit is free to get you hooked. So if the plan is to get the average person dependent on it while it’s free and then eventually charge for it, I’m not buying, and I wonder how many people will. AI output is fucking garbage.

    • Piatro@programming.dev · 3 months ago

      I know a few people who subscribe who I never would have expected to do so, but I also know people who have started asking “why does Google show me an AI summary all the time when I don’t need it?” I think any sheen it had is diminishing, slowly but surely.

    • Kaiserschmarrn@feddit.org · 3 months ago

      Two of my friends are paying for it. One works as a developer and one in DevOps. Currently, both of them have a ChatGPT subscription. The first one now shares a lot of Dall-E images picturing his dog, and the other one recently showed us, proudly, how he could tell ChatGPT about our DnD session so it would generate a summary for us. The latter took nearly forever and had a lot of funny errors in it.

      I really don’t get why people are paying over 20€/month for this shit.

      • prodigalsorcerer@lemmy.ca · 3 months ago

        20 bucks a month is basically nothing for a developer who’s making $100 an hour.

        My employer pays for copilot, and yeah, it makes mistakes, but if you pretend it’s a junior developer and double check its code, it can easily save time on a lot of tedious work, and will turn hours of typing into fewer hours of reading.

        • CeeBee_Eh@lemmy.world · 3 months ago

          You have no idea the long-term impact such a tool has on a codebase. The more it generates, the less you understand, regardless of how much you “check” the output.

          I work as a senior dev, and I’ve tested just about all the foundational models (and many local ones through Ollama) for both professional and personal projects. In 90% of the cases I’ve tested, it has come back to “if I had just done the work from the beginning myself, I would have had a working result that’s cleaner and functions better, in less time”.

          Generated code can work for a few lines, for some boilerplate, or for some refactoring, but anything beyond that is just asking for trouble.

          • shalafi@lemmy.world · 3 months ago

            > can work for a few lines, for some boilerplate, or for some refactoring

            I highly doubt the person you’re replying to meant anything else. We’re all kinda on the same page here.

            • CeeBee_Eh@lemmy.world · 3 months ago

              I hope so, but you’d be surprised. I know some devs who basically think LLMs can do their work for them, and treat them as such. They get them to do multi-hundred-line edits with a single prompt.

    • SGG@lemmy.world · 3 months ago

      I set up a local Ollama instance to look for ways to integrate it into my regular work. I do IT stuff, from basic helpdesk to Office 365 configs, and almost anything in between.

      At best I just use it as a sounding board, basically rubber duck debugging.

      I prefer the rubber duck.
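For anyone who wants to try the same experiment, a local Ollama instance exposes a small HTTP API on port 11434 by default. A minimal sketch, using only the standard library (the model name is illustrative, and the server must be running with a model already pulled, e.g. `ollama pull llama3`):

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # "llama3" is a placeholder; any model you have pulled locally works
    print(ask("llama3", "Act as a rubber duck: ask me questions about my bug."))
```

Whether the answers beat an actual rubber duck is, as the comment above suggests, another question.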

    • mrgoosmoos@lemmy.ca · 3 months ago

      You have to remember how dumb the “average” person is: someone who absolutely thinks AI chatbots give good answers and never stops to question their accuracy.