• Alphane Moon@lemmy.worldOPM · 1 month ago

    Let’s hope they won’t; the competitive situation in the GPU space is even worse than in the CPU segment.

    • aodhsishaj@lemmy.world · 1 month ago

      My comment was a little tongue in cheek. You’re right, and my fear is that these CEOs have bought too heavily into the transformer models behind the LLMs they’re calling AI, spreading engineering teams even thinner. I hope this AI bubble bursts soon so that we can get back to work on worthwhile projects.

      I don’t need an AI chip from Intel in my laptop that’s less reliable because it has a TPU kludged into it, just so I can ask a chatbot to tell me a bedtime story.

      • Alphane Moon@lemmy.worldOPM · 1 month ago

        I think the bubble will pop in 12-18 months if they don’t come up with something radically new (i.e. not just modest refinements of the latest LLMs).

        They have to get a return on this massive capital spend, and I bet the opex (salaries for people with relevant experience must be insane these days) is no joke either.

        At some point they will have to show ROI, something they have failed to do so far.

        • aodhsishaj@lemmy.world · 1 month ago

          My issue is that they’re not really hiring more engineers, but they are increasing the workload. I think that’s the main driver behind all of the microcode failures Intel has been dealing with.

        • Wooki@lemmy.world · 1 month ago

          Agreed on all points! There’s already talk of OpenAI’s huge running costs, and vendors are scrambling to try to monetise.