• 🌶️ - knighthawk@lemmy.ml · 2 days ago

    the genie is out of the bottle, they might as well capitalize on it while they still can.

    i predict with ai getting cheaper and more efficient we will “soon” have the ability to run our own personal ai locally on a phone

    nvidia will be less happy by then, but they care about short-term gains like every other large corp
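
    As a rough sanity check on the "personal AI on a phone" prediction, here is a back-of-the-envelope sketch of how much memory just the model weights need at different quantization levels. The parameter counts are illustrative assumptions, not figures from the thread, and this ignores activations and KV cache:

    ```python
    # Rough memory footprint of model weights alone, by quantization level.
    # Illustrative parameter counts; real deployments also need memory for
    # activations and the KV cache, so treat these as lower bounds.
    GB = 10**9  # decimal gigabytes

    def weight_memory_gb(params: float, bits_per_weight: int) -> float:
        """Bytes needed to store the weights, expressed in GB."""
        return params * bits_per_weight / 8 / GB

    for params, name in [(7e9, "7B"), (70e9, "70B")]:
        for bits in (16, 8, 4):
            print(f"{name} @ {bits}-bit: {weight_memory_gb(params, bits):.1f} GB")
    ```

    At 4-bit quantization a 7B model's weights fit in about 3.5 GB, which is why phone-class hardware is at least in the conversation, while a 70B model still wants tens of gigabytes.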

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 1 day ago

      Indeed, the whole panic over people not needing chips because AI got more efficient was misguided. All it really means is that the tech is more accessible now, so more people will run AI models locally, and they'll buy chips for that.

      Companies like OpenAI, which tried to build a business model around selling access to their AI as a service, look like the big losers in all this. If the tech gets efficient enough to run large models locally, there will be very few cases where people need a hosted service, and even when they do, nobody is going to pay high subscription fees. Interestingly, Apple seems to have bet right here: they appear to have anticipated on-device AI and targeted their hardware at it.

  • FrankLaskey@lemmy.ml · 2 days ago

    Hopefully these improvements will become available to other Nvidia GPU architectures like Ada and Ampere in the future as well.

  • Dimmer · 2 days ago

    US export controls should go further and forbid sales of GPUs to China entirely.

    Then we will all get much cheaper and better GPUs.