• kibiz0r@midwest.social · +96/-2 · 5 months ago

    Unfortunately, as Keynes noted: “Markets can remain irrational longer than you can remain solvent”

    And my god, they’re committed to irrationality right now.

      • Laser@feddit.org · +7 · 5 months ago

        Your return on investment can be someone else’s investment.

        One might call it a pyramid scheme

        • Evil_Shrubbery@lemm.ee · +5 · edited · 5 months ago

          And the suckers without the means to participate can live and die for generations knowing only its disadvantages.

          But whether they participated or not, their lives will still get even more ruined once the system collapses.

          Your return on investment can be someone else’s investment.

          Lol, just realised that might be a bit more based than the regular, god- (aka capital-) intended version: the return on my investment can be someone else’s labour.

    • frezik@midwest.social · +5 · 5 months ago

      Nvidia has to be the most obvious thing to short in this whole mess, except for that quote. If the AI bubble popped tomorrow, you’d make a lot of money. If it pops in a year, you may lose it all before then.

    • explodicle@sh.itjust.works · +4 · 5 months ago

      To be fair, I watched a ton of crypto bros use that quote to justify holding shitcoins while they crashed to zero.

  • TheFriar@lemm.ee (mod) · +67 · 5 months ago

    What trillion dollar problem is it solving? In the minds of investors, that “problem” is paying people for labor.

    • Evil_Shrubbery@lemm.ee · +8 · 5 months ago

      Even when the same people need to get money from people they don’t wanna pay (or there aren’t any buyers), this is still the case.

      So if GDP doesn’t come from nature (which it shouldn’t, at least net shouldn’t), the system cannot work with financial wealth being the only goal.

      But it can work long enough to destroy much of everything everywhere.

      • nilloc@discuss.tchncs.de · +1 · 4 months ago

        Yeah, I’m far more worried about capitalist tech morons ending humanity than I am about AI deciding to off us.

  • EleventhHour@lemmy.world · +64/-2 · edited · 5 months ago

    And this absolutely will not change the course of AI investment whatsoever, because it’s still driving a huge amount of profit.

    The only thing that will finally change the course of AI investment is when the bubble finally bursts, which will cause the collapse of our economy because, by that point, so much money will have been invested in it. There will be no other possible result.

    And why? Because these assholes only care about one thing: short term results at any cost.

    • OpenStars@discuss.online · +27 · 5 months ago

      Too big to fail, too big even to jail - it’s worked before, they seem to be counting on it working once more.

      But I could be giving them too much credit - perhaps they really do believe in it.

      • GBU_28@lemm.ee · +11/-1 · 5 months ago

        What’s failing or jailing?

        Like, I’m not defending AI here but using a lot of power, and building software tools people buy is not illegal.

        We can argue it’s a bit of a scam in the sense that many of its purported objectives are never accomplished, but that’s a tale as old as time with software.

        We can also argue the copyright issue, I think that’s the most relevant topic.

        But in general this is just software 2024 MEGA EDITION. Everything sucks, everything is just executives moving money around.

      • Flying Squid@lemmy.world · +9/-1 · 5 months ago

        You think it’s too big to fail now, wait until they start wondering what they built all of those extra data centers for.

    • Optional@lemmy.world (OP) · +16 · 5 months ago

      “profit” is gonna need some qualifiers there.

      Firing the staff & reporting “earnings”? Goosed stock price on the above + hypey garbage? Enforced “features” no one wants? AI hardware makers? Okay, that one’s legit, but ironically not AI.

    • xtr0n@sh.itjust.works · +6 · 5 months ago

      I wish I knew of a good way to profit off of this bubble. I could work for a company in the AI space, but I think it would be well above my “executives hyping the smell of their own farts” threshold. And shorting Google and Microsoft is a dangerous game.

      • sevan@lemmy.ca · +4 · 5 months ago

        You got me thinking a bit on this one. One possibility, if you want to bet on AI failing to deliver value in the near future, is to look at companies whose stock prices have fallen on the fear of AI putting them out of business. For example, Concentrix does call center outsourcing and their stock is down significantly from its 2022 peak, partially on the expectation that AI is going to take business from them.

        Now, their profit margin is tiny and they don’t seem to be growing much, so I don’t know that they are a great investment, but there could be upside if the negative cloud of AI is removed. There are probably better examples out there; this one just came to mind.

        Note: I have not done any research on this idea or on Concentrix and don’t know if this is a good idea, but at least less risky than shorting the AI hype.

        • fine_sandy_bottom@discuss.tchncs.de · +3 · 5 months ago

          The thing is… this type of “risk” is already priced into the value of shares like Concentrix. There’s a phrase… “the market is always perfectly priced”, which means that if all information is available to everyone then the price you pay is the correct price, adjusted for risk.

          To say the same thing another way, the current price for Concentrix shares is the price they would be if the AI bubble popped, less an adjustment for the risk that the bubble doesn’t pop.
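The risk-adjusted pricing idea above can be sketched with toy numbers (all of the figures below are made up for illustration, not actual estimates for Concentrix or anyone else):

```python
# Toy illustration of "priced-in" risk: a share's price today reflects
# both possible outcomes, weighted by the market's belief in each.
# Assume (hypothetically) the share is worth $100 if the AI bubble pops
# and outsourcing demand returns, $40 if it doesn't, and the market
# collectively estimates a 30% chance of a pop.
p_pop = 0.30
value_if_pop = 100.0
value_if_no_pop = 40.0

fair_price = p_pop * value_if_pop + (1 - p_pop) * value_if_no_pop
print(f"risk-adjusted price today: ${fair_price:.2f}")  # $58.00
# You only profit by buying now if you believe p_pop is materially
# higher than the market's estimate -- the risk is already in the price.
```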

    • Cryophilia@lemmy.world · +3 · edited · 5 months ago

      The only thing that will finally change the course of AI investment is when the bubble finally burst which will cause the collapse of our economy because, by that point, so much money will have been invested in it.

      We’re not there yet. Remember, tech is not the whole of the economy. The recent tech layoffs have had Silicon Valley screaming, “The sky is falling!!” and the rest of the planet going “huh? You guys hear something? Must’ve been a fly”

  • Catoblepas@lemmy.blahaj.zone · +62/-1 · 5 months ago

    Number 3 drives me hair-tearing insane, I have straight up seen AI cultists say AI will fix the power grid but only if we keep pouring resources into it so that it can fix all our problems. ಠ_ಠ

    • TheFriar@lemm.ee (mod) · +53 · 5 months ago

      “I’ll finally have the strength to kick this heroin habit if I just do more heroin.”

      • psmgx@lemmy.world · +10 · 5 months ago

        That is, entirely unironically, how deep addiction makes you think. Just need enough to get to normal…

      • mojofrododojo@lemmy.world · +6 · 5 months ago

        “we should all do heroin to support the habit and mainstream it.” - every fucking company pushing AI onto society when it’s dangerous, janky af and ridiculously expensive.

    • Fermion@feddit.nl · +25/-1 · 5 months ago

      I’m very confident that with carte blanche the electrical engineers already overseeing the grid could solve the problems it faces. We don’t need an ai miracle, we need to remove bureaucratic and funding obstacles for critical infrastructure.

      • thanks_shakey_snake@lemmy.ca · +18 · 5 months ago

        And this is it: Many of those “AI will be so smart that it can solve these problems for us!” arguments refer to problems where having a “smart” enough solution isn’t the problem… Getting people to care/notice/participate/get out of the way is.

        • MindTraveller@lemmy.ca · +9 · 5 months ago

          The great AI decrees that you must increase solar and wind subsidies! And pay no attention to the electrical engineer behind the curtain!

          • thanks_shakey_snake@lemmy.ca · +2 · 4 months ago

            An intelligent entity told us to do it. Humanity: Nah

            An artificially intelligent entity told us to do it. Humanity: Well shit let’s get to work!

    • 31337@sh.itjust.works · +9 · 5 months ago

      e/acc. The dumb MFs believe burning fossil fuels as fast as possible will lead to technological advancements that mitigate the problems. It’s all wishful thinking and convenient blind faith.

  • Rayspekt@lemmy.world · +39 · 5 months ago

    I love that Ed Zitron is getting more popular. He is on a ferocious rampage against the rot economy and I’m all here for it.

    • Serinus@lemmy.world · +5 · 5 months ago

      Maybe he should buy Red Lobster, force them into unfavorable contracts for supplies, sell their land out from under them, and lease it back to them.

  • Seasoned_Greetings@lemm.ee · +40/-2 · edited · 5 months ago

    If AI is a trillion dollar investment, what trillion dollar problem is it solving?

    Why, the trillion dollars not yet in the pockets of the companies that think they can take advantage of AI of course.

    The naked truth is that #4 answers #1. The biggest utility AI might provide would be replacing paid workers. That’s a trillion dollar problem if your ultimate goal is to hoard wealth and sit atop the highest pile of gold like a dragon.

    So again, we have a solution to a problem only the wealthy elite have, being marketed as an advancement for the greater good of society, to justify stealing the massive resources it consumes, in order to not have to pay that directly to their workers.

    Capitalism.

  • UnderpantsWeevil@lemmy.world · +29 · 5 months ago

    Okay, yes true. But have you considered that Big Number Go Up? Do you really want to miss the boat on this massive speculative opportunity?

    • Riskable@programming.dev · +4/-2 · 4 months ago

      That’s probably why Goldman Sachs is against AI all of a sudden: they didn’t invest much in it, and now everyone else is reaping gains in the stock market that they failed to take advantage of.

      • UnderpantsWeevil@lemmy.world · +6 · 4 months ago

        They didn’t invest much in it

        Goldman was strongly bullish on Microsoft in mid-2023, right before it went on a historical run, precisely because they had enormous faith in the OpenAI project. This is a huge heel turn relative to their multi-billion dollar investment in the company from last year.

  • Etterra@lemmy.world · +29/-3 · 5 months ago

    Yeah this is basically Metaverse or NFTs but with a slightly more plausible use case so that it will drag out far longer before corporations quietly pretend it never happened.

    • Croquette@sh.itjust.works · +7 · 5 months ago

      The tech itself is decent. But as always, profit above anything else, so we can’t have anything nice.

      So instead, it will be wasted and forgotten.

      • Tja@programming.dev · +1/-3 · 4 months ago

        It will not be wasted or forgotten. The cat’s out of the bag. You can run it locally on your machine. It can summarize text for you, it can help you write boilerplate code, it can help you find that file with that thing that you don’t quite remember, it can create a poem about your left nut. The tech already has proven useful, it’s about where you use it.
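The "run it locally" claim is real: tools like Ollama expose a local LLM over a small HTTP API. A minimal sketch, assuming `ollama serve` is running on the default port and a model has been pulled (the model name "llama3" is just an example; swap in whatever you actually have):

```python
# Query a locally running LLM through Ollama's REST API to summarize
# text -- no cloud service involved. Assumes `ollama serve` is up on
# localhost:11434 and the named model has been pulled.
import json
import urllib.request

def build_payload(text: str, model: str = "llama3") -> dict:
    # One-shot (non-streaming) generation request body.
    return {
        "model": model,
        "prompt": f"Summarize this in one sentence:\n\n{text}",
        "stream": False,
    }

def summarize(text: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_payload(text, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```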

        • Croquette@sh.itjust.works · +3 · 4 months ago

          Yeah, but the drive for profits will poison the concept. I agree with you that there are a lot of FOSS alternatives, but the layman will probably not understand how to host their own LLM.

          • Tja@programming.dev · +2 · 4 months ago

            It will be integrated in products. Imagine libre office with local text prediction / grammar assistant / translation.

            Sure the layman actually doesn’t even know how to install libre office, but that’s a different problem.

    • MataVatnik@lemmy.world · +2/-1 · 5 months ago

      It has been proven to solve scientific problems where math models are difficult to implement. But those are niche cases. Forget about AGI.

  • Empricorn@feddit.nl · +27/-2 · edited · 4 months ago

    AI will get better

    Aren’t LLMs already pretty much out of (past) training data? Like, they’ve already chewed through Reddit/Facebook etc. and are now caught up to current posts. Of course people will continue talking online, and they’ll continue to use it to train AI. But if devouring decades of human data, basically everything online, resulted in models that hallucinate, lie to us, and regurgitate troll posts, how can they reach the exponential improvement they promise us!? It already has all the data, has been trained on it, and the average person still sees no value in it…

    • UnderpantsWeevil@lemmy.world · +16/-2 · 5 months ago

      Your mistake is in thinking AI is giving incorrect responses. What if we simply change our definition of correctness and apply the rubric that whatever AI creates must be superior to human work product? What if we just assume AI is right and work backwards from there?

      Then AI is actually perfect and the best thing to feed AI as training data is more AI output. That’s the next logical step forward. Eventually, we won’t even need humans. We’ll just have perfect machines perfectly executing perfection.

    • Dozzi92@lemmy.world · +10 · 5 months ago

      My wife works for a hospital system and they now interact with a chat bot. Somehow it’s HIPAA compliant, I dunno. But I said to her, all it’s doing is learning the functions of you and your coworkers, and it will eventually figure out how to streamline your position. So there’s more to learn, but it’s moved into private sectors too.

      • jeeva@lemmy.world · +8 · 4 months ago

        I mean, happily, chatbots are not really capable of learning like that.

        So she’s got a while, there.

        • Dozzi92@lemmy.world · +3 · 4 months ago

          Hopefully not a while, but that’s a story for another day.

          They have the chatbots reading emails, in her words, to make them (the emails) more enthusiastic. I don’t even ask questions. I’m very insulated from her field and corporate workplaces in general, and admittedly view a lot of what happens there as completely outrageous, but it’s probably just par for the course.

    • aesthelete@lemmy.world · +4 · edited · 4 months ago

      It already has all the data, has been trained on it, and the average person still sees no value in it…

      And that data that it has been trained on is mostly “pre-GPT”. They’re going to have to spend another untold fortune in tagging and labeling data before training the newer ones because if they train on AI-generated content they will be prone to rot.
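The "rot" from training on AI-generated content can be shown with a toy simulation: fit a distribution to data, sample from the fit, refit on the samples, and repeat. (The 0.9 "temperature" factor below is an assumed stand-in for models preferring typical outputs over rare ones; real model collapse is messier, but the drift is the same.)

```python
# Toy sketch of model collapse: each "generation" trains only on the
# previous generation's output, and the diversity of the original
# human data drains away.
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # original "human" data

for generation in range(5):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Next model samples from its fit, slightly favoring typical
    # outputs (the assumed 0.9 factor models temperature < 1).
    data = [random.gauss(mu, sigma * 0.9) for _ in range(1000)]
    print(f"gen {generation + 1}: stdev = {statistics.stdev(data):.3f}")
# The spread shrinks every generation -- the tails (rare, interesting
# content) disappear first.
```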

  • some_guy@lemmy.sdf.org · +19 · 5 months ago

    The last part is wrong. They aren’t imagining improvement. They know this is it for now and they’re lying their asses off to pretend that they’ll be able to keep improving it when there’s no training data left. The grift is all that’s left.

  • bricklove@midwest.social · +19/-1 · 5 months ago

    I like to read AI as Al (i.e. Allen) and pretend he’s just some guy stealing ideas, lying, and generally fucking up at his job. Al is an asshole

    • fine_sandy_bottom@discuss.tchncs.de · +7 · 5 months ago

      I think that’s the obvious implication of this question, but you’re missing the implied question arising from that answer: is this a problem we want to solve in this way ?

  • gencha@lemm.ee · +11/-1 · 4 months ago

    The AI makes the art. The art is made into an NFT. The NFT goes on the blockchain. We all get rich. Climate is saved. End of story. How are people not getting this??? 😂😭

  • nednobbins@lemm.ee · +12/-3 · 4 months ago

    This is all true if you take a tiny portion of what AI is and does (like generative AI) and try to extrapolate that to all of AI.

    AI is a vast field. There are a huge number of NP-hard problems that AI is really really good at.

    If you can reasonably define your problem in terms of some metric and your problem space has a lot of interdependencies, there’s a good chance AI is the best and possibly only (realistic) way to address it.

    Generative AI has gotten all the hype because it looks cool. It’s seen as a big investment because it’s really expensive. A lot of the practical AI is for things like automated calibration. It’s objectively useful and not that expensive to train.
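The "define a metric and let the machine search" point can be made concrete with a tiny example. This is a deliberately simple sketch (plain hill climbing with 2-opt moves on a random travelling-salesman instance, an NP-hard problem), standing in for the fancier learned optimizers the comment alludes to:

```python
# Define the problem as a metric (total tour distance), then let a
# search procedure drive it down -- no hand-derived solution needed.
import math
import random

random.seed(42)
cities = [(random.random(), random.random()) for _ in range(10)]

def tour_length(order):
    # The metric: total round-trip distance visiting cities in `order`.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
best = tour_length(order)
for _ in range(20000):
    i, j = sorted(random.sample(range(len(cities)), 2))
    # 2-opt move: reverse a segment of the tour.
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    if (length := tour_length(candidate)) < best:
        order, best = candidate, length
print(f"best tour length found: {best:.3f}")
```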

    • Semi-Hemi-Lemmygod@lemmy.world · +7/-3 · 4 months ago

      In my career I deal a lot with random, weird problems that servers have when doing work. Having an AI that’s just able to monitor logs and stats and then help with diagnosing issues or even suggesting solutions would be terribly useful.

      • nednobbins@lemm.ee · +2 · 4 months ago

        That’s a great use case. Splunk does something along those lines. Logs are particularly nice because they tend to be so large that individual companies can often have the AI trained to the specifics of their environment.