• Nindelofocho@lemmy.world · 6 months ago

    So we’re starting to get to the point where it’s theoretically possible for computers to get real organic viruses? “Sorry boss, I can’t work today, my computer caught Covid and coughed on me, so now I have it too :(”

    • neo@lemy.lol · 6 months ago

      Are you seriously ill, but don’t want to leave a ton of medical debts to your family?

      Then donate your brain tissue to BrainCloud™! Instead of costing your family a lot of money, you might make them millionaires* and also reduce the CO2 emissions of world-leading AI applications! Leaving a better world for our children!
      And who knows, maybe you will even enjoy thinking about chat bot responses in weird nightmarish ways for the rest of what might seem like an eternity.

      ~*We offer a donation compensation of up to $1,000,000. Actual rates depend on brain capabilities, size, and constitution. Payouts are determined by our quality assurance team. Payouts are not guaranteed. In cases of brain tissue of insufficient quality, fees for testing, lab work, and services may be charged to the donor’s family.~

  • Evilcoleslaw@lemmy.world · 6 months ago

    IIRC these organoids also die after somewhere around 100 days, from hypoxia, because researchers haven’t yet been able to construct a proper circulatory system for them.

  • hersh@literature.cafe · 6 months ago

    Is this legit? This is the first time I’ve heard of human neurons used for such a purpose. Kind of surprised that’s legal. Instinctively, I feel like a “human brain organoid” is close enough to a human that you cannot wave away the potential for consciousness so easily. At what point does something like this deserve human rights?

    I notice that the paper is published in Frontiers, the same journal that let the notorious AI-generated giant-rat-testicles image get published. They are not highly regarded in general.

    • pearsaltchocolatebar@discuss.online · 6 months ago

      They don’t really go into the size of the organoid, but it’s extremely doubtful that it’s large and complex enough to get anywhere close to consciousness.

      There’s also no guarantee that a lump of brain tissue could ever achieve consciousness, especially if the architecture is drastically different from an actual brain.

        • JackGreenEarth@lemm.ee · 6 months ago

          Well, we haven’t solved the hard problem of consciousness, so we don’t know whether brain size or similarity to a human brain are factors in developing consciousness. But perhaps a more important question is: if it did develop consciousness, how much pain would it experience?

          • ColeSloth@discuss.tchncs.de · 6 months ago

          Physical pain? Zero.

            Now emotional pain? I’m not sure it would even be able to experience emotional pain. So much of our emotions are intertwined with chemical balances and releases. If a brain achieved consciousness but had none of these chemicals at all… I don’t know if that’d even work.

            • Warl0k3@lemmy.world · 6 months ago

              While we haven’t confirmed this experimentally (ominous voice: yet), computationally there’s no reason even a simple synthetic brain couldn’t experience emotions. Chemical neurotransmitters are just an added layer of structural complexity, so Church–Turing will still hold true. Human brains are only powerful because they have an absurdly high parallel network throughput rate (computational bus might be a better term); the actual neuron part is dead simple. Network computation is fascinating, but much like linear algebra the actual mechanisms are so simple they’re dead boring - but if you cram 200,000,000 of those mechanisms into a salty water balloon it can produce some really pompous lemmy comments.
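
              To make the “dead simple” part concrete, here’s a minimal sketch (plain Python, my own toy example, nothing from the article) of the threshold-unit abstraction that artificial networks are built from:

              ```python
              # A single "neuron" really is this boring: weight the inputs, sum, threshold.
              # All the interesting behaviour comes from wiring huge numbers of these
              # together, not from the unit itself.
              def neuron(inputs, weights, threshold=1.0):
                  activation = sum(x * w for x, w in zip(inputs, weights))
                  return 1 if activation >= threshold else 0

              # Hand-picked weights: the unit "fires" only when the excitatory inputs
              # outweigh the inhibitory one.
              print(neuron([1, 0, 1], [0.6, -0.4, 0.7]))  # 1 (0.6 + 0.7 = 1.3 >= 1.0)
              print(neuron([0, 1, 1], [0.6, -0.4, 0.7]))  # 0 (-0.4 + 0.7 = 0.3 < 1.0)
              ```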

            Emotions are holographic anyways so the question is kinda meaningless. It’s like asking if an artificial brain will perceive the color green as the same color we ‘see’ as green. It sounds deep until you realize it’s all fake, man. It’s all fake.

              • ColeSloth@discuss.tchncs.de · 6 months ago

              Didn’t have to. Kind of an obvious thing to point out, but OP didn’t specify what type of pain he meant, so I figured I would, just in case.

                • JohnEdwa@sopuli.xyz · 6 months ago

                  Human brains don’t actually have any pain receptors (even though headaches would have you seriously believe otherwise), so a brain alone wouldn’t be able to feel pain any more than it would be able to smell or see.

    • Neuromancer49@midwest.social · 6 months ago

      Believe it or not, I studied this in school. There are some niche applications for alternative computers like this. My favorite is the way you can use DNA to solve the traveling salesman problem (https://en.wikipedia.org/wiki/DNA_computing?wprov=sfla1).
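
      (Rough sketch of how the DNA trick works, for the curious: Adleman’s original experiment targeted the closely related Hamiltonian-path problem by generating an enormous number of random paths in parallel and then chemically filtering out the invalid ones. A toy Python version of that generate-and-filter idea, with none of the actual chemistry, looks roughly like this:)

      ```python
      import random

      # Toy Adleman-style "DNA computing": generate a massive soup of random
      # paths (the ligation step), then filter out everything that isn't a
      # valid path visiting every node exactly once (the selection steps).
      graph = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}
      nodes = list(graph)

      def random_path(max_len):
          path = [random.choice(nodes)]
          while len(path) < max_len and graph[path[-1]]:
              path.append(random.choice(graph[path[-1]]))
          return tuple(path)

      soup = {random_path(len(nodes)) for _ in range(100_000)}   # "massively parallel", in spirit
      answers = {p for p in soup if len(set(p)) == len(nodes)}   # keep Hamiltonian paths only
      print(answers)  # {('A', 'B', 'C', 'D')}
      ```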

      There have been other “bioprocessors” before this one, some of which have used neurons for simple image detection, e.g. https://ieeexplore.ieee.org/abstract/document/1396377?casa_token=-gOCNaYaKZIAAAAA:Z0pSQkyDBjv6ITghDSt5YnbvrkA88fAfQV_ISknUF_5XURVI5N995YNaTVLUtacS7cTsOs7o. But this seems to be the first commercial application. Yes, it’ll use less energy, but the applications will probably be equally niche. Artificial neural networks can do most of the important parts (like “learn” and “remember”) and are less finicky to work with.
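
      (And for contrast, the “less finicky” artificial version of learning is just arithmetic; a tiny made-up perceptron example, nothing to do with FinalSpark’s setup:)

      ```python
      # A perceptron learning the logical AND function -- crude "learning" and
      # "remembering" with no culture medium or incubator required.
      data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      w, b, lr = [0.0, 0.0], 0.0, 0.1

      for _ in range(20):                              # a few passes over the data
          for (x1, x2), target in data:
              out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
              err = target - out                       # classic perceptron update
              w[0] += lr * err * x1
              w[1] += lr * err * x2
              b += lr * err

      # The learned weights now "remember" the AND rule.
      print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
      ```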

    • tyrant@lemmy.world · 6 months ago

      Seems like it’s an ethical gray area. Some brain organoids have responded to light stimulus, and there are concerns they might be able to feel pain or develop consciousness. (Full disclosure: I had no idea what an organoid even was before reading this and then did some quick follow-up reading.)

      • TheBananaKing@lemmy.world · 6 months ago

        How complex does a neural net have to be before you can call any of its outputs ‘pain’?

        Start with a lightswitch with ‘pain’ written on a post-it note stuck to the on position, end with a toddler. Where’s the line?
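
        (The post-it-note end of that spectrum is easy enough to write down; a deliberately silly one-liner, just to make the point concrete:)

        ```python
        # A "neural net" exactly as sophisticated as the lightswitch with a post-it:
        # one input, one threshold, and a label that says "pain". Nobody calls this
        # suffering -- the open question is how much more machinery it takes before
        # the answer stops being obvious.
        def tiny_net(stimulus: float) -> str:
            return "pain" if stimulus > 0.5 else "no pain"

        print(tiny_net(0.9))  # "pain", purely by fiat of the label we stuck on it
        ```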

  • dumbass · 6 months ago

    Organoids is such a fun word to say.

    • sebinspace@lemmy.world · 6 months ago

      People seeing scifi works as somehow prophetic is something I will never understand…

      • CheeseNoodle@lemmy.world · 6 months ago

        Scifi is kind of reverse-prophetic: a lot of people become scientists because they were into scifi, and at the end of the day we need to imagine something is possible before we can invent it.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 6 months ago

    The article claims they are human brain organoids, but doesn’t say where they come from. Are these grown, like most other neural computing systems, or are they actually taking matter from a human brain?

  • TropicalDingdong@lemmy.world · 6 months ago

    Some cells get taken from you and turned into stem cells.

    These are converted into brain cells and nerve cells on a chip that provides the scaffolding, interface, and connectivity.

    Then the whole ‘organ-device’ gets surgically installed into your brain, and through gene therapy the brain cells grow into, connect with, and network into your existing tissue.

  • Blóðbók@slrpnk.net · 6 months ago

    If this works, it’s noteworthy. I don’t know if similar results have been achieved before because I don’t follow developments that closely, but I expect biological computing to attract a lot more attention in the near-to-mid-term future. Given its efficiency, and the increasingly tight constraints environmental pressure imposes on us, I foresee it eventually eclipsing silicon-based computing.

    FinalSpark says its Neuroplatform is capable of learning and processing information

    They sneak that in there as if it’s just a cool little fact, but this should be the real headline. I can’t believe they just left it at that. Deep learning cannot be the future of AI, because it doesn’t facilitate continuous learning. Active inference is a term that will probably be thrown about a lot more in the coming months and years, and as evidenced by all kinds of living things around us, wetware architectures are highly suitable for instantiating agents that do active inference.
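
    (To make the “continuous learning” point concrete: a deep network is typically frozen after training, whereas an active-inference-style agent keeps revising its internal model with every observation. A bare-bones sketch of just that online update, my own illustration with all the interesting parts of active inference left out, might look like:)

    ```python
    import random

    # Online belief updating: no separate training phase, every observation
    # immediately revises the agent's belief about a hidden state. Full active
    # inference adds action selection on top (acting so as to minimise expected
    # surprise), which is omitted here.
    p_rain = 0.5                        # prior belief that it's raining
    P_WET = {"rain": 0.9, "dry": 0.2}   # likelihood of seeing wet ground

    def update(belief, saw_wet):
        like_rain = P_WET["rain"] if saw_wet else 1 - P_WET["rain"]
        like_dry = P_WET["dry"] if saw_wet else 1 - P_WET["dry"]
        evidence = like_rain * belief + like_dry * (1 - belief)
        return like_rain * belief / evidence          # Bayes' rule, every step

    for step in range(10):
        observation = random.random() < 0.9           # world really is raining
        p_rain = update(p_rain, observation)
        print(f"step {step}: belief(rain) = {p_rain:.3f}")
    ```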