I found that idea interesting. Will we consider it the norm in the future to have a “firewall” layer between news and ourselves?

I once wrote a short story in which the protagonist received news of a friend’s death, but it was intercepted by their AI assistant, which said: “When you have time, there is emotional news that does not require urgent action and that you will need to digest.” I feel this could become the norm.

EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don’t think he talks about politics there, but it applies quite broadly.

EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding yourself from information you may be uncomfortable with, but about tweets specifically engineered to elicit reactions, which is becoming a plague on Twitter given its new incentives. It’s about distinguishing between news presented neutrally and news presented as “an incredibly atrocious crime done to CHILDREN, and you are a monster for not caring!”. The latter feels a lot like an exploit of emotional backdoors, in my opinion.
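To make the “neutral vs. engineered-for-outrage” distinction concrete, here is a deliberately naive Python sketch of such a triage layer. The keyword lists, labels, and thresholds are all made up for illustration; a real assistant would need an actual classifier, not word matching:

```python
# Toy sketch of a news "firewall": triage incoming items by urgency and
# emotional charge, deferring the charged-but-non-urgent ones.
# Keyword sets here are purely illustrative placeholders.

from dataclasses import dataclass

URGENT_WORDS = {"evacuate", "recall", "deadline", "tonight"}
OUTRAGE_WORDS = {"atrocious", "monster", "disgusting", "unbelievable"}

@dataclass
class Item:
    text: str

def triage(item: Item) -> str:
    # Crude tokenization: lowercase and strip common punctuation.
    words = {w.strip(".,!?:").lower() for w in item.text.split()}
    urgent = bool(words & URGENT_WORDS)
    charged = bool(words & OUTRAGE_WORDS)
    if urgent:
        return "deliver now"       # action needed regardless of tone
    if charged:
        return "defer"             # emotional, but no action required
    return "deliver normally"

print(triage(Item("Recall notice: evacuate the building tonight")))
print(triage(Item("This atrocious crime proves you are a monster!")))
```

The point of the sketch is the policy, not the detection: urgent items pass through even if emotionally charged, while pure outrage-bait gets queued for later, exactly like the assistant in the story above.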

  • xxd@discuss.tchncs.de · 9 months ago

    Leaving aside the dystopian echo chamber this could result in, you could argue it would help a lot with fake news. Fake news is easy to spread and more present than ever, and for every person there is probably that one piece of news just believable enough not to question. And then the next just-believable piece. And another. I believe no one is immune to being influenced by fake stories, maybe even radicalized if they are targeted just right. A firewall simply filtering out everything non-factual would already prevent a lot of societal damage, I think.

    • GrymEdm@lemmy.world · 9 months ago

      There are enormous issues with who decides what makes it through the filter, how to handle claims of unknown truth (say, ongoing research), and the hazard of training consumers of information to assume that everything reaching them is completely factual (the whole point of said fake-news filter). If you’d argue that people on the far side of the filter can still be skeptical, then just train that skepticism and avoid censorship via filter.

      • xxd@discuss.tchncs.de · 9 months ago

        Yeah, I agree. It’s not easy to determine truth, and whoever decides it might introduce bias that then gets rolled out to everyone. Ongoing research or unconfirmed information could simply carry a “currently being researched” or “not yet confirmed” label. I’m only saying that in an ideal world where this works, it could be safer than relying on people being skeptical, because everyone eventually fails to be skeptical about something.