  • The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

    The real problem with the Rationalists is that they *think they can do better*, that knowing a few cognitive fallacies and logical tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.

    We need to explain that yes, science has its flaws, but it still shits all over pseudobayesianism.



  • To be honest, I’m just kinda annoyed that he ended on the story about his mate Aaron, who went on surfing trips to Indonesia and gave money to his new poor village friends. The author says Aaron is “accountable” to the village, but that’s not true, because Aaron is a comparatively rich first-world academic who can go home at any time. Is Aaron “shifting power” to the village? No, because if they don’t treat him well, he’ll stop coming to the village and stop funding their water supply upgrades. And he personally benefits from his spending, in the form of praise and friendship.

    I’m sure Aaron is a fine guy, and I’m not saying he shouldn’t give money to his village mates, but this is not a good model for philanthropy! A software developer who just donates a bunch of money unconditionally to the village (via GiveDirectly or something) is arguably more noble than Aaron here, giving without any personal benefit or feel-good surfer energy.







  • For people who don’t want to go to Twitter, here’s the thread:

    Doomers: “YoU cAnNoT dErIvE wHaT oUgHt fRoM iS” 😵‍💫

    Reality: you literally can derive what ought to be (what is probable) from the out-of-equilibrium thermodynamical equations, and it simply depends on the free energy dissipated by the trajectory of the system over time.

    While I am purposefully misconstruing the two definitions here, there is an argument to be made by this very principle that the post-selection effect on culture yields a convergence of the two

    How do you define what is “ought”? Based on a system of values. How do you determine your values? Based on cultural priors. How do those cultural priors get distilled from experience? Through a memetic adaptive process where there is a selective pressure on the space of cultures.

    Ultimately, the value systems that survive will be the ones that are aligned towards growth of its ideological hosts, i.e. according to memetic fitness.

    Memetic fitness is a byproduct of thermodynamic dissipative adaptation, similar to genetic evolution.



  • Solomonoff induction is a big rationalist buzzword. It’s meant to be the platonic ideal of Bayesian reasoning which, if implemented, would be the best deducer in the world and get everything right.

    It would be cool if you could build this, but it’s literally impossible. The induction method is provably incomputable.

    The hope is that if you build a shitty approximation to Solomonoff induction that “approaches” it, it will perform close to the perfect Solomonoff machine. Does this work? Not really (there’s a toy sketch of what such an “approximation” even looks like at the end of this comment).

    My metaphor is that it’s like coming to a river you want to cross, and being like “Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I’ll be able to get across”. You aren’t Moses. Build a bridge.
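    To make “approximation” concrete, here’s a toy sketch in Python. The four-hypothesis “program space” and the function names are made up purely for illustration; real Solomonoff induction sums over every program of a universal Turing machine, weighted by 2^-length, and needs to know which ones halt, which is exactly the part you can’t compute.

```python
# Toy "approximate Solomonoff induction" over a tiny, hand-picked hypothesis
# space. The hypotheses below are illustrative stand-ins; the real thing sums
# over ALL programs of a universal machine, weighted by 2^-length, and needs
# to know which programs halt -- which is undecidable.

from typing import Callable, List, Tuple

# A "program" here is just (description length in bits, next-bit predictor).
Program = Tuple[int, Callable[[List[int]], int]]

PROGRAMS: List[Program] = [
    (3, lambda h: 0),                        # "always predict 0"
    (3, lambda h: 1),                        # "always predict 1"
    (5, lambda h: h[-1] if h else 1),        # "repeat the last bit"
    (7, lambda h: (1 - h[-1]) if h else 0),  # "flip the last bit"
]

def predict_next_is_one(history: List[int]) -> float:
    """Weight each program by 2^-length, drop the ones inconsistent with the
    observed bits, and return the posterior probability that the next bit is 1."""
    total = 0.0
    mass_on_one = 0.0
    for length, prog in PROGRAMS:
        # Keep the program only if it reproduces every observed bit so far.
        if all(prog(history[:i]) == bit for i, bit in enumerate(history)):
            weight = 2.0 ** -length
            total += weight
            if prog(history) == 1:
                mass_on_one += weight
    return mass_on_one / total if total else 0.5  # no survivors: shrug

print(predict_next_is_one([1, 1, 1]))  # surviving hypotheses all say 1 -> 1.0
```

    Even this toy only works because the four hypotheses were hand-picked to obviously terminate; “approaching” the real thing means summing over programs you can’t even tell will finish running.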


  • Ahh, I fucking haaaate this line of reasoning. Basically it’s “we’re no worse than average, therefore there’s no problem”, followed by some discussion of “base rates” of harassment or whatever.

    Except that the average rate of harassment and abuse in pretty much every large group is unacceptably high unless you take active steps to prevent it. You know what’s not a good way to prevent it? Downplaying reports of harassment, calling the people bringing attention to it biased liars, and explicitly trying to avoid kicking out harmful characters.

    Nothing like a so-called “effective altruist” crowing about having a C- passing grade on the sexual harassment test.


  • I think people are misreading the post a little. It’s a follow-on from the old AI x-risk argument: “evolution optimises for having kids, yet people use condoms! Therefore evolution failed to “align” humans to its goals, therefore aligning AI is nigh-impossible”.

    As a commenter points out, for a “failure”, there sure do seem to be a lot of human kids around.

    This post then decides to take the analogy further, and be like “If I was hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it’d be really hard! Therefore we can’t get an AI to build us, like, a bridge, without it developing ulterior motives”.

    You can hypothetically make this bad argument without supporting eugenics… but I wouldn’t put money on it.