Most of the article is well-trodden ground if you’ve been following OpenAI at all, but I thought this part was noteworthy:
Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”
it’s very funny to me that all the copilot examples this article breathlessly relates are things LLMs are absolutely fucking terrible at
christ, I got bored and tapped out too soon. it’s fucking unsettling how hard this article tries to dodge around how insane all of this is — how much it normalizes these bad ideas wrapped in worse nationalism on Scott and Microsoft’s part, and how it tries to excuse OpenAI being run and staffed by cultists as them being problematically enthusiastic or whatever
I probably should have just told people to skip everything in the article prior to the part I quoted; I agree most of it was very boring.
I agree, the article is way too credulous about the people working with and associated with OpenAI, and it doesn’t delve early enough, or deeply enough, into the dangerous weirdness of the organisation or the EA/rationalist crowd that have been leading it.