• 1 Post
  • 106 Comments
Joined 1 year ago
Cake day: June 29th, 2023


  • The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others who assert that its constraints will drive AI developers out of California.

    Man, if I can’t even build homemade nuclear weapons, what CAN I do? That’s it, I’m moving to Nevada!




  • I’ve thought about a similar idea before in the more minor context of stuff like note-taking apps – when you’re taking notes in a paper notebook, you can take notes in whatever format you want, you can add little pictures or diagrams or whatever, arranged however you want. Heck, you can write sheet music notation. When you’re taking notes in an app, you can basically just write paragraphs of text, or bullet points, and maybe add pictures in some limited predefined locations if you’re lucky.

    Obviously you get some advantages in exchange for the restrictive format (you can sync/back up things to the internet! you can search through your notes! etc) but it’s by no means a strict upgrade, it’s more of a tradeoff with advantages and disadvantages. I think we tend to frame technological solutions like this as though they were strict upgrades, and often we aren’t so willing to look at what is being lost in the tradeoff.



  • Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?

    This is kind of the central issue to me honestly. I’m not a lawyer, just a (non-professional) artist, but it seems to me like “using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work” is extremely not fair use. In fact it’s kind of a prototypically unfair use.

    Meanwhile Midjourney and OpenAI are over here like “uhh, no copyright infringement intended!!!” as though “fair use” is a magic word you say that makes the thing you’re doing suddenly okay. They don’t seem to have very solid arguments justifying them other than “AI learns like a person!” (false) and “well, Google Books did something that’s not really the same at all that one time”.

    I dunno, I know that legally we don’t know which way this is going to go, because the AI people presumably have very good lawyers, but something about the way everyone seems to frame this as “oh, both sides have good points! Who will turn out to be right in the end?” really bugs me for some reason. Like, it seems to me that there’s a notable asymmetry here!





  • My main thought reading through this whole thing was like, “okay, in a world where the rationalists weren’t closely tied to the neoreactionaries, and the effective altruists weren’t known by the public mostly for whitewashing the image of a guy who stole a bunch of people’s money, and libertarians and right-wingers were supported by the mainstream consensus, I guess David Gerard would be pretty bad for saying those things about them. Buuuut…”


  • Clicking through to one of the source articles

    Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?”

    Okay, I’m not a big-brain edtech integration admin, but I seem to recall that like fifteen years ago we had a website that my parents could check to see my grade in math. I feel like this was already a solved problem honestly.