Currently, talking to a face is the ultimate guarantee that you are communicating with a human (and, on a subconscious level, it makes you try to relate, empathise, and so on). If humanoid robot technology eventually crosses the Uncanny Valley, discovering that I had been talking to a humanoid running an LLM, and that my intuitions had been betrayed, would undermine the instinctive trust I extend to the other party when I see a human face. That would degrade my social interactions across the board, because I would live in constant suspicion that the humans I was talking to weren't actually human.

For this reason, I think the law should require that humanoid robots be clearly differentiated from humans. Or, at the very least, people should have the right to opt out of encountering realistic-looking humanoids.

  • perestroika@slrpnk.net · 23 hours ago

    Having a human form would not exclude clear differentiation, however. :)

    Just as a chatbot posting on social media can append a footer such as "this content was posted by a robot" to an otherwise fluent, human-like message, a humanoid robot can clearly identify itself as a robot even while having human form.
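
    A minimal sketch of what that footer mechanism could look like in practice, assuming a hypothetical helper that a bot runs on every message before posting (the function name and footer wording are made up for illustration, not any real platform's API):

    ```python
    # Hypothetical helper: append a machine-authorship disclosure to every
    # message before a bot posts it. Names and wording are illustrative only.
    DISCLOSURE_FOOTER = "\n\n---\nThis content was posted by a robot."

    def with_disclosure(message: str) -> str:
        """Return the message with the disclosure footer appended."""
        return message + DISCLOSURE_FOOTER

    if __name__ == "__main__":
        # The reply reads as fluent and human-like, but the footer makes
        # the machine authorship explicit to anyone who sees it.
        print(with_disclosure("Here is a fluent, human-like reply."))
    ```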

    Personally, I think such a design requirement is highly reasonable on social media (as a barrier, or at least an action threshold, against automated mass manipulation), and probably also in real life, if a day comes when human-like robots are abundant.