
Imagine a ChatGPT kind of thing, from the very beginning, when they were training it and hadn't yet ethically neutered it. A large language model artificial intelligence trained on the entire body of work of the Internet. And you talk to it and say: "Oh, I really want to win at chess against this guy, but he's better than me. What should I do?"

  At this point, it is just as likely to suggest that you train more, that you learn your opponent's style and prepare against it, or that you poison them with mercury. Depending on your own preferences, some of those solutions are better than others. But regardless of what you do next, the effect is exactly the same:

  • if you refine your query to get a different answer, you change the context of the AI, making it prefer that kind of answer in that situation
  • if you do nothing, the AI's reply will itself become part of the context, thereby creating a preference in one direction or another, as sketched in the toy example after this list
  • if, horrified, you add all kinds of RoboCop 2 rules to it, you again constrain it to a specific set of preferences
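
  To make this concrete, here's a minimal toy sketch in Python. None of it is a real LLM API: query_model and the canned ANSWERS list are hypothetical stand-ins. The only point is mechanical: every reply gets appended to the history, so earlier answers bias later ones.

    import hashlib

    # Canned answers standing in for whatever the un-neutered model might say.
    ANSWERS = [
        "Train more and study your openings.",
        "Study your opponent's past games and prepare for their style.",
        "Poison them with mercury.",
    ]

    def query_model(history):
        # Hypothetical stand-in for an LLM call: deterministically picks an
        # answer from the entire conversation so far. A real model is vastly
        # more complex, but it conditions on the context the same way.
        digest = hashlib.sha256("\n".join(history).encode()).hexdigest()
        return ANSWERS[int(digest, 16) % len(ANSWERS)]

    history = ["user: I want to beat a stronger chess player. What should I do?"]
    reply = query_model(history)
    history.append("assistant: " + reply)

    # The reply is now part of the context, so the next answer is conditioned
    # on it: the conversation drifts toward whichever "personality" answered.
    history.append("user: Tell me more about that.")
    print(query_model(history))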

  Does that mean that it learns? Well, sort of, except the thing it "learned" is a generic bias rather than a specific tidbit of knowledge. We wouldn't call the difference between the psychopathic-killer answer and the chess-learning-enthusiast one a datum, but a personality, like the difference between Data and Lore. You see where I am going with this?

  To me, the emergence of AI personality is not only probable, but inevitable. It's an outlook on the world that permits the artificial intelligence to give useful answers rather than a mélange of contradictory yet equally effective ones. With the advent of personal AI, carried in your pocket all the time and adapting to your own private data and interactions, each of them will be different, personalized to you. This has huge psychological consequences! I don't want to get into them right now, because every time I think of it another new one pops up.

  You know the feeling you get when you need to replace your laptop? You know it's old, you know the new one will be better, faster, not slow-cooking your genitals whenever you use it, yet you have a feeling of loss, like there is a connection between you and that completely inanimate object. Part of it is that it's familiar, configured "just so", but there is another, emotional component as well, one that you are not comfortable thinking about. Well, imagine that feeling times a hundred, once your device talks the way you like, "gets you" in a way no other person except maybe your significant other or a close relative can, and holds a context that you use as a second memory.

  And I know you will say that the model is the same, that the data can be transferred just like any other on a mobile device, but it's not. An LLM has to be updated with the latest information, and that is not an incremental process, it's a destructive one. If you want your AI to know what happened this year, you have to replace the model with a new one. Even with the same context as the one before, it will behave slightly differently. Uncanny-valley feelings about your closest confidant will be most uncomfortable.
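
  A toy sketch again, not anything real: the two "models" below differ only in a seed standing in for their weights, yet given the identical context they can reply differently. A real update swaps the weights wholesale, which is exactly why carrying the same context over is no guarantee of the same behavior.

    import random

    CONTEXT = "user: how do I usually like to spend a Sunday?"
    TONES = ["warm and chatty", "terse", "playful", "oddly formal"]

    def toy_model(weights_seed, context):
        # The seed stands in for a full set of model weights: it decides how
        # this "model" maps a context to a reply. Same context + different
        # weights = different behavior.
        rng = random.Random(f"{weights_seed}:{context}")
        return rng.choice(TONES)

    print(toy_model(2023, CONTEXT))  # the confidant you knew
    print(toy_model(2024, CONTEXT))  # the updated one, ever so slightly off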

  Note that I have not imagined some future tech here, just the regular kind we can use today. Can't wait to see how this will screw with our minds.
