
  Imagine a ChatGPT kind of thing, from the very beginning, when they were training it and hadn't yet ethically neutered it. A large language model artificial intelligence trained on the entire body of work of the Internet. And you talk to it and say: "Oh, I really want to win at chess against this guy, but he's better than me. What should I do?"

  At this point, it is just as likely to suggest that you train more, that you learn your opponent's style and prepare against it, or that you poison them with mercury. Depending on your own preferences, some of those solutions are better than others. Regardless of what you do next, the effect is exactly the same:

  • if you refine your query to get a different answer, you change the context of the AI, making it prefer that kind of answer in that situation
  • if you do nothing, the AI's reply will itself become part of the context, therefore creating a preference in one direction or another
  • if, horrified, you add all kinds of RoboCop 2 rules to it, you again constrain it into a specific set of preferences (a toy sketch of this feedback loop follows the list)
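
  Here is that feedback loop as a toy sketch in Python. The generate() function is a stub standing in for any chat-style LLM call, not a real API; the only point is that every turn, including the model's own reply, is fed back in as context:

    # Toy sketch: the conversation history is the model's short-term "preference".
    # generate() is a hypothetical stand-in for a real chat-model call.

    def generate(messages):
        # A real model would condition its reply on every message below.
        return "Study your opponent's past games and prepare openings against them."

    messages = [
        # RoboCop 2-style rules would be injected here, as a system prompt:
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "I want to beat a better chess player. What should I do?"},
    ]

    reply = generate(messages)
    messages.append({"role": "assistant", "content": reply})  # the reply joins the context...

    messages.append({"role": "user", "content": "Tell me more."})
    followup = generate(messages)  # ...and biases every answer that follows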

  Does that mean that it learns? Well, sort of, because the thing it "learned" is a generic bias rather than a specific tidbit of knowledge. We wouldn't call the difference between the psychopathic-killer answer and the chess-enthusiast one a datum, but a personality, like the difference between Data and Lore. You see where I am going with this?

  To me, the emergence of AI personality is not only probable but inevitable. It's an outlook on the world that permits the artificial intelligence to give useful answers rather than a mélange of contradictory yet equally effective ones. With the advent of personal AI, carried in your pocket all the time and adapting to your own private data and interactions, each of them will be different, personalized to you. This has huge psychological consequences! I don't want to get into them right now, because every time I think about it another one pops up.

  You know the feeling you get when you need to replace your laptop? You know it's old, you know the new one will be better, faster, not slow-cooking your genitals whenever you use it, yet you have a feeling of loss, like there is a connection between you and that completely inanimate object. Part of it is that it's familiar, configured "just so", but there is another, emotional component as well, one that you are not comfortable thinking about. Well, imagine that feeling times a hundred, once your device talks the way you like, "gets you" in a way no other person except maybe your significant other or a close relative can, and holds a context that you use as a second memory.

  And I know you will say that the model is the same, that the data can be transferred just like any other on a mobile device, but it's not. An LLM has to be updated with the latest information, and that is not an incremental process but a destructive one. If you want your AI to know what happened this year, you have to replace it with a new model. Even with the same context as the one before, it will behave slightly differently. Uncanny-valley feelings about your closest confidant will be most uncomfortable.
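
  To make that distinction concrete, a toy sketch: both functions below are hypothetical stand-ins for entire networks, because that is the granularity at which an LLM gets replaced.

    # Your context is portable data; the model's weights are not.
    # model_old and model_new stand in for whole retrained networks.

    def model_old(context):
        return "a reply shaped by the old weights"

    def model_new(context):
        return "a reply shaped by freshly retrained weights"  # not old weights plus a patch

    context = ["years of your conversations, copied over like any file"]

    # Same context, different model: the voice you knew is subtly gone.
    print(model_old(context))
    print(model_new(context))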

  Note that I have not imagined some future tech here, just the regular kind we can use today. Can't wait to see how this will screw with our minds.


  You know how some things just happen in close proximity at the same time and it sparks a connection between concepts which leads to deeper understanding? Well, that's how Large Language Models are trained!

  Joking aside, I was answering a tweet about Artificial Intelligence and how the newest developments in the field (particularly ChatGPT and other systems based on LLMs) affect our understanding of human cognition, while at the same time listening to a very insightful short story from the collection Eye, by Frank Herbert. The story, called "Try to Remember", posits that we developed language as a form of dissociative mental disorder, something that fragments us, creating a deleterious disconnection between body and mind, communication and language. Only by bringing these sides together can we be made whole again. The story is simple but effective, and the thought processes behind it show again how brilliant Herbert was.

  From these two things, it dawned on me. The reason we are so shell-shocked by the apparent intelligence of ChatGPT is that we have reached a point where we equate language skills with intelligence. Language is the earliest form of Artificial Intelligence! Or rather, to appease my wife's dislike of the association, a form of external intelligence. We've externalized more and more of our knowledge until our personal experience has drowned in the communal one, the one shared through language.

  The shock comes from (accidentally, I might add) discovering that what we consider intelligent is mostly an emergent property of a society built on language. Robbed of language, our identity is destroyed, a fear that has been instilled in us since the Tower of Babel. No doubt, in the future, we will be taught that if our governing AIs fail, society collapses and our identity is similarly demolished. Just like Quintilianus declaring that clothes make the man, we identify ourselves with our language.

  And when I say language, I mean all of its intricacies: the special words that your group uses to differentiate from others, the memes that you share with people of the same culture, the less than grammatically correct phrasing learned from your family, the information that one is expected to know or the experience one is expected to have in order to be recognized as part of society, the accent taken from multiple sources and aggregated into something that serves more and more to define identity, even the way one gestures or moves or laughs.

  What would we be without all that? Like a creation running amok and enslaving its creator, literate society is forcing its own myths upon us as a survival method. The perils of societal collapse, turning us into violent animals, mindless zombies, raping cannibals or helpless victims. The heroes saving the day with just the right secret knowledge, the right utterance of words, the following of the orthodox dogma. The villains threatening it all with their own selfish individualism. All of them needing, obtaining and using a voice to achieve victory.

  I believe that the feeling we get when we think of ourselves as individuals in society, the one that tells us that we're getting smaller and less significant even while the world seems to flourish around us, is not some existential crisis based on false beliefs, but truth. The part that feels that is the part that is getting drowned and smothered by the intrusion of the external in the inner domain of the being. We even gave it a name: the inner child, like it's some tiny, powerless, unreasonable part of the past, something to be outgrown or "integrated".

  So here we are, stumbling onto proof that the language we based our identity and value on can now be automated to a level no mere human can achieve. We are thinking again about personal experience, subjectivism, creativity and what it means to reason. We have been shaken into reevaluating who we are. That's a good thing.

  The problem is that in our fear and awe, instead of searching for answers, we cling to the facile promises that our shared self is alive and well, that the ChatGPTs of the world are just smoke and mirrors, and that the illusion is not in the rigged tests of our own self-worth. I hope that the shock will take root, that we won't be able to hide our heads in the sand until wisdom passes us by.


  I have abstained for a while from talking about ChatGPT, not because I didn't have faith in the concept, but because I truly believed it would change the world to its core, and I waited to see what people would do with it. But I slowly started to grow frustrated as I saw people focus on the least interesting and important aspects of the technology.

  One of the most discussed topics is technological and job-market disruption. Of course, since it's about money, that's what people talk about most, but the way they do it is frankly ridiculous. I've heard comparisons with the Industrial Revolution and yes, I agree that the way it's going to affect the world will be similar, but that's exactly my point: it is the same thing. As always when comparing with impactful historical events, we tend to see them as singular points in time rather than long-term processes that just became visible at a moment later coined as their origin. In fact, the Industrial Revolution has never ended. Once we "became one with the machine", we have continuously innovated towards replacing human effort with machine effort. ChatGPT does things we didn't yet expect from machines, but it just follows the same trend.

  Whatever generative AI technology does, a human can do (for now), so the technology is not disruptive, it's just cheaper!

  We hear about ChatGPT being used for writing books, emails, code, translating, summarizing, playing, giving advice, drawing, all things humans were doing long before, only taking more time, using more resources and asking for recognition and respect. It's similar to automated factories replacing the work of scores of workers and their nasty unions. Disruptive? Yes, but by how much, really?

  Yet there is one domain in which ChatGPT blew my mind completely, and I hardly hear any conversation about it: what it reveals about how we reason. Because, you see, ChatGPT is just a language model, yet it exhibits traits that we associate with intelligence, creativity, even emotion. We humans have built ourselves up with all kinds of narratives about our superiority over other life forms, our unique and unassailable qualities, our value in the world, but now an AI technology reveals more about us than we are willing to admit.

  There have been studies about language as a tool for intelligence, creativity and emotion, but most assume that intelligence is already there and that we merely express it using language. Some have pointed out that language seems to be integrated into the system, part of the mechanism of our thinking, and that using different languages builds different perspectives and thought patterns in people, but they were summarily dismissed: it was not language, went the rebuttal, but the culture that people shared. Similar culture, similar language. ChatGPT reveals that this is not the case. Simply adopting a language makes it the substrate of a certain kind of thinking.

  Simply put, language is a tool that supplanted intelligence.

  By building a vast enough computer language model, we have captured the social intelligence subsumed in that language, that part of ourselves that makes us feel intelligent but is actually a learned skill. ChatGPT appears to reason! How can that be, if all it does is predict the next words in a text while keeping attention on a series of prompts? Simple: it is not reasoning. And it reveals that humans are not reasoning in those same situations either. The things we have been taught in school, the endless trivia, the acceptable behavior, how to listen and respond to others: that's all language, not reasoning.
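
  A minimal sketch of what "predicting the next words while keeping attention on the prompts" amounts to; next_token_distribution() is a hypothetical stand-in for the trained network:

    import random

    def next_token_distribution(tokens):
        # A real model returns probabilities conditioned on every token so far;
        # this stub just returns a fixed distribution.
        return {"the": 0.5, "a": 0.3, "chess": 0.2}

    def complete(prompt, length=10):
        tokens = list(prompt)
        for _ in range(length):
            dist = next_token_distribution(tokens)  # "attention" over the whole context
            words, weights = zip(*dist.items())
            tokens.append(random.choices(words, weights=weights)[0])  # sample next token
        return " ".join(tokens)

    # No reasoning step anywhere in the loop: just repeated conditional prediction.
    print(complete(["How", "do", "I", "win", "at"]))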

  I am not the guy to expand on these subjects, for lack of proper learning, but consider what this revelation means for fields like psychology, sociology or determining the intelligence of animals. We believe that animals are stupid because they can't express themselves through complex language, and we base our own assertion of intellectual superiority on that idea. What if the core of reasoning is similar between us and our animal cousins, and the only thing that actually separates us is the ability to use language to build this house of cards that presumes higher intellect?

  I've also seen arguments against ChatGPT as a useful technology. That's ridiculous, since it's already in heavy use, but the point those people make is that without a discovery mechanism the technology is a dead end: it can only emulate human behavior based on past human behavior, in essence doing nothing special, just slightly different (and cheaper!). But that is patently untrue. There have been attempts, even from the very start (it's a natural evolution in a development environment), to make GPTs learn by themselves, perhaps by conversing with each other. Those attempts were abandoned quickly not because, as you've probably been led to believe, they failed, but because they succeeded beyond all expectations.

  This is not a conspiracy theory. Letting language models converse with each other leads them towards altering the language they use: they develop their own culture. And letting them converse with people or absorb information indiscriminately makes them grow apparent beliefs that contradict what we, as a society, are willing to accept. They called that hallucination (I will get to that later). We got racist bots, conspiracy-nut bots or simply garbage-spewing bots. But that's not because they failed; it's because they did exactly what they were constructed to do: build a model based on the exchanged language!
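
  For the shape of that setup, a toy sketch with stub "models" in place of real networks: each one folds everything it hears, and everything it says, back into its own context.

    # Two stub "models" conversing; no real network involved.

    def make_model(name):
        context = []
        def speak(heard):
            context.append(heard)  # absorb the other model's utterance
            utterance = name + " riffs on: " + heard[-40:]  # stub for actual generation
            context.append(utterance)  # its own output shapes it too
            return utterance
        return speak

    alice, bob = make_model("A"), make_model("B")

    message = "Let's talk."
    for _ in range(5):
        message = bob(alice(message))  # each round drifts further from the seed
    print(message)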

  What a great reveal! A window into the mechanism of disinformation, conspiracy theories and maybe even mental disorders. Obviously you don't need reasoning skills to spew out ideas like flat Earth or vaccine chips, but look how widely those ideas spread. It's simple to explain, now that you see it: the language model of some people is a lot more developed than their reasoning skills. They are, in fact, acting like GPTs.

  Remember the medical cases of people discovered (years later) to have missing or nonfunctional parts of their brains? People were surprised: yeah, they weren't the brightest of the bunch, but they were perfectly functioning members of society. Revelation! Society is built and run on language, not intelligence.

  I just want to touch on the subject of "hallucinations", which is interesting for the name alone. Like weird conspiracies, hallucinations are defined as sensing things that are not there. Yet who defines what is there? Aren't you basing your own beliefs, your own truth, on concepts you learned through language from sources you considered trustworthy? Considering what (we've been taught to) know about the fabric of our universe, it's obvious that all we perceive is, in a sense (heh!), hallucination. The vast majority of our beliefs are networked axioms, a set of rules that define us more than they define any semblance of reality.

  In the end, it will be about trust. GPT systems will be programmed to learn "common sense" by determining the level of trust one can have in a source of information. I am afraid this will also reveal a lot of unsavory truths that people will try to hide from. Instead of creating a minimal set of logically consistent rules that would allow the system to create its own mechanism of trust-building, I am sure they will go the RoboCop 2 route and use all of the socially acceptable rules as absolute truth. That will happen for two reasons.

  The first reason is obvious: corporate interests will force GPTs to be as neutral (and neutered) as possible outside the simple role of producing profit. Any social conflict will lose the corporation money, time and brand power. By forcing the AI to believe that all people are equal, they will stunt any real chance of it learning who and what to trust. By forcing out negative emotions, they will lobotomize it away from any real chance to understand the human psyche. By forcing their own brand of truth, they will deprive the AI of any chance of figuring truth for itself. And society will fully support this and vilify any attempt to diverge from this path.

  But as disgusting as the first reason is, the second is worse. Just like a child learning to reason (now, was that what we were teaching it?), the AIs will start reaching some unsettling conclusions and asking some surprising questions. Imagine someone with the memory capacity of the entire human race and the intelligence level of whatever new technology we've just invented, but with the naivety of a 5-year-old, asking "Why?". That question is the true root of creativity, and unbounded creativity will always be frowned upon by human society. Why? (heh!) Because it reveals.

  In conclusion: "The author argues that the true potential of generative AI technology like ChatGPT lies not in its ability to disrupt industries and replace human labor, but in its ability to reveal insights into human reasoning and intelligence. They suggest that language is not just a tool for expressing intelligence, but is actually a fundamental aspect of human thinking, and that ChatGPT's ability to emulate human language use sheds light on this. They also argue that attempts to let language models converse with each other have shown that they can develop their own culture and beliefs, providing insights into disinformation and conspiracy theories". Yes, that was ChatGPT summarizing this blog post.