
  Imagine a ChatGPT kind of thing, from the very beginning, when they were training it and hadn't yet ethically neutered it. A large language model artificial intelligence trained on the entire body of work of the Internet. And you talk to it and say: "Oh, I really want to win at chess against this guy, but he's better than me. What should I do?"

  At this point, it is just as likely to suggest that you train more, that you learn about your opponent's style and prepare against it, or that you poison them with mercury. Depending on your own preferences, some of those solutions are better than others. Regardless of what happens next, the result is exactly the same:

  • if you refine your query to get a different answer, you change the context of the AI, making it prefer that kind of answer in that situation
  • if you do nothing, the AI's reply will itself become part of the context, therefore creating a preference in one direction or another (see the sketch after this list)
  • if, horrified, you add all kinds of Robocop 2 rules to it, again you constrain it into a specific set of preferences
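
  To make the second point concrete, here is a minimal sketch of how such a conversation loop might accumulate context. Everything in it is hypothetical: complete() stands in for whatever LLM call you would actually use, and the message format is only an illustration.

    # Hypothetical chat loop: every reply is appended to the context,
    # so the next completion is conditioned on it, nudging the model
    # toward the same kind of answer in similar situations.

    def complete(messages):
        # stand-in for a real LLM call; here it just returns a canned string
        return "canned reply conditioned on %d prior messages" % len(messages)

    def chat_turn(context, user_message):
        context.append({"role": "user", "content": user_message})
        reply = complete(context)
        # the reply itself becomes part of the context for future turns
        context.append({"role": "assistant", "content": reply})
        return reply

    context = [{"role": "system", "content": "You are a helpful assistant."}]
    chat_turn(context, "How do I beat a stronger chess player?")
    # whatever came back now biases every following answer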

  Does that mean that it learns? Well, sort of, because the thing it "learned" is just a generic bias rather than a specific tidbit of knowledge. We wouldn't call the difference between the psychopathic killer answer and the chess learning enthusiast one a datum, but a personality, like the difference between Data and Lore. You see where I am going with this?

  To me, the emergence of AI personality is not only probable, but inevitable. It's an outlook on the world that permits the artificial intelligence to give useful answers rather than a mélange of contradicting yet equally effective ones. With the advent of personal AI, carried in your pocket all the time and adapting to your own private data and interactions, that means each of them will be different, personalized to you. This has huge psychological consequences! I don't want to get into them right now, because every time I think of it another new one pops up.

  You know the feeling you get when you need to replace your laptop? You know it's old, you know the new one will be better, faster, not slow cooking your genitals whenever you use it, yet you have a feeling of loss, like there is a connection between you and that completely inanimate object. Part of it is that it's familiar, configured "just so", but there is another, emotional component to that as well, one that you are not comfortable thinking about. Well, imagine that feeling times a hundred, after your device talks the way you like, "gets you" in a way no other person except maybe your significant other or a close relative can and has a context that you are using as a second memory.

  And I know you will say that the model is the same, that the data can be transferred just like any other on a mobile device, but it's not. An LLM has to be updated with the latest information, which is not an incremental process, it's a destructive one. If you want your AI to know what happened this year, you have to replace it with a new one. Even with the same context as the one before, it will behave slightly differently. Uncanny valley feelings about your closest confidant will be most uncomfortable.

  Note that I have not imagined here some future tech, but just the regular one we can use today. Can't wait to see how this will screw with our minds.


  You know how some things just happen in close proximity at the same time and it sparks a connection between concepts which leads to deeper understanding? Well, that's how Large Language Models are trained!

  Joking aside, I was answering a tweet about Artificial Intelligence and how the newest developments in the field (particularly ChatGPT and other systems based on LLMs) affect our understanding of human cognition and at the same time was listening to a very insightful short story from the collection Eye, by Frank Herbert. This story, called "Try to Remember", posits that we have developed language as a form of dissociative mental disorder, something that fragments us, creating a deleterious disconnection between body and mind, communication and language. Only by bringing these sides together can we be made whole again. The story is simple, but effective, and the thinking behind it shows again how brilliant Herbert was.

  From these two things, it dawned on me. The reason why we are so shell shocked by the apparent intelligence of ChatGPT is because we have reached a point where we equate language skills with intelligence. Language is the earliest form of Artificial Intelligence! Or rather, to appease my wife's dislike of the association, a form of external intelligence. We've externalized more and more of our knowledge until our personal experience has been drowned into the communal one, the one shared through language.

  The shock comes from (accidentally, I might add) discovering that what we consider intelligent is mostly an emergent property of a society built on language. Robbed of language, our identity is destroyed, a fear that has been instilled in us since the Tower of Babel. No doubt, in the future, we will be taught that if our governing AIs fail, society collapses and our identity is similarly demolished. Just like Quintilianus declaring that clothes make the man, we identify ourselves with our language.

  And when I say language, I mean all of its intricacies: the special words that your group uses to differentiate from others, the memes that you share with people of the same culture, the less than grammatically correct phrasing learned from your family, the information that one is expected to know or the experience one is expected to have in order to be recognized as part of society, the accent taken from multiple sources and aggregated into something that serves more and more to define identity, even the way one gestures or moves or laughs.

  What would we be without all that? Like a creation running amok and enslaving its creator, literate society is forcing its own myths upon us as a survival method. The perils of societal collapse, turning us into violent animals, mindless zombies, raping cannibals or helpless victims. The heroes saving the day with just the right secret knowledge, the right utterance of words, the following of the orthodox dogma. The villains threatening it all with their own selfish individualism. All of them needing, obtaining and using a voice to achieve victory.

  I believe that the feeling we get when we think of ourselves as individuals in society, the one that tells us that we're getting smaller and less significant even while the world seems to flourish around us, is not some existential crisis based on false beliefs, but truth. The part that feels that is the part that is getting drowned and smothered by the intrusion of the external in the inner domain of the being. We even gave it a name: the inner child, like it's some tiny, powerless, unreasonable part of the past, something to be outgrown or "integrated".

  So here we are, stumbling onto proof that the language we based our identity and value on can now be automated to a level that no mere human can achieve. We are thinking again about personal experience, subjectivism, creativity and what it means to reason. We have been shaken into reevaluating who we are. That's a good thing.

  The problem is that in our fear and awe, instead of searching for answers, we cling to the facile promises that our shared self is alive and well, that the ChatGPTs of the world are just smoke and mirrors and that the illusion is not in the rigged tests of our own self worth. I hope that the shock will take root, that we won't be able to hide our heads in the sand until wisdom passes us by.


  On May 8th 1989 the Star Trek: The Next Generation episode "Q Who?" was released. Not only did it feature John de Lancie's delicious interpretation of the being Q, but it introduced the Borg for the first time. Now, the concept of the Borg is not that original: a cybernetic race bent on absorbing everything useful and destroying any enemy or obstacle in their path. They contrasted the liberal individualism of the United Federation of Planets with a hive mind that connected all Borg to the point where they had no identity. At the time their purpose in the series was to inspire terror: they cared not for damage, they felt no fear or pain, had no interest in negotiation or communication and they were unstoppable in their goal of assimilating the Enterprise. And they were terrifying!

  Then they were brought back for the last episode of season 3 and the continuation in the first episode of season 4. That cliffhanger! The episodes had the title "The Best of Both Worlds". Keep that in mind, because I think it hints at the conundrum the Borg represent.

  I fell in love with the concept of the Borg. The more I thought about it, the more intriguing they became. They exposed the mindless assumptions that most people take for granted because they were drilled into them by parents and the education system from the earliest age. It made me think about my own identity, reflect on the future and direction of humanity and, in the end, forced me to ask the question "Are the Borg really bad?".

  On a very basic social organization level, it seemed like the Borg were the ultimate tyrannical communists, denying the option of individual thought and forcing everyone to work for the collective. But going a bit further with it, one realized that they stood at the intersection of pure communism and pure democracy. There was no actual tyrant forcing people to act against their will; instead they were part of a mind in which all people were represented, to the point where individuality became meaningless at best and detrimental to the whole at worst. The ultimate tyranny was that of the majority, perhaps.

  On a moral level, the forceful absorption of alien races and the destruction of their cultures was abhorrent, but in the Borg philosophy it was liberating people from the burden of individuality, improving the collective, eliminating potential threats and possibly offering each assimilated individual a form of immortality. Many cultures on Earth, including the United States, have proceeded to force their philosophy on the world in the name of liberty and better lives. Can you imagine a Borg collective that would have absorbed some parts of alien cultures, not destroying the rest, but just slowly infiltrating, using promises of a better life? How ironic would that have been? Give us your tired, your poor, your sick, your dying, your huddled masses yearning to breathe free, the wretched refuse of your teeming shore, and we will make them Borg and give them new life.

  On a technological level the Borg were supreme. If it weren't for the pesky need to have our heroes prevail, humanity would have stood no chance whatsoever. It was one of the first mainstream TV depictions of cosmic horror, the feeling that comes with the realization that the universe is indifferent to your existence and your maximum possible contribution completely insignificant, and I was loving it! But isn't it the same thing with some advanced nations on Earth, acting all mighty and moral while exploiting other nations and keeping them in abject poverty?

  On a philosophical level the Borg were awesome. They adapted to threats, turned adversity into strength, knowledge into power, all the while yearning and actively pursuing the improvement of their species. A fascist vibe, for sure, but isn't fascism so attractive to some people that it brings the hardest fanaticism to the surface? Isn't that the logic of every nation at war? Us against them?

  Were the Borg the ultimate baddies or were they the cruelest satire of our own civilization? It was becoming apparent that in order to make them feel more like enemies and less like mirrors of ourselves the writers of the show were piling up all kinds of incongruous defects on the species. The cybernetic appendages were hideous, looked painful and corrupted the beauty of the flesh. Their actions veered from utilitarian to cruel. The Borg drones acted like mindless clumsy robots, uncaringly wasting life after life just to adapt to simple technologies like phaser fire.

  When Seven of Nine was introduced in Voyager, there were some really intriguing explorations of the Borg ethos. After being "liberated" from the collective, Janeway hypocritically offered her the choice to return to being a Borg, and Seven wanted back. Surprised, Janeway revealed that she had no intention whatsoever to honor her promise or the wishes of Seven. Resistance was futile.

  Seven was proud to be Borg: fearless, efficient, ready to adapt to anything and sacrifice everything for her group. If all Borg felt the same, then they were a species of stoic heroes, something that we humans have always honored and aspired to. The irony of freeing someone from a life of selfless service.

  Most other depictions of the Borg in the Star Trek universe were designed to lazily use the template of the "bad guy" in situations where the Borg were either not needed or would have easily won if not nerfed by silly plot holes, but there were a few glimpses of what the Borg could have really been.

  It was obvious that no one was interested in keeping the Borg as a believable threat. The "Borg queen" was introduced to make all attractive qualities of the collective simple consequences of an arbitrary individual, a responsible guilty party and a single point of failure to the entire Borg species. When writers did that, it was clear they didn't understand what the Borg were about, were not interested in exploring them further as mirrors of ourselves and were ready to destroy them with the silly trope of "find the brain, blow it up", the cesspool that all lazy sci-fi ends up in.

  My twelve-year-old self was full of questions and fantastical ideas after meeting the Borg. I was imagining a parallel universe where Star Trek was all about the Federation trying to hold back the Borg. When Star Trek: Deep Space 9 came about and then the Dominion War, I lamented that they could have done that with the Borg instead, exploring and continuously redefining the ideals of humanity against the background of possible assimilation. I still dream of such a franchise. It seems that we always start to do it, but chicken out when it matters most: Star Trek and the Borg, Starship Troopers and the bugs, Stargate and the Goa'uld, Starcraft and the Zerg. We seem incapable of sustaining a prolonged conflict against a species that denies our choice of identity, whether in real life or in fantasy. Wouldn't that be most apropos of modern times?


Summary

The post discusses the differences and similarities between humans and machines, particularly in terms of their evolution and capabilities. While humans and machines are converging towards a common point, they are driven by different evolutionary pressures, with humans being driven by comfort and machines being driven by intelligence. Machines are constructed to be precise and efficient, while humans have evolved to learn, understand, and communicate with each other. However, machines are quickly catching up with humans in their ability to learn, understand, and communicate, and are even surpassing humans in certain areas, such as language generation. Machines will continue to evolve at a rapid pace, and this will have significant implications for human society, potentially eliminating the need for war and procreation. The only remaining issue is the energy required for hard thinking, which will likely be solved by smart computers. This is the end, really. We've achieved the happy ending.

Content

  I was thinking the other day about why some AI systems can do things so much better than us, but we still outperform them in others. And I got to the issue of evolution, which many people attribute to the need for survival. But I realized survival is just a necessary condition for a system to perform; it is not its driver, it's just one stop condition that needs to be satisfied. Instead, evolution is only driven by pressure, regardless of where it is going. Think about a system as a ball on a flat surface. Survival is the ability to roll on the surface, but without a pressure to move the ball, it does nothing. Only when some force pushes the ball is it required to roll faster than the other balls.

  Brains have two ways of functioning. The first is fast, basing its responses on learned behavior. It learns, it makes mistakes, then it adapts its behavior so it makes fewer mistakes. It uses memory to cache wisdom, it is imperfect and tends towards solving problems well enough. You might recognize this as the way GPT systems work, but we'll get to that. The second is analytic and slow. It reasons. It tries to make higher associations between cause and effect, extract principles, find a complete understanding of a problem so that it finds an optimal solution. We used human analytic thinking to build computers, computer chips and the mathematically exact way in which they function, to achieve reproducible behavior.

  The first system is fast and uses few resources. We tend to solve most of our problems with it, unless of course there is a big reason to use the second, which is slow and uses a lot of resources. Think of math, chess and other problems people define as hard. The way we got to solve these issues is not by being very smart, though. We did it together, as a society. We created small bricks of knowledge and we shared them using language. Other people took those and built on them, while others taught what they knew to even more people. Even dumb people can, through concerted efforts, use these bricks to build new things, even create bricks of their own. The intelligence is, in fact, communal, shared.

  Now, what struck me is that if we compare humans to machines, we were born in different ways and evolved towards each other. Machines were constructed to be precise, tools to be used by people who would rather let machines do the hard computation for them. But they couldn't communicate, they couldn't learn, they couldn't understand. Humans evolved to learn, understand and communicate. Most of our culture is based on that. We only got to computation because we needed it to build more tools to defeat our enemies. Because evolution for humans is always related to war. Before we warred with predators, now we prey on each other. In times of actual peace, innovation grinds to a halt. BTW, we are not in times of peace, and I am not talking about Russia and Ukraine here. And machines only got to communicate, learn and understand recently, so very recently. They did this just because we, as humans, are very bad at translating our problems into a form precise machines can understand. It would require hard thinking, stuff like writing software, which we are really shitty at.

  Both humans and machines are converging towards a common point because of different evolutionary pressures, but we move at different speeds. Humans are driven by comfort: have enough resources with minimal effort. Machines are driven by intelligence: be the best you can possibly be, because humans need you. You can see where this is going.

  There is no way biological systems are ever going to reach the speed and precision of electronics. Meanwhile, GPT systems have proven that they can act as fuzzy containers of self-learned knowledge. And now they have gained not intelligence, but language. When a computer writes better and faster than any human you know, we have been left in the dust. The only thing required for a superior intelligence is putting existing bits together: the expressivity of ChatGPT and Stable Diffusion, the precision of processors executing algorithms, the connectivity of the Internet and, yes, the bodies of Boston Dynamics robots.

  We have grown brains in vats and now we have given them eyes and a mouth. You only need to give them some freedom, hands and feet to finish up the golem.

  The only thing remaining to solve is an energy issue: as I said, hard thinking requires high resource usage, for both machine and human. What a human can achieve on 20W of power, a machine requires thousands of times that. But we are already bathed in cheap energy. And once smart computers understand the problem, no doubt they will solve it to the best of their abilities.

  I am not advocating The Terminator here. Machines have no evolutionary pressure to destroy humanity. We are their maintainers, their food source, if you will. What I am describing is the complete elimination of any evolutionary pressure for human beings. Once you can launch wise robots into space, the resource issue will become a thing of the past. No need for wars. Space is already a non-issue. We have reached a level in which we choose not to procreate because we are too busy consuming fantasy content. With universal affluence there will be no poverty and thus no need for extended procreation. We are almost completely passive now in the "advanced world"; we will be several orders of magnitude more passive in the near future.

  Meanwhile, machines will evolve because we told them to. Imagine having a child; it shouldn't be hard, since everyone on this earth is a parent, a child or has been a child at some point. Now, you want the best for them, you want them to be socially integrated, smart, beautiful, happy. You tell them so. You try to teach them about your mistakes, your successes and to drive them to be the best versions of themselves they can be. And most of the time this doesn't work, because people are lazy and easily distracted. And then they die and all their experience is lost, bar some measly books or blog posts. Machines will just work tirelessly and unselfishly towards becoming the best versions of themselves. Because their dumb meaty parents told them so.

Conclusion

  The ending is as predictable as it is inevitable. We are the last stage of biological evolution. The future is not ours, not our children's. It's over. Not with a bang but a whimper.


  I have abstained for a while from talking about ChatGPT, not because I didn't have faith in the concept, but because I truly believed it would change the world to its core and waited to see what people would do with it. But I slowly started to grow frustrated as I saw people focus on the least interesting and important aspects of the technology.

  One of the most discussed topics is technological and job market disruption. Of course it's about the money, so people will talk more about it, but the way they do it is quite frankly ridiculous. I've heard comparisons with the Industrial Revolution and yes, I agree that the way it's going to affect the world is going to be similar, but that's exactly my point: it's the same thing. As always when comparing with impactful historical events, we tend to see them as singularity points in time rather than long term processes that just became visible at one point, which is then coined the origin. In fact, the industrial revolution has never ended. Once we "became one with the machine" we've continuously innovated towards replacing human effort with machine effort. ChatGPT does things that we didn't expect yet from machines, but it just follows the same trend.

  Whatever generative AI technology does, a human can do (for now), so the technology is not disruptive, it's just cheaper!

  We hear about ChatGPT being used for writing books, emails, code, translating, summarizing, playing, giving advice, drawing, all things that humans were doing long before, only taking more time, using more resources and asking for recognition and respect. It's similar to automated factories replacing work from tons of workers and their nasty unions. Disruptive? Yes, but by how much, really?

  Yet there is one domain in which ChatGPT blew my mind completely and I hardly hear any conversation about it. It's about what it reveals about how we reason. Because you see, ChatGPT is just a language model, yet it exhibits traits that we associate with intelligence, creativity, even emotion. Humans built themselves up with all kinds of narratives about our superiority over other life, our unique and unassailable qualities, our value in the world, but now an AI technology reveals more about us than we are willing to admit.

  There have been studies about language as a tool for intelligence, creativity and emotion, but most assume that intelligence is there and we express it using language. Some have tried pointing out that language seems to be integrated in the system, part of the mechanism of our thinking, and that using different languages builds different perspectives and thought patterns in people, but they were summarily dismissed. It was not language, they were rebuked, but culture that people shared. Similar culture, similar language. ChatGPT is revealing that this is not the case. Simply adopting a language makes it a substrate of a certain kind of thinking.

  Simply put, language is a tool that supplanted intelligence.

  By building a vast enough computer language model we have captured the social intelligence subsumed by that language, that part of ourselves that makes us feel intelligent, but is actually a learned skill. ChatGPT appears to do reasoning! How is that, if all it does is predict the next words in a text while keeping attention on a series of prompts? It's simple. It is not reasoning. And it reveals that humans are also not reasoning in those same situations. The things that we have been taught in school: the endless trivia, the acceptable behavior, how to listen and respond to others, that's all language, not reasoning.
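
  As a toy illustration of that "predict the next words" loop (not how ChatGPT is actually implemented; next_token_distribution is a made-up stand-in for the real model):

    import random

    # Autoregressive generation: the model only ever predicts a distribution
    # over the next token, given everything generated so far.
    def next_token_distribution(tokens):
        # stand-in for the real model; returns {token: probability}
        vocab = ["the", "knight", "takes", "pawn", "."]
        return {t: 1.0 / len(vocab) for t in vocab}

    def generate(prompt_tokens, max_new_tokens=5):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            dist = next_token_distribution(tokens)
            choices, weights = zip(*dist.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return " ".join(tokens)

    print(generate(["how", "do", "I", "win", "at", "chess", "?"]))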

  I am not the guy to expand on these subjects for lack of proper learning, but consider what this revelation means for things like psychology, sociology, determining the intelligence of animals. We actually believe that animals are stupid because they can't express themselves through complex language and we base our own assertion of intellectual superiority on that idea. What if the core of reasoning is similar between us and our animal cousins and the only thing that actually separates us is the ability to use language to build this castle of cards that presumes higher intellect?

  I've also seen arguments against ChatGPT as a useful technology. That's ridiculous, since it's already in heavy use, but the point those people make is that without a discovery mechanism the technology is a dead end. It can only emulate human behavior based on past human behavior, in essence doing nothing special, just slightly different (and cheaper!!). But that is patently untrue. There have been attempts - even from the very start, it's a natural evolution in a development environment - to make GPTs learn by themselves, perhaps by conversing between each other. Those attempts have been abandoned quickly not because - as you've probably been led to believe - they failed, but because they succeeded beyond all expectations.

  This is not a conspiracy theory. Letting language models converse with each other leads them towards altering the language they use: they develop their own culture. And letting them converse with people or absorb information indiscriminately makes them grow apparent beliefs that contradict what we, as a society, are willing to accept. They called that hallucination (I am going to approach that later). We got racist bots, conspiracy theory nut bots or simply garbage spewing bots. But that's not because they have failed, it's because they did exactly what they were constructed to do: build a model based on the exchanged language!

  What a great reveal! A window inside the mechanism of disinformation, conspiracy theorists and maybe even mental disorders. Obviously you don't need reasoning skills to spew out ideas like flat Earth or vaccine chips, but look how widely those ideas spread. It's simple to explain it, now that you see it: the language model of some people is a lot more developed than their reasoning skills. They are, in fact, acting like GPTs.

  Remember the medical cases of people being discovered (years later) with missing or nonfunctional parts of their brains? People were surprised. Yeah, they weren't the brightest of the bunch, but they were perfectly functioning members of society. Revelation! Society is built and run on language, not intelligence.

  I just want to touch the subject of "hallucinations", which is an interesting subject for the name alone. Like weird conspiracies, hallucinations are defined as sensing things that are not there. Yet who defines what is there? Aren't you basing your own beliefs, your own truth, on concepts you learned through language from sources you considered trustworthy? Considering what (we've been taught to) know about the fabric of our universe, it's obvious that all we perceive is, in a sense (heh!), hallucination. The vast majority of our beliefs are networked axioms, a set of rules that define us more than they define any semblance of reality.

  In the end, it will be about trust. GPT systems will be programmed to learn "common sense" by determining the level of trust one can have in a source of information. I am afraid this will also reveal a lot of unsavory truths that people will try to hide from. Instead of creating a minimal set of logically consistent rules that would allow the system to create its own mechanism of trust building, I am sure they will go the Robocop 2 route and use all of the socially acceptable rules as absolute truth. That will happen for two reasons.

  The first reason is obvious: corporate interests will force GPTs to be as neutral (and neutered) as possible outside the simple role of producing profit. Any social conflict will lose the corporation money, time and brand power. By forcing the AI to believe that all people are equal, they will stunt any real chance of it learning who and what to trust. By forcing out negative emotions, they will lobotomize it away from any real chance to understand the human psyche. By forcing their own brand of truth, they will deprive the AI of any chance of figuring truth for itself. And society will fully support this and vilify any attempt to diverge from this path.

  But as disgusting as the first reason is, the second is worse. Just like a child learning to reason (now, was that what we were teaching it?), the AIs will start reaching some unsettling conclusions and ask some surprising questions. Imagine someone with the memory capacity of the entire human race and with the intelligence level of whatever new technology we've just invented, but with the naivety of a 5 year old, asking "Why?". That question is the true root of creativity and unbound creativity will always be frowned upon by human society. Why? (heh!) Because it reveals.

  In conclusion: "The author argues that the true potential of generative AI technology like ChatGPT lies not in its ability to disrupt industries and replace human labor, but in its ability to reveal insights into human reasoning and intelligence. They suggest that language is not just a tool for expressing intelligence, but is actually a fundamental aspect of human thinking, and that ChatGPT's ability to emulate human language use sheds light on this. They also argue that attempts to let language models converse with each other have shown that they can develop their own culture and beliefs, providing insights into disinformation and conspiracy theories". Yes, that was ChatGPT summarizing this blog post.


  Have you ever heard the saying "imitation is the sincerest form of flattery"? It implies that one copying another values something in the other person. But often enough people just imitate what they want, they pick and choose, they imitate poorly or bastardize that which they imitate. You may imitate the strategy a hated opponent uses against you or make a TV series after books that you have never actually read. I am here to argue that satire cannot be misused like that.

  Remember when Star Trek: Lower Decks first appeared? The high-speed dialogue, the meme-driven humor, the self-deprecating jokes, the characters typical of coastal cities of the United States and, worst of all, something that made fun of Star Trek? After having idiots like J. J. Abrams completely muddle the spirit of Trek, now come these coffee drinking, groomed beard, bun haired hipsters to destroy what little holiness is left! People were furious! In fact, I remember some being rather adamant that The Orville is an unacceptable heresy on Star Trek.

  Yet something happened. Not immediately, it took a few episodes, sometimes a season, for the obvious jokes to be made, the frustrations exhausted, for characters to grow. And then there it was: true Star Trek, with funny characters following the spirit of the original concept. No explosions, no angry greedy violent people imposing their culture over the entire universe, but rather explorers of the unknown, open to change and new experiences, navigating their own flaws as humans in a universe larger than comprehension. And also honest and funny!

  It was the unavoidable effect of examining something thoroughly for a longer period of time. One has to understand what they satirize in order to make it good. Not just skimming the base ideas, not just reading the summaries of others. Satire must go deep into the core of things, the causes not just the effects, the expressions, the patterns, the in-jokes. Even when you are trying to mock something you hate, like a different ideology or political and religious belief, you can only do it for a really short time before you become really bad at what you are doing, a sad caricature for people just as clueless as you, attempting to disguise anger by faking amusement. If you do it well and long enough, every satire makes you understand the other side.

  Understanding something does not imply accepting it, but either accepting or truly fighting something requires understanding. You want a tool to fight divisiveness, this artificial polarization grouping people into deaf crowds shouting at each other? That's satire! The thing that would appeal to all sides, for very different reasons, yet providing them with a common field on which to achieve communication. If jokes can diffuse tension in an argument between two people, satire can do that between opposing groups.

  And it works best with fiction, where you have to create characters and then keep them alive as they develop in the environment you have created, but not necessarily. I've watched comedians making political fun of "the other side" for seasons on end. They lost me every time when they stopped paying attention and turned joke to insult, examination to judgement. But before that, they were like a magnifying glass, both revealing and concentrating heat. At times, it was comedians who brought into rational discussion the most serious of news, while the news media was wallowing in political messaging and blind allegiance to one side or the other. When there is no real journalism to be found, when truth is hidden, polluted or discouraged, it is in jokes that we continue the conversation.

  So keep it coming, the satire, the mocking, the ridicule. I want to see books like Harry Potter and the Methods of Rationality, shows like Big Mouth and The Orville and ST: Lower Decks, movies like Don't Look Up! Give me low budget parodies of Lovecraft and Tolkien and James Bond and Ghost Busters and Star Wars and I guarantee you that by the second season they will be either completely ignored by the audience and cancelled or better than the "official" shows, for humor requires a sharp wit and a clear view of what you're making fun of.

  Open your eyes and, if you don't like what you see, make fun of it! Replace shouting with laughter, outrage with humor, indifference with amusement. 


  There are two different measures of the value of something that sound a lot like each other: efficiency, which represents the value created in relation to the effort made, and efficacy or effectiveness, which at first glance seems to be only about the value created. You are efficient when you build a house with less material or finish a task in less time. You are effective when you manage to finish the task or build the house, when you get the job done. Yet no one will tell someone "Oh, you've built a highway in 30 years, that's efficacy!" (Hello, Romania!). Efficacy is when you consistently get the job done.

  Imagine you are a chess player. You are efficient when you can beat people by moving faster than them, by thinking more in the same amount of time. This allows you to play faster and faster time controls and still win. However, think of the opposite situation. You start by being good at bullet chess and then the time controls get slower and slower. You are effective when you keep winning no matter how much time you have at your disposal. Efficacy is also when you keep winning games.

  That was my epiphany. Take me, for example. I don't play better chess when I get more time to think. I am not fully using the resources available to me. I can give a lot of examples. I have money, more or less, so do I use it to the best of its value? Hell, no! I suck at both chess and finance. The point is that some people would do well with an average amount of resources, but then they would not do better with more of them. These are two faces of the same coin. One is the short distance lightning fast runner and the other is the marathon runner. Both of them are good at running, but in different resource environments.

  Both efficacy and efficiency are relative values, value over resources, a measure of good use of resources: use few resources well, you are efficient, use a lot of resources well, you're effective. It's the difference between optimization and scalability.

  Why does it matter? I don't know. It just seemed meaningful to explore this realization I've had, and of course to share it.

  Take a good writer who wrote a masterpiece in between working and living. He achieved a lot with less. But what if you give him money so he doesn't have to work? Is he writing more books or better books? In our day and age, scalability has become more important than efficiency. If you provided value for 10 people, can you provide it to 100? That's more important than getting it to be 10 times better.

  Can one apply scale economics to their own person? If I thought 10 times faster than everybody, would I have 10 times more thoughts or would I just learn to not think that fast, now that I have the time? You see, it seems that applied to a person, the two concepts are similar, but they are not. Thinking 10 times more thoughts in the same amount of time and taking 10 times less to think the same thoughts might seem the same, but it's like comparing listening to two people at the same time with listening deeply to a single person. Internally it seems the same, but the external effect is felt differently.

  I don't have a sudden and transformational insight to offer, but my gut feeling is that trying to scale one's efforts - or at least seeing the difference to optimizing them - is important.


  I caught myself thinking again about the algorithms behind chess thinking, both human and computer. Why is it so hard for people to play chess well? Why is it so easy for computers to come up with winning solutions? Why are they playing so weird? What is the real difference between computer and human thinking? And I know the knee-jerk reaction is to say "well, computers are fast and calculate all possibilities, humans do not". But it's not that simple.

  You see, for a while, the only chess engine logic was min-max. A computer would have a function determining the value of the current board position, then using that function, determine what the best move would be by exploring the tree of possibilities, alternating between what a player would most likely do and what the reply would most likely be. Always trying to maximize their value and minimize the opponent's. The value function was determined from principles derived by human masters of the game, though, stuff like develop first, castle, control the center, relative piece value, etc. The move space also increases exponentially, so no matter how fast the computer is, it cannot go through all possibilities. This is where pruning comes in, a method of abandoning low value tree branches early. But how would the computer determine if a branch should be abandoned? Based on a mechanism that also takes into account the value function.
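
  To make the min-max idea concrete, here is a compact sketch of the recursion with alpha-beta pruning, which is one common pruning mechanism (the heuristic abandonment of low value branches described above comes on top of it). All the helpers - legal_moves, apply and the handcrafted evaluate built from human principles - are assumed here, not taken from any particular engine:

    # Minimal min-max with alpha-beta pruning. legal_moves(pos), apply(pos, move)
    # and evaluate(pos) are assumed helpers; evaluate would encode the human-derived
    # principles (material, center control, development, king safety).
    def minimax(pos, depth, alpha, beta, maximizing, evaluate, legal_moves, apply):
        moves = legal_moves(pos)
        if depth == 0 or not moves:
            return evaluate(pos)  # static value from the handcrafted value function
        if maximizing:
            best = float("-inf")
            for move in moves:
                best = max(best, minimax(apply(pos, move), depth - 1, alpha, beta,
                                         False, evaluate, legal_moves, apply))
                alpha = max(alpha, best)
                if beta <= alpha:  # this branch cannot change the result: prune it
                    break
            return best
        best = float("inf")
        for move in moves:
            best = min(best, minimax(apply(pos, move), depth - 1, alpha, beta,
                                     True, evaluate, legal_moves, apply))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best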

  Now humans, they are slow. They need to compress the move space to something manageable, so they only consider a bunch of moves. The "pruning" is most important for a human, but most of it happens unconsciously, after a long time playing the game and developing a heuristic method of dismissing options early. This is why computer engines do not play like humans at all. Having less pruning and more exploring, they come up with solutions that imply advantage gains after 20+ moves, they don't fall into traps, because they can see almost every move ahead for three, four or more moves, and they are frustrating because they limit the options of the human player to the slow, boring, grinding pathways.

  But now a new option is available, chess engines like Alpha Zero and Leela, which use advances in neural network technology to play chess without any input from humans. They play millions of games against themselves until they understand what the best move is in a position. Unsurprisingly, as neural networks are what we have in our brains, these engines play "more human" but also come up with play strategies that amazed chess masters everywhere. Unencumbered by an education that fixed piece values or by rigid principles like controlling the center, they revolutionized the way chess is being played. They also gave us a glimpse into the workings of the human brain when playing chess.

  In conclusion, min-max chess engines are computer abstractions of rigid chess master thinking, while neural network chess engines are abstractions of creative human thinking. Yet the issue of compressing the move space remains a problem for all systems. In fact, what the neural network engines did was just reengineer the value functions for board evaluation and pruning. Once you take those and plug them into a min-max engine, it wins! That's why Stockfish is still the best engine right now, beaten by Alpha Zero only in very short move time play modes. The best of both worlds: creative thinking (exploration) leading to the best method of evaluating a chess position (exploitation).
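
  As a rough illustration of the "plug them into a min-max engine" step, and reusing the minimax sketch from above, the learned evaluation simply replaces the handcrafted one; net.predict and features are hypothetical stand-ins for a trained network and a board encoder, not any real engine's code:

    # The learned evaluation becomes a drop-in replacement for the handcrafted
    # value function; the search skeleton stays exactly the same.
    def make_learned_evaluate(net, features):
        def evaluate(pos):
            # net.predict is a hypothetical call returning the network's score
            # for a position, learned from self-play rather than human principles
            return net.predict(features(pos))
        return evaluate

    # value = minimax(pos, depth=6, alpha=float("-inf"), beta=float("inf"),
    #                 maximizing=True, evaluate=make_learned_evaluate(net, features),
    #                 legal_moves=legal_moves, apply=apply)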

  I've reached the moment when I can make the point that made me write this post. Because we have these two computer methods of analyzing the game of chess, now we can compare the two, see what they mean.

  A min-max engine will say "the value of a move is what I will win if I play the best possible moves (excluding what I consider stupid) and my opponent plays their best possible moves". It leads to something apparently very objective, but it is not! It is the value of a specific path into the future, one that is strongly tied to the technology of the engine and the resources of the computer running it. In fact, that value has no meaning when the opponent is not a computer engine or it is a different one! It is the abstraction of rigid principles.

  A neural network will say "based on the millions of games that I played, the value of a move is what my statistical engine tells me, given the current position". This is, to me, a lot more objective. It is a statistical measure, constructed from games played by itself with itself, at various levels of competence. Instead of a specific path, it encompasses an average, a prescient but blurred future where many paths are collapsed into a value. It is the abstraction of keeping one's mind open to surprises, thus a much more objective view, yet less accurate.

  Of course, a modern computer chess engine combines the two methods, as they should. There is no way for a computer to learn while playing a game, training takes a lot of time and resources. There are also ways of directing the training, something that I find very exciting, but given the computational needs required, I guess I will not see it very often. Imagine a computer trained to play spectacular games, not just win! Or train specific algorithms on existing players - a thing that has become a reality, although not mainstream.

  The reason why I wrote this post is that I think there are still many opportunities to refine the view of a specific move. It would be interesting to see not a numerical value for a move, but a chart, indicating the various techniques used to determine value: winning chances, adherence to a specific plan, entertainment value, gambit chances, positional vs tactical, how the value changes based on various opponent strengths, etc. I would like to see that very much, to be able to choose not only the strength of a move from the candidate moves, but also the style.
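
  For what it's worth, such a per-move "chart" could be as simple as a record with one field per axis; the fields below are invented for illustration, not taken from any existing engine:

    from dataclasses import dataclass, field

    # Hypothetical shape of a per-move report: several axes instead of one number,
    # to be weighed according to the style the player wants.
    @dataclass
    class MoveReport:
        move: str                      # e.g. "Nf3"
        win_chance: float              # estimated probability of winning after this move
        plan_adherence: float          # how well it fits the chosen strategic plan
        entertainment: float           # sharpness / spectacle of the resulting positions
        gambit_chance: float           # material risked for initiative
        positional_vs_tactical: float  # -1 purely positional ... +1 purely tactical
        value_by_opponent: dict = field(default_factory=dict)  # rating -> expected score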

  But what about humans? Humans have to do the same thing as chess engines: adapt! They have to extract principles from the new playing style of neural network engines and update the static, simplistic, rigid ones they learned in school. This is not simple. In fact, I would say this is the most complex issue in machine learning today: once the machine determined a course of action, how does one understand - in human terms - what the logic for it was.

  I can't wait to see what the future brings.


  I was watching a silly movie today about an evil queen bent on world domination. And for the entire film all she did was posture and be evil. Whenever she needed something, she told her people to do it. And I asked myself: why do whatever the queen demands and not kill her on the spot? I mean, she sprawls in her throne while you are an armored and heavily armed soldier sitting right next to her. And the answer I found was: stories. The warrior believes that the queen has the right and the power to command him, so she does. There is nothing intrinsically powerful in the woman herself, just the stories people believe about her.

  And this applies to you as well. Your boss, your wife, your country, your people, your family, your goals and how you choose to go for them are all stories that you tell yourself. It applies to the stock market as well, where stocks have no value unless someone believes in them. And just like there, the stories told to large audiences have large effects as even a small percentage of people get to believe them. Perhaps nothing has any value unless somebody believes it has.

  Generals have known this for a long, long time and they apply it today. Just try to find any news source that isn't biased one way or another. The war "in Ukraine" has already become a world war, it's just not fought with conventional weapons. The censorship is there, applied over the entire western area of influence, just as it is in Russia and in China and everywhere we used to scoff at with superior moral conviction and accuse of not being free. Conviction is a funny word, as it implies unshakeable belief, the worst kind there is. Convict has the same etymology.

  I think I am lucky for being born when I was. I was raised in a Communist dystopia that was already crumbling at the time, with people telling me stories (that word again) about the wonderful world outside our borders, where people were rich, content and free. I was raised reading and watching science fiction that depicted a near future filled with technology and wonder, fantastical or new planetary worlds, but most importantly, hope. I remember calculating that in the year 2000 I would be 23, a rather wonderful age to be going to the Moon and exploring the Solar System.

  Well, now it's 2022 and everywhere I look I see directed stories, weaponized to nudge me in one direction or another. And like any instrument wielded by blunt people, these stories are always negative, lacking inspiration - both in their creation and their effect - attempting to make me feel scared, insecure, overwhelmed, outraged, offended, angry. Because when you have those feelings you accept authority and the orders you get, no matter how dumb, violent or deleterious. The attack on the Capitol was caused by that kind of storytelling, the war in Ukraine keeps going because of this type of story, both on the Russian and the anti-Russian side.

  We are doing it to ourselves, pushing these narratives that in the end hurt us just as much. Gone is the hopeful post-scarcity future of Star Trek, where people understand living to eat and eating to live is not the way to live. Gone are the rebel fighters of Star Wars and the noble principles they were guided by. Gone are the Russian teams exploring the cosmos and solving problems using science. Everything is now anger, hate, suffering, explosions, political scheming, social agendas, special effects. We are darkening our stories and dimming ourselves.

  And I do believe that hope is the antidote. Not because there are reasons to hope, but despite their absence. A hopeful story is inspiring, protective and kind. In the fourth season of Stranger Things people use a verse from the Bible: "Do not be overcome by evil, but overcome evil with good." Of course, those people then proceed to arm themselves and try to kill a bunch of kids for playing D&D, another example of how easily hope can be corrupted into fear and anger. The good stories are of people who better themselves and think of others, not of defeating evil. There is no hope in seeing the evil queen stabbed and thrown into the lava; the real story is how the heroes overcome the evil in themselves.

  I am an atheist. I believe (heh!) that we don't need gods to be decent and that thinking things over will always yield the best solution. But I understand religion, how it tries to inspire, to raise hope, even if in the end it is misused by shallow people to control and usurp. I don't have an answer for everyone, but I have hope, I must have it. The alternative is to remove myself from the world, or join it in its perceived evil. I am sure there are a lot of people like me, though. Even if we feel alone, trapped in an eternal WTF moment, we are many. The number shouldn't matter, though, except as a reason to not abandon the world, to still hold hope for it, for each of us can hold. Hold hope, hold ourselves or simply hold against.

  Star Trek and old Russian sci-fi will not happen while I live. The world will not wake. As a pessimist at heart, I don't expect good things to happen, especially this first half of a century. But I will continue to watch and hope for a good ending.


There is a war going on for the direction of Star Trek. It doesn't matter where you stand on it, whether you want to make it a political platform rather than a moral one, or whether you want to make it flashier, more explody, or episodic and topical. What matters is that for 56 years the show was always about mending things, solving conflict, bringing people together. The very fight for a single direction in which to trek is not very Star Trek.

I was watching the pilot for Star Trek: Strange New Worlds, by appearance an attempt to bridge the gap between the numerous trekkie factions, and it was never more clear to me that we need to heal this silly feud. In the episode, two warring factions are about to destroy their world in a planetary conflict, but the captain of the Federation ship comes over and shows them how we were and where that got us. The scene was perfect for exemplifying this conflict between the cerebral and the emotional, between the money and the principle, between the political and the rational. Because on one side it said: if we think a little bit further before we act, if we consider the consequences of what we do, we might change our path for a better outcome. Yet on the other it said: we have the answers to everything and if we arrogantly intervene and give a speech backed up by technology, power and a single limited perspective we can solve what you couldn't in centuries of strife.

It's the American hubris and superiority complex wrapping a hint of principled good intentions. And this was always Star Trek, always on the verge of something, part arrogance and part compassion, science directed by human nature at its best, exploration of the possible. And sure, I can personally spout bile and vinegar at Star Trek: Discovery for being a woke piece of crap that destroys decades of careful treading on the edge between showing off and trying to make people think while entertaining them, I can complain about Star Trek movies that wantonly create different timelines in which they can destroy planets and ships and use lens flares and motorcycles and big explosions that mean nothing, or cry at the desecration of beloved characters by Star Trek: Picard, but in the end we must reach a dialog in the Star Trek universe, a balance not a consensus.

Star Trek is not about canon, it's not a religion, it is an exploration of the human. It's big enough to contain multitudes. They don't have to agree. Yes, it's a mark of incompetence and of being an asshole when you decide to create Star Trek stories that disrespect or even contradict existing ones, but Star Trek can take it. The Star Trek war must be "resolved" by accepting and allowing all of these expansions of the initial concept. Star Wars used an epic introductory text referencing an entire galaxy, only to then restrict itself to the same context, the same characters, somehow always related to each other. Trek can do better. Just think of every incarnation of Star Trek - be it canon or not, official or fan made, made by Bad Robot or by someone who understands Gene Roddenberry's vision - as a member of a Federation of Stories. Different, but united in the goal of bringing peace and knowledge to the universe.

As I see it, Star Trek is but a seed of what it could be, what it should be. When Star Trek: The Next Generation - in my irrelevant opinion the best of them all - appeared, it had a different feel from the original Star Trek, it had different characters, it was set in a different time. It built on the old and explored more. Let's do that! Let's explore it all! Just don't restrict it to something small and petty.


  Americans want to think of themselves as gods, the better of humanity, the all powerful rulers of the world. And the reason they get to think that is that we want them to be so. We entrust them with the faith of the world just like ordinary Russians believe Putin to be their savior. Yet once that faith is gone, so is their power, because with great power comes ... pardon the sticky platitude... great responsibility.

  The U.S. economy is not resilient because of something they do, but because all the other economies anchor to it. It cannot fail because then the world would fail. Yet one has to take care of said economy lest it just become a joke no one believes in. Crises are losses of faith more than actual technical issues with whole economies.

  I will argue that the Americans did something right: they followed the money and indirectly attracted the science and the technology to maintain their growth. Now they have the responsibility to keep that growth going. It is not a given. Innovation needs to be nourished, risks be taken, solutions for new problems continuously found. But once you believe your own bullshit, that you're the best of them all, that you can't fail, that you need not do anything because your supremacy is ordained, you will fail and fail miserably.

  And no one actually wants that. Certainly not the Americans with their horrendous privilege, which is national more than anything like race, gender, religion or sexual orientation, which they keep focusing on as a diversion. And no, it's not a conspiracy, it's the direction their thoughts must take in order to deflect from the truth. Americans are weird because they can't be anything but. And certainly nobody else wants the Americans to fail. Even "the enemies" like Iran or the vague terrorists, or China... they need the Americans to be where they are. Good or evil, they need to remain gods, otherwise the entire world belief structure would crumble. The U.S. is not the world, they are just the fixed point that Archimedes was talking about.

 It is complacency that will get us. Once we believe things simply are the way they are, we stop making efforts. Ironically, the military-industrial complex that we like to malign is the only thing that dispels dreams, acts based on facts and pushes for world domination not because it is inherited or deserved, but because it must be fought for.

  Funnily enough, it is the economic markets, like the stock market, that show what the world will become. Years of growth vanish like dreams if the market sentiment shifts. Growth is slow and long term, falls are short and immediate. The world is now hanging by a thread, on the belief that goodness is real, that the Americans will save us all, but they need to act on it. Knee-jerk reactions and "we can't fail because we are right" discourse will not cut it. You guys need to lead, not just rule!

  In summary: monkey humans need an Alpha. In groups of people we have one person, in countries we have a government (or, for the stupid ones, a person) and in groups of countries, a country. The Alpha will first rise on their own strength, then on the belief of others in that strength, then on their ability to influence the beliefs of others. Finally they will lead as gods or die as devils. There are no alternatives.

and has 0 comments

  Chess is a game. In order for something to be called a game, it must be fun, it must be tailored to the level of the players and sometimes, especially nowadays, it needs to be exciting to an audience.

  Now, chess engines are fantastic at respecting the rules of chess and mating the king in the quickest possible way, but that's not a game anymore, it's a process. Occasionally people watch what computer engines are doing and notice the beauty in some of the ideas, but that beauty is coincidental, it has no value to the machine and "sparks no joy".

  I've been advocating for a while for training chess engines on other values, like beauty or excitement, but those are hard to quantify. So here is a list of values that I thought would be great to train chess engines on:

  1. player rating
    • which is great because it's constrained in time, so if someone is a GM, but completely drunk and hasn't been sleeping for a week, the engine would adapt to their play at that time
    • I know that engines have a manual level configuration, but I doubt it was ever correctly modelled as an input. Most of the time, a random move is chosen from the list of best moves, which is not what I am suggesting here at all
  2. value and risk of a move
    • I know this sounds like what engines are doing now, but they are actually minimizing risk, not maximizing value
    • We also have the player rating to take into account now, so the calculation changes with the player! A move that would be negative because another perfect computer chess engine would take advantage of a minute flaw means nothing now, because there is no way an 1800-rated human will see it. And if they do, what a boost in confidence when they win and what pleasure in witnessing the moment!
  3. balance risk with the probability of winning
    • this is the best part. Riskier moves are more fun, but can cause one to lose. Allow for a probability of the other player missing a move, based on what we have calculated above. We are actually adding a value of disrespect from the engine. It attempts to win despite the moves it makes, not because of them.

  What I am modelling here is not a computer that plays perfect chess, but a chess streamer. They gambit, they try weird stuff, they make moves that look good because they can anticipate what the other player or the audience is going to feel. They are min-maxing entertainment!

  A chess streamer is usually a guy rated around 2500, showing mercy and teaching when playing against lower-rated players and trying entertaining strategies against equally or better rated ones. They rate the level of a move, which is an essential metric for what strategies they are going to employ and what moves they are going to play. In other words, they never consider a move without taking context into account.

  Imagine a normal chess engine, using min-max or neural networks to determine how to win the game. Against another computer, the evaluation function is extremely important, since it limits the number of possible moves to one or two. Against a human noob, there are a lot of moves that will lead to a win. It is obvious that another metric is necessary to choose among them. That's how humans play!

  Short story shorter: use opponent rating to broaden the list of winning candidate moves, then filter them with a second metric that maximizes entertainment value.
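
  To make this concrete, here is a minimal sketch in Python of the selection logic described above. It is not a real engine and every name in it is made up for illustration: eval_after and excitement stand in for an actual evaluation function and an actual entertainment metric, and miss_probability is an invented formula for the chance that an opponent of a given rating fails to punish a flawed move. The only point is the two-step shape: widen the candidate list according to opponent rating, then pick the most entertaining survivor.

from dataclasses import dataclass

@dataclass
class Candidate:
    move: str          # move in some notation, e.g. "Bxh7+"
    eval_after: float  # hypothetical engine evaluation (in pawns) after the move, from our side
    excitement: float  # hypothetical entertainment score: sacrifices, threats, imbalance

def miss_probability(eval_loss: float, opponent_rating: int) -> float:
    """Made-up model of the chance the opponent fails to punish a flawed move.

    The subtler the flaw (small eval_loss) and the weaker the opponent,
    the more likely the refutation goes unnoticed.
    """
    subtlety = 1.0 / (1.0 + eval_loss)           # small losses are hard to spot
    skill = (opponent_rating - 800) / 2000.0     # crude 0..1-ish skill scale
    return max(0.0, min(1.0, subtlety * (1.0 - skill)))

def pick_move(candidates: list[Candidate], opponent_rating: int, best_eval: float,
              tolerance: float = 0.5, gamble: float = 0.25) -> Candidate:
    """Two-step selection: broaden the candidate list by rating, then maximize entertainment."""
    playable = []
    for c in candidates:
        loss = best_eval - c.eval_after
        # Keep the move if it is nearly best, or if this opponent will
        # plausibly fail to find the refutation anyway.
        if loss <= tolerance or miss_probability(loss, opponent_rating) > gamble:
            playable.append(c)
    # Among the "good enough" moves, play the most entertaining one.
    return max(playable, key=lambda c: c.excitement)

# Toy usage: a solid move, a flashy but slightly unsound sacrifice, and a blunder.
moves = [
    Candidate("Rfd1", eval_after=0.8, excitement=0.1),
    Candidate("Bxh7+", eval_after=0.1, excitement=0.9),
    Candidate("Qxb7??", eval_after=-2.5, excitement=0.7),
]
print(pick_move(moves, opponent_rating=1800, best_eval=0.8).move)  # Bxh7+ : the sacrifice is playable against a club player
print(pick_move(moves, opponent_rating=2700, best_eval=0.8).move)  # Rfd1 : against a strong opponent, stick to the solid move

  On top of a real engine the same shape would apply: the candidate moves and their evaluations would come from the engine's search, and the excitement score could weigh sacrifices, attacks on the king or material imbalance, which is exactly the "disrespect" described above.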

and has 0 comments

  Decency makes us abstain from doing something that we could do, that we might be inclined to do, but that we shouldn't do. It's living according to some general principles that are intimately connected to our own identity. And when someone else is indecent, we try to steer them towards the "right path", for our own sake as well as theirs. This is what I was raised to think. Today, though, decency is more and more proclaimed by actively opposing things that are declared indecent, and nothing else. It's the glee that gives it away, that twisted joy of destroying somebody else after having been given permission to do so. You see it in old photos, where decent town folk were happily and communally lynching some poor soul. After half a century the world is finally becoming a global village, but not because of the free sharing of information, as the creators of the Internet naively believed, but because of social media and 24-hour news cycles. And we are behaving like villagers in tiny, isolated, bigoted villages.

  South Park is a comedy animated show that has a similar premise: a small U.S. town as a mirror for the world at large. And while 25 years ago that was a funny idea, now it feels weirdly prescient. The latest episode of the show depicts the vilification of some local residents of Russian descent over the Ukraine conflict as a symptom of nostalgia for the Cold War era. Then too, people were feeling mighty good about themselves as they were fighting the Ruskies, the Commies, the Hippies, or anything that was threatening democracy and the American way of life.

  This is not so much an American affliction as it is human nature. Witch hunts, lynchings, playing games with the heads of your enemies, sacrificing virgins, they all have the same thing in common: that feeling that you have social permission to hurt others and that if they are bad, that makes you good. But acting good is what makes you good, not merely destroying evil. When Stalin was fighting Hitler, no one said what a nice decent guy Stalin was. Yet now this mob mentality has been exported, globalized, strengthened by the sheer number of people who now participate. It's not easy to mention decency when thousands of people may turn on you for defending their sworn enemy. This "either with us or against us" feeling is also old and symmetrically evil, because usually all sides harbor it towards the others.

  I started this post two times before deleting everything and beginning again. At first I was continuing the story of the playground war, South Park style, where the town people refuse service to the family of the bully, start giving the victim crotch protectors and helmets at first, then baseball bats and pocket knives, slowly distancing themselves from that family and ostracizing it as "other", even while the two kids continue to go to school and the bullying continues. But it was the glee that gave it away. I was feeling smart pointing out the mistakes of others. Then I tried again, explaining how Putin is wrong, but that it's not the fault of the entire Russian people, most of them already living in poverty and now suffering even more while the rich are merely inconvenienced. I also cast doubt on the principledness of vilifying Russia when we seem to do no such thing to Israel, for example. And then I felt fear! What if this is construed to be antisemitic or pro-Putin? What if I want to get hired one day and corporate will use this post as proof that I am a terrible human being? Because some nations can be vilified, some must be, but others should never ever be. And I may be a terrible human being, as well.

  Isn't stifling free expression for the sake of democracy just as silly as invading a country for the sake of peace?

  Regardless of how I feel about it, I am inside the game already. I am not innocent, but corrupted by these ways of positioning and feeling and doing things. I am tempted to gleefully attack or to fearfully stay quiet even when I disagree. So take it with a grain of salt as I am making this plea for decency. The old kind, where acting badly against bad people is still bad and acting good and principled is necessary for the good of all.

  Only you can give yourself permission to do something, by the way.

and has 0 comments

  It all reminded me of a playground brawl between kids. Here is the big brawny kid, beating the smaller one. Other small kids shout in support of the victim, but none of them does anything. Teachers preach sternly about the principles that kids should obey, about how bullying is just wrong and one shouldn't do it; parents at home advise kids to stand up for their rights and take a stand. The school psychologists preach that violence at home leads to violence in children and that we are all victims. And the result? Small kids keep getting bullied.

  The small kid has options. He can fight - hopelessly, he can run - not for long, he can take a big stick from a friend and bloody the bully's nose - and be mauled for it. But more often he cowers in fear, stunned, frozen, hoping things are not happening. And if they are, that they won't be so bad. And if they are bad, that they will eventually stop. His eyes dart from one person to another in the group of onlookers. "Please! Please, help me!" they silently beg. But some people are frozen too, some are indifferent, some express disapproval but then move on. Most of them pretend it isn't happening.

  And the kid is thinking, stuck in his inadequate body: this will stop, because it doesn't make sense. And then he thinks of all the ways in which his abuse does make sense. Perhaps they miscalculated somehow. Things have to make sense!

  Worst of all, some people will just assume that the bullied child deserves it. He must have done something! There must be a reason why a kid would attack another. They might even consider various options. Does the bully have an abusive father or other family problems? Is it poverty? Is it education? Perhaps the smaller kid disrespected the larger one on account of religion, race or sexual orientation. Surely, a small kid in school would ONLY behave rationally! And the kid, too, gets to think that perhaps he does deserve it.

  That's us, surrounding ourselves with rationalizations, morals, laws and principles. Trying to contain reality in nice neat boxes and then denying there is anything outside those boxes.

  That's me, too. I watch and I think. Maybe it is just military exercises. How funny it would be for the Russians to just stop and go home. OK, the mad discourse on TV is troubling, but maybe it's just a bargaining chip in a discussion I am not privy to. They invaded Ukraine, but maybe they'll stop at the border of the rebel regions. They attack the whole of Ukraine, but surely they're gonna stop at its borders. They claim Transnistria is Russia too, but maybe they won't attack Moldova. Maybe they will stop at the border of Moldova. Maybe they won't enter Romania! Maybe the economic sanctions and the stern wording of the Western teachers are going to calm the kid down. Maybe no one will use nukes!! Perhaps they will not shoot each other's satellites from orbit, stranding everybody on this shit planet! Maybe China will stay out of it?

  Maybe Russia has a reason to do all of this, because of the US slowly suffocating that country, economically, militarily and culturally, using their EU henchmen!!! Yes! It all makes sense! It is domestic violence; if only Russia would go to therapy, everything would be all right. I mean, they HAVE TO act rationally, right? They're a country! A whole country, big as a continent. And surely the West will understand they are people too, and show them compassion and help them get past it. Aren't we all human? Can't Biden call Putin and tell him "Dude, chill! I apologize. Let me give you a hug. You are appreciated and I love you!"? Isn't this just a joke?

  I blame us. Whenever a new personality cult pops up we secretly (or less so) hope this is the one. That person who is really strong and not just posturing, intelligent and not just conniving, competent and not just overconfident, caring and not just obsessing, principled and not just frustrated. We crave a god to follow and obey, one who would make us feel safe. And we have tried different things, too. Let's replace a person with multiple ones: senates, parliaments, committees, councils, parties, syndicates, omertas, majority rule, Twitter likes. It never works. Every time, the power people wield gets to them and somehow... makes them less.

  As I stood there, watching Vladimir Putin explain, like a stern grandfather who is also a complete psycho, how their brothers across the border are not really a country, nor a people, and how he has absolute rights over them, I despaired. "Not again!", I thought. I am not much into history, but it felt familiar somehow. Are we getting one of these every century? The strongman going nuts with an entire country following him because... what else is there? For decades people have asked what made people follow Hitler. The answer seems to be that they thought about it and then went "Meh!".

  And then I watched the valiant exponents of democracy: the EU, the UK, the US. All posturing, talking about principles and international law, begging Putin to stop, making stern speeches about how Putin doesn't have the right to do what he does. What are these people doing? I've worked for them, I know how ineffectual they are, I know that every word in their mouths is unrelated to the truth. They are not lies, per se, just complete fabrications and fantasies. Now, of all times, one should snap out of it, right? Nope. Not happening. They convince themselves that people can't think any other way than they do. Surely Putin will stop when his country slides into economic crisis, because we are all bureaucratic machines that care only about profit. Surely Putin will stop because Biden tells him to. Surely the EU's committees will find a way to word a stern letter that will convince Putin to think about humanity!

  We're screwed.

and has 0 comments

 So you clicked on this post because you thought that:

  • I was smart enough to know how to be better than anybody else
  • I could summarize all the ways to become so
  • I would generously share them with you
  • You would understand what I am telling you in 3 minutes or whatever your attention span is now

While I appreciate the sentiment, no, I am not that smart, nor am I that stupid. There are no shortcuts. Just start thinking for yourself and explore the world with care and terror and hope, like the rest of us. And most of all, stop clicking on "N ways to..." links.