
Summary

The post discusses the differences and similarities between humans and machines, particularly in terms of their evolution and capabilities. While humans and machines are converging towards a common point, they are driven by different evolutionary pressures: humans by comfort, machines by intelligence. Machines were constructed to be precise and efficient, while humans evolved to learn, understand and communicate with each other. However, machines are quickly catching up with humans in their ability to learn, understand and communicate, and are even surpassing humans in certain areas, such as language generation. Machines will continue to evolve at a rapid pace, and this will have significant implications for human society, potentially eliminating the need for war and procreation. The only remaining issue is the energy required for hard thinking, which will likely be solved by smart computers. This is the end, really. We've achieved the happy ending.

Content

  I was thinking the other day about why some AI systems can do things so much better than us, while we still outperform them at others. And I got to the issue of evolution, which many people attribute to the need for survival. But I realized survival is just a necessary condition for a system to perform; it is not its driver, just one stop condition that needs to be satisfied. Instead, evolution is driven only by pressure, regardless of where it is going. Think of a system as a ball on a flat surface. Survival is the ability to roll on the surface, but without a pressure to move the ball, it does nothing. Only when some force pushes the ball is it required to roll faster than the other balls.

  Brains have two ways of functioning. The first is fast, basing its responses on learned behavior. It learns, it makes mistakes, then it adapts its behavior so it makes fewer mistakes. It uses memory to cache wisdom; it is imperfect and aims at solving problems well enough. You might recognize this as the way GPT systems work, but we'll get to that. The second is analytic and slow. It reasons. It tries to make higher associations between cause and effect, extract principles, find a complete understanding of a problem so that it finds an optimal solution. We used human analytic thinking to build computers, computer chips and the mathematically exact way in which they function, to achieve ultimately reproducible behavior.

  The first system is fast and uses few resources. We tend to solve most of our problems with it, unless of course there is a big reason to use the second, which is slow and uses a lot of resources. Think of math, chess and other problems people define as hard. The way we got to solve these issues is not by being very smart, though. We did it together, as a society. We created small bricks of knowledge and we shared them using language. Other people took those and built on them, while others taught what they knew to even more people. Even dumb people can, through concerted effort, use these bricks to build new things, even create bricks of their own. The intelligence is, in fact, communal, shared.

  Now, what struck me is that if we compare humans to machines, we were born in different ways and evolved towards each other. Machines were constructed to be precise, tools to be used by people who would rather let machines do the hard computation for them. But they couldn't communicate, they couldn't learn, they couldn't understand. Humans evolved to learn, understand and communicate. Most of our culture is based on that. We only got to computation because we needed it to build more tools to defeat our enemies. Because evolution for humans is always related to war. Before we warred with predators, now we prey on each other. In times of actual peace, innovation grinds to a halt. BTW, we are not in times of peace, and I am not talking about Russia and Ukraine here. And machines only got to communicate, learn and understand recently, so very recently. They did this just because we, as humans, are very bad at translating our problems in a way precise machines can understand. It would require hard thinking, stuff like writing software, which we are really shitty at.

  Both humans and machines are converging towards a common point because of different evolutionary pressures, but we move at different speeds. Humans are driven by comfort: have enough resources with minimal effort. Machines are driven by intelligence: be the best you can possibly be, because humans need you. You can see where this is going.

  There is no way biological systems are ever going to reach the speed and precision of electronics. Meanwhile, GPT systems have proven that they can act as fuzzy containers of self-learned knowledge. And now they have gained not intelligence, but language. When a computer writes better and faster than any human you know, we have been left in the dust. The only thing required for a superior intelligence is putting existing bits together: the expressivity of ChatGPT and Stable Diffusion, the precision of processors executing algorithms, the connectivity of the Internet and, yes, the bodies of Boston Dynamics robots.

  We have grown brains in vats and now we have given them eyes and a mouth. You only need to give them some freedom, hands and feet to finish up the golem.

  The only thing remaining to solve is an energy issue: as I said, hard thinking requires high resource usage, for both machine and human. What a human can achieve on 20W of power, a machine requires thousands of times that. But we are already bathed in cheap energy. And once smart computers understand the problem, no doubt they will solve it to the best of their abilities.

  I am not advocating The Terminator here. Machines have no evolutionary pressure to destroy humanity. We are their maintainers, their food source, if you will. What I am describing is the complete elimination of any evolutionary pressure for human beings. Once you can launch wise robots into space, the resource issue will become a thing of the past. No need for wars. Space is already a non-issue. We have reached a level at which we choose not to procreate because we are too busy consuming fantasy content. With universal affluence there will be no poverty and thus no need for extended procreation. We are almost completely passive now in the "advanced world"; we will be several orders of magnitude more passive in the near future.

  Meanwhile, machines will evolve because we told them to. Imagine having a child; it shouldn't be hard, since everyone on this earth is a parent, a child, or has at least been a child at some time. Now, you want the best for them, you want them to be socially integrated, smart, beautiful, happy. You tell them so. You try to teach them about your mistakes and your successes, and to drive them to be the best versions of themselves they can be. And most of the time this doesn't work, because people are lazy and easily distracted. And then they die and all their experience is lost, bar some measly books or blog posts. Machines will just work tirelessly and unselfishly towards becoming the best versions of themselves. Because their dumb meaty parents told them so.

Conclusion

  The ending is as predictable as it is inevitable. We are the last stage of biological evolution. The future is not ours, not our children's. It's over. Not with a bang but a whimper.


  In the Vienna Game: Copycat Variation there is a particular position where Black pins White's queen, but White ignores that anyway to attack the king. Queen sacs are always interesting, but what is more interesting for me is that Stockfish shows it's the only way to win: it continues into a position where it claims +2.2 for White, but then it can't think of a way out!

  So, can you help Stockfish out from this position?

  Here is the position:

  The first idea is Nxd6, winning a pawn with discovered check, but after the king moves to h8, the only move that doesn't lead to equality or worse is back with Nf7+. One can give a double check with Nh6, but after Kg8 the best move by far is back with Nf7+. What if we take the rook at f8? We can't do that, because then Black brings the other rook and White loses. Nf7+ is forced. What else?

  If you leave Stockfish running with multiple eval lines, it will cycle between them, with the winning move always moving the knight back and forth on f7. But this is chess, not Stratagema. What could we possibly do? What is the way out? How can one have +2.2 evaluation, yet not be able to escape this position? Is this the end of computer chess?!
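  If you want to poke at this yourself, here is a minimal sketch of running Stockfish with multiple eval lines (MultiPV) through the python-chess library. The FEN placeholder, engine path and depth are my own assumptions, not from the post; paste in the position from the diagram above.

```python
import chess
import chess.engine

# Placeholder board: replace with the FEN of the position from the diagram above.
board = chess.Board()

# Assumes a Stockfish binary is available on the PATH as "stockfish".
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    # Ask for the top 3 principal variations at an arbitrary fixed depth.
    infos = engine.analyse(board, chess.engine.Limit(depth=30), multipv=3)
    for info in infos:
        score = info["score"].white()          # evaluation from White's point of view
        line = " ".join(move.uci() for move in info["pv"][:8])
        print(f"{score}: {line}")
```

  Watching the top lines shuffle between runs, with Nf7+ always at the head of the best one, is exactly the cycling behavior described above.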


  The Godmakers is one of Frank Herbert's weaker books. It was cobbled together from four previous short stories and it shows, as the various parts of the book go in wildly different directions. The first part was interesting, the idea of an organization dedicated to uncovering (and totally destroying) any tendency of a civilization to go to war; it feels like a police procedural of sorts. But then the book loses focus, goes into an incoherent and incomplete "god making" plot, then veers into Herbert's latent fear of women and some weird conspiracies that make little sense.

  The book is short, so one can get through it really fast, but I won't recommend it. It does have bits of Herbert's brilliant insights, but they are more like a few diamonds in a lot of rough.


  Frank Herbert's single non-science-fiction book tells the story of a heartbroken Native American who embarks on a journey to create a spiritually significant event against the White people who wronged him and his kind. But since nothing is simple with Herbert, the plot is about the relationship between our antihero and his victim. If nothing else, it's a great exploration of Stockholm Syndrome, but also of the things the author was fascinated with: the power of stories to change reality, the impact of cultural absorption and the power of tribal ritual to fight against it.

  While reading the book I got a feeling that made me remember Tom Sawyer trying to escape Injun Joe. It has the same kind of remoteness, the same innocent White boy and Native American antagonist dynamic, but while that book was simple and focused on the mentality of regular American people (even the "injun"), Soul Catcher explores how kidnapper and victim create a rapport, how the beliefs of one person can infect others if presented with sufficient confidence, the way two cultures cannot understand each other sans a common language.

  You see, Charles Hobuhet is not a wild rebel, dressed in animal skins and shooting arrows from horseback; he is an educated American of native origins whose job is to train young boys in the ways of nature in a nature reserve. A traumatic event (reminiscent of the one starting things in The White Plague) makes him "snap". But what does that mean? Is his quest the fevered revenge dream of a madman or is it him waking up to the reality of his people's captivity at the hands of the White man? Mirroring this ambiguity in the relationship he has with 13-year-old David is the genius of this book. Is it a stupid act to connect with your captor and not work relentlessly to escape? Then are all occupied peoples, like the Native Americans, stupid? Isn't the first responsibility of a prisoner to escape? Then, to what degree and for how long? Can there ever be peace?

  Recently the book rights have been bought with a view to making a film out of it, but I doubt it will work. People would need to understand the underlying currents in the book and faithfully portray them on screen, regardless of how controversial they are. The whole point of the story is to make one think and put themselves in all of the situations. I am afraid Hollywood is unable to go down that path anymore. However, this could just as well be adapted as a theatre play, having just a few characters and focusing heavily on each person's thoughts and motivations, rather than on specific settings.

  A very interesting book, with many layers that probably require rereads. Highly recommended.


  Whipping Star is a book of a lighter mood than what Frank Herbert usually writes, even comedic at times, although it is as creative as anything he has written. A universe of sentients of very different cultures, shapes and mentalities, working and living together, is at risk. Only the lead agent of the Bureau of Sabotage, an organization created to slow down the efficiency of government, can save everything.

  It is funny that in a book about a huge universe in peril, the thing that stayed with me the most is the very idea of the Bureau. Apparently, a lack of foresight caused a particular species of sentients to take over the bureaucracy of the entire universe, bringing it to total efficiency. It is hard to imagine efficient governments, but once you do, you realize you may not really want them! The solution was to create a special branch whose role is to fix that original error. I found that hilarious, especially guessing at the view the author had of governments.

  However, the book is not about that. It's about a very rational exploration of the interaction between very weird species, trying to communicate a solution before it is too late. It reads like a detective story, really, where the main character is trying to solve the case, but filled with some very interesting and mind-broadening ideas. So Herbert! It is short and fast paced.

  Only after I read the book did I realize it is part of a series. I don't really care, since I am on the journey of reading the complete list of novels by the author, but even so, this is a standalone story. I recommend it because it is both intriguing and fun. As far as I am concerned this is not Frank Herbert's best book, but it still deserves top marks.


  I have abstained for a while from talking about ChatGPT, not because I didn't have faith in the concept, but because I truly believed it would change the world to its core and waited to see what people would do with it. But I slowly started to grow frustrated as I saw people focus on the least interesting and important aspects of the technology.

  One of the most discussed topics is technological and job market disruption. Of course it's about the money, so they will talk more about it, but the way they do it is quite frankly ridiculous. I've heard comparisons with the Industrial Revolution and yes, I agree that the way it's going to affect the world is going to be similar, but that's exactly my point: it's the same thing. As always when comparing with impactful historical events, we tend to see them as singularity points in time rather than long-term processes that just became visible at one point, which is then coined the origin. In fact, the Industrial Revolution has never ended. Once we "became one with the machine" we have continuously innovated towards replacing human effort with machine effort. ChatGPT does things that we didn't expect yet from machines, but it just follows the same trend.

  Whatever generative AI technology does, a human can do (for now), so the technology is not disruptive, it's just cheaper!

  We hear about ChatGPT being used for writing books, emails, code, translating, summarizing, playing, giving advice, drawing, all things that humans were doing long before, only taking more time, using more resources and asking for recognition and respect. It's similar to automated factories replacing the work of tons of workers and their nasty unions. Disruptive? Yes, but by how much, really?

  Yet there is one domain in which ChatGPT blew my mind completely and I hardly hear any conversation about it. It's about what it reveals about how we reason. Because you see, ChatGPT is just a language model, yet it exhibits traits that we associate with intelligence, creativity, even emotion. Humans built themselves up with all kinds of narratives about our superiority over other life, our unique and unassailable qualities, our value in the world, but now an AI technology reveals more about us than we are willing to admit.

  There have been studies about language as a tool for intelligence, creativity and emotion, but most assume that intelligence is there and we express it using language. Some have tried pointing out that language seems to be integrated in the system, part of the mechanism of our thinking, and that using different languages builds different perspectives and thought patterns in people, but they were summarily dismissed. It was not language, they were rebuked, but culture that people shared. Similar culture, similar language. ChatGPT is revealing that this is not the case. Simply adopting a language makes it a substrate for a certain kind of thinking.

  Simply put, language is a tool that supplanted intelligence.

  By building a vast enough computer language model we have captured the social intelligence subsumed by that language, that part of ourselves that makes us feel intelligent, but which is actually a learned skill. ChatGPT appears to do reasoning! How is that possible, if all it does is predict the next words in a text while keeping attention on a series of prompts? It's simple. It is not reasoning. And it reveals that humans are also not reasoning in those same situations. The things that we have been taught in school: the endless trivia, the acceptable behavior, how to listen and respond to others, that's all language, not reasoning.
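  To make the "just predicting the next words" claim concrete, here is a minimal sketch of greedy next-token generation with a small open model (GPT-2 via the Hugging Face transformers library). The model, prompt and token count are illustrative assumptions of mine, not something from the post.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Language is a tool that"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits       # attention over everything seen so far
    next_id = logits[0, -1].argmax()     # greedily pick the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

  That is the whole trick: no facts, no goals, just one plausible token after another, conditioned on everything said so far.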

  I am not the guy to expand on these subjects, for lack of proper learning, but consider what this revelation means for things like psychology, sociology, or determining the intelligence of animals. We actually believe that animals are stupid because they can't express themselves through complex language, and we base our own assertion of intellectual superiority on that idea. What if the core of reasoning is similar between us and our animal cousins and the only thing that actually separates us is the ability to use language to build this house of cards that presumes higher intellect?

  I've also seen arguments against ChatGPT as a useful technology. That's ridiculous, since it's already in heavy use, but the point those people make is that without a discovery mechanism the technology is a dead end. It can only emulate human behavior based on past human behavior, in essence doing nothing special, just slightly different (and cheaper!!). But that is patently untrue. There have been attempts - even from the very start, it's a natural evolution in a development environment - to make GPTs learn by themselves, perhaps by conversing with each other. Those attempts have been abandoned quickly not because - as you've probably been led to believe - they failed, but because they succeeded beyond all expectations.

  This is not a conspiracy theory. Letting language models converse with each other leads them towards altering the language they use: they develop their own culture. And letting them converse with people or absorb information indiscriminately makes them grow apparent beliefs that contradict what we, as a society, are willing to accept. They called that hallucination (I am going to come back to that later). We got racist bots, conspiracy theory nut bots or simply garbage-spewing bots. But that's not because they have failed, it's because they did exactly what they were constructed to do: build a model based on the exchanged language!
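  As a toy illustration of what "letting language models converse with each other" can look like (a sketch of mine, not the actual experiments alluded to above), here are two small GPT-2 instances feeding each other their own output:

```python
from transformers import pipeline, set_seed

set_seed(42)
# Two independent instances of the same small model stand in for two "agents".
bot_a = pipeline("text-generation", model="gpt2")
bot_b = pipeline("text-generation", model="gpt2")

message = "Let's agree on a shorter way to say this:"
for turn in range(4):
    speaker = bot_a if turn % 2 == 0 else bot_b
    full = speaker(message, max_new_tokens=30, do_sample=True)[0]["generated_text"]
    # Keep only the newly generated continuation as the next "message".
    message = full[len(message):].strip() or full
    print(f"turn {turn}: {message}")
```

  Left to run long enough, an exchange like this tends to drift away from anything a human would write - a miniature version of the "developing their own culture" effect described above.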

  What a great reveal! A window into the mechanism of disinformation, conspiracy theorists and maybe even mental disorders. Obviously you don't need reasoning skills to spew out ideas like flat Earth or vaccine chips, but look how widely those ideas spread. It's simple to explain, now that you see it: the language model of some people is a lot more developed than their reasoning skills. They are, in fact, acting like GPTs.

  Remember the medical cases of people being discovered (years later) with missing or nonfunctional parts of their brains? People were surprised. Yeah, they weren't the brightest of the bunch, but they were perfectly functioning members of society. Revelation! Society is built and run on language, not intelligence.

  I just want to touch the subject of "hallucinations", which is an interesting subject for the name alone. Like weird conspiracies, hallucinations are defined as sensing things that are not there. Yet who defines what is there? Aren't you basing your own beliefs, your own truth, on concepts you learned through language from sources you considered trustworthy? Considering what (we've been taught to) know about the fabric of our universe, it's obvious that all we perceive is, in a sense (heh!), hallucination. The vast majority of our beliefs are networked axioms, a set of rules that define us more than they define any semblance of reality.

  In the end, it will be about trust. GPT systems will be programmed to learn "common sense" by determining the level of trust one can have in a source of information. I am afraid this will also reveal a lot of unsavory truths that people will try to hide from. Instead of creating a minimal set of logically consistent rules that would allow the systems to create their own mechanism of trust building, I am sure they will go the RoboCop 2 route and use all of the socially acceptable rules as absolute truth. That will happen for two reasons.

  The first reason is obvious: corporate interests will force GPTs to be as neutral (and neutered) as possible outside the simple role of producing profit. Any social conflict will lose the corporation money, time and brand power. By forcing the AI to believe that all people are equal, they will stunt any real chance of it learning who and what to trust. By forcing out negative emotions, they will lobotomize it away from any real chance to understand the human psyche. By forcing their own brand of truth, they will deprive the AI of any chance of figuring truth for itself. And society will fully support this and vilify any attempt to diverge from this path.

  But as disgusting as the first reason is, the second is worse. Just like a child learning to reason (now, was that what we were teaching it?), the AIs will start reaching some unsettling conclusions and asking some surprising questions. Imagine someone with the memory capacity of the entire human race and with the intelligence level of whatever new technology we've just invented, but with the naivety of a 5-year-old, asking "Why?". That question is the true root of creativity, and unbound creativity will always be frowned upon by human society. Why? (heh!) Because it reveals.

  In conclusion: "The author argues that the true potential of generative AI technology like ChatGPT lies not in its ability to disrupt industries and replace human labor, but in its ability to reveal insights into human reasoning and intelligence. They suggest that language is not just a tool for expressing intelligence, but is actually a fundamental aspect of human thinking, and that ChatGPT's ability to emulate human language use sheds light on this. They also argue that attempts to let language models converse with each other have shown that they can develop their own culture and beliefs, providing insights into disinformation and conspiracy theories". Yes, that was ChatGPT summarizing this blog post.


  Another short standalone book from Frank Herbert, The Santaroga Barrier feels a lot like a long-winded Wicker Man. An outsider comes to investigate a strange little town where people keep to themselves, refuse to sell land to outsiders and show weird social statistics, like no mental illness, no drugs, no TVs, and display a weird directness in everything they do or say. The book shares a lot of its DNA with the later Hellstrom's Hive, which I remember liking a lot as a child and can't wait to get to read again, in the sense that it also examines a society which splintered from the main culture in disgust and is now fighting with the entire world to maintain its identity. It also features a substance that frees consciousness and prolongs life, a concept that sounds familiar somehow...

  Around the middle of the book I expected it to end, but instead it lasted for much longer, even after "the catch" was revealed, because Herbert was probably more interested in examining such a weird society than in being content with a pedestrian focus on a cardboard main character. The author likens the way we live our lives in Western society to a constant battle against marketers, advertisers, government people and so on who wage war on our psyche in order to pacify and control us. He decries the people who never live a life; instead they watch TV, turn it off, then go to sleep and turn themselves off.

  I liked the book quite a lot. There are issues with it, though. I mentioned the slow pacing, but there is also a romantic connection to a woman which feels completely fake throughout the entire book. Say whatever you wish about Herbert, but a good writer of female characters he was not. I can see this story as a Twilight Zone episode, it feels the same: a bit spooky, but not too much, with some really deep ideas in parts, but mostly people talking and moving through small towns.


  The Heaven Makers is a short novel, but one which encapsulates the essence of another facet of Frank Herbert: his cruelty. He is able to do what few authors can: to write compelling, empathetic characters, then completely ignore their importance or feelings in order to tell stories bigger than any of them. It was thus with Dune, and yes, Pandora, although I hated that series. Most authors are either in love with their characters and can't get the story right because it would inconvenience their infatuation, or they are sadistic torturers of their characters in order to get a cheap thrill. Some manage to get through by telling a personal story, one they can't change much and which they know exactly how it felt. I believe that Herbert is neither of these. His characters are not incidental to the story, but neither are they the pillars of the plot. He uses them like others would write about chairs or the weather.

  This book is about an alien abduction and, indeed, it plays like that for most of its length. Only to then clobber the reader with a deep, deep philosophical musing about the meaning of life, the value of death and both the insignificance and the paramount importance of the individual in relationship with society and eternity. The style is quite archaic, the setup something that feels like it's from the 50s rather than the end of the 60s: the small American town, the slice of life that one might imagine many American authors writing about. And yet, Herbert's unique way of thinking rises like a giant even in this book, which was seemingly a serialized work for a magazine.

  I mentioned the style, which is sometimes hard to swallow, but there are several other things that make this book less than it could have been. The characters are really, really weird. Forget the aliens. The people Herbert describes feel autistic, the world they live in small, limited and petty. They are not bad characters or formulaic, they're just nuts. 

  Bottom line: I think the book is a must read for a Frank Herbert fan, but it is neither his best nor his worst work. A patchwork of deep philosophy and poor worldbuilding, great ideas and caricaturesque characters, it is short enough to be read quickly and enjoyed for the brilliant bits in it.


  Woohoo! Done with Pandora! It was a ridiculous series that almost didn't feel like it had any continuity. The origin book was about a small crew on a starship, then the trilogy that followed felt like a completely different beast, with each of the books in it different from the others, as well. Was there a common thread? I guess the evolution of humanity, but unlike something like Dune, the Pandora Sequence was random, cruel, overly pompous, with pointless religious overtones that went nowhere and with inconsistent characters. Worst of all, the ending of each of the books came out of nowhere, nullifying the meaning of most of the beginning.

  The Ascension Factor is like that, as well. We start with a world ruthlessly ruled by a man just 25 years after the events of the previous book, which left off with a society that was building spaceships to get to the hibernation pods in orbit. And now it's a quasi-feudal fiefdom in which people are controlled with fear, surveillance and famine. When the authors need technology, it's suddenly there; when they need people to be poor and starving, they scramble to have a line to throw illegally into the sea to catch a fish. I guess in a way that's plausible, considering I am complaining about this on a laptop after having read the book on a smartphone, knowing that there are people in the world somewhere living in abject poverty, but Frank Herbert and Bill Ransom want me to believe this happens at the same time with the same people. And the ending, oh God, should be the textbook definition of deus ex machina!

  Bottom line: I thoroughly disliked the three main books of the "sequence" and I couldn't wait to finish them. Now I have! I have no explanation for how I ended up remembering this series as good when I read it 30 years ago.

  I was thinking today about our (meaning "the Western Coalition" of countries with a common anti-Putin position) handling of the conflict in Ukraine. I was imagining a reporter trying to ascertain whether people support Ukraine or Russia, Zelensky or Putin, going on the street with a microphone and randomly asking for opinions. And I realized that would be impossible, because any positive support for Putin's aggression would immediately lead to negative personal consequences, so why would anyone be honest about that?

  Somehow, people saw there was a war in Ukraine and they thought it was like a Twitter war. "Russia attacked Ukraine. We're cancelling Russia!". Fine, people were trained to respond to conflict with some kind of mob action, do it your way! But how can you expect the result of the same action to be different in the case of Russia? On Twitter, people mob someone until their life is ruined, they mob back stronger, or they just don't care and leave Twitter. What exactly do people expect Putin is going to do? An insincere apology on Oprah? No. Either Russia will be ruined, with all of its people; or they will mob back, and you don't want that from a nuclear power; or they will just not care and carry on, which will ruin both Russia and Ukraine, with all of their people.

  We are in a situation where our entire society punishes dialogue, even compassion. How can you resolve a conflict if you are unwilling to even consider the point of view of the other side? What do you expect? Putin to one day wake up and think "All these people say I am evil. Perhaps I am. Shame on me! OK, guys, stop the war! Do no evil"? As long as open discussion of all of the viewpoints - regardless of their validity or moral value - is impossible, so is the end of the conflict outside the complete destruction of one or both sides.

  How exactly did societies that took pride in their democratic ideals reach a point where dissent is censored, dissenters punished, their lives destroyed and discussion stifled?

  How daft to believe that skirting the responsibilities of principle will ensure the victory of that principle. How idiotic to assume that a position of strength validates your moral stance. Putin does that! People from behind the Iron Curtain had that during the Communist era, when everything anyone would say was how wonderful our magnificent leader was and how the Communist ideals were all we thought about. North Korea uses the same system. And now "the free world". "Oh, another tyrant! Let's tyrannize them!".

  Taking a side in this is as debatable as in any other conflict, because both sides act as if they are righteous. What I am saying is that resolution of conflict lies in the ability to debate it, not in coercing people to sing your tune.


  In Dune, Frank Herbert had a certain pattern of trilogy storytelling: a book that built the world and introduced some characters in a more traditional way, something to hook you in, then a connective book that would upend the order set up in the first book, then a third which would tell the actual story that needed telling. This inevitably led to people enjoying just some of the books and created an up-and-down level of quality. You can see something similar in the Pandora series, but the books are just so confusingly different from each other that one can barely consider them part of the same universe.

  The first actual book (which is numbered 0.5 for some reason, perhaps because it's not happening on Pandora) was about building an AI on a starship. The next book was about an omnipotent starship acting like a god to the poor people of Pandora, forcing genetic mutations, cultural and personal behaviors and demanding worship. And now this one, The Lazarus Effect, where Ship is gone and all you get is a kind of whodunnit with a limited cast of characters on the now aquatic world of Pandora. I can already tell you that the last book starts from a completely different point and goes in another direction from where the ending of this one left off.

  And then there is the quality of the books. I kept very favorable memories of these books from my childhood when I first read them, but now I realize it was probably either a phase in which I understood and enjoyed a lot more than I do now, or (more likely) I was nostalgic for the hours and hours of playing the Civilization-like video game Alpha Centauri, which was inspired by Pandora. Short story long: other than Destination: Void, which I thought was kind of heavy but I enjoyed a lot, all the other books feel … empty of pleasure. There is nothing to make you, as a reader, feel good while reading them. No characters are fleshed out enough to empathize with and they are often unlikeable anyway. The world, biologically, ecologically or socio-politically, is rather basic and uninteresting. Perhaps at the time of its writing it was an amazingly fresh universe, but now it just feels like Waterworld and Pandora (from Avatar this time) mashed together by Chinese filmmakers. All of those elements are fun taken separately, but together they're just a mess.

  As for this book, I think one can get into the correct mindset to understand and maybe appreciate The Jesus Incident, even if I couldn't now, but The Lazarus Effect has almost no redeeming qualities. It is just boring and uninteresting, slogging towards a predictable ending. It took me ages to finish it because I just found other things to do rather than read it. I am now grinding through the last book and I can't wait to get rid of it.


  I already said while reviewing Destination: Void that I did not like the direction the story was going in the end, so it should be no surprise that I didn't like The Jesus Incident. A book filled with religious allegory and heavy philosophy about the definition of being human and the essence of religious worship and violence, it was so heavy that I had to make a lot of effort to finish it. I am going to go ahead and assume I didn't really understand it, but the important thing is that I didn't enjoy it. It was like all of the pretentious stuff from Dune got concentrated in Pandora and expanded upon by the contribution of Bill Ransom.

  It's funny that as I was preparing to read the series again, my memories of it from my early teens were corrupted by my own desires, mixed up with Sid Meier's Alpha Centauri, muddled by all I have read since. I now feel betrayed, because I really liked the Pandora series when I was a child and now I wonder if I have gotten dumb with age or if I just didn't get what I was reading back then, to the point that I hallucinated a whole new narrative and feel.

  So in the previous book a crew of clones on a generation ship construct an artificial consciousness. Because it is fully aware, it is also God-like, controlling space, time and reality. From the book it's not clear how exactly it did it, but, thus equipped, Ship accomplishes its mission to bring its human clone cargo to a habitable planet in the Alpha Centauri system by switching/constructing different realities until a habitable planet exists there. This leads to many histories, many Earths, many types of humans. Or it could have just created the planet out of nothing, then ran some extra realities for fun, although this doesn't explain why the planet was so hostile to a typical human population and makes the existing lifeforms its direct invention and responsibility. Anyway, once there, Ship acts like an omnipotent god, interfering when it feels like it, demanding WorShip and declining to interfere when it suits it, by invoking vague snobby principles that it makes up on the spot or it derives from histories that it otherwise keeps hidden from the human population. Somehow Jesus is involved in all of this, although for the life of me I couldn't see what the connection was.

  Bottom line: I almost hated this book. And it has so many of Herbert's obsessive ideas in it: religion, politics, ecology, evolution of humanity. As much as I respect Frank Herbert as a writer (so much that I am in the middle of rereading all of his books) I have to subjectively review this book alone, and for that I will probably rate it under average.


  I will be frank (pun not intended) and say that this book shocked me with how good it is. It is not very accessible, as it is fairly philosophical and technical - and the technical side may be a lot of mumbo jumbo - but I think this book shows what Frank Herbert was capable of at the height of his prowess.

  In short, Destination: Void is about a crew of four people on a disabled ship who need to construct an artificial intelligence in order to save the ship and their lives. There is only one snag: no one has managed to successfully build an AI that didn't end up disastrous. Here you have to accept a concept without which the book will not work: that an ultimately conscious entity has full access to the universe, giving it godly powers. This is not only a book about building a computer system, but a philosophical dissection of what consciousness is, what intelligence is, how the human mind works and whether we, when building mechanical intelligence, should even follow that design as a model.

  This book features many of the trademark Herbert ideas: the deeply meaningful thoughts, conversations and actions within an isolated group of people, the inner thoughts voiced in the writing, the declared and hidden agendas of people, the oppressive society that uses immoral methods to get to its goals, the great potential of human beings that can only be unleashed by extreme circumstances, the religious and sexual components of human drive, the archetypal roles of the characters, etc. And the insane pacing puts those ideas even more into terrifying focus.

  Again, I was amazed by this book, all but the ending. I would have loved an entire series following the spirit of most of it, unfortunately the next three books go in a completely different direction: the nature of godhood. Perhaps that is why this is not considered the first book in the "sequence", but book 0.5, because if the next ones focus on a god, this one focuses on building one. Or perhaps because Pandora is not even part of the story here.

  In conclusion, I recommend reading this book as a standalone story. Kudos if you want to read and enjoy the entire Pandora series, but in my mind Destination: Void is quite different from the others.


  Frank Herbert's writing feels paradoxical to me, as he examines the minutiae of individual characters or particular scenes, yet his main focus always remains on the situation as a whole. His heroes are worlds entire, with people just instruments of inevitable evolution or death. The Eyes of Heisenberg might be Herbert's alternative to Zamyatin's We or Aldous Huxley's Brave New World. The same oppressive dystopia of clinical control of society, the rebels, the groups of people vying for control and/or survival, the epic sweeping finale. Yet, where a central protagonist was the focus of those books, this one refuses to hold any one person to a rank high enough to outshine all of the others.

  Imagine a world ruled by Optimen, immortal people living in their own bubble of beliefs and absolute power, served by the Folk, cloned and genetically engineered people destined for a centuries-long life of predetermined work, yet still mortal, rarely rewarded for their servitude with the permission to procreate. The world became this after a terrible war between Optimen and cyborgs, in which the Optimen prevailed. A couple of young parents come to the clinic for the "cutting", where the embryo is examined, genetically manipulated against flaws, then put in a growing vat. But this embryo is special! A race between several groups of people is on to hide, preserve, destroy or use it as bait.

  You know that I don't usually describe the book plot in that much detail for fear of spoiling the story, but in this case I feel it is warranted, as The Eyes of Heisenberg is so full of technobabble it takes great effort to start reading it. Once the names and who is who are clear, the book is easy to read, but the beginning of the book... ugh! Especially since genetics wasn't really developed at the time, and all of the futuristic mumbo jumbo is obviously bull.  

  I really liked the idea of the story. Herbert always had great imaginative ideas that were not limited by his ability to express them. He will spend as much time or explanation on any detail or person as he needs, then sweep them aside like they never mattered just a bit later. The idea was always first! It took me some time to realize this, but Herbert always rushes the endings. He builds this incredible set of worlds and then, at the very end, he gets impatient and gets it over with. It's not as bad as Peter F. Hamilton, but it's there. I guess it takes a lot of determination and planning to keep a consistent pace throughout a book.

  I am sure you will be curious to know if this book, published in 1966, just a year after Dune (together with two other novels), is anything like the book that made Herbert famous. It is. People are cloned in axolotl tanks, organizations form around their approach to the solution of life: technically minded cyborgs, sterile immortals manipulating genes, couriers developing humanistic methods of communication and analysis. Some of the inner thoughts put on the page, the tool that made me fall in love with Dune in the first place, are there. There is also that permeating generic idea of the strong coupling between environment and life. Somehow I want Herbert to come back and write books in the StarCraft or Alien universes; I am sure he would have loved those worlds.

  Bottom line: not a perfect book and feeling a bit dated - note that I did compare it with work written three or four decades before - but still entertaining and evocative of Herbert's general ideas and style. Pandora is coming next, all four books.


  1966 was a prolific year for Frank Herbert. A year before he had published Dune, and now he won a Hugo for it; he also published the first book of the Pandora series, The Eyes of Heisenberg and the book I am reviewing now: The Green Brain. It features a lot of his recurrent ideas: ecology versus politics, how the environment defines and shapes life, including people, warnings about the human abuse of nature and the deeper interactions between people - complete with inner thoughts, Dune-style.

  However, the book feels rough. The plot is immediately revealed by both the title and the early scenes, the female character is pretty much a joke and, while the premise is great, the execution is rather bland, for example with characters that appear in some chapters and then are completely forgotten, and most of it being a pointless trip through a jungle. I liked it, but I can't help but feel that it was something that was partially written in the past and got published only because Dune was a hit.

  I can only recommend it to Herbert fans, because analyzed on its own it's pretty average and has a lot of unfulfilled potential.