The previous post was the 500th post in this blog! And I haven't even noticed. Let's celebrate late with news about the end of the world!

How bored can I be? I've read this article about the asteroid Apophis striking the Earth in 2029 or 2036 and my first thought was "oh man! Why so late?". I was already calculating how unfair it was that I would have to live to be 59 years old before the world ends. So I would have to cope until then, and then, just close to my retirement, that pleasant moment when I get money from the state and do nothing while my mind slowly rots away, it would all go away. Then it hit me: I am a complete idiot! What was I thinking? Sheesh!

Anyway, here is a cute animation made by a guy on YouTube. I personally prefer the first one, the one that is all graphics and no realism. Again, who wants real? But that wouldn't be in the spirit of the blog :) So watch the second one, the one made to satisfy the critics.

Update: the guy REMOVED (who does that?!) the videos from YouTube. I am putting up another ridiculously grand video of a possible asteroid destruction of the Earth (although this one seems more like a dwarf planet :) ) to satisfy those hungry for obliteration.

[youtube:InPNk44v7uw]

Wow, think how many blog posts I would have written until 2036!

I took a test recently, one of those asking ridiculous C# syntax questions rather than trying to figure out if your brain works, but anyway, I got stuck at a question about structs and classes. What is the difference between them?

Credit must be given where it is due: I took the info from a dotnetspider article by Dhyanchandh A.V., who organised the answer to my question very well:
  • Classes are reference types and structs are value types. i.e. one cannot assign null to a struct
  • Classes need to be instantiated with the new keyword, structs need only be declared
  • When one instantiates a class, it is allocated on the heap. When one declares a struct local variable, it is created on the stack (though a struct that is a field of a class lives inline with that class on the heap)
  • One works with the reference to a class, but directly with the struct
  • When passing a class to a method, it is passed by reference. When passing a struct to a method, it’s passed by value instead of as a reference
  • One cannot have instance field initializers in structs
  • Classes can have explicit parameterless constructors. Structs cannot
  • Classes support inheritance. But there is no inheritance for structs, except structs can implement interfaces
  • Since struct does not support inheritance, the access modifier of a member of a struct cannot be protected or protected internal
  • It is not mandatory to initialize all fields inside the constructor of a class. All the fields of a struct must be fully initialized inside the constructor
  • A class can declare a destructor, while a struct cannot

What is the purpose of a struct, then? It seems to be only a primitive type of class. Well, it serves purposes of backward compatibility with C. Many C functions (and thus COM libraries) use structs as parameters. Also, think of the struct as a logical template over a memory location. One could use the same memory space of an Int32 under a struct { Int16 lo,hi }. Coming from an older and obsolete age, I sometimes feel the need to just read the memory space of a variable and be done with it. Serialization? Puh-lease! just grab that baby's memory and slap it over someone else's space! :)
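The Int16-over-Int32 idea can actually be expressed in C# with an explicit layout struct, which behaves like a C union. This is a sketch (IntOverlay is a made-up name, and the lo/hi split shown assumes a little-endian machine):

```csharp
using System;
using System.Runtime.InteropServices;

// A logical template over a memory location: the same 32 bits viewed
// either as one Int32 or as two Int16s, via explicit field offsets.
[StructLayout(LayoutKind.Explicit)]
struct IntOverlay
{
    [FieldOffset(0)] public int Value;
    [FieldOffset(0)] public short Lo; // low 16 bits (little-endian)
    [FieldOffset(2)] public short Hi; // high 16 bits
}

class Program
{
    static void Main()
    {
        var overlay = new IntOverlay { Value = 0x00020001 };
        Console.WriteLine(overlay.Lo); // 1
        Console.WriteLine(overlay.Hi); // 2
    }
}
```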

and has 0 comments
One of our sites started exhibiting some strange behaviour when adding a lot of strings to a StringBuilder. The error was intermittent (best kind there is) and the trace showed the error originating from the StringBuilder class and then from within the String class itself (from an external method).

On the web other people noticed this and one possible explanation was that strings need to be contiguous and thus they cannot grow bigger than the largest free contiguous memory block. Remember defragmentation? Heh.

In order to test this, I added a letter to a StringBuilder, then repeatedly appended its own content to it to see where it breaks. It broke at a size of 2^27 (134,217,728 = 0x8000000)! At first I thought it was a StringBuilder bug, but calling sb.ToString() and then adding a letter to the result also threw an error. Then I thought maybe it is a default size that, for whatever reason, is considered the maximum string size. That would suck. I don't usually need 134MB of string, but what if I do? I have 2.5GB of memory in this computer. But no, people reported the same problem with other powers of 2, like 2^29, and on a Vista computer I got 2^28.
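A rough reconstruction of that test looks like this. Be warned: it deliberately eats memory until something fails, and the exact failing size depends on the machine and runtime, so don't run it on a box you care about:

```csharp
using System;
using System.Text;

// Start from one character and keep appending the builder's own content,
// doubling the length each pass until an allocation fails.
class Program
{
    static void Main()
    {
        var sb = new StringBuilder("a");
        try
        {
            while (true)
            {
                sb.Append(sb.ToString()); // doubles the length each pass
            }
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Failed after reaching 2^" + Math.Log(sb.Length, 2) + " characters");
        }
    }
}
```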

Is it from the fragmentation of memory? Right now I have only 1.8Gb in use. I rather doubt that in the 700Mb of space left there is only one 134Mb contiguous memory block. But I cannot prove it one way or another. My guess is that there is another mechanism that actually interferes with this and effectively limits the maximum string size.

But what does that mean, anyway? For practical purposes it means that you have to plan not to have strings that big in your applications. Creating a custom string builder might work, but it would have to use some other methods than strings.

I have tried the same thing with byte arrays. While declaring a byte array of 2^28 was possible (probably also because a string uses Unicode characters internally and thus takes twice the memory) writing it to a MemoryStream resulted in an error.

So watch out for these things and try to always keep single variable blocks at manageable sizes.

It has been a pain in the ass for me to use the graphical designer in Visual Studio. Instead, I write the markup of web pages and controls by hand. It goes several times faster, but still lacks the speed of any text editor I have ever seen. Moving up, down, left, right with the arrow keys would make the system lag, jerk, etc. It was never annoying enough to investigate until today.

What happens is that whenever you work in a VS window, it sends events to just about everything on the screen. The more windows you have open, the more it lags. And the culprit for this particular problem: the Properties window. Every time I moved with the keys it tried to update itself with the information of the object I was moving over. Closing the Properties window fixed it! \:D/

and has 0 comments
I heard a bit of this song in a supermarket, remembered it and couldn't get it out of my head. The funny thing is that when I listened to it properly, it didn't sound as good as I remembered it... Is there another version, or is my memory playing tricks on me?

Anyway, here it is, British singer Bryan Ferry playing with his band Roxy Music.


I will write a really short post here, maybe I'll complete it later. I had to create some controls dynamically on a page. I used a ViewState saved property to determine the type of the control I would load (in Page_Load). It all worked fine until I've decided to add some ViewState saved properties in the dynamic controls. Surprise! The dynamically created controls did not persist their ViewState and my settings got lost on postback.

It's a simple google search away: controls created dynamically in Page_Load have empty ViewState bags. You should create them in Page_Init. But then again, in Page_Init the page doesn't have any ViewState yet, so you don't know what control type to load. It seemed like a chicken and egg thing until I tried the obvious: since the ViewState is loaded in the LoadViewState method, which is thankfully protected virtual, why not create my control there, after the page loads its view state? And it worked.

Bottom line:
protected override void LoadViewState(object savedState)
{
    base.LoadViewState(savedState); // the page's own ViewState is now populated
    CreateControl(ControlName);     // created here, the control still loads its own ViewState
}
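For completeness, here is a sketch of the whole page, including the ViewState-backed ControlName property the snippet relies on (CreateControl and the TextBox case are my own illustrative placeholders):

```csharp
using System;
using System.Web.UI;

// The control's type is kept in the page's ViewState; the control is then
// recreated in LoadViewState, after the page's ViewState is restored but
// early enough for the child control to load its own ViewState.
public class DynamicControlPage : Page
{
    private string ControlName
    {
        get { return (string)(ViewState["ControlName"] ?? string.Empty); }
        set { ViewState["ControlName"] = value; }
    }

    protected override void LoadViewState(object savedState)
    {
        base.LoadViewState(savedState);
        CreateControl(ControlName);
    }

    private void CreateControl(string controlName)
    {
        // hypothetical factory: create the control based on its saved type name
        if (controlName == "TextBox")
            Controls.Add(new System.Web.UI.WebControls.TextBox { ID = "dynamic" });
    }
}
```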

First of all, I seem to be the proverbial man who can't do it, so he teaches it. I've not worked in a Scrum or XP environment, but I did read a few books about them and this is what I gathered. I beg of you to point out any mistruth or inconsistency. You might want to take a look at this previous, more general post on the matter of agile development.

Some key elements of all agile methods I've read about are:
  • the code does not belong to any programmer, in other words anyone can change any piece of code in order to solve an issue
  • the members of the team are interchangeable, so not a bunch of experts in different fields, but people that can do all things (and be easily replaced by people just as agile as them :) )
  • the members of the team must have similar competencies, one cannot do pair programming between a rookie and a senior, for example. That is called teaching :)
  • the client is supposed to change their mind often and unpredictably, one plans for the unplannable
  • the client must be represented in the agile team, so as to not have delays or misunderstandings in requirements


Scrum



The Scrum system does seem to be more of a disciplined way of developing than a method in itself. There are Scrum principles that must be upheld, but if you ignore them, the whole system looks like this:

  • All development is done in fixed time increments called Sprints. Scrum specifies 15 or 30 days, although I bet most dev companies actually plan this as a calendar month.
  • At the start of each Sprint an 8 hour meeting takes place (so, the first day), in which half the time is used to present the requests by the Product Owner (in our case that would be either the client or the person that did the analysis) and the other half to plan which of the tasks in the Project Backlog (the requirements list) can be done in the current Sprint. This last part is the responsibility of the Team (that would be the developers and their team leaders and managers).
  • On the last day of the Sprint two meetings are held: a 4 hour meeting in which the Team presents what was done in the current Sprint to the Product Owner (an informal meeting that "is intended to bring people together and help them collaboratively determine what the Team should do next") and a 3 hour meeting in which the ScrumMaster (the person in charge of the implementation of Scrum in the project) "encourages the Team to revise, within the Scrum process framework and practices, its development process to make it more effective and enjoyable for the next Sprint"
  • The development is done in the remaining 28 days
  • Each day there is a 15 minute Scrum Meeting held within the Team in which "each Team member answers three questions: What have you done on this project since the last Daily Scrum meeting? What do you plan on doing on this project between now and the next Daily Scrum meeting? What impediments stand in the way of you meeting your commitments to this Sprint and this project? The purpose of the meeting is to synchronize the work of all Team members daily and to schedule any meetings that the Team needs to forward its progress".


What is important about these Sprints is that at the end of each sprint the product should be fully implemented, tested and ready for production. At each increment the client could just take the product and leave. Any changes to the specifications must be included in the backlog and prioritised so that the developers apply them in the next Sprints. Once a Sprint is planned, there are no changes to it.

So, as far as I understand, this is a method of making rigid planning for very small periods of time, then executing it, effectively reducing each project to a bunch of smaller ones. Instead of "Make me a business management application" there will be projects like "Make me a member management interface", then "Add activities management" and so on. It reminds me of the time when I wanted to learn in college and I would divide the number of pages I had to understand and memorize to the number of days remaining till the exam.

I don't consider Scrum a very innovative way of development, although back in 1986 it probably was, but that's also good. One can easily adapt some of these ideas to their own system of development. By allowing the developers to build a finite number of things in a predetermined time, they can select a time to test the application in which they are certain no more requests will delay that process. Of course, I don't know what happens if the client changes their mind about a thing that is supposed to be done in a Sprint. Do we abandon the task in the current Sprint and plan it, modified, in the next? Do we build it as if nothing happened, then start making the changes or, worse, remove it?

XP (Extreme Programming)



The Extreme Programming development method seems to have the same roots as Scrum does. The idea is to develop in successive iterations that encapsulate planning, testing, development and refactoring. The "12 principles" of XP are mentioned again and again in the book, but I think that's crap. The most important ideas in XP, to me at least, seem to be:
  • User stories as requirements gathering; Most important! A detailed story of what the user will do and why, like a narrative, the Word version of a UML flow diagram, which is the responsibility of the client! The actual development is the implementation in code of those stories
  • iterations, which in the case of XP don't have a specific time length, each one is planned depending on what there is to do and what can be done
  • the separation of user and client, the user is the one that actually uses the program, while the client... well, you know
  • user-on-site, you can always ask the user what they think and receive quick feedback
  • Test driven development, which, together with pair programming, seem the only actual extreme parts of XP, where they insist on tests first, programming later.
  • Spikes: small bursts of programming for no other reason than to research an idea. Developers don't have to be rigorous in spike programming, since they only write the bit of code, test its functionality, then throw it away, the idea being that they learn how to do the actual code they wanted to write and what problems they might be facing. In this particular case, the spike is part of the planning or designing of a piece of code.


I will mention Pair Programming here as well, although I clearly don't see it happening. The idea is that two programmers sit at the same machine; one programs, while the other does just-in-time code review and thinks of the larger implications of the code. While the concept is sound and I seldom find myself wanting to be able to code and also think in a larger context, I don't see how this can be done any more than a master painter could get help from a second one who watches from afar and keeps nagging him on how to do things. Besides, sitting near code that is being written sounds both boring and terribly frustrating.

But then again, I always like talking to other programmers that are as passionate as I am, so maybe a hands-on discussion, even an argument, might provide the drive to good code. Besides, it is harder to waste time on news sites and online games when you have some guy next to you :)

Conclusion



My conclusion is that agile is a solution to the problems that arose during the Waterfall days. It is not a solution to all problems and it certainly presents some level of difficulty in implementation.

I believe it would be hard to do in a small team with high turnover. One needs a stable team that works well together and has a decent management to implement agile development. But I do see it as a positive thing, as it puts the needs of the customer first and, no matter how good a coder you are, your primary goal is to satisfy the client.

Here is a point raised by Tudor, from the infamous Romanian blog it-base.ro. Incidentally, he does some teaching in the Java field and is a good web designer and PHP programmer.

Ok, some of you know those informatics teachers that start talking about a programming language by giving you a silly problem that would never occur in real life or by asking you to "decipher" a piece of code that looks unusable in any scenario. But bear with me and try to think this through before you read on in the post.
Question: what is the value of i (initially 0) after the following operations in C#?
  • i = i++
  • i = Math.Pow(i++,2)
  • some method that ends in return i++


First, let's think about the meaning of this arcane ++ symbol. It increments a number, that is, adds 1 to it. In C-based languages you can place it either before or after a variable, which changes whether the expression yields the value before or after the increment. As MSDN says: "The increment operator (++) increments its operand by 1. The increment operator can appear before or after its operand:
++ var
var ++
The first form is a prefix increment operation. The result of the operation is the value of the operand after it has been incremented.
The second form is a postfix increment operation. The result of the operation is the value of the operand before it has been incremented."


Now the answer is pretty clear: i will be zero after any of the operations described above. But why?! Let's examine them a little.

The first makes no sense in the real world, but you could easily imagine something like i=j++ that could be used somewhere and that makes sense: set i to the original value of j, then increment j. But then doesn't it mean that i should be 1, because it gets incremented after the assignment? Well, no. What actually happens is that the right-hand side is evaluated first and the assignment is performed last, like this:
  evaluate i++ : the expression yields the old value of i (0) and i is incremented to 1
  assign the expression's value to i : i is overwritten with 0
So the increment does happen; it is simply overwritten by the assignment that follows it.


Ok, ok, I guess that makes some sort of sense, but what about that int F(int i) { return i++; } thing? Shouldn't it increment i after the operation, then return it? Apparently not. The method returns the old value of i; the increment is applied to the local parameter, which is then discarded when the method returns.

According to Tudor, PHP would return 1 in these situations, although most C implementations, including JavaScript, return 0. Update: he later posted a comment retracting that statement. He also suggested it would be hard to debug something like this in case it happens. Ha! My beautiful ReSharper Visual Studio addon immediately added a wiggly line under i and said value assigned is not used in any execution path. ReSharper - Computer teachers : 1-0 ! :)
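A minimal console sketch of all three cases from the question (the F name is just the illustration used above; note that Math.Pow returns a double, so the second case needs a cast to even compile):

```csharp
using System;

class IncrementDemo
{
    // the "method that ends in return i++" case
    static int F(int i)
    {
        return i++; // yields the old value; the incremented local is discarded
    }

    static void Main()
    {
        int i = 0;
        i = i++;                   // i++ yields 0 and sets i to 1, then the assignment overwrites i with 0
        Console.WriteLine(i);      // 0

        i = 0;
        i = (int)Math.Pow(i++, 2); // Math.Pow(0, 2) is 0, again overwriting the increment
        Console.WriteLine(i);      // 0

        Console.WriteLine(F(0));   // 0
    }
}
```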

This was something I have been meaning to write for quite some time, but actually, I had never had to work with a proper serialization scenario until recently. Here it goes:

First of all, whenever one wants to save an object to a string, they google ".Net serializer" and quickly reach the XmlSerializer, because that's what most people think serialization is. But actually, it is not. The whole point of serializing an object is that you can transfer and store it. Therefore you need to use a format that is as open, clear and standard as possible and to send only the relevant data, which in the case of objects is the PUBLIC data. And for that the XmlSerializer does its job, although it does have some problems I am going to describe later.

But suppose you didn't really want to send mere data over to another computer, but an entire class, with its state intact, ready to do work as if the transfer never happened? Then you need to FORMAT the object. Enter the IFormatter interface with its most prominent implementation: BinaryFormatter. Funny enough, the methods used to spurt an object through a stream are also called Serialize and Deserialize. The advantages of the IFormatter way are that it saves the entire object graph, private members included, and doesn't have all the requirements the XmlSerializer does. It also produces a smaller output. So, is this it? Why use Xml (which everybody secretly hates) when you can use the good ole obscure binary file with almost no trouble? Well, because of the almost. Yes, this way of doing things is not foolproof either.

Some people feel that sending only the data is not serialization, and that the saving of the complete graph and internal state of the object is. Wikipedia says: "serialization is the process of converting an object into a sequence of bits so that it can be stored on a storage medium (such as a file, or a memory buffer) or transmitted across a network connection link. When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object.". So, they bail by using the obscure phrasing of "semantically identical", which pretty much says "they mean the same thing even if their structures may differ". So, I think I am in the right, as the BinaryFormatter has real issues with structure change.

Now for the quick and dirty reference:
The XmlSerializer
  • Only serializes public READ/WRITE properties and fields - it doesn't throw any error when trying to serialize readonly properties, so be careful
  • Needs the class to have a parameterless constructor - this pretty much restricts the design of the classes you can serialize
  • Does not work on Dictionaries
  • Uses a Type definition to serialize and deserialize, but is tolerant about it: you can still use it if the types are named differently or are of different versions, even if they are radically different, as it will only fill the values that it stored and not care about the others
  • There are all sorts of attributes one can decorate their classes with to control serialization as well as some events that are fired during deserialization
  • Has issues with circular references
  • If your class implements IXmlSerializable it can control how the serialization is done
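To make the list above concrete, here is a minimal round trip with XmlSerializer (the Person class is just an illustration; note the public parameterless constructor and public read/write properties it requires):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// XmlSerializer needs a public type with a parameterless constructor
public class Person
{
    public string Name { get; set; } // only public read/write members are serialized
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Person));
        var writer = new StringWriter();
        serializer.Serialize(writer, new Person { Name = "Test", Age = 30 });
        string xml = writer.ToString(); // open, standard, human-readable

        var clone = (Person)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine(clone.Name); // Test
    }
}
```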

The BinaryFormatter
  • It serializes both public and private, readonly or read/write properties and fields as long as their type classes are marked as Serializable - that sucks for classes that are not yours
  • It's a rigid method of serializing objects - if you change the source or destination objects or even their namespace, the deserialization won't work
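And the equivalent round trip with BinaryFormatter; notice that the private readonly field survives, which the XmlSerializer would have ignored (the Account class is illustrative):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable] // every type in the graph must be marked Serializable
public class Account
{
    private readonly string id; // private and readonly members are captured too
    public decimal Balance;

    public Account(string id) { this.id = id; } // no parameterless constructor needed
    public string Id { get { return id; } }
}

class Program
{
    static void Main()
    {
        var formatter = new BinaryFormatter();
        var stream = new MemoryStream();
        formatter.Serialize(stream, new Account("A-1") { Balance = 42m });

        stream.Position = 0;
        var clone = (Account)formatter.Deserialize(stream);
        Console.WriteLine(clone.Id);      // A-1
        Console.WriteLine(clone.Balance); // 42
    }
}
```

Deserialization requires the exact same type, assembly and namespace on the other side, which is the rigidity mentioned above.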

The SoapFormatter
  • Just when I thought that a class that combines the benefits of both BinaryFormatter and XmlSerializer exists, it appears it has been obsoleted in .Net 3.5. Besides, it did far less than the BinaryFormatter


It seems that Microsoft's idea of serialization blatantly differs from mine. I would have wanted a class that can serialize binary or Xml based on a simple property, send public OR both types of fields and properties, and be flexible in how decorating attributes are used and what the output is. In my project I had to switch from BinaryFormatter, which seemed to solve all problems, to XmlSerializer (thus having to change a lot of the classes and the design of the app) just because the type of the class sent by the client application could not have the same namespace as the one on the server.

That doesn't mean one cannot build their own class to do everything I mentioned above, of course. Here are some CodeProject examples:
A Deep XmlSerializer, Supporting Complex Classes, Enumerations, Structs, Collections, and Arrays
AltSerializer - An Alternate Binary Serializer.

Update: the .Net framework 3.5 has added an object called JavaScriptSerializer which turns an object into a single line of text, JSON style. It worked great for me in order to log hierarchical or collection based data. Use it just like the XmlSerializer.
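Usage is a one-liner (JavaScriptSerializer lives in System.Web.Script.Serialization, in the System.Web.Extensions assembly; the anonymous type here is just for illustration):

```csharp
using System;
using System.Web.Script.Serialization; // System.Web.Extensions assembly

class Program
{
    static void Main()
    {
        var serializer = new JavaScriptSerializer();
        // any object graph becomes a single line of JSON text
        string json = serializer.Serialize(new { Name = "Test", Tags = new[] { "a", "b" } });
        Console.WriteLine(json); // {"Name":"Test","Tags":["a","b"]}
    }
}
```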

Another link to check out is this one: Fast Serialization, but read the entire article before using any code.

Well, I am alive and blogging. It is a new year, one that brings as much hope and fulfilment as the last one (lots of hope there!), the big 2009. I can vaguely remember a kid that computed his age for the year 2000 and thought "I will be old enough to go to Mars!", but apparently, no human is old enough yet.

So what am I planning this year? Getting back on track would be a good idea. Stop wasting time that I don't have and if I have and waste, then I don't deserve. My book? Ahem. Let's hope I get inspired beyond the mere autobiographic. My AI MMORPG WMATCL project? I still have to design an AI that is worthy of its name. My job? Well, it's still there. I find myself wondering why? from time to time, but I guess it is good to have a job in this troubled time. My blog? Well, I intend to spice it up, but I need to actually do interesting stuff for that. I will update it with info gathered from a new Windows Forms application that I am building as well as information about the ReportViewer control that I've finally managed to use and to love/hate. My personal life? I have reached that point that many people find themselves at without really understanding how they got there. I've made the compromises that make one accept their life "as is" and postpone who they are. Sometimes that "myself" I have imprisoned deep inside growls and pulls on the bars. But maybe he's there for life (pun intended). Then again, maybe not. He feels more and more like a stranger now.

Oi! What's with the depressed text!? Forget all that! It's a new year! Happy new year!!! [Party trumpet and silly face]

You probably know Melissa Auf der Maur as the bassist of the band Hole. She was the cute skinny redhead. She also toured with Smashing Pumpkins for a while. I don't remember where I've heard of her, but I got her album and listened to it and I really enjoyed it. Here is a taste of her music from her (so far) only album Auf der Maur.

About a second album, I am quoting Wikipedia: "In a 2007 interview, Auf der Maur announced that she had finished her second solo album, which would go hand in hand with a graphic novel and a concept film, the release dates of which are unclear. The album will be released under the name of MAdM, whereas the comic and film will go by Out of Our Minds, or OOOM for short. A website containing teasers of the projects, as well as a movie trailer, was launched in August 2007 and can be found at xMAdMx.com"

Enjoy!


Well, I was trying to use for the first time the Microsoft ReportViewer control. I used some guy's code to translate a DataSet XML to RDLC and ran the app. Everything went OK except for the export to Excel (which was the only real reason why I would attempt using this control). Whenever I tried exporting the report, the 'Asp.net Session expired' error would pop up.

Googling I found these possible solutions:
  • set the ReportViewer AsyncRendering property to false - doesn't work
  • use the IP address instead of the server name, because the ReportViewer has issues with the underscore character - doesn't work, although I didn't have any underscore characters in my server name to begin with
  • set the maximum workers from 2 to 1 in the web.config (not tried, sounds dumb)
  • setting cookieless to true in the web.config sessionState element - it horribly changed my URL, and it worked, but I would never use that
  • setting ProcessingMode to local - that seemed to work, but then it stopped working, although I am not using the ReportViewer with a Reporting Services server
  • Because towards the end I've noticed that the problem was not an expiration, but more of a Session communication problem, I tried setting machineKey in web.config, although it doesn't work for the InProc setting. So it didn't work either.
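For reference, the settings mentioned in the list above live in the page markup and in web.config, roughly like this (the values are the ones I tried; adapt them to your own application):

```xml
<!-- ReportViewer markup with the AsyncRendering / ProcessingMode attempts -->
<rsweb:ReportViewer ID="ReportViewer1" runat="server"
    AsyncRendering="false" ProcessingMode="Local" />

<!-- web.config: the cookieless session workaround (works, but mangles the URL) -->
<sessionState mode="InProc" cookieless="true" />
```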


For a few days, this blog post showed the last solution as working. Then it failed. I don't know why. I fiddled with the RDLC file a little (arranging columns and colors and stuff) and then it seemed to work again. I have no idea why.

I got mad and used Reflector to get the source code for the ReportViewer control and see where it all happens and why! I have found the following:
  • the error message looks like this:
    ASP.NET session has expired
    Stack Trace:
    [AspNetSessionExpiredException: ASP.NET session has expired]
    Microsoft.Reporting.WebForms.ReportDataOperation..ctor() +556
    Microsoft.Reporting.WebForms.HttpHandler.GetHandler(String operationType) +242
    Microsoft.Reporting.WebForms.HttpHandler.ProcessRequest(HttpContext context) +56
    System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +181
    System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75
  • the error occurs in the constructor of ReportDataOperation:
    public ReportDataOperation()
    {
        this.m_instanceID = HandlerOperation.GetAndEnsureParam(requestParameters, "ControlID");
        this.m_reportHierarchy = (ReportHierarchy)HttpContext.Current.Session[this.m_instanceID];
        if (this.m_reportHierarchy == null)
            throw new AspNetSessionExpiredException();
    }
  • the Session object that makes the error be thrown is set in the SaveViewState() override method in ReportViewer
  • Apparently, the error occurs only sometimes (probably after the HttpApplication was restarted and during the debug mode of Visual Studio).


This reminded me of the time when I was working on a Flash file upload control and I used a HttpHandler to get the flash file from the assembly. Back then the control would not work with Firefox and some other browsers, which would use different sessions for the web application and for getting the Flash from the axd http handler.

This time it works with Firefox and IE, but it fails in debug mode and only in IE. I am using IE8, btw.

My conclusion is that
  1. the ReportViewer control was poorly designed
  2. the ASP.Net Session expired error is misdirecting the developer, since it is not an expiration problem
  3. the actual problem lies in the inability of the ReportViewer control to communicate with the HttpHandler.
  4. The problem also could be related to the browser using separate threads to get the application and access the HttpHandler.

Hi, I am working on a new blog format. As I am lazy and a complete html and CSS noob, it will take a while. Please, feel free to comment on the new look. Actually, feel obligated to do so! :)

I went to this presentation of a new Microsoft concept called Windows Azure. Well, it is not so much a new concept as Microsoft entering the distributed computing competition. Like Google, IBM and - most notably - Amazon before it, Microsoft is using large computer centers to provide storage and computing as services. So, instead of having to worry about buying a zillion computers for your web farm, managing the failed components, the backups, etc., you just rent the storage and computing and create an application using the Windows Azure SDK.

As explained to me, it makes sense to use a large quantity of computers, especially structured for this cloud task, having centralized cooling, automated update, backup and recovery management, etc., rather than buying your own components. More than that: since the computers run the tasks of all customers, there is a more efficient use of CPU time and storage/memory.

You may pay some extra money for the service, but it will closely follow the curve of your needs, rather than the ragged staircase that is usually a self managed system. You see, you would have to buy all the hardware resources for the maximum amount of use you expect from your application. Instead, with Azure, you just rent what you need and, more importantly, you can unrent when the usage goes down a bit. What you need to understand is that Azure is not a hosting service, nor a web farm proxy. It is what they call cloud computing, the separation of software from hardware.

Ok, ok, what does one install and how does one code against Azure? There are some SDKs. Included is a mock host for one machine and all the tools needed to build an application that can use the Azure model.

What is the Azure model? You have your resources split into storage and computing workers. You cannot access the storage directly and you have no visual interface for the computing workers. All the interaction between them or with you is done via REST requests, http in other words. You don't connect to SQL Servers, you use SQL Services, and so on and so on.



Moving a normal application to Azure may prove difficult, but I guess they will work something out. As with any new technology, people will find novel problems and the solutions for them.

I am myself unsure of what is the minimum size of an application where it becomes more effective to use Azure rather than your own servers, but I have to say that I like at least the model of software development. One can think of the SDK model (again, using only REST requests to communicate with the Azure host) as applicable to any service that would implement the Azure protocols. I can imagine building an application that would take a number of computers in one's home, office, datacenter, etc and transforming them into an Azure host clone. It wouldn't be the same, but assuming that many of your applications are working on Azure or something similar, maybe even a bit of the operating system, why not, then one can finally use all those computers gathering dust while sitting with 5% CPU usage or not even turned on to do real work. I certainly look forward to that! Or, that would be really cool, a peer to peer cloud computing network, free to use by anyone entering the cloud.

Well, I just said I couldn't wait for the third book, didn't I? :) Anyway, Dexter in the Dark was a bit of a disappointment to me. Apparently, Dexter's inner demons are just that, demons, linked to some ancient deity from the times of Solomon called Moloch, which is like an alien parasite thing. Really... What did Lindsay do? Read Snow Crash? Watch Fallen? Try to mix Stargate Goa'ulds with Wicker Man and Eyes Wide Shut? Geez!

When I was getting so comfortable with the character of Dexter, thinking that Jeff Lindsay was a genius for portraying a type of character I was always thinking of writing, he just takes all that inner maniacal urge that both empowered and limited the character and transforms it into an external, fantasy like thing. Bad writer, bad!

Anyway, that doesn't mean I didn't enjoy the book. I just think that even when the third series of the TV show became too far fetched, it was still safe compared to this. I mean, until now Dexter was a brilliant guy with a dark path and also a sort of artificial morality; mix in some police stuff, some blood spatter, the weird police sergeant sister. It was a perfect setting for introspection and solitary struggle. I loved that! And now demons? As Doakes would have put it, "the hell for?".

The fourth Dexter book is supposedly due on February 5th, 2009. I hope Lindsay abandons the weird supernatural crap and instead focuses on Dexter's training of his adoptive children in the art of killing. Otherwise I can only see it turning in so many bad directions, like Blade or Hellboy or some other green "hybrid saves the planet" thing.