I was working on an application and found it easy to create some controls as UserControl classes. However, I realised that if I wanted to centralize the styling of the application, or even move some of the controls into their own control library with a Themes folder, I would need to transform them into Control classes.

I found that there are two major problems that must be overcome:

  1. the named controls in a user control can be accessed as fields in the code behind; a theme style does not allow such direct access to the elements in the control template.
  2. the controls that expose an event may not expose an associated command. In a user control a simple code behind handler method can be attached to a child control event, but in a theme a command must be available in order to be bound.



Granted, once the first problem is solved, it is easy to attach event handlers in the code of the control to the child elements, but this presents two very ugly problems: the template of the control would need to contain the child elements in question, and the handler would not be declared in the theme, so the behaviour would be fixed as well.

I will solve the handling of events as commands by using a solution from Samuel Jack. The article is pretty detailed but, to make a long story short, one creates an attached property for each of the handled events by using a helper class:


public static class TextBoxBehaviour
{
    public static readonly DependencyProperty TextChangedCommand =
        EventBehaviourFactory.CreateCommandExecutionEventBehaviour(
            TextBox.TextChangedEvent, "TextChangedCommand", typeof(TextBoxBehaviour));

    public static void SetTextChangedCommand(DependencyObject o, ICommand value)
    {
        o.SetValue(TextChangedCommand, value);
    }

    public static ICommand GetTextChangedCommand(DependencyObject o)
    {
        return o.GetValue(TextChangedCommand) as ICommand;
    }
}

then by using a very simple syntax on the control that fires the event:


<TextBox ff:TextBoxBehaviour.TextChangedCommand="{Binding TextChanged}"/>


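Samuel Jack's article contains the actual implementation of EventBehaviourFactory; since it is not shown above, here is a minimal sketch of the idea (my own reconstruction, not his exact code): register an attached ICommand property whose change callback hooks a handler to the routed event, and have that handler execute the bound command.

public static class EventBehaviourFactory
{
    public static DependencyProperty CreateCommandExecutionEventBehaviour(
        RoutedEvent routedEvent, string propertyName, Type ownerType)
    {
        return DependencyProperty.RegisterAttached(
            propertyName, typeof(ICommand), ownerType,
            new PropertyMetadata(null,
                (sender, e) => OnCommandChanged(sender, e, routedEvent)));
    }

    private static void OnCommandChanged(DependencyObject sender,
        DependencyPropertyChangedEventArgs e, RoutedEvent routedEvent)
    {
        UIElement element = sender as UIElement;
        if (element == null)
        {
            return;
        }
        // attach the handler the first time a command is set;
        // a full implementation would also detach it when the command is removed
        if (e.OldValue == null && e.NewValue != null)
        {
            element.AddHandler(routedEvent, new RoutedEventHandler((s, args) =>
            {
                // the event arguments are passed as the command parameter
                ICommand command = (ICommand)element.GetValue(e.Property);
                if (command != null && command.CanExecute(args))
                {
                    command.Execute(args);
                }
            }));
        }
    }
}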

The problem regarding access to the elements in the template is solved by reading the elements by name from the template. In some situations, like when one uses a control as a source for a data control (like using a TreeView as the first item in a ComboBox), the approach has to be more complicated, but as long as the element is stored in the template of the control, something like this replaces the work that InitializeComponent does inside a UserControl:


[TemplatePart(Name = "PART_textbox", Type = typeof(TextBox))]
public class MyThemedControl : Control, ITextControl
{
    private TextBox textbox;

    public override void OnApplyTemplate()
    {
        base.OnApplyTemplate();
        textbox = Template.FindName("PART_textbox", this) as TextBox;
    }
    ...


The code is pretty straightforward: use the FrameworkTemplate.FindName method to find the elements in the OnApplyTemplate override and remember them as fields that you can access. The only weird part is the use of the TemplatePartAttribute, which is not mandatory for this to work, but is part of a pattern recommended by Microsoft. Possibly in the future, tools will check for the existence of named elements in templates and compare them against the ones declared in the control source.

The code of a demo project can be downloaded here.

Some other technologies I have used in the project:

  • the RelayCommand class, to make it easier to define ICommand objects from code without declaring a type for each.
  • the AccessKeyScoper class that allows an IsDefault button to act locally in a panel.


Google was born from an idea in 1996. It gained momentum and it became a word in the English dictionary. To google means more than to search for something, it means to delegate the responsibility of the search, it means not simply search, but find the answers to your question.

It reminds me of that scifi joke about a universe populated by billions of races that decided to combine all their networks into a large information entity. Then they asked the question "Is there a God?" and the machine answered "Now there is" and melted the off switch with a bolt of lightning. Can one really trust the answers given to them by a machine?

I am not the paranoid type. This is not a blog post about the perils of machine domination or about the Machiavellian manipulation of the company wielders. Instead, it is an essay on the willingness of humans to delegate responsibility. "Surely Google is just a search engine, it is not intelligent and it could never take over the world", one might say. But that's exactly the problem. Millions of people in the world are willing to let this stupid thing find answers for them.

Why? Because it worked. The search engine has more information available than any human could possibly access, not to mention remember. It is a huge statistical machine that finds associations between words, concepts, the searching person's preferences, the relationships between people and any other data available, like who the searcher is. Any AI dabbler could tell you that this is the first step towards intelligence, but again, that is not the point. The algorithms employed are starting to fail. The information that has been gathered by Google is being eroded by "Search Engine Optimization" techniques, by time and by people's own internal algorithms, which have started to trust and care about only the first links in a search.

Already there are articles about the validity of the answers given by "Doctor Google", a nickname given to the search engine used in the context of finding medical solutions. The same principle applies to almost everything. The basis of Google's search is that pages that are linked to by other sites and blogs are probably more important or interesting than those that are not. Of course, there is more than that, like when the page was last updated, black and white lists, and stuff like that, but basically old information has a better chance of getting into the first searches. Also information that is on sites that are well done and organized. That raises the question: would a true specialist that spends a large amount of effort and time researching their field of activity have the skill set and be willing to spend the resources to have a professional web site? How about the people that are not specialists? How about people that are actively trying to take advantage of you?

You can easily check this by searching for a restaurant name. Chances are that the site for the restaurant is not even on the first page, which has been usurped by aggregators, review sites and others like that. If a technology has not changed its name but has gone through a large change, chances are that googling for its name will get you reading about it as it was before the change. Search for a book and you will get to Amazon, not a review or (God forbid) a download site. Search for "[anything] download" and you will get to huge ad-ridden sites that have a page for just about every search that contains those words, but, surprise, no download.

Do not think that I am attempting to bash Google. Instead, I am trying to understand why such obvious things are not taken into consideration by the people doing the search. The same thing applies to other sites that have gained our confidence and are now targets for more and more advanced cons. Confidence is a coin, after all, one that gets increasingly important as the distribution monopoly gets out of the hands of huge corporations and dissolves into a myriad of blogs and forum sites. This includes Wikipedia, IMDb, aggregators of all kinds, YouTube, Facebook, Twitter, blogs, etc. I know that we don't really have the time to do in-depth searches for everything, but do you remember the old saying "God is in the details"?

Has Google reached godhood? Is it one we faithfully turn to for our answers? The Church of Google seems to think so. There are articles being written now about Searching without searching, algorithms that would take into consideration who you are when you are searching in order to give you the relevant information. It is a great concept, but doesn't that mean we will trust in a machine's definition of our own identity?

I once needed to find some information about some Java functions. Google either has statistical knowledge that .Net is cooler, or knows that I have searched for .Net related topics in the past, and it swamped me with .Net results, which have pretty similar method names. Imagine you are trying to change your identity, exploring things that are beyond your scope of knowledge. Google would just try to stop you, just like family and friends, who give comfort, but also hold dearly to who you were rather than who you might be or want to become. And it is a global entity, there for you no matter where. You can't just move out!

To sum up, I have (quite recently) discovered that even for trivial searches, paying attention to all the links on the first page AND the second is imperative if I want to get the result I want, not just the one suggested. Seeing a Wikipedia link in the found items doesn't mean I should go there and not look at the others. IMDb is great at storing information about movies, but I can't trust the rating (or the first review on the page). YouTube is phenomenal at hosting video, but if I want something that is fresh and not lawyer approved, I need to go to other sites as well. When having a problem and asking a friend, I appreciate their answer and seek at least a second opinion.

Yesterday I was talking with friends about a weird situation near my home: there are 3 pharmacies in the same building and 3 others very close by. Well, pharmacies and banks have the same business model, if you think about it, so no wonder there. What felt a bit strange is that the building housed the pharmacy from which I have been buying medicine since my youth, and recently it was bought by one of the larger pharmaceutical chains and converted into a flashy, colorful and very expensive venture. The same happened to a company called Plafar, a Romanian company opened in 1999 with state capital, which specialized in natural remedies, infusions and so on. In 2007 it was purchased by a pharmaceutical chain whose origins are lost in the vagaries of the stock exchange. Now everything they sell is terribly expensive as well.

So yes, I think that is strange as seen from the naive view of free market capitalism. You have a competitive segment of the market, providing no-bullshit service at low prices, being bought and replaced by the people that gain at least half of their money by overpricing. It's like a virus (A virus enters a bar. The bartender says "We don't serve viruses in here". The virus replaces the bartender with a copy of its own and says "Now you do") and it spreads especially fast in a low immunity environment like a freshly "liberated" country like Romania.

What is going on here? Well, since we are in the medical/pharmaceutical context, let's address the notion of economic health. At economics.about.com they say "The value of stock market indices seem to be the barometer many use for the health of the economy". Well, that is not what I had in mind. What I think marks health in an economy is how fresh growth is not hindered, but nurtured. Just as in the human body, disease impedes growth and disables functioning mechanisms that are vital to life. Are other countries healthy? No! They are in the same kind of crap, only there it is harder to suffocate others.

Is there a solution? I don't know. But a full ecosystem is needed to promote health. When only predators remain, the land dies.

My friend Meaflux mentioned a strange concept called polyphasic sleep that would supposedly allow me to spend less time sleeping, thus maximizing my waking time. I usually love sleep: I can sleep half a day if you let me and I am very cranky when forcefully woken up... as in every day when going to work, doh! Also, I enjoy dreaming and even nightmares. Sure, I get scared and lose rest and there are probably underlying reasons for the horrors I experience at night sometimes, but they are cool! Better than any Hollywood horror, that's for sure. My brain's got budget :)

Anyway, as I get older I understand more and more the value of time, so a method that would give me an extra 2 to 6 hours a day sounds magical and makes me reminisce about the good times of my childhood, when I had time for anything! Just that instead of skipping school I would skip sleep. But does it work?

A quick Google shows some very favourable articles, including one called How to Hack your Brain and the one on Wikipedia, which is ridiculously short and undocumented. A further search reveals some strong criticism as well, such as this very long and seemingly documented article called Polyphasic Sleep: Facts and Myths. Then again, there are people that criticise the critic like in An attack on polyphasic sleep. Perhaps the most interesting information comes from blog comments from people who have tried it and either failed miserably or are extremely happy with it. Some warn about the importance of the sleep cycles that the polyphasic sleep skips over, like this Stage 4 Sleep Deprivation article.

Given the strongly conflicting evidence, my only option is to try it out, see what I get. At least if I suddenly stop writing the blog you know you should not try it and lives will be saved :) Ok, so let's summarise what this all is about, just in case you ignored all the links above.

Most people are monophasic sleepers, a fancy name for people who sleep once a day for about 8 hours (more or less, depending on how draconian your work schedule and responsibilities are). Many are biphasic, meaning they also sleep a little during the afternoon. This is apparently highly appreciated by "creative people", which I think means people who are self employed and doing well, so they can afford the nap. I know that at least many retired people, and probably children, have a biphasic sleep cycle. Research shows that people normally feel the need to sleep most at around 2:00 and 14:00, which accounts for the sleepiness we feel after lunch. The midday sleep is also called a siesta.

Now, polyphasic sleep means you reduce your sleep (which in the fancy terminology is called core sleep) and then compensate with short sleep bursts of around 20 minutes, at as fixed intervals as possible, called naps. This supposedly "fixes" your brain with REM sleep, which is the first stage in the sleep cycle, although that is a contested theory. The only sure thing seems to come from an Italian researcher called Claudio Stampi, who did a lot of actual research and who clearly stated that sleeping many short naps is better than sleeping only once for the same number of hours. So in other words six 20 minute naps are better than one 3 hour sleep.

Personally, I believe there is some truth to the method, as many people are actually using it, but with some caveats. Extreme versions like the Uberman (six naps a day, resulting in 2 hours of actual sleep) probably take their toll physiologically, even if they might work for the mental fitness. Also, probably some people are better suited than others for this type of customised sleep cycles. And, of course, it is difficult for a working man to actually find the place and time to nap during the afternoon, although I hear that it has become a fashion of sorts in some major world cities to go to Power nap places and sleep for 20 minutes in special chairs. No wonder New Yorkers are neurotic :) On a more serious yet paranoid note: what if this works and then employers make it mandatory? :-SS

So, in the interest of science, I will attempt this for a while, see if it works. My plan is to sleep 5 hours for the core, preferably from 1:00 to 6:00, then have two naps, one when I get back from work (haven't decided if before or after dinner, as there are people recommending not napping an hour after eating) and another close to 8:30 when I go to work. So far I have been doing it for three days, but it seems all this needs at least a few weeks of adjustment.

Now, with 5 hours and 40 minutes of sleep instead of 7 I only gain 1.33 hours a day, but that means an extra TV show, programming a small utility, reading a lot and maybe even writing... so wish me luck!

Update: I did try it, but I didn't get the support I needed from the wife, so I had to give it up. My experience was that, if you find the way to fall asleep in about 5 minutes, the method works. I didn't feel sleepy, quite the contrary, I felt energized, although that may be from the feeling of accomplishment that the thing actually works :) Besides, I only employed the method during the work week and slept as much as I needed in the weekend. I actually saved about 40 hours a month, which I could use for anything I wanted. If one works during that time, it means an increase in revenue to up to 25%. That's pretty neat.


Today I've released version 1.2 of the HotBabe.NET application. It is a program that stays in the traybar, showing a transparent picture, originally of a woman, sitting on top of your other applications. Clicks go through the image, and the opacity of the picture is set so that it doesn't interfere with using the computer. When the CPU use, the memory use or other custom measurements change, the image changes as well. The original would show a girl getting naked as the CPU use went up. Since I couldn't choose the images it would use, I made my own program in .NET. This blog post is about what I have learned about Windows Forms while creating this application.

Step 1: Making the form transparent



Making a Windows Form transparent is not as simple as setting the background transparent. It needs to have:
  • FormBorderStyle = FormBorderStyle.None
  • AllowTransparency = true
  • TransparencyKey = BackColor
However, when changing the Opacity of the form, I noticed that the background color would start showing! The solution for this is to set the BackColor to Color.White, as White is not affected by opacity when set as TransparencyKey, for some reason.
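Put together, the setup might look like this (a minimal sketch of the settings above; MainForm stands in for the actual form class and the 0.5 opacity is just an example value):

public MainForm()
{
    InitializeComponent();
    FormBorderStyle = FormBorderStyle.None; // no border or title bar
    BackColor = Color.White;                // White survives Opacity changes
    TransparencyKey = BackColor;            // the background color becomes invisible
    AllowTransparency = true;
    Opacity = 0.5;                          // can be adjusted at runtime
}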

Step 2: Making the form stay on top all other windows



That is relatively easy. Set TopMost = true. You have to set it to true the first time, during load, otherwise you won't be able to set it later on. I don't know why, it just happened.

Update: I noticed that, even when TopMost was set, the image would vanish beneath other windows. I've doubled the property by also setting WS_EX_TopMost on the CreateParams ExStyle (see step 4).

Step 3: Show the application icon in the traybar and hide the taskbar



Hiding the taskbar is as easy as ShowInTaskbar = false and putting a notification icon in the traybar is simple as well:
_icon = new NotifyIcon(new Container())
{
    Visible = true
};
Set a ContextMenu on _icon and you have a tray icon with a menu. There is a catch, though: a NotifyIcon control needs an Icon, an image of a certain format. My solution, instead of bundling an icon especially for this, was to convert the main girl image into an icon in code.
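The conversion code is not included here, but the usual trick, likely what is meant, is Bitmap.GetHicon (a hedged sketch; girlImage is a placeholder for the displayed picture):

// convert the displayed Bitmap into an Icon for the NotifyIcon;
// GetHicon allocates a handle that should eventually be released
// with the user32.dll DestroyIcon function
using (Bitmap bitmap = new Bitmap(girlImage))
{
    _icon.Icon = Icon.FromHandle(bitmap.GetHicon());
}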

Step 4: Hide the application from Alt-Tab, make it not get focus and make it so that the mouse clicks through



In order to do that, something must be done at the PInvoke level, in other words, using unsafe system libraries. At first I found out that I need to change a flag value which can be read and written to using GetWindowLong and SetWindowLong from user32.dll. I needed to set the window style with the following attributes:
  • WS_EX_Layered (Windows 2000/XP+ layered window)
  • WS_EX_Transparent (allows the window to be transparent to the mouse)
  • WS_EX_ToolWindow (declares the window as a tool window, so it does not appear in the Alt-Tab application list)
  • WS_EX_NoActivate (Windows 2000/XP: a top-level window created with this style does not become the foreground window when the user clicks it)

Then I found out that Form has a protected virtual property called CreateParams, giving me access to the style flag value. Here is the complete code:
protected override CreateParams CreateParams
{
    get
    {
        CreateParams ws = base.CreateParams;

        if (ClickThrough)
        {
            ws.ExStyle |= UnsafeNativeMethods.WS_EX_Layered;
            ws.ExStyle |= UnsafeNativeMethods.WS_EX_Transparent;
        }
        // do not show in Alt-Tab
        ws.ExStyle |= UnsafeNativeMethods.WS_EX_ToolWindow;
        // do not make this the foreground window
        ws.ExStyle |= UnsafeNativeMethods.WS_EX_NoActivate;
        return ws;
    }
}


However, the problem was that changing ClickThrough didn't seem to do anything; the style was set once and that was it. I noticed that changing Opacity would also apply the click-through style, so I opened System.Windows.Forms.dll in Reflector and looked at the source of Opacity. It used a method called UpdateStyles ("This method calls the CreateParams method to get the styles to apply"), so I used it too.
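The UnsafeNativeMethods class in the snippet is my own container for these flags; the values below are the documented Win32 ones, from winuser.h:

internal static class UnsafeNativeMethods
{
    // extended window style flags
    public const int WS_EX_Transparent = 0x00000020; // mouse clicks fall through
    public const int WS_EX_ToolWindow  = 0x00000080; // hidden from Alt-Tab
    public const int WS_EX_Layered     = 0x00080000; // layered window (2000/XP+)
    public const int WS_EX_NoActivate  = 0x08000000; // never becomes foreground
}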

Update: Apparently, the no activate behaviour can also be set by overriding ShowWithoutActivation and returning true. I've set it, too, just to be sure.

Step 5: Now that the form is transparent and has no border or control box, I can't move the window around. I need to make it draggable from anywhere



There is no escape from native methods this time:
private void mainMouseDown(object sender, MouseEventArgs e)
{
    // Draggable from anywhere
    if (e.Button == MouseButtons.Left)
    {
        UnsafeNativeMethods.ReleaseCapture();
        UnsafeNativeMethods.SendMessage(Handle,
            UnsafeNativeMethods.WM_NCLBUTTONDOWN,
            UnsafeNativeMethods.HT_CAPTION, 0);
    }
}
Both ReleaseCapture and SendMessage are user32.dll functions. What this mouse down event handler does is tell the window that, no matter where it was clicked, the click actually landed on the draggable (caption) area.
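For completeness, the constants and imports used above would be declared something like this, in the same UnsafeNativeMethods class (standard user32.dll signatures; requires using System.Runtime.InteropServices):

public const int WM_NCLBUTTONDOWN = 0x00A1; // non-client left mouse button down
public const int HT_CAPTION = 0x0002;       // "the click hit the title bar"

[DllImport("user32.dll")]
public static extern bool ReleaseCapture();

[DllImport("user32.dll")]
public static extern IntPtr SendMessage(IntPtr hWnd, int msg, int wParam, int lParam);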

Step 6: Remove flicker



Well, I am getting a bit ahead of myself here; the flickering becomes annoying only when I implement the blending of one image into another, but since it is also a style setting, I am putting it here:
SetStyle(ControlStyles.AllPaintingInWmPaint
    | ControlStyles.UserPaint
    | ControlStyles.OptimizedDoubleBuffer, true);
This piece of code, placed in the Form constructor, tells the form to use a double buffer for drawing and to not clear the form before drawing something else.

Update: It seems the same thing can be achieved by setting the Control property DoubleBuffered to true, as it sets ControlStyles.OptimizedDoubleBuffer | ControlStyles.AllPaintingInWmPaint, while ControlStyles.UserPaint seems to be set by default.

Step 7: Blend the images one into the other



Well, in order to make an image blend nicely into the next, I used a Timer: 10 times a second I would decrease the opacity of the first, increase the opacity of the second and draw them one over the other.

A small detour: if you think about it, this is not absolutely correct. A 70% opacity pixel blocks 70% of the light and lets 30% of the image behind show through. If the image underneath has 30% opacity, then only 30% of that 30% shows, and the combination never gets opaque. But if I just set the opacity of the background image to 100%, it shows really strongly in the parts of the images that do not overlap, where image1 is transparent and image2 is not.

Unfortunately there is no resource-friendly way to read/write pixels. It's either GetPixel/SetPixel on a Bitmap class (which are very slow) or using PInvoke again. I preferred the opacity hack, which looks OK.
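For reference, the way to draw an image at a given opacity in GDI+ without touching individual pixels is a ColorMatrix that scales the alpha channel; this is a sketch of the idea, not necessarily the exact code in HotBabe.NET:

// draws an image onto g at the given opacity (0.0 to 1.0)
private static void DrawImageWithOpacity(Graphics g, Image image,
    Rectangle destination, float opacity)
{
    ColorMatrix matrix = new ColorMatrix { Matrix33 = opacity }; // alpha scaling
    using (ImageAttributes attributes = new ImageAttributes())
    {
        attributes.SetColorMatrix(matrix,
            ColorMatrixFlag.Default, ColorAdjustType.Bitmap);
        g.DrawImage(image, destination, 0, 0, image.Width, image.Height,
            GraphicsUnit.Pixel, attributes);
    }
}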

I was already using an extension method to invoke any change on the form on its own thread (the Timer ran on its own thread, and otherwise I would have gotten the "Cross-thread operation not valid: Control [...] accessed from a thread other than the thread it was created on" exception or "Invoke or BeginInvoke cannot be called on a control until the window handle has been created"):
public static void SafeInvoke(this Control control, Action action)
{
    if (control.IsDisposed)
    {
        return;
    }
    if (!control.IsHandleCreated)
    {
        // no handle yet, so Invoke/BeginInvoke are not possible; try directly
        try
        {
            action();
        }
        catch (InvalidOperationException)
        {
            // the control was not ready for the change; ignore it
        }
        return;
    }
    if (control.InvokeRequired)
    {
        control.BeginInvoke(action);
    }
    else
    {
        action();
    }
}

This is where I got the "Object is currently in use elsewhere" InvalidOperationException. Apparently the Image class is not thread-safe, so both the timer and the form were trying to access it. I tried locking the setter and getter of the Image property on the class responsible for the image blending effect, with no real effect. Strangely enough, the only solution was to Clone the image when moving it around. I am still looking for a solution that makes sense!

Step 8: Showing the window while dragging it



Using WS_EX_NOACTIVATE was great, but there is a minor inconvenience when trying to move the form around. Not only is the image not shown while moving the form (my computer is set to not show window contents while dragging), but the rectangular hint that normally shows is not displayed either. The only way to know where your image ended up was to release the mouse button.

It appears that fixing this is not as easy as it seems. One needs to override WndProc and handle the WM_MOVING message. While handling it, a manual redraw of the form must be initiated via the user32.dll SetWindowPos method.

The nice part is that in this method you can actually specify how you want the form drawn. I have chosen SWP_NoActivate, SWP_ShowWindow and SWP_NoSendChanging as flags, where NoActivate is similar to the ExStyle flag, ShowWindow shows the entire form (not only the rectangle hint) and NoSendChanging seems to improve the movement smoothness.
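Roughly, the fix looks like this (a sketch; the constants are the documented Win32 values, but the exact redraw logic in the released source may differ):

private const int WM_MOVING = 0x0216;
private const uint SWP_NoActivate = 0x0010;
private const uint SWP_ShowWindow = 0x0040;
private const uint SWP_NoSendChanging = 0x0400;

[DllImport("user32.dll")]
private static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
    int x, int y, int cx, int cy, uint flags);

protected override void WndProc(ref Message m)
{
    base.WndProc(ref m);
    if (m.Msg == WM_MOVING)
    {
        // force the form to repaint at its current position while dragging
        SetWindowPos(Handle, IntPtr.Zero, Left, Top, Width, Height,
            SWP_NoActivate | SWP_ShowWindow | SWP_NoSendChanging);
    }
}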

Quirkily enough, if I start the application without Click through set, then the rectangle hint DOES appear while dragging the window. And with my fix, both the image and the rectangle are shown, but not at the same time. It is a funny effect I don't know how to fix, but I thought it was strange enough not to bother me: the rectangle is trying to keep up with the hot babe and never catches up :)

Step 9: Dragging custom images



I am a programmer, which means I am likely to add too many features to my creations and never make any money out of them. That's why I've decided to add a new feature to HotBabe.NET: dropping your own images on it, to be displayed over your applications.

At first I solved this via ExStyle, with a flag that tells Windows the form accepts files dragged over it, and a WndProc override handling the WM_DROPFILES message to do the rest. But then I learned that Windows Forms has its own mechanism for handling file drops.

Here are the steps: first set AllowDrop to true, then handle the DragEnter and DragDrop events. In my implementation I check that only one file is being dropped and that the file itself can be converted into an image BEFORE I display the mouse cursor hint telling the user a drop is allowed. That pretty much makes the ugly MessageBox I was showing in the previous implementation unnecessary.
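In code, the mechanism might look like this (a sketch; TryLoadImage and CurrentImage are hypothetical stand-ins for the validation and display logic described above):

AllowDrop = true;
DragEnter += (sender, e) =>
{
    // allow the drop only for a single file that is a loadable image
    string[] files = e.Data.GetData(DataFormats.FileDrop) as string[];
    bool valid = files != null && files.Length == 1 && TryLoadImage(files[0]) != null;
    e.Effect = valid ? DragDropEffects.Copy : DragDropEffects.None;
};
DragDrop += (sender, e) =>
{
    string[] files = (string[])e.Data.GetData(DataFormats.FileDrop);
    CurrentImage = TryLoadImage(files[0]);
};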

Step 10: Reading files from web, FTP, network, zip files, everywhere, with a single API



Reading and writing files is easy when working with local files using the System.IO classes, but when you want to expand to other sources, like web images or files bundled in archives, you get stuck. Luckily there is a general API for reading files using URI syntax: the System.Net.WebRequest class. Here is a sample that reads any file that can be represented as a URI:
WebRequest req = WebRequest.Create(uriString);
using (var resp = req.GetResponse())
{
    using (var stream = resp.GetResponseStream())
    {
        // do something with stream
    }
}


WebRequest also allows registering your own classes for specific schemas, other than http, ftp, file, etc. I created my own handler for "zip:" URIs and now I can use the same code to read local files, web resources or zipped files.

One point that needs clarification: at first I wanted a URI of the format "zip://archive/file" and I got stuck when the host part of the URI, namely the archive name, would not accept spaces in it. Instead, use the format "schema:///whatever" (three slashes). This is a valid URI in any situation, as the host is empty and does not need to be validated in any way.
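The registration mechanism itself is small (a sketch; ZipWebRequest is my placeholder for the actual WebRequest subclass that opens the archive and returns the inner file as a stream):

// the factory WebRequest will call for "zip:" URIs
public class ZipWebRequestCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri)
    {
        return new ZipWebRequest(uri); // not shown here
    }
}

// registered once, at application start
WebRequest.RegisterPrefix("zip", new ZipWebRequestCreator());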

The rest are details and you can download the source of HotBabe.NET and see the exact implementation. Please let me know what you think and of any improvement you can imagine.

This will be a long article, one that I intend to add on while understanding more about the MVVM pattern and patterns in general. Also, while I am sure I will add some code during the development of the post, this is intended mostly as a theoretical understanding of the said pattern.

For an easy and short explanation of the concepts, read this article: WPF Apps With The Model-View-ViewModel Design Pattern.

The start of every concept research these days seems to be Wikipedia. The wiki article about Model View ViewModel says that MVVM is a specialization of the PresentationModel design pattern introduced by Martin Fowler, specific to the Windows Presentation Foundation (WPF) and largely based on the Model-View-Controller (MVC) pattern.
Further reading from Martin Fowler's site on the MVC pattern, which seems to stand at the core of all this specialization, revealed this: Probably the widest quoted pattern in UI development is Model View Controller (MVC) - it's also the most misquoted. [...] In MVC, the domain element is referred to as the model. Model objects are completely ignorant of the UI. [...] The presentation part of MVC is made of the two remaining elements: view and controller. The controller's job is to take the user's input and figure out what to do with it. There is a lot more there, about how in the early MVC concept there were no events or binding of any sort. Instead the controller would get the input from the UI, update the model, then the View would change as the model changes using some sort of Observer pattern. Even from these few quotes, one can see that in this "holy trinity" there are actually two basic actors: the Model (which, to make it easier to differentiate later on from other models, I will call Data Model) and the Presentation (controller+view in MVC).
Let's see what the PresentationModel pattern is all about: Presentation Model pulls the state and behavior of the view out into a model class that is part of the presentation. The Presentation Model coordinates with the domain layer and provides an interface to the view that minimizes decision making in the view. The view either stores all its state in the Presentation Model or synchronizes its state with Presentation Model frequently. As I see it, it introduces a new model, one specific to the Presentation side but independent of the UI controls. Martin Fowler specifically says about the PresentationModel pattern: Probably the most annoying part of Presentation Model is the synchronization between Presentation Model and view. It's simple code to write, but I always like to minimize this kind of boring repetitive code. Ideally some kind of framework could handle this, which I'm hoping will happen some day with technologies like .NET's data binding. and Presentation Model allows you to write logic that is completely independent of the views used for display. You also do not need to rely on the view to store state. The downside is that you need a synchronization mechanism between the presentation model and the view. This synchronization can be very simple, but it is required.

I also find this article about different Model View Presenter patterns very informative, and the diagrams easier to understand than Fowler's UML or whatever that horrible diagramming he uses is :)

This brings us to MVVM. It is basically the PresentationModel pattern, where WPF/Silverlight types of complex binding take care of the synchronization of View and ViewModel. For me, one of the most important aspects of this approach is that the complex interactions between UI components (and that don't involve the data in the DataModel) can be left in the View and completely ignored further down. That makes interchanging Views something very easy to do, as the entire "UI logic" can be separated from the more general presentation logic. In this, I see that the UI becomes a third layer by the introduction of the ViewModel/PresentationModel in between the Data Model and the Presentation.
I have imagined doing this in a Web or strictly Windows Forms environment. As Fowler said, the plumbing required for synchronization between the view and the ViewModel makes it not worth the effort there. That is where WPF data binding comes in.

Let's start the MVVM chapter with a simple example. There is a need to search people by using different filters, display the list of found people and give the ability to click a person and see the details in a separate detail pane. The filters can be simple (Google like textbox) or complex (specific role, age, etc searches). The complex filters of the search are hidden in a separate panel that can be shown or not.
An ASP.Net or Windows Forms application would probably create a form containing the searchbox, the additional filters in a panel with a checkbox or button to show/hide it, the details panel with textual information and a grid where the list of people would be displayed. Events would provide all the needed plumbing, with the code executed on them placed in the code behind of the form, changing what was needed. See, the code behind was already an attempt to separate the presentation from code, although the separation was mostly symbolic. One might have employed a flavour of the MVC pattern, creating a separate controller class that would have worked with the data model and the form (as a view) through interfaces. That means a lot of plumbing, anyway.
In WPF, one creates the form, as in the Windows Forms approach above, but then binds no events (or very few; I will talk about that later). Instead, it uses data binding to link UI components to properties it expects to find on the object provided to the view as a DataContext, that is, the ViewModel. It doesn't know the format of this object and, indeed, the properties are found using reflection, which makes this slightly slower than other methods.
What this means is that any code that reacts to a change of a UI component would be placed in an event handler of the property to which it is bound. When the property changes, stuff happens, not when someone clicks a checkbox. This makes the architecture a lot more testable from code, as all a test needs to do is change a property, not perform a click. It also means that a lot of extra plumbing must be done on those properties; for example the ViewModels could implement INotifyPropertyChanged and then notify on any property being changed. Also, lists must not only inform on the get/set operations on them, but also on their items, which implies using ObservableCollection, ObservableDictionary, BindingList and other objects that observe their items and notify on change. On the Views, Dependency and Attached properties come into play, and I will link to some explanatory posts later on. They are extremely important in WPF, because they compute the value, rather than store it, but that's another story altogether. A taste of the ViewModel plumbing is sketched below.
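To make that plumbing concrete, a bindable ViewModel property typically looks like this (a minimal sketch; the names are invented):

public class PersonSearchViewModel : INotifyPropertyChanged
{
    private string _filterText;

    public string FilterText
    {
        get { return _filterText; }
        set
        {
            if (_filterText == value)
            {
                return;
            }
            _filterText = value;
            // any binding on FilterText refreshes; code can also react here
            OnPropertyChanged("FilterText");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}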
What this also means is that events, the way they are used in Windows Forms scenarios, are almost a hindrance. Events cannot be bound. If they are handled in bits of code that change properties in the ViewModel, then the code must either have a reference to a specific type of ViewModel, which defeats the whole purpose of MVVM, or read/write properties using reflection, which means extra plumbing in the View code. Not that this cannot be done, and there are several solutions to it. However, it would be ugly to write a view completely in XAML, binding everything you need to properties found on the ViewModel, then to start writing code just for a few events. Here is where commands come in.
The Command pattern is a Gang of Four pattern, useful in WPF because it provides objects that can be bound and that encapsulate a behaviour to be executed. Read more about Commanding on MSDN. Many WPF controls expose events as well as commands for common actions; for example the Button class exposes the Click event, but also the Command property (a command that will be executed on click) and the IsPressed property (which is set while the button is pressed).
Commands in WPF implement the ICommand interface, which exposes the Execute method as well as the CanExecute method. A default WPF button that has a command bound to its Command member will appear as disabled if the CanExecute method returns false; that is because the ButtonBase class implements ICommandSource. More about commands when I present the RelayCommand class, which has become quite commonplace in the MVVM world.
A problem is that not all controls have a command for every conceivable event. A solution is, of course, to inherit from the control and create your own command for a specific event: handle the event internally, expose a property that implements ICommand and execute that command inside the event handler. This has the advantage that the control can be reused with minimal changes in the XAML. There are other solutions, one of them being Attached Properties. If you don't want an attached property for every event that you use, read this article. A very comprehensive article about applying Commanding in WPF can be found here: WPF Command-Pattern Applied.

So far so good. Using the concepts above we can separate the UI from the data completely, as the View only uses binding to the ViewModel and can be replaced with any other View that binds to the existing properties. This pattern can be used at any level, be it the window or the user control level. Controls that are strictly UI, of course, don't need to implement MVVM. There are other aspects that were not covered here, more specific to WPF, like Routed Commands and Events and concepts like global messaging. But since they are not really part of the MVVM idea, I will leave them for other posts.
There is also the question of code. I will not be writing any in this post for now. However, I will end with a few links that seem relevant.

Extra links:
Adventures in MVVM -- Ball of Mud vs MVVM
Hands-On Model-View-ViewModel (MVVM) for Silverlight and WPF
Exploring a Model-View-ViewModel Application; WPF Password Manager, Cipher Text

Another important thing to consider is the myriad of MVVM frameworks out there, all of them implementing some helper classes and prewiring of applications. I was talking earlier about the RelayCommand. Imagine you want to create a ViewModel that exposes a Command. That command would need to implement ICommand, therefore being an object with two methods: one that executes the command and one that determines whether it can execute. Creating a class for each such command would be tedious. The RelayCommand is a generic class of T (where T is the type of the command parameter) with a constructor that accepts an Action of T and a Func of T. You instantiate it with the methods in your class that are to be used, and that is it.
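Here is a minimal sketch of such a RelayCommand, matching the description above (common implementations also delegate CanExecuteChanged to CommandManager.RequerySuggested, as done here):

public class RelayCommand<T> : ICommand
{
    private readonly Action<T> _execute;
    private readonly Func<T, bool> _canExecute;

    // pass null for canExecute if the command can always run
    public RelayCommand(Action<T> execute, Func<T, bool> canExecute)
    {
        if (execute == null)
        {
            throw new ArgumentNullException("execute");
        }
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute((T)parameter);
    }

    public void Execute(object parameter)
    {
        _execute((T)parameter);
    }

    // WPF re-queries CanExecute whenever it suspects the UI state has changed
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

Usage would be something like new RelayCommand&lt;string&gt;(s => Search(s), s => !string.IsNullOrEmpty(s)), with Search being some hypothetical ViewModel method.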

I will update this material with more information as it becomes available and if I have enough time for it.

Attached properties allow you to add new properties and functionality without changing one bit of the code of the affected classes. Attached properties are quite similar to Dependency properties, but they don't need an actual property in the affected object. You have probably worked with one when setting the Grid.Column property of controls inside a WPF Grid.

How does one implement it? Well, any class can have the static declaration of an attached property for any other class. There are decorative attributes that indicate to which specific classes the property should appear in the Visual Studio Properties window. The caveat here is that if the namespace of the class has not been loaded by VS, the property will not appear, so it is better to place the class containing the property in the same namespace as the classes it attaches to.

Well, enough with the theory. Here is an example:

public static readonly DependencyProperty SizeModeProperty
    = DependencyProperty.RegisterAttached(
        "SizeMode",
        typeof(ControlSize), typeof(MyEditor),
        new FrameworkPropertyMetadata(
            ControlSize.Custom,
            FrameworkPropertyMetadataOptions.OverridesInheritanceBehavior,
            sizeModeChanged)
    );

[AttachedPropertyBrowsableForType(typeof(TextBox))]
public static ControlSize GetSizeMode(DependencyObject element)
{
    if (element == null)
    {
        throw new ArgumentNullException("element");
    }
    return (ControlSize)element.GetValue(SizeModeProperty);
}

[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public static void SetSizeMode(DependencyObject element, ControlSize value)
{
    if (element == null)
    {
        throw new ArgumentNullException("element");
    }
    element.SetValue(SizeModeProperty, value);
}

In this piece of code I have just defined a SizeMode property for a class called MyEditor, with the default value ControlSize.Custom. To use it, I would write in the XAML something like MyEditor.SizeMode="Large" and it would attach to any DependencyObject. The FrameworkPropertyMetadataOptions flags are important; I will review them later on. This also declares a sizeModeChanged method that will be executed when SizeMode changes.

The GetSizeMode and SetSizeMode methods are needed for the attached property to work. You might also notice this line: [AttachedPropertyBrowsableForType(typeof (TextBox))], decorating the getter, which tells Visual Studio to display SizeMode in the properties window of TextBox objects. Another possible attribute is [AttachedPropertyBrowsableForChildren(IncludeDescendants = true)] which tells Visual Studio to display the property for all the children of the control as well.

Now, how can this be useful? There are more ways it can.
One of them is to bind stuff to the property in Triggers or Templates like this: Binding="{Binding Path=(Controls:MyEditor.SizeMode), RelativeSource={RelativeSource Self}}". This is interesting because one can use in the visual UI properties that are not part of the actual code or ViewModel.
Another solution is to use the change method, but be careful: the method must consider all possible uses of the property, and it will not run when you explicitly set the default value (as the value doesn't actually change)! Let me detail with a piece of code:

private static void sizeModeChanged(DependencyObject d,
    DependencyPropertyChangedEventArgs e)
{
    FrameworkElement elem = d as FrameworkElement;
    if (elem == null)
    {
        throw new ArgumentException(
            "Size mode only works on FrameworkElement objects");
    }
    switch ((ControlSize)e.NewValue)
    {
        case ControlSize.Small:
            elem.Width = 110;
            break;
        case ControlSize.Medium:
            elem.Width = 200;
            break;
        case ControlSize.Large:
            elem.Width = 290;
            break;
        case ControlSize.Custom:
            break;
        default:
            throw new ArgumentOutOfRangeException("e",
                "ControlSize not supported");
    }
}


Here I am setting the Width of a control (provided it is a FrameworkElement) based on the change in SizeMode.

Ok, that is almost it. I wanted to shed some extra light on the FrameworkPropertyMetadataOptions flags. One that is very important is Inherits. If set, the property will apply to all the children of the control on which the property is set. In the example above I first set FrameworkPropertyMetadataOptions.Inherits as a flag and I got an error, because it would try to set the Width on child controls that were not FrameworkElements, like Border.

Another interesting page closely related to this is I’ve created an Attached Property, now how do I use it?, where an Attached property is used in a Behavior, which is actually implemented by the Blend team and, as such, is still in the Expression assembly. Here are two other pages about this:
Using a Behavior to magnify your WPF applications
The Attached Behavior pattern.

My world view is limited by the data that comes to me. I have my tiny slice of reality, a few friends, my work, then there are movies, news, documentaries, the Internet and so on. You will notice that I placed them in a certain order: it is the order that, to me, seems to go from more bullshit and less information to more information. I never believed the expression "Truth is stranger than fiction", so let's set the slice of reality aside. And, being first on my list... full of bullshit :)

Movies teach me a lot, but there is just as much untruth and deceit in them as there is stuff worth knowing. The news is focused on a part of life that normally doesn't interest me, but it still has a higher percentage of useful information. Then there are the documentaries, stuff from Discovery Channel and the like. Well, I have mixed feelings about those. There are things that they teach me and they do it in a pleasant manner, yet by the time they end I feel like there is so much more that I wanted to know and that it all just stopped when it got interesting. On further analysis, it seems the quantum of information in an hour of film was something I could blog in two or three paragraphs.

And then there is the Internet. It is bursting with information, if only I knew where to look, and only if I had the discipline of researching, summarising and storing that information. I am working on that; even this blog is used to store what I find, but I am still only an amateur. There is something that attracted me a while ago, something called Open Courseware. There were courses from the largest universities, freely available on the net. However, they left me feeling disappointed, as they were mostly text; the few that were in media format were mostly audio and, in the end, they were only poor recordings of classroom courses, sounds of scribbling on the blackboard included.

Enter The Teaching Company, a company that produces recordings of lectures by nationally top-ranked university professors as well as high-school teachers. The lectures are well done: they feature some guy or gal presenting the information without having to write stuff on blackboards. If anything needs to be shown, it will be a computer slide or animation, while details of the spoken information are added to the screen (for example, the names of people). Wonderful stuff, only it is not free.

If you go to the official site you will find courses on just about anything, priced at around $35 per download and $70 per DVD when they are "on sale", with the rest going for about $250, and a range of 20-40 lectures per course. Of course, there is the option of looking for "TTC torrent" on Google and seeing what you find there. For the people in Africa that just got an Internet cable installed, I mean.

I had the luck to start with linguistics (Understanding Linguistics: The Science of Language by John McWhorter), lucky not because linguistics is so interesting, but because John McWhorter was really charismatic and had a very well constructed set of lectures. And because linguistics is an interesting topic, at least at the introductory level of this course. It was funny, too; the guy is what I imagine a typical New Yorker to be: he is black with a Scottish name, he talks a lot about Broadway plays and old movies, he is socially astute; very cosmopolitan.

Then I went for astronomy (New Frontiers: Modern Perspectives on Our Solar System by Frank Summers). If you like those National Geographic documentaries about the solar system, you will love this. Towards the end it got detailed in a bad way, but only compared with the beginning of the course, which was really well done. The lectures are about the Solar System, from the standpoint of a modern astronomer, in light of all the recent discoveries. There is also a very well made point about why the structure of the solar system was revised and Pluto got demoted. At the end it talks about other star systems and the methods used to detect and study them.

Not all the courses are so good, though. I had the misfortune of trying out Superstring Theory: The DNA of Reality by Sylvester James Gates, Jr. The guy is a black man in his late fifties who tries to explain superstring theory without using any mathematics. He starts by repeating a lot of what he said in previous lectures and, indeed, earlier in the same one, then goes on to ask these stupid questions that repeat what he said yet again. Something like "As I said in a previous lecture, this and this and this happened. But why did this and this and this happen?". Ugh. If it were only that, I would have finished watching the course, but it was completely unstructured, boring and dragging. After 12 lectures out of 24 I knew nothing about string theory, except vague things like "if I imagine a ball that goes towards another ball and they shout at each other and the waves make other balls while the previous balls disappear but wait they appear again...". What I did know was that I had to stop watching. Sorry, Mr. Gates, lecturing... just not your thing. Stick to short appearances on Nova PBS shows.

Right now I am on Building Great Sentences: Exploring the Writer's Craft by Brooks Landon. It talks about constructing good sentences in order to improve one's writing. I have the feeling that the guy uses more detail than necessary: when he explains a concept, he has to give at least 5 examples where 2 or 3 would have been enough. But then again, maybe I am wrong. I will have to finish the course to give you a definite opinion.

Next on my list:
Quantum Mechanics: The Physics of the Microscopic World by Benjamin Schumacher
Understanding Genetics: DNA, Genes, and Their Real-World Applications by David Sadava
Introduction to Number Theory by Edward B. Burger
Understanding the Brain by Jeanette Norden

Does all this make me a very smart person? Not really. Remember that most of these are introductory courses. They do not contain exercises or books that you need to read, nor do they require a very high level of previous knowledge in order to understand them. They are, pure and simple, like those Discovery Channel shows, only they don't end when they get interesting and they are not so full of bullshit. After watching one of these courses (or, indeed, listening to them as podcasts while you are going to work) you will have an idea on where to go digging deeper for the topics that interest you.

Good learning!

We all know that dogs are smart. They understand verbal commands and can make complex decisions in new situations. However, they can't speak. Well, there are some weird, rare cases of dogs sort of snarling "mama", but it's not real speech.

Right now, though, I've had an epiphany: dogs WON'T SPEAK, because they simply are not equipped to. They are smart enough to try and learn from their failures. However, the dog that lives next to my office now howls to the same notes as the ambulances that pass by the building. There are also numerous cases of dogs howling in tune with a song they hear.

Now this is my idea: what if dogs are capable of speech, just not human speech? What if a properly constructed, highly vocal and high pitched language would work for dogs? We could not speak dog then, but we are smart, we have devices and computers and stuff like that.

Update: Having thought a bit more about this, I have come to a conclusion. It makes sense that dogs should be able to communicate by howling. Duh! They are descended from wolves. They are still, genetically speaking, wolves. What about the barking? Wolves bark when they are pups. Somehow, the domestication process makes canids retain some youthful characteristics. Therefore, it only makes sense that they should be capable of communicating at a higher level through howling rather than barking. Although, dogs being smart as they are, it is only a hypothesis that needs proof.

My friend Meaflux, by his own description "an anthropology buff", reminded me of the other "smart animals", the Cetacea order, whales and dolphins and such. They sing, they use high pitched wails (whails? :D) to communicate. I agree, it makes sense underwater, but since they are descended from a wolf-like ancestor and since fish don't use this communication system, I would say there is a strong connection.

So, in conclusion, is it possible that the pack communication method of wolf howling, combined with the millennia old interaction between dogs and humans, could result, with some training, in some sort of meaningful conversation skills? If only people working with dogs and apes would read my blog...

What has gone into Siderite and made him rave mad? Is he high? Everybody knows that software patterns are all the rage and the only perfect and delicious way to make software. You can't just go "cowboy style" on software, it's an industry after all.

Well, I am not saying that (although you can probably guess, from my impression of my virtual/inner critic, that I am a bit partial to the cowboy approach). All I am saying is that once you identify a pattern (and yes, to open another parenthesis, a pattern is identified, not learnt), one should never stoop so low as to apply it manually. Some software should do that for him!

One good example is the Iterator pattern. It sounds so grand, but the software implementation of it is the foreach command. Does anyone actually think, while iterating through a collection, that they are using a pattern? As I said before, patterns are identified. You think of what you have been doing, see a pattern, make some software take care of similar situations, then get on to identifying another pattern. See the snippet below for just how invisible the pattern has become.
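A minimal illustration (names is just some hypothetical list of strings): the language has absorbed the pattern so completely that the first form is all anybody writes, while the second is roughly what the compiler generates.

List<string> names = new List<string> { "Ana", "Bob" };

// the Iterator pattern, hidden inside a keyword:
foreach (string name in names)
{
    Console.WriteLine(name);
}

// roughly what the compiler expands it to:
using (IEnumerator<string> enumerator = names.GetEnumerator())
{
    while (enumerator.MoveNext())
    {
        Console.WriteLine(enumerator.Current);
    }
}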

Well, yes, but you can't entrust everything to a software, Siderite! You will bloat your code, create tools that will do less than you wanted and still end up doing your own efficient code. I know, I've seen it before!

Well, thank you, critic! You have just identified a pattern! And any pattern should be solved. And yes, I agree that software can't do everything for you (yet!) and that sometimes the tools that are designed to help us with a problem become a problem themselves. But instead of having "two problems" you have a bad solution to a previous problem. Fixing the solution would fix everything and the problem domain is now one level of abstraction higher.

Stuff like managed code, Linq, TDD, ORMs, log4net... just about every new technology I can think of: they are all solutions to patterns, stuff that introduces new problems on a higher level. What C# programmer cares about pointers anymore? (Developers should still be aware of the true nature of pointers, just care less about them.)

There is one final issue though, the one about the actual detection of patterns. Using "prediscovered" patterns like from the classic Gang of Four book or anything from Martin Fowler is ok, but only if they actually apply to your situation. That in itself shows you have to have a clear image of your activity and to be able to at least recognize patterns when you see them. Sometimes you do work that is so diverse or so slow that you don't remember enough of what you did in order to see there is a repetitive pattern. Or, worse, you do so much work that you don't have time to actually think about it, which I think is the death of every software developer. Well, what then?

Obviously a log (be it a web one or just a simple notebook or computer tracking system) would help. Writing stuff down makes one remember it better. Feeling the need to write about something and then remembering that you have already done so is a clear sign of a pattern. Now it is up to you to find a solution.

Back to the actual title of the post: I recognize there are situations where no automated piece of code can do anything. The problem is just too human or too complex. That doesn't mean you shouldn't solve it, just not with a computer tool. Maybe it is something you need to remember as a good practice, or maybe you need to employ skills that are not technical in nature. But should you find a solution, think about it and keep thinking about it: can it be automated? How about now? Now? Now?

After all, the Romans said errare humanum est, sed perseverare diabolicum. The agile bunch named it DRY. It's the same thing: stop wasting time!

I don't pretend to know much about mathematics, but that should make it really easy to follow this article, because if I understood it, then so should you. I was watching this four episode show called Story of Maths. Its first episode was pretty nice and I started watching the second. The guy presented what he called the Chinese Remainder Theorem, something that was created and solved centuries before Europeans even knew what math was. It's a modular arithmetic problem. Anyway, here is the problem:

A woman selling eggs at the market has a number of eggs, but doesn't know exactly how many. All she knows is that if she arranges the eggs in rows of 3 eggs, she is left with one egg on the last row, if she uses rows of 5, she is left with 2 eggs, while if she uses rows of 7, 3 eggs are left on the last row. What is the (minimum) number of eggs that she can have?
You might want to try to solve it yourself before reading the following.

Here is how you solve it:

Let's call the number of eggs X. We know that X ≡ 1 (mod 3), X ≡ 2 (mod 5) and X ≡ 3 (mod 7). That means that there are three integer numbers a, b and c so that X = 3a+1 = 5b+2 = 7c+3.

3a = 5b+1, from the first two equalities.
We switch to modular notation again: 3a ≡ 1 (mod 5). Now we need to know what a is modulo 5, and we find it by looking at a division table or by finding the smallest a that satisfies the equation 3a = 5b+1, which is 2: 3*2 = 5*1+1.

So 3a ≡ 1 (mod 5) => a ≡ 2 (mod 5).

Therefore there is an integer number m so that a = 5m+2 and 3a+1 = 7c+3. We do a substitution and we get 15m+7 = 7c+3.

In modular terms that means 15m+7 ≡ 3 (mod 7), or (7*2)m+7+m ≡ 3 (mod 7). So m ≡ 3 (mod 7), which means there is an integer n that satisfies m = 7n+3. Therefore X = 15m+7 = 15(7n+3)+7 = 105n+52.

And that gives us the solution: X ≡ 52 (mod 105). The smallest number of eggs the woman could have had is 52. I have to wonder how the Chinese actually performed this calculation.

Let me summarize:
X ≡ 1 (mod 3), X ≡ 2 (mod 5), X ≡ 3 (mod 7) =>
X = 3a+1 = 5b+2 = 7c+3 =>
3a ≡ 1 (mod 5) =>
a ≡ 2 (mod 5) =>
a = 5m+2 =>
X = 15m+7 = 7c+3 =>
15m+7 ≡ 3 (mod 7) =>
m ≡ 3 (mod 7) =>
m = 7n+3 =>
X = 15(7n+3)+7 = 105n+52 =>
X ≡ 52 (mod 105).

For me, the hardest part to understand was how 3a ≡ 1 (mod 5) turns into a ≡ 2 (mod 5). But we are in modulo 5 country here, so if 3a equals 1 (mod 5), then it also equals 6 (mod 5) and 11 and 16 and 21 and so on. And if 3a equals 6 (mod 5), then a is 2 (mod 5). If 3a equals 21 (mod 5), then a equals 7 (mod 5), which is 2 (mod 5) all over again.
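For the skeptical, the result is also easy to verify by brute force; this little C# loop (just my verification toy, obviously not how the ancient Chinese did it) finds the same answer:

using System;

class EggCounter
{
    static void Main()
    {
        // find the smallest X with X ≡ 1 (mod 3), X ≡ 2 (mod 5), X ≡ 3 (mod 7);
        // the solution repeats every 3*5*7 = 105 numbers, so 105 tries suffice
        for (int x = 1; x <= 105; x++)
        {
            if (x % 3 == 1 && x % 5 == 2 && x % 7 == 3)
            {
                Console.WriteLine(x); // prints 52
                break;
            }
        }
    }
}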

I have been working on a silly little project that involved importing files and then querying the data in a very specific way. And I wanted to do it with the latest technologies so I used The Entity Framework! (imagine a little glowing halo around that name and a choir in the background).

Well, how do I do an efficient import in Linq to Entities? I can't! At most I can instantiate a lot of classes, add them to the DataModel, then call SaveChanges. In the background this translates to a lot of insert statements. So it occurred to me that I don't really need Entities here. All I needed was good old fashioned ADO.Net and a SqlBulkCopy object. So I used just that. A bit of unfortunate translation of objects into a DataTable, because the SqlBulkCopy class knows how to import only a DataTable, and I was set.
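For reference, the whole import boils down to something like this sketch (the table, the column names and the UserRow class are made up for illustration; the real project's types were different):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// hypothetical DTO standing in for the imported objects
public class UserRow
{
    public string Name { get; set; }
    public int Score { get; set; }
}

public static class BulkImporter
{
    public static void Import(string connectionString, IEnumerable<UserRow> rows)
    {
        // SqlBulkCopy wants tabular input, so the objects are first
        // copied into a DataTable
        var table = new DataTable("Users");
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Score", typeof(int));
        foreach (var row in rows)
            table.Rows.Add(row.Name, row.Score);

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = "Users";
                // one fast bulk operation instead of an INSERT per row
                bulkCopy.WriteToServer(table);
            }
        }
    }
}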

OK, now back to querying the data. I could have used ADO.Net, of course, and in this project I would probably have been right to, but I suspected the requirements would change, so I used Entities. It worked like a charm and yes, the requirements did get bigger and stranger as I went. But then I had to select the users that had amassed a number of rows in two related tables (or use a value in the user table), but only if the total number of amassed rows satisfied a formula based on a string column in the user table that mapped to a certain value stored in the web.config: a complicated query. I did it (with some difficulty) in Linq, then I had to solve all kinds of weird issues, like not being able to compare a class variable with an enum value, because the number of types that can be used in a Linq to Entities query is pretty limited at the moment.
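The enum issue, for instance, had to be worked around by hoisting the value out of the query as a plain int; a sketch of the shape of that workaround, with hypothetical entities (the real model was more complicated):

using System.Linq;

public enum UserStatus { Inactive = 0, Active = 1 }

// hypothetical stand-ins for the generated entity and context
public class User { public int Status { get; set; } }
public class MyEntities { public IQueryable<User> Users { get; set; } }

public static class QueryWorkaround
{
    public static IQueryable<User> ActiveUsers(MyEntities context)
    {
        // Linq to Entities cannot translate the enum member directly,
        // so it is converted to an int variable outside the query
        int active = (int)UserStatus.Active;
        return context.Users.Where(u => u.Status == active);
    }
}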

Well, was it the best way? I don't know. The generated SQL contains a lot of select from select from select, sometimes 6 or 7 levels deep. Even the joins are done as selects from tables. Shouldn't I have used a stored procedure instead?

To top it off, I listened to a podcast today about Object Databases. They do away with ORMs because there is no relational model to begin with. The guy argued that if you need to persist objects, wouldn't an Object Database be more appropriate? And, since reporting would be a bitch when having to query large amounts of tabular data, shouldn't one use a Relational Database for that particular task in the project?

So this is it. It got me thinking. Is the database/data access layer the biggest golden hammer out there? Are we trying to use a single data access model and then build our applications around it like big twisting spaghetti golden nails?

A while ago I wrote a little post about pandemics. I was saying then how little we know about them and how little we are taught about disease outbreaks as opposed to, say, war. This post, however, is about the reverse of the coin: the mediatization of pandemic fears.

I was watching the news and there was this report about a swine flu pandemic in Mexico. Thousands were infected, more than 100 people were dead and the disease had already spread across the entire world, impossible to contain. Gee, serious trouble, yes? I had to stay informed and safe. (See the twisted order in which my brain works?)

So I went directly to the World Health Organization site and subscribed to their disease outbreak RSS feed. And what do I read? 27 cases of infection and 9 dead. Come again? They said 150 dead on the news. The news can't possibly lie! It must be either a) a US site where they only list US citizens or b) a machination so that people don't panic when the situation is so obviously out of control. [... a week passed ...] I watch the news and what do I see? The reported death toll from the swine influenza strain has dropped to about 15 people. False alarm, people, the rest of those 150 actually died of other, unrelated stuff. So the WHO site was right after all, maybe because they work with data, not viewer ratings. Hmm.

The moral of the story? My decision to stop watching TV was a good one. Get to the genuine source of information and "feed" from it. I am now subscribed to the disease outbreak feed and the earthquake feed and I feel quite content in that particular regard.

That doesn't mean the "Swine" flu is something to be taken lightly. As of today, there are almost 1000 cases of infection worldwide and, even if the flu's development has entered a descending curve, this might change. The 1918 epidemic actually had four outbreaks, over two consecutive years, in spring and autumn.

On a more personal note, my wife has (and probably I do, too) something called toxoplasmosis, a disease you catch from cats. I had only heard about it twice before, once from a colleague and once from Trainspotting. It's a strange disease, one that is mostly asymptomatic, has no real cure, causes behavioral changes in mice and has been linked to a certain type of schizophrenia. Wikiing it, I found that about 30% to 65% of the world population has it and that the drug used to treat it is actually a malaria drug. Is toxoplasmosis the malaria of the developed world? A lot of us have it, but we just bear with it?

Stuff like that shows how fragile both our understanding of and our defense against the microscopic world are. Could it be that, with all the medical advances of the last century, we are still in the Dark Ages?

I've been listening to my favourite podcasts, HanselMinutes and .NetRocks, as usual, and I've stumbled upon another gem of a show. It was about Test Driven Development. Why am I talking so much about this, although I don't practice it? Because I am sure I will get around to practicing it. It is not just hype, it is the only way to do software. And I will explain why. But before that, let's talk about a confusion that was cleared up by the show I've been talking about.

The name Test Driven Development is usually associated with automated unit testing. While such testing is mostly used in TDD, it is not required by TDD at all. The badly chosen word "test" actually means "meaningful, measurable goals"; in other words, the specifications! If you have those, you can test your application against the requirements and determine what is wrong, if anything. Without a clear view of the specs, you cannot tell if the project is performing as needed.

So if you think of TDD as Specifications Driven Development, you realize that you have been doing it all along! Admittedly, now it sounds even more like STD, but hey, sacrifices must be made in the name of improving this code blog's readability.

Now, I was saying that this is the only way to do software. Actually, I have explained why just above, but I will get into some personal details. I have been "blessed" with a project where the deadline was set before the specifications were drawn up. Even worse, the specs did not come from people who really understand the business process, but from people using another piece of software that they want replaced. In other words, we're pretty much inventing ways of porting a badly designed Windows desktop app to ASP.Net. As if that wasn't enough, we are also inventing features that were badly described by the client, starting from a partially functional ASP.Net project written by junior programmers.

What a drag! But even that was not as bad as realizing that my developer output was slow, bad and overall smelly and ugly. Why was that? Why couldn't I just stop whining and do what I knew had to be done? Because there were no specs! Without clearly drawn specs of not only what I had to do, but also what the initial project was supposed to do, my hands were tied. I could not refactor the code, because I had no way of telling if I broke anything. Has it ever happened to you to take a piece of code, make it better, then realize it is not working and you don't know why? The fear of that happening is the most important reason why people don't refactor. The next most important factor is a manager who thinks refactoring is just a waste of time and has no vision of the future of the project.

But also, having no vision of what is to be done is the reason why developers are not motivated to do their job. Even the lowliest code monkey has to have a glimpse of the future of what they are doing, otherwise they are literally flying blind. Software development is just as much of an art as web design. It is actually strange that people don't understand there are many types of art, just as there are many types of scientific thought. Even if we don't actually care how the app is gonna look as long as it does the job, we do feel pride in its functionality, and nothing hurts more than not knowing what the software is supposed to do or having no clear way of measuring our own performance.

OK, enough of this. The bottom line is that a project needs to have clear specifications. The first test for a piece of software is the compiler! You can even call it an automated test! ...but the last test is running through the spec list and determining whether it does the job as required. Another podcast said that the process of creating automated tests has, as a side effect, a significant improvement in software quality, not because of the tests themselves, but because of the process of designing them. If your tests are meaningful, then you know what the app is supposed to do, you have a clear vision of what failure and success mean, and in the process of test design you get to ask yourself the questions that lead to understanding the project. THAT is Test Driven Development!
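To make "meaningful tests" concrete, here is what a specification written in executable form might look like; the requirement and the class are invented for illustration, and the style is NUnit:

using NUnit.Framework;

// hypothetical class under test
public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal total)
    {
        // the spec: orders over 100 get a 10% discount
        return total > 100m ? total * 0.9m : total;
    }
}

[TestFixture]
public class DiscountCalculatorSpecs
{
    // each test is a requirement you can read out loud to the client
    [Test]
    public void OrdersOver100GetTenPercentDiscount()
    {
        Assert.AreEqual(180m, new DiscountCalculator().ApplyDiscount(200m));
    }

    [Test]
    public void SmallOrdersPayFullPrice()
    {
        Assert.AreEqual(50m, new DiscountCalculator().ApplyDiscount(50m));
    }
}

Naming the tests and choosing the assertions is exactly the part that forces you to ask the questions that lead to understanding the project.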

At first, I thought it was a coincidence, but it turns out it was all deliberate. This dog would come out from the park and then shit on the sidewalk. And not any kind of poop, but that smelly, sticky, yellow crap that only dogs seem able to manufacture. Then the dog would go back into the park, just far enough so that its presence would not be obvious, and it would watch. People would eventually step in the shit, all reacting to it in different ways. The ears of the dog would jump straight up, its eyes focused on the target, absorbing everything that happened.
He did that for three days in a row, so it couldn't have been a coincidence. However, when faced with the facts, they didn't seem so strange to me. Actually, it makes perfect sense for predators to ambush their prey and learn from its behavior as much as possible. I admit, for a dog it was a pretty sick and smart thing to do, but it was all within reason. After all, I was doing the same thing, staking the man out, watching his every move, learning his habits.

I can remember the days when this filled me with excitement, the thrill of the hunt reverberating from some place deep inside my skull, but now I could only feel calm. Not boredom, though; boredom is dangerous, makes people sloppy, gets them hurt. I was missing a certain element, something that, in the past, made all this fun, but I couldn't quite find it now. That's why this was the last one. And only because it was necessary.

The guy showed up at exactly the time I was expecting him to. Small time thug, acting cool while pretending to be larger than he was capable of ever being. I let him go up to his apartment, slightly amused at the concentrated look-around he pretended to throw before entering the building. If he hadn't been pretending, he would have noticed me watching him from a car for three days in a row. A different car each day, I admit, but who would be dumb enough to do it differently? Maybe the cops... and the guy was looking out for police presence, I guess. Unfortunately for him, I was no policeman.

I gave him 5 minutes to change his mind. Maybe he would feel the wrongness of the situation, maybe luck would be with him and make him leave for some reason. Maybe he would just grab a beer and drop into his big armchair, watching mind numbing TV shows, as I knew he would. Yet I always like to allow for the unknown, for the unexplainable; it makes the whole thing seem more real. Although, when you do this for a long time, you see all kinds of shit and know almost every way people react when stepping in it. I got out of my car.


I smashed the door lock with my foot and entered the room. Dirty, sloppy, a typical bachelor pad with a twinge of gangsta. What a dump! My guy froze for a second in his armchair, then, to my surprise, moved really fast and produced a revolver from underneath the small table next to him. He pointed it at me and shouted something along the lines of inquiring about my identity. Of course, a lot of "fuck" and "mudafucka" was involved, although that sounded a bit off coming from an oversized white guy.

I froze for a moment, too. A gun, who would have thought? I closed the door behind me, then turned to him and started telling him what had to be said. I did have three days to think it over in my head, after all.

"There is a saying", I calmly conversed, completely ignoring the vulgar threats coming from my target, " that every boy kills his father to become a man. Of course, it's a metaphor most of the time".

I waited for a reaction, watching how the feeling of control provided by the gun was slightly fading away. He decided to enforce it by standing up and aiming the gun at me from somewhere above his head, throwing profanities at me while doing so. Doesn't he know he can hurt his wrist by firing that way? Not to mention having almost no accuracy whatsoever.

"In other words, no man is complete without killing his father first, metaphorically speaking of course". It was almost hilarious; for a second, the guy thought I was talking about him and me. I could see on his face how he considered being my father or vice-versa. Well, at least not all of this will have been devoid of fun.

"Shut da fuck up, mudafucka! Who da fuck are you anyway? You escaped from some loony ... mental... "

I ignored him "However, it was you that actually killed my father. I therefore seem to be entitled to feel... incomplete.".
I could see the reaction right away. Killing one's father was a serious thing even for a brain dead thug. He knew he was in danger now; maybe he even felt guilty, even if he had no idea who my father was. He did take a step back and aim the gun at me with both hands. Now, that was better. I could tell he was considering squeezing the trigger right then, but people are always too curious for their own good. He had to know how it would play out.

"I have decided to pursue my quest for completeness by killing you.", I then added, gently pushing him over the edge with a hard look. He fired.


There is something slightly poetic in having your own gun explode in your hands, killing you instantly with a stray piece of metal through the eye and into the brain. Of course, the cement in the barrel and the deliberate weakening of the metal of the revolver's hammer would be obvious during the police investigation, but it would also be clear that the gun was unregistered and that the victim pulled the trigger voluntarily.

I didn't quite feel complete, though. He hadn't killed my father, either, so I guess that was to be expected, but I was sure he had killed someone's parent at some point in time, so I had expected a bit more gratification.

After all, it was a good call to stop doing this. No fun at all. Well, maybe not a complete stop, more like a sabbatical, to clear one's thoughts. I may never do it again.