I had an idea a few days ago, an idea that seemed so great and inevitable that I thought about patenting it. You know how it goes: you have a spark of inspiration, you tell no one about it or maybe a few friends, and a few years later you see someone making loads of money with it. I thought I could at least "subscribe" to the idea somehow, make it partly my own. And so I asked a patent specialist about it.

He basically said two things. First of all, even if it is a novel idea, if it is made of previously existing parts that can obviously be put together, then it doesn't qualify for a patent. If the concept is obvious enough in any way, it doesn't qualify. Say someone wrote a scientific paper about one part of it and you find the rest in some nutjob blog about alien conspiracies: then you can't patent it. The other thing he told me is that a true patent application costs about $44,000 in filing and attorney fees. I don't imagine that's a small sum for someone in the US or in another rich country, but it is almost insane for anyone living anywhere else.

But there is a caveat here. What if the nutjob alien conspiracy blog were this one? What if, by publishing my idea here, no one could ever patent it and the best implementation would be the one that gathers the most support? It's a bit of "no, fuck you!", but still, why the hell not? So here it is:

I imagine that, with the new climate of "do not track" settings and privacy concerns, search engines will have a tougher and tougher time gathering information about your personal preferences. Google will not know what you searched for before and therefore will not be able to show you the things it thinks you are most interested in. And that is a problem, since it probably would have been right and you would have been interested in those things. The user, seeing how the search engine does not find what they are looking for, will not be happy.

My solution, and something that is way simpler than storing cookies and analysing behaviour, is to give the responsibility (back) to the user. They would choose a "search profile" and, based on that, the search engine would filter and prioritize the results in a way specific to that profile. You can customize your profile and maybe save it in a list or you can use a standard one, but the results you get are the ones you intended to get.

A few examples, if you will: the "I want to download free stuff" profile would prioritize blogs and free sites and filter out commercial sites that contain words like "purchase", "buy", "trial", "shareware", etc.; it would remove Amazon and other online shops from the result list and prioritize ThePirateBay, for example. Some of the smarter and more tech savvy Googlers use the "-" operator to remove such words, but they still get the most commercially oriented sites there are. A search profile like this would try to analyse the site, see if it fits the "commercial" category and then filter it out. Now, you might think that sites will adapt and try to trick the engine into thinking they are not commercial in nature. No, they won't, because then the "I want to buy something" profile would not find them. Of course, they will adapt somehow and create two versions of the site, one that seems commercial and one that does not, but the extra effort would eat into their profit margin. Or try a search profile like "long tail", where the stories that get the most coverage and are reproduced on a lot of sites get filtered out, allowing one to access new information as it comes in.
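To make that more concrete, here is a minimal sketch of how such a profile could post-process a result list. The profile names, keyword lists and result fields are all invented for illustration, not part of any existing engine:
// Hypothetical sketch: re-rank and filter search results according to a user-chosen profile.
var profiles = {
    'free-stuff': { prefer: ['download', 'free', 'blog'], avoid: ['purchase', 'buy', 'trial', 'shareware'] },
    'buy-something': { prefer: ['price', 'buy', 'shipping'], avoid: ['torrent', 'crack'] }
};

function applyProfile(results, profileName) {
    var profile = profiles[profileName];
    return results
        .map(function (result) {
            var text = (result.title + ' ' + result.snippet).toLowerCase();
            var score = 0;
            profile.prefer.forEach(function (word) { if (text.indexOf(word) >= 0) score++; });
            profile.avoid.forEach(function (word) { if (text.indexOf(word) >= 0) score--; });
            return { result: result, score: score };
        })
        .filter(function (scored) { return scored.score >= 0; }) // drop results that lean towards the "avoid" list
        .sort(function (a, b) { return b.score - a.score; })     // push preferred results to the top
        .map(function (scored) { return scored.result; });
}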

Bottom line is, I need such a service, but at the moment I am unwilling to invest in making one. First of all it would be a waste of time if it didn't work. Second of all it would get stolen and copied immediately by people with more money than me if it did work. Guess what? It's in my free blog. If anyone does it, they can't patent it, they can only use it because it is a good idea and they should make it really nice and usable before other people make it better.

To my shame, I've lived a long time with the impression that for an HTML element, a style attribute that is written inline always beats any of the CSS rules that apply to that element. That is a fallacy. Here is an example:
<style>
    div {
        width: 100px;
        height: 100px;
        background-color: blue !important;
    }
</style>
<div style="background-color:red;"></div>
What do you think the div's color will be? Based on my long-standing illusion, the div should be red, as it has that color defined inline, in the style attribute. But that is wrong: the !important keyword forces the CSS rule over the inline styling. The square will actually be blue! And it is not some new implementation branch for non-Internet Explorer browsers, either. It works consistently in all browsers.

Now, you might think that this is just a piece of information you absorb, but that it doesn't really matter. Well, it does. Let me enhance that example and change the width of the div, using the same !important trick:
<style>
    div {
        width: 100px !important;
        height: 100px;
        background-color: blue !important;
    }
</style>
<div style="background-color:red;"></div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script>
    $(function() {
        // $('div').width(200); // the width remains 100!
        // $('div').width('200px'); // the width remains 100!
        // $('div')[0].style.width='200px'; // the width remains 100!

        // $('div').width('200px !important'); // not a valid value for the width parameter! in IE it will even show an error
        // $('div').css('width','200px !important'); // not a valid value for the width parameter! in IE it will even show an error
        // $('div')[0].style.width='200px !important'; // not a valid value for the width parameter! in IE it will even show an error

        var oldStyle = $('div').attr('style') || '';
        $('div').attr('style', oldStyle + ';width: 200px !important'); // the only approach I found that works!
    });
</script>
As you can see, the mighty jQuery failed, setting the width property in style failed; the only solution was to append a string to the style attribute and override the !important keyword from the CSS with an inline !important keyword!

Update (via Dan Popa): And here is why it happens: CSS Specificity

I've heard about Dilbert as this corporate satire comic that is very funny. I've avoided it as much as possible, probably because I was afraid it would make fun of my way of life and that I would find it makes valid points. Today, I succumbed to my curiosity and clicked a YouTube Dilbert video.



Conclusion: I am Dilbert...

I know of this method from the good old Internet Explorer 6 days. In order to force the browser to redraw an element, solving weird browser refresh issues, change its CSS class. I usually go for element.className+='';. So you see, you don't have to actually change it. Sometimes you need to do this after a bit of code has been executed, so put it all in a setTimeout.

More explicitly, I was trying to solve this weird bug where, using jQuery slideUp/slideDown, I would get some elements in Internet Explorer 8 to disregard some CSS rules. Mainly, the header of a collapsible panel would suddenly and intermittently seem to lose a margin-bottom: 18px !important; rule. In order to fix this, instead of panel.slideUp(); I used
panel.slideUp(400 /* the default duration */, function() {
    setTimeout(function() {
        header.each(function() {
            this.className += '';
        });
    }, 1);
});
where panel is the collapsible part and header is the clickable part. Same for slideDown.

I had to fix this weird bug today, where only in IE9 the entire page would freeze suddenly. The only way to get anything done was to select some text or scroll a scrollable area. I was creating an input text field, then, when pressing enter, I would do something with the value and remove the element.

It appears that in Internet Explorer 9 it is wrong to remove the element that holds the focus. Use window.focus(); before you do that.
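Here is a minimal sketch of the workaround; the element id, the Enter key handling and the doSomethingWith function are my own illustrative assumptions, not the original code:
var input = document.getElementById('temporaryInput'); // the hypothetical text field
input.onkeydown = function (event) {
    event = event || window.event;
    if (event.keyCode === 13) { // Enter
        var value = input.value;
        window.focus();                      // move the focus away from the element first
        input.parentNode.removeChild(input); // now it is safe to remove it, even in IE9
        doSomethingWith(value);              // hypothetical processing of the value
    }
};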

I was having one of those Internet Explorer moments in Javascript, when I wanted to use Array.isArray and I couldn't because it was IE8. So, I thought, I would create my own isArray function and attach it to Array, so that it works cross browser. The issue then was how to detect if an object in Javascript is an Array.

The instanceof operator came to mind immediately. After all, don't you do the same thing in C#, check if an object "is" something? Luckily for me, I checked the Internet and reached the faithful StackOverflow with an answer. The interesting bit was the explanation of why instanceof would not work in all cases: objects that cross frame boundaries have their own version of the class.

Let's say that you have two pages and one is hosting the other in an iframe. Let's call them, innovatively, testParent and testChild. If you create an array instance in testChild like x=new Array(); or x=[];, then the result of x instanceof Array will be true in testChild, but false in testParent. That's because the Array in one page is different from the Array in the other. And, damn it, it makes sense, too. Imagine you did what I did and added a function to the Array class. Would that class be the same as the Array in the iframe, without the function? What if I decide to add Array.prototype.indexOf?
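A minimal sketch of the problem, assuming a parent page containing an iframe with id testChild, whose page exposes a hypothetical makeArray helper that simply returns new Array():
var childWindow = document.getElementById('testChild').contentWindow;
var x = childWindow.makeArray();             // an array created inside the iframe
console.log(x instanceof Array);             // false in testParent: different Array constructor
console.log(x instanceof childWindow.Array); // true: it is an instance of the iframe's own Array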

So, bottom line: in Javascript, instanceof will not work in any meaningful way across frame boundaries.

Oh, and just so you do have a good way to check if an object is an array, do this:
var strArray = Object.prototype.toString.call(new Array()); // '[object Array]'
Array.isArray = function(obj) {
    return Object.prototype.toString.call(obj) === strArray;
};
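And a few quick sanity checks for the shim above; the cross-frame case is the one that instanceof would get wrong:
console.log(Array.isArray([]));              // true
console.log(Array.isArray(new Array()));     // true
console.log(Array.isArray({ length: 0 }));   // false, an array-like object is not an array
console.log(Array.isArray('not an array'));  // false
// arrays created in another frame are still detected correctly, because
// Object.prototype.toString returns '[object Array]' regardless of which frame created them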

I had to do a very simple Microsoft SQL query in which I wanted to update some of the values in a row from a row in the same table. Actually, the query was already there, but was using two local variables to store the information, then make the update. Something like this:
DECLARE @Var1 INT
DECLARE @Var2 INT
SELECT @Var1=Column1,@Var2=Column2 FROM MyTable WHERE ID=1
UPDATE MyTable SET Column1=@Var1,Column2=@Var2 WHERE ID=2
I really hated that I was using two SQL statements and all that declaring to do a simple update, so I looked up the syntax for the UPDATE statement. It said that if I want to update a table from a source I need to use the FROM keyword, like this:
UPDATE MyTable 
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM MyOtherTable AS Alias
WHERE ID=2
AND Alias.ID=1
As you can see, we use an alias to name another table or query; the alias name qualifies all the conditions on that table, while the conditions on the table we update remain unqualified. Easy, no? I even tested it and it worked. So I tried this:
UPDATE MyTable 
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM MyTable AS Alias
WHERE ID=2
AND Alias.ID=1
I used the same table both as the update target and as the alias and it seemed to work. However, the number of updated rows was always 0. It is remarkable how difficult it is to find a straight answer on the net about a simple situation like this.

It turns out that, even with the alias, MSSQL confuses the two references to the same table. The solution is to use a query over your table, rather than the name of the table itself. Here is how you do it:
UPDATE MyTable 
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM (SELECT * FROM MyTable) AS Alias
WHERE ID=2
AND Alias.ID=1


SQL 2005 also introduced Common Table Expressions, which can be used to clarify a query. In this case, using a CTE results in the same execution plan and makes the entire query even more convoluted:
WITH Alias(ID,Column1,Column2)
AS (
    SELECT ID, Column1, Column2 FROM MyTable
)
UPDATE MyTable
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM Alias
WHERE ID=2
AND Alias.ID=1

Even if the documentation says you can specify a CTE without declaring the column names, I couldn't do it in this situation, I don't know why. I admit I only tried the CTE solution for a minute before discarding it as too verbose.

This will be a short blog post that shows my error in understanding what Javascript maps or objects are. I don't mean Google Maps, I mean dynamic objects that have properties that can be accessed via a key, not an index. Let me exemplify:
var obj = {
    property1: "value1",
    property2: 2,
    property3: new Date()
};
obj["property 4"] = "value 4";
obj.property5 = new MyCustomObject();
obj[6] = 'value 6';
console.log(obj.property1);
console.log(obj['property2']);
console.log(obj["property3"]);
console.log(obj['property 4']);
console.log(obj.property5);
console.log(obj[6]);
In this example, obj is an object instance that starts with three properties and has others added later. The declaration notation is JSON-like, and afterwards any value can be assigned to a property via two notations: the '.' (dot) and the square brackets. Note that the values of 'property 4' and of '6' can only be accessed via square brackets; there is no dot notation that can escape the space, and obj.6 is invalid.

Now, the gotcha is that, coming from the C# world, I immediately associated this with the Hashtable class: something that can have any object as key and any object as value. Instead, a Javascript map is more like a Dictionary<string,object>.

Let me show you why that may be confusing. This is perfectly usable:
obj[new Date()]=true;
In this example I've used a Date object as a key. Or have I? In Javascript any object can be turned into a string with the toString() function. In fact, our Javascript map uses a key much like 'Sat Jul 14 2012 00:07:00 GMT+0300 (GTB Daylight Time)'. The translations from one type to another are seamless (and can generate quite a bit of righteous anger, too).

My point is that you can also use something like
obj[new MyObject()]=true;
only to see it blow up in your face. The key will most likely be '[object Object]'. Not at all what was expected.


So remember: Javascript property keys can be any string, no matter how strange, but not other types. obj[6] will return the value you have set in obj[6] because in both cases the 6 is first turned into the string '6' and then used. It has nothing to do with the '6th value' or the '6th property'; that is what arrays are for. The same goes for a Date or some custom object with a toString() function that returns something unique for that object. I wouldn't rely on that, though, as you would probably want to use objects as keys and compare them by reference, not by string value.
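A short demonstration of that string conversion, as a minimal standalone sketch:
var obj = {};
obj[6] = 'six';
console.log(obj['6']);            // 'six': the number 6 was converted to the string '6'
console.log(obj[6] === obj['6']); // true: both refer to the same property
// any other object used as a key goes through toString() first
var key = { toString: function () { return 'custom-key'; } };
obj[key] = true;
console.log(obj['custom-key']);   // true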


Programming Game AI by Example is one of those books that would have changed my life had I read it when I was 15. Mat Buckland takes a really high tech portion of game making and turns it into child's play. With source code!

From the very beginning we are told that AI in games is different from what we would normally associate with Artificial Intelligence. AI in games is the thing that makes game agents look smart, but lets the user enjoy the game the most. In other words, something that seems smart, but is just stupid enough for you to continue playing.

The book comprises ten chapters, heavy with code, but very well structured. The main tool in use is the Finite State Machine, but we first get a mechanics and physics lecture in chapter 1, where we learn what a vector is, how to normalize it and how to use this in the game physics. Moving to chapter 2, we learn what a state machine is, how to optimize memory by making each one a singleton, how to compose them and why more exciting aspects of artificial intelligence, like say neural networks, are not used more in games. We delve further into methods to optimize what we have learned and make it practical: prioritized dithering, partitioning, BSP, quad and oct trees, fuzzy-Q logic, cell space partitioning, all with code examples, in chapter 3. Chapter 5 is reserved for graphs, Dijkstra, A* and such. Chapter 6 goes into integrating Lua into your games, as a good tool to define and tweak the innards of your game before compiling it all for performance into a single code base. Raven, the example game engine, is detailed in chapter 7. Path planning is described in chapter 8, complete with many optimizations and tricks to make the algorithmic movement of units look natural and smart. Chapter 9 is about goal driven agent behaviour, where we learn how to make an agent define goals and act upon them. The composite pattern is suggested as a good solution for goals within goals. We end with a very interesting chapter about fuzzy logic. The basis of this is to fuzzify a situation, infer a behaviour, then defuzzify the result into a usable algorithmic value.

The bottom line is that this is a very easy book to read, explaining matter-of-factly how to create the intelligence in games like Fifa or Counter Strike. The code examples are extensive, but not necessary to understand the gist of things. In the end, it is both a fascinating and intriguing read and a good reference book for when you actually need this stuff.

I end this review with a quote from Dijkstra that was also mentioned in the book: The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim. Very nice book and a recommended read.

Yesterday I wanted to upgrade the NUnit testing framework we use in our project to the latest stable version. We used 2.5.10 and it had reached 2.6.0. I simply removed the old version and replaced it with the new. Some of the tests failed.

Investigating revealed that all the failing tests had something in common: they were testing first that two collections are not Equal (meaning not the same instance), then that the collections are not Equivalent (meaning no item in one collection is found in the other), yet that the values in the items are the same. Practically, it was a test that checked whether a cloning operation was successful. And it failed because, from this version on, the two collections were considered both Equal and Equivalent.

That is at least strange and so I searched the release notes for some information about this and found this passage: EqualConstraint now recognizes and uses IEquatable<T> if it is implemented on either the actual or the expected value. The interface is used in preference to any override of Object.Equals(), so long as the other argument is of Type T. Note that this applies to all equality tests performed by NUnit.

Indeed, checking the failing tests, I realized that the collections contained types implementing IEquatable<T>.

It was not a complete surprise, but I did not expect it, either: the switch statement in Javascript is type exact, meaning that a classic if block like this:
if (x == 1) {
    doSomething();
} else {
    doSomethingElse();
}
is not equivalent to
switch(x) {
    case 1:
        doSomething();
        break;
    default:
        doSomethingElse();
        break;
}
If x is a string with the value '1' the if will do something, while the switch will do something else (pardon the pun). The equivalent if block for the switch statement would be:
if (x === 1) {
    doSomething();
} else {
    doSomethingElse();
}
(Notice the triple equality sign, which is type exact)

Just needed to be said.

I found a bit of code today that tested if a bunch of strings were found in another string. It used IndexOf for each of the strings and continued to search if one was not found. The code, a long list of ifs and elses, looked terrible. So I thought I would refactor it to use regular expressions. I created a big Regex object, using the "|" regular expression operator, and I tested for speed.

(Actually, I took the code, encapsulated it into a method that then went into a new object, then created the automated unit tests for that object and only then did I proceed to write new code. I am very smug because usually I don't do that :) )

After the tests said the new code was good, I created a new test to compare the increase in performance. It is always good to have a metric to justify the work you have been doing. So the old code worked in about 3 seconds. The new code took 10! I was flabbergasted. Not only could I not understand how that could happen, how several scans of the same string could be faster than a single one, but I am the one who wrote the article saying that IndexOf is slower than Regex search (at least it was so in the .Net 2.0 times and I could not replicate the results in .Net 4.0). It was like a slap in the face, really.

I proceeded to change the method, having now a way to determine increases in performance, until I finally figured out what was going on. The original code was first transforming the text into lowercase, then doing IndexOf. It was not even using IndexOf with StringComparison.OrdinalIgnoreCase which was, of course, a "pfff" moment for me. My new method was, of course, using RegexOptions.IgnoreCase. No way this option would slow things down. But it did!

You see, when you have a search for two strings separated by the "|" regular expression operator, a tree of states is created inside. Say you are searching for "abc|abd": it will search once for "a", then once for "b", then check the next character for "c" or "d". If any of these conditions fails, the match fails. However, if you do a case insensitive match, there will be at least two checks per character. Even so, I expected only a doubling of the processing time, not the whopping five times decrease in speed!

So I did the humble thing: I transformed the string into lowercase, then did a normal regex match. And the whole thing went from 10 seconds to under 3. I have yet to understand why this happens, but be careful when using the case insensitive option in regular expressions in .Net.

A short post about an exception I've met today: System.InvalidOperationException: There was an error reflecting 'SomeClassName'. ---> System.InvalidOperationException: SomeStaticClassName cannot be serialized. Static types cannot be used as parameters or return types.

Obviously one cannot serialize a static class, but I wasn't trying to. There was an asmx service method returning an Enum, but the enum was nested in the static class. Something like this:
public static class Common {

    public enum MyEnumeration {
        Item1,
        Item2
    }

}

Therefore, take this as a warning. Even if compilation does not fail when a class is made static, serialization may fail at runtime because of the types nested inside it.

It's a horribly old bug, something that was reported on their page back in 2007 and has been in the HtmlAgilityPack issue list since 2011. You want to parse a string as an HTML document and then get it back as a string from the DOM that the pack is generating, and it closes the form tag as if it had no children.
Example: <form></form> gets transformed into <form/></form>

The problem lies in the HtmlNode class of the HtmlAgilityPack project. It defines the form tag as empty in this line:
ElementsFlags.Add("form", HtmlElementFlag.CanOverlap | HtmlElementFlag.Empty);
One can download the sources and remove the Empty value in order to fix the problem or, if changing the sources of the pack is not an option, use a workaround:
HtmlNode.ElementsFlags["form"]=HtmlElementFlag.CanOverlap;
Be careful, though: the ElementsFlags dictionary is a static property, so this change will apply to the entire application.



I think that The Checklist Manifesto is a book that every technical professional should read. It is simple to read, to the point and extremely useful. I first heard about it in a Scrum training and now, after reading it, I think it was the best thing that came out of it (and it was a pretty awesome training session). What is this book about, then? It is about a surgeon who researches the way a simple checklist can improve the daily routine in a multitude of domains, but mainly, of course, in surgery. And the results are astounding: a twofold reduction in operating room accidents and/or postoperative infections and complications. Atul Gawande does not stop there, though; he uses examples from other fields to bring his point home, focusing a lot on the one that introduced the widespread use of checklists: aviation.

There is a lot to learn from this book. I couldn't help constantly comparing what the author had to say about surgery with the job I am doing, software development, and with the Scrum system we are currently employing. I think that, had he heard of Scrum and the industrial management processes it evolved from, Gawande would surely have talked about it in the book. There is no technical field that could not benefit from this, including things like playing chess or one's daily routine. The main idea of the book is that checklists take care of the simple, dumb things that we have to do, in order to unclutter our brain for the complex and intuitive work. It enables self discipline and allows for unexpected increases in efficiency. I am certainly considering using some of the knowledge I gained in my own life, and not only at the workplace.

What I gleaned from the book, things that I marked as worth remembering:
  • Do not punish mistakes, instead give more chances for experience and learning - this is paramount to any analytical process. The purpose is not to kill the host, but to help it adapt to the disease. Own your mistakes, analyse them, learn from them.
  • Decentralize control - let professionals assume responsibility and handle their own jobs as they know best. Dictating every action from the top puts enormous pressure on a few people who cannot possibly know everything or react quickly enough to the unpredictable
  • Communication is paramount in managing complex and unexpected situations, while things like checklists can take care of simple and necessary things - this is the main idea of the book, enabling creativity and intuition by checking off the routine stuff
  • A process can help by only changing behaviour - Gawande gives an example where soap was freely given to people, together with instructions on how and when to use it. It had significant beneficial effects on people, not because of the soap per se, but because it changed behaviour. They were already buying and using soap, but the routine and discipline of soap use was the most important result
  • Team huddles - like in some American sports, when a team is trying to achieve a result, they need to communicate well. One of the important checks for all the lists in the book was a discussion between all team members describing what they are about to do. Equally important is communicating during the task, but also at the end, where conclusions can be drawn and outcomes discussed
  • Checklists can be bad - a good checklist is precise, to the point, easy to use. A long and verbose list can impede people from their task, rather than help them, while vague items in the lists cause more harm than good
  • A very important part of using a checklist system is to clearly define pause points - they are the moments at which people take the list and check things from it. An undefined or vaguely defined pause point is just as bad as useless checklist items
  • Checklists are of two flavours - READ-DO, like a food recipe, with clear actions that must be performed in order, and DO-CONFIRM, where people stop to see what was accomplished and what is left to do, like a shopping list
  • A good checklist should optimally have between five and nine items - the number of items the human brain can easily remember. This is not a strong rule, but it does help
  • Investigate failures - there is no other way to adapt
  • A checklist gotcha is the translation - people might make an effort to make a checklist do wonders in a certain context, only to find that translating it to other cultures is very difficult and prone to errors. A checklist is itself subject to failure investigation and adaptation
  • Lobbying and greed are hurting us - a particularly emotional bit of the book is a small rant in which the author describes how people would have jumped on a pill or an expensive surgical device that would have brought the same great results as checklists, only to observe that people are less interested in something easy to copy, distribute and that doesn't bring benefits to anyone except the patients. That was a painful lesson
  • The star test pilot is dead - there was a time when crazy brave test pilots would risk their lives to test airplanes. The checklist method has removed the need for unnecessary risks and slowly removed the danger and complexity in the test pilot work, thus destroying the mythos. That also reduced the number of useless deaths significantly.
  • The financial investors that behave most like airline captains are the most successful - they balance their own greed or need for excitement with carefully crafted checklists, enabling their "guts" with the certainty that small details were not missed or ignored for reasons of wishful thinking
  • The Hudson river hero(es) - an interesting point was made when describing the Hudson river airplane crash. Even if the crew worked perfectly with each other, keeping their calm in the face of both engines suddenly stopping, calming and preparing the passengers, carefully checking things off their lists and completing each other's tasks, the media pulled hard to make only the pilot a hero. Surely he denied it every time and said it was a crew effort only out of modesty; clearly he had everything under control. Except that is not what happened, and it also explains why the checklist is so effective and yet so few people actually employ it. We dream of something else
  • We are not built for discipline - that is why discipline is something that enables itself. It takes a little discipline to become more disciplined. A checklist ensures a kind of formal discipline in cases previously analysed by yourself. It assumes control over the emotional need for risk and excitement.
  • Optimize the system, not the parts - it is always the best choice to look at something as a whole and improve it as a whole. The author mentions an experiment of building a car from the best parts, taken from different companies. The result was a junk car that was not very good. The way the parts interact with one another is often more important than individual performance

I am ending this review with the two YouTube videos on how to use and how not to use the WHO Surgical Safety Checklist that Atul Gawande created for surgical teams all around the globe.