I know this method from the good old Internet Explorer 6 days: in order to force the browser to redraw an element, solving weird refresh issues, change its CSS class. I usually go for element.className+='';, so, you see, you don't actually have to change it. Sometimes you need to do this after a bit of code has been executed, so put it all in a setTimeout.

More explicitly, I was trying to solve a weird bug where, using jQuery slideUp/slideDown, some elements in Internet Explorer 8 would disregard some CSS rules. Specifically, the header of a collapsible panel would suddenly and intermittently seem to lose a margin-bottom: 18px !important; rule. In order to fix this, instead of panel.slideUp(); I used
panel.slideUp(400 /*the default value*/, function() {
    setTimeout(function() {
        header.each(function() {
            this.className += '';
        });
    }, 1);
});
where panel is the collapsible part and header is the clickable part. Same for slideDown.
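The same trick can be extracted into a tiny helper; here is a minimal sketch (the function name and element id are my own inventions):
function forceRedraw(element) {
    // reassigning the className makes IE recompute the styles and repaint the element
    element.className += '';
}
setTimeout(function() {
    forceRedraw(document.getElementById('myHeader'));
}, 1);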

I had to fix a weird bug today where, only in IE9, the entire page would suddenly freeze. The only way to get anything done was to select some text or scroll a scrollable area. I was creating an input text field, then, when Enter was pressed, I would do something with the value and remove the element.

It appears that in Internet Explorer 9 it is a mistake to remove the element that holds the focus. Call window.focus(); before you do that.
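A minimal sketch of the kind of fix involved, assuming input is the text field in question and doSomethingWith is a made-up handler for the value:
input.onkeydown = function(e) {
    e = e || window.event;
    if (e.keyCode === 13) { // Enter
        var value = this.value;
        window.focus(); // move the focus away before removing the element
        this.parentNode.removeChild(this);
        doSomethingWith(value);
    }
};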

I was having one of those Internet Explorer moments in Javascript: I wanted to use Array.isArray and I couldn't, because it was IE8. So, I thought, I would create my own isArray function and attach it to Array, so that it works cross-browser. The issue then became how to detect whether an object in Javascript is an Array.

The instanceof operator came to mind immediately. After all, don't you do the same thing in C#, checking if an object "is" something? Luckily for me, I checked the Internet and reached the faithful StackOverflow with an answer. The interesting bit was the explanation of why instanceof does not work in all cases: objects that cross frame boundaries have their own version of each class.

Let's say that you have two pages, one hosting the other in an iframe. Let's call them, innovatively, testParent and testChild. If you create an array instance in testChild, like x=new Array(); or x=[];, then the result of x instanceof Array will be true in testChild, but false in testParent. That's because the Array in one page is different from the Array in the other. And, damn it, it makes sense, too. Imagine you did what I did and added a function to the Array class. Would that class be the same as the Array in the iframe, without the function? What if I decided to add Array.prototype.indexOf?

So, bottom line: in Javascript, instanceof will not work in any meaningful way across frame boundaries.
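To make it concrete, here is a minimal sketch, assuming testChild is loaded in an iframe with id 'child' and declares x=[] in its own script:
// running in testParent:
var childWindow = document.getElementById('child').contentWindow;
console.log(childWindow.x instanceof Array); // false: checks against testParent's Array
console.log(childWindow.x instanceof childWindow.Array); // true: testChild's own Array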

Oh, and just so you do have a good way to check if an object is an array, do this:
var strArray = Object.prototype.toString.call([]); // '[object Array]'
Array.isArray = Array.isArray || function(obj) {
    // note: toString must be invoked with .call so that obj becomes its 'this'
    return Object.prototype.toString.call(obj) === strArray;
};
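Usage is then the same in every browser:
console.log(Array.isArray([1, 2, 3])); // true
console.log(Array.isArray('1,2,3')); // false
console.log(Array.isArray({length: 3})); // false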

I had to write a very simple Microsoft SQL query in which I wanted to update some of the values in a row from another row in the same table. Actually, the query was already there, but it was using two local variables to store the information before making the update. Something like this:
DECLARE @Var1 INT
DECLARE @Var2 INT
SELECT @Var1=Column1,@Var2=Column2 FROM MyTable WHERE ID=1
UPDATE MyTable SET Column1=@Var1,Column2=@Var2 WHERE ID=2
I really hated that I was using two SQL statements and all that declaring just to do a simple update, so I looked up the syntax of the UPDATE statement. It said that if I want to update a table from a source I need to use the FROM keyword, like this:
UPDATE MyTable 
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM MyOtherTable AS Alias
WHERE ID=2
AND Alias.ID=1
As you can see, we use an alias to name another table or query, then use the alias name for all the conditions on that table and no prefix for the conditions on the table we update. Easy, no? I even tested it and it worked. So I tried this:
UPDATE MyTable 
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM MyTable AS Alias
WHERE ID=2
AND Alias.ID=1
I used the same table as both the update target and the alias and it seemed to work. However, the number of updated rows was always 0. It is remarkable how difficult it is to find a straight answer on the net about a simple situation like this.

It turns out that, even with the alias, MS SQL conflates the two references: as far as I can tell, when the updated table appears only once in the FROM clause, SQL Server matches the update target to that single aliased reference, so ID=2 and Alias.ID=1 end up filtering the same row and no row can satisfy both. The solution is to use a query on your table, rather than the name of the table itself. Here is how you do it:
UPDATE MyTable 
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM (SELECT * FROM MyTable) AS Alias
WHERE ID=2
AND Alias.ID=1


SQL 2005 also introduced Common Table Expressions, which can be used to clarify a query. In this case, using a CTE results in the same execution plan and makes the entire query even more convoluted:
WITH Alias(Column1,Column2)
AS (
SELECT Column1, Column2 FROM MyTable
)
UPDATE MyTable
SET Column1=Alias.Column1,Column2=Alias.Column2
FROM Alias
WHERE ID=2
AND Alias.ID=1

Even if the documentation says you can specify a CTE without declaring the column names, I couldn't get it to work in this situation; I don't know why. I admit I only tried the CTE solution for a minute before discarding it as too verbose.

This will be a short blog post that shows my error in understanding what Javascript maps, or objects, are. I don't mean Google Maps, I mean dynamic objects whose properties can be accessed via a key, not an index. Let me exemplify:
var obj = {
    property1: "value1",
    property2: 2,
    property3: new Date()
};
obj["property 4"] = "value 4";
obj.property5 = new MyCustomObject();
obj[6] = 'value 6';
console.log(obj.property1);
console.log(obj['property2']);
console.log(obj["property3"]);
console.log(obj['property 4']);
console.log(obj.property5);
console.log(obj[6]);
In this example, obj is an instance of object that starts with three properties, and others are added afterwards. First, the declaration uses a JSON-like notation, then any object can be assigned to a property via two notations: the '.' (dot) and the square brackets. Note that the values of 'property 4' and of '6' can only be accessed via square brackets; there is no dot notation to escape that space, and obj.6 is invalid.

Now, the gotcha is that, coming from the C# world, I immediately associated this with the Hashtable class: something that can have any object as key and any object as value. Instead, a map is more like a Dictionary<string,object>.

Let me show you why that may be confusing. This is perfectly usable:
obj[new Date()]=true;
In this example I've used a Date object as a key. Or have I? In Javascript any object can be turned into a string with the toString() function. In fact, our Javascript map uses a key much like 'Sat Jul 14 2012 00:07:00 GMT+0300 (GTB Daylight Time)'. The translations from one type to another are seamless (and can generate quite a bit of righteous anger, too).

My point is that you can also use something like
obj[new MyObject()]=true;
only to see it blow up in your face. The key will most likely be '[object Object]'. Not at all what was expected.


So remember: Javascript property names can be any string, no matter how strange, but not other types. obj[6] will return the value you set with obj[6] because in both cases that 6 is first turned into the string '6' and then used. It has nothing to do with the '6th value' or '6th property'; those are for arrays. The same goes for a Date or some custom object with a toString() function that returns something unique for that object. I wouldn't use that, though, as you would probably want to use objects as keys and compare them by reference, not by string value.
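Here is the kind of surprise this can produce: two different objects used as keys collide, because they stringify to the same thing:
var key1 = { id: 1 };
var key2 = { id: 2 };
var obj = {};
obj[key1] = 'first';
obj[key2] = 'second'; // both keys become the string '[object Object]'
console.log(obj[key1]); // 'second' - the first value was silently overwritten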


Programming Game AI by Example is one of those books that would have changed my life had I read it when I was 15. Mat Buckland takes a really high-tech portion of game making and turns it into child's play. With source code!

From the very beginning we are told that AI in games is different from what we would normally associate with Artificial Intelligence. AI in games is the thing that makes game agents look smart, while letting the user enjoy the game the most. In other words, something that seems smart, but is just stupid enough for you to continue playing.

The book is comprised of ten chapters, heavy with code, but very well structured. The main tool in use is the Finite State Machine, but first we get a mechanics and physics lecture in chapter 1, where we learn what a vector is, how to normalize it and how to use it in game physics. Moving to chapter 2, we learn what a state machine is, how to optimize memory by making each one a singleton, how to compose them, and why more exciting aspects of artificial intelligence, like, say, neural networks, are not used more in games. We delve further into methods to optimize what we have learned and make it practical: prioritized dithering, partitioning, BSP, quad and oct trees, fuzzy-Q logic, cell space partitioning, all with code examples, in chapter 3. Chapter 5 is reserved for graphs, Dijkstra, A* and such. Chapter 6 goes into integrating Lua into your games, as a good tool to define and tweak the innards of your game before compiling it all into a single code base for performance. Raven, the example game engine, is detailed in chapter 7. Path planning is described in chapter 8, complete with many optimizations and tricks to make the algorithmic movement of units look natural and smart. Chapter 9 is about goal-driven agent behaviour, where we learn how to make an agent define goals and act upon them. The composite pattern is suggested as a good solution for goals within goals. We end with a very interesting chapter about fuzzy logic. The basis of this is to fuzzify a situation, infer a behaviour, then defuzzify into a usable algorithmic value.

The bottom line is that this is a very easy book to read, explaining matter-of-factly how to create the intelligence in games like FIFA or Counter-Strike. The code examples are extensive, but not necessary for understanding the gist of things. In the end, it is both a fascinating and intriguing read, as well as a good reference book for when you actually need this stuff.

I end this review with a quote from Dijkstra that was also mentioned in the book: The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim. Very nice book and a recommended read.

Yesterday I wanted to upgrade the NUnit testing framework we use in our project to the latest stable version. We used 2.5.10 and it had reached 2.6.0. I simply removed the old version and replaced it with the new. Some of the tests failed.

Investigating revealed that all the failing tests had something in common: they were testing first that two collections are not equal (meaning not the same instance), then that the collections are not equivalent (meaning they do not contain the same items), yet that the values in the items are the same. In practice it was a test that checked whether a cloning operation was successful. And it failed because, from this version on, the two collections were considered both Equal and Equivalent.

That was at least strange, so I searched the release notes for some information about it and found this passage: EqualConstraint now recognizes and uses IEquatable<T> if it is implemented on either the actual or the expected value. The interface is used in preference to any override of Object.Equals(), so long as the other argument is of Type T. Note that this applies to all equality tests performed by NUnit.

Indeed, checking the failing tests I realized that the collections contained IEquatable types.

It was not a complete surprise, but I did not expect it either: the switch statement in Javascript is type-exact, meaning that a classic if block like this:
if (x == 1) {
    doSomething();
} else {
    doSomethingElse();
}
is not equivalent to
switch (x) {
    case 1:
        doSomething();
        break;
    default:
        doSomethingElse();
        break;
}
If x is a string with the value '1', the if will do something, while the switch will do something else (pardon the pun). The equivalent if block for the switch statement would be:
if (x === 1) {
    doSomething();
} else {
    doSomethingElse();
}
(Notice the triple equality sign, which compares both type and value.)
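If what you actually want is the loose behaviour of ==, one option (my own workaround, not a language feature) is to normalize the value before switching on it:
switch (String(x)) { // both 1 and '1' become the string '1'
    case '1':
        doSomething();
        break;
    default:
        doSomethingElse();
        break;
}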

Just needed to be said.

I found a bit of code today that tested if a bunch of strings were found in another one. It used IndexOf for each of the strings and continued searching if not found. The code, a long list of ifs and elses, looked terrible, so I thought I would refactor it to use regular expressions. I created a big Regex object, using the "|" regular expression operator, and I tested for speed.

(Actually, I took the code, encapsulated it into a method that then went into a new object, then created automated unit tests for that object, and only then proceeded to write new code. I am very smug because usually I don't do that :) )

After the tests said the new code was good, I created a new test to compare the increase in performance. It is always good to have a metric to justify the work you have been doing. So: the old code ran in about 3 seconds. The new code took 10! I was flabbergasted. Not only could I not understand how several scans of the same string could be faster than a single one, but I am the one who wrote the article saying that IndexOf is slower than a Regex search (at least it was so in the .Net 2.0 times; I could not replicate the results in .Net 4.0). It was like a slap in the face, really.

I proceeded to change the method, now having a way to measure increases in performance, until I finally figured out what was going on. The original code was first transforming the text into lowercase, then doing IndexOf. It was not even using IndexOf with StringComparison.OrdinalIgnoreCase, which was, of course, a "pfff" moment for me. My new method was, of course, using RegexOptions.IgnoreCase. No way this option would slow things down. But it did!

You see, when you search for two strings separated by the "|" regular expression operator, a tree of states is created internally. Say you are searching for "abc|abd": it will search once for "a", then once for "b", then check the next character for "c" or "d". If any of these conditions fails, the match fails. However, if you do a case-insensitive match, there will be at least two comparisons per character. Even so, I expected only a doubling of the processing time, not the whopping five-fold decrease in speed!

So I did the humble thing: I transformed the string into lowercase, then did a normal regex match. And the whole thing went from 10 seconds to under 3. I have yet to understand why this happens, but be careful when using the case-insensitive option in regular expressions in .Net.

A short post about an exception I've met today: System.InvalidOperationException: There was an error reflecting 'SomeClassName'. ---> System.InvalidOperationException: SomeStaticClassName cannot be serialized. Static types cannot be used as parameters or return types.

Obviously one cannot serialize a static class, but I wasn't trying to. There was an asmx service method returning an Enum, but the enum was nested in the static class. Something like this:
public static class Common {

    public enum MyEnumeration {
        Item1,
        Item2
    }

}

Therefore, take this as a warning: even if compilation does not fail when a class is made static, serialization may fail at runtime because of the types nested inside it. The fix, of course, is to move the enum outside the static class.

It's a horribly old bug, something that was reported on their page back in 2007 and has been in the HtmlAgilityPack issue list since 2011. You want to parse a string as an HTML document and then get it back as a string from the DOM that the pack is generating, and it closes the form tag as if it had no children.
Example: <form></form> gets transformed into <form/></form>

The problem lies in the HtmlNode class of the HtmlAgilityPack project. It defines the form tag as empty in this line:
ElementsFlags.Add("form", HtmlElementFlag.CanOverlap | HtmlElementFlag.Empty);
One can download the sources and remove the Empty value in order to fix the problem or, if you do not want to change the sources of the pack, you have the option of using a workaround:
HtmlNode.ElementsFlags["form"]=HtmlElementFlag.CanOverlap;
Be careful, though: the ElementsFlags dictionary is a static property. This change will be applied across the entire application.



I think The Checklist Manifesto is a book that every technical professional should read. It is simple to read, to the point and extremely useful. I first heard about it in a Scrum training and now, after reading it, I think it was the best thing that came out of that training (and it was a pretty awesome training session). What is this book about, then? It is about a surgeon who researches the way a simple checklist can improve the daily routine in a multitude of domains, but mainly, of course, in surgery. And the results are astounding: a two-fold reduction in operating room accidents and/or post-operative infections and complications. Atul Gawande does not stop there, though; he uses examples from other fields to bring his point home, focusing a lot on the one that introduced the widespread use of checklists: aviation.

There is a lot to learn from this book. I couldn't help constantly comparing what the author had to say about surgery with the job I am doing, software development, and with the Scrum system we are currently employing. I think that, had he heard of Scrum and the industrial management processes it evolved from, Gawande would surely have talked about it in the book. There is no technical field that could not benefit from this, including things like playing chess or one's daily routine. The main idea of the book is that checklists take care of the simple, dumb things that we have to do, in order to unclutter our brain for the complex and intuitive work. It enables self-discipline and allows for unexpected increases in efficiency. I am certainly considering using some of the knowledge I gained in my own life, and not only at the workplace.

What I could skim from the book, things that I marked as worthy to remember:
  • Do not punish mistakes, instead give more chances to experience and learning - this is paramount to any analytical process. The purpose is not to kill the host, but to help it adapt to the disease. Own your mistakes, analyse them, learn from them.
  • Decentralize control - let professionals assume responsibility and handle their own jobs as they know best. Dictating every action from the top puts enormous pressure on few people that cannot possibly know everything and react with enough speed to the unpredictable
  • Communication is paramount in managing complex and unexpected situations, while things like checklists can take care of simple and necessary things - this is the main idea of the book, enabling creativity and intuition by checking off the routine stuff
  • A process can help simply by changing behaviour - Gawande gives an example where soap was given to people for free, together with instructions on how and when to use it. It had significant beneficial effects, not because of the soap per se, but because it changed behaviour. People were already buying and using soap, but the routine and discipline of soap use was the most important result
  • Team huddles - like in some American sports, when a team is trying to achieve a result, they need to communicate well. One of the important checks for all the lists in the book was a discussion between all team members describing what they are about to do. Equally important is communicating during the task, but also at the end, where conclusions can be drawn and outcomes discussed
  • Checklists can be bad - a good checklist is precise, to the point, easy to use. A long and verbose list can impede people from their task, rather than help them, while vague items in the lists cause more harm than good
  • A very important part of using a checklist system is to clearly define pause points - they are the moments at which people take the list and check things from it. An undefined or vaguely defined pause point is just as bad as useless checklist items
  • Checklists are of two flavours - READ-DO, like a food recipe, with clear actions that must be performed in order, and DO-CONFIRM, where people stop to see what was accomplished and what is left to do, like a shopping list
  • A good checklist should optimally have between five and nine items - the number of items the human brain can easily remember. This is not a strong rule, but it does help
  • Investigate failures - there is no other way to adapt
  • A checklist gotcha is the translation - people might make an effort to make a checklist do wonders in a certain context, only to find that translating it to other cultures is very difficult and prone to errors. A checklist is itself subject to failure investigation and adaptation
  • Lobbying and greed are hurting us - a particularly emotional bit of the book is a small rant in which the author describes how people would have jumped on a pill or an expensive surgical device that would have brought the same great results as checklists, only to observe that people are less interested in something easy to copy, distribute and that doesn't bring benefits to anyone except the patients. That was a painful lesson
  • The star test pilot is dead - there was a time when crazy-brave test pilots would risk their lives to test airplanes. The checklist method removed the need for unnecessary risks and slowly removed the danger and complexity from the test pilot's work, thus destroying the mythos. It also reduced the number of useless deaths significantly.
  • The financial investors that behave most like airline captains are the most successful - they balance their own greed or need for excitement with carefully crafted checklists, enabling their "guts" with the certainty that small details were not missed or ignored for reasons of wishful thinking
  • The Hudson river hero(es) - an interesting point was made when describing the Hudson river airplane crash. Even if the crew worked perfectly with one another, keeping their calm in the face of both engines suddenly stopping, calming and preparing the passengers, carefully checking things off their lists and completing each other's tasks, the media pulled hard to make the pilot alone a hero: surely he denied it every time and said it was a crew effort only because he was modest; clearly he had everything under control. That is not what happened, and it also explains why the checklist is so effective and yet so few people actually employ it. We dream of something else
  • We are not built for discipline - that is why discipline is something that enables itself. It takes a little discipline to become more disciplined. A checklist ensures a kind of formal discipline in cases previously analysed by yourself. It assumes control over the emotional need for risk and excitement.
  • Optimize the system, not the parts - it is always the best choice to look at something as a whole and improve it as a whole. The author mentions an experiment of building a car from the best parts, taken from different companies. The result was a junk car that was not very good. The way the parts interact with one another is often more important than individual performance

I am ending this review with the two YouTube videos on how to use and how not to use the WHO Surgical Safety Checklist that Atul Gawande created for surgical teams all around the globe.



I had a pretty strange bug to fix. It involved a class used in a web page that provided localized strings, only it didn't seem to work for Japanese, while French, English or German worked OK. The resource class was used like this: AdminUIString.SomeStringKey, where AdminUIString was a resx file in the App_GlobalResources folder. Other similar global resource resx classes were used in the same way and they worked! The only difference between them was the custom tool configured for them. My problem class was using PublicResXFileCodeGenerator from the Resources namespace, while the working classes used GlobalResourceProxyGenerator, without any namespace.

Now, changing the custom tool did solve the issue there, but it didn't solve it in some integration tests, where it still failed. The workaround for this was to use HttpContext.GetGlobalResourceObject("AdminUIString", "SomeStringKey").ToString(), which is pretty ugly. Since our project was pretty complex, using bits of ASP.Net MVC and (very) old school ASP.Net, no one actually understood where the difference lay. Here is an article that partially explains it: Resource Files and ASP.NET MVC Projects. I say partially because it doesn't really solve my problem in a satisfactory way. All it says is that I should not use global resources in ASP.Net MVC; it doesn't explain why it fails so miserably for Japanese, nor does it offer a magical fix for the problem without refactoring the convoluted resource mess we have in this legacy project. It will have to do, though, as no one is budgeting refactoring time right now.

It was long overdue for me to read a technical book and I've decided to go for a classic from 1999 about refactoring, written by software development icons such as Martin Fowler and Kent Beck. As such, it is no surprise that Refactoring: Improving the Design of Existing Code feels a little dated. However, not as much as I had expected. You see, the book is trying to familiarize the reader with the idea of refactoring, something programmers of these days don't need. In 1999, though, that was a breakthrough concept and it needed not only explaining, but lobbying. At the same time, the issues they describe regarding the process of refactoring, from the mechanics to the obstacles, feel as recent as today. Who hasn't tried to convince their managers to allow them a bit of refactoring time in order to improve the quality and readability of the code, only to be met with the ever pleasant "And what improvement would the client see?" or "Are there ANY risks involved?"?

The refactoring book starts by explaining what refactoring means, from the noun, which means an individual move, like Extract Method, to the verb, which represents the process of improving the readability and quality of the code base without changing functionality. In defense of the managerial point of view, somewhere at the end of the book the authors submit that big refactoring cycles are usually a recipe for disaster, preaching instead for small, testable refactorings in the areas you are working on: clean the code before you add functionality. Refactoring also promotes software testing. One cannot be confident that no bugs were introduced by a refactoring if the functionality is not covered by automated or at least manual tests. One of the most important tenets of the book is that you write code for other programmers (or for yourself), not for the computer. Development speed comes from quickly grasping the intention and implementation when reading, maintaining and changing a bit of code. Refactoring is the process that improves the readability of code. Machines go just as fast no matter how you write the code, as long as it works.

The book first describes and advocates refactoring, then presents the various refactoring moves in a structured way, akin to the software patterns that Martin Fowler also attempted to catalog, then has a few chapters written by the other authors, with their own views of things. It can be used as a reference, I guess, even if Fowler's site does a better job at that. Also, it is an interesting read, even if, overall, it felt to me like a rehearsal of my own ideas on the subject. Many of the refactorings in the catalog are now automated in IDEs, but the more complex ones have not only their mechanics explained, but also the reasons why and where they should be used. That structured way of describing them might feel like repeating the obvious, but I bet that, if asked, you couldn't come up with a conscious description of where a specific refactoring should be used. Also, while reading those specific bits, I kept fantasizing about an automated tool that could suggest refactorings, maybe using FxCop or something like that.

Things I've marked down from the book, in the order I wrote them down in:
  • Refactoring versus Optimization - Optimizing the performance or improving some functionality should not be mixed up with the refactoring of code, which aims to improve readability of code while preserving the initial functionality. Mixing them up is pitting the two essential stages of development one against the other.
  • Methods should use the data of their own object - one of the telltale signs of the need to refactor is when methods of one object use data from another object. It smells like the method should be moved into the responsibility of that other object.
  • When it is easy to refactor, choose a simple design - Of course the opposite is true, as well: when you know it will be hard to refactor a piece of code, try to design it first. If not, it is better to not add unnecessary complexity. This is in line with the KISS concept.
  • Split your application into self encapsulated parts - One of the ways to simplify refactoring is to separate your application into bits that you can manage separately. If you didn't design your application like that, try to first split it, then refactor.
  • Whenever you need to write a comment, consider extracting a method with a meaningful name - or renaming methods to be more expressive.
  • Consider polymorphism when seeing a switch statement - Now that is an interesting topic in itself. Why would polymorphism help here? How could it be simpler to understand than a switch/case statement? The idea behind this is that if you have a switch somewhere, you might have it somewhere else as well. Instead of taking decisions inside each method, it is better to split that behaviour in separate classes, each describing the particular value that the switch would have operated on.
  • Test before refactoring - this would have been drilled in your head already, but if not, the book will do that to you. In order to not add faults to the program with the refactoring, make sure you have tests for the existing functionality, tests that should pass after the refactoring process, as well.
  • The Quantity pattern - Review the Quantity pattern in order to improve readability and encapsulate simple common actions performed on specific types of units.
  • Split conditionals into methods - in other words try to simplify your conditional blocks to if conditionMethod() then ifMethod() else elseMethod(). It might seem a sure way to get to a fragmented code base, with small methods everywhere, but the idea is sound. A condition, after all, is an intention. Encapsulate it into a well named method and it will be very clear what the programmer intended. Maybe the same method will be used in other places as well, and then, using polymorphism, one can get rid of the conditional altogether.
  • Use Null objects - an interesting concept that I hadn't even considered before. It is easy to recognize the need for a Null object when there are a lot of checks for null. if x==null then doDefault() else x.doSomething() can be turned into a simple x.doSomething() if, instead of null, x is an object that represents emptiness but still has attached behaviour. An interesting side effect is that often the Null object can be made an immutable singleton. See the sketch after this list.
  • Code inside Assertions always executes - this is a gotcha I found interesting. Imagine the following code: Assert.IsTrue(SomeCondition()). Even if the Assert object is designed to execute nothing in Release mode, being compiled only in Debug, the method SomeCondition() will execute all the time. One option is to use an extra condition: Assert.IsTrue(Assert.On&&SomeCondition()) or, in C#, to send an expression: Assert.IsTrue(()=>SomeCondition())
  • Careful when replacing method parameters with parameter object in parallel processing scenarios - Which nowadays means always. Anyways, the idea is that old libraries designed for parallel processing used large value parameter lists. One might be inclined to Introduce Parameter Object, but that introduces a reference object that might lead to locking issues. Just another gotcha.
  • Separate Modifier from Query - This is a useful convention to remember. A method should either get some information (query) or change some data (modifier), not both. It makes the intention clear.
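
As an illustration of the Null object idea, here is a minimal Javascript sketch (the names are mine, not from the book):
// a singleton representing 'no user', carrying safe default behaviour
var NULL_USER = {
    name: '(nobody)',
    greet: function() { return 'Hello, stranger!'; }
};
function findUser(users, id) {
    for (var i = 0; i < users.length; i++) {
        if (users[i].id === id) return users[i];
    }
    return NULL_USER; // never return null
}
// the caller needs no null check:
console.log(findUser([], 42).greet()); // 'Hello, stranger!'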

That's about it. I have wet dreams of cleaning up the code base I am working on right now, maybe in a pair programming way (also a suggestion in the book and a situation when pair programming really seems a great opportunity), but I don't have the time. Maybe this summary of the book will inspire others who have it.

Update:
I've pinpointed the issue after a few more investigations. The https:// site was returning a security certificate issued for another domain. Why it worked in Firefox anyway, and why it didn't work in Chrome but then worked after an unauthorized call was made first, I still don't know; that is in the domain of browser internals.

I was trying to access an API on https:// from a page hosted on http://. Since the scheme of the call was different from the scheme of the hosting URL, it was interpreted as a cross-domain call. You might want to research this concept, called CORS, in order to understand the rest of the post.

The thing is that it didn't work. The server was correctly configured to allow cross-domain access, but my jQuery calls did not succeed. In order to access the API I needed to send an Authorization header, as well as request the information as JSON. Investigating the actual browser calls showed the correct OPTIONS request method, as well as the expected headers, only they appeared as 'Aborted'. It took me a few hours of taking things apart, debugging jQuery, adding and removing options, to suddenly see it work! The problem was that, after resetting IIS, the problem appeared again! What was going on?
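For reference, the call looked roughly like this (the URL and token are invented for the example); the custom Authorization header is what triggers the preflight OPTIONS request:
$.ajax({
    url: 'https://api.example.com/values', // different scheme/domain than the hosting page
    type: 'GET',
    dataType: 'json',
    headers: { 'Authorization': 'Bearer some-token' },
    success: function(data) { console.log(data); },
    error: function(xhr) { console.log('Aborted or failed: ' + xhr.status); }
});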

In the end I identified a way to consistently reproduce the problem, even if at the moment I have no explanation for it. The calls succeed after making a call with no headers (including the Content-Type one). So make a bogus, unauthorized call first and the next correct calls will work. Somehow this depends on IIS as well as on the Chrome browser. In Firefox it works directly, and in Chrome it seems to be consistently reproducible.