I had a pretty strange bug to fix. It involved a class used in a web page to provide localized strings, only it didn't seem to work for Japanese, while French, English and German worked fine. The resource class was used like this: AdminUIString.SomeStringKey, where AdminUIString was a resx file in the App_GlobalResources folder. Other similar global resource resx classes were used in the same code and they worked! The only difference between them was the custom tool configured for them: my problem class was generated by PublicResXFileCodeGenerator, in the Resources namespace, while the working classes used GlobalResourceProxyGenerator, without any namespace.

Now, changing the custom tool did solve the issue on the page, but it didn't fix some integration tests that were still failing. The workaround for those was to use HttpContext.GetGlobalResourceObject("AdminUIString", "SomeStringKey").ToString(), which is pretty ugly. Since our project was pretty complex, using bits of ASP.Net MVC and (very) old school ASP.Net, no one actually understood what the difference was. Here is an article that partially explains it: Resource Files and ASP.NET MVC Projects. I say partially, because it doesn't really solve my problem in a satisfactory way. All it says is that I should not use global resources in ASP.Net MVC; it doesn't explain why it fails so miserably for Japanese, nor does it offer a magical fix for the problem without refactoring the convoluted resource mess we have in this legacy project. It will have to do, though, as no one is budgeting refactoring time right now.
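If you are stuck with the GetGlobalResourceObject workaround, at least wrap it in a single place. Here is a minimal sketch, assuming a System.Web context; the helper class name and the fallback to the key are my own choices, not something from our project:

using System.Web;

public static class AdminStrings
{
    public static string Get(string key)
    {
        // GetGlobalResourceObject returns null when the resource class or the key is missing,
        // so guard against that instead of calling ToString() directly
        object value = HttpContext.GetGlobalResourceObject("AdminUIString", key);
        return value == null ? key : value.ToString();
    }
}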

It was long overdue for me to read a technical book and I've decided to go for a classic from 1999 about refactoring, written by software development icons such as Martin Fowler and Kent Beck. As such, it is not a surprise that Refactoring: Improving the Design of Existing Code feels a little dated. However, not as much as I had expected. You see, the book is trying to familiarize the reader with the idea of refactoring, something programmers these days don't need. In 1999, though, that was a breakthrough concept and it needed not only to be explained, but lobbied for. At the same time, the issues they describe regarding the process of refactoring, from the mechanics to the obstacles, feel as current as today. Who hasn't tried to convince their managers to allow them a bit of refactoring time in order to improve the quality and readability of code, only to be met with the always pleasant "And what improvement would the client see?" or "Are there ANY risks involved?"?

The refactoring book starts by explaining what refactoring means, from the noun, which is the individual move, like Extract Method, to the verb, which is the process of improving the readability and quality of the code base without changing functionality. In defense of the managerial point of view, somewhere towards the end of the book the authors admit that big refactoring cycles are usually a recipe for disaster, preaching instead for small, testable refactorings on the areas you are working on: clean the code before you add functionality. Refactoring also promotes software testing. One cannot be confident that a refactoring introduced no bugs if the functionality is not covered by automated or at least manual tests. One of the most important tenets of the book is that you write code for other programmers (or for yourself), not for the computer. Development speed comes from quickly grasping the intention and implementation when reading, maintaining and changing a bit of code. Refactoring is the process that improves the readability of code. Machines run it just as fast no matter how you write it, as long as it works.

The book first describes and advocates refactoring, then presents the various refactoring moves in a structured way, akin to the software patterns that Martin Fowler also attempted to catalog, then closes with a few chapters written by the other authors, with their own views on things. It can be used as a reference, I guess, even if Fowler's site does a better job at that. It is also an interesting read, even if, overall, it felt to me like a restatement of my own ideas on the subject. Many of the refactorings in the catalog are now automated in IDEs, but the more complex ones have not only the mechanics explained, but also the reasons why and where they should be used. That structured way of describing them might feel like repeating the obvious, but I bet that, if asked, you couldn't come up with a conscious description of where a specific refactoring should be used. Also, while reading those specific bits, I kept fantasizing about an automated tool that could suggest refactorings, maybe using FxCop or something like that.

Things I've marked down from the book, in the order I wrote them down in:
  • Refactoring versus Optimization - Optimizing the performance or improving some functionality should not be mixed up with the refactoring of code, which aims to improve readability of code while preserving the initial functionality. Mixing them up is pitting the two essential stages of development one against the other.
  • Methods should use the data of their own object - one of the telltale signs that refactoring is needed is when methods from one object use data from another object. It smells like the method should be moved into the responsibility of that other object.
  • When it is easy to refactor, choose a simple design - Of course, the opposite is true as well: when you know a piece of code will be hard to refactor later, put more effort into its design up front. Otherwise, it is better not to add unnecessary complexity. This is in line with the KISS concept.
  • Split your application into self encapsulated parts - One of the ways to simplify refactoring is to separate your application into bits that you can manage separately. If you didn't design your application like that, try to first split it, then refactor.
  • Whenever you need to write a comment, consider extracting a method with a meaningful name - or renaming methods to be more expressive.
  • Consider polymorphism when seeing a switch statement - Now that is an interesting topic in itself. Why would polymorphism help here? How could it be simpler to understand than a switch/case statement? The idea behind this is that if you have a switch somewhere, you might have it somewhere else as well. Instead of taking decisions inside each method, it is better to split that behaviour in separate classes, each describing the particular value that the switch would have operated on.
  • Test before refactoring - this has probably been drilled into your head already, but if not, the book will do it. In order not to introduce faults with the refactoring, make sure you have tests for the existing functionality, tests that should pass after the refactoring process as well.
  • The Quantity pattern - Review the Quantity pattern in order to improve readability and encapsulate simple common actions performed on specific types of units.
  • Split conditionals into methods - in other words try to simplify your conditional blocks to if conditionMethod() then ifMethod() else elseMethod(). It might seem a sure way to get to a fragmented code base, with small methods everywhere, but the idea is sound. A condition, after all, is an intention. Encapsulate it into a well named method and it will be very clear what the programmer intended. Maybe the same method will be used in other places as well, and then, using polymorphism, one can get rid of the conditional altogether.
  • Use Null objects - an interesting concept that I hadn't even considered before. It is easy to recognize the need for a Null object when there are a lot of checks for null: if x==null then doSomethingDefault() else x.something() can be turned into a simple x.something() if, instead of null, x is an object that represents 'empty' but still has behavior attached. An interesting side effect is that often the Null object can be made an immutable singleton (see the sketch after this list).
  • Code inside Assertions always executes - This is a gotcha I found interesting. Imagine the following code: Assert.IsTrue(SomeCondition()). Even if the Assert object is designed to do nothing in Release mode and only work in Debug, the method SomeCondition() will still execute every time, because its result is evaluated before the call. One option is to add an extra condition: Assert.IsTrue(Assert.On&&SomeCondition()) or, in C#, to pass an expression instead: Assert.IsTrue(()=>SomeCondition()), so the assert implementation can decide whether to invoke it (see the note after this list).
  • Careful when replacing method parameters with a parameter object in parallel processing scenarios - which nowadays means always. Anyway, the idea is that old libraries designed for parallel processing used long lists of value-type parameters. One might be inclined to apply Introduce Parameter Object, but that introduces a shared reference object that might lead to locking issues. Just another gotcha.
  • Separate Modifier from Query - This is a useful convention to remember. A method should either get some information (query) or change some data (modifier), not both. It makes the intention clear.
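Since the Null object idea was the newest to me, here is a minimal C# sketch of it; the interface and class names are made up for illustration, they are not taken from the book:

public interface ICustomer
{
    string Name { get; }
    decimal GetDiscount();
}

public class Customer : ICustomer
{
    public string Name { get; set; }
    public decimal GetDiscount() { return 0.1m; }
}

public class NullCustomer : ICustomer
{
    // an immutable singleton, as the book suggests
    public static readonly NullCustomer Instance = new NullCustomer();
    private NullCustomer() { }
    public string Name { get { return "occupant"; } }
    public decimal GetDiscount() { return 0m; }
}

With lookups returning NullCustomer.Instance instead of null, callers can simply write customer.GetDiscount() and never check for null. As for the assertion gotcha, .Net specifically has a way around it: System.Diagnostics.Debug.Assert is marked with [Conditional("DEBUG")], so in a build without the DEBUG symbol the compiler removes the entire call, argument evaluation included; the gotcha applies to assert helpers that check a flag at runtime instead.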

That's about it. I have wet dreams of cleaning up the code base I am working on right now, maybe in a pair programming way (also a suggestion from the book, and a situation where pair programming really seems like a great fit), but I don't have the time. Maybe this summary of the book will inspire others who have it.

Update:
I've pinpointed the issue after a few more investigations. The https:// site was returning a security certificate issued for another domain. Why it worked in Firefox anyway, and why it didn't work in Chrome but then worked after an unauthorized call was made first, I still don't know; that falls in the domain of browser internals.

I was trying to access an API on https:// from a page that was hosted on http://. Since the scheme of the call was different from the scheme of the hosting page, it was treated as a cross domain call. You might want to research this concept, called CORS (Cross-Origin Resource Sharing), in order to understand the rest of the post.

The thing is that it didn't work. The server was correctly configured to allow cross domain access, but my jQuery calls did not succeed. In order to access the API I needed to send an Authorization header, as well as request the information as JSON. Inspecting the actual browser calls showed the correct OPTIONS request method, as well as the expected headers, only the requests appeared as 'Aborted'. It took me a few hours of taking things apart, debugging jQuery, adding and removing options, to suddenly see it work! The problem was that after resetting IIS, the issue appeared again! What was going on?

In the end I've identified a way to consistently reproduce the problem, even if at the moment I have no explanation for it. The calls succeed after first making a call with no headers (including the Content-Type one). So make a bogus, unauthorized call and the next correct calls will work. Somehow this depends on IIS as well as on the Chrome browser: in Firefox it works directly, while in Chrome the behaviour is consistently reproducible.
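For reference, the kind of call that was failing looked something like this; the URL and the token are placeholders, not the real ones:

$.ajax({
    url: 'https://api.example.com/endpoint', // different scheme, therefore cross origin
    type: 'GET',
    dataType: 'json',
    headers: { Authorization: 'Bearer sometoken' } // the custom header is what forces the OPTIONS preflight
}).done(function (data) {
    console.log(data);
});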

I had to investigate a situation where a message of "Object moved to here", where "here" was a link, appeared in our ASP.Net application. First of all, we don't have that message in the app; it turns out it is an internal message in ASP.Net, more exactly in HttpResponse.Redirect. It is a hardcoded piece of HTML that is written to the response when the status code is set to 302 and the redirect location is set to the given URL. The browser is expected to move to the redirect location anyway, so the displayed message should only be a temporary thing. However, if the URL is empty, the browser does not go anywhere.

In conclusion, if you get to a webpage that has the following content:
<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="[url]">here</a>.</h2>
</body></html>
then you are probably trying to Response.Redirect to an empty URL.
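The fix is simply to guard against the empty URL before redirecting. Here is a minimal sketch; GetRedirectUrl is a hypothetical helper standing in for wherever the URL actually comes from, usually configuration or a database:

string redirectUrl = GetRedirectUrl(); // hypothetical helper
if (string.IsNullOrEmpty(redirectUrl))
{
    // fall back to a known page (and/or log the situation) instead of redirecting to nowhere
    redirectUrl = "~/Default.aspx";
}
Response.Redirect(redirectUrl, false);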

If I had written an article about Scrum two years ago (wait a minute, I did!) it would have been a cold enumeration of rules and my outsider's opinion about them.

If I had written an article about Scrum two months ago, it would probably have been an insider's rant, explaining how just following the rules of Scrum leads to blind bureaucracy and a lot of wasted time.

Well, now I am writing about Scrum as I understood it from a personal viewpoint, because I've had this epiphany: Scrum (or any other development process) is a personal process first and foremost. To use a metaphor, so that I can move it out of the way and talk shop: it is like driving a car on a straight road. You are the engine (strong and reliable, hopefully), the road is the development process and your speed is your development speed. It is easy to go fast and furious, it's a straight road after all, but what if there is fog? Then you would have to slow down, lest you miss a sudden obstacle.

To get real now, the fog is the lack of foreknowledge about what you are going to do. And I don't mean project vision, or strategic planning, I am talking about your personal schedule, of how well you know what you are going to do. Scrum is trying to achieve this by enforcing a time table (the sprints) and a schedule (the sprint planning) and a recurrent update mechanism (the daily meetings), but it is only the beginning. If you plan your sprint superficially, it is like adding fog to the road in front of you. If you (and I mean YOU!) do not update your schedule as you go, including documentation, estimated time, time spent and all useful metrics, you add fog to the road behind you. If you are surrounded by uncertainty, you cannot plan anything. You don't know where you are, where you are going and how fast you are going to get there.

After 6 months of badly implemented Scrum, I've experimented with using a simple text file to mark when I start a job and when I finish it and any breaks in between, updating the actual work time and estimated time in the Scrum tool. We are using a clearly defined specification document for each feature, including requirements, acceptance tests, implementation details, code reviews, definition of done, test plan, updated as we go. I've discovered that all this huge amount of information, instead of slowing me down, lifts the fog and allows me to push the pedal to the metal and go as fast as I can. I know when I am doing something, why, and who is doing everything else and why. At the end of the day I don't have to rack my brains to remember what I did, I just cut and paste the task names and times from my text file to an email and the Scrum master just goes over the list and we are free to talk about what really mattered in those tasks: new issues, dependencies, implementation details. The result is visibility of what we are doing and not less important, I get to go home early.

Of course, I am writing this enthusiastic post after a single day of well done Scrum and six months of poorly done Scrum, but I have a great feeling about this, something akin to fog being lifted from my eyes.

A Microsoft patch for ASP.Net released on the 29th of December 2011 adds new functionality that rejects POST HTTP requests with more than 1000 keys and any JSON HTTP request with more than 1000 members. That is pretty huge, and if you have encountered this exception:
Operation is not valid due to the current state of the object.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Operation is not valid due to the current state of the object.

Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:
[InvalidOperationException: Operation is not valid due to the current state of the object.]
System.Web.HttpValueCollection.ThrowIfMaxHttpCollectionKeysExceeded() +2692302
System.Web.HttpValueCollection.FillFromEncodedBytes(Byte[] bytes, Encoding encoding) +61
System.Web.HttpRequest.FillInFormCollection() +148

[HttpException (0x80004005): The URL-encoded form data is not valid.]
System.Web.HttpRequest.FillInFormCollection() +206
System.Web.HttpRequest.get_Form() +68
System.Web.HttpRequest.get_HasForm() +8735447
System.Web.UI.Page.GetCollectionBasedOnMethod(Boolean dontReturnNull) +97
System.Web.UI.Page.DeterminePostBackMode() +63
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +133


then your site has been affected by this patch.

Well, you probably know that something is wrong with the design of a page that sends 1000 POST values, but still, let's assume you are in a situation where you cannot change the design of the application and you just want the site to work. Never fear, use this:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
  <appSettings>
    <add key="aspnet:MaxHttpCollectionKeys" value="5000" />
    <add key="aspnet:MaxJsonDeserializerMembers" value="5000" />
  </appSettings>
</configuration>


More details:
Knowledge base article about it
The security advisor for the vulnerability fixed
The entire MS11-100 security update bulletin

Update 3rd of March 2016: I've opened this blog post on the latest versions of Internet Explorer, Firefox and Chrome and the bug is not present anymore. You should consider this blog post obsolete.

The CSS standard allows selecting an element with a certain CSS class, using the dot notation (.myClass), but it also allows an element to have more than one CSS class, separated by space (class="myClass anotherClass"). Now, sometimes we would like to select an element that has two simultaneous classes and the CSS syntax for this is to put two class selectors one after the other, with nothing to separate them (.myClass.anotherClass). This blog entry is about how you should avoid using this, at least for now, as it seems to be one of the buggiest parts of the CSS implementation in the current browsers.

First of all, Internet Explorer just fails with this. Up to version 8, simultaneous class selectors simply failed in a random way. I was surprised a few days ago to see that Chrome and Firefox also have issues with this. Even if the selector did appear to work, it did not register in the CSS rule lists of either Firebug or the Chrome developer tools when used with CSS3 content selectors. Remove the double class selector and they would magically appear.

The bug can be reproduced in this code:
<style type="text/css">
.a.b span:before {
content:"before ";
}

.b span:after {
content:" after";
}
</style>
<div class="a b">
<span>Test</span>
</div>

and here is the result (the original post embedded a live demo div at this point):


In Chrome and Firefox the span element will appear with an after rule, but not a before rule, even if they are both applied.

Update February 2016: Tested it on Visual Studio 2015, with the Roslyn compiler, and the problem seems to have vanished.

Here is the now obsolete post:

A class in .Net can have optional parameters, with default values specified in the constructor signature, like this:
public MyClass(int p1,int p2=0) {}

If the class is inheriting from the Attribute class, then one can also specify property values when using it to decorate something, like this:
public class MyTestAttribute:Attribute {
public int P3 { get;set; }
}

[MyTest(P3=2)]
public class MyClass {}

What do you think this code would do?
public class MyTestAttribute:Attribute {
public MyTestAttribute(int p1,int p2=0) {}
public int P3 { get;set; }
}

[MyTest(1,P3=2)]
public class MyClass {}


Well, I'll tell you what is going to happen. Visual Studio and ReSharper both see no problem with the syntax, but the compiler issues the error "error CS0182: An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type", without specifying any file or line.

My guess is that it tries to interpret P3=2 as an expression to be evaluated and passed as the second constructor argument. What I expected was for the second constructor parameter to get its default value, then for the property P3 to be set. The vagueness of the error points to a compiler bug.
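I haven't verified this against that exact compiler version, so take it as an untested suggestion, but the obvious thing to try is to stop relying on the default value and pass the optional parameter explicitly, which avoids the optional-argument code path altogether:

[MyTest(1, 0, P3 = 2)]
public class MyClass {}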

It all started with this site that got stuck in Google Chrome's DNS cache, so that any changes to the Windows/System32/drivers/etc/hosts file were ignored. I didn't want to close all Chrome windows (since the DNS cache is application wide in Chrome), so I googled for an answer. And there it was, a simple URL that, typed in the Chrome address bar, allowed me to clear the cache: chrome://net-internals#dns.

But there are a lot more cool things there: testing of failed sites, a log of browser network events, control over open connections and so much more. That got me curious about other cool chrome:// URLs and I found some links listing a lot of them.

I don't have the time to parse all these cool hidden Chrome URLs and review them in this blog entry, so I will just list some links and let you explore the goodness:
Google Chrome’s Full List of Special about: Pages
12 Most Useful Google Chrome Browser chrome:// Commands
About and Chrome URLs

Update: The Chrome URL that lists all the others can be found at chrome://about/.

I was trying to access http://localhost/Reports/Page.aspx, in other words an ASP.Net page in the Reports path of the local site. Instead, I was getting a Windows authentication prompt that had no business being there. At first I thought to debug the page, but it wouldn't even get there before the authentication prompt appeared. I googled for it, but I didn't get far, because I was looking for weird Windows authentication prompts, not for the specific location of my page: the Reports folder. Stranger still, I stopped IIS and the authentication dialog kept appearing!

In the end, a colleague told me the solution: SQL Reporting Services answers on the local Reports path! I stopped the service and voila! No more authentication prompt. Instead, a 503 Service Unavailable error. This article explained things quite clearly. Even if you stop the service, you have to delete the access control list entry for /Reports with the command netsh http delete urlacl url=http://+:80/Reports or, I guess, restart the system after you set the Reporting Services service to Manual or Disabled.

Update: It is even easier to go to Sql Server Configuration Tools (in the Start Menu), run the Reporting Service Configuration Manager, then change the URL for the Report Manager URL to something other than Reports.

But what is this strange Access Control List? You can get a clue by reading about Http.sys API in Windows Vista and above and about Namespace Reservation. Apparently, one can do similar things on Windows Server 2003 and maybe even XP with the Httpcfg utility.
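As a side note, you can list all the existing reservations with netsh http show urlacl (pipe it through findstr Reports if the list is long), which is a quick way to confirm that a reservation on /Reports is what is hijacking the requests.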

Let's start with an example:
DECLARE @SiteId INT
SELECT @SiteId=isnull(SiteId,0) FROM Orders WHERE OrderID=15
UPDATE Order_Sites SET SiteID=@SiteId


Can you spot the problem? What if SiteID in Order_Sites is not nullable? What if there is no order with OrderId 15?

That's right: when you select into a variable, you must be certain that the query returns at least one row, otherwise the variable will not be set at all.

The solution is to add another operation that sets the value correctly. Here are three possible options (the first is also sketched right after this list):
  • Set @SiteId to 0 before the select.
  • Set @SiteId to isnull(@SiteId,0) after the select and simplify the select to not contain the isnull.
  • Use the select as an argument of the isnull operation:
    SET @SiteId= isnull((SELECT SiteId FROM Orders WHERE OrderID=15),0)
    Yes, you can do that.
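In code, the first option would look something like this, using the tables from the example above:
DECLARE @SiteId INT
SET @SiteId=0 -- default value in case the select finds no row
SELECT @SiteId=isnull(SiteId,0) FROM Orders WHERE OrderID=15
UPDATE Order_Sites SET SiteID=@SiteId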


Either way, always pay attention to this gotcha in using SQL.

I had this operation on a Javascript object that was using a complex regular expression to test for something. Usually, when you want to do that, you use the regular expression inline or as a local variable. However, given the complexity of the expression I thought it would be more efficient to cache the object and reuse it anytime.

Now, there are two gotchas when using regular expressions in Javascript. One of them is that if you want to match on a string multiple times, you need to use the global flag. For example the code
var reg=new RegExp('a',''); //the same as: var reg=/a/;
alert('aaa'.replace(reg,'b'));
will alert 'baa', because without the global flag the replace operation stops after the first match. That is why I normally use the global flag on all my regular expressions, like this:
var reg=new RegExp('a','g'); //the same as: var reg=/a/g;
alert('aaa'.replace(reg,'b'));
(alerts 'bbb')

The second gotcha is that if you use the global flag, the lastIndex property of the RegExp object is not reset after a match, so the next match starts from where the previous one ended. So code like this:
var reg=new RegExp('a',''); //same as: /a/;
 
reg.test('aaa');
alert(reg.lastIndex);
 
reg.test('aaa');
alert(reg.lastIndex);
will alert 0 both times. Using the global flag will lead to alerting 1 and 2.

The problem is that the solution to the first gotcha leads straight into the second, as in my case. I used the RegExp object as a field in my object, then used it repeatedly to test for a pattern in multiple strings. It would work once, then fail, then work again. Once I removed the global flag, it all worked like a charm.

The moral of the story is to be careful of constructs like _reg.test(input);
when _reg is a global regular expression. It will attempt to match from the index of the last match in any previous string.


Also, in order to use a global RegExp multiple times without redeclaring it every time, one can just manually reset the lastIndex property : reg.lastIndex=0;

Update: Here is a case that was totally weird. Imagine a javascript function that returns an array of strings based on a regular expression match, inside a for loop. In Firefox it would return half the number of items it should have. If one entered Firebug and placed a breakpoint inside the loop, the list would be OK! If the breakpoint was placed outside the loop, the bug would occur. Here is the code. Try to see what is wrong with it:
types.forEach(function (type) {
if (type && type.name) {
var m = /(\{tag_.*\})/ig.exec(type.name);
// type is tag
if (m && m.length) {
typesDict[type.name] = m[1];
}
}
});
The answer: the regular expression uses the global flag, so its lastIndex is kept between calls and, presumably because Firefox at the time reused the same RegExp object for the literal, every other exec started searching from the middle of the string and failed. Removing the g flag, or resetting reg.lastIndex to 0 before each exec, fixes it.

I've had a horrible week. It all started with a good Scrum sprint (or so I thought) followed by a period of quiet in which I could concentrate on my own ideas. And one of my ideas was to optimize the structure of the solution we work on, containing 48 projects, in order to save space and compilation time. In my eyes, I was a hero: for a company with tens to hundreds of devs, even a one second decrease in build time would be important. So I set about doing that.

Of course, the sprint was not as good as I had imagined. A single stored procedure led to no less than four bugs in production, with me to blame for all of them. People lost time reproducing the bugs, deploying the fix, reviewing code, etc. At long last I thought I was done with it and I could show everyone how great the solution looked now (on my computer) and atone for my sins.

So, from a solution that was 700MB clean and 4GB after compilation, I managed to get it down to a maximum of 1.4GB. In fact, it was so small I could put it all in a RAM disk, leading to enormous speeds. In comparison, a normal drive goes to about 30MB per second, an SSD drive (without encryption) to about 250MB/s, while my RAM disk was running at a whopping 3.6GB/s. That sped up the compilation and the parsing of files. Moreover, I had discovered that MsBuild has this /m parameter that makes it use multiple processors. A compilation went down to about 40 seconds, from two and a half minutes. Great! Alas, it was not to be so easy.
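For reference, the flag is passed on the command line, something like msbuild MySolution.sln /m, or /maxcpucount:4 if you want to cap the number of projects built in parallel; the solution name here is, of course, a placeholder.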

First of all, the steps I was considering were simple:
  • Take all projects and make them have a single output folder. That would decrease the size of the solution, since there would be no copies of the .dll files; then the sheer speed of the compilation would increase, since there would be less copying and less compilation.
  • More importantly, I was considering making a symlink to a RAM drive and using it instead of the destination folder.
  • Another step I was considering was making all references point to the dll files in the output folder, not to the projects, allowing projects to be opened independently.


At first I was amazed that the solution decreased in size so much and I just placed the entirety of it into a RAM drive. This fixed some of the issues with Visual Studio, because when I was selecting a file through a symlink to add as a reference, it would resolve to the target folder instead of the name of the symlink. And it wasn't easy, either. Imagine removing all project references and replacing them with dll references for 48 projects. It took forever.

Finally I had the glorious compilation. Speed, power, size, no warnings either (since I also worked on that) and a few bug fixes thrown in there for good measure. I was a god! Then the problems appeared.

Problem 1: I had finished the previous sprint with a buggy stored procedure committed to production. Clients were losing money and complaining. That put a serious dent in my pride, especially since the problems ranged from insufficient attention to how I wrote the code to a downright lack of knowledge of the flow of the application. For the last part I am not really the only one to blame, but it was my responsibility.

Problem 2: The application was throwing errors about the target framework of a dll. It was enough to make me understand a major flaw in my design: there were .Net 3.5 and .Net 4.0 assemblies in the solution, and placing them all in the same output folder would break some build scripts. Even worse, the 8 web projects in the solution needed to have their output in their bin folder, so that IIS would find them. I fixed it, only to see the size of the solution rise back to 3GB.

Problem 3: Visual Studio would not be smart enough to understand that, if a project is loaded, going to the declaration of a member in the compiled assembly means I want to see the actual source, not the IL code. Well, sometimes it worked, but sometimes it didn't. As a result, I restored the project references instead of the assembly references.

Problem 4: the MsBuild /m flag would do wonders on my machine, but it would not do much on the build server. Nor would it do its magic on slower computers with fewer processors than my own.

Problem 5: Facing a flood of problems coming from me, my colleagues lost faith and decided to not even try the modifications that removed the compilation warnings from the solution.

Conclusion: The build went marginally faster, but not enough to justify a whole week of work on it. The size decreased by 25%, making it feasible to put it all in a RAM drive, which was great, to the detriment of working memory. I still have to see if that is a good or a bad thing. The multiprocessor hacks didn't do much, the warnings are still there and even some of my bug fixes were problematic, because someone else had also worked on them and didn't tell anyone. All in a week's work.

Things I have learned from all this: baby steps. When I feel enthusiasm, I must take it as a sign of trouble. I must be as dispassionate as an ice cube and think things through. If I am working on a branch, integrate the trunk into it every day, so that it doesn't become harder to do at the end. When doing something, do it from start to finish, no matter what horrors I see while doing it. Move away from Sodom and don't look back; someone else will fix that, maybe, you just do your task well. When finishing something, commit it into source control so it can easily be reverted through a single atomic operation.

It is difficult for me to adjust to something that involves this amount of planning and focus. I feel as if the chaotic development years of my youth were somewhat better, even if at the time I felt that it was stupid and that focus and planning were needed. As a good Romanian, I am neurotic enough to see the worst side of everything, a master at complaining about it, but incapable of actually doing something. Yeah... this was a bad week.

Nothing new on the personal or work front. However, I've found two interesting news items just today and I wanted to share them.

First, the development of a camera that captures pictures you can focus later. Although I have heard of solid metal lenses that would cost less than $1 to make and would achieve the same effect, the only actually functioning system I've heard of so far is the Lytro Living Picture camera. Here is an Ars Technica article on it and here is a YouTube video demo.

The second news item is more IT related. It involves the cryptographic standards for XML, as defined by the W3C. They failed! Here is an article about how they were cracked using a vulnerability in Cipher Block Chaining and here is a link to their press release.

A question arose at the office today: what is faster? Using a Dictionary<string,object> with the StringComparer.OrdinalIgnoreCase comparer passed to the constructor, or using the default constructor and calling ToLower on the key before using it? The quick answer: using ToLower on the key.

The longer answer is that StringComparer.OrdinalIgnoreCase implements IEqualityComparer<string> using a native helper for GetHashCode(), which is very efficient. Unfortunately, it must perform a case insensitive comparison between the input key and the stored keys on every lookup, while calling ToLower on the keys before using them pays the normalization cost only once per key.
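If you want to check this on your own data (key length and distribution matter), here is a minimal benchmark sketch; the keys and iteration counts are arbitrary:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class Program
{
    static void Main()
    {
        var keys = Enumerable.Range(0, 1000).Select(i => "SomeKey" + i).ToArray();
        var insensitive = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);
        var lowered = new Dictionary<string, object>();
        foreach (var key in keys) { insensitive[key] = null; lowered[key.ToLower()] = null; }

        // look every key up many times with the case insensitive comparer
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < 1000; i++)
            foreach (var key in keys) { var x = insensitive[key]; }
        Console.WriteLine("OrdinalIgnoreCase comparer: " + sw.Elapsed);

        // same lookups, but lowercasing the key and using the default comparer
        sw.Restart();
        for (var i = 0; i < 1000; i++)
            foreach (var key in keys) { var x = lowered[key.ToLower()]; }
        Console.WriteLine("ToLower key: " + sw.Elapsed);
    }
}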