I won't get into explaining in detail what the dynamic keyword does in .Net 4.0; suffice it to say that it is enabled by the DLR (Dynamic Language Runtime) and allows an object's type to be determined at runtime rather than at compile time. I will use this feature to select the appropriate method overload for a specific type.

I've long had a problem with the object-oriented principle that says methods with different signatures are chosen based on the types of their parameters: if the type of a parameter is unknown at compile time (say it was boxed into an object), the overload executed is always the one with the object parameter:

void DoSomething(object o) {
    Console.WriteLine("I'm an object");
}

void DoSomething(int o) {
    Console.WriteLine("I'm an int");
}

void Test() {
    int i = 1;
    object o = i;
    DoSomething(i);
    DoSomething(o);
}

The result of this would be "I'm an int" followed by "I'm an object".

Solutions for this range from a cascade of type casts (since the switch statement doesn't work on Type), to a Dictionary<Type, Action>, to using reflection to find the method with a specific signature and invoke it.
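For comparison, here is a minimal sketch of the dictionary approach (the registered types and messages are only illustrative; it needs System.Collections.Generic):

// a dispatch table mapping runtime types to handlers
private static readonly Dictionary<Type, Action<object>> handlers =
    new Dictionary<Type, Action<object>>
    {
        { typeof(int), o => Console.WriteLine("I'm an int") },
        { typeof(string), o => Console.WriteLine("I'm a string") }
    };

void DoSomethingViaDictionary(object o)
{
    Action<object> handler;
    if (handlers.TryGetValue(o.GetType(), out handler))
        handler(o); // exact runtime type match
    else
        Console.WriteLine("I'm an object"); // fallback, like the object overload
}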

A typical OOP solution that is correct, but really cumbersome to use is the double dispatch pattern. You can read an interesting article about it here.

The worst thing about this is that o.GetType() is System.Int32. It's like it's mocking us (and not in a unit testing way either).

Here comes dynamic:

void DoSomething(object o) {
    Console.WriteLine("I'm an object");
}

void DoSomething(int o) {
    Console.WriteLine("I'm an int");
}

void Test() {
    int i = 1;
    object o = i;
    dynamic d = o;
    DoSomething(i);
    DoSomething(o);
    DoSomething(d);
}

This will print the same results as before, followed by "I'm an int"! Pay attention to the fact that I did not set the value of d to the int, but to the object! What is even cooler is that one can implement only the methods that make sense, without needing a catch-all method that receives a parameter of type object (unless, of course, you want to catch unsupported types and throw some meaningful error). No more casting insanity!

I use this technique to get an object that inherits from a base class, without knowing its concrete type, and pass it to private methods that share a name but differ in parameter type. All I have to do is cast my object to dynamic and pass it to the private method, and the DLR does the rest for me.

I have not yet tested the performance aspect of this but, considering that the DLR is the base for all dynamic languages in .Net, like IronRuby and IronPython, I bet it is faster than dictionaries of Actions.

So, to recap: if you ever want a boxed object to behave according to its actual type, call myMethod((dynamic)obj) rather than myMethod(obj) and you are set.

Update: I have implemented this pattern in an application I am working on and I am very satisfied with it. I've created a separate assembly that adds the DynamicPatternAttribute and DynamicPatternIgnoreAttribute classes, which decorate the methods involved, as well as a PatternChecker class that verifies there is a method implementation for each type inheriting from a specific base type. Here are some details on how it works.

First of all, we must define the purpose of such a pattern. As I said above, one can use it to specify behavior based on specific types that inherit from a base type. This is desirable when the types in question cannot be changed to add new behavior. Even if they can be changed, it may not be desirable to add a reference from the project containing the classes to the project containing the behavior. It is almost like defining static extension methods.

Then there are the elements:
  • The base class from which all classes that determine behavior inherit (object if nothing specific)
  • The routing method that receives at least a base class parameter
  • The behavior methods that have as a parameter the subclasses of the base type
  • The pattern implementation checker class


So far I've used it in the following way:
  • The new behavior is encapsulated in methods in a static class
  • The routing method receives as the first parameter the base type for the classes that should determine behavior
  • The only code in the routing method is a call to the private behavior method(s), passing the same parameters it received, except the first, which is cast to dynamic
  • The behavior methods are usually private methods having the same name as the routing method, but ending in "Handle"
  • The routing method is decorated with [DynamicPattern("nameOfTheBehaviorMethods")]
  • The routing method is decorated with [DynamicPatternIgnore(typeof(subClassToIgnore))], which tells the checker which classes do not need behavior implementations
  • The static class containing the pattern has a static constructor that calls a method to check the implementation of the pattern
  • The checker method is decorated with [Conditional("DEBUG")] so that it doesn't slow the program down with reflection checks in release builds
  • The checker method calls PatternChecker.CheckImplementation(typeof(staticClass)) or PatternChecker.CheckImplementation(typeof(class).Assembly)


The PatternChecker class only checks that there is a method with the name specified in the DynamicPattern constructor for each subclass of the base type of the routing method's first parameter.
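To give an idea of what happens under the hood, here is a minimal sketch of how such a CheckImplementation method could work; this is a reconstruction, not the actual code, and it assumes the attributes expose MethodName and IgnoredType properties (it needs System.Linq and System.Reflection):

public static class PatternChecker
{
    public static void CheckImplementation(Type staticClass)
    {
        foreach (var routing in staticClass.GetMethods(
            BindingFlags.Public | BindingFlags.Static))
        {
            var pattern = (DynamicPatternAttribute)Attribute
                .GetCustomAttribute(routing, typeof(DynamicPatternAttribute));
            if (pattern == null || routing.GetParameters().Length == 0)
                continue;
            // the types explicitly excluded from the check
            var ignored = Attribute
                .GetCustomAttributes(routing, typeof(DynamicPatternIgnoreAttribute))
                .Cast<DynamicPatternIgnoreAttribute>()
                .Select(a => a.IgnoredType)
                .ToList();
            var baseType = routing.GetParameters()[0].ParameterType;
            var subTypes = baseType.Assembly.GetTypes()
                .Where(t => t != baseType
                    && baseType.IsAssignableFrom(t)
                    && !ignored.Contains(t));
            foreach (var subType in subTypes)
            {
                // look for a behavior method whose first parameter
                // is exactly this subclass
                bool found = staticClass
                    .GetMethods(BindingFlags.NonPublic | BindingFlags.Static)
                    .Any(m => m.Name == pattern.MethodName
                        && m.GetParameters().Length > 0
                        && m.GetParameters()[0].ParameterType == subType);
                if (!found)
                    throw new InvalidOperationException(
                        "No " + pattern.MethodName + " implementation for " + subType);
            }
        }
    }
}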

I hope you like this pattern. I certainly do. I leave you with an actual implementation example:

public static class RequestHandler
{
    static RequestHandler()
    {
        checkDynamicPattern();
    }

    [Conditional("DEBUG")]
    private static void checkDynamicPattern()
    {
        PatternChecker.CheckImplementation(typeof(RequestHandler));
    }

    [DynamicPattern("getObjectsHandle")]
    [DynamicPatternIgnore(typeof(BaseDeleteRequest))]
    [DynamicPatternIgnore(typeof(DeleteUsersRequest))]
    [DynamicPatternIgnore(typeof(DeleteCategoriesRequest))]
    [DynamicPatternIgnore(typeof(DeleteDataRequest))]
    [DynamicPatternIgnore(typeof(DeleteApplicationsRequest))]
    [DynamicPatternIgnore(typeof(GetEntityRequest))]
    [DynamicPatternIgnore(typeof(BaseObjectActionRequest))]
    [DynamicPatternIgnore(typeof(GetEntitiesRequest))]
    [DynamicPatternIgnore(typeof(GetEntityByIdRequest))]
    [DynamicPatternIgnore(typeof(BatchOperationRequest))]
    public static BaseEntitiesResponse GetObjects(BaseRequest request, Coordinator coordinator)
    {
        return getObjectsHandle((dynamic)request, coordinator);
    }

    private static BaseEntitiesResponse getObjectsHandle(BaseRequest request, Coordinator coordinator)
    {
        throw new ArgumentException("Cannot find a suitable getObjects method for type of request " +
            request.GetType());
    }

    private static BaseEntitiesResponse getObjectsHandle(GetApplicationRequest request,
        Coordinator coordinator)
    {
        DataObject entity = coordinator.ApplicationManager.GetObject(request.Id,
            request.IncludeOptions);
        return getEntitiesResponse(entity);
    }

    // ... more getObjectsHandle overloads, one per request type
}

Update: There is a wonderful unintended side effect of the dynamic pattern when casting to generic types. Imagine you have a generic interface like ICustom<T> and you want to use the standard model of checking the type and selecting behaviour. You can't do it with as! There is no valid way of writing
var custom = obj as ICustom<T>;
when T is unknown, so you are forced to use GetType() and then some weird methods that interrogate the Type object. You can do it with the dynamic pattern.



checkIfCustom(obj);

private void checkIfCustom(object obj) {
    dynamicCheckIfCustom((dynamic)obj);
}

private void dynamicCheckIfCustom(object obj) {
    //do nothing
}

private void dynamicCheckIfCustom<T>(ICustom<T> iCustom) {
    doSomethingWith(iCustom);
}


This thing works! If anything other than an ICustom<T> is given, nothing happens. If it is the correct type, then doSomethingWith is executed with it. Pretty neat, huh?

This will be a short post to describe my own stupidity. I was testing the new Entity Framework Plain Old CLR Objects (POCO) support and so I made a small test to:
  • Clear the database
  • Insert new items in the database
  • Select the items from the database, with and without related items

Every time, I got the entire object tree, with child collections and parent objects, making me think that in this implementation of EF the need to use Include was gone; instead, an Exclude method was needed in order to tell the framework NOT to load related objects! Insane!
After looking everywhere for the answer, I finally turned to profiling the SQL, only to see that the database was never queried for related items, but only for what I had asked. Then I had my "I'm an idiot!" moment. I was using the same context, and EF knew the entire hierarchy of objects because (duh!) I had just inserted it a few lines of code above. Using different contexts solved the "problem" and only returned the requested objects, making Include a necessity again.
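In code, the difference looks something like this (the entity and context names are made up):

// same context: the graph comes back complete, but from the
// context's cache, not from the database
using (var context = new MyEntities())
{
    var parent = new Parent { Name = "parent" };
    parent.Children.Add(new Child { Name = "child" });
    context.Parents.AddObject(parent);
    context.SaveChanges();

    var fromCache = context.Parents.First(); // Children already populated
}

// fresh context: only what you ask for is loaded,
// so Include is needed again for related items
using (var context = new MyEntities())
{
    var bare = context.Parents.First();                     // no Children
    var full = context.Parents.Include("Children").First(); // with Children
}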

Well, there are a lot of good reasons why that could happen because of your bad code, but this time it is a plain ugly Microsoft bug. You see, the code in the RaisePostBackEvent method of the TreeView control first checks if the control has an Adapter; if not, it just does its thing. If there is an Adapter, it tries to cast it to IPostBackEventHandler and then fires the RaisePostBackEvent method there. However, if the TreeView control has an adapter that is not an IPostBackEventHandler, nothing happens!

Here is the offending code:

protected virtual void RaisePostBackEvent(string eventArgument)
{
    base.ValidateEvent(this.UniqueID, eventArgument);
    if (base.IsEnabled)
    {
        if (base._adapter != null)
        {
            IPostBackEventHandler handler =
                base._adapter as IPostBackEventHandler;
            if (handler != null)
            {
                handler.RaisePostBackEvent(eventArgument);
            }
        }
        else ...


Bottom line, you need to either not use an adapter for the TreeView, or use one that knows how to handle the postback. And given the complexity of the code in the method, it is better to just not use an adapter.

The solution I have adopted is to recreate the functionality in an override of the RaisePostBackEvent method and add some more events (like TreeNodeClicked and SelectedNodeClicked). Hint: you also need to hook into LoadPostData and remember which nodes are selected, in order to check whether the selected node has changed.
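A skeleton of that override might look like the following; the event names are the ones mentioned above, but the body only shows the shape of the fix, since the real version must re-create the node-handling logic of the base else branch (it needs System.Collections.Specialized and System.Web.UI.WebControls):

public class FixedTreeView : TreeView
{
    public event EventHandler TreeNodeClicked;
    public event EventHandler SelectedNodeClicked;

    // remembered during LoadPostData so we can detect selection changes
    private string _previousSelectedNodePath;

    protected override bool LoadPostData(string postDataKey,
        NameValueCollection postCollection)
    {
        _previousSelectedNodePath =
            SelectedNode == null ? null : SelectedNode.ValuePath;
        return base.LoadPostData(postDataKey, postCollection);
    }

    protected override void RaisePostBackEvent(string eventArgument)
    {
        // do not rely on base here: when an adapter that is not an
        // IPostBackEventHandler is present, the base implementation
        // silently swallows the event; re-create its logic instead,
        // then raise the extra events
        if (TreeNodeClicked != null)
            TreeNodeClicked(this, EventArgs.Empty);
        if (SelectedNode != null
            && SelectedNode.ValuePath != _previousSelectedNodePath
            && SelectedNodeClicked != null)
            SelectedNodeClicked(this, EventArgs.Empty);
    }
}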

Wow, long title. The problem, however, is simple: when adding an inline Javascript script block that changes the window.location or window.location.href properties, FireFox and Chrome do not retain the original URL of the page in the browser history. The Back button doesn't work correctly.

Going to the Mozilla developer page, I found that both redirect methods are equivalent to location.assign(url), which implicitly adds the URL to the browser history chain, as opposed to location.replace(url), which doesn't affect the history and just replaces the current URL. So I use one method and get the behaviour of the other!

Enough said. Long story short, the behaviour was not reproduced if the same script was loaded in a button click event. That means it is another of those annoying Gecko page-load-completed issues.

The solution? Instead of location=url; use setTimeout(function() { location=url; },1);. I know, really ugly and stupid. If you find a better solution to cause a redirect from javascript, please let me know.

I am only linking to this blog post that shows how to instantiate the converters directly in the binding, without having to define a resource just for that.

WPF Quick Tip: Converters as MarkupExtensions

Update: After careful deliberation I've reached the conclusion that instead of custom converters that would have to be instantiated in the binding XAML I can just create new binding types. Here is how you can do it:
  1. Inherit from Binding
  2. Implement IValueConverter
  3. Set Converter=this in the constructor
  4. Use the new binding where you see fit
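As a minimal example, assuming a boolean-inverting conversion, such a binding could look like this (it needs System.Globalization and System.Windows.Data):

public class InverseBooleanBinding : Binding, IValueConverter
{
    public InverseBooleanBinding()
    {
        Converter = this; // step 3: the binding is its own converter
    }

    public InverseBooleanBinding(string path) : base(path)
    {
        Converter = this;
    }

    public object Convert(object value, Type targetType,
        object parameter, CultureInfo culture)
    {
        return !(bool)value;
    }

    public object ConvertBack(object value, Type targetType,
        object parameter, CultureInfo culture)
    {
        return !(bool)value;
    }
}

Since Binding is a MarkupExtension, it can then be used directly in XAML, for example IsEnabled="{local:InverseBooleanBinding IsReadOnly}".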


Actually, I have created a more complex Binding object that chooses the type of conversion based on an enumeration or, alternatively, takes converters as content and pipes them one after the other for a more dynamic reuse of converter power. I am still not sure which of these two solutions I will use more often, though.

Here is a post from Phil Haack about extra extensibility options in ASP.NET 4.0. I would like to emphasize the third "gem", the one that lets you add an assembly programmatically to the list of loaded assemblies (equivalent to adding it in the assemblies tag in web.config). I have wanted this since .Net 1.0! Unfortunately it must be called during the Application_PreStartInit stage of the application, but maybe the first gem can help with that.

Of course, I haven't been using these things yet, so I may be saying stupid things. It is a distinct possibility... Quite distinctive... Oh, shut up!

First of all, check the WindowsUpdate.log file found in the Windows folder. It should tell you how the update failed. Look for something like this: WARNING: Command line install completed. Return code = 0x0000066a, Result = Failed and, later on, WARNING: Install failed, error = 0x80070643 / 0x0000066A.

If you have the same error, check this article out: Fix KB974417 Installation Failure—Microsoft .NET Framework 2.0 Service Pack 2 Security Update for Windows 2000, Windows Server 2003, and Windows XP.

However, my problem was that I had NOT installed the KB976569 Windows update that the guy recommends removing before installing the new one.

I found this article: KB 953297 and KB974417 Fails to Update Through Windows Update that recommends a clean reinstall of the .NET Framework. Haven't tried it, though. I just didn't do the update. Probably when all hell breaks loose I am going to regret it, but at least I passed the message on :)

I was minding my own business doing this ridiculous ASP.Net project and suddenly I am hit with: "Multiple controls with the same ID 'lbTitle' were found. FindControl requires that controls have unique IDs.". I was using this web control that contained two others, each of them having a lbTitle control.

If you think about it, it does make sense to throw this exception when the FindControl method is used in the parent control. Which control did I mean? But the error appears even if FindControl is used inside one of the child controls; what is up with that?

As a secondary thought, I wasn't using any FindControl myself. It seems that the AssociatedControlID property of the Label control does. Therefore I will set the HtmlTextWriterAttribute.For attribute manually to resolve this. Pretty damn ugly, if you ask me!


Update: My fault! The controls needed to implement the INamingContainer interface. It's not that the ClientID was not different; FindControl works by going to the NamingContainer control and then finding its children by ID. Too bad that you can't just specify the ClientID and be done with it, but that's that.
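For reference, the fix is just the marker interface; the control and child names here are illustrative (it needs System.Web.UI and System.Web.UI.WebControls):

// INamingContainer has no members; it only tells ASP.Net that this
// control starts a new ID namespace, so two instances can each
// contain a child with ID "lbTitle" without clashing
public class TitledControl : WebControl, INamingContainer
{
    protected override void CreateChildControls()
    {
        var lbTitle = new Label { ID = "lbTitle" };
        Controls.Add(lbTitle);
    }
}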

I don't want you to think that I started working on ASP.Net MVC, but this article about a 36 year trek through backward compatibility hell was really funny and needed to be linked. What is so funny, I guess, is that it is factual and the humour is in the situation more than in the article itself.

On a separate note, ASP.Net 4.0 fixed this issue with a web.config setting.

I wanted to build a script to copy some files from a computer to another. This can be done using a few utilities like:
  • nc - for stream network transfer
  • tar - for clumping all files together
  • gpg - for encryption


Here is the quick and dirty version.
On the receiving computer:
nc -dl _MyPort_ | gpg -d -q --no-mdc-warning --passphrase _MyPassword_ | tar -xjC _DownloadFolder_

On the sending computer:
tar -cj -C _Folder1_ _File1_ -C _Folder2_ _File2_ -C _Folder3_ _File3_ _File4_ _File5_ | gpg --passphrase _MyPassword_ -c | nc _MyIP_ _MyPort_

In other words:
On the receiving side, nc listens on a specific port for a stream, which is piped through gpg and decrypted, then passed to tar, which decompresses it and splits it into separate files in the specified download folder.
On the sending side, the files to be transferred are bundled and compressed into a stream by tar, piped through gpg and encrypted, then sent to the specified IP and port via nc.

nc and tar are standard Linux utilities. gpg must be downloaded and installed.

The scripts themselves are more complicated, but the gist of it is in here.

The book started really nicely, at a beginner-to-medium level with which I felt neither embarrassed nor overwhelmed. The first chapter was about the expressiveness of Javascript and how different styles of programming could be employed to achieve the same goals. This part I would have liked to see expanded into a book of its own, with code examples and everything.

The second chapter was also interesting, comparing the interface style of programming with the options available in Javascript, as well as giving some real-life solutions. Personally, I didn't think the solutions were valid, as writing the interface as comments and trying to enforce it inside methods and getters/setters feels cumbersome and "unJavascriptish" to me.

The third chapter, Encapsulation and Information Hiding, described object creation, private, privileged (not protected!) and public members, while the fourth was dedicated to inheritance. All these are great reading for a Javascript programmer, as they might teach one or two new things.

From then on, 13 chapters described various software patterns and their application in Javascript. Alas, since this was the explicit purpose of the book, I can't say I enjoyed that part of the book. It felt like any other rehashing of the original GoF book, only with the syntax changed. Well, maybe not quite so bad, but it lacked a consistency and a touch of the writer's personality that makes books easy to read and to remember.

That being said, the technical part was top notch and the structure of each chapter made it easy to understand everything in them. The software patterns described were: Singleton, Chaining, Factory, Bridge, Composite, Facade, Adapter, Decorator, Flyweight, Proxy, Observer, Command and Chain of Responsibility.

Overall, a nice book for reference, but not one that I would call memorable. An easy read and also an easy browse, since one can pass quickly through the book and still understand what it is all about.

I have been working on a jQuery based control library for a few months and today I replaced the embedded 1.3.2 version with 1.4.2. I immediately noticed an increase in performance. Before, the profilers were screaming at the CLASS function, immediately followed by css; now the CLASS function seems to work 10 times faster. Some resizing script I was using, which used to be kind of sluggish, now behaves all perky. I personally believe John Resig did some good things in the jQuery 1.4 release.

Also, trying to optimize javascript code and having attended a Javascript training recently, I have reached some conclusions that I want to formalize in a proper blog post; but just in case I don't find the resources, here are some quick pointers:
  • Javascript has function scope and closures, which means that a variable defined in a function is accessed faster than an outer or global one, and that any function created inside another function remembers all the variables declared in that scope. Declare local variables in init functions, and declare local functions that use those variables: this guarantees faster access and that no one will overwrite them accidentally. Also cache, in the same way, commonly used global variables and functions, like document
  • Cache the jQuery objects, not the elements. Not only does this reduce the overhead of recreating them, but the objects "track" the underlying elements even if one removes/adds/modifies them.
  • Use the context syntax of the jQuery selector, $(selector, context), or the equivalent find method, to get to child elements.
  • There are a lot of free profiling tools for javascript: dynaTrace Ajax is pretty cool, but so are the IE8 developer toolbar profiler or the famous Firebug and its many addons. You should use a profiler to understand where your bottlenecks are; they might not be where you expect them.
  • Redrawing DOM elements is an expensive operation and changing the style of an element has many side effects. Check whether the value you wish to set in the style of an element is not already set. For example, jQuery has the outerWidth and outerHeight functions, which should return the same sizes as the values you pass to width and height (at least for divs with overflow other than 'visible')


Here is some code that demonstrates the above rules:

/* set an element to be of the same size as another element */
function sameSize(base, target) {
    // jQuery and cache
    base = $(base);
    target = $(target);
    // local variable will be remembered
    // and accessed faster
    var lastSize = {
        width: -1,
        height: -1
    };
    // local function will be private and also accessed faster
    function checkSize() {
        // get the outer size of the base element
        var baseSize = {
            width: base.outerWidth(),
            height: base.outerHeight()
        };
        // compare it with lastSize
        // do nothing if this size was handled before
        // use the !== inequality for extra speed (no type conversion)
        if (baseSize.width !== lastSize.width
            || baseSize.height !== lastSize.height) {
            // get both the declared and real size of the target element
            var targetSize = {
                declaredWidth: target.css('width'),
                declaredHeight: target.css('height'),
                width: target.outerWidth(),
                height: target.outerHeight()
            };
            // only change the size (and thus fire all sorts of events
            // and layout refreshes) if the size is not already
            // declared as such or simply the same for other reasons
            // like 100% layout in a common container
            if ((targetSize.width !== baseSize.width
                && targetSize.declaredWidth != baseSize.width + 'px')
                || (targetSize.height !== baseSize.height
                && targetSize.declaredHeight != baseSize.height + 'px')) {
                // Javascript object notation comes in handy
                // when using the css jQuery function
                target.css(baseSize);
            }
            // cache the handled size
            lastSize = baseSize;
        }
    }
    // bind the function to any event you want to,
    // since it will only do anything if the size actually
    // needs changing
    base.bind('resize', checkSize);
    $(window).bind('resize', checkSize);
    setInterval(checkSize, 1000);
}

/* faster getElementById */
// use a local anonymous function as a closure
(function() {
    // cache the document in a private local variable
    var d = document;
    byId = function(id) {
        return d.getElementById(id);
    };
    // a lot more code might come here, all using byId and accessing
    // it faster since it is local to the closure
})();
/* you might think that this doesn't do much if there is no other code
in the closure, since byId would be a global function and just as
slow to access, but remember that you only need to find the
function once, while document.getElementById needs to find document,
and then getElementById. And document is a large object. */

/* find the tables inside a div using the context syntax */
var div = $('#myDiv');
var childTables = $('table', div);

A while ago I was writing about the annoying "The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>)." error and how to fix it by using a PlaceHolder or another container to hold your code blocks.

But what if the ASPX code is not yours and you are only building the control? How can you get around the dreaded code blocks? First, let's try to understand the mechanism that renders the code blocks. When the markup is read, a CodeDomTreeGenerator class is used to parse it. All DOM tree generators inherit from BaseTemplateCodeDomTreeGenerator, which does the following: if the block read is a control, create the control and add it to the control collection; if it is a code block, generate a dynamic render method and use that. From that moment on, you can't change the control collection, because (stupidly, if you ask me) the render method has already been generated and it only knows about the controls it had then.

You can test whether a ControlCollection object can be changed via its IsReadOnly property. If it is read only, code blocks have been added. Indeed, in the ControlCollection class a private field holds an error message and, if it is not null, it is used as the exception message when trying to add or remove controls from the collection.

Are you in the mood for some insanity? Ok, let's unset the string via reflection and see what happens! Well, first of all, no error! You can manipulate the control collection at your leisure. The problem is that the render method is still the generated one. If you change the control collection, weird stuff will happen: your control may not get rendered or, after inserts or deletes, controls may be rendered in place of others or pushed out of the "rendering queue" altogether. So, what if we remove the render method as well? Then the normal Render mechanism will be used. That means that the code blocks will be completely ignored!

So, if you are a mean son of a bitch like myself, instead of begging junior programmers to never use code blocks or to encapsulate them at least, screw the controls so that they ignore that bad code. Not very smart, but oh, so mean :)

Here is a bit of code to remove the "readonlyness" of control collections:



public static class MSOAB
{
    private static readonly FieldInfo _readOnlyErrorMsgFieldInfo =
        typeof(ControlCollection).GetField("_readOnlyErrorMsg",
            BindingFlags.Instance | BindingFlags.NonPublic);
    private static readonly PropertyInfo _rareFieldsEnsuredPropertyInfo =
        typeof(Control).GetProperty("RareFieldsEnsured",
            BindingFlags.Instance | BindingFlags.NonPublic);

    private static FieldInfo _renderMethodFieldInfo;

    public static void FixReadOnlyControlCollection(Control control)
    {
        if (control.Controls.IsReadOnly)
        {
            _readOnlyErrorMsgFieldInfo.SetValue(control.Controls, null);
            var rareFieldsEnsured = _rareFieldsEnsuredPropertyInfo.GetValue(control, new object[] { });
            if (_renderMethodFieldInfo == null)
            {
                _renderMethodFieldInfo = rareFieldsEnsured.GetType().GetField("RenderMethod");
            }
            _renderMethodFieldInfo.SetValue(rareFieldsEnsured, null);
        }
    }
}


This isn't really tested beyond the basic functionality and I haven't used it in a production environment, but it was fun. I hope you enjoyed it as well.

Coincidence had it that this week people asked me twice about how to preserve focus during postbacks, especially during auto postbacks. I didn't really know much about how ASP.Net does this, other than building your own script, so I started investigating.

When you post back, there is a postback event reference that is used. The Page.ClientScript property, of the type ClientScriptManager, has a bunch of methods called GetPostBackEventReference which return different script strings for different options. The options are encapsulated into a PostBackOptions object which has, interestingly enough, a property called TrackFocus. Wow! Exactly what I wanted.

The problem comes when digging into the System.Web sources and seeing that no one actually sets this property. I wrote a bunch of code to add the script with the TrackFocus property set, and it seems to work. Not only does it preserve the focus before the postback, but also the scroll bar positions. I tested in Internet Explorer, FireFox and Chrome and it worked in all of them.

So what is going on? Why does this feature, which I imagine is quite useful, seem only half done? And that since the era of .Net 2.0? I have no idea. Here is a method, applicable to TextBox controls (although I guess it could be applied to most if not all AutoPostBack controls), that replaces the script for a normal AutoPostBack with one that also preserves focus:

private void fixAutoPostBack(TextBox tb)
{
    if (!tb.AutoPostBack)
        return;
    tb.AutoPostBack = false;
    PostBackOptions options = new PostBackOptions(tb, string.Empty)
    {
        TrackFocus = true,
        AutoPostBack = true
    };
    if (tb.CausesValidation)
    {
        options.PerformValidation = true;
        options.ValidationGroup = tb.ValidationGroup;
    }
    string onchange = string.Empty;
    if (tb.HasAttributes)
    {
        onchange = tb.Attributes["onchange"];
        if (!string.IsNullOrEmpty(onchange))
        {
            onchange = onchange.TrimEnd(';') + ";";
        }
    }
    onchange += ClientScript.GetPostBackEventReference(options, true);
    tb.Attributes["onchange"] = onchange;
}

As you can see, the only thing I do differently from the normal AutoPostback code is to set TrackFocus to true.
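Presumably you would call it once the AutoPostBack property has its final value, late in the page life cycle; for example (the text box name is hypothetical):

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    fixAutoPostBack(SearchTextBox); // a hypothetical TextBox on the page
}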

I started using a script that shows me what javascript errors occur on my blog. I soon found out that there were about 10-20 errors each day, intermittently, the vast majority of them being '_WidgetManager is undefined'. I googled it, of course, only to discover that a lot of people had it, that some fix had been applied by Google, and thus that most of the articles on the web were completely useless.

Well, what seemed to happen was that some code was added at the end of the blog page, loading a js file, and if the js file was (for some weird reason) not loaded, the script following it would throw an error. So I added my own script that creates a fake _WidgetManager class, just in case the real script does not load.

Hoping it might help others, here is the script:
_WidgetInfo = function() {};
_WidgetManager = {
    _Init: function() {},
    _SetPageActionUrl: function() {},
    _SetDataContext: function() {},
    _SetSystemMarkup: function() {},
    _RegisterWidget: function() {}
};