Update: after a ton of searching, I found people who had the same problem as I did. It seems the culprit was Google Friend Connect. So sorry if you cannot "Follow me" anymore; I just removed the offending widget. Now the chat, the feedjit list and the shareit button work on IE8 as well. Maybe I can fix it somehow, but until then...

Today, out of the blue, without my changing anything in the blog or making any update, I started to get a lot of weird browser errors when viewing the blog with Internet Explorer 8, things like 'Could not complete the operation due to error 800a03e8' and 'Unable to modify the parent container element before the child element is closed (KB927917)'.

Due to this as yet unexplained issue, I have disabled the "share it" button on each post and the Jabbify and Feedjit widgets for anything that implements document.documentMode (i.e. Internet Explorer 8). I am sorry for the inconvenience. I will try to solve the issue, somehow.
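The detection itself is trivial; here is a minimal sketch of the kind of guard I mean (the widget URL is just an illustrative placeholder):

if (!document.documentMode) { // document.documentMode exists only in IE8
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = 'http://thirdparty.example.com/widget.js'; // the third party widget script
    document.getElementsByTagName('head')[0].appendChild(script);
}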

I have determined that these weird errors are caused by trying to inject elements into the page through javascript before the page has finished loading. I have tried to delay the loading of these scripts until the page has rendered completely, but since they are external js files from various third parties, I have been unsuccessful. I will contact the various parties; maybe they can change their js code, although I am not hopeful.

I have seen a small video presentation about the new ASP.Net 3.5 SP1 Script Combining feature. Basically, you take a bunch of scripts (like the ones from the AjaxControlToolkit) and bundle them together into a single cacheable file. This decreases the number of concurrent connections the browser makes to your production site.

The problem was that you had to use some component to see which script files were being loaded, then manually add them to the ScriptManager CompositeScript Scripts collection. And this applies only to correctly registered scripts, not to stuff that is embedded in the HTML or whatnot. Isn't it easier, I thought, to just parse the generated HTML and replace the script tags?

Well, I wrote a small Response filter/IHttpHandler in about two hours. It would take all consecutive external file references and combine them into a single cached and cacheable call. Then I tested it with asynchronous postbacks. Epic fail! The main problem was that the combined scripts would re-register themselves at postback and therefore throw all kinds of errors. But how did they know whether they had been registered before?!

I vaguely remembered an old post of mine about the notifyScriptLoaded function that must be called at the end of every external javascript registration. Examining the system a little, I realised that the flow was like this:
  1. register the script (either include it in the HTML during regular render or send it to the UpdatePanel javascript engine, to be registered as a dynamically created script element in the page Head section)
  2. if the registration is an async dynamic one, check in an array whether the script is already loaded and, if it is, don't register it
  3. in the script, call Sys.Application.notifyScriptLoaded(), which takes the src of the script element and uses it as a key in the above mentioned array to declare it loaded
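Put together, the async path boils down to something like this (my reconstruction from reading the client code, not the verbatim framework source):

function registerAsync(scriptElement) {
    var src = scriptElement.src;
    // step 2: skip scripts that have already announced themselves as loaded
    if (Array.contains(Sys._ScriptLoader._getLoadedScripts(), src)) return;
    document.getElementsByTagName('head')[0].appendChild(scriptElement);
    // step 3: the loaded script itself ends with Sys.Application.notifyScriptLoaded(),
    // which adds src to the loaded scripts array checked above
}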


Of course, that means combining all the scripts into a single file registers only that file, so the original files get registered again. Then you get errors like 'Type [something] has already been registered.' or (because more than one script file is bundled together) 'The script [something] contains multiple calls to Sys.Application.notifyScriptLoaded(). Only one is allowed.'.

Well, I did manage to solve the problem by following these steps:
  1. forcefully remove Sys.Application.notifyScriptLoaded() from bundled scripts
  2. add a single Sys.Application.notifyScriptLoaded() at the end of the combined script
  3. surround the content of each script with code that looks like this:
sb.AppendFormat(@"
if (!window.Sys || !Sys._ScriptLoader || !Sys._ScriptLoader.isScriptLoaded('{0}')) {{
{1}
    if (window.Array && Array.add && window.Sys && Sys._ScriptLoader)
        Array.add(Sys._ScriptLoader._getLoadedScripts(), '{2}');
}}",
    HttpUtility.HtmlEncode(src),
    content.Replace("Sys.Application.notifyScriptLoaded();", ";"),
    src);

With the problem solved, the AutoScriptCombiner works, but it feels wrong. I wanted to post it on Github, you see, but in the end I decided not to. However, I did learn something about how the Asp.Net Ajax framework functions internally and I wanted to share it with you.

Another great blog post from Rick Strahl made me investigate the new JSON support in the latest browsers. Internet Explorer 8 already has it, and FireFox has it in the beta of its soon to be released 3.5 version.

Rick's article says more than I could. A short summary:
  • use the native JSON object
  • much faster and safer than any js library
  • still has problems with types of data that have no native JSON notation, like datetime (see the sketch after this list)
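Usage is a single pair of methods; a quick sketch that also shows the datetime problem from the last bullet:

var s = JSON.stringify({ name: 'test', when: new Date() }); // the Date becomes a plain string
var o = JSON.parse(s);
// o.name is 'test', but o.when is a string, not a Date object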

Update: the native JSON object is not something that Microsoft and Mozilla got together and decided was a good idea (which to me seems the natural way of deciding the direction of code versus compatibility, not overbloated committees and standards); it comes from the release of the latest ECMA Javascript specs, which contain, among other things, the JSON object and the getter/setter syntax.

You are using a PopupControlExtender from the AjaxControlToolkit, you added a Button or an ImageButton inside it, and every time you click it this ugly js error appears: 'this._postBackSettings.async is null or not an object'. You Google it and you see that the solution is to use a LinkButton instead of a Button or an ImageButton.

Well, I want my ImageButton! Therefore I added this little piece of code to the Page_Load method of my page or control to fix the bug:

private void FixPopupFormSubmit()
{
    var script =
        @"if (window.Sys && Sys.WebForms
              && Sys.WebForms.PageRequestManager
              && Sys.WebForms.PageRequestManager.getInstance) {
              var prm = Sys.WebForms.PageRequestManager.getInstance();
              if (prm && !prm._postBackSettings) {
                  prm._postBackSettings = prm._createPostBackSettings(false, null, null);
              }
          }";
    ScriptManager.RegisterOnSubmitStatement(
        Page,
        Page.GetType(),
        "FixPopupFormSubmit",
        script);
}



Hope it helps you all.

Actually, I think this applies to any dynamic modification of drop down list options. Read on!

I have used a CascadingDropDown extender from the AjaxControlToolkit to select regions and provinces based on a web service. It was supposed to be painless and quick. And it was. Until another dev showed me a page giving the horribly obtuse 'Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/> in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature verifies that arguments to postback or callback events originate from the server control that originally rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation method in order to register the postback or callback data for validation.'. As you can see, this little error message basically says there is a problem with a control, but doesn't care to disclose which. There are no events or overridable methods to enable some sort of debug.

Luckily, Visual Studio 2008 can debug inside the source of the .Net framework itself. Thus I could see that the error was caused by the drop down lists I mentioned above. Google told me that somewhere in the documentation of the CascadingDropDown extender there is a mention of setting enableEventValidation to false. I couldn't find the reference, but then, I didn't look too hard, because that is simply stupid. Why disable event validation for the entire page because of one control? It seems reasonable that Microsoft left it enabled for a reason. (Not that I accuse them of being reasonable, mind you.)

Analysing further, I realised that the error kind of made sense. You see, the dropdownlists were not bound with the data that came from the postback. How can one POST a value from a select html element if the select did not have it as an option? It must be a hack. Well, of course it was a hack, since the cascade extender filled the dropdown list with values on the client.

I tried to find a way to override something and make only those two dropdownlists skip event validation. I couldn't find any way to do that. Instead, I decided to register all possible values with Page.ClientScript.RegisterForEventValidation. And it worked. What I don't understand is why this error occurred only now, and not in the first two pages I built and tested. That is still to be determined.

Here is the code

protected override void Render(HtmlTextWriter writer)
{
    // regions holds all the values the cascading extender could possibly send
    foreach (var region in regions)
        Page.ClientScript.RegisterForEventValidation(
            new PostBackOptions(ddlRegions, region));
    base.Render(writer);
}


As you can see, the registration goes in a Render override, since the RegisterForEventValidation method only allows its use in the Render stage of the page cycle.

And that is it. Is it ugly to load all possible values in order to validate the input? Yes. But how else could you validate the input? It is a little more work to prevent a hidden bug that appears when you least expect it, but now even the input from those drop downs is secure.

Update:
My control was used in two pages with EnableEventValidation="false", and that's why it didn't throw any error there. Anyway, I don't recommend setting it to false; use the code above. BUT, if you don't know where the code goes or you don't understand what it does, better disable event validation and save us both a lot of grief.

I've spent about a day on something that I can only consider a FireFox bug. In a complete reversal of what I would expect from a javascript script, it worked everywhere but in FireFox! And that's FireFox 2.1; I haven't even installed 3.0 yet.

It concerned a simple javascript function from a third party that was supposed to get the absolute position of an element when clicked. I had written one myself a while ago, but it didn't work either! Here is the function that I was trying to fix:
function getPos(n) {
    var t = this, x = 0, y = 0, e, d = t.doc, r;

    n = t.get(n);

    // Use getBoundingClientRect on IE, Opera has it but it's not perfect
    if (n && isIE) {
        n = n.getBoundingClientRect();
        e = t.boxModel ? d.documentElement : d.body;
        x = t.getStyle(t.select('html')[0], 'borderWidth'); // Remove border
        x = (x == 'medium' || t.boxModel && !t.isIE6) && 2 || x;
        n.top += t.win.self != t.win.top ? 2 : 0; // IE adds some strange extra cord if used in a frameset

        return {x : n.left + e.scrollLeft - x, y : n.top + e.scrollTop - x};
    }

    r = n;
    while (r) {
        x += r.offsetLeft || 0;
        y += r.offsetTop || 0;
        r = r.offsetParent;
    }

    r = n;
    while (r) {
        // Opera 9.25 bug fix, fixed in 9.50
        if (!/^table-row|inline.*/i.test(t.getStyle(r, "display", 1))) {
            x -= r.scrollLeft || 0;
            y -= r.scrollTop || 0;
        }

        r = r.parentNode;

        if (r == d.body)
            break;
    }

    return {x : x, y : y};
}


As you see, it is a little more complex than my own, although I don't know if it works better or not.

Anyway, I found that the problem was simple enough: the element I was clicking did not have an offsetParent! Here is a forum thread which discusses a possible cause. Apparently the Gecko rendering engine that FireFox uses does not compute offsetParent, offsetTop or offsetLeft until the page has finished loading. I didn't find anything more detailed, and there were only a few pages that seemed to report a problem with a null offsetParent in FireFox.

I tried to solve it, but in the end I gave up. My only improvement to the script was this bit:

while (r && !r.offsetParent) {
    x += r.offsetLeft || 0;
    y += r.offsetTop || 0;
    r = r.parentNode;
}

which resulted in a more localised position, i.e. the position relative to the closest parent for which a position could be calculated.

In the end the problem was solved by restructuring the way the dynamic elements on the page were created, but I still couldn't find either an official cause or a way to replicate the issue in a simple, separate project. My guess is that some types of DOM manipulation done while the page is loading (in other words, scripts that are just dropped on the page, rather than run in the window 'load' event, and which change stuff in the page element tree) lead to FireFox forgetting to compute the offset values, or even assuming that the page never finished loading.

You usually get this error in Internet Explorer 6 (though later versions might exhibit it as well) on pages that seem to be loading fine in the background. You press OK and the page disappears, and the regular error page is displayed.

What is happening is that javascript is trying to change an html element that has not finished loading. The usual cause of this problem is an embedded script block that executes as it loads, rather than on the onload event of the page.

The fix is easy: take all the scripts that execute as they load and either mark them with defer or encapsulate those commands in a function that is executed on the onload event of document.body. Be careful with defer, though. This post describes the various implementations of the keyword in the various browsers.
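As a sketch, assuming an inline script that pokes at an element as soon as it is parsed ('target' is an illustrative id), the onload version would look like this:

function init() {
    // safe: by the load event the whole element tree is closed and parsed
    document.getElementById('target').innerHTML = 'hello';
}
if (window.addEventListener) window.addEventListener('load', init, false);
else if (window.attachEvent) window.attachEvent('onload', init); // Internet Explorer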

Somebody asked me how to add things at the end of an AutoCompleteExtender div, something like a link. I thought it would be pretty easy: all I would have to do is catch the display function and add a little something to the container. Well, it was that simple, but also complicated, because the onmousedown handler for the dropdown is on the entire div, not on the specific items. The link was easy to add, but there was no way to actually click on it!

Here is my solution:

function addLink(sender, args) {
    var div = sender.get_completionList();
    if (div) {
        var newDiv = document.createElement('div');
        newDiv.appendChild(getLink());
        div.appendChild(newDiv);
    }
}

function getLink() {
    var link = document.createElement('a');
    link.href = '#';
    link.innerHTML = 'Click me for popup!';
    link.commandName = 'AutoCompleteLinkClick';
    return link;
}

function linkClicked() {
    alert('Popsicle!');
}

function ACitemSelected(sender, args) {
    var commandName = args.get_item().commandName;
    if (commandName == 'AutoCompleteLinkClick') linkClicked();
}


The AutoCompleteExtender must have the two main functions set as handlers for its item selected and popup shown events, like this:
<ajaxControlToolkit:AutoCompleteExtender OnClientItemSelected="ACitemSelected" OnClientShown="addLink" ...>


Now, the explanation.

First, we hook on OnClientShown and add our link. The link is added inside a div because otherwise it would be shown as a list item (with highlight on mouse over). I also added a custom attribute to the link: commandName.

Second, we hook on OnClientItemSelected, check if the selected item has a commandName attribute, and then execute stuff depending on it.

That's it folks!

How can I take some content I have in javascript (like a string) and send it to the user as a file? I don't have access to the content I want to send from the server.

This was a question posed to me in a rather more complex form: how do I take some file content from a web service and send it to the client as a file, without downloading the content to the web server first?

The simple answer right now: you cannot do it. If you know more about this, please let me know. I've exhausted all the avenues I could think of, but then again, I am no master of Javascript and html responses.

Here is what I have tried. Basically, I have a string (an html table) and I want it sent to the client browser as an Excel download. So I opened a new window with javascript and tried to write the content there:
var win2=window.open('');
win2.document.write(s);


It worked and it displayed a table. Now, all I wanted to do was change the content-type header to application/vnd.ms-excel. Apparently, you can't do that from Javascript. Ok, how about getting the necessary headers from the ASP.Net server? (Remember, the restriction was only that the file content should not come from the web server.) So I created a new page that would render a completely empty response with the right headers:

protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.Buffer = true;
    Response.ContentType = "application/vnd.ms-excel";
    Response.AppendHeader("Content-Disposition", "attachment;filename=test.xls");
    Response.Charset = "";

    Response.Flush();
    Response.End();
}


Then I just opened it in a new window (just ignore the browser pop-up filters for now) with
var win2=window.open('PushFile.aspx');
win2.document.write(s);


What happened was that the page was rendered like a normal page. How come? I changed the code so that it would write the content after a few seconds, and I got this: first the browser asks me if I want to permit downloading the file, then, after a few seconds, the warning goes away and the string is displayed in the new window. I also tried document.createTextNode; it didn't work.

So far, none of my attempts to serve javascript content as a binary file worked. If you know of a way to achieve this, please let me know. Thanks!

Update:
Meaflux took a swipe at this request and came up with two delicious ideas that, unfortunately, don't really work. But I had no idea things like these existed, so they are very much worth mentioning.

First: the data URI. Unfortunately it is only supported by FireFox and the like, and there is no way of setting a content-disposition header or otherwise telling the browser that I actually want the content saved. It would work for an excel file, but an image, for example, would just be opened in a browser window.
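For the record, the data URI attempt would look something like this (btoa is the Gecko base64 encoder; Internet Explorer does not have it):

// the content type travels inside the URI itself, but there is no way to
// attach a Content-Disposition header, so the browser decides what to do with it
var uri = 'data:application/vnd.ms-excel;base64,' + window.btoa(s);
window.open(uri);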

Second: the IE-only execCommand javascript function, which has a little command called SaveAs. Unfortunately this only works for actual HTML pages. Even if the browser opened a binary file, I doubt the SaveAs command would save it correctly.

Besides, both of these options, as well as my own attempts above, have a major flaw: there is no way to send chunks of data as you receive them from the web service. What is needed is a way to declare some sort of data stream, write stuff into it, then programmatically close it.

Update: this fix is now on Github. Get the latest version from there.

The scenario is pretty straightforward: a ListBox or DropDownList or any control that renders as a Select html element, with a few thousand entries or more, causes an asynchronous UpdatePanel update to become incredibly slow on Internet Explorer and reasonably slow on FireFox, keeping the CPU at 100% during this time. Why is that?

Delving into the UpdatePanel's inner workings, one can see that the actual update is done through an _updatePanel javascript function. It contains three major parts: it runs all the dispose scripts for the update panel, then it executes _destroyTree(element), then it sets element.innerHTML to the new content. Amazingly enough, the slow part is the _destroyTree function. It recursively takes all the html elements in the UpdatePanel div and tries to dispose them, their associated controls and their associated behaviours. I don't know why it takes so long with select elements; all I can tell you is that childNodes contains all the options of a select, so the script tries to dispose every one of them, but it is mostly an IE DOM issue.
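Schematically, the update looks like this (my paraphrase of the client code, not the verbatim source):

function _updatePanel(updatePanelElement, rendering) {
    // 1. run the dispose scripts registered for this panel
    // 2. recursively dispose controls and behaviours: _destroyTree(updatePanelElement)
    //    -- this is the slow part for huge selects
    // 3. updatePanelElement.innerHTML = rendering;
}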

What is the solution? Enter the ScriptManager.RegisterDispose method. It registers dispose javascript scripts for any control, to be run on UpdatePanel refresh or delete. Remember the first part of _updatePanel? If you add a dispose script that clears all the useless options of the select, you get an instantaneous update!

First attempt: I used select.options.length=0;. I realized that on Internet Explorer it took just as long to clear the options as it did to dispose them in the _destroyTree function. The only way I could make it work instantly was select.parentNode.removeChild(select). Of course, that meant the actual selection was lost, so something more complicated was needed if I wanted to preserve the selection in the ListBox.

Second attempt: I dynamically created another select, with the same id and name as the target select element, populated it only with the selected options of the target, then used replaceChild to make the switch. This worked fine, but I wanted something a little better, because I would have had the same issue when trying to dynamically create a select with a few thousand items.

Third attempt: I dynamically created a hidden input with the same id and name as the target select, then set its value to the comma separated list of the values of the selected options in the target select element. That should have solved all the problems, but somehow it didn't. When selecting 10000 items and updating the UpdatePanel, it took about 5 seconds to replace the select with the hidden field, but then it took minutes again to recreate the UpdatePanel!

Here is the piece of code that fixes most of the issues so far:

/// <summary>
/// Use it in Page_Load.
/// lbTest is a ListBox with 10000 items;
/// updMain is the UpdatePanel in which it resides.
/// </summary>
private void RegisterScript()
{
    string script = string.Format(@"
var select = document.getElementById('{0}');
if (select) {{
    // first attempt
    //select.parentNode.removeChild(select);

    // second attempt
    //var stub = document.createElement('select');
    //stub.id = select.id;
    //for (var i = 0; i < select.options.length; i++)
    //    if (select.options[i].selected) {{
    //        var op = new Option(select.options[i].text, select.options[i].value);
    //        op.selected = true;
    //        stub.options[stub.options.length] = op;
    //    }}
    //select.parentNode.replaceChild(stub, select);

    // third attempt
    var stub = document.createElement('input');
    stub.type = 'hidden';
    stub.id = select.id;
    stub.name = select.name;
    stub._behaviors = select._behaviors;
    var val = new Array();
    for (var i = 0; i < select.options.length; i++)
        if (select.options[i].selected) {{
            val[val.length] = select.options[i].value;
        }}
    stub.value = val.join(',');
    select.parentNode.replaceChild(stub, select);
}};",
        lbTest.ClientID);
    ScriptManager sm = ScriptManager.GetCurrent(this);
    if (sm != null) sm.RegisterDispose(lbTest, script);
}



What made the whole thing still slow was the initialization of the page after the UpdatePanel update. It goes all the way to the WebForms.js file embedded in System.Web.dll (NOT System.Web.Extensions.dll), so part of the .NET framework itself. What it does is take all the elements of the html form (for selects it takes all the selected options) and add them to the list of postbacked controls, within the WebForm_InitCallback javascript function.

The code looks like this:

if (tagName == "select") {
    var selectCount = element.options.length;
    for (var j = 0; j < selectCount; j++) {
        var selectChild = element.options[j];
        if (selectChild.selected == true) {
            WebForm_InitCallbackAddField(element.name, element.value);
        }
    }
}
function WebForm_InitCallbackAddField(name, value) {
    var nameValue = new Object();
    nameValue.name = name;
    nameValue.value = value;
    __theFormPostCollection[__theFormPostCollection.length] = nameValue;
    __theFormPostData += name + "=" + WebForm_EncodeCallback(value) + "&";
}

That is funny enough, because __theFormPostCollection is only used to simulate a postback, by adding a hidden input for each of the collection's items to a xmlRequestFrame (just like my code above), in the WebForm_DoCallback function. That function, in turn, is called only in the GetCallbackEventReference(string target, string argument, string clientCallback, string context, string clientErrorCallback, bool useAsync) method of the ClientScriptManager, which in turn is only used in rare scenarios by the built-in javascript callback mechanism of GridViews, DetailsViews and TreeViews. And that is it!! The incredible delay in this javascript code comes from a useless piece of code! The whole WebForm_InitCallback function is useless most of the time! So I added this little bit of code to the RegisterScript method and it all went marvelously fast: 10 seconds for 10000 selected items.

string script = @"WebForm_InitCallback=function() {};";
ScriptManager.RegisterStartupScript(this, GetType(), "removeWebForm_InitCallback", script, true);



And that is it! Problem solved.

Thanks to Raheel for presenting me with this problem and the opportunity to fix it.

As you may know from a previous post of mine, Ajax requests differ from normal requests. At the Page level, the rendered content is changed to reflect only the things that changed in the UpdatePanels, and the format is different from the usual HTML; it is also very strict. Any attempt to blindly add things to the string sent from the server will result in an ugly alert error. And who does indiscriminately add ugly content to your Ajax requests? "Free" servers that inject commercials into your pages!
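For context, that strict format is a series of pipe delimited tokens, each roughly of the form length|type|id|content| (my paraphrase, not an official spec; the length numbers here are illustrative). Anything appended after the last well formed token, like an injected ad, breaks the parser:

56|updatePanel|updMain|<div>...the new panel content...</div>|<script>injected ad</script>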

So, what can you do? Patch the Ajax engine to remove the offending ads. Here is a simple example for a server that added crap at THE END of the content, crap that did not contain any '|' characters. This is important, as the patch looks for the last '|' character and removes everything after it.

C# Code

private void LoadApplyPatch()
{
    ScriptManager sm = ScriptManager.GetCurrent(Page);
    if (sm == null) return;

    string script = @"
if (window.Sys && Sys.WebForms && Sys.WebForms.PageRequestManager) {
    Sys$Net$WebRequestExecutor$get_responseData = function() {
        if (arguments.length !== 0) throw Error.parameterCount();
        if (!this._responseAvailable) {
            throw Error.invalidOperation(String.format(Sys.Res.cannotCallBeforeResponse, 'get_responseData'));
        }
        if (!this._xmlHttpRequest) {
            throw Error.invalidOperation(String.format(Sys.Res.cannotCallOutsideHandler, 'get_responseData'));
        }

        var content = this._xmlHttpRequest.responseText;

        // this is the added code, the rest is taken
        // from the ASP.Net Ajax original code
        var index = content.lastIndexOf('|');
        if (index < content.length - 1)
            content = content.substring(0, index + 1);

        return content;
    }
}";

    ScriptManager.RegisterStartupScript(Page,
        Page.GetType(), "adMurderer", script, true);
}

Update: I've posted the source and binaries for this control on Github. Free to use and change. Please comment on it.

This is the story of a control that shrinks the content sent from an UpdatePanel down to as little as 2% without using compression algorithms. I am willing to share the code with whoever wants it; all I ask in return is that you tell me if and where it goes wrong, so I can find a solution. Even if you are not interested in the control, the article describes a little of the inner workings of the ASP.Net Ajax mechanism.

You know the UpdatePanel: the control that updates its content asynchronously in ASP.Net, allowing you to easily transform a normal postback based application into a fully fledged Ajax app. Well, the only problem with that control is that you have to either put a lot of them on the page in order to update only what changes (making the site as fast as you would expect from an Ajax application, but hard to maintain afterwards), or put one big UpdatePanel on the entire page (maybe in the MasterPage) and allow large chunks of data to be passed back and forth, with other clear disadvantages, some detailed in this blog entry.

Not anymore! I have made a small class, in the form of a Response.Filter, that caches the previously rendered content, instructs the browser to do the same, and then sends only a small fraction of the data from the server to the browser: mainly what has changed. There is still the issue of the time it takes the browser to render the content, which is the same no matter what I do; when rendering a huge table, it doesn't matter that I send only the changes in one cell, the browser must still render the huge grid. Also, if for some reason the update fails, I catch it and tell the server that the UpdatePanel must be updated again, the old way.

Enough; let's talk code. I first had to tap into the content that was sent to the browser. That can only be done at Page render level (or PageAdapter, or Response.Filter and other things that have access to the rendered content). So I caught the rendered content in a filter, recognized it as Ajax by its distinctive format and processed only the updatePanel type of token.

Here I had a few problems. First I replaced the updatePanel token with a scriptBlock token that changed the innerHTML of the update panel div element. It all seemed to work, until I tested it a little. I discovered that the _updatePanel javascript method of the PageRequestManager object, used by the normal ajax rendering in the browser, was doing a few extra things, so I used that method instead of just replacing the innerHTML, at the cost of some speed. But that didn't help either, because it failed when using validators. Even though I replaced the updatePanel token with a correct javascript block, it still got executed a bit later than it should have.

The only solution I had was to replace the _updatePanel method with my own. Since it consists of a small block of code that disposes some script blocks and some other stuff, followed by a plain innerHTML replace, I could not simply 'override' it: it would have changed the innerHTML to some meaningless stuff (the thing I send from the server), then I would have parsed and changed the innerHTML again, resulting in bad performance, flickering, nonsense on screen, etc. So I just copy pasted the first part and added my own ApplyPatch function instead of the html replace code line.

Now, here I met another issue. The innerHTML property of an html element is not a simple string. It gets parsed immediately when set, and it is recreated from the node tree when read, as explained in this previous article of mine. The only solution was to create my own property on the update panel div element that remembers the string that was set. This solved a lot more problems, because it meant I could identify the content to be replaced by simple position markers rather than through regular expressions (as was my initial idea). That property would not get changed by custom local javascript either, so I was safe to use it.
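The idea is simply to keep the exact string next to the element instead of trusting innerHTML to give it back; a minimal sketch (the property name is mine):

function setPanelContent(div, html) {
    div._cachedHtml = html; // the string exactly as the server sent it
    div.innerHTML = html;   // the browser may normalize this copy at will
}
// patches are applied against div._cachedHtml, never against div.innerHTML,
// so simple position markers stay valid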

About the regular expression engine in Javascript: it has no Singleline option. That means you can only change the content line by line. I could have used Steve Levithan's beautiful work, but with the solution found above I did not need regular expressions at all.

The only other issue was with UpdatePanels inside UpdatePanels. I found out that in this case only the parent UpdatePanels are rendered. That meant that the custom property I added to a child panel would disappear and break my code. Therefore I had to keep a tree of the UpdatePanels in the page and clear all the children's cached content when a parent was updated. So I did that, too.

What else? What if the property somehow got deleted or changed, or something else happened, like someone deciding to recreate the update panel div object? For that I made a little HttpHandler that receives an UpdatePanel id and clears its cached content. Then, on return from the asynchronous call, the javascript just pushes another update panel refresh using __doPostBack(updatePanelId,""). I don't particularly like this approach, since it could backfire with multiple UpdatePanels (as you know, only one postback at a time is supported), but I haven't found a better solution yet. Besides, this event should normally not happen.

So, the mechanism was all in place; all I had to do was build the actual patching mechanism, the one that finds the difference between the previously rendered content and the current content, then sends only the changed part. The first thing I did was remove the identical start and end of the two strings. As you can imagine, that's the most common scenario: a change in the UpdatePanel means all the content up to the change remains the same, and so does the content after it. But I was testing the filter with a large grid that would randomly change one cell to a random string. That meant two changes: the previous position and the new one. Assuming the first change was in one of the starting cells and the last in one of the cells at the end, the compression would be meaningless. So I googled for an algorithm that would give me the difference between two files/strings and I found Diff! Well, I knew about it, so I actually googled for Diff :) It was in the Longest Common Subsequence algorithm category.
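That first trimming step looks something like this (a sketch, not the actual filter code):

function trimCommonEnds(oldS, newS) {
    var start = 0;
    while (start < oldS.length && start < newS.length
        && oldS.charAt(start) === newS.charAt(start)) start++;
    var oldEnd = oldS.length, newEnd = newS.length;
    while (oldEnd > start && newEnd > start
        && oldS.charAt(oldEnd - 1) === newS.charAt(newEnd - 1)) { oldEnd--; newEnd--; }
    // only the middle parts need diffing; the common prefix and suffix are reused
    return { start: start, oldMiddle: oldS.substring(start, oldEnd), newMiddle: newS.substring(start, newEnd) };
}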

Anyway, the algorithm was nice, clear, explained, with code, perfect for what I wanted, and completely useless, since it needed an array of m*n to get what I needed. It was slow, a complete memory hog, and I couldn't possibly use an array of 500000x500000. I bet there were optimizations that covered this problem, but I was miserable, so I just patched up my own messy algorithm. What does it do? It randomly selects a 100 character long string from the current content and searches for it in the previous content. If it finds it, it expands the selection around it, considers it a Reasonably Long Common Substring and works recursively from there. If it doesn't find it, it tries a few other randomly chosen strings, then gives up. Well, actually it is the same algorithm, made messy and with no extra memory requirements.
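A sketch of that idea, assuming the output is a list of copy instructions (offset and length into the old content) and insert instructions (literal new text):

function diffChunks(oldS, newS, out) {
    if (newS.length === 0) return;
    if (oldS.length === 0) { out.push({ insert: newS }); return; }
    for (var attempt = 0; attempt < 5; attempt++) {
        // pick a random probe from the new content and look for it in the old one
        var len = Math.min(100, newS.length);
        var pos = Math.floor(Math.random() * (newS.length - len + 1));
        var found = oldS.indexOf(newS.substr(pos, len));
        if (found < 0) continue;
        // expand the match left and right for as long as the strings agree
        var s = 0, e = 0;
        while (pos - s > 0 && found - s > 0
            && newS.charAt(pos - s - 1) === oldS.charAt(found - s - 1)) s++;
        while (pos + len + e < newS.length && found + len + e < oldS.length
            && newS.charAt(pos + len + e) === oldS.charAt(found + len + e)) e++;
        // recurse on what is left of the match, emit the copy, recurse on the right
        diffChunks(oldS.substring(0, found - s), newS.substring(0, pos - s), out);
        out.push({ copy: found - s, length: len + s + e });
        diffChunks(oldS.substring(found + len + e), newS.substring(pos + len + e), out);
        return;
    }
    out.push({ insert: newS }); // no common anchor found, send this chunk whole
}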

It worked far better than I expected, even if it clearly could use some optimizations. The code was kept clear at the expense of speed, but it still worked acceptably fast, with no noticeable overhead. After a few tweaks and fixes, the class was ready. I tested it on a project we are working on, a big project with some complex pages, and it worked like a charm.

One particular page used a control I made that allows grid rows and columns to have children that can be shown or hidden at will. When collapsing a column (meaning every row gets some cells removed), the compressed size was still above 50%, in up to 100 patch fragments. When collapsing a row, meaning some content from the middle of the grid would just vanish, the size went down to 2%! Of course, putting the ViewState into the session also helped. Gzip compression on the server would complement this nicely, shrinking the output even more.

So, I have demonstrated incredible compression of the UpdatePanel content sent over the network, with something as small and reusable as a response filter that can be added once, in the master page. You could use it for customers that have network bandwidth issues or for sites that pay for outbound traffic. It would work just as well with sites made with one big UpdatePanel placed in the MasterPage :).

If you want to use it in your sites, please let me know how it performs and what problems you've encountered.

I have been working on an idea, one that assumed that if you remember the content sent to an UpdatePanel, you can send only the difference next time you update it. That would mean that you could use only one UpdatePanel, maybe in the MasterPage, and update only changes, wherever they'd be and no matter how small.

However, I also assumed that if I set the innerHTML property of an HTML element, the value would remain the same. In other words, that elem.innerHTML=content; implied that elem.innerHTML and content were identical at the end of the operation. They were not! Apparently the browser (each flavour in its own way) interprets the string that you feed it, then creates the element tree structure. When you query the innerHTML property, it uses the node structure to recreate it.

So, comparing the value you fed it with the actual value of innerHTML after the operation, you see quoted attribute values, attributes removed altogether when they have the default value, uppercased keywords, changed attribute order and so on and so on. FireFox only adds quotes, as far as I can see, but you never know what they'll do next.
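You can see the effect with something as small as this (the exact output differs per browser, so the comment is only indicative):

var div = document.createElement('div');
div.innerHTML = '<INPUT value=abc type=text>';
alert(div.innerHTML);
// IE might keep <INPUT value=abc type=text>, FireFox might show
// <input value="abc" type="text">: case, quotes and attribute order are up to the browser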

On the bright side, now that my idea has been torn to shreds by the browser implementations, I have an answer for all those stuck up web developers who consider innerHTML unclean or unstructured and criticize the browsers for not rendering DOM method changes as fast. The innerHTML property is like a web service: you feed it your changes in (almost) XML format and it applies them in the browser. Since you pretty much do the same when you use any form of web request, including Ajax, you cannot complain.

Update: On June 20th 2009, Codeplex notified me that the patch I did for the ACT has been applied. I haven't tested it yet, though. Get the latest source (not latest stable version) and you should be fine.

The Ajax Control Toolkit has a PopupExtender module that is used throughout the library by whichever controls need to show content above other controls. I wanted to use the Calendar Extender in my web site, but the calendar appeared underneath other controls. I checked it out and it had a zIndex of 1000, which should have been enough. It took me an hour to realise that in the toolkit code zIndex was set as a property of the div element, not of the div style!

A download of the latest version, from Feb 29, shows the problem is still there. The fix? Go to the PopupExtender folder in the source code, open the PopupBehaviour.js file and search for a line that looks like this:
element.zIndex = 1000;
and replace it with
element.style.zIndex = 1000;
Now it works!

The issue is already in the AjaxControlToolKit issue tracker, but it has not been addressed yet.

I am going to quickly describe what happened at the briefing, then link to the site where all the presentation materials can be found (if I ever find it :))

The whole thing was supposed to happen at the Grand RIN hotel, but apparently the people there suddenly changed their minds, leaving the briefing without a set location. In the end the brief took place at the Marriott Hotel, and the MSDN people were nice enough to phone me and let me know of the change.

The conference lasted for 9 hours, with coffee and lunch breaks, half an hour for signing in and another 30 minutes for introduction bullshit. You know the drill if you've ever been to one of these events: you sit in a chair waiting for the event to start while you are SPAMMED with video presentations of Microsoft products, then some guy comes in saying hello and presenting the people that will do the talking, then each of the people that do the talking present themselves, maybe even thank the presenter at the beginning... like a circular reference! Luckily I brought my trusted ear plugs and PDA, loaded with sci-fi and tech files.

The actual talk began at 10:00, with Petru Jucovschi presenting as well as holding the first session, about Linq and C# 3.0. He has recently taken over from Zoltan Herczeg and has not yet gained the confidence to keep crowds interested. Luckily, the information and code were reasonably well structured and, even though I had heard it all before, they kept me watching the whole thing.

Linq highlights:
  • it is new in .NET 3.5 and takes advantage of a lot of the other newly introduced features like anonymous types and methods, lambda expressions, expression trees, extension methods, object initializers and many others.
  • it works over any object defined as IQueryable<T> or IEnumerable (although this last one is a bit of a compromise).
  • it simplifies our way of working with queries, bringing them closer to the .NET programming languages and moving errors from runtime into the domain of compiler errors.
  • "out of the box" it comes with support for T-Sql, Xml, Objects and Datasets, but providers can be built (easily) for anything imaginable.
  • the linq queries are actually execution trees that are only run when GetEnumerator is called. This is called "deferred execution" and it means more queries can be linked and optimised before the data is actually required.
  • in case you want the data immediately, for caching purposes, there are ToList and ToArray methods available


Then there were two back-to-back sessions from my favourite speaker, Ciprian Jichici, about Linq over SQL and Linq over Entities. He was slightly tired and in a hurry to catch the plane to his native lands of Timisoara, VB, but he held through, even if he had to talk for 2.5 hours straight. He went through the manual motions of creating mappings between Linq to SQL objects and actual database data; it wouldn't compile, but the principles were thoroughly explained, and I have all the respect for the fact that he didn't just drag and drop everything without explaining what happened in the background.

Linq to SQL highlights:
  • Linq to SQL does not replace SQL and SQL programming
  • Linq to SQL supports only T-SQL 2005 and 2008 for now, but Linq providers from the other DB manufacturers are sure to come.
  • Linq queries are translated, wherever possible, to SQL and executed on the server.
  • queries support filtering, grouping, ordering and C# functions. One of the queries used StartsWith; I don't know if that translated into SQL 2005 CLR code or into a LIKE, and I don't know exactly what happens with custom methods
  • using simple decoration, mapping between SQL tables and C# objects can be done very easily
  • Visual Studio has GUI tools to accomplish the mapping for you
  • Linq to SQL can make good use of automatic properties, object initialisers and collection initialisers
  • an interesting feature is the ability to tell Linq which of the "child" objects to load with a parent object. You can read a Person object and load all its phone numbers and email addresses, but not the purchases made in that name


Linq to Entities highlights:
  • it does not ship with the .NET framework, but separately; a release version will probably be unveiled in the second half of this year
  • it uses three XML files to map source to destination: conceptual, mapping and database. The conceptual file holds a schema of the local objects, the database file holds a schema of the source objects, and the mapping file describes their relationship.
  • one of my questions was whether I could use Linq to Entities as a data adapter from an already existing data layer to another, using it to redesign the data layer architecture. The answer was yes. I find this very interesting indeed.
  • of course, GUI tools will help you do that with drag and drop operations and so on and so on
  • the three level mapping allows you to create objects from several linked tables, making the internal workings of the database engine, and even some of its structure, irrelevant
  • I do not know if you can create an object from two different sources, like SQL and an XML file
  • for the moment Linq to SQL and Linq to Entities are built by different teams, and they may have different approaches to similar problems


Then it was lunch time. For a classy (read: expensive as crap) hotel, the service was really badly organised. The food was there, but you had to stand in long queues with a plate in your hand to get some, then quickly hunt for empty tables, the type you stand at to eat. The food was good though, although not exceptional.

Aurelian Popa was the third speaker, talking about Silverlight. Now, it may be something personal, but he brought to my mind the image of Tom Cruise: arrogant, hyperactive, a bit petty. I was half expecting him to say "show me the money!" all the time. He insisted on telling us about the great mathematician Comway who, by a silly mistake, created Conway's Life Game. If only he could spell his name right, tsk, tsk, tsk.

Anyway, technically this presentation was the most interesting to me, since it showed concepts I was not familiar with. Apparently Silverlight 1.0 is Javascript based, but Silverlight 2.0, which will be released around the middle of this year, I guess, uses .NET! You can finally program the web with C#. The speed and code protection advantages are great. Silverlight 2.0 maintains the ability to manipulate Html DOM objects and lets Javascript manipulate its own elements.

Silverlight 2.0 highlights:
  • Silverlight 2.0 comes with its own compact version of .NET, independent of the .NET versions installed on the system, or even of the operating system
  • it is designed with compatibility in mind, cross-browser and cross-platform. One will be able to use it in Safari on Linux
  • the programming can be both declarative (using XAML) and object oriented (programmatic access with C# or VB)
  • I asked if it was possible to manipulate the html DOM of the page and, being written in .NET, work significantly faster than the same operations in pure Javascript. The answer was yes, but since Silverlight is designed to be cross-browser, I doubt that is the whole answer. I wouldn't put it past Microsoft to make some performance optimizations for IE, though.
  • Silverlight 2.0 has extra abilities: CLR, DLR (for Ruby and other dynamic languages), support for RSS, SOAP, WCF, WPF, Generics, Ajax; all the buzzwords are there, including DRM (ugh!)


The fourth presentation was just a bore, not worth mentioning. What I thought would enlighten me with new and exciting WCF features was long and featureless (the technical details as well as the presenter), and lingering on the description would only make me look vengeful and cruel. One must maintain appearances, after all.

WCF highlights: google for them. WCF replaces Web Services, Remoting, Microsoft Message Queue and DCOM, and it can communicate with any of them.