Update: On June 20th 2009, Codeplex notified me that the patch I submitted for the ACT has been applied. I haven't tested it yet, though. Get the latest source (not the latest stable version) and you should be fine.

The Ajax Control Toolkit has a PopupExtender module that is used throughout the library by whichever controls need to show above other controls. I wanted to use the Calendar Extender in my web site, but the calendar appeared underneath other controls. I checked it out and it had a zIndex of 1000, which should have been enough. It took me an hour to realise that in the toolkit code zIndex was set as a property of the div element, not of the div's style!

A download of the latest version from Feb 29 shows the problem is still there. The fix? Go to the PopupExtender folder in the source code, open the PopupBehaviour.js file and search for a line that looks like this:
element.zIndex = 1000;
and replace it with
element.style.zIndex = 1000;
Now it works!

The issue is already in the AjaxControlToolKit issue tracker, but it has not been addressed yet.

A little-used thingie on the SqlConnection object called the InfoMessage event fires whenever there are info messages (duh!) from the last query or stored procedure execution. That means errors, of course, but it also means warnings and simple prints! You get where I am going with this?

Instead of changing stored procedures, data layers and code whenever I need to get some new information from SQL, I can just add some nicely formatted PRINT commands and get all the information I need! Here is some code:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

namespace SqlPrintCommands
{
    public partial class Form2 : Form
    {
        public Dictionary<string, string> values;

        public Form2()
        {
            InitializeComponent();
            values = new Dictionary<string, string>();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            SqlConnection connection =
                new SqlConnection("[connectionString]");
            connection.Open();
            // subscribe before executing, so the PRINT output is captured
            connection.InfoMessage += sc_InfoMessage;
            SqlCommand comm =
                new SqlCommand("pr_Test", connection);
            comm.CommandType = CommandType.StoredProcedure;
            comm.ExecuteNonQuery();
            connection.Close();
            string s = "";
            foreach (KeyValuePair<string, string> pair in values)
            {
                s += string.Format("{0} : {1}\r\n",
                    pair.Key, pair.Value);
            }
            label1.Text = s;
        }

        private void sc_InfoMessage(object sender,
            SqlInfoMessageEventArgs e)
        {
            // only messages of the form "Eval: Name=Value" are interpreted
            string commandPrefix = "Eval: ";
            foreach (SqlError err in e.Errors)
            {
                if ((err.Message ?? "").StartsWith(commandPrefix))
                {
                    string command =
                        err.Message.Substring(commandPrefix.Length);
                    string[] cmd = command.Trim().Split('=');
                    string commandArgument = cmd[0];
                    string commandValue = cmd[1];
                    values[commandArgument] = commandValue;
                }
            }
        }
    }
}


In this scenario I have a simple form with a button and a label. I execute a pr_Test stored procedure and then I parse the messages it returns. If the messages are of the format
Eval: Name=Value
I store the keys and values in a Dictionary. Not the nicest code, but it's for demo purposes.

So, you want to know the count of whatever operation you executed? Add
PRINT 'Eval: RowCount='+cast(@@rowcount as varchar)
in your stored procedure. Pretty cool huh?

Unfortunately I haven't been able to get the messages sent asynchronously, even when the connection was asynchronous, the execution was asynchronous and the messages were generated with
RAISERROR('message',1,1) WITH NOWAIT
By the way, who is the idiot that spelled RAISERROR with only one E? What's a Rror and why would I raise it?

Update 19 February 2016:
I've done the test again, using another computer and .Net 4.6.1. Filling the DataTableReplacement class given at the end of the article, then copying the data into a DataTable object, is 30% faster than using a DataTable directly with BeginLoadData/EndLoadData and 50% faster than using a DataTable without the LoadData methods.

Now for the original post:

It was about time I wrote a smashing IT entry. It is about the obnoxious DataTable object, something I have written about before regarding its bugs and the difficulty of handling it. Until now I hadn't really thought about what kind of performance issues I might face when using it. I mean, yeah, everybody says it is slow, but how slow can it be? Twice as slow? Computers are getting faster and faster, I might not need personal research into this. I tried to make a DataTable replacement object once and it was not really compatible with anything that needed DataTables, so I gave up. But in this article I will show you how a simple piece of code became 7 times faster when taking into account some DataTable issues.

But let's get to the smashing part :) I was using C# to transform the values in a column from a DataTable into columns. Something like this:

Name Column Value
George Money 100
George Age 31
George Children 1
Jack Money 150
Jack Age 26
Jack Children 0
Jane Money 300
Jane Age 33
Jane Children 2



and it must look like this:

Name Money Age Children
George 100 31 1
Jack 150 26 0
Jane 300 33 2



I have no idea how to do this in SQL, if you have any advice, please leave a comment.
Update: Here are some links about how to do it in SQL and SSIS:
Give the New PIVOT and UNPIVOT Commands in SQL Server 2005 a Whirl
Using PIVOT and UNPIVOT
Transposing rows and columns in SQL Server Integration Services

Using PIVOT, the SQL query would look like this:

SELECT * 
FROM #input
PIVOT (
MAX([Value])
FOR [Column]
IN ([Money],[Age],[Children])
) as pivotTable


Anyway, the solution I had was to create the necessary table in the code behind, add a row for each Name and a column for each distinct value of Column, then cycle through the rows of the original table and just place the values in the new table. All the values are present and already ordered, so I only need to do it using row and column indexes that are easily computed.
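For reference, a minimal sketch of that approach could look like the code below. It assumes the source table has exactly the Name, Column and Value columns from the example and is already ordered by Name; the result columns are typed as strings just to keep it short.

private DataTable Pivot(DataTable source)
{
    var result = new DataTable();
    result.Columns.Add("Name", typeof(string));

    // one new column for each distinct value found in the "Column" column
    foreach (DataRow row in source.Rows)
    {
        var columnName = (string)row["Column"];
        if (!result.Columns.Contains(columnName))
            result.Columns.Add(columnName, typeof(string));
    }

    DataRow current = null;
    foreach (DataRow row in source.Rows)
    {
        var name = (string)row["Name"];
        // the source is ordered by Name, so a new name means a new result row
        if (current == null || !Equals(current["Name"], name))
        {
            current = result.NewRow();
            current["Name"] = name;
            result.Rows.Add(current);
        }
        current[(string)row["Column"]] = row["Value"].ToString();
    }
    return result;
}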

The whole operation lasted 36 seconds. There were many rows and columns, you see. Anyway, I profiled the code, using the great JetBrains dotTrace program, and I noticed that 30 of those 36 seconds were used by DataRow.set_Item(int, object)! I then remembered that the DataTable object has two methods, BeginLoadData and EndLoadData, that disable/enable the checks and constraints in the table. I did that and the operation went from 36 to 27 seconds.

Quite an improvement, but the bottleneck was still in the set_Item setter. So, I thought, what will happen if I don't use a DataTable at all? After all, the end result was being bound to a GridView and it, luckily, knows about object collections. But I was too lazy for that, as there was quite a complicated binding code mess waiting for refactoring. So I just used a List of object arrays instead of the DataTable, then used DataTable.Rows.Add(object[]) to copy from this intermediary list into the DataTable that I originally wanted to obtain. The time spent on the operation went from... no, wait

The time spent on the operation went from the 27 seconds I had obtained to 5! 5 seconds! Instead of 225,351 calls to DataRow.set_Item, I had 1533 calls to DataRowCollection.Add, going from 21 seconds to 175 milliseconds!
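A hedged sketch of that workaround, with made-up column names and values, would be something like this: cells are written into plain object arrays and the rows are added to the DataTable in one pass at the end (the usual System.Data and System.Collections.Generic usings are assumed).

var buffer = new List<object[]>();
for (var i = 0; i < 1000; i++)
{
    // in the real code these values would come from the source table
    buffer.Add(new object[] { "George", 100, 31, 1 });
}

var result = new DataTable();
result.Columns.Add("Name", typeof(string));
result.Columns.Add("Money", typeof(int));
result.Columns.Add("Age", typeof(int));
result.Columns.Add("Children", typeof(int));

result.BeginLoadData();
foreach (var values in buffer)
{
    result.Rows.Add(values); // one Rows.Add call per row, no per-cell setter
}
result.EndLoadData();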

Researching the reflected source of System.Data.dll I noticed that the DataRow indexer with an integer index was going through

DataColumn column = _columns[index];
return this[column];

How bad can it get?! I mean, really! There are sites that recommend you find the integer indexes of table columns and then use those integers for access. Apparently this is NOT the best practice. Best is to use the DataColumn directly!
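In other words, if you have many cells to set, get the DataColumn once and index the rows by it. A tiny hypothetical example (table and column names are made up):

// cache the DataColumn instead of looking it up by name or integer index per cell
DataColumn moneyColumn = table.Columns["Money"];
foreach (DataRow row in table.Rows)
{
    row[moneyColumn] = 100; // goes straight to the column, skipping the extra lookup
}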

So avoid the DataRow setter.

Update July 18, 2013:

Someone requested code, so here is a console application with some inline classes to replace the DataTable in GridView situations:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;

class Program
{
    static void Main(string[] args)
    {
        fillDataTable(false);
        fillDataTable(true);
        fillDataTableWriter();
        Console.ReadKey();
    }

    private static void fillDataTable(bool loadData)
    {
        var dt = new DataTable();
        dt.Columns.Add("cInt", typeof(int));
        dt.Columns.Add("cString", typeof(string));
        dt.Columns.Add("cBool", typeof(bool));
        dt.Columns.Add("cDateTime", typeof(DateTime));
        if (loadData) dt.BeginLoadData();
        for (var i = 0; i < 100000; i++)
        {
            dt.Rows.Add(dt.NewRow());
        }
        var now = DateTime.Now;
        for (var i = 0; i < 100000; i++)
        {
            dt.Rows[i]["cInt"] = 1;
            dt.Rows[i]["cString"] = "Some string";
            dt.Rows[i]["cBool"] = true;
            dt.Rows[i]["cDateTime"] = now;
        }
        if (loadData) dt.EndLoadData();
        Console.WriteLine("Filling DataTable"+(loadData?" with BeginLoadData/EndLoadData":"")+": "+(DateTime.Now - now).TotalMilliseconds);
    }

    private static void fillDataTableWriter()
    {
        var dt = new DataTableReplacement();
        dt.Columns.Add("cInt", typeof(int));
        dt.Columns.Add("cString", typeof(string));
        dt.Columns.Add("cBool", typeof(bool));
        dt.Columns.Add("cDateTime", typeof(DateTime));
        for (var i = 0; i < 100000; i++)
        {
            dt.Rows.Add(dt.NewRow());
        }
        var now = DateTime.Now;
        for (var i = 0; i < 100000; i++)
        {
            dt.Rows[i]["cInt"] = 1;
            dt.Rows[i]["cString"] = "Some string";
            dt.Rows[i]["cBool"] = true;
            dt.Rows[i]["cDateTime"] = now;
        }
        var fillingTime = (DateTime.Now - now).TotalMilliseconds;
        Console.WriteLine("Filling DataTableReplacement: "+fillingTime);
        now = DateTime.Now;
        var newDataTable = dt.ToDataTable();
        var translatingTime = (DateTime.Now - now).TotalMilliseconds;
        Console.WriteLine("Transforming DataTableReplacement to DataTable: " + translatingTime);
        Console.WriteLine("Total filling and transforming: " + (fillingTime+translatingTime));
    }
}

public class DataTableReplacement : IEnumerable<IEnumerable<object>>
{
    public DataTableReplacement()
    {
        _columns = new DtrColumnCollection();
        _rows = new DtrRowCollection();
    }

    private readonly DtrColumnCollection _columns;
    private readonly DtrRowCollection _rows;

    public DtrColumnCollection Columns
    {
        get { return _columns; }
    }

    public DtrRowCollection Rows { get { return _rows; } }

    public DtrRow NewRow()
    {
        return new DtrRow(this);
    }

    public DataTable ToDataTable()
    {
        var dt = new DataTable();
        dt.BeginLoadData();
        _columns.CreateColumns(dt);
        _rows.CreateRows(dt);
        dt.EndLoadData();
        return dt;
    }

    #region Implementation of IEnumerable

    public IEnumerator<IEnumerable<object>> GetEnumerator()
    {
        foreach (var row in _rows)
        {
            yield return row.ToArray();
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    #endregion
}

public class DtrRowCollection : IEnumerable<DtrRow>
{
    private readonly List<DtrRow> _rows;

    public DtrRowCollection()
    {
        _rows = new List<DtrRow>();
    }

    public void Add(DtrRow newRow)
    {
        _rows.Add(newRow);
    }

    public DtrRow this[int i]
    {
        get { return _rows[i]; }
    }

    public void CreateRows(DataTable dt)
    {
        foreach (var dtrRow in _rows)
        {
            dt.Rows.Add(dtrRow.ToArray());
        }
    }

    #region Implementation of IEnumerable

    public IEnumerator<DtrRow> GetEnumerator()
    {
        return _rows.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    #endregion
}

public class DtrRow
{
    private readonly object[] _arr;
    private readonly DataTableReplacement _dtr;

    public DtrRow(DataTableReplacement dtr)
    {
        _dtr = dtr;
        var columnCount = _dtr.Columns.Count;
        _arr = new object[columnCount];
    }

    public object this[string columnName]
    {
        get
        {
            var index = _dtr.Columns.GetIndex(columnName);
            return _arr[index];
        }
        set
        {
            var index = _dtr.Columns.GetIndex(columnName);
            _arr[index] = value;
        }
    }

    public object this[int columnIndex]
    {
        get
        {
            return _arr[columnIndex];
        }
        set
        {
            _arr[columnIndex] = value;
        }
    }

    public object[] ToArray()
    {
        return _arr;
    }
}

public class DtrColumnCollection
{
    private readonly Dictionary<string, int> _columnIndexes;
    private readonly Dictionary<string, Type> _columnTypes;

    public DtrColumnCollection()
    {
        _columnIndexes = new Dictionary<string, int>();
        _columnTypes = new Dictionary<string, Type>();
    }

    public int Count { get { return _columnIndexes.Count; } }

    public void Add(string columnName, Type columnType)
    {
        var index = _columnIndexes.Count;
        _columnIndexes.Add(columnName, index);
        _columnTypes.Add(columnName, columnType);
    }

    public int GetIndex(string columnName)
    {
        return _columnIndexes[columnName];
    }

    public void CreateColumns(DataTable dt)
    {
        foreach (var pair in _columnTypes)
        {
            dt.Columns.Add(pair.Key, pair.Value);
        }
    }
}


As you can see, there is a DataTableReplacement class which uses three other classes instead of DataColumnCollection, DataRowCollection and DataRow. For this example alone, the DtrRowCollection could have easily been replaced with a List<DtrRow>, but I wanted to allow people to swap in the replacement wherever they had code written against a DataTable, without any change to the calling code.

In the example above, on my computer, it takes 1300 milliseconds to populate the DataTable the old fashioned way, 1000 to populate it with BeginLoadData/EndLoadData and 110 milliseconds to populate the DataTableReplacement. It takes another 920 milliseconds to create a new DataTable with the same data (just in case you really need a DataTable), which brings the total to 1030 milliseconds. So this is the overhead the DataTable brings for simple scenarios such as these.

I'm a bit late to this, but I didn't need it until now. It is all about creating a .NET assembly for use by SQL Server. These are the quick and dirty steps to it.

C# Steps
1. Create a Class Library project (some links suggest a SQL Server Project, but that is not available for Visual Studio versions below Professional)
2. Add a public class
3. Add a public static method and decorate it with [SqlFunction]
4. Do not use static fields in the said class
5. Compile the assembly (a minimal example class follows below).
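A minimal class that follows these steps might look like the sketch below; the names match the SQL function defined in the next section and the method body is just a placeholder.

using Microsoft.SqlServer.Server;

namespace MyNamespace
{
    public class MyClass
    {
        // public, static and decorated with [SqlFunction];
        // string maps to NVARCHAR and double maps to FLOAT on the SQL side
        [SqlFunction]
        public static double MyUserDefinedFunction(string s1, string s2)
        {
            return (s1 ?? "").Length + (s2 ?? "").Length;
        }
    }
}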

SQL Steps
1. Define the assembly in SQL:
CREATE ASSEMBLY MyAssembly FROM 'C:\SqlCLR\MyAssembly.dll'

2. Create the SQL function to use the method in the assembly:
CREATE FUNCTION MyUserDefinedFunction(
@s1 NVARCHAR(4000),@s2 NVARCHAR(4000) ... other parameters )
RETURNS FLOAT AS
EXTERNAL NAME MyAssembly.[MyNamespace.MyClass].MyUserDefinedFunction

3. Enable CLR execution in SQL Server:
EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

4. Use the function like:
SELECT dbo.MyUserDefinedFunction('test','test2'...)


Things to remember:
1. Make sure the parameter types are the same in the .NET method and the SQL function
- the float keyword in SQL means double in .NET! I have no idea what kind of SQL type you must use in your function to match a .NET float.
- the string in .NET is matched to nvarchar in SQL
- the bit is matched to a bool as expected
2. Whenever you change the DLL you must DROP all the functions, then DROP the assembly, then create it again. If there are no signature changes, I guess just replacing the dll file could work.

There are a lot of things to say about returning tables instead of single values or about defining user defined aggregate functions, but not in this post.

As a software developer with no formal training, I sometimes feel humbled by the more standardised approaches to programming like, for example, Test Driven Development or TDD. But I think that today I finally figured it out.

You see, TDD is supposed to "cover your code" with tests. Once all your tests run without fail, you know that you can focus on other parts of the code, like refactoring, interface or performance improvements, or new features. But what it actually means is a simulation of your client and/or debug person. Once you automatically simulate a client, you can make sure it is satisfied before you move on to the real person. And this is also where the whole thing fails, because it is obviously impossible to simulate a human being without real effort that surpasses the building of the very software you are trying to test. It would be fun to actually do it, then watch the software complain about its own functionality. If any software is going to take over the world, one that emulates an annoying client probably will.

One of the hardest parts to simulate is the work with the interface. No test will ever be able to look disappointed while expressing a lack of enthusiasm about your choice of colors. It is even harder to test web interfaces, although software that can test some of the behaviour of web apps and sites exists, with limited functionality. Also, it is impossible to test for functionality that is not there.

A good use of tests is to address (and in this way document) bug findings! When you see a bug, you create a test that fails because of it, then you fix the bug. Also, by trying to cover as much of your code as possible with tests, you get to formalize access to your code and are also forced to decouple interface from code. No wonder all the test frameworks are used for unit testing. Once you can create a unit or a library of code with no other inputs or outputs than data types, you can test the hell out of it.

But it still leaves the interface out. I have been thinking of ways of describing the test procedure not in code, but in English, something like a unit test for a person rather than a testing framework. This can be further automated, where possible, or just followed by a dedicated tester.

What I would really be interested in would be a general way of creating tests for recurring bugs. A sort of code policy enforcement, if you will, but one that would test for the same bug multiple applications. Can it be done without also formalising the structure of those applications?

A while ago I wrote a quick post to remind me of how to use the AutoCompleteExtender, but recently I realised that it was terribly incomplete (pun not intended). I've updated it, but I also felt that I need to restructure the whole post, so here it is, with more details and more code fun.

First of all, a short disclaimer: I am not familiar with the ASP.Net Ajax javascript paradigm. If some of the things that I am doing seem really stupid, it's because I did it by trial and error, not by understanding why the code is as it is. Here it goes.

There are two ways in which to use the AutoCompleteExtender: using PageMethods or using web service methods. The details are in the previous post, I will only list the gotchas in this one.
  • PageMethods requirements:
    1. ScriptManager must have EnablePageMethods="true"
    2. The page method must have the form public static string[] MethodName(string prefixText, int count) AND THE SAME PARAMETER NAMES. If you rename prefixText to text it will not work! (A minimal example follows after this list.)
    3. The page method has to be public and STATIC
    4. No, it is not possible to declare the method in a web user control, it must be in the page
  • Web service requirements:
    1. The method must have the form public string[] MethodName(string prefixText, int count) AND THE SAME PARAMETER NAMES. If you change prefixText with text it will not work!
    2. The method has to be public and NOT STATIC
    3. The method must be marked as ScriptMethod
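To make the signature rules concrete, here is a hedged sketch of a page method; the completion list is hardcoded just for illustration. For the web service variant, the method would not be static and would also carry the [ScriptMethod] attribute.

[System.Web.Services.WebMethod]
public static string[] GetCompletionList(string prefixText, int count)
{
    // a real implementation would query a database; a fixed list keeps the sketch short
    string[] all = { "apple", "apricot", "banana", "cherry" };
    var matches = new System.Collections.Generic.List<string>();
    foreach (var item in all)
    {
        if (matches.Count == count) break;
        if (item.StartsWith(prefixText, StringComparison.OrdinalIgnoreCase))
            matches.Add(item);
    }
    if (matches.Count == 0) matches.Add("No Match"); // see the "No Match" discussion below
    return matches.ToArray();
}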

Now, the method can return an array different from a string array, but the only useful types there would be numerical or maybe dates. Any object that you send will ultimately be transformed into "[object Object]". There is a way to send a pair of value,text encoded in the strings, and for that you use:
AutoCompleteExtender.CreateAutoCompleteItem(text, value);

It doesn't help much beyond the fact that in the client javascript events of the AutoCompleteExtender the first parameter will be the AutoCompleteExtender javascript object and the second an object with _text and _value properties.
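A hedged example of a web service method returning such pairs (the data is hardcoded and the AjaxControlToolkit namespace is assumed to be referenced):

[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod]
public string[] GetPeople(string prefixText, int count)
{
    // each item carries both a display text and a value for the client-side events
    return new[]
    {
        AjaxControlToolkit.AutoCompleteExtender.CreateAutoCompleteItem("George", "1"),
        AjaxControlToolkit.AutoCompleteExtender.CreateAutoCompleteItem("Jack", "2")
    };
}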

One of the questions I noticed frequently on the web is: how do I show the user that there are no auto complete matches? The easy solution is to always return at least one string in the string array that your method is returning. If there are no matches, make sure there is a "No Match" string in the list. But then the complicated part comes along: how do you stop the user from selecting "No Match" from the list? And I do have a solution. It seems that the text in the textbox is set based on the existence of a javascript object called control that has a set_text function. If the object or the function does not exist, then a simple textbox.value=text is performed. So I used this code:

string script = @"var tb=document.getElementById('" + tbAutoComplete.ClientID + @"');if (tb) tb.control={set_text:setText,element:tb};";
ScriptManager.RegisterStartupScript(Page,Page.GetType(),UniqueID+"_init",script,true);
to set the object for my textbox. And also the javascript code that looks like this:
function setText(input) {
if (input=='No Match') return;
this.element.value=input;
}


This being said, I think that one can use the AutoCompleteExtender and know what the hell is making it not work.

Update: The 30 September 2009 release of the AjaxControlToolkit doesn't have the error that I fix here. My patch was applied in July and from September on the bug is gone in the official release as well. Good riddance! :)

==== Obsolete post follows

I've just downloaded the 29 Feb 2008 release of the AjaxControlToolKit and I noticed that the fix for the TabContainer bug from one of my previous posts no longer worked. So the post is now updated with the latest fix.

Fixing TabContainer to work with dynamic TabPanels

Apparently, the guys that make the Ajax Control Toolkit are not considering this a bug, since I posted it a long time ago as well as a bunch of other folks and there are also some discussions about it on some forums.

This blog post is about ASP.Net Ajax calls (Update panel and such), if you are interested in aborting jQuery.ajax calls, just call abort() on the ajax return object.

Kamal Balwani asked for my help on the blog chat today for a really annoying issue. He was opening a window when pressing a button on an ASP.Net page and that window used web services to request data from the server repeatedly. The problem was that when the window was closed, Firefox (the error did not appear on Internet Explorer) showed a 'Sys is not defined' error on this javascript line: _this._webRequest.completed(Sys.EventArgs.Empty);.

It was a silly error, really. There was this javascript object Sys.Net.XMLHttpExecutor and it had a function defined called _onReadyStateChange where a completed function received Sys.EventArgs.Empty as an argument. At that time, though, the page was already unloaded, as well as any objects defined in it. I consider this a Firefox bug, as a javascript function should not try to access an object that has already been unloaded.

Anyway, going through the Microsoft Ajax library is a nightmare. I am sure they had clear patterns in mind when they designed it this way but for me it was a long waste of time trying to get my head around it. Finally I've decided that the only solution here was to abort the last Ajax request and so I've reached these two posts:

Cancel a Web Service Call in Asp.net Ajax
How to cancel the call to web service.

Bottom line: you need to use the abort function on a WebRequestExecutor object, which one can get by using the get_executor function on a WebRequest object, which should be returned by a script method call.

But you see, when you execute TestService.MyMethod you get no return value. What you need to do is use TestService._staticInstance.MyMethod which returns the WebRequest object required! Good luck figuring that out without Googling for it.

From then on the ride was smooth: keep an array of web requests and, in the window.onbeforeunload event, just abort them all.

Here is the code for the popup window:

<body onload = "runMethod();" onbeforeunload = "KillRequests();">

function runMethod() {
   if (!window._webRequests) window._webRequests = Array();
   _webRequests[_webRequests.length]
      = TestService._staticInstance
        .MyMethod(OnSuccess, OnTimeout);
   }


function OnSuccess(result) {
   //do something with the result
   setTimeout(runMethod, 500);
   }


function OnTimeout(result) {
   setTimeout(runMethod, 500);
   }


function KillRequests() {
   if (!window._webRequests) return;
   for (var c = 0; c < window._webRequests.length; c++) {
      if (window._webRequests[c]) {
         var executor = window._webRequests[c].get_executor();
         if (executor.get_started()) executor.abort();
         }
      }
   }

A chat user asked me the other day how one puts the tabs in the AjaxToolKit TabContainer vertically and I had no idea. I've decided to do it today and write this blog post; maybe he'll come back and get the answer.

So, the request is simple: take a web site with a TabContainer in it and make it show the tabs vertically. I can only assume that the vertical tabs would go on the left and the content on the right. So I took the Internet Explorer Developer Toolbar and analysed the html output of a page with four static tabs. I added a <style> tag with CSS classes and started making changes until it worked. Unfortunately, the same setup would not work in Firefox, so I had to repeat the process using Firebug to analyse the page output. In the end this is the result:
<style>
.ajax__tab_header {
float:left;
}
.ajax__tab_body {
/*float:left;*/
margin-left:220px;
}
.ajax__tab_outer {
display:block !important;
}
.ajax__tab_tab{
/*min-width:200px;*/
width:200px;
height:auto !important;
}
</style>

Add this on top of your page or include it in your CSS and the tabs will appear vertically.

Now for a bit of explaining.
  • First of all this does not overwrite the CSS that the TabContainer loads because it is organized under a general ajax__tab_xp class like: .ajax__tab_xp .ajax__tab_header .
  • Then the width of 200px is arbitrary. I used it to keep the vertical tabs at the same width. I tried using min-width first, but it won't display right in Firefox.
  • Another point is about the ajax__tab_body class, which I first tried to set up as float left, which would place the body div next to the tabs div; however, this breaks if the body tab is wider, and the content would appear underneath the tabs div. Thanks to my colleague Romeo, I used the margin-left trick. 220px is enough to work in both IE and Firefox. It could be made smaller (closer to 200px) if the default IE body margin were 0.
  • The !important keyword is placed to overwrite some settings that are already set up in the original TabContainer CSS.
  • Last issue: now the right panel will be truncated if it gets too large. You should control the overflow of that div, although, as far as I am concerned, my job is done.


As a kind of disclaimer, I am not a CSS expert. If you know of a better way of doing this, please let me know.

I named this post so because I started researching something that a chat user asked me: how do you add UpdatePanels programmatically to a page? You see, the actual problem was that he couldn't add controls to the UpdatePanel after adding it to the page, and that was because the UpdatePanel is a templated control; in other words, it contains one or more objects that inherit from ITemplate, and all the control's children are part of these templates.

So, the required application is like this: A page that has a button that does nothing but a regular postback and another button that adds an UpdatePanel. Each update panel must contain a textbox and a button. When the button is pressed, the textbox must fill with the current time only in that particular UpdatePanel. If the regular postback button is pressed, the UpdatePanels must remain on the page.

What are the possible issues?
First of all, the UpdatePanels must survive postbacks. That means that you have to actually create them every time the page loads, therefore inside Page_Load. Note: we could add them in Page_Init, and in fact that's where they are added when getting the controls from the aspx file of a page, but during Init the ViewState is not accessible!
Then, there is the adding of the UpdatePanels. It is done in a Click event from a button, one that is done AFTER the Page_Load, therefore adding of an UpdatePanel must also be done there. Note: we could put the CreatePanels method in Page_LoadComplete, but then the controls in the update panel will not respond to any events, since the Load phase is already complete.
There is the matter of how we add the TextBox and the Button in each UpdatePanel. The most elegant solution is to use a Web User Control. This way one can visually control the content and layout of each UpdatePanel and also (most important for our application) add code to it!
Now there is the matter of the ITemplate object that each UpdatePanel must have as a ContentTemplate. This is done via the Page.LoadTemplate method! You give it the virtual path to the ascx file and it returns an ITemplate. It's that easy!

Update: if you by any chance want to add controls programmatically to the UpdatePanel, use the ContentTemplateContainer property of the UpdatePanel like this:
updatePanel.ContentTemplateContainer.Controls.Add(new TextBox());


Enough chit-chat. Here is the complete code for the application, the DynamicUpdatePanels page and the ucUpdatePanelTemplate web user control:
DynamicUpdatePanels.aspx.cs
using System;
using System.Web.UI;

public partial class DynamicUpdatePanels : Page
{
    private int? _nrPanels;

    public int NrPanels
    {
        get
        {
            if (_nrPanels == null)
            {
                if (ViewState["NrPanels"] == null)
                    NrPanels = 0;
                else
                    NrPanels = (int) ViewState["NrPanels"];
            }
            return _nrPanels.Value;
        }
        set
        {
            _nrPanels = value;
            ViewState["NrPanels"] = value;
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        CreatePanels();
    }

    private void CreatePanels()
    {
        for (int i = 0; i < NrPanels; i++)
        {
            AddPanel();
        }
    }

    private void AddPanel()
    {
        UpdatePanel up = new UpdatePanel();
        up.UpdateMode = UpdatePanelUpdateMode.Conditional;
        up.ContentTemplate = Page.LoadTemplate("~/ucUpdatePanelTemplate.ascx");
        pnlTest.Controls.Add(up);
    }

    protected void btnAdd_Click(object sender, EventArgs e)
    {
        NrPanels++;
        AddPanel();
    }
}


DynamicUpdatePanels.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="DynamicUpdatePanels.aspx.cs"
    Inherits="DynamicUpdatePanels" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        <asp:ScriptManager ID="ScriptManager1" runat="server" />
        <asp:Panel ID="pnlTest" runat="server">
        </asp:Panel>
        <asp:Button ID="btnAdd" runat="server" Text="Add Panel" OnClick="btnAdd_Click" />
        <asp:Button ID="btnPostBack" runat="server" Text="Postback" />
    </form>
</body>
</html>


ucUpdatePanelTemplate.ascx.cs
using System;
using System.Web.UI;

public partial class ucUpdatePanelTemplate : UserControl
{
    protected void btnAjax_Click(object sender, EventArgs e)
    {
        tbSomething.Text = DateTime.Now.ToString();
    }
}


ucUpdatePanelTemplate.ascx
<%@ Control Language="C#" AutoEventWireup="true" CodeFile="ucUpdatePanelTemplate.ascx.cs"
Inherits="ucUpdatePanelTemplate" %>

<asp:TextBox ID="tbSomething" runat="server"></asp:TextBox>
<asp:Button ID="btnAjax" runat="server" OnClick="btnAjax_Click" />


That's it, folks!


Agile development is something of a growing software culture. Every dev team is trying to become agile. Programmers I know who work like that are all happy and evangelize the concept as the best thing since the fire simulation algorithm. The word is that Agile gives control back to the developer.

So, what does a code monkey have to do to become agile? (and no tree algorithm jokes here, please!). As experience dictates, Google for "become Agile in 11 seconds" and see what tools one can download and integrate in the personal toolbox! Surprise! There are tools, but they only help you when you are already agile. WTF? It was supposed to be a developer empowering thing!

Actually, this has only a marginal connection to the developer. Agile development is actually a management strategy. No programmer can become agile without the support of their manager. Management in itself is the application of scientific methods to achieve business goals, which are, almost every time, to maximize profits. You see, managers have noticed that they could hardly quantify programmer work. Software development was becoming more artsy and less scientific as technology grew ever faster and more complex.

The typical development cycle of a piece of software is to plan it (sort of) do stuff to it depending on the planning (which always results in changing the original plan or ignoring it altogether), then show it to the customer. They will, almost every time, just applaud and ask you "ok, this is nice (albeit full of bugs), but where is what I wanted?". The manager then gets scolded for not doing their job. But how could one possibly know what the client dreamed of when they said something completely different at the start?

So, what does a manager do when they have a big problem they can't get their hands on? They split it in smaller problems! Divide et Impera! said the Latin managers of old. So the Agile solution was to copy the development process in its entirety, make it as small as possible, then repeat it until it reaches the original size. It's like when you kill the monster at the boss level and he splits in many small bosses (let's call them middle managers) that you have to kill individually to continue the game. Hopefully I have explained this in a way most code monkeys would understand :).

What about empowering the developer? It was a really nice side effect, one that was taken into consideration at the beginning, of course, but I am sure it exceeded the expectations of the designers of the process. You see, at the end of each iteration the application should work, somehow, and also be tested, documented, code reviewed and client approved. The client will undoubtedly notice a discrepancy between what he wanted and what was produced, but he can give feedback a lot sooner than at the end of the project. The developer doesn't have to work their ass off to finish the product, sometimes ignoring the most basic testing or documenting techniques in favour of speed, and then be forced to start work almost from scratch because the client either changed his mind or was not understood properly. Therefore there is an increase in motivation and thus productivity and a decrease in unnecessary work. This is also incorporated in one of the agile principles, YAGNI, which is actually a really buzzy and ugly acronym for "you aren't gonna need it" - in other words, do the least work to achieve the exact desired result.

But most importantly, this is a managerial process, one that is now easy to analyse and even quantify (since the idea to do as much automated testing as possible). If the bugs are too numerous, then testing must be improved. If code is too obscure, code review and refactoring must be done. If nobody knows how to use the product, documentation is needed. But only for the last small bit that was done. Even more cool, the documentation, code review and even refactoring can be done on the code from the previous iteration, while coders are working on the current one!

So, to summarize: to be agile means that your management has decided on a new strategy and you understand the principles enough to get the tools that would make it easier to work under them, as well as design your applications to neatly mold to the concept. It does NOT mean you get some library or development tool and it does things for you. There is no agilizer application (yet? :) take a look at Pex) and the different software patterns like MVC or MVP are used to facilitate the technical solution found to solve the managerial problem of quantifying results (in this case quality), which is Unit Testing.

It's not that Agile Development is not a good idea. Managers don't really see eye to eye with programmers because they have completely different goals - managers see things from the business perspective while developers see it from the beauty and functionality of code. It's like physicists and mathematicians all over again - but both are (or should be) scientific types that try to solve problems in the best possible way. Agile is a happy intersection of their goals, but as it happens, must be implemented in both worlds simultaneously.

Sometimes you need a piece of information like the time taken for a web page to actually reach the client. It is different from the time it takes to create the rendered content, as it includes some web server overhead and the actual network transfer. IIS doesn't know anything about your code, so you can't tell it to log everything you need. How do you synchronize the information in the IIS log with the one in your own logging system?

Use the Response.AppendToLog method, which will add a custom string to the end of the IIS logged cs-uri-query field. That doesn't help you much by itself, but since you can add any string you want, you can add a key that would help synchronize the two logs.

Quick example:
string key=Guid.NewGuid().ToString();
Response.AppendToLog(" key=["+key+"]");
MyLogger.Write(myInformation,key);


Now you will only have to Regex the cs-uri-query field to find the key, then search for the corresponding line in your own log. Simple! Sort of...
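A small hedged sketch of the matching side, assuming csUriQuery holds the cs-uri-query value read from an IIS log line:

// extract the GUID that AppendToLog wrote as key=[...]
var match = System.Text.RegularExpressions.Regex.Match(csUriQuery, @"key=\[([^\]]+)\]");
if (match.Success)
{
    string key = match.Groups[1].Value; // use it to find the matching line in your own log
}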

We were working on a project for a company that suddenly started complaining of slow ASP.Net pages. I optimised what I could, but it seemed to me that it ran pretty fast. Then I found out that some of the customers use a slow Internet connection. The only way to test this was to simulate a slow connection.

But how can one do that on IIS 5.1, the Windows XP web server? After a while of searching I realised that it was the wrong question. I don't need this for other projects and if I did I certainly wouldn't want to slow the entire web server to check it out. Because yes, changing the metadata of the server can, supposedly, change the maximum speed at which pages are delivered. But it was simply too much hassle and it wasn't a reusable solution.

My way was to create a Filter for the Response of all pages. Response.Filter is supposed to be a Stream that receives as a constructor parameter the previous Response.Filter (which at the very start is Response.OutputStream) and does something to the output of the page. So I've created a BandwidthThrottleFilter object and added it in the MasterPage Page_Load:
Response.Filter = new BandwidthThrottleFilter(Response.Filter, 10000);
It worked.

Now for the code. Follow these steps:
  1. Create a BandwidthThrottleFilter class that inherits from the abstract class Stream
  2. Add a constructor that receives as parameters a Stream and an integer
  3. Add fields that will get instantiated from these two parameters
  4. Implement all abstract methods of the Stream object and use the same methods from the Stream field
  5. Change the Write method to also call a Delay method that receives as parameter the count parameter of the Write method


That's it. You only need to create the Delay method, which will do a Thread.Sleep for the duration of time it should normally take to transfer that amount of bytes. Of course, this assumes that the time the unthrottled transfer takes is negligible. A sketch of the whole class follows.

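A rough sketch of such a filter, following the steps above (the delay formula is my own simplification and System.IO and System.Threading are assumed to be imported):

public class BandwidthThrottleFilter : Stream
{
    private readonly Stream _inner;        // the previous Response.Filter
    private readonly int _bytesPerSecond;  // simulated connection speed

    public BandwidthThrottleFilter(Stream inner, int bytesPerSecond)
    {
        _inner = inner;
        _bytesPerSecond = bytesPerSecond;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        Delay(count);
        _inner.Write(buffer, offset, count);
    }

    private void Delay(int count)
    {
        // sleep roughly as long as sending 'count' bytes would take at the chosen speed
        Thread.Sleep((int)(1000.0 * count / _bytesPerSecond));
    }

    // the remaining abstract members just forward to the wrapped stream
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }
    public override bool CanWrite { get { return _inner.CanWrite; } }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { _inner.Position = value; }
    }
    public override void Flush() { _inner.Flush(); }
    public override int Read(byte[] buffer, int offset, int count) { return _inner.Read(buffer, offset, count); }
    public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
    public override void SetLength(long value) { _inner.SetLength(value); }
}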

In my own quest to find interesting books that would help me understand my place as a software developer I've stumbled upon Dreaming in Code, something I knew nothing about other than it featured the word "code" in the title. It had to be good!

In the end the book surpassed my expectations by describing software from a totally different point of view than the programming books I am used to. Dreaming in Code is not a technical book. It can be read by software developers and bored housewives alike. It features a kind and professional tone and the three years of documenting the book can only help put the whole story in perspective.

The storyline is simple: a software visionary decides to start a new project, one that would be open source, innovative and revolutionary, and also a replacement for the slumbering Outlook and Exchange type of software. Scott Rosenberg documents the development process, trying to figure out the answer to the decades long question: why is software hard? What starts very ambitious, with no financial or time constraints, ends up taking more than three years to get to a reasonable 0.6 release, which is when the book ends. The project is still ongoing. They make a lot of mistakes and change their design a lot, but they keep at it, trying to learn from errors and adapt to a constantly changing world.

For me that is both a source of inspiration and concern. If Americans with a long history of software spend millions of dollars and years to create a piece of software that might just as well not work, what chance do I stand trying to figure out the same questions? On the other hand, the spirit of the team is inspirational; they look like a bunch of heroes battling the boring and pointless world of software development I am used to. And of course, there is the little smugness: "Hey, I would have done this better. Give a million dollars to a Romanian and he will build you anything within a month". The problem, of course, is when you try to hire two Romanians! :)

Anyway, I loved this book. It ended before it had any chance of getting boring, it detailed the quest of the developers while at the same time putting everything in the context of great software thinkers and innovators and explaining the origin and motivation behind the most common and taken-for-granted technologies and IT ideas. It is a must read for devs, IT managers and even people who try to understand programmers, like their wives.

Here are some links:
Official book site
Scott Rosenberg's own blog
The official site of the Chandler software project

When one wants to indicate clearly that a control is to perform an asynchronous or a synchronous postback, one should use the Triggers collection of the UpdatePanel. Of course, I am assuming you have an ASP.Net Ajax application and you are stuck on how to indicate the same thing for controls that are inside templated controls like DataGrid, DataList, GridView, etc.

The solution is to get a reference to the page ScriptManager, then use its RegisterPostBackControl method on your postback control. You get a reference to the page ScriptManager with the static ScriptManager.GetCurrent(Page) method. You get the control you need inside the templated control's Item/RowCreated event with e.Item/Row.FindControl("postbackControlID").

So, the end result is:

ScriptManager sm=ScriptManager.GetCurrent(Page);
Control ctl=e.Item/Row.FindControl("MyControl");
sm.RegisterPostBackControl(ctl);
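
For example, in a GridView this could be wired up in the RowCreated event like this (btnFullPostback is an assumed button ID inside the row template):

protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType != DataControlRowType.DataRow) return;

    var button = e.Row.FindControl("btnFullPostback") as Button;
    if (button != null)
    {
        // make this button perform a full (synchronous) postback
        ScriptManager.GetCurrent(Page).RegisterPostBackControl(button);
    }
}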


Of course, if you want it the other way around (set the controls as Ajax async postback triggers) use the RegisterAsyncPostBackControl method instead.

Special thanks to Sim Singh from India for asking me to research this.