I wanted to use this Accordion control on a page and so I specified the ContentTemplate and HeaderTemplate and gave it a DataTable as a DataSource and DataBound it. Nothing! No errors, no warnings, no display of any kind. After a few frustrating minutes of trying to solve it I asked buddy Google about it and it turned out that the Accordion control MUST receive a DataView as a DataSource and nothing else. Using datatable.DefaultView solved it.
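
For reference, the binding that finally worked looks something like this (a sketch; accordion and table are hypothetical names for the Accordion control and the DataTable):

// the Accordion silently displays nothing if given the DataTable itself
accordion.DataSource = table.DefaultView; // a DataView, the only thing it accepts
accordion.DataBind();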

I've been listening to my favourite podcasts, HanselMinutes and .NetRocks, as usual, and I've stumbled upon another gem of a show. It was about Test Driven Development. Why am I talking so much about this, although I don't practice it? Because I am sure I will get around to practicing it. It is not just hype, it is the only way to do software. And I will explain why. But before that, let's talk about a confusion that was cleared up by the show I have been talking about.

The name Test Driven Development is usually associated with Automated Unit Testing. While automated unit testing is mostly used in TDD, it is not required by TDD at all. The badly chosen word Test actually means "meaningful, measurable goals", in other words, the specifications! If you have those, you can test your application against the requirements and determine what is wrong, if anything. Without a clear view of the specs, you cannot tell if the project is performing as needed.

So if you think about TDD as Specifications Driven Development, you realize that you have been doing it all along! Admittedly, now it sounds even more like STD, but hey, sacrifices must be made in the name of improving code blog readability.

Now, I was saying that this is the only way to do software. Actually, I have explained why just above, but I will get into some personal details. I have been "blessed" with a project where the deadline was set before the specifications were drawn. Even worse, the specs did not come from people who really understand the business process, but from people using another piece of software that they want replaced. In other words, we're pretty much inventing ways of porting a badly designed Windows desktop app into ASP.Net. As if this weren't enough, we are also inventing features that were badly described by the client and starting from a partially functional ASP.Net project written by junior programmers.

What a drag! But that was actually not so bad as realizing that my developer output was slow, bad, and overall smelly and ugly. Why was that? Why couldn't I just stop whining and do what I knew had to be done? Because there were no specs! Without clearly drawn specs of not only what I had to do, but also what the initial project was supposed to do, my hands were tied. I could not refactor the code, because I had no way of telling if I broke anything. Have you ever taken a piece of code, made it better, then realized it doesn't work and you don't know why? The fear of that happening is the most important reason why people don't refactor. The next most important factor is a manager who thinks refactoring is just a waste of time and has no vision of the future of the project.

But also, having no vision of what is to be done is the reason why developers are not motivated to do their job. Even the lowliest code monkey has to have a glimpse of the future of what they are doing, otherwise they are effectively flying blind. Software development is just as much of an art as web design. It is actually strange that people don't understand there are many types of art, just as there are many types of scientific thought. Even if we don't actually care how the app is gonna look as long as it does the job, we do feel pride in its functionality, and nothing hurts more than not knowing what that software is supposed to do and having no clear way of measuring our own performance.

OK, enough of this. The bottom line is that a project needs to have clear specifications. The first test for a piece of software is the compiler! You can even call it an automated test! ...but the last test is running through the spec list and determining if it does the job as required. Another podcast said that the process of creating automated tests has, as a side effect, the significant improvement of software quality, not because of the tests themselves, but because of the process of designing the tests. If your tests are meaningful, then you know what the app is to do, you have a clear vision of what failure and success mean, and in the process of test design, you get to ask yourself the questions that lead to understanding the project. THAT is Test Driven Development!

Update: it has come to my attention that this kind of error appears only when debug="true" is set in the web.config. On production sites it might work regardless.

You want to programmatically load a js file in a user control. The user control becomes visible in the page only during async postbacks and is hidden at page load. So you want some method of loading the external javascript file during that asynchronous UpdatePanel magic.

Use ScriptManager.RegisterClientScriptInclude. There is a catch, though. For each script that you want to include asynchronously, you must end it with
if (typeof (Sys) !== 'undefined') Sys.Application.notifyScriptLoaded();
else the Ajax.Net script loader will throw an error like:

Microsoft JScript runtime error: Sys.ScriptLoadFailedException: The script '...' failed to load. Check for:
Inaccessible path.
Script errors. (IE) Enable 'Display a notification about every script error' under advanced settings.
Missing call to Sys.Application.notifyScriptLoaded().
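
The code-behind registration might look something like this sketch (the key and the script path are hypothetical):

// in the user control, during the async postback that makes it visible
protected void Page_Load(object sender, EventArgs e)
{
    ScriptManager.RegisterClientScriptInclude(
        this,                           // the control registering the script
        this.GetType(),                 // the type used to scope the script key
        "myScript",                     // a unique key for this include
        ResolveUrl("~/js/myScript.js")  // the script, which must end with the notifyScriptLoaded call above
    );
}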

For the aspx version, use the ScriptManagerProxy control:
<asp:ScriptManagerProxy runat="server">
    <Scripts>
        <asp:ScriptReference Path="~/js/myScript.js" NotifyScriptLoaded="true" />
    </Scripts>
</asp:ScriptManagerProxy>

So you did some work on an ASP.Net web site and then you decided either to add the Ajax extensions to it or to switch to ASP.Net 3.5 or whatever, and suddenly you get Ajax errors like "Sys is undefined", or you don't see images and the css files are not loaded. And after googling like crazy and finding a zillion possible solutions, you decide to enter into the browser address bar the url of the offending image, css file or even the ScriptResource.axd file containing the ajax javascript, and you see a beautiful ASP.Net error page displaying "Session state is not available in this context.". Huh?

There are some reasons why this might happen, but let's examine the one that actually prompted me to write the article. Someone made a change in global.asax, trying to work with Session there, more precisely in the Application_AcquireRequestState event. The error was returned by the Session property of the HttpApplication object, the one that the global.asax file represents! In fact, this error can only be thrown from there in the current ASP.Net implementations.

First mistake: there was a Session property available on the HttpApplication object and it was immediately assumed to be the same as Page.Session or HttpContext.Current.Session. It is almost the same, except that it throws an error if the underlying Session object is null.

Ok, but why is Session null? You are a smart .Net developer and you know that the Session should be available in that event. Any documentation says so! Yes, but it applies to your page, not to all the IHttpHandler implementations, like ScriptResource.axd! Also, the css and image problems occurred for me only when opening the site in Cassini, not in IIS, so maybe it opens separate threads to load those resources and ignores the session, or something similar, at least at that level.

Well, how does one fix it? Adding in global.asax a condition on the Session property not being null throws the same error, so you have to test HttpContext.Current.Session instead. In other words:
if (HttpContext.Current.Session != null) { /* do stuff with the session */ }
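
In context, a minimal sketch of the guarded handler in global.asax (the session usage itself is hypothetical):

void Application_AcquireRequestState(object sender, EventArgs e)
{
    // this.Session would throw for handlers like ScriptResource.axd;
    // HttpContext.Current.Session simply returns null there
    if (HttpContext.Current.Session != null)
    {
        HttpContext.Current.Session["visited"] = true; // do stuff with the session
    }
}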

Maybe this problem occurs in other places as well, but only the Session property of the HttpApplication will throw the "Session state is not available in this context." error.

I took a test recently, one of those asking ridiculous C# syntax questions rather than trying to figure out if your brain works, but anyway, I got stuck at a question about structs and classes. What is the difference between them?

Credit must be given where it is due: I took the info from a dotnetspider article, by Dhyanchandh A.V., who organised the answer to my question very well:
  • Classes are reference types and structs are value types. i.e. one cannot assign null to a struct
  • Classes need to be instantiated with the new keyword, structs need only be declared
  • When one instantiates a class, it will be allocated on the heap. When one instantiates a struct, it gets created on the stack
  • One works with the reference to a class, but directly with the struct
  • When passing a class to a method, it is passed by reference. When passing a struct to a method, it’s passed by value instead of as a reference
  • One cannot have instance field initializers in structs
  • Classes can have explicit parameterless constructors. Structs cannot
  • Classes support inheritance. But there is no inheritance for structs, except structs can implement interfaces
  • Since struct does not support inheritance, the access modifier of a member of a struct cannot be protected or protected internal
  • It is not mandatory to initialize all fields inside the constructor of a class. All the fields of a struct must be fully initialized inside the constructor
  • A class can declare a destructor, while a struct cannot
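A minimal sketch illustrating the value versus reference behaviour from the list above (the types are hypothetical):

using System;

struct PointStruct { public int X; }
class PointClass { public int X; }

class Demo
{
    static void Main()
    {
        var s1 = new PointStruct { X = 1 };
        var s2 = s1;             // the struct is copied by value
        s2.X = 2;
        Console.WriteLine(s1.X); // 1 - the original is untouched

        var c1 = new PointClass { X = 1 };
        var c2 = c1;             // only the reference is copied
        c2.X = 2;
        Console.WriteLine(c1.X); // 2 - both variables point to the same object
    }
}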

What is the purpose of a struct, then? It seems to be only a primitive type of class. Well, it serves the purpose of backward compatibility with C. Many C functions (and thus COM libraries) use structs as parameters. Also, think of the struct as a logical template over a memory location. One could use the same memory space of an Int32 under a struct { Int16 lo, hi }. Coming from an older and obsolete age, I sometimes feel the need to just read the memory space of a variable and be done with it. Serialization? Puh-lease! Just grab that baby's memory and slap it over someone else's space! :)
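
To make the template idea concrete, here is a sketch using explicit layout attributes, which I believe is the managed way of overlaying fields over the same memory (Int32Parts is a hypothetical name):

using System;
using System.Runtime.InteropServices;

// two Int16 fields mapped over the same four bytes as an Int32
[StructLayout(LayoutKind.Explicit)]
struct Int32Parts
{
    [FieldOffset(0)] public int Value;
    [FieldOffset(0)] public short Lo;
    [FieldOffset(2)] public short Hi;
}

class OverlayDemo
{
    static void Main()
    {
        var parts = new Int32Parts { Value = 0x12345678 };
        // prints 1234 5678 on a little-endian machine
        Console.WriteLine("{0:X4} {1:X4}", parts.Hi, parts.Lo);
    }
}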

One of our sites started exhibiting some strange behaviour when adding a lot of strings to a StringBuilder. The error was intermittent (best kind there is) and the trace showed the error originating from the StringBuilder class and then from within the String class itself (from an external method).

On the web other people noticed this and one possible explanation was that strings need to be contiguous and thus they cannot grow bigger than the largest free contiguous memory block. Remember defragmentation? Heh.

In order to test this, I added a letter to a StringBuilder, then kept appending its content to itself to see where it breaks. It broke at a size of 2^27 (134,217,728 = 0x8000000)! At first I thought it was a StringBuilder bug, but getting sb.ToString() and then adding a letter to it resulted in an error too. Then I thought maybe it is a default size that, for whatever reason, is considered the maximum string size. That would suck. I don't usually need 134Mb of string, but what if I do? I have 2.5Gb of memory in this computer. But no, people reported the same problem with other powers of 2, like 2^29, and on a Vista computer I got 2^28.
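
The test was something like this rough sketch (doubling the content until the exception is thrown):

using System;
using System.Text;

class StringLimitTest
{
    static void Main()
    {
        var sb = new StringBuilder("a");
        try
        {
            while (true)
            {
                sb.Append(sb.ToString()); // double the content each iteration
                Console.WriteLine(sb.Length);
            }
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Failed when trying to double a length of " + sb.Length);
        }
    }
}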

Is it from the fragmentation of memory? Right now I have only 1.8Gb in use. I rather doubt that in the 700Mb of space left there isn't a single 134Mb contiguous memory block. But I cannot prove it one way or another. My guess is that there is another mechanism that interferes and effectively limits the maximum string size.

But what does that mean, anyway? For practical purposes it means that you have to plan not to have strings that big in your applications. Creating a custom string builder might work, but it would have to use some other methods than strings.

I have tried the same thing with byte arrays. While declaring a byte array of 2^28 was possible (probably also because a string uses Unicode characters internally and thus takes twice the memory), writing it to a MemoryStream resulted in an error.

So watch out for these things and try to always keep single variable blocks to manageable sizes.

It has been a pain in the ass for me to use the graphical designer in Visual Studio, so instead I have been writing the markup of web pages and controls by hand. It went several times faster, but still, it lacked the speed of any text editor I have ever seen. Moving up, down, left or right with the keys would make the system lag and jerk. It was never annoying enough to investigate until today.

What happens is that whenever you work in a VS window, it sends events to just about everything on the screen. The more windows you have open, the more it lags. And the culprit for this particular problem: the Properties window. Every time I moved with the keys, it tried to update itself with the information of the object I was moving over. Closing the Properties window fixed it! \:D/

I will write a really short post here, maybe I'll complete it later. I had to create some controls dynamically on a page. I used a ViewState saved property to determine the type of the control I would load (in Page_Load). It all worked fine until I decided to add some ViewState saved properties in the dynamic controls. Surprise! The dynamically created controls did not persist their ViewState and my settings got lost on postback.

It's a simple google search away: dynamic controls created in Page_Load have empty ViewState bags. You should create them in Page_Init. But then again, in Page_Init the page doesn't have any ViewState yet, so you don't know what control type to load. It seemed like a chicken and egg thing until I tried the obvious: since the ViewState is loaded in the LoadViewState method, which is thankfully protected virtual, why not create my control there, after the page loads its view state? And it worked.

Bottom line:
protected override void LoadViewState(object savedState)
{
    base.LoadViewState(savedState);
    // the ViewState is available now, so we know which control type to create
    CreateControl(ControlName);
}

First of all, I seem to be the proverbial man who can't do it, so he teaches it. I've not worked in a Scrum or XP environment, but I did read a few books about them and this is what I gathered. I beg of you to point out any mistruth or inconsistency. You might want to take a look at this previous, more general post on the matter of agile development.

Some key elements of all agile methods I've read about are:
  • the code does not belong to any programmer, in other words anyone can change any piece of code in order to solve an issue
  • the members of the team are interchangeable, so not a bunch of experts in different fields, but people that can do all things (and be easily replaced by people just as agile as them :) )
  • the members of the team must have similar competencies, one cannot do pair programming between a rookie and a senior, for example. That is called teaching :)
  • the client is supposed to change their mind often and unpredictably, one plans for the unplannable
  • the client must be represented in the agile team, so as to not have delays or misunderstandings in requirements


Scrum



The Scrum system does seem to be more of a disciplined way of developing than a method in itself. There are Scrum principles that must be upheld, but if you ignore them, the whole system looks like this:

  • All development is done in fixed time increments called Sprints. Scrum specifies 15 or 30 days, although I bet most dev companies actually plan this per calendar month.
  • At the start of each Sprint, a meeting of 8 hours takes place (so the first day) in which half the time is spent presenting the requests by the Product Owner (in our case that would be either the client or the person that did the analysis) and the other half planning which of the tasks in the Project BackLog (requirements list) can be done in the current Sprint. This last part is the responsibility of the Team (that would be the developers and their team leaders and managers).
  • On the last day of the Sprint two meetings are held: a 4 hour meeting that allows the Team to present what was done in the current Sprint to the Product Owner (an informal meeting that "is intended to bring people together and help them collaboratively determine what the Team should do next") and a 3 hour meeting in which the ScrumMaster (the person in charge of the implementation of Scrum in the project) "encourages the Team to revise, within the Scrum process framework and practices, its development process to make it more effective and enjoyable for the next Sprint"
  • The development is done in the remaining 28 days
  • Each day there is a 15 minute Scrum Meeting held within the Team in which "each Team member answers three questions: What have you done on this project since the last Daily Scrum meeting? What do you plan on doing on this project between now and the next Daily Scrum meeting? What impediments stand in the way of you meeting your commitments to this Sprint and this project? The purpose of the meeting is to synchronize the work of all Team members daily and to schedule any meetings that the Team needs to forward its progress".


What is important about these Sprints is that at the end of each sprint the product should be fully implemented, tested and ready for production. At each increment the client could just take the product and leave. Any changes to the specifications must be included in the backlog and prioritised so that the developers apply them in the next Sprints. Once a Sprint is planned, there are no changes to it.

So, as far as I understand, this is a method of making rigid planning for very small periods of time, then executing it, effectively reducing each project to a bunch of smaller ones. Instead of "Make me a business management application" there will be projects like "Make me a member management interface", then "Add activities management" and so on. It reminds me of the time when I wanted to learn in college and I would divide the number of pages I had to understand and memorize by the number of days remaining till the exam.

I don't consider Scrum a very innovative way of development, although back in 1986 it probably was, but that's also good. One can easily adapt some of these ideas to their own system of development. By allowing the developer to build a finite number of things in a predetermined time, they can select a time to test the application in which they are certain no more requests will delay that process. Of course, I don't know what happens if the client changes their mind about a thing that is supposed to be done in a Sprint. Do we abandon the task in the current Sprint and plan it, modified, in the next? Do we build it as if nothing happened, then start making the changes or, worse, remove it?

XP (Extreme Programming)



The Extreme Programming development method seems to have the same roots as Scrum does. The idea is to develop in successive iterations that encapsulate planning, testing, development and refactoring. The "12 principles" of XP are mentioned again and again in the book, but I think that's crap. The most important ideas in XP, to me at least, seem to be:
  • User stories as requirements gathering; Most important! A detailed story of what the user will do and why, like a narrative, the Word version of a UML flow diagram, which is the responsibility of the client! The actual development is the implementation in code of those stories
  • iterations, which in the case of XP don't have a specific time length, each one is planned depending on what there is to do and what can be done
  • the separation of user and client, the user is the one that actually uses the program, while the client... well, you know
  • user-on-site, you can always ask the user what they think and receive quick feedback
  • Test driven development, which, together with pair programming, seem to be the only actually extreme parts of XP, where they insist on tests first, programming later.
  • Spikes: small bursts of programming for no other reason than to research an idea. Developers don't have to be rigorous in spike programming, since they only write the bit of code, test its functionality, then throw it away, the idea being that they learn how to do the actual code they wanted to write and what problems they might be facing. In this particular case, the spike is part of the planning or designing of a piece of code.


I will mention Pair Programming here as well, although I clearly don't see it happening. The idea is that two programmers sit at the same machine; one programs, while the other does just-in-time code review and thinks of the larger implications of the code. While the concept is sound, and I seldom find myself wanting to be able to code and also think in a larger context, I don't see how this can be done any more than a master painter could get help from a second one who watches from afar and keeps nagging him on how to do things. Besides, sitting next to code that is being written sounds both boring and terribly frustrating.

But then again, I always like talking to other programmers that are as passionate as I am, so maybe a hands-on discussion, even an argument, might provide the drive to good code. Besides, it is harder to waste time on news sites and online games when you have some guy next to you :)

Conclusion



My conclusion is that agile is a solution to the problems that arose during the Waterfall days. It is not a solution to all problems and it certainly presents some level of difficulty in implementation.

I believe it would be hard to do in a small team with high turnover. One needs a stable team that works well together and has a decent management to implement agile development. But I do see it as a positive thing, as it puts the needs of the customer first and, no matter how good a coder you are, your primary goal is to satisfy the client.

Here is a point raised by Tudor, from the infamous Romanian blog it-base.ro. Incidentally, he does some teaching in the Java field and is a good web designer and PHP programmer.

Ok, some of you know those informatics teachers that start talking about a programming language by giving you a silly problem that would never occur in real life or by asking you to "decypher" a piece of code that looks unusable in any scenario. But bear with me and try to think this through before you read on in the post.
Question: what is the value of i (initially 0) after the following operations in C#?
  • i = i++
  • i = Math.Pow(i++,2)
  • some method that ends in return i++


First, let's think about the meaning of this arcane ++ symbol. It means increment a number, or add 1 to it. In C based languages, you can use it either after or before a variable, thus changing the meaning to add 1 to i after or before the assignment. As MSDN says: "The increment operator (++) increments its operand by 1. The increment operator can appear before or after its operand:
++ var
var ++
The first form is a prefix increment operation. The result of the operation is the value of the operand after it has been incremented.
The second form is a postfix increment operation. The result of the operation is the value of the operand before it has been incremented."


Now the answer is pretty clear: i will be zero after any of the operations described above. But why?! Let's examine them a little.

The first makes no sense in the real world, but you could easily imagine something like i = j++ that could be used somewhere and that makes sense: set i to the original value of j, then increment j. But then doesn't it mean that i should be 1, since it gets incremented after the assignment? Well, no: the increment happens after the evaluation of i++, not after the whole statement. What I think happens is that the value of i++ gets pushed on an evaluation stack, then retrieved at the end of the statement, like this:
  push the current value of i (0) on the stack as the value of i++
  increment i (i becomes 1)
  pop the pushed value (0) from the stack and assign it to i


Ok, ok, I guess that makes some sort of sense, but what about that int F(int i) { return i++; } thing? Shouldn't it increment i after the operation and then return it? Apparently not. The method returns the pre-increment value of i; the increment does happen, but on the method's local copy of the parameter, which is then discarded.
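
All three cases, condensed into a small test (note that Math.Pow returns a double in C#, hence the cast):

using System;

class IncrementDemo
{
    // the increment happens on the method's local copy, then is discarded
    static int F(int i) { return i++; }

    static void Main()
    {
        int i = 0;
        i = i++;
        Console.WriteLine(i);       // 0

        i = 0;
        i = (int)Math.Pow(i++, 2);
        Console.WriteLine(i);       // 0

        Console.WriteLine(F(0));    // 0
    }
}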

According to Tudor, PHP would return 1 in these situations, although most C implementations, including javascript, return 0. Update: he later posted a comment retracting that statement. He also suggested it would be hard to debug something like this if it happens. Ha! My beautiful ReSharper Visual Studio addon immediately added a wiggly line under i and said "value assigned is not used in any execution path". ReSharper - Computer teachers: 1-0! :)

This is something I have been meaning to write for quite some time, but actually, I had never had to work with a proper serialization scenario until recently. Here it goes:

First of all, whenever one wants to save an object to a string, they google ".Net serializer" and quickly reach the XmlSerializer, because that's what most people think serialization is. But actually, it is not. The whole point of serializing an object is that you can transfer and store it. Therefore you need to use a format that is as open, clear and standard as possible and to send only the relevant data, which in the case of objects is the PUBLIC data. And for that, the XmlSerializer does its job, although it does have some problems I am going to describe later.

But suppose you didn't really want to send mere data over to another computer, but an entire class, with its state intact, ready to do work as if the transfer never happened? Then you need to FORMAT the object. Enter the IFormatter interface with its most prominent implementation: the BinaryFormatter. Funnily enough, the methods used to spurt an object through a stream are also called Serialize and Deserialize. The advantage of the IFormatter way is that it saves the entire object graph, private members included, and doesn't have all the requirements the XmlSerializer does. It also produces a smaller output. So, is this it? Why use Xml (which everybody secretly hates) when you can use the good ole obscure binary file with almost no trouble? Well, because of the almost. Yes, this way of doing things is not foolproof either.

Some people feel that sending only the data is not serialization, and that saving the complete graph and internal state of the object is. Wikipedia says: "serialization is the process of converting an object into a sequence of bits so that it can be stored on a storage medium (such as a file, or a memory buffer) or transmitted across a network connection link. When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object.". So they bail out by using the obscure phrasing "semantically identical", which pretty much says "they mean the same thing even if their structures may differ". So I think I am in the right, as the BinaryFormatter has real issues with structure change.

Now for the quick and dirty reference:
The XmlSerializer
  • Only serializes public READ/WRITE properties and fields - it doesn't throw any error when trying to serialize readonly properties, so be careful
  • Needs the class to have a parameterless constructor - this pretty much restricts the design of the classes you can serialize
  • Does not work on Dictionaries
  • Does use a Type definition to serialize and deserialize, which means you can still use it if the types are named differently or of different versions, even if they are radically different, as it will only fill the values that it stored and not care about the others
  • There are all sorts of attributes one can decorate their classes with to control serialization as well as some events that are fired during deserialization
  • Has issues with circular references
  • If your class implements IXmlSerializable it can control how the serialization is done
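
A minimal usage sketch (Person is a hypothetical class):

using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
    public string Name { get; set; } // public read/write: gets serialized
    public int Age { get; set; }
    // the implicit parameterless constructor satisfies the requirement
}

class XmlDemo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Person));
        var writer = new StringWriter();
        serializer.Serialize(writer, new Person { Name = "John", Age = 42 });
        string xml = writer.ToString();

        var clone = (Person)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine(clone.Name); // John
    }
}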

The BinaryFormatter
  • It serializes both public and private, readonly or read/write properties and fields as long as their type classes are marked as Serializable - that sucks for classes that are not yours
  • It's a rigid method of serializing objects - if you change the source or destination objects or even their namespace, the deserialization won't work
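
And the equivalent sketch for the BinaryFormatter (again with a hypothetical class):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable] // required, otherwise Serialize throws
class Person
{
    public string Name;
    private int secret = 42;                     // private state is saved too
    public int Secret { get { return secret; } }
}

class BinaryDemo
{
    static void Main()
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, new Person { Name = "John" });
            stream.Position = 0;
            var clone = (Person)formatter.Deserialize(stream);
            Console.WriteLine(clone.Secret); // 42 - the private field survived
        }
    }
}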

The SoapFormatter
  • Just when I thought that a class that combines the benefits of both BinaryFormatter and XmlSerializer exists, it appears it has been obsoleted in .Net 3.5. Besides, it did far less than the BinaryFormatter


It seems that Microsoft's idea of serialization blatantly differs from mine. I would have wanted a class that can serialize binary or Xml based on a simple property, send public OR both types of fields and properties, and be flexible in how decorating attributes are used and what the output is. In my project I had to switch from the BinaryFormatter, which seemed to solve all problems, to the XmlSerializer (thus having to change a lot of the classes and the design of the app) just because the type of the class sent by the client application could not have the same namespace as the one on the server.

That doesn't mean one cannot build their own class to do everything I mentioned above, of course. Here are some CodeProject examples:
A Deep XmlSerializer, Supporting Complex Classes, Enumerations, Structs, Collections, and Arrays
AltSerializer - An Alternate Binary Serializer.

Update: the .Net framework 3.5 has added an object called JavaScriptSerializer which turns an object into a single line of text, JSON style. It worked great for me in order to log hierarchical or collection based data. Use it just like the XmlSerializer.
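
Usage is pretty much a one-liner (a sketch; it needs a reference to System.Web.Extensions, and the object and type names are hypothetical):

using System.Web.Script.Serialization;

var serializer = new JavaScriptSerializer();
string json = serializer.Serialize(someObject);  // a single line of JSON text
var copy = serializer.Deserialize<SomeType>(json);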

Another link to check out is this one: Fast Serialization, but read the entire article before using any code.

Well, I was trying to use for the first time the Microsoft ReportViewer control. I used some guy's code to translate a DataSet XML to RDLC and ran the app. Everything went OK except for the export to Excel (which was the only real reason why I would attempt using this control). Whenever I tried exporting the report, the 'Asp.net Session expired' error would pop up.

Googling I found these possible solutions:
  • set the ReportViewer AsyncRendering property to false - doesn't work
  • use the IP address instead of the server name, because the ReportViewer has issues with the underscore character - doesn't work, although I didn't have any underscore characters in my server name to begin with
  • set the maximum workers from 2 to 1 in the web.config (not tried, sounds dumb)
  • setting cookieless to true in the web.config sessionState element - it horribly changed my URL, and it worked, but I would never use that
  • setting ProcessingMode to local - that seemed to work, but then it stopped working, although I am not using the ReportViewer with a Reporting Services server
  • Because towards the end I've noticed that the problem was not an expiration, but more of a Session communication problem, I tried setting machineKey in web.config, although it doesn't work for the InProc setting. So it didn't work either.


For a few days, this blog post showed the last solution as working. Then it failed. I don't know why. I fiddled with the RDLC file a little (arranging columns and colors and stuff) and then it seemed to work again. I have no idea why.

I got mad and used Reflector to get the source code of the ReportViewer control and see where it all happens and why! I have found the following:
  • the error message looks like this:
    ASP.NET session has expired
    Stack Trace:
    [AspNetSessionExpiredException: ASP.NET session has expired]
    Microsoft.Reporting.WebForms.ReportDataOperation..ctor() +556
    Microsoft.Reporting.WebForms.HttpHandler.GetHandler(String operationType) +242
    Microsoft.Reporting.WebForms.HttpHandler.ProcessRequest(HttpContext context) +56
    System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +181
    System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75
  • the error occurs in the constructor of ReportDataOperation:
    public ReportDataOperation()
    {
        this.m_instanceID =
            HandlerOperation.GetAndEnsureParam(requestParameters, "ControlID");
        this.m_reportHierarchy =
            (ReportHierarchy) HttpContext.Current.Session[this.m_instanceID];
        if (this.m_reportHierarchy == null)
            throw new AspNetSessionExpiredException();
    }
  • the Session object that makes the error be thrown is set in the SaveViewState() override method in ReportViewer
  • Apparently, the error occurs only sometimes (probably after the HttpApplication was restarted and during the debug mode of Visual Studio).


This reminded me of the time when I was working on a Flash file upload control and I used a HttpHandler to get the flash file from the assembly. Back then the control would not work with FireFox and some other browsers which would use different sessions for the web application and for getting the Flash from the axd http handler.

This time it works with FireFox and IE, but it fails in debug mode and only in IE. I am using IE8, btw.

My conclusion is that
  1. the ReportViewer control was poorly designed
  2. the ASP.Net Session expired error misdirects the developer, since it is not an expiration problem
  3. the actual problem lies in the inability of the ReportViewer control to communicate with the HttpHandler
  4. the problem could also be related to the browser using separate threads to get the application and access the HttpHandler

I went to this presentation of a new Microsoft concept called Windows Azure. Well, it is not so much a new concept as Microsoft entering the distributed computing competition. Like Google, IBM and - most notably - Amazon before it, Microsoft is using large computer centers to provide storage and computing as services. So, instead of having to worry about buying a zillion computers for your web farm, managing the failed components, the backups, etc., you just rent the storage and computing and create an application using the Windows Azure SDK.

As explained to me, it makes sense to use a large quantity of computers, especially structured for this cloud task, with centralized cooling, automated update, backup and recovery management, etc., rather than buying your own components. More than that: since the computers run the tasks of all customers, there is a more efficient use of CPU time and storage/memory.

You may pay some extra money for the service, but it will closely follow the curve of your needs, rather than the ragged staircase that a self managed system usually is. You see, you would have to buy all the hardware resources for the maximum amount of use you expect from your application. Instead, with Azure, you just rent what you need and, more importantly, you can unrent when the usage goes down a bit. What you need to understand is that Azure is not a hosting service, nor a web farm proxy. It is what they call cloud computing, the separation of software from hardware.

Ok, ok, what does one install and how does one code against Azure? There are some SDKs. Included is a mock host for one machine and all the tools needed to build an application that can use the Azure model.

What is the Azure model? You have your resources split into storage and computing workers. You cannot access the storage directly and you have no visual interface for the computing workers. All the interaction between them or with you is done via REST requests, http in other words. You don't connect to SQL Servers, you use SQL Services, and so on and so on.



Moving a normal application to Azure may prove difficult, but I guess they will work something out. As with any new technology, people will find novel problems and the solutions for them.

I am myself unsure of the minimum size of an application at which it becomes more effective to use Azure rather than your own servers, but I have to say that I like at least the model of software development. One can think of the SDK model (again, using only REST requests to communicate with the Azure host) as applicable to any service that implements the Azure protocols. I can imagine building an application that would take a number of computers in one's home, office, datacenter, etc. and transform them into an Azure host clone. It wouldn't be the same, but assuming that many of your applications worked on Azure or something similar, maybe even a bit of the operating system, why not, then one could finally use all those computers gathering dust while sitting at 5% CPU usage, or not even turned on, to do real work. I certainly look forward to that! Or, and that would be really cool, a peer to peer cloud computing network, free to use by anyone entering the cloud.

Well, it may have been obvious for many, but I had no idea. Whenever I was writing a piece of code that uses a StringBuilder I was terribly upset by the lack of a substring method.

It was there all along, only it is the ToString method. You give it the startIndex and length and you're set.
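
For example:

var sb = new StringBuilder("Hello, world!");
string word = sb.ToString(7, 5); // startIndex 7, length 5: "world"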

Looking at the source, I see that in .Net 1.1 this method is only a normal Substring on the internal string used by the string builder. In .Net 2.0, the method uses an InternalSubStringWithChecks internal method of the string class, which in turn uses an InternalSubString unsafe method that seems to be more basic and thus faster.

Actually, I think this applies to any dynamic modification of drop down list options. Read on!

I have used a CascadingDropDown extender from the AjaxControlToolkit to select regions and provinces based on a web service. It was supposed to be painless and quick. And it was. Until another dev showed me a page giving the horribly obtuse 'Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/> in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature verifies that arguments to postback or callback events originate from the server control that originally rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation method in order to register the postback or callback data for validation.'. As you can see, this little error message basically says there is a problem with a control, but doesn't care to disclose which. There are no events or overridable methods to enable some sort of debug.

Luckily, Visual Studio 2008 has source debugging inside the .Net framework itself. Thus I could see that the error was caused by the drop down lists I mentioned above. Google told me that somewhere in the documentation of the CascadingDropDown extender there is a mention of setting enableEventValidation to false. I couldn't find the reference, but of course, I didn't look too hard, because that is simply stupid. Why disable event validation for the entire page because of a control? It seems reasonable that Microsoft left it enabled for a reason. (Not that I accuse them of being reasonable, mind you.)

Analysing further, I realised that the error kind of made sense. You see, the dropdownlists were not bound with data that came from a postback. How can one POST a value from a select html element if the select did not have it as an option? It must be a hack. Well, of course it was a hack, since the cascade extender filled the dropdown list with values on the client.

I have tried to find a way to override something and make only those two dropdownlists skip event validation. Couldn't find any way to do that. Instead, I've decided to register all possible values with Page.ClientScript.RegisterForEventValidation. And it worked. What I don't understand is why this error occurred only now, and not in the first two pages I built and tested. That is still to be determined.

Here is the code

foreach (var region in regions)
    Page.ClientScript.RegisterForEventValidation(
        new PostBackOptions(ddlRegions, region)
    );


It should be used in a Render override, since the RegisterForEventValidation method only allows its use in the Render stage of the page cycle.
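
So the whole thing might look like this sketch (regions and ddlRegions being the same hypothetical names as above):

protected override void Render(HtmlTextWriter writer)
{
    // register every value the extender might put in the dropdown
    foreach (var region in regions)
        Page.ClientScript.RegisterForEventValidation(
            new PostBackOptions(ddlRegions, region));
    base.Render(writer);
}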

And that is it. Is it ugly to load all possible values in order to validate the input? Yes. But how else could you validate the input? A little more work to avoid a hidden bug that appears when you least expect it, and now even the input from those drop downs is more secure.

Update:
My control was used in two pages with EnableEventValidation="false" and that's why it didn't throw any error. Anyway, I don't recommend setting it to false. Use the code above. BUT, if you don't know where the code goes or you don't understand what it does, better use this solution and save us both a lot of grief.