A few days ago a coworker asked me about implementing an autocomplete textbox with not only text items, but also images. I thought, how hard can it be? I am sure the guys that made the AutoCompleteExtender in the AjaxControlToolkit thought about it. Yeah, right!

So, I needed to tap into the list-showing mechanism of the AutoCompleteExtender, then (maybe) into the item-selected mechanism. The AutoCompleteExtender exposes the OnClientShowing and OnClientItemSelected properties. Each expects the name of a function that accepts a behaviour parameter and an args parameter.

Ok, the extender creates an html element to contain the completion list items, or gets one from the CompletionListElementID property (which is obsolete anyway). It creates an LI element for each item (or a DIV, if CompletionListElementID is set). So all I had to do was iterate through the childNodes of the container element and change their content.

Then, on item selected, the AutoCompleteExtender unfortunately tries to take the text value from firstChild.nodeValue, which pretty much fails if the first child of the item element is not a text node. So we will tap into OnClientItemSelected, whose args object contains the item, the text extracted as mentioned above (useless to us), and the object that was passed from the web service that provided the completion list. That last one is the one we need, but keep reading.

So the display is easy (after you get the hang of the Microsoft patterns). But now we have to return a list of objects, not mere strings, in order to get all the information we need, like the text and the image URL. Here is the piece of extender code that interprets the values received from the web service:
// Get the text/value for the item
try {
    var pair = Sys.Serialization.JavaScriptSerializer.deserialize('(' + completionItems[i] + ')');
    if (pair && pair.First) {
        // Use the text and value pair returned from the web service
        text = pair.First;
        value = pair.Second;
    } else {
        // If the web service only returned a regular string, use it for
        // both the text and the value
        text = pair;
        value = pair;
    }
} catch (ex) {
    text = completionItems[i];
    value = completionItems[i];
}


In other words, it first tries to deserialize the received string, then checks if the result is a Pair object (if it has a First property); otherwise it passes the deserialized object as both text and value! If deserialization fails, the original string is used as-is. Bingo! So on the server side we need to serialize the array of values we want to send for each item, and we do that using System.Web.Script.Serialization.JavaScriptSerializer. You will see how it fits into the code below.

So far we can display what we want and send what we want; all that remains is to define how we want the completion items to appear. For that I could have used a simple string property, but I wanted all the goodness of Intellisense in Visual Studio and all the objects I want, without having to manually Render them into strings.

So, the final version of the AutoCompleteExtender with images is this: a class that inherits AutoCompleteExtender and also implements INamingContainer. It has an ItemTemplate property of type ITemplate which holds the template we want for each item. You also need a web service that uses the JavaScriptSerializer to construct the strings it returns.

Here is the complete code:
AdvancedAutoComplete.cs

using System;
using System.IO;
using System.Web.UI;
using System.Web.UI.WebControls;
using AjaxControlToolkit;

namespace Siderite.Web.WebControls
{
    /// <summary>
    /// AutoCompleteExtender with templating
    /// </summary>
    public class AdvancedAutoComplete : AutoCompleteExtender, INamingContainer
    {
        private ITemplate _template;

        [TemplateContainer(typeof(Content))]
        [PersistenceMode(PersistenceMode.InnerProperty)]
        public ITemplate ItemTemplate
        {
            get { return _template; }
            set { _template = value; }
        }

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            const string script = @"
function AdvancedItemDisplay(behaviour, args) {
    var template = behaviour.get_element().getAttribute('_template');
    //if (!template==null) template='${0}';
    for (var i = 0; i < behaviour._completionListElement.childNodes.length; i++) {
        var item = behaviour._completionListElement.childNodes[i];
        var vals = item._value;
        var html = template;
        for (var c = 0; c < vals.length; c++)
            html = html.replace(new RegExp('\\$\\{' + c + '\\}', 'g'), vals[c]);
        item.innerHTML = html;
    }
}

function AdvancedSetText(behaviour, args) {
    var vals = args._value;
    var element = behaviour.get_element();
    var control = element.control;
    if (control && control.set_text)
        control.set_text(vals[0]);
    else
        element.value = vals[0];
}
";
            ScriptManager.RegisterClientScriptBlock(this, GetType(), "AdvancedAutoComplete", script, true);

            OnClientShowing = "AdvancedItemDisplay";
            OnClientItemSelected = "AdvancedSetText";
        }

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            string template = GetTemplate();
            ((TextBox)TargetControl).Attributes["_template"] = template;
        }

        private string GetTemplate()
        {
            Content ph = new Content();
            ph.Page = Page;
            _template.InstantiateIn(ph);
            HtmlTextWriter htw = new HtmlTextWriter(new StringWriter());
            ph.RenderControl(htw);
            return htw.InnerWriter.ToString();
        }
    }
}


MainService.cs

using System.Collections.Generic;
using System.Web.Script.Serialization;
using System.Web.Script.Services;
using System.Web.Services;

/// <summary>
/// Web service to send auto complete items to the AdvancedAutoComplete extender
/// </summary>
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ScriptService]
public class MainService : WebService
{
    [WebMethod]
    [ScriptMethod]
    public string[] GetCompletionList(string prefixText, int count)
    {
        JavaScriptSerializer jss = new JavaScriptSerializer();
        List<string> list = new List<string>();
        for (int c = 0; (list.Count < count) && (c < 100000); c++)
        {
            string s = (c * c).ToString();
            if (s.StartsWith(prefixText))
            {
                // each completion item is a serialized array: [text, imageUrl]
                object[] item = new object[] { s, "images/demo.gif" };
                list.Add(jss.Serialize(item));
            }
        }
        return list.ToArray();
    }
}



An example of its use:

<asp:TextBox runat="server" ID="tbMain"></asp:TextBox>
<cc1:AdvancedAutoComplete ID="aceMain" runat="server" BehaviorID="mhMain" TargetControlID="tbMain"
ServiceMethod="GetCompletionList" ServicePath="~/MainService.asmx" MinimumPrefixLength="0"
CompletionInterval="0">
<ItemTemplate>
<asp:Image runat="server" ID="imgTest" ImageUrl="${1}" /><asp:Label runat="server"
ID="lbTest" Text="${0}"></asp:Label>
</ItemTemplate>
</cc1:AdvancedAutoComplete>


That's it! All you have to do is make sure the controls in the template render ${N} placeholders that get replaced with the first, second, Nth value in the list sent by the web service. The text that ends up in the textbox is always the first value in the list (${0}).

Restrictions: if you want to use this as a control in a library and THEN add some more functionality on the Showing and ItemSelected events, you need to take into account that those are not real events, but javascript functions, and the design of the AutoCompleteExtender only accepts one function name. You could create your own function that also calls the one described here (see the sketch below), but that's beside the point of this blog entry.
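
For completeness, here is a minimal sketch of that chaining idea. The OnClientItemSelectedExtra property and the wrapper function are my own invention, not toolkit API; the idea is to extend the OnPreRender override of AdvancedAutoComplete:

private string _onClientItemSelectedExtra;

// hypothetical extra hook; a library user sets this to the name of their own function
public string OnClientItemSelectedExtra
{
    get { return _onClientItemSelectedExtra; }
    set { _onClientItemSelectedExtra = value; }
}

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    ((TextBox)TargetControl).Attributes["_template"] = GetTemplate();
    if (!String.IsNullOrEmpty(OnClientItemSelectedExtra))
    {
        // register a wrapper that calls our handler first, then the user's function
        string script = @"
function AdvancedSetTextChained(behaviour, args) {
    AdvancedSetText(behaviour, args);
    " + OnClientItemSelectedExtra + @"(behaviour, args);
}";
        ScriptManager.RegisterClientScriptBlock(this, GetType(),
            "AdvancedSetTextChained", script, true);
        OnClientItemSelected = "AdvancedSetTextChained";
    }
}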

Usually when I blog something, I write about both the problem and the solution I have found. In this case, given also the lack of pages describing the same problem, I have decided to blog about the problem only. If you guys find the solution, please let me know. I will post it here as soon as I find it myself. So here it is:

We started creating some tests for one of our web applications. My colleague created the tests, amongst them one that does a simple file upload. She used the following code:
var fu = ie.FileUpload(Find.ByName("ctl00$ContentPlaceHolder1$tcContent$tpAddItem$uplGalleryItem$fuGalleryItem"));
fu.Set(UploadImageFile);

and it worked perfectly. She was using WatiN 1.2.4 and MBUnit 2.4.

I had WatiN 2.0 installed and MBUnit 3.0. I downloaded the tests, removed the ApartmentState thing that no longer seems necessary in MBUnit 3.0, and ran them.
On my computer the FileUpload Set method opens a file upload dialog and stops. I've tried a lot of code variants, to no avail; I've uninstalled both MBUnit and WatiN and installed the 1.2.4 and 2.4 versions. I actually tried all possible combinations, using the .NET 1.1 and 2.0 libraries and changing the code. Nothing helped. On my computer the setting of the file name doesn't work.

I've examined the WatiN source and noticed that it uses a FileUploadDialogHandler that determines whether a window is a file upload window by checking a Style property. I have no idea if that is the correct approach, but just to be sure I inherited my own class from FileUploadDialogHandler and instructed it to throw an exception with a message containing the style of the first window it handles. The exception never fired, so I am inclined to believe that the dialog handler mechanism somehow fails on my computer!

I have no idea what to do. I have a Windows XP SP3 with the latest updates and I am running these tests in Visual Studio 2008 Professional.

Update:
The only possible explanation left to me is that Internet Explorer 8 is the culprit, since my colleagues all have IE7. The maker of WatiN himself declared that identifying the windows by style is not the most elegant method possible, but he had no other way of doing it. My suspicion is that the window handling doesn't work at all in IE8, but I have no proof for it and so far I have found no solution for this problem.

I've spent about a day on a thing that I can only consider a FireFox bug. In a complete reversal of what I would expect from a javascript script, it worked everywhere but in FireFox! And in FireFox 2.1, no less; I haven't even installed 3.0 yet.

It concerned a simple javascript function from a third party that was supposed to get the absolute position of an element when clicked. I had written one myself a while ago, but that didn't work either! Here is the function that I was trying to fix:
function getPos(n) {
    var t = this, x = 0, y = 0, e, d = t.doc, r;

    n = t.get(n);

    // Use getBoundingClientRect on IE, Opera has it but it's not perfect
    if (n && isIE) {
        n = n.getBoundingClientRect();
        e = t.boxModel ? d.documentElement : d.body;
        x = t.getStyle(t.select('html')[0], 'borderWidth'); // Remove border
        x = (x == 'medium' || t.boxModel && !t.isIE6) && 2 || x;
        n.top += t.win.self != t.win.top ? 2 : 0; // IE adds some strange extra cord if used in a frameset

        return {x : n.left + e.scrollLeft - x, y : n.top + e.scrollTop - x};
    }

    r = n;
    while (r) {
        x += r.offsetLeft || 0;
        y += r.offsetTop || 0;
        r = r.offsetParent;
    }

    r = n;
    while (r) {
        // Opera 9.25 bug fix, fixed in 9.50
        if (!/^table-row|inline.*/i.test(t.getStyle(r, "display", 1))) {
            x -= r.scrollLeft || 0;
            y -= r.scrollTop || 0;
        }

        r = r.parentNode;

        if (r == d.body)
            break;
    }

    return {x : x, y : y};
}


As you see, it is a little more complex than my own, although I don't know if it works better or not.

Anyway, I found that the problem was simple enough: the element I was clicking did not have an offsetParent! Here is a forum thread which discusses a possible cause for it. Apparently the Gecko rendering engine that FireFox uses does not compute offsetParent, offsetTop or offsetLeft until the page has finished loading. I didn't find anything more detailed, and there were only a few pages that seemed to report a problem with a null offsetParent in FireFox.

I tried to solve it, but in the end I gave up. My only improvement to the script was this line:
while (r && !r.offsetParent) {
    x += r.offsetLeft || 0;
    y += r.offsetTop || 0;
    r = r.parentNode;
}
which resulted in a more localised position, i.e. the position of the closest parent for which I could calculate a position.

In the end the problem was solved by restructuring the way the dynamic elements on the page were created, but I still couldn't find either an official cause or a way to replicate the issue in a simple, separate project. My guess is that some types of DOM manipulation performed while the page is loading (in other words, scripts that are just dropped on the page rather than run in the window 'load' event, and which change the page element tree) lead to FireFox forgetting to compute the offset values, or even assuming that the page never finished loading.

I've stumbled upon a little VS2008 addon that I think could prove very useful. It's called Clone Detective. Here is how you use it:
  • Make sure VS2008 is closed
  • Download and install the setup file (additionally, the source is freely available!)
  • Open VS2008 and load a solution up
  • Go to View -> Other Windows -> Clone Explorer
  • Click the Run Clone Detective button


Now you should be able to see the percentage of cloned code in each file, and also see the cloned code as vertical lines in the right margin, next to the code.

Also, you might get a Sys.WebForms.PageRequestManagerServerErrorException with code 500 when using Ajax. It usually happens when you click on a button in a GridView or another bound control. You expect it to work, but it doesn't, even if the code seems perfectly clear.

The answer is (probably) that you are binding the bound control every time you load the page (instead of only on !IsPostBack). The thing is, this used to work in ASP.Net 2.0.
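
A minimal sketch of the fix, assuming a GridView called gvItems and a hypothetical GetItems() data method:

protected void Page_Load(object sender, EventArgs e)
{
    // bind only on the first load; on postbacks the GridView is
    // repopulated from ViewState, so the clicked row can still be found
    if (!IsPostBack)
    {
        gvItems.DataSource = GetItems();
        gvItems.DataBind();
    }
}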

Bottom line: check your binding, see if you are not doing any DataBind in between the button click and the eventual event catch.


We started our first ASP.Net 3.5 project and today I had to work with the LINQ database access. Heralded by many as a complete revolution in the way of doing ORM, LINQ is not that simple to move to. A lot of stuff that has become second nature to me as a programmer must now be thrown in the garbage bin.

I'll skip the "how to do LINQ" part (there are far too many tutorials on the net already) and get down to the problem. Let me give you a simple example. You want to select a User from the Users table. Let's see how I would have done it until now:

User u = User.SelectById(idUser);

or maybe

User u = new User();
u.IdUser = idUser;
u.SelectOne();


In LINQ you do it like this:

var c = new MyDataContext();
User user = (from u in c.Users where u.IdUser == idUser select u).FirstOrDefault();

or maybe

var c = new MyDataContext();
User user = c.Users.FirstOrDefault(u => u.IdUser == idUser);



I am typing this by hand, forgive me the occasional typos or syntax errors.

Well, my eyes scream for an encapsulation of this into the User class. We'll do that by creating a new User.cs file that contains a partial User class that just molds onto the LINQ generated one, then adding methods like SelectById, as sketched below.
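
A minimal sketch of such a method, assuming the generated context class is called MyDataContext, as in the snippets above:

public partial class User
{
    public static User SelectById(int idUser)
    {
        // the DataContext is created and discarded right here,
        // which is exactly what causes the trouble described below
        MyDataContext c = new MyDataContext();
        return c.Users.FirstOrDefault(u => u.IdUser == idUser);
    }
}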

Ok, I have the bloody user. I want to change something, let's say rename him or change the password, then save the changes to the database. Here is where it gets tricky.

User u = User.SelectById(idUser);
u.Password = "New Password";
....?!



In LINQ you need to call DataContext.SubmitChanges(), but since I encapsulated the select functionality in the User class, I have no reference to the DataContext. Now here are a few solutions for doing this without spreading the LINQ queries all over the interface code:

  1. Add the methods to the context class itself, stuff like SelectUserById(). You would still need to instantiate the DataContext for every query, though.
  2. Add a second out or ref parameter to the methods so that you can get a reference to the DataContext used (see the sketch after this list).
  3. Make a method for each operation, like User.SetPasswordById(); that would quickly become quite cumbersome.
  4. Add a reference to the DataContext in the User object, but that would become troublesome for operations like SelectAll.
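
For illustration, here is a minimal sketch of option 2, using the same made-up names as above:

public static User SelectById(int idUser, out MyDataContext context)
{
    // hand the context back to the caller, so it can submit changes later
    context = new MyDataContext();
    return context.Users.FirstOrDefault(u => u.IdUser == idUser);
}

// usage:
MyDataContext c;
User u = User.SelectById(idUser, out c);
u.Password = "New Password";
c.SubmitChanges();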

I am still not satisfied with any of these solutions. I am open to suggestions. I will also link to this little article I found, which suggests encapsulating the LINQ queries in separate extension methods: Implementing ORM-independent Linq queries

You usually get this error in Internet Explorer version 6 (but later versions might exhibit it too) on pages that seem to be loading ok in the background. You press OK and the page disappears, and the regular error page is displayed.

What is happening is that javascript is trying to change an html element that has not finished loading. The usual cause of this problem is an embedded script block that executes as it loads, rather than on the onload event of the page.

The fix is easy. Take all the scripts that execute as they load and either mark them as defer or wrap those commands in a function executed on the onload event of document.body. Be careful with defer, though. This post describes the various implementations of the keyword in the various browsers.

Yay! My first real SilverLight post :)

Anyway, the problem is with controls in SilverLight that expect a URI as a parameter. It simply doesn't work. After trying all kinds of stuff and googling a little, I found out that:
  1. the path is relative to the web application ClientBin directory
  2. the path cannot contain .. or other directory/URI navigation markers like ~
  3. the SilverLight control does not have access to a Request object like a web page does


This link already discusses it: Silverlight 2.0 Beta 1 Uri inconsistency, but I also have an additional solution to the ones listed there.

Here are the solutions for this:
  • Provide an absolute Uri either statically or by generating it with this piece of code:
    new Uri(Application.Current.Host.Source.AbsolutePath + "../../../video.wmv", UriKind.Absolute)
  • Copy the images, videos, files into the ClientBin directory then only provide their name
  • Create virtual directories inside ClientBin that point to your resource directories. For example create a virtual directory Videos that points to the ~/Resources/Videos folder in the site, then simply use Videos/video.wmv as the URI


Of these three, I find the last the most elegant, even if it might make the setup of the website itself a lot more difficult.

ASP.Net 2.0 added a very useful thing, the '~' sign, which indicates that a path is relative to the application directory. Since the application itself should be indifferent to its name, this little thing comes in useful when trying to set the location of user controls, style sheets and so on. The problem is that this feature only applies to server controls. That is not a problem for most tags, since even a link tag can be turned into a server control with a runat=server attribute.

The problem comes when trying to set the location of javascript script blocks. A script block with runat=server must be a server code block by definition, i.e. C# or VB, whatever the site language is set to. One solution often used is to put a code block inside the src attribute of the script tag, like this:
<script language="Javascript" type="text/javascript" 
src='<% =ResolveUrl("~/scripts/myScript.js")%>'></script>

But this is still a nasty issue, because of the dreaded "The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>)" error. As in the post I linked to, the solution for this is to place all external script blocks within a server control like a PlaceHolder, or even a div with the runat=server attribute set. The drawback to this solution is that you can't do that in the <head> section of the HTML output.

The other solution is to use the ScriptManager.RegisterClientScriptInclude method from within the server code to add the script programmatically to the page. While this is elegant, it also defeats the purpose of the aspx page, which is to separate content from code.
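
For reference, a minimal sketch of that approach, reusing the script path from above:

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    // resolve the '~' on the server and let the ScriptManager emit the include tag
    ScriptManager.RegisterClientScriptInclude(
        this, GetType(), "myScript", ResolveUrl("~/scripts/myScript.js"));
}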

Somebody asked me how to add things at the end of an AutoCompleteExtender dropdown, something like a link. I thought it would be pretty easy: all I would have to do is catch the display function and add a little something to the container. Well, it was that simple, but also complicated, because the onmousedown handler for the dropdown is on the entire div, not on the specific items. The link was easy to add, but there was no way to actually click on it!

Here is my solution:

function addLink(sender, args) {
    var div = sender.get_completionList();
    if (div) {
        var newDiv = document.createElement('div');
        newDiv.appendChild(getLink());
        div.appendChild(newDiv);
    }
}

function getLink() {
    var link = document.createElement('a');
    link.href = '#';
    link.innerHTML = 'Click me for popup!';
    link.commandName = 'AutoCompleteLinkClick';
    return link;
}

function linkClicked() {
    alert('Popsicle!');
}

function ACitemSelected(sender, args) {
    var commandName = args.get_item().commandName;
    if (commandName == 'AutoCompleteLinkClick') linkClicked();
}


The AutoCompleteExtender must have the two main functions set as handlers for the item selected and popup shown events, like this:
<ajaxControlToolkit:AutoCompleteExtender OnClientItemSelected="ACitemSelected" OnClientShown="addLink" ...>


Now, the explaining.

First we hook on OnClientShown and add our link. The link is added inside a div, because otherwise it would have been shown as a list item (with highlight on mouse over). I also added a custom attribute to the link: commandName.

Second we hook on OnClientItemSelected and check if the selected item has a commandName attribute, and then we execute stuff depending on it.

That's it folks!

We have this web application that needs to call a third party site that then redirects back to us. The other app is using a configured URL to redirect back. In order to develop and debug the application, we used a router redirect with a different port like this: the external site calls http://myExternalIp:81 and it gets redirected to my own computer on port 80.

I was amazed to notice that when entering my local page, Request.Url would be in the format http://myExternalIp, without the 81 port. Since the page was being executed so I could debug it, I was baffled by this behaviour. I tried a few things, then decided to replicate it on a simple empty site, and there it was. The only thing I could find that had any information about the original port number was Request.Headers["Host"], which looked something like myExternalIp:81.

I guess this is a bug in the Request object: it uses the port the actual server responds on (my server was responding on port 80 on localhost) instead of the port of the original request, 81.

Here is a small method that gets the real Request URL:


public static Uri GetRealRequestUri()
{
    if ((HttpContext.Current == null) ||
        (HttpContext.Current.Request == null))
        throw new ApplicationException("Cannot get current request.");
    return GetRealRequestUri(HttpContext.Current.Request);
}

public static Uri GetRealRequestUri(HttpRequest request)
{
    if (String.IsNullOrEmpty(request.Headers["Host"]))
        return request.Url;
    UriBuilder ub = new UriBuilder(request.Url);
    string[] realHost = request.Headers["Host"].Split(':');
    string host = realHost[0];
    ub.Host = host;
    string portString = realHost.Length > 1 ? realHost[1] : "";
    int port;
    if (int.TryParse(portString, out port))
        ub.Port = port;
    return ub.Uri;
}
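
And a short usage example (the callback page name is made up):

// e.g. when building the absolute URL that the third party site should redirect back to
Uri realUri = GetRealRequestUri();
string returnUrl = realUri.GetLeftPart(UriPartial.Authority) + "/Callback.aspx";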

Just a short infomercial: Response.Redirect(url) is the same as Response.Redirect(url, true), which means that after the redirect, Response.End will be executed. If you get a weird 'Thread was being aborted' exception, you probably have the Redirect/End methods inside a try/catch block. Remove them from the block and it will work. Response.End works by aborting the current thread, and that particularly obtuse exception is what your catch block sees.

If you absolutely must have a try/catch block, put everything EXCEPT the Redirect/End inside it. Another option (only for Response.Redirect) is to pass false as the second parameter, so that Response.End is not executed, as in the sketch below.
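
A minimal sketch of both options (the worker and logger methods are hypothetical):

try
{
    DoWorkThatMayThrow(); // hypothetical method that actually needs the try/catch
}
catch (Exception ex)
{
    LogError(ex); // hypothetical logging
}
// outside the block; passing false skips Response.End,
// so no 'Thread was being aborted' exception is raised
Response.Redirect(url, false);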

And still, pigs are not yet flying! Here is the news, in short:

Bottom line: IIS is being optimized for PHP, PHP is being optimized for Sql Server and, by becoming a sponsor of Apache, Microsoft probably gets a say in their development.

What a typical corporate move that is. But only time will tell what the effect of such a move will be. Will ASP.Net start working habitually on Apache?

This post will be quite lengthy and it will detail my findings on best practices with SQL, specifically Microsoft SQL Server.

I started with an article written by Vyas Kondreddi in 2001: SQL Server TSQL Coding Conventions, Best Practices, and Programming Guidelines. In 2001 people were not microblogging!

Well, to summarize the article and bring it up to date a little, here are some of the most important points (in my view):
  • Decide upon a database naming convention, standardize it across your organization, and be consistent in following it. It helps make your code more readable and understandable.
  • Write comments in your stored procedures, triggers and SQL batches generously, whenever something is not very obvious.
  • Try to avoid server side cursors as much as possible.
    As Vyas Kondreddi himself says: "I have personally tested and concluded that a WHILE loop is always faster than a cursor"
  • Avoid the creation of temporary tables while processing data as much as possible, as creating a temporary table means more disk I/O. Consider using advanced SQL, views, SQL Server 2000 table variable, or derived tables, instead of temporary tables.
    This is interesting, because I usually use a lot of temporary tables in my stored procedures to make the code more orderly. I guess that in the case of SQL Server 2005 and later one can always use Common Table Expressions to make the code more readable. For SQL 2000 and such I found two interesting articles about not using temporary tables and replacing them with either derived tables (selects in selects) or with table variables, although they do have some limitations, thoroughly explained in the latter post. Here are the links: Eliminate the Use of Temporary Tables For HUGE Performance Gains and Should I use a #temp table or a @table variable?
  • Try to avoid wildcard characters at the beginning of a word while searching using the LIKE keyword, as that results in an index scan, which defeats the purpose of an index.
    For a short analysis of index scans go to SQL SERVER - Index Seek Vs. Index Scan (Table Scan).
  • Use the graphical execution plan in Query Analyzer or SHOWPLAN_TEXT or SHOWPLAN_ALL commands to analyze your queries.
  • Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments, as this suppresses messages like '(1 row(s) affected)' after executing INSERT, UPDATE, DELETE and SELECT statements. This improves the performance of stored procedures by reducing network traffic.
  • Use the more readable ANSI-Standard Join clauses instead of the old style joins.
  • Incorporate your frequently required, complicated joins and calculations into a view so that you don't have to repeat those joins/calculations in all your queries.
  • Use User Defined Datatypes if a particular column repeats in a lot of your tables, so that the datatype of that column is consistent across all your tables.
    Here is a great article about SQL UDTs (not the new .NET CLR types): What's the Point of [SQL Server] User-Defined Types?. Never used them myself, but then again I am not an SQL guy. For me it seems easier to control data from .Net code.
  • Do not let your front-end applications query/manipulate the data directly using SELECT or INSERT/UPDATE/DELETE statements. Instead, create stored procedures, and let your applications access these stored procedures.
    I am afraid I also fail at this point. I don't use stored procedures for simple actions like selecting a specific item or deleting a row. Many times I have to build search pages with lots of parameters and I find it really difficult to pass a variable number of parameters to a stored procedure, for example a string that I have to split by spaces and then search for all the found words. Would it be worth using a stored procedure in such a situation? (I sketch a .Net-side alternative after this list.)
  • Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than static SQL, as SQL Server must generate an execution plan every time at runtime.
    Personally, I never use dynamic SQL. If I need to create an SQL string I do it from .Net code, not from SQL.
  • Consider the following drawbacks before using the IDENTITY property for generating primary keys. IDENTITY is very much SQL Server specific, and you will have problems porting your database application to some other RDBMS. IDENTITY columns have other inherent problems. For example, IDENTITY columns can run out of numbers at some point, depending on the data type selected; numbers can't be reused automatically, after deleting rows; and replication and IDENTITY columns don't always get along well.
    So, come up with an algorithm to generate a primary key in the front-end or from within the inserting stored procedure. There still could be issues with generating your own primary keys too, like concurrency while generating the key, or running out of values. So, consider both options and go with the one that suits you best.
    This is interesting because I always use identity columns for primary keys. I don't think a data export or a database engine change justify creating a custom identity system. However I do have to agree that in the case that data is somehow corrupted a GUID or some other identifier would be more useful. I am sticking with my IDENTITY columns for now.
  • Use Unicode datatypes, like NCHAR, NVARCHAR, or NTEXT.
  • Perform all your referential integrity checks and data validations using constraints (foreign key and check constraints) instead of triggers, as they are faster.
  • Always access tables in the same order in all your stored procedures and triggers, consistently. This helps avoid deadlocks. Other things to keep in mind to avoid deadlocks are: keep your transactions as short as possible; touch as little data as possible during a transaction; never, ever wait for user input in the middle of a transaction; do not use higher level locking hints or restrictive isolation levels unless they are absolutely needed; make your front-end applications deadlock-intelligent, that is, able to resubmit the transaction in case the previous one fails with error 1205; in your applications, process all the results returned by SQL Server immediately, so that the locks on the processed rows are released and no blocking occurs.
    I don't have much experience with transactions. Even if I would need transactions in some complex scenarios, I would probably use the .Net transaction system.
  • Offload tasks, like string manipulations, concatenations, row numbering, case conversions, type conversions etc., to the front-end applications.
    Totally agree, except for the row numbering, where SQL 2005 added all those nice aggregate ranking options (see Getting the index or rank of rows in SQL Server 2005).
  • Always add a @Debug parameter to your stored procedures. This can be of BIT data type. When a 1 is passed for this parameter, print all the intermediate results, variable contents using SELECT or PRINT statements and when 0 is passed do not print anything. This helps in quick debugging stored procedures, as you don't have to add and remove these PRINT/SELECT statements before and after troubleshooting problems.
    Interesting, I may investigate this further, although the SQL debugging methods have improved significantly since the article was written.
  • Make sure your stored procedures always return a value indicating their status. Standardize on the return values of stored procedures for success and failures. The RETURN statement is meant for returning the execution status only, but not data. If you need to return data, use OUTPUT parameters
  • If your stored procedure always returns a single row resultset, consider returning the resultset using OUTPUT parameters instead of a SELECT statement, as ADO handles output parameters faster than resultsets returned by SELECT statements.
  • Though T-SQL has no concept of constants (like the ones in the C language), variables can serve the same purpose. Using variables instead of constant values within your queries improves readability and maintainability of your code.
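
Since I mentioned the variable-parameter search problem above, here is a minimal sketch of how such a query can be built from .Net code with parameters instead of string concatenation; the table and column names are made up:

using System;
using System.Data.SqlClient;
using System.Text;

static SqlCommand BuildSearchCommand(string searchText)
{
    // one LIKE condition per word; the values go in as parameters, and the
    // wildcard is only at the end, so an index on Name can still be used
    string[] terms = searchText.Split(new char[] { ' ' },
        StringSplitOptions.RemoveEmptyEntries);
    SqlCommand cmd = new SqlCommand();
    StringBuilder sql = new StringBuilder("SELECT * FROM Items WHERE 1=1");
    for (int i = 0; i < terms.Length; i++)
    {
        sql.Append(" AND Name LIKE @p" + i);
        cmd.Parameters.AddWithValue("@p" + i, terms[i] + "%");
    }
    cmd.CommandText = sql.ToString();
    return cmd;
}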



The next stop was SQL Server Best Practices from Microsoft.

Here are the articles I found most important, covering everything from testing the I/O system of the machine you want to install SQL Server on, to database backup, mirroring and maintenance:
Predeployment I/O Best Practices
SQL Server 2005 Deployment Guidance for Web Hosting Environments
SQL Server 2005 Security Best Practices - Operational and Administrative Tasks
Comparing Tables Organized with Clustered Indexes versus Heaps
Troubleshooting Performance Problems in SQL Server 2005
Implementing Application Failover with Database Mirroring
SQL Server 2005 Waits and Queues
TEMPDB Capacity Planning and Concurrency Considerations for Index Create and Rebuild
The Impact of Changing Collations and of Changing Data Types from Non-Unicode to Unicode
XML Best Practices for Microsoft SQL Server 2005
Performance Optimizations for the XML Data Type in SQL Server 2005
Top 10 Hidden Gems in SQL Server 2005

Lastly some links that I will not go in depth on:

SQL Server 2000 Best Practices
SQL SERVER - 2005 Best Practices Analyzer Tutorial - Sample Example describes the Microsoft Best Practices Analyzer application. I tried it myself; it's not much. It touches mainly on the maintenance and security issues that I don't really concern myself with.
Top 10 Best Practices for Building a Large Scale Relational Data Warehouse. I don't think I will need it soon, but it is a short and interesting read.
SQL Server Pre-Code Review Tips. This Pinal Dave guy is pretty cool. He seems like a good resource for SQL related issues.
CMS Database Administration SQL Server Standards, a set of SQL coding standards for a medical government agency.

I am sure there are a lot of interesting resources on the net. I will update this post with new information once I get to it.

How can I get some content from javascript (like a string) and send it to the client as a file? The server does not have access to the content I want to send.

This was a question posed to me in a rather more complex way: how do I take some file content from a web service and send it to the client as a file, without downloading the content to the web server first?

The simple answer right now: you cannot do it. If you guys know more about this, please let me know. I've exhausted all the avenues I could think of, but then again, I am no master of Javascript and html responses.

Here is what I have tried. Basically, I have a string containing an html table and I want it sent to the client browser as an Excel download. So I opened a new window with javascript and tried to write the content there:
var win2=window.open('');
win2.document.write(s);


It worked and it displayed a table. Now, all I wanted to do was add/change the html header content-type to application/vnd.ms-excel. Apparently, you can't do that from Javascript. Ok, how about getting the necessary headers from the ASP.Net server? (Remember, the restriction was only that the file content must not come from the web server.) So I created a new page that would render a completely empty response with the right headers:

protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.Buffer = true;
    Response.ContentType = "application/vnd.ms-excel";
    Response.AppendHeader("Content-Disposition", "attachment;filename=test.xls");
    Response.Charset = "";

    Response.Flush();
    Response.End();
}


Then I just opened it in a new window (just ignore the browser pop-up filters for now) with
var win2=window.open('PushFile.aspx');
win2.document.write(s);


What happened was that the page was just rendered like a normal page. How come? I changed the code so that it would write the content after a few seconds, and I got this: first the browser asks me if I want to permit downloading the file, then, after a few seconds, the warning goes away and the string is displayed in the new window. I tried with document.createTextNode too; it didn't work.

So far, none of my attempts to serve javascript content as a binary file worked. If you know of a way to achieve this, please let me know. Thanks!

Update:
Meaflux took a swipe at this request and came up with two delicious ideas that, unfortunately, don't really work. But I had no idea things like these existed, so they are very much worth mentioning.

First: the data URI. Unfortunately it is only supported by FireFox and the like, and it has no way of setting a content-disposition header or some other way of telling the browser that I actually want the content saved. It would work for an excel file, but an image, for example, would just be opened in a browser window.

Second: the IE execCommand javascript function, which has a little command called SaveAs. Unfortunately this only works for actual HTML pages. Even if the browser would open a binary file, I doubt that a SaveAs command would save it correctly.

Besides, both these options, as well as my own attempts above, have a major flaw: there is no way to send chunks of data as you receive them from the web service. What is needed is declaring some sort of data stream, then writing stuff to it, and then programmatically closing it.