In the previous post I discussed Firebase used from Javascript, covering initialization, basic security, reading everything and inserting. In this post I want to discuss complex queries: filtering, ordering, limiting, indexing, etc. For that I will get inspiration (read: copy with impunity) from the Firebase documentation on the subject, Retrieving Data, but make it quick and dirty... you know, like sex! Thank you, ma'am!

OK, the fluent interface for getting the data looks a lot like C# LINQ and I plan to work on a Linq2Firebase thing, but not yet. Since LINQ itself got its inspiration from SQL, I was planning to structure the post in a similar manner: how to do order by, top/limit, select conditions, indexing and so on, so we can really use Firebase like a database. An interesting concept to explore is joining; since this is an object database, we still need it, because we want to filter by the results of the join before we return the result, like getting all the transactions of users that have the name 'Adam'. Aggregation is another thing that I feel Firebase needs to support. I don't want to pull a billion records just to compute the sum of a property.

However, the Firebase API is rather limited at the moment. You get .orderByChild, then stuff like .equalTo, .startAt and .endAt, and then .limitToFirst and .limitToLast. No aggregation, no complex filters, no optimized indexing, no joining. As far as I can see, this is by design, so that the server stays as dumb as possible, but think about that 1GB for the free plan. It is a lot.

So, let's try a complex query and see where it gets us.
ref.child('user')
   .once('value', function(snapshot) {
      var users = [];
      snapshot.forEach(function(childSnapshot) {
         var item = childSnapshot.val();
         // keep only the users whose name contains 'adam'
         if (/adam/i.test(item.name)) {
            users.push(item.userId);
         }
      });
      getInvoiceTotalForUsers(users, DoSomethingWithSum);
   });


function getInvoiceTotalForUsers(users, callback)
{
   var sum = 0;
   var count = 0;
   for (var i = 0; i < users.length; i++) {
      var id = users[i];
      ref.child('invoice')
         .equalTo(id, 'userId')
         .orderByChild('price')
         .startAt(10)
         .endAt(100)
         .once('value', function(snapshot) {
            snapshot.forEach(function(childSnapshot) {
               var item = childSnapshot.val();
               sum += item.price;
            });
            // count completed per-user queries, not individual invoices
            count++;
            if (count == users.length) callback(sum);
         });
   }
}

First, I selected the users that have 'adam' in their name. I used .once instead of .on because I don't want to wait for new data to arrive, I want the data so far. I used .forEach to enumerate the data from the value event. With the array of userIds I call getInvoiceTotalForUsers, which gets all the invoices for each user with a price greater than or equal to 10 and less than or equal to 100, and finally calls a callback with the resulting sum of invoice prices.

For me this feels very cumbersome. I can think of several methods to simplify it, but the vanilla code would probably look like the above.
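As an illustration of one possible simplification - my own sketch, not something Firebase provides - the per-user queries can be wrapped in a helper that counts completed requests and does the user filtering on the client, since a query can only be ordered by one child:

// hypothetical helper: one query per user, aggregate client-side,
// and call back only after every .once('value') has returned
function sumInvoicesForUsers(ref, userIds, callback) {
    var sum = 0;
    var remaining = userIds.length;
    if (!remaining) return callback(0);
    userIds.forEach(function(id) {
        ref.child('invoice')
           .orderByChild('price')
           .startAt(10)
           .endAt(100)
           .once('value', function(snapshot) {
               snapshot.forEach(function(child) {
                   var invoice = child.val();
                   // the price range is filtered server-side; the user filter
                   // has to happen here, because only one orderByChild is allowed
                   if (invoice.userId === id) sum += invoice.price;
               });
               if (--remaining === 0) callback(sum);
           });
    });
}

Used with the user ids gathered above, the call would simply be sumInvoicesForUsers(ref, users, DoSomethingWithSum);.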

I have been looking for this kind of service for a long time, mainly because I wanted to monitor and persist stuff for my blog. Firebase is all of that and more and, with a free plan of 1GB, it's pretty awesome. However, since it is a NoSQL database accessed via Javascript, it may be a bit difficult to grasp at first. In this post I will be talking about how to use Firebase as a traditional database using their Javascript library.

So, first off, go to the main website and sign up with Google. Once you do, you get a page with a 5 minute tutorial, quickstarts, examples, API docs... but you want the ultra-quick start! Copy-pasted working code! So click on the Manage App button.

Take note of the URL you are redirected to; it is the one used for all data access as well. OK, quick test code:
var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.push({
    val1: "any object you like",
    val2: 1,
    val3: "as long as it is not undefined or some complex type like a Date object",
    val4: "think of it as JSON"
});
What this does is take that object and save it in your database, in the "test" container. Think of it like a table. You can also save objects directly in the root, but I don't recommend it, as the path of the object is then the only thing telling you what type of object it is.

Now, in order to read the inserted objects, you use events. It's a sort of reactive way of doing things that might be a little unfamiliar. For example, when you run the following piece of code, you will get, after you connect, all the objects you ever inserted into "test".
var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.on('child_added', function(snapshot) {
    var obj = snapshot.val();
    handle(obj); //do what you want with the object
});

Note that you can use either child_added or value as the retrieval event. While 'child_added' is fired once for each retrieved object, 'value' returns one snapshot containing all the data items, then proceeds to fire on each added item with full snapshots. Beware: that means if you have a million items and you do a value query, you get all of them (or at least attempt to, I think there are limits), then on the next added item you get a million and one. If you use .limitToLast(50), for example, you will get the last 50 items, then when a new one is added, you get another 50-item snapshot. In my mind, 'value' is to be used with .once(), while 'child_added' goes with .on(). More details in my Queries post.
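To make the difference concrete, here is a small sketch of both patterns against the same hypothetical "test" path (handle is a placeholder for whatever you do with the data):

var testRef = new Firebase('https://*******.firebaseio.com/test');

// 'value' with .once(): one snapshot of what is currently there
testRef.limitToLast(50).once('value', function(snapshot) {
    snapshot.forEach(function(child) {
        handle(child.val()); // runs for each of the (at most) 50 items
    });
});

// 'child_added' with .on(): fired once per existing item,
// then again for every item added later
testRef.on('child_added', function(snapshot) {
    handle(snapshot.val());
});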

Just by using that, you have created a way to insert and read values from the database. Of course, you don't want to leave your database unprotected; anyone could read or change your data this way. You need some sort of authentication. For that, go to the left and click on Login & Auth, then go to Email & Password and configure which users can log in to your application. Notice that every user has a UID defined. Here is the code to use to authenticate:
var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.authWithPassword({
    email    : "some@email.com",
    password : "password"
}, function(error, authData) {
    if (error) {
        console.log("Login Failed!", error);
    } else {
        console.log("Authenticated successfully with payload:", authData);
    }
});
There is an extra step you want to take: securing your database so that it can only be accessed by logged-in users. For that you have to go to Security & Rules. A very simple structure to use is this:
{
  "rules": {
    "test": {
      ".read": false,
      ".write": false,
      "$uid": {
        // grants write access to the owner of this user account whose uid must exactly match the key ($uid)
        ".write": "auth !== null && auth.uid === $uid",
        // grants read access to any user who is logged in with an email and password
        ".read": "auth !== null && auth.provider === 'password'"
      }
    }
  }
}
This means that:
  1. It is forbidden to write to test directly, or to read from it
  2. It is allowed to write to test/$uid (remember the user UID from when you created the email/password pair), but only by the user with that same uid
  3. It is allowed to read from test/$uid, as long as you are authenticated in any way

Gotcha! This rule list still allows anyone to read and write whatever they want on the root itself. Anyone could just waltz onto your URL and fill your database with crap, just not in the "test" path. More than that, they can just listen to the root and get EVERYTHING that you write in. So the correct rule set is this:
{
  "rules": {
    ".read": false,
    ".write": false,
    "test": {
      ".read": false,
      ".write": false,
      "$uid": {
        // grants write access to the owner of this user account whose uid must exactly match the key ($uid)
        ".write": "auth !== null && auth.uid === $uid",
        // grants read access to any user who is logged in with an email and password
        ".read": "auth !== null && auth.provider === 'password'"
      }
    }
  }
}

In this particular case, in order to get to the path /test/$uid you can use the .child() function, like this: testRef.child(authData.uid).push(...), where authData is the object you get back from the authentication method and which contains your logged-in user's UID.
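Put together, a write that satisfies the rules above could look like this (a sketch; the pushed fields are just examples):

var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.authWithPassword({
    email    : "some@email.com",
    password : "password"
}, function(error, authData) {
    if (error) return console.log("Login Failed!", error);
    // write under /test/$uid, which the ".write" rule only allows
    // when auth.uid matches the path key
    testRef.child(authData.uid).push({
        message  : "hello from " + authData.uid,
        postedAt : Firebase.ServerValue.TIMESTAMP
    });
});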

The rule system is easy to understand: use ".read"/".write" and a Javascript expression to allow or deny that operation, then add children paths and do the same. There is a lot more to learn about authentication: one can authenticate with Google, Twitter, Facebook, or even with custom tokens. Read more at Email & Password Authentication, User Authentication and User Based Security.

But because you want to do a dirty little hack and just make it work, here is one way:
{
  "rules": {
    ".read": false,
    ".write": false,
    "test": {
      ".read": "auth.uid == 'MyReadUser'",
      ".write": "auth.uid == 'MyWriteUser'"
    }
  }
}
This tells Firebase that no one is allowed to read or write anywhere except in /test, and only if their UID is MyReadUser (for reading) or MyWriteUser (for writing). In order to authenticate for this, we use this piece of code:
testRef.authWithCustomToken(token,success,error);
The handlers for success and error do the rest. In order to create the token, you need to do some cryptography, but never mind that, there is an online JsFiddle where you can do just that without any thought. First you need a secret, for which you go into your Firebase console and click on Secrets. Click on "Show" and copy-paste that secret into the JsFiddle "secret" textbox. Then enter MyReadUser or MyWriteUser in the "uid" textbox and create the token. You can then authenticate into Firebase using that ugly string it spews out at you.
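If you would rather generate the token yourself instead of using the JsFiddle, the legacy firebase-token-generator library does the same thing; a rough sketch under that assumption, with the secret value obviously a placeholder:

// npm install firebase-token-generator
var FirebaseTokenGenerator = require("firebase-token-generator");

var tokenGenerator = new FirebaseTokenGenerator("<YOUR-FIREBASE-SECRET>");
// the uid must match the one checked in the security rules above
var token = tokenGenerator.createToken({ uid: "MyWriteUser" });
console.log(token); // pass this string to authWithCustomToken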

Done, now you only need to use the code. Here is an example:
var testRef = new Firebase('https://*****.firebaseio.com/test');
testRef.authWithCustomToken(token, function(err, authData) {
    if (err) alert(err);
    testRef.on('child_added', function(snapshot) {
        var message = snapshot.val();
        handle(message);
    });
});
where token is the generated token and handle is a function that will run with each of the objects in the database.

In my case, I needed a way to write messages on the blog for users to read. I left read access on for everyone (true) and used the token idea from above to restrict writing. My html page that I run locally uses the authentication to write the messages.

There you have it. In the next post I will examine how you can query the database for specific objects.

I have implemented a system that logs what people do on my blog, with the intent of making it more useful to my readers. In doing so I created a live dashboard where people going and leaving are displayed in real time. The conclusion is pretty humbling, but I have also noticed a pattern that might reflect badly on the state of the Internet today.

The conclusion I was talking about is that, even if I write about a lot of things, from books to software, from WPF to Javascript, the most visited posts by far are about why the Bittorrent client gets stuck, how to remove ads by installing Privoxy, and Sift3, my string comparison algorithm. All of that info one can get from the Popular posts column on the right of the blog, but I had no idea how many people visit it only to find out how they can download their movies faster!

And then there are the programming blog posts. I am filled with pride when people open a link to learn something from my experiences. And then I see that they are looking at the posts about Crystal Reports, AjaxControlToolkit and the old ASP.Net Ajax calls. Occasionally they come for the WPF bit, which is great, but the conclusion is clear: people are mostly interested in the old posts, the ones describing older technologies that no one is talking about anywhere anymore. True, I have not posted anything significant in the last two years, but still, I feel disappointed. My blog's merit here seems to be that it is still online!

But then I realized something else. Sometimes I feel joy at seeing that a visitor opens a post that no one has opened recently. Yet, in a very short time, other people are starting to open the same link. It has happened repeatedly several days in a row, so it can't be a coincidence. And people are coming from all over: Canada, US, Brazil, Mozambique, Ghana! I can only explain it with the theory that once visited, a link increases in visibility, its Google rank goes up, thus passing a threshold that makes it appear on the first search pages. It is a snowball effect, which in part I understand and agree with, but can't stop wondering if it doesn't apply everywhere. Instead of going for the relevance that Google and other big search engines aspire towards, they cheat by treating each click as a Facebook Like! More people read it, so more people should read it, which they do, and so on and so on.

The bottom line is that I wouldn't want to see a race towards a common goal be treated as a common race towards a goal. Let all pages share the glory, rank them based on content, not the preferences of people searching for stuff. How long before Google will helpfully suggest to me to go download a movie rather than search for something for work?

Today I wanted to read a PDF. Nothing more than that, just read a damn file. Instead, I got the first use of the Adobe Reader DC, which I may have inadvertently updated to when asked for an update (rather than a complete redesign). But I was OK with it, you know. Adobe Reader was always a lightweight reader of files and the updates have mostly been about security. So now they want to redesign it, what could possibly go wrong? Just rearrange some menus or something, right? Wrong.

I skipped EULAs and bloating features for a while until I got to (surprise!) an online login screen. I didn't want to log in anywhere, I just wanted to read the PDF, so I closed the login popup window. It reopened. I started looking for buttons or links for skipping, cancelling, ignoring, doing it later, but none were there. Quite literally I was locked until I chose to log in. You know what I did? I forcefully closed the process and installed Sumatra PDF reader. 10 seconds later I was reading my damn PDF.

And you know what? I have used Sumatra PDF on other computers in the past. What I found is that, besides some features for PDF that I will never use, like forms and such, Sumatra was better than Adobe Reader. Way faster, that's obvious, but then it correctly read some PDFs that could not even be opened by the default reading application from the company that invented the damn document standard. Once, I remember, we had a weird PDF for which we had to wait about a minute in Adobe Reader in order to render a single page. Sumatra opened it instantly. Not only is it smaller, leaner, faster and better, but Sumatra also reads a lot of other formats: ePub, MOBI, CHM, XPS, DjVu, CBZ, CBR.

Bottom line: I have abandoned the last Adobe tool that I was regularly using on my computer. Will they ever learn?

Whenever you want to test a REST API, Postman is a great tool. It allows configuring all aspects of a request: method (GET, POST, etc.), headers and so on; it keeps previous attempts in history, it manages and saves collections of requests, and it is installed as a Chrome extension, bringing it only two clicks away. It does everything! ... or does it? Short story long: no!

Reported as a problem here: Referer header is not sent when set in Postman, the issue appears to be that some headers are "protected" by Chrome and therefore unusable. Well, it is a bug in the sense that Postman should tell you that whatever you write there is completely ignored! There is a solution, which can be found as a link in the bug report, but it involves installing other crap and running Python scripts. Ugh!

Here is a list of the Chrome protected headers:
  • Accept-Charset
  • Accept-Encoding
  • Access-Control-Request-Headers
  • Access-Control-Request-Method
  • Connection
  • Content-Length
  • Cookie
  • Cookie2
  • Content-Transfer-Encoding
  • Date
  • Expect
  • Host
  • Keep-Alive
  • Origin
  • Referer
  • TE
  • Trailer
  • Transfer-Encoding
  • Upgrade
  • User-Agent
  • Via

So whenever you believe that some web site has used a magical solution to detect your sneaky attempts to access their web API or site and you are wondering how, just remember that it is most likely a Referer header that Postman (via Chrome) silently ignored.
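If you only need to check how a site behaves with a given Referer, one workaround is to send the request outside the browser entirely. Here is a minimal sketch using Node's built-in https module; the host, path and header values are placeholders:

var https = require("https");

var options = {
    hostname: "example.com",
    path: "/api/resource",
    method: "GET",
    headers: {
        // headers that Chrome would silently drop can be set freely here
        "Referer": "https://example.com/some-page",
        "User-Agent": "my-test-client/1.0"
    }
};

https.request(options, function(res) {
    console.log("Status:", res.statusCode);
    res.on("data", function(chunk) { process.stdout.write(chunk); });
}).end();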

I was trying to do a simple thing: configure a daily rolling log for log4net, meaning that I wanted log files to be created daily, with the date in the file name. The log was already configured and working with a normal FileAppender, so all I had to do was find the correct configuration. There are several answers on the Internet regarding this. I immediately went to the trusty StackOverflow, read the first answers, copy-pasted lazily, and it seemed to work. But it did not. So, warning: the first answer on StackOverflow about this is wrong.

So, my logs had to be named something like Application_20150420.log. That means several things:
  • The name of the appender class has to be set to log4net.Appender.RollingFileAppender
  • The names of the log files need to start with Application_ and end in .log
  • The names of the log files need to contain the date in the correct format
  • The logger needs to know that the files need to be created daily

This is the working configuration:
<appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
  <filter type="log4net.Filter.LevelRangeFilter">
    <acceptOnMatch value="true" />
    <levelMin value="DEBUG" />
  </filter>

  <file type="log4net.Util.PatternString" value="c:\logfiles\Application_.log" />
  <appendToFile value="true" />
  <rollingStyle value="Date" />
  <datePattern value="yyyyMMdd" />
  <preserveLogFileNameExtension value="true" />
  <staticLogFileName value="false" />

  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
</appender>

As you can see, it is a little counter-intuitive. You do not need to specify the date in the file name, it is added automatically, and you absolutely need to use preserveLogFileNameExtension, otherwise your files would look like Application_.log20150420.

Because of idiotic firewall rules at my workplace I am forced to use Hangouts rather than Yahoo Messenger as an instant messenger. I am not going to rant here about which one is better; enough to say that most of my friends are on YM and being on Hangouts doesn't help. Hangouts has many annoyances for me, like its propensity to freeze when the Internet connection drops often, or the lack of features that YM had. In fact I was so annoyed that I planned to do my own professional messenger to rule them all. But that's another story.

I am writing this post because of a behaviour of the Google Hangouts instant messenger (which, to be fair, is only a Chrome extension), namely that after a while the green tray icon of the messenger moves into the "hidden icons" group. I have to customize it every day, sometimes twice, as it seems to reset this behaviour after a period of use, not just on restarts. There is a Google product forum that discusses this here: System tray icon resets every time Chrome is started, where you can also see a few comments from yours truly.

I immediately wanted to create a script or a C# program to fix this, but first I just searched for a solution on the web and found TrayManager, a C# app that does what the "Customize..." tray link does and more. One of the best features is a command line! So here is what you do after downloading the software and installing it somewhere: TrayManager.exe -t "Hangouts" 2. Now, that probably doesn't solve the problem long term. It is just as if you went into the Customize... link, only faster. Also, it has no side effects if run multiple times, so you can use Task Scheduler to run it periodically. Yatta!
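If you want to script the Task Scheduler part as well, something along these lines should do; the install path and the hourly schedule are assumptions, adjust them to your setup. First a small batch file, say C:\Tools\fixtray.bat:

rem fixtray.bat - re-pin the Hangouts icon to the visible tray area
"C:\Tools\TrayManager\TrayManager.exe" -t "Hangouts" 2

Then register it from a command prompt:

schtasks /Create /SC HOURLY /TN "FixHangoutsTray" /TR "C:\Tools\fixtray.bat"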

I had heard of the site for some time now, but never actually tried anything on it. Today I tried a course that teaches the basics of R, a statistics programming language akin to Matlab, and I thought the site was great. I suspect it all depends on the quality of the course, but at least this one was very nice. You can see my "report card" here, although I doubt I am going to visit the site very often. However, for beginners or people who quickly want to "get" something, it is a good place to start, as it gives one a "hands on" experience - actually coding to get results, but in a carefully explained, step by step tutorial format.

I have at work a very annoying HTTP proxy that requires basic authentication. This translates into very inconsistent behaviour between applications and the annoying necessity of entering the username and password whenever the system wants it. So I've decided to add another local proxy to the chain, one that handles the authentication for me forever.

This isn't as easy as it sounds. Sure, proxies are a dime a dozen, but for some reason most of them seem to come from the Linux world. That means really bad user interaction, vague documentation and unhelpful forums where any request for help ends up in some version of "read the fucking manual". Hence I've decided to help out by creating this lovely post that explains how you can achieve the purpose described above with very little headache. These are the very easy steps that you have to undertake:
  1. Download and install Privoxy
  2. Go to the Program Files folder and look for Privoxy. You will need to edit two files: config.txt and user.action
  3. Optional: change the listen-address port, otherwise the proxy will function on port 8118
  4. Enter your proxy authentication username and password in the fields below and press Help me configure Privoxy - this is strictly client-side Javascript, so don't worry that I am going to steal your proxy credentials...
  5. Edit user.action and add the bit of text that appeared as destined for that file.
  6. Edit config.txt, look for the examples of forward, add the bit that belongs to config.txt, and replace proxy:port and the domains and IP masks with the correct values for you (see the sketch after this list for roughly what these entries look like)
  7. Restart Privoxy
  8. Configure your internet settings to use a proxy on 127.0.0.1 and the port you configured in step 3 (or the default 8118)
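For orientation, the generated entries look roughly like the following; the upstream proxy address and the base64 credentials (here "user:password" encoded) are placeholders you would replace with your own values.

In user.action (adds the Proxy-Authorization header to every request going through Privoxy):

{+add-header{Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==}}
/

In config.txt (chains Privoxy to the authenticated proxy, keeping local addresses direct):

forward / corporate-proxy.example.com:8080
forward 192.168.*.*/ .
forward localhost/ .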

This should be it. Enjoy!

Username:
Password:


I realize that the original format of the post was confusing, so, before you start reading it, here is a summary of solutions. You should read on afterwards, though, in order to understand the context of each individual solution:
  • Torrents seem to keep a reference to some of the proxy settings from when they started downloading. I don't have a clear solution for this; in my case changing the configuration of the proxy and restarting both the proxy and Bittorrent eventually solved it. An aggressive solution is to remove all of your torrents and re-add them later.
  • Sometimes the settings file gets corrupted or the settings don't make a lot of sense. Save all your ongoing downloads, remember all the relevant settings - like the download folder -, stop Bittorrent, go to your user's Application Data folder and remove the settings.dat and settings.dat.old files in the Bittorrent folder (usually C:\Documents and Settings\YourUser\Application Data\Bittorrent or C:\Users\YourUser\Application Data\Bittorrent). After restarting Bittorrent you will have a fresh set of settings, so change them to what you need and reload the list of torrents that you saved.
  • Check the connection settings. Sometimes you want to minimize the number of connections Bittorrent makes because your provider gives you only a limited connection pool. Make sure the value is not too small. Bittorrent needs a lot of connections: to find peers, to download and upload stuff, to open new connections in order to find the fastest one, etc. In my case 15 was too little and I had to restore it to the original 150, even if 15 had worked with previous versions of the program.
  • Solution from a comment: check your firewall settings and the firewall settings of your router/modem
  • Solution from a comment: check your internet provider. Some, like Comcast, are throttling some types of traffic. This link might help: Stop your ISP from Throttling Bittorrent Speeds
  • Solution from a comment: enable Bittorrent protocol encryption and restart the application
  • Solution from a forum: disable DHT from the Bittorrent client settings (or enable it, if it is disabled?)
  • Solution from YouTube: start the torrent client 'As Administrator', suggesting there may be some file access issues.
  • Solution from a comment: enable DHT. If only some torrents seem to be stuck, see if you can enable DHT in those torrents' properties.

Hope that helps. Now for the original post:

Update (again): While resetting the settings did solve my problem temporarily, I had the same problem the next day as well. The only meaningful thing that I had changed was the global maximum number of connections (Preferences -> Bandwidth) from 150, which seemed excessive, to 15. I changed the value back to 150 and it started downloading immediately! This is strange, since the value had been 15 for years, I think. Well, I am getting tired of these solutions that only work until the next day. Hopefully this is the last time the problem appears.

Update: The enthusiasm I had after making some of the torrents work faded when I noticed most of the other torrents were still not working. My only solution was to go to the Application Data folder, then Bittorrent, and delete settings.dat and settings.dat.old while the program was closed, then restart Bittorrent. Warning! You will lose all the torrents and settings, so save them first (I selected them all and copied the magnet links, then added them back one by one afterwards). The weird thing is that, while it worked, the software also looked quite different from what I was used to. Perhaps just blindly updating Bittorrent all the time is not the best option; sometimes we need to reset the settings to take advantage of what the new version offers.

I've had a problem with a router which prompted me to remove it and put the Internet cable directly into my computer's port. I wouldn't recommend it, since a router adds a level of protection against outside access, but I was not home and a friend of mine "fixed" the problem this way. Since then, I couldn't download anything with Bittorrent: it got stuck at "Connecting to peers", even though it connected to the trackers and saw all the seeds and leechers. Googling around, I found a suggestion to remove "resume.dat" from the Bittorrent Application Data folder, but it didn't work for me. I tried a lot of things, but none worked until I found an obscure forum post asking about a proxy server. And it dawned on me that it could be from there.
You see, I had previously installed a free proxy program called Privoxy, but I had bound it to the previous IP address of the computer. As a result, it didn't load anymore. The Internet settings were OK, I had removed any reference to a proxy, but strangely enough something in the Bittorrent client kept some information relating to the proxy. After I bound the proxy only to the local address, Privoxy started and, miraculously, so did the Bittorrent downloads (after a restart of the program).

So, if you have issues with the Bittorrent client getting stuck at "Connecting to peers", check whether you have recently changed your Internet proxy settings.

Update 12 Feb 2016: As an alternative, if you have access to your hosts file, you can use a generated list of domains to immediately block any access to them, courtesy of Steven Black: StevenBlack/hosts. You don't have to install anything and it works on most operating systems.

Update 19 Feb 2016: Here are the latest files to directly download and use. Do read the rest of the article, as these files might not be what you are looking for:
user.action
user.filter

Now for the rest of the article:

I discovered today a new tool called Privoxy. It is proxy software with extra features, like ad blocking and extra privacy. What that means is that you can install the proxy, point your browser to it and have an almost ad-free experience, untracked by marketing firms or Facebook. The only problem, though, is that the default filters are not very comprehensive. Wouldn't it be great if one could take the most used ad-blocking list (Adblock Plus' Easylist) and convert it to Privoxy? Well, it would, only no one seems to want to do it for Windows. You get a few folks who have created Linux scripts to do the conversion, but those won't work on Windows. Nor do they deem it necessary to make an online tool or a web service, or at least publish the resulting files, so that we, Windows people, can also use the list!
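To give an idea of what such a conversion involves, here is a tiny sketch of my own - nowhere near the full Easylist grammar - that turns plain domain-block rules like ||ads.example.com^ into a Privoxy block action:

// toy AdBlock Plus -> Privoxy converter: handles only ||domain^ rules
function abpToPrivoxyActions(abpLines) {
    var blocked = abpLines
        .filter(function(line) { return /^\|\|[\w.-]+\^$/.test(line); })
        .map(function(line) { return line.replace(/^\|\|/, ".").replace(/\^$/, "/"); });
    return blocked.length
        ? "{+block{converted from Easylist}}\n" + blocked.join("\n")
        : "";
}

console.log(abpToPrivoxyActions(["||ads.example.com^", "! just a comment"]));
// prints:
// {+block{converted from Easylist}}
// .ads.example.com/

The real list is, of course, full of exception rules, element hiding rules and regular expression patterns, which is where the trouble starts.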

Well, I intend to do a small script that will allow for this, preferably embedded in this blog post, but until then, I had no script, no files, only Privoxy installed. Luckily, I also have Cygwin installed, which allows me to run a ridiculous flavour of Linux inside Windows. I had to hack the script in order for it to work on Cygwin, but in the end, at last, I managed to make it work. For now, I will publish the resulting files here and give you instructions to install them. When they become obsolete, send me a comment and I will refresh them.

So, the instructions:
  • Install Privoxy
  • Go to the installation folder of Privoxy and look for files named 'user.action' and 'user.filter'
  • Download the user.action file from here and replace the default one.
  • Download the user.filter file from here and replace the default one.
  • Restart Privoxy
  • Of course, then you have to go to your browser settings and set the proxy to Privoxy (usually localhost, port 8118)

Warning! The filter file is quite big and it will cause some performance issues. You might want to use only the action file with the filter actions removed.

Update: If you can't download the files, let me know. I am using Github pages and it seems sometimes it doesn't work as I expected.

Also, I have lost faith that AdBlockPlus rules can be completely and correctly translated to Privoxy and I lack the time, so I am publishing my crappy program as it is and if you want to use it or fix it, be my guest. You can find the program here: AdblockPlus to Privoxy. I would ask that you share with me those modifications, though, so that everybody can benefit from them.

Update October 2014: Other people have contributed by making their own translation software. Here are the links for the binary and source code of adblock2privoxy made by Zubr, in Haskell mind you, which is pretty cool:

Binary: https://www.dropbox.com/s/69u1iqbubzft1yl/adblock2privoxy-1.2.4.rar?dl=0
Source code: https://projects.zubr.me/wiki/adblock2privoxy

It all started with this site that got stuck in Google Chrome's DNS cache, so that any changes to the Windows/System32/drivers/etc/hosts file were ignored. I didn't want to close all Chrome windows (since the DNS cache is application-wide in Chrome), so I googled for an answer. And there it was, a simple URL that, typed in the Chrome address bar, would allow me to clear the cache: chrome://net-internals#dns.

But there are a lot more cool things there: testing of failed sites, a log of browser network events, control over open connections and so much more. That got me curious about other cool chrome:// URLs and I found some links listing a lot of them.

I don't have the time to parse all these cool hidden Chrome URLs and review them in this blog entry, so I will just list some links and let you explore the goodness:
Google Chrome’s Full List of Special about: Pages
12 Most Useful Google Chrome Browser chrome:// Commands
About and Chrome URLs

Update: The Chrome url containing all others can be found at chrome://about/.

UAC is the most annoying feature of Windows 7, one that just has to have been specifically designed to annoy the user with useless "Do you want to" alerts. Not even regular alerts, but screen-dimming modal panic dialogs. I want it off. There are several ways to do that, but the automated way to get rid of UAC is to change a value in the registry. Here is the reg file for it:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableLUA"=dword:00000000
Warning! If you don't know what a registry entry is, then you'd better not do anything stupid! Now, if I could dim the screen while you read that, I would be Microsoft material.

But I digress. In order to execute the reg file above you need to save it into a file with the .reg extension (let's say nouac.reg) and then load it with regedt32 /s nouac.reg. The /s switch removes any confirmation messages and silently loads the file into the registry, effectively disabling UAC.

However, my laptop is in a domain with some ridiculous policies enforcing the UAC setting, so when I next restart the computer I get the annoying popups again. Now, if I could only run nouac.reg at logoff, I would be all set. And we can do that too. Just run gpedit.msc, go to User Configuration -> Windows Settings -> Scripts (logon/logoff) and double click logoff. Create a batch file that contains the line to load the registry file and then add it as a logoff script. All set.

Update: Hold the celebration. After I restarted my computer, the hated UAC was back. Either the script was not executed, or it doesn't work like that because the policy is enforced after logoff scripts or before logon ones. Drat it!

As I was blogging before, RedGate are assholes. They bought Reflector, promised to keep it free, then asked money for it. But every crisis can be turned into an opportunity. JetBrains promised a free decompiler tool and they have kept their word: they have released an early build. Total news to me, but not really a surprise, other software companies have decided to build their own versions in order to boost their visibility in the developer world. Telerik, for example, has just released JustDecompile, in beta version.

It is no secret that JetBrains is a company that I respect a lot, as they made ReSharper, the coolest tool I've ever had the pleasure to work with, but I will try to be as unbiased as possible in comparing the options. I have tried dotPeek on WPF's PresentationFramework.dll from the .NET framework 4.0, as I often need to check the sources in order to understand functionality or bugs.

As a footnote, Reflector, just before it went commercial, could not decompile some of the code there. Not only did it not decompile it, but it presented empty methods as if that was all there was in the code, with no warnings, errors or explanatory comments. So, even if free, I bet Reflector would have sucked in the end, after getting into the money-grabbing hands of RedGate.

dotPeek has seen and decompiled the code that Reflector did not. Also, I have to say that ReSharper-like functionality, such as finding usages or going to a declaration, makes dotPeek a very nice tool to work with. What I did not quite like is that it doesn't yet have the functionality to save the sources to text files. But I am sure this is just a detail that has not been implemented yet. Hopefully, they will provide a rich plugin model like old Reflector did.

Unfortunately, to download JustDecompile you need a Telerik login, which, as everyone knows, simply sucks. No one likes a registration form, folks! Especially one that presents you with wonderful pre-checked checkboxes giving Telerik permission to send you all kinds of stupid promotions and newsletters. Also, the download is an .msi file. Most developers like to see what they are installing and preferably just copy it from a file archive. Running the .msi took forever, including the mandatory 100% CPU utilization bit that I will never understand in installation products (coming from the .NET runtime optimization service, mscorsvw, called by ngen). But that's just the delivery system. Let's check out the actual thing.

JustDecompile starts reasonably fast and it also has a nice look, being built with Telerik controls and what not. The decompilation is a bit weird at first, since it shows only the method names, and for a second there I thought it was as bad as Reflector was, but then I noticed the Expand All Members button. The context menu is not nearly as useful as dotPeek's, but there are a lot of options in the top toolbar and the navigation via links is fast and intuitive. It also has no text saving options yet.

As for the decompiled sources, I noticed these differences:
  • JustDecompiler places inline member declarations in constructors, dotPeek shows them inline. It might not seem an important thing, but an internal class gains a weird public constructor in order to place the declaration there, instead of using the only internal constructor that the class had. It looks strange too, as its last line is base(); which is not even legal.
  • dotPeek seems to want to cast everything in the source code. For example List list = (List) null;
  • JustDecompiler shows a Dictionary TryGetValue method with a ref parameter, dotPeek shows the correct out.
  • dotPeek creates really simple names for local scope variables, like list and list1, while JustDecompiler seems to create more meaningful names, like attachedAnnotations
  • JustDecompiler shows a class as internal, while dotPeek shows it as internal abstract.
  • JustDecompiler seems to fail to correctly decompile indexer access.
  • JustDecompiler doesn't seem to handle explicit interface implementations.
  • JustDecompiler doesn't seem to decompile readonly fields.
  • JustDecompiler transforms a piece of code into an if with a return in it and then some other code, dotPeek decompiles it into an if/else.
  • JustDecompiler doesn't seem to handle Unicode characters. dotPeek correctly encodes them in source like "\x001B".
  • dotPeek seems to join nested ifs in a single one, as opposed to JustDecompiler.
  • JustDecompiler uses base. in order to access members coming from base classes, while dotPeek uses this.


I will stop here. I am sure there are many other differences. My conclusion is that dotPeek could do with the naming algorithm JustDecompiler seems to use for local scope variables, but in most other ways it is superior to JustDecompiler for now. As both programs are in beta, this could quickly change. I do hope that healthy competition between these two products (and, why not, shady developer meetings in bars over tons of beer and pizza, in order to compare ideas between companies) will result in great products. My only wish is that one of these products would become open source, but as both use proprietary bits from commercial products, I doubt it will happen.

Have fun, devs!

Update 23 Feb 2012:
Spurred by a comment from Telerik, I again tried a (quick and dirty, mind you) comparison of the two .Net decompilation tools: JetBrains dotPeek and Telerik JustDecompile. Here are my impressions:

First of all, the Telerik tool has a really cute installer. I am certainly annoyed with the default Windows one and its weird error codes and inexplicable crashes. Now, that doesn't mean the Telerik installer does better in the error section, since I had none, but how could it not? The problem with the installation of JustDecompile is that it also tried to install (option checkboxes set by default) JustCode and JustTrace. The checkboxes themselves were something really custom, graphically, so I almost left them checked, since they looked like part of the background picture. If it weren't for my brain's spam detector, which went all red lights and alarm bells on seeing a really beautiful installer for a free tool, I might have installed the two applications.

Now for the decompilation itself. I was trying to see what the VisualBasic Strings.FormatNumber method contained. The results:
  • dotPeek showed xml documentation comments, JustDecompile did not
  • dotPeek showed default values for method parameters, JustDecompile did not
  • JustDecompile could decompile the source in C#, VB and IL, dotPeek did only C#
  • JustDecompile showed the source closer to the original source (I can say this because it also shows VB, which is probably the language in which Microsoft.VisualBasic was written), dotPeek shows an equivalent source, but heavily optimized, with things like ternary operators, inversions of if blocks and even removals of else sections if the if block can directly return from the method
  • There are some decorative attributes that dotPeek shows, while JustDecompile does not (like MethodImplAttribute)
  • dotPeek has a tabbed interface that allows the opening of more than a single file, JustDecompile has only a code view window
  • dotPeek shows the code of a class in a window, in order to see a method, it scrolls to where the method is; JustDecompile shows a class as a stub, one needs to click on a method to see the implementation of only that method in the code window


My conclusion remains that dotPeek is a lot more usable than JustDecompile. As a ReSharper user, I can accept that I am biased, but one of the major functions of a .Net decompiler is to show you usable code. While I can take individual methods or properties with JustDecompile and paste them into my code, I can take entire classes with dotPeek, which makes me choose dotPeek for the moment, no matter all the other points above. Of course, if either of the two tools gave me a button that would allow me to take a dll and see it as a Visual Studio project, it would quickly rise to the top of my choices.

Update 26 Apr 2013:
I've again compared the two .Net decompilers. JustDecompile 1.404.2 versus dotPeek EAP 1.1.1.511. You might ask why I am comparing with the Early Access Program version. It is because JustDecompile now has the option to export the assembly to a Visual Studio project (yay!), but dotPeek only has this in the EAP version so far.
I have this to report:
Telerik's JustDecompile:
  • the installer is just as cute as before, only it is for a suite called DevCraft, of which one of the products is JustDecompile
  • something that seemed a bit careless is the "trial" keyword appearing in both download page and installer. If installing just JD, it is not trial
  • again the checkboxes for JustCode and JustTrace are checked by default, but at least they are more visible in the list of products in the suite
  • a Help Improve the Telerik Installer Privacy Policy checkbox checked by default appeared and it is not that visible
  • the same need to have an account to Telerik in order to download JD
  • when installing JD, it also installs the Telerik Control Panel a single place to download and manage Telerik products, which is not obvious from the installer
  • the install takes about two minutes on my computer to a total size of 31MB, including the control panel
  • if a class is in a multipart namespace like Net.Dns, it uses folders named Net.Dns if there is no class in the Net namespace
  • not everything goes smoothly, sometimes the decompiler throws exceptions that are then logged in the code as comments, with the request to mail to JustDecompilePublicFeedback@telerik.com
  • it creates the AssemblyInfo.cs file in a Properties folder, just like when creating a project
  • resolves string concatenation with string.Concat, rather than using the '+' operator as in the original code
  • resolves foreach loops into while(true) loops with breaks when a condition is met
  • uses private static methods in a class with the qualified class name
  • resolves inline variables, leaving the code readable
  • overall it has a nicer decompiled code structure than dotPeek
  • adds explicit default constructors to classes
  • places generic class constraint at the end of constraints list, generating an exception
  • it doesn't catch all reference assemblies, sometimes you have to manually add them to the list
  • decompiles enum values to integer in method optional parameters default values, generating compilation errors
  • decompiles default(T) to null in method optional parameters default values, generating compilation errors
  • decompiles class destructors to Finalize methods which are not valid, generating compilation errors
  • types of parameters in calls to base constructors are sometimes wrong
  • places calls to base/this constructors at the end of constructor code blocks, which of course does not work, when you place more complex code in the calls
  • doesn't understand cast to ValueType (which is somewhat obscure, I agree)
  • really fucks up expressions trees like FluentNHibernate mapping classes, but I hate NHibernate anyway
  • resolves if blocks with return in them to goto/label sometimes
  • resolves readonly fields instantiated from a constructor to a mess that uses a local variable to set the field (which is not valid)
  • doesn't correctly resolve a class name if it conflicts with the name of a local method or field
  • inlines constants (although I don't think they can solve this)
  • switch/case statements on Enum values sometimes gain weird extra case blocks
  • sometimes it uses safe casting with value types (x as bool)

JetBrain's dotPeek:
  • the EAP version has a standalone executable version which doesn't need installation
  • the whole install is really fast and installs around 46MB
  • as I said above, it does not have the Export to Project option until version 1.1
  • the decompilation process is slower than JustDecompile's
  • if a class is in a multipart namespace like Net.Dns, it uses a folder structure like Net/Dns
  • sometimes things don't go well and it marks this with // ISSUE: comments, describing the problem. Note: these are not code exceptions, but issues with the decompiled code
  • it inlines a lot of local variables, making the code more compact and less readable
  • weird casting of items in string concatenations
  • a tendency to strong typed casting, making the code less readable and generating compilation errors at times
  • the AssemblyInfo.cs file is not created in a Properties folder
  • when there are more classes in a single file, it creates a file for each, named as the original file, but prefixed with a number, instead of using the name of the class
  • it has an option to create the solution for the project as well
  • it creates types for anonymous types, and creates files with weird names for them, which are not really valid, screwing the project.
  • it has problems with base constructor calls and constructor inheritance
  • it has problems with out parameters, it makes a complete mess of them
  • tries to create a type for Linq IQueryable results, badly
  • it has problems with class names that are the same as names of namespaces (this is an issue of ReSharper as well, when it doesn't present the option to choose between a class name and a namespace name)
  • resolves while(method) to invalid for loops
  • it doesn't correctly resolve a class name if it conflicts with the name of a local method or field
  • problems with explicit interface implementations: ISomething a=new Something(); a.Method(); (it declares a as Something, not ISomething)
  • problems with decompiling linq method chains
  • I found a situation where it resolved Decimal.op_Increment(d) for 1+d
  • indirectly used assemblies are not added to the reference list
  • it sometimes creates weird local variables like local_0, which are not declared, so not valid
  • adds a weird [assembly: Extension] in the AssemblyInfo file, which is not valid
  • a lot of messed up bool values resolved as (object) (bool) (value ? 1 : 0), which doesn't even work
  • inlines constants (although I don't think they can solve this)
  • __Null local1 = null; - really?

After decompiling, solving the issues and compiling again an assembly in the project I am working on I got these sizes:
JustDecompile: 409088
dotPeek: 395776
The original: 396288

Of course, this is not really a scientific comparison between the two. I was excited by the implementation of Export to Project in both products and I focused mainly on that. The navigation between types and methods is vastly improved in JustDecompile and, to my chagrin, I have to admit that it may be easier and safer to use than dotPeek at this time. Good job, Telerik! Oh, and no, they have NOT paid me to do this research :-)

Short version: open the registry, look for the file association entry, locate the command subkey and check if, besides the (Default) value, there isn't a command multi-string value that looks garbled. Rename or remove it.

Now for the long version. I've had this problem for a long time: trying to open an Office doc file by double clicking it, by selecting "Open" from the context menu, or even by trying "Open with" and selecting WinWord.exe threw an error that read like this: This action is only valid for installed products. This was strange, as I had Office 2007 installed and I could open Word just fine and open a document from within it; it only had problems with the open command.

As I rarely use Office at home, I didn't deem it necessary to solve the problem, but this morning I decided that it was a matter of pride to make it work. After all, I have an IT blog and readers look up to me for technical advice. Both of them. So away I went to try to solve the problem.

The above error message is searched for so often that it comes up in Google autocomplete, but the circumstances and possible solutions are so varied that it didn't help much. I did find an article that explained how Office actually opens up documents. It said to go into the registry and look in the HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\command subkey. There should be a command line that looks like "C:\Program Files\Microsoft Office\Office12\WINWORD.EXE" /n /dde. The /dde flag is an internal undocumented flag that tells Windows to use the Dynamic Data Exchange server to communicate the command line arguments to Word, via the next key in the registry: HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\ddeexec, which looks like: [REM _DDE_Direct][FileOpen("%1")]. So in other words (pun not intended) WinWord should open with the /n flag, which instructs it to start with no document open, then execute the FileOpen command with the file provided. If I had this as the value of the command registry key, it should work.

OK, I opened up the registry editor (if you don't know what that is or how to use it, my recommendation is to NOT use it; instead ask a friend who knows what to do. You've been warned!), went to where the going is good and found the command subkey. It held a (Default) value that looked like it should, and then it held another value, also named command, only this one was not a string (REG_SZ) but a multi-string (REG_MULTI_SZ), and its value was something like C84DVn-}f(YR]eAR6.jiWORDFiles>L&rfUmW.cG.e%fI4G}jd /n /dde. Do not worry, there is nothing wrong with your monitor, I control the horizontal, vertical and diagonal, it looked just as weird as you see it. At first I thought it was some weird check mechanism, some partial hash or strange encoding method used in that weird REG_MULTI_SZ type, which at the moment I didn't know what it meant. Did I mention it was weird? Well, it turns out that a multi-string value is a list of strings, not a single-line string, so there was no reason for the weirdness at all. You can see that it expects a list of strings because when you modify the value it presents you with a multiline textbox, not a single-line one.

So, thank you for reading thus far; the solution was to remove all the rogue command values (NOT the command subkeys), leaving the (Default) value in its normal state. I do not know what garbled the registry, but what happened is that Windows was trying to execute the strange string and, obviously, failed. The obscure error message was basically saying that it didn't find the file or command it was trying to execute and has nothing to do with Office per se.
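For reference, after the cleanup the relevant keys should contain only something like the following (this is the Office 2007 / Office12 layout quoted above; other versions use a different path and ProgID):

HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\command
  (Default) = "C:\Program Files\Microsoft Office\Office12\WINWORD.EXE" /n /dde

HKEY_CLASSES_ROOT\Word.Document.12\shell\Open\ddeexec
  (Default) = [REM _DDE_Direct][FileOpen("%1")]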

Of course, you have to repeat the procedure for all the file types that are affected, like RTF, for example.