Just when I thought I didn't have anything else to add, I found new stuff for my Chrome browser extension.

Bookmark Explorer now features:
  • configurable interval for keeping a page open before bookmarking it for Read Later (so that all redirects and icons are loaded correctly)
  • configurable interval after which deleted bookmarks are no longer remembered
  • remembering deleted bookmarks no matter what deletes them
  • more Read Later folders: configure their number and names
  • redesigned options page
  • more notifications on what is going on

The extension most resembles OneTab, in the sense that it is also designed to save you from opening a zillion tabs at the same time, but it differs a lot in its ease of use, configurability and the absolute lack of any connection to outside servers: everything is stored in Chrome bookmarks and local storage.

Enjoy!

The Date object in Javascript is not a primitive; it is a full-fledged object, with a constructor and various instances, with methods that mutate their values. That means that the meaning of equality between two dates is ambiguous: what does it mean that date1 equals date2? That they are the same object or that they point to the same moment in time? In Javascript, it means they are the same object. Let me give you some code:
var date1 = new Date();
var date2 = new Date(date1.getTime()); // now date2 points to the same moment in time as date1
console.log(date1 == date2); // outputs "false"
date1 = date2;
console.log(date1 == date2); // outputs "true", date1 and date2 are pointing to the same object

So, how can we compare two dates? The first thought is to turn them into numeric values (how many milliseconds from the beginning of 1970) and compare them. That works, but looks ugly. Instead of using date1.getTime() == date2.getTime() one might use the fact that the valueOf function of the Date object returns the same numeric value as getTime and turn the comparison into a subtraction instead. To compare the two dates, just check if date2 - date1 == 0.
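
Here is a quick sketch of both approaches:

var date1 = new Date(2016, 5, 13); // months are zero-based, so this is June 13 2016
var date2 = new Date(date1.getTime());

// explicit numeric comparison
console.log(date1.getTime() === date2.getTime()); // outputs "true"

// subtraction coerces both dates to numbers via valueOf
console.log(date2 - date1 === 0); // outputs "true"

// relational operators coerce as well, so range checks work too
console.log(date1 <= date2); // outputs "true"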

I was working on a project of mine that also has some unit tests. In one of them, the HTML structure is abstracted and sent as a jQuery created element to a page script. However, the script uses the custom jQuery selector :visible, which completely fails in this case. You see, none of the elements are visible unless added to the DOM of a page. The original jQuery selector goes directly to existing browser methods to check for visibility:
jQuery.expr.filters.visible = function( elem ) {
    // Support: Opera <= 12.12
    // Opera reports offsetWidths and offsetHeights less than zero on some elements
    // Use OR instead of AND as the element is not visible if either is true
    // See tickets #10406 and #13132
    return elem.offsetWidth > 0 || elem.offsetHeight > 0 || elem.getClientRects().length > 0;
};

So I've decided to write my own simple selector which replaces it. Here it is:
$.extend($.expr[':'], {
    nothidden : function (el) {
        el = $(el);
        while (el.length) {
            if (el[0].ownerDocument === null) break;
            if (el.css('display') == 'none') return false;
            el = el.parent();
        }
        return true;
    }
});

It goes from the selected element up through its parents, recursively, until it either runs out of elements or finds a parent with display set to none. I was only interested in the CSS display property, so if you want extra checks like visibility or opacity, add them yourself.

What I wanted to talk about is that strange ownerDocument property check. It all stems from a quirk in jQuery which causes $(document).css(...) to fail; the team decided to ignore the bug report regarding it. But then, what happens when I create an element with jQuery and don't attach it to the DOM? Well, behind the scenes, all elements are created with document.createElement or document.createDocumentFragment which, as it makes sense, fill the ownerDocument property with the document object that created the element. The only link in the chain that doesn't have an ownerDocument is the document itself. You might want to remember this in case you want to go up the .parent() ladder yourself.
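
As a quick illustration, here is how the new selector should behave on a detached fragment, where :visible fails (the markup is invented for this example):

var fragment = $('<div><span>shown</span><span style="display:none">hidden</span></div>');

// :visible finds nothing, since the fragment was never added to the DOM
console.log(fragment.find('span:visible').length); // outputs "0"

// :nothidden only walks the display property up the parent chain
console.log(fragment.find('span:nothidden').length); // outputs "1"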

Now, a warning: I just wrote this and it might fail in some weird document-in-document cases, like IFRAMEs and stuff like that. I have not tested it beyond my use case, which luckily involves only one type of browser.

Bookmark Explorer, a Chrome browser extension that allows you to navigate inside bookmark folders on the same page, saving you from a deluge of browser tabs, has now reached version 2.4.0. I consider it stable, as I have no new features planned for it and the only changes I envision in the near future are switching to ECMAScript 6 and updating the unit tests (in other words, nothing that concerns the user).

Let me remind you of its features:

  • lets you go to the previous/next page in a bookmark folder, allowing sequential reading of selected news or research items
  • has context menu, popup buttons and keyboard shortcut support
  • shows a page with all the items in the current bookmark folder, allowing selection, deletion, importing/exporting of simple URL lists
  • shows a page with all the bookmarks that were deleted, allowing restoring them, clearing them, etc.
  • keyboard support for both pages
  • notifies you if the current page has been bookmarked multiple times
  • no communication with the Internet, it works just as well offline - assuming the links would work offline, like local files
  • absolutely free


Install it from Google's Chrome Web store.

...is stupid.

For a very long time the only commonly used expression of software was the desktop application. Whether it was a console Linux thing or a full-blown Windows application, it was something that you opened to get things done. In case you wanted to do several things, you either opted for a more complex application or used several of them, usually transferring partial work via the file system, sometimes in more obscure ways. For example, to publish a photo album you would take all the pictures you've taken, process them with image processing software, save them, then load them into a photo album application. For all intents and purposes, the applications are black boxes to each other; they only connect through inputs and outputs and need not know what goes on inside one another.

Enter the web and its novel concept of URLs, Uniform Resource Locators. In theory, everything on the web can be accessible from the outside. You want to link to a page, you have its URL to add as an anchor in your page and boom! A web site references specific resources from another. The development paradigm for these new things was completely different from big monolithic applications. Sites are called sites because they should be a place for resources to sit in; they are places, they have no other role. The resources, on the other hand, can be processed and handled by specific applications like browsers. If a browser is implemented in all operating systems in the same way, then the resources get accessed the same way, making the operating system - the most important part of one's software platform - meaningless. This gets us to this day and age when an OS is there to restrict what you can do, rather than provide you with features. But that's another story altogether.

With increased computing power, storage space, network speeds and the introduction and refining of Javascript - now considered a top contender for the most important programming language ever - we are now able to embed all kinds of crazy features in web pages, so much so that we have reached a time when writing a single page application is not only possible, but the norm. They had to add new functionality to browsers just to let the page tweak the browser address without reloading the page - and that is a big deal! And a really dumb one. Let me explain why.

The original concept was that the web would own the underlying mechanism of resource location. The new concept forces the developer to define what a resource locator means. I can pretty much make my own natural language processing system and have URLs that look like: https://siderite.com/give me that post ranting about the single page apps. And yes, the concept is not new, but the problem is that the implementation is owned by me. I can change it at any time and, since it all started from a desire to implement the newest fashion, it is destined to change. The result is chaos, and that is presuming that the software developer thought of all contingencies and the URL system is adequate to link to resources from the page... which is never true. If the developer is responsible for interpreting what a URL means, then it is hardly "uniform".

Another thing that single page apps lead to is web site bloating. Not only do you have to load the stuff that now is on every popular website, like large pointless images and big fonts and large empty spaces, but also the underlying mechanism of the web app, which tells us where we are, what we can do, what gets loaded etc. And that's extra baggage that no one asked for. A single page app is hard to parse by a machine - and I don't care about SEO here, it's all about the way information is accessible.

My contention is that we are going backwards. We got to the point where connectivity is more important than functionality, where being on the web is more important than having complex well done features in a desktop app. It forced us to open up everything: resources, communication, protocols, even the development process and the code. And now we are going back to the "one app to rule them all" concept. And I do understand the attraction. How many times did I dream of adding mini games on my blog or making a 3D interface and a circular corner menu and so on. These things are cool! But they are only useful in the context of an existing web page that has value without them. Go to single page websites and try to open them with Javascript disabled. Google has a nice search page that works even then and you know what? The same page with Javascript is six times larger than the one without - and this without large differences in display. Yes, I know that this blog has a lot of stuff loaded with Javascript and that this page is probably much smaller without it, but the point is that the blog is still usable. For more on this you should take the time to read The Web Obesity Crisis, which is not only terribly true, but immensely funny.

And I also have to say I understand why some sites need to be single page applications, and that is because they are more application than web site. The functionality trumps the content. You can't have an online image processing app work without Javascript, that's insane. You don't need to reference the resource found in a color panel inside the photo editor, you don't need to link to the image used in the color picker and so on. But web sites like Flipboard, for example, that display a blank page when seen without Javascript, are supposed to be news aggregators. You go there to read stuff! It is true we can now decide how much of our page is a site and how much an application, but that doesn't mean we should construct abominations that are neither!

A while ago I wrote another ranty rant about how taking over another intuitively common web mechanism - scrolling - helps no one. These two patterns go hand in hand and are slowly polluting the Internet. Last week Ars Technica announced a change in their design and implemented it at the same time. They removed the way many users read the news: sequentially, one after the other, by scrolling down and clicking on the one you liked, and resorted to a magazine format where news items sat side by side on a big white page with large design placeholders that looked cool yet did nothing but occupy space and display the number of comments for each. Content took a backseat to commentary. I am glad to report that two days later they reverted their decision, in view of the many negative comments.

I have nothing but respect for web designers, as I usually do for people that do things I am incapable of, however their role should always be to support the purpose of the site. Once things look cool just for the sake of it, you get Apple: a short lived bloom of user friendliness, followed by a vomitous explosion of marketing and pricing, leading to the immediate creation of cheaper clones. Copying a design because you think it is great is normal, copying a bunch of designs because you have no idea what your web page is supposed to do is just direct proof you are clueless, and copying a design because everyone else is doing it is just blindly following clueless people.

My advice, as misguided as it could be, is forget about responsiveness and finger sized checkboxes, big images, crisp design and bootstrapped pages and all that crap. Just stop! And think! What are you trying to achieve? And then do it, as a web site, with pages, links and all that old fashioned logic. And if you still need cool design, add it after.

Update 17 June 2016: I've changed the focus of the extension to simply change the aspect of stories based on status, so that stories with content are highlighted over simple shares. I am currently working on another extension that is more adaptive, but it will be branded differently.

Update 27 May 2016: I've published the very early draft of the extension because it already does a cool thing: putting original content in the foreground and shrinking the reposts and photo uploads and feeling sharing and all that. You may find and install the extension here.

Have you ever wanted to decrease the spam in your Facebook page but couldn't do it in any way that would not make you miss important posts? I mean, even if you categorize all your contacts into good friends, close friends, relatives, acquaintances, then you unfollow the ones that really spam too much and you hide all posts that you don't like, you have no control over how Facebook decides to order what you see on the page. Worse than that, try to refresh repeatedly your Facebook page and see wildly oscillating results: posts appear, disappear, reorder themselves. It's a mess.

Well, true to this and my word I have started work on a Chrome extension to help me with this. My plan is pretty complicated, so before I publish the extension on the Chrome Webstore, like I did with my previous two efforts, I will publish this on GitHub while I am still working on it. So, depending on where I am, this might be alpha, beta or stable. At the moment of this writing - first commit - alpha is a pretty big word.

Here is the plan for the extension:
  1. Detect the user has opened the Facebook page
  2. Inject jQuery and extension code into the page
  3. Detect any post as it appears on the page
  4. Extract as many features as possible
  5. Allow the user to create categories for posts
  6. Allow the user to drag posts into categories or out of them
  7. Use AI to determine the category a post most likely belongs to
  8. Alternatively, let the user create their own filters, a la Outlook
  9. Show a list of categories (as tabs, perhaps) and hide all posts under the respective categories
This way, one might skip the annoying posts, based on personal preferences, while still enjoying the interesting ones. At the time of this writing, the first draft, the extension only works on https://www.facebook.com, not on any subpages, it extracts the type of the post and sets a CSS class on it. It also injects a CSS which makes posts get dimmer and smaller based on category. Mouse over to get the normal size and opacity.
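
To give an idea of the processing, the classification step might look something like the sketch below; the selector and the type names here are assumptions made for the example, the real ones live in the GitHub source:

// a minimal sketch of the classification step; the selector and type names are invented
function classifyPosts() {
    $('div.userContentWrapper').each(function () {
        var post = $(this);
        if (post.attr('data-category')) return; // already processed
        var type = post.find('a[href*="photo"]').length
            ? 'photo-upload'
            : 'status-update';
        post.attr('data-category', type).addClass(type);
    });
}
// Facebook loads posts dynamically, so look for new ones periodically
setInterval(classifyPosts, 1000);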

How to make it work for you:
  1. In Chrome, go to Manage Extensions (chrome://extensions/)
  2. Click on the Developer Mode checkbox
  3. Click on the Load unpacked extension... button
  4. Select a folder where you have downloaded the source of this extension
  5. Open a new tab and load Facebook there
  6. You should see the posts getting smaller and dimmer based on category.
Change statusProcessor.css to select your own preferences (you may hide posts altogether or change the background color, etc).

As usual, please let me know what you think and contribute with code and ideas.

I've written another Chrome extension that I consider in beta, but so far it works. Really ugly makeshift code, but I am now gathering data about the way I will use it, and then I am going to refactor it, just as I did with Bookmark Explorer. You may find the code at GitHub and the extension at the Chrome webstore.

This is how it works: every time you access anything with the browser, the extension will remember the IPs for any given host. It will hold a list of the IPs, in reverse order (last one first), that you can just copy and paste into your hosts file. The hosts file is found on Windows in C:/Windows/System32/drivers/etc/hosts and on Linux in /etc/hosts. Once you add a line in the format "IP host" to it, the computer will resolve the host with the provided IP. Every time there is a problem with DNS resolution, the extension will add the latest known IP to the hosts text. Since the extension doesn't have access to your hard drive, you need to edit the file yourself. The icon of the DNS Resolver extension will show the number of hosts that it wants to resolve locally, or nothing if everything is OK.
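
For example, two entries pasted into the hosts file would look like this (the IPs are made up for the illustration):

# lines copied from the DNS resolver extension
93.184.216.34    example.com
151.101.1.69     stackoverflow.com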

The extension allows manual selection of an IP for a host and forced inclusion or exclusion from the list of IP/host lines. Data can be erased (all at once, for now) as well. The extension does not communicate with the outside, but it does store a list of all domains you visit, so it is a slight privacy risk - although if someone has access to the local store of a browser extension, it's already too late. There is also the option of having the extension replace the host with the IP directly in the browser requests, but this only works for the browser and fails in cases where the host name is important, as with multiple servers using the same IP, so I don't recommend using it.

There are two scenarios for which this extension is very useful:
  • The DNS server fails for some reason or gives you a wrong IP
  • Someone removed the IP address from DNS servers or replaced it with one of their own, like in the case of government censorship

I have some ideas for the future:
  • Sharing of working IP/host pairs - have to think of privacy before that, though
  • Installing a local DNS server that can communicate locally with the extension, so no more hosts editing - have to research and create one
  • Upvoting/Downvoting/flagging shared pairs - with all the horrible head-ache this comes with

As usual, let me know what you think here, or open issues on GitHub.

I have started writing Chrome extensions, mainly to address issues that my browser is not solving, like opening dozens of tabs and, lately, DNS errors/blocking and ad blocking. My code writing process is chaotic at first: just writing stuff and changing it until things work, until I get to something I feel is stable. Then I feel the need to refactor the code, organizing and cleaning it and, why not, unit testing it. This opens the question of how to do that in Javascript and, even if I have known once, I needed to refresh my understanding with new work. Without further ado: QUnit, a Javascript testing framework. Note that all code here will be in ES5 or earlier, mainly because I have not studied ES6 and I want this to work with most Javascript.

QUnit


QUnit is something that has withstood the test of time. It was first launched in 2008, but even now it is easy to use, with a simple design and clear documentation. Don't worry, you can use it even without jQuery. In order to use it, create an HTML page that links to the Javascript and CSS files from QUnit, then create your own Javascript file containing the tests and add it to the page together with whatever you are testing.

Already this raises the issue of having Javascript code that can be safely embedded in a random web page, so consider how you may encapsulate the code. Other testing frameworks can run the code in a headless Javascript engine, so if you want to be as generic as possible, also remove all dependencies on an existing web page. The oldest and simplest way of doing this is to use the fact that an orphan function in Javascript has its own scope and (when not in strict mode) has this pointing to the global object - in the case of a web page, this would be window. So instead of something like:
i = 0;
while (i < +(document.getElementById('inpNumber').value)) {
    i++;
    // do something
}
do something like this:
(function () {

    var global = this;

    var i = 0;
    while (i < +(global.document.getElementById('inpNumber').value)) {
        i++;
        // do something
    }

})();

It's a silly example, but it does several things:
  • It keeps variable i in the scope of the anonymous function, thus keeping it from interfering with other code on the page
  • It clearly defines a global object, which in case of a web page is window, but may be something else
  • It uses global to access any out of scope values

In this particular case, there is still a dependency on the default global object, but if instead one would pass the object somehow, it could be abstracted and the only change to the code would be the part where global is defined and acquired.
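
Such a version, which receives the global object as a parameter instead of inferring it, could look like this sketch:

(function (global) {

    var i = 0;
    while (i < +(global.document.getElementById('inpNumber').value)) {
        i++;
        // do something
    }

// pass the real global object here; a test could pass a mock instead
})(this);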

Let's start with QUnit. Here is a Hello World kind of thing:
QUnit.test("Hello World", function (assert) {
assert.equal(1+1, 2, "One plus one is two");
});
We put it in 'tests.js' and include it into a web page that looks like this:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>Unit Tests</title>
    <link rel="stylesheet" href="https://code.jquery.com/qunit/qunit-1.23.1.css">
</head>
<body>
    <script src="https://code.jquery.com/qunit/qunit-1.23.1.js"></script>
    <div id="qunit"></div>
    <div id="qunit-fixture"></div>

    <script src="tests.js"></script>
</body>
</html>

The result: a QUnit report page listing the test and its passing assertion.

As you can see, we declare a test with the static QUnit.test function, which receives a name and a function as parameters. Within the function, the assert object will do everything we need, mainly checking to see if a result conforms to an expected value or a block throws an exception. I will not go through a detailed explanation of simple uses like that. If you are interested, peruse the QUnit site for tutorials.
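
For reference, a few of the commonly used assertions look like this:

QUnit.test("Common assertions", function (assert) {
    assert.ok(1 === 1, "ok passes for any truthy value");
    assert.deepEqual({ a : 1 }, { a : 1 }, "deepEqual compares structure, not reference");
    assert.throws(function () {
        throw new Error("oops");
    }, /oops/, "throws verifies the block raises the expected exception");
});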

Modules


What I want to talk about are slightly more advanced scenarios. The first thing I want to address is the concept of modules. If we declare all the tests, regardless of how many scripts they are spread across, the test page will just list them one after another, in a huge blob. In order to somehow separate them into regions, we need modules. Here is another example:
QUnit.module("Addition");
QUnit.test("One plus one", function (assert) {
assert.equal(1+1, 2, "One plus one is two");
});
QUnit.module("Multiplication");
QUnit.test("Two by two", function (assert) {
assert.equal(2*2, 4, "Two by two is four");
});
resulting in a report page similar to the first one.

It may look the same, but a Module: dropdown appeared, allowing one to choose which module to test or visualize. The names of the tests also include the module name. Unfortunately, the resulting HTML doesn't have containers for modules, something one could collapse or expand at will. That is too bad, but it can be easily fixed - that is not in the scope of this post, though. A good strategy is just to put all related tests in the same Javascript file and use QUnit.module as the first line.

Asynchronicity


Another interesting issue is asynchronous testing. If we want to test functions that return asynchronously, like setTimeout or ajax calls or Promises, then we need to use assert.async. Here is an example:
QUnit.config.testTimeout = 1000;

QUnit.module("Asynchronous tests");
QUnit.test("Called after 100 milliseconds", function (assert) {
    var a = assert.async();
    setTimeout(function () {
        assert.ok(true, "Assertion was called from setTimeout");
        a();
    }, 100);
});

First of all, we needed to declare that we expect a result asynchronously, therefore we call assert.async() and hold a reference to the result. The result is actually a function. After we make all the assertions on the result, we call that function in order to finish the test. I've added a line before the test, though, which sets the testTimeout configuration value. Without it, an async test that fails would freeze the test suite indefinitely. You can easily test this by setting testTimeout to less than the setTimeout duration.

Asynchronous tests raise several questions, though. The example above is all nice and easy, but what about cases when the test is more complex, with multiple asynchronous code blocks that follow each other, like a Promise chain? What if the assertions themselves need to be called asynchronously, like when checking for the outcome of a click handler? If you run jQuery(selector).click() an immediately following assertion would fail, since the click handler is executed in another context, for example. One can imagine code like this, but look how ugly it is:
QUnit.test("Called after 500 milliseconds", function (assert) {
var a = assert.async();
setTimeout(function () {
assert.ok(true, "First setTimeout");
setTimeout(function () {
assert.ok(true, "Second setTimeout");
setTimeout(function () {
assert.ok(true, "Third setTimeout");
setTimeout(function () {
assert.ok(true, "Fourth setTimeout");
a();
}, 100);
}, 100);
}, 100);
}, 100);
setTimeout(function () {
assert.notOk(true, "Test timed out");
}, 500)
});

In order to solve at least this arrow antipattern I've created a stringFunctions function that looks like this:
function stringFunctions() {
    if (!arguments.length)
        throw 'needs functions as parameters';
    var f = function () {};
    var args = arguments;
    for (var i = args.length - 1; i >= 0; i--) {
        (function () {
            var x = i;
            var func = args[x];
            if (typeof(func) != 'function')
                throw 'parameter ' + x + ' is not a function';
            var prev = f;
            f = function () {
                setTimeout(function () {
                    func();
                    prev();
                }, 100);
            };
        })();
    }
    f();
}
which makes the previous code look like this:
QUnit.test("Called after 500 milliseconds", function (assert) {
var a = assert.async();
stringFunctions(function () {
assert.ok(true, "First setTimeout");
}, function () {
assert.ok(true, "Second setTimeout");
}, function () {
assert.ok(true, "Third setTimeout");
}, function () {
assert.ok(true, "Fourth setTimeout");
}, a);
setTimeout(function () {
assert.notOk(true, "Test timed out");
}, 500)
});

Of course, this is a specific case, but at least in a very common scenario - the one when the results of event handlers are checked - stringFunctions with 1ms instead of 100ms is very useful. Click on a button, see if a checkbox is available, check the checkbox, see if the value in a span has changed, stuff like that.
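
For example, a handler-checking test would look something like this (the element ids are invented for the illustration):

QUnit.test("Checking the box enables the button", function (assert) {
    var a = assert.async();
    stringFunctions(function () {
        $('#chkAgree').click();
    }, function () {
        assert.notOk($('#btnSubmit').prop('disabled'), "button gets enabled after the click");
    }, a);
});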

Testing average jQuery web code


Another thing I want to address is how to test Javascript that is intended as a web page companion script, with jQuery manipulations of the DOM and event listeners and all that. Ideally, all this would be stored in some sort of object that is instantiated with parameters that specify the test context, the various mocks and so on and so on. Since it is not an ideal world, I want to show you a way to test a typical such script, one that executes a function at DOMReady and does everything in it. Here is an example:
$(function () {

    $('#btnSomething').click(function () {
        $('#divSomethingElse').empty();
    });

});
The code assumes $ is jQuery, then it adds a handler to a button click to empty another element. Think about how this should be tested:
  1. Declare a QUnit test
  2. In it, execute the script
  3. Then make some assertions

I was a bit lazy and changed the scripts themselves to check if a testContext exists and use that one. Something like this:
(function ($) {

    var global = this;
    var jQueryContext = global.testContext && global.testContext.document ? global.testContext.document : global.document;
    var chrome = global.testContext && global.testContext.chrome ? global.testContext.chrome : global.chrome;
    // etc.

    $(function () {

        $('#btnSomething', jQueryContext).click(function () {
            $('#divSomethingElse', jQueryContext).empty();
        });

    });

})(jQuery);
which has certain advantages. First, it makes you aware of all the uses of jQuery in the code, yet it doesn't force you to declare everything in an object and refactor everything. Funny how you need to refactor the code in order to write unit tests in order to be able to refactor the code. Automated testing gets like that. It also solves some problems with testing Javascript offline - directly from the file system - because all you need to do now is define the testContext, then load the script by creating a tag in the testing page and setting the src attribute:
var script = document.createElement('script');
script.onload = function () {
    // your assertions here
};
script.src = "http://whatever.com/the/script.js";
document.getElementsByTagName('head')[0].appendChild(script);
In this case, even if you are running the page from the filesystem, the script will be loaded and executed correctly. Another, more elegant solution would load the script as a string and execute it inside a closure where jQuery was replaced with something that uses a mock document by default. This means you don't have to change your code at all, but you need to be able to read the script as text, which is impossible on the filesystem. Some really messy script tag creation would be needed:
QUnit.test("jQuery script Tests", function (assert) {

var global = (function () {
return this;
})();

function setIsolatedJquery() {
global.originalJquery = jQuery.noConflict(true);
var tc = global.testContext.document;
global.jQuery = global.$ = function (selectorOrHtmlOrFunction, context) {
if (typeof(selectorOrHtmlOrFunction) == 'function')
return global.originalJquery.apply(this, arguments);
var newContext;
if (!context) {
newContext = tc; //if not specified, use the testContext
} else {
if (typeof(context) == 'string') {
newContext = global.originalJquery(context, tc); //if context is a selector, use it inside the testContext
} else {
newContext = context; // use the one provided
}
}
return global.originalJquery(selectorOrHtmlOrFunction, newContext)
}
};
function restoreJquery() {
global.jQuery = global.$ = global.originalJquery;
delete global.originalJquery;
}

var a = assert.async();

global.testContext = {
document : jQuery('<div><button id="btnSomething">Something</button><div id="divSomethingElse"><span>Content</span></div></div>')
};
setIsolatedJquery();

var script = document.createElement('script');
script.onload = function () {

assert.notEqual($('#divSomethingElse').children().length, 0, "SomethingElse has children");
$('#btnSomething').click();
setTimeout(function () {
assert.equal($('#divSomethingElse').children().length, 0, "clicking Something clears SomethingElse");
restoreJquery();
a();
}, 1);
};
script.src = "sample.js";
document.getElementsByTagName('head')[0].appendChild(script);

});

There you have it: an asynchronous test that replaces jQuery with something with an isolated context, loads a script dynamically, performs a click in the isolated context, checks the results. Notice the generic way in which to get the value of the global object in Javascript.

Bottom-up or top-down approach


A last point I want to make is more theoretical. After some consultation with a colleague, I've finally cleared up some confusion I had about the direction of automated tests. You see, once you have the code - or even in TDD, I guess, you know what every small piece of code does and also the final requirements of the product. Where should you start in order to create automated tests?

One solution is to start from the bottom and check that your methods call everything they need to call in the mocked dependencies. If your method calls 'chrome.tabs.create' and you have mocked chrome, your tabs.create mock should count how many times it is called and your assertion should check that the count is 1. It has the advantage of being straightforward, but it also tests details that might be irrelevant. One might refactor the method to call some other API and then the test would fail, as it tested the actual implementation details, not a result. Of course, methods that return the same result for the same input values - sometimes called pure functions - are perfect for this type of testing.
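
A bottom-up test in this style might look like the sketch below; openBookmark is a made-up method under test that receives its dependencies as parameters:

QUnit.test("openBookmark creates a tab", function (assert) {
    // hand-made mock that only counts the calls it receives
    var createCalls = 0;
    var chromeMock = {
        tabs : {
            create : function (options) { createCalls++; }
        }
    };
    // hypothetical method under test
    function openBookmark(chrome, url) {
        chrome.tabs.create({ url : url });
    }
    openBookmark(chromeMock, 'https://example.com');
    assert.equal(createCalls, 1, "chrome.tabs.create was called exactly once");
});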

Another solution is to start from the requirements and test that the entire codebase does what it is supposed to do. This makes more sense, but the number of possible test case combinations increases exponentially and it is difficult to spot where the problem lies if a test fails. This would be called acceptance testing.

Well, the answer is: both! It all depends on your budget, of course, as you need to take into consideration not only the writing of the tests, but their maintenance as well. Automated acceptance tests would not need to change a lot, only when requirements change, while unit tests would need to be changed whenever the implementation is altered or new code is added.

Conclusion


I am not an expert on unit testing, so what I have written here describes my own experiments. Please let me know if you have anything to add or to comment. My personal opinion on the matter is that testing provides a measure of confidence that minimizes the stress of introducing changes or refactoring code. It also forces people to think in terms of "how will I test this?" while writing code, which I think is great from the viewpoint of separation of concerns and code modularity. On the other hand it adds a relatively large resource drain, both in writing and (especially) in maintaining the tests. There is also a circular kind of issue where someone needs to test the tests. Psychologically, I also believe automated testing only works for certain people. Chaotic asses like myself like to experiment a lot, which makes testing a drag. I don't even know what I want to achieve and someone tries to push testing down my throat. Later on, though, tests would be welcome, if only my manager allows the time for it. So it is, as always, a matter of logistics.

More info about unit testing with QUnit on their page.

I have been a professional in the IT business for a lot of years - less if you count just software development, more if you count that my favorite activity since I was a kid was to mess with one computer or another. I think I know how to develop software, especially since I've kind of built my career on trying new places and new methods of doing it. And now people come to me and ask me: "Can I learn too? Can you teach me?". And the immediate answer is yes and no (heh! I learnt that line from the best politicians). Well, yes, because I believe anyone who actually wants to learn can, and no, because I am a lousy teacher. But wait a minute... can't I become one?

You may think that it is easy to remember how it was when I was a code virgin, when I was writing Basic programs in a notebook in the hope that some day my father would buy me a computer, but it's not. My brain has picked up so many things that now they are applied automatically. I may not know what I know, but I know a lot and I am using it at all times. A few weeks ago I started thinking about these things and one of the first ideas that came to me was FizzBuzz! A program that allegedly people who can't program simply can't... err... program. Well, I thought, how would I write this best? How about worst? I even asked my wife and she gave me an idea that had never occurred to me, like not using the modulo function to determine divisibility.

And it dawned on me. To know if your code is good you need to know exactly what that code has to do. In other words, you can't program without having an idea of how it will be used or tested afterwards. You have to think about all the other people that will be stumbling upon your masterwork: other developers, for example, hired after you left the company, need to understand what they are looking at. You need to provide up-to-date and clear documentation to your users, as well. You need to handle all kinds of weird situations that your software might be subjected to. To sum it up: as a good developer you need to be a bit of all the people on the chain - users, testers, documenters, managers, marketers, colleagues - and to see the future as well. After all, you're an expert.

Of course, sketches like the one above are nothing but caricatures of people from the viewpoint of other people who don't understand them. After all, good managers need to be a little of everything as well. If you think about it, to be good at anything means you have to understand a little of everybody you work with and do your part well - exactly the opposite of specialization, the solution touted as solving every problem in the modern world. Anyway, enough philosophy. We were talking programming here.

What I mean to say is that with every bit of our craft, we developers are doing good things for other people. We code so that the computer does the job well, but we are telling it to do things that users need; we write concisely yet clearly so that other developers can work from where we leave off; we write unit tests to make sure what we do is what we mean and to ease the work of the people who need to manually check it; we comment the code so that anyone can understand at a glance what a method does and maybe even automate the creation of documents explaining what the software does. And we draw lines in the form of a kitten so that marketers and managers sell the software - and we hate it, but we do it anyway.

So I ask, do we need to learn to write programs all over again? Because, to be frank, coders today write in TDD style because they think it's cutting edge, not because they are doing it for someone; they work in agile teams not because they know everybody will get a better understanding of what they are doing and prevent catastrophic crashes caused by lack of vision, but because they feel it takes managers off their backs and lets them do their jobs; they don't write comments for the documentation team, but because they fear their small attention span might make them forget what the hell they were doing; they don't write several smaller methods instead of a large one because they believe in helping others read their code, but because some new gimmick tells them they have too much cyclomatic complexity. And so on and so on.

What if we would learn (and teach) that writing software is nothing but an abstraction layer thrown over helping all kinds of people in need and that even the least rockstar ninja superhero developer is still a hero if they do their job right? What if being a good cog in the machine is not such a bad thing?

While writing this I went all over the place, I know, and I didn't even touch on what started me thinking about it: politics and laws. I was thinking that if we defined the purpose of a law when we write it and packaged the two together, anyone who could demonstrate that the effect is not the desired one could remove the law. How grand would that be? To know that something is applied upon you because no one could demonstrate that it is bad or wrong or ineffective.

We do that in software all the time - open software, for example, but also the internal processes in a programming shop designed to catch flaws early and to ensure people built things how they should have. Sometimes I feel so far removed from "the real world" because what I am doing seems to make more sense, and in fact be more real, than the crap I see all around me or in the media. What if we could apply this everywhere? Where people would take responsibility individually, not in social crowds? Where things would work well not because a lot of people agree, but because no one can demonstrate they work badly? What if the world is a big machine and we need to code for it?

Maybe learning to code is learning to live! Who wouldn't want to teach that?

During one revamp of the blog I realized that I didn't have images for some of my posts. I had counted pretty much on the Blogger system, which provides a post.thumbnailUrl metadata value that I can use when displaying the post, but the URL is not always there. Of course, if you have a nice image in the post somewhere prominently displayed, the thumbnail URL will be populated, but what if you have a video? Surprisingly, Blogger has a pretty shitty video-to-thumbnail mechanism, which prompted me to build my own.

So the requirements would be: get me the image representing a video embedded in my page, using Javascript only.

Well, first of all, videos can be actual video tags, but most of the time they are iframe elements coming from a reliable global provider like YouTube, Dailymotion, Vimeo, etc., and all the information available is the URL of the display frame. Here is the way to get the thumbnail for these scenarios:

YouTube


Given the iframe src value:

// find youtube.com/embed/[videohash] or youtube.com/embed/video/[videohash]
var m = /youtube\.com\/embed(?:\/video)?\/([^\/\?]+)/.exec(src);
if (m) {
    // the thumbnail url is https://img.youtube.com/vi/[videohash]/0.jpg
    imgSrc = 'https://img.youtube.com/vi/' + m[1] + '/0.jpg';
}

If you have embeds in the old object format, it is best to replace them with the iframe one. If you can't change the content, it remains your job to create the code to give you the thumbnail image.

Dailymotion


Given the iframe src value:

// find dailymotion.com/embed/video/[videohash]
var m = /dailymotion\.com\/embed\/video\/([^\/\?]+)/.exec(src);
if (m) {
    // the thumbnail url is the same URL with `thumbnail` replacing `embed`
    imgSrc = src.replace('embed', 'thumbnail');
}

Vimeo


Vimeo doesn't have a one URL thumbnail format that I am aware of, but they have a Javascript accessible API.

// find vimeo.com/video/[videohash]
m = /vimeo\.com\/video\/([^\/\?]+)/.exec(src);
if (m) {
    // set the value to the videohash initially
    imgSrc = m[1];
    $.ajax({
        // call the API video/[videohash].json
        url : 'https://vimeo.com/api/v2/video/' + m[1] + '.json',
        method : 'GET',
        success : function (data) {
            if (data && data.length) {
                // and with the data replace the initial value with the thumbnail_medium value
                replaceUrl(data[0].thumbnail_medium, m[1]);
            }
        }
    });
}

In this example, the replaceUrl function would look for img elements to which the videohash value is attached and replace the url with the correct one, asynchronously.
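
replaceUrl is not part of the snippets above; a minimal version could look like this sketch, assuming the videohash was stored on the img elements in a data attribute:

function replaceUrl(url, videohash) {
    // find the placeholder images marked with this videohash and set the real thumbnail
    $('img[data-videohash="' + videohash + '"]').attr('src', url);
}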

TED


I am proud to announce that I was the one pestering them to make their API available over Javascript.

// find ted.com/talks/[video title].html
m = /ted\.com\/talks\/(.*)\.html/.exec(src);
if (m) {
    // associate the video title with the image element
    imgSrc = m[1];
    $.ajax({
        // call the oembed.json?url=frame_url endpoint
        url : 'https://www.ted.com/services/v1/oembed.json?url=' + encodeURIComponent(src),
        method : 'GET',
        success : function (data) {
            // set the value of the image element asynchronously
            replaceUrl(removeSchemeForHttpsEnabledSites(data.thumbnail_url), m[1]);
        }
    });
    return false;
}

video tags


Of course there is no API to get the image from an arbitrary video URL, but the standard for the video tag specifies a poster attribute that can describe the static image associated with a video.

// if the element is a video with a poster value
if ($(this).is('video[poster]')) {
    // use it
    imgSrc = $(this).attr('poster');
}

I came upon a StackOverflow question today that sounded plain silly to me. In it someone was complaining that parsing a command line like "x\" -y returns a single argument, not two. And that is ridiculous, since the operating system doesn't do that.

Just to prove myself right (because I love being right) I created a batch file that displays all command line parameters:
@ECHO OFF 
SET counter=1
:loop
IF "%1"=="" EXIT
ECHO %counter% %1
SET /A counter=counter+1
SHIFT
GOTO loop

I ran the batch file with the command line supplied in the SO question: -p -z "E:\temp.zip" -v "f:\" -r -o -s –c and the result was:
1 -p
2 -z
3 "E:\temp.zip"
4 -v
5 "f:\"
6 -r
7 -o
8 -s
9 –c
See? I was right! The command line is properly parsed. But just to be sure, I created a .NET console application:
static void Main(string[] args)
{
    for (var i = 0; i < args.Length; i++)
    {
        Console.WriteLine(i + " " + args[i]);
    }
}
and the result was... different!
0 -p
1 -z
2 E:\temp.zip
3 -v
4 f:" -r -o -s -c
Never would I have imagined that .NET console applications would do different things from system applications, especially since they are both made by Microsoft.

Investigating the problem, I found a page that explained the rules of parsing command line arguments. It was for C++, but it applied perfectly. Apparently the caret (^) is treated just as any other character (not like by the operating system that treats it as an escape character) and a quote preceded by a backslash is considered an ordinary character as well (not like the operating system that ignores it).

So, how do I get the command line the same way the operating system does? First of all I need the raw unprocessed command line. I get that using Environment.CommandLine. Then I need to split it. Of course, I can make my own code, but I want to use the same stuff as the operating system - in this case Windows, we are not discussing .NET Core or Mono here - so I will be "Pinvoking" the Windows CommandLineToArgvW function.

And, of course, it didn't work. What I did here was basically recreating the Environment.GetCommandLineArgs method, which returns the exact same result as the args array!

Now that I had already gone through the rabbit hole, I got to this very informative link about it: How Command Line Parameters Are Parsed. In it, you may find the very aptly named chapter Everyone Parses Differently which shows that there are even differences between how C and VB programs parse the command line.

Bottom line: while you may be able to get the command line from the Environment class, there is no "standard" for parsing command line arguments, instead each compiler and operating system version implements their own. Consequently, I suggest that the only way to be consistent in how you parse command line arguments is to parse them yourself.

I had this crazy idea that I could make each word on my page come alive. The word "robot" would walk around the page, "explosion" would explode, "rotate" would rotate, color words would appear in the color they represent, no matter how weirdly named, like OliveDrab, Chocolate, Crimson, DeepPink, DodgerBlue and so on, "radioactive" would pulse green, "Siderite" would appear in all its rocking glory and so on. And so I did!

The library is on GitHub and I urge you to come and bring your own ideas. Every effect that you see there is an addon that can be included or not in your setup.

Also see directly on GitHub pages.

Almost a month ago I got started being active on StackOverflow, a web site dedicated to answering computer related questions. It quickly got addictive, but the things that I found out there are many and subtle and I am happy with the experience.

The first thing you learn when you get into it is that you need to be fast. And I mean fast! Not your average typing-and-reading-and-going-back-to-fix-typos speed, but full on radioactive zombie attack typing. And without typos! If you don't, by the time you post your contribution the question would have been answered already. And that, in itself, is not bad, but when you have worked for minutes trying to get code working, looking good, being properly commented, taking care of all test cases, being effective, being efficient and you go there and you find someone else did the same thing, you feel cheated. And I know that my work is valid, too, and maybe even better than the answers already provided (otherwise I feel dumb), but to post it means I just reiterate what has been said before. In the spirit of good sportsmanship, I can only upvote the answer I feel is the best and eventually comment on what I think is missing. Now I realize that whenever I do post the answer first there are a lot of people feeling the same way I just described. Sorry about that, guys and gals!

The second thing you learn immediately after is that you need to not make mistakes. If you do, there will be people pointing them out to you immediately and you get to fix them, which is not bad in itself; however, when you write something carelessly and you get told off or, worse, downvoted, you feel stupid. I am not the smartest guy in the world, but feeling stupid I don't like. True, sometimes I kind of cheat and post the answer as fast as possible, then edit it in the time I know it takes the question poster to come check it out, but before poor schmucks like me get to give their own answers. Hey, those are the rules! I feel bad about it, but what can you do?

Sometimes you see things that are not quite right. While you were busy explaining to the guy what he was doing wrong, somebody comes and posts the solution in code and gets the points for the good answer. Technically, he answered the question; educationally, not so much. And there are a lot of people out there that ask the most silly of questions and only want quick cut-and-pastable answers. I pity them, but it's their job, somewhere in a remote software development sweat shop where they don't really want to work, but where the money is in their country. Luckily, for each question there are enough answers to get one thinking in the right direction, if that is what they meant to do.

The things you get afterwards become more and more subtle, yet more powerful as well. For example it is short term rewarding to give the answer to the question well and fast and first and to get the points for being best. But then you think it over and you realize that a silly question like that has probably been posted before. And I get best answer, get my five minutes of feeling smart for giving someone the code to add two values together, then the question gets marked as a duplicate. I learned that it is more satisfying and helpful to look first for the question before providing an answer. And not only it is the right thing to do, but then I get out of my head and see how other people solved the problem and I learn things. All the time.

The overall software development learning is also small, but steady. Soon enough you get to remember similar questions and just quickly google and mark new ones as duplicates. You don't get points for that, and I think that is a problem with StackOverflow: they should encourage this behavior more. Yet my point was that remembering similar questions makes you an expert on that field, however simple and narrow. If you go to work and you see the same problem there, the answer just comes off naturally, enforced by the confidence it is not only a good answer, but the answer voted best and improved upon by an army of passionate people.

Sometimes you work a lot to solve a complex problem, one that has been marked with a bounty and would give you in one shot maybe 30 times more points than getting best answer on a regular question. The situation is also more demanding: you have to not only do the work, but research novel ways of doing it, see how others have done it, explain why you do things, all the way. And yet, you don't get the bounty. Either it was not the best answer, or the poster doesn't even bother to assign the bounty to someone - asshole move, BTW - or maybe it is not yet a complete answer, or the poster even snubs you for giving the answer to his question, but not what he was actually looking for. This is where you get your adrenaline pumping, but also the biggest reward. And I am not talking points here anymore. You actually work because you chose to, in the direction that you chose, with no restrictions on method of research or implementation and, at the end, you get to show off your work in an arena of your true peers that not only fight you, but also help you, improve on your results, point out inconsistencies or mistakes. So you don't get the points. Who cares? Doing great work is working great for me!

There is more. You can actually contribute not by answering questions, but by reviewing other people's questions, answers, comments, editing their content (then getting that edit approved by other reviewers) and so on. The quality of my understanding increases not only technically, but I also learn to communicate better. I learn to say things in a more concise way, so that people understand it quicker and better. I edit the words of people with less understanding of English and not only improve my own skills there, but help them avoid getting labelled "people in a remote software development sweat shop" just because their spelling is awful and their name sounds like John Jack or some other made up name that tries to hide their true origins. Yes, there is a lot of racism to go around and you learn to detect it, too.

I've found some interesting things while doing reviews, mostly that when I can't give the best edit I usually prefer to leave the content as is, and unless I know the content is subpar I can't really say whether it's OK or not, so I skip a lot of things. I just hope that people more courageous than me are not messing things up more than I would have. I understood how important it is for many people to make incremental improvements on something in order for it to better reach a larger audience, and how important it is that biases of language, race, sex, education, religion or psychology be eroded to nothing in order for a question to get the deserved answer.

What else? You realize that being "top 0.58% this week" or "top 0.0008% of all time" doesn't mean a lot when most of the people on StackOverflow are questioners only, but you feel a little better. Funny thing, I've never asked a question there yet. Does it mean that I never did anything cutting edge or that given the choice between asking and working on it myself I always chose the latter?

Most importantly, I think, I've learned a few things about myself. I know myself pretty well (I mean, I've lived with the guy for 39 years!) but sometimes I need to find out how I react in certain situations. For example I am pretty sure that given the text of a question with a large bounty, I gave the most efficient, most to the point, most usable answer. I didn't get the points, instead they went to a guy that gave a one liner answer that only worked in a subset of the context of the original question, which happened to be the one the poster was looking for. I fumed, I roared, I raged against the dying of the light, but in the end I held on to the joy of having found the answer, the pleasure of learning a new way of solving the same situation and the rightness of working for a few hours in the company of like-minded people on an interesting and challenging question. I've learned that I hate when people downvote me with no explanation even more than downvoting me with a good reason, that even if I am not always paying attention to detail, I do care a lot when people point out I missed something. And I also learned that given the choice between working on writing a book and doing what I already do best, I prefer doing the comfortable thing. Yeah, I suck!

It all started with a Tweet that claimed the best method of learning anything is to help people on StackOverflow who ask questions in the field. So far I've stayed in my comfort zone: C#, ASP.NET, WPF, Javascript, some CSS, but maybe later on I will get into some stuff that I've always planned on trying or even go all in. Why learn something when you can learn everything?!

Update 29 August 2017 - Version 3.0.4: The extension has been rewritten in EcmaScript6 and tested on Chrome, Firefox and Opera.

Update 03 March 2017 - Version 2.9.3: added a function to remove marketing URLs from all created bookmarks. Enable it in the Advanced settings section. Please let me know of any particular parameters you need purged. So far it removes utm_*, wkey, wemail, _hsenc, _hsmi and hsCtaTracking.

Update 26 February 2017 - Version 2.9.1: added customizing of the URL comparison function. People can choose what makes pages different, in general or for specific URL patterns.
Update 13 June 2016: Stable version (2.5.0): added Settings page, Read Later functionality, undelete bookmarks page and much more.
Update 8 May 2016: Rewritten the extension from scratch, with unit testing.
Update 28 March 2016: The entire source code of the extension is now open sourced at GitHub.

Whenever I read my news, I open a bookmark folder containing my favorite news sites, Twitter, Facebook, etc. I then proceed to open new tabs for each link I find interesting, closing the originating links when I am done. I usually end up with 30-60 open tabs. This wreaks havoc on my memory and computer responsiveness. And it's really stupid, because I only need to read them one by one. In the end I've decided to fight my laziness and create my first browser extension to help me out.

The extension is published here: Siderite's Bookmark Explorer. What it does is check whether the current page can be found in any bookmark folder, then let you go forward or backward inside that folder.

So this is my scenario on using it:
  1. Open the sites that you want to get the links from.
  2. Open new tabs for the articles you want to read, the YouTube videos you want to watch, etc.
  3. Bookmark all tabs into a folder.
  4. Close all the tabs.
  5. Navigate to the bookmark folder and open the first link.
  6. Read the link, then press the Bookmark Navigator button and then the right arrow. (Support for the context menu and keyboard shortcuts has since been added.)
  7. If you went too far by mistake, press the left arrow to go back.

OK, let's talk about how I did it. In order to create your own Chrome browser extension you need to follow these steps:

1. Create the folder


Create a folder and put a file called manifest.json inside it. Its possible structure is pretty complex, but let's start with what I used:
{
  "manifest_version" : 2,

  "name" : "Siderite's Bookmark Explorer",
  "description" : "Gives you a nice Next button to go to the next bookmark in the folder",
  "version" : "1.0.2",

  "permissions" : [
    "tabs",
    "activeTab",
    "bookmarks",
    "contextMenus"
  ],
  "browser_action" : {
    "default_icon" : "icon.png",
    "default_popup" : "popup.html"
  },
  "background" : {
    "scripts" : ["background.js"],
    "persistent" : false
  },
  "commands" : {
    "prevBookmark" : {
      "suggested_key" : {
        "default" : "Ctrl+Shift+K"
      },
      "description" : "Navigate to previous bookmark in the folder"
    },
    "nextBookmark" : {
      "suggested_key" : {
        "default" : "Ctrl+Shift+L"
      },
      "description" : "Navigate to next bookmark in the folder"
    }
  }
}

The manifest version must be 2. You need a name, a description and a version number. Start with something small, like 0.0.1, as you will want to increase it as you make changes. The other mandatory thing is the permissions object, which tells the browser what Chrome APIs you intend to use. I've listed activeTab, because I want to know which tab is active and what its URL is; tabs, because without it I wouldn't get info like the URL when getting a tab by id; bookmarks, because I want to access the bookmarks; and contextMenus, because I want to add items to the page context menu. More on permissions here.

Now, we need to decide how the extension should behave.

If you want to click on it and get a popup that does stuff, you need to specify the browser_action object, where you set the icon you want to show in the Chrome extensions bar and/or the popup page you want to open. If you don't specify it, you get a default button that does nothing on click and shows the standard context menu on right click. You may also specify only the icon, without a popup. More on browserAction here.

If you want an extension that reacts to background events, monitors URL changes on the current page or responds to commands, you need a background page. Here I specify that the page is a Javascript file, but you can add HTML, CSS and other stuff as well. More on background here.

Obviously, the files mentioned in the manifest must be created in the same folder.

The last item in the manifest is the commands object. For each command you define an id, a keyboard shortcut (unfortunately, only digits 0..9 and letters A..Z, plus modifiers, are usable) and a description. In order to respond to commands you need a background page, as shown above.

2. Test the extension


Next, open a Chrome tab and go to chrome://extensions, check the 'Developer mode' checkbox if it is not checked already, and you get a Load unpacked extension button. Click it, point the resulting dialog to your folder and test that everything works OK.

3. Publish your extension


In order to publish your extension you need a Chrome Web Store account. Go to the Chrome Web Store Developer Dashboard and create one. You will need to pay a one-time $5 fee to open it. I know, it kind of sucks, but I paid it and was done with it.

Next, you need to Add New Item, where you will be asked for a packed extension, which is nothing but a ZIP archive of all the files in your folder.

That's it.

Let's now discuss actual implementation details.

Adding functionality to popup elements


Getting the popup page elements is easy with vanilla Javascript, because we know we are building for only one browser: Chrome! So getting elements is done via document.getElementById(id), for example, and adding functionality via elem.addEventListener(event, handler, false);

One can use the elements as objects directly to set values that are related to those elements. For example my prev/next button functionality takes the URL from the button itself and changes the location of the current tab to that value. Code executed when the popup opens sets the 'url' property on the button object.

Just remember to do it after the popup has finished loading (with document.addEventListener('DOMContentLoaded', function () { /*here*/ });).
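
Putting these together, here is a minimal sketch of what such a popup script could look like. Mind you, the element ids, the URLs and the goToBookmark function are made up for illustration; the real extension computes the URLs from the bookmark folder:
// popup.js - a minimal sketch; element ids, URLs and goToBookmark are hypothetical
document.addEventListener('DOMContentLoaded', function () {
    var prevButton = document.getElementById('prevButton');
    var nextButton = document.getElementById('nextButton');

    // set the 'url' property directly on the button objects
    // (in the real extension these come from the bookmark folder)
    prevButton.url = 'https://example.com/previous';
    nextButton.url = 'https://example.com/next';

    function goToBookmark(ev) {
        var url = ev.currentTarget.url;
        if (!url) return;
        // navigate the active tab to the URL stored on the clicked button
        chrome.tabs.query({ 'active' : true, 'lastFocusedWindow' : true }, function (tabs) {
            var tab = tabs[0];
            if (tab) chrome.tabs.update(tab.id, { url : url });
        });
    }

    prevButton.addEventListener('click', goToBookmark, false);
    nextButton.addEventListener('click', goToBookmark, false);
});

Storing the URL on the button object itself keeps the click handler trivial; a separate state object would arguably be cleaner, but it's overkill for a two-button popup.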

Getting the currently active tab


All the Chrome APIs are asynchronous, so the code is:
chrome.tabs.query({
    'active' : true,
    'lastFocusedWindow' : true
}, function (tabs) {
    var tab = tabs[0];
    if (!tab) return;
    // do something with tab
});

More on chrome.tabs here.

Changing the URL of a tab


chrome.tabs.update(tab.id, {
    url : url
});

Changing the icon in the Chrome extensions bar


if (chrome.browserAction) chrome.browserAction.setIcon({
    path : {
        '19' : 'anotherIcon.png'
    },
    tabId : tab.id
});

The icons are 19x19 PNG files. browserAction may not be available if it was not declared in the manifest.

Get bookmarks


Remember you need the bookmarks permission in order for this to work.
chrome.bookmarks.getTree(function (tree) {
    // do something with bookmarks
});

The tree is an array of items that have title and url or children. The first tree array item is the Bookmarks Bar, for example. More about bookmarks here.
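
To illustrate how one might walk this tree, here is a minimal sketch that searches for the folder containing a given URL. The findFolderContaining name is made up, and a real implementation would probably normalize URLs before comparing them:
// a minimal sketch: depth-first search for the bookmark folder that contains the given URL
function findFolderContaining(nodes, url) {
    for (var i = 0; i < nodes.length; i++) {
        var node = nodes[i];
        if (!node.children) continue; // a leaf (an actual bookmark), not a folder
        // check the folder's direct children for the URL
        for (var j = 0; j < node.children.length; j++) {
            if (node.children[j].url === url) return node;
        }
        // not here; recurse into subfolders
        var found = findFolderContaining(node.children, url);
        if (found) return found;
    }
    return null;
}

chrome.bookmarks.getTree(function (tree) {
    var folder = findFolderContaining(tree, url); // 'url' would be the active tab's URL, obtained elsewhere
    // if found, the previous/next bookmarks are the URL's neighbors in folder.children
});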

Hooking to Chrome events


chrome.tabs.onUpdated.addListener(refresh);
chrome.tabs.onCreated.addListener(refresh);
chrome.tabs.onActivated.addListener(refresh);
chrome.tabs.onActiveChanged.addListener(refresh);
chrome.contextMenus.onClicked.addListener(function (info, tab) {
    navigate(info.menuItemId, tab);
});
chrome.commands.onCommand.addListener(function (command) {
    navigate(command, null);
});

In order to get extended info on the tab object received by tabs events, you need the tabs permission. For access to the contextMenus object you need the contextMenus permission.

Warning: if you install your extension from the store and then disable it so you can test your unpacked extension, you will notice that keyboard commands do not work. It seems to be a bug in Chrome. The solution is to remove the store extension completely so that the unpacked version can hook into the keyboard shortcuts.

Creating, detecting and removing menu items


Creating a menu item is very simple:
chrome.contextMenus.create({
    "id" : "menuItemId",
    "title" : "Menu item description",
    "contexts" : ["page"] // where the menu item will be available
});
However, there is no way to 'get' a menu item, and if you try to blindly remove a menu item with .remove(id), it will throw an exception. My solution was to use an object that keeps track of when I created and destroyed the menu items, so I can safely call .remove().
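
Here is a minimal sketch of that bookkeeping idea; the menuItems object and the two functions are illustrative, not the extension's actual code:
// a minimal sketch: remember which menu items exist so .remove() is never called blindly
var menuItems = {};

function createMenuItem(id, title) {
    if (menuItems[id]) return; // already created, nothing to do
    chrome.contextMenus.create({
        "id" : id,
        "title" : title,
        "contexts" : ["page"]
    });
    menuItems[id] = true;
}

function removeMenuItem(id) {
    if (!menuItems[id]) return; // never created or already removed, so removing would throw
    chrome.contextMenus.remove(id);
    delete menuItems[id];
}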

To hook into the context menu events, use chrome.contextMenus.onClicked.addListener(function (info, tab) { });, where info contains a menuItemId property that is the same as the id used when creating the item.

Again, to access the context menu API, you need the contextMenus permission. More about context menus here.

Commands


You use commands basically to define keyboard shortcuts. You define them in your manifest and then you hook to the event with chrome.commands.onCommand.addListener(function (command) { });, where command is a string containing the key of the command.

Only modifiers, letters and digits can be used. Amazingly, you don't need any permission to use this API, but since commands are defined in the manifest, one would be superfluous, I guess.

That's it for what I wanted to discuss here. Any questions, bug reports, feature requests... use the comments in the post.

Here is a very informative presentation about the internals of await/async, which makes things a lot clearer when you are trying to understand what the hell is going on there: