White to move. Can you draw? Can you win? What would you do? Try - I know it's fucking hard, but do it anyway - to think it through.
[FEN "5kB1/3p1P2/7K/2Pp1P1P/p6p/4P3/7P/8 w - - 0 1"]
1. Kg6 a3 2. h6 a2 3. h7 a1=Q 4. h8=Q Qxh8 (4. .. Qg1+ 5. Kh5 Qxe3 (5. ..
Qg7 6. Qxg7+ Kxg7 7. Kxh4 d4 8. f6+ Kf8 9. c6) 6. Qf6 Qf3+ 7. Kh6 Qf4+ 8.
Kh7) 5. f6 h3 6. Kg5 d4 7. c6 dxc6 8. exd4 c5 (8. .. Qxg8+ 9. fxg8=Q+ Kxg8
10. Kg6 Kf8 11. f7 Ke7 12. Kg7) (8. .. Qh7 9. Bxh7 Kxf7 10. Bg8+ Kxg8 11.
Kg6 Kf8) 9. d5 c4 10. d6 c3 11. d7 c2 12. d8=Q# 1-0


Here is the video for it, from the very good channel ChessNetwork:


Enjoy!

Clippy is back, thanks to this nice project. So what else could I do but add it to my blog? Just go to the Menu and choose your "Assistant".

If the assistant is set, the messages from the blog come from the assistant. It also follows the mouse as it moves around and does various gestures depending on what one does or reads. Have fun!

I will give you this as a puzzle, so please try to figure out what White's next move is going to be. This and a lot of other cool puzzles are presented by IM Andrew Martin in the following video.

[Event "Ch URS"]
[Site "Moscow"]
[Date "1956.??.??"]
[Result "1-0"]
[White "Tigran Vartanovich Petrosian"]
[Black "Vladimir Simagin"]
[ECO "A53"]
[PlyCount "95"]

1. Nf3 Nf6 2. c4 c6 3. Nc3 d6 4. d4 g6 5. e4 Bg7 6. Be2 O-O
7. O-O Bg4 8. Be3 Nbd7 9. Nd2 Bxe2 10. Qxe2 e5 11. d5 c5
12. Rab1 Ne8 13. f3 f5 14. b4 cxb4 15. Rxb4 b6 16. a4 Bf6
17. Kh1 Bg5 18. Bg1 Nc7 19. Rbb1 Na6 20. Nb3 Ndc5 21. Nxc5
bxc5 22. exf5 gxf5 23. g4 fxg4 24. Ne4 Bf4 25. Rb7 Nc7
26. fxg4 Ne8 27. g5 Qc8 28. Re7 Qh3 29. Rf3 Qg4 30. Qd3 Bxh2
31. Rxf8+ Kxf8 32. Rxe8+ Rxe8 33. Bxh2 Re7 34. Nxd6 Qxg5
35. Qf1+ Kg8 36. Ne4 Qh4 37. Qe2 Rg7 38. d6 Qh6 39. Qd1 Qh4
40. Qe2 Qh6 41. Qf1 Rf7 42. Qg2+ Kf8 43. Ng5 Qxd6 44. Qa8+ Kg7
45. Bxe5+ Qxe5 46. Qh8+ Kxh8 47. Nxf7+ Kg7 48. Nxe5 1-0


Did you find it? I have to admit I did not. The game is Tigran Vartanovich Petrosian vs Vladimir Simagin, played in 1956.

[well, no following video, because YouTube just randomly removes content without any way of knowing what it was]

Every Heart a Doorway started well. Here is this girl who arrives at a specialized institution for "wayward children" - runaways, boys and girls who somehow don't fit into the slot their parents have prepared for them. Like many young adult stories, it places its young protagonists in a place that accepts them as they are, but is still formal, with strict boundaries. In this case the idea was that the youngsters had each found a door to another world, a world that not only is completely different from ours, but is perfect for them. They have spent some time in it, only to accidentally leave or be thrown out, with no way to return. After what is sometimes years in that world, changed to their deepest core, they find it difficult to readapt to the real world, which makes their parents send them to this kind of institution.

From here, though, Seanan McGuire just piles up the tropes, while the careful writing style and setup from the beginning of this short, 173-page story decay into a rushed and inconsistent ending. Just like Hogwarts, the house is managed with ancient British-style rules: fixed meals, absolute authority, etc. There are children and there are adult teachers. The headmistress is someone who went through the same thing and decided to help others, but outside of that she's just as certain of her point of view and as self-righteous as any of the parents who abandoned their offspring there.

And there is this... style, this way of describing the interaction of characters, which annoyed the hell out of me, without being bad as a writing style. You see, the young girl who arrives at the institution is met time after time by people who finish her sentences for her and show her that they think they know better than her, and she accepts it, just because she's the new girl. With that level of meek submission, I wonder why her parents ever wanted her gone. Her perfect world - a place of the dead where she was a servant of royalty and the skill she had learned best was to stay completely still for hours lest she upset the lord of the dead - was also about total submission, and she loved it there. Most of the people who explored other worlds were similarly bonded to dominant characters who had absolute control over them.

In the end, children start to get killed and the response of "the authorities" is to hide the bodies and instruct the youngsters to stay in groups, while the main characters suddenly can use their skills in the real world, as if it were completely normal, yet fail to use them properly to find the obvious killer. The scene where a skeleton tries to tell them who the murderer is - even though that should not have been possible in the real world - but can only point a finger, after which they give up following a single attempt at communication, is especially sour in my mind.

So yeah, I didn't like the book. I felt it was a cheap mashup of Harry Potter and 50 Shades of Grey, polluted by the author's fantasies of submission and not much in the way of plot or characters. And it's too bad, since I liked the premise immediately.

I've quickly finished the second volume in the Metro series of books, Metro 2034 by Dmitry Glukhovsky, after reading and being captivated by Metro 2033. To me it felt more geared towards the sci-fi and the lore than towards the social satire of the first book. It felt like less, too: fewer main characters, fewer stations, less character development, fewer monsters. The few people that do populate the book are so archetypal that even the author acknowledges it in the guise of a character nicknamed Homer, an old guy who searches for stories and sees his entire adventure as an odyssey to be written in a book, with his companions filling the roles of Warrior and Princess and Bard.

Don't get me wrong, I liked it a lot, but I felt that the first book acted as a caricature of current society, with its many stations that each adhered to some philosophy or another, while the second veered quite a large distance from that and went purely for the catacomb sci-fi thriller. There is still enough philosophical discourse in Homer's musings and it is interesting to see the post-apocalyptic world through the eyes of someone who lived before, as well as through fresh eyes: those of a girl who only knew one station her whole life. But the book was tamer, and I am not the only one to think that.

Now, I scoured the net for some Metro 2035 love, but to no avail. I found a Polish audio book, but nothing else. It is ridiculous how much one has to wait to get a translation of a book written in Russian or how difficult it is to get even the original Russian text.

I've heard of Metro 2033 from a movie link that said Hollywood wants to adapt the book as a feature film, which in fact is really ridiculous: not only is the book famous, it has several sequels, it spawned a video game franchise and many authors have joined in to write inside a shared Metro universe. So how did I not know about it? The reason, of course, is that it is written by a Russian: Дмитрий Глуховский. The book was published in 2005 in Russia, but only in 2010 was it published in the US, similar to how Japanese movies or other culturally different art trickles into the predominantly English-speaking culture. Meanwhile, the concept has grown so much that there are more than thirty books written in the universe. It's like falling down a rabbit hole. I used to love Russian sci-fi when I was a child and I had not realized how much I missed it until I started reading this book: simple people yet complex characters, with deep philosophical concerns, problems that are rarely solved through sheer force, depressing settings where everything falls apart yet makes people strong and interesting and puts them in the foreground.

Metro 2033 describes a parallel universe where World War III has happened and the few Russian survivors have hidden in the large network of the Moscow subway system. Radiation, biological weapons and other factors have turned the surface into a toxic place, filled with monstrous mutants and terrible creatures, while underground each metro station has developed its own unique culture and mythology. The beauty of the book is that it reads more like a satire of the history of the world and less like a post-apocalyptic story. There are religious fanatics, fascists, communists, people who have rejected technology and others who value knowledge and books more than anything else, traders and mystics and people with strange powers. I felt that the author himself didn't really mean to create a credible after-war world, as he didn't linger on where the power comes from or how the food is grown or other such technical details, instead focusing on the human spirit and the many myths created to explain the state of the world, radiographing this pathetic microcosm made of barely surviving people.

Somewhere in the middle of the book I got a little bored. I have to admit that the long paragraphs and the many details of some scenes make for difficult reading if you are not in the mood. I loved the book while reading it at night, but had trouble focusing when trying to read on the street or while walking the dog. Yet by the end I am really glad I read the book. I can't imagine reading anything next other than the sequels and I lament the very real possibility that I might delve into the other dozens of books and short stories written by other authors in the same world.

In order to explore the world of Metro 2033, you may start at the Metro2033 web site, of course, where there are beautiful 360 photos of the current Moscow subway stations. Also, you may try the game, which feels a bit dated from the promotional videos, but apparently has a good plot. Of course, exploring the book universe sounds like a better idea, yet most of the spinoffs have not yet been translated into English. Perhaps this is a good opportunity to start reading in Russian...

Bookmark Explorer, a Chrome browser extension that allows you to navigate inside bookmark folders on the same page, saving you from a deluge of browser tabs, has now reached version 2.4.0. I consider it stable, as I have no new features planned for it and the only changes I envision in the near future are switching to ECMAScript 6 and updating the unit tests (in other words, nothing that concerns the user).

Let me remind you of its features:

  • lets you go to the previous/next page in a bookmark folder, allowing sequential reading of selected news or research items
  • has context menu, popup buttons and keyboard shortcut support
  • shows a page with all the items in the current bookmark folder, allowing selection, deletion, importing/exporting of simple URL lists
  • shows a page with all the bookmarks that were deleted, allowing restoring them, clearing them, etc.
  • keyboard support for both pages
  • notifies you if the current page has been bookmarked multiple times
  • no communication with the Internet, it works just as well offline - assuming the links would work offline, like local files
  • absolutely free


Install it from Google's Chrome Web store.

I have been using Disqus as a commenter for some time and yes, it is a bit bloated and yes, it writes all kinds of errors in the console, but it has the big advantage of keeping all your comments and replies in a single place. When you go to a Disqus-enabled web site you see a warning that you have unread messages. So from now on, my blog is using Disqus as its comment engine. While doing this, I've also updated some layout and code, so let me know if anything is wrong.

So there it is. Tell me what you think! Preferably in the comment section.

Liu Cixin is Chinese, which makes reading his work not only a pleasant science fiction pastime, but also an intercultural experience. That is because Chinese people are weeeeird :). Just kidding, but it does make for an interesting experience. Even with good English translation, the mindset behind the writing is clearly different from the Western writers I usually read.

I read Devourer first, a short story, to see if I enjoy the writing style, then I read the first two books in the Remembrance of Earth's Past series: The Three-Body Problem and The Dark Forest. It is clear that the author likes to think big; some have even compared him with Arthur C. Clarke. In both stories Earth comes into contact with alien species which are vastly superior and devastatingly indifferent to the fate of the human race. While Devourer really reads like a Chinese story, you know, with the emperor and the adviser and so on, it retains the same fear of irrelevance as the huge books in the trilogy.

To me it felt like The Three-Body Problem was more accessible, while The Dark Forest has a change of pace and style, but that may very well be because of the translator. It was easier to imagine scenes from Asian movies - with people shouting hysterically at each other to prove their loyalty to one group or the other and "generals" and all that jazz - while reading the second book than the first. Even so, throughout the reading I had these weird feelings of wrongness when things happened in a certain way because the protagonists were Chinese. Yet this was not relevant to the story or the enjoyment of the books. Also, many Chinese cultural references were both instructive and eye opening. As an example, The Three-Body Problem starts in the middle of the Chinese Cultural Revolution, which is just as appalling as our Nazi era, if not more so.

I cannot talk about the stories without spoiling them too much, so I won't. Enough to say that they are all hard sci-fi, even if some of the elements there are not so scientifically accurate. Clearly for Liu Cixin the story took precedence over high technology, which is a good thing.

The third book in the trilogy, Death's End, will allegedly appear in September 2016, from Tor Books. However, I have mixed feelings about it. The story almost ended with the second book. Do I really care what happens next? Will it be relevant, or did the typical three-book publishing deal force the author's hand? There are some questions that remain unanswered and I would be glad to see a clarification in this upcoming book, but will they be enough to flesh out a great story?

...is stupid.

For a very long time the only commonly used expression of software was the desktop application. Whether it was a console Linux thing or a full blown Windows application, it was something that you opened to get things done. In case you wanted to do several things, you either opted for a more complex application or used several of them, usually transferring partial work via the file system, sometimes in more obscure ways. For example, to publish a photo album you would take all the pictures, process them with image processing software, then save them and load them with a photo album application. For all intents and purposes, the applications are black boxes to each other; they only connect with inputs and outputs and need not know what goes on inside one another.

Enter the web and its novel concept of URLs, Uniform Resource Locators. In theory, everything on the web can be accessible from the outside. You want to link to a page, you have its URL to add as an anchor in your page and boom! A web site references specific resources from another. The development paradigm for these new things was completely different from big monolithic applications. Sites are called sites because they should be a place for resources to sit in; they are places, they have no other role. The resources, on the other hand, can be processed and handled by specific applications like browsers. If a browser is implemented in all operating systems in the same way, then the resources get accessed the same way, making the operating system - the most important part of one's software platform - meaningless. This gets us to this day and age when an OS is there to restrict what you can do, rather than provide you with features. But that's another story altogether.

With increased computing power, storage space and network speeds, and with the introduction and refining of Javascript - now considered a top contender for the most important programming language ever - we are now able to embed all kinds of crazy features in web pages, so much so that we have reached a time when writing a single page application is not only possible, but the norm. They had to add new functionality to browsers in order to let the page tweak the browser address without reloading the page, and that is a big deal! And a really dumb one. Let me explain why.
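For context, here is a minimal sketch of that browser functionality - the HTML5 History API - with a made-up route name and state object for illustration:

// change the address bar to '/some-route' without reloading the page
history.pushState({ page: 'some-route' }, '', '/some-route');

// when the user presses Back/Forward, the application - not the browser -
// must decide what content to display for the restored state
window.addEventListener('popstate', function (event) {
    console.log('application must now render:', event.state);
});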

The original concept was that the web would own the underlying mechanism of resource location. The new concept forces the developer to define what a resource locator means. I can pretty much make my own natural language processing system and have URLs that look like: https://siderite.com/give me that post ranting about the single page apps. And yes, the concept is not new, but the problem is that the implementation is owned by me. I can change it at any time and, since it all started from a desire to implement the newest fashion, it is destined to change. The result is chaos, and that is presuming the software developer thought of all contingencies and the URL system is adequate to link to resources from the page... which is never true. If the developer is responsible for interpreting what a URL means, then it is hardly "uniform".

Another thing that single page apps lead to is web site bloating. Not only do you have to load the stuff that now is on every popular website, like large pointless images and big fonts and large empty spaces, but also the underlying mechanism of the web app, which tells us where we are, what we can do, what gets loaded etc. And that's extra baggage that no one asked for. A single page app is hard to parse by a machine - and I don't care about SEO here, it's all about the way information is accessible.

My contention is that we are going backwards. We got to the point where connectivity is more important than functionality, where being on the web is more important than having complex well done features in a desktop app. It forced us to open up everything: resources, communication, protocols, even the development process and the code. And now we are going back to the "one app to rule them all" concept. And I do understand the attraction. How many times did I dream of adding mini games on my blog or making a 3D interface and a circular corner menu and so on. These things are cool! But they are only useful in the context of an existing web page that has value without them. Go to single page websites and try to open them with Javascript disabled. Google has a nice search page that works even then, and you know what? The same page with Javascript is six times larger than the one without - and this without large differences in display. Yes, I know that this blog has a lot of stuff loaded with Javascript and that this page probably is much smaller without it, but the point is that the blog is still usable. For more on this you should take the time to read The Web Obesity Crisis, which is not only terribly true, but immensely funny.

And I also have to say I understand why some sites need to be single page applications, and that is because they are more application than web site. The functionality trumps the content. You can't have an online image processing app work without Javascript, that's insane. You don't need to reference the resource found in a color panel inside the photo editor, you don't need to link to the image used in the color picker and so on. But web sites like Flipboard, for example, that display a blank page when seen without Javascript, are supposed to be news aggregators. You go there to read stuff! It is true we can now decide how much of our page is a site and how much an application, but that doesn't mean we should construct abominations that are neither!

A while ago I wrote another ranty rant about how taking over another intuitively common web mechanism - scrolling - helps no one. These two patterns go hand in hand and are slowly polluting the Internet. Last week Ars Technica announced a change in their design and at the same time implemented it. They removed the way news was read by many users - sequentially, one item after the other, by scrolling down and clicking on the one you liked - and resorted to a magazine format where news items sat side by side on a big white page with large design placeholders that looked cool yet did nothing but occupy space and display the number of comments for each. Content took a backseat to commentary. I am glad to report that two days later they reverted their decision, in view of the many negative comments.

I have nothing but respect for web designers, as I usually do for people who do things I am incapable of, however their role should always be to support the purpose of the site. Once things look cool just for the sake of it, you get Apple: a short lived bloom of user friendliness, followed by a vomitous explosion of marketing and pricing, leading to the immediate creation of cheaper clones. Copying a design because you think it is great is normal, copying a bunch of designs because you have no idea what your web page is supposed to do is just direct proof you are clueless, and copying a design because everyone else is doing it is just blindly following clueless people.

My advice, as misguided as it could be, is forget about responsiveness and finger sized checkboxes, big images, crisp design and bootstrapped pages and all that crap. Just stop! And think! What are you trying to achieve? And then do it, as a web site, with pages, links and all that old fashioned logic. And if you still need cool design, add it after.

I've written another Chrome extension that I consider in beta, but so far it works. Really ugly makeshift code, but for now I am gathering data about the way I will use it, and then I am going to refactor it, just as I did with Bookmark Explorer. You may find the code at GitHub and the extension at the Chrome webstore.

This is how it works: every time you access anything with the browser, the extension will remember the IPs for any given host. It will hold a list of the IPs, in reverse order (last one first), that you can just copy and paste into your hosts file. The hosts file is found at C:\Windows\System32\drivers\etc\hosts on Windows and at /etc/hosts on Linux. Once you add a line in the format "IP host" to it, the computer will resolve the host with the provided IP. Every time there is a problem with DNS resolution, the extension will add the latest known IP to the hosts text. Since the extension doesn't have access to your hard drive, you need to edit the file yourself. The icon of DNS Resolver will show the number of hosts that it wants to resolve locally, or nothing if everything is OK.
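As an illustration, a couple of such lines in a hosts file would look like this (the IPs and host names here are made up):

# lines in the "IP host" format
93.184.216.34    example.com
151.101.1.69     some.news.site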

The extension allows manual selection of an IP for a host and forced inclusion in or exclusion from the list of IP/host lines. Data can be erased (all at once, for now) as well. The extension does not communicate with the outside, but it does store a list of all domains you visit, so it is a slight privacy risk - although if someone has access to the local store of a browser extension, it's already too late. There is also the possibility of having the extension replace the host with the IP directly in the browser requests, but this only works for the browser and fails in cases where the host name is important, as when multiple servers use the same IP, so I don't recommend using it.

There are two scenarios for which this extension is very useful:
  • The DNS server fails for some reason or gives you a wrong IP
  • Someone removed the IP address from DNS servers or replaced it with one of their own, as in the case of government censorship

I have some ideas for the future:
  • Sharing of working IP/host pairs - have to think of privacy before that, though
  • Installing a local DNS server that can communicate locally with the extension, so no more hosts editing - have to research and create one
  • Upvoting/Downvoting/flagging shared pairs - with all the horrible head-ache this comes with

As usual, let me know what you think here, or open issues on GitHub.

Neal Stephenson is known for writing speculative science fiction with a focus on technological advancements, and Seveneves is all about space. He thought about the idea in 2006, while he was an adviser with Blue Origin, and let it fester for years while getting feedback from all kinds of people knowledgeable about and invested in space technology, like Planetary Resources, so at least the science is good. Personally, I believe he gathered so much material that he just had to write the book, regardless of whether he had a story to tell or not. Never have I read a book that is so obviously written by an engineer, with long descriptions of how space stuff works, what a culture is like or how people solve problems. It's all about the how, never about the why or the who. As such, I consider it a failed book, because it could have been so much better as a well-thought-out, well-edited trilogy of books, with compelling characters, rather than a humongous enumeration of space technologies.

The story is split into three parts, mostly unconnected: the cataclysm that dooms Earth in two years and the solution found by the people of the planet; the cataclysm itself and what people do afterwards; and the aftermath, 5000 years into the future.

What happens is that the Moon suddenly gets splintered apart by some unknown agent, possibly a miniature black hole, which just breaks it into seven pieces (it already starts with the number 7) that are destined to further break up in collisions with each other and cause a catastrophic meteor bombardment of Earth, heating its atmosphere and boiling and smashing away all life. People decide to invest everything into expanding the International Space Station, having a few thousand people escape certain death by going into space. Everything is done very orderly and the book focuses exclusively on what people do to reach the stars, with today's technology. Nothing about what 7 billion people (see? I can use seven all over the place, too) feel or do when faced with certain doom. The book moves quickly over the inevitable deaths and accidents caused by rushing into something that is not really researched, proceeding towards a part of the story where almost everything just works, as if by magic. The devastating problems that people would face in space are solved quickly by engineering solutions, ignoring the unsolvable ones.

So far the book does have a sort of a main character, a woman working with robots, sent to the ISS as part of a partnership with an asteroid mining company. Before we know enough about her, the story shifts into its second part, which splits attention between several important characters. At this point it is almost impossible to empathize with anyone, a problem compounded by using personalities "slightly towards the Asperger side of the spectrum", as the author points out several times.

To continue explaining the story is pointless and would spoil it; enough said that even though I am an engineer, always complaining that there is not enough science in science fiction, I got really bored with reading this book. Long, long (mobile) pages of two or three paragraphs each, containing no dialog, explaining things that had nothing to do with the story, puny and underfed as it was. The only thing that made me react emotionally was the villain of the second part, who was written well enough to make me hate him. To add insult to injury, after fighting through the 880 (normal) pages, the third part just abruptly ends, as if the author was simply tired of writing, now that the tech was all explained away and only some human story was left to tell.

Bottom line: As someone interested in the technology necessary to colonize the Solar System, this book should have been gold. Instead, I caught myself skimming over the long descriptions, just wanting the book to end. Too bad, since the subject could have easily been split into three or even several books, each with their own story to tell in a well structured fictional universe. Also, while the author swears he was "peer reviewed" on the concepts, he also admits making huge leaps of faith over what would work or not.

I have started writing Chrome extensions, mainly to address issues that my browser is not solving, like opening dozens of tabs and, lately, DNS errors/blocking and ad blocking. My code writing process is chaotic at first, just writing stuff and changing it until things work, until I get to something I feel is stable. Then I feel the need to refactor the code, organizing and cleaning it and, why not, unit testing it. This opens the question of how to do that in Javascript and, even if I have known once, I needed to refresh my understanding with new work. Without further ado: QUnit, a Javascript testing framework. Note that all code here will be in ES5 or earlier, mainly because I have not studied ES6 and I want this to work with most Javascript engines.

QUnit


QUnit is something that has withstood the test of time. It was first launched in 2008, but even now it is easy to use, with a simple design and clear documentation. Don't worry, you can use it even without jQuery. In order to use it, create an HTML page that links to the Javascript and CSS files from QUnit, then create your own Javascript file containing the tests and add it to the page together with whatever you are testing.

Already this raises the issue of having Javascript code that can be safely embedded in a random web page, so consider how you may encapsulate the code. Other testing frameworks could run the code in a headless Javascript engine, so if you want to be as generic as possible, also remove all dependencies on an existing web page. The oldest and simplest way of doing this is to use the fact that an orphan function in Javascript has its own scope and always has this pointing to the global object - in case of a web page, this would be window. So instead of something like:
i = 0;
while (i < +(document.getElementById('inpNumber').value)) {
    i++;
    // do something
}
do something like this:
(function () {

    var global = this;

    var i = 0;
    while (i < +(global.document.getElementById('inpNumber').value)) {
        i++;
        // do something
    }

})();

It's a silly example, but it does several things:
  • It keeps variable i in the scope of the anonymous function, thus keeping it from interfering with other code on the page
  • It clearly defines a global object, which in case of a web page is window, but may be something else
  • It uses global to access any out of scope values

In this particular case, there is still a dependency on the default global object, but if instead one would pass the object somehow, it could be abstracted and the only change to the code would be the part where global is defined and acquired.
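For illustration, here is a sketch of that abstraction: the global object is passed in as a parameter, so a test could supply a mock instead (this is not code from the extension, just the same silly example reworked):

(function (global) {

    var i = 0;
    while (i < +(global.document.getElementById('inpNumber').value)) {
        i++;
        // do something
    }

})(this); // in a test, pass a mock object here instead of 'this'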

Let's start with QUnit. Here is a Hello World kind of thing:
QUnit.test("Hello World", function (assert) {
    assert.equal(1 + 1, 2, "One plus one is two");
});
We put it in 'tests.js' and include it into a web page that looks like this:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>Unit Tests</title>
    <link rel="stylesheet" href="https://code.jquery.com/qunit/qunit-1.23.1.css">
</head>
<body>
    <script src="https://code.jquery.com/qunit/qunit-1.23.1.js"></script>
    <div id="qunit"></div>
    <div id="qunit-fixture"></div>

    <script src="tests.js"></script>
</body>
</html>

The result:


As you can see, we declare a test with the static QUnit.test function, which receives a name and a function as parameters. Within the function, the assert object will do everything we need, mainly checking to see if a result conforms to an expected value or if a block throws an exception. I will not go through a detailed explanation of simple uses like that. If you are interested, peruse the QUnit site for tutorials.

Modules


What I want to talk about are slightly more advanced scenarios. The first thing I want to address is the concept of modules. If we declare all the tests, regardless of how many scripts they are spread across, the test page will just list them one after another, in a huge blob. In order to somehow separate them into regions, we need a module. Here is another example:
QUnit.module("Addition");
QUnit.test("One plus one", function (assert) {
    assert.equal(1 + 1, 2, "One plus one is two");
});

QUnit.module("Multiplication");
QUnit.test("Two by two", function (assert) {
    assert.equal(2 * 2, 4, "Two by two is four");
});
resulting in:


It may look the same, but a Module: dropdown appeared, allowing one to choose which module to test or visualize. The names of the tests also include the module name. Unfortunately, the resulting HTML doesn't have containers for modules, something one could collapse or expand at will. That is too bad, but it can be easily fixed - that is not the scope of this post, though. A good strategy is just to put all related tests in the same Javascript file and use QUnit.module as the first line.

Asynchronicity


Another interesting issue is asynchronous testing. If we want to test functions that return asynchronously, like setTimeout or ajax calls or Promises, then we need to use assert.async. Here is an example:
QUnit.config.testTimeout = 1000;

QUnit.module("Asynchronous tests");
QUnit.test("Called after 100 milliseconds", function (assert) {
    var a = assert.async();
    setTimeout(function () {
        assert.ok(true, "Assertion was called from setTimeout");
        a();
    }, 100);
});

First of all, we needed to declare that we expect a result asynchronously, therefore we call assert.async() and hold a reference to the result. The result is actually a function. After we make all the assertions on the result, we call that function in order to finish the test. I've added a line before the test, though, which sets the testTimeout configuration value. Without it, an async test that fails would freeze the test suite indefinitely. You can easily test this by setting testTimeout to less than the setTimeout duration.
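For example, taking the test above, lowering the timeout below the setTimeout duration is enough to see the suite report a timeout failure (just an illustration of the failure mode, not something to keep in a real configuration):

// the setTimeout in the test fires after 100ms, but QUnit gives up after 50ms
QUnit.config.testTimeout = 50;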

Asynchronous tests raise several questions, though. The example above is all nice and easy, but what about cases when the test is more complex, with multiple asynchronous code blocks that follow each other, like a Promise chain? What if the assertions themselves need to be called asynchronously, like when checking for the outcome of a click handler? If you run jQuery(selector).click(), for example, an immediately following assertion could fail, since the click handler is executed in another context. One can imagine code like this, but look how ugly it is:
QUnit.test("Called after 500 milliseconds", function (assert) {
    var a = assert.async();
    setTimeout(function () {
        assert.ok(true, "First setTimeout");
        setTimeout(function () {
            assert.ok(true, "Second setTimeout");
            setTimeout(function () {
                assert.ok(true, "Third setTimeout");
                setTimeout(function () {
                    assert.ok(true, "Fourth setTimeout");
                    a();
                }, 100);
            }, 100);
        }, 100);
    }, 100);
    setTimeout(function () {
        assert.notOk(true, "Test timed out");
    }, 500);
});

In order to solve at least this arrow antipattern I've created a stringFunctions function that looks like this:
function stringFunctions() {
    if (!arguments.length)
        throw 'needs functions as parameters';
    var f = function () {};
    var args = arguments;
    for (var i = args.length - 1; i >= 0; i--) {
        (function () {
            var x = i;
            var func = args[x];
            if (typeof(func) != 'function')
                throw 'parameter ' + x + ' is not a function';
            var prev = f;
            f = function () {
                setTimeout(function () {
                    func();
                    prev();
                }, 100);
            };
        })();
    }
    f();
}
which makes the previous code look like this:
QUnit.test("Called after 500 milliseconds", function (assert) {
    var a = assert.async();
    stringFunctions(function () {
        assert.ok(true, "First setTimeout");
    }, function () {
        assert.ok(true, "Second setTimeout");
    }, function () {
        assert.ok(true, "Third setTimeout");
    }, function () {
        assert.ok(true, "Fourth setTimeout");
    }, a);
    setTimeout(function () {
        assert.notOk(true, "Test timed out");
    }, 500);
});

Of course, this is a specific case, but at least in a very common scenario - the one where the results of event handlers are checked - stringFunctions with 1ms instead of 100ms is very useful. Click on a button, see if a checkbox is available, check the checkbox, see if the value in a span has changed, stuff like that - see the sketch below.
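Here is what that scenario might look like (the element IDs and the expected behaviour are made up for illustration, and stringFunctions is assumed to use a 1ms delay):

QUnit.test("Checking the box clears the span", function (assert) {
    var a = assert.async();
    stringFunctions(function () {
        // step 1: click the button
        $('#btnSomething').click();
    }, function () {
        // step 2: the checkbox should now be enabled; check it
        assert.ok(!$('#chkSomething').prop('disabled'), "checkbox is available");
        $('#chkSomething').click();
    }, function () {
        // step 3: the span should have been cleared by the checkbox handler
        assert.equal($('#spnValue').text(), '', "span value was cleared");
    }, a);
});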

Testing average jQuery web code


Another thing I want to address is how to test Javascript that is intended as a web page companion script, with jQuery manipulations of the DOM and event listeners and all that. Ideally, all this would be stored in some sort of object that is instantiated with parameters that specify the test context, the various mocks and so on and so on. Since it is not an ideal world, I want to show you a way to test a typical such script, one that executes a function at DOMReady and does everything in it. Here is an example:
$(function () {

    $('#btnSomething').click(function () {
        $('#divSomethingElse').empty();
    });

});
The code assumes $ is jQuery, then it adds a click handler to a button in order to empty another element. Think about how this should be tested:
  1. Declare a QUnit test
  2. In it, execute the script
  3. Then make some assertions

I was a bit lazy and changed the scripts themselves to check if a testContext exists and use that one. Something like this:
(function ($) {

    var global = this;
    var jQueryContext = global.testContext && global.testContext.document ? global.testContext.document : global.document;
    var chrome = global.testContext && global.testContext.chrome ? global.testContext.chrome : global.chrome;
    // etc.

    $(function () {

        $('#btnSomething', jQueryContext).click(function () {
            $('#divSomethingElse', jQueryContext).empty();
        });

    });

})(jQuery);
which has certain advantages. First, it makes you aware of all the uses of jQuery in the code, yet it doesn't force you to declare everything in an object and refactor everything. Funny how you need to refactor the code in order to write unit tests in order to be able to refactor the code; automated testing gets like that. It also solves some problems with testing Javascript offline - directly from the file system - because all you need to do now is define the testContext, then load the script by creating a script tag in the testing page and setting the src attribute:
var script = document.createElement('script');
script.onload = function () {
    // your assertions here
};
script.src = "http://whatever.com/the/script.js";
document.getElementsByTagName('head')[0].appendChild(script);
In this case, even if you are running the page from the filesystem, the script will be loaded and executed correctly. Another, more elegant solution would load the script as a string and execute it inside a closure where jQuery was replaced with something that uses a mock document by default. This means you don't have to change your code at all, but you need to be able to read the script as text, which is impossible on the filesystem. So some really messy script tag creation is needed:
QUnit.test("jQuery script Tests", function (assert) {

    var global = (function () {
        return this;
    })();

    function setIsolatedJquery() {
        global.originalJquery = jQuery.noConflict(true);
        var tc = global.testContext.document;
        global.jQuery = global.$ = function (selectorOrHtmlOrFunction, context) {
            if (typeof(selectorOrHtmlOrFunction) == 'function')
                return global.originalJquery.apply(this, arguments);
            var newContext;
            if (!context) {
                newContext = tc; // if not specified, use the testContext
            } else {
                if (typeof(context) == 'string') {
                    newContext = global.originalJquery(context, tc); // if context is a selector, use it inside the testContext
                } else {
                    newContext = context; // use the one provided
                }
            }
            return global.originalJquery(selectorOrHtmlOrFunction, newContext);
        };
    }

    function restoreJquery() {
        global.jQuery = global.$ = global.originalJquery;
        delete global.originalJquery;
    }

    var a = assert.async();

    global.testContext = {
        document: jQuery('<div><button id="btnSomething">Something</button><div id="divSomethingElse"><span>Content</span></div></div>')
    };
    setIsolatedJquery();

    var script = document.createElement('script');
    script.onload = function () {

        assert.notEqual($('#divSomethingElse').children().length, 0, "SomethingElse has children");
        $('#btnSomething').click();
        setTimeout(function () {
            assert.equal($('#divSomethingElse').children().length, 0, "clicking Something clears SomethingElse");
            restoreJquery();
            a();
        }, 1);
    };
    script.src = "sample.js";
    document.getElementsByTagName('head')[0].appendChild(script);

});

There you have it: an asynchronous test that replaces jQuery with something with an isolated context, loads a script dynamically, performs a click in the isolated context, checks the results. Notice the generic way in which to get the value of the global object in Javascript.

Bottom-Up or Top-Bottom approach


A last point I want to make is more theoretical. After some consultation with a colleague, I've finally cleared up some confusion I had about the direction of automated tests. You see, once you have the code - or even before, in TDD - you know what every small piece of code does and also the final requirements of the product. Where should you start in order to create automated tests?

One solution is to start from the bottom and check that your methods call everything they need to call in the mocked dependencies. If your method calls chrome.tabs.create and you have mocked chrome, your mock tabs.create method should count how many times it is called and your assertion should check that the count is 1. It has the advantage of being straightforward, but it also tests details that might be irrelevant. One might refactor the method to call some other API and then the test would fail, as it tested the actual implementation details, not a result. Of course, methods that return the same result for the same input values - sometimes called pure - are perfect for this type of testing.
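A minimal sketch of such a bottom-up test, with the chrome object mocked by hand (openTab is a hypothetical method under test that receives its dependencies as parameters):

QUnit.test("openTab calls chrome.tabs.create once", function (assert) {
    var createCallCount = 0;
    // hand-rolled mock: counts the calls instead of creating real tabs
    var chromeMock = {
        tabs: {
            create: function (options) { createCallCount++; }
        }
    };
    // openTab is a made-up method, injected with the mock
    openTab(chromeMock, 'https://siderite.com');
    assert.equal(createCallCount, 1, "chrome.tabs.create was called exactly once");
});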

Another solution is to start from the requirements and test that the entire codebase does what it is supposed to do. This makes more sense, but the number of possible test cases increases exponentially and it is difficult to spot where the problem lies if a test fails. This would be called acceptance testing.

Well, the answer is: both! It all depends on your budget, of course, as you need to take into consideration not only the writing of the tests, but their maintenance as well. Automated acceptance tests would not need to change a lot, only when requirements change, while unit tests would need to be changed whenever the implementation is altered or new code is added.

Conclusion


I am not an expert on unit testing, so what I have written here describes my own experiments. Please let me know if you have anything to add or to comment. My personal opinion on the matter is that testing provides a measure of confidence that minimizes the stress of introducing changes or refactoring code. It also forces people to think in terms of "how will I test this?" while writing code, which I think is great from the viewpoint of separation of concerns and code modularity. On the other hand it adds a relatively large resource drain, both in writing and (especially) in maintaining the tests. There is also a circular kind of issue where someone needs to test the tests. Psychologically, I also believe automated testing only works for certain people. Chaotic asses like myself like to experiment a lot, which makes testing a drag. I don't even know what I want to achieve and someone tries to push testing down my throat. Later on, though, tests would be welcome, if only my manager allows the time for it. So it is, as always, a matter of logistics.

More info about unit testing with QUnit on their page.

To be frank, I never intended this to last very long. I have been (proudly, like a true hipster) avoiding creating a Facebook account, and the Twitter account I only opened because I wanted to explore it as a machine-to-machine messaging system and never looked back after that idea bombed. So this year I went on Facebook and reactivated my interest in Twitter, now with a more social focus. The reason doesn't really matter, but I'll share it anyway: I had an asshole colleague who refused to talk to me on anything other than Facebook Messenger. Now we barely talk to each other anyway. So, what have I learned from this experience? Before I answer that question, I want to tell you how I thought it would go when I went in.

What I thought going in


I have been keeping this blog since 2007, carefully sharing whatever I thought important, especially since I am a very forgetful person and I needed a place to store valuable tidbits of information. So when Facebook blew up I merely scoffed. Let other people use some sort of weird platform to share what they think; let them post cat videos and share whenever they go to the toilet: I am above this. I carefully study and solve the problem, read the book, research new stuff, link to everything in the information that I think relevant. I have my own template, I control the code on my blog, people can chat with me and others directly, comment on whatever I have done. I can also edit a post and update it with things I learn as I evolve. My posts have permanent links that look like their title, suckers! I really don't need Facebook at all.

And Twitter. Phaw! 140 characters? What is this, SMSes online? If you really have something to say, say it in bulk. It's a completely useless platform. I might take a second look at it and use it as a chat system for the blog, at most (I actually did that for a while, a long time ago). I am not social, I am antisocial, suckers! I really don't need Twitter at all.

There you go. Superior as fuck, I entered the social media having a lot of smug preconceptions that I feel ashamed for. I apologize.

Facebook


So what did I learn from months on Facebook? Nothing. Hah! To be honest, I didn't disrespect Facebook that much to begin with. I had high hopes that once I connected with all my friends I would share in their interesting experiences and projects, we would communicate and collaborate better, we would organize more parties or get-togethers, meet up more frequently if we were in the same area. Be interesting, passionate; you know... social. Instead I got cute animal videos, big pointless images with texts plastered all over them - as if this would give more gravitas to bland clichés - pictures of people on vacation or at parties - as if I cared about their mugs more than the location - political opinion bile, sexist jokes, driving videos, random philosophical musings, and so on and so on. Oh, I learned a lot from Facebook, most of it being how many stupid and pointless things people do. Hell, I am probably friends with people I don't really know for a good reason, not just because I am an asshole who only thinks about himself!

Not everything is bad, clearly. The messenger is the only widespread method of online communication outside email. I know when people's birthdays are (and what day it is currently). People sometimes post their achievements, link to their blog posts, share some interesting information that they either stumbled upon on the Internet (most of the time) or thought about or did themselves, there are events that I learn about from other people going there, like concerts and software meetings and so on. Oh, and the Unfollow button is a gem, however cowardly it is! However, I am no longer "reading my Facebook", I am scrolling at warp speed. I've developed internal filters for spammy bullshit and most of the time, after going through three days worth of stuff, I have only five or six links that I opened for later, one of them being probably a music video on YouTube. It still takes a huge amount of time sifting through all the shit.

Twitter


What about Twitter? Huge fucking surprise there! Forced to distill the information they share, people on Twitter either share links to relevant content or small bits of their actual thoughts, real time, while they are thinking them. There is no comfortable mechanism for long conversations, group conferences or a complicated Like-like mechanism. You do have a button to like or retweet something, but it's more of a nod towards the author that what they shared is good, not some cog in an algorithm to tell someone what YOU need. More work stuff is being shared, books that have been read and enjoyed, real time reactions to TV or cinema shows, bits of relevant code, all kinds of stuff. In fact, very few people who spam Facebook are even active on Twitter. Twitter is less about a person than about the moment; it's more Zen, if you want to go that way. You are not friends with folks, you just appreciate what they share. It's less personal, yet more revealing - a side effect that I had not expected. And when you reply to a tweet, you are aware of how public it is and how disassociated it is from the post you reply to. There is no ego trip in posting the most sarcastic comment like on Facebook.

Not everything is rosy there, either. They have a similar Facebooky thing that shows the title and the image/video of a shared link so you can open them directly there. So if you want to emulate the same type of behaviour on Twitter, you can, by endlessly posting links to stupid stuff and following other people who do that. You can Follow whoever you want, and that means that if you overdo it, you end up with a deluge of posts that you have no chance of getting out of. I still haven't gotten used to the hashtag thingie. I only follow people and I only use the default Twitter website, so I am not an "advanced user", but I can tell you that after three days' worth of Twitter posts that I have missed, I open around 50 links that I intend to follow up on.

So?


Some of the mental filters developed apply to both situations. The same funny ha-ha video that spams the Facebook site can be ignored just as easily on the Twitter page. Big-font, misspelled or untranslatable text smacked on top of a meaningless picture is ignored by tradition, since it looks like a big ad, and I already have a trained eye for those from years of browsing the web before ad blockers were invented.

Some of the opinion pieces are really good and I wouldn't have had the opportunity to read them if all I was looking for was news sites and some RSS feed, yet because of the time it takes to find them, I get less time in which I can pay attention to them. I catch myself feeling annoyed with the length of a text or skipping paragraphs, even when I know that those paragraphs are well researched pieces of gold. I feel like I still need to train myself to focus on what is relevant, yet I am so fucking unwilling to let go of the things that are not.

With tweaking, both platforms may become useful. For example one can unfollow all his friends on Facebook, leaving only the messaging and the occasional event and birthday notification to go through. It's a bit radical, but you can do it. I haven't played with the "Hide post (show fewer posts like this)" functionality, it could be pretty cool if it works. Twitter doesn't have a good default filtering system, though, even if I get more useful information from it. That doesn't mean that specialized Twitter clients don't have all kinds of features I have not tried. There is also the software guy way: developing your own software to sift through the stuff. One idea I had, for example, was something that uses OCR to restore images and videos to text.

Bottom line: Facebook, in its raw form, is almost useless to me. I remember some guy making fun of it and he was so right: "Facebook is not cool. Parents are on it!". You ask someone to connect with you, which is a two-directional connection, even if they couldn't care less about you, then you need to make an effort to remove the stuff they just vomit online. The graphical features of the site make it susceptible to graphical spam - everything big and flashy and lacking substance. Twitter is less so, and I have been surprised to see how much actual usable information is shared there. The unidirectional following system also leads to more complex data flow and structure, not just big blobs of similar people sharing base stuff that appeals to all.

But hey! "What about you, Siderite? What are you posting on Facebook and Twitter?" You'll just have to become friends and follow me to see, right? Nah, just kidding. My main content creation platform is still Blogger and I am using this system called If This Then That to share any new post on both social networks. Sometimes I read some news or I watch some video and I use the Facebook sharing buttons to express my appreciation for the content without actually writing anything about it, and occasionally I retweet something that I find really spectacular on Twitter. Because of my feelings towards the two systems, even if I find an interesting link on Twitter, I just like it there, then share it on Facebook if I don't feel it's really something special. So, yeah, I am also spamming more on Facebook than on Twitter.

What else?


I haven't touched Google+, which I feel is a failed social platform that only collects various YouTube comments without accurately conveying my interests. I also haven't spoken about LinkedIn, which I think is a great networking platform, but one I use - as I believe it should be used - exclusively for promoting my work and finding employment. I've used some strong language above, not because I am passionate about the subject but because I am not. I find it appropriate, though, and won't apologize for it. I couldn't care less if people go or don't go on social networks, and surely I am not a trendsetter that Zuckerberg would worry about. I only shared my own experience.

For the future I will probably continue to use both systems unless I finally implement one of the good ideas that would allow me to focus more on what matters, thus renouncing parts of my unhealthy habits. I am curious on how this will evolve in the near future and after I leave my current hiatus and go look for employment or start my own business.

The focus of Writing Tools is more on the journalist than on the novel writer. Of course there is a lot of overlap, but that means some of the tools may feel either irrelevant or like true gold, since book writers would not write about them so readily.

Roy Peter Clark lists the 50 tools (55 if you have the revised edition) in four categories:
  • Nuts and Bolts - about the use of language: verbs, adverbs, phrase length, punctuation and so forth
  • Special Effects - various creative ideas that give inspiration and direction to writing
  • Blueprints - overall planning
  • Useful Habits - various solutions for common problems or for improvement
I will list all 50 entries at the end of the review.

What I liked about the book is that it is direct and to the point, listing the tools so that you can always pick up the book and refresh your memory on how to use them. With so many of them, it is impossible to just skim through the book, unless you already know and employ most of the ideas there. I feel like I have to practice, practice, practice in order to absorb everything inside the material. It's not a huge tome, though, like something Kendall Haven might have written, but it is still packed with information.

I can't tell whether the source material is still under copyright or whether Clark made it available for free. The book is sold on Amazon, but you can also read it as a PDF online or listen to it freely on iTunes.

Now, for a list of the tools, something that I have shamelessly stolen from another review, because I am lazy:
  • Part One: Nuts and Bolts
    • Begin sentences with subjects and verbs.
    • Order words for emphasis.
    • Activate your verbs.
    • Be passive-aggressive.
    • Watch those adverbs.
    • Take it easy on the -ings.
    • Fear not the long sentence.
    • Establish a pattern, then give it a twist.
    • Let punctuation control pace and space.
    • Cut big, then small.
  • Part Two: Special Effects
    • Prefer the simple over the technical.
    • Give key words their space.
    • Play with words, even in serious stories.
    • Get the name of the dog.
    • Pay attention to names.
    • Seek original images.
    • Riff on the creative language of others.
    • Set the pace with sentence length.
    • Vary the lengths of paragraphs.
    • Choose the number of elements with a purpose in mind.
    • Know when to back off and when to show off.
    • Climb up and down the ladder of abstraction.
    • Tune your voice.
  • Part Three: Blueprints
    • Work from a plan.
    • Learn the difference between reports and stories.
    • Use dialogue as a form of action.
    • Reveal traits of character.
    • Put odd and interesting things next to each other.
    • Foreshadow dramatic events and powerful conclusions.
    • To generate suspense, use internal cliffhangers.
    • Build your work around a key question.
    • Place gold coins along the path.
    • Repeat, repeat, and repeat.
    • Write from different cinematic angles.
    • Report and write for scenes.
    • Mix narrative modes.
    • In short works, don’t waste a syllable.
    • Prefer archetypes to stereotypes.
    • Write toward an ending.
  • Part Four: Useful Habits
    • Draft a mission statement for your work.
    • Turn procrastination into rehearsal.
    • Do your homework well in advance.
    • Read for both form and content.
    • Save string.
    • Break long projects into parts.
    • Take an interest in all crafts that support your work.
    • Recruit your own support group.
    • Limit self-criticism in early drafts.
    • Learn from your critics.
    • Own the tools of your craft.