IEnumerable/IEnumerator - the iterator design pattern implemented in the C# language


C# started out with the IEnumerable interface, which exposed a single method called GetEnumerator. This method returns a specialized object (implementing IEnumerator) that exposes the current item of a collection and can advance to the next one. Unlike in Java, on which C# was originally modeled, this interface is implemented even by the most basic collection types, like arrays. This allowed a special construct in the language: foreach. One uses it very simply:
foreach (var item in enumerable) { /* do something with item */ }
No need to know how or from where the item is retrieved from the collection; just get the first one, then the next, and so on until there are no more items.

With .NET 2.0 came generics, along with their own interfaces like IEnumerable<T>, holding items of a specific type, but the logic is the same. C# 2.0 also introduced another language element: yield. One no longer needed to write an IEnumerator implementation; one could just define a method returning IEnumerable and "yield return" values inside it. Something like this:
public class Program
{
    public static void Main(string[] args)
    {
        var enumerator = Fibonacci().GetEnumerator();
        for (var i = 0; enumerator.MoveNext() && i < 10; i++)
        {
            var v = enumerator.Current;
            Console.WriteLine(v);
        }
        Console.ReadKey();
    }

    public static IEnumerable<int> Fibonacci()
    {
        var i1 = 0;
        var i2 = 1;
        while (true)
        {
            yield return i2;
            i2 += i1;
            i1 = i2 - i1;
        }
    }
}

This looks a bit weird: a method running a while(true) loop with no break in sight. Shouldn't it block the execution of the program? No, because of the yield construct: the loop body only runs when the next value is requested. While the Fibonacci series is infinite, we only ask for the first 10 values. You can also see how the enumerator works when used explicitly.

Iterators and generators in Javascript ES6


EcmaScript version 6 (or ES6 or ES2015) introduced the same concepts in Javascript. An iterator is just an object that has a next() method, returning an object containing the value and done properties. If done is true, value is disregarded and the iteration stops; if not, value holds the current value. An iterable object has a method that returns an iterator, the method's name being Symbol.iterator. The for...of construct of the language iterates the iterable. String, Array, TypedArray, Map and Set are all built-in iterables, because their prototype objects all have a Symbol.iterator method. Example:
var iterable = [1, 2, 3, 4, 5];
for (var v of iterable) {
  console.log(v);
}
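
You can also drive the iterator by hand and see the {value, done} objects described above:
var it = iterable[Symbol.iterator]();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
it.next(); it.next(); it.next(); // 3, 4 and 5
console.log(it.next()); // { value: undefined, done: true }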

But what about generating values? Well, let's do it using the knowledge we already have:
var iterator = {
  i1: 0,
  i2: 1,
  next: function() {
    var result = { value: this.i2 };
    this.i2 += this.i1;
    this.i1 = this.i2 - this.i1;
    return result;
  }
};

var iterable = {};
iterable[Symbol.iterator] = () => iterator;

var iterator = iterable[Symbol.iterator]();
for (var i = 0; i < 10; i++) {
  var v = iterator.next();
  console.log(v.value);
}

As you can see, it is the equivalent of the Fibonacci code written in C#, but look at how unwieldy it is. Enter generators, a feature that allows us, just like in C#, to define functions that generate values, together with the iterable associated with them:
function* Fibonacci() {
  var i1 = 0;
  var i2 = 1;
  while (true) {
    yield i2;
    i2 += i1;
    i1 = i2 - i1;
  }
}

var iterable=Fibonacci();

var iterator=iterable[Symbol.iterator]();
for (var i = 0; i < 10; i++) {
  var v = iterator.next();
  console.log(v.value);
}

No, that's not a C pointer, thank The Architect, it's the way Javascript ES6 defines generators. Same code, much clearer, very similar to the C# version.

Uses


OK, so these are great for mathematics enthusiasts, but what are we, regular dudes, going to do with iterators and generators? I mean, for better or for worse we already have for and .forEach in Javascript, what do we need for..of for? (pardon the pun) And what do generators help with?

Well, in truth, one could get away just fine without for..of. The only built-in object where .forEach behaves differently is Map, whose forEach callback receives the value first (and the key as the second argument), while for..of over a Map yields [key, value] arrays. However, considering generators are new, I would expect using for..of with them to read more clearly in code and to be more in line with what foreach does in C#.
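
A quick comparison of the two on a Map:
var map = new Map([['a', 1], ['b', 2]]);
map.forEach(function(value, key) { console.log(value, key); }); // 1 'a', then 2 'b'
for (var entry of map) { console.log(entry); }                  // ['a', 1], then ['b', 2]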

Generators make it easy to define series that may be infinite, or whose items are expensive resources to get. Imagine a download of a large file where each chunk is a generated item. An interesting use scenario is when the .next function is used with parameters. This is Javascript, so an iterator having a .next method only means the method is named like that; you can define it to take any parameters you like, and inside a generator the value passed to .next() becomes the result of the yield expression. So here it is: a generator that not only dumbly spews out values, but also takes inputs in order to produce them.
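
As a quick sketch, here is the Fibonacci generator again, this time accepting a flag through .next() that tells it to restart the series:
function* Fibonacci() {
  var i1 = 0;
  var i2 = 1;
  while (true) {
    var restart = yield i2; // receives whatever was passed to the .next() call that resumes us
    if (restart) {
      i1 = 0;
      i2 = 1;
    } else {
      i2 += i1;
      i1 = i2 - i1;
    }
  }
}

var iterator = Fibonacci();
console.log(iterator.next().value);     // 1
console.log(iterator.next().value);     // 1
console.log(iterator.next().value);     // 2
console.log(iterator.next(true).value); // 1 - the generator was told to start over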

In order to thoroughly explore the value of iterators and generators I will use my extensive knowledge in googling the Internet and present you with this very nice article: The Hidden Power of ES6 Generators: Observable Async Flow Control which touches on many other concepts like async/await (oh, yeah, that should be another interesting C# steal), observables and others that are beyond the scope of this article.



I hope you liked this short intro into these interesting new features in ES6.

Can you tell me what the difference is between these two pieces of code?
var f = function() {
  alert('f u!');
}

function f() {
  alert('f u!');
}

There is only one difference I can think of, and with good code hygiene it is one that should never matter. It is related to 'hoisting', the idea that in Javascript a variable declaration is moved to the top of the function or scope before execution. In other words, f can be seen to exist even before the declarations above. And now the difference: 'var f = function' has its declaration hoisted, but not its definition, so f exists at the beginning of the scope, but it is undefined; the 'function f' format has both declaration and definition hoisted, so at the beginning of the scope the function is already available for execution.
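
A quick illustration, using two differently named functions so both can live in the same scope:
try {
  fe(); // TypeError: fe is not a function - the declaration was hoisted, but the definition was not
} catch (e) {
  console.log(e.message);
}
fd(); // works - both declaration and definition were hoisted

var fe = function() { alert('expression'); };
function fd() { alert('declaration'); }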

Intro


I am not the authoritative person to go to for Javascript Promises, but I've used them extensively (pardon the pun) in Bookmark Explorer, my Chrome extension. In short, they are a way to handle methods that return asynchronously or that for whatever reason need to be chained. There are other ways to do that, for example using Reactive Extensions, which are newer and in my opinion better, but Rx's scope is larger and that is another story altogether.

Learning by examples


Let's take a classic and very used example: AJAX calls. The Javascript could look like this:
function get(url, success, error) {
  var xhttp = new XMLHttpRequest();
  xhttp.onreadystatechange = function () {
    if (this.readyState == 4) {
      if (this.status == 200) {
        success(this.response);
      } else {
        error(this.status);
      }
    }
  };
  xhttp.open("GET", url, true);
  xhttp.send();
}

get('/users', function(users) {
  //do something with user data
}, function(status) {
  alert('error getting users: ' + status);
});

It's a stupid example and far from complete, but it shows a way to encapsulate an AJAX call into a function that receives a URL, a handler for success and one for error. Now let's complicate things a bit. I want to get the users, then for each active user I want to get the list of documents, then return the ones that contain a string. For simplicity's sake, let's assume I have methods that do all that and each receives a success and an error handler:
var search = 'some text';
var result = [];
getUsers(function (users) {
  users
    .filter(function (user) {
      return user.isActive();
    })
    .forEach(function (user) {
      getDocuments(user, function (documents) {
        result = result.concat(documents.filter(function (doc) {
          return doc.text.includes(search);
        }));
      }, function (error) {
        alert('Error getting documents for user ' + user + ': ' + error);
      });
    });
}, function (error) {
  alert('Error getting users: ' + error);
});

It's already looking wonky. Ignore the arrow anti-pattern; there are worse issues. One is that you never know when the result is complete: each call for a user's documents takes an indeterminate amount of time. This async programming is killing us, isn't it? We would have much preferred to do something like this instead:
var result = [];
var search = 'some text';
var users = getUsers();
users
  .filter(function (user) {
    return user.isActive();
  })
  .forEach(function (user) {
    var documents = getDocuments(user);
    result = result.concat(documents.filter(function (doc) {
      return doc.text.includes(search);
    }));
  });
First of all, no async, everything is deterministic and if the function for getting users or documents fails, well, it can handle itself. When this piece of code ends, the result variable holds all information you wanted. But it would have been slow, linear and simply impossible in Javascript, which doesn't even have a Pause/Sleep option to wait for stuff.

Now I will write the same code with methods that use Promises, then proceed on explaining how that would work.
var result = [];
var search = 'some text';
var userPromise = getUsers();
userPromise.then(function (users) {
  var documentPromises = users
    .filter(function (user) {
      return user.isActive();
    })
    .map(function (user) {
      return getDocuments(user);
    });
  var documentPromise = Promise.all(documentPromises);
  documentPromise
    .then(function (documentsArray) {
      documentsArray.forEach(function (documents) {
        result = result.concat(documents.filter(function (doc) {
          return doc.text.includes(search);
        }));
      });
      // here the result is complete
    })
    .catch(function (reason) {
      alert('Error getting documents: ' + reason);
    });
});
Looks more complicated, but that's mostly because I added some extra variables for clarity.

The first thing to note is that the format of the functions doesn't look like the async callback version, but like the synchronous version: var userPromise=getUsers();. It doesn't return users, though, it returns the promise of users. It's like a politician function. This Promise object encapsulates the responsibility of announcing when the result is actually available (the .then(function) method) or when the operation failed (the .catch(function) method). Now you can pass that object around and still use its result (successful or not) when available at whatever level of the code you want it.

At the end I used Promise.all, which handles all that uncertainty we were annoyed about. Not only does it produce an array with the results of all the document-getting operations, but the order of the items in that array is the same as the order of the original promises array, regardless of how long each of them took to execute. Even more, if any of the operations fails, this aggregate Promise will immediately reject with the failure reason.
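
A small sketch of both behaviors:
var slow = new Promise(function (resolve) { setTimeout(function () { resolve('slow'); }, 200); });
var fast = Promise.resolve('fast');
Promise.all([slow, fast]).then(function (values) {
  console.log(values); // ['slow', 'fast'] - the order of the promises, not the order of completion
});
Promise.all([slow, Promise.reject(new Error('boom'))]).catch(function (reason) {
  console.log(reason.message); // 'boom' - rejects as soon as any of the promises fails
});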

To exemplify the advantages of using such a pattern, let's assume that getting the users sometimes fails due to network errors. The Internet may not be the best in the world where the user of the program happens to be, so you don't want to fail immediately; instead you retry the operation a few times before giving up. Here is how a getUsersWithRetry would look:
function getUsersWithRetry(times, spacing) {
  var promise = new Promise(function (resolve, reject) {
    var f = function () {
      getUsers()
        .then(resolve)
        .catch(function (reason) {
          if (times <= 0) {
            reject(reason);
          } else {
            times--;
            setTimeout(f, spacing);
          }
        });
    };
    f();
  });
  return promise;
}

What happens here? First of all, like all the get* methods we used so far, we need to return a Promise object. To construct one, we give it as a parameter a function that receives two other functions: resolve and reject. Resolve will be used when we have the value, reject when we fail to get it. We then create a function f so that it can call itself, and inside it we call getUsers. If the operation succeeds, we just call resolve with the value we received; in that case getUsersWithRetry behaves exactly like getUsers. However, when it fails, it checks the number of times it must retry and only fails (calling reject) when that number has reached zero. If it still has retries left, it schedules the same f function again after the spacing interval given in the parameters. Finally, we call the f function once to start the whole thing.
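
Using it looks exactly like using getUsers, only more resilient:
getUsersWithRetry(3, 1000) // retry up to 3 times, waiting a second between attempts
  .then(function (users) {
    // do something with user data
  })
  .catch(function (reason) {
    alert('Error getting users: ' + reason);
  });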



Here is another example, something like the original get function, but that returns a Promise:
function get(url) {
  // Return a new promise.
  return new Promise(function(resolve, reject) {
    // Do the usual XHR stuff
    var req = new XMLHttpRequest();
    req.open('GET', url);

    req.onload = function() {
      // This is called even on 404 etc
      // so check the status
      if (req.status == 200) {
        // Resolve the promise with the response text
        resolve(req.response);
      }
      else {
        // Otherwise reject with the status text
        // which will hopefully be a meaningful error
        reject(Error(req.statusText));
      }
    };

    // Handle network errors
    req.onerror = function() {
      reject(Error("Network Error"));
    };

    // Make the request
    req.send();
  });
}
Copied it like a lazy programmer from JavaScript Promises: an Introduction.

Notes


An interesting thing to remember is that the .then() method also returns a Promise, so one can do stuff like get('/users').then(JSON.parse).then(function(users) { ... }). If the function called by .then() is returning a Promise, then that is what .then() will return, allowing for stuff like someOperation().then(someOtherOperation).catch(errorForFirstOperation).then(handlerForSecondOperation). There is a lot more about promises in the Introduction in the link above and I won't copy/paste it here.
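
For instance, combining the get function from above with the promise-returning getDocuments from the earlier example, a whole chain with a single error handler could look like this:
get('/users')
  .then(JSON.parse)                // the next .then() receives the parsed value
  .then(function (users) {
    return getDocuments(users[0]); // returning a Promise makes the next .then() wait for it
  })
  .then(function (documents) {
    console.log(documents.length);
  })
  .catch(function (reason) {
    alert('Something failed along the chain: ' + reason);
  });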

The nice thing about Promises is that they have been around in the Javascript world since forever in various libraries, but only recently as native Javascript objects. They have reached a maturity that was tested through the various implementations that led to the one accepted today by the major browsers.

Promises solve some of the problems of constructing a flow of asynchronous operations. They make the code cleaner and get rid of the many ugly extra callback parameters that many frameworks got us used to. This flexibility is possible because in Javascript functions are first class citizens, meaning they can be passed around and manipulated just like any other parameter type.

The disadvantages are more subtle. Some parts of your code will still be synchronous in nature: you will have stuff like a=sum(b,c); with functions that return values directly. With promises in the mix, suddenly functions don't return actual values, but promised values, and the time they take to execute is also unclear. Everything is in flux.

Conclusion


I hope this has opened your eyes to the possibility of writing your code in a way that is both more readable and easier to encapsulate. Promises are not limited to Javascript; many other languages have their own implementations. As I was saying in the intro, I feel like Promises are a simple subcase of Reactive Extensions streams in the sense that they act like data providers, but are limited to only one possible result. However, this simplicity may make them easier to implement and understand when that is the only scenario that needs to be handled.

A few years ago I wrote a post titled Software Patterns are Useless in which I was drawing attention to the overhyping of the concept of software design patterns. In short, they are either too simple or too complex to bundle together into a concept to be studied, much less used as a lingua franca for coders.

I still feel this way, but in this post I want to explore what I think the differences are between software design patterns and building architecture design patterns. In the beginning, as you no doubt know from the maniacal introduction in design patternese by any of its enthusiasts, it was a construction architect who noticed common themes in solving the problems involved in building something. This happened around the time I was born, so 40 years ago. Within about ten years, software people started to import the concept and created this collection of general solutions to software problems. It was a great idea, don't get me wrong.

However, things have definitely changed in software since then. While we still meet some of the problems we invented software design patterns for, the problems, or at least their scope, have changed immensely. To paraphrase a well known joke, design patterns would be similar in construction and software if you first built a giant robot that would construct your house by itself. But it doesn't happen like this; instead the robot is actually the company itself, its purpose being to take your house specifications and build it. So how useful would a "pattern" be that says "if you need a house, hire a construction company"?

Look at MVC, a software design pattern that is implemented everywhere in the web world, at different levels of abstractions. You use ASP.Net MVC, for example, to separate the logic from the rendering on the server, but then you use Angular to separate logic from rendering on the client. We are very clever young men, but it's MVCs all the way down! How is that a pattern anymore? "If you want to separate rendering from logic, use MVC" "How do I implement it?" "Oh, just use a framework that was built for you". At the other end of the spectrum there are things that are so simple and basically embedded in programming languages that thinking of them as patterns is pointless. Iterator is one of them. Now even Javascript has a .forEach construct, not to mention generators and iterators. Or Abstract Factory - isn't that just a factory for factories? Should I invent a new name for the pattern that creates a factory of factories for factories?

But surely there are patterns that are very useful and appropriate for our work today. Like the factory pattern, or the builder, the singleton or inversion of control and so many others. Of course they are, I am not debating that at all. I am going to present inversion of control soon to a group of developers. I even find it useful to split these patterns into categories like creational, structural, behavioral, concurrency, etc. Classification in general is a good thing. It's the particulars that bother me.

Take the seminal book Design Patterns: Elements of Reusable Object-Oriented Software (Gamma et al. 1994). And by seminal I mean so influential that the Wikipedia URL is Design_Patterns. Not (book) or the complete title or anything. If you followed the software design patterns link from higher up you noticed that design patterns are still organized around those 23 original ones from the book written by "the Gang of Four" more than twenty years ago.

I want to tell you that this is wrong, but I've seen it in so many other fields. There is one work that is so ground breaking that it becomes the new ground. Everything else seems to build upon it from then on. You cannot be taken seriously unless you know about it and what it contains. Any personal contribution of yours must needs reference the great work. All the turtles are stacked upon it. And it is dangerous, encouraging stagnation and trapping you in this cage that you can only get out of if you break it.

Does that mean that I am writing my own book that will break ground and save us all? No. The patterns, as described and explained in so many ways around the 'net are here to stay. However note the little changes that erode the staleness of a rigid classification, like "fluent interfaces" being used instead of "builder pattern", or the fact that no one in their right mind will try to show off because they know what an iterator is, or the new patterns that emerge not from evaluating multiple cases and extracting a common theme, but actually inventing a pattern to solve the problems that are yet to come, like MVVM.

I needed to write this post, which very much mirrors the old one from a few years ago, because I believe calling a piece of software that solves a problem a pattern increasingly sounds more like patent. Like biologists walking the Earth in the hope they will find a new species to name for themselves, developers are sometimes overdefining and overusing patterns. Remember it is all flexible, that a pattern is supposed to be a common theme in written code, not a rigid way of writing your code from a point on. If software patterns are supposed to be the basis of a common language between developers, remember that all languages are dynamic, alive, that no one speaks the way the dictionaries and manuals tell you to speak. Let your mind decide how to write and next time someone scoffs at you asking "do you know what the X pattern is?" you respond with "let me google that for you". Let patterns be guides for the next thing to come, not anchors to hold you back.

I watched this Beau teaches JavaScript video and I realized how powerful Proxy, this new feature of EcmaScript version 6, is. In short, you call new Proxy(someObject, handler) and you get an object that behaves just like the original object, but has your code intercept most of the access to it, like when you get/set a property or method or when you ask if the object has a member by name. It is great because I feel I can work with normal Javascript, then just insert my own logging, validation or some other checking code. It's like doing AOP and metaprogramming in Javascript.

Let's explore this a little bit. The video in the link above already shows some way to do validation, so I am going to create a function that takes an object and returns a proxy that is aware of any modification to the original object, adding the isDirty property and clearDirt() method.

function dirtify(obj) {
  return new Proxy(obj, {
    isDirty: false,
    get: function(target, property, receiver) {
      if (property === 'isDirty') return this.isDirty;
      if (property === 'clearDirt') {
        var self = this;
        var f = function() {
          self.isDirty = false;
        };
        return f.bind(target);
      }
      console.log('Getting ' + property);
      return target[property];
    },
    has: function(target, property) {
      if (property === 'isDirty' || property === 'clearDirt') return true;
      console.log('Has ' + property + '?');
      return property in target;
    },
    set: function(target, property, value, receiver) {
      if (property === 'isDirty' || property === 'clearDirt') return false;
      if (target[property] !== value) this.isDirty = true;
      console.log('Setting ' + property + ' to ' + JSON.stringify(value));
      target[property] = value;
      return true;
    },
    deleteProperty: function(target, property) {
      if (property === 'isDirty' || property === 'clearDirt') return false;
      console.log('Delete ' + property);
      if (target[property] != undefined) this.isDirty = true;
      delete target[property];
      return true;
    }
  });
}
var obj = {
  x: 1
};
var proxy=dirtify(obj);
console.log('x' in proxy); //true
console.log(proxy.hasOwnProperty('x')); //true
console.log('isDirty' in proxy); //true
console.log(proxy.x); //1
console.log(proxy.hasOwnProperty('isDirty')); //false
console.log(proxy.isDirty); //false

proxy.x=2;
console.log(proxy.x); //2
console.log(proxy.isDirty); //true

proxy.clearDirt();
console.log(proxy.isDirty); //false

proxy.isDirty=true;
console.log(proxy.isDirty); //false
delete proxy.isDirty;
console.log(proxy.isDirty); //false

delete proxy.x;
console.log(proxy.x); //undefined
console.log(proxy.isDirty); //true

proxy.clearDirt();
proxy.y=2;
console.log(proxy.isDirty); //true

proxy.clearDirt();
obj.y=3;
console.log(obj.y); //3
console.log(proxy.y); //3
console.log(proxy.isDirty); //false

So, here I am returning a proxy that logs any access to members to the console. It also simulates the existence of isDirty and clearDirt members, without actually setting them on the object. You see that when setting the isDirty property to true, it still reads false. Any setting of a property to a different value or deleting an existing property is setting the internal isDirty property to true and the clearDirt method is setting it back to false. To make it more interesting, I am returning true for the 'in' operator, but not for the hasOwnProperty, when querying if the attached members exist. Also note that this is a real proxy, if you change a value in the original object, the proxy will also reflect it, but without intercepting the change.

Imagine the possibilities!

More info:
ES6 Proxy in Depth
Metaprogramming with proxies
Metaprogramming in ES6: Part 3 - Proxies

As part of a self imposed challenge to write a blog post about code on average every day for about 100 days, I am discussing a link that I've previously shared on Facebook, but that I liked so much I feel the need to contemplate some more: The Caching Antipattern

Basically, what it says is that caching is done wrong in any of these cases:
  • Caching at startup - thus admitting that your dependencies are too slow to begin with
  • Caching too early in development - thus hiding the performance of the service you are developing
  • Integrated cache - cache is embedded and integral in the service code, thus breaking the single responsibility principle
  • Caching everything - resulting in an opaque service architecture and even recaching. Also caching things you think will be used, but might never be
  • Recaching - caches of caches and the nightmare of untangling and invalidating them in cascade
  • Unflushable cache - no method to invalidate the cache except restarting services

The alternative is not to cache at all and instead know your data and your service performance and tweak them accordingly. Try to take advantage of client caches, such as the HTTP If-Modified-Since header and the 304 Not Modified return code, whenever possible.
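
As a minimal sketch of that last idea, assuming a Node.js service and a lastModified date you already track for the resource:
var http = require('http');
var lastModified = new Date('2017-02-01T00:00:00Z'); // whenever the resource actually changed

http.createServer(function (req, res) {
  var since = req.headers['if-modified-since'];
  if (since && new Date(since) >= lastModified) {
    res.writeHead(304); // Not Modified: the client keeps using its own copy
    return res.end();
  }
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'Last-Modified': lastModified.toUTCString()
  });
  res.end(JSON.stringify({ users: [] }));
}).listen(8080);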

I've had the opportunity to work on a project that did almost everything in the list above. The performance bottleneck was the database, so the developers embedded an in code memory cache for resources that were not likely to change. Eventually other anti-patterns started to emerge, like having a filtered call (let's say a call asking for a user by id) getting all users and then selecting the one that had that id. Since it was in memory anyway, "getting" the entire list of records was just getting a reference to an already existing data structure. However, if I ever wanted to get rid of the cache or move it in an external service, I would have had to manually look for all cases of such presumptions in code and change them. If I wanted to optimize a query in the database I had no idea how to measure the performance as actually used in the product, in fact it led to a lack of interest in the way the database was used and a strong coupling of data provider implementation to the actual service code. Why use memory tables for often queried data? We have it cached anyway.

My take on this is that caching should always be separate from the service code. Have the code work, as slow as the data provider and external services allow it, then measure the performance and add caching where actually needed, as a separate layer of indirection. These days there are so many ways to do caching, from in memory tables in SQL Server, to distributed memory caches provided as a service by most cloud providers - such as Memcached or Redis in AWS, to content caches like Akamai, to html output caches like Varnish and to client caches, controlled by response and request headers like in the suggestion from the original article. Adding your own version is simply wasteful and error prone. Just like the data provider should be used through a thin interface that allows you to replace it at will, the caching layer should also be plug and play, thus allowing your application to remain unchanged, but able to upgrade any of its core features when better alternatives arrive.

There is also something to be said about reaching the limit of your resources. Let's say you cache everything your clients ask for in memory, when they ask for it. At one time or another you might reach the upper limit of your memory. At this point the cache should not fail; instead it should evict the least recently used data, or the oldest inserted, or something like that. A cache is not something that is supposed to hold all your data, only the part of it that is most efficient, performance wise, to keep around, and it should never ever bring new problems like memory overflow crashes. Eek!
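
A minimal sketch of such an eviction policy, using a Javascript Map, which remembers insertion order (the names here are made up, not from any particular library):
function createLruCache(limit) {
  var map = new Map();
  return {
    get: function (key) {
      if (!map.has(key)) return undefined;
      var value = map.get(key);
      map.delete(key); // re-insert so this key becomes the most recently used
      map.set(key, value);
      return value;
    },
    set: function (key, value) {
      if (map.has(key)) map.delete(key);
      map.set(key, value);
      if (map.size > limit) {
        var oldest = map.keys().next().value; // the first key is the least recently used
        map.delete(oldest);
      }
    }
  };
}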

Now, I need to find a suitable name for my caching layer invalidation manager ;)

Other interesting resources about caching:
Cache (computing)
Caching Best Practices
Cache me if you can Powerpoint presentation
Caching guidance
Caching Techniques

The first thing that strikes anyone starting to use another IDE than the one they are used to is that all the key bindings are wrong. So they immediately google something like "X key bindings for Y" and they usually get an answer, since developers switching IDEs and preferring one in particular is quite a common situation. Not so with Eclipse. You have to install software, remove settings then still modify stuff. I am going to give you the complete answer here on how to switch Eclipse key bindings to the ones you are used to in Visual Studio.

Step 1
First follow the instructions in this Stack Overflow answer: How to Install Visual Studio Key Bindings in Eclipse (Helios onwards)
Short version: Go to Help → Install New Software, select your version in the Work with box, wait until the list populates, check the box next to Programming Languages → C/C++ Development Tools and install (with restart). After that go to Window → Preferences → General → Keys and change the Scheme in a dropdown to Microsoft Visual Studio.

Step 2
When Eclipse starts it shows you a Welcome screen. Disable the welcome screen by checking the box from the bottom-right corner and restart Eclipse. This is to avoid Ctrl-arrows not working in the editor as explained in this StackOverflow answer.

Step 3
While some stuff does work, others do not. It is time to go to Window → Preferences → General → Keys and start changing key bindings. It is a daunting task at first, since you have to find the command, set the shortcut in the zillion contexts that are available and so on. The strategy I found works best is this:
  • Right click on whatever item you want to affect with the keyboard shortcut
  • Find in the context menu whatever command you want to do
  • Remember the keyboard shortcut
  • Go to the key preferences and replace that shortcut everywhere (the text filter in the key bindings dialog allows searching for keyboard shortcuts)

You might want to share or at least back up your keyboard settings. No, the Export CSV option in the key bindings dialog gives you a file you can't import. The solution, as detailed here, is to go to File → Export or Import → General → Preferences and work with .epf files. And if you think it gives you a nice list of key bindings that you can edit in a file editor, think again. The format holds the key binding scheme name, then only the custom changes, in a file that is what .ini and .xml would have if they decided on having children.

Now, the real decent thing would be to not go through Step 1 and instead just start from the default bindings and change them according to Visual Studio (2016, not 2005!!) and then export the .epf file in order for all people to enjoy a simple and efficient method of reaching their goal. I leave this as an exercise for the reader.

A short list of shortcuts that I found missing from the Visual Studio schema: rename variable on F2, go to declaration on F12, Ctrl-Shift-F for search across files, Ctrl-Minus to navigate backward ... and more to come I am sure.

Ugh! I will probably be working in Java for a while. Or I will kill myself, one of the two. So far I hate Eclipse, I can't even write code and I have to blog simple stuff like how to do regular expressions. Well, take it as a learning experience! Exiting my comfort zone! 2017! Ha, ha haaaaa [sob! sob!]

In Java you use Pattern to do regex:
Pattern p = Pattern.compile("a*b");
Matcher m = p.matcher("aaaaab");
boolean b = m.matches();

So let's do some equivalent code:

.NET:
var regDate = new Regex(@"^(?<year>(?:19|20)?\d{2})-(?<month>\d{1,2})-(?<day>\d{1,2})$", RegexOptions.IgnoreCase);
var match = regDate.Match("2017-02-14");
if (match.Success) {
    var year = match.Groups["year"];
    var month = match.Groups["month"];
    var day = match.Groups["day"];
    // do something with year, month, day
}

Java:
Pattern regDate = Pattern.compile("^(?<year>(?:19|20)?\\d{2})-(?<month>\\d{1,2})-(?<day>\\d{1,2})$", Pattern.CASE_INSENSITIVE);
Matcher matcher = regDate.matcher("2017-02-14");
if (matcher.find()) {
    String year = matcher.group("year");
    String month = matcher.group("month");
    String day = matcher.group("day");
    // do something with year, month, day
}

Notes:

The first thing to note is that there is no verbatim literal support in Java (the @"string" format from .NET) and there is no "var" (one always has to specify the type of the variable, even if it's fucking obvious). Second, the regular expression object Pattern doesn't do things directly, instead it creates a Matcher object that then does operations. The two bits of code above are not completely equivalent, as the Success property in a .NET Match object holds the success of the already performed operation, while .find() in the Java Matcher object actually performs the match.

Interestingly, it seems that Pattern is automatically compiling the regular expression, something that .NET must be directed to do. I don't know if the same term means the same thing for the two frameworks, though.

Another important thing is that it is more efficient to reuse matchers rather than recreate them. So when you want to use the matcher on another string, use matcher.reset("newstring").

And lastly, the String class itself has quick and dirty regular expression methods like .matches, .replaceFirst and .replaceAll. The matches method returns true only if the entire string is a match (equivalent to a Pattern match with ^ at the beginning and $ at the end).

I'll be honest, I only started reading the Culture series because Elon Musk named his rockets after ships in the books; and I started with The Player of Games, because the first book was in an inconvenient ebook format. So here I was, poised to be amazed by the wonderful and famous universe created by Iain M. Banks. And it completely bored me.

The book is written in a style reminiscent of Asimov, but feeling even older, even though it was published in 1997. My mind made the connection with We, by Zamyatin, which was published in 1921. Characters are not really developed; they are described with the few details that pertain to the subject of the story. They then act and talk, with the book occasionally revealing some things that seemed smart to the author when he wrote it. Secondary characters have it even worse, the most extensively (and uselessly) cared-for attribute being a long name composed of various meaningless words. The hero of the story is named Chiark-Gevantsa Jernau Morat Gurgeh dam Hassease, for example, and the sentient drone that accompanies him is Trebel Flere-Imsaho Ephandra Lorgin Estral. Does anyone care? Nope.

But wait, Asimov wrote some brilliant books, didn't he? Maybe the style is a bit off, but the world and the idea behind the story must be great, if everybody acclaims the Culture series. Nope, again. The universe is amazingly conservative, with the important actors being either humanoid or machine, and acting as if of similar intellect. The entire premise of the book is that a human is participating in a game contest with aliens, and even engages in sexual flirting and encounters with them, which felt really uninspired and even insipid. I mean, compare this with other books released in 1997, like The Neutronium Alchemist (The Night's Dawn Trilogy, #2) by Peter F. Hamilton or 3001: The Final Odyssey (Space Odyssey, #4), by Arthur C. Clarke or even Slant, by Greg Bear. Compared to these The Player of Games feels antiquated, bland. Imagine an entire book about Data fighting Sirna Kolrami in a game of Stratagema. Boring.

Perhaps the most intriguing part of the book is also the least explored: the social and moral landscape of the Culture, a multispecies conglomerate that seems to have grown above the need for stringent laws or moral rules. Since everybody can change their chemistry and body shape and have enough resources to have no need for money, they do whatever they please when they please it, as long as it doesn't disturb others too much. Now this threshold is never explored in more than a few paragraphs. The anarchistic nature of the post-scarcity society Banks described, and indeed the rest of the book, felt like a stab at our current hierarchical and rule-based order, but it was a weak stab, a near miss, a mere tickle that went ignored.

Bottom line: my expectations for well renowned books may be unreasonably high, and maybe if I had read The Player of Games when I was a kid, I would have liked it. However, I wasn't a kid in 1997 and I chose to read it now, when it feels even more obsolete and bland.


The unthinkable happened and I couldn't finish a Brandon Sanderson book. True, I had no idea The Bands of Mourning was the sixth in a series, but when I found out I thought it was a good idea to read it and see if it was worth reading the whole Mistborn series, for which Sanderson is mostly known. Well, if the other books in the series are like this one, it's kind of boring.

I didn't feel like the book was bad, don't get me wrong, it was just... painfully average. Apparently in the Mistborn universe there are people that have abilities, like super powers, and others that have even stronger powers but use metal as fuel. Different metals give different powers. That was intriguing; I bought the premise, I wanted to see it used in an interesting way. Instead I get a main character who is both a lord and a policeman, solving crime with the help of a funny sidekick at the request of the gods, who are merely people who have ascended into godhood rather than the creators of the entire universe. The crime-fighting lord kind of soured the whole deal for me, but I was ready to see more and get into the mood of things. I couldn't. Apparently, the only ways people have thought of to fight people who can affect metal are aluminium bullets, which are terribly expensive, or complex devices that nullify their power. Apparently bows and arrows or wooden bullets are beyond their imagination.

But the worst sin of the book, other than kind of recycling old ideas and having people behave stupidly, is having completely unsympathetic characters. I probably would have been more invested if I had read the first five books of the series, but as it is, I thought all of the main characters were artificially weird, annoying and uninteresting.

Bottom line: around halfway into the book, which is short by Sanderson standards anyway, I gave up. There are so many books in the world, I certainly don't need to read this one. The Wikipedia article for the book says: Sanderson wrote the first third of Shadows of Self between revisions of A Memory of Light. However, after returning to the book in 2014 Sanderson found it difficult to get back into writing it again. To refresh himself on the world and characters, Sanderson decided to write its sequel Bands of Mourning first and at the end of 2014 he turned both novels in to his publisher. So the author was probably distracted when he wrote this book, perhaps the others are better, but as such I find it difficult to motivate myself to try reading them.

Watching yesterday's protests, I was surprised that nobody makes the connection between the clashes with the riot police and the Revolution of 1989, the only period I remember anything like this happening in Romania. Yesterday people were talking about Colectiv, about how shameless the PSD are, about bringing down Dragnea. Was Colectiv so long ago that we no longer remember what it was like? People were in the streets chanting against the political class as a whole. The ultras and the gendarmes did not get involved there, and people were sick of any form of politics. Now, though, the fight is polarized, down with one, up with the other, and we have fallen again into that rotten cycle we couldn't escape for decades after '90: a permanent vote against, sickly pinning our hopes on the other side, as if a few swaps between parties would solve anything. At the Revolution we brought down a system, and now, I believe, anything short of that is a gigantic failure.

That is why you won't see me in the squares chanting against one or another. They are all the same. The only solution is political castration: nobody should have the power to pass laws without debate, anyone should be able to introduce a law or a veto on a law with a certain number of signatures, we should eliminate the possibility, through the Constitution, for a president to keep parliament blocked, or for a parliament to toy with legislation at the behest of some party, or for the DNA to take sides, and so on. We should hold people accountable for their words, promises and deeds. Not with laws and prisons, but publicly, collectively. Somehow we have forgotten that political elections are merely an abstraction of the popular will, which can change at any moment.

What do you do with someone who has betrayed your trust? You stop giving it to them. If you give them the keys to your house and they rob you, you take the keys back! Maybe you even beat them up, but you clearly don't let them into your house anymore. The solution is not to immediately hand the keys to someone else either, but to keep them yourself, period.

I repeat: the protests after the Colectiv fire were an explosion of outrage against the entire political class; the Revolution of 1989 and what immediately followed, the only period I can remember gendarmes with water cannons and tear gas being used against demonstrators in Romania, was an explosion of outrage against the entire political system. I piss on the protesters in Piata Victoriei if all they want is to bring down Dragnea on their way home from work, if that is the extent of their ambition.

Brandon Sanderson does not disappoint with the sequel to Way of Kings. Quite the opposite, in fact, weaving more and more into the vast tapestry that is the world of the Stormlight Archive series. The characters converge towards a point in time and space where everything and anything will be decided, the fate of the entire world, with just a few courageous people standing between life and complete desolation.

Words of Radiance focuses more on the main characters, with less distractions that might take the reader out of the flow of the story. However, even if the scope of their achievements explodes, the power of their stories loses a bit of the desperation and energy from the first book. We no longer have powerless broken people trying to survive, but magical beings full of strength doing extraordinary things. Ironically, it is their success that makes them less easy to identify and empathize with. The author throws challenges in front of them, but they seem inconsequential compared to the ones in Way of Kings. I feel like he has grown attached to them and finds it difficult to torture them as a good writer should. On the other hand Sanderson is a positive person, most of this writing being lighthearted and less dark and brooding, so this is not a disappointment.

The climax is a gigantic clash between forces that have slowly grown since the beginning of the series. Sanderson does a wonderful job tying the separate strands of his world into a single story, maybe a bit too much so. The Roshar and Helaran connections felt a bit strained, not unlike Luke Skywalker discovering his greatest ally and greatest enemy are family members. The author had better be careful not to put immense effort into creating a vast universe, only to shrink it by mistake by connecting everything with everything and everybody with everybody.

And again the final pages of the book feel weak, as they come after the powerful climax, yet they are necessary to tie up some story arcs and seed the beginning of others. Yes, the book ends with a promise that what happened in it is just the mere beginning, a small part of the larger picture, so expect little closure. Sanderson is a prolific author and I am sure he will write the next books in the series fast enough to keep me engaged, but be aware the series is planned to be at least ten main books with about as many companion stories and novels. This... will take a while. Oathbringer, the third book, is scheduled to be released in November 2017.

Bottom line: I recommend the book and the series and the author. No fantasy reader should ignore Brandon Sanderson if they are anything like me. Just make sure you are ready to get invested in the story only to wait every year for the next chapter to be released.


I am hearing more and more this expression that leaves me baffled: "Check your privilege!". It is directed at us, White men, by women, colored folk and gays. It is intended to make us aware of our superior position in order for us to feel guilty over it. Really? That's what you got?

First of all, shit doesn't just happen: it takes time and effort! Do you think that God bestowed our supremacy onto us or something? No! If you believe that you have bought into all the stories we've fed you. We worked hard to get where we are! What have you done? Black people have been the majority of people on Earth for millions of years, but when does humanity grow exponentially together with the living standards? That's right! When the White Man takes control. Women have been ruling the Stone Age for millennia. Where did that get us? Nowhere, that's where! Stone fashion didn't do well, did it? And now you have the gall to ask us to check our privilege? Now the shoe is on the other foot and you are sour about it. Deal with it! Facts, bitches! Not alternative ones, either. The White Man management has brought this enterprise to new heights. Everything you have now is the direct consequence of our leadership. You are beneath us because we put you there!

Every time you people complain you call yourselves "minorities". No, you're not! Women are more numerous than men and White people are fewer than blacks and browns and yellows and whatever else there is on this planet. You know who's a minority? White Men! And still the privilege is yours, since we clearly allow you to exist and complain. The only group of people that have consistently been persecuted and have been a lot fewer than other folk are the homos. They are the only ones who have the right to whine. That's why everybody says whining is gay. It's true!

Yet when we complain, we are derided! Do you really think it is easy to keep a majority of the people on Earth under the boot? You think hitting women or slaves is fun? It fucking stings! You have to take all empathy and push it way, way down, swallow your tears and do the right thing for everybody, because in the end we have led humankind into its Golden Age, all through our sacrifice. And if you don't like it, that's too bad, but it's mostly your fault, anyway. You make us behave like that, even when we hate it, because you keep getting above your station.

When the most powerful man on Earth is a White Man who rightfully knows the truth about the world and our place at its helm, you act all outraged. We even allowed you to vote the person you wanted and still you chose him. Oh, he lies, you say. He's a sexist racist White Man who twists the truth to further his needs. Have you even met politicians before? They are mostly White and mostly male because, statistically proven, we rule the world best. If you do something, at least do it right!

So you check *your* privilege! You get to complain, to fight for your rights, to live, all the while basking into the glory of the White Man and reaping the fruits of his labor and sacrifice. We carry you into tomorrow like a cross on Golgotha, never complaining, being spit at all the way up, but up we climb and high we reach. If you want to get to where we are, work for it like we do. Enslave some people, cull others, smack some around in the name of God. It's not fun, but it needs doing, for the betterment of humanity as a whole. In the end, you are where you are because you know it's the right place for you, otherwise you would have done something about it. You slack away while we run things for you, it's just the way of the world. Start complaining after a few million years, when you get your turn.

Often we need to attach functions to Javascript events, but we need them to not be executed too often. Mouse move or scroll events can fire several times a second, and executing some heavy computation directly would make everything slow and unresponsive. That's why we use a method called debounce, which takes your desired function and returns another function that will only get executed so often within a time interval.
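
Here is the function I ended up with, give or take the formatting, with the clearing and the setting of the timeout abstracted into two small helpers, c and t:
function debounce(fn, wait) {
  var timeout = null;
  var c = function() { clearTimeout(timeout); timeout = null; };
  var t = function(f) { timeout = setTimeout(f, wait); };
  return function() {
    var context = this;
    var args = arguments;
    var f = function() { fn.apply(context, args); };
    if (timeout) {
      c();
      t(f);
    } else {
      t(c);
      f();
    }
  };
}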



It has reached a certain kind of symmetry, so I like it this way. Let me explain how I got to it.

First of all, there is a similar function that is often used for debouncing. Everyone remembers it and codes it off the top of their head, but it is partly wrong. It looks like this:
function debounce(fn, wait) {
  var timeout = null;
  return function() {
    var context = this;
    var args = arguments;
    var f = function() { fn.apply(context, args); };
    clearTimeout(timeout);
    timeout = setTimeout(f, wait);
  };
}
It seems OK, right? Just extend the time until the function gets executed. Well, the problem is that the first time you call the function you will have to wait before you see any result. If the function is called more often than the wait period, your code will never get executed. That is why Google shows this page as *the* debounce reference: JavaScript Debounce Function. And it works, but good luck trying to understand its flow so completely that you can code it from memory. My problem was with the callNow variable, as well as the rarity of cases when I would need to not call the function immediately the first time, thus making the immediate variable redundant.

So I started writing my own code. And it looked like the "casual" debounce function, with an if block added. If the timeout is already set, then just reset it; that's the expected behavior. When isn't this the expected behavior? When calling it the first time or after a long period of inactivity. In other words when the timeout is not set. And the code looked like this:
function debounce(fn, wait) {
  var timeout = null;
  return function() {
    var context = this;
    var args = arguments;
    var f = function() { fn.apply(context, args); };
    if (timeout) {
      clearTimeout(timeout);
      timeout = setTimeout(f, wait);
    } else {
      // occupy the timeout with a function that just frees it after the wait interval
      timeout = setTimeout(function() {
        clearTimeout(timeout);
        timeout = null;
      }, wait);
      f();
    }
  };
}

The breakthrough came with the idea to use the timeout anyway, but with an empty function, meaning that the first time it is called, the function will execute your code immediately, but also "occupy" the timeout with an empty function. Next time it is called, the timeout is set, so it will be cleared and reset with a timeout using your initial code. If the interval elapses, then the timeout simply gets cleared anyway and next time the call of the function will be immediate. If we abstract the clearing of timeout and the setting of timeout in the functions c and t, respectively, we get the code you saw at the beginning of the post. Note that many people using setTimeout/clearTimeout are in the scenario in which they set the timeout immediately after they clear it. This is not always the case. clearTimeout is a function that just stops a timer, it does not change the value of the timeout variable. That's why, in the cases when you want to just clear the timer, I recommend also setting the timeout variable to null or 0.

For the people wanting to look cool, try this version:
function debounce(fn, wait) {
  var timeout = null;
  var c = function() { clearTimeout(timeout); timeout = null; };
  var t = function(fn) { timeout = setTimeout(fn, wait); };
  return function() {
    var context = this;
    var args = arguments;
    var f = function() { fn.apply(context, args); };
    timeout
      ? c() || t(f)
      : t(c) || f();
  };
}

Now, doesn't this look sharp? The symmetry is now obvious. Based on the timeout, you either clear it immediately and time out the function or you time out the clearing and execute the function immediately.

Update 26 Apr 2017: Here is an ES6 version of the function:
function debounce(fn, wait) {
  let timeout = null;
  const c = () => { clearTimeout(timeout); timeout = null; };
  const t = f => { timeout = setTimeout(f, wait); };
  // the returned function must be a regular function, not an arrow,
  // so that this and the arguments belong to the caller
  return function(...args) {
    const f = () => { fn.apply(this, args); };
    timeout
      ? c() || t(f)
      : t(c) || f();
  };
}

Brandon Sanderson proves again he is a brilliant writer. His Stormlight universe is not only vast and imaginative, but the characters are both compelling and well written.

The Way of Kings has some slow parts, though, and even if I kind of liked that, it is uneven in regard to its characters: some get most of the focus, some just a few chapters. That means that if you identify with the lead characters you will enjoy the book, but if you empathize with the lesser ones you will probably get frustrated.

I particularly enjoyed the climax. It was as it should be: the tension kept rising and Sanderson just wouldn't let it go; he kept pushing it and pushing it, filling in the motivations of the characters, adding burden upon burden, making the choices as difficult and as important as possible before finally allowing his characters the release of making one. Alas, the wonderful ending is followed by epilogues, several of them, which just seem boring in comparison.

Great series, though, I recommend it highly.