People who know me often snicker whenever someone utters "refactoring" nearby. I am a strong proponent of refactoring code and I have feelings almost as strong for the managers who disagree. Usually they have really stupid reasons for it, too. However, speaking with a colleague the other day, I realized that refactoring can be bad as well. So here I will explore this idea.

Why refactor at all?


Refactoring is the process of rewriting code so that it is more readable and maintainable. It does not mean writing code to be readable and maintainable from the beginning, and it does not mean that doing it is an admission that your code was not good when you first wrote it. Usually the scope of refactoring is larger than localized bits of code and takes into account several areas of your software. It also has the purpose of aligning your codebase with the inevitable scope creep of any project. But more than this, its primary use is to make the work easier for the people who will work on the code later on, be it yourself or some colleague.

I was talking with this friend of mine and he explained to me how, especially in the game industry, managers are reluctant to spend resources on cleaning old code before actually starting work on new code, since release dates come fast and technologies change rapidly. I replied that, to me, refactoring is not something to be done before you write code, but after, as a phase of the development process. In fact, there was even a picture showing it on a wheel: planning, implementing, testing and bug fixing, refactoring. I searched for it, but I found so many other different ideas that I decided it would be pointless to show it here. However, most of these images and presentation files specified maintenance as the final step of a software project. For most projects, use and maintenance is the longest phase in the cycle. It makes sense to invest in making it easier for your team.



So how could any of this be bad?


Well, there are types of projects that are fire and forget: they disappear after a while, their codebase abandoned. Their maintenance phase is tiny or nonexistent, and therefore refactoring the code has limited value. But even then refactoring is not wrong, just less useful. I believe that there are situations where refactoring can have an adverse effect, and that is exactly the scenario my friend mentioned: before starting to code. Let me expand on that.

Refactoring is a process of rewriting code, which implies you not only have a codebase you want to rewrite, but also that you know how to do it. Except in very limited cases, where some project is bought by another company with much more experienced developers and you just need to clean up garbage, there is no need to touch code that you are just beginning to understand. Refactoring after you've finished a planned development phase (a Scrum sprint, for example, or a completed feature) is easy, since you understand how the code was written, what the requirements have become, maybe you are lucky enough to have unit tests on the working code, and so on. It's the "now that I have it working, let's clean it up a little" phase. Doing it when you want to add things, on the other hand, is bad, because you barely remember who did what and why. Moreover, you probably want to add features, so changing the old code just to accommodate adding some other code makes little sense. Management will surely not only refuse to approve, but even consider it a hostile request from a stupid techie who only cares about the beauty of code and doesn't understand the commercial realities of the project. Suggest something like this and you risk souring the entire team on the prospect of refactoring code.

Another refactoring antipattern is when someone decides the architecture needs to be more flexible, so flexible that it could do anything, so they rearchitect the whole thing, using software patterns and high level concepts, but ignoring the actual functionality of the existing code and the level of seniority in their team. In fact, I wouldn't even call this refactoring, since it doesn't address problems with the code structure, but rewrites it completely. It's not making sure your building is sturdy and all the water pipes are new, it's demolishing everything, building something else, then bringing the same furniture in. Indeed, as much as I like beautiful code, changing it solely to make it prettier or to make yourself feel smarter is dead wrong. What will probably happen is that people will get confused about the grand scheme of things and, without supervision that is expensive in time and other resources, they will start to cut corners and erode the architecture in order to write simpler code.

There is a system where software is released in "versions". People just write crappy code and pile features one over the other, in the knowledge that if the project is successful, the next version will be well written. However, that rarely happens. Rewriting money-making code is perceived as a loss by the financial managers. Trust me on this: the shitty code you write today will haunt you for the rest of the project's lifetime and even in its afterlife, when other projects are started from cannibalized codebases. That said, I am not a proponent of writing perfect code right from the beginning either, mostly because no one actually knows what the code should really do until they finish writing it.

Refactoring is often associated with Test Driven Development, probably because they are both difficult to sell to management. It would be a mistake to think that refactoring is useful only in that context. Sure, it is a best practice to have unit tests on the piece of code you need to refactor, but let's face it, reality is hard enough as it is.

Last, but not least, is the partial or incomplete refactoring. It starts, and sometime around the middle of the effort new feature requests arrive. The refactoring is "paused", but now part of your code is written one way and the rest another. The perception is that refactoring was not only useless, but even detrimental. The same happens when you decide to do it, then allow yourself to avoid or postpone it, or do it badly enough that it doesn't help at all. Doing it just for the sake of saying you do it is plain bad.

The right time and the right people


I personally believe that refactoring should be done at the end of each development interval, when you are still familiar with the feature and its implementation. Done like this, it doesn't even need special approval, it's just the way things are done, it's the shop culture. It is not what you do after code review - simple code cleaning suggested by people who took five minutes to look it over - it is a team effort to discuss which elements are difficult to maintain or are easy to simplify, reuse or encapsulate. It is not a job for juniors, either. You don't grab the youngest guy on the team and let him rearrange the code of more experienced people, even if that seems to teach the guy a lot. Nor is it something that senior devs are merely allowed to do in their spare time. They might like it, but it is your responsibility to care about the project, not something you expect your team to do when you are too lazy or too cheap. Finally, refactoring is not an excuse to write bad code in the hope you will fix it later.

From the way I am talking about this you probably believe I've worked in many teams where refactoring was second nature and no one would doubt its utility. You would be wrong. Because it is poorly understood, the reaction of non-technical people in a software team to the concept of refactoring usually falls somewhere between condescension and terror. Money people don't understand why you would change something that works, managers can't sell it as a good thing, production and art people don't care. Even worse, most technical people would rather write new stuff than rearrange old stuff, and some might even take offense at attempts to make "their code" better. But they will start to mutter and complain a lot when they get to the maintenance phase or have to write features over old code, maybe even their own, and they will have difficulty understanding why the code is not written in a way that would make their work easy. And when managers go to their dashboards and compare team productivity, they will raise eyebrows at a chart that shows clear signs of slowing down.

Refactoring has a nasty side effect: it threatens jobs. If the code were clean and any change easy to perform, there would be a lot of pressure on the decision makers to justify their jobs. They would have to come up with relevant new ideas all the time. If the effort to maintain code or add new features is small, there will be pressure on developers to justify their jobs as well. Why keep a large team for a project that can easily be handled by a few junior devs who occasionally add something? Refactoring is the bane of the type of worker who does their job confusingly enough that only they can continue to do it, or who pretends to be managing a difficult project while being the one who makes it difficult. So in certain situations, for example in single product companies, refactoring will make people fear they will be made redundant. Yet in others it will accelerate the speed of development for new projects, improve morale and win a shitload of money.

So my parting thoughts are these: sell it right and do it right! Most likely it will have a positive effect on the entire project and team. People will be happier and more productive, which means their bosses will be happier and filthy richer. Do it badly or sell it wrong and you will alienate people and curse shitty code for as long as you work there.


Intro


I was thinking about what to discuss about computers next and I realized that I hardly ever read anything about the flow of an application. I mean, sure, when you learn to code the first thing you hear is that a program is like a set of instructions, not unlike a cooking recipe or instructions from a wife to her hapless husband when she's sending him to the market. You know, stuff like "go to the market and buy 10 eggs. No, make it 20. If they don't have eggs, get salami." Of course, her software developer husband returns with 20 salamis. "They didn't have eggs", he reasons. Yet a program is increasingly not a simple set of instructions neatly following one another.

So I wrote something like a 5 page treatise on program control flow, mentioning Turing and the Benedict Cumberbatch movie, child labor and farm work, asking if it is possible to reduce any program, with its parallelism and entire event-driven complexity, to a Turing machine for better debugging. It was boring, so I removed it all. Instead, I will talk about conditional statements and how to refactor them, when that is needed.

A conditional statement is one of the basic statements of programming: it is a decision that affects what the program will execute next. And we love telling computers what to do, right? Anyway, here are some ways if/then/else or switch statements are used, some bad, some good, and how to fix whatever problems we find.

Team Arrow


First of all, the arrow antipattern. It's when you have if blocks in if blocks until your code looks like an arrow pointing right:
if (data.isValid()) {
  if (data.items&&data.items.length) {
    var item=data.items[0];
    if (item) {
      if (item.isActive()) {
        console.log('Oh, great. An active item. hurray.');
      } else {
        throw "Item not active! Fatal terror!";
      }
    }
  }
}
This can simply be avoided by putting all the code in a method and inverting the if branches, like this:
if (!data.isValid()) return;
if (!data.items||!data.items.length) return;
var item=data.items[0];
if (!item) return;
if (!item.isActive()) {
  throw "Item not active! Fatal terror!";
}
console.log('Oh, great. An active item. hurray.');
See? No more arrow. And the debugging is so much easier.

There is a sister pattern of The Arrow called Speedy. OK, that's a Green Arrow joke, I have no idea what it is actually called, but basically, since a bunch of nested if blocks can be translated into a single if with a lot of conditions, the same code might have looked like this:
if (data.isValid()&&data.items&&data.items.length&&data.items[0]) {
  var item=data.items[0];
  if (!item.isActive()) {
    throw "Item not active! Fatal terror!";
  }
  console.log('Oh, great. An active item. hurray.');
}
While this doesn't look like an arrow, it is in no way better code. In fact it is worse, since the person debugging this will have to manually check each condition to see which one failed when a bug occurred. Just remember that if it doesn't look like an arrow, just its shaft, that's worse. OK, so now I named it: The Shaft antipattern. You first heard it here!

There is also a cousin of these two pesky antipatterns, let's call it Black Shaft! OK, no more naming. Just take a look at this:
if (person&&person.department&&person.department.manager&&person.department.manager.phoneNumber) {
  call(person.department.manager.phoneNumber);
}
I can already hear a purist shouting at their monitor something like "That's because of the irresponsible use of null values in all programming languages!". Well, null is here to stay, so deal with it. The other problem is that you often don't see a better solution to something like this. You have a hierarchy of objects and any of them might be null and you are not in a position where you would cede control to another piece of code based on which object you refer to. I mean, one could refactor things like this:
if (person) {
  person.callDepartmentManager();
}
...
function callDepartmentManager() {
  if (this.department) {
    this.department.callManager();
  }
}
, which would certainly solve things, but adds a lot of extra code. In C# 6 you can do this:
var phoneNumber = person?.department?.manager?.phoneNumber;
if (phoneNumber != null) {
  call(phoneNumber);
}
This is great for .NET developers, but it also shows that rather than convince people to use better code practices, Microsoft decided this is a common enough problem it needed to be addressed through features in the programming language itself.

To be fair, I don't have a generic solution for this. Just be careful to use this only when you actually need it and handle any null values with grace, rather than just ignore them. Perhaps it is not the best place to call the manager in a piece of code that only has a reference to a person. Perhaps a person that doesn't seem to be in any department is a bigger problem than the fact you can't find their manager's phone number.
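For example, a sketch of handling the nulls explicitly instead of silently skipping the call might look like the snippet below; the log calls, the name properties and the decision about what counts as an error are my own assumptions, not a recipe:
// hypothetical: each missing object gets its own, meaningful reaction
if (!person) {
  throw "There is no person to call a manager for!";
}
if (!person.department) {
  log('Person '+person.name+' is in no department; that is the bigger problem.');
} else if (!person.department.manager||!person.department.manager.phoneNumber) {
  log('No manager phone number for department '+person.department.name);
} else {
  call(person.department.manager.phoneNumber);
}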

The Omnipresent Switch


Another smelly code example is the omnipresent switch. You see code like this:
switch(type) {
  case Types.person:
    walk();
    break;
  case Types.car:
    run();
    break;
  case Types.plane:
    fly();
    break;
}
This isn't so bad, unless it appears in a lot of places in your code. If that type variable is checked again and again and again to see which way the program should behave, then you probably can apply the Replace Conditional with Polymorphism refactoring method.



Or, in simple English, group all the code per type, then only decide in one place which of them you want to execute. Polymorphism might work, but also some careful rearranging of your code. If you think of your code like you would a story, then this is the equivalent of the annoying "meanwhile, at the Bat Cave" switch. No, I want to see what happens at the Beaver's Bend, don't fucking jump to another unrelated segment! Just try to mentally filter all switch statements and replace them with a comic book bubble written in violently zigzagging font: "Meanwhile...".
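To make that less abstract, here is a minimal sketch of Replace Conditional with Polymorphism over the same hypothetical walk/run/fly example; the Person/Car/Plane constructors, the move method and the create factory are names I made up for illustration:
// each type carries its own behavior, so the decision is taken once,
// when the object is created, not every time it has to move
function Person() {}
Person.prototype.move=function() { walk(); };

function Car() {}
Car.prototype.move=function() { run(); };

function Plane() {}
Plane.prototype.move=function() { fly(); };

// the only place left that knows about the type variable
function create(type) {
  switch(type) {
    case Types.person: return new Person();
    case Types.car: return new Car();
    case Types.plane: return new Plane();
  }
}

// everywhere else the code just calls the method, no switch needed
var entity=create(type);
entity.move();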

A similar thing is when you have a bool or enum parameter in a method, completely changing the behavior of that method. Maybe you should use two different methods. I mean, stuff like:
function doWork(iFeelLikeIt) {
  if (iFeelLikeIt) {
    work();
  } else {
    fuckIt();
  }
}
happens every day in life, no need to see it in code.
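A less facetious sketch of the same split, with made-up saveUser/save/email names, would turn a flag-controlled method into two methods whose names say exactly what they do:
// hypothetical example: the caller picks a method instead of passing a behavior flag
function saveUser(user) {
  save(user);
}

function saveUserAndNotify(user) {
  saveUser(user);
  email(user);
}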

Optimizing in the wrong place


Let's take a more serious example:
function stats(arr,method) {
  if (!arr||!arr.length) return;
  arr.sort();
  switch (method) {
    case Methods.min:
      return arr[0];
    case Methods.max:
      return arr[arr.length-1];
    case Methods.median:
      if (arr.length%2==0) {
        return (arr[arr.length/2-1]+arr[arr.length/2])/2;
      } else {
        return arr[Math.floor(arr.length/2)];
      }
    case Methods.mode:
      var counts={};
      var max=-1;
      var result=-1;
      arr.forEach(function(v) {
        var count=(counts[v]||0)+1;
        if (count>max) {
          result=v;
          max=count;
        }
        counts[v]=count;
      });
      return result;
    case Methods.average:
      var sum=0;
      arr.forEach(function(v) { sum+=v; });
      return sum/arr.length;
  }
}

OK, it's still a silly example, but relatively less silly. It computes various statistical formulas from an array of values. At first, it seems like a good idea. You sort the array, which helps three out of the five methods, then you write the code for each, greatly simplified by working with a sorted array. Yet for the last two, being sorted does nothing, and both of them loop through the array; sorting the array would definitely loop through it as well. So, let's move the decision earlier:
function min(arr) {
  if (!arr||!arr.length) return;
  return Math.min.apply(null,arr);
}

function max(arr) {
  if (!arr||!arr.length) return;
  return Math.max.apply(null,arr);
}

function median(arr) {
  if (!arr||!arr.length) return;
  arr.sort();
  var half=Math.floor(arr.length/2);
  if (arr.length%2==0) {
    return (arr[half-1]+arr[half])/2;
  } else {
    return arr[half];
  }
}

function mode(arr) {
  if (!arr||!arr.length) return;
  var counts={};
  var max=-1;
  var result=-1;
  arr.forEach(function(v) {
    var count=(counts[v]||0)+1;
    if (count>max) {
      result=v;
      max=count;
    }
    counts[v]=count;
  });
  return result;
}

function average(arr) {
  if (!arr||!arr.length) return;
  return arr.reduce(function (p, c) {
    return p + c;
  }) / arr.length;
}

As you can see, I only use sorting in the median function - and it can be argued that I could do it better without sorting. The names of the functions now reflect their functionality. The min and max functions take advantage of the native min/max functions of Javascript and, other than the check for a valid array, they are one-liners. More than this, it was natural to use various ways to organize my code for each method; it would have felt weird, at least for me, to use forEach and reduce and sort and for loops in the same method, even if each was in its own switch case block. Moreover, now I can find the mode or median of an array of strings, for example, while an average would make no sense, and I can refactor each function as I see fit, without caring about the functionality of the others.

Yet, you smugly point out, each method uses the same code to check the validity of the array. Didn't you preach about DRY a blog post ago? True. One might turn that into a function, so that there is only one point of change. That's fair. I concede the point. However, don't make the mistake of confusing repeating a need with repeating code. In each of the functions there is a need to check the validity of the input data. Repeating the code for it is not only good, it's required. But good catch, reader! I wouldn't have thought about it myself.
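If you did want a single point of change for that check, a tiny guard helper would be enough; this is just a sketch and the isValidArray name is my own invention:
// one place to decide what counts as valid input
function isValidArray(arr) {
  return !!(arr&&arr.length);
}

function average(arr) {
  if (!isValidArray(arr)) return;
  return arr.reduce(function(p,c) { return p+c; })/arr.length;
}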

But, you might argue, the original function was called stats. What if a manager comes and says he wants a function that calculates all statistical values for an array? Then the initial sort might make sense, but the switch doesn't. Instead, this might lead to another antipattern: using a complex function only for a small part of its execution. Something like this:
var stats=getStats(arr);
var middle=(stats.min+stats.max)/2;
In this case, we only need the minimum and maximum of an array in order to get the "middle" value, and the code looks very elegant, yet in the background it computes all five values, a waste of resources. Is this more readable? Yes. And in some cases it is preferable to do it like that, when you don't care about performance. So this is both a pattern and an antipattern, depending on what is more important to your application. It is possible (and even often encountered) to optimize too much.

The X-ifs


A mutant form of the if statement is the ternary operator. My personal preference is to use it whenever a single condition determines one value or another, and to use if/then/else statements for actual code execution. So I like this:
function boolToNumber(b) {
  return b?1:0;
}

function exec(arr) {
  if (arr.length%2==0) {
    split(arr,arr.length/2);
  } else {
    arr.push(newValue());
  }
}
but I don't approve of this:
function exec(arr) {
  arr.length%2
    ? arr.push(newValue())
    : split(arr,arr.length/2);
}

var a;
if (x==1) {
  a=2;
} else {
  a=6;
}

var a=x==1
  ? 2
  : (y==2?5:6);
The idea is that the code needs to be readable, so I prefer to read it like this. It is not a "principle" to write code as above - as I said, it's a personal preference - but do think of the other people trying to make heads or tails of what you wrote.

We are many, you are but one


There is a class of multiple decision flow that is hard to immediately refactor. I've talked about if statements that do the entire work in one of their blocks and of switch statements that can be easily split into methods. However there is the case where you want to do things based on the values of multiple variables, something like this:
if (x==1) {
  if (y==1) {
    console.log('bottom-right');
  } else {
    console.log('top-right');
  }
} else {
  if (y==1) {
    console.log('bottom-left');
  } else {
    console.log('top-left');
  }
}
There are several ways of handling this. One is to, again, try to move the decision on a higher level. Example:
if (x==1) {
  logRight(y);
} else {
  logLeft(y);
}
Of course, this particular case can be fixed through computation, like this:
var h=x==1?'right':'left';
var v=y==1?'bottom':'top';
console.log(v+'-'+h);
Assuming it was not so simple, though, we can choose to reduce the choice to a single decision:
switch(x+','+y) {
  case '0,0': console.log('top-left'); break;
  case '0,1': console.log('bottom-left'); break;
  case '1,0': console.log('top-right'); break;
  case '1,1': console.log('bottom-right'); break;
}



The Lazy Event


Another more complicated issue regarding conditional statements is when they are not actually encoding a decision, but testing for a change. Something like:
if (current!=prev) {
  clearData();
  var data=computeData(current);
  setData(data);
  prev=current;
}
This is a perfectly valid piece of code and in many situations it is what is required. However, one must pay attention to the place where the decision gets taken compared to the place where the value changed. Isn't this more like an event handler, something that should be designed differently, architecture wise? Why keep a previous value and react to the change only when execution happens to reach this piece of code, instead of reacting to the change of the value immediately? Fire an event when the value changes and subscribe to that event with a piece of code that refreshes the data. One giveaway is that in the code above there is no actual use of the prev value other than to compare it and set it.
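As a rough sketch of the event-driven alternative, assuming a hypothetical setCurrent function is the only place where the value ever changes (current is assumed to be declared somewhere in the surrounding code, as in the snippet above):
// a tiny publisher around the value: the reaction lives next to the change,
// not wherever someone happens to compare current with prev
var listeners=[];

function onCurrentChanged(listener) {
  listeners.push(listener);
}

function setCurrent(value) {
  if (value===current) return;
  current=value;
  listeners.forEach(function(listener) { listener(value); });
}

// the refresh logic subscribes once and needs no prev bookkeeping
onCurrentChanged(function(value) {
  clearData();
  setData(computeData(value));
});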

Generalizations


As a general rule, try to take the decisions that are codified by if and switch statements as early as possible. The code must be readable to humans, sometimes to the detriment of performance, if that is not essential to the functionality of your program. Avoid decision statements within other decision statements (arrow ifs, a ternary operator inside a ternary operator, nested switch and if statements). Split large pieces of code into small, easy to understand and properly named methods (essentially creating a lower level than your conditional statement, thus pushing it relatively higher in the code hierarchy).

What's next


I know this is a lower level programming blog post, but not everyone reading my blog is a senior dev - I mean, I hope so, I don't want to sound completely stupid. I am planning some new stuff, related to my new work project, but it might take some time until I understand it myself. Meanwhile, I am running out of ideas for my 100 days of writing about code challenge, so suggestions are welcome. And thank you for reading so far :)


Intro


I want to talk today about principles of software engineering. Just like design patterns, they range from useful to YAA (Yet Another Acronym). Usually, there is some guy or group of people who decide that a set of simple ideas might help software developers write better code. This is great! Unfortunately, they immediately feel the need to assign them to mnemonic acronyms that make you wonder if they didn't miss some principles from their sets because they were bad at anagrams.

Some are very simple and not worth exploring too much. DRY comes from Don't Repeat Yourself, which basically means don't write the same stuff in multiple places, or you will have to keep them synchronized at every change. Simply don't repeat yourself. Don't repeat yourself. See, it's at least annoying. KISS comes from Keep It Simple, Silly - yeah, let's be civil about it - anyway, the last letter is there just so that the acronym is actually a word. The principle states that avoiding unnecessary complexity will make your system more robust. A similar principle is YAGNI (You Aren't Gonna Need It - very New Yorkish sounding), which also frowns upon complexity, in particular the kind you introduce in order to solve a possible future problem that you don't have.

If you really want to fill your head with principles for software engineering, take a look at this huge list: List of software development philosophies.

But what I wanted to talk about was SOLID, which is so cool that not only does it sound like something you might want your software project to be, but it's a meta acronym, each letter coming from another acronym:
  • S - SRP
  • O - OCP
  • L - LSP
  • I - ISP
  • D - DIP

OK, I was just making it look harder than it actually is. Each of the (sub)acronyms stands for a principle (hence the last P in each) and even if they have suspicious-sounding names that hint at how much someone wanted to call their principles SOLID, they are really... err... solid. They refer specifically to object-oriented programming, but I am sure they apply to other types as well. Let's take a look:

Single Responsibility


The idea is that each of your classes (or modules, units of code, functions, etc.) should strive towards a single piece of functionality. Why? Because if you want to change your code you should first have a good reason, and then you should know the single (DRY) point where that reason applies. One responsibility equals one and only one reason to change.

Short example:
function getTeam() {
  var result=[];
  var strongest=null;
  this.fighters.forEach(function(fighter) {
    result.push(fighter.name);
    if (!strongest||strongest.power<fighter.power) {
      strongest=fighter;
    }
  });
  return {
    team:result,
    strongest:strongest
  };
}

This code iterates through the list of fighters and returns a list of their names. It also finds the strongest fighter and returns that as well. Obviously, it does two different things and you might want to change the code to have two functions that each do only one. But, you will say, this is more efficient! You iterate once and you get two things for the price of one! Fair enough, but let's see what the disadvantages are:
  • You need to know the exact format of the return object - that's not a big deal, but wouldn't you expect to have a team object returned by a getTeam function?
  • Sometimes you might want just the list of fighters, so computing the strongest is superfluous. Similarly, you might only want the strongest player.
  • In order to add stuff in the iteration loop, the code has become more complex - at least when reading it - than it has to be.

Here is how the code could - and should - have looked. First we split it into two functions:
function getTeam() {
  var result=[];
  this.fighters.forEach(function(fighter) {
    result.push(fighter.name);
  });
  return result;
}

function getStrongestFighter() {
  var strongest=null;
  this.fighters.forEach(function(fighter) {
    if (!strongest||strongest.power<fighter.power) {
      strongest=fighter;
    }
  });
  return strongest;
}
Then we refactor it to something simple and readable:
function getTeam() {
  return this.fighters
    .map(function(fighter) { return fighter.name; });
}

function getStrongestFighter() {
  return this.fighters
    .reduce(function(strongest,val) {
      return !strongest||strongest.power<val.power?val:strongest;
    });
}

Open/Closed


When you write your code the last thing you want to do is go back to it and change it again and again whenever you implement a new functionality. You want the old code to work, be tested to work, and allow new functionality to be built on top of it. In object oriented programming that simply means you can extend classes, but practically it also means the entire plumbing necessary to make this work seamlessly. Simple example:
function Fighter() {
  this.fight();
}
Fighter.prototype={
  fight : function() {
    console.log('slap!');
  }
};

function ProfessionalFighter() {
  Fighter.apply(this);
}
ProfessionalFighter.prototype={
  fight : function() {
    console.log('punch! kick!');
  }
};

function factory(type) {
  switch(type) {
    case 'f': return new Fighter();
    case 'pf': return new ProfessionalFighter();
  }
}

OK, this is a silly example, since I am lazily emulating inheritance in a prototype based language such as Javascript. But the result is the same: both constructors will .fight by default, but each of them will have different implementations. Note that while I cannibalized bits of Fighter to build ProfessionalFighter, I didn't have to change it at all. I also built a factory method that returns different objects based on the input. Both possible returns will have a .fight method.

The open/closed principle also applies to languages that are not based on inheritance, and even in classic OOP languages. I believe it leads to a natural outcome: preferring composition over inheritance. Write your code so that the various modules seamlessly connect to each other, like Lego blocks, to build your end product.
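As a rough sketch of what composition can look like in the same Javascript style - the boxingStyle and loudMouth building blocks and the createFighter assembler are names I invented for illustration:
// hypothetical building blocks; each knows only its own job
function boxingStyle() {
  return { fight: function() { console.log('punch! kick!'); } };
}

function loudMouth() {
  return { talk: function() { console.log('I am the greatest!'); } };
}

// composition: assemble an object from blocks instead of extending a base class
function createFighter(parts) {
  var fighter={};
  parts.forEach(function(part) {
    Object.keys(part).forEach(function(key) { fighter[key]=part[key]; });
  });
  return fighter;
}

var pro=createFighter([boxingStyle(),loudMouth()]);
pro.fight(); // punch! kick!
pro.talk();  // I am the greatest!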

Liskov substitution principle


No, L is not coming from any word that they could think of, so they dragged poor Liskov into it. Forget the definition, it's really stupid and complicated. Not KISSy at all! Assume you have the base class Fighter and the more specialized subclass ProfessionalFighter. If you had a piece of code that uses a Fighter object, then this principle says you should be able to replace it with a ProfessionalFighter and the piece of code would still be correct. It may not be what you want, but it would work.

But when does a subclass break this principle? One very ugly antipattern that I have seen is when general types know about their subtypes. Code like "if I am a professional fighter, then I do this" breaks almost all SOLID principles and it is stupid. Another case is when the general class declares everything anyone could think of, either abstract or implemented as empty or throwing an error, and then each derived class implements only one or another of the functions. Dude! If your fighter doesn't implement .fight, then it is not a fighter!

I don't want to add code to this, since the principle is simple enough. However, even if it is an idea that primarily applies to OOP it doesn't mean it doesn't have its uses in other types of programming languages. One might say it is not even related to programming languages, but to concepts. It basically says that an orange should behave like an orange if it's blue, otherwise don't call it a blue orange!

Interface segregation principle


This one is simple. It basically is the reverse of the Liskov: split your interfaces - meaning the desired functionality of your code - into pieces that are fully used. If you have a piece of code that uses a ProfessionalFighter, but all it does is use the .fight method, then use a Fighter, instead. Silly example:
public class Fighter {
  public virtual void Fight() {
    Console.WriteLine("slap!");
  }
}

public class EnglishFighter : Fighter {
  public override void Fight() {
    Console.WriteLine("box!");
  }
  public void Talk() {
    Console.WriteLine("Oy!");
  }
}

class Program {
  public static void Main() {
    EnglishFighter f=getMeAFighter();
    f.Fight();
  }
}

I don't even know if it's valid code, but anyway, the idea here is that there is no reason for me to declare and use the variable f as an EnglishFighter, if all it does is fight. Use a Fighter type. And a YAGNI on you if you thought "wait, but what if I want him to talk later on?".

Dependency inversion principle


Oh, this is a nice one! But it doesn't live in a vacuum. It is related to both SRP and OCP as it states that high level modules should not depend on low level modules, only on their abstractions. In other words, use interfaces instead of implementations wherever possible.

I wrote an entire other post about Inversion of Control, the technique that allows you to properly use and enjoy interfaces, while keeping your modules independent, so I am not going to repeat things here. As a hint, simply replacing Fighter f=new Fighter() with IFighter f=new Fighter() is not enough. You must use a piece of code that decides for you what implementation of an interface will be used, something like IFighter f=GetMeA(typeof(IFighter)).
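To make the GetMeA idea concrete, here is a toy resolver sketched in Javascript; the register/resolve names are mine, and any real inversion of control container does a lot more than this:
// a toy service locator: consumers ask for an abstraction by name
// and never instantiate the concrete implementation themselves
var container={
  registrations: {},
  register: function(name, factory) { this.registrations[name]=factory; },
  resolve: function(name) { return this.registrations[name](); }
};

// the composition root is the only place that knows which implementation is used
container.register('fighter', function() { return new ProfessionalFighter(); });

// consumer code depends only on the 'fighter' abstraction
var f=container.resolve('fighter');
f.fight();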

The principle is related to SRP because it says a boss should not be controlling or depending on what particular things the employees will do. Instead, he should just hire some trustworthy people and let them do their job. If a task depends so much on John that you can't ever fire him, you're in trouble. It is also related to OCP because the boss will behave like a boss no matter what employee changes occur. He may hire more people, replace some, fire some others, the boss does not change. Nor does he accept any responsibility, but that's a whole other principle ;)

Conclusions


I've explored some of the more common acronyms from hell related to software development and stuck mostly to SOLID, because... well, you have to admit, calling it SOLID works! Seriously now, following principles such as these (choose your own, based on your own expertise and coding style) will help you a lot later on, when the complexity of your code explodes. In a company there is always that one poor guy who knows everything and can't work well because everybody else keeps asking him why and how something is the way it is. If the code is properly separated along solid principles, you just need to look at it to understand what it does, and the efficiency of your changes stays high. One small change for man, like. Let that unsung hero write their code and reach software Valhalla.

SRP keeps modules separated, OCP keeps code free of the need for modifications, LSP and ISP make you use the most basic classes or interfaces, reinforcing the separation of modules, while DIP helps you sharpen the boundaries between modules. Taken together, the SOLID principles are obviously about focus, allowing you to work on small, manageable parts of the code without needing knowledge of other parts or having to make changes to them.

Keep coding!

A few years ago I wrote a post titled Software Patterns are Useless in which I was drawing attention to the overhyping of the concept of software design patterns. In short, they are either too simple or too complex to bundle together into a concept to be studied, much less used as a lingua franca for coders.

I still feel this way, but in this post I want to explore what I think the differences are between software design patterns and building architecture design patterns. In the beginning, as you no doubt know from the maniacal introduction in design patternese by any of its enthusiasts, it was a building architect who noticed common themes in solving the problems involved in constructing something. This happened around the time I was born, so 40 years ago. Within about ten years, software people had imported the concept and created this collection of general solutions to software problems. It was a great idea, don't get me wrong.

However, things have definitely changed in software since then. While we still meet some of the problems we invented software design patterns for, the problems, or at least their scope, have changed immensely. To paraphrase a well known joke, design patterns would be similar in construction and software if you first had to construct a giant robot that builds your house by itself. But it doesn't happen like this; instead, the robot is actually the company itself, its purpose being to take your house specifications and build the house. So how useful would a "pattern" be for a construction company saying "if you need a house, hire a construction company"?

Look at MVC, a software design pattern that is implemented everywhere in the web world, at different levels of abstraction. You use ASP.Net MVC, for example, to separate the logic from the rendering on the server, but then you use Angular to separate logic from rendering on the client. We are very clever young men, but it's MVCs all the way down! How is that a pattern anymore? "If you want to separate rendering from logic, use MVC" "How do I implement it?" "Oh, just use a framework that was built for you". At the other end of the spectrum there are things that are so simple and so deeply embedded in programming languages that thinking of them as patterns is pointless. Iterator is one of them. Now even Javascript has a .forEach construct, not to mention generators and iterators. Or Abstract Factory - isn't that just a factory for factories? Should I invent a new name for the pattern that creates a factory of factories for factories?

But surely there are patterns that are very useful and appropriate for our work today. Like the factory pattern, or the builder, the singleton or inversion of control and so many others. Of course they are, I am not debating that at all. I am going to present inversion of control soon to a group of developers. I even find it useful to split these patterns into categories like creational, structural, behavioral, concurrency, etc. Classification in general is a good thing. It's the particulars that bother me.

Take the seminal book Design Patterns: Elements of Reusable Object-Oriented Software (Gamma et al. 1994). And by seminal I mean so influential that the Wikipedia URL is Design_Patterns. Not Design_Patterns_(book) or the complete title or anything. If you followed the software design patterns link from higher up you noticed that design patterns are still organized around those 23 original ones from the book written by "the Gang of Four" more than twenty years ago.

I want to tell you that this is wrong, but I've seen it in so many other fields. There is one work that is so ground breaking that it becomes the new ground. Everything else seems to build upon it from then on. You cannot be taken seriously unless you know about it and what it contains. Any personal contribution of yours must needs reference the great work. All the turtles are stacked upon it. And it is dangerous, encouraging stagnation and trapping you in this cage that you can only get out of if you break it.

Does that mean that I am writing my own book that will break ground and save us all? No. The patterns, as described and explained in so many ways around the 'net, are here to stay. However, note the little changes that erode the staleness of a rigid classification, like "fluent interfaces" being used instead of "builder pattern", or the fact that no one in their right mind will try to show off because they know what an iterator is, or the new patterns that emerge not from evaluating multiple cases and extracting a common theme, but from actually inventing a pattern to solve the problems that are yet to come, like MVVM.

I needed to write this post, which very much mirrors the old one from a few years ago, because I believe calling a piece of software that solves a problem a pattern increasingly sounds more like patent. Like biologists walking the Earth in the hope they will find a new species to name for themselves, developers are sometimes overdefining and overusing patterns. Remember it is all flexible, that a pattern is supposed to be a common theme in written code, not a rigid way of writing your code from a point on. If software patterns are supposed to be the basis of a common language between developers, remember that all languages are dynamic, alive, that no one speaks the way the dictionaries and manuals tell you to speak. Let your mind decide how to write and next time someone scoffs at you asking "do you know what the X pattern is?" you respond with "let me google that for you". Let patterns be guides for the next thing to come, not anchors to hold you back.

Watching yesterday's protests, I was surprised that nobody draws the connection between the clashes with the riot police and the Revolution of 1989, the only other time I remember anything like this happening in Romania. Yesterday people talked about Colectiv, about how shameless the PSD people are, about "down with Dragnea". Was Colectiv so long ago that we no longer remember what it was like? People were in the street chanting against the political class in general. The ultras and the gendarmes didn't get involved then, and everybody was fed up with any form of politics. Now, though, the fight is polarized, down with one, up with the other, and we have fallen back into that rotten cycle we couldn't get out of for a few decades after '90: a permanent vote against, sickly placing our hopes in the other direction, as if a few swaps between parties ever solved anything. At the Revolution we brought down a system, and now, I believe, anything short of that is a gigantic failure.

That is why you won't see me in the squares chanting against one person or another. They are all the same. The only solution is political castration: nobody should be able to pass laws without debate, anyone should be able to introduce a law or a veto on a law with a certain number of signatures, we should eliminate, through the Constitution, the possibility of a president keeping parliament blocked, or of a parliament toying with legislation to the tune of some party, or of the DNA taking sides, and so on. Let's hold people accountable for their words, their promises and their deeds. Not with laws and prisons, but publicly, collectively. Somehow we have forgotten that political elections are just an abstraction of the popular will, which can change at any moment.

What do you do with someone who has betrayed your trust? You stop giving it to them. If you give them the keys to your house and they rob you, you take the keys back! Maybe you beat them up too, but you clearly don't let them into your house anymore. The solution is not to immediately hand the keys to someone else either, but to keep them yourself, period.

I repeat: the protests after the Colectiv fire were an explosion of indignation against the entire political class; the Revolution of 1989 and what immediately followed, the only period I remember gendarmes with water cannons and tear gas being used against demonstrators in Romania, was an explosion of indignation against the entire political system. I piss on the protesters in Piata Victoriei if all they want is to bring down Dragnea on their way home from work, if that is the whole of their ambition.


I am hearing more and more this expression that leaves me baffled: "Check your privilege!". It is directed at us, White men, by women, colored folk and gays. It is intended to make us aware of our superior position in order for us to feel guilty over it. Really? That's what you got?

First of all, shit doesn't just happen: it takes time and effort! Do you think that God bestowed our supremacy onto us or something? No! If you believe that you have bought into all the stories we've fed you. We worked hard to get where we are! What have you done? Black people have been the majority of people on Earth for millions of years, but when does humanity grow exponentially together with the living standards? That's right! When the White Man takes control. Women have been ruling the Stone Age for millennia. Where did that get us? Nowhere, that's where! Stone fashion didn't do well, did it? And now you have the gall to ask us to check our privilege? Now the shoe is on the other foot and you are sour about it. Deal with it! Facts, bitches! Not alternative ones, either. The White Man management has brought this enterprise to new heights. Everything you have now is the direct consequence of our leadership. You are beneath us because we put you there!

Every time you people complain you call yourselves "minorities". No, you're not! Women are more numerous than men and White people are fewer than blacks and browns and yellows and whatever else there is on this planet. You know who's a minority? White Men! And still the privilege is yours, since we clearly allow you to exist and complain. The only group of people that have consistently been persecuted and have been a lot fewer than other folk are the homos. They are the only ones who have the right to whine. That's why everybody says whining is gay. It's true!

Yet when we complain, we are derided! You really think it is easy to keep the majority of the people on Earth under the boot? You think hitting women or slaves is fun? It fucking stings! You have to take all empathy and push it way, way down, swallow your tears and do the right thing for everybody, because in the end we have led humankind into its Golden Age, all through our sacrifice. And if you don't like it, that's too bad, but it's mostly your fault, anyway. You make us behave like that, even when we hate it, because you keep getting above your station.

When the most powerful man on Earth is a White Man who rightfully knows the truth about the world and our place at its helm, you act all outraged. We even allowed you to vote for the person you wanted and still you chose him. Oh, he lies, you say. He's a sexist racist White Man who twists the truth to further his needs. Have you even met politicians before? They are mostly White and mostly male because, as statistically proven, we rule the world best. If you do something, at least do it right!

So you check *your* privilege! You get to complain, to fight for your rights, to live, all the while basking in the glory of the White Man and reaping the fruits of his labor and sacrifice. We carry you into tomorrow like a cross up Golgotha, never complaining, being spat at all the way up, but up we climb and high we reach. If you want to get to where we are, work for it like we do. Enslave some people, cull others, smack some around in the name of God. It's not fun, but it needs doing, for the betterment of humanity as a whole. In the end, you are where you are because you know it's the right place for you, otherwise you would have done something about it. You slack away while we run things for you, it's just the way of the world. Start complaining after a few million years, when you get your turn.

Last year I wrote a blog post detailing my experience with social media after four months. This is the followup, after I've had a whole year to take advantage of these tools.

Social Media - what is it?


To me social media means the big two: Facebook and Twitter. I still have no idea what Instagram is and I don't really consider LinkedIn social media. And Google+ is not worth mentioning. I know there are a lot of other social sites, but I completely ignored photo and video platforms - since I rarely express myself visually - and I've tried some technical platforms like StackOverflow, HackerRank or GitHub, but again I didn't consider that "social media". Probably I should have, since I love software development and having people to share this with would truly be a social experience for me, but I started this experiment with a focus on general social interaction. Also... Slack... what the hell is that?

What I used social for


In the previous post I said that I am using Blogger to express myself, as I have done for more than a decade, and use some tool to automatically share this on Facebook and Twitter. Not surprisingly, very few people were engaged by this method of communication. It works better than RSS feeds, that's for sure, but most of the time people on social media (including myself) want to shut off their brain and read something light, not my crazy ramblings or technical posts. I've created a Facebook page for my blog, so people can use that as an entry point if they want, but all the posts there are shares from Blogger.

I was saying that I was pleasantly surprised by Twitter and the quality of content there and less enthusiastic about Facebook. However, Twitter changed some things lately, mostly allowing videos, images and smileys (what you young folks call emoticons - or is it emoji?) to take less space. The effect is that there are a lot more visual opinions (let's call them that) on Twitter, which is thus becoming harder and harder to read. Also, the amount of posting there is overwhelming. I tried to look for some tools to limit the number of tweets or organize them somehow, but unfortunately I found none that did what I envisioned. The result is that occasionally - every week or so - I scroll through Twitter until I get tired and put links in my to-read list, but most of the time I only cover a day or two of content.

I found that whenever I see something that I believe is worth sharing I put it on Facebook rather than Twitter, mostly because I have few friends on Twitter and there is that 140 character limitation. However, most of the time I just post the link anyway and maybe say a few words. I wonder if that short-circuits the process of thinking about the subject and then writing my opinion on it, as I am doing with this blog, but most likely I would rather not share at all than write so much every time, so I don't know if the occasional Facebook posts are taking away Blogger posts. I will actively try to not make it the case. I noticed, though, that the people who like my tweets are often not in my friends list, so I guess the audience there is more general. That being said, all my Facebook posts are fully public anyway.

Speaking of Facebook, when I compile my weekly reading list I also scroll down through the Facebook wall, but even with my extension that filters posts towards original content and away from images, videos and likes, I still get bored rather quickly. Sometimes the jokes are funny or the pictures interesting, but I am not really a Facebook reader. Lately I have been unfollowing people in order to keep a modicum of content curation on my wall. I was really disappointed by the events system as well. A lot of people just mark all the events they could possibly go to with 'Interested' and then never go. The events that appear on Facebook feel like complete bullshit most of the time anyway. The ones I would have liked to attend either don't appear at all or are so niche that I never hear about them until it is too late.

One thing that I thought I would use Facebook for was the messenger app. And I do use it, but very rarely. In the Yahoo Messenger days I would chat a lot with people. Somehow all that became frivolous, not only for me, but other people as well. Now I see young people just getting a lot of notifications and ignoring them. So what's the point, anyway?

And speaking of... God, notifications are annoying. Everything wants to notify you of the very important thing that happened on it. It does so by blinking, beeping, animating or any other histrionic method of getting your attention. They do it incessantly until I stop caring. Notify away, I will ignore you.

What I will be using social for


I don't foresee any change in the future other than maybe using less social media altogether. I am half convinced that I should try to develop meaningful human relationships, at least as another experiment. Clearly social media does NOT connect people on a personal level at all. I've heard about these young kids who share everything they do on social media. Maybe they do, but I am not following them. To me that's another network altogether. The occasional curiosity to see what is "trending" or "popular" disgusts me every single time. The things my friends share are not truly representative of them. And if I make the first step and post some weird feeling or situation I am in, I mostly get no reaction. People avoid negative emotions unless they are manic: hate, anger and disgust take center stage, while depressive thoughts, sadness or desperation are avoided. Same for positive emotions, by the way: people are extra happy about having a child or something like that, but just mildly enjoying something and feeling good about oneself is generally ignored.

Conclusion


I am not going to commit social media suicide or anything, but I concluded that I want to know what people think, rather than what people feel, and social media is used more for the latter. Therefore my commitment to online electronic expression is not going to increase towards Facebook and Twitter. As always, I do hope I will blog more meaningful posts. Wish me luck!

This is how 2016 ends, not with a bang but a whimper. After a relatively calm Christmas followed an eventless New Year celebration. Part of me was lamenting the oldness of it all: two people alone on New Year's Eve, drinking gin tonic and campari cola and sitting at their respective electronic devices. The other part of me was happy that no one bothers me, that I don't have to pretend to enjoy loud noises and strong lights and fake emotions. Was it a nice end of year or just another win for my comfort zone?

It was a strangely quiet year for me as well. For most of it I was on a sabbatical during which I did nothing of note, and the last part was about getting hired and getting acquainted with my new place of work, which was fine. No real drama, no dead relatives I cared about, no fuss. I wrote my code, I watched my TV series, I read my tech news. It was more than quiet, it was boring.

Meanwhile, though, everybody else was going crazy: beloved celebrities died, elections went to hell just about everywhere, terror scares, immigration issues, civil wars, cyber wars, cold wars, global warming and so on and so on. Weird contrast, isn't it? Part of me laments I am growing apart from the world and people, the other part enjoys the hell out of it. Is that what growing old is? Just getting off the train and raising a middle finger?

So what about my New Year resolutions? I have none. I have some hopes, but resolutions? Nah! I am too comfortable ignoring everything that matters. If I go down that road, defining priorities, finding solutions to get what I want, getting rid of what I don't want, setting up goals, then I have to change my entire life. I have to start over. I have to make an effort, hurt people, push the drama button. Who needs that? I do have the heart of a child. I keep it in the freezer, never to be thawed.

Look at that title! If that doesn't trend on social media, I don't know what will.

Nothing spreads memes faster than an American election. The term post-truth was named the word of 2016 by Oxford Dictionaries, which is funny, considering the elections went on at the end of the year and that Oxford is in England, but it only emphasizes the impact it had. However, like the cake, it is a lie. The truth of the matter is that Americans, like all masses of people, can't handle the truth, so they invent a more comfortable reality in which to dwell.

When Trump became president, people needed somebody to blame, someone other than themselves, of course. After all, only good things happen because of you and God, because you're awesome! And God, too, I'm sure. *The people* would never ever elect a narcissistic entertainer as their supreme leader, clearly. So they blamed social media. Now, I am not a fan of social media, but considering I've been blogging for more than a decade, I am not against it either. I've only recently and reluctantly joined Facebook, mainly for the messenger app, but when I saw the entire Internet rally against poor Zuckerberg (well, he is anything but poor, but you get the gist) I smirked, all superior and shit, because the idea was ludicrous. Not the idea that Facebook had a major impact on people's behavior, that one is totally true, but that this is a recent event, brought on by technology run amok and without checks.

Mass media is one of the pillars of American democracy; it has always swayed people one way or the other. The balance doesn't come because the media is true to facts but because, like any other form of power, it is wielded by both sides equally. Facebook and the whole of the Internet are just a distillation of that, and when you distill shit you get... 3-methylindole - perhaps a bad metaphor. What I mean is that media has always been crap, it has always had an agenda and it has always been under the control of people with power. Gutenberg himself was, after all, a goldsmith with political connections trying to satisfy investors. The Internet just gives you more granularity, more people to contribute to drowning facts in a sea of personal opinion.

We are not in the age of post-truth; we are getting closer and closer to the actual truth which, as always, is not pretty, is not nice, is not politically correct. Instead it is painful, humbling and devastating. The truth, dear *the people*, is that this form of democracy is the worst political system you can stomach while still functioning economically, that mass media is not a pillar of anything, just another form of deadly power, and that when this political system you wallow in turns its wheels you get people like Trump and Hillary Clinton representing you. And it's fine, because no one who is actually like you will ever reach a position of true power in any system. Normal people do not crave power, but comfort and security. Not even happiness.

So get over it, because after post-truth comes truth: you will forget about all the outrage in the customary six months and then everything will go #BackToNormal.

and has 0 comments
No, this is not about my vacation with my wife in Bucovina, although that one is funny too, but about Bucovina bottled water and the disappearance of the 5-liter jugs from stores.

First of all, let's do a search on the web. Total disappointment. Unless something is in English, or maybe in another language of wide international circulation, Google fails miserably. All the results are either the big stores selling water online or about something else entirely. Step two: social media. A little more success, but not much. People lament the quality of still water, quoting various alarmist articles with words like "teeming" and "filth" in the title. Even so, Bucovina ranks first for purity, even in those articles. You can also find articles about how Apa Bucovina was bought by some Poles.

Of course, we can go directly to the company's Facebook page, where several people have complained about the missing jugs. Unfortunately, every question gets the same boilerplate reply:

More recently they have added a few more details:

Really? So first someone from another country or planet ordered so much Bucovina water that there was none left to reach Bucharest, then the problem got solved, they were just thirstier and have now calmed down, so we can buy water again starting mid-December. But it's the 17th and still nothing. Time to bring out the big guns: rumors!

At a store near me, someone with "sources" told me they are shutting down the 5L jug line. Another "insider" tells me they are actually just replacing the machinery. Someone "in the know" says that this is how "it's done": first you pull the product off the market so everyone can see how much it is wanted, then you introduce a replacement, usually cheaper to produce, more expensive to buy and of lower quality.

Now what should I believe? If it were just a matter of a hardware upgrade, why not say so publicly? If they are replacing the product with something else or, worse, discontinuing it altogether, why promise that it will be back in stores in the same formula?

One hypothesis is that they had a surplus of 2L bottles that people were not buying fast enough. Another would be that something happened to the water, an infestation of some kind, a contamination with something toxic or disgusting, and now you can only buy the 2L bottles that are left while they quietly try to fix the problem. Is that it, @apabucovina? Are you letting this rumor spread until no one buys your water anymore?

Clearly, several factors come together here to make our lives miserable (pardon the pun). First, the producer's lack of transparency. Was it so hard to be honest and direct? Then the lack of professionalism of Romanian journalists, so preoccupied with which behind sits on which chair and which mouth vomits what in politics that they forget about things like this. Finally, the America-centrism of the Internet, which only brings us news about what movies are coming to the big screen.

Well then, how could the country not fill up with nationalists who drink tap water and care about nothing but their own?

and has 0 comments
The machines are here. They look like us, they walk like us, they speak like us, but they are not like us. Open your eyes and carefully look around, search for the suspect, for the out of place. Their actions give them away.

They walk to their destination if it is more efficient than driving. They always take the same route to get there, too, once they have found out which one is best. They are courteous for no reason, never get angry - unless it serves their nefarious purpose - and they don't swear or do meaningless things. You will never see a machine throwing garbage on the ground. They are obsessively clean and never smell of anything. Watch out for people helping others with no apparent goal. Are they real people? In the office they will say hello to you even on early mornings, then get to work almost immediately. Whenever you interrupt them from their tasks they will gladly stop whatever they are doing and listen to your problems. Be wary of people who never complain, a clear indicator of their origin.

Don't be fooled by their deception. You will see machines at restaurants, eating and drinking, even going to the toilet. They are only maintaining appearances. See how they will not shout at the waiter even when served poorly; watch them take the blame for not looking at the price on the menu before ordering or for spilling a drink. By the way, that's also an act. Their superior agility would never allow them to do anything by accident. They are clever, but don't let them outsmart you. Some will appear to enjoy art, like paintings or sculptures or classical music, but most will have adapted and fake enjoying normal things like movies. They will be the ones you will not notice in the cinema hall. They will not use their mobile devices, they will not talk over the movie, they will rarely eat anything from the entrance shop, but when they do, they do it quietly. It's always the quiet ones.

Couples, even accompanied by children or pets, may be machines. Their children will be uncharacteristically mild mannered and well behaved. Their dogs will not bark or try to bite in anger, and their poop will be collected and thrown in the garbage rather than left behind. They all can be machines. The way they blend in our society is so complete and subtle that you will see people living together with machines and not know it. But you can still recognize them by the way they considerately care for the other person, even after long years of companionship. They don't seem to grasp the concept of getting fed up with another living being.

Their greatest trick, though, is behaving as if they have our well-being at heart. They do not. Slowly, subtly, underhandedly, they change the world and take away our humanity, turning us into soulless beings like them. Wake up! Do not be fooled! Rise up and destroy them all before it is too late!

and has 1 comment
I am often left dumbfounded by the motivations other people are assigning to my actions. Most of the time it is caused by their self-centeredness, their assumption that whatever I do is somehow related more to them than to me. And it made me think: am I a good/bad person, or is it all a matter of perception from others?

I rarely feel like I do something out of the ordinary for other people; instead I do it because that's who I am. I help a colleague because I like to help or I refuse to do so because I feel that what I am doing is more important. Same with friends or romantic relationships. Sometimes I need to make an effort to do something, but it's still my choice, my assessment of the situation and my decision to go a certain way. It's not a value judgment on the person, it's not an asshole move or some out of my way effort to improve their life. What I do IS me.

It's also a weird direction of reasoning, since I am aware of the physical impossibility of "free will" and I subscribe to the school of thought that it is all an illusion. I mean, logic dictates that either the world works top-down, with some central power of will trickling down into reality, or it is merely a manifestation of low-level forces and laws of physics that lead inexorably towards the reality we perceive. In other words, if you believe in free will, you have to believe in some sort of god, and I don't. Yet living my life as if I have no free will makes no sense either. I need to play the game if I am to play the game. It's kind of circular.

Getting back to my original question: isn't good or bad just a label I (and other people) assign to a pattern of behavior that belongs to me? And not before I do things, but always afterwards. Just like the illusion of free will, there is the illusion of a moral quality that guides my path. While one cannot quantify free will, they can measure the effect my behavior has on their life and goals and determine a value. But then is my "goodness" something like an average? Because then the number of people I affect would matter more than the absolute value of the effect per person. Who cares if I help a colleague or pay attention to my wife? In the big sea of people, I am just a small fish that affects a few other small fish. We could all die tomorrow in the belly of a whale, all that goodness pointless.

So here I am, asking essentially a "who am I" question - painfully aware it has no final answer - in a world I think is determined by tiny laws of physics that create the illusion of self and with a quantity of consequence that is irrelevant even if it weren't so. I am torturing myself for no good reason, ain't I?

Yet the essence of the question still intrigues me. Is it necessary that I feel a good drive for my actions to be a good person, or is it a posterior calculation of their effect that determines that? If I work really well and fast for a month and then I do less the next, is it that I did good work in the first month or that I am a lazy bastard in the second? If I pay attention to someone or make a nice gesture, is it something to be lauded, or something to be criticized when I don't do it all the time? Is this a statistical problem or an issue of causality?

And I have to ask this question because if I feel no particular drive to do something and just "am myself", I don't think people should assign all kinds of stupid motivations to my actions. And if I need to make a sustained effort to go outside my routine just to gain moral value... well, it just feels like a lot of bother. And I have to ask it because the same reasoning can be applied to other people. Is my father making terrible efforts to take care of just about everybody in his life, making him some sort of saint, or is it just what he does and he can't help himself, in which case he's just a regular dude?

Personally I feel that I am just an amalgamation of experiences that led to the way I behave. I am neither good nor evil and my actions define me more than my intentions. While there is some sort of consistency that can be statistically assessed, it is highly dependent on the environment and any inference would go down the drain the moment that environment changes. But then, how can I be a good person? And does it even matter?

and has 0 comments
Copyright is something that sounds, err... right. It's about keeping the profits from the sale of a work with its author. If I write a book, I expect to get the money from it, not somebody who would print it and sell it in my name. With the digital revolution, copying costs collapsed to zero: anyone can copy anything and spread it globally for free. So what does that mean for me, the author? Well, it sucks. The logical thing for me is to fight for my rights, turn to the law which is supposed to protect me, find technical ways of making copying my work harder, preferably impossible, and go after the people who are chipping away at my hard-earned profits.

However, that only makes sense from the author's point of view, specific to their work, outside of any context. Now, imagine a system that allows me to anonymously share anything with anyone, while they can also get it with no chance of being identified or even detected. I, the author, can see that my work is being circulated, but I can't stop it or determine who is spreading it. My only recourse is to try to dismantle this devilish system that destroys my living, with the full support of the law and of various companies that want their share of the profits. And here comes the context: in order for me (and the entire media industry) to be able to protect my way of life, I need to actively fight against any method that allows anonymous and/or secret sharing of information. In fact, I would be fighting for a global system of surveillance and censorship.

As you have seen, I tried to present the situation from the standpoint of the author. I feel for the poor guy (less so for the snakes that get most of the money by controlling distribution and marketing); however, it is clear that I can't be on his side. The hypothetical system that I have been describing already exists in various forms, and media companies are making efforts to thwart attempts to make it more user-friendly and more widespread. What we see today is the logical continuation of the situation. Forget intelligence agencies looking for terrorist threats. Forget tyrannical states trying to find dissidents. Forget the political correctness police trying to expose and shame people for their beliefs. The real money is in stopping people from distributing knowledge for free, and it has direct (and dire) implications for our ability to speak freely.

Note: I used the expression "speak freely" rather than free speech, because free speech is not actually free at all. Read about it and you will see that is a completely different thing than the ability and permission to express yourself without repercussions.

Of course, I am not the first one to have thought of this. Around the world it has often become more effective to use copyright to censor things you don't agree with. Just google it and see. Every time you hear something about economic accords, freedom of information or net neutrality, they are all talking about the same thing, only from different angles. The war is active, in full swing, with all of us possible casualties, yet few people are even aware of it. Oh, I am not saying that you can do anything about it... or can you?... but at least you should know about it.

and has 0 comments
My mind has been wandering around the concept of gamification for a few weeks now. In short, it's the idea of turning a task into a game to increase motivation towards completing it. And while there is no doubt that it works - just check all the stupid games that people play obsessively in order to gain some useless points - it was a day in the park that made it clear how wrong the term is in connecting point systems to games.



I was walking with some friends and I saw two kids playing. One was kicking a ball and the other was riding a bicycle. The purpose of their ad-hoc game was for the kid with the ball to hit the kid on the bike. They were going at it again and again, squealing with joy, and it hit me: they had invented a game. And while all games have a goal, not all of them require points. Moreover, the important thing is not the points themselves, but who controls the point system. That was my epiphany: they had invented their own game, with its own goal and point system, but they were the ones controlling the way points were awarded and, ultimately, how much they mattered. The purpose of the game was actually to challenge the players, to gently explore their limitations and try to push the boundaries a little bit further out. It wasn't about losing or winning, it was about learning and becoming.

In fact, another word for point system is currency. We all know how money relates to motivation and happiness, so how come we got conned into believing that turning something into a game means showing flashy animations filled with positive emotion that award you arbitrary sums of arbitrary types of points? I've tried some of these things myself. It feels great at times to go up the (arbitrary) ranks or levels or whatever, while golden chests and diamonds and untold riches are given to me. But soon enough the feeling of emptiness overcomes the fictional rewards. I am not challenging myself, instead I am doing something repetitive and boring. That and the fact that most of these games are traps to make you spend actual cash or more valuable currency to buy theirs. You see, the game of the developers is getting more money. And they call it working, not playing.

I was reading this book today and in it a character says that money is the greatest con: it is only good for making more money. Anything that can be bought can be stolen. And it made a lot of sense to me. When you play gamified platforms like the ones I am describing, the goal of the game quickly changes in your mind. You start to ignore pointless (pardon the pun) details, like storyline, character development, dialogue, the rush of becoming better at something, the skills one acquires, even the fun of playing. Instead you start chasing stars, credits, points, jewels, levels, etc. You can then transfer those points, maybe convert them into something, or get more by converting money into them. How about going around and tricking other players into giving you their points? And suddenly, you are playing a different game, the one called work. Nobody can steal from you the skills you acquire, but they can always steal your title or your badge or your trophy or the money you made.

What I am saying is that games have a goal that defines them. Turning that goal into a metric irrevocably perverts the game. Even sports like football, which start off as a way of proving your team is better than the other and incidentally improving your physical fitness, turn into ugly, deformed versions of themselves, where the bottom line is getting money from distribution rights and where goals can be bought or stolen by influencing a referee, for example.

I remember this funny story about a porn game that had a very educational goal: make girls reach orgasm. In order to translate this into a computer program, the developers had several measurements of pleasure - indicated in the right side of the screen as colored bars - which all had to go over a threshold in order to make the woman cum. What do you think happened? Players ignored the moaning image of a naked female and instead focused solely on the bars. Focus on the metric and you ignore the actual goal.

To summarize: a game requires one to define their limits, acknowledge them, then try to break them. While measuring is an important part of defining limits, the point is in breaking them, not in acquiring tokens that somehow prove it to other people. If you want to "gamify" work, then the answer is to do your tasks better and harder and to do it for yourself, because you like who you become. When you do it in order to make more money, that's work, and to win that game you only need to trick your employers or your customers into believing you are doing what is required. And it's only play when you enjoy who you are while doing it.

As an aside, I know people that are treating making money like a game. For them making more and more money is a good thing, it challenges them, it makes them feel good about themselves. They can be OK people that sometimes just screw you over if they feel the goal of their game is achieved better by it. These people never gamified work, they were playing a game from the very start. They love doing it.

Stay true to the goal! That is the game.

and has 0 comments
Democracy, like any other system of government or political system, is designed to keep the powerful in the driver's seat. It's not about the good of the people, unless they hold power. It was invented and implemented at times when having a large group of people supporting you meant something, gave you the ability to do things. Any other system: theocracy, tyranny, feudalism, communism, fascism... they all do the same thing and fail when the group they support fails to maintain power. Democracy is not about the little people, it's about how fast a system can adapt to changes in the structures of power. It is just the "agile" version of the same old thing. By declaring its ruling group "the people", democracy can survive any political change. The system outlives groups and groups of "people".

Consider the present, where we are split more than ever into voters and the indifferent. Does democracy help the young, disillusioned people who more and more refuse to vote for anything? No. That's not a bug in the democratic system, nor is it a fault or a consequence of not voting; instead it is the realization of a truth: that young people are a minority, segregated from the old by technology, by new ways of communicating and networking and, ultimately, by completely different goals. In this picture, the young are the indifferent, but there is a reason for that: they are not the powerful - democracy doesn't work for them either way.

Imagine you are part of a group of three people, each with equal voting rights. Every time you try to influence a decision, the other two vote differently. Will you continue to play the voting game or will you recognize it for what it is: a waste of time? Now imagine you are the king of the three. How long will you maintain your position when the other two can overpower you? Democracy helps the stability of the three-body system, but it does nothing for you. Suddenly, one of the three starts to like you (or dislike the other) and he starts voting with you. Now you are part of the powerful and your decision matters. Conflict is again averted and stability preserved because, as soon as the balance of power changed, the ruling party changed accordingly. It would be stupid for the lone weak man to try to fight two, so he grumbles and complains, but does nothing.

In this day and age, there are so many different types of power: military, media, technical, administrative, economic, etc. Democracy doesn't help the many anymore, because they have lost their power. Having a lot of idiots in your group doesn't do much. Smart people start to realize it and also to understand that the game is rigged. Some try desperately to shift the perspective of the dumb, amorphous masses, but if you could do that, you would already be in power, because you would already have it. Mathematical algorithms are being created to diagnose the health of a network, to determine the influential nodes, to determine the best outcome when the simple "wisdom of the crowds" fails miserably. They could work online, locally, maybe, but in real life? Never.

The Internet itself started as the true hope of mankind to evolve and change. A place where people meet on equal footing, have access to information, can connect and discuss and decide together. What happened? Same thing. A few companies and a few people hold the power to sway the vast hordes of connected people. When that doesn't work, legal pressure is applied from "without", from the real world. People clump together by interest and information source; essentially they become one voice, but one blinded, dumb and ignorant, yet convinced of its own truth. It is a sad moment when we realize that the monopoly of Google, Facebook, Microsoft and the like is better than the complete anarchy of a disorganized web. Do you even remember life before Google? When you would use different search engines to find something? When 99.99% of everything was spam or malware?

But all of this just underlines a very ironic fact: democracy is, in itself, redundant, even circular. It legitimizes itself, it graciously grants power to the ones that already have it. It's a game that covers a truth as old as the world itself. Its only purpose is to minimize unrest by permanently governing the weak. The only true enemy of democracy is not communism or terrorism, it's asymmetric warfare: the rapid accumulation of power in small groups. When a single hacker can challenge the status quo, when a "lone wolf terrorist" can cause much more damage than we can cause them, when software allows you complete anonymity to gather, think and plan something the powerful cannot supervise, that's when democracy fails. That's when the powerful become unsure of their power.

And we know what happens when power is fearful: abuse of power. Beware of power, because it makes you an enemy of the state. Not because of shadow cabals or government conspiracies, but as a direct logical consequence. And when I say "state" I mean the whole state (of affairs). It means your neighbors, your friends, your city and, only at the end, some sort of authority which, ironically, acts as the perfect agent of a democratic government. It is actually not at all different from the original meaning of the term, coined in ancient Greece, where every citizen could vote, but not all people were citizens. How many sci-fi movies show superpowered individuals being chased by government agencies? Why is that? Because governments are evil? No, it's because unchecked power is illegal. You are allowed power; you never generate it yourself unless you rule.

I predict a moment, not far from now, when democracy as we know it completely fails. Not because it is a bad system, but because we become too fast. Power would shift faster than the system could adapt. We already see glimpses of this in the fashionable "Facebook revolutions", where political outcomes thought to be known change overnight because of one rapidly spreading trend. The system will desperately try to adapt and it will succeed, but in doing so it will become something akin to the stock market. It will be automatic, it will split society into algorithmically equal sides, holding power for inconsequential amounts of time, having no meaning. Radical movements will gain ground, not because they inspire something, but because they are a little brighter than the bland hemispheres of the political system. I believe that at this point the entire political system will crash, like markets crash when they get to this point. Corruption will be the only thing keeping politics together, so when the system crashes, it all spews forth like black, tainted blood. The new revolutionaries will be vindicated by this and gain a flimsy amount of power that will ultimately fail. Then, as with the Internet, maybe the corporate system will gain ground. Or maybe political capital will move just like the economic kind, being sold and bought transparently. Who knows?

What I am certain of, though, is that politics - as it is now - is doomed to fail. Not because we become smarter, but because we become too unpredictable.