It is said that Einstein's great theory of relativity doesn't apply to things moving slowly. Today I realized that is not true. There is a direct relationship between space and time, and speed affects space, so it must affect time. Here is a practical example: a car moves faster than a person walking, so its speed makes distance shrink relative to time. Inversely, that means it makes time expand, become more expensive, from the car's point of view.

That is why, when you see a car approaching and you have the option of walking in front of it forcing it to stop, you wait, because the driver's time is more expensive than yours. Stopping the car and wasting time would impact him much more than it would you. It also has the side effect that it saves your life if the car doesn't stop for some reason.

Just a thought.

The Romanian language has a word: deștept. It means smart, but it leans into knowledgeable, so it means both "knowing things" and "thinking fast". There is no relation to wisdom, and this is the case in other languages as well. Sometimes wise is used to denote knowledgeable, yet I don't think they are related. While to know things means to be able to recall things you have learned, wisdom, I've come to realize, means to understand what little you know. Someone might be wise and know very little and think rather slowly. Wisdom is the maturation of the soul; like a well-kept wine, it provides subtle flavors.

Even a superficial and forgetful person like myself can gain wisdom in time. It is important to note this, because as people get older, stuck between the limit of their usefulness and the onset of senility, we tend to dismiss them, flaunt our newfound (and invented) knowledge in their faces, ignoring a very important aspect of their evolution: wisdom. Sure, their wisdom might not apply to your field or need, but even if it did, are you acknowledging it?

Just a thought.

Siderite's Razor: "The simplest solution/explanation is often somebody whining"

We are changing the furniture and repainting the walls in the apartment, so naturally, the first order of business is to dig into closets, drawers, bags, boxes and various regions under existing furniture and throw away as much as possible. It is a strange feeling, one that makes me remember a past and dead self, one that was hopeful, smart, crazy, in love, using technology and doing stuff that I can't even begin to comprehend nowadays.

I dug into old CD albums, remembering with much nostalgia the movies that I was watching and intending to keep forever. The movies are still around, CD players are almost gone. I had to use my wife's laptop to read the CDs, as mine would only accept a few of them. Well, that's because it's broken, but still. Among the CDs I found old source code and material that I had gathered from friends, jobs, the Internet, hacking. I felt like an archaeologist digging through the remains of old civilizations, ones we hold dear and towards which we feel a strong sense of ownership, but with which we have nothing in common.

Here it is: the Palm Vx PDA that was built in 1998 and still works now, with the same battery, if you can just find a way to connect it to a computer so you can upload new stuff to it. Here it is: the Nokia E60 phone that worked flawlessly for more than ten years. I bought a smartphone to replace both of them just five years ago. But also, here it is: an external modem I had forgotten I had; I still wonder where I used it, if ever, and how I got hold of it. Same for the audio/video/infrared wireless transmitters and receivers that allowed me to watch movies from the computer on the TV in the other room. Tens of meters of Ethernet and all kinds of connecting cables, relics of an age before ubiquitous digital wireless connection, just forgotten in the odd corners of the house. Remains of two desktop computers (that I could still make work if I had the inclination) linger like the fossilized bones of extinct creatures.

I feel a mix of gratefulness, nostalgia, loss and that I am fucking old, all at the same time. I wonder where I could find people that still value these things that I dug out from my past and that otherwise will soon become anonymous and amorphous junk. Geez, look at the 6 CDs of utility software, stuff I still remember fondly and stuff I have never used: antivirus, archiving, communication, VoIP, OCR, document processing, all software that is in heavy use today but you would be hard pressed to find people still recognizing these particular incarnations. Music that I still have in my playlist on CDs almost twenty years old. Games that I had worked on that I have forgotten ever doing. Random writing from when I was so young I feel embarrassed just to remember.

And this is just from a 50 square meter apartment that we moved into just ten years ago. I can't even imagine how people do this when they move out from their childhood home, where they and their kids have lived for generations. What do they find? Do they even recognize it? What happened to all the people that I once was?

Occasionally I ask myself if I really am an "ist". You know: misogynist, racist, classist, sexist, bigot, and so on. Or maybe I am "one of the good guys", a progressive feminist antiracist. And the answer is yes. I am both.

I've just read a really long feminist article that - besides naming white bigoted men "the enemy" and showing them the smallest bit of empathy just because "if you mess with them, they mess with us women when they get home" - had the author wonder how come so many of the people who got outed by the latest wave of misconduct allegations were people who declared themselves progressive and even wrote or shared content to that effect. And the answer is really simple and really uncomfortable for all purists out there: we are all a bit bigoted. More than that, sometimes we are really leaning towards a side and then we change back, like reeds in the wind. I think that's OK. That's how people are and have been since forever. The answer is not to pretend we are different, but to accept we have that side and to listen to it and converse with it in order to reach some sort of consensus.

The animal brain has one job and one alone. It has to heavily filter all the inputs from the real world and then create a manageable model of it in order to predict what's going to happen next. Shortcuts and pure yes-and-no answers are heaven to it. If you can look at one person and immediately infer things that will help you predict their behavior from simple things like sex or color of skin or the way they dress, the brain is ecstatic. Try telling it that no, that's not good, and that instead of the limited statistical model built from its own experience it should rely on the morally curated amalgamation of the acceptable experience of other people, and you frustrate it. It's not a human thing, it's not a mammal thing; if you could express this idea to an ant, it would get angry with you. The brain wants - if not outright needs - to be racist, sexist and other isms like that. What it wants is to take everything and put as much of it as possible in small boxes, so that it can use the limited capacity it has to navigate the things that are not labeled one way or another.

So yes, physiologically we are too stupid not to be bigots. All bigots are stupid. We are all bigots. In order not to be one, or at least not behave like one, you have to be motivated. Messing up one's entire life in a matter of days with an onslaught of sympathetic and coordinated allegations would do that quite well. That doesn't mean it's the right thing to do, any more than it would be to "kill off" people who disagree with you. Therefore in matters such as these I cannot help feeling sympathetic towards people who are quite literally dicks. It doesn't mean I agree with what they did; it means I don't agree with what anybody did. And in such moments of sympathy I hear the parts of me that current society wants erased shouting for attention: "See, we were right! We are dicks, but these moralists are überdicks!" I listen to bits of me that want everything wrong with the world to be the fault of poor people, women, people of other nationalities, races or religions, certain jobs or certain types, having certain cars or behaving or dressing in a certain way. It would be so easy to navigate a world like that: just kill off the Jews and black people, put women in their place, write code only in C#, rename the island of Java to DotNet, be happy!

Yet it is obvious it doesn't work that way. Not even white males would want this to happen, most of them. How do I make the voices shut up? Clearly witch-hunting offenders until their lives are more upended than if they had stolen or run someone over with their car does not work. And the answer, from my own limited experience, seems to be contact. Whenever I am inclined to say all Chinese or Indians are stupid (which is numerically much worse than being antisemitic, and so many people from my background are guilty of it) and I meet a brilliant Asian programmer or entrepreneur or simply an articulate and intelligent human being, I am forced to revisit my assertion. Whenever I think women can't code and I meet young girls smarter and more energetic than I am, I have to drop that, too. Whenever I want to believe black people smell or are violent or are genetically faulty and I see some Nubian Adonis talking high philosophy way over my head, I just have to stop. If these people all went hypersensitive, got offended by everything I say or do and ganged up on me for being limited in my view, I clearly wouldn't be motivated, or even have the opportunity, to grow out of it. Of course gay people and Jews are responsible for all evils on Earth if they are the ones making my life hell. And it is also easy to remain bigoted if I surround myself with people just like me. I've read somewhere a statistic showing that racists usually live in areas where they lack contact with people of color.

Basically, what I want to say is that I see no reason why someone would want to be paranoid. Either there is something wrong with them or people are really out to get them. And it is so easy to label someone "the enemy" and just pound on them, so easy to blame anyone else for your troubles, so easy to enter the fight-or-flight mode that is encoded in our very beings. I see this with my dog: he avoids big dogs since a big dog attacked him. If he continues this trend, he will certainly avoid getting attacked again by a big dog, while trying to get acquainted with them might result in injury or even death. It's so easy to decide to avoid them, however nice they smell and however nicely they play. For him it is a very limiting, but rational choice.

Hide your inner bigot, cage it in the darkest depths of your soul, and it will grow stronger, malignant, uncontrolled. This is what civilization, especially the forced kind, does to people. It makes them think they are something else, while inside they are cancerous and vile, just waiting to explode in the worst way. Instead, I propose something else: take your bigot for a walk, talk to it, introduce it to people. Maybe people will start avoiding you like the plague, but that's their own bigotry at work. And soon, you will probably be the progressive one. It's hard to be a racist if you have a black friend and difficult to be a misogynist when you meet wonderful humans who happen to be female. You will make the bad joke, you will expose your limits, and the world around you will challenge you on them. But in the end, your limits will expand, people who matter will understand and appreciate your growth, and frigid feminazi Jew lesbos can go to hell.

You know that joke, about the guy who wants to become progressive, so he is searching for a gay friend? Why not try it the other way around? Find a bigot near you and make friends.

A year and a half ago, as I was going from one miserable job interview to the next, I was asked what I thought about code review. At the time I said that I thought it was the most important organizational aspect of writing code. I mean, you can do agile or waterfall, work on games or mobile apps or business applications, use the latest or the oldest, the best or the worst technology, and code review still helps. I still think that way now, but recent experiences with the process have left me wanting to refine my understanding of it. This blog post is about that.

The Good


Why is code review good? The very first thing it does is force you to acknowledge your work. You can be tired and fix one little thing in a lazy way and forget about it, and it might work or it might break something; but when you know you have to publish what you did, you do things less lazily, more documented, more thought out. It doesn't matter that no one may ever look carefully at the review; what matters is that you are thinking there is the possibility of it.

Second, and obvious, is that any mistakes you made are more likely to come to the surface when someone looks at the code. It doesn't mean people blame you for mistakes; it means the mistakes don't come and bite you in the ass later, when your work is supposed to be making money for some poor bastard somewhere. This is very important because we tend to work on systems more complex than we can, or are willing to, understand. If a group of people who together understand the system is reviewing the work, though, you learn not only about the inevitable code errors you introduce, but also about the errors in judgement or understanding or in the assumptions you made.

Then there is the learning aspect of it. Juniors learn from seniors reviewing their work, they learn from code reviewing each other, and everybody learns from reviewing work made by anyone else. It opens up perspectives. I mean, you can review some method that was copy-pasted four times in order to do the same thing to four different objects and learn how not to do that, ever! No matter how much you would want to when coming in to work hungover and hoping for death a little. For example, I've only recently learned to comment on my own code review before submitting it. Some might say comments in the code should do that, but sometimes you need more, as anchors for discussion, which obviously cannot be carried in code comments. (well, they can, but please don't do that)
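The copy-paste situation above can be sketched like this - a hypothetical example in Python, with every name invented for illustration: instead of four near-identical methods, one for each object type, a reviewer's comment nudges you towards a single generic helper.

```python
# Hypothetical "after the review" version. Before the review there were
# four copy-pasted functions (serialize_user, serialize_order,
# serialize_product, serialize_invoice), each building the same kind of
# dict by hand. One generic helper does the same thing to any object.

def serialize(obj, fields):
    """Pick the named attributes off any object into a plain dict."""
    return {name: getattr(obj, name) for name in fields}

class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email

user = User("Ana", "ana@example.com")
print(serialize(user, ["name", "email"]))
# → {'name': 'Ana', 'email': 'ana@example.com'}
```

The point is not this particular helper, but that the duplication only became visible because someone other than the author read all four variants side by side.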

And there is more! You get documentation of the code for free. When someone doesn't understand what the hell is going on, they ask questions, which leads to you answering in whatever code review software you use. This will remain there for others to peruse long after you've left the company and moved on to slightly RGB-shifted pastures. I still dream of a non-intrusive system that would connect reviews to the code in your IDE, so you can always see a list of comments and annotations for whatever you are looking at.

One of the benefits is that code review makes everyone in the team write code in the same way. For better or worse. I will detail that in a moment, but think about what it means to read a piece of code, trying to understand it, then switch to the next one and see it written in a completely different style. You waste a lot of time.

Finally, I think the confidence code review gives you can lead not only to better code, but also to faster code. More on this next. This is controversial, but I think you can use code review to check your code, but only if you trust the reviewers. You might fire off commit after commit after commit, confident that your peers will check what you, normally, would have to double- and triple-check before committing. It's risky, but with the right team it can do wonders.

The Bad


OK, so it's a great thing, this code review stuff. I knew that, you knew that, so why am I wasting my finger strength? Well, there is a dark side to code review. I've heard some purists insist on certain rules for code review with which I am not completely comfortable, for example. I invite said purists who also read my blog to come rant in the comments below. My recent experience also touches on said rules and introduces others. Let me detail the bad.

There are programmers and programmers, projects and projects, management and management. Where one developer writes some code and hopes people will look at it carefully and instruct them on what they could improve, some people just lazily write something that kind of works, thinking whoever does the code review will also do the work of making their code remotely usable. Where in some projects developers keep working after hours because they want to see their code do good and the project succeed, in others people couldn't care less: they do their time and break the door when the bell rings. Don't expect careful code reviews then. And there is the management issue: managers might protect the developers from anything unrelated to coding, or they might pester them with meetings and emails and processes that break concentration, waste time and surely do not help the attention span of a code reviewer. But in all of the worst cases above, code review is still good, just less effective.

One of the rules I was talking about above was to never commit code unless its code review was accepted. Note the bold font on the never. It was like that whenever I heard the rule. Sounded bold. But I completely disagree with that.

First, if you have developers that you can't trust to commit something, don't let them commit. Either find someone better or do something about their privileges, a system that prevents them from committing. The same goes for people you can't trust to read the code review and update the code after a bad or defective commit.

Second of all, you might work on a file that should appear in more than one code review. No, the system where you do the work, ask for review, then shelve the files so you can work on the next thing doesn't work! It takes time and concentration, and leads to bad resolves that break your code. Just commit the first thing and move to the next. When your review comes back full of bugs, just finish what you are working on, commit that, then return to the code and implement fixes for the issues found. It is a problem for code review software that it can't understand that a file committed after changes were made to it doesn't mean you want to include all the changes since time immemorial. That's a software issue, though. Just create a new review and somehow link it to the other, via comments or notes. Creating a personal branch for every developer, or other crazy ideas like that, is also crap.

Not committing work that you've done means delaying your other work, testing, finding problems in it, etc. Having to juggle with software in order to submit to a rigid process that is indifferent to the overall pace of development and the realities of your work is stupid. Just work, commit, review, test, rework. It's what we do.

It's also, I think, an error in judgement to force code review. As good as I think it is, you can work without it. It is an optional process, so keep it that way. Making development conditional on an optional process makes it mandatory. It might sound like a truism, but people don't seem to realize things unless you articulate them.

And then there is human nature. If you ask me to code review for you, I will stop what I am doing and perform the review, because if I don't, you can't commit. It hurts my work, because it breaks my concentration. It hurts your code review, because I am not focused enough. Personally, I am best at reviewing in the morning. None of the organizational crap has happened yet: no meetings, no emails telling me to write other emails, no chat messages asking questions that I have no desire to answer. I am rested, I am a bit pumped from making the minimum physical movements required to get me to the office, and so I am ready to single-mindedly focus on your review. It shouldn't matter that you committed the code yesterday. I'll get to it when I get to it.

The Ugly


The ugly is not only bad, but also disturbing. It's not a characteristic of the code review per se, but is more related to the humans involved in the process. Code review has some nasty side effects on certain people and in certain situations. Let's discuss this for a bit.

I was saying above that it's good that everybody writes in the same way. That may actually stop people from innovating in the writing of code. Do it this way, that's the pattern we're using, you will hear, without the slightest hint of a possibility to improve on that pattern. The same thing might happen with new ideas that you feel need to be introduced in the project, or some refactoring, or some other creative work that would make you proud and motivated to continue to do good work. As I said above, it's a people problem, not a process problem, but when it happens, it stifles innovation, creativity and ultimately the fucks you give about what happens to the project as a whole.

Code reviews, like any other communication medium, may be abused. People may be attacked or shamed by others who don't really like them. The two sides might not even be junior and senior; the power may come from time in the firm rather than technical skill, or from some other hierarchical or social advantage. Ego fights can also erupt in code reviews, which exacerbates the problem if reviews are blocking. Arguments are good, pissing contests are ugly, that kind of thing.

Reviews take time. That's really not a people problem, it's a process problem, and one common to all processes. You need to put in the work to do a good review. Just glancing over and saying "it looks good", without trying to understand what the code is supposed to do, is almost worse than refusing to do the review. I am plenty guilty of that. Instead of thinking about what the guy did and trying to help, part of my brain just keeps rummaging on whatever my current development task is. This is another argument for separating reviewing from code writing. You need your zone for both. When code review wastes rather than spends time, that's ugly.

Finally, I think one major issue with code review is that it encourages lazing off on unit testing, proper testing, refactoring and even the simple writing of the code. This is a management issue, mostly, and it's ugly like vomited shit. When people write horrid code filled with bugs, assuming that code review will fix their lack of interest, that's ugly. When you are urged, more or less vigorously, to skimp on the unit or manual testing because the code review was accepted, that's ugly. But when you are trying to improve the general quality of the code and the answer is either that you don't have time for this or that any change is unnecessary because the code review passed, or even when you yourself are unwilling to do the refactoring, knowing what a hassle it will be to send it through review, that's damn ugly. It means you want to do more than your share and you get stuck in a process.

And on that note, I end this wall of text. Process before people is always ugly.

Comments and opinions, if you dare! :)

There are two girls standing right in front of me on the subway escalator, riding up towards the gray, unforgiving, cold Bucharest weather. I am standing there, looking at their backs and my mind starts to wander. I imagine their long young hair hiding one of those smiles to yearn for, expressing both calm and potential, contentment and desire, purity and mischief. I imagine the skin on their back and neck, clean and unmarred, smelling faintly floral, not because of some stupid perfume, but coming from its rose petal smoothness and their very inner nature, tasting like heaven. The long coat and pants hide the perfect ass, not fat and not muscular, not big and not small, not even sexual, the ideal origin for two perfect legs. One of them turns to the other and they start talking and in a completely involuntary act of political correctness they become human, ordinary, just like me, and I resent them for that. Why couldn't you remain goddesses? What is wrong with you?

People know that I sometimes like to hold controversial opinions, sometimes just for the kicks. So first I will ask some questions, ask you to find your own answers, then read about what I think.

Why is sexual harassment different from any other type of harassment?
Why is leaving from a job where your boss or colleague is an asshole any different because they are sexual harassers as opposed to regular assholes?
Why is offering sexual services, including just dressing or behaving a certain way, a natural way of expressing oneself, but asking for them is illegal?
Why do these things seem to happen only in the US?

Now, I am a technical person. While I am certainly not for sexual harassment at work, I need to understand why some types of behavior are punished so differently from others that are very similar. Consider an employee who is humiliated and disrespected publicly at work, but not in a sexual manner. How could they ever support this outcry against sexual harassment, when their own plight is being ignored? I mean, do we need laws to tell us when we have crossed the line while being jerks? Why is there a legal segregation of jerkness? Why is someone abusing their power more or less guilty based on the type of abuse they chose to exercise?

And, as you might think, my opinion extends to a lot of these special exceptions to civilized behavior. They are called positive discrimination, affirmative action, employment equity and so on, but they boil down to one thing only: another type of discrimination. In the end, like any sort of forced empowerment, they will get abused. The rallying cry of Gretchen Carlson will fizzle out in a sea of false claims and malicious lawsuits. In fact, it only takes a few to invalidate all the others. It's like trying to make a drink less bitter by adding sugar, rather than removing the bitter ingredient. Or paying someone after you had sex with them against their will, if you see what I mean.

But what am I advocating? Certainly, if we abolish the special rules, then all one can do when sexually harassed is find another job. The asshole in power will hire more and more people until some are weak or desperate enough to accept their advances. If we extend the rules to cover any type of harassment, we expose employers to abuses and tie their hands in how to run their business. Neither option is acceptable, but neither is cherry-picking which type of behavior is bad enough to be illegal based on its popularity. This entire issue is systemic: there is no way for the system to self-regulate.

When all this #MeToo business started going on, I went and asked women about it. And the girls here were supportive of the women who were coming forward, but not of the ensuing publicity circus and these arbitrary lines being drawn in the sand. While the virality of the discussion inspired so many people to come forward, which is a good thing, we have to consider why they failed to do so in the past and what will happen when fatigue inevitably removes this from the public interest. Company policy and employee training, harsher laws? No formalization of common sense (which doesn't seem to be so common anymore) is a good idea.

To summarize: if you behave like a douchebag and people around you don't call you out on it, then it is culturally OK to continue. It is not right, but putting it into a law won't stop it from happening. The recent publicity around this seems to be coming not from the gravity of the behavior, but from the number of people who suddenly came forward to expose it. If that had been naturally possible, it would not have been news; it would have just been the normal process of weeding out assholes. So it is clear to me that the problem is not so much the specific type of harassment as the system that makes its public exposure, and a subsequent moral censure, difficult.

I don't have a solution. I am sure that if I could figure one out, so could a lot of other people, and this would be solved already. However, it doesn't hurt to look at these cases from a different perspective from time to time. Like from a different country, with a different culture. Or from high enough to understand which of the social values you foster lead to assholes in power getting more and more power and then abusing it. Let's not get hung up on local solutions for specific problems, turning everything into a legal and moral labyrinth where no one is comfortable, and solve the big problems first.

Growing up I always had this fantasy of writing a journal. My sense of privacy - being sure someone would read and judge it - stopped me from pursuing that, as well as the simple fact that I didn't need a journal, I just saw it as a cool thing I should do. Little did I know that in my older years I would want some sort of record of my forgotten youth and find none. Yet the idea persisted.

I started an actual journal as soon as I had a computer and understood the concept of encryption. It didn't really work, either. It was full of self-serving bullshit and it described a person that I really wasn't. One could (and should) read between the lines in order to understand the smug and ignorant state of mind of the author. Later still, I started to write a book, something called The Good Programmer or something of that sort. Phaw! Even if I could have gotten past my chronic impostor syndrome, being a good programmer is nice, but it's not my goal in life. If it were, I would have made other life choices. And again, it was full of self-serving bullshit.

You may detect a pattern here and it might inform your reading this blog post. Anyway, its point is to generalize my experience as a programmer, as fast and as clean as possible. Hope it helps.

Every time I write software - that I care about and have influence over its technical quality - I tend to generalize things: reuse components, refactor duplicate code and so on. In other words, find similar problems and solve them with the same tool. It is not Golden Hammering problems away, that's a different thing altogether, since it is I who is shaping the tool. So how about doing that for my life? I should care about it and have influence over its quality.

The first time I started writing code, I was actually writing it on paper. I didn't have a computer, but I had just read a beginner's book and I was hooked. The code wouldn't have worked in a million years, but it was the thought that counted: I played around with it. Later on I got a computer and I started using its programs, understanding how they worked, not unlike getting a smartphone and learning how to phone people. Yet after a while I found issues that I wanted to solve, or games that I wanted to play but didn't have, so I made them myself: I found a problem and solved it. But writing code is not just about the end result. As soon as I explored what other people were doing, I started trying to emulate and improve on what they did. I played around with compression and artificial intelligence, for example. And I was a teen in a world with no Internet. I went to the British Council, borrowed actual books, then tried out the concepts in them a lot.

It was years before I became a professional programmer, and that is mostly because the hiring process (in any country) is plain stupid. The best HR department in the world is just looking for people who have already done what is required, so that they can do it at the current company as well. But that's not what a developer wants. Software is both science and art. The science is a bit of knowledge and a lot of discipline, but the rest, a very large chunk, is just intuition and exploration and imagination. People who want to do the same thing over and over again are not good developers; they are probably people who just want to make a buck with which to live their "real" life. For me, real life has been writing code - and I still think I am being paid for putting up with the people I work for and work with, rather than for doing what I love.

Professional work is completely different from the learning period. In it you usually don't have a say in what you work on, and the problems that software is supposed to solve are at best something you are indifferent to and at worst something you wouldn't understand (as in will not, even if you could). Yet the same basic principles apply. First, you are required to write good code. By this your employers mean something that works as they intended, but for you it is still something that you feel pride in having written, something that is readable enough that you understand it a few weeks later when you have to add stuff or repair something. You are expected to "keep up to date", by which they mean you will keep studying in your spare time so that you do work that they don't know they need done, but for you it is still playing around with things. Think about it! You are expected to keep playing around! As for the part where you see what other people do and you get to emulate or improve on that... you have a bunch of colleagues working on the same stuff that you can talk to, compare notes with and code review with. Add to this the strong community of software developers that are everywhere on the Internet.

Bottom line: Just keep doing three things and you're good. First play around with stuff. Then find a problem to solve (or someone to provide it for you) and write code for it. Finally, check what other people do and gain inspiration to create or improve your or their work. Oh, did I say finally? This is a while loop, for as long as you are having fun. Hey, what do you know? This does scale. Doesn't it sound like a good plan, even if you are not a software dev?
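
Since the plan above is literally described as a while loop, here it is in the document's own medium - a tongue-in-cheek JavaScript sketch, with every name invented for the joke:

```javascript
// A tongue-in-cheek sketch of the three-step loop described above.
// All names are invented for illustration.
function lifeAsADev() {
    var log = [];
    var funLeft = 3; // in real life this is hopefully much larger
    while (funLeft--) {
        log.push('play around with stuff');
        log.push('find a problem and solve it with code');
        log.push('see what others do, emulate and improve');
    }
    return log;
}
```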

It seems to me that there are more and more crazy people around me. They are relatives, friends, colleagues, random people on the street, and I have no idea where they came from. I don't remember as much insanity from when I was a boy, but then again I was even more oblivious then than I am now, and that's saying something. Yet since then the population of the planet grew from 4.5 billion to 7 billion and, more importantly to me, the population of my home city of Bucharest grew from about 1.5 million to one where at least as many people again come from outside the capital to find opportunities. Still, the percentage of the mentally afflicted seems to have more than doubled. But what is crazy?

I mean, I just saw an old lady, looking like she was chronically homeless, shouting obscenities at no one in particular. Who else was she to talk to except herself? She can't even trust another human being enough to talk to them, even if the thought came to her mind. And if she has an audience of one, just as sane as she is, who is to say she's talking crazy? Or when you see some company executive make one stupid decision after another, then boldly come on stage and present it as the best idea since fire was invented. Do they know they are sociopaths? Does anybody else know? Do they even care? There is a quote in the Mindhunter TV series: "How does a sociopath become the president of the United States?", asks the young FBI agent. "How does one become president if they are not?", responds the psychology professor. And I am reading this book, which I am going to review in a few days, about the counterculture in America during the '60s. If those people appeared in front of me right now, foraging through mall trash and explaining cosmic truths while loaded with speed and LSD, I would probably catalog them as insane.

Maybe insanity is not a state, but a perception. It's just socially unacceptable behavior. It does hurt the person exhibiting it, but that's mostly because they can't fit in (or maybe they fit too well). Have I become more sensitive because of the carefully constructed shell that protects me from hardship? Anything getting through it hurts like hell because I am not used to stuff coming through. I have thin skin covered by layers of callousness. Maybe society is more exclusive now? It is easier to become crazy, as you only have to fall a little before you enter an unstoppable spiraling decline. Certainly you can't experiment now with personal freedom; it's almost gone, taken away bit by bit, not (only) by repressive governments, but by our willingness to waste time and resources until there are none left. Open relationships? Life on the road? Chemically expanding your mind? Forget about it! You get homeopathy and holotropic breathwork and feel enlightened.

There is another hypothesis worth exploring. Maybe people are not crazy at all. Perhaps I am the mad one. At every stage I expect the full weight of social scorn to come over me and crush me like the bug I am. How dare I? I wouldn't even know what I was guilty of - which, paradoxically, would prove I am even more guilty. They would come at me with carefully crafted smiles and expressions taken from shows or movies they have all seen and burn me alive, giggling all the way, like they are making the greatest joke in the world while providing me with the help they know I desperately need. All these people that apparently speak only to themselves, yet somehow communicate with others by methods unseen, they would suddenly all turn towards me, pointing their fingers and letting out inarticulate cries. Then, of course, I would know that I am insane, because I would never be able to do any of that.

I just don't know. Where does this vomitous mountain of madness come from? Maybe more importantly, where is it going?

Cool people are those who respond to your opinions with condescension. Anything different from what they believe is ridiculous, pathetic, laughable or dangerous. I mean, they could even be right: you might have said something really stupid. But while you can handle finding out that something you were thinking is not true, they cannot. Their whole world view is based on them being right. While you might see them as a little hurtful and a bit annoying, they see you as a threat, because their truth is something they desperately cling to and any type of difference challenges their way of life.

You will usually meet them in positions of power. They are not sociopathic enough to be top leadership, but they will be somewhere in the middle, telling themselves the story of how in control of their lives they are. They clump together, because a tale is easier to believe in a group than by yourself. They drink together, they watch the same sports, they play the same games, they go on the same vacations, they have the same gods and the same rituals. Their information bubbles existed way before Facebook, and while you might see them as ridiculous bubble people, they always fear you are carrying a pin to burst theirs. Cool people always know "how the world works", and to pretend otherwise would only mean you are not as savvy. Major changes leave them helpless and in search of a narrative that explains the change away and restores their view of the world. Beware a former cool person, for they are desperate.

They will applaud each other vigorously at every little success, as they feel it's a validation of their own. Unfortunately, that means they will stand in the way of your success, as they feel it invalidates theirs. Cool people live on a narrow ladder, where everybody is clearly ranked on a vertical scale. Not being on their ladder makes them feel superior to you. Not wanting to be on their ladder makes them feel threatened by you. While you exchange information you possess, they only coerce it out of you in order to judge and rank you on their scale. When they share information, it is out of a feeling of generosity, letting you know where the cool is; not being grateful angers them.

Cool people keep in touch. They cannot allow coolness to exist in different flavors. They maintain contact in order to synchronize their shared concepts. Socially it is easy for two cool people to communicate, because they are very similar. It is important to make other people feel not cool enough, because a cool person can't handle a conversation that doesn't follow a familiar pattern. While the problem is mostly theirs, they need to shift the blame onto others. They smile easily as a well trained skill, not an expression of how they feel. Smiles and laughter are tools and weapons for them.

Uncool people are essential to the well-being of cool people. A careful dance of keeping people just far enough away to indicate superiority, but close enough to make it visible to any outside observer, is essential to the lifestyle of cool people. While you may despise, pity or envy them, you could easily do without them; they, on the other hand, actually need you.

So how cool are you? I am not cool. I am better than cool. Me and my kind.

It occurred to me recently that the opposite of fear is hope. Well, of course, you will say, didn't you know that? I did, but I also didn't fully grasp the concept. It doesn't help that fear is considered an emotion, yet hope a more complicated idea.

I was thinking about the things that go wrong in my country and some of it, a large part, comes from bad laws. And I was trying to understand what a "bad law" is. I tried some examples, like the dog leash one - I know, I have a special personal hate for that one in particular - but I noticed a pattern. It's not so much about the content of the law as about its trigger. You see, lawmakers don't propose and pass laws because they like work, but because there was an event that triggered the need for that law. Law is always reactive, not proactive. It could be proactive, but there is a lot more effort involved, like convincing people that there is an actual problem that needs addressing. It's much easier to wait for the problem to manifest and then try (or pretend) to fix it.

Anyway, the pattern that I noticed was related to the trigger for individual laws. The bad laws were the ones that came out of fear. One kid got killed by stray dogs, kill them all and institute mandatory leashes on pets. The good laws, on the other hand, come from hope. Lower taxes so people are more inclined to work and thus produce more and so get more tax in. Hopefully people will not be lazy.

And it's not only related to laws, but to personal decisions as well. Will I try a new thing, hoping that it will make me better, teach me something, be fun, or will I not try it because it is dangerous, somebody might get hurt, I may lose precious time, etc? When it is so abstract it's almost a given that you will take the first choice, yet when it is more personal fear tends to paralyze.

Fear is also contagious. The people who want us to be afraid are afraid themselves. Control freaks, power hungry people, they don't want to take us to a better place because they are afraid to lose that control, because they are afraid of what might happen. And their toolkit is based on fear, too. Something exploded and killed people, some asshole drove a car into people: we must ban explosives, cars and - just to be safe - people. Don't go to space because people might die, although they die every second and most of the time you don't care about it. Let's hoard money and things because we might not get another chance to have them, because we might lose them, because we are so afraid. The fear people don't know any other language but fear and they will use it against you. Much easier to instill fear than to give hope, so hope is not that contagious. It is fragile and it is precious.

I submit that while fear might keep us safe it will never make us happy. The very expression "to keep safe" implies stagnation, keeping, holding, controlling, restricting freedom.

So here is my solution. As Saint-Exupery said, perfection is achieved not when there is nothing more to add, but when there is nothing left to take away. Let's strictly define our safe zone, or the area we need to be safe in order to not be afraid. Personally, as a group, as a country, as a planet, let's set the minimum requirements to being safe, a place or situation we can always retreat to and not be afraid. Whether it is a place that is your own, or a lack of debt, or a job or business that will give you just enough money to survive and not spiral out of control, a relationship or some other safety net, everyone needs it. But beyond it, let's abandon fear and instead use hope. Hope that you can do more, you can be better, you can live more or have fun, that other people will act good rather than badly, that strangers will help rather than harm you, that the unknown will reveal beauty rather than terror.

I will choose to define good decisions as coming from hope. Will that hope be proven to be unfounded? Maybe. But a decision based on fear will never ever be good enough. And if all else fails, I have my safe zone to get back to. And I know, I very much know that having a place to get back to from failure is a luxury, that not many people have it as good as I do, but to have it and still live in fear, that's just stupid.

People who know me often snicker whenever someone utters "refactoring" nearby. I am a strong proponent of refactoring code and I have feelings almost as strong for the managers who disagree. Usually they have really stupid reasons for it, too. However, speaking with a colleague the other day, I realized that refactoring can be bad as well. So here I will explore this idea.

Why refactor at all?


Refactoring is the process of rewriting code so that it is more readable and maintainable. It does not mean writing code to be readable and maintainable from the beginning, and it does not mean that by doing it you admit your code was not good when you first wrote it. Usually the scope of refactoring is larger than localized bits of code and takes into account several areas in your software. It also has the purpose of aligning your codebase with the inevitable scope creep of any project. But more than this, its primary use is to ease the work of the people who will work on the code later on, be it yourself or some colleague.

I was talking with this friend of mine and he explained to me how, especially in the game industry, managers are reluctant to spend resources on cleaning old code before actually starting work on new code, since release dates come fast and technologies change rapidly. I replied that, to me, refactoring is not something to be done before you write code, but after, as a phase of the development process. In fact, there was even a picture showing it on a wheel: planning, implementing, testing and bug fixing, refactoring. I searched for it, but I found so many different ideas that I decided it would be pointless to show it here. However, most of these images and presentation files specified maintenance as the final step of a software project. For most projects, use and maintenance is the longest phase in the cycle. It makes sense to invest in making it easier for your team.



So how could any of this be bad?


Well, there are types of projects that are fire-and-forget; they disappear after a while, their codebase abandoned. Their maintenance phase is tiny or nonexistent, and therefore refactoring the code has limited value. But even that is not a case where refactoring is wrong, just less useful. I believe that there are situations where refactoring can have an adverse effect, and that is exactly the scenario my friend mentioned: before starting to code. Let me expand on that.

Refactoring is a process of rewriting code, which implies you not only have a codebase you want to rewrite, but also that you know how to do it. Except in very limited cases - say some project is bought by another company with far more experienced developers and you just need to clean up garbage - there is no need to touch code that you are just beginning to understand. To refactor after you've finished a planned development phase (a Scrum sprint, for example, or a completed feature) is easy, since you understand how the code was written, what the requirements have become, maybe you are lucky enough to have unit tests on the working code, and so on. It's the now I have it working, let's clean it up a little phase. By contrast, doing it when you want to add things is bad, because you barely remember what was done and by whom. Moreover, you probably want to add features, so changing the old code just to accommodate adding some other code makes little sense. Management will surely not only not approve, but even consider it a hostile request from a stupid techie who only cares about the beauty of code and doesn't understand the commercial realities of the project. Suggest something like this and you risk souring the entire team on the prospect of refactoring code.

Another refactoring antipattern is when someone decides the architecture needs to be more flexible, so flexible that it could do anything, so they rearchitect the whole thing, using software patterns and high-level concepts, but ignoring the actual functionality of the existing code and the level of seniority in their team. In fact, I wouldn't even call this refactoring, since it doesn't address problems with code structure, but rewrites it completely. It's not making sure your building is sturdy and all the water pipes are new; it's demolishing everything, building something else, then bringing the same furniture in. Indeed, even though I like beautiful code, making changes to it solely to make it prettier or to make yourself feel smarter is dead wrong. What will probably happen is that people will get confused about the grand scheme of things and, without supervision that is expensive in time and other resources, they will start to cut corners and erode the architecture in order to write simpler code.

There is a system where software is released in "versions". People just write crappy code and pile feature upon feature, secure in the knowledge that if the project is a success, the next version will be well written. However, that rarely happens. Rewriting money-making code is perceived as a loss by the financial managers. Trust me on this: the shitty code you write today will haunt you for the rest of the project's lifetime and even in its afterlife, when other projects are started from cannibalized codebases. That said, I am not a proponent of writing perfect code right from the beginning either, mostly because no one actually knows what the code should really do until they finish writing it.

Refactoring is often associated with Test Driven Development, probably because they are both difficult to sell to management. It would be a mistake to think that refactoring is useful only in that context. Sure, it is a best practice to have unit tests on the piece of code you need to refactor, but let's face it, reality is hard enough as it is.

Last, but not least, is the partial or incomplete refactoring. It starts, and then sometime around the middle of the effort new feature requests arrive. The refactoring is "paused", but now part of your code is written one way and the rest another. The perception is that the refactoring was not only useless, but even detrimental. The same goes for when you decide to do it, then allow yourself to avoid or postpone it, or do it badly enough that it doesn't help at all. Doing it just for the sake of saying you do it is plain bad.

The right time and the right people


I personally believe that refactoring should be done at the end of each development interval, when you are still familiar with the feature and its implementation. Done like this, it doesn't even need special approval; it's just the way things are done, it's the shop culture. It is not what you do after code review - simple code cleaning suggested by people who took five minutes to look it over - it is a team effort to discuss which elements are difficult to maintain or easy to simplify, reuse or encapsulate. It is not a job for juniors, either. You don't grab the youngest guy on the team and let him rearrange the code of more experienced people, even if that would seem to teach the guy a lot. Nor is it something senior devs should be expected to do in their spare time. They might like it, but it is your responsibility to care about the project, not something you push onto your team when you are too lazy or too cheap. Finally, refactoring is not an excuse to write bad code in the hope you will fix it later.

From the way I am talking about this, you probably believe I've worked in many teams where refactoring was second nature and no one doubted its utility. You would be wrong. Because it is poorly understood, the reaction of non-technical people in a software team to the concept of refactoring usually falls somewhere in the interval between condescension and terror. Money people don't understand why you would change something that works, managers can't sell it as a good thing, production and art people don't care. Even worse, most technical people would rather write new stuff than rearrange old stuff, and some might even take offense at attempts to make "their code" better. But they will start to mutter and complain a lot when they get to the maintenance phase, or when they have to write features over old code, maybe even their own, and have difficulty understanding why the code is not written in a way that would make their work easy. And when managers go to their dashboards and compare team productivity, they will raise eyebrows at a chart that shows clear signs of slowing down.

Refactoring has a nasty side effect: it threatens jobs. If the code were clean and any change easy to perform, there would be a lot of pressure on the decision makers to justify their jobs. They would have to come up with relevant new ideas all the time. If the effort to maintain code or add new features is small, there will be pressure on developers to justify their jobs as well. Why keep a large team for a project that could easily be handled by a few junior devs who occasionally add something? Refactoring is the bane of the type of worker that does their job confusingly enough that only they can continue to do it, or that pretends to be managing a difficult project while being the one who makes it so. So in certain situations, for example in single-product companies, refactoring will make people fear they will be made redundant. Yet in others it will accelerate the speed of development of new projects, improve morale and win a shitload of money.

So my parting thoughts are these: sell it right and do it right! Most likely it will have a positive effect on the entire project and team. People will be happier and more productive, which means their bosses will be happier and filthy richer. Do it badly or sell it wrong and you will alienate people and curse shitty code for as long as you work there.

Intro


I was thinking about what to discuss about computers next and I realized that I hardly ever read anything about the flow of an application. I mean, sure, when you learn to code the first thing you hear is that a program is like a set of instructions, not unlike a cooking recipe or instructions from a wife to her hapless husband when she's sending him to the market. You know, stuff like "go to the market and buy 10 eggs. No, make it 20. If they don't have eggs get salami." Of course, her software developer husband returns with 20 salamis "They didn't have eggs", he reasons. Yet a program is increasingly not a simple set of instructions neatly following each other.

So I wrote something like a five-page treatise on program control flow, mentioning Turing and the Benedict Cumberbatch movie, child labor and farm work, asking whether it is possible to turn any program, with all its parallelism and event-driven complexity, into a Turing machine for better debugging. It was boring, so I removed it all. Instead, I will talk about conditional statements and how to refactor them, when it is needed.

A conditional statement is one of the basic statements of programming, it is a decision that affects what the program will execute next. And we love telling computers what to do, right? Anyway, here are some ways if/then/else or switch statements are used, some bad, some good, and how to fix whatever problems we find.

Team Arrow


First of all, the arrow antipattern. It's when you have if blocks in if blocks until your code looks like an arrow pointing right:
if (data.isValid()) {
    if (data.items && data.items.length) {
        var item = data.items[0];
        if (item) {
            if (item.isActive()) {
                console.log('Oh, great. An active item. hurray.');
            } else {
                throw "Item not active! Fatal terror!";
            }
        }
    }
}
This can simply be avoided by putting all the code in a method and inverting the if branches, like this:
if (!data.isValid()) return;
if (!data.items || !data.items.length) return;
var item = data.items[0];
if (!item) return;
if (!item.isActive()) {
    throw "Item not active! Fatal terror!";
}
console.log('Oh, great. An active item. hurray.');
See? No more arrow. And the debugging is so much easier.

There is a sister pattern of The Arrow called Speedy. OK, that's a Green Arrow joke, I have no idea what it is actually called, but basically, since a bunch of nested if blocks can be translated into a single if with a lot of conditions, the same code might have looked like this:
if (data.isValid() && data.items && data.items.length && data.items[0]) {
    var item = data.items[0];
    if (!item.isActive()) {
        throw "Item not active! Fatal terror!";
    }
    console.log('Oh, great. An active item. hurray.');
}
While this doesn't look like an arrow, it is in no way a better code. In fact it is worse, since the person debugging this will have to manually check each condition to see which one failed when a bug occurred. Just remember that if it doesn't look like an arrow, just its shaft, that's worse. OK, so now I named it: The Shaft antipattern. You first heard it here!

There is also a cousin of these two pesky antipatterns, let's call it Black Shaft! OK, no more naming. Just take a look at this:
if (person && person.department && person.department.manager && person.department.manager.phoneNumber) {
    call(person.department.manager.phoneNumber);
}
I can already hear a purist shouting at their monitor something like "That's because of the irresponsible use of null values in all programming languages!". Well, null is here to stay, so deal with it. The other problem is that you often don't see a better solution to something like this. You have a hierarchy of objects and any of them might be null and you are not in a position where you would cede control to another piece of code based on which object you refer to. I mean, one could refactor things like this:
if (person) {
    person.callDepartmentManager();
}
...
function callDepartmentManager() {
    if (this.department) {
        this.department.callManager();
    }
}
, which would certainly solve things, but adds a lot of extra code. In C# 6 you can do this:
var phoneNumber = person?.department?.manager?.phoneNumber;
if (phoneNumber != null) {
    call(phoneNumber);
}
This is great for .NET developers, but it also shows that rather than convince people to use better code practices, Microsoft decided this is a common enough problem it needed to be addressed through features in the programming language itself.

To be fair, I don't have a generic solution for this. Just be careful to use this only when you actually need it and handle any null values with grace, rather than just ignore them. Perhaps it is not the best place to call the manager in a piece of code that only has a reference to a person. Perhaps a person that doesn't seem to be in any department is a bigger problem than the fact you can't find their manager's phone number.
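
For what it's worth, JavaScript eventually got the same feature: optional chaining, the `?.` operator standardized in ES2020. A quick sketch, with the person object invented for illustration:

```javascript
// Optional chaining stops at the first null/undefined link in the
// chain and yields undefined instead of throwing a TypeError.
var person = { department: { manager: { phoneNumber: '555-0100' } } };
var nobody = null;

var phoneNumber = person?.department?.manager?.phoneNumber; // '555-0100'
var noNumber = nobody?.department?.manager?.phoneNumber;    // undefined

if (phoneNumber) {
    // call(phoneNumber) would go here; logging stands in for the demo
    console.log('calling ' + phoneNumber);
}
```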

The Omnipresent Switch


Another smelly code example is the omnipresent switch. You see code like this:
switch (type) {
    case Types.person:
        walk();
        break;
    case Types.car:
        run();
        break;
    case Types.plane:
        fly();
        break;
}
This isn't so bad, unless it appears in a lot of places in your code. If that type variable is checked again and again and again to see which way the program should behave, then you probably can apply the Replace Conditional with Polymorphism refactoring method.



Or, in simple English, group all the code per type, then only decide in one place which of them you want to execute. Polymorphism might work, but also some careful rearranging of your code. If you think of your code like you would a story, then this is the equivalent of the annoying "meanwhile, at the Bat Cave" switch. No, I want to see what happens at the Beaver's Bend, don't fucking jump to another unrelated segment! Just try to mentally filter all switch statements and replace them with a comic book bubble written in violently zigzagging font: "Meanwhile...".
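
A sketch of what Replace Conditional with Polymorphism might look like here - the type names mirror the switch above, everything else is invented for illustration:

```javascript
// Each type gets its own behavior object; the decision of which
// one to use is made once, instead of in every switch statement.
var movers = {
    person: { move: function () { return 'walking'; } },
    car:    { move: function () { return 'running'; } },
    plane:  { move: function () { return 'flying'; } }
};

function createMover(type) {
    // the only place that still looks at the type
    return movers[type];
}

// From here on, code just tells the object to do its thing,
// never asking "meanwhile, what type are you?" again.
var mover = createMover('plane');
var action = mover.move();
```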

A similar thing is when you have a bool or enum parameter in a method, completely changing the behavior of that method. Maybe you should use two different methods. I mean, stuff like:
function doWork(iFeelLikeIt) {
    if (iFeelLikeIt) {
        work();
    } else {
        fuckIt();
    }
}
happens every day in life, no need to see it in code.
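
The fix is usually just two honestly named methods instead of one flag-driven one - a sketch reusing the invented names from above, with stubs so it stands alone:

```javascript
// Stubs standing in for the real work; invented for illustration.
function work() { return 'worked'; }
function fuckIt() { return 'did not'; }

// Two methods whose names say what they do...
function doWork() { return work(); }
function skipWork() { return fuckIt(); }

// ...so the decision becomes visible at the call site:
// feelLikeIt ? doWork() : skipWork();
```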

Optimizing in the wrong place


Let's take a more serious example:
function stats(arr, method) {
    if (!arr || !arr.length) return;
    // sort numerically; the default sort would compare values as strings
    arr.sort(function (a, b) { return a - b; });
    switch (method) {
        case Methods.min:
            return arr[0];
        case Methods.max:
            return arr[arr.length - 1];
        case Methods.median:
            if (arr.length % 2 == 0) {
                return (arr[arr.length / 2 - 1] + arr[arr.length / 2]) / 2;
            } else {
                return arr[Math.floor(arr.length / 2)];
            }
        case Methods.mode:
            var counts = {};
            var max = -1;
            var result = -1;
            arr.forEach(function (v) {
                var count = (counts[v] || 0) + 1;
                if (count > max) {
                    result = v;
                    max = count;
                }
                counts[v] = count;
            });
            return result;
        case Methods.average:
            var sum = 0;
            arr.forEach(function (v) { sum += v; });
            return sum / arr.length;
    }
}

OK, it's still a silly example, but relatively less silly. It computes various statistical formulas from an array of values. At first, it seems like a good idea. You sort the array, which helps three out of the five methods, then you write the code for each, greatly simplified by working with a sorted array. Yet for the last two, being sorted does nothing, and both of them loop through the array anyway. Sorting the array loops through it as well, and then some. So, let's move the decision earlier:
function min(arr) {
    if (!arr || !arr.length) return;
    return Math.min.apply(null, arr);
}

function max(arr) {
    if (!arr || !arr.length) return;
    return Math.max.apply(null, arr);
}

function median(arr) {
    if (!arr || !arr.length) return;
    // sort numerically; the default sort would compare values as strings
    arr.sort(function (a, b) { return a - b; });
    var half = Math.floor(arr.length / 2);
    if (arr.length % 2 == 0) {
        return (arr[half - 1] + arr[half]) / 2;
    } else {
        return arr[half];
    }
}

function mode(arr) {
    if (!arr || !arr.length) return;
    var counts = {};
    var max = -1;
    var result = -1;
    arr.forEach(function (v) {
        var count = (counts[v] || 0) + 1;
        if (count > max) {
            result = v;
            max = count;
        }
        counts[v] = count;
    });
    return result;
}

function average(arr) {
    if (!arr || !arr.length) return;
    return arr.reduce(function (p, c) {
        return p + c;
    }) / arr.length;
}

As you can see, I only use sorting in the median function - and it can be argued that I could do even better without sorting. The names of the functions now reflect their functionality. The min and max functions take advantage of the native min/max functions of Javascript and, other than the check for a valid array, they are one-liners. More than this, it was natural to organize the code of each method in its own way; it would have felt weird, at least for me, to mix forEach and reduce and sort and for loops in the same method, even if each was in its own switch case block. Moreover, now the mode function works just as well on an array of strings (while an average of strings would make no sense), and I can refactor each function as I see fit, without caring about the functionality of the others.

Yet, you smugly point out, each method uses the same code to check for the validity of the array. Didn't you preach about DRY a blog post ago? True. One might turn that into a function, so that there is only one point of change. That's fair. I concede the point. However, don't make the mistake of confusing a repeated need with repeated code. In each of the functions there is a need to check the validity of the input data. Repeating the expression of that need is not only good, it's required. But good catch, reader! I wouldn't have thought about it myself.
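
That extraction might look like this - a sketch, with `isUsableArray` as an invented name; each function still states its need for the check, but the rule itself lives in one place:

```javascript
// One point of change for the validity rule itself.
function isUsableArray(arr) {
    return !!(arr && arr.length);
}

// Each function still expresses its own need for the check.
function min(arr) {
    if (!isUsableArray(arr)) return;
    return Math.min.apply(null, arr);
}

function max(arr) {
    if (!isUsableArray(arr)) return;
    return Math.max.apply(null, arr);
}
```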

But, you might argue, the original function was called stats. What if a manager comes and says he wants a function that calculates all statistical values for an array? Then the initial sort might make sense, but the switch doesn't. Instead, this might lead to another antipattern: using a complex function only for a small part of its execution. Something like this:
var stats = getStats(arr);
var middle = (stats.min + stats.max) / 2;
In this case, we only need the minimum and maximum of an array in order to get the "middle" value, and the code looks very elegant, yet in the background it computes all five values, a waste of resources. Is this more readable? Yes. And in some cases it is preferred to do it like that, when you don't care about performance. So this is both a pattern and an antipattern, depending on what is more important to your application. It is possible (and even often encountered) to optimize too much.
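A sketch of such a getStats, with the shape of the returned object assumed from the snippet above (mode left out for brevity):

```javascript
// Hypothetical "do everything" stats function: elegant for callers,
// but every call pays for all the values, used or not.
function getStats(arr) {
  var sorted = arr.slice().sort(function (a, b) { return a - b; });
  var sum = arr.reduce(function (p, c) { return p + c; }, 0);
  var half = Math.floor(arr.length / 2);
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    average: sum / arr.length,
    median: arr.length % 2
      ? sorted[half]
      : (sorted[half - 1] + sorted[half]) / 2
    // mode omitted for brevity
  };
}

var stats = getStats([1, 2, 3, 10]);
var middle = (stats.min + stats.max) / 2; // only two of the five values are used
```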

The X-ifs


A mutant form of the if statement is the ternary operator. My personal preference is to use it whenever a single condition determines one value or another, and to use if/then/else statements when the branches actually execute code. So I like this:
function boolToNumber(b) {
  return b ? 1 : 0;
}

function exec(arr) {
  if (arr.length % 2 == 0) {
    split(arr, arr.length / 2);
  } else {
    arr.push(newValue());
  }
}
but I don't approve of this:
function exec(arr) {
  arr.length % 2
    ? arr.push(newValue())
    : split(arr, arr.length / 2);
}

var a;
if (x == 1) {
  a = 2;
} else {
  a = 6;
}

var a = x == 1
  ? 2
  : (y == 2 ? 5 : 6);
The idea is that the code needs to be readable, so I prefer to read it like this. It is not a "principle" to write code as above - as I said, it's a personal preference - but do think of the other people trying to make heads or tails of what you wrote.

We are many, you are but one


There is a class of multiple decision flows that is hard to immediately refactor. I've talked about if statements that do all the work in one of their branches and about switch statements that can easily be split into methods. However, there is the case where you want to do things based on the values of multiple variables, something like this:
if (x == 1) {
  if (y == 1) {
    console.log('bottom-right');
  } else {
    console.log('top-right');
  }
} else {
  if (y == 1) {
    console.log('bottom-left');
  } else {
    console.log('top-left');
  }
}
There are several ways of handling this. One is to, again, try to move the decision to a higher level. Example:
if (x == 1) {
  logRight(y);
} else {
  logLeft(y);
}
Of course, this particular case can be fixed through computation, like this:
var h = y == 1 ? 'right' : 'left';
var v = x == 1 ? 'bottom' : 'top';
console.log(v + '-' + h);
Assuming it was not so simple, though, we can choose to reduce the choice to a single decision:
switch (x + ',' + y) {
  case '0,0': console.log('top-left'); break;
  case '0,1': console.log('bottom-left'); break;
  case '1,0': console.log('top-right'); break;
  case '1,1': console.log('bottom-right'); break;
}
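Assuming the cases stay this regular, the same composite key can also drive a plain lookup object instead of a switch, keeping the mapping in data rather than in code (a sketch, not from the original post):

```javascript
// The '0,0' … '1,1' cases from the switch above, as a lookup table.
var positionLabels = {
  '0,0': 'top-left',
  '0,1': 'bottom-left',
  '1,0': 'top-right',
  '1,1': 'bottom-right'
};

function logPosition(x, y) {
  console.log(positionLabels[x + ',' + y]);
}
```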



The Lazy Event


Another more complicated issue regarding conditional statements is when they are not actually encoding a decision, but testing for a change. Something like:
if (current != prev) {
  clearData();
  var data = computeData(current);
  setData(data);
  prev = current;
}
This is a perfectly valid piece of code and in many situations is what is required. However, one must pay attention to the place where the decision gets taken as compared with the place the value changed. Isn't that more like an event handler, something that should be designed differently, architecture wise? Why keep a previous value and react to the change only when I get into this piece of code and not react to the change of the value immediately? Fire an event that the value is changed and subscribe to the event via a piece of code that refreshes the data. One giveaway for this is that in the code above there is no actual use of the prev value other than to compare it and set it.
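A minimal sketch of that event-driven alternative could look like this; the Observable wrapper and its API are my assumptions, not code from the post:

```javascript
// A tiny observable value: the comparison with the previous value
// lives in one place, and subscribers react immediately on change.
function Observable(value) {
  this.value = value;
  this.handlers = [];
}
Observable.prototype.subscribe = function (handler) {
  this.handlers.push(handler);
};
Observable.prototype.set = function (value) {
  if (value === this.value) return; // no change, no event
  this.value = value;
  this.handlers.forEach(function (handler) {
    handler(value);
  });
};

var current = new Observable(null);
current.subscribe(function (value) {
  // the equivalent of clearData(); setData(computeData(value));
  console.log('data refreshed for ' + value);
});
current.set(42); // fires the handler
current.set(42); // same value: nothing happens
```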

Generalizations


As a general rule, try to take the decisions codified by if and switch statements as early as possible. The code must be readable to humans, sometimes to the detriment of performance, if performance is not essential to the functionality of your program. Avoid decision statements within other decision statements (arrow ifs, ternary operators inside ternary operators, nested switch and if statements). Split large pieces of code into small, easy to understand and properly named methods (essentially creating a lower level than your conditional statement, thus relatively taking it higher in the code hierarchy).

What's next


I know this is a lower level programming blog post, but not everyone reading my blog is a senior dev - I mean, I hope so, I don't want to sound completely stupid. I am planning some new stuff, related to my new work project, but it might take some time until I understand it myself. Meanwhile, I am running out of ideas for my 100 days of writing about code challenge, so suggestions are welcome. And thank you for reading so far :)

Intro


I want to talk today about principles of software engineering. Just like design patterns, they range from useful to YAA (Yet Another Acronym). Usually, there is some guy or group of people who decide that a set of simple ideas might help software developers write better code. This is great! Unfortunately, they immediately feel the need to assign them to mnemonic acronyms that make you wonder if they didn't miss some principles from their sets because they were bad at anagrams.

Some are very simple and not worth exploring too much. DRY comes from Don't Repeat Yourself, which basically means don't write the same stuff in multiple places, or you will have to keep them synchronized at every change. Simply don't repeat yourself. Don't repeat yourself. See, it's at least annoying. KISS comes from Keep It Simple, Silly - yeah, let's be civil about it - anyway, the last letter is there just so that the acronym is actually a word. The principle states that avoiding unnecessary complexity will make your system more robust. A similar principle is YAGNI (You Aren't Gonna Need It - very New Yorkish sounding), which also frowns upon complexity, in particular the kind you introduce in order to solve a possible future problem that you don't have.

If you really want to fill your head with principles for software engineering, take a look at this huge list: List of software development philosophies.

But what I wanted to talk about was SOLID, which is so cool that not only does it sound like something you might want your software project to be, but it's a meta acronym, each letter coming from another acronym:
  • S - SRP
  • O - OCP
  • L - LSP
  • I - ISP
  • D - DIP

OK, I was just making it look harder than it actually is. Each of the (sub)acronyms stands for a principle (hence the last P in each) and even if they have suspicious-sounding names that hint at how much someone wanted to call their principles SOLID, they are really... err... solid. They refer specifically to object oriented programming, but I am sure they apply to other types of programming as well. Let's take a look:

Single Responsibility


The idea is that each of your classes (or modules, units of code, functions, etc) should strive towards only one functionality. Why? Because if you want to change your code you should first have a good reason, and then you should know the single (DRY) point where that reason applies. One responsibility equals one and only one reason to change.

Short example:
function getTeam() {
  var result = [];
  var strongest = null;
  this.fighters.forEach(function (fighter) {
    result.push(fighter.name);
    if (!strongest || strongest.power < fighter.power) {
      strongest = fighter;
    }
  });
  return {
    team: result,
    strongest: strongest
  };
}

This code iterates through the list of fighters and returns a list of their names. It also finds the strongest fighter and returns that as well. Obviously, it does two different things and you might want to change the code to have two functions that each do only one. But, you will say, this is more efficient! You iterate once and you get two things for the price of one! Fair enough, but let's see what the disadvantages are:
  • You need to know the exact format of the return object - that's not a big deal, but wouldn't you expect to have a team object returned by a getTeam function?
  • Sometimes you might want just the list of fighters, so computing the strongest is superfluous. Similarly, you might only want the strongest fighter.
  • In order to add stuff in the iteration loop, the code has become more complex - at least when reading it - than it has to be.

Here is how the code could - and should - have looked. First we split it into two functions:
function getTeam() {
  var result = [];
  this.fighters.forEach(function (fighter) {
    result.push(fighter.name);
  });
  return result;
}

function getStrongestFighter() {
  var strongest = null;
  this.fighters.forEach(function (fighter) {
    if (!strongest || strongest.power < fighter.power) {
      strongest = fighter;
    }
  });
  return strongest;
}
Then we refactor it to something simple and readable:
function getTeam() {
  return this.fighters
    .map(function (fighter) { return fighter.name; });
}

function getStrongestFighter() {
  return this.fighters
    .reduce(function (strongest, val) {
      return !strongest || strongest.power < val.power ? val : strongest;
    });
}

Open/Closed


When you write your code the last thing you want to do is go back to it and change it again and again whenever you implement a new functionality. You want the old code to work, be tested to work, and allow new functionality to be built on top of it. In object oriented programming that simply means you can extend classes, but practically it also means the entire plumbing necessary to make this work seamlessly. Simple example:
function Fighter() {
  this.fight();
}
Fighter.prototype = {
  fight: function () {
    console.log('slap!');
  }
};

function ProfessionalFighter() {
  Fighter.apply(this);
}
ProfessionalFighter.prototype = {
  fight: function () {
    console.log('punch! kick!');
  }
};

function factory(type) {
  switch (type) {
    case 'f': return new Fighter();
    case 'pf': return new ProfessionalFighter();
  }
}

OK, this is a silly example, since I am lazily emulating inheritance in a prototype based language such as Javascript. But the result is the same: both constructors will .fight by default, but each of them will have different implementations. Note that while I cannibalized bits of Fighter to build ProfessionalFighter, I didn't have to change it at all. I also built a factory method that returns different objects based on the input. Both possible returns will have a .fight method.

The open/closed principle also applies outside inheritance-based languages, and even in classic OOP languages I believe it leads to a natural outcome: preferring composition over inheritance. Write your code so that the various modules just seamlessly connect to each other, like Lego blocks, to build your end product.
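To illustrate that last idea, here is a minimal composition sketch; the helper names (slap, punchAndKick, makeFighter) are hypothetical, not from the original example:

```javascript
// Fighting styles as plug-in functions: new behavior is added by
// writing a new style, not by modifying or extending existing types.
function slap() { console.log('slap!'); }
function punchAndKick() { console.log('punch! kick!'); }

function makeFighter(fightStyle) {
  return { fight: fightStyle };
}

var amateur = makeFighter(slap);
var professional = makeFighter(punchAndKick);
amateur.fight();      // slap!
professional.fight(); // punch! kick!
```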

Liskov substitution principle


No, the L does not come from any word they could think of, so they dragged poor Liskov into it. Forget the definition, it's really stupid and complicated. Not KISSy at all! Assume you have the base class Fighter and the more specialized subclass ProfessionalFighter. If you had a piece of code that uses a Fighter object, this principle says you should be able to replace it with a ProfessionalFighter and the piece of code would still be correct. It may not be what you want, but it would work.

But when does a subclass break this principle? One very ugly antipattern I have seen is when general types know about their subtypes. Code like "if I am a professional fighter, then I do this" breaks almost all SOLID principles and it is stupid. Another case is when the general class declares everything anyone could think of, either abstract or implemented as empty or throwing an error, and then the subclasses implement one function or another. Dude! If your fighter doesn't implement .fight, then it is not a fighter!

I don't want to add code to this, since the principle is simple enough. However, even if it is an idea that primarily applies to OOP, it has its uses in other types of programming languages. One might say it is not even related to programming languages, but to concepts. It basically says that a blue orange should still behave like an orange; otherwise don't call it a blue orange!

Interface segregation principle


This one is simple. It is basically the reverse of Liskov: split your interfaces - meaning the desired functionality of your code - into pieces that are fully used. If you have a piece of code that uses a ProfessionalFighter, but all it does is use the .fight method, then use a Fighter instead. Silly example:
public class Fighter {
  public virtual void Fight() {
    Console.WriteLine("slap!");
  }
}

public class EnglishFighter : Fighter {
  public override void Fight() {
    Console.WriteLine("box!");
  }
  public void Talk() {
    Console.WriteLine("Oy!");
  }
}

class Program {
  public static void Main() {
    EnglishFighter f = getMeAFighter();
    f.Fight();
  }
}

I don't even know if it's valid code, but anyway, the idea here is that there is no reason for me to declare and use the variable f as an EnglishFighter, if all it does is fight. Use a Fighter type. And a YAGNI on you if you thought "wait, but what if I want him to talk later on?".

Dependency inversion principle


Oh, this is a nice one! But it doesn't live in a vacuum. It is related to both SRP and OCP as it states that high level modules should not depend on low level modules, only on their abstractions. In other words, use interfaces instead of implementations wherever possible.

I wrote an entire other post about Inversion of Control, the technique that allows you to properly use and enjoy interfaces, while keeping your modules independent, so I am not going to repeat things here. As a hint, simply replacing Fighter f=new Fighter() with IFighter f=new Fighter() is not enough. You must use a piece of code that decides for you what implementation of an interface will be used, something like IFighter f=GetMeA(typeof(IFighter)).
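In JavaScript terms, that "piece of code that decides for you" could be sketched as a toy registry; the container API below is an assumption for illustration, not a real library:

```javascript
// A minimal service registry: high-level code asks for the contract
// name ('IFighter') and never constructs an implementation itself.
var container = {
  registrations: {},
  register: function (name, factory) {
    this.registrations[name] = factory;
  },
  resolve: function (name) {
    return this.registrations[name]();
  }
};

// Composition root: the only place that knows the concrete type.
container.register('IFighter', function () {
  return { fight: function () { return 'punch! kick!'; } };
});

// High-level module: depends only on the abstraction.
var fighter = container.resolve('IFighter');
fighter.fight();
```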

The principle is related to SRP because it says a boss should not be controlling or depending on what particular things the employees will do. Instead, he should just hire some trustworthy people and let them do their job. If a task depends so much on John that you can't ever fire him, you're in trouble. It is also related to OCP because the boss will behave like a boss no matter what employee changes occur. He may hire more people, replace some, fire some others, the boss does not change. Nor does he accept any responsibility, but that's a whole other principle ;)

Conclusions


I've explored some of the more common acronyms from hell related to software development and stuck mostly to SOLID, because... well, you have to admit, calling it SOLID works! Seriously now, following principles such as these (choose your own, based on your own expertise and coding style) will help you a lot later on, when the complexity of your code explodes. In a company there is always that one poor guy who knows everything and can't work well because everybody else keeps asking him why and how something is the way it is. If the code is properly separated along solid principles, you just need to look at it to understand what it does, and the efficiency of your changes stays high. Let that unsung hero write their own code and reach software Valhalla.

SRP keeps modules separated, OCP keeps code free of the need for modifications, LSP and ISP make you use the basest of classes or interfaces, reinforcing the separation of modules, while DIP helps you sharpen the boundaries between modules. Taken together, the SOLID principles are obviously about focus: they allow you to work on small, manageable parts of the code without needing knowledge of other parts or having to make changes to them.

Keep coding!