Intro  

  One of the most frequently asked questions about the novel coronavirus is "what is the mortality rate of the disease?" And most medical professionals and statisticians will decline to answer it, because so far the data is not consistent enough to tell. Countries report things differently, have different testing rates and methods and probably even different definitions of what it means to have died of or recovered from Covid-19. Giving a perfectly informed answer is impossible, which is why the people we look to for answers avoid the question, while people who are not professionals are giving all of the possible answers at the same time. I am not a professional, so I can give my answer and you can either trust my way of thinking or not.

  In order to compute mortality with absolute certainty we need several things:

  • the pandemic has to be over
  • the number of deaths from SARS-Cov-2 has to be exactly known
  • the number of people infected with SARS-Cov-2 has to be exactly known

 Then the answer would be the total number of dead over the total number of infected people (100*dead/infected). During the epidemic, though, people tend to use the numbers they have.

Panic!

 The most commonly used formula is: current number of deaths over the total number of infected so far (100*current deaths/current infected). This formula is wrong! Imagine there were only two people, A and B. Both get infected at the same time and no one else gets infected after that. A will die from the disease in a week, B will recover in two weeks. If we use the formula above, for the first week the mortality of the disease is 0, then it becomes 50% after a week and it stays that way until the end. If B died, too, the mortality would be computed as 0, then 50, then 100%. As you see, not much accuracy. In the case of Covid-19 the outcome of an infection is not known for three weeks or even more (see below).

  But let's use it anyway. Today, the 31st of March 2020, this would be 100*37832/786617 which is 4.8%. This is a large number. Applied to the entire world population, it would yield 362 million deaths.

  Accuracy comes from the finality of an outcome. A dead man stays dead, a recovered one stays recovered. A better formula is the current number of deaths over the sum of the current numbers of deaths and recoveries (100*current deaths/(current deaths+current recovered)). This eliminates the uncertainty around people who are still sick and have yet to either die or live. If we knew with certainty who is infected and who is not, who died from Covid-19 and who recovered, this would actually be pretty accurate, wouldn't it?

  If we use it on Covid-19 today, we have  100*37832/(37832+165890), which gives us an 18.57% mortality rate. "What!? Are you insane? That's a fifth of all people!", you will shout, with immediate thoughts of a fifth of the world population: 1.4 billion people.

  So which number is the correct one? Neither. And it all comes from the way we define the numbers we use.

Reality

  OK, I kind of tricked you, I apologize. I can't answer the question of mortality, either. My point is that no one can. We can estimate it, but as you have seen, the numbers will fluctuate wildly. And the numbers above are not the extremes of the interval, not by a long shot. Let's explore that further while I explain why numbers derived from bad data cannot be good data.

  What are the types of data that we have right now?

  • deaths
  • infected (cases)
  • recovered
  • tested
  • total population of an area

  And we can't trust any of these.

Cases/infected

  One cannot confirm an infection without testing, which is something that most countries (and especially the most populous ones) are really lacking. We know from small countries like Iceland that when you test a significant part of the population, half of the infections found show no symptoms. The other half are, on average, experiencing only mild symptoms. The number of severe cases that can lead to death is relatively small. The takeaway here is that many more people may be infected than we think, making the actual mortality rate very, very small.

  So, can we use the Iceland data to compute mortality? Yes we can, for the part of the population of Iceland that was tested. We can't use that number for anything else, and there are still people there who have not been infected. What is this significant percent of the population that was tested? 3%. 3% is considered large right now. Iceland has a population of 360,000, less than the neighbourhood I live in. 3% of that is about 10,800 people. The largest number of tests has been performed in South Korea, a staggering 316,664. That's still only 0.6% of the total population.

  But, using formula number 2, mortality from the Iceland data would be 100*2/(2+157), which is 1.26%. Clearly this will get skewed quite a lot if one more person dies, so we can't really say anything about that number other than: yay! smaller than 4.8%!

  We can try on South Korean data: 100*162/(162+5408) which gives us a 2.9% mortality rate.

  Yet, assuming we did test a lot of people, wouldn't that give us useful data to make an accurate prediction? It would, only at this time "testing" is something of a confusing term.

Testing

  What does testing mean? There are two types of tests: those based on antibodies and those based on RNA (molecular tests). One tells you that the body is fighting or has fought an infection; the other tells you that you have virus material in your system. The first is faster and cheaper; the other takes more time but is more accurate. And within each type, some tests are better than others.

  There were reports that people who were infected before and recovered got reinfected later. This is not how that works. The immune system works by recognizing the intruder and creating antibodies to destroy it. Once your body has killed the virus, you still keep the antibodies and the knowledge of what the intruder is. You are, for at least a number of months, immune to the virus. The length of time for this immunity depends not on how forgetful your immune system is, but on how much the virus mutates and can trick it into believing it is not an intruder. As far as we know, SARS-Cov-2 is relatively stable genetically. That's good. So the reason why people were reported to get reinfected was that they were either false positives when they were detected or false negatives when they were considered recovered or false positives when they were considered reinfected.

  Does it mean we can't trust testing at all and it's all useless? No. It means that at the beginning, especially when everybody was panicking, testing was unreliable. We can start trusting tests now, after they have been used and their efficacy determined in large populations. Remember, though, that the pandemic pretty much started in January, and for many countries only recently. It takes time for this reality to become the new normal and for people and technology to work in a "proper way".

  Testing is also the official way of determining when someone has recovered.

Recovered

  It is surprisingly difficult to find out what "recovered" means. There are also some rules, implemented in a spotty way by the giants of the Internet, which determine which web pages are "not fake news", but I suspect that the system filters a lot of the legitimate ones as well. A Quora answer to the question says "The operational definition of “recovered” is that after having tested positive for the virus (you have had it) you test negative twice, 3 days apart. If you test negative, that means that no RNA (well, below a certain threshold) from the virus is found in a nasal or throat swab."

  So even if you feel perfectly fine after having been sick, you still have to test negative, then test negative again three days later. That means that in order to determine that one person has recovered, two tests have to be used, tests that will then not be available to detect infection in people with symptoms or in people who have died. I believe this delays that kind of determination for quite a while.

  In other words, probably the number of recovered is way behind the number of infected and, obviously, of deaths. This means the mortality has to be lower than whatever we can compute using the currently reported values for recovered people.

Deaths

  Surely the number of dead is something we can trust, right? Not at all. When someone dies, their cause of death is determined in very different ways depending on where they died. In situations where morgues are overflowing with the dead of the pandemic and doctors are much better used for the sick, you cannot trust the official cause of death! Moreover, a death certificate can list multiple causes of death: on average about two or three, some up to 20. And would you really use tests on the dead rather than on the sick or recovered?

 Logically, it's difficult to assign a death to a clear little category. If a person dies of a heart attack and tests positive for SARS-Cov-2, is it a heart attack? If someone dies of hunger because they lost their job during the pandemic, is it a Covid-19 death or not? If an 87 year old dies, can you really say which of the dozen medical conditions they were suffering from was the culprit?

 So in some situations the number of deaths associated with Covid-19 will be overwhelmingly exaggerated. This is good. It means the actual mortality rate is lower than what we can determine right now.

Population in an area

  Oh, come on! We know how many people there are in an area. How can you not trust this? Easy! Countries like China and Italy and others have implemented quarantine zones. That means the total population of Italy or China is irrelevant, as the density of the contagion differs between regions of the same territory. Even without restrictive measures, geography and local culture, as well as local genetic predispositions, will work towards skewing any of the relevant values.

  Yeah, you can trust the number of people in small areas, especially isolated ones like Iceland, but then you can't use those numbers in statistics, because they are not significant. As the virus spreads and more and more people get infected, we will be able to trust the values computed over the entire world a little more, but it will all be estimation that we can't use in specific situations.

Infectiousness

  An important factor that will affect the total number of deaths, rather than the percentage of dead over infected, is how infectious Covid-19 really is. Not all people exposed to SARS-Cov-2 will get infected. Some of them will be genuinely immune, some will simply be resistant enough not to catch the virus. I need a medical expert to tell me how large this factor is. I personally did not find enough information about this type of interaction (or lack thereof) and I suspect it is a small percentage. However, most pessimistic scenarios assume 80% of the world population will get infected at some point. That implies 20% who will not. If anyone knows more about this, please let me know.

Mortality trends

  There is another thing that has to be mentioned. By default, viruses go through a process of attenuation when moving through large populations. This is the process by which people with milder symptoms retain more mobility, therefore infecting more people with their milder strain, while sicker people tend to "fall sick" and maybe die, thereby locking the more aggressive strains away from the general population. In this context (and this context only) quarantines and social distancing are actually bad, because they limit the mobility of the milder strains as well as of the aggressive ones. In extreme cases, preventing people from interacting, but then taking the severely sick to hospitals, where they infect medical personnel and other people, makes the disease stronger.

  However, statistically speaking, I expect the mortality of the virus to slowly decrease in time, meaning that even if we could compute the mortality rate exactly right now, it will be different later on.

  What about local authorities and medical administrators? How do they prepare for this if they can't pinpoint the number of sick and dead? The best strategy is hope for the best while preparing for the worst. Most politicians, though, live in a fantasy world of their own making where words and authority over others affect what and how things are done. There is also the psychological bias of wanting to believe something so much that you start believing it is probable. I am looking at you, Trump! Basically that's all he does. That being said, there are a lot of people who are doing their job and the best they can do is to estimate based on current data, then extrapolate based on the evolution of the data.

  So here is another piece of data, or rather information, that we have overlooked: the direction in which current data is moving. One of the most relevant is what is called "the peak of the contagion". This is the moment when, for whatever reasons, the number of infected and recovered has reached a point where the virus has difficulties finding new people to infect. The number of daily infections starts to decrease and, if you can't assign this drop to some medical or administrative strategy, you can hope it means the worst is behind you. Mind you, the number of total infected is still rising, but slower. I believe this is the one you should keep your attention on. While the number of daily infected people increases in your area, you are not out of the woods yet. 

Mechanism

  Statistical studies closely correlate the life expectancy of a population with the death rate in that population. In other words there isn't a specific mechanism that only kills old people, for example. In fact, this disease functions like a death probability amplifier. Your chances to die increase proportionally to how likely you were to die anyway. And again, statistically, it doesn't apply to you as an individual. The virus attacks the lungs and depending on your existing defenses, it is more or less successful. (To be fair, the success of a virus is measured in how much it spreads, not how badly it sickens its host. The perfect virus would show no negative symptom and increase the health or survival chances of its host. That's how vampires work!)

  I have no doubt that there are populations that have specific mutations that make them more or less susceptible to SARS-Cov-2, but I think that's not statistically relevant. I may be wrong, though. We can't know right now. There are reports of Italian regions in the middle of the contagion that have no sick people. 

Conclusion

  We cannot say with certainty what the mortality rate is right now. We can't even estimate it properly without going into horrible extremes. For reasons that I cannot ascertain, WHO Director-General Dr Tedros Adhanom Ghebreyesus announced on the 3rd of March a mortality rate estimated at 3.4%. That is immense and I personally believe it was irresponsible to make such a statement at that time. But what do I know? A UK study released today calculates a 1.4% fatality rate.

  My personal belief, and I have to emphasize that it is a belief, no matter how informed, is that the mortality of this disease will be much less than 1%. By mortality I mean: people who would not have died otherwise, but died of viral pneumonia or of organ failure due to SARS-Cov-2 overwhelming that very organ, over the total number of people who have been exposed to the virus and whose immune systems have fought it. Even so, that is still huge. Assuming a rate of infection of 80%, as many scenarios are considering right now, that's up to 0.8% of all people dying, meaning 60 million people. No matter what proportion of that number actually dies, it will still be a large number.

  The fact that most of these people would have been on their way anyway is not really a consolation. There will be loved grandparents, people that had various conditions and were happily carrying on with their first world protected lives, believing in the power of modern medicine to keep them alive. I really do expect that the average life expectancy, another statistic that would need thousands of words to unpack, will not decrease by a lot. In a sense, I believe this is the relevant one, though, in terms of how many years of life have been robbed from people by this virus. It, too, won't be easy to attribute. How many people will die prematurely because of losing their job, not getting medical attention when they needed it, getting murdered by people made insane by this whole thing, etc?

  Also, because the people who were more likely to die died sooner, or even got medical attention that they would otherwise not have gotten, because pollution dropped, cars killed fewer people, etc., we might actually see a rise in the life expectancy statistic immediately after the pandemic ends.

  Bottom line: look for the daily number of newly infected people and rejoice when it starts consistently decreasing. After the contagion, try to ascertain the drop in average life expectancy. The true effects of this disease, not only in terms of mortality, will only become visible years after the pandemic ends.

  Update: mere days after I've written this article, BBC did a similar analysis.

 Intro

  When I was a kid, computers didn't have multithreading, multitasking or even multiple processes. You executed a program and it was the only program running. Therefore the way to do, let's say, user key input was to check again and again if there was a key in a buffer. To give you a clearer view of how bonkers that was: if you try something similar in Javascript, the page dies. Why? Because the processing power needed to look for a value in an array is minuscule, so you basically have a loop that executes hundreds of thousands or even millions of times a second. The CPU will try to accommodate that and run at full power. You will have a do-nothing loop that takes the entire capacity of the CPU for the current process. The browser will have problems handling legitimate page events, like you trying to close it! Ridiculous!

Bad solution

Here is what this would look like:

class QBasic {

    constructor() {
        this._keyBuffer=[];
        // add a global handler on key press and place events in a buffer
        window.addEventListener('keypress', function (e) {
            this._keyBuffer.push(e);
        }.bind(this));
    }

    INKEY() {
        // remove the first key in the buffer and return it
        const ev = this._keyBuffer.shift();
        // return either the key or an empty string
        if (ev) {
            return ev.key;
        } else {
            return '';
        }
    }
}

// this code will kill your CPU and freeze your page
const qb = new QBasic();
while (qb.INKEY()=='') {
 // do absolutely nothing
}

How, then, should we port the original QBasic code (below) into Javascript?

WHILE INKEY$ = ""

    ' DO ABSOLUTELY NOTHING

WEND

Best solution (not accepted)

Of course, the best solution is to redesign the code and rewrite everything. After all, this is thirty-year-old code. But let's imagine that, in the best practice of porting something, you want to find the first principles of translating QBasic into Javascript, then automate it. Or that, even if you do it manually, you want to preserve the code as much as possible before you start refactoring it. I do want to write a post about the steps of refactoring legacy code (and as you can see, sometimes I actually mean legacy, as in "bestowed upon us by our forefathers"), but I wanted to write something tangible first. Enough theory!

Interpretative solution (not accepted, yet)

Another solution is to reinterpret the function into a waiting function, one that does nothing until a key is pressed. That would be easier to solve, but again, I want to translate the code as faithfully as possible, so this is a no-no. However, I will discuss how to implement this at the end of this post.

Working solution (slightly less bad solution)

Final solution: do the same thing, but add a delay, so that the loop doesn't use the entire pool of CPU instructions. Something akin to Thread.Sleep in C#, maybe. But, oops! In Javascript there is no function that freezes execution for a period of time.

The only thing related to delays in Javascript is setTimeout, a function that indeed waits for the specified interval of time, but then executes the function that was passed as a parameter. It does not pause execution; whatever you write after setTimeout will execute immediately. Enter async/await, new in Javascript ES8 (or ECMAScript 2017), and we can use the delay function as we did when exploring QBasic PLAY:

function delay(duration) {
    return new Promise(resolve => setTimeout(resolve, duration));
}

Now we can wait inside the code with await delay(milliseconds);. However, this means turning the functions that use it into async functions. As far as I am concerned, the pollution of the entire function tree with async keywords is really annoying, but it's the future, folks!

Isn't this amazing? In order to port to Javascript code that was written in 1990, you need features that were added to the language only in 2017! If you wanted to do this in Javascript ES5 you couldn't do it! The concept of software development has changed so much that it would have been impossible to port even the simplest piece of code from something like QBasic to Javascript.

Anyway, now the code looks like this:

function delay(duration) {
    return new Promise(resolve => setTimeout(resolve, duration));
}

class QBasic {

    constructor() {
        this._keyBuffer=[];
        // add a handler on every key press and place events in a buffer
        window.addEventListener('keypress', function (e) {
            this._keyBuffer.push(e);
        }.bind(this));
    }

    async INKEY() {
        // remove the first key in the buffer and return it
        const ev = this._keyBuffer.shift();
        // return either the key or an empty string
        if (ev) {
            return ev.key;
        } else {
            await delay(100);
            return '';
        }
    }
}

const qb = new QBasic();
// INKEY is now async, so it has to be awaited
// (inside an async function or a module with top-level await)
while (await qb.INKEY() == '') {
 // do absolutely nothing
}

Now, this will work by delaying for 100 milliseconds when there is nothing in the buffer. It's clearly not ideal. If one wanted to fix a problem with a loop running too fast, then the delay should have been added to the loop, not to the INKEY function. Used like this, you will get some inexplicable delays in code that wants fast key input. It is, however, the only way we can implement an INKEY function that behaves as closely to the original as possible. Which is like hiring a 90 year old guy to walk to a letter box, check if there is any character in the mail, then come back and bring it to you. True story, it's the original implementation of the function!

Interpretative solution (implementation)

It would have been much simpler to implement the function in a blocking manner. In other words, when called, INKEY would wait for a key to be pressed, then exit and return that key when the user presses it. We again would have to use Promises:

class QBasic {

    constructor() {
        this._keyHandler = null;
        // instead of using a buffer for keys, keep a reference
        // to a resolve function and execute it if it exists
        window.addEventListener('keypress', function (e) {
            if (this._keyHandler) {
                const handler = this._keyHandler;
                this._keyHandler = null;
                handler(e.key);
            }
        }.bind(this));
    }

    INKEY() {
        const self = this;
        return new Promise(resolve => self._keyHandler = resolve);
    }
}


const qb = new QBasic();
// again, this needs an async context (async function or top-level await)
while ((await qb.INKEY()) == '') { // or just await qb.INKEY(); instead of the loop
 // do absolutely nothing
}

Amazing again, isn't it? The loops (pun not intended) through which one has to go in order to force a procedural mindset on an event based programming language.

Disclaimer

Just to make sure, I do not recommend this style of software development; this is only related to porting old school code and is more or less designed to show you how software development has changed in time, from a period before most of you were even born.

Intro

  This is part of a series that I plan to build on as time goes on: technical interview questions, dissected and laid bare for both interviewers and interviewees. You can also check out the previous one: Interview question: all items in table A but not in B.

  This question is a little more complex and abstract at the same time. The post is written more for interviewers this time; as a candidate, if you didn't know the concepts in it, you need to read the links. This also is not a question with a single correct answer. It comes after asking about Dependency Injection as a whole and the candidate answering correctly.

  I expect senior developers to be able to go through this successfully. It is not a test for junior developers, although, depending on their previous experience, juniors might be able to go through it and seniors be forced to reason through it.

The test

Bonus introduction question: why use DI at all? Expected answers would be separation of concerns and testability. 

  The question has two steps.

Step 1: given the following code in a legacy application, improve it to use Dependency Injection:

public class SomeClass {
  public List<Item> GetItems(int days, string filter) {
    var service = new ItemService();
    return service.GetItems()
      .Where(i => i.Time >= DateTime.Now.AddDays(-days))
      .ToList();
  }
}

Bonus questions:

  • has the candidate worked with LINQ before?
  • what does the code do?

Now, this question is as much about programming knowledge as it is about attention. There are three irregularities that can attract that attention:

  • the most obvious: the service is being instantiated by calling the constructor
    • the interviewer expects at the very least for the candidate to notice it and suggest moving the instantiation of the service in the constructor of the SomeClass class and inject it instead of using new
    • there is the possibility of passing the service as a method parameter as well; suggest that the signature of the method should remain the same, to get around it. Anyway, one can discuss the idea of moving all dependencies to the constructor and/or the calling methods and get insight into the way the candidate is thinking.
  • the unexplained string filter in the signature of the method
    • the interviewer can either tell the candidate that it will become relevant later, because it will, or that this is a method that implements an interface, to which a more snarky candidate might reply that SomeClass implements nothing (bonus for attention)
  • the use of DateTime.Now
    • it is a static property that gives a different output every time so it should be taken into account for Dependency Injection or at least for unit testing

By now you have filtered out the majority of failing candidates and you are left with someone who used or at least understands DI, can read and understand code, has used or at least understood basic LINQ and you have also gauged their level of attention to detail.

If the candidate only talked about the service and decided to create an interface for ItemService and add it as a parameter to the constructor of SomeClass, ask them to write a unit test for the method, and explain to them that testability is one of the goals of DI, if you didn't cover this already:

  • bonus: see if they do unit testing or at least understand the concept
  • if they do attempt to write the unit test, ask them what would happen if the test were run on different days

The expected result of this part is that the candidate understands the need of abstracting DateTime.Now. It is interesting to note how they intend to abstract it, since they do not have access to the code and it is a static method/property to abstract.

Whether the candidate figured it out by themselves or it was explained to them, the expected answer is that DateTime.Now is abstracted by creating an IDateTimeService interface that is implemented as a wrapper over DateTime.

At this point the code should look like this:

public class SomeClass {
  private IItemService _itemService;
  private IDateTimeService _dateTimeService;

  public SomeClass(IItemService itemService, IDateTimeService dateTimeService) {
    _itemService = itemService;
    _dateTimeService = dateTimeService;
  }

  public List<Item> GetItems(int days, string filter) {
    return _itemService.GetItems()
      .Where(i => i.Time >= _dateTimeService.Now.AddDays(-days))
      .ToList();
  }
}

Also, the candidate should be asked to write a unit test, just to see they know how, for bonus points. Note if the candidate understands isolation for unit testing or does something that would work but be silly like generate the test data based on current date or duplicate the code logic in the test instead of working with static data.

Step 2: tell the candidate that the legacy code they need to fix looks a bit different:

public class SomeClass {
  public List<Item> GetItems(int days, string filter) {
    var service = new ItemService(filter);
    return service.GetItems()
      .Where(i => i.Time >= DateTime.Now.AddDays(-days))
      .ToList();
  }
}

The ItemService now receives the filter as the parameter. Ask them what to do in this case.

The expected answer is a factory injected instead of the service, which will then be used to instantiate an IItemService with a parameter. Bonus discussion about software patterns can be inserted here.

There are other valid answers here, like using the DI container itself as a factory for the service, which might provoke interesting discussions in itself, like weighing constructor injection versus service provider in dependency injection and whether hybrid solutions might be better.

Bonus question: what if you cannot control the code of ItemService in step 1 and it does not implement an interface or a base class?

  • warning, this might give a hint for the second part of the interview, so use it at the end 
  • correct answer 1: use the class as the type of the parameter and let the dependency container decide how to instantiate it
  • correct answer 2: use a wrapper over the class that implements the interface and proxies to the instance methods.

Conclusion

This test told me a lot more about candidates than just their dependency injection knowledge. We got to talking, I became aware of how their minds worked, and I was both pleasantly surprised when they came up with alternate solutions that kind of worked and a bit irked that they went that far and didn't see the superior option. Most of the time this made me think about the differences between what I would answer and what they did, and that resulted in interesting discussions that enriched not only their experience, but also mine.

Dependency injection, separation of concerns and unit testing are important concepts for any modern developer. I hope this helps devs evolve and interviewers find the best candidates... at least until all of them get to read my blog.

  I didn't want to write about this. Not because of a false sense of security, but because everybody else talked about it. They all have opinions, most of them terribly wrong, but for me to join the fray and tell the world what I think is right would only put me in the same category as them. So no, I abstained. However, there are some things so wrong, so stupidly incorrect, that I can't maintain this silence. So let's begin.

  "The flu" and "a cold" are not scientific terms; they are popular ones, and they all refer to respiratory infectious diseases caused by a variety of viruses, sometimes bacteria, or a combination thereof. Some of them affect us on a seasonal basis, some do not. Rhinoviruses are the ones most often associated with the common cold and they are seasonal. However, a whopping 15% of what is commonly called "a cold" comes from coronaviruses, thus named because of their crown-like shape. Influenza viruses, what we would normally call "flu", are a completely different type of virus. In other words, Covid-19 is more a common cold than a flu, but it's not the seasonal type. Stop the wishful thinking that it will all go away with the summer. It will not. Other famous coronavirus diseases are SARS and MERS. The SARS epidemic lasted until July; the MERS epidemic spread just fine in the Middle Eastern summer weather. This will last. It will last for months from the moment I am writing this post. This will be very important for the next section of the post.

  Also, there is something called the R-naught (R0), the basic reproduction number: the average number of people to whom one infected person spreads the virus. It predicts, annoyingly accurately, how a disease is going to progress. This virus has an R0 probably twice as high as that of the influenza virus, which we all get, every fucking year. Draw your own conclusions.

  The only reason we got rid of SARS and MERS is that they are only infectious after the symptoms are apparent, and the symptoms are pretty damn apparent. Covid-19 is very infectious even before the first cough, when people feel just fine. Surely masks will help, then? Not unless they are airtight. Medical masks are named so because medics use them in order not to cough, spit or breathe into a patient, maybe during surgery. The air that the doctor breathes comes in from the sides of the mask. So if you get sick and you wear the mask, it will help the people who haven't already met you while you had no symptoms yet.

  Washing your hands is always good. It gets rid of all kinds of crap. But the primary medium of spreading Covid-19 is air, so you can wash your hands as often as you'd like, it helps very little with that. Not touching your face does little good, either. There is a scenario in which someone coughs into their hand, touches something, then you touch it, then you pick your nose. Possible, so it's not all worthless, it's just statistically insignificant. What I am saying is that washing your hands and not touching your face decreases the probability by only a very small amount. That being said, masturbation does increase the activity of your immune system, so be selective when you touch yourself.

  The idea that old people are the only ones affected is a myth. Age statistically correlates with harsher symptoms because it also correlates with negative health conditions. In other words, people with existing health conditions will be the most affected. This includes smokers, obese people, people with high blood pressure or asthma and, of course, fucking old people. The best way to prepare for the SARS-Cov-2 virus (the latest "official" name) is to stay in good health. That means healthy food, less alcohol, no smoking and keeping a healthy weight. So yes, I am fucked, but at least I will die happy... oh, no, I am out of gin!!

  Medically, the only good strategy is to develop a vaccine as soon as possible and distribute it everywhere. It will lead, quicker and with fewer casualties, to the inevitable end of this pandemic: the moment when more people are immune than not. This will happen naturally after we all get infected and get healthy (or die). All of the news about people who got sick again after having recovered is an artefact of defective testing. All of it! Immunity does not work like that. Either you got rid of it and your body knows how to defend itself, or you never had it, or you had something else, or somebody tested you wrong.

  That being said, fuck all anti-vaxxers. You are killing people, you assholes!

  Personally, the best you can do is keep hydrated and eat in a balanced way. You need proteins and zinc and perhaps vitamin C (not sure about that). Warm bone broths are good. Zinc you get from red meat and plant seeds. There was a report that drinking green tea is negatively correlated with influenza infections (a different virus, though). And don't start doing sport now if you haven't been doing it already; you can't get the pig fat one day before Christmas. Sport actually decreases the efficiency of your immune system.

  This is the end of the medical section of this post. There is nothing else. Probiotics won't help, Forsythia won't help, antibiotics will certainly not help. The only thing that fights the virus right now is your immune system, so just help it out. If there were a cure for the common cold, you wouldn't catch one year after year.

  But it's not over. Because of people. When people panic, bad things happen. And by panic I mean letting their emotions get the better of them; I mean people who stop thinking, not zombie hordes, although sometimes the difference is academic.

  Closing schools and workplaces and public places has one beneficial effect: it makes the infection rate go down. It doesn't stop the spread, it doesn't stop the disease, it just gives more time to the medical system to deal with the afflicted. But at the same time, it closes down manufacturing, supply chains, it affects the livelihood of entire categories of people. So here is where governments should step in, to cover financially the losses these people have to endure. You need money for medical supplies and for keeping healthy. Think of it as sponsoring immune systems.

  The alternative, something we are seeing now in paranoid countries, is closing down essential parts of national economies with no compensation. This is the place and time for an honest cost vs. gain analysis. Make sure the core of your nation is functioning. This is not one of those moments when you play dead for a few minutes and the bear leaves (or falls down next to you because he really likes playing that game). This is something that needs to work for months, if not a year or more. This is not (and never was) a case of stopping a disease, but of managing its effects. Some people are going to die. Some people are going to get sick and survive. Some lucky bastards will cough a few times and go on with their day. Society and the economic system that sustains it must go on, or we will have a lot more problems than a virus.

  Speaking of affected professions, the most affected will be medical personnel. Faced day in and day out with SARS-Cov-2 infections, they will get infected in larger numbers than the regular population. Yes, they will be careful, they will wear masks and suits and whatever, but it won't help. Not in a statistical way, the only way we must think right now. It's a numbers game. It's no longer about tragedies, it's about statistics, as Stalin used to say. And these people are not regular people. They've been in school for at least a decade by the time they can properly work in a hospital where Covid-19 patients are admitted. You lose one of these people and you can't easily replace them. Especially in moron countries like my own, where the medical system is practically begging people to leave and work in other countries. The silver lining is that, at the end of the outbreak, there will probably be a lot more medical people available, since they went through the disease and emerged safe and immune. But there is a lot of time between now and then.

  Closing borders is probably the most idiotic thing one can do, with perhaps the exception of countries that had real problems with immigration before. Unless sick people are crowding your borders to take advantage of your medical system, closing borders is just dumb. The virus is already in; the only thing you are stopping is the flow of supplies to handle the disease. Easter is coming. People from all over the world will just move around chaotically to spend this religious holiday with their families. It will cause a huge spike in the number of sick people and will probably prompt some really stupid actions taken by governments all over the place. One could argue that religion is dumb at all times, but right now it makes no difference. It's just an acceleration of a process that is already inevitable, Easter or no Easter.

  Statistics again: look at the numbers and you will see that countries get an increase of about 30% in infected cases every day. It's an exponential curve. It doesn't care about your biases, your myths, your hopes, your judging. It just grows. China will get new infection cases as soon as travelling restrictions relax. Consider the ridiculous situation in which a country somehow protected itself against infection while the whole of the world went through a global pandemic. It doesn't even matter. It's not even healthy, as sooner or later the virus will affect them alone. The best anyone can do is manage the situation, bottleneck it so that the medical system can cope with it.
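To get a feel for what a 30% daily increase means, here is a minimal sketch. The starting count is made up for illustration; only the compounding logic matters:

```python
# Illustrative only: compound a hypothetical 30% daily growth in case counts.
import math

def project_cases(initial_cases, daily_growth, days):
    """Projected case count after `days` of compound growth."""
    return initial_cases * (1 + daily_growth) ** days

start = 1000   # hypothetical starting case count, not real data
rate = 0.30    # 30% more cases each day

print(round(project_cases(start, rate, 7)))   # ~6275: more than 6x in a week
print(round(project_cases(start, rate, 30)))  # ~2.6 million in a month

# At this rate, cases double roughly every two and a half days.
doubling_days = math.log(2) / math.log(1 + rate)
print(round(doubling_days, 1))  # ~2.6
```

That doubling time is why the curve doesn't care about opinions: every few days of inaction multiplies the problem, whatever anyone believes.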

  Do you know what the most important supply chain is in this situation? Medical supplies. A lot of countries get these from China and India. Because they are cheaper. So they can sell them to you at ten times the price and make those immense profits that generated the name Big Pharma. It's not a conspiracy theory, it's common knowledge. What do you think happens when you close your borders with China and India?

  In this situation, the global economy will stagger. It will be worse than the 2008 crisis. But while that was a crisis generated by artificial and abstract concepts that affected the real economy, that of people working for other people, this one comes as real as it gets, where people can't work anymore. That means less money, fewer resources, scarcity of some resources, less slack to take care of the old and sick in your family. It's a lose-lose situation: the people most affected by the pandemic will suffer either because others cannot care for them, or because others give them the disease while caring for them, since much more effort and human contact is needed to get the supplies required. Now, some countries can somehow handle that, having a healthy transport infrastructure and care system, but others, which can barely handle the normal quantities of sick people who come to hospitals themselves, will never be able to cover, even if they wanted to, the effort of delivering supplies to the people already affected.

  So does that mean you have to go to the supermarket and get all the supplies you might need for months to come? I am afraid to say that it does. The reasonable way to handle this is for the governments of the world to ensure supply and financial support for everybody. Then people wouldn't need to assault shops to get the last existing supplies. If you can trust your government to do that, by all means, trust that you will always have a nearby shop to sell you the goods you need to stay alive and healthy. But I ask you this: if you go to the pharmacy and buy their entire stock of some medicine that you might need, and then you hear that your neighbor, the person you greeted every day on your way to work, died because they couldn't get that medicine, what then? What if you hear they need the medicine now? Will you knock at their door and offer it to them? Maybe at five times the price? Or maybe for free? What if you are the neighbor?

  And then you hear that some country has isolated the virus and is making a vaccine. Oh, it's all over, you think. But before they even start mass producing it, they need to test it. People will die because of how overcautious and bureaucratic the system is. People will die when corners are cut. People will die either way. It will take time either way. This thing will be over, but not soon. And after they make it, you will still have to get it. That means supply chains and money to buy things.

  Bottom line: it's all about keeping systems going. In your body, the immune system has to be working to fight the disease. In your country, the economy must be working in order to handle the effects of the disease. Fake cures and home remedies are just as damaging as false news of the crisis not being grave, getting over soon or going away by itself.

  Here is a video from a medical professional that is saying a lot of the things I've listed here:

[youtube:E3URhJx0NSw]

  One more thing: consider how easy it was for this panic to lead to countries announcing national emergencies, a protocol that gives extraordinary powers to the government. A few dead here, a few sick there, and suddenly the state has the right to restrict your movement, to detain you unconditionally, to close borders, to censor communications. Make sure that when this is over, you get every single liberty back. No one is going to return your freedom out of their own good will.

Summary

Once you've finished with the foundation, it doesn't matter who you call to architect your house or fix the problems you might have. Businesses and software are exactly like that. Think hard about your foundation; it will save you a lot of effort later on. I've worked in a lot of different places and was surprised to see they didn't know there are other ways of doing things. Here I distill the foundational principles one needs for a good software solution, and maybe not just software:

  • Separation of concerns - processes, components and people should be able to function in isolation. If you can test that they work when everything else is shut down, you're good. People should only do what they are good at. Components should do only one thing.
  • Cleanliness - keep your code readable rather than efficient, your process flow intuitive, roles and responsibilities clear. Favor convention over documentation, document anything else.
  • Knowledge sharing - Allow knowledge to be free and alive in your organization by promoting knowledge sharing, collaborative documentation, searchability.

Intro

  I am not the greatest of all developers or architects, but I am good. I know how things should be and what they should do in order to reach a goal. When people ask me about software, my greatest gaps are around specific software tools or some algorithm, not the general craft. That is for several reasons: I enjoy my work, I was really enthusiastic in my youth and sponged up everything I could in order to become better, and I've worked in many different types of organizations, so I know multiple ways in which people have tried to do this. As I grow older, the last one may be my most valuable skill, but I have yet to find an employer who realizes this.

  You see, what I've learned from my career so far is that most businesses live in a bubble. Used not only to learning software development while working on some task, but also to networking with other people in the craft from all across the business, I kind of expected every other developer to be like that. Or at least the senior devs, the dev leads and architects, maybe some managers. But no, most of the time people are stuck in their little position and never stray from it. They may invoke work-life balance, or they are just burned out, or they just don't care enough, but more often they haven't even realized what they are missing. And that's the bubble. A bubble is not a prison; it's just a small area in which people stay voluntarily and never leave.

  This is why game development is so different from business app development. That is why development in an administrative business with a small software department is so different from development in a software company. It is why development in small shops is so different from that in large software companies. Sometimes people, really smart people, have worked for a long time in only one of these ecosystems and they only know that one. They can hardly conceive of different ways to do things.

  So this is why I am writing this post: to talk about the foundations of things, the part that separates one from the other, forever, no matter what you do afterwards. And this applies to business and people organization, but especially well to large software projects. You see, if you start your solution badly, it will be bad until you rewrite it. Just like a building with a weak foundation. It doesn't matter if you bring the best workers and architects afterwards; they will only build a wonderful house that falls down when the foundation fails. If you want to make a good thing, first plan it through and build the greatest foundation you can think of and afford. It's much easier to change the roof than the foundation.

  And you wouldn't believe how many times I've been put in the situation of having to fix the unfixable. "Hey, you're smart, right? We started this thing a million years ago, we thought we would save some money, so we got a bunch of junior developers to do it, and it worked! But then it didn't anymore. Fix it!" And I have to explain it to them: you can't scale duct tape. You can only go so far with a thing held together by paper clips, chewing gum and the occasional hero employee with white hair and a hunched back, and in his twenties.

  Now of course, to an entitled senior engineer like me, any software evokes the instinct to rewrite it in their own image. "Also, get some juniors to carve my face into that hill over there!". Sometimes it's just a matter of adapting to the environment, working with what you have. But sometimes you just have to admit things are beyond salvation. Going forward is just a recipe for disaster later on. It's the law of diminishing returns, when the original returns were small to begin with. And you wouldn't believe how many people agree with that sentiment, then declare there is nothing to be done. "They won't give us the budget!" is often muttered. Sometimes it's "We only need this for a few years. After that we start from scratch", and in a few years some "business person" makes a completely uninformed cost and gain analysis and decides that building up from the existing code is cheaper than starting over. But don't worry, they will suuuurely start from scratch next time.

  Sometimes the task of rewriting something is completely daunting. It's not just the size of it, or the feeling that you've achieved nothing if you have to start over to do the same thing. It's the dread that if you make the same thing and it takes less effort and less money and it works better then you must be inferior. And it's true! You sucked! Own it and do better next time. It's not the same thing, it's version 2.0. You now have something that you couldn't have dreamed of when building version 1.0: an overview. You know what you need, not what you think you will need. Your existing project is basically the D&D campaign you've played so many times that it has become a vast landscape, rich with characters and story. You've mapped it all down.

  This post is an overview. Use it! To be honest, reaching this point is inevitable, there will always be a moment when a version 2.0 makes more sense than continuing with what you have. But you can change how terrible your software is when you get to it. And for this you need the right foundation. And I can teach you to do that. It's not even hard.

Separation of Concerns

  Most important thing: separation of concerns. Components should not be aware of each other. Compare a Lego construction to a brick and mortar one: one you can disassemble and reassemble, adding to it whatever you need; the other you have to tear down and rebuild from zero. Your foundation needs to allow and even enable this. Define clear boundaries that completely separate the flow into independent parts. For example, a job description is an interface. It tells the business that if the person occupying a job leaves, another can come and take their place. The position is clearly defined as a boundary that separates a human being from their role in the organization.

  Software components, too, need to be abstracted as interfaces in order to be able to swap them around. And I don't mean the exact concept of interface from some programming languages. I mean that as loosely as one can. A web service is an interface, since it abstracts business logic from user interface. A view model is an interface, as it abstracts the user interface logic from its appearance. A website is an interface, as it performs a different task than another that is completely separated. If you can rewrite an authorization component in isolation and then just replace the one you have and the application continues to work as before, that means you have done well.
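As a minimal sketch of that authorization example (all names here are invented for illustration, not taken from any real codebase), the rest of the application depends only on an abstraction, so the concrete implementation can be swapped without touching anything else:

```python
# Illustrative sketch: the application knows only the Authorizer boundary,
# never the concrete implementation behind it.
from abc import ABC, abstractmethod

class Authorizer(ABC):
    """The interface: everything else in the app talks only to this."""
    @abstractmethod
    def is_allowed(self, user: str, action: str) -> bool: ...

class AllowListAuthorizer(Authorizer):
    """One possible implementation; could be replaced by OAuth, LDAP, etc."""
    def __init__(self, allowed):
        self.allowed = allowed
    def is_allowed(self, user: str, action: str) -> bool:
        return user in self.allowed

class Application:
    # The authorizer is injected; rewriting it requires no change here.
    def __init__(self, auth: Authorizer):
        self.auth = auth
    def delete_record(self, user: str) -> str:
        return "deleted" if self.auth.is_allowed(user, "delete") else "denied"

app = Application(AllowListAuthorizer({"alice"}))
print(app.delete_record("alice"))  # deleted
print(app.delete_record("bob"))    # denied
```

Swap `AllowListAuthorizer` for any other class honoring the same boundary and `Application` keeps working unchanged; that is the whole point of the exercise.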

  Separation of concerns should also apply to your work process and the people in it. A software developer should not have to do much outside developing software. A manager should just manage. People should only be in meetings that bring value and should only be in those that actually concern them. If the process becomes too cumbersome, split it up into smaller pieces, hire people to handle each of them. Free the time of your employees to do the job they are best suited for. 

  One important value you gain from isolating components is testing. In fact, you can use testing as a force towards separation of concerns. If you can test a part of your application in isolation (so all other parts do not need to be working for it), then you have successfully separated concerns. Consider a fictional flow: you get on the bus, you get to the market, you find a vegetable stand, you buy a kilo of tomatoes, you get back to the bus, you come home. Now, if you can successfully test your ability to get on a bus, any bus, to get anywhere the bus is going, in order to test that you can buy tomatoes from the market you just test you can find the stand and buy the tomatoes. Then, if you can test that you can buy anything at any type of stand, you only need to test your ability to find a stand in a market.
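The tomato flow above can be sketched as a unit test with fakes (every name here is invented for illustration): to test the buying step, you don't need a real bus or a real market, just a stand-in for the stand.

```python
# Illustrative: test "buy tomatoes" without needing the bus or the market.
from unittest.mock import Mock

def buy_tomatoes(market, kilos):
    """Find a vegetable stand in the given market and buy tomatoes there."""
    stand = market.find_stand("vegetables")
    return stand.buy("tomatoes", kilos)

# The market is faked, so this exercises only our own logic, in isolation.
fake_stand = Mock()
fake_stand.buy.return_value = "1kg of tomatoes"
fake_market = Mock()
fake_market.find_stand.return_value = fake_stand

result = buy_tomatoes(fake_market, 1)
assert result == "1kg of tomatoes"
fake_market.find_stand.assert_called_once_with("vegetables")
fake_stand.buy.assert_called_once_with("tomatoes", 1)
print("buy_tomatoes works in isolation")
```

The bus, the trip and the rest of the day get their own equally small tests; none of them has to run for this one to pass.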

  It seems obvious, right? It feels that way to me. Even now, writing this post, I am thinking I sound like an idiot trying to seem smart, but I've seen droves of developers who don't even consider this. Businesses that are not even aware of this as a possibility. "We have testing teams to make sure the application is working end to end, we don't need unit testing" or "We have end to end automated testing. For each new feature we write new tests". When you hear this, fight it. Their tests, even if written correctly and maintained perfectly, will take as long as one needs to get on a bus and go to the market. And then the next test will take as long as one needs to get on a train and get to the airport. And so on. End to end testing should exist and if you can automate it, great, but it should be sparse. It should function like an occasional audit, not as the thing that supports your confidence in the solution working correctly.

  So go for testable, not for tests. Tests often get a bad rap because someone like me comes and teaches a company to write tests, then leaves, and the people in the company either skip testing occasionally or change bits of the application and don't bother to update the tests. This leads to my next point: clean code.

Cleanliness

  Cleanliness applies to everything, again. The flow of your solution (note that I am being as general as possible) needs to be as clear as possible, at every level. In software this usually translates into readable code, and it builds up from there. Someone looking at the code should be able to instantly and easily understand what it does. Some junior developers want to write their code as efficiently as possible. They just read somewhere that this method is faster than that one and want to put it in code. But it boils down to a cost analysis: if they shave one second off a process you run ten times a day, they save one hour per year; if another developer then has to spend more than one hour understanding what the code does, the gain means nothing.

  Code should be readable before being fast. Comments in code should document decisions, not explain what is going on. Comments should display information from another level than the code's. Component names, object names, method names, variable names should be self explanatory. Configuration structures, property names, property values, they should be intuitive and discoverable.
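A tiny, made-up illustration of both points: the two functions below compute the same thing, but only one can be read without stopping, and its comment documents a decision rather than narrating the code.

```python
# Cryptic but "efficient": what does this check, and why does it exist?
def f(x):
    return x != 0 and x & (x - 1) == 0

# Readable: the name says what, the comment says why.
def is_power_of_two(count: int) -> bool:
    # Decision: batch sizes must be powers of two so they align with the
    # (hypothetical) storage page size. The bit trick itself is googleable;
    # the reason we require it is not, so that is what gets documented.
    return count != 0 and count & (count - 1) == 0

print(is_power_of_two(8))   # True
print(is_power_of_two(12))  # False
```

Both run in the same nanoseconds; only one of them costs the next reader nothing.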

  And there is another aspect to cleanliness. Most development environments have some automated checks for your code. You can add more and even make your own. This results in lists of errors, warnings and notifications. At the flow level, this might translate into people complaining about various things, some in key positions, some not. Unit tests, once you have them, might be passing or failing. It is important to clean that shit up! Do not ignore warnings or even notifications. If you think a warning is wrong, find a way to make it go away, not by ignoring it, but by replacing the complaining component, or by marking it explicitly in the code as a false positive and documenting why. Run all the tests and make sure they are green, or remove the tests you believe are not important (this should rarely happen). The reason is simple: in a sea of ignored warnings you will not see the one that matters.
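A small, made-up illustration of "make it go away, don't ignore it": suppress one known warning, in the narrowest possible scope, with the reason written right next to it (the deprecated function here is hypothetical).

```python
# Illustrative: silence one specific, known-benign warning explicitly,
# instead of filtering out all warnings globally and drowning the real ones.
import warnings

def legacy_parse(text):
    # Hypothetical old function that warns about its own deprecation.
    warnings.warn("legacy_parse is deprecated", DeprecationWarning)
    return text.strip()

with warnings.catch_warnings():
    # Suppressed on purpose: the replacement parser is planned but not ready,
    # so until then this warning is known noise. Narrow scope, documented why.
    warnings.simplefilter("ignore", DeprecationWarning)
    result = legacy_parse("  hello  ")

print(result)  # hello
```

Outside that `with` block, every other warning still surfaces, which is exactly the point: the sea stays empty so the one warning that matters is visible.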

  To be perfectly clear: by clean code I don't mean code that follows design patterns, I don't mean documentation comments on every property and method, I don't mean color coded sections (although that's nice). What I mean is code clean enough to read without cringing or having to look in ten other places to figure out what it does. If your hotdog falls on that code you should be comfortable enough to pick it up and continue eating it.

  Cleanliness should and must be applied to your work process. If the daily meeting is dirty (many people talking about unrelated things) then everybody is wasting time. If the process of finishing a task is not clear, you will have headless chickens instead of professionals trying to complete it. If you have to ask around where to log your hours or who is responsible for a specific job that you need done in order to continue, you need to clean that process. Remove all superfluous things, optimize remaining ones. Remember separation of concerns.

  Cleanliness extends to your project folder structure, your solution structure, your organizational structure. It all has to be intuitive. If you declare a principle, it should inform every query and decision, with no exception. "All software development people are at the fifth floor! Ugh... all except Joe". What if you need Joe? What if you don't know that you need Joe, but you still need him? Favor convention over configuration/documentation, document everything else. And that leads me to the final point: knowledge sharing.

Knowledge Sharing

  To me, knowledge sharing was always natural. In small companies there was always "that guy" who would know everything and couldn't work at all because people came to ask him things. In medium companies there was always some sort of documentation of decisions and project details. In large companies there were platforms like Confluence where people would share structured information, like the complete description of tasks: what they are about, how decisions were made, who is responsible for what, how they were split into specific technical tasks, what problems arose, what the solutions were, etc. And there were always your peers that you could connect to and casually talk about your day.

  Imagine my surprise to find myself working in places where you don't know what anyone else is doing, where you don't know what something is or what it is supposed to do, where there are no guidelines outside random and out-of-date Powerpoint files, where I am alone with no team, brought in for problems that need strong decisions to fix, but no one is willing to make them and I have no idea who should even attempt to. I solve a common problem, I want to share the solution, and there is no place to do that. People are not even in the same building as me. Emails come and go and no one has time to read them.

  Knowledge should live freely in your company. You should be able to search for anything and find it, be able to understand it, contribute to it, add more stuff. It should be more natural for the people in your company to write a blog post than go for coffee and complain. It should be easier to find and consume information from people that left the company than to get it from colleagues at the desk next to you. And maybe this cannot be generalized to all departments, but it is fucking important: people in the office should never need to open Microsoft Office (or any similar product suite). I can't stress that enough.

  You should not need printed documents, so no need for Word. Excel files are great for simple data tasks, but they are not specific. If you need something done repeatedly and you use an Excel sheet for it, it is probably better to build a tool for it. Don't reinvent the wheel, but do use the best tool for the job. And there are better and more modern tools than Powerpoint files, but I will allow their use because, in the context of knowledge sharing, everyone should feel free and confident enough to make a presentation for the team. My tenet still stands, though: the Powerpoint file should only be used in a presentation. Hardly anyone else should need to open it. I mean, there would be a video of the presentation available, right?

Vision

  Imagine a park. It is sunny, birds are singing, there are people walking on hardened dirt walkways, cyclists biking on their asphalted bike lanes, benches everywhere, each with a small notepad attached that people can just pick up to read or to write their own notes. Places in the park are clearly indicated with helpful arrows: children's playground, hotdog stand, toilet, football field, bar, ice rink. Everything is clean, everybody is doing what they do best, all is good. You feel hungry, you see the arrow pointing towards the hotdog stand, you walk there calmly and ask for one. The boy there gives you a bun and a wurst. He is new, but he has a colleague who knows exactly how much mustard and ketchup to put on the hotdog. He even asks you if you want curry on it.

  Imagine a park. It is sunny, birds are singing. Some walkways start off as asphalt, then continue as dirt. Some stop suddenly or end in a ditch. There is a place that serves hotdogs next to a toilet. You have to ask around to find out where it is. You get lost several times, as some people don't know either, but they still come with an opinion, or they are just misinformed. You get tired, but you can't sit on a bench; they are all taken and there are so few of them. You have to look both ways several times before you walk to the stand, because of the cyclists. You stand in line, then order a hotdog. The boy there gives you a bun with a wurst in it. You ask for mustard, but the boy is new and it takes him a while to find it, after looking for some paper that tells him where it is. You have to dodge a football that was coming at your head. Someone flushes the toilet.

  I don't remember why I thought this would be a good book to read. Perhaps because it was one of those "gothic novels" and I had just read one that I liked a lot. The Owl Service is a short novel, but it took me ages to finish. Whenever I had time to read/listen to it I always found something else to do. I think Alan Garner wanted to do right by the story, which is a reimagining of a traditional Welsh legend, but it ended up an opaque and pretentious mess, with characters you cannot stand. If only the writing had called to me. Garner is not a bad writer, but his style didn't capture my attention. I had to make an effort to stay in the story and not let my mind wander.

  The plot revolves around a valley in Wales where a British family owns property and where the locals are treated as uneducated peasants. The family comes to spend the summer and weird things start to happen. But they are either completely random or, when they turn out to be some sort of possession or empowerment, there is always someone nearby to break the spell or destroy things in fear and righteous anger, which made it all rather boring. At no point was there anyone saying "Oh, that's peculiar, let's dig into it!" or "Hey, I can make books fly by themselves, let's see if I can solve world hunger or space exploration".

  The worst part was the characters, all entitled twats. Every single one of them believes they can order others around, force things upon them or do and say whatever the hell they want. And I mean everyone, including the Welsh help. If they don't insult you, force things upon you or treat you like scum just because you are different, they smack you upon the head with indignation for not having done what was rudely ordered of you. And that's the maid doing it!

  Bottom line: as a scholar of Welsh legend and the literary interpretation of myth in British literature I... hell, no! Just leave this book be! It's just bad.

I know I am shooting myself in the foot here, but, to paraphrase some people, staying quiet doesn't help anyone. I've come to love Dev.to, a knowledge sharing platform aimed at software developers, because it actually promotes blogging and dissemination of information. It doesn't do enough against clickbait, but it's great so far. So, hungry for some new dev stuff, I opened up the website only to find it spammed with big colorful posters and posts supporting female devs. It was annoying, but it got me thinking.

  I like women in software! I too can honestly say I support them. I've always done so. I worked with them, mentored them, learned from them, worked for them, hired them. I want them to get what they are due, just like any other person: quiet, happiness, money, respect, understanding. I support their right to tell me when they hate (or love) something I do or say and I am totally against assholes who would prey on them or belittle them. Not because they are women, but because they are human, and no one should stand for stupid little people who only think of themselves and have a chip on their shoulder.

  And yes, women need more support than men, because traditionally they did not have it. For them it is an uphill battle to fit into communities that contain few females. They have to butt in, they have to push and struggle, and we need to understand their underdog status and protect them through that. Not because they are some fantasy creature, or perpetual victims, or some other thing, but because they are people. This applies to them, to minorities, to majorities, to every single person around you. I would feel the same about some guy not getting hired because he is too muscular as about some woman who won't get a job because she's plain looking.

  So ask yourself, are you really supporting women, or are you just playing a game? Are you the one shouting loudly in the night "Night time! Everybody go to sleep!"? Are you protecting women or singling them out as something different that must be treated differently? Are you actually thinking of people or just doing politics? Because if you decide to annoy devs on behalf of women, you'd better do a good job supporting them for real.

  I had tried to install the latest version of Windows 10 (at this time 1909, the September 2019 version), but it had failed with a blue screen. I kind of ignored it, because I thought they would probably fix it, or maybe it's because my laptop is new, and so on. But recently I got this warning in Settings, under Windows Update: "You’re currently running a version of Windows that’s nearing the end of service. We recommend you update to the most recent version of Windows 10 to get the latest features and security improvements".

  So I tried to install it again. Same thing: a blue screen with a creepy smiley telling me it has failed to boot, with a stop code of "MEMORY MANAGEMENT" at FIRST_BOOT when BOOTing. Well, no shit, Sherlock! I tried to install it using the Media Creation Tool (which I recommend when you have Windows Update issues because you can download the update and save it into an ISO, so you can try it again and again without redownloading everything) and I got a more specific error: "0xC1900101 - 0x30017". What does it mean? No one knows, but it led me down a rabbit hole of people having a lot of different issues.

  This post will not discuss every single problem out there - you can search for yourself. I only wrote it because, in some obscure corner of a forum, I saw one person saying he had solved it by uninstalling Diskcryptor, a piece of software that I had also installed, but never used. Once I uninstalled it, the update went without a hitch.

  So, summary:

  • if you have Diskcryptor, uninstall it. You can re-install it later.
  • download the Media Creation Tool for your version of Windows
  • tell it to download in an ISO file
  • mount the ISO and install from there
  • check what error you get
  • Google for it until you find someone having had the same specific problem

Intro

  This post will take you on an adventure through time and sound. It will touch the following software development concepts:

  • await/async in Javascript
  • named groups in regular expressions in Javascript
  • the AudioContext API in Javascript
  • musical note theory
  • Gorillas!

  In times immemorial, computers were running something called the DOS operating system and almost the entire interface was text based. There was a way to draw things on the screen, by setting the values of pixels directly in the video memory. The sound was generated on a "PC speaker", which was little more than a small speaker connected to a power port and which you had to make work by handling "interrupts". And yet, since that is when I had my childhood, I remember so many weird little games and programs from that time with a lot of nostalgic glee.

  One of these games was Gorillas, where two angry gorillas would attempt to murder each other by throwing explosive bananas. The player would have to enter the angle and speed and also take into account a wind speed that was displayed as an arrow on the bottom of the screen. That's all. The sounds were ridiculous, the graphics really abstract and yet it was fun. So, as I was remembering the game, I thought: what would it take to make that game available in a modern setting? I mean, the programming languages, the way people thought about development, the hardware platform, everything has changed.

  In this post I will detail the PLAY command from the ancient programming language QBASIC. This command was used to generate sound by instructing the computer to play musical notes on the PC speaker. Here is an example of usage:

PLAY "MBT160O1L8CDEDCDL4ECC"

  This would play the short song at the beginning of the Gorillas game. The string tells the computer to play the sound in the background, at a tempo of 160, in the first octave, with notes of an eighth of a measure: CDEDCD, then end with quarter measure notes: ECC. I want to replicate this with Javascript, first because it's simpler to prototype and second because I can make the result work in this very post.

Sound and Music

  But first, let's see how musical notes are generated in Javascript, using the audio API. First you have to create an AudioContext instance, with which you create an Oscillator. On the oscillator you set the frequency and then... after a while you stop the sound. The reason why the API seems so simplistic is that it works by creating an audio graph of nodes that connect to each other and build on each other. There are multiple ways to generate sound, including filling a buffer with data and playing that, but I am not going to go that way.

  Therefore, in order to PLAY in Javascript I need to translate concepts like tempo, octaves, notes and measures into values like duration and frequency. That's why we need a little bit of musical theory.

  In music, sounds are split into domains called octaves, each holding seven notes that, depending on your country, are called either Do, Re, Mi, Fa, Sol, La, Si or A, B, C, D, E, F and G or something else. Then you have the half notes, so called sharp or flat notes: A# is half a note above A and A♭ is half a note below A. A# is the same as B♭. For reasons that I don't want to even know, the octaves start with C. Also, the notes themselves are not equally spaced and the octaves are not of the same size, in terms of frequency. Octave 0 starts at 16.35Hz and ends at 30.87Hz, octave 1 ranges between 32.70Hz and 61.74Hz. In fact, each octave spreads over twice as much frequency space as the one before, and each note has twice the frequency of the same note on the lower octave.

  In a more numerical way, octaves are split into 12 semitones: C, C#, D, E♭, E, F, F#, G, G#, A, B♭, B. Note (heh heh) that there are no half notes between B and C, or between E and F. The frequency of each of these notes is 2^(1/12) times the one before. Therefore one can compute the frequency of a note as:

Frequency = key note * 2^(n/12), where the key note is a note that you use as a base and n is the note-distance (in semitones) between the key note and the note you want to play.

  The default key note is A4, or note A from octave 4, at 440Hz. That means B♭ has a frequency of 440*1.059463 = 466.2.
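To make the formula concrete, here is a small helper (the function name is mine, not part of the original code) that computes a frequency from the semitone distance to A4:

```javascript
// Frequency of a note that is `semitonesFromA4` semitones away from A4 (440Hz).
// Positive distances go up in pitch, negative ones go down.
function noteFrequency(semitonesFromA4) {
  const A4 = 440;
  return A4 * Math.pow(2, semitonesFromA4 / 12);
}
```

noteFrequency(1) yields the ~466.16Hz computed above for B♭, and noteFrequency(12) yields exactly 880Hz, confirming that each octave doubles the frequency.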

  Having computed the frequency, we now need the duration. The input parameters for this are: tempo, note length, mode and the occasional "dot":

  • tempo is the number of quarter measures in a minute
    • this means that if the tempo is 120, a quarter measure is 60000 milliseconds divided by 120, so 500 milliseconds, and a full measure is 2000 milliseconds
  • note length - the length of a note relative to a measure
    • these are usually fractions of a measure: 1, 1/2, 1/4, 1/8, 1/16, etc
  • mode - this determines the general speed of playing the melody
    • as defined by the PLAY command, you have:
      • normal: a note plays for 7/8 of a default duration
      • legato: a note plays for its entire duration
      • staccato: a note plays for 3/4 of a default duration
  • dotted note - this means a specific note will be played for 3/2 of the defined duration for that note

  This gives us the formula:

Duration = note length * mode * (60000 * 4 / tempo) * dotDuration, where note length is a fraction of a measure (a quarter note is 1/4) and dotDuration is 3/2 for dotted notes and 1 otherwise
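As a sanity check, the formula can be sketched as a small function (my own naming, not from the original code), with the note length expressed as the usual denominator (4 for a quarter note):

```javascript
// Duration in milliseconds of a note, following the formula above.
// tempo: quarter measures per minute; noteLength: 1, 2, 4, 8... (the denominator);
// mode: 7/8 for normal, 1 for legato, 3/4 for staccato; dotted: 3/2 longer if true.
function noteDuration(tempo, noteLength, mode, dotted) {
  const measureMs = 60000 * 4 / tempo; // a full measure in milliseconds
  const dotFactor = dotted ? 3 / 2 : 1;
  return mode * measureMs / noteLength * dotFactor;
}
```

At a tempo of 120 a full measure is 2000 milliseconds, so a quarter note in normal mode lasts 7/8 * 2000 / 4 = 437.5 milliseconds.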

Code

  With this knowledge, we can start writing code that will interpret musical values and play a sound. Now, the code will be self explanatory, hopefully. The only thing I want to discuss outside of the audio related topic is the use of async/await in Javascript, which I will do below the code. So here it is:

class QBasicSound {

    constructor() {
        this.octave = 4;
        this.noteLength = 4;
        this.tempo = 120;
        this.mode = 7 / 8;
        this.foreground = true;
        this.type = 'square';
    }

    setType(type) {
        this.type = type;
    }

    async playSound(frequency, duration) {
        if (!this._audioContext) {
            this._audioContext = new AudioContext();
        }
        // a 0 frequency means a pause
        if (frequency == 0) {
            await delay(duration);
        } else {
            const o = this._audioContext.createOscillator();
            const g = this._audioContext.createGain();
            o.connect(g);
            g.connect(this._audioContext.destination);
            o.frequency.value = frequency;
            o.type = this.type;
            o.start();
            await delay(duration);
            // slowly decrease the volume of the note instead of just stopping so that it doesn't click in an annoying way
            g.gain.exponentialRampToValueAtTime(0.00001, this._audioContext.currentTime + 0.1);
        }
    }

    getNoteValue(octave, note) {
        const octaveNotes = 'C D EF G A B';
        const index = octaveNotes.indexOf(note.toUpperCase());
        if (index < 0) {
            throw new Error(note + ' is not a valid note');
        }
        return octave * 12 + index;
    }

    async playNote(octave, note, duration) {
        const A4 = 440;
        const noteValue = this.getNoteValue(octave, note);
        // A4 has note value 4 * 12 + 9 = 57, so the exponent is relative to 57
        const freq = A4 * Math.pow(2, (noteValue - 57) / 12);
        await this.playSound(freq, duration);
    }

    async play(commandString) {
        const reg = /(?<octave>O\d+)|(?<octaveUp>>)|(?<octaveDown><)|(?<note>[A-G][#+-]?\d*\.?)|(?<noteN>N\d+\.?)|(?<length>L\d+)|(?<legato>ML)|(?<normal>MN)|(?<staccato>MS)|(?<pause>P\d+\.?)|(?<tempo>T\d+)|(?<foreground>MF)|(?<background>MB)/gi;
        let match = reg.exec(commandString);
        let promise = Promise.resolve();
        while (match) {
            let noteValue = null;
            let longerNote = false;
            let temporaryLength = 0;
            if (match.groups.octave) {
                this.octave = parseInt(match[0].substr(1));
            }
            if (match.groups.octaveUp) {
                this.octave++;
            }
            if (match.groups.octaveDown) {
                this.octave--;
            }
            if (match.groups.note) {
                const noteMatch = /(?<note>[A-G])(?<suffix>[#+-]?)(?<shorthand>\d*)(?<longerNote>\.?)/i.exec(match[0]);
                if (noteMatch.groups.longerNote) {
                    longerNote = true;
                }
                if (noteMatch.groups.shorthand) {
                    temporaryLength = parseInt(noteMatch.groups.shorthand);
                }
                noteValue = this.getNoteValue(this.octave, noteMatch.groups.note);
                switch (noteMatch.groups.suffix) {
                    case '#':
                    case '+':
                        noteValue++;
                        break;
                    case '-':
                        noteValue--;
                        break;
                }
            }
            if (match.groups.noteN) {
                const noteNMatch = /N(?<noteValue>\d+)(?<longerNote>\.?)/i.exec(match[0]);
                if (noteNMatch.groups.longerNote) {
                    longerNote = true;
                }
                noteValue = parseInt(noteNMatch.groups.noteValue);
            }
            if (match.groups.length) {
                this.noteLength = parseInt(match[0].substr(1));
            }
            if (match.groups.legato) {
                this.mode = 1;
            }
            if (match.groups.normal) {
                this.mode = 7 / 8;
            }
            if (match.groups.staccato) {
                this.mode = 3 / 4;
            }
            if (match.groups.pause) {
                const pauseMatch = /P(?<length>\d+)(?<longerNote>\.?)/i.exec(match[0]);
                if (pauseMatch.groups.longerNote) {
                    longerNote = true;
                }
                noteValue = 0;
                temporaryLength = parseInt(pauseMatch.groups.length);
            }
            if (match.groups.tempo) {
                this.tempo = parseInt(match[0].substr(1));
            }
            if (match.groups.foreground) {
                this.foreground = true;
            }
            if (match.groups.background) {
                this.foreground = false;
            }

            if (noteValue !== null) {
                const noteDuration = this.mode * (60000 * 4 / this.tempo) * (longerNote ? 3 / 2 : 1);
                const duration = temporaryLength
                    ? noteDuration / temporaryLength
                    : noteDuration / this.noteLength;
                const A4 = 440;
                const freq = noteValue == 0
                    ? 0
                    : A4 * Math.pow(2, (noteValue - 57) / 12); // offset relative to A4 = note value 57
                const playPromise = () => this.playSound(freq, duration);
                promise = promise.then(playPromise);
            }
            match = reg.exec(commandString);
        }
        if (this.foreground) {
            await promise;
        } else {
            promise;
        }
    }
}

function delay(duration) {
    return new Promise(resolve => setTimeout(resolve, duration));
}

One uses the code like this:

var player = new QBasicSound();
await player.play('T160O1L8CDEDCDL4ECC');

Note that you cannot start playing the sound directly; you need to wait for a user interaction first. It's an annoying rule meant to suppress annoying websites that would start playing sound on load. And here is the result (press multiple times on Play for different melodies):

Javascript in modern times

There are two concepts that were used in this code that I want to discuss: named regular expression groups and async/await. Coincidentally, both are concepts familiar from C# that have crept into the modern Javascript specifications, helped along by Microsoft developers contributing to the language.

Named groups are something that appeared in ES2018. It's a feature I have used with joy in .NET and hated to be without in other languages. Look at the difference between the original design and the current one:

// original design
var match = /(a)bc/.exec('abcd');
if (match && match[1]) { /*do something with match[1]*/ }

// new feature
const match = /(?<theA>a)bc/.exec('abcd');
if (match && match.groups.theA) { /*do something with match.groups.theA*/ }

There are multiple advantages to this:

  • readability for people revisiting the code
  • robustness in the face of changes to the regular expression
    • the index might change if new groups are added to it
  • the code aligns with the C# code (I like that :) )

My advice is to always use named groups when using regular expressions.

Another concept is await/async. In .NET it is used to hide complex asynchronous interactions in the code and, with the help of the compiler, to manage all the tasks that are running at the same time. In C# that means polluting the code with async keywords on all levels, as await can only be used inside async methods, and Javascript has inherited the same requirement.

While in .NET the await/async system runs over Task<T> methods, in Javascript it runs over Promises. Both are abstractions over work that is being done asynchronously.

A most basic example is this:

// original design
getSomethingAsync(url,function(data) {
  getSomethingElseAsync(data.url,function(data2) {
    // do something with data2
  }, errorHandler2);
},errorHandler1);

// Promises
getSomethingAsync(url)
  .then(function(data) {
    return getSomethingElseAsync(data.url);
  })
  .then(function(data2) {
    // do something with data2
  })
  .catch(errorHandler);

// async/await
try {
  var data = await getSomethingAsync(url);
  var data2 = await getSomethingElseAsync(data.url);
  // do something with data2
} catch(ex) {
  errorHandler(ex);
}

You see that the await/async way looks like synchronous code, you can even catch errors. await can be used on any function that returns a Promise instance and the result of it is a non-blocking wait until the Promise resolves and returns the value that was passed to the resolve function.

If you go back to the QBasicSound class, at the end, depending on whether the sound is in the foreground or background, the function either awaits the promise or... just lets it run. You might also notice that I've added a delay function at the end of the code which uses setTimeout to resolve a Promise. Here is what is actually going on:

// using await
console.log(1);
await delay(1000).then(()=>console.log(2));
console.log(3);
// this logs 1,2,3


// NOT using await
console.log(1);
delay(1000).then(()=>console.log(2));
console.log(3);
// this logs 1,3,2

In the first case, the Promise constructed by a one second delay followed by logging 2 is awaited, meaning the code waits for its result before logging 3. In the second case, the logging of 2 still happens after a one second delay, but the code does not wait for it, therefore 3 is logged immediately and 2 comes after.

What sorcery is this?! Isn't Javascript supposed to be single threaded? How does it work? Well, consider that in the delay function, the resolve function will only be called after a timeout of one second. When executed, it starts the timeout, then reaches the end of the function. It has not been resolved yet, so it passes control back to the engine, which uses it to execute other things. When the timeout is fired, the engine takes back control, executes the resolve function, then passes control back. All of this is invisible to the user, who gets the illusion of multithreaded behavior.

Already some standard out of the box APIs are async, like fetch. In order to get an object from a REST API that is called via HTTP the code would look like this:

// fetch API
let response = await fetch('/article/promise-chaining/user.json');
let user = await response.json();

Conclusion

I spent an entire day learning about sounds and writing code that would emulate QBASIC code from a billion years ago. Who knows, maybe my next project will be to port the entire Gorillas game to Javascript. Now one can lovingly recreate the sounds of one's childhood.

Other references:

Gorillas.BAS

QBasic/Appendix

Generate Sounds Programmatically With Javascript

Musical Notes

Gorillas game online

  We are all racists. We belittle dinosaurs for getting extinct, we pump our chests and declare that we are the highest pinnacle of evolution and they were inferior, failed experiments of nature; we, mammals, are clearly the superior product. Yet they existed and flourished and ruled every ecosystem on Earth for hundreds of millions of years. Even today, the number of species of birds, the direct descendants of dinosaurs, is more than double the number of species of mammals. Kenneth Lacovara starts his book with a similar assumption: Einstein was a schmuck! Every one of his great achievements means nothing because, in the end, Einstein died. If that idea is ridiculous for him, how come we still use it for dinosaurs?

  Why Dinosaurs Matter is a short book, one in the TED Books series, and it pretty much adds detail to Lacovara's TED talk, like all of the TED books. Frankly, I am not very happy with the series, as it often adds little to the ideas summarized in the talks themselves. Some people spend a lot of effort to summarize existing books into 15-minute bite-size media and TED books do the opposite, adding fat onto already fleshed out ideas. That doesn't mean this book is bad. It is well written and it has a lot of useful information, but it felt disjointed, like a combination of an opinion piece and a history book of discovered fossils. It gets its point across, but that's about it.

  And the point is that we can learn a lot from dinosaurs: from how they spread around the world, adapted to all kinds of environments and the biological innovations they brought about along the way. We can learn from their apparently absolute dominion and their immediate and humiliating downfall. Being at the top of the food chain is not only a matter of prideful boasting, but also a fragile spot with multiple dependencies. Once the natural order is disrupted, the top of the pyramid is the first to topple.

  Bottom line: it is a nice introductory book in the world of dinosaurs, but not more than that. It's short enough to read on a long train ride or plane flight and it can be easily read by a child or teenager.

  On the SQLite reference page for the WITH clause there is a little example of solving a Sudoku puzzle. Using SQL. I wanted to see it in action and therefore I've translated it into T-SQL.

  You might think that there is a great algorithm at play, something that will blow your mind. I mean, people have blogged about Sudoku solvers to hone their programming skills for ages and they have worked quite a lot, writing lines and lines of code to show how clever they were. And this is SQL - it works, but how do you do something complex in it? But no, it's very simple, very straightforward and also performant. Kind of a letdown, I know, but it pretty much generates all possible solutions and only selects the valid ones, using CTEs (Common Table Expressions).

  Here is the translation, followed by some explanation of the code:

DECLARE @Board VARCHAR(81) = '86....3...2...1..7....74...27.9..1...8.....7...1..7.95...56....4..1...5...3....81';
WITH x(s,ind) AS
(
  SELECT sud,CHARINDEX('.',sud) as ind FROM (VALUES(@Board)) as input(sud)
  UNION ALL
  SELECT
	CONVERT(VARCHAR(81),CONCAT(SUBSTRING(s,1,ind-1),z,SUBSTRING(s,ind+1,81))) as s,
	CHARINDEX('.',CONCAT(SUBSTRING(s,1,ind-1),z,SUBSTRING(s,ind+1,81))) as ind
  FROM x
  INNER JOIN (VALUES('1'),('2'),('3'),('4'),('5'),('6'),('7'),('8'),('9')) as digits(z)
  ON NOT EXISTS (
            SELECT 1
              FROM (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9)) as positions(lp)
             WHERE z = SUBSTRING(s, ((ind-1)/9)*9 + lp, 1)
                OR z = SUBSTRING(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
                OR z = SUBSTRING(s, (((ind-1)/3) % 3) * 3
                        + ((ind-1)/27) * 27 + lp
                        + ((lp-1) / 3) * 6, 1)
	)
	WHERE ind>0
)
SELECT s FROM x WHERE ind = 0

  The only changes I made to the original code were to extract the unsolved puzzle into its own variable and to change the puzzle values. I also replaced the obnoxious, but still valid, comma (aka CROSS JOIN) notation with the clearer INNER JOIN syntax. Here is the breakdown of the algorithm, as it were:

  • start with an initial state of the unsolved puzzle as a VARCHAR(81) string and the first index of a dot in that string, representing an empty slot - this is the anchor part
  • for the recursive member, join the current state with all the possible digit values (1 through 9) and return the strings with the first empty slot replaced by all valid possibilities and the position of the next empty slot
  • stop when there are no more empty slots
  • select the solutions (no empty slots)

  It's that simple. And before you imagine it will generate a huge table in memory or that it will take a huge time, worry not. It takes less than a second (a lot less) to find the solution. Obviously, resource use increases exponentially when the puzzle doesn't have just one solution. If you empty the first slot (. instead of 8) the number of rows is 10 and it takes a second to compute them all. Empty the next slot, too (6) and you get 228 solutions in 26 seconds and so on.

  The magical parts are the recursive Common Table Expression itself and the little piece of code that checks for validity, but the validity check is quite obvious as it is the exact translation of the Sudoku rules: no repeated digits on rows, columns or square sections.

  A recursive CTE has three parts:

  • an initial query that represents the starting state, often called the anchor member
  • a recursive query that references the CTE itself, called the recursive member, which is UNIONed with the anchor
  • a termination condition, to tell SQL when to end the recursion

  For us, we started with one unsolved solution, we recursed on all possible valid solutions for replacing the first empty slot and we stopped when there were no more empty slots.

  CTEs are often confusing because the notation seems to indicate something else to a procedural programmer. You imagine doing this without CTEs, maybe in an object oriented programming language, and you think of a huge buffer that just keeps increasing, where you have to remember where you left off so you don't process the same partial solution multiple times, where you have to clean the data structure so it doesn't get too large, and so on. SQL, though, is at heart a declarative programming language, very close to functional programming. It will not only take care of the recursion, but will also filter the rows by the final condition of no empty slots while (and sometimes before) it makes the computations.
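To see what the engine is conceptually doing, here is the same algorithm written procedurally in Javascript - a hypothetical sketch of mine, using the same 81-character string representation and the same index arithmetic as the SQL version:

```javascript
// Solve a Sudoku given as an 81-character string, with '.' marking empty slots.
// Like the CTE: find the first empty slot, branch on every digit that doesn't
// clash with its row, column or 3x3 square, and recurse until no slots remain.
function solve(s) {
  const ind = s.indexOf('.');
  if (ind < 0) return s; // no empty slots: this is a solution
  for (let z = 1; z <= 9; z++) {
    const digit = String(z);
    let valid = true;
    for (let lp = 1; lp <= 9 && valid; lp++) {
      const row = Math.floor(ind / 9) * 9 + lp - 1;
      const col = (ind % 9) + (lp - 1) * 9;
      const sq = (Math.floor(ind / 3) % 3) * 3 + Math.floor(ind / 27) * 27
               + (lp - 1) + Math.floor((lp - 1) / 3) * 6;
      if (digit === s[row] || digit === s[col] || digit === s[sq]) valid = false;
    }
    if (valid) {
      const result = solve(s.substring(0, ind) + digit + s.substring(ind + 1));
      if (result) return result; // stop at the first solution found
    }
  }
  return null; // dead end, backtrack
}
```

The recursion here explores one branch at a time and backtracks, while the SQL engine conceptually expands all branches as rows of the working set; the validity logic is identical.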

  Once you consider the set of possible solutions for a problem as a working set, SQL can do wonders to find the solution, provided you can encode it in a way the SQL engine will understand. This is just another example of the right tool for the right job. Hope you learned something.

  One Word Kill starts off like an episode of Stranger Things. You've got the weird kid, his weird friends and the mysterious girl who is beautiful and smart and hangs out with them to play D&D, all set in the 80s. Then the main character gets cancer and his future self comes to save... the girl. There is also a school boy psycho after them. But that's where the similarities end... the rest of the story is just... nothing. People explain things that needed little explaining and make no sense, good kids and their parents run from a school boy, as psychotic as he could possibly be, without involving the police or gang member allies and, in the middle of all the drama: cancer, psycho killer, future self, time travel... they play Dungeons and Dragons, a game that promotes imagination and creativity which the protagonists then fail to use in any amount in their real life.

  Having just read Prince of Thorns, I really expected a lot more from Mark Lawrence. Instead I get a derivative and boring story that brings absolutely nothing new to the table. It's reasonably well written, I guess, but nothing Wow!, which is exactly the reaction reviewers seem to have about this book. Have I read a different story somehow?

  Bottom line: I am tempted to rate this average, on account of other raving reviews and the fact that I liked another Mark Lawrence book, but I have to be honest with myself and rate this book alone, which, I am sorry to say, is subpar.

  This is something that appeared in C# 5, so a long time ago, with .NET 4.5, but I only found out about it recently. Remember when you wanted to know the name of a property when doing INotifyPropertyChanged? Or when you wanted to log the name of the method that was calling? Or you wanted to know which line in which source file is responsible for calling a certain piece of code? All of this can be done with the Caller Information feature.

  And it is easy enough to use: just decorate a method parameter that has an explicit default value with any of these three attributes:

  • CallerMemberName - gives you the name of the calling method or property
  • CallerFilePath - gives you the path of the source file containing the caller
  • CallerLineNumber - gives you the line number at which the method was called

The parameter value, if not set when calling the method, will be filled in with the member name or file name or line number. It's something that the compiler does, so no overhead from reflection. Even better, it works on the caller of the method, not the interior of the method. Imagine you had to write a piece of code to do the same. How would you reference the name of the method calling the method you are in?

Example from Microsoft's site:

public void DoProcessing()
{
    TraceMessage("Something happened.");
}

public void TraceMessage(string message,
        [System.Runtime.CompilerServices.CallerMemberName] string memberName = "",
        [System.Runtime.CompilerServices.CallerFilePath] string sourceFilePath = "",
        [System.Runtime.CompilerServices.CallerLineNumber] int sourceLineNumber = 0)
{
    System.Diagnostics.Trace.WriteLine("message: " + message);
    System.Diagnostics.Trace.WriteLine("member name: " + memberName);
    System.Diagnostics.Trace.WriteLine("source file path: " + sourceFilePath);
    System.Diagnostics.Trace.WriteLine("source line number: " + sourceLineNumber);
}

// Sample Output:
//  message: Something happened.
//  member name: DoProcessing
//  source file path: c:\Visual Studio Projects\CallerInfoCS\CallerInfoCS\Form1.cs
//  source line number: 31

  So you want to use a queue, a structure that has items added at one side and removed on another, in Javascript code. Items are added to the tail of the queue, while they are removed at the head. We, Romanians, are experts because in the Communist times resources were scarce and people often formed long queues to get to them, sometimes only on the basis of rumour. They would see a line of people and ask "Don't they have meat here?" and the answer would come "No, they don't have milk here. It's next building they don't have meat at". Anyway...

  There is an option that can be used directly out of the box: the humble array. It has methods like .push (add an item), .pop (remove the latest added item - when you use it as a stack) and .shift (remove the oldest added item - when you use it as a queue). For small cases, that is all you need.

  However, I needed it in a high performance algorithm and, if you think about it, removing the first element of an array usually means shifting (hence the name of the function) all the remaining elements one slot and decreasing the length of the array. Consider a million-item array. This is not an option.

  One of the data structure concepts we are taught in school is the linked list. Remember that? Each item has a reference to the next (and maybe the previous) item in the list. You explore it by going from one item to the next, without indexing, and you can remove from or add to any part of the list just by changing the value of these references. This also means that for each value you want stored you have the value, the reference(s) and the overhead of handling a more complex data object. Again, consider an array of a million numbers. It's not the right fit for this problem.
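For reference, a linked list based queue could be sketched like this (a minimal illustration of mine, not code from the original post):

```javascript
// A minimal linked-list queue: each node stores a value and a reference to the
// next node; enqueue appends at the tail, dequeue removes from the head - both O(1).
class LinkedQueue {
  constructor() {
    this._head = null; // oldest node, removed first
    this._tail = null; // newest node, added last
  }

  enqueue(value) {
    const node = { value, next: null };
    if (this._tail) this._tail.next = node;
    else this._head = node; // queue was empty: the node is also the head
    this._tail = node;
  }

  dequeue() {
    if (!this._head) return undefined;
    const value = this._head.value;
    this._head = this._head.next;
    if (!this._head) this._tail = null; // queue became empty
    return value;
  }
}
```

Both operations are constant time, but every stored value pays for an extra wrapper object and reference, which hurts when you hold a million numbers.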

  Only one option remains: still using an array, but moving the start and the end of the array in an abstract manner only, so that all queue/dequeue operations take no effort. This means keeping a reference to the tail and the head of the queue in relation to the length of the queue and of the underlying array.
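  The wrap-around bookkeeping boils down to one formula: the i-th logical item of the queue lives at index (head + i) % capacity in the buffer. A toy illustration (my own variable names, assuming a fixed capacity):

```javascript
// a fixed buffer of capacity 4; head marks where the oldest item sits
const capacity = 4;
const buffer = new Array(capacity);
let head = 3; // pretend earlier items were already dequeued up to index 3
let size = 0;

// enqueue three items: they wrap around the end of the buffer
['a', 'b', 'c'].forEach((item, i) => {
  buffer[(head + i) % capacity] = item;
  size++;
});

// buffer is now [ 'b', 'c', <empty>, 'a' ] - logically still a, b, c
const logical = [];
for (let i = 0; i < size; i++) logical.push(buffer[(head + i) % capacity]);
console.log(logical); // [ 'a', 'b', 'c' ]
```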

  But first let's establish a baseline. Let's write a test and implement a queue using the default array pop/shift implementation:

// the test
// simple timing helper: runs fn and logs the duration of the run
function time(fn, label) {
  const start = Date.now();
  fn();
  console.log(label + ', ' + (Date.now() - start) + 'ms');
}

const size = 100000;
const q = new Queue();
time(() => { for (let i = 0; i < size; i++) q.enqueue(i); }, 'Enqueue ' + size + ' items');
time(() => { for (let i = 0; i < size; i++) q.dequeue(); }, 'Dequeue ' + size + ' items');
time(() => { for (let i = 0; i < size / 10; i++) {
	for (let j = 0; j < 10; j++) q.enqueue(i);
	for (let j = 0; j < 9; j++) q.dequeue();
} }, 'Dequeue and enqueue ' + size + ' items');

// the Queue implementation
class Queue {
  constructor() {
    this._arr = [];
  }

  enqueue(item) {
    this._arr.push(item);
  }

  dequeue() {
    return this._arr.shift();
  }
}

// the results
Enqueue 100000 items, 10ms
Dequeue 100000 items, 1170ms
Dequeue and enqueue 100000 items, 19ms

  The Enqueue operation is just adding to an array, so it is fast. The mixed test, which leaves one item in the queue for every ten enqueued, is only slightly slower, because the queue stays relatively short and the amount of array shifting is negligible. Dequeuing a long queue, though, is pretty heavy. Note that merely doubling the number of items leads to a roughly fourfold (quadratic, not linear) increase in time:

Enqueue 200000 items, 12ms
Dequeue 200000 items, 4549ms
Dequeue and enqueue 200000 items, 197ms

  Now let's improve the implementation of the queue. We will keep using Array.push for enqueue, but use a _head index to determine which item to dequeue. This means much faster dequeuing, but the queue will never shrink. It's the equivalent of Romanians getting their product, but remaining in the queue.

// the Queue implementation
class Queue {
  constructor() {
    this._arr = [];
    this._head = 0;
  }

  enqueue(item) {
    this._arr.push(item);
  }

  dequeue() {
    if (this._head>=this._arr.length) return;
    const result = this._arr[this._head];
    this._head++;
    return result;
  }
}
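  The unbounded growth is easy to demonstrate. Here is the same head-index queue, condensed so the snippet runs standalone:

```javascript
// condensed copy of the head-index Queue above
class Queue {
  constructor() { this._arr = []; this._head = 0; }
  enqueue(item) { this._arr.push(item); }
  dequeue() {
    if (this._head >= this._arr.length) return;
    return this._arr[this._head++];
  }
}

const q = new Queue();
for (let i = 0; i < 1000; i++) q.enqueue(i);
for (let i = 0; i < 1000; i++) q.dequeue();

// the queue is logically empty, yet the backing array never shrinks
console.log(q._arr.length); // 1000
console.log(q._head);       // 1000
```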

// the results
Enqueue 200000 items, 11ms
Dequeue 200000 items, 4ms
Dequeue and enqueue 200000 items, 11ms

  The performance has reached the expected level. Dequeuing is now even faster than enqueuing because it doesn't need to expand the array as items are added. However, for all scenarios the queue is only growing, even when dequeuing all the items. What I can do is reuse the slots of the dequeued items for the items to be added. Now it gets interesting!

  My point is that right now we can improve the functionality of our queue by replacing dequeued but still stored items with newly enqueued items. That is the equivalent of Romanians leaving the queue only after they get the meat and a new Romanian comes to take their place. If there are more people coming than getting served, then people that got their meat will all leave and we can just add people to the tail of the queue.

  So let's recap the algorithm:

  • we will use an array as a buffer
  • the queue items start at the head and end at the tail, but wrap around the array buffer
  • whenever we add an item, it will be added in the empty space inside the array and the tail will increment
  • if there is no empty space (queue length is the same as the array length) then the array will be rearranged so that it has space for new items
  • when we dequeue, the item at the head will be returned and the head incremented
  • whenever the head or tail reach the end of the array, they will wrap around

Some more improvements:

  • if we enqueue a lot of items then dequeue them, the array will not decrease until we dequeue them all. An improvement is to rearrange the array whenever the queue length drops below half of that of the array. It will add computation, but reduce space.
  • when we make space for new items (when the array size is the same as the one of the logical queue) we should add more space than just 1, so I will add the concept of a growth factor and the smallest size increase.

Here is the code:

/**
 * A performant queue implementation in Javascript
 *
 * @class Queue
 */
class Queue {

    /**
     * Creates an instance of Queue.
     * @memberof Queue
     */
    constructor() {
        this._array = [];
        this._head = 0;
        this._tail = 0;
        this._size = 0;
        this._growthFactor = 0.1;
        this._smallestSizeIncrease = 64;
    }

    /**
     * Adding an iterator so we can use the queue in a for...of loop or a destructuring statement [...queue]
     */
    *[Symbol.iterator]() {
        for (let i = 0; i < this._size; i++) {
            yield this.getAt(i);
        }
    }

    /**
     * Returns the length of the queue
     *
     * @readonly
     * @memberof Queue
     */
    get length() {
        return this._size;
    }

    /**
     * Gets the item at the given index in the queue
     *
     * @param {*} index
     * @returns
     * @memberof Queue
     */
    getAt(index) {
        if (index >= this._size) return;
        return this._array[(this._head + index) % this._array.length];
    }

    /**
     * Gets the item that would be dequeued, without actually dequeuing it
     *
     * @returns
     * @memberof Queue
     */
    peek() {
        return this.getAt(0);
    }

    /**
     * Clears the items and shrinks the underlying array
     */
    clear() {
        this._array.length = 0;
        this._head = 0;
        this._tail = 0;
        this._size = 0;
    }

    /**
     * Adds an item to the queue
     *
     * @param {*} obj
     * @memberof Queue
     */
    enqueue(obj) {
        // special case when the size of the queue is the same as the underlying array
        if (this._size === this._array.length) {
            // this is the size increase for the underlying array
            const sizeIncrease = Math.max(this._smallestSizeIncrease, ~~(this._size * this._growthFactor));
            // if the tail is behind the head, it means we need to move the data from the head to 
            // the end of the array after we increase the array size
            if (this._tail <= this._head) {
                const toMove = this._array.length - this._head;
                this._array.length += sizeIncrease;
                for (let i = 0; i < toMove; i++) {
                    this._array[this._array.length - 1 - i] = this._array[this._array.length - 1 - i - sizeIncrease];
                }
                this._head = (this._head + sizeIncrease) % this._array.length;
            }
            else
            // the array size can just increase (head is 0 and tail is the end of the array)
            {
                this._array.length += sizeIncrease;
            }
        }
        this._array[this._tail] = obj;
        this._tail = (this._tail + 1) % this._array.length;
        this._size++;
    }

    /**
     * Removes the oldest item from the queue and returns it
     *
     * @returns
     * @memberof Queue
     */
    dequeue() {
        if (this._size === 0) {
            return undefined;
        }
        const removed = this._array[this._head];
        this._head = (this._head + 1) % this._array.length;
        this._size--;
        // special case when the size of the queue is too small compared to the size of the array
        if (this._size > 1000 && this._size < this._array.length / 2 - this._smallestSizeIncrease) {
            if (this._head<this._tail) {
                this._array = this._array.slice(this._head,this._tail);
            } else {
                this._array=this._array.slice(this._head, this._array.length).concat(this._array.slice(0,this._tail));
            }
            this._head = 0;
            this._tail = 0;
        }   
        return removed;
    }
}
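  And here is how the finished queue could be used, including the iterator. To keep the snippet self-contained I inlined a condensed copy of the class with a simplified growth step (rebuild the buffer in logical order, then extend it); the full version above is the real implementation:

```javascript
// condensed Queue: circular buffer with iterator, peek and getAt;
// growth is simplified compared to the full implementation
class Queue {
  constructor() { this._array = []; this._head = 0; this._tail = 0; this._size = 0; }
  get length() { return this._size; }
  getAt(index) {
    if (index >= this._size) return;
    return this._array[(this._head + index) % this._array.length];
  }
  peek() { return this.getAt(0); }
  *[Symbol.iterator]() {
    for (let i = 0; i < this._size; i++) yield this.getAt(i);
  }
  enqueue(obj) {
    if (this._size === this._array.length) {
      // simplified growth: rebuild in logical order, then extend
      this._array = [...this];
      this._head = 0;
      this._tail = this._size;
      this._array.length += 64;
    }
    this._array[this._tail] = obj;
    this._tail = (this._tail + 1) % this._array.length;
    this._size++;
  }
  dequeue() {
    if (this._size === 0) return undefined;
    const removed = this._array[this._head];
    this._head = (this._head + 1) % this._array.length;
    this._size--;
    return removed;
  }
}

const q = new Queue();
for (const n of [10, 20, 30]) q.enqueue(n);
console.log(q.peek());    // 10 - look at the head without removing it
console.log(q.getAt(2));  // 30
console.log([...q]);      // [ 10, 20, 30 ] - via Symbol.iterator
console.log(q.dequeue()); // 10
console.log(q.length);    // 2
```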

  Final notes:

  • there is no specification on how an array should be implemented in Javascript, therefore I've used the growth factor concept, just like in C#. However, according to James Lawson, the array implementation is pretty smart in modern Javascript engines, so we might not even need it.
  • the optimization in dequeue might help with space, but it could be ignored if what you want is speed and don't care about the space usage
  • final benchmarking results are:
    Enqueue 200000 items, 15ms, final array size 213106
    Dequeue 200000 items, 19ms, final array size 1536
    Dequeue and enqueue 200000 items, 13ms, final array size 20071

  While on the road with his mother and baby brother, a ten-year-old prince is attacked by an armed enemy group. Thrown into a patch of thorns from which he cannot move, only watch, he sees his mother defiled and killed and his brother smashed against a rock like a toy. He vows vengeance. Such a classic story, right? Only we see him a few years later, leading a band of brigands, murdering and looting and raping, his vengeance all but forgotten and replaced by a desire to unite the hundred little states warring against each other. Well, more interesting, but still pretty classic, right? Nope, stuff still happens that makes the lead character (and you) doubt his thoughts and the true nature of reality, and that retroactively answers some of the questions an incredulous reader would be asking.

  I would say Prince of Thorns is all about revealing layers of this world that Mark Lawrence is still shaping. I quite liked that. The first book sets things up, but it is not a setup book. It is filled with action. Nor does it tell us everything, leaving a lot to be explored in the next books in the series. That's something that is sorely missing in many modern stories. In order to enjoy the book, though, you have to suspend your disbelief when it tells of an eleven-year-old boy smashing heads, swinging swords and leading men. Yes, in feudal times being 11 was the time to have a midlife crisis, but it is all a little bit too much for a child.

  It is a Game of Thrones kind of book, but mercifully told from the standpoint of a single character. There is not a lot of lore, but there is magic, a mysterious connection to an advanced but now dead civilisation, plenty of violence and strategy. I will probably read the next books in the series.