I have invented a new way to write software when people who hold decision power are not available. It's called Flag Assisted Programming and it goes like this: whenever you have a question on how to proceed with your development, instead of bothering decision makers, add a flag to the configuration that determines which way to go. Then estimate for all the possible answers to your question and implement them all. This way, management not only has more time to do real work, but also the ability to go back and forth on their decisions as they see fit. Bonus points, FAPing allows middle management to say you have A/B testing at least partially implemented, and that you work in a very agile environment.
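A minimal sketch of the technique (all the names here are hypothetical, invented for illustration):
// the open question: which discount policy does management want?
// implement every possible answer and let a configuration flag decide
public static class Pricing
{
    public static decimal ApplyDiscount(decimal price, bool useAggressiveDiscount)
    {
        return useAggressiveDiscount
            ? price * 0.70m  // the answer you suspect they want
            : price * 0.90m; // the answer they gave last quarter
    }
}
Flip useAggressiveDiscount in the configuration and management can change its mind without a single new deployment.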

A year and a half ago, as I was going from one miserable job interview to the next, I was asked what I thought about code review. At the time I said it was the most important organizational aspect of writing code. I mean you can do agile or waterfall, work on games or mobile apps or business applications, use the latest or the oldest, the best or the worst technology, and code review still helps. I think that way now, too, but recent experiences with the process have led me to refine my understanding of it. This blog post is about that.

The Good


Why is code review good? The very first thing it does is force you to own your work. You can be tired, fix one little thing in a lazy way, forget about it, and it might work or it might break something; but when you know you have to publish what you did, you do things less lazily, document them better, think them through. It doesn't even matter whether anyone ever looks carefully at the review; what matters is that you know someone might.

Second, and obvious: any mistakes you made are more likely to come to the surface when someone else looks at the code. It doesn't mean people blame you for mistakes; it means the mistakes don't come and bite you in the ass later, when your work is supposed to be making money for some poor bastard somewhere. This is very important, because we tend to work on systems more complex than we can, or are willing to, understand. If a group of people who together understand the system reviews the work, though, you learn not only about the inevitable code errors you introduce, but also about errors in judgement, in understanding, or in the assumptions you made.

Then there is the learning aspect of it. Juniors learn from seniors reviewing their work, they learn from reviewing each other, and everybody learns from reviewing work done by anyone else. It opens up perspectives. I mean, you can review some method that was copy-pasted four times in order to do the same thing to four different objects and learn how not to do that, ever! No matter how much you might want to when coming in to work hungover and hoping for death a little. For example, I've only recently learned to comment on my own code review before submitting it. Some might say comments in the code should do that, but sometimes you need more, as anchors for discussion, which obviously cannot be carried in code comments. (well, they can, but please don't do that)

And there is more! You get documentation of the code for free. When someone doesn't understand what the hell is going on, they ask questions, which leads to you answering them in whatever code review software you use. Those answers remain there for others to peruse long after you've left the company and moved on to slightly RGB-shifted pastures. I still dream of a non-intrusive system that would connect reviews to the code in your IDE, so you can always see a list of comments and annotations for whatever you are looking at.

Another benefit is that code review makes everyone on the team write code in the same way, for better or worse. I will detail the worse in a moment, but think about what it means to read a piece of code, struggle to understand it, then switch to the next one and find it written in a completely different style. You waste a lot of time.

Finally, I think the confidence code review gives you can lead not only to better code, but also to faster development. More on this next. It is controversial, but I think you can rely on code review to check your code, if and only if you trust the reviewers. You might fire off commit after commit after commit, confident that your peers will check what you would normally have to double and triple check before committing. It's risky, but with the right team it can do wonders.

The Bad


OK, so it's a great thing, this code review stuff. I knew that, you knew that, so why am I wasting my finger strength? Well, there is a dark side to code review. I've heard some purists insist on rules for code review that I am not completely comfortable with, for example. I invite said purists, if they also read my blog, to come rant in the comments below. My recent experience touches on those rules and introduces others. Let me detail the bad.

There are programmers and programmers, projects and projects, management and management. Where one developer writes some code and hopes people will look at it carefully and point out what could be improved, another lazily writes something that kind of works, assuming whoever does the code review will also do the work of making the code remotely usable. Where in some projects developers stay after hours because they want to see their code do good and the project succeed, in others people couldn't care less: they do their time and break the door when the bell rings. Don't expect careful code reviews there. And then there is the management issue: management might protect developers from anything unrelated to coding, or it might pester them with meetings and emails and processes that break concentration, waste time and surely do not help the attention span of a code reviewer. But even in the worst cases above, code review is still good, just less effective.

One of the rules I was talking about above was to never commit code unless its code review was accepted. Note the bold font on the never; it was like that every time I heard the rule. Sounded bold. But I completely disagree with it.

First, if you have developers you can't trust to commit something, don't let them commit. Either find someone better or restrict their privileges with a system that prevents them from committing. The same goes for people you can't trust to read the review and fix the code after a bad or defective commit.

Second of all, you might be working on a file that should appear in more than one code review. No, the system where you do the work, ask for review, then shelve the files so you can work on the next thing doesn't work! It takes time and concentration, and it leads to bad merge resolutions that break your code. Just commit the first thing and move to the next. When your review comes back full of issues, finish what you are working on, commit that, then return to the code and fix what was found. It is a problem when code review software can't understand that a file committed after further changes doesn't mean you want the review to include every change since time immemorial; that's a software issue, though. Just create a new review and somehow link it to the other, via comments or notes. Creating a personal branch for every developer, or other crazy ideas like that, is also crap.

Not committing work that you've done means delaying your other work: testing it, finding problems in it, and so on. Having to juggle software in order to submit to a rigid process that is indifferent to the overall pace of development and the realities of your work is stupid. Just work, commit, review, test, rework. It's what we do.

It's also, I think, an error in judgement to force code review. As good as I think it is, you can work without it. It is an optional process, so keep it that way. Conditioning development on an optional process makes it mandatory. That might sound like a truism, but people don't seem to realize these things unless you articulate them.

And then there is human nature. If you ask me to review your code, I will stop what I am doing and do it, because if I don't, you can't commit. It hurts my work, because it breaks my concentration. It hurts your code review, because I am not focused enough. Personally, I am best at reviewing in the morning, before any of the organizational crap has happened: no meetings, no emails telling me to write other emails, no chat messages asking questions I have no desire to answer. I am rested, I am a bit pumped from making the minimum physical movements required to get me to the office, and so I am ready to single-mindedly focus on your review. It shouldn't matter that you committed the code yesterday. I'll get to it when I get to it.

The Ugly


The ugly is not only bad, but also disturbing. It's not a characteristic of the code review per se, but is more related to the humans involved in the process. Code review has some nasty side effects on certain people and in certain situations. Let's discuss this for a bit.

I was saying above that it's good when everybody writes in a certain way. That can actually stop people from innovating in the way they write code. Do it this way, that's the pattern we're using, you will hear, without the slightest hint of a possibility to improve on that pattern. The same thing might happen with new ideas you feel need to be introduced into the project, or some refactoring, or some other creative work that would make you proud and motivated to continue doing good work. As I said above, it's a people problem, not a process problem, but when it happens, it stifles innovation, creativity and, ultimately, the fucks you give about what happens to the project as a whole.

Code reviews, like any other communication medium, may be abused. People may be attacked or shamed by others who don't really like them. The two sides might not even be junior and senior; the imbalance might come from time in the firm rather than technical skill, or some other hierarchical or social advantage. Ego fights can also erupt in code reviews, and they exacerbate the problem when the reviews are blocking. Arguments are good, pissing contests are ugly, that kind of thing.

Reviews cost time. That's really not a people problem, it's a process problem; all processes cost time. You need to put in the work to do a good review. Just glancing over and saying "it looks good", without trying to understand what the code is supposed to do, is almost worse than refusing to do the review. I am plenty guilty of that: instead of thinking about what the guy did and trying to help, part of my brain just keeps rummaging on whatever my current development task is. This is another argument for separating reviewing from code writing; you need your zone for both. When code reviews waste time rather than spend it well, that's ugly.

Finally, I think one major issue with code review is that it encourages lazing off on unit testing, proper testing, refactoring and even the simple writing of the code. This is mostly a management issue, and it's ugly like vomited shit. When people write horrid code filled with bugs, assuming that code review will fix their lack of interest, that's ugly. When you are urged, more or less vigorously, to skimp on unit or manual testing because the code review was accepted, that's ugly. But when you are trying to improve the general quality of the code and the answer is either that there is no time for it or that any change is unnecessary because the code review passed, or when you yourself are unwilling to do a refactoring because you know what a hassle it will be to push it through review, that's damn ugly. It means you want to do more than your share and you get stuck in a process.

And on that note, I end this wall of text. Process before people is always ugly.

Comments and opinions, if you dare! :)

I just read a very cool article (Understanding Default Parameters in Javascript) and my takeaway is this smart piece of code to enforce that a parameter is specified:
const isRequired = () => { throw new Error('param is required'); };

function filterEvil(array, evil = isRequired()) {
    return array.filter(item => item !== evil);
}

So all you have to do is define the isRequired function in a shared library file and then use it in any function that you write. Call filterEvil([1, 2, 3]) without the second argument and the default value expression runs, throwing the error right away.

Are you a bit put off by the fact that you can use function calls as default parameter values? Welcome to Javascript, a language that seems designed by Eurythmics.

I was sure I had tested this, but for some reason the icons on my blog disappeared in Internet Explorer. They use Font Awesome SVG background images, declared something like this:
.fas-comment {
background-image: url("data:image/svg+xml;utf8,<svg height='511.6' version='1.1' viewBox='0 0 511.6 511.6' width='511.6' x='0' xml:space='preserve' xmlns='http://www.w3.org/2000/svg' y='0'><g fill='#2f5faa'><path d='M477.4 127.4c-22.8-28.1-53.9-50.2-93.1-66.5 -39.2-16.3-82-24.4-128.5-24.4 -34.6 0-67.8 4.8-99.4 14.4 -31.6 9.6-58.8 22.6-81.7 39 -22.8 16.4-41 35.8-54.5 58.4C6.8 170.8 0 194.5 0 219.2c0 28.5 8.6 55.3 25.8 80.2 17.2 24.9 40.8 45.9 70.7 62.8 -2.1 7.6-4.6 14.8-7.4 21.7 -2.9 6.9-5.4 12.5-7.7 16.9 -2.3 4.4-5.4 9.2-9.3 14.6 -3.9 5.3-6.8 9.1-8.8 11.3 -2 2.2-5.3 5.8-9.9 10.8 -4.6 5-7.5 8.3-8.8 9.9 -0.2 0.1-1 1-2.3 2.6 -1.3 1.6-2 2.4-2 2.4l-1.7 2.6c-1 1.4-1.4 2.3-1.3 2.7 0.1 0.4-0.1 1.3-0.6 2.9 -0.5 1.5-0.4 2.7 0.1 3.4v0.3c0.8 3.4 2.4 6.2 5 8.3 2.6 2.1 5.5 3 8.7 2.6 12.4-1.5 23.2-3.6 32.5-6.3 49.9-12.8 93.6-35.8 131.3-69.1 14.3 1.5 28.1 2.3 41.4 2.3 46.4 0 89.3-8.1 128.5-24.4 39.2-16.3 70.2-38.4 93.1-66.5 22.8-28.1 34.3-58.7 34.3-91.8C511.6 186.1 500.2 155.5 477.4 127.4z'/></g></svg>");
}

I had to try several things, but in the end I found that there are three steps to making this compatible with Internet Explorer (while still working in other browsers):
  1. The definition of the utf8 charset must be explicit: data:image/svg+xml;charset=utf8 instead of data:image/svg+xml;utf8
  2. The SVG code needs to be URL encoded: so turn all double quotes into single quotes and then replace < and > with %3C and %3E or use some URL encoder
  3. The colors need to be in rgb() format: so instead of fill='#2f5faa' use fill='rgb(47,95,170)' (same in style tags in the SVG, if any)


So now the result is:
.fas-comment {
background-image: url("data:image/svg+xml;charset=utf8,%3Csvg height='511.6' version='1.1' viewBox='0 0 511.6 511.6' width='511.6' x='0' xml:space='preserve' xmlns='http://www.w3.org/2000/svg' y='0'%3E%3Cg fill='rgb(47,95,170)'%3E%3Cpath d='M477.4 127.4c-22.8-28.1-53.9-50.2-93.1-66.5 -39.2-16.3-82-24.4-128.5-24.4 -34.6 0-67.8 4.8-99.4 14.4 -31.6 9.6-58.8 22.6-81.7 39 -22.8 16.4-41 35.8-54.5 58.4C6.8 170.8 0 194.5 0 219.2c0 28.5 8.6 55.3 25.8 80.2 17.2 24.9 40.8 45.9 70.7 62.8 -2.1 7.6-4.6 14.8-7.4 21.7 -2.9 6.9-5.4 12.5-7.7 16.9 -2.3 4.4-5.4 9.2-9.3 14.6 -3.9 5.3-6.8 9.1-8.8 11.3 -2 2.2-5.3 5.8-9.9 10.8 -4.6 5-7.5 8.3-8.8 9.9 -0.2 0.1-1 1-2.3 2.6 -1.3 1.6-2 2.4-2 2.4l-1.7 2.6c-1 1.4-1.4 2.3-1.3 2.7 0.1 0.4-0.1 1.3-0.6 2.9 -0.5 1.5-0.4 2.7 0.1 3.4v0.3c0.8 3.4 2.4 6.2 5 8.3 2.6 2.1 5.5 3 8.7 2.6 12.4-1.5 23.2-3.6 32.5-6.3 49.9-12.8 93.6-35.8 131.3-69.1 14.3 1.5 28.1 2.3 41.4 2.3 46.4 0 89.3-8.1 128.5-24.4 39.2-16.3 70.2-38.4 93.1-66.5 22.8-28.1 34.3-58.7 34.3-91.8C511.6 186.1 500.2 155.5 477.4 127.4z'/%3E%3C/g%3E%3C/svg%3E");
}

I've learned something new today. It all starts with an innocuous question: given the following struct, tell me its size:
public struct MyStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}
Let's assume that this is in 32bit C++ or C#.

The first answer is 4+1+8+1+2+1 = 17 (counting a char as a single byte, as in C++). Nope! It's 24.

Well, it is called memory alignment and it has to do with the way CPUs work. They have memory registers of fixed size, various caches of different sizes and speeds, and so on. Basically, when you ask for a 4 byte int, it needs to be "aligned" so that the CPU gets all 4 bytes from the correct position into a single register. Otherwise the CPU needs to load two registers (say, 1 byte into one and 3 bytes into another), then mask and shift both and combine them into another register. That is unbelievably expensive at that level.

So, why 24? i1 is an int, so it needs to be aligned on positions that are a multiple of 4 bytes. 0 qualifies, so it takes bytes 0 to 3. Then there is a char. Chars are one byte (in C++, that is; a C# char is two bytes, but the default marshaled layout treats it as one, and the total happens to come out the same either way) and can be put anywhere, so the size becomes 5 bytes. However, a long is 8 bytes, so it needs to sit at a position that is a multiple of 8. That is why we add 3 bytes of padding, then add the long in. Now the size is 16. One more char → 17. Shorts are 2 bytes, so we add one more padding byte to get to 18, then add the short. The size is 20. And in the end you get the last char in, getting to 21. But now the struct itself needs to be aligned to its largest primitive member, in our case the 8-byte long. That is why we add 3 more bytes, so that the struct has a size that is a multiple of 8.
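The resulting layout, byte by byte:
offset  0- 3   int   i1
offset  4      char  c1
offset  5- 7   padding (the long must start at a multiple of 8)
offset  8-15   long  l1
offset 16      char  c2
offset 17      padding (the short must start at a multiple of 2)
offset 18-19   short s1
offset 20      char  c3
offset 21-23   padding (the struct size must be a multiple of 8)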

Note that a struct containing a struct will align it to its largest primitive element, not the actual size of the child struct. It's a recursive process.
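For example, a quick sketch (types invented for illustration; the comments show the arithmetic):
public struct Inner
{
    public long l1; // 8 bytes, Inner's largest primitive
    public byte b1; // 1 byte + 7 bytes of padding, so sizeof(Inner) is 16
}

public struct Outer
{
    public byte b1;     // 1 byte + 7 bytes of padding, because Inner
                        // aligns to 8 (its largest primitive), not to 16
    public Inner inner; // 16 bytes
}                       // total: 24 bytes, a multiple of 8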

Can we do something about it? What if I want to trade speed for memory or disk space? We can use attributes such as StructLayout. It receives a LayoutKind - which defaults to Sequential, but can also be Auto or Explicit - and a numeric Pack parameter. Auto rearranges the order of the members of the type so that it takes the least amount of space. However, this has some side effects, like getting errors when you want to use Marshal.SizeOf. With Explicit, each field needs to be adorned with a FieldOffset attribute that determines its exact position in memory; that also means you can place several fields at the same position, as in:
[StructLayout(LayoutKind.Explicit)]
public struct MyStruct
{
    [FieldOffset(0)]
    public int i1;
    [FieldOffset(4)]
    public int i2;
    [FieldOffset(0)]
    public long l1;
}
The Pack parameter tells the system how to align the fields. 0 is the default, but 1 will make the size of the first struct above actually be 17:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MyStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}
Other values can be 2, 4, 8, 16, 32, 64 or 128. You can test how performance is affected by this, as an exercise.

More information here: Advanced c# programming 6: Everything about memory allocation in .NET

Update: I've created a piece of code to actually test for this:
unsafe static void Main(string[] args)
{
    var st = new MyStruct();
    // compare the compiler's unsafe sizeof, the marshaled size
    // and a size measured through the garbage collector
    Console.WriteLine($"sizeof:{sizeof(MyStruct)} Marshal.sizeof:{Marshal.SizeOf(st)} custom sizeof:{MySizeof(st)}");
    Console.ReadKey();
}

private static long MySizeof(MyStruct st)
{
    // GC.GetTotalMemory(true) forces a full collection first, so the
    // difference in allocated memory divided by the number of elements
    // approximates the managed size of one element
    long before = GC.GetTotalMemory(true);
    MyStruct[] array = new MyStruct[100000];
    long after = GC.GetTotalMemory(true);
    var size = (after - before) / array.Length;
    return size;
}

Considering the original MyStruct, the size reported by all three ways of computing it was 24. I also had to test the idea that the maximum padding is 4 bytes, so I used this structure:
public struct MyStruct
{
    public long l;
    public byte b;
}
Since long is 8 bytes and byte is 1, I expected the size to be 16, and it was (not 12). However, I decided to also try with a decimal instead of the long. A decimal takes 16 bytes, so if my interpretation was correct, the 17 bytes should be aligned to the size of the biggest primitive field in the struct: a multiple of 16, so 32. The result was weirdly inconsistent: sizeof:20 Marshal.sizeof:24 custom sizeof:20, which suggests an alignment to 4 or 8 bytes, not 16. So I started playing with the StructLayoutAttribute:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MyStruct
{
    public decimal d;
    public byte b;
}

For Pack = 1, I got a consistent 17 bytes. For Pack = 4, I got consistent values of 20. For Pack = 8 or higher, I got the weird 20-24-20 result, which suggests packing works differently for decimals than for other values. I replaced the decimal with a struct containing two long values and the consistent result was back to 24, but then again, that's expected. The funny thing is that Guid is also a 16 byte value, and itself a struct, yet the resulting size was 20. Guid is not a primitive type, though.

The only conclusion I can draw is that what I wrote in this post is true. Also, StructLayout's Pack does not work as I had expected: instead of setting the alignment, it caps it. The alignment of the type is the size of its largest element (1, 2, 4, 8, etc. bytes) or the specified packing size, whichever is smaller; if the biggest element in the struct is 8 bytes, then the minimum between the Pack value and 8 will be used.

All this if you are not using decimals... then all bets are off! From my discussions with Filip B. Vondrášek in the comments of this post, I've reached the conclusion that decimals are internally structs that are aligned to their largest element, an int, so to 4 bytes. However, it seems Marshal.SizeOf misreports the size of structs containing decimals, for some reason.

In fact, all "simple" types are structs internally, as described by the C# language specification, and the Decimal struct also implements IDeserializationCallback, though I don't see how that would influence things. Certainly the compilers have optimizations for working with primitive types. This is as deep as I want to go with this, anyway.

For anyone coming from the welcoming arms of Visual Studio 2015 and higher, Eclipse feels like an abomination. However, knowing some nice tips and tricks helps a lot. I want to give a shout out to this article: Again! – 10 Tips on Java Debugging with Eclipse, which is much more detailed than what I am going to write here and which inspired this post.

Three things seemed most important to me, though, and this is what I am going to highlight:
  1. Show Logical Structure - who would have thought that a little toggle at the top of the Expressions view could be that important? Remember cursing at how Maps are shown in the Eclipse debugger? With Show Logical Structure you can actually see items, keys and values!
  2. The Display View - just go to Window → Show View → Display and you get something that works a bit like the Immediate Window in Visual Studio. In other words, just write your code there and execute it in the program's context. For a very useful example: write new java.util.Scanner(request.getEntity().getContent()).useDelimiter("\\A").next() in the Display window, select it, then click on Display Result of Evaluated Selected Text, and it will append to the Display window the string content of an HttpPost request.
  3. Watchpoints - you can set a breakpoint that suspends the program whenever a field is accessed or changed!

For details and extra info, read the codecentric article I mentioned above.

As with all the programmer questions, I will update the post with the answer after people comment on it. Today's question is:
You have a list of regular expressions for strings to be matched against. You need to turn them into a single regular expression. How can you do it so that a string needs to match any of the initial regular expressions? How can you do it to match them all at the same time?

I will wait for your comments before answering.

After my disappointment with Firefox for Android's lack of a proper bookmarks API implementation, I was at least happy that my Bookmark Explorer extension worked well with Firefox for desktop. That quickly turned cold when I got a one star review because the extension did not work. And the user was right, it didn't! One of the variables declared in one of the Javascript files was not found. But that only happened in the published version, not the unpacked one on my computer.

Basically the scenario was this:
  1. Load unpacked (from my computer) extension
  2. Test it
  3. Works great
  4. Make it a Zip file and publish it
  5. Shame! Shame! Shame!

Long story short, I was loading the Javascript file like this: <script src="ApiWrapper.js"></script> when the name of the file was apiWrapper.js (note the lowercase a). My computer's file system is Windows, which couldn't care less about filename casing, while the virtual Zip filesystem apparently does care, at least in Firefox's implementation. Note that this error only affected Firefox and not Chrome or (as far as I know - because it has been 2 months since I submitted the extension and I got no reply other than "awaiting moderation") Opera.

I've found a small gem in Javascript ES6 that I wanted to share:
let arr = [3, 5, 2, 2, 5, 5];
let unique = [...new Set(arr)]; // [3, 5, 2]

Today I've discovered, to my dismay, that two Integer objects with the same value, compared with the == operator, may return false, because they are different objects. So you need to use .equals (after checking for null, of course). I was about to write a scathing blog entry on how much Java sucks, but then I discovered this amazing link: Java gotchas: Immutable Objects / Wrapper Class Caching, which explains that the Integer class keeps a cache of 256 instances, so that everything between -128 and 127 is actually equal as an instance as well.

Yes, folks, you've heard that right. I didn't believe it, either, so I wrote a little demo code:
Integer i1 = Integer.valueOf(1);
Integer i2 = Integer.valueOf(1);
boolean b1 = i1 == i2; // true

i1 = Integer.valueOf(1000);
i2 = Integer.valueOf(1000);
boolean b2 = i1 == i2; // false

i1=1;
i2=1;
boolean b3 = i1 == i2; // true

i1=1000;
i2=1000;
boolean b4 = i1 == i2; // false

i1=126;
i2=126;
boolean b5 = i1 == i2; // true

i1++;
i2++;
boolean b6 = i1 == i2; // true

i1++;
i2++;
boolean b7 = i1 == i2; // false

i1 = 2000;
i2 = i1;
boolean b8 = i1 == i2; // true

i1++;
i1--;
boolean b9 = i1 == i2; // false


Update: something similar applies to Strings. Two String objects with the same value are not necessarily == (literals are interned, but strings built at run time are distinct objects), and since strings are immutable, any "change" produces a brand new object, which is no longer == to the original. Fun!

I now submit to you that "sucks" applies to many things, but not to Java. A new term needs to be defined for it, so that it captures the horror above in a single word.

Tonight I went to an ADCES presentation about SQL table partitioning, a concept that allows for a lot of flexibility while preserving the same basic table interface one would use in a simpler, less scalable application. The talk was very professionally held by Bogdan Sahlean and you should have been there to see it :)

He talked about how one can create filegroups over which a table can be split into as many partitions as needed. He then demonstrated partition switching, which means swapping two tables without overhead, just via metadata, and, used in the context of partitions, the possibility to create a staging table, do stuff with it, then just swap it with a partition with no downtime. The SQL scripts used in the demo can be found on Sahlean's blog. This technology has existed since SQL Server 2005 - it's not something terribly new - and features with similar but more limited functionality have existed since SQL Server 2000. Basically the data in a table can be organized in separate buckets, and one can even put each partition on a different drive for extra speed.

Things I've found interesting, in no particular order:

  • Best practice: create custom filegroups for databases and put objects in them, rather than in the primary (default) filegroup. Reason: each filegroup is restored separately, with the primary restored first and being the one the restore waits for before declaring the database online. That means you can quickly restore the important data and bring the database online, while the less accessed or less important data, like archive info, loads afterwards.
  • Using CHECK constraints on tables is useful in so many ways. For example, ever since SQL Server 2000, one could create tables on different databases, even different servers, and if they are marked with non-overlapping checks, one can not only create a view that combines all their data with UNION ALL, but also insert into that view. The server will know which tables, databases and servers to connect to. This also came up in the partitioning presentation.
  • CREATE INDEX with the DROP_EXISTING option quickly recreates or alters clustered indexes. With DROP_EXISTING, you save one complete cycle of dropping and recreating the nonclustered indexes. Also, if you specify a different filegroup, you are effectively moving the data of a table from one filegroup to another.
  • Finally, SWITCH TO partition switching can be used to quickly swap two tables, since from SQL Server 2005 on all tables are considered partitioned, with regular ones just having a single partition. So one creates a table identical in structure to another, does whatever with it, then just uses something like ALTER TABLE Orders SWITCH PARTITION 1 TO OrdersHistory PARTITION 1; to swap them out, with minimal overhead.

Just a heads up on a technology that I had no idea existed. For the details, read this 2009 (!!! :( ) article.

Basically you create a MemoryMappedFile instance from a path or a file stream, then create one or more MemoryMappedViewAccessor objects, then read or write binary data through them. The data can even be structs, via the generic Read/Write<T> methods.
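A minimal sketch (the file name and the struct are mine, invented for illustration):
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

public struct Point
{
    public int X;
    public int Y;
}

public static class MmfDemo
{
    public static void Main()
    {
        // create a 1 KB memory mapped file backed by data.bin
        using (var mmf = MemoryMappedFile.CreateFromFile(
            "data.bin", FileMode.Create, null, 1024))
        using (var accessor = mmf.CreateViewAccessor())
        {
            var p = new Point { X = 1, Y = 2 };
            accessor.Write(0, ref p);   // write a struct at offset 0
            Point read;
            accessor.Read(0, out read); // read it back from the same offset
            Console.WriteLine(read.X + "," + read.Y); // prints 1,2
        }
    }
}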

Drawbacks: the size of the file is fixed at creation; it cannot be increased or decreased afterwards. Also, the path of the file needs to be on a local drive; it can't be a network path.
Advantages: fast access, built-in persistence and the most efficient way to share data between processes.

Fuck Java! Just fuck it! I have been trying for half an hour to understand why a NullPointerException was being thrown in Java code that I can't debug. It was a simple String object that was null, inside a switch statement. This link explains it: "The prohibition against using null as a switch label prevents one from writing code that can never be executed. If the switch expression is of a reference type, that is, String or a boxed primitive type or an enum type, then a run-time error will occur if the expression evaluates to null at run time."

A reader asked me how to work with multiple projects in Visual Studio Code, and after fumbling a little I realized I had no idea. So I started trying things out.

First I created a folder, in which I created two other folders ingeniously named Proj1 and Proj2. I went into both and ran dotnet new console and dotnet new classlib, respectively. I moved the Console.WriteLine("Hello World!"); code from the console project's Program.cs into a static method of the Class1 class, called that method from Main in Program.cs, then tried to find ways of referencing Proj2 from Proj1 in Visual Studio Code.

And here I got stuck. I tried the smart solutions VS Code recommended, but none of them included adding a reference. I right clicked on everything, to no avail. I wrote using Proj2; by hand, hoping that Code would magically understand I needed a project reference. I googled, only to find old articles that discussed the project.json, not the .csproj, type of .NET projects.

In the end I resigned myself to writing the reference by hand. I opened Proj1.csproj and added
<ItemGroup>
  <ProjectReference Include="..\Proj2\Proj2.csproj" />
</ItemGroup>

After saving the file and going back to the unresolved Class1 reference, I now got using Proj2; as an option to fix it. And then I hit the problem my reader was having. When trying to run Proj1, I got Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Proj2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified. at Proj1.Program.Main(String[] args).

It's disgustingly easy to solve, you just need to know what to do. Either Ctrl-Shift-P and type restore, then select restoring Proj1, or do it manually by going to the Proj1 folder and running dotnet restore by hand. After that the project compiles and runs.

Summary:
  1. add the project reference to the .csproj file by hand
  2. resolve whatever compilation errors you have by specifying the correct usings or fully qualifying the names
  3. dotnet restore the project you added references to