I've learned something new today. It all starts with an innocuous question: given the following struct, tell me what its size is:
public struct MyStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}
Let's assume that this is in 32-bit C++ or C#.

The first answer is 4+1+8+1+2+1 = 17. Nope! It's 24.

Well, it is called memory alignment and it has to do with the way CPUs work. They read memory in fixed-size chunks, have various caches with different sizes and speeds, and so on. Basically, when you ask for a 4-byte int, it needs to be "aligned" so that all 4 bytes can be fetched from the correct position in a single read. Otherwise the CPU needs two reads (let's say 1 byte from one chunk and 3 bytes from the next), then it has to mask and shift both and combine them into a single register. That is unbelievably expensive at that level.

So, why 24? i1 is an int, so it needs to be aligned on a position that is a multiple of 4 bytes. 0 qualifies, so it takes 4 bytes. Then there is a char. Chars are one byte and can be put anywhere, so the size becomes 5 bytes. However, a long is 8 bytes, so it needs to be on a position that is a multiple of 8. That is why we add 3 bytes of padding, then we add the long in. Now the size is 16. One more char → 17. Shorts are 2 bytes, so we add one more padding byte to get to 18, then the short is added. The size is 20. And in the end you get the last char in, getting to 21. But now the struct itself needs to be aligned to its largest primitive member, in our case the 8-byte long. That is why we add 3 more bytes, so that the struct has a size that is a multiple of 8. (Strictly speaking, a char is one byte in C++ and in the marshaled view that Marshal.SizeOf uses, but two bytes in managed C#; redo the math with two-byte chars and the total still comes out to 24.)
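
If you don't want to take the arithmetic on faith, Marshal can show the offsets directly. A minimal sketch (the OffsetDemo class name is mine, and the offsets are the marshaled ones, where char is one byte) that should print 0, 4, 8, 16, 18, 20 and a size of 24 for the MyStruct above:
using System;
using System.Runtime.InteropServices;

class OffsetDemo
{
    static void Main()
    {
        // Marshal.OffsetOf reports the unmanaged (marshaled) offset of each field
        foreach (var name in new[] { "i1", "c1", "l1", "c2", "s1", "c3" })
            Console.WriteLine($"{name}: {Marshal.OffsetOf(typeof(MyStruct), name)}");
        // the total marshaled size, padded up to a multiple of 8
        Console.WriteLine($"size: {Marshal.SizeOf(typeof(MyStruct))}");
    }
}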

Note that a struct containing another struct will align it to the child's largest primitive element, not to the child struct's actual size. It's a recursive process.
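
A minimal sketch of that rule (Inner and Outer are names I made up for illustration): Inner gets padded to 16 bytes, but when nested it is aligned to 8, the size of its largest primitive, so I would expect Outer to come out at 24, not 32:
using System;

public struct Inner
{
    public long l;  // 8 bytes, the largest primitive inside
    public byte b;  // 1 byte, so Inner itself gets padded to 16
}

public struct Outer
{
    public byte b;      // 1 byte, followed by 7 bytes of padding
    public Inner inner; // aligned to 8 (Inner's largest primitive), not to Inner's size of 16
}

class NestedDemo
{
    unsafe static void Main()
    {
        Console.WriteLine(sizeof(Inner)); // expected: 16
        Console.WriteLine(sizeof(Outer)); // expected: 24 (1 + 7 padding + 16), not 32
    }
}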

Can we do something about it? What if I want to trade speed for memory or disk space? We can use attributes such as StructLayout. It receives a LayoutKind - which defaults to Sequential for structs, but can also be Auto or Explicit - and a numeric Pack parameter. Auto rearranges the order of the members of the type so that it takes the least amount of space. However, this has some side effects, like getting errors when you want to use Marshal.SizeOf. With Explicit, each field needs to be adorned with a FieldOffset attribute to determine its exact position in memory; that also means you can place several fields at the same position, like in:
[StructLayout(LayoutKind.Explicit)]
public struct MyStruct
{
    [FieldOffset(0)]
    public int i1;
    [FieldOffset(4)]
    public int i2;
    [FieldOffset(0)]
    public long l1;
}
The Pack parameter tells the runtime how to align the fields. 0 is the default and means the platform default packing, while 1 will make the size of the first struct above actually be 17.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MyStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}
Other valid values are 2, 4, 8, 16, 32, 64 or 128. You can test how the performance is affected by this, as an exercise.
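
If you want a starting point for that exercise, here is a rough sketch (the struct and class names are mine): it allocates two big arrays, one packed and one with the default layout, and times a pass that reads the long field, which sits at a misaligned offset in the packed version. On modern x86/x64 CPUs the difference may be small, so treat the numbers as indicative only:
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct PackedStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}

public struct AlignedStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}

class PackBenchmark
{
    const int N = 5000000;

    static long SumPacked(PackedStruct[] arr)
    {
        long sum = 0;
        for (int i = 0; i < arr.Length; i++) sum += arr[i].l1; // l1 is misaligned here
        return sum;
    }

    static long SumAligned(AlignedStruct[] arr)
    {
        long sum = 0;
        for (int i = 0; i < arr.Length; i++) sum += arr[i].l1; // l1 sits on an 8-byte boundary here
        return sum;
    }

    static void Main()
    {
        var packed = new PackedStruct[N];
        var aligned = new AlignedStruct[N];

        // warm up the JIT and the caches before timing
        long dummy = SumPacked(packed) + SumAligned(aligned);

        var sw = Stopwatch.StartNew();
        dummy += SumPacked(packed);
        Console.WriteLine($"packed:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        dummy += SumAligned(aligned);
        Console.WriteLine($"aligned: {sw.ElapsedMilliseconds} ms (checksum {dummy})");
    }
}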

More information here: Advanced c# programming 6: Everything about memory allocation in .NET

Update: I've created a piece of code to actually test for this:
unsafe static void Main(string[] args)
{
    var st = new MyStruct();
    Console.WriteLine($"sizeof:{sizeof(MyStruct)} Marshal.sizeof:{Marshal.SizeOf(st)} custom sizeof:{MySizeof(st)}");
    Console.ReadKey();
}

private static long MySizeof(MyStruct st)
{
    // measure the managed heap before and after allocating a large array of structs,
    // then divide by the number of elements to get the per-element size
    long before = GC.GetTotalMemory(true);
    MyStruct[] array = new MyStruct[100000];
    long after = GC.GetTotalMemory(true);
    var size = (after - before) / array.Length;
    return size;
}

Considering the original MyStruct, the size reported by all three ways of computing it is 24. I had to test the idea that alignment might be capped at 4 bytes, so I used this structure:
public struct MyStruct
{
    public long l;
    public byte b;
}
Since long is 8 bytes and byte is 1, I expected the size to be 16, not 12, and it was. However, I decided to also try with a decimal instead of the long. A decimal is 16 bytes, so if my interpretation was correct, the 17 bytes should be padded to a multiple of the biggest primitive field's size: a multiple of 16, so 32. The result was weirdly inconsistent: sizeof:20 Marshal.sizeof:24 custom sizeof:20, which suggests an alignment to 4 or 8 bytes, not 16. So I started playing with the StructLayoutAttribute:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MyStruct
{
    public decimal d;
    public byte b;
}

For Pack = 1, I got a consistent 17 bytes. For Pack = 4, I got consistent values of 20. For Pack = 8 or higher, I got the weird 20-24-20 result, which suggests packing works differently for decimals than for other values. I replaced the decimal with a struct containing two long values and the consistent result was back to 24, but then again, that's expected. The funny thing is that Guid is also a 16-byte value and itself a struct, and the resulting size was 20. Guid is not a primitive type, though.

The only conclusion I can draw is that what I wrote in this post is true. Also, StructLayout's Pack does not work as I had expected: it acts as an upper limit on alignment rather than forcing it. If the biggest element in the struct is 8 bytes, then the smaller of the Pack value and 8 will be used for alignment. In other words, the alignment of the type is the size of its largest element (1, 2, 4, 8, etc. bytes) or the specified packing size, whichever is smaller.
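
A minimal way to check the "whichever is smaller" rule (the struct names are just for illustration): a Pack larger than the biggest field should change nothing, while a smaller Pack should shrink the padding.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 16)]
public struct PackedUp
{
    public long l;
    public byte b;
}

[StructLayout(LayoutKind.Sequential, Pack = 4)]
public struct PackedDown
{
    public long l;
    public byte b;
}

class PackRuleDemo
{
    unsafe static void Main()
    {
        // Pack = 16 is larger than the biggest field (8), so min(16, 8) = 8 is used:
        // the size should stay 16, it should not grow to 32
        Console.WriteLine(sizeof(PackedUp));
        // Pack = 4 is smaller than 8, so min(4, 8) = 4 is used:
        // long at 0, byte at 8, 9 bytes rounded up to a multiple of 4 = 12
        Console.WriteLine(sizeof(PackedDown));
    }
}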

All this if you are not using decimals... then all bets are off! From my discussions with Filip B. Vondrášek in the comments of this post, I've reached the conclusion that decimals are internally structs that are aligned to their largest element, an int, so to 4 bytes. However, it seems Marshal.SizeOf misreports the size of structs containing decimals, for some reason.
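
One way to peek at that internal representation without a debugger is decimal.GetBits, which exposes the four 32-bit integers a decimal is made of (three for the 96-bit value, one for scale and sign) - consistent with an alignment of 4:
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // for 123.45m this prints something like 12345, 0, 0, 131072 (the scale of 2 is encoded in the flags)
        int[] bits = decimal.GetBits(123.45m);
        Console.WriteLine(string.Join(", ", bits));
    }
}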

In fact, all "simple" types are structs internally, as described by the C# language specification, but the Decimal struct also implements IDeserializationEventListener, but I don't see how this would influence things. Certainly the compilers have optimizations for working with primitive types. This is as deep as I want to go with this, anyway.

For anyone coming from the welcoming arms of Visual Studio 2015 and higher, Eclipse feels like an abomination. However, knowing some nice tips and tricks helps a lot. I want to give a shout out to this article: Again! – 10 Tips on Java Debugging with Eclipse, which is much more detailed than what I am going to write here and which inspired this post.

Three things seemed most important to me, though, and these are what I am going to highlight:
  1. Show Logical Structure - who would have known that a little setting on top of the Expressions view would be that important? Remember when you cursed at how Maps are shown in the Eclipse debugger? With Show Logical Structure you can actually see items, keys and values!
  2. The Display View - just go to Window → Show View → Display and you get something that functions a bit like the Immediate Window in Visual Studio. In other words, just write your code there and execute it in the program's context. For a very useful example: write new java.util.Scanner(request.getEntity().getContent()).useDelimiter("\\A").next() in the Display window, select it, then click on Display Result of Evaluated Selected Text, and it will add to the Display window the string of the content of a HttpPost request.
  3. Watchpoints - you can set breakpoints that suspend execution when a variable is accessed or changed!

For details and extra info, read the codecentric article I mentioned above.

Today I've discovered, to my dismay, that two Integer objects with the same value compared with the == operator may return false, because they are different objects. So you need to use .equals (after checking for null, of course). I was about to write a scathing blog entry on how much Java sucks, but then I discovered this amazing link: Java gotchas: Immutable Objects / Wrapper Class Caching, which explains that the Integer class keeps a cache of 256 values, so that everything between -128 and 127 is actually equal as an instance as well.

Yes, folks, you've heard that right. I didn't believe it, either, so I wrote a little demo code:
Integer i1 = Integer.valueOf(1);
Integer i2 = Integer.valueOf(1);
boolean b1 = i1 == i2; // true

i1 = Integer.valueOf(1000);
i2 = Integer.valueOf(1000);
boolean b2 = i1 == i2; // false

i1=1;
i2=1;
boolean b3 = i1 == i2; // true

i1=1000;
i2=1000;
boolean b4 = i1 == i2; // false

i1=126;
i2=126;
boolean b5 = i1 == i2; // true

i1++;
i2++;
boolean b6 = i1 == i2; // true

i1++;
i2++;
boolean b7 = i1 == i2; // false

i1 = 2000;
i2 = i1;
boolean b8 = i1 == i2; // true

i1++;
i1--;
boolean b9 = i1 == i2; // false


Update: the same thing also applies to Strings. Two String objects with the same value are not necessarily == (identical literals are interned, but strings built at run time are separate objects), and since strings are immutable, any "change" produces a new object that is no longer == to the original. Fun!

I now submit to you that "sucks" applies to many things, but not to Java. A new term needs to be defined for it, so that it captures the horror above in a single word.

Fuck Java! Just fuck it! I have been trying for half an hour to understand why a NullPointerException is thrown in a piece of Java code that I can't debug. It was a simple String object that was null inside a switch statement. According to this link: "The prohibition against using null as a switch label prevents one from writing code that can never be executed. If the switch expression is of a reference type, that is, String or a boxed primitive type or an enum type, then a run-time error will occur if the expression evaluates to null at run time."

As a .NET developer I am very familiar with LINQ, or Language Integrated Query, which is a collection of fluent interface methods for querying data from collections. However, many people outside the .NET ecosystem are not familiar with the concept, or they use it as disparate functions in their language of choice. What makes it even more confusing is that the same concept is implemented in other languages under different names. Let me give you an example:
var arr = new[] { 1, 2, 3, 4, 5, 6 };
var result = arr
    .Where(v => v % 2 == 0)          // get only even values
    .Select(v => v * 10)             // return their values multiplied by 10
    .Aggregate(15, (s, v) => s + v); // aggregate their values into a sum that starts with a seed of 15
// result should be 15+2*10+4*10+6*10=135

We see here the use of three of these methods:
  • Where - filters the values on a condition
  • Select - changes the values it returns
  • Aggregate - creates an aggregate value using an operation on all the values in the collection

Let me write you the same C# code without using these methods:
var arr = new[] { 1, 2, 3, 4, 5, 6 };
var result = 15;
foreach (var v in arr) {
    if (v % 2 == 0) {
        result += v * 10;
    }
}

In this case, some people might prefer the second version, but it is only an example. LINQ is not a silver bullet that replaces all loops, only a tool amongst many in a large toolset. Some advantages of using such methods are concise code, better readability, a common API for iterating, filtering and querying collections, and so on. For example, in the widely used Entity Framework or its previous incarnations such as LINQ to SQL, the queries look the same, but they are translated into SQL, sent to the database and executed just once. So it does not pull a list of thousands of records and filter it in memory; instead it translates the expression of the function passed to the query into SQL and executes it there. The same sort of operations can be used on streams of data, rather than fixed collections, as in the case of Reactive Extensions.
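
The deferred nature of these queries is easy to see even with plain LINQ to Objects, without a database: nothing runs until the result is actually enumerated. A small sketch:
using System;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var source = new[] { 1, 2, 3, 4, 5, 6 };

        // Where only builds a query object here; the lambda has not run yet
        var query = source.Where(v =>
        {
            Console.WriteLine($"checking {v}");
            return v % 2 == 0;
        });

        Console.WriteLine("query built, nothing executed yet");

        // only now, while enumerating, does the filter actually run
        foreach (var v in query)
            Console.WriteLine($"got {v}");
    }
}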

Some other methods in this set include:
  • First/Last - getting the first or last element in an enumerable that satisfies a boolean condition
  • Skip - ignoring a number of values in a collection
  • Take - returning a number of values in a collection
  • Any/All - returning true if at least one or all of the items satisfy a boolean condition
  • Average/Sum/Min/Max - specific aggregating methods for the elements in the collection
  • OrderBy/OrderByDescending - sorting
  • Count - counting

There are many others, you can look them up here.
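
Here is a quick sketch using a few of the methods above on the same array, with the expected results in comments:
using System;
using System.Linq;

class LinqMethodsDemo
{
    static void Main()
    {
        var arr = new[] { 1, 2, 3, 4, 5, 6 };

        var firstEven = arr.First(v => v % 2 == 0);     // 2
        var middle = arr.Skip(2).Take(2);               // 3, 4
        var anyBig = arr.Any(v => v > 5);               // true
        var average = arr.Average();                    // 3.5
        var descending = arr.OrderByDescending(v => v); // 6, 5, 4, 3, 2, 1
        var evenCount = arr.Count(v => v % 2 == 0);     // 3

        Console.WriteLine($"{firstEven} {string.Join(",", middle)} {anyBig} {average} {string.Join(",", descending)} {evenCount}");
    }
}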

Does this system of querying data seem familiar to you? To SQL developers it will feel second nature. In SQL the same result from above would be achieved by using something like:
SELECT 15 + SUM(v * 10) FROM table WHERE v % 2 = 0

Note that, other than putting the source of the data in front, the LINQ syntax is almost identical.

However, in other languages this sort of data query is called map/reduce, and in fact there is a widely used programming model called MapReduce that applies to big data processing. In Java, the function that filters data is called filter, the one that alters the values is called map and the one that aggregates data is called reduce. It's similar in JavaScript. Here is the same code in JavaScript:
var arr=[1,2,3,4,5,6];
var result=arr
.filter(v=>v%2==0) //get only even values
.map(v=>v*10) //return their values multiplied with 10
.reduce((s,v)=>v+s,15); //aggregate their value into a sum that starts with a seed of 15
// result should be 15+2*10+4*10+6*10=135
Note that the lambda syntax for writing functions used here is new in ECMAScript 6. Before that you would have to use the function(x) { return [something with x]; } syntax.

In Haxe, the concept is achieved by using the Lambda library and the functions are again named differently: filter for filtering, map for altering and fold for aggregating.

There is another group of people who would instantly recognize this model of data querying: functional programmers. Indeed, SQL is at its core a declarative, functional-style language, and the same approach to data querying is used very efficiently in functional programming languages, since they know whether a function is pure or not (free of side effects). When dealing only with pure functions, some optimizations can be made on the query by the compiler before anything is even executed. Haskell has the same naming as Haxe (filter, map, fold), for example.

So whenever I get to review other people's code, especially from people who have little experience with either SQL or C#, I cringe when I see stuff like this:
var max=-1;
for (var i=0; i<arr.length; i++) {
    if (max<arr[i]) max=arr[i];
}
In my head this should be simply arr.max(); and considering how easy it is to implement something like this in JavaScript, for example, it's a crime not to use it:
Array.prototype.max=function() { return Math.max.apply(null,this); }

Yet there is more to this than my personal preference for reading code. Composition, for example. Because this works like a fluent API or a builder pattern, one can keep adding conditions to a query. Imagine you have to filter a list of strings based on a Google-like query string. At the very minimum you would need to split the query into words and filter repeatedly on each one. Something like this:
var arr=['this is my special query string','this is a string','my query string is this awesome','no query strings here, move along','these are not the strings you are looking for'];
var query="this is a query string";
var splits=query.split(/\s+/g);
var result=arr;
splits.forEach(s=>result=result.filter(a=>a.includes(s)));
console.log(result);

There is a lot of stuff I could be saying about this subject, but I need to summarize it. It's all about inverting loops. Instead of having to go through a collection, a stream or some other data source, then executing some code for each element, this method allows you to encapsulate the operations you want to execute on those elements, pass them around, compose them, translate them, then use them on any data source in the same way. A common API means reusability, better readability of code, less written code and a simpler declaration of intent. Because we get out of the loop system, we can expand the use for other paradigms, such as infinite data streams or event buses.

The first thing that strikes anyone starting to use an IDE other than the one they are used to is that all the key bindings are wrong. So they immediately google something like "X key bindings for Y" and they usually get an answer, since developers switching IDEs and preferring one in particular is quite a common situation. Not so with Eclipse. You have to install software, remove settings, and then still modify stuff by hand. I am going to give you the complete answer here on how to switch Eclipse key bindings to the ones you are used to in Visual Studio.

Step 1
First follow the instructions in this Stack Overflow answer: How to Install Visual Studio Key Bindings in Eclipse (Helios onwards)
Short version: Go to Help → Install New Software, select your version in the Work with box, wait until the list populates, check the box next to Programming Languages → C/C++ Development Tools and install (with restart). After that go to Window → Preferences → General → Keys and change the Scheme dropdown to Microsoft Visual Studio.

Step 2
When Eclipse starts it shows you a Welcome screen. Disable the welcome screen by checking the box in the bottom-right corner and restart Eclipse. This is to avoid Ctrl+arrow keys not working in the editor, as explained in this StackOverflow answer.

Step 3
While some shortcuts do work, others do not. It is time to go to Window → Preferences → General → Keys and start changing key bindings. It is a daunting task at first, since you have to find the command, set the shortcut in the zillion contexts that are available and so on. The strategy I found works best is this:
  • Right click on whatever item you want to affect with the keyboard shortcut
  • Find in the context menu whatever command you want to do
  • Remember the keyboard shortcut
  • Go to the key preferences and replace that shortcut everywhere (the text filter in the key bindings dialog allows searching for keyboard shortcuts)

You might want to share or at least back up your keyboard settings. No, the Export CSV option in the key bindings dialog gives you a file you can't import. The solution, as detailed here, is to go to File → Export or Import → General → Preferences and work with .epf files. And if you think it gives you a nice list of key bindings that you can edit with a text editor, think again. The format holds the key binding scheme name, then only the custom changes, in a file that is what .ini and .xml would produce if they decided to have children.

Now, the really decent thing would be to not go through Step 1, and instead just start from the default bindings and change them to match Visual Studio (2016, not 2005!!), then export the .epf file so that everybody can enjoy a simple and efficient method of reaching their goal. I leave this as an exercise for the reader.

A short list of shortcuts that I found missing from the Visual Studio scheme: rename variable on F2, go to declaration on F12, Ctrl-Shift-F for search across files, Ctrl-Minus to navigate backward ... and more to come, I am sure.

Ugh! I will probably be working in Java for a while. Or I will kill myself, one of the two. So far I hate Eclipse, I can't even write code and I have to blog simple stuff like how to do regular expressions. Well, take it as a learning experience! Exiting my comfort zone! 2017! Ha, ha haaaaa [sob! sob!]

In Java you use Pattern to do regex:
Pattern p = Pattern.compile("a*b");
Matcher m = p.matcher("aaaaab");
boolean b = m.matches();

So let's do some equivalent code:

.NET:
var regDate = new Regex(@"^(?<year>(?:19|20)?\d{2})-(?<month>\d{1,2})-(?<day>\d{1,2})$", RegexOptions.IgnoreCase);
var match = regDate.Match("2017-02-14");
if (match.Success) {
    var year = match.Groups["year"];
    var month = match.Groups["month"];
    var day = match.Groups["day"];
    // do something with year, month, day
}

Java:
Pattern regDate = Pattern.compile("^(?<year>(?:19|20)?\\d{2})-(?<month>\\d{1,2})-(?<day>\\d{1,2})$", Pattern.CASE_INSENSITIVE);
Matcher matcher = regDate.matcher("2017-02-14");
if (matcher.find()) {
    String year = matcher.group("year");
    String month = matcher.group("month");
    String day = matcher.group("day");
    // do something with year, month, day
}

Notes:

The first thing to note is that there is no verbatim string literal support in Java (the @"string" format from .NET) and there is no "var" (one always has to specify the type of the variable, even if it's fucking obvious). Second, the regular expression object Pattern doesn't perform operations directly; instead it creates a Matcher object that then does them. The two bits of code above are not completely equivalent either, as the Success property of a .NET Match object holds the result of an already performed operation, while .find() on the Java Matcher object actually performs the match.

Interestingly, it seems that Pattern is automatically compiling the regular expression, something that .NET must be directed to do. I don't know if the same term means the same thing for the two frameworks, though.
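As a reference, the "direction" on the .NET side is just a flag on the constructor; whether it pays off depends on how often the expression is reused:
// compilation to IL is opt-in in .NET
var regDate = new Regex(@"^(?<year>(?:19|20)?\d{2})-(?<month>\d{1,2})-(?<day>\d{1,2})$",
    RegexOptions.IgnoreCase | RegexOptions.Compiled);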

Another important thing is that it is more efficient to reuse matchers rather than recreate them. So when you want to use the matcher on another string, use matcher.reset("newstring").

And lastly, the String class itself has quick and dirty regular expression methods like .matches, .replaceFirst and .replaceAll. The matches method returns true only if the entire string matches the pattern (equivalent to a Pattern match with ^ at the beginning and $ at the end).
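
For comparison, .NET has similar quick and dirty static helpers on the Regex class; note that IsMatch does a partial match, so you need to add the anchors yourself if you want the Java behavior. A small sketch:
using System;
using System.Text.RegularExpressions;

class QuickRegexDemo
{
    static void Main()
    {
        // anchored explicitly to mimic Java's String.matches (full match)
        Console.WriteLine(Regex.IsMatch("2017-02-14", @"^\d{4}-\d{1,2}-\d{1,2}$")); // True
        Console.WriteLine(Regex.Replace("2017-02-14", "-", "/"));                   // 2017/02/14
    }
}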