Well, if Java can do it, so can I! Why not make my own C# string class that can do whatever I want it to do? Requirements? Here they are:
  • I want it to work just like regular strings
  • I want to know if the string is empty just by using it as an if clause, just like Javascript: if (text) doSomething(text);
  • I want to see if it's a number just by adding a plus sign in front of it: var number = +text;
  • I want to easily know if the string is null or whitespace!

OK, so first I want to get all the functionality of a normal string, so I will just inherit from it! Only I can't: String is a sealed class in .NET.
Yet there is an abstract class that seems destined to be used for this: ValueType! It's the implicit base class of all value types (which String, notably, is not), but I will be the next guy to use it! Only I can't. If I try, I get the cryptic message: "Error CS0644 'EString' cannot derive from special class 'ValueType'". But it does help with something: when I try to inherit from it, it tells me what methods to implement: Equals, GetHashCode and ToString.
OK, then, I will just do everything from scratch!

Start with a struct (no need for a class) that wraps a string _value field, then override the equality methods:
public struct EString
{
    private string _value;

    public override bool Equals(object obj)
    {
        return String.Equals(obj, _value);
    }

    public override int GetHashCode()
    {
        return _value == null ? 0 : _value.GetHashCode();
    }

    public override string ToString()
    {
        return _value;
    }
}

I used 0 for the hash code when the value is null because that's what the .NET object GetHashCode() does. Now, in order for this to work as a string, I need some implicit conversions between string and my EString struct. So add these beauties:
public string Value { get => _value; }

private EString(string value) { _value = value; }

public static implicit operator string(EString estring)
{
    return estring.Value;
}

public static implicit operator EString(string value)
{
    return new EString(value);
}

EString now supports statements like EString text = "Something";, but I want more! Let's overload some operators. I want to be able to see if a string is null or empty just like in Javascript:
public static bool operator !(EString estring)
{
    return String.IsNullOrEmpty(estring);
}

public static bool operator true(EString estring)
{
    return !!estring;
}

public static bool operator false(EString estring)
{
    return !estring;
}
Yes, ladies and gentlemen, true and false are not only boolean constants, but also operators. That doesn't mean something like if (text == true) ... is legal, but you can use stuff like if (text) ... or text ? "something" : "empty". Overloading the ! operator also allows something more useful, like if (!text) [this shit is empty].

Moar! How about a simple way to know if the string is null or empty or whitespace? Let's overload the ~ operator. How about getting the number encoded into the text? Let's overload the + operator. Here is the final result:
public struct EString
{
    private string _value;

    public string Value { get => _value; }

    private EString(string value) { _value = value; }

    public override bool Equals(object obj)
    {
        return String.Equals(obj, _value);
    }

    public override int GetHashCode()
    {
        return _value == null ? 0 : _value.GetHashCode();
    }

    public override string ToString()
    {
        return _value;
    }

    public static bool operator !(EString estring)
    {
        return String.IsNullOrEmpty(estring);
    }

    public static bool operator true(EString estring)
    {
        return !!estring;
    }

    public static bool operator false(EString estring)
    {
        return !estring;
    }

    public static bool operator ~(EString estring)
    {
        return String.IsNullOrWhiteSpace(estring);
    }

    public static decimal operator +(EString estring)
    {
        return decimal.TryParse(estring.Value, out decimal d) ? d : decimal.Zero;
    }

    public static implicit operator string(EString estring)
    {
        return estring.Value;
    }

    public static implicit operator EString(string value)
    {
        return new EString(value);
    }
}

Disclaimer: I did this mostly for fun. I can't condone replacing your strings with my struct, but look at these usage examples:
EString s = inputText;
if (!~s)
{
    var d = Math.Round(+s, 2);
    if (d != 0)
    {
        Console.WriteLine("Number was introduced: " + d);
    }
}

Update: the initial article was plain wrong :) I fixed it now. The important change is that you need to npm link the dist folder, not the root folder of the plugin project.

So, the problem arises when you want to change a module that is used (and tested) in another module. Let's say your normal flow is to change the version of the child package, then npm run packagr, then npm publish it, then npm install childModule@latest in the parent app. This quickly gets tiresome and leads to unrealistic version numbers.

A better solution is to use npm link. First, you go to your plugin/child module and you run npm run packagr. When it's done, go to the dist folder and run npm link. This will create a symlink in the global node_modules folder for your project's distribution package. Then, move to the parent module and run npm link <name-of-child>. The name of the child is the package name declared in its package.json (the one in dist). This creates a symlink in the parent module's node_modules to the global symlink created earlier.
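Put as commands, the whole dance looks something like this (a sketch; my-plugin and parent-app are made-up names, and my-plugin stands for whatever the "name" field in dist/package.json says):
cd my-plugin
npm run packagr
cd dist
npm link
cd ../../parent-app
npm link my-plugin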

Wait! A few gotchas, first:
  • careful with the operations that might change the content of the folder linked in node_modules, as they will change the actual source code of the plugin
  • after you finish with the work on the plugin, then delete the symlink, publish the child and reinstall @latest at the parent
  • make sure that the version of the plugin package in the parent is permissive (something like >=initialVersion), otherwise you might have problems with the version number you set in the plugin package.json file

Hope this helps.

NPM is a popular package manager (think NuGet for JavaScript) and the information about the packages a project needs is stored in a file called package.json. You run npm install, packages get downloaded into a folder called node_modules and a package-lock.json file is generated. Since you can always delete node_modules and package-lock.json and rerun the install, a common assumption is that the lock file is redundant and shouldn't be stored in source control. That is wrong in most cases.

The lock file stores not only the state of the npm installation, but also the actual versions of the packages it installed (for the entire dependency tree). As opposed to this, package.json contains only the packages directly needed by the project and the acceptable ranges of their versions. One can allow any version of a package, or anything above a given version, or an interval, or "the best version" around a specific version. Deleting the package-lock.json file effectively tells NPM that you trust package.json and the developers of each package to pick the versions of the dependencies loaded.

Here is a common scenario: you create a new application and you need some NPM packages, so you npm install thePackage. This gets the latest version of thePackage and installs it, then marks the exact version in package-lock.json, as well as the versions of the packages thePackage uses, and what they use, and so on. Finally, you commit the project, including package-lock.json. Three months later, a new developer comes and gets the project from source control. They run npm install and see that everything works perfectly, because the packages restored are the exact same versions as the ones restored for the original developer. But then they think "who committed package-lock.json? Don't they know it's redundant?" and remove it from source control. Three months later yet another developer runs npm install on the source from the code repository, only nothing works anymore. The versions that get downloaded now are, depending on what package.json specifies, the latest version of each dependency, or maybe a similar one with a different minor version, each with the dependency versions that its own developers thought best for that particular release.

There is one situation when package-lock.json is entirely redundant, and that is when package.json only specifies exact versions. NPM does not allow republishing over an existing version in its repository, so the devs will never be able to change the dependencies used by a version they have already published. That is why it is safe to assume that the same version of a package will always use the same dependency tree (unless some of the packages are unpublished, but that's another question entirely).

Summary: if you have any dependency in package.json specified as anything other than an exact version (no tilde, no caret, no asterisks, no intervals), then you also need to store package-lock.json in your source control.
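For illustration, here is what those version flavors look like in a package.json (made-up package names; only the exact-pinned style makes the lock file redundant):
"dependencies": {
  "exact-pinned": "1.2.3",
  "patch-updates": "~1.2.3",
  "minor-updates": "^1.2.3",
  "anything-goes": "*",
  "interval": ">=1.2.3 <2.0.0"
}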

Just a short info about HttpInterceptor, which is the Angular way of intercepting http requests, so you can do useful stuff like logging, error handling, authentication, etc. There are two npm packages for http: the old one is @angular/http and the new one is in @angular/common/http. While their interfaces are similar, HttpInterceptor only works with @angular/common/http.

Bonus thing: in the interceptor you are building, when you get the Observable<HttpEvent<any>> from next.handle, do not .subscribe to it, lest you double all http requests (including the ones adding items).
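For reference, a minimal interceptor sketch (my own illustration, not code from the post; it assumes a recent RxJS with pipeable operators):
import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';

@Injectable()
export class LoggingInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // observe the stream with operators like tap();
    // do NOT .subscribe() here, or every request fires twice
    return next.handle(req).pipe(
      tap(event => console.log('HTTP event:', event))
    );
  }
}
It gets registered in a module with { provide: HTTP_INTERCEPTORS, useClass: LoggingInterceptor, multi: true }, where HTTP_INTERCEPTORS also comes from @angular/common/http.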

I am in the process of converting an old web site to Angular 5 CLI. There is little technical value in the exercise, other than my need to understand the underlying concepts, but I had to take some Javascript code and execute it in Typescript, the de facto language for Angular. You hear that Typescript is a superset of ECMAScript, but it's not that easy to integrate existing code.

So, first of all, we are talking pure Javascript code, not set up as a module or anything more advanced. Let's say something like function say(message) { return 'I say '+message.content+' ('+message.author+')'; }. It's a simple function declaration receiving a message object with the fields content and author and returning a string. How do you use it in Typescript, which is a strongly typed language?

First of all, you need to load the script itself. The file can be added to .angular-cli.json, in the scripts section, like this:
"scripts": [
"../node_modules/jquery/dist/jquery.min.js",
"assets/js/someJqueryThing.js",...

Next, in the Typescript file where you want to execute the code, import the script:
import 'someJqueryThing';
(note that it is not the usual import something from something-else syntax, just the name of the script, so that it is bundled in for that page). But at this moment Typescript tells you there is no say method, and that's because you have not declared it for Typescript.

There are two options. One is to add a file called someJqueryThing.d.ts in the same folder as the .js, in which you declare the signature of the say function; the other is to declare it in the .ts file you are running the Javascript from. The syntax, for this case, is
declare function say(obj:any):string;
You could declare an interface and specify what kind of object say receives:
interface Message {
  content: string;
  author: string;
}
declare function say(message: Message): string;
Or you can even use declare var say: any;
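With either declaration in place, the call site then type-checks as you would expect. A tiny sketch (my own example, not from the original code):
const message: Message = { content: 'hello', author: 'me' };
const result: string = say(message); // "I say hello (me)"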

Update for .NET Core 3.0:

Seems for .NET Core 3.0 the solution is much simpler:
  • install the Microsoft.AspNetCore.Authentication.Negotiate NuGet package
  • add authentication in ConfigureServices like this:
    services
    .AddAuthentication(NegotiateDefaults.AuthenticationScheme)
    .AddNegotiate();
  • use the authentication in Configure (above app.UseAuthorization();)
    app.UseAuthentication();

No need to UseIISIntegration, UseHttpSys or anything.
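Put together, the wiring looks something like this (a minimal sketch; the controller setup around it is my assumption of a typical project, not part of the original instructions):
using Microsoft.AspNetCore.Authentication.Negotiate;

public void ConfigureServices(IServiceCollection services)
{
    services
        .AddAuthentication(NegotiateDefaults.AuthenticationScheme)
        .AddNegotiate();
    services.AddControllers();
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseAuthentication(); // must come before UseAuthorization
    app.UseAuthorization();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}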

Original post:

If you get the System.InvalidOperationException "No authenticationScheme was specified, and there was no DefaultChallengeScheme found." it means that ... err... you don't have a default authentication scheme. Solution:
  • Install NuGet package Microsoft.AspNetCore.Authentication in your project
  • add
    services.AddAuthentication(Microsoft.AspNetCore.Server.IISIntegration.IISDefaults.AuthenticationScheme);
    to the ConfigureServices method.

Update: Note that this is for IIS integration. If you want to self-host or use Kestrel in debug, you should use HttpSysDefaults.AuthenticationScheme instead. Funny though, it's the same string value for both constants: "Windows".
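For the self-hosted route, the host setup would look something like this (a sketch based on the Microsoft.AspNetCore.Server.HttpSys package; the exact options shown are my assumption):
using Microsoft.AspNetCore.Server.HttpSys;

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseHttpSys(options =>
        {
            // Windows authentication is negotiated by HTTP.sys itself
            options.Authentication.Schemes =
                AuthenticationSchemes.NTLM | AuthenticationSchemes.Negotiate;
            options.Authentication.AllowAnonymous = false;
        })
        .UseStartup<Startup>();
with services.AddAuthentication(HttpSysDefaults.AuthenticationScheme); in ConfigureServices.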

Oh, and if you enter the wrong credentials when prompted and you can't reenter them, try restarting Chrome (as in this answer).

I was working on this ASP.Net Core 2.0 web project that was spewing a lot of "Application Insights Telemetry (unconfigured): ..." messages in the Debug Output window. At first I thought I should just remove the Microsoft Application Insights NuGet package, but that didn't work: by default, the framework will still use insights even if you don't reference the package anywhere in your code.

The solution is to keep the Microsoft Application Insights NuGet package installed, but then set
Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.DisableTelemetry = true;
somewhere in Startup.cs (the constructor is fine).

Apparently, in the preview versions of Visual Studio 2017 there is an option under Options → Projects and Solutions → Web Projects → Disable local Application Insights for Asp.net Core web projects, too.

Update: A comment suggested you need to Disable the automatic loading of hosting startup assemblies which can be done in two ways:
  1. setting ASPNETCORE_preventHostingStartup to True or 1 in the project properties → Debug → Environment variables
  2. Doing something like
    WebHost.CreateDefaultBuilder(args)
    .UseSetting(WebHostDefaults.PreventHostingStartupKey, "true")
    ...
    - available from .NET Core 2.0

Either of these works, and although I agree with Andrei that the underlying issue for the unwanted telemetry is the automatic loading of hosting assemblies, I feel like the original option, the one that actually contains the word Telemetry in it, is better for reasons of readability. But it's good there are three options to choose from.

This is actually a TypeScript module resolution thing. The shape of the import name tells TypeScript what kind of module it refers to. Relative path imports always need the directory specified, so './myModule' and not 'myModule'. That's because myModule alone could be the name of an already declared ambient module.

Well, there is more to it, but the takeaway is this: if you have an import like import {something} from 'folder/something' and you want a similar import for a file in the same folder, you don't just delete folder/, you replace it with a dot, like this: import {somethingElse} from './something-else'

I used to put all my work in a folder called !Projects, for the simple reason that it would make it the first folder to appear in the file explorer. Due to a limitation in WebPack, which uses the exclamation mark as a loader delimiter in module paths, Angular cannot work with paths containing that character.

I have invented a new way to write software when people who hold decision power are not available. It's called Flag Assisted Programming and it goes like this: whenever you have a question on how to proceed with your development, instead of bothering decision makers, add a flag to the configuration that determines which way to go. Then estimate for all the possible answers to your question and implement them all. This way, management not only has more time to do real work, but also the ability to go back and forth on their decisions as they see fit. Bonus points, FAPing allows middle management to say you have A/B testing at least partially implemented, and that you work in a very agile environment.

A year and a half ago, as I was going from one miserable job interview to the next, I was asked what I thought about code review. At the time I said that I thought it was the most important organizational aspect of writing code. I mean, you can do agile or waterfall, work on games or mobile apps or business applications, use the latest or the oldest, the best or the worst technology, and code review still helps. I still think that way now, but recent experiences with the process have left me wanting to refine my understanding of it. This blog post is about that.

The Good


Why is code review good? The very first thing it does is force you to acknowledge your work. You can be tired, fix one little thing in a lazy way, forget about it, and it might work or it might break something; but when you know you have to publish what you did, you do things less lazily, better documented, more thought out. It doesn't matter if no one ever looks carefully at the review; what matters is that you know someone might.

Second, and obvious: any mistakes you made are more likely to come to the surface when someone looks at the code. It doesn't mean people blame you for mistakes, it means the mistakes don't come and bite you in the ass later, when your work is supposed to be making money for some poor bastard somewhere. This is very important, because we tend to work on systems more complex than we can, or are willing to, understand. If a group of people who together understand the system is reviewing the work, though, you learn not only about the inevitable code errors you introduce, but also about errors in judgement or understanding or in the assumptions you made.

Then there is the learning aspect of it. Juniors learn from seniors reviewing their work, they learn from code reviewing each other and everybody learns from reviewing work made by anyone else. It opens up perspectives. I mean, you can review some method that was copy-pasted four times in order to do the same thing to four different objects and learn how not to do that, ever! No matter how much you would want to when coming in to work hungover and hoping for death a little. For example, I've only recently learned to comment on my own code review before submitting it. Some might say comments in the code should do that, but sometimes you need more, as anchors for discussion, which obviously cannot be carried in code comments. (well, it can, but please don't do that)

And there is more! You get documentation of the code for free. When someone doesn't understand what the hell is going on, they ask questions, which leads to you answering in whatever code review software you use. This will remain there for others to peruse long after you've left the company and gone on to slightly RGB-shifted pastures. I still dream of a non-intrusive system that would connect reviews to the code in your IDE, so you can always see a list of comments and annotations for whatever you are looking at.

One of the benefits is that code review makes everyone in the team write code in the same way. For better or worse. I will detail that in a moment, but think about what it means to read a piece of code, trying to understand it, then switch to the next one and see it written in a completely different style. You waste a lot of time.

Finally, I think the confidence code review gives you can lead not only to better code, but also faster code. More on this comes next. This is controversial, but I think you can use code review to check your code, but only if you trust the reviewers. You might fire off commit after commit after commit, confident that your peers will check what you, normally, would have to double and triple check before committing. It's risky, but with the right team it can do wonders.

The Bad


OK, so it's a great thing, this code review stuff. I knew that, you knew that, so why am I wasting my finger strength? Well, there is a dark side to code review. I've heard some purists insist on rules for code review with which I am not completely comfortable, for example. I invite said purists, if they also read my blog, to come rant in the comments below. My recent experience also touches on said rules and introduces others. Let me detail the bad.

There are programmers and programmers, projects and projects, management and management. Where one developer writes some code and hopes people will look at it carefully and instruct them on what they could improve, other people just lazily write something that kind of works, thinking whoever does the code review will also do the work of making their code remotely usable. Where in some projects developers stay after hours because they want to see their code do good and the project succeed, in others people couldn't care less: they do their time and break the door when the bell rings. Don't expect careful code reviews there. And there is the management issue: management might protect the developers from anything unrelated to coding, or it might pester them with meetings and emails and processes that break concentration, waste time and surely do not help with the attention span of a code reviewer. But in all of the worst cases above, code review is still good, just less effective.

One of the rules I was talking about above was to never commit code unless its code review was accepted. Note the bold font on the never. It was like that whenever I heard the rule. Sounded bold. But I completely disagree with that.

First, if you have developers that you can't trust to commit something, don't let them commit. Either find someone better or do something about their privileges, a system that prevents them from committing. Same goes for people you can't trust to read the code review and update the code after a bad or defective commit.

Second of all, you might work on a file that should appear in more than one code review. No, the system where you do the work, ask for review, then shelve the files so you can work on the next thing doesn't work! It takes time and concentration, and it leads to bad merge resolves that break your code. Just commit the first thing and move to the next. When your review comes back full of bugs, just finish what you are working on, commit that, then return to the code and implement fixes for the issues found. The real problem is code review software that can't understand that committing a file after further changes were made to it doesn't mean you want to include all the changes since time immemorial. That's a software issue, though. Just create a new review and somehow link it to the other, via comments or notes. Creating a personal branch for every developer, or other crazy ideas like that, is also crap.

Not committing work that you've done means delaying your other work, its testing, finding problems in it, etc. Having to juggle software in order to submit to a rigid process that is indifferent to the overall pace of development and the realities of your work is stupid. Just work, commit, review, test, rework. It's what we do.

It's also, I think, an error in judgement to force code review. As good as I think it is, you can work without it. It is an optional process, so keep it that way. Conditioning development on an optional process makes it mandatory. It might sound like a truism, but people don't seem to realize things unless you articulate them.

And then there is human nature. If you ask me to code review for you, I will stop what I am doing and perform the review, because if I don't, you can't commit. It hurts my work, because it breaks my concentration. It hurts your code review, because I am not focused enough. Personally, I am best at reviewing in the morning. None of the organizational crap has happened yet: no meetings, no emails telling me to write other emails, no chat messages asking questions that I have no desire to answer. I am rested, I am a bit pumped from making the minimum physical movements required to get me to the office, and so I am ready to single-mindedly focus on your review. It shouldn't matter that you committed the code yesterday. I'll get to it when I get to it.

The Ugly


The ugly is not only bad, but also disturbing. It's not a characteristic of the code review per se, but is more related to the humans involved in the process. Code review has some nasty side effects on certain people and in certain situations. Let's discuss this for a bit.

I was saying above that it's good everybody writes in a certain way. That actually may stop people from innovating in the writing of code. Do it this way, that's the pattern we're using, you will hear, without the slightest hint of the possibility to improve on that pattern. Same thing might happen with new ideas that you might feel need to be introduced in the project, or some refactoring, or some other creative work that would make you proud and motivated to continue to do good work. As I said above, it's a people problem, not a process problem, but when it happens, it stifles innovation, creativity and ultimately the fucks you give on what happens to the project as a whole.

Code reviews, like any other communication medium, may be abused. People may be attacked or shamed by others who don't really like them. They might not even be junior and senior, as it might involve time in the firm rather than technical skill, or some other hierarchical or social advantage. Ego fights can also erupt in code reviews, which can exacerbate the problem if they are blocking reviews. Arguments are good, pissing contests are ugly, that kind of thing.

Reviews take time. That's really not a people problem, it's a process problem; all processes do. You need to put in the work to do a good review. Just glancing over and saying "it looks good", without trying to understand what the code is supposed to do, is almost worse than refusing to do the review. I am plenty guilty of that. Instead of thinking about what the guy did and trying to help, part of my brain just keeps ruminating on my current development task. This is another argument for separating reviewing from code writing: you need your zone for both. When code reviews waste time rather than spend it, that's ugly.

Finally, I think one major issue with code review is that it encourages lazing off on unit testing, proper testing, refactoring and even the simple writing of the code. This is a management issue, mostly, and it's ugly like vomited shit. When people write horrid code filled with bugs, assuming that code review will fix their lack of interest, that's ugly. When you are urged, more or less vigorously, to skimp on the unit or manual testing because the code review was accepted, that's ugly. But when you are trying to improve the general quality of the code and the answer is either that you don't have time for this, or that any change is unnecessary because the code review passed, or even when you yourself are unwilling to do the refactoring, knowing what a hassle it will be to send it through review, that's damn ugly. It means you want to do more than your share and you get stuck in a process.

And on that note, I end this wall of text. Process before people is always ugly.

Comments and opinions, if you dare! :)

I just read a very cool article (Understanding Default Parameters in Javascript) and my takeaway is this smart piece of code to enforce that a parameter is specified:
const isRequired = () => { throw new Error('param is required'); };

function filterEvil(array, evil = isRequired()) {
  return array.filter(item => item !== evil);
}

So all you have to do is define the isRequired function in a shared library file and then use it in any function that you write.
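To see the guard fire, call filterEvil with and without the parameter (my own example):
filterEvil(['good', 'evil'], 'evil'); // => ['good']
filterEvil(['good', 'evil']); // throws Error: param is required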

Are you a bit put off by the fact that you can use functions as default parameters? Welcome to Javascript, a language that seems designed by Eurythmics.

I am sure I've tested this, but for some reason the icons in my blog disappeared for Internet Explorer. They are using Font Awesome SVG background images, declared something like this:
.fas-comment {
background-image: url("data:image/svg+xml;utf8,<svg height='511.6' version='1.1' viewBox='0 0 511.6 511.6' width='511.6' x='0' xml:space='preserve' xmlns='http://www.w3.org/2000/svg' y='0'><g fill='#2f5faa'><path d='M477.4 127.4c-22.8-28.1-53.9-50.2-93.1-66.5 -39.2-16.3-82-24.4-128.5-24.4 -34.6 0-67.8 4.8-99.4 14.4 -31.6 9.6-58.8 22.6-81.7 39 -22.8 16.4-41 35.8-54.5 58.4C6.8 170.8 0 194.5 0 219.2c0 28.5 8.6 55.3 25.8 80.2 17.2 24.9 40.8 45.9 70.7 62.8 -2.1 7.6-4.6 14.8-7.4 21.7 -2.9 6.9-5.4 12.5-7.7 16.9 -2.3 4.4-5.4 9.2-9.3 14.6 -3.9 5.3-6.8 9.1-8.8 11.3 -2 2.2-5.3 5.8-9.9 10.8 -4.6 5-7.5 8.3-8.8 9.9 -0.2 0.1-1 1-2.3 2.6 -1.3 1.6-2 2.4-2 2.4l-1.7 2.6c-1 1.4-1.4 2.3-1.3 2.7 0.1 0.4-0.1 1.3-0.6 2.9 -0.5 1.5-0.4 2.7 0.1 3.4v0.3c0.8 3.4 2.4 6.2 5 8.3 2.6 2.1 5.5 3 8.7 2.6 12.4-1.5 23.2-3.6 32.5-6.3 49.9-12.8 93.6-35.8 131.3-69.1 14.3 1.5 28.1 2.3 41.4 2.3 46.4 0 89.3-8.1 128.5-24.4 39.2-16.3 70.2-38.4 93.1-66.5 22.8-28.1 34.3-58.7 34.3-91.8C511.6 186.1 500.2 155.5 477.4 127.4z'/></g></svg>");
}

I had to try several things, but in the end, I found out that there are three steps in order to make this compatible with Internet Explorer (and still work in other browsers):
  1. The definition of the utf8 charset must be explicit: data:image/svg+xml;charset=utf8 instead of data:image/svg+xml;utf8
  2. The SVG code needs to be URL encoded: so turn all double quotes into single quotes and then replace < and > with %3C and %3E or use some URL encoder
  3. The colors need to be in rgb() format: so instead of fill='#2f5faa' use fill='rgb(47,95,170)' (same in style tags in the SVG, if any)


So now the result is:
.fas-comment {
background-image: url("data:image/svg+xml;charset=utf8,%3Csvg height='511.6' version='1.1' viewBox='0 0 511.6 511.6' width='511.6' x='0' xml:space='preserve' xmlns='http://www.w3.org/2000/svg' y='0'%3E%3Cg fill='rgb(47,95,170)'%3E%3Cpath d='M477.4 127.4c-22.8-28.1-53.9-50.2-93.1-66.5 -39.2-16.3-82-24.4-128.5-24.4 -34.6 0-67.8 4.8-99.4 14.4 -31.6 9.6-58.8 22.6-81.7 39 -22.8 16.4-41 35.8-54.5 58.4C6.8 170.8 0 194.5 0 219.2c0 28.5 8.6 55.3 25.8 80.2 17.2 24.9 40.8 45.9 70.7 62.8 -2.1 7.6-4.6 14.8-7.4 21.7 -2.9 6.9-5.4 12.5-7.7 16.9 -2.3 4.4-5.4 9.2-9.3 14.6 -3.9 5.3-6.8 9.1-8.8 11.3 -2 2.2-5.3 5.8-9.9 10.8 -4.6 5-7.5 8.3-8.8 9.9 -0.2 0.1-1 1-2.3 2.6 -1.3 1.6-2 2.4-2 2.4l-1.7 2.6c-1 1.4-1.4 2.3-1.3 2.7 0.1 0.4-0.1 1.3-0.6 2.9 -0.5 1.5-0.4 2.7 0.1 3.4v0.3c0.8 3.4 2.4 6.2 5 8.3 2.6 2.1 5.5 3 8.7 2.6 12.4-1.5 23.2-3.6 32.5-6.3 49.9-12.8 93.6-35.8 131.3-69.1 14.3 1.5 28.1 2.3 41.4 2.3 46.4 0 89.3-8.1 128.5-24.4 39.2-16.3 70.2-38.4 93.1-66.5 22.8-28.1 34.3-58.7 34.3-91.8C511.6 186.1 500.2 155.5 477.4 127.4z'/%3E%3C/g%3E%3C/svg%3E");
}

I've learned something new today. It all starts with an innocuous question: given the following struct, tell me what its size is:
public struct MyStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}
Let's assume that this is in 32-bit C++ or C#. (Note: the byte counts below use 1-byte chars, as in C++; in C# a char is actually 2 bytes, but the measured total for this struct comes out at 24 all the same.)

The first answer is 4+1+8+1+2+1 = 17. Nope! It's 24.

Well, it is called memory alignment and it has to do with the way CPUs work. They have memory registers of fixed size, various caches with different sizes and speeds, etc. Basically, when you ask for a 4 byte int, it needs to be "aligned" so that you get 4 bytes from the correct position into a single register. Otherwise the CPU needs to take two registers (let's say 1 byte in one and 3 bytes in another) then mask and shift both and add them into another register. That is unbelievably expensive at that level.

So, why 24? i1 is an int, it needs to be aligned on positions that are multiple of 4 bytes. 0 qualifies, so it takes 4 bytes. Then there is a char. Chars are one byte, can be put anywhere, so the size becomes 5 bytes. However, a long is 8 bytes, so it needs to be on a position that is a multiple of 8. That is why we add 3 bytes as padding, then we add the long in. Now the size is 16. One more char → 17. Shorts are 2 bytes, so we add one more padding byte to get to 18, then the short is added. The size is 20. And in the end you get the last char in, getting to 21. But now, the struct needs to be aligned with itself, meaning with the largest primitive used inside it, in our case the long with 8 bytes. That is why we add 3 more bytes so that the struct has a size that is a multiple of 8.
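Annotating the struct with the offsets from this walkthrough makes the padding visible (assuming 1-byte chars, as the calculation above does):
public struct MyStruct
{
    public int i1;   // offset 0, 4 bytes
    public char c1;  // offset 4, 1 byte, then 3 bytes of padding so the long starts at a multiple of 8
    public long l1;  // offset 8, 8 bytes
    public char c2;  // offset 16, 1 byte, then 1 byte of padding so the short starts at a multiple of 2
    public short s1; // offset 18, 2 bytes
    public char c3;  // offset 20, 1 byte, then 3 bytes of padding to round the size up to a multiple of 8
}                    // total: 24 bytes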

Note that a struct containing a struct will align it to its largest primitive element, not the actual size of the child struct. It's a recursive process.
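Here is a quick sketch of what that means (my own example, with made-up names; the sizes assume the default sequential layout):
public struct Inner
{
    public long L; // 8 bytes, so Inner's alignment requirement is 8
    public byte B; // 1 byte + 7 bytes of padding => size 16
}

public struct Outer
{
    public byte B;  // 1 byte + 7 bytes of padding: Inner aligns to 8 (its largest primitive), not to its total size of 16
    public Inner I; // 16 bytes => total size 24, not 32
}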

Can we do something about it? What if I want to trade speed for memory or disk space? We can use attributes such as StructLayout. It receives a LayoutKind - which defaults to Sequential, but can also be Auto or Explicit - and a numeric Pack parameter. Auto rearranges the order of the members of the struct so that it takes the least amount of space. However, this has some side effects, like getting errors when you want to use Marshal.SizeOf. With Explicit, each field needs to be adorned with a FieldOffset attribute to determine its exact position in memory; that also means you can place several fields on the same position, like in:
[StructLayout(LayoutKind.Explicit)]
public struct MyStruct
{
    [FieldOffset(0)]
    public int i1;
    [FieldOffset(4)]
    public int i2;
    [FieldOffset(0)]
    public long l1;
}
The Pack parameter tells the system how to align the fields. 0 is the default, but 1 will make the size of the first struct above actually be 17.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MyStruct
{
    public int i1;
    public char c1;
    public long l1;
    public char c2;
    public short s1;
    public char c3;
}
Other values can be 2, 4, 8, 16, 32, 64 or 128. You can test how performance is affected by this, as an exercise.

More information here: Advanced c# programming 6: Everything about memory allocation in .NET

Update: I've created a piece of code to actually test for this:
unsafe static void Main(string[] args)
{
    var st = new MyStruct();
    Console.WriteLine($"sizeof:{sizeof(MyStruct)} Marshal.sizeof:{Marshal.SizeOf(st)} custom sizeof:{MySizeof(st)}");
    Console.ReadKey();
}

private static long MySizeof(MyStruct st)
{
    long before = GC.GetTotalMemory(true);
    MyStruct[] array = new MyStruct[100000];
    long after = GC.GetTotalMemory(true);
    var size = (after - before) / array.Length;
    return size;
}

Considering the original MyStruct, the size reported by all three ways of computing size is 24. I had to test the idea that the maximum byte padding is 4, so I used this structure:
public struct MyStruct
{
    public long l;
    public byte b;
}
Since long is 8 bytes and byte is 1, I expected the size to be 16, and it was - not 12. However, I decided to also try with a decimal instead of the long. Decimal values are 16 bytes, so if my interpretation was correct, the 17 bytes should be aligned to the size of the biggest primitive field in the struct: a multiple of 16, so 32. The result was weirdly inconsistent: sizeof:20 Marshal.sizeof:24 custom sizeof:20, which suggests an alignment to 4 or 8 bytes, not 16. So I started playing with the StructLayoutAttribute:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MyStruct
{
    public decimal d;
    public byte b;
}

For Pack = 1, I got a consistent 17 bytes. For Pack = 4, I got consistent values of 20. For Pack = 8 or higher, I got the weird 20-24-20 result, which suggests packing works differently for decimals than for other values. I replaced the decimal with a struct containing two long values and the consistent result was back to 24, but then again, that's expected. Funny thing is that Guid is also a 16-byte value and the resulting size was 20; but then, Guid is itself a struct, not a primitive type.

The only conclusion I can draw is that what I wrote in this post is true. Also, StructLayout's Pack does not work as I had expected: it caps the alignment rather than dictating it. The alignment of the type is the size of its largest element (1, 2, 4, 8, etc., bytes) or the specified packing size, whichever is smaller; if the biggest element in the struct is 8 bytes, then the minimum between the Pack value and 8 will be used.

All this if you are not using decimals... then all bets are off! From my discussions with Filip B. Vondrášek in the comments of this post, I've reached the conclusion that decimals are internally structs that are aligned to their largest element, an int, so to 4 bytes. However, it seems Marshal.SizeOf misreports the size of structs containing decimals, for some reason.

In fact, all "simple" types are structs internally, as described by the C# language specification, but the Decimal struct also implements IDeserializationEventListener, but I don't see how this would influence things. Certainly the compilers have optimizations for working with primitive types. This is as deep as I want to go with this, anyway.

For anyone coming from the welcoming arms of Visual Studio 2015 and higher, Eclipse feels like an abomination. However, knowing some nice tips and tricks helps a lot. I want to give a shout out to this article: Again! – 10 Tips on Java Debugging with Eclipse, which is much more detailed than what I am going to write here and from which I got inspired.

Three things seemed most important to me, though, and these are what I am going to highlight:
  1. Show Logical Structure - who would have known that a little setting on top of the Expressions view would be that important? Remember when you cursed at how Maps are shown in the Eclipse debugger? With Show Logical Structure you can actually see items, keys and values!
  2. The Display View - just go to Window → Show View → Display and you get something that functions a bit like the Immediate Window in Visual Studio. In other words, just write your code there and execute it in the program's context. For a very useful example: write new java.util.Scanner(request.getEntity().getContent()).useDelimiter("\\A").next() in the Display window, select it, then click on Display Result of Evaluated Selected Text, and it will add to the Display window the string of the content of a HttpPost request.
  3. Watchpoints - you can set breakpoints that break into the debugger when a field is accessed or changed!

For details and extra info, read the codecentric article I mentioned above.