
The Grisha trilogy (or the Shadow and Bone series) is, as the name implies, a series of three books that tell the complete story of a young orphan girl and her childhood friend who grow up to find they have powers and must battle an evil that only they can vanquish. Yes, it's typical young adult stuff.

The refreshing bit about this series is the Slavic flavor that permeates the story. The names are Slavic, the legends and history are similar to the ones around Russia, and if you read not the books but the short stories, you get that nice hopeful dread one can find in old Russian legends: things can turn out nice, but most of the time the best you can hope for is instructional and survivable.

That leaves me at an impasse. I liked the books, but compared to the expectations created by the short stories they are pretty crap. I mean, you get that Twilighty romantic triangle thing (it's more of a square, really), and so much potential from characters tortured by a rough childhood is just wasted on pointless romantic dancing around. The first book is clearly the best, but then Leigh Bardugo falters and writes the rest of the story in an ever more traditional way... traditional not to Russian folklore, but to Hollywood bullshit. The evil guy gets more and more evil, for no actual reason; the characters get more and more righteous, for no reason other than being juxtaposed against the evil guy; secondary characters get killed off randomly, with no gain from the effort made in defining them; while primary characters get more and more entangled, again, for no good reason. The worst offence, in my view, is that the author bothered to create this Slavic world of impoverished peasants fighting neverending wars with neighboring countries, only to basically end it all with a happy ending. In order to bring the story to a popular finale, she massacred an entire universe, not unlike the villain of the series.

Bottom line: an interesting and easy to read young adult story that unfortunately ends much worse than it began. Instead of continuing to explore the truly adult themes of loss, betrayal, learning from mistakes, surviving trauma, etc., it caves in to easy romance and the tired idea of light versus dark, losing all other color in the process.

Ed Yong's style is a little over-narrated, like those TV documentaries that start with some guy walking down the street while the narrator presents who he is and what he does. That's the only real issue I had with this book, other than a few groan-inducing puns. Besides that, the book is not only extremely interesting, but also contains a multitude (OK, I like puns too) of well-crafted insights into the biological world all around us.

I Contain Multitudes explains how animal and plant life evolved from a previous state in which microbes were everywhere and everything. Every adaptation since then has taken them into account and forced them to adapt in turn. Microbes, as explained by the book, are not a bunch of criminals hell-bent on causing disease, but a complex ecosystem that overshadows the macrobiome, capable of complex adaptations in a matter of days.

There are a lot of eye-opening ideas in the book. For instance, that disease is more often caused by an imbalance in a community of different microbes than by one opportunistic infection. The old paradigm of "kill 'em all" is no longer valid, as it just clears the way for other microbes to take over the vacated real estate. The way selected cultures of microbes can function as a living drug for all kinds of afflictions, from bowel problems to mental issues, from tree diseases to those transmissible by insect bites, is shockingly powerful. But there is more, the most pervasive idea being that we cohabit a world of bacteria and viruses that are as much a part of our identity and function as any other organ. Indiscriminately killing everything microscopic is then akin to cutting off a limb just because you feel like it.

It is a book I can't recommend enough. Anyone even remotely interested in medicine should consider it a must-read. Anyone interested in their own health should read it. In fact, I can't imagine a single person who shouldn't read it. Check out the book's page on Ed Yong's website for more information, videos and articles.

Just a heads up on a really useful blog post: Script to Delete Data from SQL Server Tables with Foreign Key Constraints

Basically, it creates a stored procedure that displays the dependencies based on a table name and a condition, then some more code that generates the hierarchical delete statements.

A lot of people are using Visual Studio Code to work with TypeScript and are getting annoyed by tslint warnings. TSLint is a piece of software that statically checks your code to find issues with how you wrote it. At first you try to fix the warnings, but since you started on an existing project or template, there are a lot of them and you have work to do. Even worse, the warnings appear only when you open a file, so you think you're done until you start working in another area and get red all over your project files. It's annoying! Therefore a lot of people just disable tslint.

But what if there was a way to check all the linting errors in your entire code? Even better, what if there were some magical way of fixing everything that can be fixed? Well, both of these exist. Assuming you have installed tslint globally (npm install tslint -g) you have these two commands to check and fix, respectively, all the errors in the current project:

tslint "src/**/*.ts?(x)" >lint_result
tslint "src/**/*.ts?(x)" --fix >fix_result

Note the src part, which tells tslint to look in the src folder and not in node_modules :-). If you prefer a more unsafe version that checks everything in the current directory, just replace src with a dot.

The fix option is only available from TSLint 4.0.

Female Orgasms is not so much a book as a really tiny set of chapters that are barely connected to each other. Emily Nagoski is frustrated by the way male standards are used to judge all sexuality and makes the point in this booklet that this is unhelpful, at best. However, while some of the ideas in the book are interesting, to me it seemed like a list of ideas and ramblings gathered together in order to form a volume, with most of the content either really basic or lacking any narrative connection to the rest. 100 ebook pages and 26 chapters: that's saying something.

The book is oriented towards women, with men as a secondary audience. It is not a self-help book for men to become gods in bed, it is a self-help book for women on how to become more aware of their sexuality and enjoy themselves better. Some of the ideas I found interesting are mostly related to expectations. If we know that 95% of women masturbate with clitoral stimulation, why do we even consider it necessary for women to orgasm from vaginal intercourse? It's nice when it happens, but as opposed to men, women don't orgasm predictably, nor is the orgasm the end purpose of a sexual encounter. Another interesting fact is that women are mostly responsive to erotic stimulation, as opposed to men, who just wake up one moment wanting to have sex. It's a statistical fact, but still one to take into consideration. One idea that the author wanted to make clear is that there is only one orgasm: the explosive release of sexual tension. How that tension is generated doesn't matter (to the orgasm).

An important concept that Nagoski makes efforts to popularize is arousal nonconcordance. In other words, while for men there is a strong correlation between physical sexual arousal and the desire or openness for sex, for women it's not quite so. Experiments in which people watch porn while devices measure their physical arousal and they also report how aroused they feel consistently show this to be true. I do feel, though, that the author pushes a little too far, attempting to completely decouple declared and physical arousal. Considering some men use the opposite idea as justification for nonconsensual sex ("your body wants it, so you must want it" kind of logic), that is understandable, but less scientific than I would have liked.

This book is part of a series about sexuality, written by different authors, called the Good in Bed Guide. I found it basic, but probably helpful for a lot of people. I wish it had been better written and edited, though. Also, try reading this on the subway with a straight face.

There is a folder that appears to be quite big when you analyze your drive: Windows\WinSxS. It reports many gigabytes of files. If you are low on space, you might be tempted to delete it. The problem is that the folder is full of hard links to files that are already stored in other places. In order to determine the true size of the folder, run this command with elevated privileges:
dism /online /cleanup-image /analyzecomponentstore
The output looks like this:
Deployment Image Servicing and Management tool
Version: 10.0.16299.15

Image Version: 10.0.16299.309

[===========================99.1%========================= ]

Component Store (WinSxS) information:

Windows Explorer Reported Size of Component Store : 7.03 GB

Actual Size of Component Store : 6.94 GB

Shared with Windows : 6.06 GB
Backups and Disabled Features : 687.18 MB
Cache and Temporary Data : 194.21 MB

Date of Last Cleanup : 2018-06-15 01:02:45

Number of Reclaimable Packages : 0
Component Store Cleanup Recommended : No

The operation completed successfully.
The size that actually matters is the Backups and Disabled Features value: 687.18 MB.

Deleting the folder or the files inside it will break your machine. Moving the files and copying them back will delete the hard links (freeing no space) then copy actual files instead, wreaking havoc with your system.
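
If the analysis does report reclaimable packages or recommends a cleanup, the safe way to reclaim the space (a suggestion of mine, using the same DISM tool, again with elevated privileges) is the component store cleanup command:
dism /online /cleanup-image /startcomponentcleanup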

Update 2024: Sometimes you want to display a JavaScript object as a string and when you use JSON.stringify you get an error: Converting circular structure to JSON. The solution is to use a replacer function that keeps a record of the objects found and returns a text explaining where the circular reference appeared.

Here is a function like that:

function fixCircularReferences(o) {
  // typed arrays and buffers don't stringify usefully,
  // so they are replaced with the name of their type
  const weirdTypes = [
    Int8Array,
    Uint8Array,
    Uint8ClampedArray,
    Int16Array,
    Uint16Array,
    Int32Array,
    Uint32Array,
    BigInt64Array,
    BigUint64Array,
    //Float16Array,
    Float32Array,
    Float64Array,
    ArrayBuffer,
    SharedArrayBuffer,
    DataView
  ];

  // maps every object already visited to the key under which it was first seen
  const defs = new Map();
  // this is a replacer function, to be passed to JSON.stringify(value, replacer)
  return (k, v) => {
    if (k && v === o) return '[' + k + ' is the same as original object]';
    if (v === undefined) return undefined;
    if (v === null) return null;
    const weirdType = weirdTypes.find(t => v instanceof t);
    if (weirdType) return weirdType.toString();
    if (typeof v == 'function') {
      // serialize functions as their source code
      return v.toString();
    }
    if (v && typeof v == 'object') {
      // an object seen before would have caused the circular structure error,
      // so return a note about where it first appeared instead
      const def = defs.get(v);
      if (def) return '[' + k + ' is the same as ' + def + ']';
      defs.set(v, k);
    }
    return v;
  }
}

And you use it like this:

const fixFunc = fixCircularReferences(someObject);
const json = JSON.stringify(someObject,fixFunc,2);

Possible improvements:

  • If you want to do more than just search through it, like actually deserialize it into something, you might not want to return the strings about "the same as" and return null instead.
  • If you want to serialize the circular references somehow, take a look at Newtonsoft's JSON PreserveReferencesHandling setting. You will need to also get the right id, not just the immediate property key name, which is more complicated.

My colleague showed me today something that I found interesting. It involves (sometimes unwittingly) casting an awaitable method to Action. In my opinion, the cast itself should not work. After all, an awaitable method is a Func<Task>, which should not be castable to Action. Or is it? Let's look at some code:
var method = async () => { await Task.Delay(1000); };
This does not work, as compilation fails with Error CS0815 Cannot assign lambda expression to an implicitly-typed variable, which means we need to set the type explicitly. But what is it? It receives no parameter and returns nothing. So it must be an Action, right? But it is also an async/await method, which means it's a Func<Task>. Let's try something else:
Task.Run(async () => { await Task.Delay(1000); });
This compiles. If we hover or go to implementation for the Task.Run method, we reach the public static Task Run(Func<Task> function); signature. So that does it, right? It IS a Func<Task>! Let's try something else, though.
Action action = async() => { await Task.Delay(1000); };
Task.Run(action);
This compiles again! So it IS an Action, too!

What is my point, though? Consider that you want to create a method that receives an Action as a parameter. You want something done before and after executing the action, something like this:
public void ExecuteWithLog(Action action)
{
    Console.WriteLine("Start");
    action();
    Console.WriteLine("End");
}

And then you want to use it like this:
ExecuteWithLog(async () => {
    Console.WriteLine("Start delay");
    await Task.Delay(1000);
    Console.WriteLine("End delay");
});

The output will be:
Start
Start delay
End
End delay
There is NO WAY of awaiting the original method in ExecuteWithLog, as it is received as an Action, and while the lambda waits for a second, execution returns to ExecuteWithLog immediately. Write the method like this:
public async void ExecuteWithLog(Func<Task> action)
{
    Console.WriteLine("Start");
    await action();
    Console.WriteLine("End");
}
and now the output is as expected:
Start
Start delay
End delay
End

Why does this happen? Well, as mentioned above, you start with an Action, then you need to await some method (because now everybody NEEDS to use await/async), and then you get an error that your method is not marked with async. Once you mark it async, it's suddenly something else, not an Action anymore. Perhaps that would be annoying, but it would be less harmful than this ambiguity in what an anonymous async parameterless method actually is.
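
A possible way out, sketched by me rather than taken from anywhere authoritative: declare both an Action overload and a Func<Task> overload. For an async lambda the compiler prefers the Task-returning delegate, which is why Task.Run(async () => ...) above bound to Run(Func<Task>), so the awaitable version gets chosen and properly awaited:

public void ExecuteWithLog(Action action)
{
    Console.WriteLine("Start");
    action();
    Console.WriteLine("End");
}

// for an async lambda, overload resolution prefers this Task-returning version;
// returning Task (instead of async void) also lets callers await the whole thing
public async Task ExecuteWithLog(Func<Task> action)
{
    Console.WriteLine("Start");
    await action();
    Console.WriteLine("End");
}

// binds to the Func<Task> overload: Start, Start delay, End delay, End
// ExecuteWithLog(async () => { Console.WriteLine("Start delay"); await Task.Delay(1000); Console.WriteLine("End delay"); });
// binds to the Action overload:
// ExecuteWithLog(() => Console.WriteLine("synchronous work"));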

What the hell?! After starting with so much potential, the story started fizzling, but there was still a lot of room for greatness. Instead, Bakker seems to have contracted Martinitis for this last book in the series, having important characters die off randomly and insignificant ones suddenly pop up, and filling space with feudal descriptions of battles fought by completely irrelevant characters. Oh, and talking about erect penises. And then the end comes, everything seems to come to some sort of confluence, only it actually doesn't. It all goes completely sideways. Things get confused, the story goes nowhere, and the reader goes to WTF land for the entire day.

What is the purpose of having the reader get invested in characters, only to kill them off, then return them later on (oh, they didn't die!), only to have them do nothing or die (again!)? What is the point of reading the names of every leader of Men and Nonmen while they battle gloriously, complete with a short description of each character right before they die in said battle? Was this book written with dice?

The Unholy Consult is a complete disappointment of a series finale. It resolves practically nothing! Consider that it all started with Drusas Achamian as a learned, in-love, slightly damaged magus who liked to consider the world with wisdom. At the end, he is a bumbling old buffoon who can't string a thought together. Esmenet, the ex-prostitute, chosen by Achamian for her beauty and by the Emperor for her intellect and strength, to bear his children, first rises to the challenge of being a queen, then is just hauled away like a child and does random things. Mimara gives birth to twins, but one is dead; there is no significance to this at all, it's just a random event. The four horns... they appear and disappear in the plot as if they had some great significance, but they don't. Why write about one character for almost a quarter of a book only to kill him randomly in the next? Why be so verbose for 95% of a book only to break out into incoherent scenes and inconsistent actions in the last tiny chapter?! And it goes on and on like that. There is no moral to the story, no resolution to the fact that we followed the actions of a psychopath for twenty years of book time waiting for this precise ending, only to be robbed of any meaningful closure.

Bottom line: I guess the author has a "great vision" in mind. Just as Prince of Nothing was followed by The Aspect-Emperor, a new series of books will follow, which will be, in fact, another volume of the story. Only I have lost all interest. What is the point in following characters if the author is going to butcher them (and I don't mean kill them off) later on to the point of irrelevancy? What is the point of following a story if it leads to nothing?

Siderite's Razor: "The simplest solution/explanation is often somebody whining"

Fullmetal Alchemist: The Sacred Star of Milos is the film that banks on the hunger of Alchemists all over the world after the Brotherhood series ended. It is not a sequel, just a full feature film happening sometime around the 21st episode of the series. The story is complicated: three nations in turmoil, alchemy of all sorts, chimeras and, in the middle of it all, Ed and Al, fighting for what is right.

I liked the story; it hit a lot of sore points of the present, with large nations literally shitting on smaller ones, which can only maintain their dignity by hanging on to old myths that give them moral rights over some godforsaken territory. What I didn't particularly enjoy were the characters and the details of the plot. There were many holes and, all in all, no sympathetic characters. The few promising ones were only barely sketched, while the main ones were kind of dull. The animation also felt lazy. If this was supposed to be a send-off for the characters, it exceeded its purpose, as now I am wondering whether I would even have enjoyed a series made in such a lazy way.

So, bottom line, part cash grab, part great concept. A promising film that reminded me of the series I loved so much a decade ago, but failed to rekindle the hunger I felt when the series ended. Goodbye, Elric brothers!

Visual Studio has a nice little feature called Code Coverage Results, which you can find in the Test menu. However, it does absolutely nothing unless you have the Enterprise edition. If you have it, then probably this post is not for you.

How can we test unit test coverage on .NET projects without spending a ton of money?

I first tried AxoCover, which is a quick install from NuGet. However, while it looks like it does the job, it requires xUnit 2.2.0, and the developer doesn't have the resources to upgrade the extension to the latest xUnit version (which in my case is 2.3.1). In case you don't use xUnit, AxoCover looks like a good solution.

Then I tried OpenCover, which is what AxoCover uses in the background anyway. It is a library that suspiciously shows up in the Visual Studio Manage NuGet Packages window as published on Monday, February 8, 2016 (2/8/2016). However, the latest commits on the GitHub site are from April, only three months ago, which means the library is still being maintained. Unfortunately, OpenCover doesn't have any graphical interface. This is where ReportGenerator comes in, along with some batch file coding :)

Bottom line, these are the steps that you need to go through to enable code coverage reporting on your projects:
  1. install the latest version of OpenCover in your unit test project
  2. install the latest version of ReportGenerator in your unit test project
  3. create a folder in your project root called _coverage (or whatever)
  4. add a new batch file in this folder containing the code to run OpenCover, then ReportGenerator. Below I will publish the code for xUnit
  5. run the batch file

Batch file to run coverage and generate a report for xUnit tests:
@echo off
setlocal EnableDelayedExpansion

rem find the OpenCover, ReportGenerator and xunit.runner.console folders
rem in the NuGet packages folder ("..\..\packages\" is 15 characters long)
for /d %%f in (..\..\packages\*.*) do (
    set name=%%f
    set name="!name:~15!"
    if "!name:~1,9!" equ "OpenCover" (
        set opencover=%%f
    )
    if "!name:~1,15!" equ "ReportGenerator" (
        set reportgenerator=%%f
    )
    if "!name:~1,20!" equ "xunit.runner.console" (
        set xrunner=%%f
    )
)
rem stop with an error if any of the three packages is missing
SET errorlevel=0
if "!opencover!" equ "" (
    echo OpenCover not found in ..\..\packages !
    SET errorlevel=1
) else (
    echo Found OpenCover at !opencover!
)
if "!reportgenerator!" equ "" (
    echo ReportGenerator not found in ..\..\packages !
    SET errorlevel=1
) else (
    echo Found ReportGenerator at !reportgenerator!
)
if "!xrunner!" equ "" (
    SET errorlevel=1
    echo xunit.runner.console not found in ..\..\packages !
) else (
    echo Found xunit.runner.console at !xrunner!
)
if %errorlevel% neq 0 exit /b %errorlevel%

rem run OpenCover with the xUnit console runner, then generate the HTML report
set cmdCoverage="!opencover:\=\\!\\tools\\opencover.console -register:user -output:coverage.xml -target:\"!xrunner:\=\\!\\tools\\net452\\xunit.console.exe\" \"-targetargs: ..\\bin\\x64\\Debug\\MYPROJECT.Tests.dll -noshadow -parallel none\" \"-filter:+[MYPROJECT.*]* -[MYPROJECT.Tests]* -[MYPROJECT.IntegrationTests]*\" | tee coverage_output.txt"
set cmdReport="!reportgenerator:\=\\!\\tools\\reportgenerator -reports:coverage.xml -targetdir:.\\html"

powershell "!cmdCoverage!"
powershell "!cmdReport!"
start "MYPROJECT test coverage" .\html\index.htm

Notes:
  • replace MYPROJECT with your project name
  • the filter says that I want to calculate the coverage for all the assemblies starting with MYPROJECT, but not Tests and IntegrationTests
  • most of the code above is used to find the OpenCover, ReportGenerator and xunit.runner.console folders in the NuGet packages folder
  • powershell is used to execute the command strings and for the tee command, which displays on the console while also writing to a file
  • the little square characters in the original script (the ESC character, code 27, which may not survive copy/paste) are escape characters defining ANSI sequences for showing red text in case of error.

Please feel free to comment with the lines you would use for other unit testing frameworks, I will update the post.

I was under the impression that .NET Framework can only reference .NET Framework assemblies and .NET Core can only reference .NET Core assemblies. After all, that's why .NET Standard appeared, so you can create assemblies that can be referenced from everywhere. However, one can actually reference .NET Framework assemblies from .NET Core (and not the other way around). Moreover, they work! How does that happen?

I chose a particular functionality that works only in Framework: AppDomain.CreateDomain. I've added it to a .NET 4.6 assembly, then referenced the assembly in a .NET Core program. And it compiled!! Does that mean that I can run whatever I want in .NET Core now?
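
To make the experiment concrete, here is a minimal sketch of what I did (my reconstruction; the names are made up):

// in the .NET Framework 4.6 class library (hypothetical name: LegacyLib)
using System;

public static class LegacyCode
{
    public static void CreateDomain()
    {
        // AppDomain.CreateDomain is implemented only in the full .NET Framework
        var domain = AppDomain.CreateDomain("test");
        AppDomain.Unload(domain);
    }
}

// in the .NET Core 2.0 console application that references LegacyLib
class Program
{
    static void Main()
    {
        // this compiles; read on for what happens at runtime
        LegacyCode.CreateDomain();
    }
}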

The answer is no. When running the program, I get a PlatformNotSupportedException, meaning that the IL code is executed by .NET Core, but in its own context. It is basically a .NET Standard cheat. Personally, I don't like this, but I guess it's a measure to support adoption of the Core concept.

What goes on behind the scenes is that .NET Core implements .NET Standard, which can reference .NET Framework assemblies. For this to work you need .NET Core 2.0 and Visual Studio 2017 15.3 or higher.

Caveat lector: while this works, meaning it compiles and runs, you might have problems when you are trying to package your work into npm packages. When ng-packagr is used with such a system (often found in older versions of Angular as index.ts files), it throws a really unhelpful exception: TypeError: Cannot read property 'module' of undefined, somewhere in bundler.js. The bug is being tracked here, but there doesn't seem to be much desire to address it. Apparently, I have rediscovered the concept of a barrel, which now seems to have been abandoned and even completely expunged from the Angular docs.

Have you ever seen enormous lists of imports in Angular code and wondered why they couldn't be more compact? My solution is to re-export from a module all the classes that I will need in other modules. Here is an example from LocationModule, which contains a service and a model for locations:
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { LocationService } from './services/location.service';
import { LocationModel } from './models/location-model';
 
@NgModule({
  imports: [ CommonModule ],
  providers: [ LocationService ]
})
export class LocationModule { }
 
export { LocationModel, LocationService };

The thing to note is that I am exporting the model and the service right away. Now I can do something like
import { LocationModule, LocationModel, LocationService } from '../../../location/location.module';
instead of
import { LocationModule } from '../../../location/location.module';
import { LocationModel } from '../../../location/models/location-model';
import { LocationService } from '../../../location/services/location.service';

In .NET APIs we usually adorn the action methods with [Http<HTTP method>] attributes, like HttpGetAttribute or AcceptVerbsAttribute, to set the HTTP methods that are accepted. However, there are conventions on the names of the methods that make them work even when such attributes are not used. How does ASP.NET determine which methods on a controller are considered "actions"? The documentation explains this, but the information is hidden in one of the paragraphs (a short example follows the list below):
  1. Attributes as described above: AcceptVerbs, HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut.
  2. Through the beginning of the method name: "Get", "Post", "Put", "Delete", "Head", "Options", or "Patch"
  3. If none of the rules above apply, POST is assumed
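
For example, here is a sketch of mine (a hypothetical Web API controller, not taken from the documentation) showing the convention in action, with no HTTP method attributes at all:

using System.Collections.Generic;
using System.Web.Http;

public class ProductsController : ApiController
{
    // the name starts with "Get", so this action only accepts GET requests
    public IEnumerable<string> GetAll()
    {
        return new[] { "first", "second" };
    }

    // the name starts with "Delete", so this action only accepts DELETE requests
    public void DeleteProduct(int id)
    {
    }

    // no attribute and no recognized name prefix, so POST is assumed
    public void Reset()
    {
    }
}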