I didn't like Bird Box. Josh Malerman seems to be a good writer, but the way he chose a cliché for a main character just to skirt any explanation of what happened and avoid any actual attempt at problem solving annoyed the hell out of me.

Imagine a world that is suddenly invaded by something, nobody knows what, but just one glimpse of it would make anyone (including animals) intensely suicidal. The main character is a young woman, left pregnant by some guy she randomly met, who has to deal with this new situation. Whenever the character gets too close to actually thinking about a solution or talking to someone who could find one, she gets all emotional because... children. This is such an ugly and demeaning trope.

The action is not that intense either. Imagine some people worrying day and night because they can't open their eyes. Yes, you can't drive! The horror! In the several years covered by the out-of-sequence chapters, no one actually attempts to function as a blind person would. The author simply dismisses the possibility of a real life without sight. Everyone is stumbling (blindly) and relying on their hearing by shouting "is anyone there? go away!". Unless this is a metaphor for US foreign policy stupidity, these ideas fell on deaf ears with me. Deaf, get it?

Anyway, there is a Netflix movie made after this book; I have no idea why. It could be better than the book, but that isn't a high bar.

I thought The Psychopath Test was not extraordinarily well written, but I enjoyed it. Imagine Jon Ronson, a tiny overanxious journalist and writer, going around the world to discover who these psychos are, what they are and who made or declared them thus. At times he has to connect with people who have violently killed or tortured others, and I feel like only Woody Allen could do these scenes justice. A movie adaptation was announced in 2015, with Scarlett Johansson in the main role, written and directed by someone I don't know. What role would that be, though? There are no lead female characters in the book, although there is a woman who was caught in a bomb blast and then had to defend her very existence to a bunch of asses.

Anyway, what threw me off a little was the article/blog style of writing (called gonzo journalism). It's not bad, I just wasn't expecting it. It feels like Ronson wrote several articles, with some overlaps, then glued them together to paint a larger picture. The result is a patchwork with some holes in it rather than a smooth picture. It does feel more personal, though, and perhaps this is what it should have been all about: the journey of a writer, hence journalism.

The book is not large and it is easy to read. In it we learn how psychopaths behave, why they are different from the rest of us, who created the rules used to spot them and, coming full circle, wonder if any of it is real. I think it was informative, but there are probably a lot more things to be said on the subject. As a personal journey to discover the meaning of psychopathy, it's a good book.

Lovecraft Country is a collection of short stories that are all linked, with the last one bringing them together. It's a very fresh and original take, shining a light on racism in 1950s America while also bringing in bits of the Lovecraftian fantastic. And to Matt Ruff's credit, he does both very well, considering the abysmal record of people trying to adapt Lovecraft and also that he is a white New Yorker.

The heroes of the book are a family of Negroes (their word for it) and, while magic and curses and monsters and parallel dimensions are present, the only truly horrific element of the story is how they are treated by the white population. Yet they stay positive and resilient and survive. Each short story focuses on one of the family members, sometimes two, but only at the end do they all play a part. I also found the character of Caleb Braithwhite compelling: a roguish and charming magician, very similar to Jack Nicholson's devil character from The Witches of Eastwick.

I recommend the book and I feel like I want Ruff to write more in this universe.

After reading Barry's The Great Influenza I had resigned myself to never again reading a book as well researched, as interesting and as viscerally informative, so when I started Pandemic, by Sonia Shah, I had low expectations. And the book blew me away!

While I did notice some factual errors along the way, stuff that was either insufficiently researched or used for dramatic purposes, Pandemic was amazingly good. And terribly disgusting. If Barry took the high road of celebrating the heroes in the fight against pathogens, Shah writes in a way that destroyed, chapter by chapter, more and more of my faith in humanity. By the end of the book I was rooting for a disease that would just come and kill us all, to spare us the embarrassment of being human.

I mean, the investigation starts with cholera and the undignified way in which it makes you involuntarily squirt every liquid you have until you look and feel like a desiccated corpse; and if you don't die, chances are people will mistake you for a corpse and bury you alive. But then it gets to the horrid conditions that existed before the 20th century even in New York, a place where the population density was five times that of modern Tokyo and people would wallow in their own excrement, thrown into the streets and infesting the water supply. Then it describes an epidemic of cholera in such a hellish place; can't get any more disgusting, right?

But wait, then there is a chapter on corruption and how financial interests caused the deaths of thousands just so some people could build a banking corporation like JPMorgan Chase, the biggest US bank today, built on literally feeding shit to people until they died. Diseases kept out of the public eye for the sake of tourism and all that crap. Can it get worse? Yes, because once the disease is there, the blame game is on. The cause of the disease is not germs, the blame is not on a corrupt medical or political system; the fault lies solely with dirty immigrants, gays, minorities and, if all else fails, the aid workers who are trying to help but probably brought the contagion themselves on some sinister agenda.

And then we get to the point where we learn that our brilliant present is based on nothing but ignorance of, or indifference to, present dangers and ongoing superbug pandemics. After all the horror the book presents, the end result is but a whimper: business as usual, ineffective, uninformed, lethargic reactions to attacks that started decades ago and were completely ignored (pooh-poohed, to use Shah's expression, alarmingly suggestive of choleric excrement). The science is way better; the attitudes remain pre-19th century.

I feel like The Great Influenza, Pandemic and I Contain Multitudes are three books that need to be read together, like a pack. Followed or perhaps preceded by Sapiens. I know, these are all books I've recently read and there are probably hundreds more that could join a list based on topic, but to me all of these stories clicked like puzzle pieces and opened my eyes to a complete picture.

In conclusion, I highly recommend reading Pandemic. It's good for the people in the medical field, it's good for people that couldn't care less (they will after reading it), it's a must read.

I started to read the book in French, so as to refresh the language from my high school years, but then got lazy and after a chapter read it in my native Romanian. The literal translation of the title would be In the Forests of Siberia, but for some reason it was translated as The Consolations of the Forest in English. Either title is misleading, as the forests are not really relevant to the story and the whole thing is the personal journal of a French misanthrope who decided to spend six months alone on the shores of Lake Baikal.

I am unfamiliar with the work of Sylvain Tesson (he is a journalist and a traveler), so I can't compare this with other things he wrote, but judging by Goodreads' description of him, this must be his most famous book. Did I like it? I didn't dislike it. In itself it is a daily journal with very little literary value other than the metaphors Tesson uses to express his feelings. Some land, some simply don't work. There are no detailed descriptions of the landscape either. The author does not paint with his words; he mostly whines. If there are people around, he will insult their nation and their presence in his thoughts, while being civil and hospitable to them; if there are no people around, he will complain about the nature of society, humanity, religion or the state. Left alone for a while, though, he starts to be more positive, inspired by nature, but also by the books he devours and then annoyingly feels compelled to quote from.

Some of his emotions ring true, which makes the read compelling and generates thoughts of how you would feel or act in the author's stead. Some descriptions sound exactly like what most people alone in the (proximity of the) woods would produce if their only company were liters of vodka. What I am trying to say is that the book is a journal written by an egotist, therefore describing only him. The beautiful lake, the woods, people, dogs, the wild bears and everything else are just props so we can all bask in his personality... which is pretty shitty. Just as a small example: in four months of journal entries he mentions twice his wish for random women to come into his hut. He mentions he has a girlfriend once. After getting dumped via SMS he whines continuously about how he lost the love of his life and how life now has no meaning and only his two dogs (received as pups when he got there) helped him through it. After the six months pass, he just leaves the dogs there, while proclaiming his love for them.

So, an informative book about how a random French writer asshole felt while living alone in the cold Russian wilderness, but little else. Apparently there is a 2016 movie made after the book. You might want to try that.

I've always had the nagging feeling that someone who writes well could do wonders with the Lovecraft "mythos". A lot have tried and most have failed miserably, because Lovecraft was weird and his sense of horror came from being really intolerant of almost everything, but I keep reading things inspired by the man in the hope of finding something very good.

Unfortunately, Shoggoths in Bloom is one of the shortest stories in this collection of shorts by Elizabeth Bear, is only loosely based on Lovecraft's ideas and is not horror. In fact, none of the stories in the book were horror and some weren't even fantastical, verging instead on personal or perhaps historical fantasy. The quality was inconsistent, with some shorts being nice and others a nightmare to finish. The funny thing is that one of the stories I liked, Tideline, I had listened to before on the Escape Pod web site.

Bottom line: Bear seems to be an accomplished writer and her writing is good, but I wouldn't recommend this collection, both from the standpoint of quality and because it uses a Lovecraft concept to sell something completely different.

SonarSource static code analysis rule RSPEC-3906 states:
Delegate event handlers (i.e. delegates used as type of an event) should have a very specific signature:
  • Return type void.
  • First argument of type System.Object and named 'sender'.
  • Second argument of type System.EventArgs (or any derived type) and is named 'e'.


The problem was that I was getting the warning on a simple event declared as EventHandler&lt;TEventArgs&gt;. Going to the source code page for EventHandler&lt;TEventArgs&gt; revealed the reason in a comment: // Removed TEventArgs constraint post-.NET 4.
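
Presumably that is exactly why an innocent looking declaration gets flagged: without the constraint, the analyzer can no longer prove that the second argument derives from EventArgs. A minimal sketch of the situation, with made up names:

public class Publisher<TEventArgs>
{
    // compiles fine post-.NET 4, because the "where TEventArgs : EventArgs"
    // constraint is gone, but the analyzer cannot prove TEventArgs is an
    // EventArgs, so RSPEC-3906 fires
    public event EventHandler<TEventArgs> SomethingHappened;
}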


In 2015 I was so happy to hear that Corey and Lori Cole, game designers for the Sierra Entertainment company, were making games again, using Kickstarter to fund their work. I was particularly happy that they were doing something very similar to Quest for Glory, which was one of my very favorite game series ever. Well, the game was finally released in the summer of 2018 and I just had to play it. Short conclusion: I had a lot of fun, but not everything was perfect.

The game is an adventure role playing game called Hero-U: Rogue to Redemption and it's about a small time thief who meets a mysterious bearded figure right after he successfully breaks into a house and steals, as per contract, a "lucky coin". The man gives him the opportunity to stop thieving and instead enroll in Hero University as a Rogue, rogues being a kind of politically correct thief, taking from the rich and giving to the poor and all that. You spend the next 40-50 hours playing this kid in the strange university and finally getting to be a hero.

You have to understand that I was playing the Quest for Glory games, set in the same universe as Hero-U, when I was a kid. My love for the series does not reflect only the quality of the games, the humor, the nights without Internet when I had to figure out by myself how to solve a puzzle so that I could brag to my friends who were doing the same at the time, but the entire experience of discovery and wonder that was childhood. My memories of the Sierra games are no doubt a lot better than the games themselves, and any attempt at doing something similar was doomed to harsh criticism. So, did the Coles destroy my childhood?

Nope. Hero-U was full of puns and entertainment and rekindled the emotions I had playing QfG. I recommend it! But it won't get away without criticism, so here it is.

Update: I've finished the game again, going for the "epic" achievement called Perfect Prowler, which requires that you not kill anything. I recommend this for your first playthrough because, if you think about it a bit, it's the easier way to finish the game. To avoid killing anything you need to sneak past enemies, which means maxing your stealth. To defeat your enemies instead (which is also NOT the rogue way as taught at the university) you need all sorts of defenses, combat skills, magical weapons or runes, etc. By focusing on stealth you actually focus on the story, even if it is sometimes annoying to spend ten minutes trying to get past flying skulls, saving and reloading repeatedly, until your stealth is high enough. Some hints for people doing this:

  1. Sleeping powder is your friend, as it instantly makes an enemy unresponsive and does not alert other enemies standing right next to it
  2. Sleeping powder works on zombies, for some reason
  3. Demolishing a wall with a Big Boom while guards are sleeping next to it does not hurt said guards, even better, they magically disappear letting you plunder the entire room
  4. If someone else kills your enemy, you didn't kill anything :)
  5. The achievement only requires that you not kill things; you can attack them at your leisure as long as you flee or use some other method to escape


Anyway, the second run made me even more respectful towards the creators of the game, as they thought of so many contingencies to keep you from getting stuck whatever your style of play. And this in a game that had so many production issues. Congratulations, Transolar!

And now for the original analysis:

What is great about the game is that it makes you want to achieve as much as possible, in a rather subtle way. It doesn't show you X points out of Y the way old Sierra games did, but it always hints at the possibility of doing more if you only "apply yourself". Yes, it feels very much like a school. And I liked it. What's wrong with me?

I liked the design of the game, although I wish there was a way to just open a door you often go through, rather than click on the door and then choose Open from the list of possible and useless options like Listen on the door or Look at the door. I liked that you had a lot of actions for the objects in the game, which made it costly to just explore every possible option, but also satisfying to find one that works in your favor.

And the game is big! A lot of decisions, a lot of characters and areas to explore, a lot of quests and a lot of puns. Although, in truth, even if I loved the QfG series for their puns, in Hero-U it feels like they tried a little bit too much. In fact, I will write a lot about what I didn't like, but those are general things that are easy to point out. The beautiful part is in the small details that are much harder to describe (and not spoil).

The biggest issue I had with the game was the time limits. The story takes the hero through a 50-day semester at the university and he has to do as much as possible in that time. This was good: it makes for a challenge and forces you to manage your time and choose between several options. You can't just train fighting skills for weeks and then start killing critters. However, each day has several other time limits, mainly breakfast/class, supper and sleep. You may be in the depths of the most difficult dungeon, having taken hours to get there, but if it's supper time, your "hero" will instantly find his way back so he can grab some grub. You don't have the option to skip meals or a night's sleep, which would have been great for the experience and very little development effort, as the character already has "tired", "hungry", "injured" and other states that influence his skills.

This brings me to the general issue of the linearity of the story. The best QfG games were wonderful because you had so many options for what you could do: you could explore, do optional side quests that had little or nothing to do with the main story, solve puzzles in a multitude of ways (since in those games you got to choose your class). Hero-U feels very linear to me: a lot of timed quests, areas that only open up after specific events that have nothing to do with you, a store inventory that changes to reflect the point in time you are at, a choice of girls and boys to flirt with but really only one who will easily respond to your attempts at romance, a single possible ending with variations so small as to make them irrelevant, and so on. And many a time it is terribly frustrating to easily find a hidden door or secret passage, but be unable to do anything with it until "it's time". You carry these big bombs with you, but when you get to a blocked door you can't just demolish it. I already mentioned the many options you have to interact with random objects in the game, but the vast majority of them are useless and inconsistent. QfG had some of these issues, too, though.

An interesting concept is the elective classes, which are so easy to miss it's ridiculous. Do not miss the chance (as I did) to do science, magic or healing. It reminds me of QfG games you played as a fighter and then started again as a mage or thief. The point is that to pass all your tests (you get the results a few days later) you need to know your stuff (i.e. read the text of the lectures and understand what the teachers are saying). Unfortunately, the classes don't do much to actually help you. Science gives you a lot of traps and explosives, healing gives you a lot of potions and pills and magic gives you sense magic and some runes. You can easily finish the game without any of them, and it is always annoying to have to run at the end of your classes (at 14:00) to reach the elective classroom on another floor, having to dodge Terk, while also considering that you might want to do work in the lock room, practice room, library, recreation room and reception, all in one hour (you have to get to the class by 15:00). And the elective eats two hours of your time, ending just in time for (the mandatory) dinner.

And then there is the plot itself. I had a hard time getting immersed in a story where young people study at a university that the teachers know is infested with dangerous creatures that students fight, yet they do nothing to either stop or optimize the process. Instead, everybody knows about the secret passages and the areas, but pretends they do not. Students never party up to do a quest together. There are other classes in the university (not only Rogues learn there), but you never meet their students. Each particular Rogue student has a very personal reason to be at the university, which makes it amazing that the class has seven students; in other years there must have been a maximum of two. You get free food from all over the world, but you have to buy your own school supplies. There are two antagonists who really have absolutely no power over you, no back story, and you couldn't care less that they exist. Few of the characters in the game are sympathetic or even have believable motivations.

Bottom line: I remembered what it was like to be a child playing these games and enjoyed a few days of great fun. I felt like the story could have used more work, so that we care about the characters more and have more ways to play the game. The limits often felt artificial and broke my immersion in the fantastic world. It felt like a Quest for Glory game, but not one of the best.

It is worth remembering that this game is the creators' first since the 1990s, when they were working at Sierra. They overcame a lot of new hurdles and learned a lot making Hero-U. The next installments or other games will surely go more smoothly, both in terms of story and playability. I have a lot of trust in them.

Some notes:

  • There is a Hero-U Student Handbook in PDF form.
  • Time is very important. It pays to save, explore an area, reload and go directly where you need to go.
  • Stealth is useful. There is an epic achievement to finish the game without killing anything. That feels a bit extreme, but it also shows that items and combat skills may be less relevant than expected.
  • Exams are important: save and pass the exams so you can get the elective classes. I felt like every part of the story was excessively linear except the electives, which you can even miss completely, because you get no help with them from the teachers or the game mechanics.
  • Some doors towards the end cannot be opened and are reserved for future installments of the series.
  • You can lose a lot of time in the catacombs for no good reason. Don't be ashamed to create and use a map of the rooms.


I leave you with a gameplay video:

[youtube:i_4CHnKCJ40]

This is another post discussing a static analysis rule that made me learn something new. SonarSource rule RSPEC-3898 says: If you're using a struct, it is likely because you're interested in performance. But by failing to implement IEquatable&lt;T&gt; you're losing performance when comparisons are made, because without IEquatable&lt;T&gt;, boxing and reflection are used to make comparisons.

There is a StackOverflow entry that discusses just that, and the answer to this particular problem is not actually the accepted one. In pure StackOverflow fashion I will quote the relevant bit of the answer, just in case the site goes offline in the future: I'm amazed that the most important reason is not mentioned here. IEquatable&lt;&gt; was introduced mainly for structs for two reasons:
  1. For value types (read structs) the non-generic Equals(object) requires boxing. IEquatable<> lets a structure implement a strongly typed Equals method so that no boxing is required.
  2. For structs, the default implementation of Object.Equals(Object) (which is the overridden version in System.ValueType) performs a value equality check by using reflection to compare the values of every field in the type. When an implementer overrides the virtual Equals method in a struct, the purpose is to provide a more efficient means of performing the value equality check and optionally to base the comparison on some subset of the struct's fields or properties.

I thought this was worth mentioning, for those performance-critical struct equality scenarios.
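
For reference, here is a minimal sketch of what the rule asks for, on a hypothetical Point struct:

public struct Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    // strongly typed: no boxing, no reflection
    public bool Equals(Point other) => X == other.X && Y == other.Y;

    // keep the object overload consistent with the typed one
    public override bool Equals(object obj) => obj is Point other && Equals(other);

    public override int GetHashCode() => (X * 397) ^ Y;
}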

Skyward is Brandon Sanderson at his best... and worst. Yes, his best characters have always been young rebellious loudmouths with a penchant for over the top lines and punny jokes. And yes, this is a young adult novel with a classically clichéd plot. I feel guilty for liking it so much, but hey, it apparently works! Personally I feel it's a shame Sanderson spends a year writing a book and I finish it in two days, but at least he's not George R. R. Martin!

The whole idea revolves around a society of humans driven underground by an alien force. They live on a planet surrounded by a sphere of debris few can get through, attacked periodically by alien fighter planes and bombers that the humans must repel in order to survive. And here is this heroic little girl who dreams of becoming a fighter pilot despite her father being universally despised for being a coward and leaving the field of battle. Determined to clear her father's name and her own, she enrolls in a school for fighter cadets and discovers she has what it takes to protect her friends and save humankind.

Sound familiar? It should; every story lately seems to be about the same character. Is it an interesting and engaging character? Yes. Is the world weird and familiar enough to be enjoyed? Yes. If this is all you need, you will love the book. And of course, it's the first book in a series. I need a little more, though, and I feel that the twists were terribly predictable and there were holes everywhere in the world building. If you only focus on the characters, as the author did, you enjoy the book. But as soon as you try to imagine yourself there, things start to make little sense, and whatever you would do, it would not be what the characters in the book do. Plus... that fighter! Deus ex machina much?

Bottom line: a lovely book to read in a few days and feel like a reader, but the story is as standard as they come and the only nice thing about it is that Sanderson wrote it.

Intro


Visual Studio has a very interesting feature called Rule Sets. You basically create a file where you declare which warnings from analyzers will be ignored and which will be displayed as info, warning or error. Combined with the built-in code analysis, but also with the help of a plethora of extensions and NuGet packages, this can be a very powerful tool. I am using VS2017 Professional for this post.

Create a rule set


Let's start by creating a new project (a .NET Framework console app) called Rulesets, which will create the standard Program.cs file and a solution for the project. Next, right click the solution in Solution Explorer and go to Add → New Item, go to the General category and select Code Analysis Rule Set. Name it whatever you want (I will call it default.ruleset), then save it.



At this point you should be in a rule set editor, showing you a list of rules grouped by code analyzer id. Press F4 or go to the little wrench icon so that the Properties window is open, then give your ruleset a name (Default Rule Set) and save the file (ctrl-S).



You can create as many of these files as you want; they will be saved among the Solution Items or as files in a project (I recommend the former) and can be associated with any number of projects. Let's assign the new rule set to our project: go to the solution properties, select Common Properties → Code Analysis Settings, then select as many projects as you want in the list on the right. Then click on the little dropdown arrow and you should be prompted with a list of possible rule sets, including Default Rule Set. Select it.



Obviously, you can just choose one of the Microsoft-included rule sets instead, but those do not take into account your own extensions/packages.
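
For reference, the GUI simply stores this association as an MSBuild property in the project file, something like this (the relative path is illustrative):

<PropertyGroup>
  <CodeAnalysisRuleSet>..\default.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>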

Note: in order to start an incremental process, let's say starting with the minimum recommended settings from Microsoft and then adding stuff to it, use the Include element in a .ruleset file. The files coming by default with Visual Studio can be found at %ProgramFiles(x86)%/Microsoft Visual Studio/2017/Professional/Team Tools/Static Analysis Tools/Rule Sets. Example:
<Include Path="minimumrecommendedrules.ruleset" Action="Default" />

From the Visual Studio GUI you can click on the folder icon in the rule set editor top bar to include other sets and the wrench icon to open the settings.

This helps a lot with having small variations between your projects. For example, a tests project might have different settings, or the data access layer project might have auto generated files that don't respect your coding standards. You can just create a new rule set that includes the default one and then disables some of the rules, like mandatory class documentation. A sketch follows below.
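
Such a derived rule set could look like this (SA1600 is the StyleCop.Analyzers rule that demands element documentation; treat the IDs as examples):

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Data Access Rule Set" ToolsVersion="15.0">
  <Include Path="default.ruleset" Action="Default" />
  <Rules AnalyzerId="StyleCop.Analyzers" RuleNamespace="StyleCop.Analyzers">
    <!-- auto generated files don't need documentation -->
    <Rule Id="SA1600" Action="None" />
  </Rules>
</RuleSet>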

Note that the rule set editor is not perfect: it only shows the rules as defined in the current file, ignoring the included sets. That is why, if the default for a rule is None, you set it to Warning in your base set and then include that set in another one where you set the rule back to None, it will not be saved correctly. Some manual checks are required to ensure correctness.

A list of analyzers


Now, I've noticed a lot of extensions for Visual Studio that use this system. Here is a list of the ones I thought were good enough, free and useful.
Visual Studio extensions:
  • Microsoft Code Analysis 2017 - Live code analysis rules and code fixes addressing API design, performance, security, and best practices for C# and Visual Basic.
  • Security Code Scan - Detects various security vulnerability patterns: SQL Injection, Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), XML eXternal Entity Injection (XXE), etc.
  • MetricsAnalyzer - analyzer extension to check if your code follows metrics rules
  • Moq.Analyzers - Visual Studio extension that helps to write unit tests using Moq mocking library by highlighting typical errors and suggesting quick fixes
  • Code Cracker for C# - analyzer library for C# that uses Roslyn to produce refactorings, code analysis, and other niceties
  • Visual Studio Intellicode - this is interesting in the sense that it uses AI to improve your code and intellisense
  • Roslynator 2017 - A collection of 500+ analyzers, refactorings and fixes for C#, powered by Roslyn.
  • SonarLint for Visual Studio 2017 - Roslyn based static code analysis: Find and instantly fix nasty bugs and code smells in C#, VB.Net, C, C++ and JS.
  • clean-code-net - Set of C# Roslyn analyzers to improve code correctness
  • CommentCop - Analyzes (mostly) xml comments and provides code fixes. Uses Roslyn C# code analyser.

NuGet packages:
  • StyleCop.Analyzers - there is also a StyleCop extension, but weirdly it does not use the Rule Set system; you run it manually and it gives you warnings that you can control only through the extension configuration
  • Public API Analyzer - An analyzer for packages with public APIs.
  • UnityEngineAnalyzer - Roslyn Analyzer for Unity3D
  • AsyncAwaitAnalyzer - A set of Roslyn Diagnostic Analyzers and Code Fixes for Async/Await Programming in C#.
  • ef-perf-analyzer - EntityFramework Performance Analyzer
  • Asyncify-CSharp - an analyzer and codefix that allows you to quickly update your code to use the Task Asynchronous Programming model.

The packages above are NuGets you install in your project. Just check out the list from NuGet: NuGet packages containing Analyzer. Many extensions also have a NuGet counterpart, so it's your choice whether to bloat your Visual Studio or install analyzers on a per-project basis. There is one more advantage to using project packages: the analysis will pop up at build time, so you can enforce not being able to compile without following the rules in the set.
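
As a sketch, in a PackageReference-style project installing an analyzer is just another package entry (the version is whatever is current):

<ItemGroup>
  <!-- the analyzer runs at build time; PrivateAssets="all" keeps it
       from flowing to projects that reference this one -->
  <PackageReference Include="StyleCop.Analyzers" Version="1.1.118" PrivateAssets="all" />
</ItemGroup>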

Curating your rule sets


So you've learned how to choose a rule set for your projects and how to create your own, but what do you use them for? My suggestion is to work on a complete rule set (or sets) for your entire company; then you can enforce a coding style without having to manually code review everything.

If you are like me, then you probably installed everything in that list above and then tried it on your project... and you got tens of thousands of warnings and errors. How do you curate a rule set without having to go through every single message? I see several ways of doing this.

The perfect project


Some people/companies have a flagship technical project that they are very proud of. It uses all the latest technologies, it is perfectly written, thoroughly code reviewed by all the members of the team. It can do no wrong. If you have such a project, just enable all possible rules, then disable all that make suggestions for change. In the end you will have a rule set enforcing your coding style, for better or worse.

Start from scratch


The other solution is to start with a new project, then review every message until you get none. Then start coding. An iterative process, for sure, one you will never finish, but it will be good enough after a while and it will also engage your team in technical discussions on how to improve their code, which can't possibly hurt.

Add analyzers one by one


Start with a rule set where all rules are disabled, then review them one by one as you either choose to disable them or refactor your code. This solution may be the worst, because it gives excuses to just stop the process midway, but it might be the only one available. Anyway, try to install extensions and packages one by one, too.

Start from the coding standards


Perhaps you have a document describing the coding standards in your team. You might start from it, then look for the rules that enforce it. I think that this will only make you see how woefully inadequate your coding standards are, but it might work.

Other notes


I've had the situation where I created derived rule sets from the default one (using Include) and somehow they ended up with an absolute path to the default ruleset file. It might be an issue with how the editor saves the file.

By default, rules in the ruleset editor are grouped by analyzer ID, but multiple analyzers might manage the same rules, so always manage rules individually; otherwise you will see that enabling or disabling an entire group changes other groups as well and you won't know where you started from.

The ruleset editor is not perfect. One very annoying issue is with ruleset inheritance (it doesn't load the parent rules). One example: you want a general ruleset that does NOT include an analyzer (let's say the XUnit one) and then you want something inheriting the base ruleset to DO include the XUnit analyzer. While you can do it by hand, the ruleset editor will not allow you to make these changes, as the original ruleset will have every XUnit rule disabled, but the editor for the unit test one will not know this. There are a few solutions for this:
  1. Have a very inclusive basic rule set, then remove rules from the others. This is not perfect, as there could be circular needs (one set has one rule and not the other, while the other has them the other way around)
  2. Have a lot of basic rule sets, split by topic. Then manage every rule set that you need as just a list of includes. This works in every situation, but requires that you edit the used rule sets by hand. In the case above you could have a special DoNotUseXunitAnalyzers.ruleset, for example, as sketched after this list.
  3. A third solution that I do not recommend is to never include anything, instead just copy paste the content of the basic ruleset in every inheriting one. While this works and allows the editor and whatever engine behind to work well, it would be a nightmare to maintain.
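
Here is what such a topic rule set might look like (the analyzer ID and rule IDs are only examples):

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Do Not Use Xunit Analyzers" ToolsVersion="15.0">
  <Rules AnalyzerId="xunit.analyzers" RuleNamespace="Xunit.Analyzers">
    <Rule Id="xUnit1013" Action="None" />
    <Rule Id="xUnit2013" Action="None" />
    <!-- ...one Rule entry per xUnit rule to silence... -->
  </Rules>
</RuleSet>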

Conclusion


I've discussed how to define and control static code analysis in your Visual Studio projects. If nothing available is up to your standards, Roslyn now allows you to make your own code analyzers in a very simple way. Using code analyzers (and refactorings) can improve productivity, engage the team in technical analysis of standards, enforce coding standards and help you find hard to detect errors in your code.

Hope it helps.

I was playing with code analysis rule sets in Visual Studio (see my blog post about it) and I got hit by some conflicting rules. I will discuss only SonarSource rules, but a lot of other analyzers have similar ones.

OK, one of them is something that I intuitively thought was universally good: RSPEC-3962: "static readonly" constants should be "const" instead. Makes sense, right? A constant is compiled better, integrated faster, it's a constant! No overhead, nothing changes it. This rule was marked as a minor improvement to the code, anyway.

Then, bam!, RSPEC-2339: Public constant members should not be used. Critical rule! Basically it says the opposite: turn your constant into static readonly. What's going on?!

This is not one of those pairs of rules that contradict each other based on user preference, like using var instead of the type name when the type is obvious and vice versa. These are two different, apparently conflicting, yet complementary concepts.

But what is really the difference between a static readonly field and a constant, other than the fact that constants can only be value types or strings? Constant values are retrieved at compile time, as an optimization, since they are not expected to change, while static readonly values are retrieved at runtime. This means that if you use a library in your project, the constants it declares will be baked into your application when you compile it. You may change the .dll of the library afterwards, with inconsistent results, since the readonly statics will now have the changed values but the constants will not.

Here's an example. In the creatively named project Library there is a Container class with a public constant ingeniously named Constant and a public static readonly field that has the same value as Constant:
namespace Library
{
    public class Container
    {
        public const int Constant = 1;
        public static readonly int StaticReadonly = Constant;
    }
}

Then there is a program that uses these two values to display them:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine($"Container.Constant: {Container.Constant} Container.StaticReadonly: {Container.StaticReadonly}");
        Console.ReadKey();
    }
}

The expected output is Container.Constant: 1 Container.StaticReadonly: 1. Now change the value of Constant to 2, right click the Library project and build only it, not the program. Then take the resulting .dll, copy it into the bin folder of the program and run the program manually. The output is now... Container.Constant: 1 Container.StaticReadonly: 2, and that even though the code says StaticReadonly = Constant;. The old value of the constant was baked into the program at compile time, while the static readonly field is read from the new .dll at runtime.

Conclusion: public constants should be avoided if they are used between projects and since you don't know where they will be used, better to avoid them at all times. This will really annoy people who like to create separate classes to store constants, but that's OK, because the feeling is mutual.

So I was watching this Entity Framework presentation and I noticed one example that looked like this:
db.ExecuteSqlCommand($"delete from Log where Time<{time}");

Was this an invitation to SQL injection? Apparently not, since the resulting SQL was something like DELETE FROM Log WHERE Time &lt; @_p0. But how could that be? Enter FormattableString, a class implementing the venerable IFormattable interface, available in the .NET Framework only from version 4.6 and in .NET Core from the very beginning. When an interpolated string is assigned to a FormattableString, it is compiled as an instance holding the format and all the values from the string before formatting. In our case, ExecuteSqlCommand has a FormattableString overload. Note that the method is an extension method from RelationalDatabaseFacadeExtensions, not Database.ExecuteSqlCommand.

Let's test this with a little program:
class Program
{
    static void Main(string[] args)
    {
        var timeDisplay = new TimeDisplay();
        Test($"Time display:{timeDisplay}");
        Console.ReadKey();
    }

    private static void Test(string text)
    {
        Console.WriteLine(text);
    }

    private class TimeDisplay
    {
        public override string ToString()
        {
            return DateTime.Now.ToString("s");
        }
    }
}

Here I create an instance of TimeDisplay and then use it in an interpolated string which is then sent to the Test method, which Console.WriteLines it. The ToString method of TimeDisplay is overridden to display the current time. The result is predictable: Time display:2018-12-13T11:24:02. I then change the type of the parameter of Test to FormattableString. It still works and displays the same thing. Note that if I have both a FormattableString and a string version of the same method, the string one is preferred when an interpolated string is sent as a parameter!
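
That last point deserves a tiny demonstration, because it's a subtle trap; a sketch:

private static void Test(string text)
    => Console.WriteLine("string overload");

private static void Test(FormattableString text)
    => Console.WriteLine("FormattableString overload");

// Test($"{DateTime.Now}"); prints "string overload", because an
// interpolated string converts to string in preference to FormattableString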

But what do I get in that instance? Let's change the Test method even more:
private static void Test(FormattableString text)
{
    Console.WriteLine($"Format: {text.Format} " +
        $"ArgumentCount: {text.ArgumentCount} " +
        $"Arguments: {string.Join(", ", text.GetArguments())}");
}

The displayed result of the program is now Format: Time display:{0} ArgumentCount: 1 Arguments: 2018-12-13T11:28:35. Note that the argument is in fact a TimeDisplay instance and it is displayed as a time stamp because of the ToString override.

What does this mean?

Well, we can do great things like Entity Framework does: interpreting the intent of the developer and providing a more informed output. I am considering this as a solution for logging. Logger.LogDebug($"{someObjectWithAHeavyToString}") would then not have to execute the ToString() method of the object unless the Debug log level is enabled, for example.
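
A minimal sketch of that logging idea (Logger here is hypothetical, not a real library API):

public static class Logger
{
    public static bool DebugEnabled { get; set; }

    public static void LogDebug(FormattableString message)
    {
        // when the level is disabled, the arguments' ToString() never runs;
        // formatting is deferred until we actually ask for the string
        if (!DebugEnabled) return;
        Console.WriteLine(message.ToString());
    }
}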

But we can also really mess things up. I will get past the possible yet unlikely security problem where you believe you pass an object as .ToString() and in fact it is passed as the entire object, allowing a malicious library to do whatever it wants with it. Let's consider more probable scenarios.

One is that a code reviewer will tell you to "put magic strings in their own variables or constants", so you take the string sent to Test and move it to a local variable (which Visual Studio will create as a FormattableString), then you replace the declared type with var (because the type is obvious, right?). Suddenly the variable is a string.

Another is even worse, although if you decided to code like this you have other issues. Let's get back to something similar to the original example:
db.ExecuteSqlCommand($"delete from Log where Id = {id}");

And let's change it:
var sql=$"delete from Log where Id = {id}";
db.ExecuteSqlCommand(sql);

Now sql is a string, its value computed from the id, which might be provided by the user. Replace the id with Bobby Tables and you've got a nice SQL injection.

Conclusion: an interesting, if somewhat confusing, concept. Other than the logging idea, which I admit is pretty interesting, I am yet to find a good place to use it.

The name says it all, maybe too much: "The Everything Creative Writing Book: All you need to know to write novels, plays, short stories, screenplays, poems, articles, or blogs". In this book, Wendy Burt-Thomas takes a holistic approach to writing, discussing everything from how to write poetry, children's books, blogs and technical specs to how to find an agent, self-publish and so on. It covers writing techniques and editing advice, writer's block solutions, how to deal with rejection (or success, for that matter) and much more. In that regard the book is awesome: it tells you a little about everything you might want to know, so you can decide what you actually want to do. But, as in that Nicholas Butler quote, An expert is one who knows more and more about less and less until he knows absolutely everything about nothing, the book is probably not very useful to someone who has already started working on things.

That said, the book is compact and to the point and can help a lot at the very beginning of the writer's journey. It can be used as a reference: whenever a particular subject or concern appears, you just flip to that chapter and see what Wendy recommends. Is it good advice? I have no idea. I've certainly read books that go more in depth on topics that interested me more, like how to write a novel or how to set up a scene, but a panoramic view of the business is not bad either. The material also felt a little dated for something released in 2010, especially in the technical sections.

You choose if you find it useful or not.

Intro


An adapter is a software pattern that exposes functionality through an interface different from the original one. Let's say you have an oven with the function Bake(int temperature, TimeSpan time) and you expose a MakePizza() interface. It still bakes at a specific temperature for an amount of time, but you use it differently. Sometimes we have similar libraries with a common goal but different scopes, which one is tempted to hide under a common adapter. You might want to just cook things, not bake or fry.

So here is a post about good practices of designing a library project (complete with the use of software patterns, ugh!).



Examples


An example in .NET would be the WebRequest.Create method. It receives a URI as a parameter and, based on its type, returns a different implementation that will handle the resource in the way declared by WebRequest. For HTTP it will use an HttpWebRequest, for FTP an FtpWebRequest, for file access a FileWebRequest and so on. They are all implementations of the abstract class WebRequest, which would be our adapter. The Create method itself is an example of the factory method pattern.

But there are issues with this. Let's assume that we have different libraries/projects, each handling a specific resource scope. They may be so different as to be managed by different organizations: a team works on files, another on HTTP, and the FTP one is an open source third party library. Your team works on the WebRequest class and has to consider the implications of having a Create factory method. Is there a switch in there? "If the URI starts with http or https, return a new HttpWebRequest"? In that case your WebRequest library would need to depend on the library that contains HttpWebRequest! And that's just not possible, since it would be a circular reference. Even if your project had control over all implementations, it would still be a bad idea to let a base class know about a derived class. If you move the factory into a factory class, it still means your adapter library has to depend on every implementation of the common interface. As Joe Armstrong would say: you wanted a banana, but what you got was a gorilla holding the banana and the entire jungle.

So how did Microsoft solve it? Well, they moved the implementation of the factory into separate creator classes that implement IWebRequestCreate. Then they used configuration to associate a prefix with an implementation of WebRequest. Guess you didn't know that, did you? You can register your own implementations via code or configuration! It's such an obscure feature that if you Google WebRequestModulesSection you mostly get links to source code.
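
A quick sketch of both routes (RegisterPrefix and the webRequestModules configuration section are the real extensibility points; MyWebRequestCreator is a hypothetical class implementing IWebRequestCreate):

// in code: associate a prefix with your own creator
WebRequest.RegisterPrefix("custom:", new MyWebRequestCreator());
var request = WebRequest.Create("custom://some/resource"); // now returns your implementation

And the configuration equivalent, in app.config/web.config:

<system.net>
  <webRequestModules>
    <add prefix="custom:" type="MyLib.MyWebRequestCreator, MyLib" />
  </webRequestModules>
</system.net>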

Another very successful example of an adapter library is jQuery. Yes, the one they now say you don't need anymore; it took the industry only 12 years to catch up, after all. Anyway, at the time there were very different implementations of what people thought a web browser should be. The way the DOM was represented, the Javascript objects and methods, the way they actually worked compared to the way they should have worked: everything was different. So developers were often either favoring a browser over the others or were forced to write code for each possible version. Something like "if Internet Explorer, do A; if Netscape, do B". The problem with this is that if you tried to use a browser that was neither Internet Explorer nor Netscape, it would either break or show you one of those annoying "browser not supported" messages.

Enter jQuery, which abstracted access over all these different interfaces with a common (and very nicely designed) one. Not only did it have a fluent interface that allowed you to do multiple things with a single target (stuff like $('#myElement').show().css({opacity:0.7}).text('My text');), but it was extensible, allowing third parties to add modules that would allow even more functionality ($('#myElement').doSomethingCool();). Sound familiar? Extensibility seems to be an important common feature of well designed adapters.

Speaking of jQuery, one much used feature was jQuery.browser, which told you what browser you were using. It had very sophisticated and complex code to get around the quirks of every browser out there. Now you had the ability to do something like if ($.browser.msie) say('OMG! You like Microsoft, you must suck!');. Guess what: the browser extension was deprecated in jQuery 1.9, and not because it was not inclusive. Well, that is the actual reason, but from a technical point of view, not political correctness. You see, now you had this brand new interface that worked great on all browsers and yet your browser could still fail to access a page correctly: it was either an untested version of a particular browser, or a different type of browser, or the conditions for letting the user in were too restrictive.

The solution was to rely on feature detection, not product versions. For example, you use another Javascript library called Modernizr and write code like if (Modernizr.localstorage) { /* supported */ } else { /* not-supported */ }. There are so many possible features to detect that Modernizr lets you pick and choose the ones you need and then constructs the library that handles each, instead of bundling it all in one huge package. It is itself extensible. You might ask what all this has to do with libraries in .NET. I am getting there.

The last example: Entity Framework. This is a hugely popular framework for database access from Microsoft. It abstracts the type of the database behind a very nice (also fluent) interface in .NET code. But how does it do that? I mean, what if I need SQL Server? What if I want MongoDB or PostgreSQL?

The way is to have different "providers" that translate .NET code Expressions into whatever the storage needs. The individual providers are added as dependencies to your project, without the need for Entity Framework to know about them. Then they are configured for use in code, because they implement some common interfaces, and they are ready to use.
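
In Entity Framework Core, for instance, the wiring is a single line in your context. A minimal sketch, assuming the Microsoft.EntityFrameworkCore.SqlServer provider package is referenced and Log is a hypothetical entity:

// using Microsoft.EntityFrameworkCore;
public class LogContext : DbContext
{
    public DbSet<Log> Logs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // EF itself only sees common abstractions; the referenced provider
        // package supplies the actual translation to SQL Server
        options.UseSqlServer("Server=.;Database=Logs;Trusted_Connection=True;");
    }
}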

Principles for adapters


So now we have some idea about what is good in an adapter:
  • Ease of use
  • Common interface
  • Extensibility
  • No direct dependency between the interface and what is adapted
  • An interface per feature

Now that I wrote it down, it sounds kind of weird: the interface should not depend on what it adapts. It is correct, though. In the case of Entity Framework, for example, the provider for MySql is an adapter between the usage interface of MySql and the .NET interfaces declared by Entity Framework; interfaces are just declarations of what something should do, not implementations.

Picture time!


The factory and the common interface form one library, and that is the library your project will use. Each individual adapter depends on it as well, but your project doesn't need to know about any adapter until needed.

Now, it's your choice whether you register the adapters dynamically (let's say you load a .dll and extract the objects that implement a specific interface, and they know themselves what they apply to, like FtpWebRequest for ftp: strings) or you add dependencies on individual adapters to your project and then register them yourself, manually and strongly typed. The important thing is that referencing the factory library doesn't automatically force every possible implementation into your project.

It seems I've covered all points except the last one. That is pretty important, so read on!

Imagine that the things you want to adapt are not really that similar. You want to force them into a common shape, but there will be bits that are specific to one domain only and you might want them. Now here is an example of how NOT to do things:
var target = new TargetFactory().Get(connectionString);
if (target is SomeSpecificTarget specificTarget)
{
    specificTarget.Authenticate(username, password);
}
target.DoTargetStuff();
In this case I use the adapter for Target, but then bring in the knowledge of a specific target called SomeSpecificTarget and use a method that I just know is there. This is bad for several reasons:
  1. For someone to understand this code they must know what SomeSpecificTarget does, invalidating the concept of an adapter
  2. I need to know that for that specific connection string a certain type will always be returned, which might not be the case if the factory changes
  3. I need to know how SomeSpecificTarget works internally, which might also change in the future
  4. I must add a dependency to SomeSpecificTarget to my project, which is at least inconsistent as I didn't add dependencies to all possible Target implementations
  5. If different types of Target will be available, I will have to write code for all possibilities
  6. If new types of Target become available, I will have to change the code for each new addition to what is essentially third party code

And now I will show you two different versions that I think are good. The first is simple enough:
var target = new TargetFactory().Get(connectionString);
if (target is IAuthenticationTarget authTarget)
{
    authTarget.Authenticate(username, password);
}
target.DoTargetStuff();
No major change, other than that I am checking whether the target implements IAuthenticationTarget (which would best be an interface in the common interface project). Now every target that requires (or will ever require) authentication will receive the credentials without the need to change your code.

The other solution is more complex, but it allows for greater flexibility:
var serviceProvider = new TargetFactory()
    .GetServiceProvider(connectionString);
var target = serviceProvider.Get<ITargetProvider>()
    .Get();
serviceProvider.Get<ICredentialsManager>()
    ?.AddCredentials(target, new Credentials(username, password));
target.DoTargetStuff();
So here I am not getting a target, but a service provider (which is another software pattern, BTW), based on the same connection string. This provider will give me implementations of a target provider and a credentials manager. Now I don't even need to have a credentials manager available: if it doesn't exist, this will do nothing. If I do have one, it will decide by itself what it needs to do with the credentials for a target. Does it need to authenticate now or later? You don't care. You just add the credentials and let the provider decide what needs to be done.

This last approach is related to the concept of inversion of control. Your code declares intent while the framework decides what to do. I don't need to know of the existence of specific implementations of Target or indeed of how credentials are being used.

Here is the final version, using extension methods in a method chaining fashion, similar to jQuery and Entity Framework, in order to reinforce that Ease of use principle:
// your code
var target = new TargetFactory()
    .Get(connectionString)
    .WithCredentials(username, password);

// in a static extensions class

public static Target WithCredentials(this Target target, string username, string password)
{
    target.Get<ICredentialsManager>()
        ?.AddCredentials(target, new Credentials(username, password));
    return target;
}

public static T Get<T>(this Target target)
{
    return target.GetServiceProvider()
        .Get<T>();
}
This assumes that a Target has a method called GetServiceProvider which will return the provider for any interface required so that the whole code is centered on the Target type, not IServiceProvider, but that's just one possible solution.

Conclusion


As long as the principles above are respected, your library should be easy to use and easy to extend without the need to change existing code or consider individual implementations. The projects using it will use only the minimum amount of code required to do the job and will themselves depend only on interface declarations. As long as those are respected, the code will work without change. It's really meta: if you respect the interface described in this blog post, then all the interfaces will be respected in the code all the way down! Only some developer locked in a cellar somewhere will need to know how things actually get done.