A reader asked me how to work with multiple projects in Visual Studio Code and after fumbling a little I realized I had no idea. So I started trying out things.

First I created a folder in which I created two other folders ingeniously named Proj1 and Proj2. I went into each and ran dotnet new console and dotnet new classlib, respectively. I moved the Console.WriteLine("Hello World!"); code from the console project's Program.cs into a static method in the Class1 class, called the method from Program.cs Main, then tried to find ways of referencing Proj2 from Proj1 in Visual Studio Code.

And here I got stuck. I tried the smart solutions VS Code recommended, but none of them included adding a reference. I right-clicked on everything, to no avail. I wrote using Proj2; by hand, hoping that Code would magically understand I needed a project reference. I googled, only to find old articles that discussed project.json, not .csproj-based .NET projects.

In the end I resigned myself to writing the reference by hand. I opened Proj1.csproj and added:
<ItemGroup>
  <ProjectReference Include="..\Proj2\Proj2.csproj" />
</ItemGroup>

After saving the file and going to the unresolved Class1 reference, I now got using Proj2; as an option to fix it. And now I got to the problem my reader was having. When trying to run Proj1, I got Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Proj2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified. at Proj1.Program.Main(String[] args).

It's disgustingly easy to solve, you just need to know what to do. Either Ctrl-Shift-P and type restore, then select restoring Proj1, or go to the Proj1 folder and run dotnet restore by hand. After that the project compiles and runs.

Summary:
  1. add project reference by hand to .csproj file
  2. resolve whatever compilation errors you have by specifying the correct usings or inlining namespaces
  3. dotnet restore the project you added references to
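
Note that the same thing can also be done from the command line, if your SDK version supports the add verb (a sketch; run it from the Proj1 folder):
dotnet add reference ../Proj2/Proj2.csproj
dotnet restore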

As a .NET developer I am very familiar with LINQ, or Language Integrated Query, which is a collection of fluent interface methods that deal with querying data from collections. However, many people outside the .NET ecosystem are not familiar with the concept, or they use it as disparate functions in their language of choice. What makes it even more confusing is that the same concept is implemented in other languages under different names. Let me give you an example:
var arr = new[] { 1, 2, 3, 4, 5, 6 };
var result = arr
    .Where(v => v % 2 == 0)          // get only even values
    .Select(v => v * 10)             // return their values multiplied by 10
    .Aggregate(15, (s, v) => v + s); // aggregate their values into a sum that starts with a seed of 15
// result should be 15+2*10+4*10+6*10 = 135

We see here the use of three of these methods:
  • Where - filters the values on a condition
  • Select - transforms each value it returns
  • Aggregate - creates an aggregate value using an operation on all the values in the collection

Let me write you the same C# code without using these methods:
var arr = new[] { 1, 2, 3, 4, 5, 6 };
var result = 15;
foreach (var v in arr)
{
    if (v % 2 == 0)
    {
        result += v * 10;
    }
}

In this case, some people might prefer the second version, but it is only an example. LINQ is not a silver bullet that replaces all loops, only a tool amongst many in a large toolset. Some advantages of using such methods are concise code, better readability and a common API for iterating, filtering and querying collections. For example, in the widely used Entity Framework, or its previous incarnations such as LINQ to SQL, the queries look the same, but they are translated into SQL, sent to the database and executed just once. So it does not fetch a list of thousands of records to filter in memory; instead it translates the expression of the function sent to the query into SQL and executes it there. The same sort of operations can be used on streams of data, rather than fixed collections, as in the case of Reactive Extensions.
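
As a sketch of what that looks like (the context object and its Values set are hypothetical, for illustration only), the Entity Framework version of the query above is written the same way, but runs in the database:
// the lambdas are translated to SQL, so filtering and summing happen server-side
var result = 15 + context.Values
    .Where(v => v.Amount % 2 == 0)
    .Sum(v => v.Amount * 10);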

Some other methods in this set include:
  • First/Last - getting the first or last element in an enumerable that satisfies a boolean condition
  • Skip - ignoring a number of values in a collection
  • Take - returning a number of values in a collection
  • Any/All - returning true if at least one or all of the items satisfy a boolean condition
  • Average/Sum/Min/Max - specific aggregating methods for the elements in the collection
  • OrderBy/OrderByDescending - sorting
  • Count - counting

There are many others; you can look them up in the documentation of the Enumerable class.
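
Here is a quick sketch of a few of them in action (assuming a using System.Linq; directive; the values in comments are the expected results):
var arr = new[] { 1, 2, 3, 4, 5, 6 };
var firstEven = arr.First(v => v % 2 == 0);     // 2
var middle = arr.Skip(2).Take(2);               // 3, 4
var anyLarge = arr.Any(v => v > 5);             // true
var allPositive = arr.All(v => v > 0);          // true
var sum = arr.Sum();                            // 21
var descending = arr.OrderByDescending(v => v); // 6, 5, 4, 3, 2, 1
var evenCount = arr.Count(v => v % 2 == 0);     // 3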

Does this system of querying data seem familiar to you? To SQL developers it will feel like second nature. In SQL the same result as above would be achieved by using something like:
SELECT 15 + SUM(v * 10) FROM table WHERE v % 2 = 0

Note that, other than putting the source of the data in front, the LINQ syntax is almost identical.

However, in other languages this sort of data querying is called map/reduce, and in fact there is a widely used programming model called MapReduce that applies to big data processing. In Java, the function that filters data is called filter, the one that alters the values is called map and the one that aggregates data is called reduce. It is similar in JavaScript. Here is the same code in JavaScript:
var arr = [1, 2, 3, 4, 5, 6];
var result = arr
    .filter(v => v % 2 == 0)      // get only even values
    .map(v => v * 10)             // return their values multiplied by 10
    .reduce((s, v) => v + s, 15); // aggregate their values into a sum that starts with a seed of 15
// result should be 15+2*10+4*10+6*10 = 135
Note that the lambda syntax for writing functions used here is new in ECMAScript 6. Before that, you had to use the function(x) { return [something with x]; } syntax.

In Haxe, the concept is achieved by using the Lambda library and the functions are again named differently: filter for filtering, map for altering and fold for aggregating.

There is another sort of people who would instantly recognize this model of data querying: functional programmers. Indeed, SQL is a functional programming language at its core, and the same standard for data querying is used very efficiently in functional programming languages, since they know whether a function is pure (free of side effects) or not. When dealing only with pure functions, some optimizations can be made on the query by the compiler before anything is even executed. Haskell uses the same naming as Haxe (filter, map, fold), for example.

So whenever I get to review other people's code, especially from people who have little experience with either SQL or C#, I cringe when I see stuff like this:
var max = -1;
for (var i = 0; i < arr.length; i++) {
    if (max < arr[i]) max = arr[i];
}
In my head this should be simply arr.max(); and considering how easy it is to implement something like this in JavaScript, for example, it's a crime not to use it:
Array.prototype.max=function() { return Math.max.apply(null,this); }

Yet there is more to this than my personal preference for reading code. Composition, for example. Because this works like a fluent API or a builder pattern, one can keep adding conditions to a query. Imagine you have to filter a list of strings based on a Google-like query string. At the very minimum you would need to split the query into words and filter repeatedly on each one. Something like this:
var arr = ['this is my special query string', 'this is a string', 'my query string is this awesome', 'no query strings here, move along', 'these are not the strings you are looking for'];
var query = "this is a query string";
var splits = query.split(/\s+/g);
var result = arr;
splits.forEach(s => result = result.filter(a => a.includes(s)));
console.log(result);

There is a lot more I could say about this subject, but I need to summarize. It's all about inverting loops. Instead of going through a collection, a stream or some other data source and executing some code for each element, this method allows you to encapsulate the operations you want to execute on those elements, pass them around, compose them, translate them, then use them on any data source in the same way. A common API means reusability, better readability of code, less written code and a simpler declaration of intent. Because we get out of the loop system, we can expand the use to other paradigms, such as infinite data streams or event buses.
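
To make the infinite stream point concrete, here is a minimal C# sketch (the Naturals method is mine, for illustration):
using System.Collections.Generic;
using System.Linq;

// a lazy, never-ending sequence of natural numbers
static IEnumerable<int> Naturals()
{
    var i = 0;
    while (true) yield return i++;
}

// only as many elements as needed are ever produced
var firstFiveEvenSquares = Naturals()
    .Where(n => n % 2 == 0)
    .Select(n => n * n)
    .Take(5)
    .ToList(); // 0, 4, 16, 36, 64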

It occurred to me recently that the opposite of fear is hope. Well, of course, you will say, didn't you know that? I did, but I also hadn't fully grasped the concept. It doesn't help that fear is considered an emotion, while hope is seen as a more complicated idea.

I was thinking about the things that go wrong in my country and some of it, a large part, comes from bad laws. And I was trying to understand what a "bad law" is. I tried some examples, like the dog leash one - I know, I have a special personal hate for that one in particular - but I noticed a pattern. It's not so much about the content of the law as about its trigger. You see, lawmakers don't propose and pass laws because they like work, but because there was an event that triggered the need for that law. Law is always reactive, not proactive. It could be proactive, but there is a lot more effort involved, like convincing people that there is an actual problem that needs addressing. It's much easier to wait for the problem to manifest and then try (or pretend) to fix it.

Anyway, the pattern that I noticed was related to the trigger for individual laws. The bad laws were the ones that came out of fear. One kid got killed by stray dogs, kill them all and institute mandatory leashes on pets. The good laws, on the other hand, come from hope. Lower taxes so people are more inclined to work and thus produce more and so get more tax in. Hopefully people will not be lazy.

And it's not only related to laws, but to personal decisions as well. Will I try a new thing, hoping that it will make me better, teach me something, be fun, or will I not try it because it is dangerous, somebody might get hurt, I may lose precious time, etc? When it is so abstract it's almost a given that you will take the first choice, yet when it is more personal fear tends to paralyze.

Fear is also contagious. The people who want us to be afraid are afraid themselves. Control freaks, power hungry people, they don't want to take us to a better place because they are afraid to lose that control, because they are afraid of what might happen. And their toolkit is based on fear, too. Something exploded and killed people, some asshole drove a car into people: we must ban explosives, cars and - just to be safe - people. Don't go to space because people might die, although they die every second and most of the time you don't care about it. Let's hoard money and things because we might not get another chance to have them, because we might lose them, because we are so afraid. The fear people don't know any other language but fear and they will use it against you. Much easier to instill fear than to give hope, so hope is not that contagious. It is fragile and it is precious.

I submit that while fear might keep us safe it will never make us happy. The very expression "to keep safe" implies stagnation, keeping, holding, controlling, restricting freedom.

So here is my solution. As Saint-Exupery said, perfection is achieved not when there is nothing more to add, but when there is nothing left to take away. Let's strictly define our safe zone, or the area we need to be safe in order to not be afraid. Personally, as a group, as a country, as a planet, let's set the minimum requirements to being safe, a place or situation we can always retreat to and not be afraid. Whether it is a place that is your own, or a lack of debt, or a job or business that will give you just enough money to survive and not spiral out of control, a relationship or some other safety net, everyone needs it. But beyond it, let's abandon fear and instead use hope. Hope that you can do more, you can be better, you can live more or have fun, that other people will act good rather than badly, that strangers will help rather than harm you, that the unknown will reveal beauty rather than terror.

I will choose to define good decisions as coming from hope. Will that hope be proven to be unfounded? Maybe. But a decision based on fear will never ever be good enough. And if all else fails, I have my safe zone to get back to. And I know, I very much know that having a place to get back to from failure is a luxury, that not many people have it as good as I do, but to have it and still live in fear, that's just stupid.

A friend of mine recommended this as one of his favorite books, so of course I went into it with very high expectations and of course I was disappointed. That doesn't mean it's a bad book, just that I expected more than I've got.

In Song of Kali, Dan Simmons describes Calcutta as a place of evil, in a culture of filth and senseless violence and death. The protagonist goes there with his Indian wife and their infant child when he is called to retrieve a new manuscript from a supposedly dead Indian poet. A lot of culture shock, a lot of weird mystical events and some weird and horrible people that do horrible things is what the book is about.

In 1985 this was perhaps a fantastic story, I don't know, but now it feels a little bit cliché: an American man goes somewhere he sees as completely alien and where he feels out of place, usually with his family, so that the empathy and horror can be heightened, and where abnormal things he has no control over happen. It is also part of a category of stories that I personally dislike: the "something that can't be explained or controlled" category, which implies absolutely no character growth other than realizing there are situations like that in which one can find oneself. And indeed the book is all like that: stories that make little sense, but somehow are linked to the perceptions and experiences of the protagonist; mysterious characters that do things that mean little unless the story takes them exactly to a certain point, at which you are left wondering how they knew to do that thing; and a lot of extraneous details that are there only to reinforce the feeling of disgust and dread that the character feels, but do little to further the story.

In the end, it is just some weird-ass plot that makes no sense, a bunch of characters that you can't empathize with (some of them you can't even understand) and a big fat "It is so because I feel it is so", which is so American and has little to do with me. Others agree that the book is most effective when describing the humid, fetid heat of the city and the inhumanity of its inhabitants, and less so with the so-called "horror" in the text or the connection the reader feels with the characters. It brings to mind Lovecraft and his strong feelings about things that are now banal and CGI in every movie. Some are even more vehement in their dislike of the book. Here is another review in the same vein.

So how come so many people speak highly of the novel? Well, my guess is that it affects the reader if they are in the right frame of mind. My friend told me about the part that he liked in the book and, frankly, that part is NOT in the book, so whatever literary hallucination he had when reading the book I had none of it. My rating of it cannot be but average, even considering it's a debut novel that won the 1986 World Fantasy Award.

So I heard that there is this fan made cut of the series, two hours long, that encompasses the entire story of Breaking Bad. I got a hold of it and watched it. Pretty good. Just some lazy editing in some places, but overall good quality. Therefore, if you want to see what happened in the series overall, without bingeing on 62 hours of TV show, you might want to check it out.

My problem with the film is that it validated my decision to stop watching the series. It focused primarily on Walter's decision points, which were mostly related to his problems with his family (mostly his bitch of a wife, who I believe is one of the most irritating characters of all time), friends and coworkers. The only part that I really enjoyed about the series was the first season, where there was actual chemistry involved. Just like other shows that start off with a brilliant specialist who is rather annoying otherwise but gets away with it because he is a flawless craftsman, it begins great, then devolves into stories about his personal life. Why anyone would want to follow for years the personal issues of someone they only became interested in because of their work is beyond me, but this is what happens. Dr. House, Numb3rs, Elementary, Weeds, even lawyer, doctor and cop shows slowly force their heroes to stop doing their work and instead deal with all kinds of problems in their off-duty life; they all lose me at season two, usually. Because of this focus on personal life, the chemistry part got removed from the movie, which makes it all the more boring.

Anyway, my duty of informing the Internet about this film is complete. It is amazing how people spend their time doing something like this without even getting recognition for it (because then lawyers would bust their chops about using copyrighted content), yet do such a lovely job. I would love to have this sort of edit for every show on the planet. Then I would be able to keep up with all of them! :D Also interesting is that there is an IMDb page for the movie found by Google, but when you navigate to it you get a big 404 page, meaning someone probably created it and then it promptly got deleted. Even if illegal, it is still a movie, assholes! Here is the Google cached version, for as long as it still works.

And BTW, if you still want to write a review of the movie, as deleted as it is, you can do so by following this link. Maybe that will force the guys from IMDb to undelete the page.

In Fevre Dream, George R. R. Martin writes about a fat bearded guy with a large appetite and a passion for food who loves being a boat captain. Write what you know, they say. Anyway, this book about vampires in the bayou feels really dated. It has been described as "Bram Stoker meets Mark Twain", so you can imagine how much: written in 1982, it feels like the work of a Lovecraft contemporary.

I love Lovecraft, but it gets worse. None of the characters in the book, except maybe the main protagonist, are likable. They come off as either high and mighty or ridiculously servile. And I understand that in a story where vampires have a master that can be all-controlling this is to be expected, but at the same time the hero of the story, without being "compelled", still acts like a servant, enthralled (pardon my pun) by the aristocratic majesty of his vampire friend. One has to get through pages of tedious description of architecture and food and home improvement to get to the succulent part (OK, couldn't help that one), which then feels cloyed and unsatisfactory. So many interesting characters get just a few scenes, while most of the book is about how much the captain loves his food and his ship. And while it discusses some social issues, like slavery and how easily people died or disappeared at the time, it also promotes this idea of personal nobility that justifies other people getting used. This focus on aristocracy is something one sees in A Song of Ice and Fire as well, but less pronounced.

I could have given it an average to good rating if not for the abysmal ending. While at the beginning I had applauded the way the author built tension and apparently provided a solution only to snatch it away at the last moment, the ending destroys all of it by pretty much invalidating much of the foil of the characters and a major part of the story. The time displacement also accentuates this feeling, as I thought "I waited so long for this?!", and by that I mean both me as a reader and the main character in the book.

Bottom line: uninteresting vampires in a slow paced story that probably appeals to Martin fans only. It manages to insert the reader in the eighteen hundreds and the river boat mentality, but there is nothing much else to learn or enjoy in the book beyond that.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Previously on Learning ASP.Net MVC...


I started with the idea of a project that would use user-configurable queries to do Google searches, store the information in the results, then perform various data analyses on it and display them based on what the user wants. However, I first implemented Google authentication and then went on to write some theoretical posts. Lastly, I upgraded the project from .NET Core 1.0 to version 1.1.

Well, it took me a while to get here because I was at a crossroads. I like the idea of dependency-injectable services to do data access. At the same time there is the entire Entity Framework tutorial path that kind of wants to strongly integrate EF with my projects. I mean, if I have a service that gives me the list of all items in the database and then I want to get only a few items, it would be bad design to filter the entire list. As such, I would have to write a different method that allows me to get the items based on some kind of filters. On the other hand, Entity Framework code looks just like that "give me all you have, filtered by this", which is then translated into an efficient query to the database. One possibility would be to have my service return IQueryable<T>, so I could also use the system to generate the database code on the fly.

The Design


I've decided on the service architecture, as opposed to the EF-style IQueryable way, because I want to be able to replace that service with anything, including something that doesn't work with a database or something that doesn't know how to dynamically create queries. Also, the idea that the service methods will describe exactly what I want appeals to me more than avoiding a bit of duplicated code.
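
To make the trade-off concrete, here is a sketch of the two options for a data service (the names are only illustrative at this point):
// option 1 (chosen): explicit methods that describe exactly what is needed
public interface IQueryDataService
{
    IEnumerable<Query> GetUnprocessed(DateTime now);
}

// option 2 (rejected): expose an IQueryable and let callers compose queries
public interface IQueryableDataService
{
    IQueryable<Query> Queries { get; }
}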

Another thing to define now is the method through which I will implement the dependency injection. Being the control freak that I am, I would go with installing my own library, something like SimpleInjector, and configure it myself and use it explicitly. However, ASP.Net Core has dependency injection included out of the box, so I will use that.

As defined, the project needs queries to pass on to Google and a storage service for the results. It needs data services to manage these entities, as well as a service to abstract Google itself. The data gathering operation itself cannot be a simple REST call, since it might take a while; it must be a background task. The data analysis as well. So we need a sort of job manager.

Following good structured design, the data objects will be stored in a separate project, as will the interfaces for the services we will be using.

Some code, please!


Well, start with the code of the project so far (on GitHub) and let's get coding.

Before finding a solution to actually run background code in the context of ASP.Net, let's write it inside a class. I am going to add a folder called Jobs and a class in it called QueryProcessor, with a method ProcessQueries. The code will be self-explanatory, I hope.
public void ProcessQueries()
{
    var now = _timeService.Now;
    var queries = _queryDataService.GetUnprocessed(now);
    var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
        .SelectMany(q => _contentService.Query(q.Text));
    _contentDataService.Update(contentItems);
}

So we get the time - from a service, of course - and request the unprocessed queries for that time, then we extract the content items for each query and update them in the database. The idea here is that the first time a query is defined, or whenever the interval from the last time the query was processed has elapsed, the query will be sent to the content service, from which content items will be received. These items will be stored in the database.

Now, I've kept the code as concise as possible: there is no indication yet of any implementation detail and I've written as little code as I needed to express my intention. Yet what are all these services? What is a time service? What is a content service? Where are they defined? In order to enable dependency injection, we will populate all of these fields from the constructor of the query processor. Here is how the class looks in its entirety:
using ContentAggregator.Interfaces;
using System.Linq;

namespace ContentAggregator.Jobs
{
    public class QueryProcessor
    {
        private readonly IContentDataService _contentDataService;
        private readonly IContentService _contentService;
        private readonly IQueryDataService _queryDataService;
        private readonly ITimeService _timeService;

        public QueryProcessor(ITimeService timeService, IQueryDataService queryDataService, IContentDataService contentDataService, IContentService contentService)
        {
            _timeService = timeService;
            _queryDataService = queryDataService;
            _contentDataService = contentDataService;
            _contentService = contentService;
        }

        public void ProcessQueries()
        {
            var now = _timeService.Now;
            var queries = _queryDataService.GetUnprocessed(now);
            var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
                .SelectMany(q => _contentService.Query(q.Text));
            _contentDataService.Update(contentItems);
        }
    }
}

Note that the services are only defined as interfaces, which we declare in a separate project called ContentAggregator.Interfaces, referenced above in the usings block.

Let's ignore the job processor mechanism for a moment and just run ProcessQueries in a test method in the main controller. For this I will have to make dependency injection work and implement the interfaces. For brevity I will do so in the main project, although it would probably be a good idea to do it in a separate ContentAggregator.Implementations project. But let's not get ahead of ourselves. First make the code work, then arrange it all nicely, in the refactoring phase.

Implementing the services


I will create mock services first, in order to test the code as it is, so the following implementations do as little as possible while still following the interface signatures.
using System;
using System.Collections.Generic;
using System.Text;
using ContentAggregator.Interfaces;
// plus the namespace of the ContentItem and Query entities

public class ContentDataService : IContentDataService
{
    private static readonly StringBuilder _sb;

    static ContentDataService()
    {
        _sb = new StringBuilder();
    }

    public void Update(IEnumerable<ContentItem> contentItems)
    {
        foreach (var contentItem in contentItems)
        {
            _sb.AppendLine($"{contentItem.FinalUrl}:{contentItem.Title}");
        }
    }

    public static string Output
    {
        get { return _sb.ToString(); }
    }
}

public class ContentService : IContentService
{
    private readonly ITimeService _timeService;

    public ContentService(ITimeService timeService)
    {
        _timeService = timeService;
    }

    public IEnumerable<ContentItem> Query(string text)
    {
        yield return
            new ContentItem
            {
                OriginalUrl = "http://original.url",
                FinalUrl = "https://final.url",
                Title = "Mock Title",
                Description = "Mock Description",
                CreationTime = _timeService.Now,
                Time = new DateTime(2017, 03, 26),
                ContentType = "text/html",
                Error = null,
                Content = "Mock Content"
            };
    }
}

public class QueryDataService : IQueryDataService
{
    public IEnumerable<Query> GetUnprocessed(DateTime now)
    {
        yield return new Query
        {
            Text = "Some query"
        };
    }
}

public class TimeService : ITimeService
{
    public DateTime Now
    {
        get
        {
            return DateTime.UtcNow;
        }
    }
}

Now all I have to do is declare the binding between interface and implementation. The magic happens in ConfigureServices, in Startup.cs:
services.AddTransient<ITimeService, TimeService>();
services.AddTransient<IContentDataService, ContentDataService>();
services.AddTransient<IContentService, ContentService>();
services.AddTransient<IQueryDataService, QueryDataService>();

They are all transient, meaning that for each request of an implementation the system will just create a new instance. Another popular method is AddSingleton.
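
For completeness, these are the three built-in lifetimes (AddScoped being the third), using the time service as an example:
// a new instance every time one is requested
services.AddTransient<ITimeService, TimeService>();
// one instance per HTTP request
services.AddScoped<ITimeService, TimeService>();
// a single instance for the entire application
services.AddSingleton<ITimeService, TimeService>();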

Using dependency injection


So, now I have to instantiate my query processor and run ProcessQueries.

One way is to register QueryProcessor as a service. I extract an interface, add a new binding, and then take the interface as a parameter in my controller constructor:
[Authorize]
public class HomeController : Controller
{
    private readonly IQueryProcessor _queryProcessor;

    public HomeController(IQueryProcessor queryProcessor)
    {
        _queryProcessor = queryProcessor;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        _queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
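The extracted interface and its binding would look something like this (a minimal sketch):
public interface IQueryProcessor
{
    void ProcessQueries();
}

// QueryProcessor implements IQueryProcessor; then, in ConfigureServices:
services.AddTransient<IQueryProcessor, QueryProcessor>();
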
In fact, I don't even have to declare an interface. I can just use services.AddTransient<QueryProcessor>(); in ConfigureServices and it works as a parameter to the controller.

But what if I want to use it directly, resolving it manually, without injecting it in the controller? One can inject an IServiceProvider instead. Here is an example:
[Authorize]
public class HomeController : Controller
{
    private readonly IServiceProvider _serviceProvider;

    public HomeController(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        var queryProcessor = _serviceProvider.GetService<QueryProcessor>();
        queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
Yet you still need to use services.Add... in ConfigureServices and inject the service provider in the constructor of the controller. Note that the generic GetService<T> method is an extension method that requires a using for the Microsoft.Extensions.DependencyInjection namespace.

There is a way of doing it completely separately like this:
var serviceProvider = new ServiceCollection()
    .AddTransient<ITimeService, TimeService>()
    .AddTransient<IContentDataService, ContentDataService>()
    .AddTransient<IContentService, ContentService>()
    .AddTransient<IQueryDataService, QueryDataService>()
    .AddTransient<QueryProcessor>()
    .BuildServiceProvider();
var queryProcessor = serviceProvider.GetService<QueryProcessor>();

This would be the way to encapsulate the ASP.Net dependency injection in another object, maybe in a console application, but it would clearly be pointless in our application.

The complete source code after these modifications can be found here. Test the functionality by going to /test on your local server after you start the app.

I've seen several very positive reviews of The Call, by Peadar Ó Guilín, so I started reading it. A few hours later I had finished it. It was good: well written, with compelling characters, a fresh idea and a combination of young adult fiction and body horror mixed with Irish mythology that hooked me immediately. I was sorry it had ended and simultaneously hoped for and cursed the idea of "trilogizing" it.

So the book follows this girl who can't use her legs because of polio. She is a happy child until her parents explain the realities to her: Ireland is separated from the world by an impassable barrier, and the Aes Sidhe, the Irish fairies, kidnap every adolescent exactly once, hunting and hurting them in horrific ways, as revenge for the Irish banishing them to a hellish world. When "the call" comes, the child disappears, leaving behind anything that is not part of their body, and returns in 184 seconds. However, they experience an entire day in the colorless, ugly and cruel world of the Sidhe, where they have to fight for their lives. In response, the Irish nation organizes in order to survive, with mandatory childbearing and training centers where teens are prepared for the call in the hope they will survive.

One might think this is something akin to young adult novels like The Maze, but this is much better. The main character has to overcome her disability as well as the condescending pity or disgust of others. She must manage her crush on a boy in school as well as the rules, both societal and self-imposed, about expressing emotion in a world where any friend you have may just disappear in front of you and return a monster, or dead. Her friends are equally well defined, without the book being overly descriptive. The fairies have the ability to change the human body with a mere touch, so even the few kids who survive return mentally and bodily deformed. The gray world itself is filled with horrors, with an ecosystem of carnivorous plants and animals that are actually made of altered humans, from hunting dogs and mounts to worms and spiders, which somehow still maintain some sort of sentience so they can feel pain. I found the Aes Sidhe incredibly compelling: they are beautiful people, full of joy and merriment even as they maim and torture and kill, and even when they are themselves in pain or dying - a race of psychotic, vengeful people who know nothing but hate.

So I really liked the book and recommend it highly.

People who know me often snicker whenever someone utters "refactoring" nearby. I am a strong proponent of refactoring code and I have feelings almost as strong for the managers who disagree. Usually they have really stupid reasons for it, too. However, speaking with a colleague the other day, I realized that refactoring can be bad as well. So here I will explore this idea.

Why refactor at all?


Refactoring is the process of rewriting code so that it is more readable and maintainable. It does not mean writing code to be readable and maintainable from the beginning, nor does doing it mean you accept that your code was not good when you first wrote it. Usually the scope of refactoring is larger than localized bits of code and takes into account several areas of your software. It also has the purpose of aligning your codebase with the inevitable scope creep of any project. But more than this, its primary use is to ease the work of the people who will work on the code later on, be it yourself or some colleague.

I was talking with this friend of mine and he explained to me how, especially in the game industry, managers are reluctant to spend resources on cleaning old code before actually starting work on new code, since release dates come fast and technologies change rapidly. I replied that, to me, refactoring is not something to be done before you write code, but after, as a phase of the development process. In fact, there was even a picture showing it on a wheel: planning, implementing, testing and bug fixing, refactoring. I searched for it, but I found so many different ideas that I decided it would be pointless to show it here. However, most of these images and presentation files specified maintenance as the final step of a software project. For most projects, use and maintenance is the longest phase in the cycle. It makes sense to invest in making it easier for your team.



So how could any of this be bad?


Well, there are types of projects that are fire-and-forget; they disappear after a while, their codebase abandoned. Their maintenance phase is tiny or nonexistent, and therefore refactoring the code has limited value. But still, that is not a case where refactoring is wrong, just one where it is less useful. I believe there are situations where refactoring can have an adverse effect, and that is exactly the scenario my friend mentioned: before starting to code. Let me expand on that.

Refactoring is a process of rewriting code, which implies you not only have a codebase you want to rewrite, but also that you know how to do it. Except for very limited cases, where some project is bought by another company with much more experienced developers and you just need to clean up garbage, there is no need to touch code that you are just beginning to understand. To refactor after you've finished a planned development phase (a Scrum sprint, for example, or a completed feature) is easy, since you understand how the code was written, what the requirements have become, maybe you are lucky enough to have unit tests on the working code, and so on. It's the "now I have it working, let's clean it up a little" phase. Alternatively, doing it when you want to add things is bad, because you barely remember what and who did anything. Moreover, you probably want to add features, so changing the old code to accommodate adding some other code makes little sense. Management will surely not only not approve, but even consider it a hostile request from a stupid techie who only cares about the beauty of code and doesn't understand the commercial realities of the project. So suggest something like this and you risk souring the entire team on the prospect of refactoring code.

Another refactoring antipattern is when someone decides the architecture needs to be more flexible, so flexible that it could do anything, and therefore rearchitects the whole thing, using software patterns and high-level concepts, but ignoring the actual functionality of the existing code and the level of seniority in the team. In fact, I wouldn't even call this refactoring, since it doesn't address problems with code structure, but rewrites it completely. It's not making sure your building is sturdy and all water pipes are new, it's demolishing everything, building something else, then bringing the same furniture in. Indeed, even as I like beautiful code, making changes to it solely to make it prettier or to make you feel smarter is dead wrong. What will probably happen is that people will get confused about the grand scheme of things and, without expensive supervision in terms of time and other resources, they will start to cut corners and erode the architecture in order to write simpler code.

There is a system where software is released in "versions". People just write crappy code and pile features one on top of the other, in the knowledge that if the project is successful, the next version will be well written. However, that rarely happens. Rewriting money-making code is perceived as a loss by the financial managers. Trust me on this: the shitty code you write today will haunt you for the rest of the project's lifetime and even in its afterlife, when other projects are started from cannibalized codebases. That said, I am not a proponent of writing code right from the beginning either, mostly because no one actually knows what it should really do until they finish writing it.

Refactoring is often associated with Test Driven Development, probably because they are both difficult to sell to management. It would be a mistake to think that refactoring is useful only in that context. Sure, it is a best practice to have unit tests on the piece of code you need to refactor, but let's face it, reality is hard enough as it is.

Last, but not least, there is the partial or incomplete refactoring. It starts, and sometime around the middle of the effort new feature requests arrive. The refactoring is "paused", but now part of your code is written one way and the rest another. The perception is that refactoring was not only useless, but even detrimental. The same happens when you decide to do it and then allow yourself to avoid it or postpone it, or do it badly enough that it doesn't help at all. Doing it just for the sake of saying you do it is plain bad.

The right time and the right people


I personally believe that refactoring should be done at the end of each development interval, when you are still familiar with the feature and its implementation. Done like this, it doesn't even need special approval, it's just the way things are done, it's the shop culture. It is not what you do after code review - simple code cleaning suggested by people who took five minutes to look it over - it is a team effort to discuss which elements are difficult to maintain or easy to simplify or reuse or encapsulate. It is not a job for juniors, either. You don't grab the youngest guy in the team and let him rearrange the code of more experienced people, even if that seems to teach the guy a lot. Also, this is not something that senior devs are allowed to do in their spare time. They might like it, but it is your responsibility to care about the project, not something you expect your team to do when you are too lazy or too cheap. Finally, refactoring is not an excuse to write bad code in the hope you will fix it later.

By the way I am talking about this, you probably believe I've worked in many teams where refactoring was second nature and no one doubted its utility. You would be wrong. Because it is poorly understood, the reaction of non-technical people in a software team to the concept of refactoring usually falls in the interval between condescension and terror. Money people don't understand why you would change something that works, managers can't sell it as a good thing, production and art people don't care. Even worse, most technical people would rather write new stuff than rearrange old stuff, and some might even take offense at attempts to make "their code" better. But they will start to mutter and complain a lot when they get to the maintenance phase or when they have to write features over old code, maybe even their own, and they will have difficulty understanding why the code is not written in a way that would make their work easy. And when managers go to their dashboards and compare team productivity, they will raise eyebrows at a chart that shows clear signs of slowing down.

Refactoring has a nasty side effect: it threatens jobs. If the code were clean and any change easy to perform, there would be a lot of pressure on the decision makers to justify their jobs. They would have to come up with relevant new ideas all the time. If the effort to maintain code or add new features is small, there is pressure on developers to justify their jobs as well. Why keep a large team for a project that can easily be handled by a few junior devs who occasionally add something? Refactoring is the bane of the type of worker that does their job confusingly enough that only they can continue to do it, or pretends to be managing a difficult project while being the one who makes it so. So in certain situations, for example in single-product companies, refactoring will make people fear they will be made redundant. Yet in others it will accelerate the speed of development for new projects, improve morale and win a shitload of money.

So my parting thoughts are these: sell it right and do it right! Most likely it will have a positive effect on the entire project and team. People will be happier and more productive, which means their bosses will be happier and filthy richer. Do it badly or sell it wrong and you will alienate people and curse shitty code for as long as you work there.

Inspired by the writings of classics like Asimov, Heinlein and Clarke, Arkwright is a short book that spans several centuries of space exploration and colonization, so after a very positive review on Io9, I decided to read it. My conclusion: a reedited collection of poorly written short stories, it is optimistic and nostalgic enough to be read without effort, but it doesn't really teach anything. Like many of the works that inspired it, it feels anachronistic, yet it was published in 2016, which makes me wonder why anyone reviewed it so positively. Perhaps if reviews did not word things so bombastically ("sweeping epic", "hard science fiction", etc.), I would better enjoy the books that are clearly nothing of the sort.

Long story short, it starts with a group of 1939 science fiction writers, one of whom eventually has huge success. On his deathbed, he leaves his entire fortune to a foundation whose purpose is to invest in and support space colonization, in particular of other star systems. Somehow, this seed money manages to successfully fund the construction of a beam-sail starship which ends up putting people on another star's planet. Most of the book is the story of the family descendants who "live the dream" by monitoring the long journey of the automated ship.

First of all, I didn't enjoy the writing style. Episodic and descriptive, it felt more appropriate for a history book or a diary than a science fiction novel. Second, the biases of the writer are made more than evident when he belittles antiscience protesters and religious colonists that believe in the starship as their god. It's not that I don't agree with him, but it was written so condescendingly that it bothered me. Same with the "I told you so" part with the asteroid on a collision course with Earth. Same when the Arkwright descendants are pretty much strongarmed into getting into the family business. And third, while focusing on the Arkwright clan, the book completely ignores the rest of the world. While explaining how they designed and constructed and monitored a starship for generations, the author ignores any scientific breakthroughs that happened during that time. It is as if the only people who cared about science and space expansion were the Arkwrights. It makes the book feel very provincial. I would have preferred to see them in a global context, rather than read about their family issues.

I liked the sentiment, though. The idea that if you put your mind to something, you can do it. Of course, ignoring economic, technical and probabilistic realities does help when you write the book, but still. The story is centered on an old science fiction writer who takes humanity to another star, clearly something the author would have liked to be autobiographical. It felt like one of those stories grandpas tell their children, all moral and wise, yet totally boring. It's not that they don't mean well or that the moral isn't good, but the way they tell it makes it unappetizing to small children. If I had to use one word to describe this book, it would be: unappetizing.

Funny thing is that I've read a similar centuries-spanning book about the evolution of mankind that I liked a lot more and that was much better written. I would suggest you skip Arkwright and instead try Accelerando, by Charles Stross.

It is about time to revisit my series on ASP.Net MVC Core. From the time of my last blog post the .Net Core version has changed to 1.1, so just installing the SDK and running the project was not going to work. This post explains how to upgrade a .Net project to the latest version.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Short version


Pressing the batch Update button for NuGet packages corrupted project.json (details in the long version below). Here are the steps to successfully migrate a .Net Core project to a higher version.

  1. Download and install the .NET Core 1.1 SDK
  2. Change the version of the SDK in global.json - you can find out the SDK version by creating a new .Net Core project and checking what it uses
  3. Change "netcoreapp1.0" to "netcoreapp1.1" in project.json
  4. Change Microsoft.NETCore.App version from "1.0.0" to "1.1.0" in project.json
  5. Add
    "runtimes": {
    "win10-x64": { }
    },
    to project.json
  6. Go to "Manage NuGet packages for the solution", to the Update tab, and update packages one by one. Do not press the batch Update button for selected packages
  7. Some packages will restore, but remain in the list. Skip them for now
  8. Whenever you see a "downgrade" warning when restoring, go to those packages and restore them next
  9. For packages that tell you to upgrade NuGet, ignore them; it's an error that probably happens because you restore a package while the previous package restore has not completed
  10. For the remaining packages that just won't update, write down their names, uninstall them and reinstall them

Code after changes can be found on GitHub
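
For reference, here is roughly how the relevant parts of project.json look after steps 3 to 5 (a sketch; the rest of the file is omitted and your dependency list will differ):
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    }
  },
  "frameworks": {
    "netcoreapp1.1": {}
  },
  "runtimes": {
    "win10-x64": { }
  }
}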

That should do it. For detailed steps of what I actually did to get to this concise list, read on.

Long version


Step 0 - I don't care, just load the damn project!


Downloaded the source code from GitHub, loaded the .sln with Visual Studio 2015. Got a nice blocking alert, because this was a .NET Core virgin computer.
Of course, I could have tried to install that version, but I wanted to upgrade to the latest Core.

Step 1 - read the Microsoft documentation


And here I went to Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM. I followed the instructions there, made Visual Studio 2015 load my project and automatically restore packages:
  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file for netcoreapp1.0 or Microsoft.NetCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. to be continued...

I got to the second step, but still got the alert...

Step 2 - fumble around


... so I commented out the sdk property in global.json. I got another alert.


This answer recommended uninstalling old versions of SDKs, in my case "Microsoft .NET Core 1.0.1 - SDK 1.0.0 Preview 2-003131 (x64)". Don't worry, it didn't work. More below:

TL;DR version: do not uninstall the Visual Studio .NET Core Tooling


And then... I got the same No executable found matching command "dotnet-projectmodel-server" error again.

I created a new .NET Core project, just to see the version of the SDK it uses: 1.0.0-preview2-003131. I added it to global.json and reopened the project. It restored packages and didn't throw any errors! Dude, it even compiled and ran! But now I got a System.ArgumentException: The 'ClientId' option must be provided. It probably had something to do with the Secret Manager. Follow the steps in the link to store your secrets in the app. It then worked.

Step 1.1 (see what I did there?) - continue to read the Microsoft documentation


I removed the third step of the Microsoft instructions from the list above because it caused me some problems. So don't do it, yet. It was:
  1. Update your ASP.NET Core package dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1


Since I had not upgraded the packages, as in the Microsoft third step, I decided to do it. 26 updates waited for me, so I optimistically selected them all and clicked Update. Of course, errors! One popped up as more interesting: Package 'Microsoft.Extensions.SecretManager.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Another was even more worrisome: Unexpected end of content while loading JObject. Path 'dependencies', line 68, position 0 in project.json. Somehow the update operation for the packages had corrupted project.json! From a 3050-byte file, it was now 1617 bytes.

Step 3 - repair what the Microsoft instructions broke


Suspecting a problem with the NuGet package manager, I went to the link in the first error. But NuGet is included in Visual Studio 2015 and it was clearly the latest version. So the only solution was to go through each package and see which one caused the problem. I went through all 26 packages and pressed Install on each, and it worked. Apparently, the batch Update button was causing the issue. Weirdly enough, two packages were installed but remained in the Update tab and also appeared in the Consolidate tab: BundleMinifier.Core and Microsoft.EntityFrameworkCore.Tools, although I can't do anything with them there.

Another package (Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0) caused another confusing error: Package 'Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Yet restarting Visual Studio made the CodeGeneration.Tools error disappear.

So I tried to build the project, only to be met with yet another project.json corruption error: Can not find runtime target for framework '.NETCoreApp,Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'. Possible causes: [blah blah] The project does not list one of 'win10-x64, win81-x64, win7-x64' in the 'runtimes' [blah blah]. I found the fix here, which was to add:
"runtimes": {
"win10-x64": { }
},
to project.json.

It compiled. It worked.

As you probably know, whenever I blog something, an automated process sends a post to Facebook and one to Twitter. As a result, some people comment on the blog, some on Facebook or Twitter, but more often someone "likes" my blog post. Don't get me wrong, I appreciate the sentiment, but it is quite meaningless. Why did you like it? Was it well written, well researched, did you find it useful and if so in what way? I would wager that most of the time the feeling is not really that clear cut, either. Maybe you liked most of the article, but then you absolutely hated a paragraph. What should you do then? Like it a bunch of times and hate it once?

This idea that people should express emotion related to someone else's content is not only really, really stupid, it is damaging. Why? I am glad you asked - clearly you already understand the gist of my article and have decided to express your desire for knowledge over some inevitable sense of awe and gratitude. Because if it is natural for people to express their emotions related to your work, then that means you have to accept some responsibility for what they get to feel, and then you fall into the political correctness, safe zone, "don't do anything for someone might get hurt" pile of shit. Instead, accept the fact that sharing knowledge or even expressing an opinion is nothing more than a data signal that people may or may not use. Don't even get me started on that "why didn't you like my post? Is something wrong with it? Are you angry with me?" insecurity bullshit that may be cute coming from a 12-year-old, but is really creepy with 50-year-old people.

Back to my amazing blog posts, I am really glad you like them. You make my day. I am glowing and I am filled with a sense of happiness that is almost impossible to describe. And then I start to think, and it all goes away. Why did you like it, I wonder? Is it because you feel obligated to like stuff when your friends post? Is it some kind of mercy like? Or did you really enjoy part of the post? Which one was it? Maybe I should reread it and see if I missed something. Mystery like! Nay, more! It is a riddle, wrapped in a mystery, inside an enigma; but perhaps there is a key. That key is personal interest in providing me with useful feedback, the only way you can actually help me improve content.

Let me reiterate this as clearly as I possibly can: the worst thing you can do is try to spare my feelings. First of all, it is hubris to believe you have any influence on them at all. Second, you are not skilled enough to understand in what direction your actions would influence them anyway. And third, feeling is the stuff that fixates memories, but you have to have some memory to fixate first! Don't sell a lifetime of knowing something for a few seconds of feeling gratified by some little smiley or bloody heart.

And then there is another reason, maybe one that is more important than everything I have written here. When you make the effort of summarizing what you have read in order to express an opinion you retrieve and generate knowledge in your own head, meaning you will remember it better and it will be more useful to you.

So fuck your wonderful emotions! Give me your thoughts and knowledge instead.

Ever wanted to write a quick and dirty JavaScript function that would get content from the web and do something with it, but couldn't because of the pesky cross-origin security limitations? Good Samaritans have created CORS proxies to help with that!

One of them is crossorigin.me, a completely free (and open source) proxy which can be used very easily. Instead of making an AJAX request to http://someDomainYouDontOwn/somePage, you make it to https://crossorigin.me/http://someDomainYouDontOwn/somePage. And it works for any GET request, as long as the Origin header is sent (browsers set it automatically for Ajax calls, but not for regular browser requests, which is why https://crossorigin.me/https://google.com will show "Origin: header is required" if you open it in a browser).
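
For illustration, a minimal sketch (the target URL is a placeholder; I use fetch here, but any Ajax method works):
fetch('https://crossorigin.me/http://example.com/somePage')
    .then(response => response.text())
    .then(html => console.log(html))
    .catch(err => console.error('proxy request failed', err));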

But there are other options, too. CORS Anywhere, CORS proxy and even using YQL are all valid, and that after just five minutes of googling around.

Of course, one might not want to depend on flimsy external free services for a production app, but it sounds perfect for the quick and dirty bastards like me.

I want to let you know about the latest features implemented in Bookmark Explorer.



The version number for the extension is already 2.9.3, quickly approaching the new rewrite I am planning for 3.0.0, yet every time I think I don't have anything else I could add, I find new ideas. It would be great if the users of the extension would give me more feedback about the features they use, don't use or want to have.

Here are some examples of new features:
  • Skip button - moves the current page to the end of the bookmark folder and navigates to the next link. Useful for those long articles that you don't have the energy to read, but still want to.
  • Custom URL comparison scheme. Useful for those sites where pages with different parameters or hash values are considered different and you get duplicate notification warnings for no good reason.
  • Duplicate remover in the Manage page. This is an older feature, but now the button for it only appears when there are duplicates in the folder, and with the custom URL scheme it's much more useful.
  • Option to move selected bookmarks to the start or end of a folder, something that is cumbersome to do in the Chrome Bookmark Manager.
  • Automatic cleaning of bookmark URLs of marketing parameters. This is in the Advanced settings section and must be enabled manually. So far it removes utm_*, wkey, wemail, _hsenc, _hsmi and hsCtaTracking, but I plan to remove much more, like those horrible hashes from Medium, for example. Please let me know of particular URL patterns you want cleaned in your bookmarks and whether perhaps you want the cleaning to be done automatically for all open URLs.

As always, if you want to install the extension go to its Google Chrome extension page: Siderite's Bookmark Explorer

I have switched to a new project at work and it surprised me with the use of a programming language called Haxe. I have just begun, so I will not be able to explain to you all its intricacies, but I am probably going to write some more blog posts about it as I tread along.

What is interesting about Haxe is that it was not designed as just a language, but as a cross-platform toolkit, meaning that when you compile the code you've created, it generates code for other languages and platforms, be it C++, C#, Java, JavaScript, Flash, PHP, Lua, Python, etc., on Windows, iOS, Linux, Android and so on. It's already at version 3, so you have probably heard of it; it was just me that was ignorant. Anyway, let's explore a little bit of what Haxe can do.

Installing


The starting guide from their web site tells us to follow some steps, but the gist of it is this:
  1. Download and install an IDE - we'll use FlashDevelop for this intro, for no other reason than this is what I use at work (and it's free)
  2. Once it starts, it will start AppMan, which lets you choose what to install
  3. Select Haxe+Neko
  4. Select Standalone debug Flash Player
  5. Select OpenFL Installer Script
  6. Click Install 3 Items



Read the starting guide for more details.

Writing Code


In FlashDevelop, go to Project → New Project and select OpenFL Project. Let's call it - how else? - HaxeHelloWorld. Note that right under the menu, in the toolbar, you have two dropdowns, one for Debug/Release and another for the target. Let's choose Debug and neko and run it. It should show you an application with a black background, which is the result of running the generated .exe file (on Windows): HaxeHelloWorld\bin\windows\neko\debug\bin\HaxeHelloWorld.exe.

Let's write something. The code should look like this; the part you add is the block inside the constructor, after the call to super():
package;

import openfl.display.Sprite;
import openfl.Lib;

/**
 * ...
 * @author Siderite
 */
class Main extends Sprite
{
    public function new()
    {
        super();

        var stage = flash.Lib.current.stage;
        var text = new flash.text.TextField();
        text.textColor = 0xFFFFFF;
        text.text = "Hello world!";
        stage.addChild(text);
    }
}

Run it and it should show a "Hello world!" message, white on black. Now let's play with the target. Switch it to Flash, html5, neko, windows and run it.



They all show more or less the same white text on a black background. Let's see what it generates:
  • In HaxeHelloWorld\bin\flash\debug\bin\ there is now a file called HaxeHelloWorld.swf.
  • In HaxeHelloWorld\bin\html5\debug\bin\ there is now a web site containing index.html, HaxeHelloWorld.js, HaxeHelloWorld.js.map, favicon.png, lib\howler.min.js and lib\pako.min.js. It's a huge thing for a hello world and it is clearly machine-generated code. What is interesting, though, is that it uses a canvas to draw the string
  • In HaxeHelloWorld\bin\windows\neko\debug\bin\ there are several files, HaxeHelloWorld.exe and lime.ndll being the relevant ones. In fact, lime.ndll is not relevant at all, since you can delete it and the program still works, but if you remove Neko from your system, it will crash with an error saying neko.dll is missing, so it's not a real Windows executable.
  • Now it gets interesting: in D:\_Projects\HaxeHelloWorld\bin\windows\cpp\debug\bin\ you have another HaxeHelloWorld.exe file, but this time it works directly. And if you check D:\_Projects\HaxeHelloWorld\bin\windows\cpp\debug\obj\ you will see generated C++: .cpp and .h files

How about C#? Unfortunately, it seems that the only page explaining how to do this is on the "old.haxe.org" domain, here: Targeting the C# Platform. It didn't work with this code; instead, I made it work with the simpler hello world code in the article. Needless to say, the C# code is just as readable as the JavaScript above, but it worked!

What I think of it


As I keep working with the language, I will be posting stuff I learn. For example, it is obvious FlashDevelop borrowed a lot from Visual Studio, and Haxe a lot from C#; however, that familiarity might confuse you when Haxe does weird stuff like not having break instructions in switch blocks, or not having the protected or internal access modifiers, yet letting inheriting classes access private members of their base class.

What now?


Well, at the very least, you can try this free-to-play and open source programming toolkit to build applications that are truly cross-platform. Not everything will be easy, but Haxe seems to have built a solid code base, with documentation that is well done and a large user base. It is not the new C# (that's D#, obviously), but it might be interesting to get familiar with it.