In this post I am going to discuss an interview question that pops up from time to time. The solution that is usually presented as best is the same regardless of the inputs, and I believe this to be a mistake. Let me explore this with you.

The problem



The problem is simple: given two sorted arrays of very large size, find the most efficient way to compute their intersection (the list of common items in both).

The solution that is given as correct is described here (you will have to excuse its Javiness), for example. The person who provided the answer made a great effort to list various solutions along with their O complexity, and the answer inspires confidence, as coming from someone who knows what they are talking about. But how correct is it? Another blog post describing the problem, and hinting at some extra information that might influence the result, is here.

Implementation


Let's start with some code:

var rnd = new Random();
var n = 10000000; // 1e+7, see the note below
int[] arr1, arr2;
generateArrays(rnd, n, out arr1, out arr2);
var sw = new Stopwatch();
sw.Start();
var count = intersect(arr1, arr2).Count();
sw.Stop();
Console.WriteLine($"{count} intersections in {sw.ElapsedMilliseconds}ms");

Here I am creating two arrays of size n, using a generateArrays method, then I am counting the number of intersections and displaying the time elapsed. In the intersect method I will also count the number of comparisons, so that we avoid for now the complexities of Big O notation (pardon the pun).

As for the generateArrays method, I will use a simple incremented value to make sure the values are sorted, but also randomly generated:

private static void generateArrays(Random rnd, int n, out int[] arr1, out int[] arr2)
{
    arr1 = new int[n];
    arr2 = new int[n];
    int s1 = 0;
    int s2 = 0;
    for (var i = 0; i < n; i++)
    {
        s1 += rnd.Next(1, 100);
        arr1[i] = s1;
        s2 += rnd.Next(1, 100);
        arr2[i] = s2;
    }
}


Note that n is 1e+7, so that the values fit into an integer. If you try a larger value it will overflow and result in negative values, so the array would not be sorted.

Time to explore ways of intersecting the arrays. Let's start with the recommended implementation:

private static IEnumerable<int> intersect(int[] arr1, int[] arr2)
{
    var p1 = 0;
    var p2 = 0;
    var comparisons = 0;
    while (p1<arr1.Length && p2<arr2.Length)
    {
        var v1 = arr1[p1];
        var v2 = arr2[p2];
        comparisons++;
        switch(v1.CompareTo(v2))
        {
            case -1:
                p1++;
                break;
            case 0:
                p1++;
                p2++;
                yield return v1;
                break;
            case 1:
                p2++;
                break;
        }

    }
    Console.WriteLine($"{comparisons} comparisons");
}


Note that I am not counting the comparisons of the two pointers p1 and p2 against the Length of the arrays (which could be optimized by caching the lengths). They consume just as many resources as comparing the array values, yet we discount them in the name of calculating a fictitious growth rate complexity. I am going to keep doing that in what follows. Optimizing the code itself is not the point of this post.

Running the code I get the following output:

19797934 comparisons
199292 intersections in 832ms


The number of comparisons is directly proportional to the value of n, approximately 2n. That is because we look for all the values in both arrays. If we populate one array with odd numbers and the other with even numbers, for example, so there are no intersections, the number of comparisons will be exactly 2n.
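
For reference, here is how such an odds and evens scenario could be generated (a sketch added for illustration, not one of the measured runs above):

private static void generateOddEvenArrays(int n, out int[] arr1, out int[] arr2)
{
    // fill arr1 with odd numbers and arr2 with even numbers, so the arrays stay
    // sorted, share no values and the intersection is empty
    arr1 = new int[n];
    arr2 = new int[n];
    for (var i = 0; i < n; i++)
    {
        arr1[i] = 2 * i + 1; // 1, 3, 5, ...
        arr2[i] = 2 * i;     // 0, 2, 4, ...
    }
}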

Experiments


Now let me change the intersect method, make it more general:

private static IEnumerable<int> intersect(int[] arr1, int[] arr2)
{
    var p1 = 0;
    var p2 = 0;
    var comparisons = 0;
    while (p1 < arr1.Length && p2 < arr2.Length)
    {
        var v1 = arr1[p1];
        var v2 = arr2[p2];
        comparisons++;
        switch (v1.CompareTo(v2))
        {
            case -1:
                p1 = findIndex(arr1, v2, p1, ref comparisons);
                break;
            case 0:
                p1++;
                p2++;
                yield return v1;
                break;
            case 1:
                p2 = findIndex(arr2, v1, p2, ref comparisons);
                break;
        }

    }
    Console.WriteLine($"{comparisons} comparisons");
}

private static int findIndex(int[] arr, int v, int p, ref int comparisons)
{
    p++;
    while (p < arr.Length)
    {
        comparisons++;
        if (arr[p] >= v) break;
        p++;
    }
    return p;
}

Here I've replaced the increment of the pointers with a findIndex method that keeps incrementing the value of the pointer until the end of the array is reached or a value larger than or equal to the one we are searching for is found. The functionality of the method remains the same, since the same effect would have been achieved by the main loop. But now we are free to try to tweak the findIndex method to obtain better results. But before we do that, I am going to P-hack the shit out of this science and generate the arrays differently.

Here is a method of generating two arrays that are different because all of the elements of the first are smaller than those of the second. At the very end we put a single element that is equal, for the fun of it.

private static void generateArrays(Random rnd, int n, out int[] arr1, out int[] arr2)
{
    arr1 = new int[n];
    arr2 = new int[n];
    for (var i = 0; i < n - 1; i++)
    {
        arr1[i] = i;
        arr2[i] = i + n;
    }
    arr1[n - 1] = n * 3;
    arr2[n - 1] = n * 3;
}


This is the worst case scenario for the algorithm and the number of comparisons is promptly 2n. But what if we used binary search (which the StackOverflow answer dismissed as having O(n*log n) complexity instead of O(n))? Well, then... the output becomes

49 comparisons
1 intersections in 67ms

Here is the code for the findIndex method that would do that:

private static int findIndex(int[] arr, int v, int p, ref int comparisons)
{
    var start = p + 1;
    var end = arr.Length - 1;
    if (start > end) return start;
    while (true)
    {
        var mid = (start + end) / 2;
        var val = arr[mid];
        if (mid == start)
        {
            comparisons++;
            return val < v ? mid + 1 : mid;
        }
        comparisons++;
        switch (val.CompareTo(v))
        {
            case -1:
                start = mid + 1;
                break;
            case 0:
                return mid;
            case 1:
                end = mid - 1;
                break;
        }
    }
}


49 comparisons is smack on the value of 2*log2(n). Yeah, sure, the data we used was doctored, so let's return to the randomly generated one. In that case, the number of comparisons grows horribly:

304091112 comparisons
199712 intersections in 5095ms

which is larger than n*log2(n).

Why does that happen? Because in the randomly generated data the binary search finds its worst case scenario: trying to find the very next value. It divides the problem efficiently, but it still has to go through all the data to reach the first element. Surely we can't use this for a general scenario, even if it is fantastic for one specific case. And here is my qualm with the O notation: without specifying the type of input, the solution is just probabilistically the best. Is it?

Let's compare the results so far. We have three ways of generating data: randomly with increments from 1 to 100, odds and evens, small and large values. Then we have two ways of computing the next index to compare: linear and binary search. The approximate numbers of comparisons are as follows:

                   Random          Odds/Evens      Small/Large
Linear             2n              2n              2n
Binary search      3/2*n*log(n)    2*n*log(n)      2*log(n)

Alternatives


Can we create a hybrid findIndex that would have the best of both worlds? I will certainly try. Here is one possible solution:

private static int findIndex(int[] arr, int v, int p, ref int comparisons)
{
    var inc = 1;
    while (true)
    {
        if (p + inc >= arr.Length) inc = 1;
        if (p + inc >= arr.Length) return arr.Length;
        comparisons++;
        switch(arr[p+inc].CompareTo(v))
        {
            case -1:
                p += inc;
                inc *= 2;
                break;
            case 0:
                return p + inc;
            case 1:
                if (inc == 1) return p + inc;
                inc /= 2;
                break;
        }
    }
}


What am I doing here? If I find the value, I return the index; if the value is smaller, not only do I advance the index, but I also increase the speed of the next advance; if the value is larger, then I slow down until I get to 1 again. Warning: I do not claim that this is the optimal algorithm, this is just something that was annoying me and I had to explore it.

OK. Let's see some results. I will decrease the value of n even more, to a million. Then I will generate the values with random increases of up to 10, 100 and 1000. Let's see all of it in action! This time it is the actual count of comparisons (in millions):

                     Random10   Random100   Random1000   Odds/Evens   Small/Large
Linear               2          2           2            2            2
Binary search        30         30          30           40           0.00004
Accelerated search   3.4        3.9         3.9          4            0.0002


So for the general cases, the increase in comparisons is at most twice, while for specific cases the decrease can be four orders of magnitude!

Conclusions


Because I had all of this in my head, I made a fool of myself at a job interview. I couldn't reason through all of the things I wrote here in a few minutes, so I had to clear my head by composing this long monstrosity.

Is the best solution the one in O(n)? Most of the time. The algorithm is simple, no hidden comparisons, one can understand why it would be universally touted as a good solution. But it's not the best in every case. I have demonstrated here that I can minimize the extra comparisons in standard scenarios and get immense improvements for specific inputs, like arrays that have chunks of elements smaller than the next value in the other array. I would also risk saying that this findIndex version is adaptive to the conditions at hand, with improbable scenarios as worst cases. It works reasonably well for normally distributed arrays, it does wonders for "chunky" arrays (this includes the case when one array is much smaller than the other) and thus is a contender for some kinds of uses.

What I wanted to explore and now express is that finding the upper growth rate of an algorithm is just part of the story. Sometimes the best implementation fails for not adapting to the real input data. I will say this, though, for the default algorithm: it works with IEnumerables, since it never needs to jump forward over elements. This intuitively tells me that an implementation which does take advantage of array/list indexing has room to do better. Here it is, in IEnumerable fashion:

private static IEnumerable<int> intersect(IEnumerable<int> arr1, IEnumerable<int> arr2)
{
    var e1 = arr1.GetEnumerator();
    var e2 = arr2.GetEnumerator();
    var loop = e1.MoveNext() && e2.MoveNext();
    while (loop)
    {
        var v1 = e1.Current;
        var v2 = e2.Current;
        switch (v1.CompareTo(v2))
        {
            case -1:
                loop = e1.MoveNext();
                break;
            case 0:
                loop = e1.MoveNext() && e2.MoveNext();
                yield return v1;
                break;
            case 1:
                loop = e2.MoveNext();
                break;
        }

    }
}

Extra work


The source code for a project that tests my various ideas can be found on GitHub. There you can find the following algorithms:

  • Standard - the O(m+n) one described above
  • Reverse - same, but starting from the end of the arrays
  • Binary Search - looks for values in the other array using binary search. Complexity O(m*log(n))
  • Smart Choice - when m*log(n)<m+n, it uses the binary search, otherwise the standard one
  • Accelerating - the one that speeds up when looking for values
  • Divide et Impera - recursive algorithm that splits arrays by choosing the middle value of one and binary searching it in the other. Due to the complexity of the recursiveness, it can't be taken seriously, but sometimes gives surprising results
  • Middle out - it takes the middle value of one array and binary searches it in the other, then uses Standard and Reverse on the resulting arrays
  • Pair search - I had high hopes for this, as it looks two positions in front instead of one. Really good for some cases, though generally it is a bit more than Standard


The testing tool takes all algorithms and runs them on randomly generated arrays:

  1. Lengths m and n are chosen randomly from 1 to 1e+6
  2. A random number s of up to 100 "spikes" is chosen
  3. m and n are split into s+1 equal parts
  4. For each spike a random integer range is chosen and filled with random integer values
  5. At the end, the rest of the list is filled with any random values

Results


For a really small first array, Binary Search is king. For equal size arrays, the Standard algorithm usually wins. However, there are plenty of cases when Divide et Impera and Pair Search win, usually not by much. Sometimes it happens that Accelerating Search is better than Standard, but then Pair Search wins! I still have the nagging feeling that Pair Search can be improved. I feel it in my gut! However, I have too many other things to do to dwell on this.

Maybe one of you can find the solution! Your mission, should you choose to accept it, is to find a better algorithm for intersecting sorted arrays than the boring standard one.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services


The previous version of Entity Framework was 6 and the current one is Entity Framework Core 1.0, although for a few years they have been going with Entity Framework 7. It might seem that they changed the naming to be consistent with .NET Core, but according to them they did it to avoid confusion. The new version sprouted from the idea of "EF everywhere", just like .Net Core is ".Net everywhere", and is a rewrite - a port, as they chose to call it, with some extra features but also lacking some of the functionality EF6 had - or better to say has, since they continue to support it for .NET proper. In this post I will examine the history and some of the basic concepts related to working with Entity Framework as opposed to a more direct approach (like opening System.Data.SqlConnection and issuing SqlCommands).

Entity Framework history


Entity Framework started as an ORM, a class of software that abstracts database access. The term itself is either a bit obsolete, with the advent of databases that call themselves non relational, or rebelliously exact, recognizing that anything that can be called a database needs to also encode relationships between data. But that's another topic altogether. When Entity Framework was designed it was all about abstracting SQL into an object oriented framework. How would that work? You would define entities, objects that inherited from an EntityBase class, and decorate their properties with attributes defining some restrictions that databases have, but objects don't, like the size of a field. You also had some default methods that could be overridden in order to control very specific custom requirements. In the background, objects would be mapped to tables, their simple properties to columns and their more complex properties to other tables that had a foreign key relationship with the owner object mapped table.

There were some issues with this system that quickly became apparent. With the data layer separation idea going strong, it was really cumbersome and ugly to move around objects that inherited from an entire hierarchy of Entity Framework classes and held state in ways that were almost opaque to the user. Users demanded the use of POCOs, a way to separate the functionality of EF from the data objects that were used through all the tiers of the application. At the time the solution was mostly to use simple objects within your application and then translate them to data access objects which were entities.

Microsoft also recognized this and in further iterations of EF, they went full POCO. But this enabled them to also move from one way of thinking to another. At the beginning the focus was on the database. You had your database structure and your data access layer and you wanted to add EF to your project, meaning you needed to map existing tables to C# objects. But now, you could go the other way around. You started with an application using plain objects and then just slapped EF on and asked it to create and maintain the database. The first way of thinking was coined "database first" and the other "code first".

In seven iterations of the framework, things have been changed and updated quite a lot. You can imagine that successfully adapting to legacy database structures while seamlessly abstracting changes to that structure and completely managing the mapping of objects to the database was not easy. There were ups and downs, but Microsoft stuck with their guns and now they are making the strong argument that all your data manipulation should be done via EF. That's bold and it would be really stupid if Entity Framework weren't a good product they have full confidence in. Which they do. They moved from a framework that was embedded in .NET, to one that was partially embedded with some extra code kept separate, and then, with EF6, they went fully open source. EF Core is also open source and .NET Core is free of EF specific classes.

Also, EF Core is more friendly towards non relational databases, so you either consider ORM an all encompassing term or EF is no longer just an ORM :)

In order to end this chapter, we also need to discuss alternatives.

Ironically, both the ancestor and the main competitor of Entity Framework is LINQ to SQL. If you don't know what LINQ is, you should take the time to look it up, since it has been an integral part of .NET since version 3.5. In Linq2Sql you would manually map objects to tables, then use the mapping in your code. The management of the database and of the mapping was all you. When EF came along, it was like an improvement over this idea, with the major advantage (or flaw, depending on your political stance) that it handled schema mapping and management for you, as much as possible.

Another system that was, and still is, widely used is separating data access based on intent, not on structure. Basically, if you had the need to add/get the names of people from your People table, you would have another project that had some object hierarchy that in the end had methods for AddPeople and GetPeople. You didn't need to delete or update people, you didn't have the API for it. Since the intent was clear, so was the structure and the access to the database, all encapsulated - manually - into this data access layer project. If you wanted to get people by name, for example, you had to add that functionality and code all the intermediary access. This had the advantage (or flaw) that you had someone who was good with databases (and a bit with code) handling the maintenance of the data access layer, basically a database admin with some code writing permissions. Some people love the control over the entire process, while others hate that they need to understand the underlying database in order to access data.

From my perspective, it seems as if there is an argument between people who want more control over what is going on and people who want more ease of development. The rest is more of an architectural discussion, which is irrelevant as far as EF is concerned. However, it seems to me that the Entity Framework team has worked hard to please both sides of that argument, going for simplicity, but allowing very fine control down the line. It also means that this blog post cannot possibly cover everything about Entity Framework.

Getting started with Entity Framework


So, how do things look in EF Core 1.0? Things are still split down the middle in "code first" and "database first", but code first is the recommended way for starting new projects. Database first is something that must be supported in perpetuity just in case you want to migrate to EF from an existing database.

Database first


Imagine you have tables in an SQL server database. You want to switch to EF so you need to somehow map the existing data to entities. There is a tutorial for that: ASP.NET Core Application to Existing Database (Database First), so I will just quickly go over the essentials.

First thing is to use NuGet to install EF in your project:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
and then add
"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
to the project.json tools section. For the Database First approach we also need other stuff like:
Install-Package Microsoft.EntityFrameworkCore.Tools –Pre
Install-Package Microsoft.EntityFrameworkCore.SqlServer.Design
Final touch, running
Scaffold-DbContext "<Sql connection string>" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models

At this time alarm bells are sounding already. Wait! I only gave it my database connection string, how can it automagically turn this into C# code and work?

If we look at the code to create the sample database in the tutorial above, there are two tables: Blog and Post and they are related via primary key/foreign key as is recommended to create an SQL database. Columns are clearly defined as NULL or NOT NULL and the size of text fields is conveniently Max.



The process created some interesting classes. Besides the properties that map to fields, the Blog class has a property of type ICollection<Post> which is instantiated with a HashSet<Post>. The real fun is the BloggingContext class, which inherits from DbContext and in the override for ModelCreating configures the relationships in the database.
  • Enforcing the required status of the blog Url:
    modelBuilder.Entity<Blog>(entity =>
    {
        entity.Property(e => e.Url).IsRequired();
    });
  • Defining the one-to-many relationship between Blog and Post:
    modelBuilder.Entity<Post>(entity =>
    {
        entity.HasOne(d => d.Blog)
            .WithMany(p => p.Post)
            .HasForeignKey(d => d.BlogId);
    });
  • Having the root sets used to access entities:
    public virtual DbSet<Blog> Blog { get; set; }
    public virtual DbSet<Post> Post { get; set; }

First thing to surprise me, honestly, is that the data model classes are as bare as possible. I would have expected some attributes on the properties defining their state as required, for example. EF Core allows you to keep the classes free of data annotations, while also supporting an annotation based system. The collections are interfaces and they are only instantiated with a concrete implementation in the constructor. An interesting choice for the collection type is HashSet. As opposed to a List it does not allow access via indexers, only enumerators. It is designed to optimize search: basically, finding an item in the hashset does not depend on the size of the collection. Set operations like union and intersection can also be performed efficiently with HashSet.

HashSet also does not allow duplicates, and that may cause some confusion. How does one define a duplicate? It uses IEqualityComparer. However, a HashSet can be instantiated with a custom IEqualityComparer implementation. Alternately, the Equals and GetHashCode methods can be overridden in the entities themselves. People are divided over whether one should use such mechanisms to optimize Entity Framework functionality, but keep in mind that normally EF would only keep in memory stuff that it immediately needs. Such optimizations are more likely to cause maintainability problems than save processing time.
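
As a quick illustration of the comparer approach, here is a sketch using the Post entity from the tutorial (treating PostId as the identity of a post is my assumption, for illustration only):

public class PostIdComparer : IEqualityComparer<Post>
{
    public bool Equals(Post x, Post y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.PostId == y.PostId;
    }

    public int GetHashCode(Post obj)
    {
        return obj.PostId.GetHashCode();
    }
}

// usage: two Post instances with the same PostId now count as duplicates
var posts = new HashSet<Post>(new PostIdComparer());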

Database first seems to me just a way to work with Entity Framework after using a migration tool. It sounds great, but there are probably a lot of small issues that one has to gain experience with when dealing with real life databases. I will blog about it if I get to doing something like this.

Code first


The code first tutorial goes the other direction, obviously, but has some interesting differences that tell me that a better model of migrating even existing databases is to start code first, then find a way to migrate the data from the existing database to the new one. This has the advantage that it allows for refactoring the database as well as provide some sort of verification mechanism when comparing the old with the new structure.

The setup is similar: use NuGet to install EF in your project:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
then add
"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
to the project.json tools section.

Then we create the models: a simple DbContext inheritance, containing DbSets of Blog and Post, and the data models themselves: Blog and Post. Here is the code:
public class BloggingContext : DbContext
{
    public BloggingContext(DbContextOptions<BloggingContext> options)
        : base(options)
    { }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}

public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }

    public List<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }

    public int BlogId { get; set; }
    public Blog Blog { get; set; }
}

Surprisingly, the tutorial doesn't go into any other changes to this code. There are no HashSets, there are no restrictions over what is required or not and how the classes are related to each other. A video demo of this also shows the created database and it contains primary keys. A blog has a primary key on BlogId, for example. To me that suggests that convention over configuration is also used in the background. The SomethingId property of a class named Something will automatically be considered the primary key (also simply Id). Also, if you look in the code that EF is executing when creating the database (these are called migrations and are pretty cool, I'll discuss them later in the post) Blogs are connected to Posts via foreign keys, so this thing works wonders if you name your entities right. I also created a small console application to test this and it worked as advertised.
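
For reference, the small console test can be approximated like this (a sketch; the connection string is a placeholder and the example URL is made up):

var optionsBuilder = new DbContextOptionsBuilder<BloggingContext>();
optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=EFCodeFirst;Trusted_Connection=True;");

using (var context = new BloggingContext(optionsBuilder.Options))
{
    context.Database.EnsureCreated(); // creates the database on the first run

    // relies on the conventions above: BlogId becomes the primary key
    context.Blogs.Add(new Blog { Url = "http://example.com" });
    var count = context.SaveChanges();
    Console.WriteLine($"{count} record(s) saved");
}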

Obviously this will not work with every scenario and there will be attributes attached to models and novel ways of configuring mapping, but so far it seems pretty straightforward. If you want to go into the more detailed aspects of controlling your data model, try reading the documentation provided by Microsoft so far.

Entity Framework concepts


We could go right into the code fray, but I chose to write some more boring conceptual stuff first. Working with Entity Framework involves understanding concepts like persistence, caching, migrations, change saving and the underlying mechanisms that turn code into SQL, the Unit of Work and Repository patterns, etc. I'll try to be brief.

Context


As you have seen earlier, classes inheriting from DbContext are the root of all database access. I say classes, because more of them can be used. If you want to copy from one database to another you will need two contexts. The context defines a data model, differentiated from a database schema by being a purely programmatic concept. DbContext implements IDisposable, so for short, focused operations it can be used just as one uses an open SQL connection. In fact, if you are tempted to reuse the same context, remember that its memory use increases with the quantity of data it accesses. It is recommended for performance reasons to immediately dispose a context when finishing operations. Also, a DbContext class is not thread safe. It stands to reason, then, to use a context for as short a period as possible, inside single threaded operations.
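
In practice that means treating the context much like a raw connection. A minimal sketch (assuming the BloggingContext from the code first section and a DbContextOptions instance called options):

using (var context = new BloggingContext(options))
{
    // the context caches every entity it loads; everything is released on Dispose
    var urls = context.Blogs
        .Where(b => b.Url.StartsWith("https"))
        .Select(b => b.Url)
        .ToList();
}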

DbContext provides two hooks called OnConfiguring and OnModelCreating that users can override to configure the context and the model, respectively. Careful, though, one can configure the context to use a specific implementation of IModel as model, in which case OnModelCreating will not be called. The other most important functionality of DbContext is SaveChanges, which we will discuss later. Worth mentioning are Database and Model, properties that can be used to access database and model information and metadata. The rest are Add, Update, Remove, Attach, Find, etc. plus their async and range versions allowing for the first time - EF6 did not - to dynamically send an object to a function like Add for example and let EF determine where to add it. It's nothing that sounds very safe to use, but probably there were some scenarios where it was necessary.

DbSet


For each entity type to be accessed, the context should have DbSet<TEntity> properties for that type. DbSet allows for the manipulation of entities via methods like Add, Update, Remove, Find, Attach and is an IEnumerable and IQueriable of TEntity. However, in order to persist any change, SaveChanges needs to be called on the context class.

SaveChanges


The SaveChanges method is the most important functionality of the context class, which otherwise caches the accessed objects and their state, waiting either for this method to be called or for the context to be disposed. Important improvements in the EF Core code now allow sending these changes to the database in batches of commands. Before, in EF6 and earlier, each change was sent separately, so, for example, adding two entities to a set and saving changes would make two database trips. From EF Core onward, that only takes one trip unless specifically configured with MaxBatchSize(number). Revert to the EF6 behavior using MaxBatchSize(1). This applies to SqlServer only so far.
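
The batch size is configured on the SQL Server provider options. Something along these lines should do it (a sketch, assuming the EF Core 1.0 SqlServer provider API; connectionString is a placeholder):

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    // MaxBatchSize(1) reverts to the EF6 behavior of one database trip per command
    optionsBuilder.UseSqlServer(connectionString, options => options.MaxBatchSize(1));
}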

This caching behavior is the reason why contexts need to be released as soon as their work is done. If you query all the items with a name starting with 'A', all of these items will be loaded in the context memory. If you then need to get the ones starting with 'B', the performance and memory will be affected if using the same context. It might be helpful, though, if then you need to query both items starting with 'A' and the ones starting with 'B'. It's your choice.

One particularity of working with Entity Framework is that in order to update or delete records, you first need to query them. Something like
context.Posts.RemoveRange(context.Posts.Where(p => p.Title.StartsWith("x")));
There is no .RemoveRange(predicate) because it would be impossible to resolve a query afterwards. Well, not impossible, only it would have to somehow remember the predicate, alter subsequent selects to somehow gather all information required and apply deletion on the client side. Too complicated. There is a way to access the database by writing SQL directly and again EF Core has some improvements for this, but raw SQL changes are opaque to an already existing context.
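
For completeness, the raw SQL route in EF Core looks roughly like this (a sketch; FromSql maps the results back to entities, while ExecuteSqlCommand bypasses the change tracker entirely, which is why an existing context won't see those changes):

// query mapped to Post entities
var posts = context.Posts
    .FromSql("SELECT * FROM Posts WHERE Title LIKE 'x%'")
    .ToList();

// direct command: entities already cached by the context know nothing about this delete
var affected = context.Database.ExecuteSqlCommand("DELETE FROM Posts WHERE Title LIKE 'x%'");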

Unit of Work and Repository patterns


The Repository pattern is an example of what I was calling before an alternative to Entity Framework: a separation of data access from business logic that improves testability and keeps distinct responsibilities apart. That doesn't mean you can't do it with EF, but sometimes it feels pretty shallow and developers may be tempted to skip this extra encapsulation.

A typical example is getting a list of items with a filter, like blog posts starting with something. So you create a repository class to take over from the Posts DbSet and create a method like GetPostsStartingWith. A naive implementation returns a List of items, but this actually hinders EF in what it tries to do. Let's assume your business logic requires you to return the first ten posts starting with 'A'. The initial code would look like this:
var posts=context.Posts.Where(p=>p.Title.StartsWith("A")).Take(10).ToList();
In this case the SQL code sent to the database is like SELECT TOP 10 * FROM Posts WHERE Title LIKE 'A%'. However, code looking like this:
var repo=new PostsRepository();
var posts=repo.GetPostsStartingWith("A").Take(10).ToList();
will first pull all posts starting with "A" then retrieve the first 10. Ouch! The solution is to return IQueryable instead of IEnumerable or a List, but then things start to feel fishy. Aren't you just shallow proxying the DbSet?
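
Here is what the IQueryable version of such a repository method might look like (a sketch; PostsRepository and its constructor are my own illustration, not code from the tutorial):

public class PostsRepository
{
    private readonly BloggingContext context;

    public PostsRepository(BloggingContext context)
    {
        this.context = context;
    }

    // returning IQueryable lets the caller's Take(10) become part of the generated SQL
    public IQueryable<Post> GetPostsStartingWith(string prefix)
    {
        return context.Posts.Where(p => p.Title.StartsWith(prefix));
    }
}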

Unit of Work is some sort of encapsulation of similar activities using the same data, something akin to a transaction. Let's assume that we store the number of posts in the Counts table. So if we want to add a post we need to do the adding, then change the value of the count. The code might look like this:
var counts=new CountsRepository();
var blogs=new BlogRepository();
var blog=blogs.Where(b=>b.Name=="Siderite's Blog").First();
blog.Posts.Add(post);
counts.IncrementPostCount(blog);
blog.Save();
counts.Save();
Now, since this selects a blog and changes posts then updates the counts, there is no reason to use different contexts for the operation. So one could create a Unit of Work class that would look a bit like a common repository for blogs and counts. Let's ignore the silly example as well as the fact that we are doing posts operations using the BlogRepository, which is something that we are kind of forced to do in this situation unless we start to deconstruct EF operations and recreate them in our code. There is a bigger elephant in the room: there already exists a class that encapsulates access to database, caches the items retrieved and creates one atomic operation for both changes. It's the context itself! If we instantiate the repositories with a constructor that accepts a context, then all one has to do to atomize the operations is to put the code inside a using block.
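
A sketch of what that could look like (the repository classes, their methods and the post variable are assumptions carried over from the silly example above):

using (var context = new BloggingContext(options))
{
    var blogs = new BlogRepository(context);
    var counts = new CountsRepository(context);

    var blog = blogs.GetByName("Siderite's Blog");
    blog.Posts.Add(post);
    counts.IncrementPostCount(blog);

    // one SaveChanges on the shared context persists both changes atomically
    context.SaveChanges();
}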

There are also controversies related to the use of these two patterns with EF. Rob Conery has a nice blog post suggesting Command/Query objects instead. His rationale is that if you have to pass a context object, as above, there is not much decoupling involved.

I lean towards the idea that you need a Data Access Layer encapsulation no matter what. I would put the using block in a method in a class rather than pass the context or not use a repository. Also, since we saw that entity type is not a good separation of "repositories" - I feel that I should name them differently in this situation - and the intent of the methods is already declared in their name (like GetPosts...) then these encapsulation classes should be separated by some other criteria, like ContentRepository and ForumRepository, for example.

Migrations


Migrations are cool! The idea is that when making changes to the structure of the database one can extract those changes in a .cs file that can be added to the project and to source control. This is one of the clear advantages of using Entity Framework.

First of all, there are a zillion tutorials on how to enable migrations, most of them wrong. Let's list the possible ways you could go wrong:
  • Enable-Migrations is obsolete - older tutorials recommended to use the Package Manager Console command Enable-Migrations. This is now obsolete and you should use Add-Migration <Name>
  • Trying to install EntityFramework.Commands - due to namespace changes, the correct namespace would be Microsoft.EntityFrameworkCore.Commands anyway, which doesn't exist. EntityFramework.Commands is version 7, so it shouldn't be used in .NET Core. However, at one point or another, this worked if you added some imports and changed stuff around. I tried all that only to understand the sad truth: you should not install it at all!
  • Having a DbContext inheriting class that doesn't have a default constructor or is not configured for dependency injection - the migration tool looks for such classes then creates instances of them. Unless it knows how to create these instances, the Add-Migration will fail (see the sketch right below this list).
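
Here is a minimal sketch of a context that the migration tool can instantiate on its own (note the difference from the earlier BloggingContext: no constructor parameters; the connection string is a placeholder):

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    // a parameterless constructor plus OnConfiguring is enough for Add-Migration
    // to create the context without any dependency injection setup
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer("<Sql connection string>");
    }
}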

The correct way to enable migrations is... to install the packages from the Database First section! Yes, that is right, if you want migrations you need to install
Install-Package Microsoft.EntityFrameworkCore.Tools –Pre
Install-Package Microsoft.EntityFrameworkCore.SqlServer.Design
Only then may you open the Package Manager Console and run
Add-Migration FirstMigration
Note that I am discussing an SQL Server example. It is possible you will need other packages if using a different type of database.

The result is a folder called Migrations in which you will find two files: a snapshot and the migration itself. Here is an example of the snapshot:
[DbContext(typeof(BloggingContext))]
partial class BloggingContextModelSnapshot : ModelSnapshot
{
    protected override void BuildModel(ModelBuilder modelBuilder)
    {
        modelBuilder
            .HasAnnotation("ProductVersion", "1.0.0-rtm-21431")
            .HasAnnotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn);

        modelBuilder.Entity("EFCodeFirst.Blog", b =>
        {
            b.Property<int>("BlogId")
                .ValueGeneratedOnAdd();

            b.Property<string>("Url");

            b.HasKey("BlogId");

            b.ToTable("Blogs");
        });

        modelBuilder.Entity("EFCodeFirst.Post", b =>
        {
            b.Property<int>("PostId")
                .ValueGeneratedOnAdd();

            b.Property<int>("BlogId");

            b.Property<string>("Content");

            b.Property<string>("Title");

            b.HasKey("PostId");

            b.HasIndex("BlogId");

            b.ToTable("Posts");
        });

        modelBuilder.Entity("EFCodeFirst.Post", b =>
        {
            b.HasOne("EFCodeFirst.Blog", "Blog")
                .WithMany("Posts")
                .HasForeignKey("BlogId")
                .OnDelete(DeleteBehavior.Cascade);
        });
    }
}

And here is one of the migration:
public partial class First : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable(
            name: "Blogs",
            columns: table => new
            {
                BlogId = table.Column<int>(nullable: false)
                    .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
                Url = table.Column<string>(nullable: true)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_Blogs", x => x.BlogId);
            });

        migrationBuilder.CreateTable(
            name: "Posts",
            columns: table => new
            {
                PostId = table.Column<int>(nullable: false)
                    .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
                BlogId = table.Column<int>(nullable: false),
                Content = table.Column<string>(nullable: true),
                Title = table.Column<string>(nullable: true)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_Posts", x => x.PostId);
                table.ForeignKey(
                    name: "FK_Posts_Blogs_BlogId",
                    column: x => x.BlogId,
                    principalTable: "Blogs",
                    principalColumn: "BlogId",
                    onDelete: ReferentialAction.Cascade);
            });

        migrationBuilder.CreateIndex(
            name: "IX_Posts_BlogId",
            table: "Posts",
            column: "BlogId");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable(
            name: "Posts");

        migrationBuilder.DropTable(
            name: "Blogs");
    }
}

Note that this is not something that copies the changes in data, only the ones in the database schema.

Conclusions


Yes, no project code in this post. I wanted to explore Entity Framework in my project, but if I had continued like that the post would have become too long. As you have seen, there are advantages and disadvantages in using Entity Framework, but at this point I find it more valuable to use it and meet any problems I find head on. Besides, the specifications of my project don't call for complex database operations, so the data access mechanism is quite irrelevant.

Stay tuned for the next post in which we actually use EF in ContentAggregator!

About 25 years ago I was getting Compton's Multimedia Encyclopedia CD-ROM as a gift from my father. Back then I had no Internet so I delved into what now seems impossibly boring, looking up facts, weird pictures, reading about this and that.

At one time I remember I found a timeline based feature that showed on a scrolling bar the main events of history. I am not much into history, I can tell you that, but for some reason I became fascinated with how events in American history in particular were lining up. So I extracted only those and, at the end, I presented my findings to my grandmother: America was an expanding empire, conquering, bullying, destabilizing, buying territory. I was really adamant that I had stumbled onto something, since the United States were supposed to be moral and good. Funny how a childhood of watching contraband US movies can make you believe that. My grandmother was not impressed and I, with the typical attention span of a child, abandoned any historical projects in the future.

Fast forward to now, when, looking for Oliver Stone to see what movies he has done lately, I stumble upon a TV Series documentary called The Untold History of the United States. You can find it in video format, but also as a companion book or audio book. While listening to the audio book I realized that Stone was talking about my childhood discovery, also disillusioned after a youth of believing the American propaganda, then going through the Vietnam war and realizing that history doesn't tell the same story as what is being circulated in classes and media now.

However, this is no childish project. The book takes us through US history, skirting the good stuff and focusing on the bad. Yet it is not done in malice, as far as I could see, but in the spirit that this part of history is "untold", hidden from the average eye, and has to be revealed to all. Stone is a bit extremist in his views, but this is not a conspiracy theory book. It is filled with historical facts, arranged in order, backed by quotes from the people of the era. Most of all, it doesn't provide answers, but rather questions that the reader is invited to answer himself. Critics call it biased, but Stone himself admits that it is so with intent. Other materials and tons of propaganda - the history of which is also presented in the book - more than cover the positive aspect of things. This is supposed to be a balancing force in a story that is almost always told from only one side.

The introductory chapter alone was terrifying, not only because of the forgotten atrocities committed by the US in the name of the almighty dollar and God, but also because of the similarities with the present. Almost exactly a century after the American occupation of the Philippines, we find the same situation in the Middle-East. Romanians happy with the US military base at Deveselu should perhaps check what happened to other countries that welcomed US bases on their territory. People swallowing immigration horror stories by the ton should perhaps find out more about a little film called Birth of a Nation, revolutionary in its technical creation and controversial - now - for telling the story of the heroic Ku-Klux-Klan riding to save white folk - especially poor defenseless women - from the savage negroes.

By no means am I calling this a true, complete, objective history, but the facts that it describes are chilling in their evil banality and unfortunately all true. The thesis of the film is that America is losing its republican founding fathers roots by behaving like an empire, good and moral only in tightly controlled and highly financed media and school curricula. It's hard not to see the similarities between US history a century ago and today, including the presidential candidates and their speeches. The only thing that has changed is the complete military and economic supremacy of the United States and the switch from territorial colonialism to economic colonialism. I am not usually interested in history, but this is a book worth reading.

I leave you with Oliver Stone's interview (the original video was removed by YouTube for some reason):

I am writing this post to rant against subscription popups. I've been on the Internet long enough to remember when this was a thing: a window would open up and ask you to enter your email address. We went from that time, through all the technical, stylistic and cultural changes to the Internet, to this Web 3.0 thing, and the email subscription popups have emerged again. They are not ads, they are simply asking you to allow them into your already cluttered inbox because - even before you've had a chance to read anything - what they have to say is so fucking important. Sometimes they ask you to like them on Facebook or whatever crap like that.

Let me tell you how to get rid of these real quick. Install an ad blocker, like AdBlockPlus or uBlock Origin. I recommend uBlock Origin, since it is faster and I feel works better than the older AdBlock. Now this is something that anyone should do just to get rid of ads. I've personally never browsed the Internet from a tablet or cell phone because they didn't allow ad blockers. I can't go on the web without them.

What you may not know, though, is that there are several lists of filters that you can choose from and that are not enabled by default when you install an ad blocker. One of my favourite lists is Fanboy's Annoyances list. It takes care of popups of all kinds, including subscriptions. But even so, if the default list doesn't contain the web site you are looking at, you have the option to pick elements and block them. A basic knowledge of CSS selectors helps, but here is the gist of it: ###something means the element with the id "something" and ##.something is the elements with the class name "something". Here is an example: <div id="divPopup" class="popup ad annoying"> is a div element that has id "divPopup" and class names "popup", "ad" and "annoying".

One of the reasons why subscription popups are not always blocked is that, besides the elements they cover the page with, they also place some constraints on the page. For example they place a big element over the screen (what is called an overlay), then a popup element in the center of the screen, and also change the style of the entire page so it doesn't scroll down. So if you removed the overlay and the popup, the page would only show you the upper part and not allow you to scroll down. This can be solved with another browser extension called Stylish, which allows you to save and apply your own style to pages you visit. The CSS rule that solves this very common scenario is html,body { overflow: auto !important; }. That is all. Just add a new style for the page and copy paste this. 19 in 20 chances you will get the scroll back.

To conclude, whenever you see such a stupid, stupid thing appearing on the screen, consider blocking subscription popups rather than pressing on the closing button. Block it once and never see it again. Push the close button and chances are you will have to keep pressing it each time you visit a page.

Now, if I only had a similar option for jump scares in movies...

P.S. Yes, cookie consent popups are included in my rant. Did you know that you can block all cookie nagware from Blogspot within one fell swoop, rather than having to click OK at each blog individually, for example?

Learning ASP.Net MVC series:

  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services


In the setup part of the series I've created a set of specifications for the ASP.Net MVC app that I am building and I manufactured a blank project to start me up. There was quite a bit of confusion on how I would continue the series. Do I go towards the client side of things, defining the overall HTML structure and how I intend to style it in the future? Do I go towards the functionality of the application, like google search or extracting text and applying word analysis on it? What about the database where all the information is stored?

In the end I opted for authentication, mainly because I have no idea how it's done and also because it naturally leads into the database part of things. I was saying that I don't intend to have users of the application, they can easily connect with their google account - which hopefully I will also use for searching (I hate that API!). However, that's not quite how it goes: there will be an account for the user, only it will be connected to an outside service. While I completely skirt the part where I have to reset the password or email the user and all that crap - which, BTW, was working rather well in the default project - I still have to set up entities that identify the current user.

How was it done before?


In order to proceed, let's see how the original project did it. It was first setting a database context, then adding Identity using a class named ApplicationUser.

services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();


ApplicationUser is a class that inherits from IdentityUser, while ApplicationDbContext is something inheriting from IdentityDbContext<ApplicationUser>. Seems like we are out of luck and the identity and db context are coupled pretty strongly. Let's see if we can decouple them :) Our goal: using OAuth to connect with a Google account, while using no database.

Authentication via Google+ API


The starting point of any feature is coding with autocomplete and Intellisense until it works... I mean, reading the documentation. In our case, the ASP.Net Authentication section, particularly the authentication using Google part. It's pretty skimpy and it only covers Facebook. Found another link that actually covers Google, but it's for MVC 5.

Enable SSL


Both tutorials agree that first I need to enable SSL on my web project. This is done by going to the project properties, the Debug section, and checking Enable SSL. It's a good idea to copy the https URL and set it as the start URL of the project. Keep that URL in the clipboard, you are going to need it later, as well.



Install Secret Manager


Next step is installing the Secret Manager tool, which in our case is already installed, and specifying a userSecretsId, which should also be already configured.

Create Google OAuth credentials


Next let's create credentials for the Google API. Go to the Google Developer Dashboard, create a project, go to Credentials → OAuth consent screen and fill out the name of the application. Go to the Credentials tab and Create Credentials → OAuth client ID. Select Web Application, fill in a name as well as the two URLs below. We will use the localhost SSL URL for both like this:

  • Authorised JavaScript origins: https://localhost:[port] - the URL that you copied previously
  • Authorised redirect URIs: https://localhost:[port]/account/callback - TODO: create a callback action

Press Create. At this point a popup with the client ID and client secret appears. You can copy the two values from the popup, or close it and either download the json file containing all the data (project id and authorised URLs among them) or copy the values directly from the credentials dialog.



Make sure to go back to the Dashboard section and enable the Google+ API, in the Social APIs group. There is a quota of 10000 requests per day, I hope it's enough. ;)



Writing the authentication code


Let's use the 'dotnet user-secrets' tool to save the two credential values. Run the following two commands in the project folder:

dotnet user-secrets set Authentication:Google:ClientId <client-Id>
dotnet user-secrets set Authentication:Google:ClientSecret <client-Secret>

Use the values from the Google credentials, obviously. In order to get to the two values all we need to do is call Configuration["Authentication:Google:ClientId"] in C#. In order for this to work we need to have loaded the package Microsoft.Extensions.Configuration.UserSecrets in project.json and, somewhere in Startup, code that looks like this: builder.AddUserSecrets();, where builder is the ConfigurationBuilder.
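
Put together, the configuration part of Startup would look something like this (a sketch following the default ASP.NET Core 1.0 project template; only the AddUserSecrets call is the part that matters here):

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        // reads the values stored with 'dotnet user-secrets set'
        builder.AddUserSecrets();
    }

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }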

Next comes the installation of the middleware responsible for Google authentication, which is called Microsoft.AspNetCore.Authentication.Google. We can install it using NuGet: right click on References in Visual Studio, go to Manage NuGet packages, look for Microsoft.AspNetCore.Authentication.Google ("ASP.NET Core contains middleware to support Google's OpenId and OAuth 2.0 authentication workflows.") and install it.



Now we need to place this in Startup.cs:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    LoginPath = new PathString("/Account/Login")
});

app.UseGoogleAuthentication(new GoogleOptions
{
    AuthenticationScheme = "Google",
    SignInScheme = "Cookies",
    ClientId = Configuration["Authentication:Google:ClientId"],
    ClientSecret = Configuration["Authentication:Google:ClientSecret"],
    CallbackPath = new PathString("/Account/Callback")
});

Yay! code!

Let's start the website. A useful popup appears with the message "This project is configured to use SSL. To avoid SSL warnings in the browser you can choose to trust the self-signed certificate that IIS Express has generated. Would you like to trust the IIS Express certificate?". Say Yes and click OK on the next dialog.



What did we do here? First, we used cookie authentication, which is not some gluttonous bodyguard with a sweet tooth, but a cookie middleware, of course, and our ticket for authentication without using identity. Then we used another middleware, the Google authentication one, linked to the previous with the "Cookies" SignInScheme. We used the ClientId and ClientSecret we saved previously in the Secret Manager. Note that we specified an AuthenticationScheme name for the Google authentication.

Yet, the project works just fine. I need to do one more thing for the application to ask me for a login and that is to decorate our one action method with the [Authorize] attribute:

[Authorize]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}

After we do that and restart the project, the start page will still look blank and empty, but if we look in the network activity we will see a redirect to a nonexistent /Account/Login, as configured:


The Account controller


Let's create this Account controller and see how we can finish the example. The controller will need a Login method. Let me first show you the code, then we can discuss it:

public class AccountController : Controller
{
    public IActionResult Login(string ReturnUrl)
    {
        return new ChallengeResult("Google", new AuthenticationProperties
        {
            RedirectUri = ReturnUrl ?? "/"
        });
    }
}


We simply return a ChallengeResult with the name of the authentication scheme we want and the redirect path that we get from the login ReturnUrl parameter. Now, when we restart the project, a Google prompt welcomes us:

After clicking Allow, we are returned to the home page.



What happened here? The home page redirected us to Login, which redirected us to the google authentication page, which then redirected us to /Account/Callback, which redirected us - now authenticated - to the home page. But what about the callback? We didn't write any callback method. (Actually I first did, complete with a complex object to receive all the parameters. The code within was never executed). The callback route was actually defined and handled by the Google middleware. In fact, if we call /Account/Callback, we get an authentication error:


One extra functionality that we might need is the logout. Let's add a Logout method:

public async Task<IActionResult> LogOut()
{
    await HttpContext.Authentication.SignOutAsync("Cookies");

    return RedirectToAction("index", "home");
}

Now, when we go to /Account/Logout we are redirected to the home page, where the whole authentication flow from above is being executed. We are not asked again if we want to give permission to the application to use our google credentials, though. In order to reset that part, go to Apps connected to your account.

What happens when we deny access to the application? Then the callback action will be called with a different set of parameters, triggering a RemoteFailure event. The source code on GitHub contains extra code that covers this scenario, redirecting the user to /Home/Error with the failure reason:

Events = new OAuthEvents
{
    OnRemoteFailure = ctx =>
    {
        ctx.Response.Redirect("/Home/Error?ErrorMessage=" + UrlEncoder.Default.Encode(ctx.Failure.Message));
        ctx.HandleResponse();
        return Task.FromResult(0);
    }
}

What about our user?


In order to check the results of our work, let's add some stuff to the home page. Mainly I want to show all the information we got about our user. Change the index.cshtml file to look like this:

<table class="table">
    @foreach (var claim in User.Claims)
    {
        <tr>
            <td>@claim.Type</td>
            <td>@claim.Value</td>
        </tr>
    }
</table>

Now, when I open the home page, this is what gets returned:

http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier 111601945496839159547
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname Siderite
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name Siderite Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress sideritezaqwedcxs@gmail.com
urn:google:profile https://plus.google.com/111601945496839159547


User is a System.Security.Claims.ClaimsPrincipal object that contains not only a simple bag of Claims, but also a list of Identities. In our example I only have one identity and User.Claims are the same as User.Identities[0].Claims, but in other cases, who knows?
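As a small illustration, here is how one might read specific claims from the principal inside a controller action (a minimal sketch; the ViewData usage is just an example, not part of the project):

// requires: using System.Linq; using System.Security.Claims;
public IActionResult Index()
{
    // Identity.Name maps to the ".../name" claim shown above
    var name = User.Identity.Name;
    // ClaimTypes.Email is the ".../emailaddress" claim type
    var email = User.Claims.FirstOrDefault(c => c.Type == ClaimTypes.Email)?.Value;
    ViewData["Email"] = email;
    return View();
}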

Acknowledgements


If you think it was easy to scrape together this simple example, think again. Before the OAuth2 system there was an OpenID based system that used almost the same method and class names. Then there is the way they did it in .NET proper and the way they do it in ASP.Net Core... which changed recently as well. Everyone and their grandmother has a blog about how to do Google authentication, but most of them either don't apply or are obsolete. So, without further ado, let me give you the links that inspired me to do it this way:

Final thoughts


By no means is this a comprehensive walkthrough for authentication in .NET Core; however, I am sure that I will cover a lot more ground in the posts to come. Stay tuned for more!

Source code for the project after this chapter can be found on GitHub.


The Brain that Changes Itself is a remarkable book for several reasons. M.D. Norman Doidge presents several cases of extraordinary events that constitute proof for the book's thesis: that the brain is plastic, easy to remold, to adapt to the data you feed it. What is astonishing is that, while these cases are not new and are by far not the only ones out there, the medical community is clinging to the old belief that the brain is made of clearly localized parts that have specific roles. Doidge is trying to change that.

The ramifications of brain plasticity are wide spread: the way we learn or unlearn things, how we fall in love, how we adapt to new things and we keep our minds active and young, the way we would educate our children, the minimal requirement for a computer brain interface and so much more. The book is structured in 11 chapters and some addendums that seem to be extra material that the author didn't know how to properly format. A huge part is acknowledgements and references, so the book is not that large.

These are the chapters, in order:

  • Chapter 1 - A Woman Perpetually Falling. Describes a woman who lost her sense of balance. She feels she is falling at all times and barely manages to walk using her sight. Put her in front of a weird patterned rug and she falls down. When sensors fed information to an electrode plate on her tongue she was able to have balance again. The wonder comes from the fact that, for a time after removing the device, she would retain her sense of balance. The hypothesis is that the receptors in her inner ear were not destroyed, but damaged, leaving some in working order and some sending incorrect information to the brain. Once a method was found to separate the good receptors from the bad ones, the brain immediately adapted itself to use only the good ones. The doctor who spearheaded her recovery learned the hard way that the brain is plastic, when his father was almost paralyzed by a stroke. He pushed his father to crawl on the ground and try to move the hand that wouldn't move, the leg that wouldn't hold him, the tongue that wouldn't speak. In the end, his father recovered. Later, after he died from another stroke while hiking on a mountain, the doctor had a chance to see the extent of damage done by the first stroke: 97% of the nerves that run from the cerebral cortex to the spine were destroyed.
  • Chapter 2 - Building Herself a Better Brain. Barbara was born in the '50s with a brain "asymmetry". While living a relatively normal life, she had some mental disabilities that branded her as "retarded". It took two decades to stumble upon studies that showed that the brain was plastic and could adapt. She trained her weakest traits, the ones that doctors were sure would remain inadequate because the part of the brain "associated" with them was missing, and found out that her mind adapted to compensate. She and her husband opened a school for children with disabilities, but her astonishing results came after she was over 20 years old, following years of doctors telling her there was nothing to be done.
  • Chapter 3 - Redesigning the Brain. Michael Merzenich designs a program to train the brain against cognitive impairments or brain injuries. Just tens of hours help improve - and teach people how to keep improving on their own - from things like strokes, learning disabilities, even conditions like autism and schizophrenia. His work is based on scientific experiments that, when presented to the wider community, were ridiculed and actively attacked for the only reason that they went against the accepted dogma.
  • Chapter 4 - Acquiring Tastes and Loves. A very interesting chapter about how our experiences shape our sense of normalcy, the things we like or dislike, the people we fall for and the things we like to do with them. The chapter also talks about Freud, in a light that truly explains how ahead of his time he was, about pornography and its effects on the brain, about how our pleasure system affects both learning and unlearning, and it has a very interesting theory about oxytocin, seeing it not as a "commitment neuromodulator", but as a "demodulator", a way to replastify the part of the brain responsible for attachments, allowing us to let go of them and create new ones. It all culminates with the story of Bob Flanagan, a "supermasochist" who did horrible things to his body on stage because he had associated pain with pleasure.
  • Chapter 5 - Midnight Resurrection. A surgeon has a stroke that affects half of his body. Through brain training and physiotherapy, he manages to recover - and not gain magical powers. The rest of the chapter talks about experiments on monkeys that show how the feedback from sensors rewires the brain and how what is not used gets weaker and what is used gets stronger, finer and bigger in the brain.
  • Chapter 6 - Brain Lock Unlocked. This chapter discusses obsessions and bad habits and defective associations in the brain and how they can be broken.
  • Chapter 7 - Pain: The Dark Side of Plasticity. A plastic brain is also the reason why we strongly remember painful moments. A specific case is phantom limbs, where people continue to feel sensations - often the most traumatic ones - after limbs have been removed. The chapter discusses causes and solutions.
  • Chapter 8 - Imagination: How Thinking Makes It So. The brain maps for skills that we imagine we perform change almost as much as when we are actually doing them. This applies to mental activities, but also physical ones. Visualising doing sports prepared people for the moment when they actually did it. The chapter also discusses how easily the brain adapts to using external tools. Brain activity recorders were wired to various tools and monkeys quickly learned to use them without the need for direct electric feedback.
  • Chapter 9 - Turning Our Ghosts into Ancestors. Discussing the actual brain mechanisms behind psychotherapy, in the light of what the book teaches about brain plasticity, makes it more efficient as well as easier to use and understand. The case of Mr. L., Freud's patient, who couldn't keep a stable relationship as he was always looking for another and couldn't remember his childhood and adolescence, sheds light on how brain associates trauma with day to day life and how simply separating the two brain maps fixes problems.
  • Chapter 10 - Rejuvenation. A chapter talking about the neural stem cells and how they can be activated. Yes, they exist and they can be employed without surgical procedures.
  • Chapter 11 - More than the Sum of Her Parts. A girl born without her left hemisphere learns that her disabilities are just untrained parts of her brain. After decades of doctors telling her there is nothing to be done because the parts of her brain that were needed for this and that were not present, she learns that her brain can actually adapt and improve, with the right training. An even more extreme case than what we saw in Chapter 2.


There is much more in the book. I am afraid I am not doing it justice with these meager descriptions. It is not a self-help book and it is not popularising science; it discusses actual cases and the experiments done to back them up, and puts forward theories about the amazing plasticity of the brain. Some things I took from it are that we can train our brain to do almost anything, but the training has to follow some rules. Also, what we do not use gets discarded in time, while what is used gets reinforced, albeit with diminishing efficiency. That is a great argument for doing new things and training at things we are bad at, rather than cementing a single-scenario brain. The book made me hungry for new senses, which, in light of what I have read, are trivial to hook up to one's consciousness.

If you are not into reading, there is a one-hour video on YouTube that covers about the same subjects:

[youtube:sK51nv8mo-o]

Enjoy!

Following my post about things I need to learn, I've decided to start a series about writing an ASP.Net MVC Core application, covering as much ground as possible. As a result, this experience will cover .NET Core subjects and a thorough exploration of ASP.Net MVC, plus some concepts related to Visual Studio, project structure, Entity Framework, HTML5, ECMAScript 6, Angular 2, ReactJs, CSS (LESS/SASS), responsive design, OAuth, OData, shadow DOM, etc.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Specifications


In order to start any project, some specifications need to be addressed. What will the app do and how will it be implemented? I've decided on a simplified rewrite of my WPF newsletter maker project. It gathers subjects from Google by searching for configurable queries, spiders the contents, then displays, filters and sorts them, extracting text and analyzing content. It remembers the already loaded URLs and allows for marking them as deleted and setting a category. It will be possible to extract the items that have a category into a newsletter containing links, titles, short descriptions and maybe a picture.

The project will be using ASP.Net Core MVC, combining the API and the display in a single web site (at least for now). Data will be stored in SQLite via Entity Framework. Later on the project will be switched to SQL Server to see how easy that is. The web site itself will have an HTML5 structure, using the latest semantic elements, with the simplest possible CSS. All project-owned Javascript will be ECMAScript6. OAuth might be needed for using the Google Search API, and I intend to use Google/Facebook/Twitter accounts to log into the application, with a specific account marked in the configuration as Administrator. The IDE will be Visual Studio (not Code). The Javascript needs to be clean, with no CSS or HTML in it, by using CSS classes and HTML templates. The HTML needs to be clean, with no Javascript or styling in it; moreover it needs to be semantically unambiguous, so as to be easily molded with CSS. While starting with a desktop-only design, a later phase of the project will revamp the CSS, try to make the interface beautiful and work for all screen formats.

Not the simplest of projects, so let's get started.

Creating the project


I will be using Visual Studio 2015 Update 3 with the .Net Core Preview2 tooling. Personally I had a problem installing the Core tools for Visual Studio, but this link solved it for me with a command line switch (short version: DotNetCore.1.0.0-VS2015Tools.Preview2.exe SKIP_VSU_CHECK=1). The first step is to create a New Project → Visual C# → .NET Core → ASP.NET Core Web Application. I will name it ContentAggregator. At the prompt asking which project template I want, I will select Web Application, deselect the Microsoft Azure Host in Cloud checkbox, which for whatever reason is checked by default, and click on Change Authentication to select Individual User Accounts.



Close the "Welcome to ASP.Net Core" page, because the template will be heavily changed by the time we finish this chapter.

The default template project


For a more detailed analysis of a .NET Core web project, try reading my previous post of the dotnet default template for web apps. This one will be quick and dirty.

Things to notice:
  • There is a global.json file that lists two projects, src and test. So this is how .NET Core solutions were supposed to work. Since the json format will be abandoned by Microsoft, there is no point in exploring this too much. Interestingly, though, while there is a "src" folder, there is no "test" folder.
  • The root folder contains a ContentAggregator.sln file and the src folder contains a ContentAggregator folder with a ContentAggregator.xproj file. Core seems to have abandoned the programming language dependent extension for project files.
  • The rest of the project seems to be pretty much the default dotnet tool one, with the following differences:
  • the template uses SQL Server by default
  • the lib folder in wwwroot is already populated with libraries

So far so good. There is also the little issue of the database. As you may remember from the post about the dotnet tool template, there were some files that needed to initialize the database. The error message then said "In Visual Studio, you can use the Package Manager Console to apply pending migrations to the database: PM> Update-Database". Is that what I have to do? Also, we need to check what the database settings are. While I do have an SQL Server instance on this computer, I haven't configured anything yet. The Project_Readme.html page is not very useful, as the link Run tools such as EF migrations and more goes to an obsolete link on github.io (the documentation seems to have moved to a microsoft.com server now).

I *could* read/watch a tutorial, but what the hell? Let's run the website, see what it does! Amazingly, the web site starts, using IIS Express, so I go to Register, to see how the database works and I get the same error about the migrations. I click on the Apply Migrations button and it says the migrations have been applied and that I need to refresh. I do that and voila, it works!

So, where is the database? It is not in the bin folder as WebApplication.db like in the Sqlite version. It's not in the SQL Server, the service wasn't even running. The DefaultConnection string looks like "Server=(localdb)\\mssqllocaldb;Database=aspnet-ContentAggregator-7fafe484-d38b-4230-b8ed-cf4a5a8df5e1;Trusted_Connection=True;MultipleActiveResultSets=true". What's going on? The answer lies in the SQL Server Express LocalDB instance that Visual Studio comes with.

Changing and removing shit


To paraphrase Antoine de Saint-Exupéry, this project will be set up not when I have nothing else to add, but when I have nothing else to remove.

First order of business is to remove SQL Server and use SQLite instead. Quite the opposite of how I have pictured it, but hey, you do what you must! In theory all I have to do is replace .UseSqlServer with .UseSqlite and then adjust the DefaultConnection string from appsettings.json with something like "Data Source=WebApplication.db". Done that, fixed the namespaces and imported packages, ran the solution. Migration error, Apply Migrations, re-register and everything is working. WebApplication.db and everything.
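For clarity, here is a minimal sketch of the change in ConfigureServices (assuming the default template's Startup.cs; the connection string stays in appsettings.json):

// requires the Microsoft.EntityFrameworkCore.Sqlite package and using Microsoft.EntityFrameworkCore;
// before: options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));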

Second is to remove all crap that I don't need right now. I may need it later, so at this point I am backing up my project. I want to remove:
  • Database - yeah, I know I just recreated it, but I need to know what those migrations contained and if I even need them, considering I want to register with OAuth only
  • Controllers - probably I will end up recreating them, but we need to understand why things are how they are
  • Models - we'll do those from scratch, too
  • Services - they were specific to the default web site, so poof! they're gone.
  • Views - the views will be redesigned completely, so we delete them also
  • Client libraries - we keep jQuery and jQuery validation, but we remove bootstrap
  • CSS - we keep the site.css file, but remove everything in it
  • Javascript - keep site.js, but empty
  • Other assets like images - removed

"What the hell, I read so much of this blog post just for you to remove everything you did until now?" Yes! This part is the project set up and before its end we will have a clean white slate on which to create our masterpiece.

So, action! Close Visual Studio. Delete bin (with the db file in it) and obj, delete all files in Controllers, Data, Models, Services, Views. Empty files site.css and site.js, while deleting the .min versions, delete all images, Project_Readme.html and project.lock.json. In order to cleanly remove bootstrap we need to use bower. Run
bower uninstall bootstrap
which will remove bootstrap, but won't remove it from bower.json, so remove it from there. Reopen Visual Studio and the project, wait until it restores the packages.

When trying to compile the project, there are some errors, obviously. First, namespaces that don't exist anymore, like Controllers, Models, Data, Services. Remove the offending usings. Then there are services that we wanted to inject, like SMS and Email, which for now we don't need. Remove the lines that try to use them under // Add application services. The rest of the errors are about ApplicationDbContext and ApplicationUser. Comment them out. These are needed for when we figure out how the project is going to preserve data. Later on a line in Startup.cs will throw an exception ( app.UseIdentity(); ) so comment it out as well.

Finishing touches


Now the project compiles, but it does nothing. Let's finish up by adding a default Controller and a View.

In Visual Studio right click on the Controllers folder in the Solution Explorer and choose Add → Controller → MVC Controller - Empty. Let's continue to name it HomeController. Go to the Views folder, create a new folder called Home. Now you might think that right clicking on it and selecting Add → View would work, but it doesn't. The Add button stubbornly remains disabled unless you specify a template and a model and other stuff. It may be useful later on, but at this stage ignore it. The way to add a view now is go to Add → New Item → MVC View Page. Create an Index.cshtml view and empty its contents.
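For reference, the resulting controller is as minimal as it gets (a sketch of what the scaffolder generates; only the namespace depends on the project name):

using Microsoft.AspNetCore.Mvc;

namespace ContentAggregator.Controllers
{
    public class HomeController : Controller
    {
        // serves Views/Home/Index.cshtml
        public IActionResult Index()
        {
            return View();
        }
    }
}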

Now run the application. It should show a wonderfully empty page with no console errors. That's it for our blank project setup. Check out the source code for this point of the exploration. Stay tuned for the real fun!

In September last year I was leaving my job and starting a sabbatical year, with many plans for what seemed then like a lot of time in which to do everything. I was severely underestimating my ability to waste time. Now the year is almost over and I need to start thinking about the technologies in my field of choice that I need to catch up with; and, boy, there is a lot of them! I leave the IT industry alone for one year and kaboom! it blows up like an angry volcano. To be honest, not all of these things that are new for me are just one year old, some I was just ignoring as I didn't need them for my various jobs. Learn from this, as especially in the software business it gets harder and harder to keep up to date and easier and easier to live in a bubble of your own or your employer's creation.

This post is about a list of programming topics that I would like to learn or at least learn to recognize. It's work in progress and probably I will update it for a time. While on my break I created a folder of software development stuff that I would "leave for later". As you can imagine, it got quite large. Today I am opening it for the first time. Be afraid. Be very afraid. I also have a lot of people, either friends or just casual blog or Twitter followings, that constantly let me know of what they are working on. As such, the list will not be very structured, but will be large. Let's begin.

A simple list would look like this. Let me set the list style to ordered list so you can count them:
  1. Typescript 2
  2. ReactJS
  3. JSX
  4. SignalR
  5. Javascript ES6
  6. Xamarin
  7. PhoneGap
  8. Ionic
  9. NodeJS
  10. .NET Core
  11. ASP.Net MVC
  12. R
  13. Python
  14. Unity
  15. Tensorflow
  16. DMTK/CNTK
  17. Visual Studio Code
  18. Jetbrains Project Rider
  19. npm
  20. Bower
  21. Docker
  22. Webpack
  23. Kubernetes
  24. Deep Learning
  25. Statistics
  26. Data mining
  27. Cloud computing
  28. LESS
  29. SASS
  30. CSSX
  31. Responsive design
  32. Multiplatform mobile apps
  33. Blockchains
  34. Parallel programming
  35. Entity Framework
  36. HTML5
  37. AngularJS 2
  38. Cryptography
  39. OpenCV
  40. ZeroNet
  41. Riffle
  42. Bots
  43. Slack
  44. OAuth
  45. OData
  46. DNS
  47. Bittorrent
  48. Roslyn
  49. Universal Windows Platform / Windows 10 development
  50. Katana
  51. Shadow DOM
  52. Serverless architecture
  53. D3 and D4 (d3-like in ReactJs)
  54. KnockoutJs
  55. Caliburn Micro
  56. Fluent Validation
  57. Electron

Yup, there are more than 50 general concepts, major frameworks, programming languages, tools and what not, some of them already researched but maybe not completely. That is not including various miscellaneous small frameworks, pieces of code, projects I want to study or things I want to do. I also need to prioritize them so that I can have at least the semblance of a study plan. Being July 21st, I have about one full month in which to cover the basic minimum. Clearly almost two subjects a day every day is too ambitious a task. Note to self: ignore that little shrieky voice in your head that says it's not!

Being a .NET developer by trade I imagine my next job will be in that area. Also, while I hate this state of affairs, notice there is nothing related to WPF up there. The blogs about the technology that I was reading a few years ago have all dried up, with many of those folks moving to the bloody web. So, I have to start with:

  1. ASP.Net MVC Core - the Model View Controller way of making .NET web applications, I've worked with it, but I am not an expert, as I need to become. Some quickly googled material:
  2. .NET Core - the new version of .NET, redesigned to be cross platform. There is no point of learning .NET Core as standalone: it will be used all over this plan
  3. Entity Framework Core - honestly, I've moved away from ORMs, but clearly Microsoft is moving full steam ahead on using EF everywhere, so I need to learn it well. As resources, everything written or recommended by Julie Lerman should be good, but a quick google later:
  4. OData - an OASIS standard that defines a set of best practices for building and consuming RESTful APIs. When Microsoft adopts an open standard, you pretty much know it will enter the common use vocabulary as a word used more often than "mother". Some resources:
  5. OAuth - An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications. It is increasingly used as "the" authentication method, mostly because it allows for third party integration with Facebook, Twitter, Google and other online identity providers. Some resources:
  6. Typescript 2 - a strict superset of JavaScript from Microsoft, it adds optional static typing and class-based object-oriented programming to the language. Javascript is great, but you can use it in any way you want. There is no way to take advantage of so many cool features of modern IDEs like Visual Studio + ReSharper without some sort of structure. I hope Typescript provides that for me. Resources:
  7. NodeJS - just when I started liking Javascript as a programming language, here comes NodeJs and brings it everywhere! And that made me like it less. Does that make sense? Anyway, with Microsoft tools needing NodeJs for various reasons, I need to look into it. Resources:
  8. Javascript ES6 - the explosion of Javascript put a lot of pressure on the language itself. ECMAScript6 is the newest iteration, adding support for a lot of features that we take for granted in more advanced languages, like classes, block variable scope, lambdas, multiline literals, better regular expressions, modules, etc. I intend to rewrite my browser extension in ES6 Javascript for version 3, among other things. Here are some resources:
  9. npm - npm is a package installer for Javascript. Everybody likes to use it so much that I believe it will soon become an antipattern. Functions like extracting the left side of a string, for example, are considered "packages".
  10. Bower - Bower is a package manager for the web, an attempt to maintain control over a complex ecosystem of web frameworks and libraries and various assets.
  11. Docker - The world’s leading software containerization platform - I don't even know what that means right now - Docker is a tool that I hear more and more about. In August I will even attend an ASP.Net Core + Docker presentation by a Microsoft guy.
  12. Parallel programming - I have built programs that take advantage of parallel programming, but never in a systematic way. I usually write stuff as a single thread, switching to multithreaded work to solve particular problems or to optimize run time. I believe that I need to write everything with parallelism in mind, so I need to train myself in that regard.
  13. Universal Windows Platform - frankly, I don't even know what it means. I am guessing something that brings application development closer to the mobile device/store system, which so far I don't enjoy much, but hey, I need to find out at least what the hell this is all about. The purpose of this software platform is to help develop Metro-style apps that run on both Windows 10 and Windows 10 Mobile without the need to be re-written for each. Resources:
  14. HTML5 - HTML5 is more than a simple rebuttal of the XHTML concept and the adding of a few extra tags and attributes. It is a new way of looking at web pages. While I've used HTML5 features already, I feel like I need to understand the entire concept as a whole.
  15. Responsive design - the bane of my existence was web development. Now I have to do it so it works on any shape, size or DPI screen. It has gone beyond baneful; yet every recruiter seems to have learned the expression "responsive design" by heart and my answer to that needs to be more evolved than a simple "fuck you, too!"
  16. LESS and SASS - CSS is all nice and functional, but it, just like HTML and Javascript, lacks structure. I hope that these compilable-to-CSS frameworks will help me understand a complex stylesheet like I do a complex piece of code.
  17. AngularJS 2 - I hear that Angular 2 is confusing users of Angular 1! which is funny, because I used Angular just for a few weeks without caring too much about it. I've read a book, but I forgot everything about it. Probably it is for the best as I will be learning the framework starting directly with version 2.

So there you have it: less than 20 items, almost two days each. Still bloody tight, but I don't really need to explore things in depth, just to know what they are and how to use them. The in-depth learning needs to come after that, with weeks if not months dedicated to each.

What about the remaining 35 items? Well, the list is still important as a reference. I intend to go through each, however some of the concepts there are there just because I am interested in them, like DNS, Riffle, Bitcoin and Bittorrent, not because they would be useful at my job or even my current side projects. Data mining and artificial intelligence is a fucking tsunami, but I can't become an expert in something like this without first becoming a beginner, and that takes time - in which the bubble might burst, heh heh. Mobile devices are all nice and cool, but the current trend is for tablets to remain a whim, while people divide their experience between laptops and big screen smartphones. The web takes over everything and I dread that the future is less about native apps and more about web sites. What are native mobile apps for? Speed and access to stuff a browser doesn't usually have access to. Two years and a new API later and a web page does that better. APIs move faster nowadays and if they don't, there are browser extensions that can inject anything and work with a locally installed app that provides just the basic functionality.

What do you think of my list? What needs to be added? What needs to be removed? Often learning goes far smoother when you have partners. Anyone interested in going through some subjects and then discuss it over a laptop and a beer?

Wish me luck!

Another entry from my ".NET with Visual Studio Code" series, this will discuss the default web site generated by the 'dotnet new' command. I recommend reading the previous articles:
Update 31 Mar 2017: This entire article is obsolete.
Since the release of .NET Core 1.1 the dotnet command has a different syntax: dotnet new <template>, where a template must be specified; it no longer defaults to a console application. Also, the web template is really stripped down to only what one needs, similar to my changes to the console project to make it web. The project.json format has become deprecated and .csproj files are now being used. Yet I worked a lot on the post and it might show you how .NET Core came to be and how it changed over time.

The analysis that follows is now more appropriate for the dotnet new mvc template in .NET Core 1.1, although they removed the database access and other things like registration from it.

Setup


Creating the ASP.Net project


Until now we've used the simple 'dotnet new' command, which defaults to the Console application project type. However, there are four available types that can be specified with the -t flag: Console, Web, Lib and xunittest. I want to use Visual Studio Code to explore the default web project generated by the command. So let's follow some basic steps:
  1. Create a new folder
  2. Open a command prompt in it (Update: Core has a Terminal window in it now, which one can use)
  3. Type dotnet new -t Web
  4. Open Visual Studio Code and select the newly created folder
  5. To the prompt "Required assets to build and debug are missing from your project. Add them?" choose Yes

So, what have we got here? It seems like an earlier version of John Papa's AspNet5-starter-demo project.

First of all there is a nice README.MD file that announces this is a big update of the .NET Core release and it's a good idea to read the documentation. It says that the application in the project consists of:
  • Sample pages using ASP.NET Core MVC
  • Gulp and Bower for managing client-side libraries
  • Theming using Bootstrap
and follows with a lot of how-to URLs that are mainly from the documentation pages for MVC and the MVC tutorial that uses Visual Studio proper. While we will certainly read all of that eventually, this series is more about exploring stuff rather than learning it from a tutorial.

If we go to the Debug section of Code and run the web site, two interesting things happen. One is that the project compiles! Yay! The other is that the web site at localhost:5000 is automatically opened. This happens because of the launchBrowser property in launch.json's configuration:
"launchBrowser": {
    "enabled": true,
    "args": "${auto-detect-url}",
    "windows": {
        "command": "cmd.exe",
        "args": "/C start ${auto-detect-url}"
    },
    "osx": {
        "command": "open"
    },
    "linux": {
        "command": "xdg-open"
    }
}

The web site itself looks like crap, because there are console errors about missing files like /lib/bootstrap/dist/css/bootstrap.css, /lib/bootstrap/dist/js/bootstrap.js and /lib/jquery/dist/jquery.js. All of these are Javascript and CSS files from frameworks defined in bower.json:
{
    "name": "webapplication",
    "private": true,
    "dependencies": {
        "bootstrap": "3.3.6",
        "jquery": "2.2.3",
        "jquery-validation": "1.15.0",
        "jquery-validation-unobtrusive": "3.2.6"
    }
}
and there is also a .bowerrc file indicating the destination folder:
{
    "directory": "wwwroot/lib"
}

This probably has to do with that Gulp and Bower line in the Readme file.

What is Bower? Apparently, it's a package manager that depends on node, npm and git. Could it be that I need to install Node.js just so that I can compile my C# ASP.Net project? Yes it could. That's pretty shitty if you ask me, but I may be a purist at heart and completely wrong.

Installing tools for client side package management


While ASP.Net Core's documentation about Client-Side Development explains some stuff about these tools, I found that Rule-of-Tech's tutorial from a year and a half ago explains better how to install and use them on Windows. For the purposes of this post I will not create my own .NET bower and gulp emulators and install the tools as instructed... and under duress. You're welcome, Internet!

Git


Go to the Git web site, and install the client.

Node.js


Go to the Node.js site and install it. Certainly a pattern seems to emerge here. On their page they seem to offer two wildly different versions: one that is called LTS and has version v4.4.7, another that is called Current at version v6.3.0. A quick google later I know that LTS comes from 'Long Term Support' and is meant for corporate environments where change is rare and the need for specific version support much larger. I will install the Current version for myself.

Bower


The Bower web site indicates how to install it via the Node.js npm:
npm install -g bower
Oh, you gotta love the ASCII colors and the loader! Soooo cuuuute!


Gulp


Install Gulp with npm as well:
npm install --global gulp
Here tutorials say to install it in your project's folder again:
npm install --save-dev gulp
Why do we need to install gulp twice, once globally and once locally? Because reasons. However, I noticed that in the project there is already a package.json containing what I need for the project:
{
    "name": "webapplication",
    "version": "0.0.0",
    "private": true,
    "devDependencies": {
        "gulp": "3.9.1",
        "gulp-concat": "2.6.0",
        "gulp-cssmin": "0.1.7",
        "gulp-uglify": "1.5.3",
        "rimraf": "2.5.2"
    }
}
Gulp is among them so I just run
npm install --save-dev
without specifying a package and it installs what I need. The install immediately gave me some concerning warnings like "graceful-fs v3.0.0 will fail on node releases >=7.0. Please update to graceful-fs 4.0.0" and "please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue" and "lodash v3.0.0 is no longer maintained. Upgrade to lodash v4.0.0". Luckily, the error messages are soon replaced by a random ASCII tree and if you can't scroll up you're screwed :) One solution is to pipe the output to a file or to NUL so that the ASCII tree of the packages doesn't scroll the warnings up
npm install --save-dev >nul

Before I go into weird Linux reminiscent console warnings and fixes, let's see if we can now fix our web site, so I run "gulp" and I get the error "Task 'default' is not in your gulpfile. Please check the documentation for proper gulpfile formatting". Going back to the documentation I understand why: it's because there is no default task in the gulpfile! Well, it sounded more exciting in my head.

Long story short, the commands to be executed are in project.json, in the scripts section:
"scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ],
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
}
There is npm install, there is bower install and the two tasks for gulp: clean and min.

It was the bower install command that we actually needed to make the web site load the required frameworks. Phew!

Let's start the web site again, from Code, by going to Debug and clicking Run. (By now it should be working with F5, I guess) Victory! The web site looks grand. It works!

Exploring the project


project.json will be phased out


But what is that prepublish nonsense? Frankly, no one actually knows or cares. It's not documented and, to my chagrin, it appears the project.json method of using projects will be phased out anyway! Whaat?

Remember that .NET Core is still version 1. As Microsoft has accustomed us for decades, you never use version 1 of any of their products. If you are in a hurry, you wait until version 2. If you care about your time, you wait for the second service pack of version 2. OK, a cheap shot, but how is it not frustrating to invest in something and then see it "phased out". Didn't they think things through?

Anyway, since the web site is working, I will postpone a rant about corporate vision and instead continue with the analysis of the project.

node_modules


Not only did I install Node and Bower and Gulp, but a lot of dependencies. The local installation folder for Node.js packages called node_modules is now 6MB of files that are mostly comments and boilerplate. It must be the (in)famous by now NPM ecosystem, where every little piece of functionality is a different project. At least nothing else seems to be polluted by it, so we will ignore it as a necessary evil.

Program and Startup


Following the pattern discussed in previous posts, the Program class is building the web host and then passing responsibility for configuration and management to the Startup class. The most obvious new thing in the class is the IConfigurationRoot instance that is built in the class' constructor.
public IConfigurationRoot Configuration { get; }

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

Configuration = builder.Build();
Here is the appsettings.json file that is being loaded there:
{
    "ConnectionStrings": {
        "DefaultConnection": "Data Source=WebApplication.db"
    },
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}
Later on these configuration settings will be used in code:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"))
);

loggerFactory.AddConsole(Configuration.GetSection("Logging"));
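As a small aside, individual values can also be read by colon-separated key paths (a sketch; the key comes from the appsettings.json listed above):

// returns "Debug", per the appsettings.json shown above
var defaultLevel = Configuration["Logging:LogLevel:Default"];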

Let me quickly list other interesting things for you to research in depth later:
I've tried to use the most relevant links up there, but you will note that some of them are from spring 2015. There is still much to be written about Core, so look it up yourselves.

Database


As you know, almost any application worth mentioning uses some sort of database. ASP.Net MVC uses an abstraction for that called ApplicationDbContext, which the Microsoft team wants us to use with Entity Framework. ApplicationDbContext just inherits from DbContext and links it to Identity via an ApplicationUser. If you are unfamiliar with how to work with EF DbContexts, check out this link. In the default project the database work is instantiated with
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"))
);
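The ApplicationDbContext class itself is tiny; roughly like this (a sketch from the default template, the generated file may contain a bit more):

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }
}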

Personally I am not happy that with a single instruction I now have in my project dependencies on
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.0",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.0",
"Microsoft.EntityFrameworkCore.Sqlite": "1.0.0",
"Microsoft.EntityFrameworkCore.Tools": {
    "version": "1.0.0-preview2-final",
    "type": "build"
}
Let's go with it for now, though.

AddDbContext is a way of injecting the DbContext without specifying an implementation. UseSqlite is an extension method that gets that implementation for you.
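Once registered like this, the context can be constructor-injected wherever it is needed, for example in a controller (a hypothetical one, not part of the template):

public class NotesController : Controller
{
    private readonly ApplicationDbContext _context;

    // the DbContext registered with AddDbContext is resolved and passed in automatically
    public NotesController(ApplicationDbContext context)
    {
        _context = context;
    }
}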

Let us test the functionality of the site so that I can get to an interesting EntityFramework concept called Migrations.

So, going to the now familiar localhost:5000 I notice a login/register link combo. I go to register, I enter a ridiculously convoluted password pattern and I get... an error:

A database operation failed while processing the request.

SqliteException: SQLite Error 1: 'no such table: AspNetUsers'.


Applying existing migrations for ApplicationDbContext may resolve this issue

There are migrations for ApplicationDbContext that have not been applied to the database

  • 00000000000000_CreateIdentitySchema


In Visual Studio, you can use the Package Manager Console to apply pending migrations to the database:

PM> Update-Database

Alternatively, you can apply pending migrations from a command prompt at your project directory:

> dotnet ef database update



What is all that about? Well, in a folder called Data in our project there is a Migrations folder containing the files 00000000000000_CreateIdentitySchema.cs, 00000000000000_CreateIdentitySchema.Designer.cs and ApplicationDbContextModelSnapshot.cs. They inherit from Migration and ModelSnapshot respectively and seem to describe in code the database structure and entities. Can it be as simple as running:
dotnet ef database update
(make sure you stop the running web server before you run the command)? It compiles the project and it completes in a few seconds. Let's run the app again and attempt to register. It works!

What happened? In the /bin directory of our project there is now a WebApplication.db file which, when opened, reveals it's an SQLite database.

But what about the mysterious button "Apply Migrations"? It generates an Ajax POST call to /ApplyDatabaseMigrations. Clicking it also fixes everything. How does that work? I mean, there is no controller for an ApplyDatabaseMigrations call. It all comes from the line
if (env.IsDevelopment())
{
    ...
    app.UseDatabaseErrorPage();
    ...
}
which installs the MigrationsEndPointMiddleware. Documentation for it here.

Pesky little middleware, isn't it?

Installing SQLite


It is a good idea to install SQLite as a standalone application. Make sure you install both binaries for the library as well as the client (the library might be x64 and the client x86). The installation is brutally manual. You need to copy both library and client files in a folder, let's say C:\Program Files\SQLite and then add the folder in the PATH environment variable. An alternative is using the free SQLiteBrowser app.

Opening the database with SQLiteBrowser we find tables for Users, Roles, Claims and Tokens, what we would expect from a user identity management system. However, only the Users table was used when creating a new user.

Controllers


The controllers of the application contain the last bit of ASP.Net related functionality in the project; anything else is business logic. The controllers themselves, though, are nothing special in terms of Core. The same class names, the same attributes, the same functionality as any regular MVC. I am not a specialist in that and I believe it to be outside the scope of the current blog post. However it is important to look inside these classes and the ones that are used by them so that we understand at least the basic concepts the creators of the project wanted to teach us. So I will quickly go through them, no code, just conceptual ideas:
  • HomeController - the simplest possible controller, it displays the home view
  • AccountController and ManageController - responsible for account management, they have a lot of functionality including antiforgery token, forgot the password, external login providers, two factor authentication, email and SMS sending, reset password, change email, etc. Of course, register, login and logoff.
  • IEmailSender, ISMSSender and MessageServices - if we waited until this moment for me to reveal the simple and free email/SMS provider architecture in the project, you will be disappointed. MessageServices implements both IEmailSender and ISMSSender with empty methods where you should plug in useful code (a minimal sketch follows after this list).
  • Views - all views are interesting to explore. They show basic and sometimes not so basics concepts of Razor Model/View/Controller interaction like: imports, declaring the controller and action names in the form element instead of a string URL, declaring the @model type for a view, validation summaries, inline code, partial views, model to view synchronization, variable view content depending on environment (production, staging), fallback source and automated testing that the script loaded, bootstrap theming, etc.
  • Models - model classes are simple POCOs, but they do provide their occasional instruction, like using annotations for data structure, validation and display purposes alike
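The sketch promised above, for the email/SMS service pattern (the template's method bodies are empty; interface and parameter names here are from memory and may differ slightly):

// requires: using System.Threading.Tasks;
public class MessageServices : IEmailSender, ISmsSender
{
    public Task SendEmailAsync(string email, string subject, string message)
    {
        // plug in a real provider here (SMTP, SendGrid, etc.)
        return Task.FromResult(0);
    }

    public Task SendSmsAsync(string number, string message)
    {
        // plug in a real SMS provider here
        return Task.FromResult(0);
    }
}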

Running the website


So far I've described running the web site through the Debug sidebar, but how do I run the website from the command line? Simply by running
dotnet run
in your project folder. You can just as well use
dotnet compiled.dll
, but you need to also run it from the project root. In the former case, dotnet will also compile the project for you, in the latter it will just run whatever is already compiled.

In order to change the default IP and port binding, refer to this answer on Stack Overflow. Long story short, you need to use a configuration file from your WebHostBuilder setup.
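For a quick-and-dirty alternative, the binding can also be set directly on the builder (a sketch only; the config-file approach from the linked answer is more flexible):

var host = new WebHostBuilder()
    .UseKestrel()
    .UseUrls("http://0.0.0.0:5001") // listen on all interfaces, port 5001
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .Build();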

However there is a lot of talk about dnx. What is this? In short, it's 'dotnet'. Before version 1 there was an entire list of commands like dnvm, dnu, dnx, etc. You may still install them if you want, but I wouldn't recommend it. As detailed in this document, the transition from the dn* tools to the dotnet command is already in progress.

Conclusions


A large number of concepts have been bundled together in the default "web" project type, allowing for someone to immediately create and run a web site, theme it and change it to their own purposes. While creating such a project from Visual Studio is a breeze compared to what I've described, it is still a big step towards enabling people to just quickly install some stuff and immediately start coding. I approve! :) good job, Microsoft!

This is part of the .NET Core and VS Code series, which contains:
Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.

I am continuing my series about .NET Core, using Visual Studio Code only, on Windows, with as little command line work as possible. Read about writing a console application and then a very simple web application before you read this part.

Setup


In this post I will attempt to write a very simple Web API using .NET Core. I intend to have some sort of authentication and then be able to read and write stuff using REST API calls. As with the projects before, I will use Visual Studio Code to create a folder, open it, then do stuff in it after running the command 'dotnet new' and preferably changing the namespace so that it is not always ConsoleApplication. This time the steps will be a little bit different, more inline with what you would do in real life:
  1. Create a folder for our project called WebCore
  2. In it create an API folder
  3. Open a command prompt in the API folder and type 'dotnet new web' (in .NET Core 1.0 you could omit the template, but from 1.1 you need to use it explicitly)
  4. Open Visual Studio Code by typing 'code' or by any other means
  5. Close the command prompt window
  6. In Code select the API folder and open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Change the namespace to WebCore.API
  10. Press Ctrl-Shift-B to build the project.

Right now you should have a web project that compiles. In order to turn this into an API project, we need to understand a few things. So go down to the tutorial for ASP.Net Core API that uses Visual Studio to read about what Microsoft suggests, download the zip file of all the docs and tutorials and look in /Docs-master/aspnet/tutorials/first-web-api/sample/src/TodoApi to see what they did in code (or just browse them on GitHub). You also need to understand that while in normal .NET (how do we call it now?) MVC and Web API are two branches that often have similar functionality in similarly named classes, in .NET Core the API is just part of MVC. So no WebApi namespaces, just MVC.

That is why, to turn our app into an API project, we need to add the MVC packages to the dependencies of project.json:

"Microsoft.AspNetCore.Mvc": "1.0.0",
"Microsoft.AspNetCore.Mvc.Core": "1.0.0"

or, in the .csproj world of .NET Core 1.1, add

<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />

to an ItemGroup in API.csproj, and then tell the project to use MVC in code. This is done by adding services.AddMvc(); in ConfigureServices and replacing the app.Run line in Configure with app.UseMvcWithDefaultRoute(); in Startup.cs.

At this point we should have something like this:

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.AspNetCore.Mvc": "1.0.0",
                "Microsoft.AspNetCore.Mvc.Core": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


API.csproj
<Project Sdk="Microsoft.NET.Sdk.Web">

    <PropertyGroup>
        <TargetFramework>netcoreapp1.1</TargetFramework>
    </PropertyGroup>

    <ItemGroup>
        <Folder Include="wwwroot\" />
    </ItemGroup>

    <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
        <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />
    </ItemGroup>

</Project>

Program.cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;

namespace WebCore.API
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace WebCore.API
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseMvcWithDefaultRoute();

            /*app.Run(async (context) =>
            {
                await context.Response.WriteAsync("Hello World!");
            });*/
        }
    }
}

Clean and name the launch settings as well:
.vscode/launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (web API)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/API.dll",
            "args": [],
            "cwd": "${workspaceRoot}",
            "externalConsole": false,
            "stopAtEntry": false
        }
    ]
}


The Controller


Let's add functionality to our API. Create a folder called Controllers and in it add a NoteController.cs file. Just copy paste the code for now, we will understand it later.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace WebCore.API.Controllers
{
    [Route("api/[controller]")]
    public class NoteController : Controller
    {
        private static List<Note> _notes;

        static NoteController()
        {
            _notes = new List<Note>();
        }

        [HttpGet]
        public IEnumerable<Note> GetAll()
        {
            return _notes.AsReadOnly();
        }

        [HttpGet("{id}", Name = "GetNote")]
        public IActionResult GetById(string id)
        {
            var item = _notes.Find(n => n.Id == id);
            if (item == null)
            {
                return NotFound();
            }
            return new ObjectResult(item);
        }

        [HttpPost]
        public IActionResult Create([FromBody] Note item)
        {
            if (item == null)
            {
                return BadRequest();
            }
            item.Id = (_notes.Count + 1).ToString();
            _notes.Add(item);
            return CreatedAtRoute("GetNote", new { controller = "Note", id = item.Id }, item);
        }

        [HttpDelete("{id}")]
        public void Delete(string id)
        {
            _notes.RemoveAll(n => n.Id == id);
        }

        public class Note
        {
            public string Id { get; set; }
            public string Content { get; set; }
        }
    }
}

In this new class we have a lot of interesting things to behold. First of all there is the Route attribute that uses a [controller] token (well explained here). Then there are the Http attributes: HttpGet, HttpPost, HttpPut, HttpDelete which as their name implies define methods on the API controller that are accessed via the respective HTTP methods so beloved by the REST crowd.

An important thing to grasp from this is what happens with the HttpGet attribute (note it has a Name defined) and the later used CreatedAtRoute, which are actually Web API v2 concepts. In our case, whenever we create a new item we return the item as a result, but also set the Location header to the route that would "get" it, so that the client knows how to get to it.

Testing the result


Let's test what we did. While the GET methods are easy to test from the browser (http://localhost:5000/api/note should show an empty array result []), the POST and DELETE are trickier. The Microsoft documentation recommends using Fiddler, but I would go with Postman, which is easy to install as a Chrome application.

In Postman, don't forget to set the Content-Type header to application/json:


Then POST to /api/note/ a content like
{
Content:'test'
}
:


The result of the call should be as explained previously
{
"id": "1",
"content": "test"
}

Go with the browser at /api/note/ or /api/note/1 and you should get results

More complexity


What happened there exactly? We just added a class and it magically worked. How did .NET Core do that? Core comes with Dependency Injection and Inversion of Control by default. In our case, the mere fact that we have a Controller class with a Route attribute made the difference. The line
app.UseMvcWithDefaultRoute();
is the code that enables that behavior. Let me demonstrate more of this "magic". In our example so far I've hardcoded the Note class inside the Controller class, but usually the class would be part of the functionality of the API and there would be a repository handling saving stuff in the database.

To avoid polluting the post with a lot of code, here is the improved source code of the API. Note is now a more complex object, the controller is initialized with an INoteRepository instance which is bound to a NoteRepository implementation. The API works just as before, only now Note has Key, Subject and Body properties instead of Id and Content.

The thing to learn from this version is how, in the Startup class's ConfigureServices, the line
services.AddSingleton<INoteRepository, NoteRepository>();
tells Core that, whenever an implementation of INoteRepository is requested, the singleton instance of NoteRepository should be returned. From this line alone Core knows to initialize the NoteController constructor with the correct repository instance.
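To make that concrete, here is a rough sketch of the pieces involved (names follow the post; the actual classes live in the linked source code and are a bit richer):

// requires: using System.Collections.Generic; using Microsoft.AspNetCore.Mvc;
public interface INoteRepository
{
    IEnumerable<Note> GetAll();
    void Add(Note note);
}

public class NoteRepository : INoteRepository
{
    private readonly List<Note> _notes = new List<Note>();

    public IEnumerable<Note> GetAll() => _notes.AsReadOnly();
    public void Add(Note note) => _notes.Add(note);
}

[Route("api/[controller]")]
public class NoteController : Controller
{
    private readonly INoteRepository _repository;

    // Core resolves INoteRepository to the registered NoteRepository singleton
    public NoteController(INoteRepository repository)
    {
        _repository = repository;
    }
}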

Security


Now, we don't want any peasant to come and mess with our important notes. We require some way of authenticating our operations. There is a very detailed section about Security in the Core documentation, but I only wanted a simple example. I worked on it for hours just to notice that the line instructing the application builder to use authentication was placed after the one telling it to use MVC. There were also some naming bugs that made me waste a lot of time.

Bottom line: here is the sample code with authentication. You cannot use any API calls unless you are a User. You login as a user calling /api/note/login/12345. The Create/Delete APIs are also unavailable to the Users, you need to be an Admin. There is a commented line in the code that also adds the Claim to the Admin role when logging in. I also added logging, so you can see what goes wrong in the Debug Console.

More information about how to implement authentication without Identity (as most Microsoft samples want) can be found at Using Cookie Middleware without ASP.NET Core Identity and my example is based on that.

Impressions on Visual Studio Code


Update: some of the complaints underneath are not correct anymore, but I am keeping my original reaction unchanged, since it reflects my feelings at the time.

The more complex the project becomes, the less Visual Studio Code can handle it. Intellisense is bloody awful, exceptions can sometimes be caught only by adding logging to the console, and the basic editing features are lacking - a serious Undo queue, auto formatting when typing or closing brackets, even syntax highlighting at times. There are also features that plainly don't work: when you are writing a new class an option for "Generate Type" appears, but it does nothing. Same for "Generate Variable". While I plan to test VS Code with a large project created in Visual Studio proper, I don't have high hopes. Remember when Microsoft created Silverlight with Javascript and it sucked and they immediately switched to WPF based Silverlight? It feels like that. However, for nostalgic reasons mostly, I also enjoy working in this spartan environment, especially when working with recently released features. It reminds me of the days when I was trying to connect various Linux software with basically duct tape and paper clips.

I started reading the book knowing nothing of its author or contents or, indeed, the publishing date. For me it felt like a philosophical treatise on revolutions in general, especially since the names of the countries were often omitted and it seemed that the author was purposely trying to make it apply to any era and any nation. Only after finishing the book did I realize that Darkness at Noon was published in 1940 by Arthur Koestler, a Hungarian-British author who was briefly part of the German Communist movement before he left it, disillusioned. The main character is probably a more dramatic version of himself. As an aside and a fun coincidence, in July 2015 the original German manuscript of the book was found. All versions published since are based on an English translation, Koestler having lost the original. Even the published German version is a retranslation from English by Koestler himself.

The book is difficult to read because it is packed with deep philosophical and political thoughts of the main protagonist. Like a man condemned to purgatory, Rubashov is a former party leader now imprisoned by the very regime he helped create. While waiting for his interrogation and sentencing, he maniacally analyses his life's work and meaning, trying to find where the great revolution failed and why. One of the last survivors of "the old guard", he realizes that the new version of the party is a perversion of his initial ideals, but a logical progression of the principles he followed. He remembers the people he himself condemned to death and tries to understand if he had done the right thing or not. Still faithful to his views that tradition and emotion should be ignored and even eliminated, progressing by logical analysis and cold decisions, he tries to collect his thoughts enough to find the solution, not for himself, but for the failed revolution.

Darkness at Noon is a relatively short book, but a dense one. The author makes Rubashov feel extremely human, even as he remembers his own moments of monstrosity. Repeatedly he reveals to the reader that he thinks of himself as an automaton, a cog in a machine, and that he feels little about his own demise or success, but greatly about the result of the thing bigger than himself in which he believes. Yet even as he does that, he is tormented by human emotion, wracked by guilt, pained by a probably imaginary toothache that flares when he feels his own mistakes and regrets his past actions. He never quite gets to the point where he is apologetic, though, believing strongly that the ends justify the means.

I liked it. It is not a political manifesto, but a deep rumination on political ideas that the author no longer adheres to. While I am sure many people have tried to use it as a tool against Communism, I find that the book is more than that, treating Communism itself just like another movement in the bowels of humanity. It is almost irrelevant in the end, as the story goes full circle to trap Rubashov in a prison of his own mind, with many of the characters just reflections of parts of himself. In the center of it all stands a man, an archetypal one at that, the book raising his fleeting existence higher than the sluggish fluctuations of any revolution or political ideals.

I read on Wikipedia that Darkness at Noon is meant as the second part of a trilogy, but from the descriptions of the other two books they don't seem to be narratively connected. I certainly didn't feel I missed something discussed in other material. Highly introspective, the story is often thought provoking, forcing one to put down the book to think about what was read. It is, thus, very cerebral: the emotions within are not the type that would make one read feverishly to get to the end and the end itself is just a beginning. I warmly recommend it.

This is part of the .NET Core and VS Code series.
Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.

Continuing the post about building a console application in .Net Core with the free Visual Studio Code tool on Windows, I will be looking at ASP.Net, also using Visual Studio Code. Some of the things required to understand this are in the previous post, so do read that one, at least for the part with installing Visual Studio Code and .NET Core.

Reading into it


Start with reading the introduction to ASP.Net Core, take a look at the ASP.Net Core Github repository and then continue with the information on the Microsoft sites, like the Getting Started section. Much more effort went into the documentation for ASP.Net Core, which shows - to my chagrin - that the web is a primary focus for the .Net team, unlike the language itself, native console apps, WPF and so on. Or maybe they're just working on it.

Hello World - web style


Let's get right into it. Following the Getting Started section, I will display the steps required to create a working web Hello World application, then we continue with details of implementation for common scenarios. Surprisingly, creating a web app in .Net Core is very similar to doing a console app; in fact we could just continue with the console program we wrote in the first post and it would work just fine, even without a 'web' section in launch.json.
  1. Go to the Explorer icon and click on Open Folder (or File → Open Folder)
  2. Create a Code Hello World folder
  3. Right click under the folder in Explorer and choose Open in Command Prompt - if you have it. Some Windows versions removed this option from their context menu. If not, select Open in New Window, go to the File menu and select Open Command Prompt (as Administrator, if you can) from there
  4. Write 'dotnet new console' in the console window, then close the command prompt and the Windows Explorer window, if you needed it
  5. Select the newly created Code Hello World folder
  6. From the open folder, open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Click the Debug icon and press the green play arrow

Wait! Aren't these the steps for a console app? Yes they are. To turn this into a web application, we have to go through these few more steps:
  1. If your project uses project.json (the Core 1.0 tooling), open project.json and add
    ,"Microsoft.AspNetCore.Server.Kestrel": "1.0.0"
    to dependencies (right after Microsoft.NETCore.App)
  2. If your project uses a .csproj file (the Core 1.1 tooling), open it and add
    <ItemGroup>
      <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    </ItemGroup>
  3. Open Program.cs and change it by adding
    using Microsoft.AspNetCore.Hosting;
    and replacing the Console.WriteLine thing with
    var host = new WebHostBuilder()
    .UseKestrel()
    .UseStartup<Startup>()
    .Build();
    host.Run();
  4. Create a new file called Startup.cs that looks like this:
    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;

    namespace <your namespace here>
    {
        public class Startup
        {
            public void Configure(IApplicationBuilder app)
            {
                app.Run(context =>
                {
                    return context.Response.WriteAsync("Hello from ASP.NET Core!");
                });
            }
        }
    }

There are a lot of warnings and errors displayed, but the program compiles, and when run it keeps running until stopped. To see the result, open a browser window at http://localhost:5000. Closing and reopening the folder will make the warnings go away.

#WhatHaveWeDone


So first we added Microsoft.AspNetCore.Server.Kestrel (or, in the .csproj case, the Microsoft.AspNetCore package) to the project. With this we now have access to Microsoft.AspNetCore.Hosting, which allows us to use a fluent interface to build a web host. We UseKestrel (Kestrel is based on libuv, which is a multi-platform support library with a focus on asynchronous I/O. It was primarily developed for use by Node.js, but it's also used by Luvit, Julia, pyuv, and others.) and we UseStartup with a class creatively named Startup. In that Startup class we receive an IApplicationBuilder in the Configure method, to which we attach a simple handler on Run.

In order to understand the delegate sent to the application builder we need to go through the concept of Middleware, which are components in a pipeline. In our case Run was used because, by convention, Run is the last thing to be executed in the pipeline. We could just as well have used Use without invoking the next component. Another option is Map, which is also a convention exposed through an extension method like Run, and which is meant to branch the pipeline.
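To make the pipeline idea concrete, here is a rough sketch of the three conventions side by side - the path, header and messages are invented for the example:

public void Configure(IApplicationBuilder app)
{
    // Use: does some work, then passes control to the next component
    app.Use(async (context, next) =>
    {
        context.Response.Headers["X-Demo"] = "middleware";
        await next();
    });

    // Map: branches the pipeline for a given path prefix
    app.Map("/ping", branch =>
    {
        branch.Run(context => context.Response.WriteAsync("pong"));
    });

    // Run: terminal component; nothing registered after it gets called
    app.Run(context => context.Response.WriteAsync("Hello from ASP.NET Core!"));
}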

Anyway, all of this you can read in the documentation. A must read is the Fundamentals section.

Various useful things


Let's ponder on what we would like in a real-life web site. Obviously we need a web server that responds differently for different URLs, but we also need logging, security and static files. Boilerplate such as error pages and not-found pages needs to be handled. So let's see how we can do this. Some tutorials are available on how to do that with Visual Studio, but in this post we will be doing it with Code! Instead of fully creating a web site, though, I will be adding to the basic Hello World above one or two features at a time, postponing a fully functioning source for another blog post.

Logging


We will always need a way of knowing what our application is doing and for this we will implement a more complete Configure method in our Startup class. Instead of only accepting an IApplicationBuilder, we will also be asking for an ILoggerFactory. The main package needed for logging is
"Microsoft.Extensions.Logging": "1.0.0"
, which needs to be added to dependencies in project.json.
From .NET Core 1.1, Logging is included in the Microsoft.AspNetCore package.

Here is some code:

Startup.cs
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
        {
            var logger = loggerFactory.CreateLogger("Sample app logger");
            logger.LogInformation("Starting app");
        }
    }
}

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.Extensions.Logging": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


So far so good, but where does the logger log? For that matter, why do I need a logger factory, can't I just instantiate my own logger and use it? The logging mechanism intended by the developers is this: you get the logger factory, you add logger providers to it, which in turn instantiate logger instances. So when the logger factory creates a logger, it actually gives you a chain of different loggers, each with their own settings.

For example, in order to log to the console, you run loggerFactory.AddConsole(); for which you need to add the package
"Microsoft.Extensions.Logging.Console": "1.0.0"
to project.json. What AddConsole does is actually factory.AddProvider(new ConsoleLoggerProvider(...));. The provider will then instantiate a ConsoleLogger for the logger chain, which will write to the console.
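Putting the pieces together, the Configure method from above would become something like this (just a sketch; the category name is arbitrary):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
        {
            // registers a ConsoleLoggerProvider with the factory
            loggerFactory.AddConsole();

            var logger = loggerFactory.CreateLogger("Sample app logger");
            logger.LogInformation("Starting app");
        }
    }
}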

Additional logging options come with the Debug or EventLog or EventSource packages as you can see at the GitHub repository for Logging. Find a good tutorial on how to create your own Logger here.
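To demystify what a provider actually is, here is a toy sketch of a custom one - not production code, and real providers also deal with filtering and scopes:

using System;
using Microsoft.Extensions.Logging;

// a toy provider/logger pair that writes everything to the console with a prefix
public class PrefixLoggerProvider : ILoggerProvider
{
    public ILogger CreateLogger(string categoryName) => new PrefixLogger(categoryName);
    public void Dispose() { }

    private class PrefixLogger : ILogger
    {
        private readonly string _category;
        public PrefixLogger(string category) { _category = category; }

        public IDisposable BeginScope<TState>(TState state) => null;
        public bool IsEnabled(LogLevel logLevel) => true;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            Console.WriteLine($"[{logLevel}] {_category}: {formatter(state, exception)}");
        }
    }
}

// registration, somewhere in Configure:
// loggerFactory.AddProvider(new PrefixLoggerProvider());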

Static files


It wouldn't be much of a web server if it couldn't serve static files. ASP.Net Core has two concepts for web site roots: Web root and Content root. Web root is for web-servable content files, while Content root is for application content files, like views and stuff. In order to serve these we use the
<PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
package, we instruct the WebHostBuilder what the web root is, then we tell the ApplicationBuilder to use static files. Like this:

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.AspNetCore.StaticFiles": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


.csproj file
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
  </ItemGroup>

</Project>

Program.cs
using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var path = Path.Combine(Directory.GetCurrentDirectory(), "www");
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseWebRoot(path)
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs
using Microsoft.AspNetCore.Builder;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.UseStaticFiles();
        }
    }
}

Now all we have to do is add a page and some files in the www folder of our app and we have a web server. But what does UseStaticFiles do? It runs app.UseMiddleware<StaticFileMiddleware>(), which adds the StaticFileMiddleware to the middleware chain which, after some validations and checks, runs StaticFileContext.SendRangeAsync(). It is worth looking up the code of these files, since it both demystifies the magic of web servers and awes through its simplicity.
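As a side note, UseStaticFiles also accepts a StaticFileOptions, useful when files live outside the web root. A hedged sketch - the "assets" folder and the "/static" prefix are invented for the example, and PhysicalFileProvider comes from the Microsoft.Extensions.FileProviders packages pulled in by StaticFiles:

using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.FileProviders;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            // serve the files in the web root at their usual URLs
            app.UseStaticFiles();

            // additionally serve an "assets" folder under the /static URL prefix
            app.UseStaticFiles(new StaticFileOptions
            {
                FileProvider = new PhysicalFileProvider(
                    Path.Combine(Directory.GetCurrentDirectory(), "assets")),
                RequestPath = "/static"
            });
        }
    }
}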

Routing


Surely we need to serve static content, but what about dynamic content? Here is where routing comes into play.

First add package
"Microsoft.AspNetCore.Routing": "1.0.0"
to project.json, then add some silly routing to Startup.cs:

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.AspNetCore.Routing": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


From .NET Core 1.1, Routing is found in the Microsoft.AspNetCore package.

Program.cs
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRouting();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseRouter(new HelloRouter());
        }

        public class HelloRouter : IRouter
        {
            public VirtualPathData GetVirtualPath(VirtualPathContext context)
            {
                return null;
            }

            public Task RouteAsync(RouteContext context)
            {
                var requestPath = context.HttpContext.Request.Path;
                if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
                {
                    context.Handler = async c =>
                    {
                        await c.Response.WriteAsync($"Hello world!");
                    };
                }
                return Task.FromResult(0);
            }
        }
    }
}

Some interesting things were added here. First of all, we added a new method to the Startup class, called ConfigureServices, which receives an IServiceCollection. To this we AddRouting, with a by now familiar extension method that shortcuts adding a whole bunch of services to the collection: constraint resolvers, URL encoders, route builders, routing marker services and so on. Then in Configure we UseRouter (a shortcut for using a RouterMiddleware) with a custom built implementation of IRouter. What it does is look for a /hello URL request and return the "Hello world!" string. Go on and test it by going to http://localhost:5000/hello.

Take a look at the documentation for routing for a proper example, specifically how to use a RouteBuilder to build an implementation of IRouter from a default handler and a list of mapped routes with their handlers, and how to map routes with parameters and constraints. Another post will handle a complete web site, but for now let's just build a router with several routes and see how it goes:

Startup.cs
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRouting();
        }

        public void Configure(IApplicationBuilder app)
        {
            var defaultHandler = new RouteHandler(
                c => c.Response.WriteAsync($"Default handler! Route values: {string.Join(", ", c.GetRouteData().Values)}")
            );

            var routeBuilder = new RouteBuilder(app, defaultHandler);

            routeBuilder.Routes.Add(new Route(new HelloRouter(), "hello/{name:alpha?}",
                app.ApplicationServices.GetService<IInlineConstraintResolver>()));

            routeBuilder.MapRoute("Track Package Route",
                "package/{operation:regex(track|create|detonate)}/{id:int}");

            var router = routeBuilder.Build();
            app.UseRouter(router);
        }

        public class HelloRouter : IRouter
        {
            public VirtualPathData GetVirtualPath(VirtualPathContext context)
            {
                return null;
            }

            public Task RouteAsync(RouteContext context)
            {
                var name = context.RouteData.Values["name"] as string;
                if (String.IsNullOrEmpty(name)) name = "World";

                var requestPath = context.HttpContext.Request.Path;
                if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
                {
                    context.Handler = async c =>
                    {
                        await c.Response.WriteAsync($"Hi, {name}!");
                    };
                }
                return Task.FromResult(0);
            }
        }
    }
}

Run it and test the following URLs:
  • http://localhost:5000/ - should return nothing but a 404 code (which you can check in the Network section of your browser's developer tools)
  • http://localhost:5000/hello - should display "Hi, World!"
  • http://localhost:5000/hello/Siderite - should display "Hi, Siderite!"
  • http://localhost:5000/package/track/12 - should display "Default handler! Route values: [operation, track], [id, 12]"
  • http://localhost:5000/package/track/abc - should again return a 404 code, since there is a constraint on id to be an integer

How does that work? First of all, we added a more complex HelloRouter implementation that handles a name. To the route builder we added a default handler that just displays the parameters it receives, a Route instance that contains the URL template, the IRouter implementation and an inline constraint resolver for the parameter, and finally a mapped route that goes to the default handler. That is why, if neither the hello nor the package URL template is matched, the site returns 404; otherwise it chooses one handler or the other and displays the result.

Error handling


Speaking of 404 code that can only be seen in the browser's developer tools, how do we display an error page? As usual, read the documentation to understand things fully, but for now we will be playing around with the routing example above and adding error handling to it.

The short answer is: add some middleware to the mix. The
"Microsoft.AspNetCore.Diagnostics": "1.0.0"

package provides extension methods like UseStatusCodePages, UseStatusCodePagesWithRedirects, UseStatusCodePagesWithReExecute, UseDeveloperExceptionPage and UseExceptionHandler.
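These are typically combined based on the hosting environment - a hedged sketch, with the "/error" path being an assumption, while the full example below simply turns both on unconditionally for brevity:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // full stack traces while developing
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // a friendlier handler and plain-text status pages for everybody else
        app.UseExceptionHandler("/error");
        app.UseStatusCodePages();
    }

    // ... routing / MVC registration goes here, after the error handling middleware
}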

Here is an example of the Startup class:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRouting();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseDeveloperExceptionPage();
            app.UseStatusCodePages(/*context=>{
                throw new Exception($"Page return status code: {context.HttpContext.Response.StatusCode}");
            }*/);

            var defaultHandler = new RouteHandler(
                c => c.Response.WriteAsync($"Default handler! Route values: {string.Join(", ", c.GetRouteData().Values)}")
            );

            var routeBuilder = new RouteBuilder(app, defaultHandler);

            routeBuilder.Routes.Add(new Route(new HelloRouter(), "hello/{name:alpha?}",
                app.ApplicationServices.GetService<IInlineConstraintResolver>()));

            routeBuilder.MapRoute("Track Package Route",
                "package/{operation:regex(track|create|detonate)}/{id:int}");

            var router = routeBuilder.Build();
            app.UseRouter(router);
        }

        public class HelloRouter : IRouter
        {
            public VirtualPathData GetVirtualPath(VirtualPathContext context)
            {
                return null;
            }

            public Task RouteAsync(RouteContext context)
            {
                var name = context.RouteData.Values["name"] as string;
                if (String.IsNullOrEmpty(name)) name = "World";
                if (String.Equals(name, "error", StringComparison.OrdinalIgnoreCase))
                {
                    throw new ArgumentException("Hate errors! Won't say Hi!");
                }

                var requestPath = context.HttpContext.Request.Path;
                if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
                {
                    context.Handler = async c =>
                    {
                        await c.Response.WriteAsync($"Hi, {name}!");
                    };
                }
                return Task.FromResult(0);
            }
        }
    }
}

All you have to do now is try some non-existent route like http://localhost:5000/help or cause an error with http://localhost:5000/hello/error.

Final thoughts


The post is already too long, so I won't be covering here a lot of what is needed, such as security, or setting the environment name so that one can, for example, only show the development error page in the development environment. The VS Code ecosystem is at its beginning and work is being done to improve it as we speak. The basic concept of middleware allows us to build whatever we want, starting from scratch. In a future post I will discuss MVC and Web API and the more advanced concepts related to the web. Perhaps it will be done in VS Code as well, perhaps not, but my point is that with the knowledge so far one can control all of these things by knowing how to handle a few classes in a list of middleware. You might not need an off-the-shelf framework; you may write your own.

Find a melange of the code above in this GitHub repository.

This is part of the .NET Core and VS Code series.
Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.


So .NET Core was released and I started wondering if I could go full in, ignoring the usual way to work with .NET (you know, framework, Visual Studio, ReSharper, all that wonderful jazz). Well, five minutes in and I am worried. It looks like someone is trying to sell me developing in online editors packaged in a skinny app, just so that I run away scared back to the full Visual Studio paid stack. Yet, I'm only five minutes in, so I am going to persevere.

Installation


Getting started (on Windows) involves three easy steps:
  1. Download and install the .NET Core SDK for Windows (if you want the complete Core, that works with Visual Studio, go here for more details. This is the freeware version of getting started :) )
  2. Download and install Visual Studio Code which ironically needs .NET 4.5 to run.
  3. After running Code, press Ctrl-P and in the textbox that appears write 'ext install csharp', wait for a dropdown to appear and click on the C# for Visual Studio Code (powered by Omnisharp) entry to install support for C#

Now you understand why I am a little worried. By default, VS Code comes with built-in support for JavaScript, TypeScript and Node.js. That's it!

Visual Studio Code overview


Next stop, read up on how Code works. It's a different approach than Visual Studio. It doesn't open up projects, it opens up folders. The project, configuration, package files, even the settings files for Code itself, are json, not XML. The shortcut you used earlier is called Quick Open and is used to install stuff, find files, open things up, etc. Another useful shortcut is F1, which doesn't pop up some useless help file, but the Command Palette, which is like a dropdown for menu commands.

And, of course, as for any newly installed software, go immediately to settings and start customizing things. Open the File menu, go to Preferences and open User Settings and you will be amazed to see that instead of a nice interface for options you get two side-by-side json files. One is with the default settings and the other one is for custom settings that would override defaults. Interestingly, I see settings for Sass and Less. I didn't see anything that needed changing right from the start, so let's go further.

Another thing that you notice is the left side icon bar. It contains four icons:
  • Explorer - which opens up files and probably will show solutions
  • Search - which helps you search within files
  • Debug - I can only assume that it debugs stuff
  • Git - So Git is directly integrated in the code editor, not "source control", just Git. It is an interesting statement to see Git as a first class option in the editor, while support for C# needs to be installed.

Hello world! - by ear


A little disappointing, when you Open a Folder from Explorer, it goes directly to My Documents. I don't see a setting for the initial projects folder to open. Then again, maybe I don't need it. Navigate to where you need to, create a new folder, open it up. I named mine "Code Hello World", because I am a creative person at heart. Explorer remains open in the left, showing a list of Working Files (open files) and the folder with some useful options such as New File, New Folder, Refresh and Collapse All. Since we have opened a folder, we have access to the File → Preferences → Workspace Settings json files, which are just like the general Code settings, only applying to this one folder. Just opening the files creates a .vscode/settings.json empty file.

Let's try to write some code. I have no idea what I should write, so let's read up on it. Another surprise: not many articles on how to start a new project. Well, there is something called ASP.Net Core, which is another install, but I care not about the web right now. All I want is write a short Hello World thing. There are some pages about writing console applications with Code on Macs, but you know, when I said I am a creative person I wasn't going that far! I go to Quick Open, I go to the Command Palette, nothing resembling "new project" or "console app" or "dotnet something". The documentation is as full of information as a baby's bottom is full of hair. You know what that means, right? It's cutting edge, man! You got to figure it out yourself!

So let's go from memory, see where it gets us. I need a Program.cs file with a namespace, a Program class and a static Main method that accepts an array of strings as a first argument. I open a new file (which also gets created under the .vscode folder) and I write this:
namespace CodeHelloWorld
{
    public class Program {
        static void Main(string[] args) {
            Console.WriteLine("Hello World!");
        }
    }
}
I then go to the Debug icon and click it.
It asks me to select the environment in which I will debug stuff: NodeJS, the Code extension environment and .Net Core. I choose .Net Core and another configuration file is presented to me, called launch.json. Now we are getting somewhere! Hoping it is all good - I am tired of all this Linuxy thing - I press the green Debug arrow. Oops! It appears I didn't select the Task Runner. A list of options is presented to me, from which two seem promising: MSBuild and .NET Core. Fuck MSBuild! Let's go all Core, see if it works.

Choosing .NET Core creates a new json file, this time called tasks.json, containing a link to documentation and some settings that tell me nothing. The version number for the file is encouraging: 0.1.0. Oh, remember the good old Linux days when everything was open source, undocumented, crappy and everyone was cranky if you even mentioned a version number higher or equal to 1?

I press the damn green arrow again and I get another error: the preLaunchTask "build" terminated with exit code 1. I debug anyway and it says that I have to configure launch.json program setting, with the name of the program I have to debug. BTW, launch.json has a 0.2.0 version. Yay! Looking more carefully at the launch.json file I see that the name of the program ends with .dll. I want an .exe, yet changing the extension is not the only thing that I need to do. Obviously I need to make my program compile first.

I press Ctrl-P and type > (the same thing can be achieved by Ctrl-Shift-P or going to the menu and choosing Command Palette) and look for Build and I find Tasks: Run Build Task. It even has the familiar Ctrl-Shift-B shortcut. Maybe that will tell me more? It does nothing. Something rotates in the status bar, but no obvious result. And then I remember the good old trusty Output window! I go to View and select Toggle Output. Now I can finally see what went wrong. Can you guess it? Another json file. This time called project.json and having the obvious problem that it doesn't exist. Just for the fun of it I create an empty json file and it still says it doesn't exist.

What now? I obviously have the same problem as when I started: I have no idea how to create a project. Armed with a little bit more information, I go browse the web again. I find the documentation page for project.json, which doesn't help much because it doesn't have a sample file, but also, finally, a tutorial that makes sense: Console Application. And here I find that I should have first run the command line dotnet new console in the folder I created, then opened the project with VS Code.

Hello world! - by tutorial


Reset! Close Code, delete everything from the folder. Also, make sure - if you run a file manager or some other app and you just installed .NET Core - that you have C:\Program Files\dotnet\ in the PATH. Funny thing: the prompt for deleting some files from Code asks whether to send them to the Recycle Bin. I have the Recycle Bin disabled, so it fails and then presents the option to delete them permanently.

Now, armed with knowledge, I go to the folder Code Hello World, run the command 'dotnet new console' ("console" is the name of the project template. Version 1.0 allowed you to omit the template and it would default to console. From version 1.1 you need to specify it explicitly) and it creates a .csproj file (project.json in the older tooling) named after the folder you were in and a Program.cs file that is identical to mine, apart from the extra using System line. I run 'dotnet restore', 'dotnet build' and I notice that obj and bin folders have been created. The output, though, is not an exe file, but a dll file. 'dotnet run' actually runs as it should and displays "Hello, World!".

Let's open it with Code. I get a "Required assets to build and debug are missing from your project. Add them?" and I say Yes, which creates the .vscode folder, containing launch.json and tasks.json. Ctrl-Shift-B pressed and I get "Compilation succeeded." Is it possible that now I could press F5 and see the program run? No, of course not, because I must "set up the launch configuration file for my application". What does it mean? Well, apparently if I go to the Debug icon and press the green arrow icon (that has the keyboard shortcut F5) the program does run. I need to press the button for the Debug Console that is at the top of the Debug panel to see the result, but it works. From then on, pressing F5 works, too, no matter where I am.

Unconvinced by the whole thing, I decide to do things again, but this time do as little as possible from the console.

Hello world! - by Code


Reset! Delete the folder entirely and restart Visual Studio code. Then proceed with the following steps:
  1. Go to the Explorer icon and click on Open Folder (or File → Open Folder)
  2. Create a Code Hello World folder
  3. Right click under the folder in Explorer and choose Open in Command Prompt - if you have it. Some Windows versions removed this option from their context menu. If not, select Open in New Window, go to the File menu and select Open Command Prompt (as Administrator, if you can) from there
  4. Write 'dotnet new console' in the console window, then close the command prompt and the Windows Explorer window, if you needed it
  5. Select the newly created Code Hello World folder
  6. From the open folder, open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Click the Debug icon and press the green play arrow

That's it!


But let's discuss one poignantly ridiculous part of the step list. Why did we have to open and close the folder? It is in order to get that prompt to add required assets. If you try to run the app without doing that, Code will create a generic launch.json file with placeholders instead of actual folder names. Instead of
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/Code Hello World.dll",
            "args": [],
            "cwd": "${workspaceRoot}",
            "externalConsole": false,
            "stopAtEntry": false
        },
        {
            "name": ".NET Core Attach",
            "type": "coreclr",
            "request": "attach",
            "processId": 0
        }
    ]
}
you get something like
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/bin/Debug/<target-framework>/<project-name.dll>",
            "args": [],
            "cwd": "${workspaceRoot}",
            "stopAtEntry": false,
            "externalConsole": false
        },
        {
            "name": ".NET Core Launch (web)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/bin/Debug/<target-framework>/<project-name.dll>",
            "args": [],
            "cwd": "${workspaceRoot}",
            "stopAtEntry": false,
            "launchBrowser": {
                "enabled": true,
                "args": "${auto-detect-url}",
                "windows": {
                    "command": "cmd.exe",
                    "args": "/C start ${auto-detect-url}"
                },
                "osx": {
                    "command": "open"
                },
                "linux": {
                    "command": "xdg-open"
                }
            },
            "env": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        },
        {
            "name": ".NET Core Attach",
            "type": "coreclr",
            "request": "attach",
            "processId": 0
        }
    ]
}
and a warning telling you to configure launch.json. It only works after changing the program property to '/bin/Debug/netcoreapp1.0/Code Hello World.dll' and, of course, the web and attach configuration sections are pointless.


Debugging


I placed a breakpoint on Console.WriteLine. The only declared variable is the args string array. I can see it in Watch (after adding it by pressing +) or in the Locals panel, I can change it from the Debug Console. I can step through code, I can set a condition on the breakpoint. Nothing untoward here. There is no Intellisense in the Debug Console, though.

In order to give parameters to the application in debug you need to change the args property of launch.json: instead of "args": [], something like "args": ["test", "test2"]. In order to run the application from the command line you need to run it with dotnet, like this: dotnet "Code Hello World.dll" test test2.
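For completeness, those values arrive in the plain old args array of Main; a trivial sketch:

using System;

namespace CodeHelloWorld
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // with "args": ["test", "test2"] in launch.json, this prints both values
            foreach (var arg in args)
            {
                Console.WriteLine($"Argument: {arg}");
            }
            Console.WriteLine("Hello World!");
        }
    }
}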

Conclusion


.NET in Code is not nearly ready to be a comfortable IDE experience, but it's getting there, if enough effort is put into it. It seems to embrace concepts from both the Windows and Linux worlds and I am worried that it may gain traction with neither. I have yet to try to build a serious project, though, and next on the agenda is trying an ASP.Net Core application, maybe a basic API.

While I am not blown away by what I have seen, I declare myself intrigued. If creating extensions for Code is easy enough, I may find myself writing my own code editor tools. Wouldn't that be fun? Stay tuned for more!

Just when I thought I didn't have anything else to add, I found new stuff for my Chrome browser extension.

Bookmark Explorer now features:
  • configurable interval for keeping a page open before bookmarking it for Read Later (so that all redirects and icons are loaded correctly)
  • configurable interval after which deleted bookmarks are no longer remembered
  • remembering deleted bookmarks no matter what deletes them
  • more Read Later folders: configure their number and names
  • redesigned options page
  • more notifications on what is going on

The extension most resembles OneTab, in the sense that it is also designed to save you from opening a zillion tabs at the same time, but differs a lot by its ease of use, configurability and the absolute lack of any connection to outside servers: everything is stored in Chrome bookmarks and local storage.

Enjoy!

I have created a Facebook page for the blog, so if you are tired of my Facebook ramblings and only need the updates for this blog, you can subscribe to it. You may find it here.

Also, I've discovered a bad bug with the chess viewer: it didn't allow manual promotions. If you tried to promote a pawn it would call a component that was not on the page (an ugly component) and then it wouldn't promote anyway because of a bug in the viewer. Hopefully I've solved them both. It mainly affected puzzles like the one I posted today.