A year or so ago I wrote a Javascript library, which I then ported (badly) to Typescript, bringing the sweet versatility and performance of LINQ to the *script world. Now I've rewritten the entire thing as a Typescript library. I've abandoned the separation into three different Javascript files; it is now a single file containing everything you need.

I haven't tested it in the wild, but you can get the new version here:

Github: https://github.com/Siderite/LInQer-ts

NPM: https://www.npmjs.com/package/@siderite/linqer-ts

Documentation online: https://siderite.github.io/LInQer-ts

Using the library in the browser directly: https://siderite.github.io/LInQer-ts/lib/LInQer.js and https://siderite.github.io/LInQer-ts/lib/LInQer.min.js

Please give me all the feedback you can! I would really love to hear from people who use this in:

  • Typescript
  • Node.js
  • browser web sites

The main blog post URL for the project is still https://siderite.dev/blog/linq-in-javascript-linqer as the official URL for both libraries.

Have fun using it!

And now for the old post

Update: The npm package can now be used in Node.js and Typescript (with Intellisense). Huge optimizations on the sorting and new extra functions published.

Update: Due to popular demand, I've published the package on npm at https://www.npmjs.com/package/@siderite/linqer. Also, while at it, I've moved the entire code to Typescript, fixed a few issues and also created the Linqer.Slim.js file, which only holds the most common methods. The minimized version can be found at https://siderite.github.io/LInQer/LInQer.slim.min.js and again you can use it freely on your websites. Now, I hope you can use it in Node JS as well; I have no idea how to test it and frankly I don't want to. I've also added a tests.slim.html that only tests the slim version of Linqer. This version is now official and I will try to limit any breaking changes from now on. WARNING: The type is now Linqer.Enumerable, not just Enumerable as before. If you are already using it in code, just do const Enumerable = Linqer.Enumerable; and you are set to go.

Update: LInQer can now be found published at https://siderite.github.io/LInQer/LInQer.min.js and https://siderite.github.io/LInQer/LInQer.extra.min.js and you can freely use it in your web sites. Source code on GitHub.

Update: All the methods in Enumerable are now implemented, and some extra ones as well. Optimizations have been developed for operations like orderBy().take() and .count().

  This entire thing started from a blog post that explained something that seems obvious in retrospect, but you have to actually think about it to realise it: Array functions in Javascript may seem similar to LInQ methods in C#, but they work with arrays! Every time you filter or you map something you create a new array! The post suggested creating functions that would use the modern Javascript features of iterators and generator functions and supplant the standard Array functions. It was brilliant!

  And then the post abruptly veered and swerved into a tree. The author suggested we add the pipe operator to Javascript, just like they have in functional programming languages, so that we can use those static functions with hash characters as placeholders and... ugh! It was horrid! So I decided to solve the problem in a way that was more compatible with my own sensibilities: make it work just like LInQ (or at least like the Java streams).

This is how LInQer was born. You can find the sources on GitHub at /Siderite/LInQer. The library introduces a class called Enumerable, which can wrap any iterable (meaning arrays, but also generator functions or anything that supports for...of) and then exposes the same methods that the Enumerable class in .NET exposes. There is even a unit test file called tests.html. Open it in the browser and see the supported methods. As you can see in the image of the blog post, the same logic (filtering, mapping and slicing a 10 million item array) took 2000ms using standard Array functions and 150ms using the Enumerable class.

"But wait! It says there that Enumerable exposes static methods! What exactly did you improve?", you will ask. In .NET we have something called "extension methods", which are indeed static methods that the language understands apply to specific classes or interfaces and allows you to call them like you would instance methods. So a method that looks like public static int Sum(this IEnumerable<int> enumerable) can be used statically Enumerable.Sum(arr) or like an instance method arr.Sum(). It's syntactic sugar. But enough about C#, in Javascript I've implemented Enumerable as a wrapper, as Javascript doesn't know about interfaces or static extension methods. This class then exposes prototype functions that work the same way as in LInQ.

Here is where the magic happens:

Enumerable.prototype = {
  [Symbol.iterator]() {
    const iterator = this._src[Symbol.iterator].bind(this._src);
    return iterator();
  },

In other words I wrap _src (the original iterable object) in my Enumerable by exposing the same iterator for both! Now both initial object and its wrapper can be used in for...of loops. The difference is that the wrapper now exposes all that juicy functionality. Let's see a simple example: select (which is the logical equivalent of Array.map):

select(op) {
  _ensureFunction(op);
  const gen = function* () {
    for (const item of this) {
      yield op(item);
    }
  }.bind(this);
  return new Enumerable(gen());
},

Here I am constructing a generator function which I bind to the Enumerable instance so that inside it 'this' is always that instance. Then I am returning the wrapper over the generator. In the function I am yielding the result of the op function on each item. Because of how generator functions work, the work is only done while iterating, meaning that if I use the Enumerable in a for...of loop and break after three items, the op function will only be applied to those items, not to all of them. In order to use the Enumerable object in regular code that works with arrays, you just use .toArray and you are done!

Let's see the code in the tests used to compare the standard Array function use with Enumerable:

// this is the large array we are using in both code blocks:
  const largeArray = Array(10000000).fill(10);

// Standard Array functions
  const someCalculation = largeArray
                             .filter(x=>x===10)
                             .map(x=>'v'+x)
                             .slice(100,110);
// Enumerable
  const someCalculation = Linqer.Enumerable.from(largeArray)
                             .where(x=>x===10)
                             .select(x=>'v'+x)
                             .skip(100)
                             .take(10)
                             .toArray();

There are differences from the C# Enumerable, of course. Some things don't make sense, like thenBy after orderBy. It can be done, but it's complicated and in my career I've only used thenBy twice! toDictionary, toHashSet, toLookup, toList have no meaning in Javascript. Use instead toMap, toSet, toObject, toArray. Last, but not least... join. Join is so complicated to use in LInQ that I almost never used it. Also, joining is usually done in the database, so I rarely needed it. I didn't see a point in implementing it. Cast and AsEnumerable also didn't make sense. But I implemented them anyway! :) Cast and OfType are using either a class or a string to determine if an item is "of type" and join works just like in C#.

But don't fret. Any function that you want to add to this can simply be added by your own code into Enumerable.prototype! So if you really need something custom, it's easy to add without modifying the original library.

There is one disadvantage for the prototype approach and the reason why the initial article suggested to use standalone functions: tree-shaking, the fancy word used to express the automatic elimination of unused code. But I think there is a solution for that, one that I won't implement since I believe the library is small enough and minified and compressed it will be very small indeed. The solution would involve separating each of the LINQ methods (or categories of methods, like first, firstOrDefault, last and lastOrDefault, for example) in different files. Then you can use only what you need and the files would attach the functions to the Enumerable prototype. 

In fact, there are a lot of LINQ methods I have never had use for, stuff like intersect and except or append and prepend. It makes sense to create a core Enumerable class and then add other stuff only when required. But as I said, it's small enough to not matter. As such I separated the functionality in Typescript files that contain the basic functionality, the complete functionality and some extra functions. The Javascript result is a Linqer, a Linqer.slim, a Linqer.extra and a Linqer.all, covering the entire spectrum of needs. It would have been user unfriendly to put every little thing in its own file.

Bottom line, I hope you find this library useful, or at least that it inspires you as it inspired me when I read the original blog post from Dan Shappir.

P.S. I am not the only one who had this idea; you might also want to look at linq.js, which uses a completely different approach, based on regular expressions. I think the closest to what LINQ should be is Manipula, which uses custom iterators for each operation rather than the same class. Another possibility is linq-collections, which does the work via TypeScript, apparently. Also linq.es6... I am sure there are more examples if one looks closely. Feel free to let me know if you did or know of similar work and also please give me whatever feedback you see fit. I would like this library to be useful, not just a code example for my blog.


This is a side post to Programming a simple game in pure HTML and Javascript in the sense that we are going to add that project to source control in GitHub using Visual Studio Code.

The first step is to create a repository. You need to create a GitHub account first, then either go to Your Repositories or click the green New button at the top of your repositories list on the main page.

Make sure you give the repository a name and a decent description. I recommend adding a README and a licence (I use MIT), too.

Now that you have a remote repo on GitHub, you need to remember its URL, you will need it later.

Then download Git from their website. The Windows installer is simple to find, download and Next-next-next to success.

Open Visual Studio Code in your project's folder. On the left side menu there is a bunch of vertical buttons; click on the Source Control one.

Then initialize your local repository by clicking on the plus sign and selecting your project folder: 

Now you need to connect your local repo to your remote repo and you do this by typing some stuff in the Visual Studio Code Terminal window:

  1. set the remote by typing git remote add origin <your GitHub repo URL>
  2. verify the remote by typing git remote -v
  3. run a git pull to make sure your project loads the readme and license file by typing git pull origin master

Last step is to push your local files to the remote repository. You do this by

  1. clicking on the check mark at the top of the Source Control window and giving a description to this first commit

  2. pushing/publishing the commit, by clicking on the ... mark and choosing Publish Branch

There you have it. Your project is on source control.

You do this in order to not lose your changes, to track your changes forwards and backwards in time, and to publish your work for others to see (like other devs or potential employers).

Verify that all went well by refreshing your GitHub project page. The files in your local folder should now be visible in the file list on the page.

Now, every time you make changes to your code, you push the changes to GitHub by committing (the check mark) and then pushing/syncing/publishing (from the ... menu).

I've just published the package Siderite.StreamRegex, which allows you to use regular expressions on Stream and TextReader objects in .NET. Please leave any feedback or feature requests or whatever in the comments below. I am grateful for any input.

Here is the GitHub link: Siderite/StreamRegex
Here is the NuGet package link: Siderite.StreamRegex

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Previously on Learning ASP.Net MVC...


I started with the idea of a project that would use user-configurable queries to do Google searches, store the information in the results and then perform various data analyses on them and display them based on what the user wants. However, I first implemented Google authentication and then went on to write some theoretical posts. Lastly, I've upgraded the project from .NET Core 1.0 to version 1.1.

Well, it took me a while to get here because I was at a crossroads. I like the idea of dependency injectable services to do data access. At the same time there is the entire Entity Framework tutorial path that kind of wants to strongly integrate EF with my projects. I mean, if I have a service that gives me the list of all items in the database and then I want to get only a few items, it would be bad design to filter the entire list. As such, I would have to write a different method that allows me to get the items based on some kind of filter. On the other hand, Entity Framework code looks just like that "give me all you have, filtered by this", which is then translated into an efficient query to the database. One possibility would be to have my service return IQueryable<T>, so I could also use the system to generate the database code on the fly.

The Design


I've decided on the service architecture, as opposed to an EF-style IQueryable approach, because I want to be able to replace that service with anything, including something that doesn't work with a database or something that doesn't know how to dynamically create queries. Also, the idea that the service methods will describe exactly what I want appeals to me more than avoiding a bit of duplicated code.
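
To make the trade-off concrete, here is a rough sketch of the two interface styles (the names are illustrative; only GetUnprocessed matches the interfaces used later in this post):

// EF-style: the service exposes a queryable and callers compose filters on it
public interface IQueryRepository
{
    IQueryable<Query> Queries { get; }
}

// explicit-method style: the service states exactly what questions it can answer
public interface IQueryDataService
{
    IEnumerable<Query> GetUnprocessed(DateTime now);
}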

Another thing to define now is the method through which I will implement the dependency injection. Being the control freak that I am, I would normally go with installing my own library, something like SimpleInjector, configuring it myself and using it explicitly. However, ASP.Net Core has dependency injection included out of the box, so I will use that.

As defined, the project needs queries to pass on to Google and a storage service for the results. It needs data services to manage these entities, as well as a service to abstract Google itself. The data gathering operation itself cannot be a simple REST call, since it might take a while; it must be a background task, and so must the data analysis. So we need a sort of job manager.

As per a good structured design, the data objects will be stored in a separate project, as well as the interfaces for the services we will be using.

Some code, please!


Well, let's start with the code of the project so far, on GitHub, and get coding.

Before finding a solution to actually run the background code in the context of ASP.Net, let's write it inside a class. I am going to add a folder called Jobs and add a class in it called QueryProcessor with a method ProcessQueries. The code will be self-explanatory, I hope.
public void ProcessQueries()
{
    var now = _timeService.Now;
    var queries = _queryDataService.GetUnprocessed(now);
    var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
        .SelectMany(q => _contentService.Query(q.Text));
    _contentDataService.Update(contentItems);
}

So we get the time - from a service, of course - and request the unprocessed queries for that time, then we extract the content items for each query, which are then updated in the database. The idea here is that the first time a query is defined, or when the interval since the last time the query was processed has elapsed, the query will be sent to the content service, from which content items will be received. These items will be stored in the database.

Now, I've kept the code as concise as possible: there is no indication yet of any implementation detail and I've written as little code as I needed to express my intention. Yet, what are all these services? What is a time service? What is a content service? Where are they defined? In order to enable dependency injection, we will populate all of these fields from the constructor of the query processor. Here is how the class looks in its entirety:
using ContentAggregator.Interfaces;
using System.Linq;

namespace ContentAggregator.Jobs
{
    public class QueryProcessor
    {
        private readonly IContentDataService _contentDataService;
        private readonly IContentService _contentService;
        private readonly IQueryDataService _queryDataService;
        private readonly ITimeService _timeService;

        public QueryProcessor(ITimeService timeService, IQueryDataService queryDataService, IContentDataService contentDataService, IContentService contentService)
        {
            _timeService = timeService;
            _queryDataService = queryDataService;
            _contentDataService = contentDataService;
            _contentService = contentService;
        }

        public void ProcessQueries()
        {
            var now = _timeService.Now;
            var queries = _queryDataService.GetUnprocessed(now);
            var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
                .SelectMany(q => _contentService.Query(q.Text));
            _contentDataService.Update(contentItems);
        }
    }
}

Note that the services are only defined as interfaces, which we declare in a separate project called ContentAggregator.Interfaces, referenced above in the usings block.

Let's ignore the job processor mechanism for a moment and just run ProcessQueries in a test method in the main controller. For this I will have to make dependency injection work and implement the interfaces. For brevity I will do so in the main project, although it would probably be a good idea to do it in a separate ContentAggregator.Implementations project. But let's not get ahead of ourselves. First make the code work, then arrange it all nice, in the refactoring phase.

Implementing the services


I will create mock services first, in order to test the code as it is, so the following implementations just do as little as possible while still following the interface signature.
public class ContentDataService : IContentDataService
{
    private readonly static StringBuilder _sb;

    static ContentDataService()
    {
        _sb = new StringBuilder();
    }

    public void Update(IEnumerable<ContentItem> contentItems)
    {
        foreach (var contentItem in contentItems)
        {
            _sb.AppendLine($"{contentItem.FinalUrl}:{contentItem.Title}");
        }
    }

    public static string Output
    {
        get { return _sb.ToString(); }
    }
}

public class ContentService : IContentService
{
    private readonly ITimeService _timeService;

    public ContentService(ITimeService timeService)
    {
        _timeService = timeService;
    }

    public IEnumerable<ContentItem> Query(string text)
    {
        yield return
            new ContentItem
            {
                OriginalUrl = "http://original.url",
                FinalUrl = "https://final.url",
                Title = "Mock Title",
                Description = "Mock Description",
                CreationTime = _timeService.Now,
                Time = new DateTime(2017, 03, 26),
                ContentType = "text/html",
                Error = null,
                Content = "Mock Content"
            };
    }
}

public class QueryDataService : IQueryDataService
{
    public IEnumerable<Query> GetUnprocessed(DateTime now)
    {
        yield return new Query
        {
            Text = "Some query"
        };
    }
}

public class TimeService : ITimeService
{
    public DateTime Now
    {
        get
        {
            return DateTime.UtcNow;
        }
    }
}

Now all I have to do is declare the binding between interface and implementation. The magic happens in ConfigureServices, in Startup.cs:
services.AddTransient<ITimeService, TimeService>();
services.AddTransient<IContentDataService, ContentDataService>();
services.AddTransient<IContentService, ContentService>();
services.AddTransient<IQueryDataService, QueryDataService>();

They are all transient, meaning that for each request of an implementation the system will just create a new instance. Another popular method is AddSingleton.
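
For comparison, here is how hypothetical registrations with other lifetimes would look (AddScoped is the third built-in lifetime, one instance per HTTP request; which lifetime each service should really get doesn't matter for these mock implementations yet):

services.AddSingleton<ITimeService, TimeService>();         // one instance, reused for the entire application
services.AddScoped<IQueryDataService, QueryDataService>();  // one instance per HTTP request
services.AddTransient<IContentService, ContentService>();   // a new instance every time it is resolved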

Using dependency injection


So, now I have to instantiate my query processor and run ProcessQueries.

One way is to set QueryProcessor as a service. I extract an interface, I add a new binding and then I give an interface as a parameter of my controller constructor:
[Authorize]
public class HomeController : Controller
{
    private readonly IQueryProcessor _queryProcessor;

    public HomeController(IQueryProcessor queryProcessor)
    {
        _queryProcessor = queryProcessor;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        _queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
In fact, I don't even have to declare an interface. I can just use services.AddTransient<QueryProcessor>(); in ConfigureServices and it works as a parameter to the controller.

But what if I want to use it directly, resolving it manually, without injecting it in the controller? One can inject an IServiceProvider instead. Here is an example:
[Authorize]
public class HomeController : Controller
{
    private readonly IServiceProvider _serviceProvider;

    public HomeController(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        var queryProcessor = _serviceProvider.GetService<QueryProcessor>();
        queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
Yet you still need to use services.Add... in ConfigureServices and inject the service provider in the constructor of the controller.

There is a way of doing it completely separately like this:
var serviceProvider = new ServiceCollection()
    .AddTransient<ITimeService, TimeService>()
    .AddTransient<IContentDataService, ContentDataService>()
    .AddTransient<IContentService, ContentService>()
    .AddTransient<IQueryDataService, QueryDataService>()
    .AddTransient<QueryProcessor>()
    .BuildServiceProvider();
var queryProcessor = serviceProvider.GetService<QueryProcessor>();

This would be the way to encapsulate the ASP.Net Dependency Injection in another object, maybe in a console application, but clearly it would be pointless in our application.

The complete source code after these modifications can be found here. Test the functionality by going to /test on your local server after you start the app.

It is about time to revisit my series on ASP.Net MVC Core. From the time of my last blog post the .Net Core version has changed to 1.1, so just installing the SDK and running the project was not going to work. This post explains how to upgrade a .Net project to the latest version.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Short version


Pressing the batch Update button for NuGet packages corrupted project.json. Here are the steps to successfully migrate a .Net Core project to a higher version.

  1. Download and install the .NET Core 1.1 SDK
  2. Change the version of the SDK in global.json - you can find out the SDK version by creating a new .Net Core project and checking what it uses
  3. Change "netcoreapp1.0" to "netcoreapp1.1" in project.json
  4. Change Microsoft.NETCore.App version from "1.0.0" to "1.1.0" in project.json
  5. Add
    "runtimes": {
    "win10-x64": { }
    },
    to project.json
  6. Go to "Manage NuGet packages for the solution", to the Update tab, and update projects one by one. Do not press the batch Update button for selected packages
  7. Some packages will restore, but remain in the list. Skip them for now
  8. Whenever you see a "downgrade" warning when restoring, go to those packages and restore them next
  9. For packages that tell you to upgrade NuGet, ignore them; it's an error that probably happens because you restore a package while the previous package restore has not completed
  10. For the remaining packages that just won't update, write down their names, uninstall them and reinstall them

Code after changes can be found on GitHub

That should do it. For detailed steps of what I actually did to get to this concise list, read on.

Long version


Step 0 - I don't care, just load the damn project!


I downloaded the source code from GitHub and loaded the .sln with Visual Studio 2015. I got a nice blocking alert, because this was a .NET Core virgin computer:
Of course, I could have tried to install that version, but I wanted to upgrade to the latest Core.

Step 1 - read the Microsoft documentation


And here I went to Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM. I followed the instructions there, made Visual Studio 2015 load my project and automatically restore packages:
  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file for netcoreapp1.0 or Microsoft.NetCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. to be continued...

I got to the second step, but still got the alert...

Step 2 - fumble around


... so I commented out the sdk property in global.json. I got another alert:


This answer recommended uninstalling old versions of SDKs, in my case "Microsoft .NET Core 1.0.1 - SDK 1.0.0 Preview 2-003131 (x64)". Don't worry, it didn't work. More below:

TL;DR; version: do not uninstall the Visual Studio .NET Core Tooling


And then... I got the same No executable found matching command "dotnet-projectmodel-server" error again.

I created a new .NET core project, just to see the version of SDK it uses: 1.0.0-preview2-003131 and I added it to global.json and reopened the project. It restored packages and didn't throw any errors! Dude, it even compiled and ran! But now I got a System.ArgumentException: The 'ClientId' option must be provided. Probably it had something to do with the Secret Manager. Follow the steps in the link to store your secrets in the app. It then worked.

Step 1.1 (see what I did there?) - continue to read the Microsoft documentation


I removed the third step of the Microsoft instructions from the list above because it caused me some problems. So don't do it yet. It was:
  1. Update your ASP.NET Core packages dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1


Since I had not upgraded the packages, as in the Microsoft third step, I decided to do it. 26 updates waited for me, so I optimistically selected them all and clicked Update. Of course, errors! One popped up as more interesting: Package 'Microsoft.Extensions.SecretManager.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Another was even more worrisome: Unexpected end of content while loading JObject. Path 'dependencies', line 68, position 0 in project.json. Somehow the updating operation for the packages corrupted project.json! From a 3050 byte file, it was now 1617.

Step 3 - repair what the Microsoft instructions broke


Suspecting it was a problem with the NuGet package manager, I went to the link in the first error. But in Visual Studio 2015 NuGet is included and it was clearly the latest version. So the only solution was to go through each package and see which one causes the problem. And so I went through all 26 packages, pressed Install on each, and it worked. Apparently, the batch Update button is causing the issue. Weirdly enough, there are two packages that were installed but remained in the Update tab and also appeared in the Consolidate tab: BundleMinifier.Core and Microsoft.EntityFrameworkCore.Tools, although I can't do anything with them there.

Another package (Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0) caused another confusing error: Package 'Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Yet restarting Visual Studio led to the disappearance of the CodeGeneration.Tools error.

So I tried to build the project, only to be met with yet another project.json corruption error: Can not find runtime target for framework '.NETCoreApp, Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'. Possible causes: [blah blah] The project does not list one of 'win10-x64, win81-x64, win7-x64' in the 'runtimes' [blah blah]. I found the fix here, which was to add
"runtimes": {
"win10-x64": { }
},
to project.json.

It compiled. It worked.

Sometimes we want to serialize and deserialize objects that are dynamic in nature, like having different types of content based on a property. However, the strong typed nature of C# and the limitations of serializers make this frustrating.


In this post I will be discussing several things:
  • How to deserialize a JSON property that can have a value of multiple types that are not declared
  • How to serialize and deserialize a property declared as a base type in WCF
  • How to serialize it back using JSON.Net

The code can all be downloaded from GitHub. The first phase of the project can be found here.

Well, the scenario was this: I had a string in the database that was in JSON format. I would deserialize it using JSON.Net and use the resulting object. The object had a property that could be one of several types, but no special notation to describe its concrete type (like the $type notation for JSON.Net or the __type notation for DataContractSerializer). The only way I knew what type it was supposed to be was a type integer value.

My solution was to declare the property as JToken, meaning anything goes there, then deserialize it manually when I have both the JToken value and the integer type value. However, the object also had to pass through a WCF service, which uses DataContractSerializer and which would throw the exception System.Runtime.Serialization.InvalidDataContractException: Type 'Newtonsoft.Json.Linq.JToken' is a recursive collection data contract which is not supported. Consider modifying the definition of collection 'Newtonsoft.Json.Linq.JToken' to remove references to itself..

I thought I could use two properties that pointed at the same data: one for JSON.Net, which would use JToken, and one for WCF, which would use a base type. It should have been easy, but it was not. People have tried this on the Internet and declared it impossible. Well, it's not!

Let's work with some real data. Database JSON:
{
  "MyStuff": [
    {
      "Configuration": {
        "Wheels": 2
      },
      "ConfigurationType": 1
    },
    {
      "Configuration": {
        "Legs": 4
      },
      "ConfigurationType": 2
    }
  ]
}

Here is the code that works:
[DataContract]
[JsonObject(MemberSerialization.OptOut)]
public class Stuff
{
    [DataMember]
    public List<Thing> MyStuff;
}

[DataContract]
[KnownType(typeof(Vehicle))]
[KnownType(typeof(Animal))]
public class Configuration
{
}

[DataContract]
public class Vehicle : Configuration
{
    [DataMember]
    public int Wheels;
}

[DataContract]
public class Animal : Configuration
{
    [DataMember]
    public int Legs;
}

[DataContract]
public class Thing
{
    private Configuration _configuration;
    private JToken _jtoken;

    [DataMember(Name = "ConfigurationType")]
    public int ConfigurationType;

    [IgnoreDataMember]
    public JToken Configuration
    {
        get
        {
            return _jtoken;
        }
        set
        {
            _jtoken = value;
            _configuration = null;
        }
    }

    [JsonIgnore]
    [DataMember(Name = "Configuration")]
    public Configuration ConfigurationObject
    {
        get
        {
            if (_configuration == null)
            {
                switch (ConfigurationType)
                {
                    case 1: _configuration = Configuration.ToObject<Vehicle>(); break;
                    case 2: _configuration = Configuration.ToObject<Animal>(); break;
                }
            }
            return _configuration;
        }
        set
        {
            _configuration = value;
            _jtoken = JRaw.FromObject(value);
        }
    }
}

public class ThingContractResolver : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(System.Reflection.MemberInfo member,
        Newtonsoft.Json.MemberSerialization memberSerialization)
    {
        var prop = base.CreateProperty(member, memberSerialization);
        if (member.Name == "Configuration") prop.Ignored = false;
        return prop;
    }
}

So now this is what happens:
  • DataContractSerializer ignores the JToken property, and doesn't throw an exception
  • Instead it takes ConfigurationObject and serializes/deserializes it as "Configuration", thanks to the KnownTypeAttributes decorating the Configuration class and the DataMemberAttribute that sets the property's serialization name (in WCF only)
  • JSON.Net ignores DataContract as much as possible (JsonObject(MemberSerialization.OptOut)) and both properties, but then un-ignores the Configuration property when we use the ThingContractResolver
  • DataContractSerializer will add a property called __type to the resulting JSON, so that it knows how to deserialize it.
  • In order for this to work, the JSON deserialization of the original text needs to be done with JSON.Net (which does not expect a __type property), and the JSON coming into the WCF service needs to be correctly decorated with __type when it reaches the server to be saved in the database

If you need to remove the __type property from the JSON, try the solution here: JSON.NET how to remove nodes. I didn't actually need to do it.

Of course, it would have been simpler to just replace the default serializer of the WCF service to Json.Net, but I didn't want to introduce other problems to an already complex system.


A More General Solution


To summarize, it would have been grand if I could have used IgnoreDataMemberAttribute and JSON.Net would just ignore it. To do that, I could use another ContractResolver:
public class JsonWCFContractResolver : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(System.Reflection.MemberInfo member,
        Newtonsoft.Json.MemberSerialization memberSerialization)
    {
        var prop = base.CreateProperty(member, memberSerialization);
        var hasIgnoreDataMember = member.IsDefined(typeof(IgnoreDataMemberAttribute), false);
        var hasJsonIgnore = member.IsDefined(typeof(JsonIgnoreAttribute), false);
        if (hasIgnoreDataMember && !hasJsonIgnore)
        {
            prop.Ignored = false;
        }
        return prop;
    }
}
This class now automatically un-ignores properties declared with IgnoreDataMember that are also not declared with JsonIgnore. You might want to create your own custom attribute so that you don't modify JSON.Net's default behavior.
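
For example, a minimal sketch of that custom-attribute route could look like this (JsonUnignoreAttribute is a name I made up for illustration, not an existing attribute):

[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class JsonUnignoreAttribute : Attribute { }

public class CustomAttributeContractResolver : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(System.Reflection.MemberInfo member,
        Newtonsoft.Json.MemberSerialization memberSerialization)
    {
        var prop = base.CreateProperty(member, memberSerialization);
        // only members explicitly marked with our own attribute get un-ignored,
        // leaving the default handling of all other members untouched
        if (member.IsDefined(typeof(JsonUnignoreAttribute), false))
        {
            prop.Ignored = false;
        }
        return prop;
    }
}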

Now, while I thought it would be easy to implement a JsonConverter that works just like the default one, only it uses this contract resolver by default, it was actually a pain in the ass, mainly because there is no default JsonConverter. Instead, I declared the default contract resolver in a static constructor like this:
JsonConvert.DefaultSettings = () => new JsonSerializerSettings
{
    ContractResolver = new JsonWCFContractResolver()
};

Now the JSON.Net operations don't need an explicitly declared contract resolver. Updated source code on GitHub


Links that helped

Configure JSON.NET to ignore DataContract/DataMember attributes
replace WCF built-in JavascriptSerializer with Newtonsoft Json.Net json serializer
Cannot return JToken from WCF Web Service
Serialize and deserialize part of JSON object as string with WCF

Update 12 August 2018: I've updated the project to Core 2.1 and changed the Add method to handle OutOfMemoryExceptions in case of memory fragmentation.

I have been looking for a job lately and so I've run into those obnoxious interviews where someone asks you to find the optimal algorithm for some task. And as much as I hate those questions in an interview, they got me thinking about all kinds of normal situations where the implementation is not optimal. I believe it would be hard to find something less optimal than the List<T> implementation in .Net.

Reference


For reference, look at the implementation code in its entirety.

Story


I will be discussing some of the glaring issues with the code, but before that, let's talk about the "business case". You have a C-like programming language, armed with all the default types for such, like bool, int, array, etc. and you need a dynamically sized container, one where you can add, insert and remove elements. The first idea is to use an array, then resize it accordingly. Only you can't know how large the array will need to be and you can't just allocate memory and then resize that allocation, as other variables have already occupied the next blocks of memory. The solution might be to allocate an initial array, then - when its size is no longer sufficient - create a larger one, copy what you need and make the changes in it.

A comment in the source tells us what the developers meant to do: The list is initially empty and has a capacity of zero. Upon adding the first element to the list the capacity is increased to 16, and then increased in multiples of two as required. So, an empty list is just a wrapper for an empty array. A list with one element occupies the size of a 16 element array, but if you add up to 17 elements, the array will double and continue to double.
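
You can watch this behavior with a few lines of code (an illustrative snippet; the exact initial capacity depends on the runtime you are on):

var list = new List<int>();
Console.WriteLine(list.Capacity);   // 0 for an empty list
var lastCapacity = list.Capacity;
for (var i = 0; i < 100; i++)
{
    list.Add(i);
    if (list.Capacity != lastCapacity)
    {
        // printed every time the internal array is reallocated;
        // each new capacity is double the previous one
        Console.WriteLine($"Count={list.Count}, Capacity={list.Capacity}");
        lastCapacity = list.Capacity;
    }
}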

Implementation


Let's check the algorithmic complexity of the add operation. For the first 16 elements you just add elements to the internal array: n operations for n elements. Once you get to 17, the complexity increases, as you need to copy all previous 16 values first. Now it's 16+16+1, which continues up to 33, where you have 16+16+16+32+1. At 65 it's 16+16+16+32+32+64+1. So while we are adding element after element, the operational complexity is on average twice as much as using a plain array. Meanwhile, the space occupied can be up to half again as much as you actually need.

Insertion is similarly tricky, but even more inefficient. Basically, when you insert a value or a range, the first operation is EnsureCapacity, which may copy the entire array into a new array. Only afterwards is the insert algorithm run, and it again copies the part of the array found after the insertion index.

Removal works in the opposite direction with the caveat that it never decreases the size of the array. If you added 10 million records in your list, then deleted them, your list now contains an internal array that is 10 million elements in size. There is a method called TrimExcess that tries to solve this, but you must call it manually. At least RemoveAll is using an O(n) algorithm instead of calling Remove repeatedly, which would have been a disaster.
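
To illustrate, a small snippet of my own (not from the reference source):

var list = new List<int>(Enumerable.Range(0, 10000000));
list.RemoveAll(x => true);        // Count is now 0...
Console.WriteLine(list.Capacity); // ...but Capacity is still 10 million
list.TrimExcess();                // only now does the internal array shrink
Console.WriteLine(list.Capacity);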

The piece of code that sets the internal dimension of the list is actually in the setter of the Capacity property, and it dumbly creates an array and copies the values from the current one to the new one.

A lot of the other operations are implemented by calling the static methods on the Array class: Array.IndexOf, Array.Sort, Array.BinarySearch, Array.Reverse, etc.

The last issue that List has is that, as an array wrapper, it needs contiguous memory space. There will be times where your code will fail not because there is not enough free memory, but because it is fragmented and the runtime cannot find a free block that is large enough for your data.

Better solutions


Before I start spouting all kinds of stupid things, I will direct you to the venerable C5 collection library, which is very well designed, documented and tested. It contains all kinds of containers to optimize whichever scenario you might have been thinking about.

Let's think solutions now. The major problem of this implementation is the need of a continuous array. Why not solve it by adding more arrays instead of replacing the one with another twice as large? When the capacity is exceeded, we create a new array of similar size, then link it to our list. And since we want to have index access, not linked list access, we need to add this array into an array of arrays.

What would that mean? Memory doesn't need to be contiguous. Adding is twice as fast. Inserting is fast, also, as you only need to insert a new array between existing arrays and move around the data in a single inner array. Accessing by index is a bit more complicated, but not by much. Removal is just as simple, with the added bonus that some inner arrays might become empty and be removed directly.
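
Here is a minimal sketch of such index access (my illustration of the idea, not the actual FragmentList<T> code; a real implementation would also cache cumulative counts instead of walking the fragments every time):

// requires using System; and using System.Collections.Generic;
public class FragmentedList<T>
{
    // each fragment is an independent chunk; together the fragments form the logical list
    private readonly List<List<T>> _fragments = new List<List<T>>();

    public T this[int index]
    {
        get
        {
            foreach (var fragment in _fragments)
            {
                if (index < fragment.Count) return fragment[index];
                index -= fragment.Count; // skip this fragment and keep looking
            }
            throw new ArgumentOutOfRangeException(nameof(index));
        }
    }
}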

This is by far not "the best" option, but just one level of optimization that tries to fix the biggest single problem with the current implementation. As one of my friends used to say: first collect data about your program, see where the bottlenecks are, then proceed to fix them in decreasing order of importance. The source can be found on GitHub.

Notes


A tester program of sorts shows the Count, Capacity and time for random operations for a normal list and the FragmentList<T>. The next line shows the individual lengths of the inner arrays. Note that I cheated by using Lists instead of arrays. It only makes sense, as the simplistic operations of List<T> now have a limited negative impact on individual fragments. Take note of AutoDefragmentThreshold, which is 0.8 by default. It replaces all the internal lists with a single contiguous one when more than 80% of the internal lists are smaller than a tenth of the total count. To disable the feature you need to set the value to more than 1, not 0. I also implemented all the public methods of List<T>, although you might only need to implement IList<T> and the *Range methods.

Well, enjoy! Hope it helps.

I am going to discuss in this post an interview question that pops up from time to time. The solution that is usually presented as best is the same, regardless of the inputs. I believe this to be a mistake. Let me explore this with you.

The problem



The problem is simple: given two sorted arrays of very large size, find the most efficient way to compute their intersection (the list of common items in both).

The solution that is given as correct is described here (you will have to excuse its Javiness), for example. The person who provided the answer made a great effort to list various solutions along with their O complexity, and the answer inspires confidence, as coming from one who knows what they are talking about. But how correct is it? Another blog post describing the problem and hinting at some extra information that might influence the result is here.

Implementation


Let's start with some code:

var rnd = new Random();
var n = 10000000;
int[] arr1, arr2;
generateArrays(rnd, n, out arr1, out arr2);
var sw = new Stopwatch();
sw.Start();
var count = intersect(arr1, arr2).Count();
sw.Stop();
Console.WriteLine($"{count} intersections in {sw.ElapsedMilliseconds}ms");

Here I am creating two arrays of size n, using a generateArrays method, then I am counting the number of intersections and displaying the time elapsed. In the intersect method I will also count the number of comparisons, so that we avoid for now the complexities of Big O notation (pardon the pun).

As for the generateArrays method, I will use a simple incremented value to make sure the values are sorted, but also randomly generated:

private static void generateArrays(Random rnd, int n, out int[] arr1, out int[] arr2)
{
    arr1 = new int[n];
    arr2 = new int[n];
    int s1 = 0;
    int s2 = 0;
    for (var i = 0; i < n; i++)
    {
        s1 += rnd.Next(1, 100);
        arr1[i] = s1;
        s2 += rnd.Next(1, 100);
        arr2[i] = s2;
    }
}


Note that n is 1e+7, so that the values fit into an integer. If you try a larger value it will overflow and result in negative values, so the array would not be sorted.

Time to explore ways of intersecting the arrays. Let's start with the recommended implementation:

private static IEnumerable<int> intersect(int[] arr1, int[] arr2)
{
    var p1 = 0;
    var p2 = 0;
    var comparisons = 0;
    while (p1<arr1.Length && p2<arr2.Length)
    {
        var v1 = arr1[p1];
        var v2 = arr2[p2];
        comparisons++;
        switch(v1.CompareTo(v2))
        {
            case -1:
                p1++;
                break;
            case 0:
                p1++;
                p2++;
                yield return v1;
                break;
            case 1:
                p2++;
                break;
        }

    }
    Console.WriteLine($"{comparisons} comparisons");
}


Note that I am not counting the comparisons of the two pointers p1 and p2 with the Length of the arrays, which could be optimized by caching the lengths. They are just as resource-intensive as comparing the array values, yet we discount them in the name of calculating a fictitious growth rate complexity. I am going to do that in what follows as well. The optimization of the code itself is not the point of this post.

Running the code I get the following output:

19797934 comparisons
199292 intersections in 832ms


The number of comparisons is directly proportional to the value of n, approximately 2n. That is because we look at all the values in both arrays. If we populate the arrays with odd and even numbers, for example, so there are no intersections, the number of comparisons will be exactly 2n.
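
For reference, the odds-and-evens arrays can be generated like this (one possible version; the original code for this case is not shown in the post):

private static void generateArraysOddsEvens(int n, out int[] arr1, out int[] arr2)
{
    arr1 = new int[n];
    arr2 = new int[n];
    for (var i = 0; i < n; i++)
    {
        arr1[i] = i * 2 + 1;   // 1, 3, 5, ... - only odd values
        arr2[i] = i * 2;       // 0, 2, 4, ... - only even values
    }
}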

Experiments


Now let me change the intersect method, make it more general:

private static IEnumerable<int> intersect(int[] arr1, int[] arr2)
{
    var p1 = 0;
    var p2 = 0;
    var comparisons = 0;
    while (p1 < arr1.Length && p2 < arr2.Length)
    {
        var v1 = arr1[p1];
        var v2 = arr2[p2];
        comparisons++;
        switch (v1.CompareTo(v2))
        {
            case -1:
                p1 = findIndex(arr1, v2, p1, ref comparisons);
                break;
            case 0:
                p1++;
                p2++;
                yield return v1;
                break;
            case 1:
                p2 = findIndex(arr2, v1, p2, ref comparisons);
                break;
        }

    }
    Console.WriteLine($"{comparisons} comparisons");
}

private static int findIndex(int[] arr, int v, int p, ref int comparisons)
{
    p++;
    while (p < arr.Length)
    {
        comparisons++;
        if (arr[p] >= v) break;
        p++;
    }
    return p;
}

Here I've replaced the increment of the pointers with a findIndex method that keeps incrementing the value of the pointer until the end of the array is reached or a value larger or equal with the one we are searching for was found. The functionality of the method remains the same, since the same effect would have been achieved by the main loop. But now we are free to try to tweak the findIndex method to obtain better results. But before we do that, I am going to P-hack the shit out of this science and generate the arrays differently.

Here is a method of generating two arrays where all the elements of the first are smaller than those of the second. At the very end we put a single element that is equal, for the fun of it.

private static void generateArrays(Random rnd, int n, out int[] arr1, out int[] arr2)
{
    arr1 = new int[n];
    arr2 = new int[n];
    for (var i = 0; i < n - 1; i++)
    {
        arr1[i] = i;
        arr2[i] = i + n;
    }
    arr1[n - 1] = n * 3;
    arr2[n - 1] = n * 3;
}


This is the worst case scenario for the algorithm and the number of comparisons is promptly 2n. But what if we used binary search (which in the StackOverflow answer was dismissed as having O(n*log n) complexity instead of O(n))? Well, then... the output becomes

49 comparisons
1 intersections in 67ms

Here is the code for the findIndex method that would do that:

private static int findIndex(int[] arr, int v, int p, ref int comparisons)
{
    var start = p + 1;
    var end = arr.Length - 1;
    if (start > end) return start;
    while (true)
    {
        var mid = (start + end) / 2;
        var val = arr[mid];
        if (mid == start)
        {
            comparisons++;
            return val < v ? mid + 1 : mid;
        }
        comparisons++;
        switch (val.CompareTo(v))
        {
            case -1:
                start = mid + 1;
                break;
            case 0:
                return mid;
            case 1:
                end = mid - 1;
                break;
        }
    }
}


49 comparisons is smack on the value of 2*log2(n). Yeah, sure, the data we used was doctored, so let's return to the randomly generated one. In that case, the number of comparisons grows horribly:

304091112 comparisons
199712 intersections in 5095ms

which is larger than n*log2(n).

Why does that happen? Because in the randomly generated data the binary search finds its worst case scenario: trying to find the first value. It divides the problem efficiently, but it still has to go through all the data to reach the first element. Surely we can't use this for a general scenario, even if it is fantastic for one specific case. And here is my qualm with the O notation: without specifying the type of input, the solution is just probabilistically the best. Is it?

Let's compare the results so far. We have three ways of generating data: randomly with increments from 1 to 100, odds and evens, small and large values. Then we have two ways of computing the next index to compare: linear and binary search. The approximate numbers of comparisons are as follows:

                Random          OddsEvens       SmallLarge
Linear          2n              2n              2n
Binary search   3/2*n*log(n)    2*n*log(n)      2*log(n)

Alternatives


Can we create a hybrid findIndex that would have the best of both worlds? I will certainly try. Here is one possible solution:

private static int findIndex(int[] arr, int v, int p, ref int comparisons)
{
    var inc = 1;
    while (true)
    {
        if (p + inc >= arr.Length) inc = 1;
        if (p + inc >= arr.Length) return arr.Length;
        comparisons++;
        switch(arr[p+inc].CompareTo(v))
        {
            case -1:
                p += inc;
                inc *= 2;
                break;
            case 0:
                return p + inc;
            case 1:
                if (inc == 1) return p + inc;
                inc /= 2;
                break;
        }
    }
}


What am I doing here? If I find the value, I return the index; if the value is smaller, not only do I advance the index, but I also increase the speed of the next advance; if the value is larger, then I slow down until I get to 1 again. Warning: I do not claim that this is the optimal algorithm; this is just something that was annoying me and I had to explore it.

OK. Let's see some results. I will decrease the value of n even more, to a million. Then I will generate the values with random increments of up to 10, 100 and 1000. Let's see all of it in action! This time it is the actual count of comparisons (in millions):

                     Random10   Random100   Random1000   OddsEvens   SmallLarge
Linear               2          2           2            2           2
Binary search        30         30          30           40          0.00004
Accelerated search   3.4        3.9         3.9          4           0.0002


So for the general cases, the increase in comparisons is at most twice, while for specific cases the decrease can be four orders of magnitude!

Conclusions


Because I had all of this in my head, I made a fool of myself at a job interview. I couldn't reason all of the things I wrote here in a few minutes and so I had to clear my head by composing this long monstrosity.

Is the best solution the one in O(n)? Most of the time. The algorithm is simple, with no hidden comparisons, and one can understand why it would be universally touted as a good solution. But it's not the best in every case. I have demonstrated here that I can minimize the extra comparisons in standard scenarios and get immense improvements for specific inputs, like arrays that have chunks of elements smaller than the next value in the other array. I would also risk saying that this findIndex version is adaptive to the conditions at hand, with improbable scenarios as worst cases. It works reasonably well for normally distributed arrays, it does wonders for "chunky" arrays (this includes the case when one array is much smaller than the other) and thus is a contender for some kinds of uses.

What I wanted to explore and now express is that finding the upper growth rate of an algorithm is just part of the story. Sometimes the best implementation fails for not adapting to the real input data. I will say this, though, for the default algorithm: it works with IEnumerables, since it never needs to jump forward over some elements. This intuitively gives me reason to believe that it could be optimized using the array/list indexing. Here it is, in IEnumerable fashion:

private static IEnumerable<int> intersect(IEnumerable<int> arr1, IEnumerable<int> arr2)
{
    var e1 = arr1.GetEnumerator();
    var e2 = arr2.GetEnumerator();
    var loop = e1.MoveNext() && e2.MoveNext();
    while (loop)
    {
        var v1 = e1.Current;
        var v2 = e2.Current;
        switch (v1.CompareTo(v2))
        {
            case -1:
                loop = e1.MoveNext();
                break;
            case 0:
                loop = e1.MoveNext() && e2.MoveNext();
                yield return v1;
                break;
            case 1:
                loop = e2.MoveNext();
                break;
        }

    }
}

Extra work


The source code for a project that tests my various ideas can be found on GitHub. There you can find the following algorithms:

  • Standard - the O(m+n) one described above
  • Reverse - same, but starting from the end of the arrays
  • Binary Search - looks for values in the other array using binary search. Complexity O(m*log(n))
  • Smart Choice - when m*log(n)<m+n, it uses the binary search, otherwise the standard one (see the sketch after this list)
  • Accelerating - the one that speeds up when looking for values
  • Divide et Impera - recursive algorithm that splits arrays by choosing the middle value of one and binary searching it in the other. Due to the complexity of the recursiveness, it can't be taken seriously, but sometimes gives surprising results
  • Middle out - it takes the middle value of one array and binary searches it in the other, then uses Standard and Reverse on the resulting arrays
  • Pair search - I had high hopes for this, as it looks two positions in front instead of one. Really good for some cases, though generally it is a bit more than Standard
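
To make the Smart Choice criterion concrete, here is a minimal dispatch sketch (binarySearchIntersect and standardIntersect are placeholder names for the two implementations discussed earlier, not the actual names in the repository):

private static IEnumerable<int> smartIntersect(int[] arr1, int[] arr2)
{
    // make arr1 the shorter array, so m <= n
    if (arr1.Length > arr2.Length)
    {
        var tmp = arr1; arr1 = arr2; arr2 = tmp;
    }
    double m = arr1.Length;
    double n = arr2.Length;
    // binary searching m values in an n-sized array costs about m*log2(n) comparisons,
    // while the standard two-pointer walk costs about m+n; pick whichever is cheaper
    return m * Math.Log(n, 2) < m + n
        ? binarySearchIntersect(arr1, arr2)
        : standardIntersect(arr1, arr2);
}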


The testing tool takes all algorithms and runs them on randomly generated arrays:

  1. Lengths m and n are chosen randomly from 1 to 1e+6
  2. A random number s of up to 100 "spikes" is chosen
  3. m and n are split into s+1 equal parts
  4. For each spike a random integer range is chosen and filled with random integer values
  5. At the end, the rest of the list is filled with any random values

Results


For a really small first array, Binary Search is king. For equal size arrays, the Standard algorithm usually wins. However, there are plenty of cases when Divide et Impera and Pair Search win - usually not by much. Sometimes it happens that Accelerating Search is better than Standard, but Pair Search wins! I still have the nagging feeling that Pair Search can be improved. I feel it in my gut! However, I have so many other things to do for me to dwell on this.

Maybe one of you can find the solution! Your mission, should you choose to accept it, is to find a better algorithm for intersecting sorted arrays than the boring standard one.

Learning ASP.Net MVC series:

  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services


In the setup part of the series I've created a set of specifications for the ASP.Net MVC app that I am building and I manufactured a blank project to start me up. There was quite a bit of confusion on how I would continue the series. Do I go towards the client side of things, defining the overall HTML structure and how I intend to style it in the future? Do I go towards the functionality of the application, like google search or extracting text and applying word analysis on it? What about the database where all the information is stored?

In the end I opted for authentication, mainly because I have no idea how it's done and also because it naturally leads into the database part of things. I was saying that I don't intend to have users of the application; they can easily connect with their Google account - which hopefully I will also use for searching (I hate that API!). However, that's not quite how it goes: there will be an account for the user, only it will be connected to an outside service. While I completely skirt the part where I have to reset the password or email the user and all that crap - which, BTW, was working rather well in the default project - I still have to set up entities that identify the current user.

How was it done before?


In order to proceed, let's see how the original project did it. It was first setting a database context, then adding Identity using a class named ApplicationUser.

services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();


ApplicationUser is a class that inherits from IdentityUser, while ApplicationDbContext is something inheriting from IdentityDbContext<ApplicationUser>. Seems like we are out of luck and the identity and db context are coupled pretty strongly. Let's see if we can decouple them :) Our goal: using OAuth to connect with a Google account, while using no database.
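
For reference, the two classes scaffolded by the default template look roughly like this (simplified):

public class ApplicationUser : IdentityUser
{
}

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }
}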

Authentication via Google+ API


The starting point of any feature is coding and using autocomplete and Intellisense until it works... I mean, reading the documentation. In our case, the ASP.Net Authentication section, particularly the authentication using Google part. It's pretty skimpy and the detailed walkthrough only covers Facebook. I found another link that actually covers Google, but it's for MVC 5.

Enable SSL


Both tutorials agree that first I need to enable SSL on my web project. This is done by going to the project properties, the Debug section, and checking Enable SSL. It's a good idea to copy the https URL and set it as the start URL of the project. Keep that URL in the clipboard; you are going to need it later as well.



Install Secret Manager


Next step is installing the Secret Manager tool, which in our case is already installed, and specifying a userSecretsId, which should also be already configured.

Create Google OAuth credentials


Next let's create credentials for the Google API. Go to the Google Developer Dashboard, create a project, go to Credentials → OAuth consent screen and fill out the name of the application. Go to the Credentials tab and Create Credentials → OAuth client ID. Select Web Application, fill in a name as well as the two URLs below. We will use the localhost SSL URL for both like this:

  • Authorised JavaScript origins: https://localhost:[port] - the URL that you copied previously
  • Authorised redirect URIs: https://localhost:[port]/account/callback - TODO: create a callback action

Press Create. At this point a popup with the client ID and client secret appears. You can copy the two values right away, close the popup and download the JSON file containing all the data (project ID and authorised URLs among them), or copy the values later directly from the credentials dialog.



Make sure to go back to the Dashboard section and enable the Google+ API, in the Social APIs group. There is a quota of 10000 requests per day, I hope it's enough. ;)



Writing the authentication code


Let's use the 'dotnet user-secrets' tool to save the two credential values. Run the following two commands in the project folder:

dotnet user-secrets set Authentication:Google:ClientId <client-Id>
dotnet user-secrets set Authentication:Google:ClientSecret <client-Secret>

Use the values from the Google credentials, obviously. In order to get to the two values, all we need to do is call Configuration["Authentication:Google:ClientId"] in C#. For this to work we need to have loaded the Microsoft.Extensions.Configuration.UserSecrets package in project.json and to have, somewhere in Startup, code that looks like this: builder.AddUserSecrets();, where builder is the ConfigurationBuilder.
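
Put together, the Startup constructor would look something like this (a minimal sketch; the default project template already contains most of it):

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        // loads the values saved with 'dotnet user-secrets set'
        builder.AddUserSecrets();
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }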

Next comes the installation of the middleware responsible for authenticating with Google, which is called Microsoft.AspNetCore.Authentication.Google. We can install it using NuGet: right click on References in Visual Studio, go to Manage NuGet packages, look for Microsoft.AspNetCore.Authentication.Google ("ASP.NET Core contains middleware to support Google's OpenId and OAuth 2.0 authentication workflows.") and install it.



Now we need to place this in Startup.cs:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    LoginPath = new PathString("/Account/Login")
});

app.UseGoogleAuthentication(new GoogleOptions
{
    AuthenticationScheme = "Google",
    SignInScheme = "Cookies",
    ClientId = Configuration["Authentication:Google:ClientId"],
    ClientSecret = Configuration["Authentication:Google:ClientSecret"],
    CallbackPath = new PathString("/Account/Callback")
});

Yay! code!

Let's start the website. A useful popup appears with the message "This project is configured to use SSL. To avoid SSL warnings in the browser you can choose to trust the self-signed certificate that IIS Express has generated. Would you like to trust the IIS Express certificate?". Say Yes and click OK on the next dialog.



What did we do here? First, we used cookie authentication, which is not some gluttonous bodyguard with a sweet tooth, but a cookie middleware, of course, and our ticket for authentication without using identity. Then we used another middleware, the Google authentication one, linked to the previous with the "Cookies" SignInScheme. We used the ClientId and ClientSecret we saved previously in the Secret Manager. Note that we specified an AuthenticationScheme name for the Google authentication.

Yet, the project works just fine as it is. I need to do one more thing for the application to ask me for a login, and that is to decorate our controller (with its one action method) with the [Authorize] attribute:

[Authorize]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}

After we do that and restart the project, the start page will still look blank and empty, but if we look in the network activity we will see a redirect to a nonexistent /Account/Login, as configured:


The Account controller


Let's create this Account controller and see how we can finish the example. The controller will need a Login method. Let me first show you the code, then we can discuss it:

public class AccountController : Controller
{
    public IActionResult Login(string ReturnUrl)
    {
        return new ChallengeResult("Google", new AuthenticationProperties
        {
            RedirectUri = ReturnUrl ?? "/"
        });
    }
}


We simply return a ChallengeResult with the name of the authentication scheme we want and the redirect path that we get from the login ReturnUrl parameter. Now, when we restart the project, a Google prompt welcomes us:

After clicking Allow, we are returned to the home page.



What happened here? The home page redirected us to Login, which redirected us to the Google authentication page, which then redirected us to /Account/Callback, which redirected us - now authenticated - to the home page. But what about the callback? We didn't write any callback method. (Actually, I did at first, complete with a complex object to receive all the parameters; the code within was never executed.) The callback route was actually defined and handled by the Google middleware. In fact, if we call /Account/Callback directly, we get an authentication error:


One extra functionality that we might need is the logout. Let's add a Logout method:

public async Task<IActionResult> LogOut()
{
    await HttpContext.Authentication.SignOutAsync("Cookies");

    return RedirectToAction("index", "home");
}

Now, when we go to /Account/Logout we are redirected to the home page, where the whole authentication flow from above is being executed. We are not asked again if we want to give permission to the application to use our google credentials, though. In order to reset that part, go to Apps connected to your account.

What happens when we deny access to the application? Then the callback action will be called with a different set of parameters, triggering a RemoteFailure event. The source code on GitHub contains extra code that covers this scenario, redirecting the user to /Home/Error with the failure reason:

Events = new OAuthEvents
{
    OnRemoteFailure = ctx =>
    {
        ctx.Response.Redirect("/Home/Error?ErrorMessage=" + UrlEncoder.Default.Encode(ctx.Failure.Message));
        ctx.HandleResponse();
        return Task.FromResult(0);
    }
}

What about our user?


In order to check the results of our work, let's add some stuff to the home page. Mainly I want to show all the information we got about our user. Change the index.cshtml file to look like this:

<table class="table">
    @foreach (var claim in User.Claims)
    {
        <tr>
            <td>@claim.Type</td>
            <td>@claim.Value</td>
        </tr>
    }
</table>

Now, when I open the home page, this is what gets returned:

http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier 111601945496839159547
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname Siderite
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name Siderite Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress sideritezaqwedcxs@gmail.com
urn:google:profile https://plus.google.com/111601945496839159547


User is a System.Security.Claims.ClaimsPrincipal object that contains not only a simple bag of Claims, but also a list of Identities. In our example I only have one identity and User.Claims are the same as User.Identities[0].Claims, but in other cases, who knows?

Acknowledgements


If you think it was easy to scrape together this simple example, think again. Before the OAuth2 system there was an OpenID based system that used almost the same method and class names. Then there is the way they did it in .NET proper and the way they do it in ASP.Net Core... which changed recently as well. Everyone and their grandmother has a blog post about how to do Google authentication, but most of them either don't apply or are obsolete. So, without further ado, here are the links that inspired me to do it this way:

Final thoughts


By no means is this a comprehensive walkthrough of authentication in .NET Core; however, I am sure that I will cover a lot more ground in the posts to come. Stay tuned for more!

Source code for the project after this chapter can be found on GitHub.

Following my post about things I need to learn, I've decided to start a series about writing an ASP.Net MVC Core application, covering as much ground as possible. As a result, this experience will cover .NET Core subjects and a thorough exploration of ASP.Net MVC, plus some concepts related to Visual Studio, project structure, Entity Framework, HTML5, ECMAScript 6, Angular 2, ReactJs, CSS (LESS/SASS), responsive design, OAuth, OData, shadow DOM, etc.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Specifications


In order to start any project, some specifications need to be addressed. What will the app do and how will it be implemented? I've decided on a simplified rewrite of my WPF newsletter maker project. It gathers subjects from Google by searching for configurable queries, spiders the contents, then displays, filters and sorts them, extracting text and analyzing content. It remembers the already loaded URLs and allows marking them as deleted and setting a category. It will be possible to extract the items that have a category into a newsletter containing links, titles, short descriptions and maybe a picture.

The project will be using ASP.Net Core MVC, combining the API and the display in a single web site (at least for now). Data will be stored in SQLite via Entity Framework. Later on the project will be switched to SQL Server to see how easy that is. The web site itself will have HTML5 structure, using the latest semantic elements, with the simplest possible CSS. All project owned Javascript will be ECMAScript6. OAuth might be needed for using the Google Search API, and I intend to use Google/Facebook/Twitter accounts to log in the application, with a specific account marked in the configuration as Administrator. The IDE will be Visual Studio (not Code). The Javascript needs to be clean, with no CSS or HTML in it, by using CSS classes and HTML templates. The HTML needs to be clean, with no Javascript or styling in it; moreover it needs to be semantically unambiguous, so as to be easily molded with CSS. While starting with a desktop only design, a later phase of the project will revamp the CSS, try to make the interface beautiful and work for all screen formats.

Not the simplest of projects, so let's get started.

Creating the project


I will be using Visual Studio 2015 Update 3 with the .Net Core Preview2 tooling. Personally I had a problem installing the Core tools for Visual Studio, but this link solved it for me with a command line switch (short version: DotNetCore.1.0.0-VS2015Tools.Preview2.exe SKIP_VSU_CHECK=1). First step is to create a New Project → Visual C# → .NET Core → ASP.NET Core Web Application. I will name it ContentAggregator. At the prompt asking which type of project template I want, I will select Web Application, deselect the Microsoft Azure "Host in the cloud" checkbox (which for whatever reason is checked by default) and click on Change Authentication to select Individual User Accounts.



Close the "Welcome to ASP.Net Core" page, because the template will be heavily changed by the time we finish this chapter.

The default template project


For a more detailed analysis of a .NET Core web project, try reading my previous post of the dotnet default template for web apps. This one will be quick and dirty.

Things to notice:
  • There is a global.json file that lists two project folders, src and test. So this is how .NET Core solutions were supposed to work. Since the json format will be abandoned by Microsoft, there is no point in exploring this too much. Interestingly, though, while there is a "src" folder, there is no "test" folder.
  • The root folder contains a ContentAggregator.sln file and the src folder contains a ContentAggregator folder with a ContentAggregator.xproj file. Core seems to have abandoned the programming language dependent extension for project files.
  • The rest of the project seems to be pretty much the default dotnet tool one, with the following differences:
  • the template uses SQL Server by default
  • the lib folder in wwwroot is already populated with libraries

So far so good. There is also the little issue of the database. As you may remember from the post about the dotnet tool template, there were some files that needed to initialize the database. The error message then said "In Visual Studio, you can use the Package Manager Console to apply pending migrations to the database: PM> Update-Database". Is that what I have to do? Also, we need to check what the database settings are. While I do have an SQL Server instance on this computer, I haven't configured anything yet. The Project_Readme.html page is not very useful, as its "Run tools such as EF migrations and more" link goes to an obsolete page on github.io (the documentation seems to have moved to a microsoft.com server now).

I *could* read/watch a tutorial, but what the hell? Let's run the website, see what it does! Amazingly, the web site starts, using IIS Express, so I go to Register, to see how the database works and I get the same error about the migrations. I click on the Apply Migrations button and it says the migrations have been applied and that I need to refresh. I do that and voila, it works!

So, where is the database? It is not in the bin folder as WebApplication.db like in the Sqlite version. It's not in the SQL Server, the service wasn't even running. The DefaultConnection string looks like "Server=(localdb)\\mssqllocaldb;Database=aspnet-ContentAggregator-7fafe484-d38b-4230-b8ed-cf4a5a8df5e1;Trusted_Connection=True;MultipleActiveResultSets=true". What's going on? The answer lies in the SQL Server Express LocalDB instance that Visual Studio comes with.

Changing and removing shit


To paraphrase Antoine de Saint-Exupéry, this project will be set up not when I have nothing else to add, but when I have nothing else to remove.

First order of business is to remove SQL Server and use SQLite instead. Quite the opposite of how I had pictured it, but hey, you do what you must! In theory all I have to do is replace .UseSqlServer with .UseSqlite and then adjust the DefaultConnection string in appsettings.json to something like "Data Source=WebApplication.db" (sketched below). Did that, fixed the namespaces and imports, ran the solution. Migration error, Apply Migrations, re-register and everything is working. WebApplication.db and everything.
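
For reference, here is roughly what the change amounts to (a sketch; it assumes the Microsoft.EntityFrameworkCore.Sqlite package is referenced):

Startup.cs (ConfigureServices)

services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

appsettings.json

{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=WebApplication.db"
  }
}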

Second is to remove all crap that I don't need right now. I may need it later, so at this point I am backing up my project. I want to remove:
  • Database - yeah, I know I just recreated it, but I need to know what those migrations contained and if I even need them, considering I want to register with OAuth only
  • Controllers - probably I will end up recreating them, but we need to understand why things are how they are
  • Models - we'll do those from scratch, too
  • Services - they were specific to the default web site, so poof! they're gone.
  • Views - the views will be redesigned completely, so we delete them also
  • Client libraries - we keep jQuery and jQuery validation, but we remove bootstrap
  • CSS - we keep the site.css file, but remove everything in it
  • Javascript - keep site.js, but empty
  • Other assets like images - removed

"What the hell, I read so much of this blog post just for you to remove everything you did until now?" Yes! This part is the project set up and before its end we will have a clean white slate on which to create our masterpiece.

So, action! Close Visual Studio. Delete bin (with the db file in it) and obj, then delete all files in Controllers, Data, Models, Services and Views. Empty the site.css and site.js files while deleting their .min versions, delete all images, Project_Readme.html and project.lock.json. In order to cleanly remove bootstrap we need to use bower. Run
bower uninstall bootstrap
which will remove bootstrap, but won't remove it from bower.json, so remove it from there. Reopen Visual Studio and the project, wait until it restores the packages.

When trying to compile the project, there are some errors, obviously. First, namespaces that don't exist anymore, like Controllers, Models, Data, Services. Remove the offending usings. Then there are services that we wanted to inject, like SMS and Email, which for now we don't need. Remove the lines that try to use them under // Add application services. The rest of the errors are about ApplicationDbContext and ApplicationUser. Comment them out. These are needed for when we figure out how the project is going to preserve data. Later on a line in Startup.cs will throw an exception ( app.UseIdentity(); ) so comment it out as well.

Finishing touches


Now the project compiles, but it does nothing. Let's finish up by adding a default Controller and a View.

In Visual Studio, right click on the Controllers folder in the Solution Explorer and choose Add → Controller → MVC Controller - Empty. Let's name it HomeController. Go to the Views folder and create a new folder called Home. Now you might think that right clicking on it and selecting Add → View would work, but it doesn't. The Add button stubbornly remains disabled unless you specify a template and a model and other stuff. It may be useful later on, but at this stage ignore it. The way to add a view now is to go to Add → New Item → MVC View Page. Create an Index.cshtml view and empty its contents.

Now run the application. It should show a wonderfully empty page with no console errors. That's it for our blank project setup. Check out the source code for this point of the exploration. Stay tuned for the real fun!

This is part of the .NET Core and VS Code series.
Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.

I am continuing my series about .NET Core, using Visual Studio Code only, on Windows, with as little command line work as possible. Read about writing a console application and then a very simple web application before you read this part.

Setup


In this post I will attempt to write a very simple Web API using .NET Core. I intend to have some sort of authentication and then be able to read and write stuff using REST API calls. As with the projects before, I will use Visual Studio Code to create a folder, open it, then do stuff in it after running the command 'dotnet new' and preferably changing the namespace so that it is not always ConsoleApplication. This time the steps will be a little bit different, more in line with what you would do in real life:
  1. Create a folder for our project called WebCore
  2. In it create an API folder
  3. Open a command prompt in the API folder and type 'dotnet new web' (in .NET Core 1.0 you could omit the template, but from 1.1 you need to use it explicitly)
  4. Open Visual Studio Code by typing 'code' or by any other means
  5. Close the command prompt window
  6. In Code select the API folder and open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Change the namespace to WebCore.API
  10. Press Ctrl-Shift-B to build the project.

Right now you should have a web project that compiles. In order to turn this into an API project, we need to understand a few things. So go down to the tutorial for ASP.Net Core API that uses Visual Studio to read about what Microsoft suggests, download the zip file of all the docs and tutorials and look in /Docs-master/aspnet/tutorials/first-web-api/sample/src/TodoApi to see what they did in code (or just browse them on GitHub). You also need to understand that while in normal .NET (how do we call it now?) MVC and Web API are two branches that often have similar functionality in similarly named classes, in .NET Core the API is just part of MVC. So no WebApi namespaces, just MVC.

That is why, to turn our app into an API project, we need to add the MVC packages. For the project.json format (.NET Core 1.0) add to dependencies:
"Microsoft.AspNetCore.Mvc": "1.0.0",
"Microsoft.AspNetCore.Mvc.Core": "1.0.0"
For the csproj format (.NET Core 1.1) add
<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />
to an ItemGroup in API.csproj. Then we tell the project to use MVC in code: add services.AddMvc(); in ConfigureServices and replace the app.Run line in Configure with app.UseMvcWithDefaultRoute(); - both in Startup.cs.

At this point we should have something like this:

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.AspNetCore.Mvc": "1.0.0",
                "Microsoft.AspNetCore.Mvc.Core": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


API.csproj
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />
  </ItemGroup>

</Project>

Program.cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;

namespace WebCore.API
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace WebCore.API
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseMvcWithDefaultRoute();

            /*app.Run(async (context) =>
            {
                await context.Response.WriteAsync("Hello World!");
            });*/
        }
    }
}

Clean and name the launch settings as well:
.vscode/launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (web API)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/API.dll",
            "args": [],
            "cwd": "${workspaceRoot}",
            "externalConsole": false,
            "stopAtEntry": false
        }
    ]
}


The Controller


Let's add functionality to our API. Create a folder called Controllers and in it add a NoteController.cs file. Just copy paste the code for now, we will understand it later.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace WebCore.API.Controllers
{
    [Route("api/[controller]")]
    public class NoteController : Controller
    {
        private static List<Note> _notes;

        static NoteController()
        {
            _notes = new List<Note>();
        }

        [HttpGet]
        public IEnumerable<Note> GetAll()
        {
            return _notes.AsReadOnly();
        }

        [HttpGet("{id}", Name = "GetNote")]
        public IActionResult GetById(string id)
        {
            var item = _notes.Find(n => n.Id == id);
            if (item == null)
            {
                return NotFound();
            }
            return new ObjectResult(item);
        }

        [HttpPost]
        public IActionResult Create([FromBody] Note item)
        {
            if (item == null)
            {
                return BadRequest();
            }
            item.Id = (_notes.Count + 1).ToString();
            _notes.Add(item);
            return CreatedAtRoute("GetNote", new { controller = "Note", id = item.Id }, item);
        }

        [HttpDelete("{id}")]
        public void Delete(string id)
        {
            _notes.RemoveAll(n => n.Id == id);
        }

        public class Note
        {
            public string Id { get; set; }
            public string Content { get; set; }
        }
    }
}

In this new class we have a lot of interesting things to behold. First of all there is the Route attribute that uses a [controller] token (well explained here). Then there are the Http attributes: HttpGet, HttpPost, HttpPut, HttpDelete which as their name implies define methods on the API controller that are accessed via the respective HTTP methods so beloved by the REST crowd.

An important thing to grasp from this is what happens with the HttpGet attribute (note it has a Name defined) and the later used CreatedAtRoute, which are actually Web API v2 concepts. In our case, whenever we create a new item we return the item as a result, but also set the Location header to the route that would "get" it, so that the client knows how to get to it.

Testing the result


Let's test what we did. While the GET methods are easy to test from the browser (http://localhost:5000/api/note should show an empty array result []), the POST and DELETE ones are trickier. The Microsoft documentation recommends using Fiddler, but I would go with Postman, which is easy to install as a Chrome application.

In Postman, don't forget to set the Content-Type header to application/json:


Then POST to /api/note/ a content like:
{
    "content": "test"
}


The result of the call should be as explained previously:
{
    "id": "1",
    "content": "test"
}

Go with the browser to /api/note/ or /api/note/1 and you should get results.
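
If you would rather test from code than from Postman, a small console snippet along these lines exercises the same endpoints (just a sketch: it assumes the API is listening on http://localhost:5000, that the code runs inside an async method, and that System.Net.Http and System.Text are referenced):

using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") })
{
    // create a note
    var body = new StringContent("{ \"content\": \"test\" }", Encoding.UTF8, "application/json");
    var postResponse = await client.PostAsync("/api/note/", body);
    Console.WriteLine(postResponse.StatusCode);       // Created (201)
    Console.WriteLine(postResponse.Headers.Location); // the route of the new note, e.g. /api/note/1

    // read all notes
    var all = await client.GetStringAsync("/api/note/");
    Console.WriteLine(all);
}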

More complexity


What happened there exactly? We just added a class and it magically worked. How did .Net Core do that? Core comes with Dependency Injection and Inversion of Control by default. In our case, just having a Controller class with a Route attribute made the difference. The line
app.UseMvcWithDefaultRoute();
is the code that enables that behavior. Let me demonstrate more of this "magic". In our example so far I've hardcoded the Note class inside the Controller class, but usually the class would be part of the functionality of the API and there would be a repository handling saving stuff in the database.

To avoid polluting the post with a lot of code, here is the improved source code of the API. Note is now a more complex object and the controller is initialized with an INoteRepository instance which is bound to a NoteRepository implementation. The API works just as before, only now Note has Key, Subject and Body properties instead of Id and Content.

The thing to learn from this version is how, in the Startup class, in ConfigureServices, the line
services.AddSingleton<INoteRepository, NoteRepository>();
tells Core that for a request for the INoteRepository interface it should return the singleton instance of NoteRepository. From this line alone Core knows to initialize the NoteController constructor with the correct repository instance.
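
The shape of that code is something like this (my own sketch of the idea, not the exact source from the linked repository; it assumes the improved Note class with a Key property and the System.Collections.Generic, System.Collections.Concurrent and Microsoft.AspNetCore.Mvc namespaces):

public interface INoteRepository
{
    IEnumerable<Note> GetAll();
    Note Get(string key);
    void Add(Note item);
    void Remove(string key);
}

public class NoteRepository : INoteRepository
{
    // in-memory storage, enough for a demo; a real implementation would hit a database
    private readonly ConcurrentDictionary<string, Note> _notes = new ConcurrentDictionary<string, Note>();

    public IEnumerable<Note> GetAll()
    {
        return _notes.Values;
    }

    public Note Get(string key)
    {
        Note note;
        return _notes.TryGetValue(key, out note) ? note : null;
    }

    public void Add(Note item)
    {
        _notes[item.Key] = item;
    }

    public void Remove(string key)
    {
        Note removed;
        _notes.TryRemove(key, out removed);
    }
}

[Route("api/[controller]")]
public class NoteController : Controller
{
    private readonly INoteRepository _repository;

    // the dependency injection container provides the registered INoteRepository implementation
    public NoteController(INoteRepository repository)
    {
        _repository = repository;
    }

    // ... the actions then use _repository instead of the static list
}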

Security


Now, we don't want any peasant to come and mess with our important notes. We require some way of authenticating our operations. There is a very detailed section about Security in the Core documentation, but I only wanted a simple example. I worked on it for hours just to notice that the line instructing the application builder to use authentication was placed after the one telling it to use MVC. There were also some naming bugs that made me waste a lot of time.

Bottom line: here is the sample code with authentication. You cannot use any API calls unless you are a User. You log in as a user by calling /api/note/login/12345. The Create/Delete APIs are unavailable even to Users; you need to be an Admin. There is a commented line in the code that also adds the Claim to the Admin role when logging in. I also added logging, so you can see what goes wrong in the Debug Console.

More information about how to implement authentication without Identity (as most Microsoft samples want) can be found at Using Cookie Middleware without ASP.NET Core Identity and my example is based on that.

Impressions on Visual Studio Code


Update: some of the complaints underneath are not correct anymore, but I am keeping my original reaction unchanged, since it reflects my feelings at the time.

The more complex the project becomes, the less Visual Studio Code can handle it. Intellisense is bloody awful, exceptions can sometimes be caught only by adding logging to the console, and the basic editing features are lacking - a serious Undo queue, auto formatting when typing or closing brackets, even syntax highlighting at times. There are also features that plainly don't work: for example, when you are writing a new class an option for "Generate Type" appears, but it does nothing. Same for "Generate Variable". While I plan to test VS Code with a large project created in Visual Studio proper, I don't have high hopes. Remember when Microsoft created Silverlight with Javascript and it sucked and they immediately switched to WPF-based Silverlight? It feels like that. However, for nostalgic reasons mostly, I also enjoy working in this spartan environment, especially when working with recently released features. It reminds me of the days when I was trying to connect various Linux software with basically duct tape and paper clips.

This is part of the .NET Core and VS Code series.
Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.

Continuing the post about building a console application in .Net Core with the free Visual Studio Code tool on Windows, I will be looking at ASP.Net, also using Visual Studio Code. Some of the things required to understand this are in the previous post, so do read that one, at least for the part with installing Visual Studio Code and .NET Core.

Reading into it


Start with reading the introduction to ASP.Net Core, take a look at the ASP.Net Core Github repository and then continue with the information on the Microsoft sites, like the Getting Started section. Much more effort went into the documentation for ASP.Net Core, which shows - to my chagrin - that the web is a primary focus for the .Net team, unlike the language itself, native console apps, WPF and so on. Or maybe they're just working on it.

Hello World - web style


Let's get right into it. Following the Getting Started section, I will display the steps required to create a working web Hello World application, then we continue with details of implementation for common scenarios. Surprisingly, creating a web app in .Net Core is very similar to doing a console app; in fact we could just continue with the console program we wrote in the first post and it would work just fine, even without a 'web' section in launch.json.
  1. Go to the Explorer icon and click on Open Folder (or File → Open Folder)
  2. Create a Code Hello World folder
  3. Right click under the folder in Explorer and choose Open in Command Prompt - if you have it. Some Windows versions removed this option from their context menu. If not, select Open in New Window, go to the File menu and select Open Command Prompt (as Administrator, if you can) from there
  4. Write 'dotnet new console' in the console window, then close the command prompt and the Windows Explorer window, if you needed it
  5. Select the newly created Code Hello World folder
  6. From the open folder, open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Click the Debug icon and press the green play arrow

Wait! Aren't these the steps for a console app? Yes they are. To turn this into a web application, we have to go through these few more steps:
  1. Open project.json and add
    ,"Microsoft.AspNetCore.Server.Kestrel": "1.0.0"
    to dependencies (right after Microsoft.NETCore.App)
  2. Open the .csproj file and add
    <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    </ItemGroup>
  3. Open Program.cs and change it by adding
    using Microsoft.AspNetCore.Hosting;
    and replacing the Console.WriteLine thing with
    var host = new WebHostBuilder()
    .UseKestrel()
    .UseStartup<Startup>()
    .Build();
    host.Run();
  4. Create a new file called Startup.cs that looks like this:
    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;

    namespace <your namespace here>
    {
    public class Startup
    {
    public void Configure(IApplicationBuilder app)
    {
    app.Run(context =>
    {
    return context.Response.WriteAsync("Hello from ASP.NET Core!");
    });
    }
    }
    }

There are a lot of warnings and errors displayed, but the program compiles and when run it keeps running until stopped. To see the result open a browser window on http://localhost:5000. Closing and reopening the folder will make the warnings go away.

#WhatHaveWeDone


So first we added the Microsoft.AspNetCore.Server.Kestrel package to project.json (or the Microsoft.AspNetCore package, in the csproj format). With this we now have access to Microsoft.AspNetCore.Hosting, which allows us to use a fluent interface to build a web host. We UseKestrel (Kestrel is based on libuv, which is a multi-platform support library with a focus on asynchronous I/O. It was primarily developed for use by Node.js, but it's also used by Luvit, Julia, pyuv, and others.) and we UseStartup with a class creatively named Startup. In that Startup class we receive an IApplicationBuilder in the Configure method, to which we attach a simple handler on Run.

In order to understand the delegate sent to the application builder we need to go through the concept of Middleware, which are components in a pipeline. In our case Run was used because, as a convention, Run is the last thing to be executed in the pipeline. We could just as well have used Use without invoking the next component. Another option is to use Map, which is also a convention exposed through an extension method like Run, and which is meant to branch the pipeline.
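
To make the Run/Use/Map distinction concrete, here is a small sketch of a Configure method chaining all three (my own illustration, assuming the Microsoft.AspNetCore.Builder and Microsoft.AspNetCore.Http namespaces are in scope):

public void Configure(IApplicationBuilder app)
{
    // Use: can act before and after the rest of the pipeline and decides whether to call it
    app.Use(async (context, next) =>
    {
        context.Response.Headers["X-Before"] = "middleware ran";
        await next(); // without this call the pipeline would stop right here
    });

    // Map: branches the pipeline for a specific path
    app.Map("/ping", branch =>
    {
        branch.Run(context => context.Response.WriteAsync("pong"));
    });

    // Run: terminal middleware; nothing registered after it gets executed
    app.Run(context => context.Response.WriteAsync("Hello from the end of the pipeline!"));
}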

Anyway, all of this you can read in the documentation. A must read is the Fundamentals section.

Various useful things


Let's ponder what we would like in a real life web site. Obviously we need a web server that responds differently for different URLs, but we also need logging, security and static files. Boilerplate such as error pages and not-found pages needs to be handled as well. So let's see how we can do this. Some tutorials are available on how to do that with Visual Studio, but in this post we will be doing this with Code! Instead of fully creating a web site, though, I will be adding one or two features at a time to the basic Hello World above, postponing building a fully functioning source for another blog post.

Logging


We will always need a way of knowing what our application is doing and for this we will implement a more complete Configure method in our Startup class. Instead of only accepting an IApplicationBuilder, we will also be asking for an ILoggerFactory. The main package needed for logging is
"Microsoft.Extensions.Logging": "1.0.0"
, which needs to be added to dependencies in project.json.
From .NET Core 1.1, Logging is included in the Microsoft.AspNetCore package.

Here is some code:

Startup.cs
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
        {
            var logger = loggerFactory.CreateLogger("Sample app logger");
            logger.LogInformation("Starting app");
        }
    }
}

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.Extensions.Logging": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


So far so good, but where does the logger log? For that matter, why do I need a logger factory, can't I just instantiate my own logger and use it? The logging mechanism intended by the developers is this: you get the logger factory, you add logger providers to it, which in turn instantiate logger instances. So when the logger factory creates a logger, it actually gives you a chain of different loggers, each with their own settings.

For example, in order to log to the console, you run loggerFactory.AddConsole(); for which you need to add the package
"Microsoft.Extensions.Logging.Console": "1.0.0"
to project.json. What AddConsole does is actually factory.AddProvider(new ConsoleLoggerProvider(...));. The provider will then instantiate a ConsoleLogger for the logger chain, which will write to the console.

Additional logging options come with the Debug or EventLog or EventSource packages as you can see at the GitHub repository for Logging. Find a good tutorial on how to create your own Logger here.
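
Just to give a feel for what that involves, a bare-bones file logger provider might look like the sketch below (the FileLoggerProvider name is hypothetical and the implementation deliberately naive - no locking, buffering or scope handling; it assumes the System, System.IO and Microsoft.Extensions.Logging namespaces). You would register it with loggerFactory.AddProvider(new FileLoggerProvider("log.txt")); in Configure.

public class FileLoggerProvider : ILoggerProvider
{
    private readonly string _path;

    public FileLoggerProvider(string path)
    {
        _path = path;
    }

    public ILogger CreateLogger(string categoryName)
    {
        return new FileLogger(_path, categoryName);
    }

    public void Dispose()
    {
    }

    private class FileLogger : ILogger
    {
        private readonly string _path;
        private readonly string _category;

        public FileLogger(string path, string category)
        {
            _path = path;
            _category = category;
        }

        public IDisposable BeginScope<TState>(TState state)
        {
            return null;
        }

        public bool IsEnabled(LogLevel logLevel)
        {
            return logLevel >= LogLevel.Information;
        }

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            if (!IsEnabled(logLevel)) return;
            // one line per entry; this is only a demo
            File.AppendAllText(_path,
                $"{DateTime.Now:o} [{logLevel}] {_category}: {formatter(state, exception)}{Environment.NewLine}");
        }
    }
}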

Static files


It wouldn't be much of a web server if it didn't serve static files. ASP.Net Core has two concepts for web site roots: Web root and Content root. Web is for web-servable content files, while Content is for application content files, like views and stuff. In order to serve these we use the
<PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
package, we instruct the WebHostBuilder what the web root is, then we tell the ApplicationBuilder to use static files. Like this:

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.AspNetCore.StaticFiles": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


.csproj file
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
  </ItemGroup>

</Project>

Program.cs
using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var path = Path.Combine(Directory.GetCurrentDirectory(), "www");
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseWebRoot(path)
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs
using Microsoft.AspNetCore.Builder;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.UseStaticFiles();
        }
    }
}

Now all we have to do is add a page and some files in the www folder of our app and we have a web server. But what does UseStaticFiles do? It runs app.UseMiddleware<StaticFileMiddleware>(), which adds the static file middleware to the pipeline which, after some validations and checks, runs StaticFileContext.SendRangeAsync(). It is worth looking up the code of these files, since it both demystifies the magic of web servers and awes through its simplicity.

Routing


Surely we need to serve static content, but what about dynamic content? Here is where routing comes into play.

First add package
"Microsoft.AspNetCore.Routing": "1.0.0"
to project.json, then add some silly routing to Startup.cs:

project.json
{
    "version": "1.0.0-*",
    "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true
    },
    "dependencies": {},
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                },
                "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
                "Microsoft.AspNetCore.Routing": "1.0.0"
            },
            "imports": "dnxcore50"
        }
    }
}


From .NET Core 1.1, Routing is found in the Microsoft.AspNetCore package.

Program.cs
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRouting();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseRouter(new HelloRouter());
        }

        public class HelloRouter : IRouter
        {
            public VirtualPathData GetVirtualPath(VirtualPathContext context)
            {
                return null;
            }

            public Task RouteAsync(RouteContext context)
            {
                var requestPath = context.HttpContext.Request.Path;
                if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
                {
                    context.Handler = async c =>
                    {
                        await c.Response.WriteAsync($"Hello world!");
                    };
                }
                return Task.FromResult(0);
            }
        }
    }
}

Some interesting things were added here. First of all, we added a new method to the Startup class, called ConfigureServices, which receives an IServiceCollection. To this, we AddRouting, with a by now familiar extension method that shortcuts to adding a whole bunch of services to the collection: constraint resolvers, URL encoders, route builders, routing marker services and so on. Then in Configure we UseRouter (shortcut for using a RouterMiddleware) with a custom built implementation of IRouter. What it does is look for a /hello URL request and return the "Hello world!" string. Go on and test it by going to http://localhost:5000/hello.

Take a look at the documentation for routing for a proper example, specifically using a RouteBuilder to build an implementation of IRouter from a default handler and a list of mapped routes with their handlers and how to map routes, with parameters and constraints. Another post will handle a complete web site, but for now, let's just build a router with several routes, see how it goes:

Startup.cs
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRouting();
        }

        public void Configure(IApplicationBuilder app)
        {
            var defaultHandler = new RouteHandler(
                c => c.Response.WriteAsync($"Default handler! Route values: {string.Join(", ", c.GetRouteData().Values)}")
            );

            var routeBuilder = new RouteBuilder(app, defaultHandler);

            routeBuilder.Routes.Add(new Route(new HelloRouter(), "hello/{name:alpha?}",
                app.ApplicationServices.GetService<IInlineConstraintResolver>()));

            routeBuilder.MapRoute("Track Package Route",
                "package/{operation:regex(track|create|detonate)}/{id:int}");

            var router = routeBuilder.Build();
            app.UseRouter(router);
        }

        public class HelloRouter : IRouter
        {
            public VirtualPathData GetVirtualPath(VirtualPathContext context)
            {
                return null;
            }

            public Task RouteAsync(RouteContext context)
            {
                var name = context.RouteData.Values["name"] as string;
                if (String.IsNullOrEmpty(name)) name = "World";

                var requestPath = context.HttpContext.Request.Path;
                if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
                {
                    context.Handler = async c =>
                    {
                        await c.Response.WriteAsync($"Hi, {name}!");
                    };
                }
                return Task.FromResult(0);
            }
        }
    }
}

Run it and test the following URLs:
  • http://localhost:5000/ - should return nothing but a 404 code (which you can check in the Network section of your browser's developer tools)
  • http://localhost:5000/hello - should display "Hi, World!"
  • http://localhost:5000/hello/Siderite - should display "Hi, Siderite!"
  • http://localhost:5000/package/track/12 - should display "Default handler! Route values: [operation, track], [id, 12]"
  • http://localhost:5000/package/track/abc - should again return a 404 code, since there is a constraint on id to be an integer

How does that work? First of all, we added a more complex HelloRouter implementation that handles a name. To the route builder we added a default handler that just displays the parameters it receives, a Route instance that contains the URL template, the IRouter implementation and an inline constraint resolver for the parameter, and finally a mapped route that goes to the default handler. That is why, if neither the hello nor the package URL template is matched, the site returns 404; otherwise it chooses one handler or another and displays the result.

Error handling


Speaking of the 404 code that can only be seen in the browser's developer tools, how do we display an error page? As usual, read the documentation to understand things fully, but for now we will be playing around with the routing example above and adding error handling to it.

The short answer is to add some middleware to the mix. Extension methods in the
"Microsoft.AspNetCore.Diagnostics": "1.0.0"
package, like UseStatusCodePages, UseStatusCodePagesWithRedirects, UseStatusCodePagesWithReExecute, UseDeveloperExceptionPage and UseExceptionHandler.

Here is an example of the Startup class:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddRouting();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseDeveloperExceptionPage();
            app.UseStatusCodePages(/*context=>{
                throw new Exception($"Page return status code: {context.HttpContext.Response.StatusCode}");
            }*/);

            var defaultHandler = new RouteHandler(
                c => c.Response.WriteAsync($"Default handler! Route values: {string.Join(", ", c.GetRouteData().Values)}")
            );

            var routeBuilder = new RouteBuilder(app, defaultHandler);

            routeBuilder.Routes.Add(new Route(new HelloRouter(), "hello/{name:alpha?}",
                app.ApplicationServices.GetService<IInlineConstraintResolver>()));

            routeBuilder.MapRoute("Track Package Route",
                "package/{operation:regex(track|create|detonate)}/{id:int}");

            var router = routeBuilder.Build();
            app.UseRouter(router);
        }

        public class HelloRouter : IRouter
        {
            public VirtualPathData GetVirtualPath(VirtualPathContext context)
            {
                return null;
            }

            public Task RouteAsync(RouteContext context)
            {
                var name = context.RouteData.Values["name"] as string;
                if (String.IsNullOrEmpty(name)) name = "World";
                if (String.Equals(name, "error", StringComparison.OrdinalIgnoreCase))
                {
                    throw new ArgumentException("Hate errors! Won't say Hi!");
                }

                var requestPath = context.HttpContext.Request.Path;
                if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
                {
                    context.Handler = async c =>
                    {
                        await c.Response.WriteAsync($"Hi, {name}!");
                    };
                }
                return Task.FromResult(0);
            }
        }
    }
}

All you have to do now is try some nonexistent route like http://localhost:5000/help or cause an error with http://localhost:5000/hello/error.

Final thoughts


The post is already too long, so I won't be covering here a lot of what is needed, such as security or setting the environment name so one can, for example, only show the development error page in the development environment. The VS Code ecosystem is at its beginning and work is being done to improve it as we speak. The basic concept of middleware allows us to build whatever we want, starting from scratch. In a future post I will discuss MVC and Web API and the more advanced concepts related to the web. Perhaps it will be done in VS Code as well, perhaps not, but my point is that with the knowledge so far one can control all of these things by knowing how to handle a few classes in a list of middleware. You might not need an off-the-shelf framework; you may write your own.

Find a melange of the code above in this GitHub repository.

Just when I thought I don't have anything else to add, I found new stuff for my Chrome browser extension.

Bookmark Explorer now features:
  • configurable interval for keeping a page open before bookmarking it for Read Later (so that all redirects and icons are loaded correctly)
  • configurable interval after which deleted bookmarks are no longer remembered
  • remembering deleted bookmarks no matter what deletes them
  • more Read Later folders: configure their number and names
  • redesigned options page
  • more notifications on what is going on

The extension most resembles OneTab, in the sense that it is also designed to save you from opening a zillion tabs at the same time, but differs a lot by its ease of use, configurability and the absolute lack of any connection to outside servers: everything is stored in Chrome bookmarks and local storage.

Enjoy!

Update 17 June 2016: I've changed the focus of the extension to simply change the aspect of stories based on status, so that stories with content are highlighted over simple shares. I am currently working on another extension that is more adaptive, but it will be branded differently.

Update 27 May 2016: I've published the very early draft of the extension because it already does a cool thing: putting original content in the foreground and shrinking the reposts and photo uploads and feeling sharing and all that. You may find and install the extension here.

Have you ever wanted to decrease the spam in your Facebook page but couldn't do it in any way that would not make you miss important posts? I mean, even if you categorize all your contacts into good friends, close friends, relatives, acquaintances, then you unfollow the ones that really spam too much and you hide all posts that you don't like, you have no control over how Facebook decides to order what you see on the page. Worse than that, try refreshing your Facebook page repeatedly and see wildly oscillating results: posts appear, disappear, reorder themselves. It's a mess.

Well, true to my word, I have started work on a Chrome extension to help me with this. My plan is pretty complicated, so before I publish the extension on the Chrome Webstore, like I did with my previous two efforts, I will publish this on GitHub while I am still working on it. So, depending on where I am, this might be alpha, beta or stable. At the moment of this writing - the first commit - alpha is a pretty big word.

Here is the plan for the extension:
  1. Detect the user has opened the Facebook page
  2. Inject jQuery and extension code into the page
  3. Detect any post as it appears on the page
  4. Extract as many features as possible
  5. Allow the user to create categories for posts
  6. Allow the user to drag posts into categories or out of them
  7. Use AI to determine the category a post most likely belongs to
  8. Alternatively, let the user create their own filters, a la Outlook
  9. Show a list of categories (as tabs, perhaps) and hide all posts under the respective categories

This way, one might skip the annoying posts, based on personal preferences, while still enjoying the interesting ones. At the time of this writing (the first draft), the extension only works on https://www.facebook.com, not on any subpages; it extracts the type of the post and sets a CSS class on it. It also injects a CSS file which makes posts dimmer and smaller based on category. Mouse over a post to get the normal size and opacity.

How to make it work for you:
  1. In Chrome, go to Manage Extensions (chrome://extensions/)
  2. Click on the Developer Mode checkbox
  3. Click on the Load unpacked extension... button
  4. Select a folder where you have downloaded the source of this extension
  5. Open a new tab and load Facebook there
  6. You should see the posts getting smaller and dimmer based on category.
Change statusProcessor.css to select your own preferences (you may hide posts altogether or change the background color, etc).

As usual, please let me know what you think and contribute with code and ideas.