Learning from React series:

  • Part 1 (this one) - why examining React is useful even if you won't end up using it
  • Part 2 - what Facebook wanted to do with React and how to get a grasp on it
  • Part 3 - what is Reactive Programming all about?
  • Part 4 - is React functional programming?
  • Part 5 - Typescript, for better and for worse
  • Part 6 - Single Page Applications are not where they wanted to be

  A billion years ago, Microsoft was trying to push a web development model that simulated Windows Forms development: ASP.Net Web Forms. It featured several ideas:

  • component based design (input fields were a component, you could bundle two together into another component, the page was a component, etc)
  • each component was rendering itself
  • components were defined using a combination of HTML-like markup, Javascript, CSS and server-side .Net code bundled together, sometimes in the same file
  • the rendering of the component was done on the server side and pushed to the client
  • data changes or queries came from the client to the server via event messages
  • partial rendering was possible using UpdatePanels, which were a wrapper over ajax calls that called for partial content
    • at the time many juniors were putting the entire page into an UpdatePanel and saying they were doing AJAX, while senior devs smugly told them how bad that was and that it shouldn't be done. I agreed with the senior devs, but I really disliked their uninformed condescending attitude, so I created a method of diffing the previously sent content against the new content and sending only the difference. This reduced the amount of data sent over the network by about a hundred times.

  Sound familiar? Because for me, learning React made me think of that almost immediately. React features:

  • component based design
  • each component renders itself
  • components are defined by bundling together HTML, Javascript/Typescript and CSS
  • the rendering of the component is done in the virtual DOM and pushed to the actual browser DOM
  • data changes or queries are sent from the browser to the React code via event messages
  • partial rendering is built in the system, by diffing the existing render tree with a newly generated one from data changes

  At first glance, an old guy like me would say "been there, done that. It's bad design and it will soon go away". But I was also motivated enough at the time of ASP.Net Forms to look into it, even under the hood, and understand the thing. To say it was badly designed would be stupid. It worked for many years and powered (and still powers) thousands of big applications. The reason Forms failed is that better ideas came along, not that it was a bad idea when it was created.

  Let's look a little at what made Forms obsolete: the MVC pattern, specifically the Ruby implementation that took the world by storm and ended up being adopted by Microsoft, too. But Model View Controller wasn't a new pattern; it had been used forever in desktop applications, so why was it such a blow to Forms? Partly it was fashion and elitism, but MVC also molded itself better onto web applications:

  • a clear separation of concerns: data, logic and display
  • the ability to push the display more towards the client, which was new but becoming increasingly easy in browsers
  • a clear separation of programming languages: HTML in html files, Javascript in .js files, server code in .cs files
  • full (and simple) control over how things were rendered, displayed and sent to the server

  Yet in the case of React, things are going in the opposite direction: from MVC applications with clearly separated language files to these .jsx or .tsx files that contain Javascript, HTML and even CSS in the same file, mixed together into components. Is that bad? Not really. It kind of looks bad, but all the work is done on the client. There is no expensive interface between a server and a browser that would make the model, otherwise used successfully for decades in desktop applications, ineffective. It is basically Windows Forms in the browser, with some new ideas thrown in. As for the combination of languages in a single syntax, it can be abused, just like any technology, but it can also be done right: with state, logic and UI represented by different files and areas of the application. Yes, you need script to hide or show something based on data, but that script is part of the UI and different from the script used to represent logic. Just the language is the same.

  "Is Siderite joining the React camp, then? Switching sides, going frontend and doing nasty magic with Javascript and designing web pages?" people will ask. A reasonable question, considering most of my close friends still think React is for people who can't code or too young to remember the .aspx hell. The answer is no! Yet just as with the UpdatePanel seniors that couldn't stop for a second to look a bit deeper into an idea and see how it could be done and then cruelly dismissing curious juniors, thinking that React can't teach you anything is just dumb.

  In this series I will explore not only React ideas, but also the underlying principles. React is just an example of reactive programming, which has been in use, even if less popular, for decades. It is now making a comeback because of microservices, another fashionable fad that has had implementations since 1990, but no one gave them the time of day. Ideas of data immutability are coming from functional programming, which is also making a comeback as it works great with big data. So why not try this thing out, iron out the kinks and learn what they did right?

Update: due to popular demand, I've added Tyrion as a Github repo.

Update 2: due to more popular demand, I even made the repo public. :) Sorry about that.

Intro

  Discord is something I had only vaguely heard about, and when a friend told me he used it to chat with friends, I installed it, too. I was pleasantly surprised to see it is a very usable and free chat application, which combines features of IRC, other messenger applications and a bit of Slack. You can create servers and add channels to them, for example, where you can determine the rights of people and so on. What sets Discord apart from anything, perhaps other than Slack, is the level of "integration", the ability to programmatically interact with it. So I created a "bot", a program which stays active and responds to user chat messages and can take action. This post is about how to do that.

  Before you implement a bot you obviously need:

  All of this has been done to death and you can follow the links above to learn how to do it. Before we continue, a little something that might not be obvious: you can edit a Discord chat invite so that it never expires, as is the one on this blog now.

Writing code

One can write a bot in a multitude of programming languages, but I am a .NET dev, so Discord.NET it is. Note that this is an "unofficial" library, so it may not be (and is not) completely in sync with all the options that the Discord API provides. One such feature, for example, is multiple attachments to a message. But I digress.

Since my blog is also written in ASP.NET Core, it made sense to add the bot code to that. Also, in order to make it all clean code, I will use dependency injection as much as possible and use the built-in system for commands, even if it is quite rudimentary.

Step 1 - making dependencies available

We are going to need these dependencies:

  • DiscordSocketClient - the client to connect to Discord
  • CommandService - the service managing commands
  • BotSettings - a class used to hold settings and configuration
  • BotService - the bot itself, which we are going to make implement IHostedService so we can add it as a hosted service in ASP.Net

In order to keep things separated, I will not add all of this in Startup, instead encapsulating them into a Bootstrap class:

public static class Bootstrap
{
    public static IWebHostBuilder UseDiscordBot(this IWebHostBuilder builder)
    {
        return builder.ConfigureServices(services =>
        {
            services
                .AddSingleton<BotSettings>()
                .AddSingleton<DiscordSocketClient>()
                .AddSingleton<CommandService>()
                .AddHostedService<BotService>();
        });
    }
}

This allows me to add the bot simply in CreateWebHostBuilder as: 

WebHost.CreateDefaultBuilder(args)
   .UseStartup<Startup>()
   .UseKestrel(a => a.AddServerHeader = false)
   .UseDiscordBot();

Step 2 - the settings

The BotSettings class will be used not only to hold information, but also to communicate it between classes. Each Discord chat bot needs an access token to connect and we can add that as a configuration value in appsettings.json:

{
  ...
  "DiscordBot": {
    "Token": "<the token value>"
  },
  ...
}

The token is then read into the settings class:

public class BotSettings
{
    public BotSettings(IConfiguration config, IHostingEnvironment hostingEnvironment)
    {
        Token = config.GetValue<string>("DiscordBot:Token");
        RootPath = hostingEnvironment.WebRootPath;
        BotEnabled = true;
    }

    public string Token { get; }
    public string RootPath { get; }
    public bool BotEnabled { get; set; }
}

As you can see, no fancy class for getting the config, nor do we use IOptions or anything like that. We only need to get the token value once, let's keep it simple. I've added the RootPath because you might want to use it to access files on the local file system. The other property is a setting for enabling or disabling the functionality of the bot.

Step 3 - the bot skeleton

Here is the skeleton for a bot. It doesn't change much outside the MessageReceived and CommandExecuted code.

public class BotService : IHostedService, IDisposable
{
    private readonly DiscordSocketClient _client;
    private readonly CommandService _commandService;
    private readonly IServiceProvider _services;
    private readonly BotSettings _settings;

    public BotService(DiscordSocketClient client,
        CommandService commandService,
        IServiceProvider services,
        BotSettings settings)
    {
        _client = client;
        _commandService = commandService;
        _services = services;
        _settings = settings;
    }

    // The hosted service has started
    public async Task StartAsync(CancellationToken cancellationToken)
    {
        _client.Ready += Ready;
        _client.MessageReceived += MessageReceived;
        _commandService.CommandExecuted += CommandExecuted;
        _client.Log += Log;
        _commandService.Log += Log;
        // look for classes implementing ModuleBase to load commands from
        await _commandService.AddModulesAsync(Assembly.GetEntryAssembly(), _services);
        // log in to Discord, using the provided token
        await _client.LoginAsync(TokenType.Bot, _settings.Token);
        // start bot
        await _client.StartAsync();
    }

    // logging
    private async Task Log(LogMessage arg)
    {
        // do some logging
    }

    // bot has connected and it's ready to work
    private async Task Ready()
    {
        // some random stuff you can do once the bot is online: 

        // set status to online
        await _client.SetStatusAsync(UserStatus.Online);
        // Discord started as a game chat service, so it has the option to show what games you are playing
        // Here the bot will display "Playing dead" while listening
        await _client.SetGameAsync("dead", "https://siderite.dev", ActivityType.Listening);
    }
    private async Task MessageReceived(SocketMessage msg)
    {
        // message retrieved
    }
    private async Task CommandExecuted(Optional<CommandInfo> command, ICommandContext context, IResult result)
    {
        // a command execution was attempted
    }

    // the hosted service is stopping
    public async Task StopAsync(CancellationToken cancellationToken)
    {
        await _client.SetGameAsync(null);
        await _client.SetStatusAsync(UserStatus.Offline);
        await _client.StopAsync();
        _client.Log -= Log;
        _client.Ready -= Ready;
        _client.MessageReceived -= MessageReceived;
        _commandService.Log -= Log;
        _commandService.CommandExecuted -= CommandExecuted;
    }


    public void Dispose()
    {
        _client?.Dispose();
    }
}

Step 4 - adding commands

In order to add commands to the bot, you must do the following:

  • create a class to inherit from ModuleBase
  • add public methods that are decorated with the CommandAttribute
  • don't forget to call commandService.AddModulesAsync like above

Here is an example of an enable/disable command class:

public class BotCommands:ModuleBase
{
    private readonly BotSettings _settings;

    public BotCommands(BotSettings settings)
    {
        _settings = settings;
    }

    [Command("bot")]
    public async Task Bot([Remainder]string rest)
    {
        if (string.Equals(rest, "enable",StringComparison.OrdinalIgnoreCase))
        {
            _settings.BotEnabled = true;
        }
        if (string.Equals(rest, "disable", StringComparison.OrdinalIgnoreCase))
        {
            _settings.BotEnabled = false;
        }
        await this.Context.Channel.SendMessageAsync("Bot is "
            + (_settings.BotEnabled ? "enabled" : "disabled"));
    }
}

When the bot command is issued, the state of the bot will be sent as a message to the chat. If the parameter of the command is enable or disable, the state will also be changed accordingly.

Yet, in order for this command to work, we need to add code to the bot MessageReceived method: 

private async Task MessageReceived(SocketMessage msg)
{
    // do not process bot messages or system messages
    if (msg.Author.IsBot || msg.Source != MessageSource.User) return;
    // only process this type of message
    var message = msg as SocketUserMessage;
    if (message == null) return;
    // match the message if it starts with R2D2
    var match = Regex.Match(message.Content, @"^\s*R2D2\s+", RegexOptions.IgnoreCase);
    int? pos = null;
    if (match.Success)
    {
        // this is an R2D2 command, everything after the match is the command text
        pos = match.Length;
    }
    else if (message.Channel is IPrivateChannel)
    {
        // this is a command sent directly to the private channel of the bot, 
        // don't expect to start with R2D2 at all, just execute it
        pos = 0;
    }
    if (pos.HasValue)
    {
        // this is a command, execute it
        var context = new SocketCommandContext(_client, message);
        await _commandService.ExecuteAsync(context, message.Content.Substring(pos.Value), _services);
    }
    else
    {
        // processing of messages that are not commands
        if (_settings.BotEnabled)
        {
            // if the bot is enabled and people are talking about it, show an image and say "beep beep"
            if (message.Content.Contains("R2D2",StringComparison.OrdinalIgnoreCase))
            {
                await message.Channel.SendFileAsync(_settings.RootPath + "/img/R2D2.gif", "Beep beep!", true);
            }
        }
    }
}

This code forwards commands to the command service if the message starts with R2D2; otherwise, if the bot is enabled, it replies with the R2D2 picture and "Beep beep!" to messages that mention R2D2.

Step 5 - handling command results

Command execution may end in one of three states:

  • command is not recognized
  • command has failed
  • command has succeeded

Here is a CommandExecuted event handler that takes these into account:

private async Task CommandExecuted(Optional<CommandInfo> command, ICommandContext context, IResult result)
{
    // if a command isn't found
    if (!command.IsSpecified)
    {
        await context.Message.AddReactionAsync(new Emoji("🤨")); // eyebrow raised emoji
        return;
    }

    // log failure to the console 
    if (!result.IsSuccess)
    {
        await Log(new LogMessage(LogSeverity.Error, nameof(CommandExecuted), $"Error: {result.ErrorReason}"));
        return;
    }
    // react to message
    await context.Message.AddReactionAsync(new Emoji("🤖")); // robot emoji
}

Note that the command info object does not expose a result value, other than success and failure.

Conclusion

This post has shown you how to create a Discord chat bot in .NET and add it to an ASP.Net Core web site as a hosted service. You may see the result by joining this blog's chat and giving commands to Tyr, the chat's bot:

  • play
  • fart
  • use metric or imperial units in messages
  • use Yahoo Messenger emoticons in messages
  • whatever else I will add in it when I get in the mood :)

In .NET APIs we usually adorn the action methods with [Http<HTTP method>] attributes, like HttpGetAttribute or AcceptVerbsAttribute, to set the HTTP methods that are accepted. However, there are conventions on the names of the methods that make them work when such attributes are not used. How does ASP.Net determine which methods on a controller are considered "actions"? The documentation explains this, but the information is hidden in one of the paragraphs:
  1. Attributes as described above: AcceptVerbs, HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut.
  2. Through the beginning of the method name: "Get", "Post", "Put", "Delete", "Head", "Options", or "Patch"
  3. If none of the rules above apply, POST is assumed
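
As a quick illustration, here is a hedged sketch in the classic ASP.NET Web API style, where these rules apply; the controller and its methods are made up for the example:

using System.Collections.Generic;
using System.Web.Http;

public class ValuesController : ApiController
{
    // no attribute: the "Get" prefix makes this respond to HTTP GET
    public IEnumerable<string> GetValues() => new[] { "one", "two" };

    // the "Delete" prefix makes this respond to HTTP DELETE
    public void DeleteValue(int id) { /* remove the value */ }

    // no recognized prefix and no attribute: POST is assumed
    public void Process() { /* handle the POST */ }
}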

I did something and suddenly my API was not running in IIS Express anymore, but in Kestrel. I reviewed code changes, configuration changes, all to no avail. I don't want Kestrel, I want IIS Express!!! Why did Visual Studio suddenly decide to switch the development server?

Solution: it is your Visual Studio. The button you use to start debugging has a little dropdown that allows you to choose which server to use. I had probably pressed the wrong one at some point.

Update for .NET Core 3.0:

Seems for .NET Core 3.0 the solution is much simpler:
  • install the Microsoft.AspNetCore.Authentication.Negotiate NuGet package
  • add authentication in ConfigureServices like this:
    services
    .AddAuthentication(NegotiateDefaults.AuthenticationScheme)
    .AddNegotiate();
  • use the authentication in Configure (above app.UseAuthorization();)
    app.UseAuthentication();

No need to UseIISIntegration, UseHttpSys or anything.
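
Put together, the relevant parts of Startup would look roughly like this (a minimal sketch, assuming the Microsoft.AspNetCore.Authentication.Negotiate namespace is imported and leaving out unrelated registrations):

public void ConfigureServices(IServiceCollection services)
{
    services
        .AddAuthentication(NegotiateDefaults.AuthenticationScheme)
        .AddNegotiate();
    services.AddControllers();
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    // authentication must come before authorization
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}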

Original post:

If you get the System.InvalidOperationException "No authenticationScheme was specified, and there was no DefaultChallengeScheme found." it means that ... err... you don't have a default authentication scheme. Solution:
  • Install NuGet package Microsoft.AspNetCore.Authentication in your project
  • add
    services.AddAuthentication(Microsoft.AspNetCore.Server.IISIntegration.IISDefaults.AuthenticationScheme);
    to the ConfigureServices method.

Update: Note that this is for IIS integration. If you want to use self hosted or Kestrel in debug, you should use HttpSysDefaults.AuthenticationScheme. Funny though, it's the same string value for both constants: "Windows".
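
In that case the registration would be a one-line change (a sketch, same pattern with the other constant):

services.AddAuthentication(Microsoft.AspNetCore.Server.HttpSys.HttpSysDefaults.AuthenticationScheme);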

Oh, and if you enter the credentials badly when prompted and you can't reenter them, try to restart Chrome (as in this answer)

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Previously on Learning ASP.Net MVC...


I started with the idea of a project that would use user configurable queries to do Google searches, store the information in the results and then perform various data analysis on them and display it based on what the user wants. However, I first implemented Google authentication and then went on to write some theoretical posts. Lastly, I upgraded the project from .NET Core 1.0 to version 1.1.

Well, it took me a while to get here because I was at a crossroads. I like the idea of dependency injectable services to do data access. At the same time there is the entire Entity Framework tutorial path that kind of wants to strongly integrate EF with my projects. I mean, if I have a service that gives me the list of all items in the database and then I want to get only a few items, it would be bad design to filter the entire list. As such, I would have to write a different method that allows me to get the items based on some kind of filter. On the other hand, Entity Framework code looks just like that "give me all you have, filtered by this", which is then translated into an efficient query to the database. One possibility would be to have my service return IQueryable<T>, so I could also use the system to generate the database code on the fly.
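
To make the dilemma concrete, here is a hedged sketch of the two interface styles I was weighing; the first matches the service defined further below, the second is a hypothetical EF-friendly alternative (Query is the project's entity, usual System and System.Linq namespaces assumed):

public interface IQueryDataService
{
    // explicit method that says exactly what the caller wants
    IEnumerable<Query> GetUnprocessed(DateTime now);
}

public interface IQueryRepository
{
    // expose IQueryable<Query> and let callers compose the filter,
    // which Entity Framework would translate into SQL on the fly
    IQueryable<Query> Queries { get; }
}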

The Design


I've decided on the service architecture, against an EF type IQueryable way, because I want to be able to replace that service with anything, including something that doesn't work with a database or something that doesn't know how to dynamically create queries. Also, the idea that the service methods will describe exactly what I want appeals to me more than avoiding a bit of duplicated code.

Another thing to define now is the method through which I will implement the dependency injection. Being the control freak that I am, I would go with installing my own library, something like SimpleInjector, and configure it myself and use it explicitly. However, ASP.Net Core has dependency injection included out of the box, so I will use that.

As defined, the project needs queries to pass on to Google and a storage service for the results. It needs data services to manage these entities, as well as a service to abstract Google itself. The data gathering operation itself cannot be a simple REST call, since it might take a while; it must be a background task. The data analysis as well. So we need a sort of job manager.

As per a good structured design, the data objects will be stored in a separate project, as well as the interfaces for the services we will be using.

Some code, please!


Well, let's start with the code of the project so far, on GitHub, and get coding.

Before finding a solution to actually run the background code in the context of ASP.Net, let's write it inside a class. I am going to add a folder called Jobs and add a class in it called QueryProcessor with a method ProcessQueries. The code will be self explanatory, I hope.
public void ProcessQueries()
{
    var now = _timeService.Now;
    var queries = _queryDataService.GetUnprocessed(now);
    var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
        .SelectMany(q => _contentService.Query(q.Text));
    _contentDataService.Update(contentItems);
}

So we get the time - from a service, of course - and request the unprocessed queries for that time, then we extract the content items for each query, and these are then updated in the database. The idea here is that the first time a query is defined, or when the interval from the last time the query was processed has elapsed, the query will be sent to the content service, from which content items will be received. These items will be stored in the database.

Now, I've kept the code as concise as possible: there is no indication yet of any implementation detail and I've written as little code as I needed to express my intention. Yet, what are all these services? What is a time service? What is a content service? Where are they defined? In order to enable dependency injection, we will populate all of these fields from the constructor of the query processor. Here is how the class would look in its entirety:
using ContentAggregator.Interfaces;
using System.Linq;

namespace ContentAggregator.Jobs
{
    public class QueryProcessor
    {
        private readonly IContentDataService _contentDataService;
        private readonly IContentService _contentService;
        private readonly IQueryDataService _queryDataService;
        private readonly ITimeService _timeService;

        public QueryProcessor(ITimeService timeService, IQueryDataService queryDataService, IContentDataService contentDataService, IContentService contentService)
        {
            _timeService = timeService;
            _queryDataService = queryDataService;
            _contentDataService = contentDataService;
            _contentService = contentService;
        }

        public void ProcessQueries()
        {
            var now = _timeService.Now;
            var queries = _queryDataService.GetUnprocessed(now);
            var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
                .SelectMany(q => _contentService.Query(q.Text));
            _contentDataService.Update(contentItems);
        }
    }
}

Note that the services are only defined as interfaces which we declare in a separate project called ContentAggregator.Interfaces, referenced above in the usings block.

Let's ignore the job processor mechanism for a moment and just run ProcessQueries in a test method in the main controller. For this I will have to make dependency injection work and implement the interfaces. For brevity I will do so in the main project, although it would probably be a good idea to do it in a separate ContentAggregator.Implementations project. But let's not get ahead of ourselves. First make the code work, then arrange it all nice, in the refactoring phase.

Implementing the services


I will create mock services first, in order to test the code as it is, so the following implementations just do as little as possible while still following the interface signature.
public class ContentDataService : IContentDataService
{
    private static readonly StringBuilder _sb;

    static ContentDataService()
    {
        _sb = new StringBuilder();
    }

    public void Update(IEnumerable<ContentItem> contentItems)
    {
        foreach (var contentItem in contentItems)
        {
            _sb.AppendLine($"{contentItem.FinalUrl}:{contentItem.Title}");
        }
    }

    public static string Output
    {
        get { return _sb.ToString(); }
    }
}

public class ContentService : IContentService
{
    private readonly ITimeService _timeService;

    public ContentService(ITimeService timeService)
    {
        _timeService = timeService;
    }

    public IEnumerable<ContentItem> Query(string text)
    {
        yield return
            new ContentItem
            {
                OriginalUrl = "http://original.url",
                FinalUrl = "https://final.url",
                Title = "Mock Title",
                Description = "Mock Description",
                CreationTime = _timeService.Now,
                Time = new DateTime(2017, 03, 26),
                ContentType = "text/html",
                Error = null,
                Content = "Mock Content"
            };
    }
}

public class QueryDataService : IQueryDataService
{
    public IEnumerable<Query> GetUnprocessed(DateTime now)
    {
        yield return new Query
        {
            Text = "Some query"
        };
    }
}

public class TimeService : ITimeService
{
    public DateTime Now
    {
        get
        {
            return DateTime.UtcNow;
        }
    }
}

Now all I have to do is declare the binding between interface and implementation. The magic happens in ConfigureServices, in Startup.cs:
services.AddTransient<ITimeService, TimeService>();
services.AddTransient<IContentDataService, ContentDataService>();
services.AddTransient<IContentService, ContentService>();
services.AddTransient<IQueryDataService, QueryDataService>();

They are all transient, meaning that for each request of an implementation the system will just create a new instance. Another popular method is AddSingleton.
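
For comparison, this is how the other lifetimes would be registered; the logger and cache types are made up just for the example:

// transient: a new instance every time one is requested
services.AddTransient<ITimeService, TimeService>();
// scoped: one instance per HTTP request
services.AddScoped<IRequestLogger, RequestLogger>();
// singleton: a single instance shared by the whole application
services.AddSingleton<ICacheService, CacheService>();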

Using dependency injection


So, now I have to instantiate my query processor and run ProcessQueries.

One way is to set QueryProcessor as a service. I extract an interface, I add a new binding and then I give an interface as a parameter of my controller constructor:
[Authorize]
public class HomeController : Controller
{
    private readonly IQueryProcessor _queryProcessor;

    public HomeController(IQueryProcessor queryProcessor)
    {
        _queryProcessor = queryProcessor;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        _queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}

In fact, I don't even have to declare an interface. I can just use services.AddTransient<QueryProcessor>(); in ConfigureServices and it works as a parameter to the controller.

But what if I want to use it directly, resolve it manually, without injecting it in the controller? One can use the injection of a IServiceProvider instead. Here is an example:
[Authorize]
public class HomeController : Controller
{
    private readonly IServiceProvider _serviceProvider;

    public HomeController(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        var queryProcessor = _serviceProvider.GetService<QueryProcessor>();
        queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}

Yet you still need to use services.Add... in ConfigureServices and inject the service provider in the constructor of the controller.

There is a way of doing it completely separately like this:
var serviceProvider = new ServiceCollection()
    .AddTransient<ITimeService, TimeService>()
    .AddTransient<IContentDataService, ContentDataService>()
    .AddTransient<IContentService, ContentService>()
    .AddTransient<IQueryDataService, QueryDataService>()
    .AddTransient<QueryProcessor>()
    .BuildServiceProvider();
var queryProcessor = serviceProvider.GetService<QueryProcessor>();

This would be the way to encapsulate the ASP.Net Dependency Injection in another object, maybe in a console application, but clearly it would be pointless in our application.

The complete source code after these modifications can be found here. Test the functionality by going to /test on your local server after you start the app.

It is about time to revisit my series on ASP.Net MVC Core. From the time of my last blog post the .Net Core version has changed to 1.1, so just installing the SDK and running the project was not going to work. This post explains how to upgrade a .Net project to the latest version.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Short version


Pressing the batch Update button for NuGet packages corrupted project.json. Here are the steps to successfully migrate a .Net Core project to a higher version.

  1. Download and install the .NET Core 1.1 SDK
  2. Change the version of the SDK in global.json - you can find out the SDK version by creating a new .Net Core project and checking what it uses
  3. Change "netcoreapp1.0" to "netcoreapp1.1" in project.json
  4. Change Microsoft.NETCore.App version from "1.0.0" to "1.1.0" in project.json
  5. Add
    "runtimes": {
    "win10-x64": { }
    },
    to project.json
  6. Go to "Manage NuGet packages for the solution", to the Update tab, and update projects one by one. Do not press the batch Update button for selected packages
  7. Some packages will restore, but remain in the list. Skip them for now
  8. Whenever you see a "downgrade" warning when restoring, go to those packages and restore them next
  9. For packages that tell you to upgrade NuGet, ignore them; it's an error that probably happens because you restore a package while the previous package restore has not completed
  10. For the remaining packages that just won't update, write down their names, uninstall them and reinstall them

Code after changes can be found on GitHub

That should do it. For detailed steps of what I actually did to get to this concise list, read on.

Long version


Step 0 - I don't care, just load the damn project!


Downloaded the source code from GitHub, loaded the .sln with Visual Studio 2015. Got a nice blocking alert, because this was a .NET Core virgin computer:
Of course, I could have tried to install that version, but I wanted to upgrade to the latest Core.

Step 1 - read the Microsoft documentation


And here I went to Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM. I followed the instructions there, made Visual Studio 2015 load my project and automatically restore packages:
  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file for netcoreapp1.0 or Microsoft.NetCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. to be continued...

I got to the second step, but still got the alert...

Step 2 - fumble around


... so I commented out the sdk property in global.json. I got another alert:


This answer recommended uninstalling old versions of SDKs, in my case "Microsoft .NET Core 1.0.1 - SDK 1.0.0 Preview 2-003131 (x64)". Don't worry, it didn't work. More below:

TL;DR; version: do not uninstall the Visual Studio .NET Core Tooling


And then... got the same No executable found matching command "dotnet-projectmodel-server" error again.

I created a new .NET core project, just to see the version of SDK it uses: 1.0.0-preview2-003131 and I added it to global.json and reopened the project. It restored packages and didn't throw any errors! Dude, it even compiled and ran! But now I got a System.ArgumentException: The 'ClientId' option must be provided. Probably it had something to do with the Secret Manager. Follow the steps in the link to store your secrets in the app. It then worked.

Step 1.1 (see what I did there?) - continue to read the Microsoft documentation


I removed the third step in the Microsoft instructions because it caused me some problems. So don't do it, yet. It was:
  1. Update your ASP.NET Core packages dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1


Since I had not upgraded the packages, as in the Microsoft third step, I decided to do it. 26 updates waited for me, so I optimistically selected them all and clicked Update. Of course, errors! One popped up as more interesting: Package 'Microsoft.Extensions.SecretManager.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Another was even more worrisome: Unexpected end of content while loading JObject. Path 'dependencies', line 68, position 0 in project.json. Somehow the updating operation for the packages corrupted project.json! From a 3050 byte file, it now was 1617.

Step 3 - repair what the Microsoft instructions broke


Suspecting it was a problem with the NuGet package manager, I went to the link in the first error. But NuGet is included in Visual Studio 2015 and it was clearly the latest version. So the only solution was to go through each package and see which caused the problem. I went through all 26 packages and pressed Install on each, and it worked. Apparently, the batch Update button is causing the issue. Weirdly enough, there are two packages that were installed but remained in the Update tab and also appeared in the Consolidate tab: BundleMinifier.Core and Microsoft.EntityFrameworkCore.Tools, although I can't do anything with them there.

Another package (Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0) caused another confusing error: Package 'Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Yet restarting Visual Studio led to the disappearance of the CodeGeneration.Tools error.

So I tried to build the project only to be met with yet another project.json corruption error: Can not find runtime target for framework '.NETCoreAPP, Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'. Possible causes: [blah blah] The project does not list one of 'win10-x64, win81-x64, win7-x64' in the 'runtimes' [blah blah]. I found the fix here, which was to add
"runtimes": {
"win10-x64": { }
},
to project.json.

It compiled. It worked.

I am mentally preparing to give a talk about dependency injection and inversion of control and why they are important, so I intend to clarify my thoughts on the blog first. This has been spurred by seeing how so many talented and even experienced programmers don't really understand the concepts and why they should use them. I also intend to briefly explore these concepts in the context of programming languages other than C#.

And yes, I know I've started an ASP.Net MVC exploration series and stopped midway, and I truly intend to continue it, it's just that this is more urgent.

Head on intro


So, instead of going to the definitions, let me give you some examples.
public class MyClass {
    public IEnumerable<string> GetData() {
        var provider = new StringDataProvider();
        var data = provider.GetStringsNewerThan(DateTime.Now - TimeSpan.FromHours(1));
        return data;
    }
}
In this piece of code I create a class that has a method that gets some text. That's why I use a StringDataProvider, because I want to be provided with string data. I named my class so that it describes as best as possible what it intends to do, yet that descriptiveness is getting lost up the chain when my method is called just GetData. It is called so because it is the data that I need in the context of MyClass, which may not care, for example, that it is in string format. Maybe MyClass just displays enumerations of objects. Another issue with this is that it hides the date and time parameter that I pass in the method. I am getting string data, but not all of it, just for the last hour. Functionally, this will work fine: task complete, you can move to the next. Yet it has some nagging issues.

Dependency Injection


Let me show you the same piece of code, written with dependency injection in mind:
public class MyClass {
    private IDataProvider _dataProvider;
    private IDateTimeProvider _dateTimeProvider;

    public MyClass(IDataProvider dataProvider, IDateTimeProvider dateTimeProvider) {
        this._dataProvider = dataProvider;
        this._dateTimeProvider = dateTimeProvider;
    }

    public IEnumerable<string> GetData() {
        var oneHourBefore = _dateTimeProvider.Now - TimeSpan.FromHours(1);
        var data = _dataProvider.GetDataNewerThan(oneHourBefore);
        return data;
    }
}
A lot more code, but it solves several issues while introducing so many benefits that I wonder why people don't code like this from the get go.

Let's analyse this for a bit. First of all I introduce a constructor to MyClass, one that accepts and caches two parameters. They are not class types, but interfaces, which declare the intention for any class implementing them. The method then does the same thing as in the original example, using the providers it cached. Now, when I write the code of the class I don't actually need to have any provider implementation. I just declare what I need and worry about it later. I also don't need to inject real providers, I can mock them so that I can test my class as standalone. Note that the previous implementation of the class would have returned different data based on the system time and I had no way to control that behavior. The best benefit, for me, is that now the class is really descriptive. It almost reads like English: "Hi, folks, I am a class that needs someone to give me some data and the time of day and I will give you some processed data in return!". The rule of thumb is that for each method, external factors that may influence its behavior must be abstracted away. In our case if the date time provider provides the same time and the data provider the same data, the effect of the method is always the same.

Note that the interface I used was not IStringDataProvider, but IDataProvider. I don't really care, in my class, that the data is a bunch of strings. There is something called the Single Responsibility Principle, which says that a class or a method or some sort of unit of computation should try to only have one responsibility. If you change that code, it should only affect one area. Now, real life is a little different and classes do many things in many directions, yet they can implement any number of interfaces. The interfaces themselves can declare only one responsibility, which is why this is so nice. I don't actually have to have a class that is only a data provider, but in the context of my class, I only need that part and I am clearly declaring my intent in the code.

This here is called dependency injection, which is a fancy expression for saying "my code receives all third party instances as parameters". It is also in line with the Single Responsibility Principle, as now your class doesn't have to carry the responsibility of knowing how to instantiate the classes it needs. It makes the code more modular, easier to test, more legible and more maintainable.

But there is a problem. While before I was using something like new MyClass().GetData(), now I have to push the instantiation of the providers somewhere up the stream and do maybe something like this:
var dataProvider = new StringDataProvider();
var dateTimeProvider = new DateTimeProvider();
var myClass = new MyClass(dataProvider, dateTimeProvider);
myClass.GetData();
The apparent gains were all for naught! I just pushed the same ugly code somewhere else. But here is where Inversion of Control comes in. What if you never needed to instantiate anything again? What if you never actually had to write any new Something() code?

Inversion of Control


Inversion of Control actually takes over the responsibility of creating instances from you. With it, you might get this code instead:
public interface IMyClass {
    IEnumerable<string> GetData();
}

public class MyClass : IMyClass {
    private IDataProvider _dataProvider;
    private IDateTimeProvider _dateTimeProvider;

    public MyClass(IDataProvider dataProvider, IDateTimeProvider dateTimeProvider) {
        this._dataProvider = dataProvider;
        this._dateTimeProvider = dateTimeProvider;
    }

    public IEnumerable<string> GetData() {
        var oneHourBefore = _dateTimeProvider.Now - TimeSpan.FromHours(1);
        var data = _dataProvider.GetDataNewerThan(oneHourBefore);
        return data;
    }
}
Note that I created an interface for MyClass to implement, one that declares my GetData method. Now, to use it, I could write something like this:
var myClass=Dependency.Get<IMyClass>();
myClass.GetData();

Wow! What happened here? I just used a magical class called Dependency that gets me an instance of IMyClass. And I really don't care how it does it. It can discover implementations by itself or maybe I am manually binding interfaces to implementations when the application starts (for example Dependency.Bind<IMyClass,MyClass>();). When it needs to create a new MyClass it automatically sees that it needs two other interfaces as parameters, so it gets implementations for those first and continues up the chain. It is called a dependency chain and the container will go through it all to simply "Get" you what you need. There are many inversion of control frameworks out there, but the concept is so simple that one can make their own easily.
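
To show how little magic there really is, here is a minimal sketch of such a container: no lifetime management, no error handling, just the recursive constructor resolution described above. The static Dependency class from the example is hypothetical; this instance-based version does the same job:

using System;
using System.Collections.Generic;
using System.Linq;

public class DependencyContainer
{
    private readonly Dictionary<Type, Type> _bindings = new Dictionary<Type, Type>();

    // bind an interface to the implementation that should be used for it
    public void Bind<TInterface, TImplementation>() where TImplementation : TInterface
    {
        _bindings[typeof(TInterface)] = typeof(TImplementation);
    }

    public T Get<T>() => (T)Get(typeof(T));

    private object Get(Type type)
    {
        // if the type is bound, use the bound implementation, otherwise try the type itself
        var implementation = _bindings.TryGetValue(type, out var bound) ? bound : type;
        // pick the most "greedy" constructor and resolve each parameter recursively
        var constructor = implementation.GetConstructors()
            .OrderByDescending(c => c.GetParameters().Length)
            .First();
        var arguments = constructor.GetParameters()
            .Select(p => Get(p.ParameterType))
            .ToArray();
        return constructor.Invoke(arguments);
    }
}

Usage would mirror the example above:

var container = new DependencyContainer();
container.Bind<IMyClass, MyClass>();
// ... bind IDataProvider and IDateTimeProvider to whatever implementations exist ...
var myClass = container.Get<IMyClass>();
myClass.GetData();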

And I get another benefit: if I want to display some other type of data, all I have to do is instruct the dependency container that I want another implementation for the interface. I can even think about versioning: take a class that I know does the job and compare it with a new implementation of the same interface. I can tell it to use different versions based on the client used. And all of this in exactly one place: the dependency container bindings. You may want to plug different implementations provided by third parties and all they have to care about is respecting the contract in your interface.



Solution structure


This way of writing code forces some changes in the structure of your projects. If all you have is written in a single project, you don't care, but if you want to split your work in several libraries, you have to take into account that interfaces need to be referenced by almost everything, including third party modules that you want to plug. That means the interfaces need their own library. Yet in order to declare the interfaces, you need access to all the data objects that their members need, so your Interfaces project needs to reference all the projects with data objects in them. And that means that your logic will be separated from your data objects in order to avoid circular dependencies. The only project that will probably need to go deeper will be the unit and integration test project.

Bottom line: in order to implement this painlessly, you need an Entities library, containing data objects, then an Interfaces library, containing the interfaces you need and, maybe, the dependency container mechanism, if you don't put it in yet another library. All the logic needs to be in other projects. And that brings us to a nice side effect: the only connection between logic modules is done via abstractions like interfaces and simple data containers. You can now substitute one library with another without actually caring about the rest. The unit tests will work just the same, the application will function just the same and functionality can be both encapsulated and programmatically described.

There is a drawback to this. Whenever you need to see how some method is implemented and you navigate to definition, you will often reach the interface declaration, which tells you nothing. You then need to find classes that implement the interface or to search for uses of the interface method to find implementations. Even so, I would say that this is an IDE problem, not a dependency injection issue.

Other points of view


Now, the intro above describes what I understand by dependency injection and inversion of control. The official definition of Dependency Injection claims it is a subset of Inversion of Control, not a separate thing.

For example, Martin Fowler says that when he and his fellow software pattern creators thought of it, they called it Inversion of Control, but they decided that it was too broad a term, so they moved to calling it Dependency Injection. That seems strange to me, since I can describe situations where dependencies are injected, or at least passed around, but they are manually instantiated, or situations where the creation of instances is out of the control of the developer, but no dependencies are passed around. He seems to see both as one thing. On the other hand, the pattern where dependencies are injected by constructor, property setters or weird implementation of yet another set of interfaces (which he calls Dependency Injection) is different from Service Locator, where you specifically ask for a type of service.

Wikipedia says that Dependency Injection is a software pattern which implements Inversion of Control to resolve dependencies, while it calls Inversion of Control a design principle (so, not a pattern?) in which custom-written portions of a computer program receive the flow of control from a generic framework. It even goes so far as to say Dependency Injection is a specific type of Inversion of Control. Anyway, the pages there seem to follow the general definitions that Martin Fowler does, which pits Dependency Injection versus Service Locator.

On StackOverflow a very well viewed answer sees dependency injection as "giving an object its instance variables". I tend to agree. I also liked another answer below that said "DI is very much like the classic avoiding of hardcoded constants in the code." It makes one think of a variable as an abstraction for values of a certain type. Same page holds another interesting view: "Dependency Injection and dependency Injection Containers are different things: Dependency Injection is a method for writing better code, a DI Container is a tool to help injecting dependencies. You don't need a container to do dependency injection. However a container can help you."

Another StackOverflow question has tons of answers explaining how Dependency Injection is a particular case of Inversion of Control. They all seem to have read Fowler before answering, though.

A CodeProject article explains how Dependency Injection is just a flavor of Inversion of Control, others being Service Locator, Events, Delegates, etc.

Composition over inheritance, convention over configuration


An interesting side effect of this drastic decoupling of code is that it promotes composition over inheritance. Let's face it: inheritance was supposed to solve all of humanity's problems and it failed. You either have an endless chain of classes inheriting from each other, from which you usually use only one or two, or you get misguided attempts to allow inheritance from multiple sources, which complicates understanding of what does what. Instead, interfaces have become more widespread, as declarations of intent, while composition has provided more of what inheritance started off as promising. And what is dependency injection if not a sort of composition? In the intro example we compose a date time provider and a data provider into a time aware data provider, all the while the actors in this composition need to know nothing other than the contracts each part must abide by. Do that same thing with other implementations and you get a different result. I will go as far as to say that inheritance defines what classes are, while composition defines what classes do, which is what matters in the end.

Another interesting effect is the wider adoption of convention over configuration. For example you can find the default implementation of an interface as the class that implements it and has the same name minus the preceding "I". Rather than explicitly tell the framework that we want to use the Manager class each time someone needs an IManager implementation, it can figure it out for itself by naming alone. This would never work if the responsibility of getting class instances resided with each method using them.
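
A rough sketch of how such a convention could be implemented: scan an assembly and pair every interface with the class named like it minus the leading "I". This is illustrative, not any particular framework's API; a real container would register the resulting pairs instead of returning a dictionary:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class ConventionBinder
{
    // pair IManager -> Manager, IMailService -> MailService, and so on
    public static Dictionary<Type, Type> BindByConvention(Assembly assembly)
    {
        var bindings = new Dictionary<Type, Type>();
        var classes = assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract).ToList();
        foreach (var iface in assembly.GetTypes().Where(t => t.IsInterface))
        {
            var expectedName = iface.Name.Substring(1); // drop the leading "I"
            var implementation = classes.FirstOrDefault(c =>
                c.Name == expectedName && iface.IsAssignableFrom(c));
            if (implementation != null)
            {
                bindings[iface] = implementation;
            }
        }
        return bindings;
    }
}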

Real life examples


Simple Injector


If you look on the Internet, one of the first dependency injection frameworks you find for .Net is Simple Injector, which works on every flavor of .Net including Mono and Core. It's as easy to use as installing the NuGet package and doing something like this:
// 1. Create a new Simple Injector container
var container = new Container();

// 2. Configure the container (register)
container.Register<IUserRepository, SqlUserRepository>(Lifestyle.Transient);
container.Register<ILogger, MailLogger>(Lifestyle.Singleton);

// 3. Optionally verify the container's configuration.
container.Verify();

// 4. Get the implementation by type
IUserService service = container.GetInstance<IUserService>();

ASP.Net Core


ASP.Net Core has dependency injection built in. You configure your bindings in ConfigureServices:
public void ConfigureServices(IServiceCollection svcs)
{
    svcs.AddSingleton(_config);

    if (_env.IsDevelopment())
    {
        svcs.AddTransient<IMailService, LoggingMailService>();
    }
    else
    {
        svcs.AddTransient<IMailService, MailService>();
    }

    svcs.AddDbContext<WilderContext>(ServiceLifetime.Scoped);

    // ...
}
then you use any of the registered classes and interfaces as constructor parameters for controllers, or even as method parameters (see FromServicesAttribute).
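
For example, a registered service can be injected straight into a single action method; the action, model and SendMessage method below are assumptions made just for illustration:

[HttpPost("/contact")]
public IActionResult Contact(ContactModel model, [FromServices] IMailService mailService)
{
    // the mail service is resolved from the container just for this action
    mailService.SendMessage(model.Email, model.Message);
    return Ok();
}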

Managed Extensibility Framework


MEF is a big beast of a framework, but it can simplify a lot of work you would have to do to glue things together, especially in extensibility scenarios. Typically one would use attributes to declare which interface something "exports" and then use other attributes to "import" implementations in properties and values. All you need to do is put them in the same place. Something like this:
[Export(typeof(ICalculator))]
class SimpleCalculator : ICalculator {
    //...
}

class Program {

    [Import(typeof(ICalculator))]
    public ICalculator calculator;

    // do something with calculator
}
Of course, in order for this to work seamlessly you need stuff like this, as well:
private Program()
{
    // An aggregate catalog that combines multiple catalogs
    var catalog = new AggregateCatalog();
    // Adds all the parts found in the same assembly as the Program class
    catalog.Catalogs.Add(new AssemblyCatalog(typeof(Program).Assembly));
    catalog.Catalogs.Add(new DirectoryCatalog("C:\\Users\\SomeUser\\Documents\\Visual Studio 2010\\Projects\\SimpleCalculator3\\SimpleCalculator3\\Extensions"));

    // Create the CompositionContainer with the parts in the catalog
    _container = new CompositionContainer(catalog);

    // Fill the imports of this object
    try
    {
        this._container.ComposeParts(this);
    }
    catch (CompositionException compositionException)
    {
        Console.WriteLine(compositionException.ToString());
    }
}

Dependency Injection in other languages


Admit it, C# is great, but it is not by far the most used computer language. That place is reserved, at least for now, for Javascript. Not only is it untyped and dynamic, but Javascript isn't even a class inheritance language. It uses the so called prototype inheritance, which uses an instance of an object attached to a type to provide default values for the instance of said type. I know, it sounds confusing and it is, but what is important is that it has no concept of interfaces or reflection. So while it is trivial to create a dictionary of instances (or functions that create instances) of objects which you could then use to get what you need by using a string key (something like var manager=Dependency.Get('IManager');, for example) it is difficult to imagine how one could go through the entire chain of dependencies to create objects that need other objects.

And yet this is done, by AngularJs, RequireJs or any number of modern Javascript frameworks. The secret? Using regular expressions to determine the parameters needed for a constructor function after turning it to string. It's complicated and beyond the scope of this blog post, but take a look at this StackOverflow question and its answers to understand how it's done.

Let me show you an example from AngularJs:
angular.module('myModule', [])
    .directive('directiveName', ['depService', function(depService) {
        // ...
    }])
In this case the key/type of the service is explicit, using an array notation that says "this is the list of parameters that the dependency injector needs to give to the function", but this might have been written just as the function:
angular.module('myModule', [])
    .directive('directiveName', function(depService) {
        // ...
    })
In this case Angular would use the regular expression approach on the function string.


What about other languages? Java is very much like C# and the concepts there are similar. C++, even though they are all C-family languages, is very different, yet Dependency Injection can be achieved. I am not a C++ developer, so I can't tell you much about that, but take a look at this StackOverflow question and its answers; it is claimed that there is no one method, but many that can be used to do dependency injection in C++.

In fact, the only languages I can think of that can't do dependency injection are silly ones like SQL. Since you cannot (reasonably) define your own types or pass functions along, the concept makes no sense. Even so, one can imagine creating dummy stored procedures that other stored procedures would use in order to be tested. There is no reason why you wouldn't use dependency injection if the language allows for it.

Testability


I mentioned briefly unit testing. Dependency Injection works hand in hand with automated testing. Given that the practice creates modules of software that give reproducible results for the same inputs and account for all the inputs, testing becomes a breeze. Let me give you some examples using Moq, a mocking library for .Net:
var dateTimeMock = new Mock<IDateTimeProvider>();
dateTimeMock
    .Setup(m => m.Now)
    .Returns(new DateTime(2016, 12, 03));

var dataMock = new Mock<IDataProvider>();
dataMock
    .Setup(m => m.GetDataNewerThan(It.IsAny<DateTime>()))
    .Returns(new[] { "test", "data" });

var testClass = new MyClass(dateTimeMock.Object, dataMock.Object);

var result = testClass.GetData();
AssertDeepEqual(result, new[] { "test", "data" });

First of all, I take care of all dependencies. I create a "mock" for each of them and I "set up" the methods or property setters/getters that interest me. I don't really need to set up the date time mock for Now, since the data from the data provider is always the same no matter the parameter, but it's there for you to see how it's done. Second, I instantiate the class I want to test using the Object property of my mocks, which returns an object that implements the type given as a generic parameter in the constructor. Third I assert that the side effects of my call are the ones I expect. The mocks need to be as dumb as possible. If you feel you need to write code to define your mocks you are probably doing something wrong.

The type of the tests, for people who are not familiar with this concept, is usually a fully positive one - that is give full valid data and expect the correct result - followed by many negative ones, where the correct data is made incorrect in all possible ways and it is tested that the method fails. If there are many types of combinations of data that would be considered valid, you need a test for as many of them.
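
A negative test for the same class might look like this (a sketch: the AssertThrows helper stands in for whatever your test framework provides, in the same spirit as AssertDeepEqual above):

var failingDataMock = new Mock<IDataProvider>();
failingDataMock
    .Setup(m => m.GetDataNewerThan(It.IsAny<DateTime>()))
    .Throws(new InvalidOperationException("data source unavailable"));

var testClass = new MyClass(dateTimeMock.Object, failingDataMock.Object);

// the class does not swallow provider errors, so the call is expected to throw
AssertThrows<InvalidOperationException>(() => testClass.GetData());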

Note that the test is instantiating the test class directly, using the constructor. We are not testing the injector here, but the actual class.

Conclusions


What I appreciate most about Dependency Injection is that it forces you to write code that has clear boundaries defined by interfaces. Once this is achieved, you can go write your own stuff and not care about what other people do with theirs. You can test your modules without even caring whether the rest of the project exists. It allows you to refactor code in steps and with a lot more confidence, since you are covered by unit tests.

While some people work on fire-and-forget projects, like small games or utilities, where maintainability - one of the most touted reasons for using unit tests and dependency injection - doesn't matter much, these practices bring many other benefits that are almost impossible to get otherwise.

The entire point of this is reducing the complexity of dependencies, which includes not only the modules in your application, but also the support structure around them, like the people working on them. While some managers might not see the wisdom of reducing friction between software components, surely they can see the positive value of reducing friction between people.

There is one other topic I wanted to touch on, but it is vast and I don't have enough experience with it yet, although it feels very attractive to me: refactoring old code in order to use dependency injection. Best practices, how to make it safe enough and fast enough to make managers approve it and so on. Perhaps another post later on. I was thinking of a combination of static analysis and automated methods, like replacing all usages of "new" with a single point of instantiation, warning about static methods and properties, automatically replacing known bad practices like DateTime.Now and so on. It might be interesting, right?

I hope I wasn't too confusing and I appreciate any feedback you have. I will be working on a presentation file with similar content, so any help will go into doing a better job explaining it to others.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services


The previous version of Entity Framework was 6 and the current one is Entity Framework Core 1.0, although for a few years they had been going with Entity Framework 7. It might seem that they changed the naming to be consistent with .NET Core, but according to them they did it to avoid confusion. The new version sprouted from the idea of "EF everywhere", just like .NET Core is ".NET everywhere", and is a rewrite - a port, as they chose to call it - with some extra features but also lacking some of the functionality EF6 had - or better said, has, since they continue to support it for .NET proper. In this post I will examine the history and some of the basic concepts related to working with Entity Framework as opposed to a more direct approach (like opening a System.Data.SqlConnection and issuing SqlCommands).

Entity Framework history


Entity Framework started as an ORM, a class of software that abstracts database access. The term itself is either a bit obsolete, with the advent of databases that call themselves non relational, or rebelliously exact, recognizing that anything that can be called a database needs to also encode relationships between data. But that's another topic altogether. When Entity Framework was designed it was all about abstracting SQL into an object oriented framework. How would that work? You would define entities, objects that inherited from an EntityBase class, and decorate their properties with attributes defining some restrictions that databases have, but objects don't, like the size of a field. You also had some default methods that could be overridden in order to control very specific custom requirements. In the background, objects would be mapped to tables, their simple properties to columns and their more complex properties to other tables that had a foreign key relationship with the owner object's mapped table.

There were some issues with this system that quickly became apparent. With the data layer separation idea going strong, it was really cumbersome and ugly to move around objects that inherited from an entire hierarchy of Entity Framework classes and held state in ways that were almost opaque to the user. Users demanded the use of POCOs, a way to separate the functionality of EF from the data objects that were used through all the tiers of the application. At the time the solution was mostly to use simple objects within your application and then translate them to data access objects which were entities.

Microsoft also recognized this and in further iterations of EF, they went full POCO. But this enabled them to also move from one way of thinking to another. At the beginning the focus was on the database. You had your database structure and your data access layer and you wanted to add EF to your project, meaning you needed to map existing tables to C# objects. But now, you could go the other way around. You started with an application using plain objects and then just slapped EF on and asked it to create and maintain the database. The first way of thinking was coined "database first" and the other "code first".

In seven iterations of the framework, things have been changed and updated quite a lot. You can imagine that successfully adapting to legacy database structures while seamlessly abstracting changes to that structure and completely managing the mapping of objects to the database was no easy task. There were ups and downs, but Microsoft stuck to their guns and now they are making the strong argument that all your data manipulation should be done via EF. That's bold and it would be really stupid if Entity Framework weren't a good product they have full confidence in. Which they do. They moved from a framework that was embedded in .NET, to one that was partially embedded with some extra code kept separate and then, with EF6, they went full open source. EF Core is also open source and .NET Core is free of EF specific classes.

Also, EF Core is more friendly towards non relational databases, so you either consider ORM an all encompassing term or EF is no longer just an ORM :)

In order to end this chapter, we also need to discuss alternatives.

Ironically, Entity Framework's ancestor and its main competitor were one and the same: LINQ to SQL. If you don't know what LINQ is, you should take the time to look it up, since it has been an integral part of .NET since version 3.5. In Linq2Sql you would manually map objects to tables, then use the mapping in your code. The management of the database and of the mapping was all on you. When EF came along, it was an improvement over this idea, with the major advantage (or flaw, depending on your political stance) that it handled schema mapping and management for you, as much as possible.

Another approach that was, and still is, widely used is separating data access based on intent, not on structure. Basically, if you had the need to add/get the names of people from your People table, you would have another project with some object hierarchy that in the end exposed methods like AddPeople and GetPeople. You didn't need to delete or update people, so you didn't have an API for it. Since the intent was clear, so were the structure and the access to the database, all encapsulated - manually - in this data access layer project. If you wanted to get people by name, for example, you had to add that functionality and code all the intermediary access. This had the advantage (or flaw) that you had someone who was good with databases (and a bit with code) handling the maintenance of the data access layer, basically a database admin with some code writing permissions. Some people love the control over the entire process, while others hate that they need to understand the underlying database in order to access data.

From my perspective, it seems as if there is an argument between people who want more control over what is going on and people who want more ease of development. The rest is more of an architectural discussion which is irrelevant as far as EF is concerned. However, it seems to me that the Entity Framework team has worked hard to please both sides of that argument, going for simplicity, but allowing very fine control down the line. It also means that this blog post cannot possibly cover everything about Entity Framework.

Getting started with Entity Framework


So, how do things look in EF Core 1.0? Things are still split down the middle in "code first" and "database first", but code first is the recommended way for starting new projects. Database first is something that must be supported in perpetuity just in case you want to migrate to EF from an existing database.

Database first


Imagine you have tables in an SQL server database. You want to switch to EF so you need to somehow map the existing data to entities. There is a tutorial for that: ASP.NET Core Application to Existing Database (Database First), so I will just quickly go over the essentials.

First thing is to use NuGet to install EF in your project:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
and then add
"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
to the project.json tools section. For the Database First approach we also need other stuff like:
Install-Package Microsoft.EntityFrameworkCore.Tools -Pre
Install-Package Microsoft.EntityFrameworkCore.SqlServer.Design
Final touch, running
Scaffold-DbContext "<Sql connection string>" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models

At this time alarm bells are sounding already. Wait! I only gave it my database connection string, how can it automagically turn this into C# code and work?

If we look at the code to create the sample database in the tutorial above, there are two tables: Blog and Post and they are related via primary key/foreign key as is recommended to create an SQL database. Columns are clearly defined as NULL or NOT NULL and the size of text fields is conveniently Max.



The process created some interesting classes. Besides the properties that map to fields, the Blog class has a property of type ICollection<Post> which is instantiated with a HashSet<Post>. The real fun is the BloggingContext class, which inherits from DbContext and, in the override for OnModelCreating, configures the relationships in the database.
  • Enforcing the required status of the blog Url:
    modelBuilder.Entity<Blog>(entity =>
    {
    entity.Property(e => e.Url).IsRequired();
    });
  • Defining the one-to-many relationship between Blog and Post:
    modelBuilder.Entity<Post>(entity =>
    {
    entity.HasOne(d => d.Blog)
    .WithMany(p => p.Post)
    .HasForeignKey(d => d.BlogId);
    });
  • Having the root sets used to access entities:
    public virtual DbSet<Blog> Blog { get; set; }
    public virtual DbSet<Post> Post { get; set; }

First thing to surprise me, honestly, is that the data model classes are as bare as possible. I would have expected some attributes on the properties defining their state as required, for example. EF Core allows you to keep the classes free of data annotations, while also supporting an annotation based system. The collections are interfaces and they are only instantiated with a concrete implementation in the constructor. An interesting choice for the collection type is HashSet. As opposed to a List, it does not allow access via indexers, only enumerators. It is designed to optimize search: basically, finding an item in the HashSet does not depend on the size of the collection. Set operations like union and intersection can be used efficiently with HashSet, as well.

HashSet also does not allow duplicates, and that may cause some confusion. How does one define a duplicate? It uses IEqualityComparer. However, a HashSet can be instantiated with a custom IEqualityComparer implementation. Alternatively, the Equals and GetHashCode methods can be overridden in the entities themselves. People are divided over whether one should use such mechanisms to optimize Entity Framework functionality, but keep in mind that normally EF only keeps in memory the stuff it immediately needs. Such optimizations are more likely to cause maintainability problems than save processing time.
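
For illustration only - and keeping in mind the warning above - here is a minimal sketch of overriding Equals and GetHashCode so that two Post instances with the same PostId count as the same element inside a HashSet<Post>:
public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }

    // two posts are considered the same if they have the same primary key value
    public override bool Equals(object obj)
    {
        var other = obj as Post;
        return other != null && other.PostId == PostId;
    }

    // the hash code must be consistent with Equals
    public override int GetHashCode()
    {
        return PostId.GetHashCode();
    }
}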

Database first seems to me just a way to work with Entity Framework after using a migration tool. It sounds great, but there are probably a lot of small issues that one has to gain experience with when dealing with real life databases. I will blog about it if I get to doing something like this.

Code first


The code first tutorial goes the other direction, obviously, but has some interesting differences that tell me that a better model for migrating even existing databases is to start code first, then find a way to migrate the data from the existing database to the new one. This has the advantage that it allows for refactoring the database as well as providing some sort of verification mechanism when comparing the old with the new structure.

The setup is similar: use NuGet to install EF in your project:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
then add
"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
to the project.json tools section.

Then we create the models: a simple DbContext inheritance, containing DbSets of Blog and Post, and the data models themselves: Blog and Post. Here is the code:
public class BloggingContext : DbContext
{
public BloggingContext(DbContextOptions<BloggingContext> options)
: base(options)
{ }

public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
}

public class Blog
{
public int BlogId { get; set; }
public string Url { get; set; }

public List<Post> Posts { get; set; }
}

public class Post
{
public int PostId { get; set; }
public string Title { get; set; }
public string Content { get; set; }

public int BlogId { get; set; }
public Blog Blog { get; set; }
}

Surprisingly, the tutorial doesn't go into any other changes to this code. There are no HashSets, there are no restrictions over what is required or not and no configuration of how the classes are related to each other. A video demo of this also shows the created database and it contains primary keys. A blog has a primary key on BlogId, for example. To me that suggests that convention over configuration is also used in the background. The SomethingId property of a class named Something will automatically be considered the primary key (as will a property simply named Id). Also, if you look in the code that EF executes when creating the database (these are called migrations and are pretty cool, I'll discuss them later in the post), Blogs are connected to Posts via foreign keys, so this thing works wonders if you name your entities right. I also created a small console application to test this and it worked as advertised.
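
Here is a hedged sketch of that console test; the connection string is illustrative and the BloggingContext, Blog and Post classes are the ones from the tutorial code above (usings omitted):
var options = new DbContextOptionsBuilder<BloggingContext>()
    .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=EFCodeFirst;Trusted_Connection=True;")
    .Options;

using (var db = new BloggingContext(options))
{
    // applies any pending migrations, creating the database if it doesn't exist
    db.Database.Migrate();

    var blog = new Blog { Url = "https://example.com", Posts = new List<Post>() };
    blog.Posts.Add(new Post { Title = "Hello", Content = "World" });

    db.Blogs.Add(blog);
    db.SaveChanges();

    Console.WriteLine(db.Posts.Count());
}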

Obviously this will not work with every scenario and there will be attributes attached to models and novel ways of configuring mapping, but so far it seems pretty straightforward. If you want to go into the more detailed aspects of controlling your data model, try reading the documentation provided by Microsoft so far.

Entity Framework concepts


We could go right into the code fray, but I chose to first cover some of the more boring conceptual stuff. Working with Entity Framework involves understanding concepts like persistence, caching, migrations, change saving and the underlying mechanisms that turn code into SQL, as well as the Unit of Work and Repository patterns, etc. I'll try to be brief.

Context


As you have seen earlier, classes inheriting from DbContext are the root of all database access. I say classes, because more than one of them can be used. If you want to copy from one database to another you will need two contexts. The context defines a data model, differentiated from a database schema by being a purely programmatic concept. DbContext implements IDisposable, so for short, self-contained operations it can be used just as one uses an open SQL connection. In fact, if you are tempted to reuse the same context, remember that its memory use increases with the quantity of data it accesses. It is recommended for performance reasons to dispose of a context immediately after finishing operations. Also, a DbContext class is not thread safe. It stands to reason to use a context for as short a period as possible inside single threaded operations.

DbContext provides two hooks called OnConfiguring and OnModelCreating that users can override to configure the context and the model, respectively. Careful, though: one can configure the context to use a specific implementation of IModel as model, in which case OnModelCreating will not be called. The other most important functionality of DbContext is SaveChanges, which we will discuss later. Worth mentioning are Database and Model, properties that can be used to access database and model information and metadata. The rest are Add, Update, Remove, Attach, Find, etc., plus their async and range versions, which allow for the first time - EF6 did not - to dynamically send an object to a method like Add and let EF determine which set it belongs to. It's nothing that sounds very safe to use, but probably there were some scenarios where it was necessary.
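
As a minimal sketch of the two hooks - an alternative to passing DbContextOptions through the constructor, with an illustrative connection string and the Blog/Post models from before:
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    // called when the context was not configured from the outside
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=EFCodeFirst;Trusted_Connection=True;");
    }

    // the same kind of configuration the Database First scaffolding generated can be written by hand here
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>(entity => entity.Property(e => e.Url).IsRequired());
    }
}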

DbSet


For each entity type to be accessed, the context should have DbSet<TEntity> properties for that type. DbSet allows for the manipulation of entities via methods like Add, Update, Remove, Find, Attach and is an IEnumerable and IQueryable of TEntity. However, in order to persist any change, SaveChanges needs to be called on the context class.

SaveChanges


The SaveChanges method is the most important functionality of the context class, which otherwise caches the accessed objects and their state, waiting either for this method to be called or for the context to be disposed. Important improvements in the EF Core code now allow these changes to be sent to the database in batches of commands. In EF6 and earlier, each change was sent separately, so, for example, adding two entities to a set and saving changes would take two database trips. From EF Core onward, that takes only one trip unless specifically configured with MaxBatchSize(number). You can revert to the EF6 behavior using MaxBatchSize(1). So far this applies only to SQL Server.
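
A quick sketch of that configuration, assuming the SQL Server provider and an illustrative connectionString variable:
var options = new DbContextOptionsBuilder<BloggingContext>()
    // MaxBatchSize(1) reverts to the EF6-style behavior of one command per change
    .UseSqlServer(connectionString, sqlServerOptions => sqlServerOptions.MaxBatchSize(1))
    .Options;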

This behavior is the reason why contexts need to be released as soon as their work is done. If you query all the items with a name starting with 'A', all of these items will be loaded in the context memory. If you then need to get the ones starting with 'B', the performance and memory will be affected if using the same context. It might be helpful, though, if then you need to query both items starting with 'A' and the ones starting with 'B'. It's your choice.

One particularity of working with Entity Framework is that in order to update or delete records, you first need to query them. Something like
context.Posts.RemoveRange(context.Posts.Where(p => p.Title.StartsWith("x")));
There is no .RemoveRange(predicate) because it would be impossible to resolve a query afterwards. Well, not impossible, only it would have to somehow remember the predicate, alter subsequent selects to somehow gather all information required and apply deletion on the client side. Too complicated. There is a way to access the database by writing SQL directly and again EF Core has some improvements for this, but raw SQL changes are opaque to an already existing context.
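
For completeness, a hedged sketch of that raw SQL escape hatch; note that entities the context already tracks will not know about rows changed this way:
// delete directly in the database, bypassing the change tracker
context.Database.ExecuteSqlCommand("DELETE FROM Posts WHERE Title LIKE 'x%'");

// query with raw SQL, still materialized as tracked Post entities
var posts = context.Posts.FromSql("SELECT * FROM Posts WHERE Title LIKE 'x%'").ToList();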

Unit of Work and Repository patterns


The Repository pattern is an example of what I was calling before an alternative to Entity Framework: a separation of data access from business logic that improves testability and keeps distinct responsibilities apart. That doesn't mean you can't do it with EF, but sometimes it feels pretty shallow and developers may be tempted to skip this extra encapsulation.

A typical example is getting a list of items with a filter, like blog posts starting with something. So you create a repository class to take over from the Posts DbSet and create a method like GetPostsStartingWith. A naive implementation returns a List of items, but this actually hinders EF in what it tries to do. Let's assume your business logic requires you to return the first ten posts starting with 'A'. The initial code would look like this:
var posts=context.Posts.Where(p=>p.Title.StartsWith("A")).Take(10).ToList();
In this case the SQL code sent to the database is something like SELECT TOP 10 * FROM Posts WHERE Title LIKE 'A%'. However, code looking like this:
var repo=new PostsRepository();
var posts=repo.GetPostsStartingWith("A").Take(10).ToList();
will first pull all posts starting with "A" from the database, then take the first 10 in memory. Ouch! The solution is to return IQueryable instead of IEnumerable or a List, but then things start to feel fishy. Aren't you just creating a shallow proxy over the DbSet?
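
Here is a minimal sketch of such a repository returning IQueryable; PostsRepository and GetPostsStartingWith are the illustrative names used above, not an established API:
public class PostsRepository
{
    private readonly BloggingContext _context;

    public PostsRepository(BloggingContext context)
    {
        _context = context;
    }

    // returning IQueryable lets EF compose a later Take(10) into the generated SQL
    public IQueryable<Post> GetPostsStartingWith(string prefix)
    {
        return _context.Posts.Where(p => p.Title.StartsWith(prefix));
    }
}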

Unit of Work is some sort of encapsulation of similar activities using the same data, something akin to a transaction. Let's assume that we store the number of posts in the Counts table. So if we want to add a post we need to do the adding, then change the value of the count. The code might look like this:
var counts=new CountsRepository();
var blogs=new BlogRepository();
var blog=blogs.Where(b=>b.Name=="Siderite's Blog").First();
blog.Posts.Add(post);
counts.IncrementPostCount(blog);
blog.Save();
counts.Save();
Now, since this selects a blog and changes posts then updates the counts, there is no reason to use different contexts for the operation. So one could create a Unit of Work class that would look a bit like a common repository for blogs and counts. Let's ignore the silly example as well as the fact that we are doing posts operations using the BlogRepository, which is something that we are kind of forced to do in this situation unless we start to deconstruct EF operations and recreate them in our code. There is a bigger elephant in the room: there already exists a class that encapsulates access to database, caches the items retrieved and creates one atomic operation for both changes. It's the context itself! If we instantiate the repositories with a constructor that accepts a context, then all one has to do to atomize the operations is to put the code inside a using block.
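
To make the idea concrete, here is a hedged sketch in which the context itself acts as the unit of work; the repository classes and their methods are illustrative, not part of any library:
using (var context = new BloggingContext(options))
{
    var blogs = new BlogRepository(context);
    var counts = new CountsRepository(context);

    var blog = blogs.GetByName("Siderite's Blog");
    blog.Posts.Add(post);
    counts.IncrementPostCount(blog);

    // one SaveChanges call persists both changes as a single atomic batch
    context.SaveChanges();
}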

There are also controversies related to the use of these two patterns with EF. Rob Conery has a nice blog post suggesting Command/Query objects instead. His rationale is that if you have to pass a context object around, as above, there is not much decoupling involved.

I lean towards the idea that you need a Data Access Layer encapsulation no matter what. I would put the using block in a method of a dedicated class rather than pass the context around or drop the repository altogether. Also, since we saw that entity type is not a good criterion for separating "repositories" - I feel I should name them differently in this situation - and the intent of the methods is already declared in their names (like GetPosts...), these encapsulation classes should be separated by some other criteria, like ContentRepository and ForumRepository, for example.

Migrations


Migrations are cool! The idea is that when making changes to structure of the database one can extract those changes in a .cs file that can be added to the project and to source control. This is one of the clear advantages of using Entity Framework.

First of all, there are a zillion tutorials on how to enable migrations, most of them wrong. Let's list the possible ways you could go wrong:
  • Enable-Migrations is obsolete - older tutorials recommended to use the Package Manager Console command Enable-Migrations. This is now obsolete and you should use Add-Migration <Name>
  • Trying to install EntityFramework.Commands - due to namespace changes, the correct namespace would be Microsoft.EntityFrameworkCore.Commands anyway, which doesn't exist. EntityFramework.Commands is version 7, so it shouldn't be used in .NET Core. However, at one point or another, this worked if you added some imports and changed stuff around. I tried all that only to understand the sad truth: you should not install it at all!
  • Having a DbContext inheriting class that doesn't have a default constructor or is not configured for dependency injection - the migration tool looks for such classes then creates instances of them. Unless it knows how to create these instances, the Add-Migration will fail.

The correct way to enable migrations is... to install the packages from the Database First section! Yes, that is right, if you want migrations you need to install
Install-Package Microsoft.EntityFrameworkCore.Tools -Pre
Install-Package Microsoft.EntityFrameworkCore.SqlServer.Design
Only then may you open the Package Manager Console and run
Add-Migration FirstMigration
Note that I am discussing an SQL Server example. It is possible you will need other packages if using a different type of database.

The result is a folder called Migrations in which you will find two files: a snapshot and the migration itself. Here is an example of the snapshot:
[DbContext(typeof(BloggingContext))]
partial class BloggingContextModelSnapshot : ModelSnapshot
{
protected override void BuildModel(ModelBuilder modelBuilder)
{
modelBuilder
.HasAnnotation("ProductVersion", "1.0.0-rtm-21431")
.HasAnnotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn);

modelBuilder.Entity("EFCodeFirst.Blog", b =>
{
b.Property<int>("BlogId")
.ValueGeneratedOnAdd();

b.Property<string>("Url");

b.HasKey("BlogId");

b.ToTable("Blogs");
});

modelBuilder.Entity("EFCodeFirst.Post", b =>
{
b.Property<int>("PostId")
.ValueGeneratedOnAdd();

b.Property<int>("BlogId");

b.Property<string>("Content");

b.Property<string>("Title");

b.HasKey("PostId");

b.HasIndex("BlogId");

b.ToTable("Posts");
});

modelBuilder.Entity("EFCodeFirst.Post", b =>
{
b.HasOne("EFCodeFirst.Blog", "Blog")
.WithMany("Posts")
.HasForeignKey("BlogId")
.OnDelete(DeleteBehavior.Cascade);
});
}
}

And here is the migration itself:
public partial class First : Migration
{
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.CreateTable(
name: "Blogs",
columns: table => new
{
BlogId = table.Column<int>(nullable: false)
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
Url = table.Column<string>(nullable: true)
},
constraints: table =>
{
table.PrimaryKey("PK_Blogs", x => x.BlogId);
});

migrationBuilder.CreateTable(
name: "Posts",
columns: table => new
{
PostId = table.Column<int>(nullable: false)
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
BlogId = table.Column<int>(nullable: false),
Content = table.Column<string>(nullable: true),
Title = table.Column<string>(nullable: true)
},
constraints: table =>
{
table.PrimaryKey("PK_Posts", x => x.PostId);
table.ForeignKey(
name: "FK_Posts_Blogs_BlogId",
column: x => x.BlogId,
principalTable: "Blogs",
principalColumn: "BlogId",
onDelete: ReferentialAction.Cascade);
});

migrationBuilder.CreateIndex(
name: "IX_Posts_BlogId",
table: "Posts",
column: "BlogId");
}

protected override void Down(MigrationBuilder migrationBuilder)
{
migrationBuilder.DropTable(
name: "Posts");

migrationBuilder.DropTable(
name: "Blogs");
}
}

Note that this is not something that copies the changes in data, only the ones in the database schema.

Conclusions


Yes, no project code in this post. I wanted to explore Entity Framework in my project, but if I had continued like that the post would have become too long. As you have seen, there are advantages and disadvantages to using Entity Framework, but at this point I find it more valuable to use it and meet any problems I find head on. Besides, the specifications of my project don't call for complex database operations, so the data access mechanism is quite irrelevant.

Stay tuned for the next post in which we actually use EF in ContentAggregator!

While researching the new .NET Core features and functionalities I've stumbled upon this pattern for hiding functionality, but also making it accessible when needed.

There is a long history of Microsoft writing code as closed as possible: classes and interfaces are internal, protected, sealed and all that jazz. If you have ever tried to copy paste Microsoft .NET source code into your project in order to modify it to your needs, you know what I mean. More often than not I gave up because of the immense chain of dependencies that had to be copy pasted along in order for a small piece of code to work.

Well, .NET Core is now open source and there is a strong current of moving away from such practices. One pattern that drew my attention is the IInfrastructure<T> interface and pattern used in EntityFramework. Basically, instead of exposing rarely used members directly, you hide them within a generic interface that can be retrieved at will.

Yes, it is possible to do the same thing with an explicitly implemented interface, but this is more of a two step way of doing it (and also of uncluttering class signatures). The concrete example is DbContext, which is an explicit implementation of IInfrastructure<IServiceProvider>. IServiceProvider has a GetService<T> method that returns specific implementations of interfaces or base classes. Then, with a nice extension method called GetInfrastructure<T>, one can get the service provider. For example one can retrieve the relational type mapper from a context using:
var serviceProvider=context.GetInfrastructure();
var mapper=serviceProvider.GetService<IRelationalTypeMapper>();

I find it interesting as a general pattern, allowing one to expose innumerable interface signatures without inheriting from them all. A class can enable any number of mechanisms for discovery and execution simply by implementing its own service provider. Moreover, if there is some sort of general Service Locator pattern in place, classes can locally override that mechanism while leaving the rest in place. Clearly there is potential for abuse, but I also see it as a way to clearly represent and separate concerns.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

In the setup part of the series I've created a set of specifications for the ASP.Net MVC app that I am building and I manufactured a blank project to start me up. There was quite a bit of confusion on how I would continue the series. Do I go towards the client side of things, defining the overall HTML structure and how I intend to style it in the future? Do I go towards the functionality of the application, like google search or extracting text and applying word analysis on it? What about the database where all the information is stored?

In the end I opted for authentication, mainly because I have no idea how it's done and also because it naturally leads into the database part of things. I was saying that I don't intend to have dedicated users of the application; they can easily connect with their Google account - which hopefully I will also use for searching (I hate that API!). However, that's not quite how it goes: there will be an account for the user, only it will be connected to an outside service. While I completely skirt the part where I have to reset the password or email the user and all that crap - which, BTW, was working rather well in the default project - I still have to set up entities that identify the current user.

How was it done before?


In order to proceed, let's see how the original project did it. It was first setting a database context, then adding Identity using a class named ApplicationUser.
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();

ApplicationUser is a class that inherits from IdentityUser, while ApplicationDbContext is something inheriting from IdentityDbContext<ApplicationUser>. Seems like we are out of luck and the identity and db context are coupled pretty strongly. Let's see if we can decouple them :) Our goal: using OAuth to connect with a Google account, while using no database.

Authentication via Google+ API


The starting point of any feature is coding and using autocomplete and Intellisense until it works... I mean, reading the documentation. In our case, the ASP.Net Authentication section, particularly the authentication using Google part. It's pretty skimpy and it only covers Facebook. I found another link that actually covers Google, but it's for MVC 5.

Enable SSL


Both tutorials agree that first I need to enable SSL on my web project. This is done by going to the project properties, the Debug section, and checking Enable SSL. It's a good idea to copy the https URL and set it as the start URL of the project. Keep that URL in the clipboard, you are going to need it later, as well.



Install Secret Manager


Next step is installing the Secret Manager tool, which in our case is already installed, and specifying a userSecretsId, which should also be already configured.

Create Google OAuth credentials


Next let's create credentials for the Google API. Go to the Google Developer Dashboard, create a project, go to Credentials → OAuth consent screen and fill out the name of the application. Go to the Credentials tab and Create Credentials → OAuth client ID. Select Web Application, fill in a name as well as the two URLs below. We will use the localhost SSL URL for both like this:
  • Authorised JavaScript origins: https://localhost:[port] - the URL that you copied previously
  • Authorised redirect URIs: https://localhost:[port]/account/callback - TODO: create a callback action
Press Create. At this point a popup with the client ID and client secret appears. You can copy the two values from it, close the popup and download the json file containing all the data (project id and authorised URLs among them), or copy the values directly from the credentials dialog later.



Make sure to go back to the Dashboard section and enable the Google+ API, in the Social APIs group. There is a quota of 10000 requests per day, I hope it's enough. ;)



Writing the authentication code


Let's use the 'dotnet user-secrets' tool to save the two credential values. Run the following two commands in the project folder:
dotnet user-secrets set Authentication:Google:ClientId <client-Id>
dotnet user-secrets set Authentication:Google:ClientSecret <client-Secret>
Use the values from the Google credentials, obviously. In order to get to the two values, all we need to do is call Configuration["Authentication:Google:ClientId"] in C#. In order for this to work we need to have loaded the package Microsoft.Extensions.Configuration.UserSecrets in project.json and, somewhere in Startup, code that looks like this: builder.AddUserSecrets();, where builder is the ConfigurationBuilder.
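
For reference, a minimal sketch of that Startup configuration code, close to what the default ASP.Net Core 1.0 template generates (the exact shape of your Startup may differ):
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        // loads the values saved with 'dotnet user-secrets set'
        builder.AddUserSecrets();
    }

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }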

Next comes the installation of the middleware responsible for authenticating google and which is called Microsoft.AspNetCore.Authentication.Google. We can install it using NuGet: right click on References in Visual Studio, go to Manage NuGet packages, look for Microsoft.AspNetCore.Authentication.Google ("ASP.NET Core contains middleware to support Google's OpenId and OAuth 2.0 authentication workflows.") and install it.



Now we need to place this in Startup.cs:
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationScheme = "Cookies",
AutomaticAuthenticate = true,
AutomaticChallenge = true,
LoginPath = new PathString("/Account/Login")
});

app.UseGoogleAuthentication(new GoogleOptions
{
AuthenticationScheme="Google",
SignInScheme = "Cookies",
ClientId = Configuration["Authentication:Google:ClientId"],
ClientSecret = Configuration["Authentication:Google:ClientSecret"],
CallbackPath = new PathString("/Account/Callback")
});
Yay! code!

Let's start the website. A useful popup appears with the message "This project is configured to use SSL. To avoid SSL warnings in the browser you can choose to trust the self-signed certificate that IIS Express has generated. Would you like to trust the IIS Express certificate?". Say Yes and click OK on the next dialog.



What did we do here? First, we used cookie authentication, which is not some gluttonous bodyguard with a sweet tooth, but a cookie middleware, of course, and our ticket for authentication without using identity. Then we used another middleware, the Google authentication one, linked to the previous with the "Cookies" SignInScheme. We used the ClientId and ClientSecret we saved previously in the Secret Manager. Note that we specified an AuthenticationScheme name for the Google authentication.

Yet, the project works just fine. I need to do one more thing for the application to ask me for a login and that is to decorate our one action method with the [Authorize] attribute:
[Authorize]
public class HomeController : Controller
{
public IActionResult Index()
{
return View();
}

}
After we do that and restart the project, the start page will still look blank and empty, but if we look in the network activity we will see a redirect to a nonexistent /Account/Login, as configured:


The Account controller


Let's create this Account controller and see how we can finish the example. The controller will need a Login method. Let me first show you the code, then we can discuss it:
public class AccountController : Controller
{
public IActionResult Login(string ReturnUrl)
{
return new ChallengeResult("Google",new AuthenticationProperties
{
RedirectUri = ReturnUrl ?? "/"
});
}
}

We simply return a ChallengeResult with the name of the authentication scheme we want and the redirect path that we get from the login ReturnUrl parameter. Now, when we restart the project, a Google prompt welcomes us:

After clicking Allow, we are returned to the home page.



What happened here? The home page redirected us to Login, which redirected us to the google authentication page, which then redirected us to /Account/Callback, which redirected us - now authenticated - to the home page. But what about the callback? We didn't write any callback method. (Actually I first did, complete with a complex object to receive all the parameters. The code within was never executed). The callback route was actually defined and handled by the Google middleware. In fact, if we call /Account/Callback, we get an authentication error:


One extra functionality that we might need is the logout. Let's add a Logout method:
public async Task<IActionResult> LogOut()
{
await HttpContext.Authentication.SignOutAsync("Cookies");

return RedirectToAction("index", "home");
}
Now, when we go to /Account/Logout we are redirected to the home page, where the whole authentication flow from above is being executed. We are not asked again if we want to give permission to the application to use our google credentials, though. In order to reset that part, go to Apps connected to your account.

What happens when we deny access to the application? Then the callback action will be called with a different set of parameters, triggering a RemoteFailure event. The source code on GitHub contains extra code that covers this scenario, redirecting the user to /Home/Error with the failure reason:
Events = new OAuthEvents
{
OnRemoteFailure = ctx =>
{
ctx.Response.Redirect("/Home/Error?ErrorMessage=" + UrlEncoder.Default.Encode(ctx.Failure.Message));
ctx.HandleResponse();
return Task.FromResult(0);
}
}

What about our user?


In order to check the results of our work, let's add some stuff to the home page. Mainly I want to show all the information we got about our user. Change the index.cshtml file to look like this:
<table class="table">
@foreach (var claim in User.Claims)
{
<tr>
<td>@claim.Type</td>
<td>@claim.Value</td>
</tr>
}
</table>
Now, when I open the home page, this is what gets returned:
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier 111601945496839159547
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname Siderite
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name Siderite Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress sideritezaqwedcxs@gmail.com
urn:google:profile https://plus.google.com/111601945496839159547

User is a System.Security.Claims.ClaimsPrincipal object that contains not only a simple bag of Claims, but also a list of Identities. In our example I only have one identity and User.Claims are the same as User.Identities[0].Claims, but in other cases, who knows?
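
As a small illustration, this is how one could pull a single claim out of the principal from a controller or view; the email claim type listed above is what ClaimTypes.Email resolves to:
// returns null if the claim is not present
var email = User.Claims
    .FirstOrDefault(c => c.Type == System.Security.Claims.ClaimTypes.Email)
    ?.Value;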

Acknowledgements


If you think it was easy to scrape together this simple example, think again. Before the OAuth2 system there was an OpenID based system that used almost the same method and class names. Then there is the way they did it in .NET proper and the way they do it in ASP.Net Core... which changed recently as well. Everyone and their grandmother has a blog about how to do Google authentication, but most of them either don't apply or are obsolete. So, without further ado, let me give you the links that inspired me to do it this way:

Final thoughts


By no means is this a comprehensive walkthrough of authentication in .NET Core; however, I am sure I will cover a lot more ground in the posts to come. Stay tuned for more!

Source code for the project after this chapter can be found on GitHub.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

After I spent a day working on the application I realized that many of the concepts I take for granted have not been discussed. Consider this part an introduction to the things *I* know about ASP.Net MVC. :)

Emveesee


The Model View Controller pattern attempts to separate three different concerns of the application: the flow (Controller), the display and the user interface (View) and the various data objects that are passed, validated and manipulated (Model), which are also responsible for the logic and rules of the application. In ASP.Net, MVC means:
  • the models are POCOs, for which validation constraints, display options and other aspects of how they are intended to be used are expressed with attributes decorating the classes or their properties.
  • the controllers are classes inheriting from Controller, their names ending with "Controller". Their methods are called controller actions and represent endpoints for HTTP calls. Attributes are again used to configure these actions, like if they need to be accessed by POST or PUT. The responsibility of controllers is to... well... control the action in the application.
  • the views are files with the .cshtml extension. They are found in the Views folder and the convention is that the view for a controller action is found in /Views/ControllerName(without "Controller")/ActionName. While I guess someone could hack MVC to use the old ASP.Net engine, the preferred engine for views is Razor (the one with the mustaches). The direction of the ASP.Net MVC views and templates is fine control over the generated markup, as opposed to the old ASP.Net way of encapsulating everything in server side user controls. The preferred way to encapsulate control behavior is now client side, with frameworks helping with it like AngularJS and ReactJS.

Add to this services and middleware. Services are encapsulations of logic. For example, a service may determine aspects of the flow of data from the controller to the view or validate the values of a model class. While these two example services would belong to the Controller and Model parts, respectively, in code they are usually implementations of interfaces - for testing and dependency injection reasons - whose names declare their purpose. Middleware are components - they can be composed - that react to what happens in the HTTP stream (requests and responses). Most of what MVC does internally depends on middleware and services.

MVC controversies


The ugly truth is that MVC has been around for 30 years and people implement it differently every time. Because it spans a large domain (it handles everything, basically) the vague wording used to describe the pattern causes a lot of confusion. Why did Microsoft choose MVC for their new version of ASP.Net? Well, first of all because their first attempt - ASP.Net Forms, which tried to bring the desktop application development style to the web - failed miserably. Second, because at the time they were making the decision, Ruby on Rails was the coolest thing since man discovered fire. I mean, it made a shitty programming language like Ruby look useful! (Just kidding, irate Ruby developer. Was trying to see if you're paying attention). People are still fighting over whether application logic is supposed to be handled by the model, the controller, or even a separate part (services?).

My own interpretation is that models are mostly data classes. They should have code that handles their own internal state, but nothing else. The controller controls the flow of the data, meaning that if the user accesses an action, the method will direct execution towards the correct component handling that action. Therefore, for me, application logic is neither in the controller or the models.

Imagine a family: the wife tells the husband "go to the market and buy 10 eggs and some tomatoes!". Wifey is the user, the husband is the controller. He understands the intent of the user and directs execution towards its implementation. Now, the husband could go to the market and buy the eggs himself, but that would be bad form (heh heh), so he goes to his two sons and tells them "Frank and Joe, go to the market! Frank, get me some eggs, 10 of them. Joe, get me some tomatoes for a salad. Now, git!" (get it? git? I am on a roll). At this point the sons are confused: are they Model or are they Controller? Meanwhile the eggs and tomatoes are clearly part of the model. An egg may spoil, for example, and that is probably the responsibility of the egg. You may consider that the market basket containing eggs and tomatoes is the model, conveniently leaving aside the functionality when the user sees the quality of the purchases and chastises the poor controller for it.

Certainly, ASP.Net MVC leans towards my interpretation of things. Classes in the Models folder in the default application are just classes with decorated properties and the piece of code that interprets the attributes and their values, that binds parameters to properties, that is a service. The code that does stuff, after the controller determined it's OK to be executed, are again managers and services. For example there are sign-in and user managers in the code, which are implementations from the .Net code itself. If one inlines all of them, it looks to me as if the controller is taking care of the logic of the application, not the model.

Convention over configuration


ASP.Net MVC embraced the Convention over configuration paradigm. You don't need to hook up controllers anywhere, or define the dependency between views and controllers. A controller for movies will be a class called MoviesController, placed in the Controllers folder, and the convention is that every call to its actions would start with /movies. A view for a List action would be placed in /Views/Movies/List.cshtml and expected to be called as http(s)://host:port/movies/list. Typical code would look like this:
public IActionResult List()
{
return View();
}
View is a shorthand for rendering the view of this method by naming convention.

The pipeline for MVC is based on what they call middleware - what in the old ASP.Net were called handlers, I guess. The main component of an MVC application is its WebHostBuilder, which then uses a class to configure itself, which is usually named Startup. The methods and properties in Startup will be executed/populated using dependency injection, meaning that parameters will be interfaces: their specific implementation will be determined by ASP.Net MVC based on user configuration, if any.

Same thing applies to action parameters. Values are obtained from the body of the call or from the HTTP GET or POST parameters. An action like List(int id, string name) will get the correct parameters from a call like /list?id=1&name=Steven. Based on the routing (the default one: {controller=Home}/{action=Index}/{id?} being the one responsible for the most common REST conventions), the same result can be achieved with /list/1?name=Steven, for example. The values can be retrieved also for a method looking like this: List(User user), if the User class has Id and Name properties. This is called model binding and most of the interaction between the browser and the .NET code at the backend will be done through it.
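
To illustrate, here is a hedged sketch of model binding onto a complex type; the User class and UsersController are made up for the example:
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UsersController : Controller
{
    // GET /users/list/1?name=Steven or /users/list?id=1&name=Steven
    // both bind the route value and the query string parameter onto the User object
    public IActionResult List(User user)
    {
        return Json(new { user.Id, user.Name });
    }
}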

Extension methods: dependency injection and middleware


The pattern of configuration for your application is to have it built by fluent methods, using the so called builder pattern. You start with the WebHostBuilder, for example, then you .UseKestrel, .UseIISIntegration, .UseStartup, etc. The default template code looks like this:
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.Build();

host.Run();
These methods are extension methods, their complex functionality hidden behind this simple pattern of use. Check out the simple .AddMvc() method and how deceptively it covers so much complexity. In the source code you find other extension methods, each with its own complexity, eventually leading either to injecting some dependency or to configuring and adding middleware to the MVC pipeline. It seems to me that dependency injection methods start with Add, while the middleware inserting methods start with Use.

As an example, let's take one of the lines inside .AddMvc(): builder.AddRazorViewEngine();. Following one of the many branches defined by this extension pattern (I am still not sure how much I like it) we get to MvcRazorMvcCoreBuilderExtensions.AddRazorViewEngineServices, which injects a lot of dependencies. Take a look at this:
// This caches compilation related details that are valid across the lifetime of the application.
services.TryAddSingleton<ICompilationService, DefaultRoslynCompilationService>();
One can change the implementation of the compilation service! Alternatively, let's look at .UseStaticFiles(). It's a wrapper over
app.UseMiddleware<StaticFileMiddleware>();

Open source .NET Core


As you've seen from the examples above, I've often linked to the source code of methods and classes. That is because Microsoft finally decided to open source the .NET Core code, thus letting people find and solve their own problems and allowing developers to track down where pesky, hard to explain bugs are coming from. The extension method pattern makes it difficult to explore what is going on, as you have to switch from one project to another in the GitHub interface (or your own file system, depending on how you decide to work). Dependency injection makes it even harder, as you have to first find the interface responsible for your current programming task, then find all implementations and what injected them in the first place. I tried to find some decent exploration tool, but found none, and I am too busy to make one of my own. Homework? :)

Even so, it is a great boon that one can look into the innards of the Microsoft code. It not only helps pinpoint issues, but also teaches about how one of the biggest software companies writes code. I don't want to dissect middleware in this post, but I strongly suggest you take a look at how they are made and how they are working. Whenever I find it's useful, I will mention the middleware responsible with what I am discussing, so try to make an effort to look its source code and see what it actually does.

Attributes


Attributes are used all over ASP.Net MVC. They tell which HTTP methods a controller action accepts, how to authorize access, how to validate models, how to bind the incoming parameters to models. Here is an example:
//The user needs to be authorized to access this method
[Authorize]
//only POST requests
[HttpPost]
// over HTTPS are accepted
[RequireHttps]
//The URL for this method will be /util not /Hardcore
[ActionName("util")]
//controller method
public IActionResult Hardcore([FromBody] /*the data will be taken from the body of the request*/ HardcoreData data)
{
//only show the view if the model is valid
if (ModelState.IsValid) return View();
//otherwise return a bad request
return BadRequest(ModelState);
}

public class HardcoreData
{
// value needs to be set (not null)
[Required]
// the format of Id needs to be a URL
[DataType(DataType.Url,ErrorMessage = "The Id need to be a URL")]
// shorter or equal than 500 characters
[StringLength(500,ErrorMessage ="Maximum URL length is 500 characters")]
public string Id { get; set; }

//Range validation from 0 to 100
[Range(0,100,ErrorMessage ="Value needs to be between 0 and 100 and even")]
//Custom validation using the class and method mentioned
[CustomValidation(typeof(HardcoreDataValidator),"ValidateValue")]
public int Value { get; set; }
}

public class HardcoreDataValidator {
public static ValidationResult ValidateValue(int value)
{
return value % 2 == 0
? ValidationResult.Success
: new ValidationResult("Value needs to be even");
}
}

These attributes will be read and used by various services injected at startup. Everything can be changed, so for example you may change the validation system to interpret the RangeAttribute values differently or ignore RequiredAttribute or use custom attributes. Attribute classes only mark the intent of the developer, but do almost nothing themselves.

Models


I've mentioned previously that models are used to move data back and forth. Model binding is responsible for taking HTTP requests and turning their parameters into C# classes. Services then use those models, like EntityFramework, or the validation system or the Razor views. You've seen in the previous example how an object may be read from the body of a request. Similarly, they can be read from the HTTP parameters sent to the action method. Read an example of an investigation to see how various methods of model binding can be used with different attributes.

An important use case for models is validation. Some of it was demonstrated above. Read more in the documentation. An interesting part of it is the client validation that is implemented out of the box with the right javascript imports and using the right attributes.

Views


In ASP.Net Forms, the code and the presentation were (somewhat) separated into .aspx markup and .cs codebehind. The aspx syntax is probably isomorphic with the Razor syntax and I remember that at one time you could use ASP.Net Forms with Razor. In MVC, views have code in them, using Razor, but they are not strongly coupled with a specific piece of code. In fact, one can reuse a view for multiple models, especially the partial ones - which take over from UserControls, I guess. So in fact there is quite an overlap between ASP.Net Forms and MVC, if you add a separate injection mechanism to Forms in order to decouple markup and codebehind.

For views, the biggest difference as far as I am concerned is the encapsulation of reusable content, what before were controls. Panels, Grids, UserControls, all of them inherited from a Control class that handled the various ASP.Net phases of its lifecycle. There was no job interview in which you weren't asked about the ASP.Net lifecycle and now it's irrelevant. Nowadays, you render things from the markup up, with focus on the client side. HTML helpers and Tag Helpers are what allows you to encapsulate some rendering logic.

What caused this switch to a new paradigm? Well, I would say HTML5 and javascript frameworks. You would have a wonderful Grid control rendering a nice table layout and the developer would shout foul because he wants everything with DIVs. You would have a nice Calendar extension control and the dev would dismiss it immediately because he wanted to use the latest jQueryUI client side calendar. Most of all, it would be because the web designer would use Microsoft agnostic tools that create pure HTML and then the poor dev would have to reverse engineer that in order to get the same layout with default controls. Today a grid is only a DIV, a Razor @foreach and a template for the rows using the values of the items displayed. Certainly all of this can be encapsulated further into your own library of HTML helpers, but you would have complete control over it.

In ASP.Net MVC Core partial views will be superseded by View Components. If you thought the Microsoft interpretation of MVC was a little vague, this will make your head explode. View Components are most similar to controllers, only you can't call them directly via HTTP, they are not part of the controller lifecycle and they can't use filters. They have views associated with them in /Views/{ControllerName}/Components/{ViewComponentName}/{ViewName} or /Views/Shared/Components/{ViewComponentName}/{ViewName}. You may invoke them from a controller or from a view, using the wonderfully ridiculous syntax
@await Component.InvokeAsync("Name of view component", <anonymous type containing parameters>)
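To make that concrete, here is a hedged sketch of a minimal view component; the class name, the parameter and the invocation are all invented for this example:

using Microsoft.AspNetCore.Mvc;

public class LatestNewsViewComponent : ViewComponent
{
    // By convention, renders /Views/Shared/Components/LatestNews/Default.cshtml
    public IViewComponentResult Invoke(int count)
    {
        var titles = new[] { "First item", "Second item" };
        return View("Default", titles);
    }
}

It would then be invoked from a view with something like @await Component.InvokeAsync("LatestNews", new { count = 5 }).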

If not specified by the user, views are discovered by their location in the project. Specifying the view means specifying the exact path of the cshtml file, an ugly and not recommended solution. Otherwise, when you just return View();, MVC looks in /Views/ControllerName/ViewName.cshtml and then in /Views/Shared/ViewName.cshtml. As we are accustomed, this default behavior can be changed by implementing a different IViewLocationExpander.
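As a hedged sketch of how that customization might look (the theme folder idea is made up for illustration; {0} stands for the view name and {1} for the controller name):

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Razor;

public class ThemeViewLocationExpander : IViewLocationExpander
{
    public void PopulateValues(ViewLocationExpanderContext context) { }

    public IEnumerable<string> ExpandViewLocations(
        ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        // Look in a /Themes folder before falling back to the default locations
        return new[] { "/Themes/Default/{1}/{0}.cshtml" }.Concat(viewLocations);
    }
}

It would be registered in ConfigureServices with something like services.Configure<RazorViewEngineOptions>(options => options.ViewLocationExpanders.Add(new ThemeViewLocationExpander()));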

You may specify a model type for a view with the @model directive, which helps a lot with Intellisense. Otherwise, you may pass server side data using the ViewBag/ViewData dictionaries or access the Model property dynamically, which allows using any property name but doesn't help with Intellisense: using the wrong property name will only generate runtime errors.

Needless to say, before I actually go into the code, views are a bit of a mystery to me as well. They also clash a bit with the architecture of an application that makes most sense to me: API + client side code. I feel I need to discuss this, so...

Types of MVC application architecture


In .NET proper ASP.Net MVC and ASP.Net Web API were two different things, with huge overlap of functionality. In .NET Core, they are the same thing, gathered under the umbrella of MVC. A controller action can receive AJAX calls in JSON format and return properly formatted HTML5 markup, for example. It is very difficult to find a reasonable way to separate the two concepts in .NET. However, there are two major ways of using them, separating them by use, as it were. The MVC application that uses controllers and views is still a product of turning ASP.Net Forms into a Ruby on Rails clone. While the overall architecture of the application has changed, giving more control to the developers, it also constrains them into a type of functional architecture that may be - frankly - obsolete already.

There are three types of architectures that I will discuss for a very simple application that displays news items using their title, description, url and image:
  1. starting from an ASP.Net Forms page and a list of NewsItem objects in C#, we use an .aspx page that contains a Grid control. We define the way the title is rendered as a link, the description as a short text block and the image as a side thumbnail.
  2. starting from an ASP.Net MVC controller and a list of NewsItem objects in C#, we render a view which uses a Razor @foreach to display sections with a title link, a description and a thumbnail.
  3. starting from an HTML page, we fire an AJAX call to a .NET API that returns a Javascript array of NewsItem objects, that then we render as sections with a title link, a description and a thumbnail, maybe by using an MVC client-side library like AngularJS.

See what I mean? The first two versions are basically the same. Whether the mechanism for rendering comes from a Forms Control or from a Razor loop is irrelevant to the overall design of the app. The third, though, presents some very interesting ideas:
  • The website is not a .NET website. It's pure HTML. It can be served from any type of server, on any platform, can be created with any tools.
  • There is no visual interface on the actual .NET server side. It's a simple API that sends and receives data in serialized form.
  • The MVC architecture moves towards the client side, where views, models and controllers are just Javascript code, HTML and CSS.
  • There is a very clear functional separation of concerns. There is server side development: C#, serialization and persistence of data, sensitive or resource intensive processing, the good ole things that .NET developers love. And there is the client side development: HTML, Javascript, CSS, responsive design, native mobile apps and all that crap that designers and frontend developers do.

(Again, joking, dear frontend or mobile developer! Just making sure you weren't asleep)

The conclusions are staggering, actually. With no concern for presentation, the server side API framework can be incredibly tiny. Efforts can be turned towards making it efficient, fast, secure, using less resources, being scalable. There is no need for a Razor engine, HTML helpers, partial views, View Components, no one cares about them. Instead what it enables is working with any kind of client side user interface. Mobile native apps from all platforms, multiple web sites, other APIs, they all could just attach to the API and present functionality. Meanwhile, the client side interface developer is exempt of all the dependencies on Microsoft tools, of concerns over how many servers they are, where they are located, the general background functionality of the application.
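To show how small that server side can become, here is a hedged sketch of the API-only flavor of the news example; NewsItemDto and its content are placeholders invented for this post:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public class NewsItemDto
{
    public string Title { get; set; }
    public string Description { get; set; }
    public string Url { get; set; }
    public string ImageUrl { get; set; }
}

[Route("api/news")]
public class NewsApiController : Controller
{
    // GET api/news - returns JSON that any client (web page, mobile app, another API) can consume
    [HttpGet]
    public IEnumerable<NewsItemDto> Get()
    {
        return new[]
        {
            new NewsItemDto { Title = "Hello", Description = "A news item", Url = "http://example.com" }
        };
    }
}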

I've worked in such a way, it was great! People working on iOS, Android and web applications would just come to me and ask for an API that does this and that. After days of fighting over how the API signature should look :) everyone would just do what they are good at. Even more, because we were so different, bugs were easier to discover when we tried to connect our work over this simple interface.

The downsides are blessings in disguise. As an API call needs to finish quickly and return small quantities of data, the developer is forced to consider from the get go things like: pagination, chunking, asynchronous programming, concurrency, etc. Instead of importing a list of URLs for the news and then waiting for the output of the page while the server side is spidering the data, the app needs to show "importing data, please wait" and then periodically query the API if the import is finished. When hundreds of people try to do this, there is no problem, as the list of links to spider just grows and the same process extracts the data. If two users import the same links, they only get spidered once.

Even if the application that we will be working on is not based on this design, consider from the beginning if you even need to use ASP.Net in an MVC way. The world is moving away from "applications" and towards "services". Perhaps the API itself would only be a front that accesses other APIs in the background, microservices that are optimized perfectly for the tiny bit they perform.

Data Access Architecture


A small rant against ORMs


Just like with the overall design, one may use different ways of accessing data. The ASP.Net MVC guide for working with data suggests a single clear path: the Microsoft ORM, Entity Framework. As I have yet to use it in any serious capacity, I will not explain EF concepts here. I will ask you a question instead: do you even need an ORM?

Object Relational Mappers are tools that abstract the database from the viewpoint of a developer. They work with contexts and sets and strongly typed objects and have great Intellisense support. Started as a way to map an existing database to a .NET data framework, Entity Framework now goes the other direction: code first! You start writing your app, using Entity Framework as if you already had everything you need, and it creates and maintains the database in the background. Switching from SQL Server to PostgreSQL or SQLite or even a custom data persistence method is a breeze. They sound great!

However, if you already know what persistence model you use and are proficient in designing and optimizing the data structure there, using an ORM starts to lose its appeal. Just as with ASP.Net Forms, you have no control over the way the ORM chooses to communicate with the database. It may do better than you or it may do horribly bad. You start developing your app, everything works fine, you add feature after feature and when you finally load the actual real life data something goes wrong and you have no idea what and where.

There already are patterns of abstracting the data access and usually they involve consuming the data from a separate library (or service) that encapsulates the desired behavior and is structured by intention. Why would I get all NewsItems, when there are millions of them and in no situation I can conceive would I need all of them? Why would I get a NewsItem by Id, when the Id means nothing to me and things like the URL are more relevant? Why would I choose to store in memory all the items I want to delete, when my condition for them to disappear is a simple WHERE condition?
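Here is a hedged sketch of what such an intention-structured data access interface could look like; INewsItemManager, NewsItem and the method names are all invented for illustration, and the implementation behind the interface could be raw SQL, an ORM or anything else:

using System;
using System.Collections.Generic;

public class NewsItem
{
    public int Id { get; set; }
    public string Url { get; set; }
    public string Title { get; set; }
    public DateTime Published { get; set; }
}

public interface INewsItemManager
{
    // Queries expressed by intention, not by table structure
    NewsItem GetByUrl(string url);
    IEnumerable<NewsItem> GetLatest(int count);
    void DeleteOlderThan(DateTime cutoff);
}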

OK, OK, I know that if you worked with Entity Framework you have a lot of (good) answers to all of these questions. Yet my original question still needs to be considered before you embark on your development journey: do you even need Entity Framework?

The main disadvantages I see for Entity Framework specifically are:
  • It diffuses the API for working with data. Instead of writing a NewsItemManager class that gives you items by url and by date, for example, the developer is tempted to write custom queries inside the logic of the application. This leads to difficulties refactoring the code or redesigning the application.
  • It hides the complexity of the database. Instead of working with the actual stored data, you work with an abstraction that may look good to you, but hides problems that you are tempted to ignore.
  • It forces switches of competencies. If you want to debug and optimize your data access you now need an Entity Framework expert, rather than a database expert.
  • It causes technical debt that you are not even aware of. From this list, I believe this to be the most insidious. There is a chance, maybe a very small one, that your application needs a functionality that Entity Framework was not designed for. EF works great in every other area except that one. And when you try to fix it, you have several options that are all horrible: create a separate system for it, hack Entity Framework into submission, or leave it slow and bad because everything else works so well. At this point, when you notice there is a problem, it's already too late.

In our application I will gladly use Entity Framework. It seems some of the basic functionality of MVC, like identity, are strongly designed to work together with the Entity Framework data abstraction. Yet even so, I will try to abstract the data layer - mostly because I have no need to implement it for this demo. This will probably lead to an interesting consequence: the default MVC modules will use EF in a way, while I will use it for my application in another way.

Entity Framework concepts


An actual advantage of EF that I think is great is the concept of migrations. EF is able to save modifications to the data layer as C# code files that can be added to source control. This helps a lot when working in teams of multiple people.

As an aside, I was working for a project that used stored procedures to access the database. The data access layer was getting and changing data using these functions and procedures that were saved in a folder as .SQL create files. It was easy during deployment to delete all procedures and functions and then recreate them, but how about database schema or data changes? For this we used a folder of .SQL changes. For each file we needed to create also a rollback file, to "fix" whatever this was doing. They were difficult to manage at first, but after a while you got the hang of it. I wonder if Entity Framework allows for this kind of workflow. That would be great. Aside over.
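For what it's worth, the EF tooling does seem to support a similar workflow. A hedged sketch of the commands (the migration names are placeholders):

dotnet ef migrations add AddCategoryToNewsItem - saves the current model changes as a C# migration file that can go into source control
dotnet ef database update - applies any pending migrations to the database
dotnet ef database update SomeEarlierMigration - reverts the database to the state of an earlier migration, the "rollback" of the story above
dotnet ef migrations script - generates an SQL script of the changes, for the more DBA-minded deployments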

The root of an EF model seems to be the context. Inheriting from DbContext (or, as in the default template app, from IdentityDbContext<ApplicationUser>, coupling it inexorably with the identity of the user), this class need not have a lot of code of its own at first. As time goes by, changes to the data mechanism will probably be hooked here, as a single point of change. The DbContext will have properties of type DbSet<SomeEntity> which will be used to query said entities. A simple services.AddDbContext<MyDbContext>() in the startup class declares your willingness to work with one context or another.
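A minimal, hedged sketch of such a context, reusing the NewsItem class invented earlier:

using Microsoft.EntityFrameworkCore;

public class ContentDbContext : DbContext
{
    public ContentDbContext(DbContextOptions<ContentDbContext> options)
        : base(options) { }

    // One DbSet per entity you want to query
    public DbSet<NewsItem> NewsItems { get; set; }
}

It would be registered in ConfigureServices with something like services.AddDbContext<ContentDbContext>(options => options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));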

A mix of conventions and attributes defines the mapping between your context and the underlying database. A good link to explore this can be found here.

Another interesting quality of Entity Framework is that you can use it in memory, very useful with automated testing. Here is a link that explains it.
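A hedged sketch of how that could look in a test, assuming the Microsoft.EntityFrameworkCore.InMemory package is referenced and reusing the ContentDbContext invented above (the exact UseInMemoryDatabase overloads differ between EF Core versions):

using Microsoft.EntityFrameworkCore;

public class NewsItemTests
{
    public void CanAddItems()
    {
        var options = new DbContextOptionsBuilder<ContentDbContext>()
            .UseInMemoryDatabase("TestDatabase") // no real database is touched
            .Options;

        using (var context = new ContentDbContext(options))
        {
            context.NewsItems.Add(new NewsItem { Title = "Test", Url = "http://example.com" });
            context.SaveChanges();
        }
    }
}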

Using LINQ to Entities and the DbSet properties of the context, one can create, read, update and delete records, but there are some differences from what you may be used to. The delete or update operations by default need to first retrieve the items, then alter them. A good intro to the changes in Entity Framework 7 can be found here.
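For example, a hedged sketch of that retrieve-then-alter pattern for deleting items, again with the invented context and entity:

using System;
using System.Linq;

public class NewsItemCleanup
{
    public void DeleteOldItems(ContentDbContext context, DateTime cutoff)
    {
        // The items are first loaded into memory...
        var oldItems = context.NewsItems
            .Where(item => item.Published < cutoff)
            .ToList();

        // ...then marked for deletion and the change is saved
        context.NewsItems.RemoveRange(oldItems);
        context.SaveChanges();
    }
}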

The pattern used by Entity Framework is called "unit of work". If you want to go down the rabbit hole, look it up. A nice article about it and some possible improvements can be found here.

An interesting reason for using Entity Framework would be for when you don't have a lot of control over your persistence medium. I haven't worked with "the cloud" yet, but basically they give you some services and tax you for using them. If EF can abstract that away and minimize cost, it would be a boon, but I have no information about this.

Miscellaneous


The post cannot be complete without mentioning a few more related concepts. In this series I will not go into details for many of them, so read the info the .NET team has prepared for each subject.

Leaving so soon?


The next post will be about authentication, more exploratory and with code examples.

Following my post about things I need to learn, I've decided to start a series about writing an ASP.Net MVC Core application, covering as much ground as possible. As a result, this experience will cover .NET Core subjects and a thorough exploration of ASP.Net MVC, plus some concepts related to Visual Studio, project structure, Entity Framework, HTML5, ECMAScript 6, Angular 2, ReactJs, CSS (LESS/SASS), responsive design, OAuth, OData, shadow DOM, etc.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Specifications


In order to start any project, some specifications need to be addressed. What will the app do and how will it be implemented? I've decided on a simplified rewrite of my WPF newsletter maker project. It gathers subjects from Google by searching for configurable queries, spiders the contents, then displays, filters and sorts them, extracting text and analyzing content. It remembers the already loaded URLs and allows marking them as deleted and setting a category. It will be possible to extract the items that have a category into a newsletter containing links, titles, short descriptions and maybe a picture.

The project will be using ASP.Net Core MVC, combining the API and the display in a single web site (at least for now). Data will be stored in SQLite via Entity Framework. Later on the project will be switched to SQL Server to see how easy that is. The web site itself will have HTML5 structure, using the latest semantic elements, with the simplest possible CSS. All project owned Javascript will be ECMAScript6. OAuth might be needed for using the Google Search API, and I intend to use Google/Facebook/Twitter accounts to log in the application, with a specific account marked in the configuration as Administrator. The IDE will be Visual Studio (not Code). The Javascript needs to be clean, with no CSS or HTML in it, by using CSS classes and HTML templates. The HTML needs to be clean, with no Javascript or styling in it; moreover it needs to be semantically unambiguous, so as to be easily molded with CSS. While starting with a desktop only design, a later phase of the project will revamp the CSS, try to make the interface beautiful and work for all screen formats.

Not the simplest of projects, so let's get started.

Creating the project


I will be using Visual Studio 2015 Update 3 with the .Net Core Preview2 tooling. Personally I had a problem installing the Core tools for Visual Studio, but this link solved it for me with a command line switch (short version: DotNetCore.1.0.0-VS2015Tools.Preview2.exe SKIP_VSU_CHECK=1). The first step is to create a New Project → Visual C# → .NET Core → ASP.NET Core Web Application. I will name it ContentAggregator. At the prompt asking which project template I want, I will select Web Application, deselect the Microsoft Azure Host in Cloud checkbox (which for whatever reason is checked by default) and click on Change Authentication to select Individual User Accounts.



Close the "Welcome to ASP.Net Core" page, because the template will be heavily changed by the time we finish this chapter.

The default template project


For a more detailed analysis of a .NET Core web project, try reading my previous post of the dotnet default template for web apps. This one will be quick and dirty.

Things to notice:
  • There is a global.json file that lists two projects, src and test. So this is how .NET Core solutions were supposed to work. Since the json format will be abandoned by Microsoft, there is no point in exploring this too much. Interestingly, though, while there is a "src" folder, there is no "test".
  • The root folder contains a ContentAggregator.sln file and the src folder contains a ContentAggregator folder with a ContentAggregator.xproj file. Core seems to have abandoned the programming language dependent extension for project files.
  • The rest of the project seems to be pretty much the default dotnet tool one, with the following differences:
  • the template uses SQL Server by default
  • the lib folder in wwwroot is already populated with libraries

So far so good. There is also the little issue of the database. As you may remember from the post about the dotnet tool template, there were some files that needed to initialize the database. The error message then said "In Visual Studio, you can use the Package Manager Console to apply pending migrations to the database: PM> Update-Database". Is that what I have to do? Also, we need to check what the database settings are. While I do have an SQL Server instance on this computer, I haven't configured anything yet. The Project_Readme.html page is not very useful, as the link Run tools such as EF migrations and more goes to an obsolete link on github.io (the documentation seems to have moved to a microsoft.com server now).

I *could* read/watch a tutorial, but what the hell? Let's run the website, see what it does! Amazingly, the web site starts, using IIS Express, so I go to Register, to see how the database works and I get the same error about the migrations. I click on the Apply Migrations button and it says the migrations have been applied and that I need to refresh. I do that and voila, it works!

So, where is the database? It is not in the bin folder as WebApplication.db like in the Sqlite version. It's not in the SQL Server, the service wasn't even running. The DefaultConnection string looks like "Server=(localdb)\\mssqllocaldb;Database=aspnet-ContentAggregator-7fafe484-d38b-4230-b8ed-cf4a5a8df5e1;Trusted_Connection=True;MultipleActiveResultSets=true". What's going on? The answer lies in the SQL Server Express LocalDB instance that Visual Studio comes with.

Changing and removing shit


To paraphrase Antoine de Saint-Exupéry, this project will be set up not when I have nothing else to add, but when I have nothing else to remove.

First order of business is to remove SQL Server and use SQLite instead. Quite the opposite of how I had pictured it, but hey, you do what you must! In theory all I have to do is replace .UseSqlServer with .UseSqlite and then adjust the DefaultConnection string in appsettings.json to something like "Data Source=WebApplication.db". Done that, fixed the namespaces and imported projects, ran the solution. Migration error, Apply Migrations, re-register and everything is working. WebApplication.db and everything.
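For reference, a hedged sketch of the change (the connection string name matches the one in the template's appsettings.json):

// Startup.cs, ConfigureServices - before:
// services.AddDbContext<ApplicationDbContext>(options =>
//     options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

// after:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

// appsettings.json:
// "ConnectionStrings": { "DefaultConnection": "Data Source=WebApplication.db" }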

Second is to remove all crap that I don't need right now. I may need it later, so at this point I am backing up my project. I want to remove:
  • Database - yeah, I know I just recreated it, but I need to know what those migrations contained and if I even need them, considering I want to register with OAuth only
  • Controllers - probably I will end up recreating them, but we need to understand why things are how they are
  • Models - we'll do those from scratch, too
  • Services - they were specific to the default web site, so poof! they're gone.
  • Views - the views will be redesigned completely, so we delete them also
  • Client libraries - we keep jQuery and jQuery validation, but we remove bootstrap
  • CSS - we keep the site.css file, but remove everything in it
  • Javascript - keep site.js, but empty
  • Other assets like images - removed

"What the hell, I read so much of this blog post just for you to remove everything you did until now?" Yes! This part is the project set up and before its end we will have a clean white slate on which to create our masterpiece.

So, action! Close Visual Studio. Delete bin (with the db file in it) and obj, delete all files in Controllers, Data, Models, Services, Views. Empty files site.css and site.js, while deleting the .min versions, delete all images, Project_Readme.html and project.lock.json. In order to cleanly remove bootstrap we need to use bower. Run
bower uninstall bootstrap
which will remove bootstrap, but won't remove it from bower.json, so remove it from there. Reopen Visual Studio and the project, wait until it restores the packages.

When trying to compile the project, there are some errors, obviously. First, namespaces that don't exist anymore, like Controllers, Models, Data, Services. Remove the offending usings. Then there are services that we wanted to inject, like SMS and Email, which for now we don't need. Remove the lines that try to use them under // Add application services. The rest of the errors are about ApplicationDbContext and ApplicationUser. Comment them out. These are needed for when we figure out how the project is going to preserve data. Later on a line in Startup.cs will throw an exception ( app.UseIdentity(); ) so comment it out as well.

Finishing touches


Now the project compiles, but it does nothing. Let's finish up by adding a default Controller and a View.

In Visual Studio right click on the Controllers folder in the Solution Explorer and choose Add → Controller → MVC Controller - Empty. Let's continue to name it HomeController. Go to the Views folder, create a new folder called Home. Now you might think that right clicking on it and selecting Add → View would work, but it doesn't. The Add button stubbornly remains disabled unless you specify a template and a model and other stuff. It may be useful later on, but at this stage ignore it. The way to add a view now is go to Add → New Item → MVC View Page. Create an Index.cshtml view and empty its contents.

Now run the application. It should show a wonderfully empty page with no console errors. That's it for our blank project setup. Check out the source code for this point of the exploration. Stay tuned for the real fun!

A friend of mine has been employed to write code in the new Microsoft cross-platform ASP.Net Mvc Core and one day he asked me about the difference between the [Required] and the [BindRequired] attributes. Not being an expert in ASP.Net MVC and even less in the Core version, I had no idea, so I went exploring.

Fast and dirty

I created a small project and ran it several times to see what the attributes were doing. RequiredAttribute seemed to work quite straightforward: decorate a property of an object with it and, when used as a parameter in an MVC API controller method, that property needs to be set. If it is not, the Controller ModelState.IsValid value will be set to false. Something like this:
public IActionResult Test([FromBody]TestModel model)
TestModel is a simple POCO with Required and BindRequired attributes set on its properties:
public class TestModel
{
    [Required]
    public string A { get; set; }

    [BindRequired]
    public string B { get; set; }
}
Now, if I send an object like this
{ A:'Something',B:'Something else' }
I get a true ModelState.IsValid value. If I send only property A, I also get a valid model. If I send only B or an empty object, IsValid will be false. So [Required] works as expected, [BindRequired] doesn't seem to do anything.

Searching through the methods of the controller I noticed one in particular called TryUpdateModelAsync. It tries to populate a model, given as a parameter, with values from the URL or form parameters of the action method. That means if you call the API with something like /api/test/test?A=Something&B=SomethingElse then run
var valid = TryUpdateModelAsync(model).Result;
, valid will be true. However, if you fail to specify a B parameter, valid will be false. Note that the properties of the model change either way. Property A will be filled with whatever you send it, only the result of the attempt will be false.

My friend was not completely satisfied with what I discovered, but it was what I could do in a few minutes of work.

More work

Frankly, I wasn't satisfied either. The only mention I could find of this BindRequired attribute was in the ASP.Net Core documentation, thrown in a list together with [FromBody], which applied to parameters, not object properties, and in unit tests code. So I started digging.

I went to the GitHub repository for aspnet/Mvc and downloaded the source. I then looked for a class named BindRequiredAttribute. It is a very simple class that inherits from BindingBehaviorAttribute and sends BindingBehavior.Required as the base constructor parameter. Its description is "Indicates that a property is required for model binding. When applied to a property, the model binding system requires a value for that property. When applied to a type, the model binding system requires values for all properties of that type." This doesn't say much. So I started to go through the chain of extension methods, interfaces, dependency injection. It wasn't pretty. Let's just say that after a few pages of blog post where I described wandering through classes and properties and trying everything - you wouldn't have understood anything because I didn't - I've decided to delete everything and search the web for more information.

In the end, experimenting with the project is what made it clear(er) for me. In the example I created, the Test method received an object [FromBody], but if I removed that attribute, the content of the call to the API was ignored. Instead, the values from the URL or form POST would be used to fill the object - this explains why FromBody and BindRequired were grouped in the same list. So for the same code without [FromBody]:
public IActionResult Test(TestModel model)
and now the request /api/test/test?A=Something&B=SomethingElse fills the object without the need for content (and ignoring it if it exists). If I remove either A or B, ModelState.IsValid becomes false.

What does it mean?


To me it feels as if the [Required] attribute is all I need, since it works in both cases: form values and content. However, besides [BindRequired] there is also [BindNever], which removes that property from binding. Imagine you had a property like "IsAdmin" that you set to true programmatically - you don't want it to be bound from the URL of the call. I was expecting that attempting to set a property decorated with BindNever would invalidate the model state, but it doesn't work like that, it just ignores the parameter. The System.ComponentModel.DataAnnotations namespace doesn't have an equivalent to BindNeverAttribute.
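A hedged sketch of that scenario (the UserModel class is invented for this example):

using Microsoft.AspNetCore.Mvc.ModelBinding;

public class UserModel
{
    public string Name { get; set; }

    // Never populated from the request; a value sent by the client is silently ignored
    [BindNever]
    public bool IsAdmin { get; set; }
}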

There is also the issue of the many many many points where the behavior of MVC can be changed and customized. There are a lot of out of the box classes that come with ASP.Net MVC; it only makes sense to use Attribute classes from the same package.

What about the inconsistency? Why does a [BindRequired] attribute not matter when populating a model from the body of the request? Frankly, I don't know. I believe it is because in that case the entire model class has been "bound", its individual properties being "populated" instead.

Hope that helps out a little.

Another entry from my ".NET with Visual Studio Code" series, this will discuss the default web site generated by the 'dotnet new' command. I recommend reading the previous articles in the series first.
Update 31 Mar 2017: This entire article is obsolete.
Since the release of .NET Core 1.1 the dotnet command has a different syntax: dotnet new <template>, where a template must be specified; it no longer defaults to console. Also, the web template is really stripped down to only what one needs, similar to my changes to the console project to make it web. The project.json format has been deprecated and .csproj files are now used. Still, I worked a lot on this post and it might show you how .NET Core came to be and how it changed over time.

The analysis that follows is now more appropriate for the dotnet new mvc template in .NET Core 1.1, although they removed the database access and other things like registration from it.

Setup


Creating the ASP.Net project


Until now we've used the simple 'dotnet new' command which defaults to the Console application project type. However, there are four available types that can be specified with the -t flag: Console, Web, Lib and xunittest. I want to use Visual Studio Code to explore the default web project generated by the command. So let's follow some basic steps:
  1. Create a new folder
  2. Open a command prompt in it (Update: Core has a Terminal window in it now, which one can use)
  3. Type dotnet new -t Web
  4. Open Visual Studio Code and select the newly created folder
  5. To the prompt "Required assets to build and debug are missing from your project. Add them?" choose Yes

So, what do we got here? It seems like an earlier version of John Papa's AspNet5-starter-demo project.

First of all there is a nice README.MD file that announces this is a big update of the .NET Core release and it's a good idea to read the documentation. It says that the application in the project consists of:
  • Sample pages using ASP.NET Core MVC
  • Gulp and Bower for managing client-side libraries
  • Theming using Bootstrap
and follows with a lot of how-to URLs that are mainly from the documentation pages for MVC and the MVC tutorial that uses Visual Studio proper. While we will certainly read all of that eventually, this series is more about exploring stuff rather than learning it from a tutorial.

If we go to the Debug section of Code and run the web site, two interesting things happen. One is that the project compiles! Yay! The other is that the web site at localhost:5000 is automatically opened. This happens because of the launchBrowser property in launch.json's configuration:
"launchBrowser": {
    "enabled": true,
    "args": "${auto-detect-url}",
    "windows": {
        "command": "cmd.exe",
        "args": "/C start ${auto-detect-url}"
    },
    "osx": {
        "command": "open"
    },
    "linux": {
        "command": "xdg-open"
    }
}

The web site itself looks like crap, because there are some console errors about missing files like /lib/bootstrap/dist/css/bootstrap.css, /lib/bootstrap/dist/js/bootstrap.js and /lib/jquery/dist/jquery.js. All of these are Javascript and CSS files from frameworks defined in bower.json:
{
    "name": "webapplication",
    "private": true,
    "dependencies": {
        "bootstrap": "3.3.6",
        "jquery": "2.2.3",
        "jquery-validation": "1.15.0",
        "jquery-validation-unobtrusive": "3.2.6"
    }
}
and there is also a .bowerrc file indicating the destination folder:
{
    "directory": "wwwroot/lib"
}
. This probably has to do with that Gulp and Bower line in the Readme file.

What is Bower? Apparently, it's a package manager that depends on node, npm and git. Could it be that I need to install Node.js just so that I can compile my C# ASP.Net project? Yes it could. That's pretty shitty if you ask me, but I may be a purist at heart and completely wrong.

Installing tools for client side package management


While ASP.Net Core's documentation about Client-Side Development explains some stuff about these tools, I found that Rule-of-Tech's tutorial from a year and a half ago explains better how to install and use them on Windows. For the purposes of this post I will not create my own .NET bower and gulp emulators and install the tools as instructed... and under duress. You're welcome, Internet!

Git


Go to the Git web site, and install the client.

Node.js


Go to the Node.js site and install it. Certainly a pattern seems to emerge here. On their page they seem to offer two wildly different versions: one that is called LTS and has version v4.4.7, another that is called Current at version v6.3.0. A quick google later I know that LTS comes from 'Long Term Support' and is meant for corporate environments where change is rare and the need for specific version support much larger. I will install the Current version for myself.

Bower


The Bower web site indicates how to install it via the Node.js npm:
npm install -g bower
. Oh, you gotta love the ASCII colors and the loader! Soooo cuuuute!


Gulp


Install Gulp with npm as well:
npm install --global gulp
Here tutorials say to install it in your project's folder again:
npm install --save-dev gulp
Why do we need to install gulp twice, once globally and once locally? Because reasons. However, I noticed that in the project there is already a package.json containing what I need for the project:
{
    "name": "webapplication",
    "version": "0.0.0",
    "private": true,
    "devDependencies": {
        "gulp": "3.9.1",
        "gulp-concat": "2.6.0",
        "gulp-cssmin": "0.1.7",
        "gulp-uglify": "1.5.3",
        "rimraf": "2.5.2"
    }
}
Gulp is among them so I just run
npm install --save-dev
without specifying a package and it installs what I need. The install immediately gave me some concerning warnings like "graceful-fs v3.0.0 will fail on node releases >=7.0. Please update to graceful-fs 4.0.0" and "please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue" and "lodash v3.0.0 is no longer maintained. Upgrade to lodash v4.0.0". Luckily, the error messages are soon replaced by a random ASCII tree and if you can't scroll up you're screwed :) One solution is to pipe the output to a file or to NUL so that the ASCII tree of the packages doesn't scroll the warnings up
npm install --save-dev >nul

Before I go into weird Linux reminiscent console warnings and fixes, let's see if we can now fix our web site, so I run "gulp" and I get the error "Task 'default' is not in your gulpfile. Please check the documentation for proper gulpfile formatting". Going back to the documentation I understand why: it's because there is no default task in the gulpfile! Well, it sounded more exciting in my head.

Long story short, the commands to be executed are in project.json, in the scripts section:
"scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ],
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
}
There is npm install, there is bower install and the two tasks for gulp: clean and min.

It was the bower command that we actually needed to make the web site load the frameworks needed. Phew!

Let's start the web site again, from Code, by going to Debug and clicking Run. (By now it should be working with F5, I guess) Victory! The web site looks grand. It works!

Exploring the project


project.json will be phased out


But what is that prepublish nonsense? Frankly, no one actually knows or cares. It's not documented and, to my chagrin, it appears the project.json method of using projects will be phased out anyway! Whaat?

Remember that .NET Core is still version 1. As Microsoft has accustomed us for decades, you never use version 1 of any of their products. If you are in a hurry, you wait until version 2. If you care about your time, you wait for the second service pack of version 2. OK, a cheap shot, but how is it not frustrating to invest in something and then see it "phased out". Didn't they think things through?

Anyway, since the web site is working, I will postpone a rant about corporate vision and instead continue with the analysis of the project.

node_modules


Not only did I install Node and Bower and Gulp, but a lot of dependencies. The local installation folder for Node.js packages called node_modules is now 6MB of files that are mostly comments and boilerplate. It must be the (in)famous by now NPM ecosystem, where every little piece of functionality is a different project. At least nothing else seems to be polluted by it, so we will ignore it as a necessary evil.

Program and Startup


Following the pattern discussed in previous posts, the Program class is building the web host and then passing responsibility for configuration and management to the Startup class. The most obvious new thing in the class is the IConfigurationRoot instance that is built in the class' constructor.
public IConfigurationRoot Configuration { get; }

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

Configuration = builder.Build();
Here is the appsettings.json file that is being loaded there:
{
    "ConnectionStrings": {
        "DefaultConnection": "Data Source=WebApplication.db"
    },
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}
Later on these configuration settings will be used in code:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"))
);

loggerFactory.AddConsole(Configuration.GetSection("Logging"));

Let me quickly mention that there are other interesting things for you to research in depth later. I've tried to use the most relevant links up there, but you will note that some of them are from spring 2015. There is still much to be written about Core, so look it up yourselves.

Database


As you know, almost any application worth mentioning uses some sort of database. ASP.Net MVC uses an abstraction for that called ApplicationDbContext, which the Microsoft team wants us to use with Entity Framework. ApplicationDbContext just inherits from DbContext and links it to Identity via an ApplicationUser. If you are unfamiliar with how to work with EF DbContexts, check out this link. In the default project the database work is instantiated with
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"))
);

Personally I am not happy that with a single instruction I now have in my projects dependencies to
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.0",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.0",
"Microsoft.EntityFrameworkCore.Sqlite": "1.0.0",
"Microsoft.EntityFrameworkCore.Tools": {
    "version": "1.0.0-preview2-final",
    "type": "build"
}
Let's go with it for now, though.

AddDbContext is a way of injecting the DbContext without specifying an implementation. UseSqlite is an extension method that gets that implementation for you.

Let us test the functionality of the site so that I can get to an interesting EntityFramework concept called Migrations.

So, going to the now familiar localhost:5000 I notice a login/register link combo. I go to register, I enter a ridiculously convoluted password pattern and I get... an error:

A database operation failed while processing the request.

SqliteException: SQLite Error 1: 'no such table: AspNetUsers'.


Applying existing migrations for ApplicationDbContext may resolve this issue

There are migrations for ApplicationDbContext that have not been applied to the database

  • 00000000000000_CreateIdentitySchema


In Visual Studio, you can use the Package Manager Console to apply pending migrations to the database:

PM> Update-Database

Alternatively, you can apply pending migrations from a command prompt at your project directory:

> dotnet ef database update



What is all that about? Well, in a folder called Data in our project there is a Migrations folder containing the files 00000000000000_CreateIdentitySchema.cs, 00000000000000_CreateIdentitySchema.Designer.cs and ApplicationDbContextModelSnapshot.cs. They inherit from Migration and ModelSnapshot respectively and seem to describe in code the database structure and entities. Can it be as simple as running:
dotnet ef database update
(make sure you stop the running web server before you run the command)? It compiles the project and it completes in a few seconds. Let's run the app again and attempt to register. It works!

What happened? In the /bin directory of our project there is now a WebApplication.db file which, when opened, reveals it's an SQLite database.

But what about the mysterious button "Apply Migrations"? It generates an Ajax POST call to /ApplyDatabaseMigrations. Clicking it also fixes everything. How does that work? I mean, there is no controller for an ApplyDatabaseMigrations call. It all comes from the line
if (env.IsDevelopment())
{
    ...
    app.UseDatabaseErrorPage();
    ...
}
which installs the MigrationsEndPointMiddleware. Documentation for it here.

Pesky little middleware, isn't it?

Installing SQLite


It is a good idea to install SQLite as a standalone application. Make sure you install both binaries for the library as well as the client (the library might be x64 and the client x86). The installation is brutally manual. You need to copy both library and client files in a folder, let's say C:\Program Files\SQLite and then add the folder in the PATH environment variable. An alternative is using the free SQLiteBrowser app.

Opening the database with SQLiteBrowser we find tables for Users, Roles, Claims and Tokens, what we would expect from a user identity management system. However, only the Users table was used when creating a new user.

Controllers


The controllers of the application contain the last ASP.Net related functionality of the project, anything else is business logic. The controllers themselves, though, are nothing special in terms of Core. The same class names, the same attributes, the same functionality as any regular MVC. I am not a specialist in that and I believe it to be outside the scope of the current blog post. However it is important to look inside these classes and the ones that are used by them so that we understand at least the basic concepts the creators of the project wanted to teach us. So I will quickly go through them, no code, just conceptual ideas:
  • HomeController - the simplest possible controller, it displays the home view
  • AccountController and ManageController - responsible for account management, they have a lot of functionality including antiforgery token, forgot the password, external login providers, two factor authentication, email and SMS sending, reset password, change email, etc. Of course, register, login and logoff.
  • IEmailSender, ISMSSender and MessageServices - if you waited until this moment for me to reveal the simple and free email/SMS provider architecture in the project, you will be disappointed. MessageServices implements both IEmailSender and ISMSSender with empty methods where you should plug in useful code.
  • Views - all views are interesting to explore. They show basic and sometimes not so basics concepts of Razor Model/View/Controller interaction like: imports, declaring the controller and action names in the form element instead of a string URL, declaring the @model type for a view, validation summaries, inline code, partial views, model to view synchronization, variable view content depending on environment (production, staging), fallback source and automated testing that the script loaded, bootstrap theming, etc.
  • Models - model classes are simple POCOs, but they do provide their occasional instruction, like using annotations for data structure, validation and display purposes alike

Running the website


So far I've described running the web site through the Debug sidebar, but how do I run the website from the command line? Simply by running
dotnet run
in your project folder. You can just as well use
dotnet compiled.dll
, but you need to also run it from the project root. In the former case, dotnet will also compile the project for you, in the latter it will just run whatever is already compiled.

In order to change the default IP and port binding, refer to this answer on Stack Overflow. Long story short, you need to use a configuration file from your WebHostBuilder setup.
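As a hedged sketch of the simplest variant, hardcoding the URL with UseUrls in Program.cs instead of using a configuration file (the port is arbitrary):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls("http://0.0.0.0:5001") // listen on all interfaces, port 5001
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}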

However there is a lot of talk about dnx. What is this? In short, it's 'dotnet'. Before version 1 there was an entire list of commands like dnvm, dnu, dnx, etc. You may still install them if you want, but I wouldn't recommend it. As detailed in this document, the transition from the dn* tools to the dotnet command is already in progress.

Conclusions


A large number of concepts have been bundled together in the default "web" project type, allowing for someone to immediately create and run a web site, theme it and change it to their own purposes. While creating such a project from Visual Studio is a breeze compared to what I've described, it is still a big step towards enabling people to just quickly install some stuff and immediately start coding. I approve! :) good job, Microsoft!