Intro

  Some of the most visited posts on this blog relate to dependency injection in .NET. As you may know, dependency injection has been baked into ASP.NET almost since the beginning, but it culminated with the MVC framework and the .NET Core rewrite. Dependency injection has been separated into packages from which it can be used everywhere. However, probably because they thought it was such a core concept, or maybe because the code dates back to the days of UnityContainer, the entire mechanism is sealed, internal and without any hooks on which to add custom code. Which, in my view, is crazy, since dependency injection serves, amongst other things, the purpose of a single point of change for class instantiations.

  Now, to be fair, I am not an expert in the design patterns used in dependency injection in the .NET code. There might be some weird way to extend the code that I am unaware of. In that case, please enlighten me. But as far as I dug into the code, this is the simplest way I found to insert my own hook into the resolution process. If you just want the code, skip to the end.

Using DI

  First of all, a recap on how to use dependency injection (from scratch) in a console application:

// you need the nuget packages Microsoft.Extensions.DependencyInjection 
// and Microsoft.Extensions.DependencyInjection.Abstractions
using Microsoft.Extensions.DependencyInjection;
...

// create a service collection
var services = new ServiceCollection();
// add the mappings between interface and implementation
services.AddSingleton<ITest, Test>();
// build the provider
var provider = services.BuildServiceProvider();

// get the instance of a service
var test = provider.GetService<ITest>();

  Note that this is a very simplified scenario. For more details, please check Creating a console app with Dependency Injection in .NET Core.

Recommended pattern for DI

  Second of all, a recap of the recommended way of using dependency injection (both from Microsoft and myself) which is... constructor injection. It serves two purposes:

  1. It declares all the dependencies of an object in the constructor. You can rest assured that all you would ever need for that thing to work is there.
  2. When the constructor starts to fill a page you get a strong hint that your class may be doing too many things and you should split it up.

  But then again, there is the "Learn the rules. Master the rules. Break the rules" concept. I've familiarized myself with it before writing this post so that now I can safely break the second part and not master anything before I break stuff. I am talking about property injection, which is generally (for good reason) frowned upon, but which one may want to use in scenarios adjacent to the functionality of the class, like logging. One of the things that always bothered me is having to declare a logger in every constructor ever, even though a logger in itself adds nothing to the functionality of the class.

  So I've had this idea that I would use constructor dependency injection EVERYWHERE, except logging. I would create an ILogger<T> property which would be automatically injected with the correct implementation at resolution time. Only there is a problem: Microsoft's dependency injection does not support property injection or resolution hooks (as far as I could find). So I thought of a solution.

How does it work?

  Third of all, a small recap on how ServiceProvider really works.

  When one calls services.BuildServiceProvider() they actually call an extension method that does new ServiceProvider(services, someServiceProviderOptions). Only that constructor is internal, so you can't use it yourself. Then, inside the provider class, the GetService method uses a ConcurrentDictionary of service accessors to get your service. If the service accessor is not there, the delegate stored in the _createServiceAccessor field is used to create it. So my solution: replace the field value with a wrapper that will also execute our own code.

The solution

  Before I show you the code, mind that this applies to .NET 7.0. I guess it will work in most .NET Core versions, but they could change the internal field name or functionality in which case this might break.

  Finally, here is the code:

public static class ServiceProviderExtensions
{
    /// <summary>
    /// Adds a custom handler to be executed after service provider resolves a service
    /// </summary>
    /// <param name="provider">The service provider</param>
    /// <param name="handler">An action receiving the service provider, 
    /// the registered type of the service 
    /// and the actual instance of the service</param>
    /// <returns>the same ServiceProvider</returns>
    public static ServiceProvider AddCustomResolveHandler(this ServiceProvider provider,
                 Action<IServiceProvider, Type, object> handler)
    {
        var field = typeof(ServiceProvider).GetField("_createServiceAccessor",
                        BindingFlags.Instance | BindingFlags.NonPublic);
        var accessor = (Delegate)field.GetValue(provider);
        var newAccessor = (Type type) =>
        {
            Func<object, object> newFunc = (object scope) =>
            {
                var resolver = (Delegate)accessor.DynamicInvoke(new[] { type });
                var resolved = resolver.DynamicInvoke(new[] { scope });
                handler(provider, type, resolved);
                return resolved;
            };
            return newFunc;
        };
        field.SetValue(provider, newAccessor);
        return provider;
    }
}

  As you can see, we take the original accessor delegate and we replace it with a version that runs our own handler immediately after the service has been instantiated.

Populating a Logger property

  And we can use it like this to do property injection now:

static void Main(string[] args)
{
    var services = new ServiceCollection();
    services.AddSingleton<ITest, Test>();
    var provider = services.BuildServiceProvider();
    provider.AddCustomResolveHandler(PopulateLogger);

    var test = (Test)provider.GetService<ITest>();
    Assert.IsNotNull(test.Logger);
}

private static void PopulateLogger(IServiceProvider provider, 
                                    Type type, object service)
{
    if (service is null) return;
    var propInfo = service.GetType().GetProperty("Logger",
                    BindingFlags.Instance|BindingFlags.Public);
    if (propInfo is null) return;
    var expectedType = typeof(ILogger<>).MakeGenericType(service.GetType());
    if (propInfo.PropertyType != expectedType) return;
    var logger = provider.GetService(expectedType);
    propInfo.SetValue(service, logger);
}

  See how I've added the PopulateLogger handler in which I am looking for a property like 

public ILogger<Test> Logger { get; private set; }

  (where the generic type of ILogger is the same as the class) and populate it.

Populating any decorated property

  Of course, this is kind of ugly. If you want to enable property injection, why not use an attribute that makes your intention clear and requires less reflection? Fine. Let's do it like this:

// Add handler
provider.AddCustomResolveHandler(InjectProperties);
...

// the handler populates all properties that are decorated with [Inject]
private static void InjectProperties(IServiceProvider provider, Type type, object service)
{
    if (service is null) return;
    var propInfos = service.GetType()
        .GetProperties(BindingFlags.Instance | BindingFlags.Public)
        .Where(p => p.GetCustomAttribute<InjectAttribute>() != null)
        .ToList();
    foreach (var propInfo in propInfos)
    {
        var instance = provider.GetService(propInfo.PropertyType);
        propInfo.SetValue(service, instance);
    }
}
...

// the attribute class
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
public class InjectAttribute : Attribute {}
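
  For illustration, here is what a hypothetical service opting into this mechanism might look like (assuming Microsoft.Extensions.Logging is referenced and logging was registered, e.g. with services.AddLogging()):

// a hypothetical service using the [Inject] attribute defined above
public class Test : ITest
{
    // populated by the InjectProperties handler right after the instance is resolved;
    // remains null if ILogger<Test> is not registered in the container
    [Inject]
    public ILogger<Test> Logger { get; set; }

    public void DoSomething()
    {
        Logger?.LogInformation("DoSomething called");
    }
}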

Conclusion

I have demonstrated how to add a custom handler to be executed after any service instance is resolved by the default Microsoft ServiceProvider class, which in turn enables property injection, a single point of change for all classes, and so on. I once wrote code to wrap any class into a proxy that would trace all property and method calls with their parameters automatically. You can plug that in with the code above, if you so choose.

Be warned that this solution is using reflection to change the functionality of the .NET 7.0 ServiceProvider class and, if the code there changes for some reason, you might need to adapt it to the latest functionality.

If you know of a more elegant way of doing this, please let me know.

Hope it helps!

Bonus

But what about people who really, really, really hate reflection and don't want to use it? What about situations where you have a dependency injection framework running for you, but you have no access to the service provider builder code? Isn't there any solution?

Yes. And No. (sorry, couldn't help myself)

The issue is that ServiceProvider, ServiceCollection and all that jazz are pretty closed up. There is no solution I know of that solved this issue. However... there is one particular point in the dependency injection setup which can be hijacked and that is... the adding of the service descriptors themselves!

You see, when you call ServiceCollection.AddSingleton<Something,Something>, what gets called is yet another extension method; the ServiceCollection itself is nothing but a list of ServiceDescriptor objects. The Add* extension methods come from the ServiceCollectionServiceExtensions class, which contains a lot of methods that all boil down to just three different effects:

  • adding a ServiceDescriptor for a type (so associating a service type with a concrete type) with a specific lifetime (transient, scoped or singleton)
  • adding a ServiceDescriptor on an instance (so associating a type with a specific instance of a class), by default singleton
  • adding a ServiceDescriptor on a factory method (so associating a type with a constructor method)

If you think about it, the first two can be translated into the third: in order to instantiate a type using a service provider you call ActivatorUtilities.CreateInstance(provider, type), and a factory method that returns a specific existing instance is trivial to write.
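
For instance, here is a rough sketch (not the actual Microsoft code) of how the first two kinds of descriptors could be expressed as factory descriptors, reusing the ITest/Test types from earlier:

// a type registration expressed as a factory registration
var typeDescriptor = new ServiceDescriptor(
    typeof(ITest),
    sp => ActivatorUtilities.CreateInstance(sp, typeof(Test)),
    ServiceLifetime.Singleton);

// an instance registration expressed as a factory registration (always a singleton)
var instance = new Test();
var instanceDescriptor = new ServiceDescriptor(
    typeof(ITest),
    sp => instance,
    ServiceLifetime.Singleton);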

So, the solution: just copy paste the contents of ServiceCollectionServiceExtensions and make all of the methods end up in the Add method using a service factory method descriptor. Now instead of using the extensions from Microsoft, you use your class, with the same effect. Next step: replace the provider factory method with a wrapper that also executes stuff.

Since this is a bonus, I'll let you implement everything except the Add method, which I will provide here:

// original code
private static IServiceCollection Add(
    IServiceCollection collection,
    Type serviceType,
    Func<IServiceProvider, object> implementationFactory,
    ServiceLifetime lifetime)
{
    var descriptor = new ServiceDescriptor(serviceType, implementationFactory, lifetime);
    collection.Add(descriptor);
    return collection;
}

//updated code
private static IServiceCollection Add(
    IServiceCollection collection,
    Type serviceType,
    Func<IServiceProvider, object> implementationFactory,
    ServiceLifetime lifetime)
{
    Func<IServiceProvider, object> factory = (sp)=> {
        var instance = implementationFactory(sp);
        // no stack overflow, please
        if (instance is IDependencyResolver) return instance;
        // look for a registered instance of IDependencyResolver (our own interface)
        var resolver=sp.GetService<IDependencyResolver>();
        // intercept the resolution and replace it with our own 
        return resolver?.Resolve(sp, serviceType, instance) ?? instance;
    };
    var descriptor = new ServiceDescriptor(serviceType, factory, lifetime);
    collection.Add(descriptor);
    return collection;
}

All you have to do is (create the interface and then) register your own implementation of IDependencyResolver and do whatever you want to do in the Resolve method, including the logger instantiation, the inject attribute handling or the wrapping of objects, as above. All without reflection.
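
The IDependencyResolver interface itself is not shown above, but its shape is implied by the factory code; here is a minimal sketch of it, along with a trivial implementation that just logs resolutions:

// the interface implied by the Resolve call in the factory above
public interface IDependencyResolver
{
    object Resolve(IServiceProvider provider, Type serviceType, object instance);
}

// a trivial example implementation: log every resolution and return the instance unchanged
public class LoggingDependencyResolver : IDependencyResolver
{
    public object Resolve(IServiceProvider provider, Type serviceType, object instance)
    {
        Console.WriteLine($"Resolved {serviceType.Name} as {instance?.GetType().Name}");
        return instance;
    }
}

// registration; the guard in the factory above prevents infinite recursion
// when the resolver itself gets resolved
services.AddSingleton<IDependencyResolver, LoggingDependencyResolver>();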

The catch here is that you have to make sure you don't use the default Add* methods when you register your services, or this won't work.

There you have it, bonus content not found on dev.to ;)

Intro

Dependency injection is baked into ASP.NET Core projects (yes, I still call it Core), but it's missing from console app templates. And while it is easy to add, it's not that clear-cut how to do it. I present here three ways to do it:

  1. The fast one: use the Worker Service template and tweak it to act like a console application
  2. The simple one: use the Console Application template and add dependency injection to it
  3. The hybrid: use the Console Application template and use the same system as in the Worker Service template

Tweak the Worker Service template

It makes sense that if you want a console application you would select the Console Application template when creating a new project, but as mentioned above, it's just the default template, as old as console apps are. Yet there is another default template, called Worker Service, which almost does the same thing, only it has all the dependency injection goodness baked in, just like an ASP.Net Core Web App template.

So start Visual Studio, add a new project and choose the Worker Service template.

It will create a project containing a Program.cs, a Worker.cs and an appsettings.json file. Program.cs holds the setup and Worker.cs holds the code to be executed.

Worker.cs has an ExecuteAsync method that logs some stuff every second, but even if we remove the while loop and add our own code, the application doesn't stop. This might be a good thing, as sometimes we just want stuff to work until we press Ctrl-C, but it's not a console app per se.

In order to transform it into something that works just like a console application you need to follow these steps:

  1. inject an IHost instance into your worker
  2. specifically instruct the host to stop whenever your code has finished

So, you go from:

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

to:

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;
    private readonly IHost _host;

    public Worker(ILogger<Worker> logger, IHost host)
    {
        _logger = logger;
        _host = host;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Console.WriteLine("Hello world!");
        _host.StopAsync();
    }
}

Note that I did not "await" the StopAsync method because I don't actually need to. You are telling the host to stop and it will do it whenever it sees fit.

If we look into the Program.cs code we will see this:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                services.AddHostedService<Worker>();
            });
}

I don't know why they bothered with creating a new method and then writing it as an expression body, but that's the template. You see that there is a lambda adding dependencies (by default just the Worker class), but everything starts with Host.CreateDefaultBuilder. In .NET source code, this leads to HostingHostBuilderExtensions.ConfigureDefaults, which adds a lot of stuff:

  • environment variables to config
  • command line arguments to config (via CommandLineConfigurationSource)
  • support for appsettings.json configuration
  • logging based on operating system

That is why, if you want these things by default, your best bet is to tweak the Worker Service template.
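
As a small illustration of what that buys you, the worker can simply ask for IConfiguration and read values coming from appsettings.json, environment variables or the command line ("MySetting" is a hypothetical key used only for this sketch):

public class Worker : BackgroundService
{
    private readonly IConfiguration _configuration;

    public Worker(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // reads the hypothetical "MySetting" key from any of the default configuration sources
        Console.WriteLine(_configuration.GetValue<string>("MySetting"));
        return Task.CompletedTask;
    }
}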

Add Dependency Injection to an existing console application

Some people want to have total control over what their code is doing. Or maybe you already have a console application doing stuff and you just want to add Dependency Injection. In that case, these are the steps you want to follow:

  1. create a ServiceCollection instance (needs a dependency on Microsoft.Extensions.DependencyInjection)
  2. register all dependencies you want to use to it
  3. create a starting class that will execute stuff (just like Worker above)
  4. register starting class in the service collection
  5. build a service provider from the collection
  6. get an instance of the starting class and execute its one method

Here is an example:

class Program
{
    static void Main(string[] args)
    {
        var services = new ServiceCollection();
        ConfigureServices(services);
        services
            .AddSingleton<Executor,Executor>()
            .BuildServiceProvider()
            .GetService<Executor>()
            .Execute();
    }

    private static void ConfigureServices(IServiceCollection services)
    {
        services
            .AddSingleton<ITest, Test>();
    }
}

public class Executor
{
    private readonly ITest _test;

    public Executor(ITest test)
    {
        _test = test;
    }

    public void Execute()
    {
        _test.DoSomething();
    }
}

The only reason we register the Executor class is in order to get an instance of it later, complete with constructor injection support. You can even make Execute an async method, so you can get full async/await support. Of course, for this example appsettings.json configuration or logging won't work until you add them.
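
For example, here is a sketch of the async variant, assuming ITest exposes a hypothetical DoSomethingAsync method:

class Program
{
    static async Task Main(string[] args)
    {
        var services = new ServiceCollection();
        // ConfigureServices is the same method as in the example above
        ConfigureServices(services);
        var executor = services
            .AddSingleton<Executor>()
            .BuildServiceProvider()
            .GetRequiredService<Executor>();
        await executor.ExecuteAsync();
    }
}

public class Executor
{
    private readonly ITest _test;

    public Executor(ITest test)
    {
        _test = test;
    }

    public async Task ExecuteAsync()
    {
        // DoSomethingAsync is a hypothetical async counterpart of DoSomething
        await _test.DoSomethingAsync();
    }
}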

Mix them up

Of course, one could try to get the best of both worlds. This would work kind of like this:

  1. use Host.CreateDefaultBuilder() anyway (needs a dependency on Microsoft.Extensions.Hosting), but in a normal console app
  2. use the resulting service provider to instantiate a starting class
  3. start it

Here is an example:

class Program
{
    static void Main(string[] args)
    {
        Host.CreateDefaultBuilder()
            .ConfigureServices(ConfigureServices)
            .ConfigureServices(services => services.AddSingleton<Executor>())
            .Build()
            .Services
            .GetService<Executor>()
            .Execute();
    }

    private static void ConfigureServices(HostBuilderContext hostContext, IServiceCollection services)
    {
        services.AddSingleton<ITest,Test>();
    }
}

The Executor class would be just like in the section above, but now you get all the default logging and configuration options from the Worker Service section.

Conclusion

What the quickest and best solution is depends on your situation. One could just as well start with a Worker Service template, then tweak it to never Run the builder and instead configure it as above. One can create a Startup class, complete with Configure and ConfigureServices as in an ASP.NET Core Web App template, or even start with such a template, then tweak it to work as a console app/web hybrid. In .NET Core everything is a console app anyway, it just depends on which packages you load and how you configure them.

A few years ago I wrote an article about using RealProxy to intercept methods and properties calls in order to log them. It was only for .NET Framework and suggested you inherit all intercepted classes from MarshalByRefObject. This one is a companion piece that shows how interception can be done in a more general way and without the need for MarshalByRefObject.

To do that I am going to give you two versions of the same class, one for .NET Framework and one for .NET Core which can be used like this:

//Initial code:
IInterface obj = new Implementation();

//Interceptor added:
IInterface obj = new Implementation();
var interceptor = new MyInterceptor<IInterface>();
obj = interceptor.Decorate(obj);

//Interceptor class (every method there is optional):
public class MyInterceptor<T> : ClassInterceptor<T>
{
    protected override void OnInvoked(MethodInfo methodInfo, object[] args, object result)
    {
        // do something when the method or property call ended successfully
    }

    protected override void OnInvoking(MethodInfo methodInfo, object[] args)
    {
        // do something before the method or property call is invoked
    }

    protected override void OnException(MethodInfo methodInfo, object[] args, Exception exception)
    {
        // do something when a method or property call throws an exception
    }
}

This code would be the same for .NET Framework or Core. The difference is in the ClassInterceptor code and the only restriction is that your class has to implement an interface for the methods and properties intercepted.

Here is the .NET Framework code:

public abstract class ClassInterceptor<TInterface> : RealProxy
{
    private object _decorated;

    public ClassInterceptor()
        : base(typeof(TInterface))
    {
    }

    public TInterface Decorate<TImplementation>(TImplementation decorated)
        where TImplementation:TInterface
    {
        _decorated = decorated;
        return (TInterface)GetTransparentProxy();
    }

    public override IMessage Invoke(IMessage msg)
    {
        var methodCall = msg as IMethodCallMessage;
        var methodInfo = methodCall.MethodBase as MethodInfo;
        OnInvoking(methodInfo,methodCall.Args);
        object result;
        try
        {
            result = methodInfo.Invoke(_decorated, methodCall.InArgs);
        } catch(Exception ex)
        {
            OnException(methodInfo, methodCall.Args, ex);
            throw;
        }
        OnInvoked(methodInfo, methodCall.Args, result);
        return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall);
    }

    protected virtual void OnException(MethodInfo methodInfo, object[] args, Exception exception) { }
    protected virtual void OnInvoked(MethodInfo methodInfo, object[] args, object result) { }
    protected virtual void OnInvoking(MethodInfo methodInfo, object[] args) { }
}

In it, we use the power of RealProxy to create a transparent proxy. For Core we use DispatchProxy, which is the .NET Core replacement from Microsoft. Here is the code:

public abstract class ClassInterceptor<TInterface> : DispatchProxy
{
    private object _decorated;

    public ClassInterceptor() : base()
    {
    }

    public TInterface Decorate<TImplementation>(TImplementation decorated)
        where TImplementation : TInterface
    {
        var proxy = typeof(DispatchProxy)
                    .GetMethod("Create")
                    .MakeGenericMethod(typeof(TInterface),GetType())
                    .Invoke(null,Array.Empty<object>())
            as ClassInterceptor<TInterface>;

        proxy._decorated = decorated;

        return (TInterface)(object)proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        OnInvoking(targetMethod,args);
        try
        {
            var result = targetMethod.Invoke(_decorated, args);
            OnInvoked(targetMethod, args,result);
            return result;
        }
        catch (TargetInvocationException exc)
        {
            OnException(targetMethod, args, exc);
            throw exc.InnerException;
        }
    }


    protected virtual void OnException(MethodInfo methodInfo, object[] args, Exception exception) { }
    protected virtual void OnInvoked(MethodInfo methodInfo, object[] args, object result) { }
    protected virtual void OnInvoking(MethodInfo methodInfo, object[] args) { }
}

DispatchProxy is a weird little class. Look how it generates an object which can be cast simultaneously to TInterface and to ClassInterceptor<TInterface>!

There are many other things one can do to improve this class:

  • the base class could make the distinction between a method call and a property call. In the latter case the MethodInfo object will have IsSpecialName set to true and its name will start with set_ or get_
  • for async/await scenarios and not only, the result of a method would be a Task<T> and if you want to log the result you should check for that, await the task, get the result, then log it. So this class could make this functionality available out of the box
  • support for Dependency Injection scenarios could also be added, as the perfect place to use interception is when you register an interface-implementation pair. An extension method like container.RegisterSingletonWithLogging could be used instead of container.RegisterSingleton, by registering a factory which replaces the implementation with a logging proxy (a rough sketch follows after this list)
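
Here is a rough sketch of that last idea: a hypothetical extension method (the name AddSingletonWithInterception is made up) that registers the implementation and exposes the interface through a decorating proxy, using the ClassInterceptor<T> above and Microsoft.Extensions.DependencyInjection:

public static class InterceptionExtensions
{
    public static IServiceCollection AddSingletonWithInterception<TInterface, TImplementation, TInterceptor>(
        this IServiceCollection services)
        where TInterface : class
        where TImplementation : class, TInterface
        where TInterceptor : ClassInterceptor<TInterface>, new()
    {
        // register the concrete implementation normally
        services.AddSingleton<TImplementation>();
        // expose the interface through a proxy that wraps the implementation
        services.AddSingleton<TInterface>(sp =>
        {
            var implementation = sp.GetRequiredService<TImplementation>();
            return new TInterceptor().Decorate(implementation);
        });
        return services;
    }
}

// usage
services.AddSingletonWithInterception<IInterface, Implementation, MyInterceptor<IInterface>>();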

I hope this helps!

P.S. Here is an article helping to migrate from RealProxy to DispatchProxy: Migrating RealProxy Usage to DispatchProxy

Intro

  When talking Dependency Injection, if a class implementing Interface1 needs an implementation of Interface2 in its constructor and the implementation for Interface2 needs an implementation of Interface1 you get a circular dependency error. This could be fixed, though, by providing lazy proxy implementations, which would also fix issues with resources getting allocated too early and other similar issues.  Now, theoretically this is horrible. Yet in practice one meets this situation a lot. This post will attempt to clarify why this happens and how practice may be different from theory.

Problem definition

  Let's start with defining what an interface is. Wikipedia says it's a shared boundary between components. In the context of dependency injection you often hear about the Single Responsibility Principle, which stipulates that a class (and by extension an interface) should only do one thing. Yet even in this case, the implementation for any of the Facade, Bridge, Decorator, Proxy and Adapter software patterns would do only one thing: to proxy, merge or split the functionality of other components, regardless of how many and how complex they are. Going to the other extreme, one could create an interface for every conceivable method, thus eliminating the need for circular dependencies and also loading code that is not yet needed. And then there are the humans writing the code. When you need a service to provide the physical location of the application you would call it ILocationService and when you want to compute the distance between two places you would use the same, because it's about locations, right? Having an ILocationProviderService and an ILocationDistanceCalculator feels like overkill. Imagine trying to determine if a functionality regarding locations is already implemented and going through all the ILocation... interfaces to find out, then having to create a new interface when you write the code for it and spending sleepless nights wondering if you named things right (and if you need to invalidate their cache).

  In other words, depending on context, an interface can be anything, as arbitrarily complex as the components it separates. They could contain methods that are required by other components together with methods that require other components. If you have more such interfaces, you might end up with a circular dependency in the instantiation phase even if the execution flow would not have this problem. Let's take a silly example.

  We have a LocationService and a TimeService. One handles points in space, the other moments in time. And let's say we have the entire history of someone's movements. You could get a location based on the time provided (GetLocation) or get the time based on a provided location (GetTime). Now, the input from the user is text, so we need the LocationService and the TimeService to translate that text into actual points in space and moments in time, so GetLocation would use an ITimeService, while GetTime would use an ILocationService. You start the program and you get the circular dependency error. I told you it would be silly. Anyway, you can split any of the services into ITimeParser and ITimeManager or whatever, you can create a new interface called ITextParser, there are a myriad refactoring solutions. But what if you don't have the luxury to refactor and why do you even need to do anything? Surely if you call GetLocation you only need to parse the time, you never call GetTime, and the other way around.
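
  Here is a sketch of that silly example, just to make the cycle visible (Location is a hypothetical model class and the bodies are elided):

public interface ILocationService
{
    Location GetLocation(string timeText);   // where was I at this time?
    Location ParseLocation(string input);    // translate text into a point in space
}

public interface ITimeService
{
    DateTime GetTime(string locationText);   // when was I at this place?
    DateTime ParseTime(string input);        // translate text into a moment in time
}

public class LocationService : ILocationService
{
    private readonly ITimeService _timeService;

    // GetLocation needs ITimeService to parse the time text...
    public LocationService(ITimeService timeService) => _timeService = timeService;

    public Location GetLocation(string timeText)
    {
        var time = _timeService.ParseTime(timeText);
        // ... look up the location in the movement history ...
        throw new NotImplementedException();
    }

    public Location ParseLocation(string input) => throw new NotImplementedException();
}

public class TimeService : ITimeService
{
    private readonly ILocationService _locationService;

    // ...while GetTime needs ILocationService to parse the location text,
    // so instantiating either service triggers the circular dependency error
    public TimeService(ILocationService locationService) => _locationService = locationService;

    public DateTime GetTime(string locationText)
    {
        var location = _locationService.ParseLocation(locationText);
        // ... look up the time in the movement history ...
        throw new NotImplementedException();
    }

    public DateTime ParseTime(string input) => throw new NotImplementedException();
}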

Solution?

  A possible solution is to only actually get the dependency implementation when you use it. Instead of providing the actual implementation for the interface you need, you provide a lazy proxy. Here is an example of a generic (and lazy one liner) LazyProxy implementation:

public class LazyProxy<TInterface>:Lazy<TInterface>
{
    public LazyProxy(IServiceProvider serviceProvider) : base(() => serviceProvider.GetService<TInterface>()) { }
}

  Problem solved, right? LocationService would ask for a LazyProxy<ITimeService> implementation, GetLocation would do _lazyTimeService.Value.ParseTime(input) which would instantiate a TimeService for the first time, which would ask for a LazyProxy<ILocationService> and in GetTime it would use _lazyLocationService.Value.ParseLocation(input) which would get the existing instance of LocationService (if it's registered as Singleton). Imagine either of these services would have needed a lot of other dependencies.
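
  A sketch of how that looks in practice, showing only the LocationService side (TimeService would be changed the same way; registering the open generic is an assumption on my part, but it lets the container close LazyProxy<T> over whatever interface is requested):

// registration
services.AddSingleton<ILocationService, LocationService>();
services.AddSingleton<ITimeService, TimeService>();
services.AddSingleton(typeof(LazyProxy<>));

// LocationService now depends on the lazy proxy instead of ITimeService directly
public class LocationService : ILocationService
{
    private readonly LazyProxy<ITimeService> _lazyTimeService;

    public LocationService(LazyProxy<ITimeService> lazyTimeService) => _lazyTimeService = lazyTimeService;

    public Location GetLocation(string timeText)
    {
        // TimeService (and its entire dependency tree) is only instantiated here, on first use
        var time = _lazyTimeService.Value.ParseTime(timeText);
        // ... look up the location in the movement history ...
        throw new NotImplementedException();
    }

    public Location ParseLocation(string input) => throw new NotImplementedException();
}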

  Now, that's what's called a "leaky abstraction". You are hiding the complexity of instantiating and caching a service (and all of its dependencies) until you actually use it. Then you might get an error, when the actual shit hits the actual fan. I do believe that the term "leaky" might have originated from the aforementioned idiom. Yuck, right? It's when the abstraction leaks the complexity that lies beneath.

  There are a number of reasons why you shouldn't do it. Let's get through them.

Criticism

  The most obvious one is that you could do better. The design in the simple and at the same time contrived example above is flawed because each of the services is doing two very separate things: providing a value based on a parameter and interpreting text input. If parsing is a necessary functionality of your application, then why not design an ITextParser interface that both services would use? And if your case is that sometimes you instantiate a class to use one set of functions and sometimes to use another set of functions, maybe you should split it into two. However, in real life situations you might not have full control over the code and you might not have the resources to refactor it. Hell, you might be neck deep in spaghetti code! Have you ever worked in one of those house of cards companies where you are not allowed to touch any piece of code for fear it would all crash?

  The next issue is that you would push the detection for a possible bug to a particular point of the execution of your code. You would generate a Heisenbug, a bug that gets reproduced inconsistently. How appropriate this would have been if an IMomentumService were used as well. Developers love Heisenbugs, as the time for their resolution can vary wildly and they would be forced to actually use what they code. Oh, the humanity! Yet, the only problem you would detect early is the cycle in the dependency graph, which is more of a design issue anyway. A bug in the implementation would still be detected when you try to use it. 

  One other issue that this pattern would solve should not be there in the first place: heavy resource use in constructors. Constructors should only construct, obviously, leaving other mechanisms to handle the use of external resources. But here is the snag: if you buy into this requirement for constructors you already use leaky abstractions. And again, you might not be able to change the constructors.

  Consider, though, the way this pattern works. It is based on the fact that no matter when you request the instantiation of a class, you would have a ready implementation of IServiceProvider. The service locator mechanism is already doing lazy instantiation in its GetService method. In fact, this lazy injection pattern is itself a constructor dependency injection abstraction of the service provider pattern. You could just as well do var timeService = _serviceProvider.GetService<ITimeService>() inside your GetLocation method and it would do the exact same thing. So this is another reason why you should not do it: mixing the metaphors. But hey! If you have read this far, you know that I love mixing those suckers up!

Conclusion

  In conclusion, I cannot recommend this solution if you have others like refactoring available. But in a pinch it might work. Let me know what you think!

  BTW, this issue has been also discussed on Stack Overflow, where there are some interesting answers. 

Update: due to popular demand, I've added Tyrion as a Github repo.

Update 2: due to more popular demand, I even made the repo public. :) Sorry about that.

Intro

  Discord is something I had only vaguely heard about and when a friend told me he used it for chat with friends, I installed it, too. I was pleasantly surprised to see it is a very usable and free chat application, which combines features of IRC, other messenger applications and a bit of Slack. You can create servers and add channels to them, for example, where you can determine the rights of people and so on. What sets Discord apart from anything, perhaps other than Slack, is the level of "integration", the ability to programmatically interact with it. So I created a "bot", a program which stays active, responds to user chat messages and can take action. This post is about how to do that.

  Before you implement a bot you obviously need:

  All of this has been done to death and you can follow the links above to learn how to do it. Before we continue, a little something that might not be obvious: you can edit a Discord chat invite so that it never expires, as is the one on this blog now.

Writing code

One can write a bot in a multitude of programming languages, but I am a .NET dev, so Discord.NET it is. Note that this is an "unofficial" library, so it may not be (and it is not) completely in sync with all the options that the Discord API provides. One such feature, for example, is multiple attachments to a message. But I digress.

Since my blog is also written in ASP.NET Core, it made sense to add the bot code to that. Also, in order to make it all clean code, I will use dependency injection as much as possible and use the built-in system for commands, even if it is quite rudimentary.

Step 1 - making dependencies available

We are going to need these dependencies:

  • DiscordSocketClient - the client to connect to Discord
  • CommandService - the service managing commands
  • BotSettings - a class used to hold settings and configuration
  • BotService - the bot itself, which we are going to make implement IHostedService so we can add it as a hosted service in ASP.Net

In order to keep things separated, I will not add all of this in Startup, instead encapsulating them into a Bootstrap class:

public static class Bootstrap
{
    public static IWebHostBuilder UseDiscordBot(this IWebHostBuilder builder)
    {
        return builder.ConfigureServices(services =>
        {
            services
                .AddSingleton<BotSettings>()
                .AddSingleton<DiscordSocketClient>()
                .AddSingleton<CommandService>()
                .AddHostedService<BotService>();
        });
    }
}

This allows me to add the bot simply in CreateWebHostBuilder as: 

WebHost.CreateDefaultBuilder(args)
   .UseStartup<Startup>()
   .UseKestrel(a => a.AddServerHeader = false)
   .UseDiscordBot();

Step 2 - the settings

The BotSettings class will be used not only to hold information, but also communicate it between classes. Each Discord chat bot needs an access token to connect and we can add that as a configuration value in appsettings.config:

{
  ...
  "DiscordBot": {
	"Token":"<the token value>"
  },
  ...
}

public class BotSettings
{
    public BotSettings(IConfiguration config, IHostingEnvironment hostingEnvironment)
    {
        Token = config.GetValue<string>("DiscordBot:Token");
        RootPath = hostingEnvironment.WebRootPath;
        BotEnabled = true;
    }

    public string Token { get; }
    public string RootPath { get; }
    public bool BotEnabled { get; set; }
}

As you can see, no fancy class for getting the config, nor do we use IOptions or anything like that. We only need to get the token value once, let's keep it simple. I've added the RootPath because you might want to use it to access files on the local file system. The other property is a setting for enabling or disabling the functionality of the bot.

Step 3 - the bot skeleton

Here is the skeleton for a bot. It doesn't change much outside the MessageReceived and CommandReceived code.

public class BotService : IHostedService, IDisposable
{
    private readonly DiscordSocketClient _client;
    private readonly CommandService _commandService;
    private readonly IServiceProvider _services;
    private readonly BotSettings _settings;

    public BotService(DiscordSocketClient client,
        CommandService commandService,
        IServiceProvider services,
        BotSettings settings)
    {
        _client = client;
        _commandService = commandService;
        _services = services;
        _settings = settings;
    }

    // The hosted service has started
    public async Task StartAsync(CancellationToken cancellationToken)
    {
        _client.Ready += Ready;
        _client.MessageReceived += MessageReceived;
        _commandService.CommandExecuted += CommandExecuted;
        _client.Log += Log;
        _commandService.Log += Log;
        // look for classes implementing ModuleBase to load commands from
        await _commandService.AddModulesAsync(Assembly.GetEntryAssembly(), _services);
        // log in to Discord, using the provided token
        await _client.LoginAsync(TokenType.Bot, _settings.Token);
        // start bot
        await _client.StartAsync();
    }

    // logging
    private async Task Log(LogMessage arg)
    {
        // do some logging
    }

    // bot has connected and it's ready to work
    private async Task Ready()
    {
        // some random stuff you can do once the bot is online: 

        // set status to online
        await _client.SetStatusAsync(UserStatus.Online);
        // Discord started as a game chat service, so it has the option to show what games you are playing
        // Here the bot will display "Playing dead" while listening
        await _client.SetGameAsync("dead", "https://siderite.dev", ActivityType.Listening);
    }
    private async Task MessageReceived(SocketMessage msg)
    {
        // message retrieved
    }
    private async Task CommandExecuted(Optional<CommandInfo> command, ICommandContext context, IResult result)
    {
        // a command execution was attempted
    }

    // the hosted service is stopping
    public async Task StopAsync(CancellationToken cancellationToken)
    {
        await _client.SetGameAsync(null);
        await _client.SetStatusAsync(UserStatus.Offline);
        await _client.StopAsync();
        _client.Log -= Log;
        _client.Ready -= Ready;
        _client.MessageReceived -= MessageReceived;
        _commandService.Log -= Log;
        _commandService.CommandExecuted -= CommandExecuted;
    }


    public void Dispose()
    {
        _client?.Dispose();
    }
}

Step 4 - adding commands

In order to add commands to the bot, you must do the following:

  • create a class to inherit from ModuleBase
  • add public methods that are decorated with the CommandAttribute
  • don't forget to call commandService.AddModuleAsync like above

Here is an example of an enable/disable command class:

public class BotCommands:ModuleBase
{
    private readonly BotSettings _settings;

    public BotCommands(BotSettings settings)
    {
        _settings = settings;
    }

    [Command("bot")]
    public async Task Bot([Remainder]string rest)
    {
        if (string.Equals(rest, "enable",StringComparison.OrdinalIgnoreCase))
        {
            _settings.BotEnabled = true;
        }
        if (string.Equals(rest, "disable", StringComparison.OrdinalIgnoreCase))
        {
            _settings.BotEnabled = false;
        }
        await this.Context.Channel.SendMessageAsync("Bot is "
            + (_settings.BotEnabled ? "enabled" : "disabled"));
    }
}

When the bot command is issued, the state of the bot will be sent as a message to the chat. If the parameter of the command is enable or disable, the state will also be changed accordingly.

Yet, in order for this command to work, we need to add code to the bot MessageReceived method: 

private async Task MessageReceived(SocketMessage msg)
{
    // do not process bot messages or system messages
    if (msg.Author.IsBot || msg.Source != MessageSource.User) return;
    // only process this type of message
    var message = msg as SocketUserMessage;
    if (message == null) return;
    // match the message if it starts with R2D2
    var match = Regex.Match(message.Content, @"^\s*R2D2\s+", RegexOptions.IgnoreCase);
    int? pos = null;
    if (match.Success)
    {
        // this is an R2D2 command, everything after the match is the command text
        pos = match.Length;
    }
    else if (message.Channel is IPrivateChannel)
    {
        // this is a command sent directly to the private channel of the bot, 
        // don't expect to start with R2D2 at all, just execute it
        pos = 0;
    }
    if (pos.HasValue)
    {
        // this is a command, execute it
        var context = new SocketCommandContext(_client, message);
        await _commandService.ExecuteAsync(context, message.Content.Substring(pos.Value), _services);
    }
    else
    {
        // processing of messages that are not commands
        if (_settings.BotEnabled)
        {
            // if the bot is enabled and people are talking about it, show an image and say "beep beep"
            if (message.Content.Contains("R2D2",StringComparison.OrdinalIgnoreCase))
            {
                await message.Channel.SendFileAsync(_settings.RootPath + "/img/R2D2.gif", "Beep beep!", true);
            }
        }
    }
}

This code forwards commands to the command service if the message starts with R2D2; otherwise, if the bot is enabled, it replies with the R2D2 picture and "Beep beep!" to messages that contain R2D2.

Step 5 - handling command results

Command execution may end in one of three states:

  • command is not recognized
  • command has failed
  • command has succeeded

Here is a CommandExecuted event handler that takes these into account:

private async Task CommandExecuted(Optional<CommandInfo> command, ICommandContext context, IResult result)
{
    // if a command isn't found
    if (!command.IsSpecified)
    {
        await context.Message.AddReactionAsync(new Emoji("🤨")); // eyebrow raised emoji
        return;
    }

    // log failure to the console 
    if (!result.IsSuccess)
    {
        await Log(new LogMessage(LogSeverity.Error, nameof(CommandExecuted), $"Error: {result.ErrorReason}"));
        return;
    }
    // react to message
    await context.Message.AddReactionAsync(new Emoji("🤖")); // robot emoji
}

Note that the command info object does not expose a result value, other than success and failure.

Conclusion

This post has shown you how to create a Discord chat bot in .NET and add it to an ASP.Net Core web site as a hosted service. You may see the result by joining this blog's chat and giving commands to Tyr, the chat's bot:

  • play
  • fart
  • use metric or imperial units in messages
  • use Yahoo Messenger emoticons in messages
  • whatever else I will add in it when I get in the mood :)

Intro

  If you are like me, you want to first establish a nice skeleton app that has everything just right before you start writing your actual code. However, as weird as it may sound, I couldn't find a way to use command line parameters with dependency injection, in the same simple way that one would use a configuration file with IOptions<T> for example. This post shows you how to use CommandLineParser, a nice library that handles everything regarding command line parsing, but in a dependency injection friendly way.

  In order to use command line arguments, we need to obtain them. For any .NET Core application or .NET Framework console application you get them from the parameters of the static Main method in Program. Alternatively, you can use Environment.CommandLine, which is actually a string, not an array of strings, or Environment.GetCommandLineArgs(). But all of these nudge you towards some ugly code that either has a dependency on the static Environment class, or has code early in the application to handle command line arguments, or stores the arguments somehow. What we want is complete separation of modules in our application.

Defining the command line parameters

  In order to use CommandLineParser, you write a class that contains the properties you expect from the command line, decorated with attributes that inform the parser what is the expected syntax for all. In this post I will use this:

// the way we want to use the app is
// FileUtil <command> [-loglevel loglevel] [-quiet] -output <outputFile> file1 file2 .. file10
public class FileUtilOptions
{
    // use Value for parameters with no name
    [Value(0, Required = true, HelpText = "You have to enter a command")]
    public string Command { get; set; }

    // use Option for named parameters
    [Option('l',"loglevel",Required = false, HelpText ="Log level can be None, Normal, Verbose")]
    public string LogLevel { get; set; }

    // use bool for named parameters with no value
    [Option('q', "quiet", Default = false, Required = false, HelpText = "Quiet mode produces no console output")]
    public bool Quiet { get; set; }

    // Required for required values
    [Option('o', "output", Required = true, HelpText = "Output file is required")]
    public string OutputFile { get; set; }

    // use Min/Max for enumerables
    [Value(1, Min = 1, Max = 10, HelpText = "At least one file name and at most 10")]
    public IEnumerable<string> Files { get; set; }
}

  At this point the blog post will split into two parts. One is very short and easy to use, thanks to commenter Murali Karunakaran. The other one is what I wrote in 2020 when I didn't know better. This second part is just a reminder of how much people can write when they don't have to :)

The short and easy solution

All you have to do is add your command line parameters class as options, then define what will happen when you request one instance of it:

// in ConfigureServices or wherever you define dependencies for injection
services
  .AddOptions<FileUtilOptions>()
  .Configure(opt => 
    Parser.Default.ParseArguments(() => opt, Environment.GetCommandLineArgs())
  );

// when needing the parameters
public SomeConstructor(IOptions<FileUtilOptions> options)
{
    _options = options.Value;
}

When an instance of FileUtilOptions is requested, the lambda will be executed, setting the options based on ParseArguments. If there is any issue, the parser will display the help on the console.

This process, however, does not throw any exceptions. The instance of FileUtilOptions requested will be provided empty or partially/incorrectly filled. In order to handle the errors, some more complex code is needed, and here is a silly example:

using (var writer = new StringWriter())
{
	var parser = new Parser(configuration =>
	{
		configuration.AutoHelp = true;
		configuration.AutoVersion = false;
		configuration.CaseSensitive = false;
		configuration.IgnoreUnknownArguments = true;
		configuration.HelpWriter = writer;
	});
	var result = parser.ParseArguments<T>(_args);
	result.WithNotParsed(errors => HandleErrors(errors, writer));
	result.WithParsed(value => _value = value);
}

// a possible way to handle errors
private static void HandleErrors(IEnumerable<Error> errors, TextWriter writer)
{
	if (errors.Any(e => e.Tag != ErrorType.HelpRequestedError && e.Tag != ErrorType.VersionRequestedError))
	{
		string message = writer.ToString();
		throw new CommandLineParseException(message, errors, typeof(T));
	}
}

Now, the original post follows:

Writing a lot more than necessary

  How can we get the arguments by injection? By creating a new type that encapsulates the simple string array.

// encapsulates the arguments
public class CommandLineArguments
{
    public CommandLineArguments(string[] args)
    {
        this.Args = args;
    }

    public string[] Args { get; }
}

// adds the type to dependency injection
services.AddSingleton<CommandLineArguments>(new CommandLineArguments(args));
// the generic type declaration is superfluous, but the code is easy to read

  With this, we can access the command line arguments anywhere by injecting a CommandLineArguments object and accessing the Args property. But this still implies writing command line parsing code wherever we need that data. We could add some parsing logic in the CommandLineArguments class so that instead of the command line arguments array it would provide us with a strongly typed value of the type we want. But then we would put business logic in a command line encapsulation class. Why would it know what type of options we need and why would we need only one type of options?

  What we would like is something like

public SomeClass(IOptions<MyCommandLineOptions> clOptions) {...}

  Now, we could use this system by writing more complicated code that adds a ConfigurationSource and then declaring that certain types are command line options. But I don't want that either, for several reasons:

  • writing configuration providers is complex code and at some moment in time one has to ask how much are they willing to write in order to get some damn arguments from the command line
  • declaring the types at the beginning does provide some measure of centralized validation, but on the other hand it's declaring types that we need in business logic somewhere in service configuration, which personally I do not like

  What I propose is adding a new type of IOptions, one that is specific to command line arguments:

// declare the interface for generic command line options
public interface ICommandLineOptions<T> : IOptions<T>
    where T : class, new() { }

// add it to service configuration
services.AddSingleton(typeof(ICommandLineOptions<>), typeof(CommandLineOptions<>));

// put the parsing logic inside the implementation of the interface
public class CommandLineOptions<T> : ICommandLineOptions<T>
    where T : class, new()
{
    private T _value;
    private string[] _args;

    // get the arguments via injection
    public CommandLineOptions(CommandLineArguments arguments)
    {
        _args = arguments.Args;
    }

    public T Value
    {
        get
        {
            if (_value==null)
            {
                // set the value by parsing command line arguments
            }
            return _value;
        }
    }

}

  Now, in order to make it work, we will use CommandLineParser which functions in a very simple way:

  • declare a Parser
  • create a POCO class that has properties decorated with attributes that define what kind of command line parameter they are
  • parse the command line arguments string array into the type of class declared above
  • get the value or handle errors

  Also, to follow the now familiar Microsoft pattern, we will write an extension method to register both arguments and the mechanism for ICommandLineOptions. The end result is:

// extension class to add the system to services
public static class CommandLineExtensions
{
    public static IServiceCollection AddCommandLineOptions(this IServiceCollection services, string[] args)
    {
        return services
            .AddSingleton<CommandLineArguments>(new CommandLineArguments(args))
            .AddSingleton(typeof(ICommandLineOptions<>), typeof(CommandLineOptions<>));
    }
}

public class CommandLineArguments // defined above

public interface ICommandLineOptions<T> // defined above

// full class implementation for command line options
public class CommandLineOptions<T> : ICommandLineOptions<T>
    where T : class, new()
{
    private T _value;
    private string[] _args;

    public CommandLineOptions(CommandLineArguments arguments)
    {
        _args = arguments.Args;
    }

    public T Value
    {
        get
        {
            if (_value==null)
            {
                using (var writer = new StringWriter())
                {
                    var parser = new Parser(configuration =>
                    {
                        configuration.AutoHelp = true;
                        configuration.AutoVersion = false;
                        configuration.CaseSensitive = false;
                        configuration.IgnoreUnknownArguments = true;
                        configuration.HelpWriter = writer;
                    });
                    var result = parser.ParseArguments<T>(_args);
                    result.WithNotParsed(errors => HandleErrors(errors, writer));
                    result.WithParsed(value => _value = value);
                }
            }
            return _value;
        }
    }

    private static void HandleErrors(IEnumerable<Error> errors, TextWriter writer)
    {
        if (errors.Any(e => e.Tag != ErrorType.HelpRequestedError && e.Tag != ErrorType.VersionRequestedError))
        {
            string message = writer.ToString();
            throw new CommandLineParseException(message, errors, typeof(T));
        }
    }
}

// usage when configuring dependency injection
services.AddCommandLineOptions(args);

Enjoy!

Final notes

Now there are some quirks in the implementation above. One of them is that the parser class generates the usage help by writing it to a TextWriter (the default being Console.Error), but since we want this to be encapsulated, we declare our own StringWriter and then store the generated help text if there are any errors. In the case above, I am storing the help text as the exception message, but it's the principle that matters.

Also, with this system one can ask for multiple types of command line options classes, depending on the module, without the need to declare said types at the configuration of dependency injection. The downside is that if you want validation of the command line options at the very beginning, you have to write extra code. In the way implemented above, the application will fail when first asking for a command line option that cannot be mapped on the command line arguments.
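
For example, one way to force parsing and validation at startup (assuming the AddCommandLineOptions registration above) is to resolve and touch the options right after building the provider:

var provider = services.AddCommandLineOptions(args).BuildServiceProvider();
// forces parsing now; throws CommandLineParseException if the arguments are invalid
_ = provider.GetRequiredService<ICommandLineOptions<FileUtilOptions>>().Value;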

Note that the short style of a parameter needs to be used with a dash, the long one with two dashes:

  • -o outputFile.txt - correct (value outputFile.txt)
  • --output outputFile.txt - correct (value outputFile.txt)
  • -output outputFile.txt - incorrect (value output and outputFile.txt is considered an unnamed argument)

Intro

  This is part of a series that I plan to build on as time goes on: technical interview questions, dissected and laid bare for both interviewers and interviewees. You can also check out the previous one: Interview question: all items in table A but not in B.

  This question is a little bit more complex and abstract at the same time. The post is written more for interviewers this time, because as candidates go, you need to read the links in it if you didn't know the concepts in it. This also is not a question with a single correct answer. It comes after asking about Dependency Injection as a whole and the candidate answering correctly.

  I expect senior developers to be able to go through this successfully; it is not a test for junior developers, although depending on their previous experience, juniors might be able to go through it and seniors might be forced to reason through it.

The test

Bonus introduction question: why use DI at all? Expected answers would be separation of concerns and testability. 

  The question has two steps.

Step 1: given the following code in a legacy application, improve it to use Dependency Injection:

public class SomeClass {
  public List<Item> GetItems(int days, string filter) {
    var service = new ItemService();
    return service.GetItems()
      .Where(i => i.Time >= DateTime.Now.AddDays(-days))
      .ToList();
  }
}

Bonus questions:

  • has the candidate worked with LINQ before?
  • what does the code do?

Now, this question is as much about programming knowledge as it is about attention. There are three irregularities that should attract that attention:

  • the most obvious: the service is being instantiated by calling the constructor
    • the interviewer expects at the very least for the candidate to notice it and suggest moving the instantiation of the service in the constructor of the SomeClass class and inject it instead of using new
    • the candidate might also propose passing the service as a method parameter; to get around that, suggest that the signature of the method should remain the same. Either way, one can discuss the idea of moving all dependencies to the constructor and/or the calling methods and get insight into the way the candidate is thinking.
  • the unexplained string filter in the signature of the method
    • the interviewer can either tell the candidate that it will become relevant later, because it will, or that this is a method that implements an interface, to which a more snarky candidate might reply that SomeClass implements nothing (bonus for attention)
  • the use of DateTime.Now
    • it is a static property that gives a different output every time so it should be taken into account for Dependency Injection or at least for unit testing

By now you have filtered out the majority of failing candidates and you are left with someone who used or at least understands DI, can read and understand code, has used or at least understood basic LINQ and you have also gauged their level of attention to detail.

If the candidate only talked about the service and they decided to create an interface for ItemService and then add it as a parameter for the constructor of SomeClass, ask them to write a unit test for the method and explain to them that testability is one of the goals of DI, if you didn't cover this already:

  • bonus: see if they do unit testing or at least understand the concept
  • if they do attempt to write the unit test, ask them what would happen if you would run the test in different days

The expected result of this part is that the candidate understands the need to abstract DateTime.Now. It is interesting to note how they intend to abstract it, since they do not have access to its code and it is a static property that needs to be abstracted.

Whether the candidate figured it out by themselves or it was explained to them, the expected answer is that DateTime.Now is abstracted by creating an IDateTimeService interface that is implemented as a wrapper over DateTime.
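A minimal sketch of such a wrapper (the exact names are up to the candidate) would be:

public interface IDateTimeService {
  DateTime Now { get; }
}

public class DateTimeService : IDateTimeService {
  public DateTime Now => DateTime.Now;
}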

At this point the code should look like this:

public class SomeClass {
  private IItemService _itemService;
  private IDateTimeService _dateTimeService;

  public SomeClass(IItemService itemService, IDateTimeService dateTimeService) {
    _itemService = itemService;
    _dateTimeService = dateTimeService;
  }

  public List<Item> GetItems(int days, string filter) {
    return _itemService.GetItems()
      .Where(i => i.Time >= _dateTimeService.Now.AddDays(-days))
      .ToList();
  }
}

Also, the candidate should be asked to write a unit test, just to see they know how, for bonus points. Note if the candidate understands isolation for unit testing or does something that would work but be silly like generate the test data based on current date or duplicate the code logic in the test instead of working with static data.
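For reference, such a test could look like the following, assuming xUnit and Moq and assuming IItemService.GetItems() returns an enumerable of Item objects with a settable Time property:

[Fact]
public void GetItems_ReturnsOnlyRecentItems() {
  // static test data, no dependency on the current date
  var now = new DateTime(2019, 1, 10);
  var items = new List<Item> {
    new Item { Time = new DateTime(2019, 1, 9) }, // one day old, should be returned
    new Item { Time = new DateTime(2019, 1, 1) }  // nine days old, should be filtered out
  };

  var itemServiceMock = new Mock<IItemService>();
  itemServiceMock.Setup(s => s.GetItems()).Returns(items);
  var dateTimeServiceMock = new Mock<IDateTimeService>();
  dateTimeServiceMock.Setup(s => s.Now).Returns(now);

  var someClass = new SomeClass(itemServiceMock.Object, dateTimeServiceMock.Object);

  var result = someClass.GetItems(5, null);

  Assert.Single(result);
}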

Step 2: tell the candidate that the legacy code they need to fix looks a bit different:

public class SomeClass {
  public List<Item> GetItems(int days, string filter) {
    var service = new ItemService(filter);
    return service.GetItems()
      .Where(i => i.Time >= DateTime.Now.AddDays(-days))
      .ToList();
  }
}

The ItemService now receives the filter as the parameter. Ask them what to do in this case.

The expected answer is a factory injected instead of the service, which will then be used to instantiate an IItemService with a parameter. Bonus discussion about software patterns can be inserted here.
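One possible shape for that factory (names, the _itemServiceFactory field and the assumption that ItemService was made to implement IItemService in step 1 are all illustrative):

public interface IItemServiceFactory {
  IItemService Create(string filter);
}

public class ItemServiceFactory : IItemServiceFactory {
  public IItemService Create(string filter) {
    return new ItemService(filter);
  }
}

// GetItems then becomes something like:
public List<Item> GetItems(int days, string filter) {
  return _itemServiceFactory.Create(filter).GetItems()
    .Where(i => i.Time >= _dateTimeService.Now.AddDays(-days))
    .ToList();
}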

There are other valid answers here, like using the DI container itself as a factory for the service, which might provoke interesting discussions in itself, like weighing constructor injection versus service provider in dependency injection and whether hybrid solutions might be better.

Bonus question: what if you cannot control the code of ItemService in step 1 and it does not implement an interface or a base class?

  • warning, this might give a hint for the second part of the interview, so use it at the end 
  • correct answer 1: use the class as the type of the parameter and let the dependency container decide how to instantiate it
  • correct answer 2: use a wrapper over the class that implements the interface and proxies to the instance methods.
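For correct answer 2, a minimal wrapper would be something along these lines (assuming IItemService.GetItems() returns IEnumerable<Item>):

public class ItemServiceWrapper : IItemService {
  private readonly ItemService _inner = new ItemService();

  public IEnumerable<Item> GetItems() => _inner.GetItems();
}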

Conclusion

This test told me a lot more about the candidates than just their dependency injection knowledge. We got to talking, I became aware of how their minds worked and I was both pleasantly surprised when they came up with alternate solutions that kind of worked and a bit irked that they went that far and didn't see the superior option. Most of the time this made me think about the differences between what I would answer and what they did, and this resulted in interesting discussions that enriched not only their experience, but also mine.

Dependency injection, separation of concerns and unit testing are important concepts for any modern developer. I hope this helps devs evolve and interviewers find the best candidates... at least until all of them get to read my blog.

and has 0 comments
There is one basic functionality of all main programming languages: throwing exceptions. Determining in code that something is wrong, one throws an exception of a certain type with extra messages and values. The problem this solves is breaking an execution flow that has entered an invalid state and being aware of what happened. Traditionally, errors are then caught in higher levels of the application and decisions are made: ignore the error, log it, encapsulate it into another exception with extra information, throw it as it is after some cleanup, etc.

But as the joke goes, now you have two problems. When developing .NET code you have to ask yourself what type of exception you are going to throw, what data to add to it and think of what will catch it above and how will it interpret what you sent. Some people create a different exception type for each little issue, in view of the multiple catch(SpecificExceptionType) functionality, so they can choose later what to do at a higher level. Others try to use the out of the box Microsoft exception types, a clear case of stuffing square pegs in round holes. Inevitably someone will just give up in frustration and throw a new Exception("Something went wrong!"); and be done with it. And recently, in order to solve the problems with the above approaches, I envisioned (with full documentation and implementation) a dependency injected IExceptionFactory which I thought was the greatest invention since fire only to discover it was so unwieldy to use that I despaired and deleted the entire thing.

Discussion


Discussing with friends about this deceptively complicated problem, I think I found a solution that covers all major scenarios. But before doing that, I can just feel that some of you thought "Hey! I am doing that and there is nothing wrong with it!", so let's discuss what's wrong with the approaches above. If you want to skip this, go to The Solution section.

Multiple Exception implementations


Extending Exception is not simple. There are four constructors and I dare you to say off the top of your head what Exception(SerializationInfo, StreamingContext) is and where it is used. There are numerous code analyzers that spew a lot of warnings about how exceptions should be implemented correctly. That's another story: here is a nice article about it. More importantly, doing all this work for every possible exception takes time, effort and duplicated code. In the end, you will get to the next scenario, but with a larger set of hole shapes.

Also, the try/catch block in C# 6.0 has been updated with the catch when syntax, so you can have multiple catch blocks with the same Exception type and different conditions.
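For example, something along these lines, where ProcessOrder and order are just placeholders:

try
{
    ProcessOrder(order);
}
catch (ArgumentException ex) when (ex.ParamName == "order")
{
    // handle the specific case of a bad order argument
}
catch (ArgumentException)
{
    // handle any other argument problem
}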

Using existing exception types


If you get an empty string from a method and you expected something there, you should throw an exception, but which one? The value is not null, so ArgumentNullException is kind of not applicable. Is ArgumentOutOfRangeException better? I mean, empty string is not in the range of accepted values for the parameter, maybe it could be it. Or is it just ArgumentException? You decide on ArgumentException and you smugly add the name of the variable with nameof(yourLocalVariable), because you are knowledgeable in the ways of code... and you get a warning that yourLocalVariable is not the name of any parameter of the method you are in. That's right, the value was invalid, but ArgumentExceptions are used specifically for the current method arguments.

You don't want to use multiple custom exception types, because you've read this post and abandoned it halfway, but you agreed with the first point. Or maybe you are just lazy. You ignore the warning and use ArgumentException anyway. Later on you are reading the logs, trying to remember where in the code you used yourLocalVariable and why it matters that it's empty.

Admit it, the Microsoft exception types were not really meant to help you throw exceptions, they are there for Microsoft's internal code and use. Most of the few cases when the exception type is spot on are probably not what the makers of that exception type envisioned when they made it.

Using Exception and a meaningful message


You are done with pointless standards. You just use throw new Exception($"This really specific thing happened with variable {yourVariable}"); and let God sort them out! You can use catch when to look into the string and parse it for information and make decisions on it. It actually works, and you're rightly satisfied with yourself. You've showed them all how it's done. Boom! And then a junior developer comes along and decides your wording is not quite right for a native English speaker and changes the string. Suddenly everything literally goes boom, as exceptions get where they shouldn't and flows change unexpectedly.

You warn the entire team to never change the exception strings, as they are used in the functionality of the application, and you even consider creating a resource system for exception strings so they can be used for decision making regardless of content (and inevitably hate the way you need to store format strings and remember what value goes where). Then a member of the UI team comes and says "Hey, I need to show the user the reason the flow failed. And I need to translate it to their language". And you despair.

Using a single type of Exception that has everything you need


A slight variation on all of the points above, this involves creating only one type of custom exception and adding to it whatever is needed to determine flow, string resource ids, etc. This is actually a pretty decent idea, as it puts the control back into the developer's hands. Why depend on Microsoft types or on parsing strings? Context is for kings and you are a king amongst kings.

However, whenever you want to change something, like add a value to an enum that defines the type of the exception or change the way in which a certain exception is handled, you have to change all the code that uses that exception. It's a single point of use, but not a single point of change.

Moreover, other devs in the team think it is cumbersome to work with it. The exception type is stored in the basest of libraries and they all want to add something to it. It becomes bloated and soon enough it creeps into a huge mess that is handled differently in different code and is not easy to maintain, understand or use.

Another layer of indirection


So why not use an exception factory? Everything else in your code works on the premise that "if you want something, you inject an ISomething in the constructor and worry about the implementation never". Why not inject IExceptionFactory everywhere where you need exceptions, then do something magic with it? The result of the operation is determined by the implementation, too. If you want another implementation, you just inject something else. It's genius!

Only then you have to use it. How do you inject the factory into static methods, extension methods, stuff so basic that it is used as utility classes all over the code, now that you have to add an extra dependency to everything that uses those classes? Everybody hates you; they hate having to add an extra constructor parameter and an extra field, then throw exceptions with something like throw _exceptionFactory.New("Something went wrong!", new ParameterEmptyExceptionData(nameof(localVariable), localVariable)); while adding a dependency to the logging library that the factory uses to log generated exceptions.

Oh, it's just crap!

The Solution


Let's start from an existing piece of code: throw new ArgumentException($"{nameof(localVariable)} is null or empty");. Optimally, we would just want to change this code slightly to solve several issues:
  • formalize that it is an argument empty exception
  • make it clear it's localVariable that was empty
  • maybe add the actual value of localVariable
  • declare the context in which the exception was thrown
  • declare the message that should be used in the exception
  • throw a meaningful exception type
  • decide if this exception should be ignored or thrown
  • log the exception
  • minimize developer effort
  • minimize dependencies
  • use a solution that is closed for modification, but open for extension

A tall order, especially since we've already decided that we don't want to use the factory idea. Some of the issues above are also non-issues in most cases. What if I don't care about the language of the message or whether it comes from a resource, because it's something used internally in our code? localVariable is empty, so I don't need its value. The context is clear from the stack trace. The exception is meaningful enough as an ArgumentException. In other words, we need to solve one more issue: all of the issues above must be optional.

The software pattern that covers this scenario and has been used extensively by library makers is the builder pattern. For the sake of exploration, let's see how this would work:
var exception = new ArgumentException($"{nameof(localVariable)} is null or empty");
var builder = new ExceptionBuilder(exception, logger)
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .TryToIgnore();
throw builder.Build(); // this also logs and returns an exception of a type the builder decides relevant

This looks promising, considering that every method above is optional, except the builder instantiation and the build at the end, but it's still too close to the factory idea above. Why use new in a project that is based on dependency injection? Why call .Build() everywhere you need to throw an exception? Where does the logger come from?

So here is the solution I am proposing, using several resources we have at our disposal in C#:
  • the Exception type has a Data Dictionary property for additional data
  • extension methods can be defined in multiple places for the same type
  • with extension methods, there is no need for a builder instance when throwing an exception, and even the final Build step can be optional

The code will look like this:
throw new ArgumentException($"{nameof(localVariable)} is null or empty")
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .TryToIgnore()
    .Build();

Each method above is an extension method on the Exception type. Any of them can decide to return the original object or a different one, but they all return an instance that extends Exception. The information attaching methods use the Data property to hold the information. The Build method is designed to take every information attached to an Exception and perform more complex actions, like logging or constructing a completely different object to be returned, however that step is also optional.

And here is the source for an ExceptionBuilder static class that acts as both container for the more common extension methods as well as the point where dependencies are being registered:
/// <summary>
/// Add data to exceptions, then build a Custom exception
/// using registered <see cref="IExceptionBuildHandler"/> and optional logging
/// </summary>
public static class ExceptionBuilder
{
private const string CustomPrefix = "Custom.";
 
private static readonly List<IExceptionBuildHandler> _handlers = new List<IExceptionBuildHandler>();
private static ICustomLogger _logger;
 
#region Extended Data
 
/// <summary>
/// Attaches a custom <see cref="Error"/> to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="error"></param>
/// <returns></returns>
public static Exception SetError(this Exception ex, Error error)
{
_logger?.LogTrace($"Setting error {error} in exception {ex}");
return ex.SetData(nameof(Error), error);
}
 
/// <summary>
/// Attaches an object as the exception origin to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="origin"></param>
/// <returns></returns>
public static Exception SetOrigin(this Exception ex, object origin)
{
_logger?.LogTrace($"Setting exception origin {origin} in exception {ex}");
return ex.SetData("origin", origin);
}
 
/// <summary>
/// Attaches a name parameter to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="name"></param>
/// <returns></returns>
public static Exception SetName(this Exception ex, string name)
{
_logger?.LogTrace($"Setting exception name {name} in exception {ex}");
return ex.SetData("name", name);
}
 
/// <summary>
/// Declare an exception as not breaking the execution flow.
/// Implement catch blocks for exceptions like this to support this scenario.
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static Exception TryToIgnore(this Exception ex)
{
_logger?.LogTrace($"Declaring exception {ex} as not breaking execution flow");
return ex.SetData("tryToIgnore", true);
}
 
/// <summary>
/// True if this exception is declared as not breaking execution flow
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static bool ShouldBeIgnored(this Exception ex)
{
return object.Equals(ex.GetData("tryToIgnore"), true);
}
 
/// <summary>
/// Attaches a type to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="type"></param>
/// <returns></returns>
public static Exception AddType(this Exception ex, Type type)
{
_logger?.LogTrace($"Attaching type {type} in exception {ex}");
return ex.AddData("types", type);
}
 
 
/// <summary>
/// Attaches a value to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception AddValue(this Exception ex, object value)
{
_logger?.LogTrace($"Attaching type {value} in exception {ex}");
return ex.AddData("values", value);
}
 
/// <summary>
/// Gets data from exception based on key.
/// Returns null if not found.
/// </summary>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <returns></returns>
public static object GetData(this Exception ex, string key)
{
key = $"{CustomPrefix}{key}";
if (ex.Data?.Contains(key)!=true)
{
return null;
}
return ex.Data[key];
}
 
/// <summary>
/// Attaches an object to exception data replacing any previous one with the same key
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception SetData<T>(this Exception ex, string key, T value)
{
key = $"{CustomPrefix}{key}";
var result = ex.AsCustomException();
// write the value on the returned (possibly wrapped) exception so GetData can find it later
result.Data[key] = value;
return result;
}
 
/// <summary>
/// Adds an object to a list that resides in the exception data at the given key
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception AddData<T>(this Exception ex, string key, T value)
{
key = $"{CustomPrefix}{key}";
var result = ex.AsCustomException();
// read and write on the returned (possibly wrapped) exception, consistent with SetData and GetData
var alreadyExists = result.Data.Contains(key);
if (!alreadyExists || !(result.Data[key] is List<T> list))
{
if (alreadyExists)
{
_logger?.LogWarning($"Overwriting data {result.Data[key]} with key {key} with an empty list of {typeof(T).Name} in exception {ex}.");
_logger?.LogWarning($"Are you using Add* and Set* builder methods at the same time or adding objects of different types?");
}
list = new List<T>();
result.Data[key] = list;
}
lock (list)
{
list.Add(value);
}
return result;
}
 
#endregion Extended Data
 
/// <summary>
/// Builds the exception from the Data and the provided base exception
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static CustomException Build(this Exception ex)
{
var customException = ex.AsCustomException();
lock (_handlers)
{
for (var index = _handlers.Count - 1; index >= 0; index--)
{
var handler = _handlers[index];
var result = handler.Build(customException, _logger);
if (result != null)
{
customException = result.AsCustomException();
break;
}
}
}
_logger?.LogTrace($"Built exception {customException}");
return customException;
}
 
 
#region Registration
 
/// <summary>
/// Register an <see cref="IExceptionBuildHandler"/>. The last handler to be added will take precedence.
/// </summary>
/// <param name="handler"></param>
public static void RegisterBuildHandler(IExceptionBuildHandler handler)
{
lock (_handlers)
{
_logger?.LogTrace($"Registering exception build handler {handler}");
_handlers.Add(handler);
}
}
 
/// <summary>
/// Register a logger
/// </summary>
/// <param name="logger"></param>
public static void RegisterLogger(ICustomLogger logger)
{
_logger = logger;
_logger?.LogTrace($"Registered logger in the exception builder");
}
 
#endregion Registration
 
private static CustomException AsCustomException(this Exception ex)
{
return ex is CustomException customException
? customException
: new CustomException(ex);
}
}


Note a few things:
  • All of the extension methods return an Exception instance (usually the original object), with the exception of Build, which returns a CustomException that may write the extra Data values in its ToString
  • The external dependencies are registered via methods. In the class I use, I even replaced those methods with a RegisterServiceProvider method that sets up everything it needs, including the list of handlers, from dependency injection
  • One doesn't need to call Build; every extension method naturally continues normal existing code like throw new WhateverException();
  • When using Build, though, you can change the exception object that is being thrown just by injecting another instance of IExceptionBuildHandler
  • In my project, I've devised a method of injecting code via a text configuration file. That means that you can change what happens when an exception is being thrown without recompiling your existing code.

Finally, there is one design decision that I am not sure about: to use throw exception, or to use exception.Throw()? The former is natural to all devs, but it needs special catch blocks to be able to resume execution; whatever the builder returns, it will always throw something. The latter needs a change in all code that throws exceptions, but it could handle the decision to whether to throw anything at all without recompile.
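With the throw approach, such a special catch block could look like this (DoSomethingThatMightThrow is just a placeholder), relying on the ShouldBeIgnored extension defined above:

try
{
    DoSomethingThatMightThrow();
}
catch (Exception ex) when (ex.ShouldBeIgnored())
{
    // the exception was declared as not breaking the flow, so swallow it and continue
}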

I lean toward the first, just because changes in an existing code base can be done incrementally and the code can be understood by all devs, regardless of seniority.

I find this to be a wonderful idea, clear, useful and flexible. I hope you do, too!

and has 0 comments

Update:

The problem described here was slightly false. The issue was that IOptionsSnapshot was registered as Scoped and I was just getting the service from the root IServiceProvider. The solution is to call provider.CreateScope() and with that scope as a provider use ActivatorUtilities. Even better, create a scope, then use it to get an instance of a business class that now would support Scoped services as well as Transient, just like a Controller would.

Warning, though: you need to dispose the scope, but you need to make sure you don't use any service that was created there outside the scope (after disposing).

I guess another solution would be to somehow register IOptionsSnapshot<> as transient, but haven't tried it.
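In code, the fix from the update would look roughly like this (MyObject standing in for whatever class needs the IOptionsSnapshot dependency):

using (var scope = _serviceProvider.CreateScope())
{
    var myObject = ActivatorUtilities.CreateInstance<MyObject>(scope.ServiceProvider);
    // use myObject only here; neither it nor its scoped dependencies should outlive the scope
}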

And now for the original post

I was trying to create an instance of an object from a service provider to resolve any dependencies, using ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider) and I was getting the exception:

System.InvalidOperationException
HResult=0x80131509
Message=Cannot resolve scoped service 'Microsoft.Extensions.Options.IOptionsSnapshot`1[ExternalConfiguration]' from root provider.


My object was receiving a parameter of type IOptionsSnapshot<ExternalConfiguration> and upon further investigation, my service provider (which came as a resolution from the dependency injection for IServiceProvider) was actually a ServiceProviderEngineScope which just refused to resolve any IOptionsSnapshot! Funny enough, if I replaced IOptionsSnapshot with IOptionsMonitor, which in my mind is a heavier interface, it worked without issues. Further still, the problem appeared only inside an IHostedService (a BackgroundService hooked up with services.AddHostedService<T>); if I wrote the same code in a controller action, for instance, it worked fine.

The .NET Core 2+ implementation of IOptionsSnapshot<T> is OptionsManager<T>. If I manually resolved an instance of OptionsManager before my object, then added it as a parameter, the code worked:

var optionsSnapshot = ActivatorUtilities.CreateInstance<OptionsManager<TestOptions>>(_serviceProvider);
var myObject = ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, optionsSnapshot);


So, specifically, the issue is that in .NET Core, the service provider implementation cannot resolve IOptionsSnapshot interfaces in worker services. You can still do that manually, but I suspect it is a bug, since there is no problem using an IOptionsMonitor instead of IOptionsSnapshot.

A possible solution is to use an additional service provider only for IOptionsSnapshot. Warning, this will not work in a general situation if the dependencies from the additional service provider also need parameters that would be found in the original service provider:

// initialization code
var serviceCollection = new ServiceCollection();
serviceCollection.AddSingleton(
    typeof(IOptionsSnapshot<>),
    typeof(OptionsManager<>)
);
serviceCollection.AddSingleton(
    typeof(IOptionsFactory<>),
    typeof(OptionsFactory<>)
);
_additionalServiceProvider = serviceCollection.BuildServiceProvider();

// resolution code
var constructor = typeof(MyObject).GetConstructors()
    .Where(ci => ci.IsPublic)
    .Single();
var args = constructor.GetParameters()
    .Select(p =>
    {
        try
        {
            return _serviceProvider.GetService(p.ParameterType);
        }
        catch
        {
            return _additionalServiceProvider.GetService(p.ParameterType);
        }
    })
    .ToArray();
return ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, args);

and has 1 comment
.NET Core comes with its own dependency injection engine, separated in the Microsoft.Extensions.DependencyInjection package, and ASP.Net Core uses it by default. In a very simplistic description, it uses an IServiceCollection to add services to, then it builds an IServiceProvider from that list, an interface which returns an implementation based on a type or null if finding none. Any change in the list of services is not supported. There are situations, though, where you want to add new services. One of them being dynamically resolving new types.

Therefore I set up to create a custom implementation of IServiceProvider that fixes that, using the mechanisms already existing in .NET Core. Note that this is just something I did from frustration, "because I could". Most people choose to replace the entire IServiceProvider with an implementation that uses some other DI container, like StructureMap.

The first attempt was proxying a normal ServiceProvider and keeping a reference to the collection. Then I would just change the collection and recreate the service provider. That has two major problems. One is disposal of the previous service provider: if you do dispose it, you automatically dispose all services already resolved, and if you do not, you keep references to the created services. The second, and more dire, is that recreating the service provider will generate new instances for services, even if they are registered as singletons. That is not good.
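To illustrate the singleton problem with a trivial sketch (IMyService, MyService and IOtherService are hypothetical registrations):

var services = new ServiceCollection();
services.AddSingleton<IMyService, MyService>();

var provider1 = services.BuildServiceProvider();
var first = provider1.GetService<IMyService>();

// a new registration appears later, so the provider gets rebuilt
services.AddSingleton<IOtherService, OtherService>();
var provider2 = services.BuildServiceProvider();
var second = provider2.GetService<IMyService>();

// first and second are different instances, even though IMyService was registered as a singleton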

I thought of a solution:
  1. keep a list of service providers, instead of just one
  2. use a custom service collection which will let us know when changes occurred
  3. whenever new services are added, add them to a list of new services
  4. whenever a service is resolved, go through the list of providers
  5. if any provider returns a value, provide it
  6. else if any new service create a new provider from the new services and add it to the list
  7. else return null
  8. when disposing, dispose all providers in the list

This works great except the newly added providers are separate from the existing providers so when you try to resolve a type with a second provider and that type has in its constructor a type that was registered in the first provider, you get nothing.

One solution would be to add all services to the second provider, not only the new ones, but then we get back to the original issue of the singletons, only a bit more subtle:
  1. register type1 as a singleton
  2. get an instance of type1 (1)
  3. build the provider
  4. get an instance of type1 (2)
  5. register type2 which receives a type1 in its constructor
  6. get an instance of type2
  7. now, type1 (1) is the same as type1 (2), because it was resolved by the same provider
  8. type1 is different from type2.type1, though, because that was resolved as a different singleton by the second provider in the list

One solution would be to add all previous services as factories, then. For IType1, instead of returning typeof(Type1), return a factory method that resolves the value with our system. And it works... until it reaches a definition (like IOptions) that was registered as an open generic: services.AddSingleton(typeof(IType3<>), typeof(Type3<>)). In the case of open generics, you cannot use a descriptor with a factory, because the factory returns an object, regardless of the generic type argument used. It would not do to return a Type3<Banana> for a requested IType3<int>.
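To make the limitation concrete (as far as I know, Microsoft's container only accepts an open generic mapping as a pair of type definitions):

// supported: an open generic definition mapped to an open generic implementation type
services.AddSingleton(typeof(IType3<>), typeof(Type3<>));

// not supported: a factory delegate receives no generic type argument,
// so it could never know whether to build a Type3<int> or a Type3<Banana>
// services.AddSingleton(typeof(IType3<>), provider => /* which Type3<T>? */);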

So, final version is this:
  1. keep a list of service providers, instead of just one
  2. keep a dictionary of the last object resolved for a type
  3. use a custom service collection which will let us know when changes occurred
  4. whenever new services are added, add them to a list of new services
  5. whenever a service is resolved, go through the list of providers
  6. if any provider returns a value, return it
  7. if no new services registered return null
  8. create a new provider from all the services like this:
    • if it's a new registration, use it as is
    • if it's an open generic definition type:
      • if singleton, add first all the existing resolutions for types that are defined by it
      • use the original descriptor afterwards
    • use a registration that proxies to the advanced resolution mechanism we created
  9. when disposing, dispose all providers in the list

This implementation also has a flaw: if a dependency parameter with a generic type definition descriptor was resolved as a singleton by an additional service provider, then is requested directly and can be resolved by a previous provider, it will return a different instance. Here is the scenario:
  1. the initial provider knows to map I<> to M<>
  2. you add a new singleton mapping from X to Y and Y gets a constructor parameter of type I<Z>
  3. you request an instance of X
  4. the first provider cannot resolve it
  5. the second provider can resolve it, therefore it will also resolve a I<Z> as an M<Z> singleton instance
  6. you request an instance of I<Z>
  7. the first provider can resolve it, therefore it will return a NEW singleton instance of M<Z>

This is an edge case that I don't have the time to solve. So, with the caveat above, here is the final version.
Use it like this:
// IAdvancedServiceProvider either injected 
// or resolved via serviceProvider.GetService<IAdvancedServiceProvider>
// or even serviceProvider as IAdvancedServiceProvider
advancedServiceProvider.ServiceCollection.AddSingleton...

And this is the source code:
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public interface IAdvancedServiceProvider : IServiceProvider
{
/// <summary>
/// Add services to this collection
/// </summary>
IServiceCollection ServiceCollection { get; }
}
 
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public class AdvancedServiceProvider : IAdvancedServiceProvider, IDisposable
{
private readonly List<ServiceProvider> _serviceProviders;
private readonly NotifyChangedServiceCollection _services;
private readonly object _servicesLock = new object();
private List<ServiceDescriptor> _newDescriptors;
private Dictionary<Type, object> _resolvedObjects;
 
/// <summary>
/// Initializes a new instance of the <see cref="AdvancedServiceProvider"/> class.
/// </summary>
/// <param name="services">The services.</param>
public AdvancedServiceProvider(IServiceCollection services)
{
// registers itself in the list of services
services.AddSingleton<IAdvancedServiceProvider>(this);
 
_serviceProviders = new List<ServiceProvider>();
_newDescriptors = new List<ServiceDescriptor>();
_resolvedObjects = new Dictionary<Type, object>();
_services = new NotifyChangedServiceCollection(services);
_services.ServiceAdded += ServiceAdded;
_serviceProviders.Add(services.BuildServiceProvider(true));
}
 
private void ServiceAdded(object sender, ServiceDescriptor item)
{
lock (_servicesLock)
{
_newDescriptors.Add(item);
}
}
 
/// <summary>
/// Add services to this collection
/// </summary>
public IServiceCollection ServiceCollection { get => _services; }
 
/// <summary>
/// Gets the service object of the specified type.
/// </summary>
/// <param name="serviceType">An object that specifies the type of service object to get.</param>
/// <returns>A service object of type serviceType. -or- null if there is no service object of type serviceType.</returns>
public object GetService(Type serviceType)
{
lock (_servicesLock)
{
// go through the service provider chain and resolve the service
var service = GetServiceInternal(serviceType);
// if service was not found and we have new registrations
if (service == null && _newDescriptors.Count > 0)
{
// create a new service collection in order to build the next provider in the chain
var newCollection = new ServiceCollection();
foreach (var descriptor in _services)
{
foreach (var descriptorToAdd in GetDerivedServiceDescriptors(descriptor))
{
((IList<ServiceDescriptor>)newCollection).Add(descriptorToAdd);
}
}
var newServiceProvider = newCollection.BuildServiceProvider(true);
_serviceProviders.Add(newServiceProvider);
_newDescriptors = new List<ServiceDescriptor>();
service = newServiceProvider.GetService(serviceType);
}
if (service != null)
{
_resolvedObjects[serviceType] = service;
}
return service;
}
}
 
private IEnumerable<ServiceDescriptor> GetDerivedServiceDescriptors(ServiceDescriptor descriptor)
{
if (_newDescriptors.Contains(descriptor))
{
// if it's a new registration, just add it
yield return descriptor;
yield break;
}
 
if (!descriptor.ServiceType.IsGenericTypeDefinition)
{
// for a non open generic descriptor (any lifetime), register a factory that goes through the service provider chain
yield return ServiceDescriptor.Describe(
descriptor.ServiceType,
_ => GetServiceInternal(descriptor.ServiceType),
descriptor.Lifetime
);
yield break;
}
// if the registered service type for a singleton is an open generic type
// we register as factories all the already resolved specific types that fit this definition
if (descriptor.Lifetime == ServiceLifetime.Singleton)
{
foreach (var servType in _resolvedObjects.Keys.Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == descriptor.ServiceType))
{
 
yield return ServiceDescriptor.Describe(
servType,
_ => _resolvedObjects[servType],
ServiceLifetime.Singleton
);
}
}
// then we add the open type registration for any new types
yield return descriptor;
}
 
private object GetServiceInternal(Type serviceType)
{
foreach (var serviceProvider in _serviceProviders)
{
var service = serviceProvider.GetService(serviceType);
if (service != null)
{
return service;
}
}
return null;
}
 
/// <summary>
/// Dispose the provider and all resolved services
/// </summary>
public void Dispose()
{
lock (_servicesLock)
{
_services.ServiceAdded -= ServiceAdded;
foreach (var serviceProvider in _serviceProviders)
{
try
{
serviceProvider.Dispose();
}
catch
{
// singleton classes might be disposed twice and throw some exception
}
}
_newDescriptors.Clear();
_resolvedObjects.Clear();
_serviceProviders.Clear();
}
}
 
/// <summary>
/// An IServiceCollection implementation that exposes a ServiceAdded event for added service descriptors
/// The collection doesn't support removal or inserting of services
/// </summary>
private class NotifyChangedServiceCollection : IServiceCollection
{
private readonly IServiceCollection _services;
 
/// <summary>
/// Fired when a descriptor is added to the collection
/// </summary>
public event EventHandler<ServiceDescriptor> ServiceAdded;
 
/// <summary>
/// Initializes a new instance of the <see cref="NotifyChangedServiceCollection"/> class.
/// </summary>
/// <param name="services">The services.</param>
public NotifyChangedServiceCollection(IServiceCollection services)
{
_services = services;
}
 
/// <summary>
/// Get the value at index
/// Setting is not supported
/// </summary>
public ServiceDescriptor this[int index]
{
get => _services[index];
set => throw new NotSupportedException("Inserting services in collection is not supported");
}
 
/// <summary>
/// Count of services in the collection
/// </summary>
public int Count { get => _services.Count; }
 
/// <summary>
/// Obviously not
/// </summary>
public bool IsReadOnly { get => false; }
 
/// <summary>
/// Adding a service descriptor will fire the ServiceAdded event
/// </summary>
/// <param name="item"></param>
public void Add(ServiceDescriptor item)
{
_services.Add(item);
ServiceAdded?.Invoke(this, item);
}
 
/// <summary>
/// Clear the collection is not supported
/// </summary>
public void Clear() => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// True is the item exists in the collection
/// </summary>
public bool Contains(ServiceDescriptor item) => _services.Contains(item);
 
/// <summary>
/// Copy items to array of service descriptors
/// </summary>
public void CopyTo(ServiceDescriptor[] array, int arrayIndex) => _services.CopyTo(array, arrayIndex);
 
/// <summary>
/// Enumerator for service descriptors
/// </summary>
public IEnumerator<ServiceDescriptor> GetEnumerator() => _services.GetEnumerator();
 
/// <summary>
/// Index of item in the list
/// </summary>
public int IndexOf(ServiceDescriptor item) => _services.IndexOf(item);
 
/// <summary>
/// Inserting is not supported
/// </summary>
public void Insert(int index, ServiceDescriptor item) => throw new NotSupportedException("Inserting services in collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public bool Remove(ServiceDescriptor item) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public void RemoveAt(int index) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Enumerator for objects
/// </summary>
IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable)_services).GetEnumerator();
}
}

and has 2 comments
We already know how to load types in .NET Framework and we know what they say we should use in .NET Core. But what about Standard? Is that a trick question? Sort of. Right now we have two .NET Standard and three .NET Core versions, although .NET Core 3 is still in preview. The signature of AssemblyLoadContext and the way it is used have changed dramatically. Core 3 enables context unloading, but Standard 2 does not. So you are either forced to build your library as Core 3, or you have to avoid unloading contexts, or you have to use reflection, which is not robust and probably will not be needed with the possible arrival of Standard 3.

But there are subtler issues at work. One of them is that, at least with .NET Core 3 Preview6, when you reference System.Runtime.Loader in a Standard library, so you can access AssemblyLoadContext, you get conflicts between the System.Runtime you are using and the one referenced by System.Runtime.Loader. The only solution is to use the System.Runtime.Loader NuGet package, but that returns you to the Standard 2 version of AssemblyLoadContext, even if the library version is higher!

The setup is this: I have an ITestInterface interface which resides in TestInterfaceLibrary.dll. I also have a TestImplementation class that can be found in TestImplementationLibrary.dll and implements ITestInterface. My program either does not reference any of these libraries or it only references the interface one. The task is to load both these types and then simply convert one instance of TestImplementation to ITestInterface. Simple test would be loading the types and then expecting interfaceType.IsAssignableFrom(implementationType) to be true.

Core 3


Let's first try the Core 3 way:
var context = new AssemblyLoadContext("testContext", true);
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface");
Console.WriteLine(interfaceType?.ToString()??"interface type not loaded");
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
Console.WriteLine("implementation implements interface: "+interfaceType.IsAssignableFrom(implementationType));
 
context.Unload();
The output is:
TestInterfaceLibrary.ITestInterface
TestImplementationLibrary.TestImplementation
implementation implements interface: True

It works! But only because the interface assembly is loaded first. If you try to load just the implementation type first, it will come up as empty. There are no exceptions thrown unless you get all the assembly types or specify the throwOnError parameter in GetType. The exception is "System.IO.FileNotFoundException: 'Could not load file or assembly 'TestInterfaceLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'".

In order to solve this, we need to use the Resolve event of the AssemblyLoadContext class. Let's try this:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
 
private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}

And now it works again, by assuming the assembly name is the same as the assembly file name and that it is found in the same place.

But... if we try this in different contexts:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
context.Resolving -= Context_Resolving;
context.Unload();
context = new AssemblyLoadContext("testContext2", true);
context.Resolving += Context_Resolving;
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
the output will show
implementation implements interface: False

This means that if we want to encapsulate this in a TypeLoader class or something, we cannot use different contexts for dynamically loading types. Even if we had one context that we would unload in order to refresh all the types, it could still be different from the main context, in case the interface is loaded twice or referenced directly in the project.

For example, if you reference TestInterfaceLibrary directly and you load TestImplementation dynamically it will work as expected, because ITestInterface is resolved automatically from the main context. However, if you load ITestInterface dynamically, too, it will be a different type from the referenced ITestInterface, even if they apparently have the same name and full name and assembly qualified name! So it kind of makes sense to not load a type twice. Is this where the context unloading comes in? Not really. Let's define a method that counts the number of types with a certain name in the current domain as
private static int CountTypes(string typeName)
{
return AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
.Count();
}

And now let's run this code:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var referencedInterfaceType = typeof(ITestInterface);
Console.WriteLine(referencedInterfaceType?.ToString() ?? "interface type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine($"Types are the same: {interfaceType==referencedInterfaceType}");
 
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");
 
context.Resolving -= Context_Resolving;
context.Unload();
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");

There is the referenced type, then we load the type dynamically again, inside a new context. We count the types loaded in the current domain, we unload the context, we count the types again. The result is
TestInterfaceLibrary.ITestInterface
TestInterfaceLibrary.ITestInterface
Types are the same: False
Number of types with name TestInterfaceLibrary.ITestInterface: 2
Number of types with name TestInterfaceLibrary.ITestInterface: 2
The types are always 2!

Bottom line, even when unloading the AssemblyLoadContext, the types used are not unloaded and trying to find a type by name will result in duplicates.

OK, so let's just agree that types with the same name, once loaded, should remain there and no other type with the same name should be loaded. Let's try to incorporate this into a TypeLoader class:
public class TypeLoader : IDisposable
{
private readonly AssemblyLoadContext _context;
 
public TypeLoader()
{
_context = new AssemblyLoadContext(GetType().FullName, true);
_context.Resolving += Context_Resolving;
}
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = _context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
 
public void Dispose()
{
// _context is readonly and assigned in the constructor, so no null checks are needed here
_context.Resolving -= Context_Resolving;
_context.Unload();
}
}

The code in our test is now much clearer:
var interfaceAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestInterfaceLibrary.dll");
var implementationAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestImplementationLibrary.dll");
var interfaceTypeName = "TestInterfaceLibrary.ITestInterface";
var implementationTypeName = "TestImplementationLibrary.TestImplementation";
 
using (var loader = new TypeLoader())
{
Type referencedType = typeof(TestInterfaceLibrary.ITestInterface);
var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
Console.WriteLine($@"
referenced type: {referencedType}
interface type: {interfaceType}
implementation type: {implementationType}
referenced and loaded interfaces are the same: {referencedType == interfaceType}
interface implemented: {interfaceType.IsAssignableFrom(implementationType)}"
);
}
and the result is
referenced type: TestInterfaceLibrary.ITestInterface
interface type: TestInterfaceLibrary.ITestInterface
implementation type: TestImplementationLibrary.TestImplementation
referenced and loaded interfaces are the same: True
interface implemented: True

But we still use Unload. Maybe it will work some day as I want it to work, but until then, why not get rid of Unload and make TypeLoader a class in a Standard 2 library?

Standard 2


For this I will create a new Standard 2 library project and then reference it in our test Core 3 project. Then I will move the TypeLoader class in the library project.

The errors in the library project are related to not knowing what an AssemblyLoadContext is, therefore the first solution is to reference System.Runtime.Loader from the framework. I get the immediate error "Assembly 'System.Runtime.Loader' with identity 'System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' uses 'System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version than referenced assembly 'System.Runtime' with identity 'System.Runtime, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".

Solution 2: load the System.Runtime.Loader NuGet package, which at the time of writing this, is version 4.3.0. The error is now gone, but several things are immediately apparent:
  1. the Unload method doesn't exist anymore
  2. the constructor doesn't receive a name and a bool anymore
  3. AssemblyLoadContext is now abstract

In order to solve this I am creating a DynamicAssemblyLoadContext class that inherits from AssemblyLoadContext, just returns null from the Load method override, and gets an Unload method and a constructor with a string and a bool that don't do anything. And it works again. The updated TypeLoader class is now:
public class TypeLoader : IDisposable
{
private readonly DynamicAssemblyLoadContext _context;
 
public TypeLoader()
{
_context = new DynamicAssemblyLoadContext(GetType().FullName, true);
_context.Resolving += Context_Resolving;
}
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = _context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
 
public void Dispose()
{
// _context is readonly and assigned in the constructor, so no null checks are needed here
_context.Resolving -= Context_Resolving;
_context.Unload();
}
 
 
private class DynamicAssemblyLoadContext : AssemblyLoadContext
{
public DynamicAssemblyLoadContext(string name, bool isCollectible)
{
}
 
protected override Assembly Load(AssemblyName assemblyName)
{
return null;
}
 
public void Unload()
{
}
}
}

The safe way


The code above has an issue, though. If the interface type is dynamically loaded before its referenced type is used, this fails again. This is the case when you use dependency injection. You dynamically load the types in order to register the implementation relationship to the interface, but then, when you ask for a resolution for the interface type, now referenced by the main project, you get another type named just the same.

The safe way, considering that we don't really use Unload and we can't count on it ever working the way we want, is to use the default context, the one where everything loads, and be done with it. When you do that, the code becomes a little uglier, but it works in all situations.

Final version.
public class TypeLoader
{
    private readonly object _resolutionLock = new object();

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var context = AssemblyLoadContext.Default;
        lock (_resolutionLock)
        {
            context.Resolving += Context_Resolving;
            try
            {
                var type = AppDomain.CurrentDomain.GetAssemblies()
                    .SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
                    .FirstOrDefault();
                if (type != null)
                {
                    return type;
                }
                var assembly = context.LoadFromAssemblyPath(assemblyPath);
                return assembly.GetType(typeName, true);
            }
            finally
            {
                // make sure the handler is removed even when the type was already loaded or an exception is thrown
                context.Resolving -= Context_Resolving;
            }
        }
    }
}

You just gotta hate that adding and removing the event inside a lock, right? Well, if you find a better solution, let me know.

and has 0 comments
I wrote some unit tests today (to see if I still feel) and I found a pattern that I believe I will use in the future as well. The problem was that I was testing a class that had multiple services injected in the constructor. Imagine a BigService that needs to receive instances of SmallService1, SmallService2 and SmallService3 in the constructor. In reality there will be more, but I am trying to keep this short. While a dependency injection framework handles this for the code, for unit tests these dependencies must be mocked.
C# Example (using the Moq framework for .NET):
// setup mock for SmallService1 which implements the ISmallService1 interface
var smallService1Mock = new Mock<ISmallService1>();
smallService1Mock
    .Setup(srv => srv.SomeMethod())
    .Returns("some result");
var smallService1 = smallService1Mock.Object;
// repeat for 2 and 3
var bigService = new BigService(smallService1, smallService2, smallService3);

My desire was to keep tests separate. I didn't want to have methods that populated fields in the test class, so that running tests in parallel wouldn't be a problem. But still, I didn't want to copy/paste the same test a few times only to change some parameter or another. And the many local variables for mocks and objects really bothered me. Moreover, there were some operations that I really wanted in all my tests, like a mock of a service that executed an action given to it as a parameter, with some extra work before and after. I wanted that in all test cases the mock would just execute the action.

Therefore I got to writing something like this:
public class BigServiceTester {
    public Mock<ISmallService1> SmallService1Mock { get; private set; }
    public Mock<ISmallService2> SmallService2Mock { get; private set; }
    public Mock<ISmallService3> SmallService3Mock { get; private set; }

    public ISmallService1 SmallService1 => SmallService1Mock.Object;
    public ISmallService2 SmallService2 => SmallService2Mock.Object;
    public ISmallService3 SmallService3 => SmallService3Mock.Object;

    public BigServiceTester() {
        SmallService1Mock = new Mock<ISmallService1>();
        SmallService2Mock = new Mock<ISmallService2>();
        SmallService3Mock = new Mock<ISmallService3>();
    }

    public BigServiceTester SetupDefaultSmallService1() {
        SmallService1Mock
            .Setup(ss1 => ss1.Execute(It.IsAny<Action>()))
            .Callback<Action>(a => a());
        return this;
    }

    public IBigService GetService() =>
        new BigService(SmallService1, SmallService2, SmallService3);
}
 
// and here is how to use it in an xUnit test
[Fact]
public void BigServiceShouldDoStuff() {
    var pars = new BigServiceTester()
        .SetupDefaultSmallService1();

    pars.SmallService2Mock
        .Setup(ss2 => ss2.SomeMethod())
        .Returns("some value");

    var bigService = pars.GetService();
    var stuff = bigService.DoStuff();
    Assert.Equal("expected value", stuff);
}

The idea is that everything related to BigService is encapsulated in BigServiceTester. Tests stay small: you create a new tester instance, apply the setup specific to the test, let the tester instantiate the service, and then assert. Since only the setup and the asserts depend on the specific test, everything else is reusable, yet encapsulated. There is no shared setup code in the test class; instead, a fluent interface makes it easy to apply common configurations.

As a further step, tester classes can implement interfaces; for example, if some other class needs an instance of ISmallService1, all it has to do is implement INeedsSmallService1, and SetupDefaultSmallService1 can then be written as an extension method, as sketched below. I kind of like this way of doing things. I hope you do, too.
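Here is a minimal sketch of that idea, assuming INeedsSmallService1 simply exposes the mock; the interface and member names are my own illustration, not code from an actual project:

using System;
using Moq;

// a tester class advertises which mocks it carries
public interface INeedsSmallService1
{
    Mock<ISmallService1> SmallService1Mock { get; }
}

public static class TesterExtensions
{
    // works for any tester that implements INeedsSmallService1 and keeps the fluent style
    public static T SetupDefaultSmallService1<T>(this T tester) where T : INeedsSmallService1
    {
        tester.SmallService1Mock
            .Setup(ss1 => ss1.Execute(It.IsAny<Action>()))
            .Callback<Action>(a => a());
        return tester;
    }
}

BigServiceTester would then just keep the property it already has and get the default setup for free.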

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Previously on Learning ASP.Net MVC...


I started with the idea of a project that would use user-configurable queries to do Google searches, store the information in the results, and then perform various data analyses on them and display them based on what the user wants. However, I first implemented Google authentication and then went on to write some theoretical posts. Lastly, I upgraded the project from .NET Core 1.0 to version 1.1.

Well, it took me a while to get here because I was at a crossroads. I like the idea of dependency-injectable services to do data access. At the same time, there is the entire Entity Framework tutorial path that kind of wants EF strongly integrated with my projects. I mean, if I have a service that gives me the list of all items in the database and then I want only a few items, it would be bad design to filter the entire list. As such, I would have to write a different method that lets me get items based on some kind of filter. On the other hand, Entity Framework code looks just like that "give me all you have, filtered by this", which is then translated into an efficient query to the database. One possibility would be to have my service return IQueryable<T>, so I could also use the system to generate the database code on the fly.

The Design


I've decided on the service architecture, as opposed to an EF-style IQueryable approach, because I want to be able to replace that service with anything, including something that doesn't work with a database or something that doesn't know how to dynamically create queries. Also, the idea that the service methods will describe exactly what I want appeals to me more than avoiding a bit of duplicated code.
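To make the trade-off concrete, here is a hypothetical sketch of the two shapes the data service could take; only the second one reflects the decision above, and the exact signatures are just an illustration:

using System;
using System.Collections.Generic;
using System.Linq;

// EF-style: the service exposes a queryable and the caller composes the filter
public interface IQueryableQueryService
{
    IQueryable<Query> Queries { get; }
}

// explicit methods: each method states exactly what the caller wants
public interface IQueryDataService
{
    IEnumerable<Query> GetUnprocessed(DateTime now);
}

The first variant couples every consumer to something that can translate expressions; the second can be implemented by anything, even a service that has no idea what a database is.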

Another thing to define now is the method through which I will implement the dependency injection. Being the control freak that I am, I would normally go with installing my own library, something like SimpleInjector, configuring it myself and using it explicitly. However, ASP.Net Core has dependency injection included out of the box, so I will use that.

As defined, the project needs queries to pass on to Google and a storage service for the results. It needs data services to manage these entities, as well as a service to abstract Google itself. The data gathering operation itself cannot be a simple REST call, since it might take a while; it must be a background task, and so must the data analysis. So we need a sort of job manager.

As per a good structured design, the data objects will be stored in a separate project, as well as the interfaces for the services we will be using.

Some code, please!


Well, let's start from the code of the project so far, which is on GitHub, and get coding.

Before finding a solution to actually run the background code in the context of ASP.Net, let's write it inside a class. I am going to add a folder called Jobs and add a class in it called QueryProcessor with a method ProcessQueries. The code will be self explanatory, I hope.
public void ProcessQueries()
{
    var now = _timeService.Now;
    var queries = _queryDataService.GetUnprocessed(now);
    var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
        .SelectMany(q => _contentService.Query(q.Text));
    _contentDataService.Update(contentItems);
}

So we get the time - from a service, of course - and request the queries that are unprocessed at that time, then we extract the content items for each query and update them in the database. The idea here is that when a query is first defined, or when the configured interval since the last time it was processed has elapsed, the query will be sent to the content service, from which content items will be received. These items will be stored in the database.

Now, I've kept the code as concise as possible: there is no indication yet of any implementation detail and I've written as little code as I needed to express my intention. Yet what are all these services? What is a time service? What is a content service? Where are they defined? In order to enable dependency injection, we will populate all of these fields from the constructor of the query processor. Here is how the class looks in its entirety:
using ContentAggregator.Interfaces;
using System.Linq;

namespace ContentAggregator.Jobs
{
    public class QueryProcessor
    {
        private readonly IContentDataService _contentDataService;
        private readonly IContentService _contentService;
        private readonly IQueryDataService _queryDataService;
        private readonly ITimeService _timeService;

        public QueryProcessor(ITimeService timeService, IQueryDataService queryDataService, IContentDataService contentDataService, IContentService contentService)
        {
            _timeService = timeService;
            _queryDataService = queryDataService;
            _contentDataService = contentDataService;
            _contentService = contentService;
        }

        public void ProcessQueries()
        {
            var now = _timeService.Now;
            var queries = _queryDataService.GetUnprocessed(now);
            var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
                .SelectMany(q => _contentService.Query(q.Text));
            _contentDataService.Update(contentItems);
        }
    }
}

Note that the services are only defined as interfaces, which we declare in a separate project called ContentAggregator.Interfaces, referenced above in the usings block.
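For context, here is how those interfaces might look, inferred from the way they are used above; the actual project may declare them slightly differently:

using System;
using System.Collections.Generic;

// Query and ContentItem are the entity classes kept in the separate data project
public interface ITimeService
{
    DateTime Now { get; }
}

public interface IQueryDataService
{
    IEnumerable<Query> GetUnprocessed(DateTime now);
}

public interface IContentService
{
    IEnumerable<ContentItem> Query(string text);
}

public interface IContentDataService
{
    void Update(IEnumerable<ContentItem> contentItems);
}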

Let's ignore the job processor mechanism for a moment and just run ProcessQueries in a test method in the main controller. For this I will have to make dependency injection work and implement the interfaces. For brevity I will do so in the main project, although it would probably be a good idea to do it in a separate ContentAggregator.Implementations project. But let's not get ahead of ourselves. First make the code work, then arrange it all nice, in the refactoring phase.

Implementing the services


I will create mock services first, in order to test the code as it is, so the following implementations just do as little as possible while still following the interface signature.
public class ContentDataService : IContentDataService
{
    private static readonly StringBuilder _sb;

    static ContentDataService()
    {
        _sb = new StringBuilder();
    }

    public void Update(IEnumerable<ContentItem> contentItems)
    {
        foreach (var contentItem in contentItems)
        {
            _sb.AppendLine($"{contentItem.FinalUrl}:{contentItem.Title}");
        }
    }

    public static string Output
    {
        get { return _sb.ToString(); }
    }
}

public class ContentService : IContentService
{
    private readonly ITimeService _timeService;

    public ContentService(ITimeService timeService)
    {
        _timeService = timeService;
    }

    public IEnumerable<ContentItem> Query(string text)
    {
        yield return
            new ContentItem
            {
                OriginalUrl = "http://original.url",
                FinalUrl = "https://final.url",
                Title = "Mock Title",
                Description = "Mock Description",
                CreationTime = _timeService.Now,
                Time = new DateTime(2017, 03, 26),
                ContentType = "text/html",
                Error = null,
                Content = "Mock Content"
            };
    }
}

public class QueryDataService : IQueryDataService
{
    public IEnumerable<Query> GetUnprocessed(DateTime now)
    {
        yield return new Query
        {
            Text = "Some query"
        };
    }
}

public class TimeService : ITimeService
{
    public DateTime Now
    {
        get
        {
            return DateTime.UtcNow;
        }
    }
}

Now all I have to do is declare the binding between interface and implementation. The magic happens in ConfigureServices, in Startup.cs:
services.AddTransient<ITimeService, TimeService>();
services.AddTransient<IContentDataService, ContentDataService>();
services.AddTransient<IContentService, ContentService>();
services.AddTransient<IQueryDataService, QueryDataService>();

They are all transient, meaning that each time an implementation is requested, the system will create a new instance. Another popular method is AddSingleton.
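For comparison, these are the lifetimes the built-in container supports; only transient and singleton are mentioned above, and scoped is the third option (this is just an illustration, not code from the project):

// pick one lifetime per registration:
services.AddTransient<IContentService, ContentService>();   // a new instance every time it is requested
//services.AddScoped<IContentService, ContentService>();    // one instance per scope (per HTTP request in ASP.Net Core)
//services.AddSingleton<IContentService, ContentService>(); // a single instance for the whole application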

Using dependency injection


So, now I have to instantiate my query processor and run ProcessQueries.

One way is to set QueryProcessor as a service. I extract an interface, I add a new binding and then I give an interface as a parameter of my controller constructor:
[Authorize]
public class HomeController : Controller
{
    private readonly IQueryProcessor _queryProcessor;

    public HomeController(IQueryProcessor queryProcessor)
    {
        _queryProcessor = queryProcessor;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        _queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
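The extracted interface and the extra binding mentioned above might look something like this (a sketch; the interface name is my own choice):

public interface IQueryProcessor
{
    void ProcessQueries();
}

// in ConfigureServices, next to the other registrations
services.AddTransient<IQueryProcessor, QueryProcessor>();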
In fact, I don't even have to declare an interface. I can just use services.AddTransient<QueryProcessor>(); in ConfigureServices and it works as a parameter to the controller.

But what if I want to use it directly, resolving it manually without injecting it in the controller? One can inject an IServiceProvider instead. Here is an example:
[Authorize]
public class HomeController : Controller
{
    private readonly IServiceProvider _serviceProvider;

    public HomeController(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        var queryProcessor = _serviceProvider.GetService<QueryProcessor>();
        queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
Yet you still need to use services.Add... in ConfigureServices and inject the service provider in the constructor of the controller.

There is a way of doing it completely separately like this:
var serviceProvider = new ServiceCollection()
    .AddTransient<ITimeService, TimeService>()
    .AddTransient<IContentDataService, ContentDataService>()
    .AddTransient<IContentService, ContentService>()
    .AddTransient<IQueryDataService, QueryDataService>()
    .AddTransient<QueryProcessor>()
    .BuildServiceProvider();
var queryProcessor = serviceProvider.GetService<QueryProcessor>();

This would be the way to encapsulate the ASP.Net Dependency Injection in another object, maybe in a console application, but clearly it would be pointless in our application.

The complete source code after these modifications can be found here. Test the functionality by going to /test on your local server after you start the app.

It is about time to revisit my series on ASP.Net MVC Core. Since my last blog post the .Net Core version has changed to 1.1, so just installing the SDK and running the project was not going to work. This post explains how to upgrade a .Net Core project to the latest version.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Short version


Pressing the batch Update button for NuGet packages corrupted project.json. Here are the steps to successfully migrate a .Net Core project to a higher version.

  1. Download and install the .NET Core 1.1 SDK
  2. Change the version of the SDK in global.json - you can find out the SDK version by creating a new .Net Core project and checking what it uses
  3. Change "netcoreapp1.0" to "netcoreapp1.1" in project.json
  4. Change Microsoft.NETCore.App version from "1.0.0" to "1.1.0" in project.json
  5. Add
    "runtimes": {
    "win10-x64": { }
    },
    to project.json
  6. Go to "Manage NuGet packages for the solution", to the Update tab, and update projects one by one. Do not press the batch Update button for selected packages
  7. Some packages will restore, but remain in the list. Skip them for now
  8. Whenever you see a "downgrade" warning when restoring, go to those packages and restore them next
  9. For packages that tell you to upgrade NuGet, ignore the message; it's an error that probably appears because you restore a package while the previous package restore has not yet completed
  10. For the remaining packages that just won't update, write down their names, uninstall them and reinstall them

Code after changes can be found on GitHub

That should do it. For detailed steps of what I actually did to get to this concise list, read on.

Long version


Step 0 - I don't care, just load the damn project!


Downloaded the source code from GitHub, loaded the .sln with Visual Studio 2015. Got a nice blocking alert, because this was a .NET Core virgin computer:
Of course, I could have tried to install that version, but I wanted to upgrade to the latest Core.

Step 1 - read the Microsoft documentation


And here I went to Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM. I followed the instructions there, made Visual Studio 2015 load my project and automatically restore packages:
  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file for netcoreapp1.0 or Microsoft.NetCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. to be continued...

I got to the second step, but still got the alert...

Step 2 - fumble around


... so I commented out the sdk property in global.json. I got another alert:


This answer recommended uninstalling old versions of SDKs, in my case "Microsoft .NET Core 1.0.1 - SDK 1.0.0 Preview 2-003131 (x64)". Don't worry, it didn't work. More below:

TL;DR; version: do not uninstall the Visual Studio .NET Core Tooling


And then... got the same No executable found matching command "dotnet=projectmodel-server" error again.

I created a new .NET Core project, just to see the version of the SDK it uses: 1.0.0-preview2-003131. I added it to global.json and reopened the project. It restored packages and didn't throw any errors! Dude, it even compiled and ran! But now I got a System.ArgumentException: The 'ClientId' option must be provided. It probably had something to do with the Secret Manager. Follow the steps in the link to store your secrets in the app. It then worked.

Step 1.1 (see what I did there?) - continue to read the Microsoft documentation


I removed the third step of the Microsoft instructions from the list above because it caused me some problems, so don't do it yet. It was:
  1. Update your ASP.NET Core packages dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1


Since I had not upgraded the packages, as in Microsoft's third step, I decided to do it. 26 updates waited for me, so I optimistically selected them all and clicked Update. Of course, errors! One popped up as more interesting: Package 'Microsoft.Extensions.SecretManager.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Another was even more worrisome: Unexpected end of content while loading JObject. Path 'dependencies', line 68, position 0 in project.json. Somehow the update operation for the packages had corrupted project.json! From a 3050 byte file, it was now 1617.

Step 3 - repair what the Microsoft instructions broke


Suspecting it was a problem with the NuGet package manager, I went to the link in the first error. But in Visual Studio 2015 NuGet is included, and it was clearly the latest version. So the only solution was to go through each package and see which one causes the problem. I went through all 26 packages and pressed Install on each, and it worked. Apparently, the batch Update button is what causes the issue. Weirdly enough, there are two packages that were installed but remained in the Update tab and also appeared in the Consolidate tab: BundleMinifier.Core and Microsoft.EntityFrameworkCore.Tools, although I can't do anything with them there.

Another package (Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0) caused another confusing error: Package 'Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Yet restarting Visual Studio led to the disappearance of the CodeGeneration.Tools error.

So I tried to build the project only to be met with yet another project.json corruption error: Can not find runtime target for framework '.NETCoreAPP, Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'. Possible causes: [blah blah] The project does not list one of 'win10-x64, win81-x64, win7-x64' in the 'runtimes' [blah blah]. I found the fix here, which was to add
"runtimes": {
"win10-x64": { }
},
to project.json.

It compiled. It worked.


Intro


I want to talk today about principles of software engineering. Just like design patterns, they range from useful to YAA (Yet Another Acronym). Usually, there is some guy or group of people who decide that a set of simple ideas might help software developers write better code. This is great! Unfortunately, they immediately feel the need to assign them to mnemonic acronyms that make you wonder if they didn't miss some principles from their sets because they were bad at anagrams.

Some are very simple and not worth exploring too much. DRY comes from Don't Repeat Yourself, which basically means don't write the same stuff in multiple places, or you will have to keep them synchronized at every change. Simply don't repeat yourself. Don't repeat yourself. See, it's at least annoying. KISS comes from Keep It Simple, Silly - yeah, let's be civil about it - anyway, the last letter is there just so that the acronym is actually a word. The principle states that avoiding unnecessary complexity will make your system more robust. A similar principle is YAGNI (You Aren't Gonna Need It - very New Yorkish sounding), which also frowns upon complexity, in particular the kind you introduce in order to solve a possible future problem that you don't have.

If you really want to fill your head with principles for software engineering, take a look at this huge list: List of software development philosophies.

But what I wanted to talk about was SOLID, which is so cool that not only does it sound like something you might want your software project to be, but it's a meta acronym, each letter coming from another acronym:
  • S - SRP
  • O - OCP
  • L - LSP
  • I - ISP
  • D - DIP

OK, I was just making it look harder than it actually is. Each of the (sub)acronyms stands for a principle (hence the last P in each) and even if they have suspicious-sounding names that hint at how much someone wanted to call their principles SOLID, they are really... err... solid. They refer specifically to object oriented programming, but I am sure they apply to other kinds as well. Let's take a look:

Single Responsibility


The idea is that each of your classes (or modules, units of code, functions, etc.) should strive towards only one functionality. Why? Because if you want to change your code you should first have a good reason, and then you should know where that single (DRY) point is where that reason applies. One responsibility equals one and only one reason to change.

Short example:
function getTeam() {
    var result=[];
    var strongest=null;
    this.fighters.forEach(function(fighter) {
        result.push(fighter.name);
        if (!strongest||strongest.power<fighter.power) {
            strongest=fighter;
        }
    });
    return {
        team:result,
        strongest:strongest
    };
}

This code iterates through the list of fighters and returns a list of their names. It also finds the strongest fighter and returns that as well. Obviously, it does two different things and you might want to change the code to have two functions that each do only one. But, you will say, this is more efficient! You iterate once and you get two things for the price of one! Fair enough, but let's see what the disadvantages are:
  • You need to know the exact format of the return object - that's not a big deal, but wouldn't you expect to have a team object returned by a getTeam function?
  • Sometimes you might want just the list of fighters, so computing the strongest is superfluous. Similarly, you might only want the strongest fighter.
  • In order to add stuff in the iteration loop, the code has become more complex - at least when reading it - than it has to be.

Here is how the code could - and should - have looked. First we split it into two functions:
function getTeam() {
    var result=[];
    this.fighters.forEach(function(fighter) {
        result.push(fighter.name);
    });
    return result;
}

function getStrongestFighter() {
    var strongest=null;
    this.fighters.forEach(function(fighter) {
        if (!strongest||strongest.power<fighter.power) {
            strongest=fighter;
        }
    });
    return strongest;
}
Then we refactor it to something simple and readable:
function getTeam() {
    return this.fighters
        .map(function(fighter) { return fighter.name; });
}

function getStrongestFighter() {
    return this.fighters
        .reduce(function(strongest,val) {
            return !strongest||strongest.power<val.power?val:strongest;
        });
}

Open/Closed


When you write your code the last thing you want to do is go back to it and change it again and again whenever you implement a new functionality. You want the old code to work, be tested to work, and allow new functionality to be built on top of it. In object oriented programming that simply means you can extend classes, but practically it also means the entire plumbing necessary to make this work seamlessly. Simple example:
function Fighter() {
    this.fight();
}
Fighter.prototype={
    fight : function() {
        console.log('slap!');
    }
};

function ProfessionalFighter() {
    Fighter.apply(this);
}
ProfessionalFighter.prototype={
    fight : function() {
        console.log('punch! kick!');
    }
};

function factory(type) {
    switch(type) {
        case 'f': return new Fighter();
        case 'pf': return new ProfessionalFighter();
    }
}

OK, this is a silly example, since I am lazily emulating inheritance in a prototype based language such as Javascript. But the result is the same: both constructors will .fight by default, but each of them will have different implementations. Note that while I cannibalized bits of Fighter to build ProfessionalFighter, I didn't have to change it at all. I also built a factory method that returns different objects based on the input. Both possible returns will have a .fight method.

The open/closed principle also applies to non inheritance based languages and even in classic OOP languages. I believe it leads to a natural outcome: preferring composition over inheritance. Write your code so that the various modules just seamlessly connect to each other, like Lego blocks, to build your end product.
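To illustrate the composition idea with a small sketch in C# (hypothetical types, in the same spirit as the fighter examples above, not taken from any real project):

using System;

public interface IFightingStyle
{
    void Fight();
}

public class Slapping : IFightingStyle
{
    public void Fight() => Console.WriteLine("slap!");
}

public class Boxing : IFightingStyle
{
    public void Fight() => Console.WriteLine("punch! kick!");
}

public class Fighter
{
    private readonly IFightingStyle _style;

    // the behavior is plugged in, not inherited
    public Fighter(IFightingStyle style)
    {
        _style = style;
    }

    public void Fight() => _style.Fight();
}

Adding a new style means writing a new block and snapping it in; the Fighter class itself stays closed for modification.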

Liskov substitution principle


No, L is not coming from any word that they could think of, so they dragged poor Liskov into it. Forget the definition, it's really stupid and complicated. Not KISSy at all! Assume you have the base class Fighter and the more specialized subclass ProfessionalFighter. If you had a piece of code that uses a Fighter object, then this principle says you should be able to replace it with a ProfessionalFighter and the piece of code would still be correct. It may not be what you want, but it would work.

But when does a subclass break this principle? One very ugly antipattern that I have seen is when general types know about their subtypes. Code like "if I am a professional fighter, then I do this" breaks almost all SOLID principles and it is stupid. Another case is when the general class declares everything anyone can think of, either abstract or implemented as empty or throwing an error, and then each subclass implements only one or another of the methods. Dude! If your fighter doesn't implement .fight, then it is not a fighter!

I don't want to add code to this, since the principle is simple enough. However, even if it is an idea that primarily applies to OOP it doesn't mean it doesn't have its uses in other types of programming languages. One might say it is not even related to programming languages, but to concepts. It basically says that an orange should behave like an orange if it's blue, otherwise don't call it a blue orange!

Interface segregation principle


This one is simple. It basically is the reverse of the Liskov: split your interfaces - meaning the desired functionality of your code - into pieces that are fully used. If you have a piece of code that uses a ProfessionalFighter, but all it does is use the .fight method, then use a Fighter, instead. Silly example:
public class Fighter {
    public virtual void Fight() {
        Console.WriteLine("slap!");
    }
}

public class EnglishFighter : Fighter {
    public override void Fight() {
        Console.WriteLine("box!");
    }
    public void Talk() {
        Console.WriteLine("Oy!");
    }
}

class Program {
    public static void Main() {
        // getMeAFighter() stands in for some method that returns an EnglishFighter
        EnglishFighter f = getMeAFighter();
        f.Fight();
    }
}

Don't mind the details of the code; the idea here is that there is no reason for me to declare and use the variable f as an EnglishFighter if all it does is fight. Use a Fighter type. And a YAGNI on you if you thought "wait, but what if I want him to talk later on?".

Dependency inversion principle


Oh, this is a nice one! But it doesn't live in a vacuum. It is related to both SRP and OCP as it states that high level modules should not depend on low level modules, only on their abstractions. In other words, use interfaces instead of implementations wherever possible.

I wrote an entire other post about Inversion of Control, the technique that allows you to properly use and enjoy interfaces, while keeping your modules independent, so I am not going to repeat things here. As a hint, simply replacing Fighter f=new Fighter() with IFighter f=new Fighter() is not enough. You must use a piece of code that decides for you what implementation of an interface will be used, something like IFighter f=GetMeA(typeof(IFighter)).
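As a small sketch of what depending on the abstraction looks like in practice (a hypothetical Tournament class, in the spirit of the examples above):

public class Tournament
{
    private readonly IFighter _fighter;

    // the high-level module only knows about IFighter;
    // something else (a container, a factory) decides which implementation is used
    public Tournament(IFighter fighter)
    {
        _fighter = fighter;
    }

    public void Run() => _fighter.Fight();
}

Whether the fighter slaps or boxes is no longer Tournament's business, which is exactly the point.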

The principle is related to SRP because it says a boss should not be controlling or depending on what particular things the employees will do. Instead, he should just hire some trustworthy people and let them do their job. If a task depends so much on John that you can't ever fire him, you're in trouble. It is also related to OCP because the boss will behave like a boss no matter what employee changes occur. He may hire more people, replace some, fire some others, the boss does not change. Nor does he accept any responsibility, but that's a whole other principle ;)

Conclusions


I've explored some of the more common acronyms from hell related to software development and stuck mostly to SOLID, because... well, you have to admit, calling it SOLID works! Seriously now, following principles such as these (choose your own, based on your own expertise and coding style) will help you a lot later on, when the complexity of your code explodes. In a company there is always that one poor guy who knows everything and can't work well because everybody else keeps asking him why and how something is the way it is. If the code is properly separated along solid principles, you just need to look at it to understand what it does, and the efficiency of your changes stays high. Let that unsung hero write their code and reach software Valhalla.

SRP keeps modules separated, OCP keeps code free of the need for modifications, LSP and ISP make you use the most basic class or interface possible, reinforcing the separation of modules, while DIP helps you sharpen the boundaries between modules. Taken together, the SOLID principles are obviously about focus, allowing you to work on small, manageable parts of the code without needing knowledge of other parts or having to change them.

Keep coding!