If you are like me, you want to first establish a nice skeleton app that has everything just right before you start writing your actual code. However, as weird as it may sound, I couldn't find a way to use command line parameters with dependency injection, in the same simple way one would use a configuration file with IOptions<T>, for example. This post shows you how to use CommandLineParser, a nice library that handles everything regarding command line parsing, in a dependency injection friendly way.

  In order to use command line arguments, we need to obtain them. For any .NET Core application or .NET Framework console application you get them from the parameters of the static Main method in Program. Alternatively, you can use Environment.CommandLine, which is actually a string, not an array of strings. But all of these nudge you towards some ugly code that either depends on the static Environment, handles command line arguments in code early in the application, or stores the arguments somehow. What we want is complete separation of modules in our application.

  How can we get the arguments by injection? By creating a new type that encapsulates the simple string array.

// encapsulates the arguments
public class CommandLineArguments
{
    public CommandLineArguments(string[] args)
    {
        this.Args = args;
    }

    public string[] Args { get; }
}

// adds the type to dependency injection
services.AddSingleton<CommandLineArguments>(new CommandLineArguments(args));
// the generic type declaration is superfluous, but the code is easy to read

  With this, we can access the command line arguments anywhere by injecting a CommandLineArguments object and accessing the Args property. But this still implies writing command line parsing code wherever we need that data. We could add some parsing logic in the CommandLineArguments class so that instead of the command line arguments array it would provide us with a strongly typed value of the type we want. But then we would be putting business logic in a command line encapsulation class. Why would it know what type of options we need, and why would we need only one type of options?

  What we would like is something like

public SomeClass(IOptions<MyCommandLineOptions> clOptions) {...}

  Now, we could use this system by writing more complicated code that adds a ConfigurationSource and then declares that certain types are command line options (a sketch of what that would look like follows the list). But I don't want that either, for several reasons:

  • writing configuration providers is complex code, and at some point one has to ask how much code they are willing to write in order to get some damn arguments from the command line
  • declaring the types at the beginning does provide some measure of centralized validation, but on the other hand it's declaring types that we need in business logic somewhere in service configuration, which personally I do not like
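
  To be clear, the rejected approach would look something like the sketch below, using the standard Microsoft.Extensions.Configuration packages (MyCommandLineOptions being the options type we want):

// in the service configuration code
var configuration = new ConfigurationBuilder()
    .AddCommandLine(args) // requires Microsoft.Extensions.Configuration.CommandLine
    .Build();
// the options type has to be declared here, at configuration time
services.Configure<MyCommandLineOptions>(configuration);
// consumers then inject IOptions<MyCommandLineOptions>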

  What I propose is adding a new type of IOptions, one that is specific to command line arguments:

// declare the interface for generic command line options
public interface ICommandLineOptions<T> : IOptions<T>
    where T : class, new() { }

// add it to service configuration
services.AddSingleton(typeof(ICommandLineOptions<>), typeof(CommandLineOptions<>));

// put the parsing logic inside the implementation of the interface
public class CommandLineOptions<T> : ICommandLineOptions<T>
    where T : class, new()
{
    private T _value;
    private string[] _args;

    // get the arguments via injection
    public CommandLineOptions(CommandLineArguments arguments)
    {
        _args = arguments.Args;
    }

    public T Value
    {
        get
        {
            if (_value==null)
            {
                // set the value by parsing command line arguments
            }
            return _value;
        }
    }

}

  Now, in order to make it work, we will use CommandLineParser, which functions in a very simple way (a minimal sketch follows the list):

  • declare a Parser
  • create a POCO class that has properties decorated with attributes that define what kind of command line parameter they are
  • parse the command line arguments string array into the type of class declared above
  • get the value or handle errors
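
  In isolation, that flow looks something like this minimal sketch (FileUtilOptions is the options class defined at the end of this post):

// using CommandLine; using System; using System.Linq;
Parser.Default.ParseArguments<FileUtilOptions>(args)
    .WithParsed(options => Console.WriteLine($"Output file: {options.OutputFile}"))
    .WithNotParsed(errors => Console.WriteLine($"Parsing failed with {errors.Count()} error(s)"));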

  Also, to follow the now familiar Microsoft pattern, we will write an extension method to register both arguments and the mechanism for ICommandLineOptions. The end result is:

// extension class to add the system to services
public static class CommandLineExtensions
{
    public static IServiceCollection AddCommandLineOptions(this IServiceCollection services, string[] args)
    {
        return services
            .AddSingleton<CommandLineArguments>(new CommandLineArguments(args))
            .AddSingleton(typeof(ICommandLineOptions<>), typeof(CommandLineOptions<>));
    }
}

public class CommandLineArguments // defined above

public interface ICommandLineOptions<T> // defined above

// full class implementation for command line options
public class CommandLineOptions<T> : ICommandLineOptions<T>
    where T : class, new()
{
    private T _value;
    private string[] _args;

    public CommandLineOptions(CommandLineArguments arguments)
    {
        _args = arguments.Args;
    }

    public T Value
    {
        get
        {
            if (_value==null)
            {
                using (var writer = new StringWriter())
                {
                    var parser = new Parser(configuration =>
                    {
                        configuration.AutoHelp = true;
                        configuration.AutoVersion = false;
                        configuration.CaseSensitive = false;
                        configuration.IgnoreUnknownArguments = true;
                        configuration.HelpWriter = writer;
                    });
                    var result = parser.ParseArguments<T>(_args);
                    result.WithNotParsed(errors => HandleErrors(errors, writer));
                    result.WithParsed(value => _value = value);
                }
            }
            return _value;
        }
    }

    private static void HandleErrors(IEnumerable<Error> errors, TextWriter writer)
    {
        if (errors.Any(e => e.Tag != ErrorType.HelpRequestedError && e.Tag != ErrorType.VersionRequestedError))
        {
            string message = writer.ToString();
            throw new CommandLineParseException(message, errors, typeof(T));
        }
    }
}

// usage when configuring dependency injection
services.AddCommandLineOptions(args);

Enjoy!

Now there are some quirks in the implementation above. One of them is that the parser generates the usage help by writing it to a TextWriter (the default being Console.Error), but since we want this to be encapsulated, we declare our own StringWriter and then store the generated help text if there are any errors. In the case above, I am storing the help text as the exception message, but it's the principle that matters.

Also, with this system one can ask for multiple types of command line options classes, depending on the module, without having to declare those types when configuring dependency injection. The downside is that if you want validation of the command line options at the very beginning, you have to write extra code. As implemented above, the application will only fail when first asking for a command line option that cannot be mapped to the command line arguments.
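
If you do want that early validation, one option is to simply force a parse right after building the service provider, something like this sketch (GetRequiredService comes from Microsoft.Extensions.DependencyInjection):

var provider = services.BuildServiceProvider();
// throws a CommandLineParseException here, at startup, if the arguments don't map
_ = provider.GetRequiredService<ICommandLineOptions<FileUtilOptions>>().Value;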

As a bonus, here is the way to define an option class that CommandLineParser will parse from the arguments:

// the way we want to use the app is
// FileUtil <command> [-loglevel loglevel] [-quiet] -output <outputFile> file1 file2 .. file10
public class FileUtilOptions
{
    // use Value for parameters with no name
    [Value(0, Required = true, HelpText = "You have to enter a command")]
    public string Command { get; set; }

    // use Option for named parameters
    [Option('l',"loglevel",Required = false, HelpText ="Log level can be None, Normal, Verbose")]
    public string LogLevel { get; set; }

    // use bool for named parameters with no value
    [Option('q', "quiet", Default = false, Required = false, HelpText = "Quiet mode produces no console output")]
    public bool Quiet { get; set; }

    // Required for required values
    [Option('o', "output", Required = true, HelpText = "Output file is required")]
    public string OutputFile { get; set; }

    // use Min/Max for enumerables
    [Value(1, Min = 1, Max = 10, HelpText = "At least one file name and at most 10")]
    public IEnumerable<string> Files { get; set; }
}

Note that the short style of a parameter needs to be used with a dash, the long one with two dashes:

  • -o outputFile.txt - correct (value outputFile.txt)
  • --output outputFile.txt - correct (value outputFile.txt)
  • -output outputFile.txt - incorrect (value utput and outputFile.txt is considered an unnamed argument)

Intro

  As I was working on LInQer I was hooked on the multiple optimizations that I kept finding. Do you want to compute the average of an iterable? You would need the total count and the sum of the items, which you can get in a single function that you can reuse to get the sum or the count. But what if the iterable is an integer range between 1 and 10? Then you can compute the sum and you already know the count. Inspired by that work and by other concepts like interval types or Maybe/Nullable types, I've decided to write this post, though I don't know yet if it will lead to any usable code.

What is an iterable/enumerable?

  In Javascript they call it an Iterable, in .NET you have IEnumerable. They mean the same thing: sources of values. With new concepts like async/await you can use Observables as Enumerables as well, although they are theoretically diametrically opposed patterns. In both languages they are implemented as having a method that returns an iterator/enumerator, an object that can move to the next value, give you the current value and maybe reset itself. You can define any stream of values like that, having one or more values or, indeed, none. From now on I will discuss in terms of .NET nomenclature, but I see no reason why it wouldn't apply to any other language that implements this feature.

  For example, an array is an IEnumerable<T> in .NET. Its enumerator will return false when trying to move to the next value if the array is empty, or true if there is at least one value, in which case the current value will be the first value in the array. Move next again and it will return true or false depending on whether there is a next value. But there is no need for the values to already exist in order to have an Enumerable. One could write an object that returns all the positive integer numbers. Its length would be infinite and the values would only be generated when requested. There is no differentiation between an Enumerable and a generator in .NET, but there is in Javascript. For this reason, whenever I use the term generator, I mean an object that generates values rather than producing them from a source of existing ones.
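
  A minimal C# sketch of such a generator, producing values only when the enumerator is asked for them:

using System.Collections.Generic;

public static class NumberGenerators
{
    // "all" the positive integers: infinite length, nothing stored
    public static IEnumerable<int> PositiveIntegers()
    {
        var i = 1;
        while (true)
        {
            yield return i++;
        }
    }
}

// usage: take the first ten values without ever materializing the whole sequence
// var firstTen = NumberGenerators.PositiveIntegers().Take(10).ToList();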

The NULL controversy

  A very popular InfoQ post describes the introduction of the NULL concept in programming languages as the billion dollar mistake. I am not so sure about that, but I can concede they make good points. The alternative to using a special value to describe the absence of a value is to use an "option" object that either has Some value or has None. You would check the existence of a value by calling a method to tell you if it has a value, and you would get the value from the current value property. Doesn't it sound familiar? It's a more specific case of an Enumerator! Another popular solution to remove NULLs from code is to never return values from your methods, but arrays. An empty array would represent no value. An array is an Enumerable!
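
  As a quick sketch of that last idea (TryParsePositive is a made up example), "no value" simply becomes an empty sequence:

using System;
using System.Collections.Generic;
using System.Linq;

public static class OptionDemo
{
    // Some: a sequence with exactly one value; None: an empty sequence
    public static IEnumerable<int> TryParsePositive(string text)
    {
        if (int.TryParse(text, out var value) && value > 0)
        {
            yield return value;
        }
    }

    public static void Main()
    {
        Console.WriteLine(TryParsePositive("42").Any());   // True
        Console.WriteLine(TryParsePositive("oops").Any()); // False
        Console.WriteLine(TryParsePositive("42").First()); // 42
    }
}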

And that last idea opens up an interesting possibility: instead of one or none, you can have multiple values. What then? What would a multiplication mean? What about a decision block?

The LInQer experience

  If you know me, you are probably fed up with me plugging LInQer as the greatest thing since fire was invented. But that's because it is! And while implementing .NET LInQ as a Javascript library I've played with some very interesting concepts.

  For example, while implementing the Last operator on enumerables, I had two different implementations, depending on whether one could know the length in advance and use indexed access to the values. An array of one billion values has no problem giving you the last item in it because of two things: you know where the array ends and you can access any item at any position without having to go through other values. You just take the value at index one billion minus one. If you have a generator, though, the only way to get the last value is to move to the next value again and again, and only when that fails do you know that the previous value was the last one. And of course, this would be bad if there are no bounds to the generator.
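
  As a simplified C# sketch of those two strategies (not the actual LInQer code, which is Javascript):

using System;
using System.Collections.Generic;

public static class LastDemo
{
    public static T Last<T>(IEnumerable<T> source)
    {
        // known length plus indexed access: jump straight to the end
        if (source is IReadOnlyList<T> list)
        {
            if (list.Count == 0) throw new InvalidOperationException("Sequence contains no elements");
            return list[list.Count - 1];
        }
        // a generator: the only option is to enumerate until MoveNext fails
        using var enumerator = source.GetEnumerator();
        if (!enumerator.MoveNext()) throw new InvalidOperationException("Sequence contains no elements");
        var last = enumerator.Current;
        while (enumerator.MoveNext())
        {
            last = enumerator.Current;
        }
        return last;
    }
}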

  But there is more. What about very common statistical values like the sum? This, of course, applies to numbers. The Enumerable need not produce numbers, so in other contexts it means nothing. Then there are concepts like statistical distribution. One can make some assumptions if they know the distribution of values. A constant yet infinite generator of numbers will always have the same average value. It would return the same value, regardless of index.

  I spent a lot of time on sorting that only needs a part of the enumerable, or partial sorting. I've implemented a Quicksort algorithm that works faster than the default sort when there are enough values and that can ignore the parts of the array that I don't need. Also, there are specific algorithms to return the first or last N items. All of this depends on functions that determine the order of items. Randomness is also interesting, as it needs to take into consideration the change of probabilities as the list of items increases with each request. Sampling was fun, too!

  Then there were operators like Distinct or Group which needed to use functions to determine sameness.

  With all this work, I've never intended to make this more than what LInQ is in .NET: a way to dynamically filter and map and enumerate sequences of items without having to go through them all or to create intermediate but unnecessary arrays. What I am talking about now is taking things further and deeper.

Continuous intervals

  What if the Enumerable abstraction is not enough? For example one could define the interval of real numbers between 0 and 1. You can never enumerate the next value, but there are definite boundaries, a clear distribution of values, a very obvious average. What about series and limits? If a generator generates values that depend on previous values, like a geometric progression or a Fibonacci series, you can sometimes compute the minimum or maximum value of the items in it, or of their sums.  

Tools

  So we have more concepts in our bag now:

  • move next (function)
  • current value
  • item length (could be infinite or just unknown)
  • indexed access (or not)
  • boundaries (min, max, limits)
  • distribution (probabilities)
  • order
  • discreteness

  How could we use these?

Concrete cases

  There is one famous probabilities problem: what are the chances you will get a particular value by throwing a number of dice. And it is interesting because there is a marked difference between using one die or more. Using at least two dice and plotting the values you get after multiple throws you get what is called a Normal distribution, a Gauss curve, and that's because there are more combinations of values that sum up to 6 than there are for 2.

  How can we declare a value that belongs to an interval? One solution is to add all kinds of metadata or validations. But what if we just declare an iterable with one value that has a random value between 1 and 6? And what if we add it with another one? What would that mean?

  Here is a demo example. It's silly and it looks too much like the Calculator demos you see for unit testing and I really hate those, but I do want to just demo this. What else can we do with this idea? I will continue to think about it.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class Program
    {
        static void Main(string[] args)
        {
            var die1 = new RandomGenerator(1, 6);
            var die2 = new RandomGenerator(1, 6);
            // just get the value
            var value1 = die1.First() + die2.First();
            // compose the two dice using Linq, then get value
            var value2 = die1.Zip(die2).Select(z => z.First + z.Second).First();
            // compose the two dice using operator overload, then get value
            var value3 = (die1 + die2).First();
            var min = (die1 + die2).Min();
        }

        /// <summary>
        /// Implemented Min alone for demo purposes
        /// </summary>
        /// <typeparam name="T"></typeparam>
        public interface IGenerator<T> : IEnumerable<T>
        {
            int Min();
        }

        /// <summary>
        /// Generates integer values from minValue to maxValue inclusively
        /// </summary>
        public class RandomGenerator : IGenerator<int>
        {
            private readonly Random _rnd;
            private readonly int _minValue;
            private readonly int _maxValue;

            public RandomGenerator(int minValue, int maxValue)
            {
                _rnd = new Random();
                this._minValue = minValue;
                this._maxValue = maxValue;
            }

            public static IGenerator<int> operator +(RandomGenerator gen1, IGenerator<int> gen2)
            {
                return new AdditionGenerator(gen1, gen2);
            }

            public IEnumerator<int> GetEnumerator()
            {
                while (true)
                {
                    yield return _rnd.Next(_minValue, _maxValue + 1);
                }
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return ((IEnumerable<int>)this).GetEnumerator();
            }

            public int Min()
            {
                return _minValue;
            }
        }
        
        /// <summary>
        /// Combines two generators through addition
        /// </summary>
        internal class AdditionGenerator : IGenerator<int>
        {
            private IGenerator<int> _gen1;
            private IGenerator<int> _gen2;

            public AdditionGenerator(Program.RandomGenerator gen1, Program.IGenerator<int> gen2)
            {
                this._gen1 = gen1;
                this._gen2 = gen2;
            }

            public IEnumerator<int> GetEnumerator()
            {
                var en1 = _gen1.GetEnumerator();
                var en2 = _gen2.GetEnumerator();
                while (true)
                {
                    var hasValue = en1.MoveNext();
                    if (hasValue != en2.MoveNext())
                    {
                        throw new InvalidOperationException("One generator stopped providing values before the other");
                    }
                    if (!hasValue)
                    {
                        yield break;
                    }
                    yield return en1.Current + en2.Current;
                }

            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return ((IEnumerable<int>)this).GetEnumerator();
            }

            public int Min()
            {
                return _gen1.Min() + _gen2.Min();
            }
        }
    }

Conclusion (so far)

I am going to think about this some more. It has a lot of potential as type abstraction, but to be honest, I deal very little in numerical values and math and statistics, so I don't see what I personally could do with this. I suspect, though, that other people might find it very useful or at least interesting. And yes, I am aware of mathematical concepts like interval arithmetic and I am sure there are a ton of existing libraries that already do something like that and much more, but I am looking at this more from the standpoint of computer science and quasi-primitive types than from a mathematical or numerical perspective. If you have any suggestions or ideas or requests, let me know!

  You can consider this an interview question, although to be fair if someone did ask me this for an interview I would say they are assholes. What is the difference between the pre-increment operator and the post-increment operator in C#?

  They look the same in C and C# and Javascript and Java and all the languages that share the curly bracket syntax with C, but in fact they are slightly different. Slight enough to make someone an asshole for asking the question as if it were relevant, but important enough for you to read about it. One of the most common interpretations of the syntax is that x++ is incrementing the value after the operation, while ++x is incrementing it before the operation. That is wrong.

  In fact, for C++ the return values are different between pre and post operators. I am not a C++ dev, so I give you this reference link: "Pre operators increment or decrement the value of the object and return a reference to the result. Post operators create a copy of the object, increment or decrement the value of the object and return the copy from before the increment or decrement." So one returns an object, the other returns a reference to an object. It is also possible that the assignment be done after the value was produced in C or C++. In C# the assignment must be done before any value is returned.

  In C#, to paraphrase Eric Lippert, "Both pre and post operators determine the value of the variable, what value will be assigned back to storage and assign the new value to storage. The postfix operator produces the original value, and the prefix operator produces the assigned value." So it's (kinda) like this piece of code:

int Increment(ref int x, bool post) {
  var originalX = x;
  var newX = x+1;
  x = newX;
  return post ? originalX : newX;
}

  So why the hell does it matter? I mean, it's a rather meaningless difference between the programming languages, and the before/after mnemonic makes the code pretty clear, doesn't it? OK. Let's try some code and let me see how fast you come up with the answer. Remember, this is supposed to be simple, so if you are thinking too much about it, it doesn't matter that you get the correct answer. Ready?

  1. Any difference between x++ and ++x if the resulting value is not used?
  2. var a=1; var b=++a; What's the value of b?
  3. var a=1; var b=a++; var c=++a; What's the value of c?
  4. var i = 0; for (i=0; i<5; ++i) Console.Write(i+" "); Console.WriteLine(i); What is printed at the console?
  5. var i = 0; for (i=0; i<5; i++) Console.Write(i+" "); Console.WriteLine(i); What is printed at the console?
  6. var a=1; a=a++; What's the value of a?

And all of this was about the increment operator as normally used for integer values. There is a big part about operator overloading in there, but I believe it is less relevant in the context of the differences between the pre and post increment/decrement operators.

There is one important part to discuss, though, and that is best code practices. When to use post and when to use pre. And it is really easy: separate statements from expressions! Statements execute code with side effects; they should return nothing. Expressions return values without side effects. If you never use the value of an increment or decrement and instead use it as a statement with side effects, there is no difference between ++a and a++. In fact, one doesn't need the pre-increment/pre-decrement operators at all! In this context, the answers for the questions above are: 1. No; 2, 3, 6: You are using it wrong! 4, 5: the same thing, since without getting the value we are back to scenario 1.

Just for reference, though, here are the answers:

  1. No
  2. 2
  3. 3 (b is 1)
  4. 0 1 2 3 4 5
  5. 0 1 2 3 4 5
  6. 1

Hope that makes you think.

  This is something that appeared in C# 5, so a long time ago, with .NET 4.5, but I only found out about it recently. Remember when you wanted to know the name of a property when doing INotifyPropertyChanged? Or when you wanted to log the name of the method that was calling? Or you wanted to know which line in which source file is responsible for calling a certain piece of code? All of this can be done with the Caller Information feature.

  And it is easy enough to use: just decorate a method parameter that has an explicit default value with any of these three attributes: CallerMemberName, CallerFilePath or CallerLineNumber.

The parameter value, if not set when calling the method, will be filled in with the member name or file name or line number. It's something that the compiler does, so no overhead from reflection. Even better, it works on the caller of the method, not the interior of the method. Imagine you had to write a piece of code to do the same. How would you reference the name of the method calling the method you are in?

Example from Microsoft's site:

public void DoProcessing()
{
    TraceMessage("Something happened.");
}

public void TraceMessage(string message,
        [System.Runtime.CompilerServices.CallerMemberName] string memberName = "",
        [System.Runtime.CompilerServices.CallerFilePath] string sourceFilePath = "",
        [System.Runtime.CompilerServices.CallerLineNumber] int sourceLineNumber = 0)
{
    System.Diagnostics.Trace.WriteLine("message: " + message);
    System.Diagnostics.Trace.WriteLine("member name: " + memberName);
    System.Diagnostics.Trace.WriteLine("source file path: " + sourceFilePath);
    System.Diagnostics.Trace.WriteLine("source line number: " + sourceLineNumber);
}

// Sample Output:
//  message: Something happened.
//  member name: DoProcessing
//  source file path: c:\Visual Studio Projects\CallerInfoCS\CallerInfoCS\Form1.cs
//  source line number: 31

First of all, what is TaskCompletionSource<T>? It's a class that exposes a Task that does not finish on its own, plus methods such as TrySetResult to complete it. When the result is set, the task completes. We can use this class to turn an event based programming model into an await/async one.

In the example below I will use a Windows Forms app, just so I have access to the Click handler of a Button. Only instead of using the normal EventHandler approach, I will start a thread immediately after InitializeComponent that will react to button clicks.

Here is the Form constructor. Note that I am using Task.Factory.StartNew instead of Task.Run because I need to specify the TaskScheduler in order to have access to a TextBox object. If it were to log something or otherwise not involve the UI, a Task.Run would have been sufficient.

    public Form1()
    {
        InitializeComponent();
        Task.Factory.StartNew(async () =>
        {
            while (true)
            {
                await ClickAsync(button1);
                textBox1.AppendText($"I was clicked at {DateTime.Now:HH:mm:ss.fffff}!\r\n");
            }
        },
        CancellationToken.None,
        TaskCreationOptions.DenyChildAttach,
        TaskScheduler.FromCurrentSynchronizationContext());
    }

What's going on here? I have a while (true) block and inside it I am awaiting a method then write something in a text box. Since await is smart enough to not use CPU and not block threads, this approach doesn't have any performance drawbacks.

Now, for the ClickAsync method:

    private Task ClickAsync(Button button1)
    {
        var tcs = new TaskCompletionSource<object>();
        void handler(object s, EventArgs e) => tcs.TrySetResult(null);
        button1.Click += handler;
        return tcs.Task.ContinueWith(_ => button1.Click -= handler);
    }

Here I am creating a task completion source, I am adding a handler to the Click event, then I am returning the task, which I continue with removing the handler. The handler just sets the result on the task source, thus completing the task.

The flow comes as follows:

  1. the source is created
  2. the handler is attached
  3. the task is returned, but does not complete, thus the loop is halted in await
  4. when the button is clicked, the source result is set, then the handler is removed
  5. the task completed, the await finishes and the text is appended to the text box
  6. the loop continues

It would have been cool if the method to turn an event into an async method worked like this: await button1.Click.MakeAsync(), but events are not first class citizens in .NET. Instead, something more cumbersome can be used to make this more generic (note that there is no error handling, for demo purposes):

    public Form1()
    {
        InitializeComponent();
        Task.Factory.StartNew(async () =>
        {
            while (true)
            {
                await EventAsync(button1, nameof(Button.Click));
                textBox1.AppendText($"I was clicked at {DateTime.Now:HH:mm:ss.fffff}!\r\n");
            }
        },
        CancellationToken.None,
        TaskCreationOptions.DenyChildAttach,
        TaskScheduler.FromCurrentSynchronizationContext());
    }

    private Task EventAsync(object obj, string eventName)
    {
        var eventInfo = obj.GetType().GetEvent(eventName);
        var tcs = new TaskCompletionSource<object>();
        EventHandler handler = delegate (object s, EventArgs e) { tcs.TrySetResult(null); };
        eventInfo.AddEventHandler(obj, handler);
        return tcs.Task.ContinueWith(_ => eventInfo.RemoveEventHandler(obj, handler));
    }

Notes:

  • is this a better method of doing things? That depends on what you want to do.
  • If you were to use Reactive Extensions, you can turn an event into an Observable with Observable.FromEventPattern.
  • I see it useful not for button clicks (that while true loop scratches at my brain), but for classes that have Completed events.
  • obviously the EventAsync method is not optimal and has no exception handling

  You are writing some code and you find yourself needing to call an async method in your event handler. The event handler, obviously, has a void return type and is not async, so when using await in it, you will get a compile error. I actually had a special class to execute async methods synchronously and I used that one, but I didn't actually need it.

  The solution is extremely simple: just mark the event handler as async.
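
  For example, in a Windows Forms form it could look something like this (the button, label and SaveDataAsync method are made up; the designer-generated part of the class is assumed):

using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();
        saveButton.Click += SaveButton_Click;
    }

    // async void is acceptable here precisely because it is an event handler
    private async void SaveButton_Click(object sender, EventArgs e)
    {
        await SaveDataAsync();
        statusLabel.Text = "Saved";
    }

    private Task SaveDataAsync() => Task.Delay(500); // stand-in for real asynchronous work
}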

  You should never use async void methods; instead use async Task or async Task<T>. The exception, apparently, is event handlers. And it kind of makes sense: nothing awaits an event handler anyway, it is fired and forgotten.

  More details here: Tip 1: Async void is for top-level event-handlers only

  And bonus: but what about constructors? They can't be marked as async, they have no return type!

  First, why are you executing code in your constructor? And second, if you absolutely must, you can also create an async void method that you call from the constructor. But the best solution is to make the constructor private and instead use a static async method to create the class, which will execute whatever code you need and then return new YourClass(values returned from async methods).
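
  A minimal sketch of that pattern (the class name and the loaded resource are made up):

using System.IO;
using System.Threading.Tasks;

public class ReportService
{
    private readonly string _configuration;

    // private constructor: consumers cannot bypass the asynchronous initialization
    private ReportService(string configuration)
    {
        _configuration = configuration;
    }

    // the asynchronous work happens here, then a fully initialized instance is returned
    public static async Task<ReportService> CreateAsync()
    {
        var configuration = await File.ReadAllTextAsync("report-config.json");
        return new ReportService(configuration);
    }
}

// usage: var service = await ReportService.CreateAsync();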

  That's a very good pattern, regardless of asynchronous methods: if you need to execute code in the constructor, consider hiding it and using a creation method instead.

Recently I found out about custom task schedulers and I wanted to blog about all the wonderful things you can do with them. I also imagined new ways of doing await/async by tweaking task schedulers. After hours of attempts, I've come to the conclusion that custom task schedulers are incompatible with await/async and should not be used. Here is why:

  • a task scheduler is used to execute synchronous code inside tasks while async/await code is already asynchronous
  • while async/await code is transformed by the compiler into a state machine with the code that follows being turned into a task that is scheduled on TaskScheduler.Current, the state machine has nothing to do with the task scheduler (see Dissecting the async methods in C#)
  • there are no methods that are both aware of await/async code and a custom task scheduler; by design they are incompatible (see Task.Run vs Task.Factory.StartNew)
  • while a stubborn developer could reproduce the functionality of Task.Run and specify a custom task scheduler, or detect tasks that return tasks and Unwrap them, there are easier and safer ways of doing the same thing without a custom task scheduler
  • as the scheduler will be used not only by the tasks run by the developer, but also by the code separated by await boundaries, the results will be unpredictable except in the most simple of scenarios

And a pretty diagram from Microsoft representing the order of the operations and how complex they are. It's not just a case of a method being executed somewhere, but a complex flow that uses the ThreadPoolTaskScheduler as the default task scheduler, a fundamental low level piece of functionality that should not be changed.

If you need more convincing, consider that the code after an await instruction may not even execute on the same thread (or indeed thread pool) as the one before, even if as written appears part of the same method (see async - stay on the current thread? for more details). More on thread pools from Jon Skeet here: The Thread Pool and Asynchronous Methods.

I got this exception that doesn't appear anywhere on Google while I was debugging a .NET Core web app. You just have to enable Windows Authentication in the project properties (Debug tab). Duh!

System.InvalidOperationException: The Negotiate Authentication handler cannot be used on a server that directly supports Windows Authentication. Enable Windows Authentication for the server and the Negotiate Authentication handler will defer to it.
   at Microsoft.AspNetCore.Authentication.Negotiate.PostConfigureNegotiateOptions.PostConfigure(String name, NegotiateOptions options)
   at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
   at Microsoft.Extensions.Options.OptionsMonitor`1.<>c__DisplayClass11_0.<Get>b__0()
   at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
   at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
   at System.Lazy`1.CreateValue()
   at System.Lazy`1.get_Value()
   at Microsoft.Extensions.Options.OptionsCache`1.GetOrAdd(String name, Func`1 createOptions)
   at Microsoft.Extensions.Options.OptionsMonitor`1.Get(String name)
   at Microsoft.AspNetCore.Authentication.AuthenticationHandler`1.InitializeAsync(AuthenticationScheme scheme, HttpContext context)
   at Microsoft.AspNetCore.Authentication.AuthenticationHandlerProvider.GetHandlerAsync(HttpContext context, String authenticationScheme)
   at Microsoft.AspNetCore.Authentication.AuthenticationService.ChallengeAsync(HttpContext context, String scheme, AuthenticationProperties properties)
   at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)

This translates to a change in Properties/launchSettings.json like this:

{
  "iisSettings": {
    "windowsAuthentication": true,
    "anonymousAuthentication": true,
    //...
  },
  //...
}

I will show you some code, like in an interview question. Try to figure out what happened, then read on. The question is: what does the following code write to the console:

class Program
{
    static void Main(string[] args)
    {
        var c = new MyClass();
        c.DoWork();
        Console.ReadKey();
    }
}
 
class BaseClass
{
    public BaseClass()
    {
        DoWork();
    }
 
    public virtual void DoWork()
    {
        Console.WriteLine("Doing work in the base class");
    }
}
 
class MyClass:BaseClass
{
    private readonly string _myString = "I've set the string directly in the field";
 
    public MyClass()
    {
        _myString = "I've set the string in the constructor of MyClass";
    }
 
    public override void DoWork()
    {
        Console.WriteLine($"I am doing work in MyClass with my string: {_myString}");
    }
}

The output is:

I am doing work in MyClass with my string: I've set the string directly in the field
I am doing work in MyClass with my string: I've set the string in the constructor of MyClass

The field initializers of MyClass run before the BaseClass constructor, which then calls the virtual DoWork; the override runs while _myString still holds the value set directly in the field. Only afterwards does the MyClass constructor body overwrite the string, so the explicit c.DoWork() call prints the constructor value. The lesson: be very careful when calling virtual methods from constructors.

Let's say you have a Type and you want to get it by its simple name, not the one with the entire namespace. So for bool, for example, you want to use Boolean, not System.Boolean. And if you try in your code typeof(bool).Name you get "Boolean" and for typeof(bool).FullName you get "System.Boolean".

However, for generic types, that is not the case. Try typeof(int?). For FullName you get "System.Nullable`1[[System.Int32, System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]]", but for Name you get "Nullable`1".

So the "name" of the type is just that, the name. In case of generics, the name of the type is the same as the name of its generic definition. I find this a bit disingenuous, because in the name you get encoded the fact that is a generic type or not and how many generic type attributes it has, but you don't get the attributes themselves.

I admit that if I had to make a choice, I couldn't have come up with one to satisfy all demands, either. Just a heads up that Type.Name should probably not be used anywhere.

A funny feature that I've encountered recently. It's not something most people would find useful, but it helps tremendously with tracing and debugging what is going on. It's easy, just add .TagWith(someString) to your LINQ query and it will generate comments in SQL. More details here: Query tags.
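
A minimal sketch of what that looks like (the context and entity are made up; TagWith needs EF Core 2.2 or later):

// using Microsoft.EntityFrameworkCore;
// the tag ends up as a comment at the top of the generated SQL
var activeUsers = await dbContext.Users
    .TagWith("Dashboard: loading active users")
    .Where(u => u.IsActive)
    .ToListAsync();

// the generated SQL starts with something like:
// -- Dashboard: loading active users
// SELECT ... FROM Users ...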

One of the best things you could do in software is unit testing. There are tons of articles, including mine, explaining why people should take the time to write code in a way that makes it easily split into independent parts that can then be automatically tested. The part that is painful comes afterwards, when you've written your software, put it in production and you are furiously working on the second iteration. Traditionally, unit tests are great for refactorings, but when you are changing the existing code, you need not only to "fix" the tests, but also to cover the new scenarios and allow for changes and expansions of the existing ones.

Long story short, you will not be able to be confident your test suite covers the code as it changes until you can compute something called Code Coverage, or the amount of your code that is traversed during unit tests. Mind you, this is not a measure of how much of your functionality is covered, only the lines of code. In Visual Studio, they did a dirty deed and restricted the functionality to the Enterprise edition. But I am here to tell you that in .NET Core (and possibly for Framework, too, but I haven't tested it) it's very easy to have all the functionality and more even for the free version of Visual Studio.

These are the steps you have to take:

  • Add coverlet.msbuild NuGet to your unit tests project
  • Add ReportGenerator NuGet to your unit tests project
  • Write a batch file that looks like

    @ECHO OFF
    REM You need to add references to the nuget packages ReportGenerator and coverlet.msbuild
    IF NOT EXIST "..\..\packages\reportgenerator" (
      ECHO You need to install the ReportGenerator by Daniel Palme nuget
      EXIT 1
    )
    IF NOT EXIST "..\..\packages\coverlet.msbuild" (
      ECHO You need to install the coverlet.msbuild by tonerdo nuget
      EXIT 1
    )
    IF EXIST "bin/CoverageReport" RMDIR /Q /S "bin/CoverageReport"
    IF EXIST "bin/coverage.opencover.xml" DEL /F "bin/coverage.opencover.xml"
    dotnet test "Primus.Core.UnitTests.csproj"  --collect:"code coverage" /p:CollectCoverage=true /p:CoverletOutputFormat=\"opencover\" /p:CoverletOutput=\"bin/coverage\"
    for /f "tokens=*" %%a in ('dir ..\..\packages\reportgenerator /b /od') do set newest=%%a
    "..\..\packages\reportgenerator\%newest%\tools\netcoreapp3.0\ReportGenerator.exe" "-reports:bin\coverage.opencover.xml" "-targetdir:bin/CoverageReport" "-assemblyfilters:-*.DAL*" "-filefilters:-*ServiceCollectionExtensions.cs"
    start "Primus Plumbing Code Coverage Report" "bin/CoverageReport/index.htm"​


    and save it in your unit test project folder
  • Optional: follow this answer on StackOverflow to be able to see the coverage directly in Visual Studio


Notes about the batch file:

  • newest is the current version of ReportGenerator, if that doesn't work, change it with whatever version you have (ex: 4.3.0)
  • the DAL filter tells the report to ignore projects with DAL in their name. You shouldn't have to unit test your data access layer.
  • the ServiceCollectionExtensions.cs filter is for files that should be ignored; for example, extension methods rarely need to be unit tested


Running the batch should start dotnet test and save the results both in coverage.opencover.xml and in some files in the TestResults folder. Then ReportGenerator will generate an HTML file with the coverage report, which will be opened at the end. However, if you followed the optional answer, you are now able to open the files in TestResults and see which parts of your code are covered, directly in the Visual Studio editor! I found this less useful than the HTML report, but some of my colleagues liked the option.

I hope this helps you.

There is one basic functionality of all main programming languages: throwing exceptions. Determining in code that something is wrong, one throws an exception of a certain type with extra messages and values. The problem this solves is breaking an execution flow that has entered an invalid state and being aware of what happened. Traditionally, errors are then caught in higher levels of the application and decisions are made: ignore the error, log it, encapsulate it into another exception with extra information, throw it as it is after some cleanup, etc.

But as the joke goes, now you have two problems. When developing .NET code you have to ask yourself what type of exception you are going to throw, what data to add to it and think of what will catch it above and how will it interpret what you sent. Some people create a different exception type for each little issue, in view of the multiple catch(SpecificExceptionType) functionality, so they can choose later what to do at a higher level. Others try to use the out of the box Microsoft exception types, a clear case of stuffing square pegs in round holes. Inevitably someone will just give up in frustration and throw a new Exception("Something went wrong!"); and be done with it. And recently, in order to solve the problems with the above approaches, I envisioned (with full documentation and implementation) a dependency injected IExceptionFactory which I thought was the greatest invention since fire only to discover it was so unwieldy to use that I despaired and deleted the entire thing.

Discussion


Discussing with friends about this deceptively complicated problem, I think I found a solution that covers all major scenarios. But before doing that, I can just feel that some of you thought "Hey! I am doing that and there is nothing wrong with it!", so let's discuss what's wrong with the approaches above. If you want to skip this, go to The Solution section.

Multiple Exception implementations


Extending Exception is not simple. There are four constructors and I dare you to say off the top of your head what Exception(SerializationInfo, StreamingContext) is and where it is used. There are numerous code analyzers that spew a lot of warnings about how Exceptions should be implemented correctly. That's another story: here is a nice article about it. More importantly, doing all this work for every possible exception takes time and effort and duplicated code. In the end, you will get to the next scenario, but with a larger set of hole shapes.
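
For reference, this is roughly the boilerplate analyzers expect for every new exception type (EmptyValueException is a made up name):

using System;
using System.Runtime.Serialization;

[Serializable]
public class EmptyValueException : Exception
{
    public EmptyValueException() { }
    public EmptyValueException(string message) : base(message) { }
    public EmptyValueException(string message, Exception innerException) : base(message, innerException) { }
    protected EmptyValueException(SerializationInfo info, StreamingContext context) : base(info, context) { }
}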

Also, the try/catch block in C# 6.0 has been updated with the catch when syntax, so you can have multiple catch blocks with the same Exception type and different conditions.
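
A quick sketch of that syntax (ProcessOrder and order are made up):

try
{
    ProcessOrder(order);
}
catch (ArgumentException ex) when (ex.ParamName == "orderId")
{
    // one condition, one handling strategy
}
catch (ArgumentException ex) when (ex.Message.Contains("empty"))
{
    // same exception type, different condition, different handling
}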

Using existing exception types


If you get an empty string from a method and you expected something there, you should throw an exception, but which one? The value is not null, so ArgumentNullException is kind of not applicable. Is ArgumentOutOfRangeException better? I mean, empty string is not in the range of accepted values for the parameter, maybe it could be it. Or is it just ArgumentException? You decide on ArgumentException and you smugly add the name of the variable with nameof(yourLocalVariable), because you are knowledgeable in the ways of code... and you get a warning that yourLocalVariable is not the name of any parameter of the method you are in. That's right, the value was invalid, but ArgumentExceptions are used specifically for the current method arguments.

You don't want to use multiple custom exception types, because you've read this post and abandoned it half way, but you agreed with the first point. Or maybe you are just lazy. You ignore the warning and use ArgumentException anyway. Later on you are reading the logs and trying to remember where in the code you used yourLocalVariable and why it matters that it's empty.

Admit it, the Microsoft exception types were not really meant to help you throw exceptions, they are there for Microsoft's internal code and use. Most of the few cases when the exception type is spot on are probably not what the makers of that exception type envisioned when they made it.

Using Exception and a meaningful message


You are done with pointless standards. You just use throw new Exception($"This really specific thing happened with variable {yourVariable}"); and let God sort them out! You can use catch when to look into the string, parse it for information and make decisions on it. It actually works, and you're rightly satisfied with yourself. You've showed them all how it's done. Boom! And then a junior developer comes along and decides your wording is not quite right for a native English speaker and changes the string. Suddenly everything literally goes boom, as exceptions get where they shouldn't and flows change unexpectedly.

After warning the entire team to never change the exception strings, as they are used in the functionality of the application, you even consider creating a resource system for Exception strings so that decisions can be made regardless of content (and you inevitably hate the way you need to store format strings and remember which value goes where). Then a member of the UI team comes and says "Hey, I need to get the reason the flow failed to the user. And I need to translate it into their language". And you despair.

Using a single type of Exception that has everything you need


A slight variation on all of the points above, this involves creating only one type of custom exception and adding to it whatever is needed to determine flow, string resource ids, etc. This is actually a pretty decent idea, as it puts the control back into the developer's hands. Why depend on Microsoft types or parse strings? Context is for kings and you are a king amongst kings.

However, whenever you want to change something, like adding a value to an enum that defines the type of the exception or changing the way in which a certain exception is handled, you have to change all the code that uses that exception. It's a single point of use, but not a single point of change.

Moreover, other devs in the team think it is cumbersome to work with it. The exception type is stored in the basest of libraries and they all want to add something to it. It becomes bloated and soon enough it creeps into a huge mess that is handled differently in different code and is not easy to maintain, understand or use.

Another layer of indirection


So why not use an exception factory? Everything else in your code works on the premise that "if you want something, you inject an ISomething in the constructor and worry about the implementation never". Why not inject IExceptionFactory everywhere where you need exceptions, then do something magic with it? The result of the operation is determined by the implementation, too. If you want another implementation, you just inject something else. It's genius!

Only then you have to use it. How do you inject the factory into static methods, extension methods, stuff so basic that it is used as utility classes all over the code, when now you have to add an extra dependency to everything that uses those classes? Everybody hates you, hates having to add an extra constructor parameter, an extra field, then throw exceptions with something like throw _exceptionFactory.New("Something went wrong!",new ParameterEmptyExceptionData(nameof(localVariable), localVariable)); while adding a dependency to the logging library that the factory uses to log generated exceptions.

Oh, it's just crap!

The Solution


Let's start from an existing piece of code: throw new ArgumentException("{localVariable} is null or empty");. Optimally, we would just want to change this code slightly to solve several issues:
  • formalize that it is an argument empty exception
  • make it clear it's localVariable that was empty
  • maybe add the actual value of localVariable
  • declare the context in which the exception was thrown
  • declare the message that should be used in the exception
  • throw a meaningful exception type
  • decide if this exception should be ignored or thrown
  • log the exception
  • minimize developer effort
  • minimize dependencies
  • use a solution that is closed for modification, but open for extension

A tall order, especially since we've already decided that we don't want to use the factory idea. Some of the issues above are also non-issues in most cases. What if I don't care about the language of the message or if it is a resource or not, it's something used internally in our code. localVariable is empty, I don't need its value. The context is clear from the Stack trace. The exception is meaningful enough as an ArgumentException. In other words, we need to solve one more issue: all of the issues above are optional.

  The software pattern that covers this scenario, and one that has been used extensively by library makers, is the builder pattern. For the sake of exploration, let's see how this would work:

var exception = new ArgumentException("{localVariable} is null or empty");
var builder = new ExceptionBuilder(exception, logger)
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .ShouldBeIgnored();
throw builder.Build(); // this also logs and returns an exception of a type the builder decides is relevant

This looks promising, considering that every method above is optional except the builder instantiation and the Build at the end, but it's still too close to the factory idea above. Why use new in a project that is based on dependency injection? Why call .Build() everywhere you need to throw an exception? Where does the logger come from?

So here is the solution I am proposing, using several resources we have at our disposal in C#:
  • the Exception type has a Data Dictionary property for additional data
  • extension methods can be defined in multiple places for the same type
  • with extension methods there is no need for an instance of a builder, or even a final Build call, when throwing an exception

The code will look like this:
throw new ArgumentException("{localVariable} is null or empty")
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .TryToIgnore()
    .Build();

Each method above is an extension method on the Exception type. Any of them can decide to return the original object or a different one, but they all return an instance that extends Exception. The information attaching methods use the Data property to hold the information. The Build method is designed to take every information attached to an Exception and perform more complex actions, like logging or constructing a completely different object to be returned, however that step is also optional.

And here is the source for an ExceptionBuilder static class that acts both as a container for the more common extension methods and as the point where dependencies are registered:
/// <summary>
/// Add data to exceptions, then build a Custom exception
/// using registered <see cref="IExceptionBuildHandler"/> and optional logging
/// </summary>
public static class ExceptionBuilder
{
private const string CustomPrefix = "Custom.";
 
private static readonly List<IExceptionBuildHandler> _handlers = new List<IExceptionBuildHandler>();
private static ICustomLogger _logger;
 
#region Extended Data
 
/// <summary>
/// Attaches a custom <see cref="Error"/> to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="error"></param>
/// <returns></returns>
public static Exception SetError(this Exception ex, Error error)
{
_logger?.LogTrace($"Setting error {error} in exception {ex}");
return ex.SetData(nameof(Error), error);
}
 
/// <summary>
/// Attaches an object as the exception origin to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="origin"></param>
/// <returns></returns>
public static Exception SetOrigin(this Exception ex, object origin)
{
_logger?.LogTrace($"Setting exception origin {origin} in exception {ex}");
return ex.SetData("origin", origin);
}
 
/// <summary>
/// Attaches a name parameter to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="name"></param>
/// <returns></returns>
public static Exception SetName(this Exception ex, string name)
{
_logger?.LogTrace($"Setting exception name {name} in exception {ex}");
return ex.SetData("name", name);
}
 
/// <summary>
/// Declare an exception as not breaking the execution flow.
/// Implement catch blocks for exceptions like this to support this scenario.
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static Exception TryToIgnore(this Exception ex)
{
_logger?.LogTrace($"Declaring exception {ex} as not breaking execution flow");
return ex.SetData("tryToIgnore", true);
}
 
/// <summary>
/// True if this exception is declared as not breaking execution flow
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static bool ShouldBeIgnored(this Exception ex)
{
return object.Equals(ex.GetData("tryToIgnore"), true);
}
 
/// <summary>
/// Attaches a type to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="type"></param>
/// <returns></returns>
public static Exception AddType(this Exception ex, Type type)
{
_logger?.LogTrace($"Attaching type {type} in exception {ex}");
return ex.AddData("types", type);
}
 
 
/// <summary>
/// Attaches a value to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception AddValue(this Exception ex, object value)
{
_logger?.LogTrace($"Attaching type {value} in exception {ex}");
return ex.AddData("values", value);
}
 
/// <summary>
/// Gets data from exception based on key.
/// Returns null if not found.
/// </summary>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <returns></returns>
public static object GetData(this Exception ex, string key)
{
key = $"{CustomPrefix}{key}";
if (ex.Data?.Contains(key)!=true)
{
return null;
}
return ex.Data[key];
}
 
/// <summary>
/// Attaches an object to exception data replacing any previous one with the same key
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception SetData<T>(this Exception ex, string key, T value)
{
key = $"{CustomPrefix}{key}";
var result = ex.AsCustomException();
ex.Data[key] = value;
return result;
}
 
/// <summary>
/// Adds an object to a list that resides in the exception data at the given key
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception AddData<T>(this Exception ex, string key, T value)
{
key = $"{CustomPrefix}{key}";
var result = ex.AsCustomException();
var alreadyExists = ex.Data.Contains(key);
if (!alreadyExists || !(ex.Data[key] is List<T> list))
{
if (alreadyExists)
{
_logger?.LogWarning($"Overwriting data {ex.Data[key]} with key {key} with an empty list of {typeof(T).Name} in exception {ex}.");
_logger?.LogWarning($"Are you using Add* and Set* builder methods at the same time or adding objects of different types?");
}
list = new List<T>();
ex.Data[key] = list;
}
lock (list)
{
list.Add(value);
}
return result;
}
 
#endregion Extended Data
 
/// <summary>
/// Builds the exception from the Data and the provided base exception
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static CustomException Build(this Exception ex)
{
var customException = ex.AsCustomException();
lock (_handlers)
{
for (var index = _handlers.Count - 1; index >= 0; index--)
{
var handler = _handlers[index];
var result = handler.Build(customException, _logger);
if (result != null)
{
customException = result.AsCustomException();
break;
}
}
}
_logger?.LogTrace($"Built exception {customException}");
return customException;
}
 
 
#region Registration
 
/// <summary>
/// Register an <see cref="IExceptionBuildHandler"/>. The last handler to be added will take precedence.
/// </summary>
/// <param name="handler"></param>
public static void RegisterBuildHandler(IExceptionBuildHandler handler)
{
lock (_handlers)
{
_logger?.LogTrace($"Registering exception build handler {handler}");
_handlers.Add(handler);
}
}
 
/// <summary>
/// Register a logger
/// </summary>
/// <param name="logger"></param>
public static void RegisterLogger(ICustomLogger logger)
{
_logger = logger;
_logger?.LogTrace($"Registered logger in the exception builder");
}
 
#endregion Registration
 
private static CustomException AsCustomException(this Exception ex)
{
return ex is CustomException customException
? customException
: new CustomException(ex);
}
}


Note a few things:
  • All of the extension methods return the same exception object, except for Build, which returns a CustomException that may include the extra Data values in its ToString output
  • The external dependencies are registered via methods. In the class I actually use, I even replaced those methods with a RegisterServiceProvider method that sets up everything it needs, including the list of handlers, from dependency injection
  • One doesn't need to call Build; every extension method naturally continues normal existing code like throw new WhateverException(); (see the sketch after this list)
  • When using Build, though, you can change the exception object that is being thrown just by injecting another instance of IExceptionBuildHandler
  • In my project, I've devised a method of injecting code via a text configuration file. That means you can change what happens when an exception is being thrown without recompiling your existing code.
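
As an example of how these extensions read at a call site, here is a minimal sketch; the Save method, the User type and the attached values are made up for illustration:
public void Save(User user) // hypothetical method and User type, for illustration only
{
    if (user == null)
    {
        // the extension methods chain on a perfectly normal throw statement
        throw new ArgumentNullException(nameof(user))
            .SetOrigin(this)
            .SetName(nameof(user))
            .AddType(typeof(User))
            .Build();
    }
    // ... actual saving logic
}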

Finally, there is one design decision that I am not sure about: to use throw exception, or to use exception.Throw()? The former is natural to all devs, but it needs special catch blocks in order to resume execution; whatever the builder returns, it will always throw something. The latter requires a change in all code that throws exceptions, but it could decide whether to throw anything at all without a recompile.

I lean toward the former, just because changes in an existing code base can be done incrementally and the code can be understood by all devs, regardless of seniority.
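
For completeness, here is roughly what a catch block supporting the first option might look like; DoSomethingOptional is a made-up method that may throw exceptions marked with TryToIgnore:
try
{
    DoSomethingOptional(); // hypothetical method that may throw exceptions marked with TryToIgnore()
}
catch (Exception ex) when (ex.ShouldBeIgnored())
{
    // the thrower declared this exception as not breaking the execution flow, so we log and move on
    Console.WriteLine($"Ignoring: {ex.Message}");
}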

I find this to be a wonderful idea, clear, useful and flexible. I hope you do, too!

.NET Core comes with its own dependency injection engine, shipped in the Microsoft.Extensions.DependencyInjection package, and ASP.NET Core uses it by default. In a very simplistic description, you add services to an IServiceCollection, then build an IServiceProvider from that list, an interface which returns an implementation based on a type, or null if none is found. Changing the list of services after the provider has been built is not supported. There are situations, though, where you want to add new services, one of them being dynamically resolving new types.
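
As a quick refresher, here is that flow in a minimal sketch; IGreeter and Greeter are made-up types:
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { void Greet(); }
public class Greeter : IGreeter { public void Greet() => Console.WriteLine("Hello!"); }

public static class Program
{
    public static void Main()
    {
        IServiceCollection services = new ServiceCollection();
        services.AddSingleton<IGreeter, Greeter>();

        // once built, the provider resolves registered types and returns null for unknown ones
        IServiceProvider provider = services.BuildServiceProvider();
        var greeter = provider.GetService(typeof(IGreeter));    // a Greeter instance
        var missing = provider.GetService(typeof(IDisposable)); // null, nothing registered for it

        // adding more services to the collection at this point does not affect the provider above
    }
}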

Therefore I set out to create a custom implementation of IServiceProvider that fixes that, using the mechanisms already existing in .NET Core. Note that this is just something I did from frustration, "because I could". Most people choose to replace the entire IServiceProvider with an implementation that uses some other DI container, like StructureMap.

My first attempt was proxying a normal ServiceProvider and keeping a reference to the collection. Then I would just change the collection and recreate the service provider. That has two major problems. One is that the previous service provider is not disposed: if you dispose it, you automatically dispose all services already resolved, and if you do not, you keep references to the created services. The second, and more dire, is that recreating the service provider will generate new instances for services, even when they are registered as singletons. That is not good.

I thought of a solution:
  1. keep a list of service providers, instead of just one
  2. use a custom service collection which will let us know when changes occurred
  3. whenever new services are added, add them to a list of new services
  4. whenever a service is resolved, go through the list of providers
  5. if any provider returns a value, provide it
  6. else if any new service create a new provider from the new services and add it to the list
  7. else return null
  8. when disposing, dispose all providers in the list

This works great, except that the newly added providers are separate from the existing ones, so when you try to resolve a type with a second provider and that type has, in its constructor, a type that was registered only with the first provider, you get nothing.
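
Here is a small sketch of that failure mode, with made-up IDependency/Dependency/Consumer types:
using Microsoft.Extensions.DependencyInjection;

public interface IDependency { }
public class Dependency : IDependency { }
public class Consumer
{
    public Consumer(IDependency dependency) { }
}

public static class ChainedProvidersProblem
{
    public static void Run()
    {
        // the first provider knows about IDependency
        var firstProvider = new ServiceCollection()
            .AddSingleton<IDependency, Dependency>()
            .BuildServiceProvider();

        // the second provider is built only from the newly added descriptors
        var secondProvider = new ServiceCollection()
            .AddSingleton<Consumer>()
            .BuildServiceProvider();

        // throws: the second provider has no idea how to resolve IDependency for Consumer
        var consumer = secondProvider.GetService<Consumer>();
    }
}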

One solution would be to add all services to the second provider, not only the new ones, but then we are back to the original issue of the singletons, only in a slightly more subtle form (see the sketch after this list):
  1. register type1 as a singleton
  2. get an instance of type1 (1)
  3. build the provider
  4. get an instance of type1 (2)
  5. register type2 which receives a type1 in its constructor
  6. get an instance of type2
  7. now, type1 (1) is the same as type1 (2), because it was resolved by the same provider
  8. type1 is different from type2.type1, though, because that was resolved as a different singleton by the second provider in the list
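
Here is that scenario in a minimal sketch, with made-up Type1/Type2 classes:
using Microsoft.Extensions.DependencyInjection;

public class Type1 { }
public class Type2
{
    public Type1 Type1 { get; }
    public Type2(Type1 type1) => Type1 = type1;
}

public static class RebuiltSingletonProblem
{
    public static void Run()
    {
        var services = new ServiceCollection();
        services.AddSingleton<Type1>();

        var firstProvider = services.BuildServiceProvider();
        var first = firstProvider.GetService<Type1>();  // instance (1)
        var second = firstProvider.GetService<Type1>(); // same object as (1)

        // a "second provider" rebuilt from all descriptors creates its own singleton
        services.AddSingleton<Type2>();
        var secondProvider = services.BuildServiceProvider();
        var type2 = secondProvider.GetService<Type2>();
        // type2.Type1 is a different instance from first/second
    }
}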

One solution would then be to add all previous services as factories. For IType1, instead of returning typeof(Type1), return a factory method that resolves the value with our system. And it works... until it reaches a definition (like IOptions) that was registered as an open generic: services.AddSingleton(typeof(IType3<>),typeof(Type3<>)). In the case of open generics, you cannot use a descriptor with a factory, because a factory returns an object, regardless of the generic type argument used. It would not do to return a Type3<Banana> for a requested type of IType3<int>.
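
A quick illustration of that limitation, with made-up IType3<>/Type3<> types:
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IType3<T> { }
public class Type3<T> : IType3<T> { }

public static class OpenGenericLimitation
{
    public static void Run()
    {
        var services = new ServiceCollection();

        // this works: an open generic service type mapped to an open generic implementation type
        services.AddSingleton(typeof(IType3<>), typeof(Type3<>));

        // but the same mapping cannot be expressed as a factory descriptor: a factory produces
        // one object for one service type and has no way of knowing which closed IType3<T>
        // is actually being requested
        // services.AddSingleton(typeof(IType3<>), provider => /* a Type3<what?> */ null);

        var serviceProvider = services.BuildServiceProvider();
        Console.WriteLine(serviceProvider.GetService<IType3<int>>()); // Type3`1[System.Int32]
    }
}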

So, final version is this:
  1. keep a list of service providers, instead of just one
  2. keep a dictionary of the last object resolved for a type
  3. use a custom service collection which will let us know when changes occurred
  4. whenever new services are added, add them to a list of new services
  5. whenever a service is resolved, go through the list of providers
  6. if any provider returns a value, return it
  7. if no new services registered return null
  8. create a new provider from all the services like this:
    • if it's a new registration, use it as is
    • if it's an open generic definition type:
      • if singleton, add first all the existing resolutions for types that are defined by it
      • use the original descriptor afterwards
    • use a registration that proxies to the advanced resolution mechanism we created
  9. when disposing, dispose all providers in the list

This implementation still has a flaw: if a dependency parameter whose type falls under an open generic definition descriptor was resolved as a singleton by an additional service provider, and that type is then requested directly and can be resolved by a previous provider, you will get a different instance. Here is the scenario:
  1. the initial provider knows to map I<> to M<>
  2. you add a new singleton mapping from X to Y and Y gets a constructor parameter of type I<Z>
  3. you request an instance of X
  4. the first provider cannot resolve it
  5. the second provider can resolve it, therefore it will also resolve a I<Z> as an M<Z> singleton instance
  6. you request an instance of I<Z>
  7. the first provider can resolve it, therefore it will return a NEW singleton instance of M<Z>

This is an edge case that I don't have the time to solve. So, with the caveat above, here is the final version.
Use it like this:
// IAdvancedServiceProvider either injected 
// or resolved via serviceProvider.GetService<IAdvancedServiceProvider>
// or even serviceProvider as IAdvancedServiceProvider
advancedServiceProvider.ServiceCollection.AddSingleton...
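
To make this more concrete, here is a slightly fuller usage sketch, with made-up IFoo/Foo/IBar/Bar services; Bar takes an IFoo in its constructor, to show that new registrations can still depend on the original ones:
using Microsoft.Extensions.DependencyInjection;

public interface IFoo { }
public class Foo : IFoo { }
public interface IBar { }
public class Bar : IBar
{
    public Bar(IFoo foo) { } // depends on a service from the initial registrations
}

public static class AdvancedProviderDemo
{
    public static void Run()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IFoo, Foo>();
        var provider = new AdvancedServiceProvider(services);

        var foo = provider.GetService(typeof(IFoo)); // resolved by the initial provider

        // add a brand new service after the provider was created
        provider.ServiceCollection.AddSingleton<IBar, Bar>();
        var bar = provider.GetService(typeof(IBar)); // resolved by a newly built provider in the chain
    }
}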

And this is the source code:
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public interface IAdvancedServiceProvider : IServiceProvider
{
/// <summary>
/// Add services to this collection
/// </summary>
IServiceCollection ServiceCollection { get; }
}
 
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public class AdvancedServiceProvider : IAdvancedServiceProvider, IDisposable
{
private readonly List<ServiceProvider> _serviceProviders;
private readonly NotifyChangedServiceCollection _services;
private readonly object _servicesLock = new object();
private List<ServiceDescriptor> _newDescriptors;
private Dictionary<Type, object> _resolvedObjects;
 
/// <summary>
/// Initializes a new instance of the <see cref="AdvancedServiceProvider"/> class.
/// </summary>
/// <param name="services">The services.</param>
public AdvancedServiceProvider(IServiceCollection services)
{
// registers itself in the list of services
services.AddSingleton<IAdvancedServiceProvider>(this);
 
_serviceProviders = new List<ServiceProvider>();
_newDescriptors = new List<ServiceDescriptor>();
_resolvedObjects = new Dictionary<Type, object>();
_services = new NotifyChangedServiceCollection(services);
_services.ServiceAdded += ServiceAdded;
_serviceProviders.Add(services.BuildServiceProvider(true));
}
 
private void ServiceAdded(object sender, ServiceDescriptor item)
{
lock (_servicesLock)
{
_newDescriptors.Add(item);
}
}
 
/// <summary>
/// Add services to this collection
/// </summary>
public IServiceCollection ServiceCollection { get => _services; }
 
/// <summary>
/// Gets the service object of the specified type.
/// </summary>
/// <param name="serviceType">An object that specifies the type of service object to get.</param>
/// <returns>A service object of type serviceType. -or- null if there is no service object of type serviceType.</returns>
public object GetService(Type serviceType)
{
lock (_servicesLock)
{
// go through the service provider chain and resolve the service
var service = GetServiceInternal(serviceType);
// if service was not found and we have new registrations
if (service == null && _newDescriptors.Count > 0)
{
// create a new service collection in order to build the next provider in the chain
var newCollection = new ServiceCollection();
foreach (var descriptor in _services)
{
foreach (var descriptorToAdd in GetDerivedServiceDescriptors(descriptor))
{
((IList<ServiceDescriptor>)newCollection).Add(descriptorToAdd);
}
}
var newServiceProvider = newCollection.BuildServiceProvider(true);
_serviceProviders.Add(newServiceProvider);
_newDescriptors = new List<ServiceDescriptor>();
service = newServiceProvider.GetService(serviceType);
}
if (service != null)
{
_resolvedObjects[serviceType] = service;
}
return service;
}
}
 
private IEnumerable<ServiceDescriptor> GetDerivedServiceDescriptors(ServiceDescriptor descriptor)
{
if (_newDescriptors.Contains(descriptor))
{
// if it's a new registration, just add it
yield return descriptor;
yield break;
}
 
if (!descriptor.ServiceType.IsGenericTypeDefinition)
{
// for a non-open-generic descriptor, register a factory that goes through the provider chain, preserving the original lifetime
yield return ServiceDescriptor.Describe(
descriptor.ServiceType,
_ => GetServiceInternal(descriptor.ServiceType),
descriptor.Lifetime
);
yield break;
}
// if the registered service type for a singleton is an open generic type
// we register as factories all the already resolved specific types that fit this definition
if (descriptor.Lifetime == ServiceLifetime.Singleton)
{
foreach (var servType in _resolvedObjects.Keys.Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == descriptor.ServiceType))
{
 
yield return ServiceDescriptor.Describe(
servType,
_ => _resolvedObjects[servType],
ServiceLifetime.Singleton
);
}
}
// then we add the open type registration for any new types
yield return descriptor;
}
 
private object GetServiceInternal(Type serviceType)
{
foreach (var serviceProvider in _serviceProviders)
{
var service = serviceProvider.GetService(serviceType);
if (service != null)
{
return service;
}
}
return null;
}
 
/// <summary>
/// Dispose the provider and all resolved services
/// </summary>
public void Dispose()
{
lock (_servicesLock)
{
_services.ServiceAdded -= ServiceAdded;
foreach (var serviceProvider in _serviceProviders)
{
try
{
serviceProvider.Dispose();
}
catch
{
// singleton classes might be disposed twice and throw some exception
}
}
_newDescriptors.Clear();
_resolvedObjects.Clear();
_serviceProviders.Clear();
}
}
 
/// <summary>
/// An IServiceCollection implementation that exposes a ServiceAdded event for added service descriptors
/// The collection doesn't support removal or inserting of services
/// </summary>
private class NotifyChangedServiceCollection : IServiceCollection
{
private readonly IServiceCollection _services;
 
/// <summary>
/// Fired when a descriptor is added to the collection
/// </summary>
public event EventHandler<ServiceDescriptor> ServiceAdded;
 
/// <summary>
/// Initializes a new instance of the <see cref="NotifyChangedServiceCollection"/> class.
/// </summary>
/// <param name="services">The services.</param>
public NotifyChangedServiceCollection(IServiceCollection services)
{
_services = services;
}
 
/// <summary>
/// Get the value at index
/// Setting is not supported
/// </summary>
public ServiceDescriptor this[int index]
{
get => _services[index];
set => throw new NotSupportedException("Inserting services in collection is not supported");
}
 
/// <summary>
/// Count of services in the collection
/// </summary>
public int Count { get => _services.Count; }
 
/// <summary>
/// Obviously not
/// </summary>
public bool IsReadOnly { get => false; }
 
/// <summary>
/// Adding a service descriptor will fire the ServiceAdded event
/// </summary>
/// <param name="item"></param>
public void Add(ServiceDescriptor item)
{
_services.Add(item);
ServiceAdded?.Invoke(this, item);
}
 
/// <summary>
/// Clear the collection is not supported
/// </summary>
public void Clear() => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// True is the item exists in the collection
/// </summary>
public bool Contains(ServiceDescriptor item) => _services.Contains(item);
 
/// <summary>
/// Copy items to array of service descriptors
/// </summary>
public void CopyTo(ServiceDescriptor[] array, int arrayIndex) => _services.CopyTo(array, arrayIndex);
 
/// <summary>
/// Enumerator for service descriptors
/// </summary>
public IEnumerator<ServiceDescriptor> GetEnumerator() => _services.GetEnumerator();
 
/// <summary>
/// Index of item in the list
/// </summary>
public int IndexOf(ServiceDescriptor item) => _services.IndexOf(item);
 
/// <summary>
/// Inserting is not supported
/// </summary>
public void Insert(int index, ServiceDescriptor item) => throw new NotSupportedException("Inserting services in collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public bool Remove(ServiceDescriptor item) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public void RemoveAt(int index) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Enumerator for objects
/// </summary>
IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable)_services).GetEnumerator();
}
}

We already know how to load types in .NET Framework and we know what they say we should use in .NET Core. But what about Standard? Is that a trick question? Sort of. Right now we have two .NET Standard and three .NET Core versions, albeit .NET Core 3 is still in preview. The signature of AssemblyLoadContext and the way it is used have changed dramatically: Core 3 enables context unloading, but Standard 2 does not. So you are either forced to build your library for Core 3, or you have to give up unloading contexts, or you resort to reflection, which is not robust and probably will not be needed once Standard 3 possibly arrives.

But there are subtler issues at work. One of them is that, at least with .NET Core 3 Preview6, when you reference System.Runtime.Loader in a Standard library, so you can access AssemblyLoadContext, you get conflicts between the System.Runtime you are using and the one referenced by System.Runtime.Loader. The only solution is to use the System.Runtime.Loader NuGet package, but that returns you to the Standard 2 version of AssemblyLoadContext, even if the library version is higher!

The setup is this: I have an ITestInterface interface which resides in TestInterfaceLibrary.dll. I also have a TestImplementation class that can be found in TestImplementationLibrary.dll and implements ITestInterface. My program either does not reference any of these libraries or it only references the interface one. The task is to load both these types and then simply convert one instance of TestImplementation to ITestInterface. Simple test would be loading the types and then expecting interfaceType.IsAssignableFrom(implementationType) to be true.
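
For reference, the two libraries contain something like this:
// TestInterfaceLibrary.dll
namespace TestInterfaceLibrary
{
    public interface ITestInterface { }
}

// TestImplementationLibrary.dll (references TestInterfaceLibrary)
namespace TestImplementationLibrary
{
    public class TestImplementation : TestInterfaceLibrary.ITestInterface { }
}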

Core 3


Let's first try the Core 3 way:
var context = new AssemblyLoadContext("testContext", true);
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface");
Console.WriteLine(interfaceType?.ToString()??"interface type not loaded");
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
Console.WriteLine("implementation implements interface: "+interfaceType.IsAssignableFrom(implementationType));
 
context.Unload();
The output is:
TestInterfaceLibrary.ITestInterface
TestImplementationLibrary.TestImplementation
implementation implements interface: True

It works! But only because the interface assembly is loaded first. If you try to load just the implementation type first, it comes up as null. No exception is thrown unless you get all the assembly types or specify the throwOnError parameter in GetType. The exception is "System.IO.FileNotFoundException: 'Could not load file or assembly 'TestInterfaceLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'".
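
In other words, reusing the context and paths from the snippet above:
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
// returns null: the dependency on TestInterfaceLibrary cannot be resolved
var silent = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
// throws the FileNotFoundException above
var loud = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);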

In order to solve this, we need to use the Resolving event of the AssemblyLoadContext class. Let's try this:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
 
private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}

And now it works again, by assuming the assembly name is the same as the assembly file name and that it is found in the same place.

But... if we try this in different contexts:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
context.Resolving -= Context_Resolving;
context.Unload();
context = new AssemblyLoadContext("testContext2", true);
context.Resolving += Context_Resolving;
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
the output will show
implementation implements interface: False

This means that if we want to encapsulate this in a TypeLoader class or something, we cannot use different contexts for dynamically loading types. Even if we had one context that we would unload in order to refresh all the types, it could still be different from the main context, in case the interface is loaded twice or referenced directly in the project.

For example, if you reference TestInterfaceLibrary directly and you load TestImplementation dynamically, it will work as expected, because ITestInterface is resolved automatically from the main context. However, if you load ITestInterface dynamically too, it will be a different type from the referenced ITestInterface, even though they apparently have the same name, full name and assembly qualified name! So it kind of makes sense not to load a type twice. Is this where context unloading comes in? Not really. Let's define a method that counts the number of types with a certain name in the current domain:
private static int CountTypes(string typeName)
{
return AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
.Count();
}

And now let's run this code:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var referencedInterfaceType = typeof(ITestInterface);
Console.WriteLine(referencedInterfaceType?.ToString() ?? "interface type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine($"Types are the same: {interfaceType==referencedInterfaceType}");
 
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");
 
context.Resolving -= Context_Resolving;
context.Unload();
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");

There is the referenced type, then we load the type dynamically again, inside a new context. We count the types loaded in the current domain, we unload the context, we count the types again. The result is
TestInterfaceLibrary.ITestInterface
TestInterfaceLibrary.ITestInterface
Types are the same: False
Number of types with name TestInterfaceLibrary.ITestInterface: 2
Number of types with name TestInterfaceLibrary.ITestInterface: 2
The types are always 2!

Bottom line, even when unloading the AssemblyLoadContext, the types used are not unloaded and trying to find a type by name will result in duplicates.

OK, so let's just agree that types with the same name, once loaded, should remain there and no other type with the same name should be loaded. Let's try to incorporate this into a TypeLoader class:
public class TypeLoader : IDisposable
{
private readonly AssemblyLoadContext _context;
 
public TypeLoader()
{
_context = new AssemblyLoadContext(GetType().FullName, true);
_context.Resolving += Context_Resolving;
}
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = _context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
 
public void Dispose()
{
_context.Resolving -= Context_Resolving;
_context.Unload();
}
}

The code in our test is now much clearer:
var interfaceAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestInterfaceLibrary.dll");
var implementationAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestImplementationLibrary.dll");
var interfaceTypeName = "TestInterfaceLibrary.ITestInterface";
var implementationTypeName = "TestImplementationLibrary.TestImplementation";
 
using (var loader = new TypeLoader())
{
Type referencedType = typeof(TestInterfaceLibrary.ITestInterface);
var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
Console.WriteLine($@"
referenced type: {referencedType}
interface type: {interfaceType}
implementation type: {implementationType}
referenced and loaded interfaces are the same: {referencedType == interfaceType}
interface implemented: {interfaceType.IsAssignableFrom(implementationType)}"
);
}
and the result is
referenced type: TestInterfaceLibrary.ITestInterface
interface type: TestInterfaceLibrary.ITestInterface
implementation type: TestImplementationLibrary.TestImplementation
referenced and loaded interfaces are the same: True
interface implemented: True

But we still use Unload. Maybe it will work some day as I want it to work, but until then, why not get rid of Unload and make TypeLoader a class in a Standard 2 library?

Standard 2


For this I will create a new Standard 2 library project and then reference it in our test Core 3 project. Then I will move the TypeLoader class in the library project.

The errors in the library project are related to not knowing what an AssemblyLoadContext is, therefore the first solution is to reference System.Runtime.Loader from the framework. I get the immediate error "Assembly 'System.Runtime.Loader' with identity 'System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' uses 'System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version than referenced assembly 'System.Runtime' with identity 'System.Runtime, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".

Solution 2: load the System.Runtime.Loader NuGet package, which at the time of writing this, is version 4.3.0. The error is now gone, but several things are immediately apparent:
  1. the Unload method doesn't exist anymore
  2. the constructor doesn't receive a name and a bool anymore
  3. AssemblyLoadContext is now abstract

In order to solve this, I am creating a DynamicAssemblyLoadContext class that inherits from AssemblyLoadContext, returns null from the Load method override, and gets an Unload method and a constructor with a string and a bool that don't do anything. And it works again. The updated TypeLoader class is now:
public class TypeLoader : IDisposable
{
private readonly DynamicAssemblyLoadContext _context;
 
public TypeLoader()
{
_context = new DynamicAssemblyLoadContext(GetType().FullName, true);
_context.Resolving += Context_Resolving;
}
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = _context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
 
public void Dispose()
{
_context.Resolving -= Context_Resolving;
_context.Unload();
}
 
 
private class DynamicAssemblyLoadContext : AssemblyLoadContext
{
public DynamicAssemblyLoadContext(string name, bool isCollectible)
{
}
 
protected override Assembly Load(AssemblyName assemblyName)
{
return null;
}
 
public void Unload()
{
}
}
}

The safe way


The code above has an issue, though. If the interface type is dynamically loaded before its referenced type is used, this fails again. This is the case when you use dependency injection: you dynamically load the types in order to register the implementation-to-interface relationship, but then, when you ask for a resolution of the interface type, now referenced by the main project, you get another type that just happens to have the same name.
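
Here is a hedged sketch of that scenario, reusing the paths and type names from the earlier test code and assuming some IServiceCollection called services:
using (var loader = new TypeLoader())
{
    var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
    var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
    // register the dynamically loaded pair into the container
    services.AddSingleton(interfaceType, implementationType);
}
// later, a constructor parameter typed as the *referenced* TestInterfaceLibrary.ITestInterface
// asks the container for a type with the same name but a different identity, and resolution fails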

The safe way, then, considering that we don't really use Unload and we don't count on it ever working the way we want: why not use the default context, the one where everything gets loaded, and be done with it? When you do that, the code becomes a little uglier, but it works in all situations.

Final version.
public class TypeLoader
{
private readonly object _resolutionLock = new object();
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var context = AssemblyLoadContext.Default;
lock (_resolutionLock)
{
context.Resolving += Context_Resolving;
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = context.LoadFromAssemblyPath(assemblyPath);
 
type = assembly.GetType(typeName, true);
context.Resolving -= Context_Resolving;
return type;
}
}
}

You just gotta hate the adding and removing of the event handler inside a lock, right? Well, if you find a better solution, let me know.