One of the best things you can do in software is unit testing. There are tons of articles, including mine, explaining why people should take the time to write code in a way that makes it easy to split into independent parts that can then be tested automatically. The painful part comes afterwards, when you've written your software, put it in production and are furiously working on the second iteration. Unit tests are traditionally great for refactoring, but when you change existing code you not only need to "fix" the tests, you also need to cover the new scenarios and allow for changes and expansions of the existing ones.

Long story short, you will not be able to be confident that your test suite covers the code as it changes until you can compute something called Code Coverage: the amount of your code that is traversed during unit tests. Mind you, this is not a measure of how much of your functionality is covered, only the lines of code. Visual Studio has this feature built in, but they did a dirty deed and restricted it to the Enterprise edition. I am here to tell you that in .NET Core (and possibly for Framework, too, but I haven't tested it) it's very easy to get all that functionality and more, even in the free version of Visual Studio.

These are the steps you have to take:

  • Add coverlet.msbuild NuGet to your unit tests project
  • Add ReportGenerator NuGet to your unit tests project (a sketch of both package references follows this list)
  • Write a batch file that looks like

    @ECHO OFF
    REM You need to add references to the nuget packages ReportGenerator and coverlet.msbuild
    IF NOT EXIST "..\..\packages\reportgenerator" (
      ECHO You need to install the ReportGenerator by Daniel Palme nuget
      EXIT 1
    )
    IF NOT EXIST "..\..\packages\coverlet.msbuild" (
      ECHO You need to install the coverlet.msbuild by tonerdo nuget
      EXIT 1
    )
    IF EXIST "bin\CoverageReport" RMDIR /Q /S "bin\CoverageReport"
    IF EXIST "bin\coverage.opencover.xml" DEL /F "bin\coverage.opencover.xml"
    dotnet test "Primus.Core.UnitTests.csproj" --collect:"code coverage" /p:CollectCoverage=true /p:CoverletOutputFormat=\"opencover\" /p:CoverletOutput=\"bin/coverage\"
    for /f "tokens=*" %%a in ('dir ..\..\packages\reportgenerator /b /od') do set newest=%%a
    "..\..\packages\reportgenerator\%newest%\tools\netcoreapp3.0\ReportGenerator.exe" "-reports:bin\coverage.opencover.xml" "-targetdir:bin\CoverageReport" "-assemblyfilters:-*.DAL*" "-filefilters:-*ServiceCollectionExtensions.cs"
    start "Primus Plumbing Code Coverage Report" "bin\CoverageReport\index.htm"


    and save it in your unit test project folder
  • Optional: follow this answer on StackOverflow to be able to see the coverage directly in Visual Studio
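For the first two steps, here is a sketch of the package references as they might appear in the unit test .csproj (the version numbers are only examples from around the time of writing; adapt them to whatever is current):

<ItemGroup>
  <PackageReference Include="coverlet.msbuild" Version="2.6.3">
    <PrivateAssets>all</PrivateAssets>
  </PackageReference>
  <PackageReference Include="ReportGenerator" Version="4.3.0" />
</ItemGroup>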


Notes about the batch file:

  • newest is set by the for loop to the latest installed version of ReportGenerator; if that doesn't work, replace %newest% with whatever version you have (ex: 4.3.0)
  • the DAL filter tells the report to ignore projects with DAL in their name. You shouldn't have to unit test your data access layer.
  • the ServiceCollectionExtensions.cs filter is for files that should be ignored; extension methods, for example, rarely need to be unit tested


Running the batch should start dotnet test and save the results both in coverage.opencover.xml and in some files in the TestResults folder. Then ReportGenerator will generate an HTML coverage report that gets opened at the end. However, if you followed the optional answer, you are now also able to open the files in TestResults and see which parts of your code are covered directly in the Visual Studio editor! I found this less useful than the HTML report, but some of my colleagues liked the option.

I hope this helps you.

There is one piece of basic functionality in all major programming languages: throwing exceptions. When code determines that something is wrong, it throws an exception of a certain type, with extra messages and values. This solves the problem of breaking an execution flow that has entered an invalid state while staying aware of what happened. Traditionally, errors are then caught in higher levels of the application and decisions are made: ignore the error, log it, wrap it in another exception with extra information, rethrow it as-is after some cleanup, etc.

But as the joke goes, now you have two problems. When developing .NET code you have to ask yourself what type of exception to throw, what data to add to it, what will catch it above and how it will interpret what you sent. Some people create a different exception type for each little issue, with the multiple catch(SpecificExceptionType) functionality in mind, so they can choose later what to do at a higher level. Others try to use the out-of-the-box Microsoft exception types, a clear case of stuffing square pegs into round holes. Inevitably someone will just give up in frustration, throw new Exception("Something went wrong!"); and be done with it. And recently, in order to solve the problems with the above approaches, I envisioned (with full documentation and implementation) a dependency injected IExceptionFactory, which I thought was the greatest invention since fire, only to discover it was so unwieldy to use that I despaired and deleted the entire thing.

Discussion


Discussing this deceptively complicated problem with friends, I think I found a solution that covers all major scenarios. But before presenting it, I can just feel that some of you thought "Hey! I am doing that and there is nothing wrong with it!", so let's discuss what's wrong with the approaches above. If you want to skip this, go to The Solution section.

Multiple Exception implementations


Extending Exception is not simple. There are four constructors and I dare you to say off the top of your head what Exception(SerializationInfo, StreamingContext) is and where it is used. There are numerous code analyzers that spew a lot of warnings about how exceptions should be implemented correctly. That's another story: here is a nice article about it. More importantly, doing all this work for every possible exception takes time, effort and duplicated code. In the end, you will get to the next scenario anyway, just with a larger set of hole shapes.

Also, the try/catch block has been updated in C# 6.0 with the catch... when syntax, so you can have multiple catch blocks with the same exception type and different conditions.
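A quick illustration, with a hypothetical ProcessOrder method:

try
{
    ProcessOrder(order);
}
catch (ArgumentException ex) when (ex.ParamName == "id")
{
    // handle a bad id one way
}
catch (ArgumentException ex) when (ex.ParamName == "quantity")
{
    // same exception type, different condition, different handling
}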

Using existing exception types


If you get an empty string from a method and you expected something there, you should throw an exception, but which one? The value is not null, so ArgumentNullException is not really applicable. Is ArgumentOutOfRangeException better? I mean, the empty string is not in the range of accepted values for the parameter, so maybe. Or is it just ArgumentException? You decide on ArgumentException and you smugly add the name of the variable with nameof(yourLocalVariable), because you are knowledgeable in the ways of code... and you get a warning that yourLocalVariable is not the name of any parameter of the method you are in. That's right: the value was invalid, but ArgumentExceptions are meant specifically for the current method's arguments.

You don't want to use multiple custom exception types, because you've read this post and abandoned it halfway, but you agreed with the first point. Or maybe you are just lazy. You ignore the warning and use ArgumentException anyway. Later on you are reading the logs, trying to remember where in the code you used yourLocalVariable and why it matters that it's empty.

Admit it: the Microsoft exception types were not really meant to help you throw exceptions; they are there for Microsoft's internal code and use. Most of the few cases where the exception type is spot on are probably not what the makers of that exception type envisioned when they created it.

Using Exception and a meaningful message


You are done with pointless standards. You just use throw new Exception($"This really specific thing happened with variable {yourVariable}"); and let God sort them out! You can use catch... when to look into the string, parse it for information and make decisions on it. It actually works and you're rightly satisfied with yourself. You've shown them all how it's done. Boom! And then a junior developer comes along, decides your wording is not quite right for a native English speaker and changes the string. Suddenly everything literally goes boom, as exceptions get where they shouldn't and flows change unexpectedly.

After warning the entire team to never change the exception strings, as they are used in the functionality of the application, and even considering a resource system for exception strings so they can be used for decision making regardless of content (and inevitably hating the way you need to store format strings and remember what value goes where), a member of the UI team comes and says "Hey, I need to show the user the reason the flow failed. And I need to translate it to their language". And you despair.

Using a single type of Exception that has everything you need


A slight variation on all of the points above, this involves creating only one type of custom exception and adding to it whatever is needed to determine flow, string resource ids, etc. This is actually a pretty decent idea, as it puts the control back into the developer's hands. Why depend on Microsoft types or string parsing? Context is for kings and you are a king amongst kings.

However, whenever you want to change something, like adding a value to an enum that defines the type of the exception, or changing the way a certain exception is handled, you have to change all the code that uses that exception. It's a single point of use, but not a single point of change.

Moreover, other devs in the team think it is cumbersome to work with. The exception type is stored in the basest of libraries and they all want to add something to it. It becomes bloated and soon enough it creeps into a huge mess that is handled differently in different code and is not easy to maintain, understand or use.

Another layer of indirection


So why not use an exception factory? Everything else in your code works on the premise that "if you want something, you inject an ISomething in the constructor and worry about the implementation never". Why not inject IExceptionFactory everywhere you need exceptions, then do something magic with it? The result of the operation is determined by the implementation, too. If you want another implementation, you just inject something else. It's genius!

Only then you have to use it. How do you inject the factory into static methods, extension methods, stuff so basic that it is used as utility classes all over the code? Now you have to add an extra dependency to everything that uses those classes. Everybody hates you: they hate having to add an extra constructor parameter and an extra field, then throw exceptions with something like throw _exceptionFactory.New("Something went wrong!", new ParameterEmptyExceptionData(nameof(localVariable), localVariable)); while also taking a dependency on the logging library that the factory uses to log generated exceptions.

Oh, it's just crap!

The Solution


Let's start from an existing piece of code: throw new ArgumentException($"{nameof(localVariable)} is null or empty");. Optimally, we would just want to change this code slightly to solve several issues:
  • formalize that it is an argument empty exception
  • make it clear it's localVariable that was empty
  • maybe add the actual value of localVariable
  • declare the context in which the exception was thrown
  • declare the message that should be used in the exception
  • throw a meaningful exception type
  • decide if this exception should be ignored or thrown
  • log the exception
  • minimize developer effort
  • minimize dependencies
  • use a solution that is closed for modification, but open for extension

A tall order, especially since we've already decided that we don't want to use the factory idea. Some of the issues above are also non-issues in most cases. What if I don't care about the language of the message or whether it is a resource or not, since it's something used internally in our code? localVariable is empty, I don't need its value. The context is clear from the stack trace. The exception is meaningful enough as an ArgumentException. In other words, we need to solve one more issue: all of the issues above must be optional.

The software pattern that covers this scenario, used extensively by library makers, is the builder pattern. For the sake of exploration, let's see how this would work:
var exception = new ArgumentException($"{nameof(localVariable)} is null or empty");
var builder = new ExceptionBuilder(exception, logger)
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .TryToIgnore();
throw builder.Build(); // this also logs and returns an exception of a type the builder decides relevant

This looks promising, considering that every method above is optional except the builder instantiation and the Build at the end, but it's still too close to the factory idea above. Why use new in a project that is based on dependency injection? Why call .Build() everywhere you need to throw an exception? Where does the logger come from?

So here is the solution I am proposing, using several resources we have at our disposal in C#:
  • the Exception type has a Data Dictionary property for additional data
  • extension methods can be defined in multiple places for the same type
  • with extension methods there is no need for a builder instance, or even for a final Build call, when throwing an exception

The code will look like this:
throw new ArgumentException($"{nameof(localVariable)} is null or empty")
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .TryToIgnore()
    .Build();

Each method above is an extension method on the Exception type. Any of them can decide to return the original object or a different one, but they all return an instance that extends Exception. The information-attaching methods use the Data property to hold the information. The Build method is designed to take all the information attached to an Exception and perform more complex actions, like logging or constructing a completely different object to be returned; however, that step is also optional.
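On the catching side, the attached information can drive decisions without any string parsing. A minimal sketch, assuming a hypothetical SaveTheWorld method that throws exceptions built this way:

try
{
    SaveTheWorld();
}
catch (Exception ex) when (ex.ShouldBeIgnored())
{
    // the thrower declared this exception as non-breaking, so the flow can resume
}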

And here is the source for an ExceptionBuilder static class that acts both as a container for the more common extension methods and as the point where dependencies are registered:
/// <summary>
/// Add data to exceptions, then build a custom exception
/// using registered <see cref="IExceptionBuildHandler"/> and optional logging
/// </summary>
public static class ExceptionBuilder
{
    private const string CustomPrefix = "Custom.";

    private static readonly List<IExceptionBuildHandler> _handlers = new List<IExceptionBuildHandler>();
    private static ICustomLogger _logger;

    #region Extended Data

    /// <summary>
    /// Attaches a custom <see cref="Error"/> to the exception
    /// </summary>
    /// <param name="ex"></param>
    /// <param name="error"></param>
    /// <returns></returns>
    public static Exception SetError(this Exception ex, Error error)
    {
        _logger?.LogTrace($"Setting error {error} in exception {ex}");
        return ex.SetData(nameof(Error), error);
    }

    /// <summary>
    /// Attaches an object as the exception origin to the exception
    /// </summary>
    /// <param name="ex"></param>
    /// <param name="origin"></param>
    /// <returns></returns>
    public static Exception SetOrigin(this Exception ex, object origin)
    {
        _logger?.LogTrace($"Setting exception origin {origin} in exception {ex}");
        return ex.SetData("origin", origin);
    }

    /// <summary>
    /// Attaches a name parameter to the exception
    /// </summary>
    /// <param name="ex"></param>
    /// <param name="name"></param>
    /// <returns></returns>
    public static Exception SetName(this Exception ex, string name)
    {
        _logger?.LogTrace($"Setting exception name {name} in exception {ex}");
        return ex.SetData("name", name);
    }

    /// <summary>
    /// Declare an exception as not breaking the execution flow.
    /// Implement catch blocks for exceptions like this to support this scenario.
    /// </summary>
    /// <param name="ex"></param>
    /// <returns></returns>
    public static Exception TryToIgnore(this Exception ex)
    {
        _logger?.LogTrace($"Declaring exception {ex} as not breaking execution flow");
        return ex.SetData("tryToIgnore", true);
    }

    /// <summary>
    /// True if this exception is declared as not breaking execution flow
    /// </summary>
    /// <param name="ex"></param>
    /// <returns></returns>
    public static bool ShouldBeIgnored(this Exception ex)
    {
        return object.Equals(ex.GetData("tryToIgnore"), true);
    }

    /// <summary>
    /// Attaches a type to the exception
    /// </summary>
    /// <param name="ex"></param>
    /// <param name="type"></param>
    /// <returns></returns>
    public static Exception AddType(this Exception ex, Type type)
    {
        _logger?.LogTrace($"Attaching type {type} in exception {ex}");
        return ex.AddData("types", type);
    }

    /// <summary>
    /// Attaches a value to the exception
    /// </summary>
    /// <param name="ex"></param>
    /// <param name="value"></param>
    /// <returns></returns>
    public static Exception AddValue(this Exception ex, object value)
    {
        _logger?.LogTrace($"Attaching value {value} in exception {ex}");
        return ex.AddData("values", value);
    }

    /// <summary>
    /// Gets data from exception based on key.
    /// Returns null if not found.
    /// </summary>
    /// <param name="ex"></param>
    /// <param name="key"></param>
    /// <returns></returns>
    public static object GetData(this Exception ex, string key)
    {
        key = $"{CustomPrefix}{key}";
        if (ex.Data?.Contains(key) != true)
        {
            return null;
        }
        return ex.Data[key];
    }

    /// <summary>
    /// Attaches an object to exception data replacing any previous one with the same key
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="ex"></param>
    /// <param name="key"></param>
    /// <param name="value"></param>
    /// <returns></returns>
    public static Exception SetData<T>(this Exception ex, string key, T value)
    {
        key = $"{CustomPrefix}{key}";
        var result = ex.AsCustomException();
        // set the data on the returned exception, which may be a wrapper around the original one
        result.Data[key] = value;
        return result;
    }

    /// <summary>
    /// Adds an object to a list that resides in the exception data at the given key
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="ex"></param>
    /// <param name="key"></param>
    /// <param name="value"></param>
    /// <returns></returns>
    public static Exception AddData<T>(this Exception ex, string key, T value)
    {
        key = $"{CustomPrefix}{key}";
        var result = ex.AsCustomException();
        var alreadyExists = result.Data.Contains(key);
        if (!alreadyExists || !(result.Data[key] is List<T> list))
        {
            if (alreadyExists)
            {
                _logger?.LogWarning($"Overwriting data {result.Data[key]} with key {key} with an empty list of {typeof(T).Name} in exception {result}.");
                _logger?.LogWarning($"Are you using Add* and Set* builder methods at the same time or adding objects of different types?");
            }
            list = new List<T>();
            result.Data[key] = list;
        }
        lock (list)
        {
            list.Add(value);
        }
        return result;
    }

    #endregion Extended Data

    /// <summary>
    /// Builds the exception from the Data and the provided base exception
    /// </summary>
    /// <param name="ex"></param>
    /// <returns></returns>
    public static CustomException Build(this Exception ex)
    {
        var customException = ex.AsCustomException();
        lock (_handlers)
        {
            // the last registered handler takes precedence; the first one to return non-null wins
            for (var index = _handlers.Count - 1; index >= 0; index--)
            {
                var handler = _handlers[index];
                var result = handler.Build(customException, _logger);
                if (result != null)
                {
                    customException = result.AsCustomException();
                    break;
                }
            }
        }
        _logger?.LogTrace($"Built exception {customException}");
        return customException;
    }

    #region Registration

    /// <summary>
    /// Register an <see cref="IExceptionBuildHandler"/>. The last handler to be added will take precedence.
    /// </summary>
    /// <param name="handler"></param>
    public static void RegisterBuildHandler(IExceptionBuildHandler handler)
    {
        lock (_handlers)
        {
            _logger?.LogTrace($"Registering exception build handler {handler}");
            _handlers.Add(handler);
        }
    }

    /// <summary>
    /// Register a logger
    /// </summary>
    /// <param name="logger"></param>
    public static void RegisterLogger(ICustomLogger logger)
    {
        _logger = logger;
        _logger?.LogTrace($"Registered logger in the exception builder");
    }

    #endregion Registration

    private static CustomException AsCustomException(this Exception ex)
    {
        return ex is CustomException customException
            ? customException
            : new CustomException(ex);
    }
}


Note a few things:
  • All of the extension methods return the same object, with the exception of Build, which returns a CustomException that may, for example, write the extra Data values in its ToString
  • The external dependencies are registered via methods. In the class I use, I even replaced those methods with a RegisterServiceProvider method that sets up everything it needs, including the list of handlers, from dependency injection
  • One doesn't need to call Build; every extension method just naturally continues normal existing code like throw new WhateverException();
  • When using Build, though, you can change the exception object that is being thrown just by injecting another implementation of IExceptionBuildHandler
  • In my project, I've devised a method of injecting code via a text configuration file. That means you can change what happens when an exception is being thrown without recompiling your existing code.
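The IExceptionBuildHandler interface itself is not shown here; judging by how Build uses it, a minimal sketch might be:

public interface IExceptionBuildHandler
{
    // return the exception to be thrown, or null to let the other registered handlers try
    Exception Build(CustomException exception, ICustomLogger logger);
}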

Finally, there is one design decision that I am not sure about: to use throw exception, or exception.Throw()? The former is natural to all devs, but it needs special catch blocks to be able to resume execution; whatever the builder returns, it will always throw something. The latter needs a change in all code that throws exceptions, but it could handle the decision of whether to throw anything at all without a recompile.

I lean toward the first, just because changes in an existing code base can be done incrementally and the code can be understood by all devs, regardless of seniority.

I find this to be a wonderful idea, clear, useful and flexible. I hope you do, too!

It all started with the source code for NonCapturingTimer, a static factory class that creates a System.Threading.Timer without capturing the execution context, described as "A convenience API for interacting with System.Threading.Timer in a way that doesn't capture the ExecutionContext. We should be using this (or equivalent) everywhere we use timers to avoid rooting any values stored in asynclocals.". What did that even mean?
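The gist of the class, judging by the sources, is roughly this: suppress ExecutionContext flow around the timer creation, then restore it:

public static Timer CreateNonCapturingTimer(TimerCallback callback, object state, TimeSpan dueTime, TimeSpan period)
{
    bool restoreFlow = false;
    try
    {
        if (!ExecutionContext.IsFlowSuppressed())
        {
            // don't capture the current ExecutionContext and its AsyncLocals into the timer
            ExecutionContext.SuppressFlow();
            restoreFlow = true;
        }
        return new Timer(callback, state, dueTime, period);
    }
    finally
    {
        // restore the current ExecutionContext
        if (restoreFlow)
        {
            ExecutionContext.RestoreFlow();
        }
    }
}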

An issue opened by David Fowler sheds some light on this: "Any lazy activation of timers will capture the ExecutionContext. Combining this with a lazy initialization of the HttpClient and the handler graph may end up holding onto AsyncLocals for longer than expected. This could end up looking like a memory leak". This follows a Twitter thread from Fowler declaring AsyncLocal as evil.

There are also multiple issues that have crystallized into a proposal for a future version of .NET: "Timer static Create methods that make rooting behavior explicit".

And if you look at the ASP.Net sources on GitHub, they use the class mostly for one-time timer calls and periodic cleanup calls. I should mention that Ben Adams from Microsoft calls this way of creating timers ugly.

I don't have the time to go down further on this rabbit hole, but maybe people will find answers here when looking into this and comment on their findings.


Update:

The problem described here was somewhat misleading. The issue was that IOptionsSnapshot was registered as Scoped and I was getting the service from the root IServiceProvider. The solution is to call provider.CreateScope() and use the resulting scope's provider with ActivatorUtilities. Even better, create a scope, then use it to get an instance of a business class which will now support Scoped services as well as Transient ones, just like a Controller would.
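A minimal sketch of that, assuming MyObject is the business class and _serviceProvider is the root provider:

using (var scope = _serviceProvider.CreateScope())
{
    // resolve through the scope's provider, so Scoped services like IOptionsSnapshot<T> work
    var myObject = ActivatorUtilities.CreateInstance<MyObject>(scope.ServiceProvider);
    myObject.DoWork(); // hypothetical method; only use the object while the scope is alive
}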

Warning, though: you need to dispose the scope, but you must make sure you don't use any service created in it outside the scope (after disposing).

I guess another solution would be to somehow register IOptionsSnapshot<> as Transient, but I haven't tried it.

And now for the original post

I was trying to create an instance of an object via the service provider, in order to resolve its dependencies, using ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider), and I was getting the exception:

System.InvalidOperationException
HResult=0x80131509
Message=Cannot resolve scoped service 'Microsoft.Extensions.Options.IOptionsSnapshot`1[ExternalConfiguration]' from root provider.


My object was receiving a parameter of type IOptionsSnapshot<ExternalConfiguration> and, upon further investigation, my service provider (which came from the dependency injection resolution for IServiceProvider) was actually a ServiceProviderEngineScope which just refused to resolve any IOptionsSnapshot! Funny enough, if I replaced IOptionsSnapshot with IOptionsMonitor, which in my mind is a heavier interface, it worked without issues. Further still, the problem appeared only inside an IHostedService (a BackgroundService hooked up with services.AddHostedService<T>); if I wrote the same code in a controller action, for instance, it worked fine.
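For clarity, a minimal sketch of the failing setup, assuming ExternalConfiguration is a plain options POCO bound from configuration:

public class MyObject
{
    private readonly ExternalConfiguration _config;

    public MyObject(IOptionsSnapshot<ExternalConfiguration> options)
    {
        _config = options.Value;
    }
}

// inside a BackgroundService, where _serviceProvider is the root provider:
var myObject = ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider); // throws the exception above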

The .NET Core 2+ implementation of IOptionsSnapshot<T> is OptionsManager<T>. If I manually resolved an instance of OptionsManager before my object, then added it as a parameter, the code worked:

var optionsSnapshot = ActivatorUtilities.CreateInstance<OptionsManager<ExternalConfiguration>>(_serviceProvider);
var myObject = ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, optionsSnapshot);


So, specifically, the issue is that in .NET Core the root service provider cannot resolve IOptionsSnapshot interfaces in worker services. You can still do it manually, but I suspect it is a bug, since there is no problem using an IOptionsMonitor instead of IOptionsSnapshot.

A possible solution is to use an additional service provider only for IOptionsSnapshot. Warning: this will not work in the general situation where the dependencies from the additional service provider also need parameters that would be found in the original service provider:

// initialization code
var serviceCollection = new ServiceCollection();
serviceCollection.AddSingleton(
    typeof(IOptionsSnapshot<>),
    typeof(OptionsManager<>)
);
serviceCollection.AddSingleton(
    typeof(IOptionsFactory<>),
    typeof(OptionsFactory<>)
);
_additionalServiceProvider = serviceCollection.BuildServiceProvider();

// resolution code
var constructor = typeof(MyObject).GetConstructors()
    .Where(ci => ci.IsPublic)
    .Single();
var args = constructor.GetParameters()
    .Select(p =>
    {
        try
        {
            return _serviceProvider.GetService(p.ParameterType);
        }
        catch
        {
            return _additionalServiceProvider.GetService(p.ParameterType);
        }
    })
    .ToArray();
return ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, args);

.NET Core comes with its own dependency injection engine, separated in the Microsoft.Extensions.DependencyInjection package, and ASP.Net Core uses it by default. In a very simplistic description, it uses an IServiceCollection to add services to, then builds an IServiceProvider from that list, an interface which returns an implementation based on a type, or null if none is found. Any change in the list of services after the provider is built is not supported. There are situations, though, where you want to add new services, one of them being dynamically resolving new types.

Therefore I set out to create a custom implementation of IServiceProvider that fixes that, using the mechanisms already existing in .NET Core. Note that this is just something I did out of frustration, "because I could". Most people choose to replace the entire IServiceProvider with an implementation that uses some other DI container, like StructureMap.

The first attempt was proxying a normal ServiceProvider and keeping a reference to the collection. Then I would just change the collection and recreate the service provider. That has two major problems. One is that the previous service provider is not disposed. If you dispose it, you automatically dispose all services already resolved from it, and if you do not, you keep references to the created services. The second, and more dire, is that recreating the service provider will generate new instances of services, even those registered as singletons. That is not good.

I thought of a solution:
  1. keep a list of service providers, instead of just one
  2. use a custom service collection which will let us know when changes occurred
  3. whenever new services are added, add them to a list of new services
  4. whenever a service is resolved, go through the list of providers
  5. if any provider returns a value, provide it
  6. else if any new service create a new provider from the new services and add it to the list
  7. else return null
  8. when disposing, dispose all providers in the list

This works great, except that the newly added providers are separate from the existing ones, so when you try to resolve a type with the second provider and that type has in its constructor a type that was registered in the first provider, you get nothing.

One solution would be to add all services to the second provider, not only the new ones, but then we get back to the original issue of the singletons, only a bit more subtle:
  1. register type1 as a singleton
  2. get an instance of type1 (1)
  3. build the provider
  4. get an instance of type1 (2)
  5. register type2 which receives a type1 in its constructor
  6. get an instance of type2
  7. now, type1 (1) is the same as type1 (2), because it was resolved by the same provider
  8. type1 is different from type2.type1, though, because that was resolved as a different singleton by the second provider in the list

One solution would be to add all previous services as factories, then. For IType1, instead of returning typeof(Type1), return a factory method that resolves the value with our system. And it works... until it reaches a definition (like IOptions) that was registered as an open generic: services.AddSingleton(typeof(IType3<>), typeof(Type3<>)). In the case of open generics, you cannot use a descriptor with a factory, because the factory returns an object, regardless of the generic type argument used. It would not do to return a Type3<Banana> for a requested type of IType3<int>.

So, final version is this:
  1. keep a list of service providers, instead of just one
  2. keep a dictionary of the last object resolved for a type
  3. use a custom service collection which will let us know when changes occurred
  4. whenever new services are added, add them to a list of new services
  5. whenever a service is resolved, go through the list of providers
  6. if any provider returns a value, return it
  7. if no new services registered return null
  8. create a new provider from all the services like this:
    • if it's a new registration, use it as is
    • if it's an open generic definition type:
      • if singleton, add first all the existing resolutions for types that are defined by it
      • use the original descriptor afterwards
    • use a registration that proxies to the advanced resolution mechanism we created
  9. when disposing, dispose all providers in the list

This implementation also has a flaw: if a dependency whose descriptor is an open generic definition was resolved as a singleton by an additional service provider, and is then requested directly and can be resolved by a previous provider, you will get a different instance. Here is the scenario:
  1. the initial provider knows to map I<> to M<>
  2. you add a new singleton mapping from X to Y and Y gets a constructor parameter of type I<Z>
  3. you request an instance of X
  4. the first provider cannot resolve it
  5. the second provider can resolve it, therefore it will also resolve a I<Z> as an M<Z> singleton instance
  6. you request an instance of I<Z>
  7. the first provider can resolve it, therefore it will return a NEW singleton instance of M<Z>

This is an edge case that I don't have the time to solve. So, with the caveat above, here is the final version.
Use it like this:
// IAdvancedServiceProvider either injected 
// or resolved via serviceProvider.GetService<IAdvancedServiceProvider>
// or even serviceProvider as IAdvancedServiceProvider
advancedServiceProvider.ServiceCollection.AddSingleton...
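For context, a possible end-to-end setup, using hypothetical service names:

var services = new ServiceCollection();
services.AddSingleton<IInitialService, InitialService>(); // hypothetical initial registration
var provider = new AdvancedServiceProvider(services);

// later, add new registrations dynamically and resolve them through the same provider
provider.ServiceCollection.AddSingleton<INewService, NewService>(); // hypothetical new service
var newService = (INewService)provider.GetService(typeof(INewService));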

And this is the source code:
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public interface IAdvancedServiceProvider : IServiceProvider
{
    /// <summary>
    /// Add services to this collection
    /// </summary>
    IServiceCollection ServiceCollection { get; }
}

/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public class AdvancedServiceProvider : IAdvancedServiceProvider, IDisposable
{
    private readonly List<ServiceProvider> _serviceProviders;
    private readonly NotifyChangedServiceCollection _services;
    private readonly object _servicesLock = new object();
    private List<ServiceDescriptor> _newDescriptors;
    private Dictionary<Type, object> _resolvedObjects;

    /// <summary>
    /// Initializes a new instance of the <see cref="AdvancedServiceProvider"/> class.
    /// </summary>
    /// <param name="services">The services.</param>
    public AdvancedServiceProvider(IServiceCollection services)
    {
        // registers itself in the list of services
        services.AddSingleton<IAdvancedServiceProvider>(this);

        _serviceProviders = new List<ServiceProvider>();
        _newDescriptors = new List<ServiceDescriptor>();
        _resolvedObjects = new Dictionary<Type, object>();
        _services = new NotifyChangedServiceCollection(services);
        _services.ServiceAdded += ServiceAdded;
        _serviceProviders.Add(services.BuildServiceProvider(true));
    }

    private void ServiceAdded(object sender, ServiceDescriptor item)
    {
        lock (_servicesLock)
        {
            _newDescriptors.Add(item);
        }
    }

    /// <summary>
    /// Add services to this collection
    /// </summary>
    public IServiceCollection ServiceCollection { get => _services; }

    /// <summary>
    /// Gets the service object of the specified type.
    /// </summary>
    /// <param name="serviceType">An object that specifies the type of service object to get.</param>
    /// <returns>A service object of type serviceType. -or- null if there is no service object of type serviceType.</returns>
    public object GetService(Type serviceType)
    {
        lock (_servicesLock)
        {
            // go through the service provider chain and resolve the service
            var service = GetServiceInternal(serviceType);
            // if service was not found and we have new registrations
            if (service == null && _newDescriptors.Count > 0)
            {
                // create a new service collection in order to build the next provider in the chain
                var newCollection = new ServiceCollection();
                foreach (var descriptor in _services)
                {
                    foreach (var descriptorToAdd in GetDerivedServiceDescriptors(descriptor))
                    {
                        ((IList<ServiceDescriptor>)newCollection).Add(descriptorToAdd);
                    }
                }
                var newServiceProvider = newCollection.BuildServiceProvider(true);
                _serviceProviders.Add(newServiceProvider);
                _newDescriptors = new List<ServiceDescriptor>();
                service = newServiceProvider.GetService(serviceType);
            }
            if (service != null)
            {
                _resolvedObjects[serviceType] = service;
            }
            return service;
        }
    }

    private IEnumerable<ServiceDescriptor> GetDerivedServiceDescriptors(ServiceDescriptor descriptor)
    {
        if (_newDescriptors.Contains(descriptor))
        {
            // if it's a new registration, just add it
            yield return descriptor;
            yield break;
        }

        if (!descriptor.ServiceType.IsGenericTypeDefinition)
        {
            // for a non-open-generic descriptor, register a factory that goes through the provider chain
            yield return ServiceDescriptor.Describe(
                descriptor.ServiceType,
                _ => GetServiceInternal(descriptor.ServiceType),
                descriptor.Lifetime
            );
            yield break;
        }
        // if the registered service type for a singleton is an open generic type
        // we register as factories all the already resolved specific types that fit this definition
        if (descriptor.Lifetime == ServiceLifetime.Singleton)
        {
            foreach (var servType in _resolvedObjects.Keys.Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == descriptor.ServiceType))
            {
                yield return ServiceDescriptor.Describe(
                    servType,
                    _ => _resolvedObjects[servType],
                    ServiceLifetime.Singleton
                );
            }
        }
        // then we add the open type registration for any new types
        yield return descriptor;
    }

    private object GetServiceInternal(Type serviceType)
    {
        foreach (var serviceProvider in _serviceProviders)
        {
            var service = serviceProvider.GetService(serviceType);
            if (service != null)
            {
                return service;
            }
        }
        return null;
    }

    /// <summary>
    /// Dispose the provider and all resolved services
    /// </summary>
    public void Dispose()
    {
        lock (_servicesLock)
        {
            _services.ServiceAdded -= ServiceAdded;
            foreach (var serviceProvider in _serviceProviders)
            {
                try
                {
                    serviceProvider.Dispose();
                }
                catch
                {
                    // singleton classes might be disposed twice and throw some exception
                }
            }
            _newDescriptors.Clear();
            _resolvedObjects.Clear();
            _serviceProviders.Clear();
        }
    }

    /// <summary>
    /// An IServiceCollection implementation that exposes a ServiceAdded event for added service descriptors
    /// The collection doesn't support removal or inserting of services
    /// </summary>
    private class NotifyChangedServiceCollection : IServiceCollection
    {
        private readonly IServiceCollection _services;

        /// <summary>
        /// Fired when a descriptor is added to the collection
        /// </summary>
        public event EventHandler<ServiceDescriptor> ServiceAdded;

        /// <summary>
        /// Initializes a new instance of the <see cref="NotifyChangedServiceCollection"/> class.
        /// </summary>
        /// <param name="services">The services.</param>
        public NotifyChangedServiceCollection(IServiceCollection services)
        {
            _services = services;
        }

        /// <summary>
        /// Get the value at index
        /// Setting is not supported
        /// </summary>
        public ServiceDescriptor this[int index]
        {
            get => _services[index];
            set => throw new NotSupportedException("Inserting services in collection is not supported");
        }

        /// <summary>
        /// Count of services in the collection
        /// </summary>
        public int Count { get => _services.Count; }

        /// <summary>
        /// Obviously not
        /// </summary>
        public bool IsReadOnly { get => false; }

        /// <summary>
        /// Adding a service descriptor will fire the ServiceAdded event
        /// </summary>
        /// <param name="item"></param>
        public void Add(ServiceDescriptor item)
        {
            _services.Add(item);
            ServiceAdded?.Invoke(this, item);
        }

        /// <summary>
        /// Clearing the collection is not supported
        /// </summary>
        public void Clear() => throw new NotSupportedException("Removing services from collection is not supported");

        /// <summary>
        /// True if the item exists in the collection
        /// </summary>
        public bool Contains(ServiceDescriptor item) => _services.Contains(item);

        /// <summary>
        /// Copy items to array of service descriptors
        /// </summary>
        public void CopyTo(ServiceDescriptor[] array, int arrayIndex) => _services.CopyTo(array, arrayIndex);

        /// <summary>
        /// Enumerator for service descriptors
        /// </summary>
        public IEnumerator<ServiceDescriptor> GetEnumerator() => _services.GetEnumerator();

        /// <summary>
        /// Index of item in the list
        /// </summary>
        public int IndexOf(ServiceDescriptor item) => _services.IndexOf(item);

        /// <summary>
        /// Inserting is not supported
        /// </summary>
        public void Insert(int index, ServiceDescriptor item) => throw new NotSupportedException("Inserting services in collection is not supported");

        /// <summary>
        /// Removing items is not supported
        /// </summary>
        public bool Remove(ServiceDescriptor item) => throw new NotSupportedException("Removing services from collection is not supported");

        /// <summary>
        /// Removing items is not supported
        /// </summary>
        public void RemoveAt(int index) => throw new NotSupportedException("Removing services from collection is not supported");

        /// <summary>
        /// Enumerator for objects
        /// </summary>
        IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable)_services).GetEnumerator();
    }
}

We already know how to load types in .NET Framework and we know what they say we should use in .NET Core. But what about Standard? Is that a trick question? Sort of. Right now we have two .NET Standard versions and three .NET Core versions, albeit with .NET Core 3 still in preview. The signature of AssemblyLoadContext and how it is used have changed dramatically: Core 3 enables context unloading, but Standard 2 does not. So you are either forced to build your library as Core 3, or you have to forgo unloading contexts, or you use reflection, which is not robust and probably will not be needed with the possible arrival of Standard 3.

But there are subtler issues at work. One of them is that, at least with .NET Core 3 Preview 6, when you reference System.Runtime.Loader in a Standard library so you can access AssemblyLoadContext, you get conflicts between the System.Runtime you are using and the one referenced by System.Runtime.Loader. The only solution is to use the System.Runtime.Loader NuGet package, but that returns you to the Standard 2 version of AssemblyLoadContext, even if the library version is higher!

The setup is this: I have an ITestInterface interface which resides in TestInterfaceLibrary.dll. I also have a TestImplementation class that can be found in TestImplementationLibrary.dll and implements ITestInterface. My program either does not reference any of these libraries or references only the interface one. The task is to load both of these types and then simply convert one instance of TestImplementation to ITestInterface. A simple test would be loading the types and then expecting interfaceType.IsAssignableFrom(implementationType) to be true.
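For clarity, a minimal sketch of what those two libraries might contain:

// TestInterfaceLibrary.dll
namespace TestInterfaceLibrary
{
    public interface ITestInterface
    {
    }
}

// TestImplementationLibrary.dll (references TestInterfaceLibrary)
namespace TestImplementationLibrary
{
    public class TestImplementation : TestInterfaceLibrary.ITestInterface
    {
    }
}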

Core 3


Let's first try the Core 3 way:
var context = new AssemblyLoadContext("testContext", true);
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface");
Console.WriteLine(interfaceType?.ToString()??"interface type not loaded");
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
Console.WriteLine("implementation implements interface: "+interfaceType.IsAssignableFrom(implementationType));
 
context.Unload();
The output is:
TestInterfaceLibrary.ITestInterface
TestImplementationLibrary.TestImplementation
implementation implements interface: True

It works! But only because the interface assembly is loaded first. If you try to load just the implementation type first, it will come up as null. There are no exceptions thrown unless you get all the assembly types or specify the throwOnError parameter in GetType. The exception then is "System.IO.FileNotFoundException: 'Could not load file or assembly 'TestInterfaceLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'".

In order to solve this, we need to use the Resolving event of the AssemblyLoadContext class. Let's try this:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
 
private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
    var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
    return context.LoadFromAssemblyPath(expectedPath);
}

And now it works again, by assuming the assembly name is the same as the assembly file name and that it is found in the same place.

But... if we try this in different contexts:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
context.Resolving -= Context_Resolving;
context.Unload();
context = new AssemblyLoadContext("testContext2", true);
context.Resolving += Context_Resolving;
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
the output will show
implementation implements interface: False

This means that if we want to encapsulate this in a TypeLoader class or something, we cannot use different contexts for dynamically loading types. Even if we had one context that we would unload in order to refresh all the types, it could still be different from the main context, in case the interface is loaded twice or referenced directly in the project.

For example, if you reference TestInterfaceLibrary directly and you load TestImplementation dynamically, it will work as expected, because ITestInterface is resolved automatically from the main context. However, if you load ITestInterface dynamically too, it will be a different type from the referenced ITestInterface, even though they apparently have the same name, full name and assembly qualified name! So it kind of makes sense not to load a type twice. Is this where the context unloading comes in? Not really. Let's define a method that counts the number of types with a certain name in the current domain:
private static int CountTypes(string typeName)
{
    return AppDomain.CurrentDomain.GetAssemblies()
        .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
        .Count();
}

And now let's run this code:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var referencedInterfaceType = typeof(ITestInterface);
Console.WriteLine(referencedInterfaceType?.ToString() ?? "interface type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine($"Types are the same: {interfaceType==referencedInterfaceType}");
 
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");
 
context.Resolving -= Context_Resolving;
context.Unload();
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");

There is the referenced type, then we load the type dynamically again, inside a new context. We count the types loaded in the current domain, we unload the context, we count the types again. The result is
TestInterfaceLibrary.ITestInterface
TestInterfaceLibrary.ITestInterface
Types are the same: False
Number of types with name TestInterfaceLibrary.ITestInterface: 2
Number of types with name TestInterfaceLibrary.ITestInterface: 2
The types are always 2!

Bottom line: even when unloading the AssemblyLoadContext, the types used are not unloaded, and trying to find a type by name will result in duplicates.

OK, so let's just agree that types with the same name, once loaded, should remain there and no other type with the same name should be loaded. Let's try to incorporate this into a TypeLoader class:
public class TypeLoader : IDisposable
{
    private readonly AssemblyLoadContext _context;

    public TypeLoader()
    {
        _context = new AssemblyLoadContext(GetType().FullName, true);
        _context.Resolving += Context_Resolving;
    }

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
            .FirstOrDefault();
        if (type != null)
        {
            return type;
        }
        var assembly = _context.LoadFromAssemblyPath(assemblyPath);
        return assembly.GetType(typeName, true);
    }

    public void Dispose()
    {
        // the null-conditional operator cannot be used on the left side of -= 
        if (_context != null)
        {
            _context.Resolving -= Context_Resolving;
            _context.Unload();
        }
    }
}

The code in our test is now much clearer:
var interfaceAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestInterfaceLibrary.dll");
var implementationAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestImplementationLibrary.dll");
var interfaceTypeName = "TestInterfaceLibrary.ITestInterface";
var implementationTypeName = "TestImplementationLibrary.TestImplementation";

using (var loader = new TypeLoader())
{
    Type referencedType = typeof(TestInterfaceLibrary.ITestInterface);
    var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
    var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
    Console.WriteLine($@"
referenced type: {referencedType}
interface type: {interfaceType}
implementation type: {implementationType}
referenced and loaded interfaces are the same: {referencedType == interfaceType}
interface implemented: {interfaceType.IsAssignableFrom(implementationType)}"
    );
}
and the result is
referenced type: TestInterfaceLibrary.ITestInterface
interface type: TestInterfaceLibrary.ITestInterface
implementation type: TestImplementationLibrary.TestImplementation
referenced and loaded interfaces are the same: True
interface implemented: True

But we still use Unload. Maybe some day it will work the way I want it to, but until then, why not get rid of Unload and make TypeLoader a class in a Standard 2 library?

Standard 2


For this I will create a new Standard 2 library project and then reference it in our test Core 3 project. Then I will move the TypeLoader class in the library project.

The errors in the library project are related to not knowing what an AssemblyLoadContext is, therefore the first solution is to reference System.Runtime.Loader from the framework. I get the immediate error "Assembly 'System.Runtime.Loader' with identity 'System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' uses 'System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version than referenced assembly 'System.Runtime' with identity 'System.Runtime, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".

Solution 2: add the System.Runtime.Loader NuGet package, which at the time of writing is version 4.3.0. The error is now gone, but several things are immediately apparent:
  1. the Unload method doesn't exist anymore
  2. the constructor doesn't receive a name and a bool anymore
  3. AssemblyLoadContext is now abstract

In order to solve this, I am creating a DynamicAssemblyLoadContext class that inherits from AssemblyLoadContext, returns null from the Load method override, and gets an Unload method and a constructor with a string and a bool that don't do anything. And it works again. The updated TypeLoader class is now:
public class TypeLoader : IDisposable
{
    private readonly DynamicAssemblyLoadContext _context;

    public TypeLoader()
    {
        _context = new DynamicAssemblyLoadContext(GetType().FullName, true);
        _context.Resolving += Context_Resolving;
    }

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
            .FirstOrDefault();
        if (type != null)
        {
            return type;
        }
        var assembly = _context.LoadFromAssemblyPath(assemblyPath);
        return assembly.GetType(typeName, true);
    }

    public void Dispose()
    {
        if (_context != null)
        {
            _context.Resolving -= Context_Resolving;
            _context.Unload();
        }
    }

    private class DynamicAssemblyLoadContext : AssemblyLoadContext
    {
        public DynamicAssemblyLoadContext(string name, bool isCollectible)
        {
        }

        protected override Assembly Load(AssemblyName assemblyName)
        {
            return null;
        }

        public void Unload()
        {
        }
    }
}

The safe way


The code above has an issue, though: if the interface type is dynamically loaded before its referenced type is used, this fails again. This is the case when you use dependency injection: you dynamically load the types in order to register the implementation-to-interface relationship, but then, when you ask for a resolution of the interface type, now referenced by the main project, you get another type named just the same.

The safe way, considering that we don't really use Unload and can't count on it ever working, is to use the default context, the one where everything loads, and be done with it. When you do that the code becomes a little uglier, but it works in all situations.

The final version:
public class TypeLoader
{
    private readonly object _resolutionLock = new object();

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var context = AssemblyLoadContext.Default;
        lock (_resolutionLock)
        {
            var type = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
                .FirstOrDefault();
            if (type != null)
            {
                return type;
            }
            // attach the resolver only for the duration of the load,
            // making sure it gets detached even if the load throws
            context.Resolving += Context_Resolving;
            try
            {
                var assembly = context.LoadFromAssemblyPath(assemblyPath);
                return assembly.GetType(typeName, true);
            }
            finally
            {
                context.Resolving -= Context_Resolving;
            }
        }
    }
}

You just gotta hate adding and removing that event inside a lock, right? Well, if you find a better solution, let me know.


The Problem


Phew, that's a mouthful. But the issue is that trying to serialize a FileInfo or a DirectoryInfo object with Newtonsoft's Json library in .NET Core fails with a vague exception:
Newtonsoft.Json.JsonSerializationException: Unable to serialize instance of 'System.IO.DirectoryInfo'.
at Newtonsoft.Json.Serialization.DefaultContractResolver.ThrowUnableToSerializeError(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonContract.InvokeOnSerializing(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.OnSerializing(JsonWriter writer, JsonContract contract, Object value)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)

It doesn't say why it fails, just that a method called ThrowUnableToSerializeError threw um... an unable to serialize error?

The Cause


Looking at the Newtonsoft code, we eventually get to this piece of code:
// serializing DirectoryInfo without ISerializable will stackoverflow
// https://github.com/JamesNK/Newtonsoft.Json/issues/1541
if
(Array.IndexOf(BlacklistedTypeNames, objectType.FullName) != -1)
{
contract.OnSerializingCallbacks.Add(ThrowUnableToSerializeError);
}

Later, another piece of code will execute the serializing callbacks and throw the exception. We can get rid of this behavior by using a custom contract resolver, like this:
var settings = new JsonSerializerSettings
{
    ContractResolver = new FileInfoContractResolver()
};

private class FileInfoContractResolver : DefaultContractResolver
{
    protected override JsonContract CreateContract(Type objectType)
    {
        var result = base.CreateContract(objectType);
        if (typeof(FileSystemInfo).IsAssignableFrom(objectType))
        {
            result.OnSerializingCallbacks.Clear();
        }
        return result;
    }
}

Yet now, when trying to serialize, we get the stack overflow exception described in the original Newtonsoft.Json issue. It stems from the difference between the .NET Framework and .NET Core implementations of ISerializable in FileSystemInfo, which in Core just throws PlatformNotSupportedException. It's still not clear why that turns into a StackOverflowException, probably some conflict with the Newtonsoft code, but it's clear Microsoft does not intend to make these classes serializable. If you think about it, those classes suck for so many reasons!

The Solution


So, in order to solve it, we will use a custom JSON converter:
private class FileSystemInfoConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return typeof(FileSystemInfo).IsAssignableFrom(objectType);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        if (reader.TokenType == JsonToken.Null)
            return null;
        var jObject = JObject.Load(reader);
        var fullPath = jObject["FullPath"].Value<string>();
        // FileInfo and DirectoryInfo both have a constructor receiving the full path
        return Activator.CreateInstance(objectType, fullPath);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        var info = value as FileSystemInfo;
        if (info == null)
        {
            // JToken.FromObject throws on null, so write the null token directly
            writer.WriteNull();
            return;
        }
        // serialize only the path; everything else is file system state
        var token = JToken.FromObject(new { FullPath = info.FullName });
        token.WriteTo(writer);
    }
}
And we use it like this:
var dir = new DirectoryInfo(@"C:\Temp"); // any FileSystemInfo instance
var settings = new JsonSerializerSettings
{
    Converters = new List<JsonConverter>
    {
        new FileSystemInfoConverter()
    }
};
var json = JsonConvert.SerializeObject(dir, settings);
var info = JsonConvert.DeserializeObject<DirectoryInfo>(json, settings);

Why FileInfo and DirectoryInfo suck


The answer of a senior developer to any question should be "Why?" or "Why on Earth or anywhere in the Solar System would you want to do a dumb thing like that?!?!". Why would you want to serialize a directory or file info object? The answer is that you should not. The info objects are defined by only one thing: a path. Yet they carry so much baggage: properties that access the file system, unsafe methods, no interfaces or factory methods that would allow them to be mocked in unit tests. They might look like data objects, but they are not!

Imagine a scenario where you have a list of all the files in your drive. You enumerated them all and now you want to serialize them. Should the serializer save Exists or Length, for example? That would mean accessing the file system for each entry during serialization, leading to a lot of work, access errors and so on.

Best practices say you should use model classes to move data around, like a simple FileSystemInfoModel with Type and FullPath and maybe Attributes or Size properties or whatever you want to save, populated by yourself as a separate responsibility. And if you want the functionality of the Info classes, use System.IO.Abstractions or the new Core IFileProvider abstraction to get implementations of interfaces that you can mock in unit tests.
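A minimal sketch of such a model (all names are hypothetical, just matching the suggestion above):
public class FileSystemInfoModel
{
    public string Type { get; set; }     // "File" or "Directory"
    public string FullPath { get; set; }
    public FileAttributes Attributes { get; set; }
    public long? Size { get; set; }      // null for directories

    // mapping touches the file system exactly once, as an explicit responsibility
    public static FileSystemInfoModel From(FileSystemInfo info) => new FileSystemInfoModel
    {
        Type = info is FileInfo ? "File" : "Directory",
        FullPath = info.FullName,
        Attributes = info.Attributes,
        Size = (info as FileInfo)?.Length
    };
}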

Tell me what you think.

and has 0 comments
This post starts from a simple question: how do I start a task with a timeout? You go to StackOverflow, of course, and find this answer: Asynchronously wait for Task<T> to complete with timeout. It's an elegant solution: start a Task.Delay alongside the task and continue when either of the two completes. However, in order to cancel the initial operation, one needs to pass the cancellation token to the original task and manually handle it, meaning polluting the entire business code with cancellation logic. This might be OK, yet are there alternatives?
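The gist of that linked answer looks something like this sketch (simplified; the original also cancels the delay when the task wins):
public static class TaskExtensions
{
    public static async Task<TResult> WithTimeout<TResult>(this Task<TResult> task, TimeSpan timeout)
    {
        // race the original task against a delay; whichever completes first wins
        var winner = await Task.WhenAny(task, Task.Delay(timeout));
        if (winner != task)
        {
            throw new TimeoutException();
        }
        return await task; // propagate the result or the original exception
    }
}
Note that the losing task keeps running in the background; the timeout only stops you from waiting for it.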

But, isn't there the Task.Run(action) method that also accepts a CancellationToken? Yes, there is, and if you thought this runs an action until you cancel it, think again. Here is what Task.Run says it does: "Queues the specified work to run on the thread pool and returns a Task object that represents that work. A cancellation token allows the work to be cancelled." and if you scroll down to Remarks, here is what it actually does: "If cancellation is requested before the task begins execution, the task does not execute. Instead it is set to the Canceled state and throws a TaskCanceledException exception". You read that right: the token is only taken into account when the task starts running, not while it is actually executing.

Surely, then, there must be a way to cancel a running Task. How about Task.Dispose()? Dispose throws a funny exception if you try it: "System.InvalidOperationException: 'A task may only be disposed if it is in a completion state (RanToCompletion, Faulted or Canceled).'". In normal speech, it means "Fuck you!". If you think about it, how would you abort a task execution? What if it does nasty things, leaves resources occupied, has to clean up after it? The .NET team took the safe path and refused to give you an out of the box unsafe cancelling mechanism.

So, what is the solution? The recommended one is that you pass the token to all the methods that can be cancelled, then check inside whether cancellation was requested. Of course, this only works if:
  1. you control what the task does
  2. you can split the operation into small chunks that are executed sequentially or in a loop, so you can interrupt their flow (see the sketch below)
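Here is a minimal sketch of that cooperative style (GetChunksOfWork and Process are hypothetical):
private void CancellableOperation(CancellationToken token)
{
    foreach (var chunk in GetChunksOfWork()) // hypothetical enumeration of small work items
    {
        // cooperative cancellation: check between chunks and bail out
        token.ThrowIfCancellationRequested();
        Process(chunk); // hypothetical processing of one chunk
    }
}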
If you have something like an external process that is being executed, or one long, indivisible operation, you are almost out of luck. Why almost? Well, CancellationTokenSource and CancellationToken do not expose events, but the token exposes a "wait handle" that you can wait on synchronously. And here it gets funky. Check out an example of a method that executes some long running action and can react to the token being cancelled:
/// <summary>
/// Executes the long running action and cancels it when needed
/// </summary>
/// <param name="token">The token that signals cancellation</param>
private void LongRunningAction(CancellationToken token)
{
    // instantiate a container and keep its reference
    var container = new IdentificationContainer();
    Task.Run(() =>
    {
        // block until the token gets cancelled on another thread
        token.WaitHandle.WaitOne();
        // this will use the information in the container to kill the action
        // (presumably by interrupting external processes or sending some kill signal)
        KillLongRunningAction(container);
    });
    // this executes the action and populates the identification container if needed
    RunLongRunningAction(container);
}
This introduces some other issues, like what happens to the monitoring task if you never cancel the token or dispose of the cancellation source, but that's a bit too deep for this post.

In the code above we get a sort of a solution if we can control the code and we can actually cancel things gracefully inside of it. But what if I can't (or won't)? Can I get something that does what I wanted Task.Run to do: execute something and, when I cancel it, stop it from executing?

And the answer, using what we learned above, is yes, but as explained at the beginning, it may have effects like resource leaks. Here it is:
/// <summary>
/// Run an action and kill it when cancelling the token
/// </summary>
/// <param name="action">The action to execute</param>
/// <param name="token">The token</param>
/// <param name="waitForGracefulTermination">If set, the task will be killed with a delay so as to allow the action to end gracefully</param>
private static Task RunCancellable(Action action, CancellationToken token, TimeSpan? waitForGracefulTermination = null)
{
    // we need a thread, because Tasks cannot be forcefully stopped
    var thread = new Thread(new ThreadStart(action));
    // we hold the reference to the task so we can check its state
    Task task = null;
    task = Task.Run(() =>
    {
        // task monitoring the token
        Task.Run(() =>
        {
            // wait for the token to be cancelled
            token.WaitHandle.WaitOne();
            // if we wanted graceful termination we wait;
            // in this case, the action needs to know about the token as well and handle the cancellation itself
            if (waitForGracefulTermination != null)
            {
                Thread.Sleep(waitForGracefulTermination.Value);
            }
            // if the task has not ended, we kill the thread
            if (!task.IsCompleted)
            {
                thread.Abort();
            }
        });
        // simply start the thread (and the action)
        thread.Start();
        // and wait for it to end so we return to the current thread
        thread.Join();
        // throw exception if the token was cancelled
        // this will not be reached unless the thread completes or is aborted
        token.ThrowIfCancellationRequested();
    }, token);
    return task;
}
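A usage sketch (DoHeavyWork is a hypothetical long running method):
var cts = new CancellationTokenSource();
var task = RunCancellable(() => DoHeavyWork(), cts.Token, TimeSpan.FromSeconds(5));
// ... later, from another thread:
cts.Cancel(); // DoHeavyWork gets 5 seconds to finish before its thread is aborted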

As you can see, the solution is to run the action on a thread and then manually kill the thread. This means that any control over where and how the action is executed is wrested from the default TaskScheduler and given to you. Also, in order to force the stopping of the task, you use Thread.Abort, which may have nasty side effects. The gist of what Microsoft says about it: Thread.Abort is not supported on .NET Core and calling it throws a PlatformNotSupportedException at runtime.

Bummer! .NET Core doesn't want you to kill threads. However, if you are really determined, there is a way :) Use ThreadEx.Abort(thread);


Bonus code: How do you get the cancellation token if you have the task?
var token = new TaskCanceledException(task).CancellationToken;
It might not help too much, especially if you want to use it inside the task itself, but it might help clean up the code.

Conclusion


Just like async/await, the recommended pattern of passing the cancellation token around will pollute your code for little effect. However, if you want a common interface for the purpose, use RunCancellable instead of Task.Run and only handle the token manually where resources have been allocated and need to be cleaned up first.

Sometimes you get an annoying error after updating your .NET Framework or some of the packages or libraries in your project: "Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: <name-of-nuget-package>".

The problem stems from the fact that NuGet packages have variants for different .NET flavors and in your project they are "hinted" at by the <HintPath> child element in the <Reference> elements in your .csproj file. Somehow, the hint still points to a different variant than the one you need and that's why you get this error. The explanation in length can be found in this great post: Why, when and how to reinstall NuGet packages after upgrading a project, but just in case his blog disappears (as so many great ones did in the past), here is the gist of the solution:

In Visual Studio go to Tools → NuGet Package Manager → Package Manager Console and type:
Update-Package <name-of-nuget-package> -Reinstall -ProjectName <name-of-project>

To add some value to Derriey's post, you can solve all the similar issues in your solution at once: copy the entire list of errors from all projects (go to the Output pane, select them all, right click and Copy), then run search and replace in your favorite editor with this regular expression:
^.*?Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information.  Packages affected: ((?:[^,\s]+(?:, )?)+)\t([^\t]+)\t\t\d+\t\t$
and replacement pattern
Update-Package $1 -Reinstall -ProjectName $2

Then make sure there is only one project on each line, copy paste the result into the Package Manager Console window and the entire solution will get fixed.

Example: Error Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: Microsoft.Extensions.Configuration, Serilog MyProject.Common 0

Turns into:
Update-Package Microsoft.Extensions.Configuration, Serilog -Reinstall -ProjectName MyProject.Common
Since Update-Package only supports one package at a time and regex replace doesn't have a syntax for multiple captures in the same group, you will have to manually turn this into:
Update-Package Microsoft.Extensions.Configuration -Reinstall -ProjectName MyProject.Common
Update-Package Serilog -Reinstall -ProjectName MyProject.Common

Copy paste the result and the two packages will be reinstalled on the affected project in your solution.

I spent hours trying to manually fix the assembly redirects in a web.config, only to give up and use the default Add-BindingRedirect in the NuGet package manager. And it worked! I have no idea if this won't break something else, but I got it from Rick Strahl's blog and it worked for me. More in his article. Thanks, Rick!

One thing to remember is that you first have to delete the dependentAssembly elements from the .config file in order for the command to work.

and has 2 comments
Sonar Source code static analysis rule RSPEC-3906 states:
Delegate event handlers (i.e. delegates used as type of an event) should have a very specific signature:
  • Return type void.
  • First argument of type System.Object and named 'sender'.
  • Second argument of type System.EventArgs (or any derived type) and is named 'e'.


The problem was that I was getting the warning on a simple event declared as EventHandler<TEventArgs>. Going to its source code page revealed the reason in a comment: // Removed TEventArgs constraint post-.NET 4.
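For reference, an event declaration that satisfies the rule would look like this minimal sketch (the payload type and names are hypothetical):
public class MessageEventArgs : EventArgs
{
    public string Message { get; set; }
}

public class Publisher
{
    // compliant: the delegate returns void and takes (object sender, EventArgs-derived e)
    public event EventHandler<MessageEventArgs> MessageReceived;

    protected virtual void OnMessageReceived(MessageEventArgs e)
        => MessageReceived?.Invoke(this, e);
}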

and has 0 comments
This is another post discussing a static analysis rule that made me learn something new. SonarSource Rule 3898 says: If you're using a struct, it is likely because you're interested in performance. But by failing to implement IEquatable<T> you're losing performance when comparisons are made because, without IEquatable<T>, boxing and reflection are used to make comparisons.

There is a StackOverflow entry that discusses just that, and the answer to this particular problem is not actually the accepted one. In pure StackOverflow fashion I will quote the relevant bit of the answer, just in case the site goes offline in the future: I'm amazed that the most important reason is not mentioned here. IEquatable<> was introduced mainly for structs for two reasons:
  1. For value types (read structs) the non-generic Equals(object) requires boxing. IEquatable<> lets a structure implement a strongly typed Equals method so that no boxing is required.
  2. For structs, the default implementation of Object.Equals(Object) (which is the overridden version in System.ValueType) performs a value equality check by using reflection to compare the values of every field in the type. When an implementer overrides the virtual Equals method in a struct, the purpose is to provide a more efficient means of performing the value equality check and optionally to base the comparison on some subset of the struct's field or properties.

I thought this was worth mentioning, for those performance critical struct equality scenarios.
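To make it concrete, here is a minimal sketch of a struct implementing IEquatable<T>, so comparisons avoid boxing and reflection (the Point type is just an example):
public readonly struct Point : IEquatable<Point>
{
    public Point(int x, int y) { X = x; Y = y; }
    public int X { get; }
    public int Y { get; }

    // strongly typed comparison: no boxing, no reflection
    public bool Equals(Point other) => X == other.X && Y == other.Y;

    // keep object.Equals and GetHashCode consistent with the typed version
    public override bool Equals(object obj) => obj is Point p && Equals(p);
    public override int GetHashCode() => unchecked((X * 397) ^ Y);
}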

and has 0 comments
I was playing with code analysis rule sets in Visual Studio (see my blog post about it) and I got hit by some conflicting rules. I will discuss only SonarSource rules, but a lot of other analyzers have similar rules.

OK, one of them is something that I intuitively thought was universally good: RSPEC-3962: "static readonly" constants should be "const" instead. Makes sense, right? A constant is compiled better, integrated faster, it's a constant! No overhead, nothing changes it. This rule was marked as a minor improvement to the code, anyway.

Then, bam!, RSPEC-2339: Public constant members should not be used. Critical rule! Basically it says the opposite: turn your constant into static readonly. What's going on?!

This is not one of those pairs of rules that contradict each other based on user preference, like using var instead of the type name when the type is obvious and vice versa. These are two different, apparently conflicting, yet complementary concepts.

But what is really the difference between a static readonly field and a constant, other than the fact that constants can only be compile-time values? Constant values are retrieved at compile time, as an optimization, since they are not expected to change, while static readonly values are retrieved at runtime. This means that if you use a library in your project, the constants it declares will be baked into your application when you compile it. You may change the .dll of the library afterwards, with inconsistent results, since the readonly statics will now have the changed values, while the constants will not.

Here, an example. In the creatively named project Library there is a Container class with a public constant ingeniously named Constant and a public static readonly field that has the same value as Constant.
namespace Library
{
    public class Container
    {
        public const int Constant = 1;
        public static readonly int StaticReadonly = Constant;
    }
}

Then there is a program that uses these two values to display them:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine($"Container.Constant: {Container.Constant} Container.StaticReadonly: {Container.StaticReadonly}");
        Console.ReadKey();
    }
}

The expected output is Container.Constant: 1 Container.StaticReadonly: 1. Now change the value of Constant to 2, right click the Library project and only build it, not the program. Then take the resulting .dll, copy it into the bin folder of the program and run the program manually. The output is now... Container.Constant: 1 Container.StaticReadonly: 2, and that from code like StaticReadonly = Constant;. The constant was baked into the program when the program was compiled, while the static readonly field is read from the new .dll at runtime.

Conclusion: public constants should be avoided if they are used between projects and since you don't know where they will be used, better to avoid them at all times. This will really annoy people who like to create separate classes to store constants, but that's OK, because the feeling is mutual.

and has 0 comments
So I was watching this Entity Framework presentation and I noticed one example that looked like this:
db.ExecuteSqlCommand($"delete from Log where Time<{time}");

Was this an invitation to SQL injection? Apparently not, since the resulting SQL was something like DELETE FROM Log WHERE Time < @_p0. But how could that be? Enter FormattableString, a class implementing the venerable IFormattable interface, available in .NET Framework only from version 4.6 and in .NET Core from the very beginning. When an interpolated string is assigned to a FormattableString, it is compiled into an instance holding the format and all the values from the string before any formatting takes place. In our case, ExecuteSqlCommand had a FormattableString overload. Note that the method is an extension method from RelationalDatabaseFacadeExtensions, not Database.ExecuteSqlCommand.

Let's test this with a little program:
class Program
{
    static void Main(string[] args)
    {
        var timeDisplay = new TimeDisplay();
        Test($"Time display:{timeDisplay}");
        Console.ReadKey();
    }

    private static void Test(string text)
    {
        Console.WriteLine(text);
    }

    private class TimeDisplay
    {
        public override string ToString()
        {
            return DateTime.Now.ToString("s");
        }
    }
}

Here I create an instance of TimeDisplay and then use it in an interpolated string which is then sent to the Test method, which Console.WriteLines it. The ToString method of TimeDisplay is overridden to display the current time. The result is predictable: Time display:2018-12-13T11:24:02. I will then change the type of the parameter of Test to be FormattableString. It still works and it displays the same thing. Note that if I have both a FormattableString and a string version of the same method, string will be used first when an interpolated string is sent as a parameter!

But what do I get in that instance? Let's change the Test method even more:
private static void Test(FormattableString text)
{
    Console.WriteLine($"Format: {text.Format} " +
        $"ArgumentCount: {text.ArgumentCount} " +
        $"Arguments: {string.Join(", ", text.GetArguments())}");
}

The displayed result of the program is now Format: Time display:{0} ArgumentCount: 1 Arguments: 2018-12-13T11:28:35. Note that the argument is in fact a TimeDisplay instance and it is displayed as a time stamp because of the ToString override.

What does this mean?

Well, we can do great things like Entity Framework does, interpreting the intent of the developer and providing a more informed output. I am considering this as a solution for logging: Logger.LogDebug($"{someObjectWithAHeavyToString}") would not have to execute the ToString() method of the object unless the Debug log level is enabled, for example.
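As a sketch of that logging idea (assuming Microsoft.Extensions.Logging's ILogger; LogDebugDeferred is a hypothetical helper, not part of the library):
public static class LoggerExtensions
{
    // the interpolated string argument binds to FormattableString, so the heavy
    // ToString() calls only happen when the Debug level is actually enabled
    public static void LogDebugDeferred(this ILogger logger, FormattableString message)
    {
        if (logger.IsEnabled(LogLevel.Debug))
        {
            logger.LogDebug("{Message}", message.ToString());
        }
    }
}
The distinct method name is deliberate: as noted below, if a string overload with the same name existed, the interpolated string would bind to the string version instead.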

But we can also really mess things up. I will get past the possible yet unlikely security problem where you believe you pass an object as .ToString() and in fact it is passed as the entire object, allowing a malicious library to do whatever it wants with it. Let's consider more probable scenarios.

One is that a code reviewer will tell you to "put magic strings in their own variables or constants", so you take the string sent to Test and extract it into a local variable (which Visual Studio will declare as a FormattableString), then you replace the declared type with var (because the type is obvious, right?). Suddenly the variable is a plain string and a different overload gets called.

Another is even worse, although if you decided to code like this you have other issues. Let's get back to something similar to the original example:
db.ExecuteSqlCommand($"delete from Log where Id = {id}");

And let's change it:
var sql=$"delete from Log where Id = {id}";
db.ExecuteSqlCommand(sql);

Now sql is a string, its value is computed from the id, which might be provided by the user. Replace this with Bobby Tables and you got a nice SQL injection.

Conclusion: an interesting, if somewhat confusing, concept. Other than the logging idea, which I admit is pretty interesting, I am yet to find a good place to use it.

Intro


An adapter is a software pattern that exposes functionality through an interface different from the original one. Let's say you have an oven, with the function Bake(int temperature, TimeSpan time) and you expose a MakePizza() interface. It still bakes at a specific temperature for an amount of time, but you use it differently. Sometimes we have similar libraries with a common goal, but different scope, that one is tempted to hide under a common adapter. You might want to just cook things, not bake or fry.

So here is a post about good practices of designing a library project (complete with the use of software patterns, ugh!).



Examples


An example in .NET would be the WebRequest.Create method. It receives a URI as a parameter and, based on its type, returns a different implementation that will handle the resource in the way declared by WebRequest. For HTTP it will use an HttpWebRequest, for FTP an FtpWebRequest, for file access a FileWebRequest and so on. They are all implementations of the abstract class WebRequest, which would be our adapter. The Create method itself is an example of the factory method pattern.

But there are issues with this. Let's assume that we have different libraries/projects that each handle a specific resource scope. They may be so different as to be managed by different organizations: a team works on files, another on HTTP and the FTP one is an open source third party library. Your team works on the WebRequest class and has to consider the implications of having a Create factory method. Is there a switch in there? "If the URI starts with http or https, return a new HttpWebRequest"? In that case, your WebRequest library would need to depend on the library that contains HttpWebRequest! And that's just not possible, since it would be a circular reference. Even if your project had control over all implementations, it would still be a bad idea to let a base class know about a derived class. If you move the factory into a factory class, it still means your adapter library has to depend on every implementation of the common interface. As Joe Armstrong would say: you wanted a banana, but what you got was a gorilla holding the banana and the entire jungle.

So how did Microsoft solve it? Well, they did move the implementation of the factory in another creator class that would implement IWebRequestCreate. Then they used configuration to associate a prefix with an implementation of WebRequest. Guess you didn't know that, did you? You can register your own implementations via code or configuration! It's such an obscure feature that if you Google WebRequestModulesSection you mostly get links to source code.
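For illustration, registering your own scheme in code could look like the sketch below (RegisterPrefix and IWebRequestCreate are the real APIs; MyWebRequestCreator and MyWebRequest are hypothetical):
public class MyWebRequestCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri)
    {
        return new MyWebRequest(uri); // hypothetical WebRequest subclass
    }
}

// somewhere at application startup:
// WebRequest.Create("my://whatever") will now go through MyWebRequestCreator
WebRequest.RegisterPrefix("my", new MyWebRequestCreator());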

Another very successful example of an adapter library is jQuery. Yes, the one they now say you don't need anymore; it took the industry only 12 years to catch up, after all. Anyway, at the time there were very different implementations of what people thought a web browser should be. The way the DOM was represented, the Javascript objects and methods, the way they actually worked compared to the way they should have worked: everything was different. So developers were often either favoring a browser over the others or were forced to write code for each possible version. Something like "if Internet Explorer, do A, if Netscape, do B". The problem with this is that if you tried to use a browser that was neither Internet Explorer nor Netscape, it would either break or show you one of those annoying "browser not supported" messages.

Enter jQuery, which abstracted access over all these different interfaces with a common (and very nicely designed) one. Not only did it have a fluent interface that allowed you to do multiple things with a single target (stuff like $('#myElement').show().css({opacity:0.7}).text('My text');), but it was extensible, allowing third parties to add modules that would allow even more functionality ($('#myElement').doSomethingCool();). Sound familiar? Extensibility seems to be an important common feature of well designed adapters.

Speaking of jQuery, one very used feature was jQuery.browser, which told you what browser you were using. It had a very sophisticated and complex code to get around the quirks of every browser out there. Now you had the ability to do something like if ($.browser.msie) say('OMG! You like Microsoft, you must suck!'); Guess what, the browser extension was deprecated in jQuery 1.9 and not because it was not inclusive. Well, that's the actual reason, but from a technical point of view, not political correctness. You see, now you have all this brand new interface that works great on all browsers and yet still your browser can't access a page correctly. It's either an untested version of a particular browser, or a different type of browser, or the conditions for letting the user in were too restrictive.

The solution was to rely on feature detection, not product versions. For example you use another Javascript library called Modernizr and write code like if (Modernizr.localstorage) { /* supported */ } else { /* not-supported */ }. There are so many possible features to detect that Modernizr lets you pick and choose the ones you need and then constructs the library that handles each instead of bundling it all in one huge package. They are themselves extensible. You might ask what all this has to do with libraries in .NET. I am getting there.

The last example: Entity Framework. This is a hugely popular framework for database access from Microsoft. It would abstract the type of the database behind a very nice (also fluent) interface in .NET code. But how does it do that? I mean, what if I need SQL Server? What if I want MongoDB or PostgreSQL?

The answer is having different "providers" that translate .NET Expressions into whatever the storage needs. The individual providers are added as dependencies to your project, without Entity Framework needing to know about them. Then, because they implement some common interfaces, they are configured for use in code and are ready to go.

Principles for adapters


So now we have some idea about what is good in an adapter:
  • Ease of use
  • Common interface
  • Extensibility
  • No direct dependency between the interface and what is adapted
  • An interface per feature

Now that I wrote it down, it sounds kind of weird: the interface should not depend on what it adapts. It is correct, though. In the case of Entity Framework, for example, the provider for MySql is an adapter between MySql's own interface and the .NET interfaces declared by Entity Framework; interfaces are just declarations of what something should do, not implementations.

Picture time!


The factory and the common interface form one library, and you will reference that library in your project. Each individual adapter depends on it as well, but your project doesn't need to know about any of the adapters until they are needed.

Now, it's your choice if you register the adapters dynamically (so, let's say you load the .dll, extract the objects that implement a specific interface and they know themselves what they apply to, like FtpWebRequest for ftp: strings) or you add dependencies to individual adapters in your project and then manually register them yourself, strongly typed. The important thing is that you don't reference the factory library and automatically get all the possible implementations added to your project.

It seems I've covered all points except the last one. That is pretty important, so read on!

Imagine that the things you want to adapt are not really that similar. You want to force them into a common shape, but there will be bits that are specific to one domain only and you might want them. Now here is an example of how NOT to do things:
var target = new TargetFactory().Get(connectionString);
if (target is SomeSpecificTarget specificTarget)
{
    specificTarget.Authenticate(username, password);
}
target.DoTargetStuff();
In this case I use the adapter for Target, but then bring in the knowledge of a specific target called SomeSpecificTarget and use a method that I just know is there. This is bad for several reasons:
  1. For someone to understand this code they must know what SomeSpecificTarget does, invalidating the concept of an adapter
  2. I need to know that for that specific connection string a certain type will always be returned, which might not be the case if the factory changes
  3. I need to know how SomeSpecificTarget works internally, which might also change in the future
  4. I must add a dependency to SomeSpecificTarget to my project, which is at least inconsistent as I didn't add dependencies to all possible Target implementations
  5. If different types of Target will be available, I will have to write code for all possibilities
  6. If new types of Target become available, I will have to change the code for each new addition to what is essentially third party code

And now I will show you two different versions that I think are good. The first is simple enough:
var target = new TargetFactory().Get(connectionString);
if (target is IAuthenticationTarget authTarget)
{
    authTarget.Authenticate(username, password);
}
target.DoTargetStuff();
No major change other than I am checking if the target implements IAuthenticationTarget (which would best be an interface in the common interface project). Now every target that requires (or will ever require) authentication will receive the credentials without the need to change your code.

The other solution is more complex, but it allows for greater flexibility:
var serviceProvider = new TargetFactory()
    .GetServiceProvider(connectionString);
var target = serviceProvider.Get<ITargetProvider>()
    .Get();
serviceProvider.Get<ICredentialsManager>()
    ?.AddCredentials(target, new Credentials(username, password));
target.DoTargetStuff();
So here I am not getting a target, but a service provider (which is another software pattern, BTW), based on the same connection string. This provider will give me implementations of a target provider and a credentials manager. Now I don't even need to have a credentials manager available: if it doesn't exist, this will do nothing. If I do have one, it will decide by itself what it needs to do with the credentials with a target. Does it need to authenticate now or later? You don't care. You just add the credentials and let the provider decide what needs to be done.

This last approach is related to the concept of inversion of control. Your code declares intent while the framework decides what to do. I don't need to know of the existence of specific implementations of Target or indeed of how credentials are being used.

Here is the final version, using extension methods in a method chaining fashion, similar to jQuery and Entity Framework, in order to reinforce that Ease of use principle:
// your code
var target = new TargetFactory()
    .Get(connectionString)
    .WithCredentials(username, password);

// in a static extensions class

public static Target WithCredentials(this Target target, string username, string password)
{
    target.Get<ICredentialsManager>()
        ?.AddCredentials(target, new Credentials(username, password));
    return target;
}

public static T Get<T>(this Target target)
{
    return target.GetServiceProvider()
        .Get<T>();
}
This assumes that a Target has a method called GetServiceProvider which will return the provider for any interface required so that the whole code is centered on the Target type, not IServiceProvider, but that's just one possible solution.

Conclusion


As long as the principles above are respected, your library should be easy to use and easy to extend without the need to change existing code or consider individual implementations. The projects using it will only use the minimum amount of code required to do the job and will themselves depend only on interface declarations. As long as those are respected, the code will work without change. It's really meta: if you respect the interface described in this blog post, then all interfaces will be respected in the code all the way down! Only some developer locked in a cellar somewhere will need to know how things actually get done.