It all started with the source code for NonCapturingTimer, a static factory class that creates a System.Threading.Timer without capturing the execution context, described as: "A convenience API for interacting with System.Threading.Timer in a way that doesn't capture the ExecutionContext. We should be using this (or equivalent) everywhere we use timers to avoid rooting any values stored in asynclocals." What did that even mean?

An issue opened by David Fowler sheds some light on this: "Any lazy activation of timers will capture the ExecutionContext. Combining this with a lazy initialization of the HttpClient and the handler graph may end up holding onto AsyncLocals for longer than expected. This could end up looking like a memory leak". This follows a Twitter thread from Fowler declaring AsyncLocal as evil.
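
For context, here is a minimal sketch of the pattern such a factory uses, based on the description above (not the literal Microsoft source): suppress ExecutionContext flow around the timer creation so that AsyncLocal values are not captured and rooted by the timer.
public static Timer CreateNonCapturingTimer(TimerCallback callback, object state, TimeSpan dueTime, TimeSpan period)
{
    bool restoreFlow = false;
    try
    {
        // stop the ExecutionContext (and its AsyncLocal values) from flowing into the timer
        if (!ExecutionContext.IsFlowSuppressed())
        {
            ExecutionContext.SuppressFlow();
            restoreFlow = true;
        }
        return new Timer(callback, state, dueTime, period);
    }
    finally
    {
        // restore the flow so the calling code is unaffected
        if (restoreFlow)
        {
            ExecutionContext.RestoreFlow();
        }
    }
}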

There are also multiple issues that have crystallized into a proposal for a future version of .NET: "Timer static Create methods that make rooting behavior explicit".

And if you look at the ASP.NET sources on GitHub, they do use the class, mostly for one-time timer calls and periodic cleanup calls. I should mention that Ben Adams from Microsoft calls this way of creating timers ugly.

I don't have the time to go further down this rabbit hole, but maybe people looking into this will find answers here and comment with their findings.


Update:

The problem as I described it was slightly off. The real issue was that IOptionsSnapshot was registered as Scoped and I was getting the service from the root IServiceProvider. The solution is to call provider.CreateScope() and use the scope's service provider with ActivatorUtilities. Even better, create a scope, then use it to get an instance of a business class, which will then support Scoped services as well as Transient ones, just like a Controller would.

Warning, though: you need to dispose the scope, but you must make sure no service created within it is used after the scope is disposed.
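
A minimal sketch of that fix, where MyObject is the class with the IOptionsSnapshot<ExternalConfiguration> dependency:
// create a scope so that Scoped services (like IOptionsSnapshot<>) can be resolved
using (var scope = _serviceProvider.CreateScope())
{
    var myObject = ActivatorUtilities.CreateInstance<MyObject>(scope.ServiceProvider);
    myObject.DoWork(); // hypothetical method; only use the instance inside the scope
}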

I guess another solution would be to somehow register IOptionsSnapshot<> as Transient, but I haven't tried it.

And now for the original post

I was trying to create an instance of an object from a service provider in order to resolve its dependencies, using ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider), and I was getting the exception:

System.InvalidOperationException
HResult=0x80131509
Message=Cannot resolve scoped service 'Microsoft.Extensions.Options.IOptionsSnapshot`1[ExternalConfiguration]' from root provider.


My object was receiving a parameter of type IOptionsSnapshot<ExternalConfiguration> and upon further investigation, my service provider (which came as a resolution from the dependency injection for IServiceProvider) was actually a ServiceProviderEngineScope which just refused to resolve any IOptionsSnapshot! Funny enough, if I replaced IOptionsSnapshot with IOptionsMonitor, which in my mind is a heavier interface, it worked without issues. Further still, the problem appeared only inside an IHostedService (a BackgroundService hooked up with services.AddHostedService<T>); if I wrote the same code in a controller action, for instance, it worked fine.

The .NET Core 2+ implementation of IOptionsSnapshot<T> is OptionsManager<T>. If I manually resolved an instance of OptionsManager before my object, then added it as a parameter, the code worked:

var optionsSnapshot = ActivatorUtilities.CreateInstance<OptionsManager<TestOptions>>(_serviceProvider);
var myObject = ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, optionsSnapshot);


So, specifically, the issue is that in .NET Core, the service provider implementation cannot resolve IOptionsSnapshot interfaces in worker services. You can still do that manually, but I suspect it is a bug, since there is no problem using an IOptionsMonitor instead of IOptionsSnapshot.

A possible solution is to use an additional service provider only for IOptionsSnapshot. Warning: this will not work in the general case, if the dependencies from the additional service provider also need parameters that would be found in the original service provider:

// initialization code
var serviceCollection = new ServiceCollection();
serviceCollection.AddSingleton(
    typeof(IOptionsSnapshot<>),
    typeof(OptionsManager<>)
);
serviceCollection.AddSingleton(
    typeof(IOptionsFactory<>),
    typeof(OptionsFactory<>)
);
_additionalServiceProvider = serviceCollection.BuildServiceProvider();

// resolution code
var constructor = typeof(MyObject).GetConstructors()
    .Where(ci => ci.IsPublic)
    .Single();
var args = constructor.GetParameters()
    .Select(p =>
    {
        try
        {
            return _serviceProvider.GetService(p.ParameterType);
        }
        catch
        {
            return _additionalServiceProvider.GetService(p.ParameterType);
        }
    })
    .ToArray();
return ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, args);

.NET Core comes with its own dependency injection engine, contained in the Microsoft.Extensions.DependencyInjection package, and ASP.NET Core uses it by default. In a very simplistic description, you add services to an IServiceCollection, then build an IServiceProvider from that list, an interface which returns an implementation for a requested type, or null if it finds none. Changing the list of services after the provider is built is not supported. There are situations, though, where you want to add new services, one of them being dynamically resolving new types.
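
As a reminder, the basic flow looks like this (IMyService and MyService are placeholder types):
var services = new ServiceCollection();
services.AddSingleton<IMyService, MyService>();
var provider = services.BuildServiceProvider();
var myService = provider.GetService(typeof(IMyService)); // null if nothing is registered for the type
// adding to 'services' at this point has no effect on 'provider'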

Therefore I set out to create a custom implementation of IServiceProvider that fixes that, using the mechanisms already existing in .NET Core. Note that this is just something I did out of frustration, "because I could". Most people choose to replace the entire IServiceProvider with an implementation that uses some other DI container, like StructureMap.

The first attempt was proxying a normal ServiceProvider and keeping a reference to the collection. Then I would just change the collection and recreate the service provider. That has two major problems. One is that the previous service provider is not disposed. If you dispose it, you automatically dispose all the services already resolved from it, and if you don't, you keep references to those services. The second, and more dire, is that recreating the service provider will generate new instances of services, even those registered as singletons. That is not good.

I thought of a solution:
  1. keep a list of service providers, instead of just one
  2. use a custom service collection which will let us know when changes occurred
  3. whenever new services are added, add them to a list of new services
  4. whenever a service is resolved, go through the list of providers
  5. if any provider returns a value, provide it
  6. else, if any new services were registered, create a new provider from them and add it to the list
  7. else return null
  8. when disposing, dispose all providers in the list

This works great except the newly added providers are separate from the existing providers so when you try to resolve a type with a second provider and that type has in its constructor a type that was registered in the first provider, you get nothing.

One solution would be to add all services to the second provider, not only the new ones, but then we get back to the original issue of the singletons, only a bit more subtle:
  1. register type1 as a singleton
  2. get an instance of type1 (1)
  3. build the provider
  4. get an instance of type1 (2)
  5. register type2 which receives a type1 in its constructor
  6. get an instance of type2
  7. now, type1 (1) is the same as type1 (2), because it was resolved by the same provider
  8. type1 is different from type2.type1, though, because the latter was resolved as a separate singleton by the second provider in the list (see the sketch after this list)
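
Condensed into code, the scenario above looks something like this (Type1 and Type2 are hypothetical):
var provider1 = new ServiceCollection()
    .AddSingleton<Type1>()
    .BuildServiceProvider();
var first = provider1.GetService<Type1>(); // steps 2 and 4: same instance from provider1

// the second provider is built from ALL the services, old and new
var provider2 = new ServiceCollection()
    .AddSingleton<Type1>()
    .AddSingleton<Type2>() // Type2 receives a Type1 in its constructor
    .BuildServiceProvider();
var second = provider2.GetService<Type2>();

// 'first' is NOT the Type1 instance injected into 'second',
// because each provider tracks its own singletons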

One solution would be to add all previous services as factories, then. For IType1, instead of returning typeof(Type1), return a factory method that resolves the value with our system. And it works... until it reaches a definition (like IOptions) that was registered as an open generic: services.AddSingleton(typeof(IType3<>), typeof(Type3<>)). In the case of open generics, you cannot use a descriptor with a factory, because the factory returns an object, regardless of the generic type argument used. It would not do to return a Type3<Banana> for a requested type of IType3<int>, as shown below.
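
In other words (IType3<>/Type3<> being the hypothetical types from above), only the first of these registrations is possible:
// open generic mapped to an open generic implementation: supported
services.AddSingleton(typeof(IType3<>), typeof(Type3<>));

// open generic mapped to a factory: not possible, because the factory
// cannot know which closed type (IType3<int>, IType3<Banana>...) is requested
// services.AddSingleton(typeof(IType3<>), provider => /* a Type3<what?> */);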

So, final version is this:
  1. keep a list of service providers, instead of just one
  2. keep a dictionary of the last object resolved for a type
  3. use a custom service collection which will let us know when changes occurred
  4. whenever new services are added, add them to a list of new services
  5. whenever a service is resolved, go through the list of providers
  6. if any provider returns a value, return it
  7. if no new services registered return null
  8. create a new provider from all the services like this:
    • if it's a new registration, use it as is
    • if it's an open generic definition type:
      • if singleton, add first all the existing resolutions for types that are defined by it
      • use the original descriptor afterwards
    • use a registration that proxies to the advanced resolution mechanism we created
  9. when disposing, dispose all providers in the list

This implementation also has a flaw: if a dependency parameter with a generic type definition descriptor was resolved as a singleton by an additional service provider, then is requested directly and can be resolved by a previous provider, it will return a different instance. Here is the scenario:
  1. the initial provider knows to map I<> to M<>
  2. you add a new singleton mapping from X to Y and Y gets a constructor parameter of type I<Z>
  3. you request an instance of X
  4. the first provider cannot resolve it
  5. the second provider can resolve it, therefore it will also resolve a I<Z> as an M<Z> singleton instance
  6. you request an instance of I<Z>
  7. the first provider can resolve it, therefore it will return a NEW singleton instance of M<Z>

This is an edge case that I don't have the time to solve. So, with the caveat above, here is the final version.
Use it like this:
// IAdvancedServiceProvider either injected 
// or resolved via serviceProvider.GetService<IAdvancedServiceProvider>
// or even serviceProvider as IAdvancedServiceProvider
advancedServiceProvider.ServiceCollection.AddSingleton...

And this is the source code:
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public interface IAdvancedServiceProvider : IServiceProvider
{
    /// <summary>
    /// Add services to this collection
    /// </summary>
    IServiceCollection ServiceCollection { get; }
}

/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public class AdvancedServiceProvider : IAdvancedServiceProvider, IDisposable
{
    private readonly List<ServiceProvider> _serviceProviders;
    private readonly NotifyChangedServiceCollection _services;
    private readonly object _servicesLock = new object();
    private List<ServiceDescriptor> _newDescriptors;
    private Dictionary<Type, object> _resolvedObjects;

    /// <summary>
    /// Initializes a new instance of the <see cref="AdvancedServiceProvider"/> class.
    /// </summary>
    /// <param name="services">The services.</param>
    public AdvancedServiceProvider(IServiceCollection services)
    {
        // registers itself in the list of services
        services.AddSingleton<IAdvancedServiceProvider>(this);

        _serviceProviders = new List<ServiceProvider>();
        _newDescriptors = new List<ServiceDescriptor>();
        _resolvedObjects = new Dictionary<Type, object>();
        _services = new NotifyChangedServiceCollection(services);
        _services.ServiceAdded += ServiceAdded;
        _serviceProviders.Add(services.BuildServiceProvider(true));
    }

    private void ServiceAdded(object sender, ServiceDescriptor item)
    {
        lock (_servicesLock)
        {
            _newDescriptors.Add(item);
        }
    }

    /// <summary>
    /// Add services to this collection
    /// </summary>
    public IServiceCollection ServiceCollection { get => _services; }

    /// <summary>
    /// Gets the service object of the specified type.
    /// </summary>
    /// <param name="serviceType">An object that specifies the type of service object to get.</param>
    /// <returns>A service object of type serviceType. -or- null if there is no service object of type serviceType.</returns>
    public object GetService(Type serviceType)
    {
        lock (_servicesLock)
        {
            // go through the service provider chain and resolve the service
            var service = GetServiceInternal(serviceType);
            // if service was not found and we have new registrations
            if (service == null && _newDescriptors.Count > 0)
            {
                // create a new service collection in order to build the next provider in the chain
                var newCollection = new ServiceCollection();
                foreach (var descriptor in _services)
                {
                    foreach (var descriptorToAdd in GetDerivedServiceDescriptors(descriptor))
                    {
                        ((IList<ServiceDescriptor>)newCollection).Add(descriptorToAdd);
                    }
                }
                var newServiceProvider = newCollection.BuildServiceProvider(true);
                _serviceProviders.Add(newServiceProvider);
                _newDescriptors = new List<ServiceDescriptor>();
                service = newServiceProvider.GetService(serviceType);
            }
            if (service != null)
            {
                _resolvedObjects[serviceType] = service;
            }
            return service;
        }
    }

    private IEnumerable<ServiceDescriptor> GetDerivedServiceDescriptors(ServiceDescriptor descriptor)
    {
        if (_newDescriptors.Contains(descriptor))
        {
            // if it's a new registration, just add it
            yield return descriptor;
            yield break;
        }

        if (!descriptor.ServiceType.IsGenericTypeDefinition)
        {
            // for a non open generic descriptor, register a factory that goes through our resolution chain
            yield return ServiceDescriptor.Describe(
                descriptor.ServiceType,
                _ => GetServiceInternal(descriptor.ServiceType),
                descriptor.Lifetime
            );
            yield break;
        }
        // if the registered service type for a singleton is an open generic type
        // we register as factories all the already resolved specific types that fit this definition
        if (descriptor.Lifetime == ServiceLifetime.Singleton)
        {
            foreach (var servType in _resolvedObjects.Keys.Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == descriptor.ServiceType))
            {
                yield return ServiceDescriptor.Describe(
                    servType,
                    _ => _resolvedObjects[servType],
                    ServiceLifetime.Singleton
                );
            }
        }
        // then we add the open type registration for any new types
        yield return descriptor;
    }

    private object GetServiceInternal(Type serviceType)
    {
        foreach (var serviceProvider in _serviceProviders)
        {
            var service = serviceProvider.GetService(serviceType);
            if (service != null)
            {
                return service;
            }
        }
        return null;
    }

    /// <summary>
    /// Dispose the provider and all resolved services
    /// </summary>
    public void Dispose()
    {
        lock (_servicesLock)
        {
            _services.ServiceAdded -= ServiceAdded;
            foreach (var serviceProvider in _serviceProviders)
            {
                try
                {
                    serviceProvider.Dispose();
                }
                catch
                {
                    // singleton classes might be disposed twice and throw some exception
                }
            }
            _newDescriptors.Clear();
            _resolvedObjects.Clear();
            _serviceProviders.Clear();
        }
    }

    /// <summary>
    /// An IServiceCollection implementation that exposes a ServiceAdded event for added service descriptors
    /// The collection doesn't support removal or inserting of services
    /// </summary>
    private class NotifyChangedServiceCollection : IServiceCollection
    {
        private readonly IServiceCollection _services;

        /// <summary>
        /// Fired when a descriptor is added to the collection
        /// </summary>
        public event EventHandler<ServiceDescriptor> ServiceAdded;

        /// <summary>
        /// Initializes a new instance of the <see cref="NotifyChangedServiceCollection"/> class.
        /// </summary>
        /// <param name="services">The services.</param>
        public NotifyChangedServiceCollection(IServiceCollection services)
        {
            _services = services;
        }

        /// <summary>
        /// Get the value at index
        /// Setting is not supported
        /// </summary>
        public ServiceDescriptor this[int index]
        {
            get => _services[index];
            set => throw new NotSupportedException("Inserting services in collection is not supported");
        }

        /// <summary>
        /// Count of services in the collection
        /// </summary>
        public int Count { get => _services.Count; }

        /// <summary>
        /// Obviously not
        /// </summary>
        public bool IsReadOnly { get => false; }

        /// <summary>
        /// Adding a service descriptor will fire the ServiceAdded event
        /// </summary>
        /// <param name="item"></param>
        public void Add(ServiceDescriptor item)
        {
            _services.Add(item);
            ServiceAdded?.Invoke(this, item);
        }

        /// <summary>
        /// Clearing the collection is not supported
        /// </summary>
        public void Clear() => throw new NotSupportedException("Removing services from collection is not supported");

        /// <summary>
        /// True if the item exists in the collection
        /// </summary>
        public bool Contains(ServiceDescriptor item) => _services.Contains(item);

        /// <summary>
        /// Copy items to array of service descriptors
        /// </summary>
        public void CopyTo(ServiceDescriptor[] array, int arrayIndex) => _services.CopyTo(array, arrayIndex);

        /// <summary>
        /// Enumerator for service descriptors
        /// </summary>
        public IEnumerator<ServiceDescriptor> GetEnumerator() => _services.GetEnumerator();

        /// <summary>
        /// Index of item in the list
        /// </summary>
        public int IndexOf(ServiceDescriptor item) => _services.IndexOf(item);

        /// <summary>
        /// Inserting is not supported
        /// </summary>
        public void Insert(int index, ServiceDescriptor item) => throw new NotSupportedException("Inserting services in collection is not supported");

        /// <summary>
        /// Removing items is not supported
        /// </summary>
        public bool Remove(ServiceDescriptor item) => throw new NotSupportedException("Removing services from collection is not supported");

        /// <summary>
        /// Removing items is not supported
        /// </summary>
        public void RemoveAt(int index) => throw new NotSupportedException("Removing services from collection is not supported");

        /// <summary>
        /// Enumerator for objects
        /// </summary>
        IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable)_services).GetEnumerator();
    }
}

We already know how to load types in .NET Framework and we know what they say we should use in .NET Core. But what about .NET Standard? Is that a trick question? Sort of. Right now we have two .NET Standard versions and three .NET Core versions, although .NET Core 3 is still in preview. The signature of AssemblyLoadContext and the way it is used have changed dramatically. Core 3 enables context unloading, but Standard 2 does not. So you are either forced to build your library as Core 3, or you have to do without unloading contexts, or use reflection, which is not robust and probably will not be needed with the possible arrival of Standard 3.

But there are subtler issues at work. One of them is that, at least with .NET Core 3 Preview 6, when you reference System.Runtime.Loader in a Standard library so you can access AssemblyLoadContext, you get conflicts between the System.Runtime you are using and the one referenced by System.Runtime.Loader. The only solution is to use the System.Runtime.Loader NuGet package, but that returns you to the Standard 2 version of AssemblyLoadContext, even if the library version is higher!

The setup is this: I have an ITestInterface interface which resides in TestInterfaceLibrary.dll. I also have a TestImplementation class that can be found in TestImplementationLibrary.dll and implements ITestInterface. My program either does not reference any of these libraries or it only references the interface one. The task is to load both these types and then simply convert one instance of TestImplementation to ITestInterface. Simple test would be loading the types and then expecting interfaceType.IsAssignableFrom(implementationType) to be true.

Core 3


Let's first try the Core 3 way:
var context = new AssemblyLoadContext("testContext", true);
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface");
Console.WriteLine(interfaceType?.ToString()??"interface type not loaded");
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
Console.WriteLine("implementation implements interface: "+interfaceType.IsAssignableFrom(implementationType));
 
context.Unload();
The output is:
TestInterfaceLibrary.ITestInterface
TestImplementationLibrary.TestImplementation
implementation implements interface: True

It works! But only because the interface assembly is loaded first. If you try to load just the implementation type first, it will come up as empty. There are no exceptions thrown unless you get all the assembly types or specify the throwOnError parameter in GetType. The exception is "System.IO.FileNotFoundException: 'Could not load file or assembly 'TestInterfaceLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'".
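
To illustrate the difference, a quick sketch using the same variables as above:
// returns null silently when the TestInterfaceLibrary dependency cannot be resolved
var silentType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");

// throws the FileNotFoundException above instead of returning null
var loudType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", throwOnError: true);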

In order to solve this, we need to use the Resolving event of the AssemblyLoadContext class. Let's try this:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
 
private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
    var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
    return context.LoadFromAssemblyPath(expectedPath);
}

And now it works again, by assuming the assembly name is the same as the assembly file name and that it is found in the same place.

But... if we try this in different contexts:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
context.Resolving -= Context_Resolving;
context.Unload();
context = new AssemblyLoadContext("testContext2", true);
context.Resolving += Context_Resolving;
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
the output will show
implementation implements interface: False

This means that if we want to encapsulate this in a TypeLoader class or something, we cannot use different contexts for dynamically loading types. Even if we had one context that we would unload in order to refresh all the types, it could still be different from the main context, in case the interface is loaded twice or referenced directly in the project.

For example, if you reference TestInterfaceLibrary directly and you load TestImplementation dynamically it will work as expected, because ITestInterface is resolved automatically from the main context. However, if you load ITestInterface dynamically, too, it will be a different type from the referenced ITestInterface, even if they apparently have the same name and full name and assembly qualified name! So it kind of makes sense to not load a type twice. Is this where the context unloading comes in? Not really. Let's define a method that counts the number of types with a certain name in the current domain as
private static int CountTypes(string typeName)
{
    return AppDomain.CurrentDomain.GetAssemblies()
        .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
        .Count();
}

And now let's run this code:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var referencedInterfaceType = typeof(ITestInterface);
Console.WriteLine(referencedInterfaceType?.ToString() ?? "interface type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine($"Types are the same: {interfaceType==referencedInterfaceType}");
 
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");
 
context.Resolving -= Context_Resolving;
context.Unload();
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");

There is the referenced type, then we load the type dynamically again, inside a new context. We count the types loaded in the current domain, we unload the context, we count the types again. The result is
TestInterfaceLibrary.ITestInterface
TestInterfaceLibrary.ITestInterface
Types are the same: False
Number of types with name TestInterfaceLibrary.ITestInterface: 2
Number of types with name TestInterfaceLibrary.ITestInterface: 2
The types are always 2!

Bottom line, even when unloading the AssemblyLoadContext, the types used are not unloaded and trying to find a type by name will result in duplicates.

OK, so let's just agree that types with the same name, once loaded, should remain there and no other type with the same name should be loaded. Let's try to incorporate this into a TypeLoader class:
public class TypeLoader : IDisposable
{
    private readonly AssemblyLoadContext _context;

    public TypeLoader()
    {
        _context = new AssemblyLoadContext(GetType().FullName, true);
        _context.Resolving += Context_Resolving;
    }

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
            .FirstOrDefault();
        if (type != null)
        {
            return type;
        }
        var assembly = _context.LoadFromAssemblyPath(assemblyPath);
        return assembly.GetType(typeName, true);
    }

    public void Dispose()
    {
        _context.Resolving -= Context_Resolving;
        _context.Unload();
    }
}

The code in our test is now much clearer:
var interfaceAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestInterfaceLibrary.dll");
var implementationAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestImplementationLibrary.dll");
var interfaceTypeName = "TestInterfaceLibrary.ITestInterface";
var implementationTypeName = "TestImplementationLibrary.TestImplementation";

using (var loader = new TypeLoader())
{
    Type referencedType = typeof(TestInterfaceLibrary.ITestInterface);
    var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
    var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
    Console.WriteLine($@"
referenced type: {referencedType}
interface type: {interfaceType}
implementation type: {implementationType}
referenced and loaded interfaces are the same: {referencedType == interfaceType}
interface implemented: {interfaceType.IsAssignableFrom(implementationType)}"
    );
}
and the result is
referenced type: TestInterfaceLibrary.ITestInterface
interface type: TestInterfaceLibrary.ITestInterface
implementation type: TestImplementationLibrary.TestImplementation
referenced and loaded interfaces are the same: True
interface implemented: True

But we still use Unload. Maybe it will work some day as I want it to work, but until then, why not get rid of Unload and make TypeLoader a class in a Standard 2 library?

Standard 2


For this I will create a new Standard 2 library project and then reference it in our test Core 3 project. Then I will move the TypeLoader class in the library project.

The errors in the library project are related to not knowing what an AssemblyLoadContext is, therefore the first solution is to reference System.Runtime.Loader from the framework. I get the immediate error "Assembly 'System.Runtime.Loader' with identity 'System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' uses 'System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version than referenced assembly 'System.Runtime' with identity 'System.Runtime, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".

Solution 2: load the System.Runtime.Loader NuGet package, which at the time of writing this, is version 4.3.0. The error is now gone, but several things are immediately apparent:
  1. the Unload method doesn't exist anymore
  2. the constructor doesn't receive a name and a bool anymore
  3. AssemblyLoadContext is now abstract

In order to solve this I am creating a DynamicAssemblyLoadContext class that inherits from AssemblyLoadContext, returns null from the Load method override, and gets an Unload method and a constructor with a string and a bool that don't do anything. And it works again. The updated TypeLoader class is now:
public class TypeLoader : IDisposable
{
    private readonly DynamicAssemblyLoadContext _context;

    public TypeLoader()
    {
        _context = new DynamicAssemblyLoadContext(GetType().FullName, true);
        _context.Resolving += Context_Resolving;
    }

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
            .FirstOrDefault();
        if (type != null)
        {
            return type;
        }
        var assembly = _context.LoadFromAssemblyPath(assemblyPath);
        return assembly.GetType(typeName, true);
    }

    public void Dispose()
    {
        _context.Resolving -= Context_Resolving;
        _context.Unload();
    }

    private class DynamicAssemblyLoadContext : AssemblyLoadContext
    {
        public DynamicAssemblyLoadContext(string name, bool isCollectible)
        {
        }

        protected override Assembly Load(AssemblyName assemblyName)
        {
            return null;
        }

        public void Unload()
        {
        }
    }
}

The safe way


The code above has an issue, though. If the interface type is dynamically loaded before its referenced type is used, this fails again. This is the case when you use dependency injection. You dynamically load the types in order to register the implementation relationship to the interface, but then, when you ask for a resolution for the interface type, now referenced by the main project, you get another type named just the same.

The safe way: considering that we don't really use Unload and can't count on it ever working, why not use the default context, the one where everything loads, and be done with it? When you do that, the code becomes a little uglier, but it works in all situations.

Final version:
public class TypeLoader
{
    private readonly object _resolutionLock = new object();

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var context = AssemblyLoadContext.Default;
        lock (_resolutionLock)
        {
            context.Resolving += Context_Resolving;
            try
            {
                var type = AppDomain.CurrentDomain.GetAssemblies()
                    .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
                    .FirstOrDefault();
                if (type != null)
                {
                    return type;
                }
                var assembly = context.LoadFromAssemblyPath(assemblyPath);
                return assembly.GetType(typeName, true);
            }
            finally
            {
                // always detach the handler, even on the early return above
                context.Resolving -= Context_Resolving;
            }
        }
    }
}

You just gotta hate that adding and removing the event inside a lock, right? Well, if you find a better solution, let me know.


The Problem


Phew, that's a mouthful. But the issue is that trying to serialize a FileInfo or a DirectoryInfo object with Newtonsoft's Json library in .NET Core fails with a vague exception:
Newtonsoft.Json.JsonSerializationException: Unable to serialize instance of 'System.IO.DirectoryInfo'.
at Newtonsoft.Json.Serialization.DefaultContractResolver.ThrowUnableToSerializeError(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonContract.InvokeOnSerializing(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.OnSerializing(JsonWriter writer, JsonContract contract, Object value)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)

It doesn't say why it fails, just that a method called ThrowUnableToSerializeError threw um... an unable to serialize error?

The Cause


Looking at the Newtonsoft code, we eventually get to this piece of code:
// serializing DirectoryInfo without ISerializable will stackoverflow
// https://github.com/JamesNK/Newtonsoft.Json/issues/1541
if (Array.IndexOf(BlacklistedTypeNames, objectType.FullName) != -1)
{
    contract.OnSerializingCallbacks.Add(ThrowUnableToSerializeError);
}

Later, another piece of code will execute the serializing callbacks and throw the exception. We can get rid of this functionality, by using a custom contract resolver, like this:
var settings = new JsonSerializerSettings
{
    ContractResolver = new FileInfoContractResolver()
};

private class FileInfoContractResolver : DefaultContractResolver
{
    protected override JsonContract CreateContract(Type objectType)
    {
        var result = base.CreateContract(objectType);
        if (typeof(FileSystemInfo).IsAssignableFrom(objectType))
        {
            result.OnSerializingCallbacks.Clear();
        }
        return result;
    }
}

Yet now, when trying to serialize, we get the stack overflow exception described in the original Newtonsoft.Json issue. It stems from the difference between the .NET Framework implementation and the .NET Core implementation of ISerializable in FileSystemInfo, which in Core just throws PlatformNotSupportedException. It's still not clear why it goes to a StackOverflowException, probably some conflict with Newtonsoft code, but it's clear Microsoft does not intend to make these classes serializable. If you think about it, those classes suck for so many reasons!

The Solution


So, in order to solve it, we will use a custom JSON converter:
private class FileSystemInfoConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return typeof(FileSystemInfo).IsAssignableFrom(objectType);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        if (reader.TokenType == JsonToken.Null)
            return null;
        var jObject = JObject.Load(reader);
        var fullPath = jObject["FullPath"].Value<string>();
        return Activator.CreateInstance(objectType, fullPath);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        var info = value as FileSystemInfo;
        var obj = info == null
            ? null
            : new
            {
                FullPath = info.FullName
            };
        var token = JToken.FromObject(obj);
        token.WriteTo(writer);
    }
}
And we use it like this:
var settings = new JsonSerializerSettings
{
    Converters = new List<JsonConverter>
    {
        new FileSystemInfoConverter()
    }
};
var json = JsonConvert.SerializeObject(dir, settings);
var info = JsonConvert.DeserializeObject<DirectoryInfo>(json, settings);

Why FileInfo and DirectoryInfo suck


The answer of a senior developer to any question should be "Why?" or "Why on Earth or anywhere in the Solar System would you want to do a dumb thing like that?!?!". Why would you want to serialize a directory or file info object? The answer is that you should not. The info objects are defined by only one thing: a path, but they have so much baggage: properties that access the file system, unsafe methods, no interfaces or factory methods that can allow them to be mocked in unit tests. They might look like data objects, but they are not!

Imagine a scenario where you have a list of all the files in your drive. You enumerated them all and now you want to serialize them. Should the serializer save Exists or Length, for example? Because that means it will access the file system for each of them in the process of serialization, leading to a lot of work, propensity to access errors and so on.

Best practices say you should use model classes to move data around, like a simple FileSystemInfoModel with Type and FullPath and maybe Attributes or Size properties or whatever you want to save, but which you populate yourself as a separate responsibility. And if you want the functionality of the Info classes, use System.IO.Abstractions or the new Core IFileProvider abstraction to get implementations of interfaces that you can mock in unit tests.

Tell me what you think.

This post starts from a simple question: how do I start a task with timeout? You go to StackOverflow, of course, and find this answer: Asynchronously wait for Task<T> to complete with timeout. It's an elegant solution, mainly to also start a Task.Delay and continue when either task completes. However, in order to cancel the initial operation, one needs to pass the cancellation token to the original task and manually handle it, meaning polluting the entire business code with cancellation logic. This might be OK, yet are there alternatives?

But, isn't there the Task.Run(action) method that also accepts a CancellationToken? Yes, there is, and if you thought this runs an action until you cancel it, think again. Here is what Task.Run says it does: "Queues the specified work to run on the thread pool and returns a Task object that represents that work. A cancellation token allows the work to be cancelled." and if you scroll down to Remarks, here is what it actually does: "If cancellation is requested before the task begins execution, the task does not execute. Instead it is set to the Canceled state and throws a TaskCanceledException exception". You read that right: the token is only taken into account when the task starts running, not while it is actually executing.
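
Here is a small demonstration of that remark; assuming the delegate has already started by the time Cancel is called, it simply runs to completion:
var cts = new CancellationTokenSource();
var task = Task.Run(() =>
{
    // no cooperation with the token, so nothing can interrupt this
    Thread.Sleep(5000);
    Console.WriteLine("Ran to completion anyway!");
}, cts.Token);

cts.Cancel(); // too late: the work was already dequeued and started
task.Wait();  // waits the full five seconds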

Surely, then, there must be a way to cancel a running Task. How about Task.Dispose()? Dispose throws a funny exception if you try it: "System.InvalidOperationException: 'A task may only be disposed if it is in a completion state (RanToCompletion, Faulted or Canceled).'". In normal speech, it means "Fuck you!". If you think about it, how would you abort a task execution? What if it does nasty things, leaves resources occupied, has to clean up after it? The .NET team took the safe path and refused to give you an out of the box unsafe cancelling mechanism.

So, what is the solution? The recommended one is that you pass the token to all methods that can be cancelled and then check inside if cancellation was requested. Of course this only works if
  1. you control what the task does
  2. you can split the operation into small chunks that are either executed sequentially or in a loop so you interrupt their flow
If you have something like an external process being executed, or a long running operation, you are almost out of luck. Why almost? Well, CancellationTokenSource and CancellationToken do not expose events, but the token exposes a "wait handle" that you can wait on synchronously. And here it gets funky. Check out an example of a method that executes some long running action and can react to token cancelling:
/// <summary>
/// Executes the long running action and cancels it when needed
/// </summary>
/// <param name="token"></param>
private void LongRunningAction(CancellationToken token)
{
    // instantiate a container and keep its reference
    var container = new IdentificationContainer();
    Task.Run(() =>
    {
        // wait until the token gets cancelled on another thread
        token.WaitHandle.WaitOne();
        // this will use the information in the container to kill the action
        // (presumably by interrupting external processes or sending some kill signal)
        KillLongRunningAction();
    });
    // this executes the action and populates the identification container if needed
    RunLongRunningAction();
}
This introduces some other issues, like what happens to the monitoring task if you never cancel the token or dispose of the cancellation source, but that's a bit too deep.

In the code above we get a sort of a solution if we can control the code and we can actually cancel things gracefully inside of it. But what if I can't (or won't)? Can I get something that does what I wanted Task.Run to do: execute something and, when I cancel it, stop it from executing?

And the answer, using what we learned above, is yes, but as explained at the beginning, it may have effects like resource leaks. Here it is:
/// <summary>
/// Run an action and kill it when canceling the token
/// </summary>
/// <param name="action">The action to execute</param>
/// <param name="token">The token</param>
/// <param name="waitForGracefulTermination">If set, the task will be killed with delay so as to allow the action to end gracefully</param>
private static Task RunCancellable(Action action, CancellationToken token, TimeSpan? waitForGracefulTermination = null)
{
    // we need a thread, because Tasks cannot be forcefully stopped
    var thread = new Thread(new ThreadStart(action));
    // we hold the reference to the task so we can check its state
    Task task = null;
    task = Task.Run(() =>
    {
        // task monitoring the token
        Task.Run(() =>
        {
            // wait for the token to be canceled
            token.WaitHandle.WaitOne();
            // if we wanted graceful termination we wait
            // in this case, action needs to know about the token as well and handle the cancellation itself
            if (waitForGracefulTermination != null)
            {
                Thread.Sleep(waitForGracefulTermination.Value);
            }
            // if the task has not ended, we kill the thread
            if (!task.IsCompleted)
            {
                thread.Abort();
            }
        });
        // simply start the thread (and the action)
        thread.Start();
        // and wait for it to end so we return to the current thread
        thread.Join();
        // throw exception if the token was canceled
        // this will not be reached unless the thread completes or is aborted
        token.ThrowIfCancellationRequested();
    }, token);
    return task;
}

As you can see, the solution is to run the action on a thread and then manually kill the thread. This means that any control over where and how the action is executed is wrested from the default TaskScheduler and given to you. Also, in order to force the stopping of the task, you use Thread.Abort, which may have nasty side effects. Here is what Microsoft says about it:

(Quoted from the Microsoft documentation: Thread.Abort is not supported on .NET Core and throws a PlatformNotSupportedException.)

Bummer! .NET Core doesn't want you to kill threads. However, if you are really determined, there is a way :) Use ThreadEx.Abort(thread);


Bonus code: How do you get the cancellation token if you have the task?
var token = new TaskCanceledException(task).CancellationToken;
It might not help too much, especially if you want to use it inside the task itself, but it might help clean up the code.

Conclusion


Just like async/await, using the provided cancellation token method will only pollute your code with little effect. However, considering you want to use a common interface for the purpose, use RunCancellable instead of Task.Run and handle the token manually whenever you feel resources have been allocated and need to be cleaned up first.

Sometimes you get an annoying error after updating your .NET Framework or some of the packages or libraries in your project: "Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: <name-of-nuget-package>".

The problem stems from the fact that NuGet packages have variants for different .NET flavors, and in your project they are "hinted" at by the <HintPath> child element of the <Reference> elements in your .csproj file. Somehow, the hint still points to a different variant than the one you need and that's why you get this error. The full explanation can be found in this great post: Why, when and how to reinstall NuGet packages after upgrading a project, but just in case his blog disappears (as so many great ones did in the past), here is the gist of the solution:

In Visual Studio go to Tools → NuGet Package Manager → Package Manager Console and type:
Update-Package <name-of-nuget-package> -Reinstall -ProjectName <name-of-project>

To add some value to Derriey's post, you can solve all the similar issues in your solution by copying the entire list of errors from all projects by going to the Output pane, selecting them all and right clicking Copy, then run search and replace in your favorite editor with this regular expression:
^.*?Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information.  Packages affected: ((?:[^,\s]+(?:, )?)+)\t([^\t]+)\t\t\d+\t\t$
and replacement pattern
Update-Package $1 -Reinstall -ProjectName $2

Then make sure there is only one project on each line, copy paste the result into the Package Manager Console window and the entire solution will get fixed.

Example:
Error Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: Microsoft.Extensions.Configuration, Serilog  MyProject.Common  0

turns into:
Update-Package Microsoft.Extensions.Configuration, Serilog -Reinstall -ProjectName MyProject.Common

Since Update-Package only supports one package per command and regex replace doesn't have a syntax for multiple captures in the same group, you will have to manually turn this into:
Update-Package Microsoft.Extensions.Configuration -Reinstall -ProjectName MyProject.Common
Update-Package Serilog -Reinstall -ProjectName MyProject.Common

Copy and paste the result and the two packages will be reinstalled in the affected project of your solution.

I spent hours trying to manually fix the assembly redirects in a web.config, only to give up and use the default Add-BindingRedirect in the NuGet package manager. And it worked! I have no idea if this won't break something else, but I got it from Rick Strahl's blog and it worked for me. More in his article. Thanks, Rick!

One thing to remember is that you first have to delete the dependentAssembly elements from the .config file in order for the command to work.

Sonar Source code static analysis rule RSPEC-3906 states:
Delegate event handlers (i.e. delegates used as type of an event) should have a very specific signature:
  • Return type void.
  • First argument of type System.Object and named 'sender'.
  • Second argument of type System.EventArgs (or any derived type) and is named 'e'.


The problem was that I was getting the warning on a simple event declared as EventHandler<TEventArgs>. Going to its source code page revealed the reason in a comment: // Removed TEventArgs constraint post-.NET 4.
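
For reference, this is the difference that comment describes, and the kind of declaration the rule can now flag (Notifier is a made-up class):
// old definition: public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e)
//                 where TEventArgs : EventArgs;
// current definition: the constraint is gone, so this compiles, but analyzers complain
public class Notifier
{
    public event EventHandler<string> TextChanged; // string does not derive from EventArgs

    public void Notify(string text) => TextChanged?.Invoke(this, text);
}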

This is another post discussing a static analysis rule that made me learn something new. SonarSource rule RSPEC-3898 says: If you're using a struct, it is likely because you're interested in performance. But by failing to implement IEquatable<T> you're losing performance when comparisons are made because without IEquatable<T>, boxing and reflection are used to make comparisons.

There is a StackOverflow entry that discusses just this, and the answer to this particular problem is not actually the accepted one. In pure StackOverflow fashion I will quote the relevant bit of the answer, just in case the site goes offline in the future: I'm amazed that the most important reason is not mentioned here. IEquatable<> was introduced mainly for structs for two reasons:
  1. For value types (read structs) the non-generic Equals(object) requires boxing. IEquatable<> lets a structure implement a strongly typed Equals method so that no boxing is required.
  2. For structs, the default implementation of Object.Equals(Object) (which is the overridden version in System.ValueType) performs a value equality check by using reflection to compare the values of every field in the type. When an implementer overrides the virtual Equals method in a struct, the purpose is to provide a more efficient means of performing the value equality check and optionally to base the comparison on some subset of the struct's field or properties.

I thought this was worth mentioning, for those performance critical struct equality scenarios.
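
For illustration, a minimal sketch of a struct that follows the rule (Point is a made-up type):
public readonly struct Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // strongly typed comparison: no boxing, no reflection
    public bool Equals(Point other) => X == other.X && Y == other.Y;

    // keep the object-based members consistent with the typed version
    public override bool Equals(object obj) => obj is Point other && Equals(other);
    public override int GetHashCode() => (X * 397) ^ Y;
}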

I was playing with code analysis rule sets in Visual Studio (see my blog post about it) and I got hit by some conflicting rules. I will discuss only SonarSource rules, but a lot of other analyzers have similar rules.

OK, one of them is something that I intuitively thought was universally good: RSPEC-3962: "static readonly" constants should be "const" instead. Makes sense, right? A constant is compiled better, integrated faster, it's a constant! No overhead, nothing changes it. This rule was marked as a minor improvement to the code, anyway.

Then, bam!, RSPEC-2339: Public constant members should not be used. Critical rule! Basically it says the opposite: turn your constant into static readonly. What's going on?!

This is not one of those pairs of rules that contradict each other based on user preference, like using var instead of the type name when the type is obvious and vice versa. These are two different, apparently conflicting, yet complementary concepts.

But what is really the difference between a static readonly field and a constant, other than the fact that constants can only be value types (or strings)? Constant values are retrieved at compile time, as an optimization, since they are not expected to change, while static readonly values are retrieved at runtime. This means that if you use a library in your project, the constants it declares will be baked into your application when you compile it. You may change the .dll of the library afterwards, with inconsistent results, since readonly statics will now have the changed values and the constants will not.

Here, an example. In the creatively named project Library there is a Container class with a public constant ingeniously named Constant and a public static readonly field that has the same value as Constant.
namespace Library
{
    public class Container
    {
        public const int Constant = 1;
        public static readonly int StaticReadonly = Constant;
    }
}

Then there is a program that uses these two values to display them:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine($"Container.Constant: {Container.Constant} Container.StaticReadonly: {Container.StaticReadonly}");
        Console.ReadKey();
    }
}

The expected output is Container.Constant: 1 Container.StaticReadonly: 1. Now change the value of Constant to 2, right click the Library project and only build it, not the program. Then take the resulting .dll, copy it into the bin folder of the program and run it manually. The output is now... Container.Constant: 1 Container.StaticReadonly: 2, even though the code says StaticReadonly = Constant;.

Conclusion: public constants should be avoided if they are used between projects and since you don't know where they will be used, better to avoid them at all times. This will really annoy people who like to create separate classes to store constants, but that's OK, because the feeling is mutual.

So I was watching this Entity Framework presentation and I noticed one example that looked like this:
db.ExecuteSqlCommand($"delete from Log where Time<{time}");

Was this an invitation to SQL injection? Apparently not, since the resulting SQL was something like DELETE FROM Log WHERE Time < @_p0. But how could that be? Enter FormattableString, a class implementing the venerable IFormattable interface, available in .NET Framework only from version 4.6 and in .NET Core from the very beginning. Apparently, when an interpolated string is assigned to a FormattableString, it is compiled into an instance that holds the format and the argument values separately, before any formatting takes place. In our case ExecuteSqlCommand has a FormattableString overload. Note that the method is an extension method from RelationalDatabaseFacadeExtensions, not Database.ExecuteSqlCommand.

Let's test this with a little program:
class Program
{
    static void Main(string[] args)
    {
        var timeDisplay = new TimeDisplay();
        Test($"Time display:{timeDisplay}");
        Console.ReadKey();
    }

    private static void Test(string text)
    {
        Console.WriteLine(text);
    }

    private class TimeDisplay
    {
        public override string ToString()
        {
            return DateTime.Now.ToString("s");
        }
    }
}

Here I create an instance of TimeDisplay and then use it in an interpolated string which is then sent to the Test method, which Console.WriteLines it. The ToString method of TimeDisplay is overridden to display the current time. The result is predictable: Time display:2018-12-13T11:24:02. I will then change the type of the parameter of Test to be FormattableString. It still works and it displays the same thing. Note that if I have both a FormattableString and a string version of the same method, string will be used first when an interpolated string is sent as a parameter!

But what do I get in that instance? Let's change the Test method even more:
private static void Test(FormattableString text)
{
    Console.WriteLine($"Format: {text.Format} " +
        $"ArgumentCount: {text.ArgumentCount} " +
        $"Arguments: {string.Join(", ", text.GetArguments())}");
}

The displayed result of the program is now Format: Time display:{0} ArgumentCount: 1 Arguments: 2018-12-13T11:28:35. Note that the argument is in fact a TimeDisplay instance and it is displayed as a time stamp because of the ToString override.

What does this mean?

Well, we can do great things like Entity Framework does, interpreting the intent of the developer and providing a more informed output. I am considering this as a solution for logging. Logger.LogDebug($"{someObjectWithAHeavyToString}") now doesn't have to execute the ToString() method of the object unless the Debug log level is enabled, for example.
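
A hedged sketch of that idea; the helper is deliberately not named LogDebug because, as noted above, the string overload would win for interpolated strings:
using Microsoft.Extensions.Logging;

public static class DeferredLoggerExtensions
{
    // hypothetical helper: the heavy ToString() calls behind the interpolated
    // string arguments only run if Debug logging is actually enabled
    public static void LogDebugDeferred(this ILogger logger, FormattableString message)
    {
        if (logger.IsEnabled(LogLevel.Debug))
        {
            logger.LogDebug(message.ToString());
        }
    }
}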

But we can also really mess things up. I will skip past the possible yet unlikely security problem where you believe you are passing object.ToString() and in fact the entire object is passed, allowing a malicious library to do whatever it wants with it. Let's consider more probable scenarios.

One is that a code reviewer will tell you "put magic strings in their own variables or constants", so you take the string sent to Test and move it to a local variable (which Visual Studio will create as a FormattableString), then you replace that with var (because the type is obvious, right?). Suddenly the test variable is a string.

Another is even worse, although if you decided to code like this you have other issues. Let's get back to something similar to the original example:
db.ExecuteSqlCommand($"delete from Log where Id = {id}");

And let's change it:
var sql = $"delete from Log where Id = {id}";
db.ExecuteSqlCommand(sql);

Now sql is a string, its value is computed from the id, which might be provided by the user. Replace this with Bobby Tables and you got a nice SQL injection.

Conclusion: an interesting, if somewhat confusing, concept. Other than the logging idea, which I admit is pretty interesting, I am yet to find a good place to use it.

Intro


An adapter is a software pattern that exposes functionality through an interface different from the original one. Let's say you have an oven, with the function Bake(int temperature, TimeSpan time) and you expose a MakePizza() interface. It still bakes at a specific temperature for an amount of time, but you use it differently. Sometimes we have similar libraries with a common goal, but different scope, that one is tempted to hide under a common adapter. You might want to just cook things, not bake or fry.

So here is a post about good practices of designing a library project (complete with the use of software patterns, ugh!).



Examples


An example in .NET would be the WebRequest.Create method. It receives a URI as a parameter and, based on its type, returns a different implementation that will handle the resource in the way declared by WebRequest. For HTTP it will use an HttpWebRequest, for FTP an FtpWebRequest, for file access a FileWebRequest and so on. They are all implementations of the abstract class WebRequest, which would be our adapter. The Create method itself is an example of the factory method pattern.

But there are issues with this. Let's assume that we have different libraries/projects, each handling a specific resource scope. They may be so different as to be managed by different organizations. A team works on files, another on HTTP, and the FTP one is an open source third party library. Your team works on the WebRequest class and has to consider the implications of having a Create factory method. Is there a switch in there? "If the URI starts with http or https, return a new HttpWebRequest"? In that case, your WebRequest library would need to depend on the library that contains HttpWebRequest! And that's just not possible, since it would be a circular reference. Even if your project controlled all implementations, it would still be a bad idea to let a base class know about a derived class. If you move the factory into a factory class, it still means your adapter library has to depend on every implementation of the common interface. As Joe Armstrong would say: "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

So how did Microsoft solve it? Well, they moved the implementation of the factory into separate creator classes that implement IWebRequestCreate. Then they used configuration to associate a prefix with an implementation of WebRequest. Guess you didn't know that, did you? You can register your own implementations via code or configuration! It's such an obscure feature that if you Google WebRequestModulesSection you mostly get links to source code.
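The code-based registration looks something like this (MyCustomWebRequest is a hypothetical WebRequest subclass of your own; WebRequest.RegisterPrefix and IWebRequestCreate are the actual extensibility points):

class MyCustomWebRequestCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri)
    {
        return new MyCustomWebRequest(uri); // your own WebRequest implementation
    }
}

// associate a prefix with the creator; WebRequest.Create picks it up from then on
WebRequest.RegisterPrefix("custom:", new MyCustomWebRequestCreator());
var request = WebRequest.Create("custom://some/resource");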

Another very successful example of an adapter library is jQuery. Yes, the one they now say you don't need anymore; it took the industry only 12 years to catch up, after all. Anyway, at the time there were very different implementations of what people thought a web browser should be. The way the DOM was represented, the Javascript objects and methods, the way they actually worked compared to the way they should have worked: everything was different. So developers were often either favoring one browser over the others or were forced to write code for each possible version. Something like "if Internet Explorer, do A, if Netscape, do B". The problem with this is that if you tried to use a browser that was neither Internet Explorer nor Netscape, it would either break or show you one of those annoying "browser not supported" messages.

Enter jQuery, which abstracted access over all these different interfaces with a common (and very nicely designed) one. Not only did it have a fluent interface that allowed you to do multiple things with a single target (stuff like $('#myElement').show().css({opacity:0.7}).text('My text');), but it was extensible, allowing third parties to add modules that would allow even more functionality ($('#myElement').doSomethingCool();). Sound familiar? Extensibility seems to be an important common feature of well designed adapters.

Speaking of jQuery, one heavily used feature was jQuery.browser, which told you what browser you were using. It contained very sophisticated and complex code to get around the quirks of every browser out there. Now you had the ability to do something like if ($.browser.msie) say('OMG! You like Microsoft, you must suck!');. Guess what: the browser property was removed in jQuery 1.9, and not because it was not inclusive. Well, that is sort of the actual reason, but in a technical sense, not political correctness. You see, now you had this brand new interface that worked great on all browsers, and yet your browser still couldn't access a page correctly: either it was an untested version of a particular browser, or a different type of browser altogether, or the conditions for letting the user in were too restrictive.

The solution was to rely on feature detection, not product versions. For example, you use another Javascript library, called Modernizr, and write code like if (Modernizr.localstorage) { /* supported */ } else { /* not-supported */ }. There are so many possible features to detect that Modernizr lets you pick and choose the ones you need and then builds a library that handles each of them, instead of bundling it all in one huge package. It is itself extensible. You might ask what all this has to do with libraries in .NET. I am getting there.

The last example: Entity Framework. This is a hugely popular framework for database access from Microsoft. It abstracts the type of the database behind a very nice (also fluent) interface in .NET code. But how does it do that? I mean, what if I need SQL Server? What if I want MongoDB or PostgreSQL?

The way is having different "providers" that translate .NET code Expressions into whatever the storage needs. The individual providers are added as dependencies to your project, without the need for Entity Framework to know about them. Then they are configured for use in code, because they implement some common interfaces, and they are ready for use.
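In EF Core terms, for instance, the choice of provider comes down to a single configuration call; the core framework never references SQL Server or PostgreSQL directly (the connection strings below are placeholders):

class MyContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // SQL Server provider, from the Microsoft.EntityFrameworkCore.SqlServer package
        optionsBuilder.UseSqlServer("Server=.;Database=MyDb;Trusted_Connection=True;");
        // or the PostgreSQL provider, from the Npgsql.EntityFrameworkCore.PostgreSQL package:
        // optionsBuilder.UseNpgsql("Host=localhost;Database=mydb;Username=me;Password=secret");
    }
}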

Principles for adapters


So now we have some idea about what is good in an adapter:
  • Ease of use
  • Common interface
  • Extensibility
  • No direct dependency between the interface and what is adapted
  • An interface per feature

Now that I wrote it down, it sounds kind of weird: the interface should not depend on what it adapts. It is correct, though. In the case of Entity Framework, for example, the provider for MySql is an adapter between MySql's own interface and the .NET interfaces declared by Entity Framework; interfaces are just declarations of what something should do, not implementations.

Picture time!


The factory and the common interface form one library; that is the library you will use in your project. Each individual adapter depends on it as well, but your project doesn't need to know about any adapter until it is needed.

Now, it's your choice whether you register the adapters dynamically (say, you load a .dll, extract the objects that implement a specific interface, and they themselves know what they apply to, like FtpWebRequest for ftp: strings) or you add dependencies on individual adapters to your project and register them yourself, strongly typed. The important thing is that referencing the factory library doesn't automatically force every possible implementation into your project.
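A rough sketch of the dynamic flavor (ITargetAdapter and its Prefix property are my invention for illustration):

// using System.Linq; using System.Reflection;
public interface ITargetAdapter
{
    string Prefix { get; }              // e.g. "ftp:"
    Target Create(string connectionString);
}

// load a plugin assembly and register every adapter it contains
var adapters = Assembly.LoadFrom("MyAdapters.dll")
    .GetTypes()
    .Where(t => typeof(ITargetAdapter).IsAssignableFrom(t) && !t.IsAbstract)
    .Select(t => (ITargetAdapter)Activator.CreateInstance(t))
    .ToDictionary(a => a.Prefix);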

It seems I've covered all points except the last one. That is pretty important, so read on!

Imagine that the things you want to adapt are not really that similar. You want to force them into a common shape, but there will be bits that are specific to one domain only and you might want them. Now here is an example of how NOT to do things:
var target = new TargetFactory().Get(connectionString);
if (target is SomeSpecificTarget specificTarget)
{
    specificTarget.Authenticate(username, password);
}
target.DoTargetStuff();
In this case I use the adapter for Target, but then bring in the knowledge of a specific target called SomeSpecificTarget and use a method that I just know is there. This is bad for several reasons:
  1. For someone to understand this code they must know what SomeSpecificTarget does, invalidating the concept of an adapter
  2. I need to know that for that specific connection string a certain type will always be returned, which might not be the case if the factory changes
  3. I need to know how SomeSpecificTarget works internally, which might also change in the future
  4. I must add a dependency to SomeSpecificTarget to my project, which is at least inconsistent as I didn't add dependencies to all possible Target implementations
  5. If different types of Target will be available, I will have to write code for all possibilities
  6. If new types of Target become available, I will have to change the code for each new addition to what is essentially third party code

And now I will show you two different versions that I think are good. The first is simple enough:
var target = new TargetFactory().Get(connectionString);
if (target is IAuthenticationTarget authTarget)
{
    authTarget.Authenticate(username, password);
}
target.DoTargetStuff();
No major change, other than checking whether the target implements IAuthenticationTarget (which would best be declared in the common interface project). Now every target that requires (or will ever require) authentication will receive the credentials without any change to your code.
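The interface itself is tiny and lives in the common library; a sketch:

public interface IAuthenticationTarget
{
    void Authenticate(string username, string password);
}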

The other solution is more complex, but it allows for greater flexibility:
var serviceProvider = new TargetFactory()
    .GetServiceProvider(connectionString);
var target = serviceProvider.Get<ITargetProvider>()
    .Get();
serviceProvider.Get<ICredentialsManager>()
    ?.AddCredentials(target, new Credentials(username, password));
target.DoTargetStuff();
So here I am not getting a target, but a service provider (which is another software pattern, BTW), based on the same connection string. This provider will give me implementations of a target provider and a credentials manager. Now I don't even need to have a credentials manager available: if it doesn't exist, this will do nothing. If I do have one, it will decide by itself what to do with the credentials for a target. Does it need to authenticate now or later? You don't care. You just add the credentials and let the provider decide what needs to be done.

This last approach is related to the concept of inversion of control. Your code declares intent while the framework decides what to do. I don't need to know of the existence of specific implementations of Target or indeed of how credentials are being used.

Here is the final version, using extension methods in a method chaining fashion, similar to jQuery and Entity Framework, in order to reinforce that Ease of use principle:
// your code
var target = new TargetFactory()
    .Get(connectionString)
    .WithCredentials(username, password);

// in a static extensions class

public static Target WithCredentials(this Target target, string username, string password)
{
    target.Get<ICredentialsManager>()
        ?.AddCredentials(target, new Credentials(username, password));
    return target;
}

public static T Get<T>(this Target target)
{
    return target.GetServiceProvider()
        .Get<T>();
}
This assumes that a Target has a method called GetServiceProvider which will return the provider for any interface required so that the whole code is centered on the Target type, not IServiceProvider, but that's just one possible solution.

Conclusion


As long as the principles above are respected, your library should be easy to use and easy to extend, without the need to change existing code or consider individual implementations. The projects using it will only use the minimum amount of code required to do the job and will themselves depend only on interface declarations. As long as those are respected, the code will work without change. It's really meta: if you respect the interface described in this blog post, then all interfaces will be respected in the code all the way down! Only some developer locked in a cellar somewhere will need to know how things are actually getting done.

and has 0 comments
I was attempting to add very detailed logging (tracing) to my application. In order to do that, I had to change hundreds of objects. That wouldn't do. Fortunately, .NET has a nice feature called RealProxy. Let's see a quick and dirty example:
class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = new Test();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException(nameof(parameter1), "Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

So I defined an object with a field, a property and a method. That's what my application does. It then displays the object and the result of the method call. Here is the result:
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

I would like to log everything that happens with my Test object. Well, as such I can't do anything, I need to change the code a bit, like this:
class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = (Test)new LoggingProxy(new Test()).GetTransparentProxy();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test : MarshalByRefObject
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException(nameof(parameter1), "Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            var result = methodCall.MethodBase.Invoke(_target, arguments);
            return new ReturnMessage(
                result,
                arguments,
                arguments.Length,
                methodCall.LogicalCallContext,
                methodCall);
        }
        return null;
    }
}

So this is what I did above:
  • I've inherited my Test class from MarshalByRefObject
  • I've created a LoggingProxy class that inherits from RealProxy and implements the Invoke method
  • I've replaced new Test(); with (Test)new LoggingProxy(new Test()).GetTransparentProxy();

Running it we get the same result.

Time to add some logging. I will write stuff on the Console, too, for this demo. Here are the changes to the LoggingProxy class:
class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            string typeName;
            try
            {
                typeName = Type.GetType(methodCall.TypeName).Name;
            }
            catch
            {
                typeName = methodCall.TypeName;
            }
            try
            {
                Console.WriteLine($"Called {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)})");
                var result = methodCall.MethodBase.Invoke(_target, arguments);
                Console.WriteLine($"Success for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{JsonConvert.SerializeObject(result)}");
                return new ReturnMessage(
                    result,
                    arguments,
                    arguments.Length,
                    methodCall.LogicalCallContext,
                    methodCall);
            }
            catch (Exception exception)
            {
                Console.WriteLine($"Error for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{exception}");
                return new ReturnMessage(exception, methodCall);
            }
        }
        return null;
    }
}

It's the same as before, but with a try/catch block and some extra Console.WriteLines. Here is the output:
Called Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
])
Success for Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.set_Property([
  "Test"
])
Success for Test.set_Property([
  "Test"
]): null
Called Test.Method([
  "test1",
  "test2",
  "ref",
  null
])
Success for Test.Method([
  "test1",
  "test2",
  "ref reffed",
  "outed"
]): "test1 : test2"
Called Object.GetType([])
Success for Object.GetType([]): "RealProxyTests2.Test, RealProxyTests2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
Called Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  null
])
Success for Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.get_Property([])
Success for Test.get_Property([]): "Test"
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

A bounty of information. The first lines are the setting of the field, Property and the execution of Method. But then there are a GetType, a field getter and a Property getter. What's that about? That's JsonConvert, serializing the Test proxy.
Warning: I've copied methodCall.Args into a local variable called arguments. At first glance it might appear superfluous, but methodCall.Args is not really an array of objects, even if it appears that way in the debugger. It is read-only: changing any of its items has no effect, which would break the ref and out parameters.

Remember how we created the proxy in Program.Main? We can hide that behind a static helper method on our proxy class; let's call it Wrap:
public static T Wrap<T>(T obj)
{
    if (obj is MarshalByRefObject marshalByRefObject)
    {
        return (T)new LoggingProxy(marshalByRefObject).GetTransparentProxy();
    }
    return obj;
}
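Assuming Wrap is declared as a static method on LoggingProxy, the call in Main then shrinks to:

Test test = LoggingProxy.Wrap(new Test());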

Conclusion:

While this works, there are a series of issues related to it:
  1. RealProxy is only available for .NET Framework, not .NET Core (see DispatchProxy for that)
  2. Serialization is not as straightforward as in this demo (see my blog post about Newtonsoft serialization)
  3. You need to inherit from MarshalByRefObject, so this solution doesn't work for classes that already have a base class
  4. Performance wise, you should make sure you are not serializing or executing anything unless the correct log level is enabled (something like logger.LogTrace(JsonConvert.SerializeObject(...)) would not help, because the JSON serialization happens regardless, before LogTrace ever executes; see the sketch after this list)
  5. Also, this wrapping is not free. With a simple proxy that did nothing but execute the code, it took 43 times longer to run. Of course, that's because the actual work of setting properties or returning a string is basically zero; when I added a Thread.Sleep(1) in the method, the wrapped and unwrapped versions took almost the same amount of time. Just don't use it in performance sensitive applications
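For point 4, the usual guard looks like this (assuming Microsoft.Extensions.Logging; the expensive serialization only runs when Trace is actually enabled):

if (logger.IsEnabled(LogLevel.Trace))
{
    // serialization is only paid for when the level is on
    logger.LogTrace(JsonConvert.SerializeObject(arguments));
}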

Final thoughts: in the same namespace as RealProxy there is a ProxyAttribute class that at first glance seems even better: you just decorate a class with the attribute and BANG! instant AOP. But it's not that simple. First of all, it works only on objects that inherit from ContextBoundObject, which itself inherits from MarshalByRefObject. And while it seems like a good idea to just replace MarshalByRefObject with ContextBoundObject in the code above, know that no generic class can inherit from it. There might be other restrictions, too. If you make Test inherit from ContextBoundObject, the debugger will already show new Test() as a transparent proxy, without wrapping it in any code. It might still be usable in certain conditions, though.

and has 0 comments
I was trying to log some stuff (a lot of stuff) and I noticed that my memory usage went through the roof (16GB in a few seconds), then an OutOfMemoryException was thrown. I've finally narrowed it down to the JSON serializer from Newtonsoft.

First of all, some introduction on how to serialize any object into JSON: whether you use JsonConvert.SerializeObject(yourObject, new JsonSerializerSettings { <settings> }) or new JsonSerializer { <settings> }.Serialize(writer, object) (where <settings> are properties set via the object initializer syntax), you will need to consider these properties: Formatting, NullValueHandling, ReferenceLoopHandling and PreserveReferencesHandling.
We will use these classes to test the results:
class Parent
{
    public string Name { get; set; }
    public Child Child1 { get; set; }
    public Child Child2 { get; set; }
    public Child[] Children { get; set; }
    public Parent Self { get; internal set; }
}

class Child
{
    public string Name { get; set; }
    public Parent Parent { get; internal set; }
}

For this piece of code:
JsonConvert.SerializeObject(new Child { Name = "other child" }, settings)
you will get either
{"Name":"other child","Parent":null}
or
{
  "Name": "other child",
  "Parent": null
}
based on whether we use Formatting.None or Formatting.Indented. The other properties do not affect the serialization (yet).

Let's set up the following objects:
var child = new Child
{
    Name = "Child name"
};
var parent = new Parent
{
    Name = "Parent name",
    Child1 = child,
    Child2 = child,
    Children = new[]
    {
        child, new Child { Name = "other child" }
    }
};
parent.Self = parent;
child.Parent = parent;

As you can see, not only does parent hold multiple references to child and one to itself, but the child also references the parent. If we try the code
JsonConvert.SerializeObject(parent, settings)
we will get an exception Newtonsoft.Json.JsonSerializationException: 'Self referencing loop detected for property 'Parent' with type 'Parent'. Path 'Child1'.'. In order to avoid this, we can use ReferenceLoopHandling.Ignore, which tells the serializer to ignore circular references. Here is the output when using
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};

{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child",
      "Parent": null
    }
  ]
}

If we add NullValueHandling.Ignore we get
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child"
    }
  ]
}
(the "Parent": null bit is now gone)

The default for the ReferenceLoopHandling property is ReferenceLoopHandling.Error, which throws the serialization exception above; besides Error and Ignore there is also ReferenceLoopHandling.Serialize. In that case we get a System.StackOverflowException: 'Exception of type 'System.StackOverflowException' was thrown.' as it tries to serialize ad infinitum.

PreserveReferencesHandling is rather interesting. It creates extra properties for objects like $id, $ref or $values and then uses those to define objects that are circularly referenced. Let's use this configuration:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};

Then the result will be
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": [
    {
      "$ref": "2"
    },
    {
      "$id": "3",
      "Name": "other child"
    }
  ],
  "Self": {
    "$ref": "1"
  }
}

Let's try PreserveReferencesHandling.Arrays:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Arrays
};

The result will then be
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": {
    "$id": "1",
    "$values": [
      {
        "Name": "Child name"
      },
      {
        "Name": "other child"
      }
    ]
  }
}
which annoyingly adds an $id to the Children array. There is one more possible value, PreserveReferencesHandling.All, which causes this output:
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": {
    "$id": "3",
    "$values": [
      {
        "$ref": "2"
      },
      {
        "$id": "4",
        "Name": "other child"
      }
    ]
  },
  "Self": {
    "$ref": "1"
  }
}

I personally recommend using PreserveReferencesHandling.Objects, which doesn't need setting the ReferenceLoopHandling property at all. Unfortunately, it adds an $id to every object, even if it is not circularly defined. However, it creates an object that can be safely deserialized back into the original, but if you just want a quick and dirty output of the data in an object, use ReferenceLoopHandling.Ignore with NullValueHandling.Ignore. Note that object references cannot be preserved when a value is set via a non-default constructor such as types that implement ISerializable.

Warning, though: this is still not enough! In my logging code I had used ReferenceLoopHandling.Ignore and the exception was quite different: an OutOfMemoryException. It seems that even with circular references handled, JsonSerializer will still mess up sometimes.

The culprits? Task<T> (or async lambdas sent as parameters) and an Entity Framework context object. The solution I employed was to check the type of the objects I send to the serializer and, if they were any of the offending types, replace them with the full names of their types.
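A minimal sketch of that check (MakeSerializable is a hypothetical helper of mine; which types are "offending" depends on your code, Task and the EF context are just the ones that bit me):

private static object MakeSerializable(object value)
{
    // Task<T> and DbContext (from your EF package of choice) are known
    // to blow up the serializer, so replace them with their type names
    if (value is Task || value is DbContext)
    {
        return value.GetType().FullName;
    }
    return value;
}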

Hope it helps!