.NET Core comes with its own dependency injection engine, separated into the Microsoft.Extensions.DependencyInjection package, and ASP.NET Core uses it by default. In a very simplistic description, you add services to an IServiceCollection, then build an IServiceProvider from that list, an interface which returns an implementation for a requested type, or null if it finds none. Changing the list of services after the provider has been built is not supported. There are situations, though, where you want to add new services, one of them being dynamically resolving new types.
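As a quick refresher, here is the basic flow, as a minimal sketch assuming only the Microsoft.Extensions.DependencyInjection package (IGreeter and Greeter are hypothetical example types):

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { string Greet(string name); }
public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

public static class Program
{
    public static void Main()
    {
        // add services to the collection...
        var services = new ServiceCollection();
        services.AddSingleton<IGreeter, Greeter>();
        // ...then build the provider; from here on the list is effectively frozen
        var provider = services.BuildServiceProvider();
        // returns the implementation, or null if the type was never registered
        var greeter = provider.GetService<IGreeter>();
        Console.WriteLine(greeter.Greet("world"));
    }
}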

Therefore I set out to create a custom implementation of IServiceProvider that fixes that, using the mechanisms already existing in .NET Core. Note that this is just something I did out of frustration, "because I could". Most people choose to replace the entire IServiceProvider with an implementation that uses some other DI container, like StructureMap.

My first attempt was to proxy a normal ServiceProvider while keeping a reference to the collection. Then I would just change the collection and recreate the service provider. That has two major problems. One is that the previous service provider is not disposed: if you dispose it, you automatically dispose all services already resolved, and if you do not, you keep references to the created services. The second, and more dire, is that recreating the service provider will generate new instances of services, even those registered as singletons. That is not good.
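In code, that first attempt looked more or less like this (a sketch from memory; RebuildingServiceProvider is just an illustrative name):

using System;
using Microsoft.Extensions.DependencyInjection;

// flawed: every rebuild leaks the old provider and duplicates "singletons"
public class RebuildingServiceProvider : IServiceProvider
{
    private readonly IServiceCollection _services;
    private ServiceProvider _provider;

    public RebuildingServiceProvider(IServiceCollection services)
    {
        _services = services;
        _provider = services.BuildServiceProvider();
    }

    public void Add(ServiceDescriptor descriptor)
    {
        _services.Add(descriptor);
        // the old provider is abandoned: disposing it would dispose
        // already resolved services, keeping it leaks references,
        // and the new provider creates new instances of "singletons"
        _provider = _services.BuildServiceProvider();
    }

    public object GetService(Type serviceType) => _provider.GetService(serviceType);
}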

I thought of a solution:
  1. keep a list of service providers, instead of just one
  2. use a custom service collection which will let us know when changes occurred
  3. whenever new services are added, add them to a list of new services
  4. whenever a service is resolved, go through the list of providers
  5. if any provider returns a value, provide it
  6. else if any new service create a new provider from the new services and add it to the list
  7. else return null
  8. when disposing, dispose all providers in the list

This works great, except the newly added providers are separate from the existing ones, so when you try to resolve a type with a second provider and that type has in its constructor a type that was registered in the first provider, you get nothing.
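Here is the failure in miniature (Service1 and Service2 are hypothetical):

class Service1 { }
class Service2 { public Service2(Service1 dependency) { } }

var provider1 = new ServiceCollection()
    .AddSingleton<Service1>()
    .BuildServiceProvider();
var provider2 = new ServiceCollection()
    .AddSingleton<Service2>() // its Service1 dependency lives in provider1
    .BuildServiceProvider();

// provider2 knows nothing about Service1, so this throws an
// InvalidOperationException: unable to resolve service for type 'Service1'
var service2 = provider2.GetService<Service2>();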

One solution would be to add all services to the second provider, not only the new ones, but then we get back to the original issue of the singletons, only a bit more subtle:
  1. register type1 as a singleton
  2. get an instance of type1 (1)
  3. build the provider
  4. get an instance of type1 (2)
  5. register type2 which receives a type1 in its constructor
  6. get an instance of type2
  7. now, type1 (1) is the same as type1 (2), because it was resolved by the same provider
  8. type1 is different from type2.type1, though, because that was resolved as a different singleton by the second provider in the list
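The same scenario, written out as a sketch:

class Type1 { }
class Type2
{
    public Type1 Type1 { get; }
    public Type2(Type1 type1) { Type1 = type1; }
}

var services = new ServiceCollection();
services.AddSingleton<Type1>();
var provider1 = services.BuildServiceProvider();
var first = provider1.GetService<Type1>();       // instance (1)

services.AddSingleton<Type2>();
var provider2 = services.BuildServiceProvider(); // rebuilt with ALL services
var type2 = provider2.GetService<Type2>();

// same provider, same singleton
Console.WriteLine(first == provider1.GetService<Type1>()); // True
// different provider, different "singleton"
Console.WriteLine(first == type2.Type1);                   // False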

One solution would be to add all previous services as factories, then. For IType1, instead of returning typeof(Type1), return a factory method that resolves the value through our system. And it works... until it reaches a definition (like IOptions) that was registered as an open generic: services.AddSingleton(typeof(IType3<>), typeof(Type3<>)). In the case of open generics, you cannot use a descriptor with a factory, because the factory returns an object, regardless of the generic type argument used. It would not do to return a Type3<Banana> for a requested IType3<int>.
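To illustrate (IType3<T> and Type3<T> are hypothetical; the exact exception and the moment it is thrown, at build time or at first resolution, may differ between package versions):

interface IType3<T> { }
class Type3<T> : IType3<T> { }

var services = new ServiceCollection();
// fine: an open generic mapped to an open generic implementation type
services.AddSingleton(typeof(IType3<>), typeof(Type3<>));

// NOT fine: a factory returns a plain object and cannot be specialized
// per generic argument, so the container rejects this registration
services.Add(ServiceDescriptor.Describe(
    typeof(IType3<>),
    _ => new Type3<object>(), // which T would this even be for?
    ServiceLifetime.Singleton));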

So, final version is this:
  1. keep a list of service providers, instead of just one
  2. keep a dictionary of the last object resolved for a type
  3. use a custom service collection which will let us know when changes occurred
  4. whenever new services are added, add them to a list of new services
  5. whenever a service is resolved, go through the list of providers
  6. if any provider returns a value, return it
  7. if no new services registered return null
  8. create a new provider from all the services like this:
    • if it's a new registration, use it as is
    • if it's an open generic definition type:
      • if singleton, add first all the existing resolutions for types that are defined by it
      • use the original descriptor afterwards
    • use a registration that proxies to the advanced resolution mechanism we created
  9. when disposing, dispose all providers in the list

This implementation also has a flaw: if a dependency with an open generic descriptor was resolved as a singleton by an additional service provider, and is then requested directly and can be resolved by a previous provider, you will get a different instance. Here is the scenario:
  1. the initial provider knows to map I<> to M<>
  2. you add a new singleton mapping from X to Y and Y gets a constructor parameter of type I<Z>
  3. you request an instance of X
  4. the first provider cannot resolve it
  5. the second provider can resolve it, therefore it will also resolve a I<Z> as an M<Z> singleton instance
  6. you request an instance of I<Z>
  7. the first provider can resolve it, therefore it will return a NEW singleton instance of M<Z>

This is an edge case that I don't have the time to solve. So, with the caveat above, here is the final version.
Use it like this:
// IAdvancedServiceProvider either injected 
// or resolved via serviceProvider.GetService<IAdvancedServiceProvider>
// or even serviceProvider as IAdvancedServiceProvider
advancedServiceProvider.ServiceCollection.AddSingleton...
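A slightly fuller usage sketch (IMyService and MyService are placeholder names):

interface IMyService { }
class MyService : IMyService { }

var services = new ServiceCollection();
// ...initial registrations go here...
var advancedProvider = new AdvancedServiceProvider(services);

// later, at runtime, when a new type is discovered
advancedProvider.ServiceCollection.AddSingleton<IMyService, MyService>();
var myService = (IMyService)advancedProvider.GetService(typeof(IMyService));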

And this is the source code:
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public interface IAdvancedServiceProvider : IServiceProvider
{
/// <summary>
/// Add services to this collection
/// </summary>
IServiceCollection ServiceCollection { get; }
}
 
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public class AdvancedServiceProvider : IAdvancedServiceProvider, IDisposable
{
private readonly List<ServiceProvider> _serviceProviders;
private readonly NotifyChangedServiceCollection _services;
private readonly object _servicesLock = new object();
private List<ServiceDescriptor> _newDescriptors;
private Dictionary<Type, object> _resolvedObjects;
 
/// <summary>
/// Initializes a new instance of the <see cref="AdvancedServiceProvider"/> class.
/// </summary>
/// <param name="services">The services.</param>
public AdvancedServiceProvider(IServiceCollection services)
{
// registers itself in the list of services
services.AddSingleton<IAdvancedServiceProvider>(this);
 
_serviceProviders = new List<ServiceProvider>();
_newDescriptors = new List<ServiceDescriptor>();
_resolvedObjects = new Dictionary<Type, object>();
_services = new NotifyChangedServiceCollection(services);
_services.ServiceAdded += ServiceAdded;
_serviceProviders.Add(services.BuildServiceProvider(true));
}
 
private void ServiceAdded(object sender, ServiceDescriptor item)
{
lock (_servicesLock)
{
_newDescriptors.Add(item);
}
}
 
/// <summary>
/// Add services to this collection
/// </summary>
public IServiceCollection ServiceCollection { get => _services; }
 
/// <summary>
/// Gets the service object of the specified type.
/// </summary>
/// <param name="serviceType">An object that specifies the type of service object to get.</param>
/// <returns>A service object of type serviceType. -or- null if there is no service object of type serviceType.</returns>
public object GetService(Type serviceType)
{
lock (_servicesLock)
{
// go through the service provider chain and resolve the service
var service = GetServiceInternal(serviceType);
// if service was not found and we have new registrations
if (service == null && _newDescriptors.Count > 0)
{
// create a new service collection in order to build the next provider in the chain
var newCollection = new ServiceCollection();
foreach (var descriptor in _services)
{
foreach (var descriptorToAdd in GetDerivedServiceDescriptors(descriptor))
{
((IList<ServiceDescriptor>)newCollection).Add(descriptorToAdd);
}
}
var newServiceProvider = newCollection.BuildServiceProvider(true);
_serviceProviders.Add(newServiceProvider);
_newDescriptors = new List<ServiceDescriptor>();
service = newServiceProvider.GetService(serviceType);
}
if (service != null)
{
_resolvedObjects[serviceType] = service;
}
return service;
}
}
 
private IEnumerable<ServiceDescriptor> GetDerivedServiceDescriptors(ServiceDescriptor descriptor)
{
if (_newDescriptors.Contains(descriptor))
{
// if it's a new registration, just add it
yield return descriptor;
yield break;
}
 
if (!descriptor.ServiceType.IsGenericTypeDefinition)
{
// for a non-open-generic descriptor, register a factory (with the same lifetime) that goes through our resolution chain
yield return ServiceDescriptor.Describe(
descriptor.ServiceType,
_ => GetServiceInternal(descriptor.ServiceType),
descriptor.Lifetime
);
yield break;
}
// if the registered service type for a singleton is an open generic type
// we register as factories all the already resolved specific types that fit this definition
if (descriptor.Lifetime == ServiceLifetime.Singleton)
{
foreach (var servType in _resolvedObjects.Keys.Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == descriptor.ServiceType))
{
 
yield return ServiceDescriptor.Describe(
servType,
_ => _resolvedObjects[servType],
ServiceLifetime.Singleton
);
}
}
// then we add the open type registration for any new types
yield return descriptor;
}
 
private object GetServiceInternal(Type serviceType)
{
foreach (var serviceProvider in _serviceProviders)
{
var service = serviceProvider.GetService(serviceType);
if (service != null)
{
return service;
}
}
return null;
}
 
/// <summary>
/// Dispose the provider and all resolved services
/// </summary>
public void Dispose()
{
lock (_servicesLock)
{
_services.ServiceAdded -= ServiceAdded;
foreach (var serviceProvider in _serviceProviders)
{
try
{
serviceProvider.Dispose();
}
catch
{
// singleton classes might be disposed twice and throw some exception
}
}
_newDescriptors.Clear();
_resolvedObjects.Clear();
_serviceProviders.Clear();
}
}
 
/// <summary>
/// An IServiceCollection implementation that exposes a ServiceAdded event for added service descriptors
/// The collection doesn't support removal or inserting of services
/// </summary>
private class NotifyChangedServiceCollection : IServiceCollection
{
private readonly IServiceCollection _services;
 
/// <summary>
/// Fired when a descriptor is added to the collection
/// </summary>
public event EventHandler<ServiceDescriptor> ServiceAdded;
 
/// <summary>
/// Initializes a new instance of the <see cref="NotifyChangedServiceCollection"/> class.
/// </summary>
/// <param name="services">The services.</param>
public NotifyChangedServiceCollection(IServiceCollection services)
{
_services = services;
}
 
/// <summary>
/// Get the value at index
/// Setting is not supported
/// </summary>
public ServiceDescriptor this[int index]
{
get => _services[index];
set => throw new NotSupportedException("Setting services in collection is not supported");
}
 
/// <summary>
/// Count of services in the collection
/// </summary>
public int Count { get => _services.Count; }
 
/// <summary>
/// Obviously not
/// </summary>
public bool IsReadOnly { get => false; }
 
/// <summary>
/// Adding a service descriptor will fire the ServiceAdded event
/// </summary>
/// <param name="item"></param>
public void Add(ServiceDescriptor item)
{
_services.Add(item);
ServiceAdded?.Invoke(this, item);
}
 
/// <summary>
/// Clearing the collection is not supported
/// </summary>
public void Clear() => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// True if the item exists in the collection
/// </summary>
public bool Contains(ServiceDescriptor item) => _services.Contains(item);
 
/// <summary>
/// Copy items to array of service descriptors
/// </summary>
public void CopyTo(ServiceDescriptor[] array, int arrayIndex) => _services.CopyTo(array, arrayIndex);
 
/// <summary>
/// Enumerator for service descriptors
/// </summary>
public IEnumerator<ServiceDescriptor> GetEnumerator() => _services.GetEnumerator();
 
/// <summary>
/// Index of item in the list
/// </summary>
public int IndexOf(ServiceDescriptor item) => _services.IndexOf(item);
 
/// <summary>
/// Inserting is not supported
/// </summary>
public void Insert(int index, ServiceDescriptor item) => throw new NotSupportedException("Inserting services in collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public bool Remove(ServiceDescriptor item) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public void RemoveAt(int index) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Enumerator for objects
/// </summary>
IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable)_services).GetEnumerator();
}
}

Long story short, check out this answer to a StackOverflow question: Package destination of restore of .net-core projects is always global package directory

The scenario is this: you are used to .NET Framework projects, for which Visual Studio restored NuGet packages in a packages folder inside the solution folder, and then you switch to .NET Core. No packages folder! You google it and you find that there is a global folder in your user's profile where NuGet downloads all of these packages, that .NET Core uses it by default and also that you might change this behavior based on a property in nuget.config. So here are the issues you have to take into account:
  1. There are two ways of configuring NuGet packages for your projects: a packages.config file and PackageReference elements in your .csproj file
  2. The name of the property you need to set is different based on the type of configuration: repositoryPath and globalPackagesFolder, respectively
  3. There are two formats for configuring nuget properties: using the add element inside the config element and using the repositoryPath element inside the settings element
  4. The format that worked for me in VS 2017 was the config element
  5. There are two locations for the nuget.config file: in a .nuget folder inside your solution folder and directly in the solution folder (or any of its parent folders)
  6. The location that is accepted by the latest versions of NuGet is directly in the solution folder
  7. Sometimes you need to restart Visual Studio for the change in nuget.config files to be considered
  8. The path you specify is relative to the nuget.config file, no matter where it is

A bit of an overkill, but try this as the beginning of your nuget.config file that sits next to the .sln file in your solution:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="repositoryPath" value="./packages" />
    <add key="globalPackagesFolder" value="./packages" />
  </config>
  <settings>
    <repositoryPath>./packages</repositoryPath>
  </settings>
  ...
</configuration>

We already know how to load types in .NET Framework and we know what they say we should use in .NET Core. But what about Standard? Is that a trick question? Sort of. Right now we have two .NET Standard versions and three .NET Core versions, albeit .NET Core 3 is in preview mode. The signature of AssemblyLoadContext and the way it is used have changed dramatically. Core 3 enables context unloading, but Standard 2 does not. So you are either forced to build your library as Core 3, or you have to give up on unloading contexts, or use reflection, which is not robust and probably will not be needed with the possible arrival of Standard 3.

But there are subtler issues at work. One of them is that, at least with .NET Core 3 Preview6, when you reference System.Runtime.Loader in a Standard library, so you can access AssemblyLoadContext, you get conflicts between the System.Runtime you are using and the one referenced by System.Runtime.Loader. The only solution is to use the System.Runtime.Loader NuGet package, but that returns you to the Standard 2 version of AssemblyLoadContext, even if the library version is higher!

The setup is this: I have an ITestInterface interface which resides in TestInterfaceLibrary.dll. I also have a TestImplementation class that can be found in TestImplementationLibrary.dll and implements ITestInterface. My program either does not reference any of these libraries or it only references the interface one. The task is to load both these types and then simply convert one instance of TestImplementation to ITestInterface. Simple test would be loading the types and then expecting interfaceType.IsAssignableFrom(implementationType) to be true.
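For clarity, the two libraries contain, in essence, just this (a sketch of the setup):

// in TestInterfaceLibrary.dll
namespace TestInterfaceLibrary
{
    public interface ITestInterface { }
}

// in TestImplementationLibrary.dll, which references TestInterfaceLibrary
namespace TestImplementationLibrary
{
    public class TestImplementation : TestInterfaceLibrary.ITestInterface { }
}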

Core 3


Let's first try the Core 3 way:
var context = new AssemblyLoadContext("testContext", true);
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface");
Console.WriteLine(interfaceType?.ToString()??"interface type not loaded");
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
Console.WriteLine("implementation implements interface: "+interfaceType.IsAssignableFrom(implementationType));
 
context.Unload();
The output is:
TestInterfaceLibrary.ITestInterface
TestImplementationLibrary.TestImplementation
implementation implements interface: True

It works! But only because the interface assembly is loaded first. If you try to load just the implementation type first, it will come up as empty. There are no exceptions thrown unless you get all the assembly types or specify the throwOnError parameter in GetType. The exception is "System.IO.FileNotFoundException: 'Could not load file or assembly 'TestInterfaceLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'".

In order to solve this, we need to use the Resolve event of the AssemblyLoadContext class. Let's try this:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
 
private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}

And now it works again, by assuming the assembly name is the same as the assembly file name and that it is found in the same place.

But... if we try this in different contexts:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
context.Resolving -= Context_Resolving;
context.Unload();
context = new AssemblyLoadContext("testContext2", true);
context.Resolving += Context_Resolving;
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
the output will show
implementation implements interface: False

This means that if we want to encapsulate this in a TypeLoader class or something, we cannot use different contexts for dynamically loading types. Even if we had one context that we would unload in order to refresh all the types, it could still be different from the main context, in case the interface is loaded twice or referenced directly in the project.

For example, if you reference TestInterfaceLibrary directly and you load TestImplementation dynamically it will work as expected, because ITestInterface is resolved automatically from the main context. However, if you load ITestInterface dynamically, too, it will be a different type from the referenced ITestInterface, even if they apparently have the same name and full name and assembly qualified name! So it kind of makes sense to not load a type twice. Is this where the context unloading comes in? Not really. Let's define a method that counts the number of types with a certain name in the current domain as
private static int CountTypes(string typeName)
{
return AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
.Count();
}

And now let's run this code:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var referencedInterfaceType = typeof(ITestInterface);
Console.WriteLine(referencedInterfaceType?.ToString() ?? "interface type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine($"Types are the same: {interfaceType==referencedInterfaceType}");
 
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");
 
context.Resolving -= Context_Resolving;
context.Unload();
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");

There is the referenced type, then we load the type dynamically again, inside a new context. We count the types loaded in the current domain, we unload the context, we count the types again. The result is
TestInterfaceLibrary.ITestInterface
TestInterfaceLibrary.ITestInterface
Types are the same: False
Number of types with name TestInterfaceLibrary.ITestInterface: 2
Number of types with name TestInterfaceLibrary.ITestInterface: 2
The types are always 2!

Bottom line, even when unloading the AssemblyLoadContext, the types used are not unloaded and trying to find a type by name will result in duplicates.

OK, so let's just agree that types with the same name, once loaded, should remain there and no other type with the same name should be loaded. Let's try to incorporate this into a TypeLoader class:
public class TypeLoader : IDisposable
{
private readonly AssemblyLoadContext _context;
 
public TypeLoader()
{
_context = new AssemblyLoadContext(GetType().FullName, true);
_context.Resolving += Context_Resolving;
}
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = _context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
 
public void Dispose()
{
// _context is readonly and never null here; also, the null conditional
// operator cannot be the target of -=
_context.Resolving -= Context_Resolving;
_context.Unload();
}
}

The code in our test is now much clearer:
var interfaceAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestInterfaceLibrary.dll");
var implementationAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestImplementationLibrary.dll");
var interfaceTypeName = "TestInterfaceLibrary.ITestInterface";
var implementationTypeName = "TestImplementationLibrary.TestImplementation";
 
using (var loader = new TypeLoader())
{
Type referencedType = typeof(TestInterfaceLibrary.ITestInterface);
var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
Console.WriteLine($@"
referenced type: {referencedType}
interface type: {interfaceType}
implementation type: {implementationType}
referenced and loaded interfaces are the same: {referencedType == interfaceType}
interface implemented: {interfaceType.IsAssignableFrom(implementationType)}"
);
}
and the result is
referenced type: TestInterfaceLibrary.ITestInterface
interface type: TestInterfaceLibrary.ITestInterface
implementation type: TestImplementationLibrary.TestImplementation
referenced and loaded interfaces are the same: True
interface implemented: True

But we still use Unload. Maybe it will work some day as I want it to work, but until then, why not get rid of Unload and make TypeLoader a class in a Standard 2 library?

Standard 2


For this I will create a new Standard 2 library project and then reference it in our test Core 3 project. Then I will move the TypeLoader class into the library project.

The errors in the library project are related to not knowing what an AssemblyLoadContext is, therefore the first solution is to reference System.Runtime.Loader from the framework. I get the immediate error "Assembly 'System.Runtime.Loader' with identity 'System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' uses 'System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version than referenced assembly 'System.Runtime' with identity 'System.Runtime, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".

Solution 2: load the System.Runtime.Loader NuGet package, which at the time of writing this, is version 4.3.0. The error is now gone, but several things are immediately apparent:
  1. the Unload method doesn't exist anymore
  2. the constructor doesn't receive a name and a bool anymore
  3. AssemblyLoadContext is now abstract

In order to solve this I am creating a DynamicAssemblyLoadContext class that inherits from AssemblyLoadContext, overrides the Load method to return null, and gets an Unload method and a constructor with a string and a bool that do nothing. And it works again. The updated TypeLoader class is now:
public class TypeLoader : IDisposable
{
private readonly DynamicAssemblyLoadContext _context;
 
public TypeLoader()
{
_context = new DynamicAssemblyLoadContext(GetType().FullName, true);
_context.Resolving += Context_Resolving;
}
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = _context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
 
public void Dispose()
{
// same as before: _context is never null, and ?. cannot be the target of -=
_context.Resolving -= Context_Resolving;
_context.Unload();
}
 
 
private class DynamicAssemblyLoadContext : AssemblyLoadContext
{
public DynamicAssemblyLoadContext(string name, bool isCollectible)
{
}
 
protected override Assembly Load(AssemblyName assemblyName)
{
return null;
}
 
public void Unload()
{
}
}
}

The safe way


The code above has an issue, though. If the interface type is dynamically loaded before its referenced type is used, this fails again. This is the case when you use dependency injection. You dynamically load the types in order to register the implementation relationship to the interface, but then, when you ask for a resolution for the interface type, now referenced by the main project, you get another type named just the same.

Considering that we don't really use Unload and cannot count on it ever working, the safe way is to use the default context, the one where everything loads, and be done with it. When you do that, the code becomes a little uglier, but it works in all situations.

Final version.
public class TypeLoader
{
private readonly object _resolutionLock = new object();
 
private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
return context.LoadFromAssemblyPath(expectedPath);
}
 
public Type LoadType(string typeName, string assemblyPath)
{
var context = AssemblyLoadContext.Default;
lock (_resolutionLock)
{
context.Resolving += Context_Resolving;
try
{
// if a type with this name was already loaded anywhere, reuse it
var type = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
.FirstOrDefault();
if (type != null)
{
return type;
}
var assembly = context.LoadFromAssemblyPath(assemblyPath);
return assembly.GetType(typeName, true);
}
finally
{
// make sure the handler is unhooked even when returning early,
// otherwise it would leak on every cache hit
context.Resolving -= Context_Resolving;
}
}
}
}

You just gotta hate adding and removing the event inside a lock, right? Well, if you find a better solution, let me know.

This is more of a backup of the extensions that I have installed on the two main IDEs I'm using for my job: Visual Studio and Visual Studio Code.

Visual Studio Code


In order to list the extensions installed, use the command code --list-extensions. Here is my list, with each name already prefixed by the install command so the lines can be run directly:

code --install-extension aeschli.vscode-css-formatter
code --install-extension Angular.ng-template
code --install-extension christian-kohler.npm-intellisense
code --install-extension cmstead.jsrefactor
code --install-extension cssho.vscode-svgviewer
code --install-extension danwahlin.angular2-snippets
code --install-extension dbaeumer.jshint
code --install-extension dbaeumer.vscode-eslint
code --install-extension donjayamanne.jquerysnippets
code --install-extension donjayamanne.jupyter
code --install-extension DotJoshJohnson.xml
code --install-extension ecmel.vscode-html-css
code --install-extension eg2.vscode-npm-script
code --install-extension esbenp.prettier-vscode
code --install-extension fabianlauer.vs-code-xml-format
code --install-extension fknop.vscode-npm
code --install-extension HookyQR.beautify
code --install-extension humao.rest-client
code --install-extension joelday.docthis
code --install-extension k--kato.docomment
code --install-extension michelemelluso.code-beautifier
code --install-extension minhthai.vscode-todo-parser
code --install-extension mohsen1.prettify-json
code --install-extension mrmlnc.vscode-scss
code --install-extension ms-mssql.mssql
code --install-extension ms-vscode.csharp
code --install-extension ms-vscode.typescript-javascript-grammar
code --install-extension ms-vscode.vscode-typescript-tslint-plugin
code --install-extension msjsdiag.debugger-for-chrome
code --install-extension naumovs.color-highlight
code --install-extension pmneo.tsimporter
code --install-extension rbbit.typescript-hero
code --install-extension robinbentley.sass-indented
code --install-extension wayou.vscode-todo-highlight

In order to install them on another machine, just run the lines above (code --install-extension [extension name]).

Visual Studio


For Visual Studio, funny enough, in order to export and import your extensions you need to use an extension: Extension Manager 2017, which on my system exports a file in .vsext format:

{
"id": "5f191824-b8a6-47c0-9f96-f607dfd3c09b",
"name": "My Visual Studio extensions",
"description": "A collection of my Visual Studio extensions",
"version": "1.0",
"extensions": [
{
"name": ".NET Portability Analyzer",
"vsixId": "55d15546-28ca-40dc-af23-dfa503e9c5fe"
},
{
"name": "Advanced Installer for Visual Studio 2017",
"vsixId": "Caphyon.AdvancedInstaller.23debb5a-cff4-4b91-88bf-6280f72a7ebb"
},
{
"name": "Azure Data Lake and Stream Analytics Tools",
"vsixId": "1e906ff5-9da8-4091-a299-5c253c55fdc9"
},
{
"name": "Azure Functions and Web Jobs Tools",
"vsixId": "Microsoft.VisualStudio.Web.AzureFunctions"
},
{
"name": "BuildVision",
"vsixId": "837c3c3b-8382-4839-9c9a-807b758a929f"
},
{
"name": "Clean Code .NET",
"vsixId": "CleanCode.NET.9ecfa9bb-0775-48d0-9898-4dbbbd529fe3"
},
{
"name": "Cloud Explorer for VS 2017",
"vsixId": "Microsoft.VisualStudio.CloudExplorer"
},
{
"name": "Code Cracker for C#",
"vsixId": "CodeCracker.Vsix..5b99e64c-1418-4a06-990c-fd4cf01f4f63"
},
{
"name": "Code Graph",
"vsixId": "CodeAtlasVSIX.Company.df5456fb-08ea-4256-b5ff-ecdb3a512ad3"
},
{
"name": "CodeMaid",
"vsixId": "4c82e17d-927e-42d2-8460-b473ac7df316"
},
{
"name": "CommentCop",
"vsixId": "CommentCop..0521EE68-1A5D-4C78-9970-B6A46B03FA6D"
},
{
"name": "EntityFramework Reverse POCO Generator",
"vsixId": "EntityFramework_Reverse_POCO_Generator..d542a934-8bd6-4136-b490-5f0049d62033"
},
{
"name": "Extension Manager 2017",
"vsixId": "e83d71b8-8bfc-4e06-b145-b0388910c016"
},
{
"name": "Fix Mixed Tabs",
"vsixId": "FixMixedTabs.9f1d3050-b986-4b10-ae36-97c6efc5e968"
},
{
"name": "Fix Namespace",
"vsixId": "f073da8c-bb52-41f8-b95a-a6346b1a0b52"
},
{
"name": "MetricsAnalyzer",
"vsixId": "MetricsAnalyzer..8026235d-7afc-401b-8f45-ba8624a07ef5"
},
{
"name": "Microsoft Code Analysis 2017",
"vsixId": "4db2d63d-3320-4fbd-bf80-07f8d1500bd3"
},
{
"name": "Moq.Analyzers",
"vsixId": "Moq.Analyzers..c3c7e3f8-2407-428d-beef-c4557253517b"
},
{
"name": "Object Exporter",
"vsixId": "07fb5b16-f4be-4488-9a19-b4f36d2c05a6"
},
{
"name": "Output enhancer",
"vsixId": "VSOutputEnhancer.Nikolay Balakin.a06be4c3-f97e-425c-8a0d-bdef08ac2abb"
},
{
"name": "Power Commands for Visual Studio",
"vsixId": "PowerCommands.3ecdd89b-f985-483d-8c94-be37de4dc083"
},
{
"name": "Ref12",
"vsixId": "SLaks-Ref12-086C4CE4-7061-4B1F-BC77-B64E4ED71B8E"
},
{
"name": "Reference Conflicts Analyser",
"vsixId": "ff477521-e67b-4ca3-931f-3edf36125d28"
},
{
"name": "Regular Expression Tester Extension",
"vsixId": "a65d58d2-ead8-4eea-a47d-fa60865a6043"
},
{
"name": "ResolveUR - Resolve Unused References",
"vsixId": "637ba02c-3388-4e54-9051-3eea7c51b054"
},
{
"name": "Roslyn Security Guard",
"vsixId": "RoslynSecurityGuard..45fa56c2-16f1-4395-8c10-a5a460084018"
},
{
"name": "Roslynator 2017",
"vsixId": "9289a8ab-1bb6-496b-9992-9f7ea27f66a8"
},
{
"name": "Security Code Scan (for VS2017 and newer)",
"vsixId": "955196A7-ACBF-4F6B-820B-51B8507CE853"
},
{
"name": "Solution Error Visualizer",
"vsixId": "SolutionErrorVisualizer.a392f96b-6b33-4b53-b4bb-3376a05f986c"
},
{
"name": "SonarLint for Visual Studio 2017",
"vsixId": "SonarLint.36871a7b-4853-481f-bb52-1835a874e81b"
},
{
"name": "SQL Search",
"vsixId": "Redgate.SQLSearch.VSExtension.9BD7AEDA-C291-4702-8191-4189B099F3A9"
},
{
"name": "Target Framework Migrator",
"vsixId": "TargetFrameworkMigrator..4f7666b9-e62c-46a1-af25-21ab8742ef00"
},
{
"name": "Trailing Whitespace Visualizer",
"vsixId": "4c1a78e6-e7b8-4aa9-8812-4836e051ff6d"
},
{
"name": "Unit Test Boilerplate Generator",
"vsixId": "UnitTestBoilerplate.RandomEngy.ca0bb824-eb5a-41a8-ab39-3b81f03ba3fe"
},
{
"name": "Visual Studio IntelliCode",
"vsixId": "IntelliCode.VSIX.598224b2-b987-401b-8509-f568d0c0b946"
},
{
"name": "Visual Studio Spell Checker (VS2017 and Later)",
"vsixId": "43EA967E-0DE2-4136-8E52-C6DCFB5C2748"
},
{
"name": "Wix Toolset Visual Studio 2017 Extension",
"vsixId": "WixToolset.VisualStudioExtension.Dev15"
}
]
}


The Problem


Phew, that's a mouthful. But the issue is that trying to serialize a FileInfo or a DirectoryInfo object with Newtonsoft's Json library in .NET Core fails with a vague exception:
Newtonsoft.Json.JsonSerializationException: Unable to serialize instance of 'System.IO.DirectoryInfo'.
at Newtonsoft.Json.Serialization.DefaultContractResolver.ThrowUnableToSerializeError(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonContract.InvokeOnSerializing(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.OnSerializing(JsonWriter writer, JsonContract contract, Object value)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)

It doesn't say why it fails, just that a method called ThrowUnableToSerializeError threw um... an unable to serialize error?

The Cause


Looking at the Newtonsoft code, we eventually get to this piece of code:
// serializing DirectoryInfo without ISerializable will stackoverflow
// https://github.com/JamesNK/Newtonsoft.Json/issues/1541
if (Array.IndexOf(BlacklistedTypeNames, objectType.FullName) != -1)
{
contract.OnSerializingCallbacks.Add(ThrowUnableToSerializeError);
}

Later on, another piece of code will execute the serializing callbacks and throw the exception. We can get rid of this behavior by using a custom contract resolver, like this:
var settings = new JsonSerializerSettings
{
ContractResolver = new FileInfoContractResolver()
};
 
private class FileInfoContractResolver : DefaultContractResolver
{
protected override JsonContract CreateContract(Type objectType)
{
var result = base.CreateContract(objectType);
if (typeof(FileSystemInfo).IsAssignableFrom(objectType))
{
result.OnSerializingCallbacks.Clear();
}
return result;
}
}

Yet now, when trying to serialize, we get the stack overflow exception described in the original Newtonsoft.Json issue. It stems from the difference between the .NET Framework implementation and the .NET Core implementation of ISerializable in FileSystemInfo, which in Core just throws PlatformNotSupportedException. It's still not clear why it goes to a StackOverflowException, probably some conflict with Newtonsoft code, but it's clear Microsoft does not intend to make these classes serializable. If you think about it, those classes suck for so many reasons!

The Solution


So, in order to solve it, we will use a custom JSON converter:
private class FileSystemInfoConverter : JsonConverter
{
public override bool CanConvert(Type objectType)
{
return typeof(FileSystemInfo).IsAssignableFrom(objectType);
}
 
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
if (reader.TokenType == JsonToken.Null)
return null;
var jObject = JObject.Load(reader);
var fullPath = jObject["FullPath"].Value<string>();
return Activator.CreateInstance(objectType, fullPath);
}
 
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
var info = value as FileSystemInfo;
var obj = info == null
? null
: new
{
FullPath = info.FullName
};
var token = JToken.FromObject(obj);
token.WriteTo(writer);
}
}
And we use it like this:
var settings = new JsonSerializerSettings
{
Converters = new List<JsonConverter>
{
new FileSystemInfoConverter()
}
};
var json = JsonConvert.SerializeObject(dir, settings);
var info = JsonConvert.DeserializeObject<DirectoryInfo>(json, settings);

Why FileInfo and DirectoryInfo suck


The answer of a senior developer to any question should be "Why?" or "Why on Earth or anywhere in the Solar System would you want to do a dumb thing like that?!?!". Why would you want to serialize a directory or file info object? The answer is that you should not. The info objects are defined by only one thing: a path, but they have so much baggage: properties that access the file system, unsafe methods, no interfaces or factory methods that can allow them to be mocked in unit tests. They might look like data objects, but they are not!

Imagine a scenario where you have a list of all the files in your drive. You enumerated them all and now you want to serialize them. Should the serializer save Exists or Length, for example? Because that means it will access the file system for each of them in the process of serialization, leading to a lot of work, a propensity for access errors and so on.

Best practices say you should use model classes to move data around, like a simple FileSystemInfoModel with Type and FullPath and maybe Attributes or Size properties or whatever you want to save, which you populate yourself as a separate responsibility. And if you want the functionality of the Info classes, use System.IO.Abstractions or the new Core IFileProvider abstraction to get implementations of interfaces that you can mock in unit tests.
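Such a model could be as simple as this (FileSystemInfoModel is a hypothetical name; the properties are whatever you decide to save):

using System.IO;

// a dumb data bag: no file system access, trivially serializable and mockable
public class FileSystemInfoModel
{
    public string Type { get; set; }            // e.g. "File" or "Directory"
    public string FullPath { get; set; }
    public long? Size { get; set; }             // filled in explicitly, never lazily
    public FileAttributes? Attributes { get; set; }
}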

Tell me what you think.

This post starts from a simple question: how do I start a task with a timeout? You go to StackOverflow, of course, and find this answer: Asynchronously wait for Task<T> to complete with timeout. It's an elegant solution: start a Task.Delay alongside the task and continue when either completes. However, in order to cancel the initial operation, one needs to pass the cancellation token to the original task and manually handle it, meaning polluting the entire business code with cancellation logic. This might be OK, yet are there alternatives?

But, isn't there the Task.Run(action) method that also accepts a CancellationToken? Yes, there is, and if you thought this runs an action until you cancel it, think again. Here is what Task.Run says it does: "Queues the specified work to run on the thread pool and returns a Task object that represents that work. A cancellation token allows the work to be cancelled." and if you scroll down to Remarks, here is what it actually does: "If cancellation is requested before the task begins execution, the task does not execute. Instead it is set to the Canceled state and throws a TaskCanceledException exception". You read that right: the token is only taken into account when the task starts running, not while it is actually executing.
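You can verify this with a few lines (a sketch; the timings are illustrative):

var cts = new CancellationTokenSource();
var task = Task.Run(() =>
{
    // this work never observes the token,
    // so once started it cannot be stopped
    Thread.Sleep(5000);
}, cts.Token);

Thread.Sleep(100); // let the task start
cts.Cancel();      // no effect now; only a task canceled BEFORE it
                   // starts executing ends up in the Canceled state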

Surely, then, there must be a way to cancel a running Task. How about Task.Dispose()? Dispose throws a funny exception if you try it: "System.InvalidOperationException: 'A task may only be disposed if it is in a completion state (RanToCompletion, Faulted or Canceled).'". In normal speech, it means "Fuck you!". If you think about it, how would you abort a task execution? What if it does nasty things, leaves resources occupied, has to clean up after it? The .NET team took the safe path and refused to give you an out of the box unsafe cancelling mechanism.

So, what is the solution? The recommended one is that you pass the token to all methods that can be cancelled and then check inside whether cancellation was requested. Of course, this only works if:
  1. you control what the task does
  2. you can split the operation into small chunks that are either executed sequentially or in a loop, so you can interrupt their flow
If you have something like an external process that is being executed, or a long running operation, you are almost out of luck. Why almost? Well, CancellationTokenSource and CancellationToken do not expose events, but the token exposes a "wait handle" that you can wait for synchronously. And here it gets funky. Check out an example of a method that executes some long running action and can react to token cancelling:
/// <summary>
/// Executes the long running action and cancels it when needed
/// </summary>
/// <param name="token"></param>
private void LongRunningAction(CancellationToken token)
{
// instantiate a container and keep its reference
var container = new IdentificationContainer();
Task.Run(() =>
{
// wait until the token gets cancelled on another thread
token.WaitHandle.WaitOne();
// this will use the information in the container to kill the action
// (presumably by interrupting external processes or sending some kill signal)
KillLongRunningAction();
});
// this executes the action and populates the identification container if needed
RunLongRunningAction();
}
This introduces some other issues, like what happens to the monitoring task if you never cancel the token or dispose of the cancellation source, but that's a bit too deep.

In the code above we get a sort of a solution if we can control the code and we can actually cancel things gracefully inside of it. But what if I can't (or won't)? Can I get something that does what I wanted Task.Run to do: execute something and, when I cancel it, stop it from executing?

And the answer, using what we learned above, is yes, but as explained at the beginning, it may have effects like resource leaks. Here it is:
/// <summary>
/// Run an action and kill it when canceling the token
/// </summary>
/// <param name="action">The action to execute</param>
/// <param name="token">The token</param>
/// <param name="waitForGracefulTermination">If set, the task will be killed with delay so as to allow the action to end gracefully</param>
private static Task RunCancellable(Action action, CancellationToken token, TimeSpan? waitForGracefulTermination=null)
{
// we need a thread, because Tasks cannot be forcefully stopped
var thread = new Thread(new ThreadStart(action));
// we hold the reference to the task so we can check its state
Task task = null;
task = Task.Run(() =>
{
// task monitoring the token
Task.Run(() =>
{
// wait for the token to be canceled
token.WaitHandle.WaitOne();
// if we wanted graceful termination we wait
// in this case, action needs to know about the token as well and handle the cancellation itself
if (waitForGracefulTermination != null)
{
Thread.Sleep(waitForGracefulTermination.Value);
}
// if the task has not ended, we kill the thread
if (!task.IsCompleted)
{
thread.Abort();
}
});
// simply start the thread (and the action)
thread.Start();
// and wait for it to end so we return to the current thread
thread.Join();
// throw exception if the token was canceled
// this will not be reached unless the thread completes or is aborted
token.ThrowIfCancellationRequested();
}, token);
return task;
}
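Usage is then symmetrical to Task.Run (a sketch; note the action itself never has to know about the token):

var cts = new CancellationTokenSource();
var task = RunCancellable(() =>
{
    // long running work that never checks any token
    Thread.Sleep(TimeSpan.FromMinutes(10));
}, cts.Token);

// later, from another thread
cts.Cancel(); // the underlying thread is aborted (on .NET Framework; see the
              // note about Thread.Abort below) and the task ends up Canceled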

As you can see, the solution is to run the action on a thread and then manually kill the thread. This means that any control of where and how the action is executed is wrestled from the default TaskScheduler and given to you. Also, in order to force the stopping of the task, you use Thread.Abort, which may have nasty side effects. Here is what Microsoft says about it:
[image: the Microsoft documentation warning that Thread.Abort is not supported on .NET Core, where it throws PlatformNotSupportedException]
Bummer! .NET Core doesn't want you to kill threads. However, if you are really determined, there is a way :) Use ThreadEx.Abort(thread);


Bonus code: How do you get the cancellation token if you have the task?
var token = new TaskCanceledException(task).CancellationToken;
It might not help too much, especially if you want to use it inside the task itself, but it might help clean up the code.

Conclusion


Just like async/await, using the provided cancellation token method will only pollute your code with little effect. However, considering you want to use a common interface for the purpose, use RunCancellable instead of Task.Run and handle the token manually whenever you feel resources have been allocated and need to be cleaned up first.

This should be a simple question with a simple answer, but if you google for Angular Material tooltip styling or length or width you get various answers that are not always complete or even correct. The problem: you have something like <a matTooltip="some message" matTooltipClass="myTooltipClass" ... as per the examples that are easy to find. However, it doesn't seem to be working. The styling for myTooltipClass does not apply. Here are some points to check off while searching for the problem:
  1. Check whether myTooltipClass is defined in the component or the global CSS file. It should either be in the global CSS file (so it applies to everything) or your component should declare encapsulation: ViewEncapsulation.None
  2. Check the declaration of the class is specific enough. DO NOT use !important to fix this, although it would work. Try something like this: mat-tooltip-component .mat-tooltip.myTooltipClass {...}

To see if the class is applied, set the background-color property to red or something. If the class applies, you managed to define it correctly. If it applies only partially, it's not specific enough. To change the width, use max-width. To make the tooltip wrap, use white-space: normal; and word-wrap: break-word;

As a reference, this is how the HTML looks for a rendered tooltip:
<div class="cdk-overlay-container">
  <div class="cdk-overlay-connected-position-bounding-box" dir="ltr" style="top: 0px; left: 0px; height: 100%; width: 100%;">
    <div id="cdk-overlay-1" class="cdk-overlay-pane mat-tooltip-panel" style="pointer-events: auto; top: 8px; left: 417.625px;">
      <mat-tooltip-component aria-hidden="true" class="ng-tns-c34-15 ng-star-inserted" style="zoom: 1;">
        <div class="mat-tooltip ng-trigger ng-trigger-state" style="transform-origin: left center; transform: scale(1);">Info about the action</div>
      </mat-tooltip-component>
    </div>
  </div>
</div>

I've stumbled on an article called Create A Dark/Light Mode Switch with CSS Variables which explains how to style your site with CSS variables which you can then change for various "themes". Her solution is to use a little Javascript - and let's be honest, there is nothing wrong with that, everything is using Javascript these days, but what if we mix this idea with "the CSS checkbox hack" as explained in Creating useful interactive elements using only CSS, or "The Checkbox Hack"? Then we could have a controllable theme in our website without any Javascript.

Just in case these sites disappear, here is the gist of it:
  1. Define your colors in the root of the document with something like this:
    :root {
    --primary-color: #302AE6;
    --secondary-color: #536390;
    --font-color: #424242;
    --bg-color: #fff;
    --heading-color: #292922;
    }
  2. Define your themes based on attributes, like this:
    [data-theme="dark"] {
    --primary-color: #9A97F3;
    --secondary-color: #818cab;
    --font-color: #e1e1ff;
    --bg-color: #161625;
    --heading-color: #818cab;
    }
  3. Use the colors in your web site like this:
    body {
    background-color: var(--bg-color);
    color: var(--font-color);

    /*other styles*/
    .....
    }
  4. Change theme by setting the attribute data-theme to 'dark' or 'light'
Now, to use the CSS checkbox hack, you need to do the following:
  1. Wrap all your site in an element, a div or something
  2. Add a checkbox input on the same level as the wrapper and styled to be invisible
  3. Add a label with a for attribute having the value the id of the checkbox
  4. Add in the label whatever elements you want to trigger the change (button, fake checkbox, etc)
  5. Change the theme CSS to work not with [data-theme="dark"], but with #hiddenCheckbox:checked ~ #wrapper, which means "the element with id wrapper that is a sibling following the checked checkbox with id hiddenCheckbox"
This means that whenever you click on the label, you toggle the hidden checkbox, which in turn changes the CSS that gets applied to the entire wrapper element.

And here is a CodePen to prove it:

See the Pen rbzPmj by Siderite (@siderite) on CodePen.

Sometimes you get an annoying error after updating your .NET Framework or some of the packages or libraries in your project: "Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: <name-of-nuget-package>".

The problem stems from the fact that NuGet packages have variants for different .NET flavors and in your project they are "hinted" at by the <HintPath> child element in the <Reference> elements in your .csproj file. Somehow, the hint still points to a different variant than the one you need and that's why you get this error. The explanation in length can be found in this great post: Why, when and how to reinstall NuGet packages after upgrading a project, but just in case his blog disappears (as so many great ones did in the past), here is the gist of the solution:

In Visual Studio go to Tools → NuGet Package Manager → Package Manager Console and type:
Update-Package <name-of-nuget-package> -Reinstall -ProjectName <name-of-project>

To add some value to Derriey's post, you can solve all the similar issues in your solution at once: copy the entire list of errors from all projects (go to the Output pane, select them all, right click and Copy), then run search and replace in your favorite editor with this regular expression:
^.*?Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information.  Packages affected: ((?:[^,\s]+(?:, )?)+)\t([^\t]+)\t\t\d+\t\t$
and replacement pattern
Update-Package $1 -Reinstall -ProjectName $2

Then make sure there is only one project on each line, copy paste the result into the Package Manager Console window and the entire solution will get fixed.
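If you would rather script the transformation than do it in an editor, a rough C# equivalent would be this sketch (errors.txt being wherever you pasted the error list; note the dot after "information" is escaped here):

using System;
using System.IO;
using System.Text.RegularExpressions;

class NugetFixGenerator
{
    static void Main()
    {
        var errors = File.ReadAllText("errors.txt");
        var pattern = @"^.*?Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information\.  Packages affected: ((?:[^,\s]+(?:, )?)+)\t([^\t]+)\t\t\d+\t\t$";
        // emit one Update-Package command per error line
        var commands = Regex.Replace(errors, pattern,
            "Update-Package $1 -Reinstall -ProjectName $2",
            RegexOptions.Multiline);
        Console.WriteLine(commands);
    }
}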

Example: Error Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: Microsoft.Extensions.Configuration, Serilog MyProject.Common 0

Turns into: Update-Package Microsoft.Extensions.Configuration, Serilog -Reinstall -ProjectName MyProject.Common

Since Update-Package only supports one package at a time and regex replace doesn't have a syntax for multiple captures in the same group, you will have to manually turn this into:
Update-Package Microsoft.Extensions.Configuration -Reinstall -ProjectName MyProject.Common
Update-Package Serilog -Reinstall -ProjectName MyProject.Common

Copy-paste the result and the two packages will be reinstalled in the affected project of your solution.

I spent hours trying to manually fix the assembly redirects in a web.config, only to give up and use the default Add-BindingRedirect in the NuGet package manager. And it worked! I have no idea if this won't break something else, but I got it from Rick Strahl's blog and it worked for me. More in his article. Thanks, Rick!

One thing to remember is that you first have to delete the dependentAssembly elements from the .config file in order for the command to work.

Update: there is an issue related to NuGet packages. I was recommending to run MsBuild with the command line option /t:Restore;Rebuild which should restore packages and rebuild the solution. However, as detailed here, the MSBuild Restore option only restores packages defined in the project PackageReference elements, not the ones in packages.config. In order to restore those, you still need to manually run nuget restore. Where do you get nuget.exe from? Obviously not from the Visual Studio Build Tools... but from the NuGet Gallery.

Now, for the original post.

So I had this medium size Visual Studio solution, in .NET Framework 4.6.1, containing a bunch of projects, including a Wix setup and a web API and I wanted to build it on a machine that did not have Visual Studio, for Continuous Deployment reasons. Since Visual Studio uses MSBuild to compile, I thought it would be a five minute job. Boy, was I wrong!

First of all, the command to build a solution is clear:
MSBuild You.sln
and since it was a .NET 4.0 project, it made sense to use the MSBuild.exe from C:\Windows\Microsoft.NET\Framework64\v4.0.30319. Well, enter the first error: CS1617: Invalid option 'latest' for /langversion; must be ISO-1, ISO-2, 3, 4, 5 or Default. This is caused by the project using C# version 7 which is NOT supported by the MSBuild version in the .NET Framework, you need MSBuild version 15, which comes with Visual Studio. I didn't want to install Visual Studio.

The solution is to install Visual Studio Build Tools, preferably using the Visual Studio Installer. Now, the correct MSBuild version is found at C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe. Note that it is located in a Visual Studio folder, not the MSBuild folder, which is also there.

An issue that occurred here was that messages saying the target framework is 4.6.1 while the installed framework is 4.7.2, previously mere warnings, now became errors. The solution is to install the .NET Framework 4.6.1 SDK or to upgrade all your projects to 4.7.2. Warning: you need the Developer Pack, not just the Runtime.

Second error: error MSB4036: The "GetReferenceNearestTargetFrameworkTask" task was not found. The problem? The NuGet package manager and/or the NuGet targets and build tasks are not installed. In order to install them, run the Visual Studio Installer and look under the Individual Components tab, in the Code tools section. See this Stack Overflow question for more details.

Next problem: The type 'IDisposable' is defined in an assembly that is not referenced. You must add a reference to assembly 'netstandard'. This is a weird one, since the compilation in Visual Studio had no issues whatsoever. It is related to the framework version, though: older .NET Framework versions only implement older versions of .NET Standard, with built-in support for .NET Standard 2.0 only arriving around 4.7.1. The solution is to add a <Reference Include="netstandard" /> tag to your .csproj files (tip: search and replace <Reference Include="System" /> with <Reference Include="System" /><Reference Include="netstandard" /> in all your .csproj files).
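After the replace, the references section of each old-style .csproj should look something like this:
<ItemGroup>
  <Reference Include="System" />
  <Reference Include="netstandard" />
</ItemGroup>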

Another problem similar to the one above is Predefined type 'System.ValueTuple`2' is not defined or imported, and that is because ValueTuple is not part of .NET Framework 4.6.2 or earlier; you need to install the System.ValueTuple package in your project (using the NuGet package manager, more details here).
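To illustrate, a hypothetical class like this compiles out of the box on 4.7+, but on 4.6.2 and earlier it needs the package (Install-Package System.ValueTuple in the Package Manager Console):

using System.Collections.Generic;
using System.Linq;

public class Stats
{
    // The C# 7 tuple syntax below compiles down to System.ValueTuple`2,
    // which the older frameworks do not include
    public (int Min, int Max) GetRange(IEnumerable<int> values)
        => (values.Min(), values.Max());
}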

For both problems above, as well as for the framework conflict issue further up, a possible solution is to upgrade all projects to .NET 4.7+ or whatever is the latest.

Next, targets errors: error MSB4226: The imported project "Microsoft.WebApplication.targets" was not found. and error MSB4057: The target "_WPPCopyWebApplication" does not exist in the project. This is because, even if Visual Studio Build Tools is installed, the web application targets are not. The solution is to copy the folders Web and WebApplications from C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\Microsoft\VisualStudio\v15.0 (on a machine that does have Visual Studio) to "\\BuildMachine\C$\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\VisualStudio\v15.0". You may need to copy the NuGet targets as well; I don't remember whether that is what I did or the NuGet package manager installation solved it.

Last but not least, WiX errors. Obviously, for the setup project to compile you need to install the WiX Toolset. However, you may still run into this error: error MSB3073: The command "heat dir ..blah blah blah" exited with code 9009. Exit code 9009 means the command was not found: if you were trying to execute the build from a command prompt that was already open when you installed WiX, you need to open a new one in order to pick up the changes the installer made to your PATH environment variable.
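In the new prompt you can quickly verify that the tool is now reachable:
where heat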

Finally, in order to compile for a specific platform and configuration, use the flags: /property:Configuration=Release /property:Platform=x64.

Then just run the lines:
nuget restore
"c:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe" /t:Restore;Rebuild Your.sln /property:Configuration=Release /property:Platform=x64

Hope it helps.

So you have a product that has an MSI installer and installation works perfectly. But you want to install the product using the /quiet flag, so that you can automate the installation or for whatever other reason, and now it fails every time. First of all, do the normal thing when you have an MSI installation error:
  • Install the package by manually running msiexec and add verbose logging: msiexec /i My.Setup.msi /L*VX theinstall.log ... (your extra parameters, including /quiet here)
  • Open theinstall.log file and look for the string "Value 3"; that is the line where the install actually failed
  • See if you can figure out what caused the failure

My setup would take its properties from the registry and only needed the user password to be supplied, so I used it like this:
msiexec /i My.Setup.msi /L*VX theinstall.log /quiet USERPASSWORD="thepassword"
For me, the reason for the failure was very unclear: "CreateUser returned actual error code 1603". I had the user, I had the password, what was going on?

The solution was to add ALL the properties needed. With a silent installation, it seems some of the actions in the UI are simply skipped. It's not just quiet, it's skipping things. In order to get a list of all possible properties, open the same install log and look for "SecureCustomProperties", which should list all the properties that you can set from the command line, separated by semicolons. I am sure I didn't need to set all of them for the install to work; in fact, I only used the ones that the UI would have filled in.
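If you prefer the command line to a text editor, the standard Windows findstr tool will locate both strings in the log (the log name being whatever you passed after /L*VX):
findstr /N /C:"Value 3" theinstall.log
findstr /N /C:"SecureCustomProperties" theinstall.log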

Hope it helps.

There is this old joke about how having a senior developer around is like having a little child sitting next to you. "How do I do that?", you ask, and the SD replies "Why?". "Well, the boss asked me to." "But why?" "The client wants it." "Why?" "Because he needs this other thing." "Why?" I am afraid the joke is very true. Usually, at the end of the exchange you get to the real need at the base of the request and understand that it is that need which must be fulfilled, not the request as it came through several filters, each adding to or removing from the real purpose of the task. And if there is something I want to impart from my extensive experience as a software developer, it is that you must always start from the core need and go from there.

However, I am not here to discuss how to write code, for once, but how to organize your team (or yourself as a team of one) from this need driven perspective. And please don't call it NDD and read it NEED or whatever; this is just a general principle that can help you in many domains. So let's analyse Agile. It's been done to death, so what I am trying to do here is not to summarize what others have done, but to simplify things until they don't even need a name anymore.

First, what is Agile Development? Wikipedia says: "Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). It advocates adaptive planning, evolutionary development, empirical knowledge, and continual improvement, and it encourages rapid and flexible response to change. The term Agile was popularized, in this context, by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban."

So Agile is not Scrum, but Scrum is underpinned by Agile principles and values. That is important, because many shops want to do Agile and so they do Scrum or Kanban, and they either try to follow them to the letter (the purists) or adapt them to their requirements (which is more in line with Agile principles, but the implementations usually suffer). One wonders why there aren't a lot of different methodologies out there, just like there are software patterns or programming languages. The answer is that no one starts off by understanding their need for Agile, and in fact many don't even know or care what it is. They do Scrum so that they get the Agile badge, they tell everyone they "do Agile" and pompously ask their candidates if they "know Agile".

But if someone asks "Why do you want to do Agile?", the answers are not so easy to get. Of course there are advantages to the approach, that is why it's so popular, but you don't acquire advantages just to have them in a collection; they need to drive you toward some kind of goal. One might say that for every process you need metrics in order to analyse your progress, and Scrum provides those. But you don't need metrics for the efficacy of the metric system, do you? You still need some way of gauging the usefulness of the methodology itself; otherwise we would still be using imperial units.

I am more accustomed to Scrum, but I don't want to analyse it; instead I want to measure the usefulness of Agile as a concept by asking the very simple question "Why?". As we've seen, its attributes include:
  • self organizing teams
  • cross functional teams
  • collaborative effort between teams and customers
  • adaptive planning
  • evolutionary development
  • empirical knowledge
  • continual improvement
  • rapid and flexible response to change

Why self organizing teams? That's the first and one of the most important and complex aspects of Agile. If the team self organizes, then everyone in it shares responsibility. There is no need for micromanagement from above, the team acts as a unit, you get it as a neat package that you don't care about. In theory when a team is inefficient from the organization's point of view, you should fire the team, not individual people. If one member is not performing as expected, then it's the responsibility of the team to fix them or get rid of them. Also theoretically, an Agile team doesn't need a manager.

But in practice I haven't seen this implemented except in small startup teams. There is always a manager, a director, or several of them. There is always micromanagement. There is always some tendril of HR or the organizational chart that affects people individually. Why is that? Because teams are cross functional, too. A developer doesn't need or want to measure the work efficiency of their colleagues, to tell them how to do things, or to get to the point where they have to push for firing them. A developer doesn't want to administrate the team. So what is a manager in the Agile world? That's right: a member of the team. If devs did not have a manager, they would probably hire one, as part of the self organization of the team. Moreover, for teams inside larger organizations like corporations, the "customer" is the corporation, so the manager role is often combined with that of "customer representative" or "product owner" or "producer" or something like that. They're the people you go to in order to understand what it is you have to do.

And since I've started talking about cross functional teams: Agile doesn't require, or indeed recommend, that people be interchangeable and have the same skill sets. Whenever you hear someone say "everyone should know everything", that's not Agile. It's just a sign that someone from above wants to micromanage the team without having to know anyone in it or ever decide the direction it should work in. It's the sign of a top-down approach that is ultimately inefficient from both directions: the team feels pressure it doesn't need while being restricted in what it can do, and the management has to perform tasks that it doesn't need to do. I haven't seen one team of people treated as interchangeable units be efficient, nor have I heard of any. So why cross functional teams? Because people are individuals, each different, and for every skill they have they put in effort to first reach a basic level of knowledge, then more effort to become good or even expert at it. An Agile team needs to be flexible enough to accommodate any type of person, within reason.

As an aside, think of the skills of a challenged person. Legislation in many countries now requires that software be accessible to a wide variety of disabled people. So whenever you hear that people should be versed in all aspects of the team's business, ask them for Braille courses.

Cross functional also includes the skill of knowing what the customer wants, whether they are a member of your organization or someone sent by the client. It's one of the most necessary roles to be filled in a team. If a team were a living thing, the product owner would be the eyes and ears. You can be strong, fast, deadly, but it doesn't help if you can't find your prey.

So why collaborative effort between team and customers? Because you need to know why you do what you do. This entire post is dedicated to the question of why, so I won't press the point. Just remember that if the team doesn't know why something is required or necessary, then it is not yet fit to perform that task. People are going to hate me for this, but we've already established the need to understand the client, and that in large organizations the customer is the organization, so the logical conclusion is clear: office politics, psychopathic displays of power and idiotic decisions coming from up high are all part of the requirements. There is a caveat, though: it is the need of the customer that has to be clear, not the want. If the liaison to the customer is withholding or unaware of the real reasons something needs to be done, then the team suffers.

As an aside to all manager or high corporate types out there: it's way better to explain to a team that they have to scrap a project because you hate the guts of the person who proposed it than to invent clearly bullshit business reasons for destroying months of loving work. In the first case you honestly admit you are an asshole; in the second you prove you are one.

Now, what's this about adaptive planning? If you already know what is needed, someone should let you do your job in peace and you come to them when it's done. That was the fallacy of the Waterfall system. It is also why Waterfall still works: in a situation where needs are clear and do not change, there is no need for agility. Unfortunately, these situations are rare, especially since I've included in the list of customer needs the legacy of us being descended from apes. That means adapting quickly, without regret, changing direction, sometimes turning back completely, abandoning projects, starting new ones that you recommended starting years ago and were laughed out of the room for, and so on.

It's not only that. Adapting to your client's needs is something that you would have to do anyway, in any realistic scenario. But it also means that you have to adapt to the technological changes in the world. Maybe you have to rewrite your code with another technology or explore new ones that just appeared out of thin air. Adaptation means that a big part of your job as an Agile team is exploration. This is emphasized by the empirical knowledge attribute above; the two go hand in hand. You adapt by understanding, and for this you must be in the loop, you need to know things. Well, at least one member of the team needs to. Adapting means "rapid and flexible response to change".

Therefore, whenever someone asks you if you are Agile or demands that you are, you can tell them about all the new tech that is rumored to come out this year and how you can barely contain your enthusiasm to be working for someone who enables exploration in the true spirit of Agile. Preferably after you've signed a contract, maybe.

But I talked about adapting to new requirements and changes in the environment, not adaptive planning. Adaptive planning means planning for all of the above. In other words, when you start a task, plan for the possibility of it changing, having to be abandoned or having to be redone in a different shape. And here is a point of much contention, because you can do this at any level. If you do it whenever you have to change the color of a button, you will overestimate all your work. If you do it at the level of an entire enterprise suite of products, you might as well do Waterfall. In any case, it is the entire team that needs to know and decide what it plans for drastic changes.

Yet there are two ways of reacting to change: planned adaptation or chaotic panic. Adaptive planning refers to planning for change, not preparing to work in complete chaos. That is why Scrum, as an example, splits planning into small periods of time (2-4 weeks) called sprints, in which the work planned needs to be finished. If anything drastic happens (like a task losing its relevance), the sprint is aborted. The sprints themselves can exist in relative chaos, but inside one, planning and order prevail. It doesn't mean planning for an irate boss man to come and tell you every day to change this and that. It means having a list of tasks that you can edit, whether by removing or adding tasks, rearranging their priority, or even adding tasks like "how to deal with this insane man coming into the office and spouting obscenities at me", and then executing them in order. In no way does it mean that you play things by ear or plan for something and then ignore the plan.

Two aspects remain to be discussed: evolutionary development and continual improvement. Both strongly imply awareness of the development processes through which the team goes. It's the team equivalent of introspection, commonly referred to as the retrospective. What did we do? How does it compare with what we had planned? What does it mean for the future? How can we do better? And the why of it is obvious: you've got a good thing, how can you get more of it? Yet this is the point that is hardest to put in a box. Various methodologies attempt to introduce metrics to measure progress and performance. Then, with the function of the team properly computed and numbers assigned to its yield, one can scientifically tweak the processes in order to get the most out of the resources the team has. This is a wonderful concept and I agree with it in spirit, but in practice, frankly, I think it's bollocks. It's pure shit. It's circular logic.

Here we are, discussing a beautiful philosophy of doing work for maximum benefit by letting a team of very diverse people organize themselves towards a common goal, being ready for anything, adapting to everything, pushing through no matter what by leveraging their individual strengths. And then we come and say "Wait, I can take all this, name it, measure it, put it into equations and then tell you how to improve it!". Well, why didn't you do that from the start, then? Why bother with all the self expression and being able to adapt to anything new? If all is old under the sun, why do Agile at all?!

I believe this is the part where Agile methodologies fail utterly and miserably and can't even admit to themselves that they have. I agree that progress needs to be visible and the ability to compare it with other instances is essential, but from there to measuring it numerically is a large distance. Forget the usual arguments against measuring team success: the team composition is not fixed, the type of work is different, team dynamics mean people behave differently, personal and local events interfere, and so on. It's bigger and simpler than that. I submit that since the entire concept of Agile rests on the principles of flexibility, the method of gauging success should also be flexible, and especially the way the team processes are tweaked for the future.

And I have proof. In every company I've ever been in or heard of, the local Agile methodology didn't quite work in various ways, and that was apparent when people laughed at or disrespected that part to the point of ignoring the rules in place. And the common bit that didn't work in all of them is the retrospective. The idea of asking "what problems did we face?" and then immediately finding and enforcing solutions with "what can we do about it next time?" has become such a mantra that it is impossible to ask these questions separately. It's become a reflex (to the horror of wives everywhere) to immediately ask "what can we do to fix it?" upon hearing "there is a problem". There is an intermediate question that must be inserted there: "Is this really a problem, or do I just have to be aware it happened? Can we ignore it?". That is another topic, though.

A short summary would be:
  • responsibility is shared between all the members of the team and decisions should be taken as a team
  • any member of the team is valued for their contribution towards reaching the team goal
  • the manager is not your boss, it's just the role of a member of the team
  • the Scrum master or Kanban master or whoever does the accounting in whatever methodology you use is still a member of the team
  • having a member of the team who understands the needs of the customer is essential; they are the people you go to ask "Why?" all the time
  • adaptation needs to be written into the DNA of the methodology; the team must plan to change its plans, but still do things in an orderly fashion
  • experience should guide the team to define progress and improve itself
  • transparency of the team goal is necessary for the team's success
  • there is no improvement of the development method (or the code) without exploration and the intentional participation of the entire team
  • never be afraid to ask why; if you work in an environment where you are afraid, consider changing it

Let me tell you the typical story of a development team. First there is chaos: someone decides they are a quick and dirty team and they can do anything, fast, cutting all corners into a straight line to hell. Mistakes are made, blame flies around, people are fired, emotions run high, results go nowhere. We need order, all cry out, we need to use the holy grail of software development and the greatest human invention since fire (that week, at least). And then a specialist is called. He either becomes part of the team, with the role of teaching and enforcing the rules, or just teaches a few courses, gets his money and leaves before the hapless fools try to put into practice whatever it is they thought they understood. The result is the same: developers first like it, then start grumbling about the indecency of having to fill out documentation before any code is written, or of updating it, or of adding tasks in whatever software is used (or on Post-Its on a whiteboard, if you are a purist and still haven't figured out that developers can't use a pen to write). So either morale plummets and operations continue with diminishing results, or the grumbling becomes a rallying cry for change. And the change is... chaos again.

In short, just like a revolutionary South American country, they can't decide if they want freedom or dictatorship, because both options have big damn flaws and anything in between is sucking their life away.

But why?

It's because even methodologies are subject to change, to human whim, to altering conditions. If you look at the beginning of this enormous post, it all started with the main goal and meandered from there, just like an Agile methodology in a real firm does. What is the goal of the team? And don't you look at your manager to tell you. As a team, responsibility is shared, remember? If the purpose is to turn perfectly enthusiastic software developers into soulless drones... you should run away from there. But if it is not, think about the things you are doing. Are they helping or are they not helping? This week, at least.

In conclusion (or retrospect?) I believe Agile is a great idea, but its underpinning principles of always being aware of the situation and adapting to it are a more general and useful concept to remember and live by. Truth and knowledge come before adapting, and many software shops fail right there, so the methodology is irrelevant. The method of work chosen should then reflect the flexibility it enables. I don't believe in a pure anything, and certainly not pure Scrum, pure Kanban or pure Agile, but whatever you cook up, it should work towards the real goal of the team, as known and acknowledged by everyone in it. What's your function in life?

SonarSource static analysis rule RSPEC-3906 states:
Delegate event handlers (i.e. delegates used as type of an event) should have a very specific signature:
  • Return type void.
  • First argument of type System.Object and named 'sender'.
  • Second argument of type System.EventArgs (or any derived type) and is named 'e'.


The problem was that I was getting the warning on a simple event declared as EventHandler<TEventArgs>. Going to the delegate's source code page revealed the reason in a comment: // Removed TEventArgs constraint post-.NET 4. In other words, since .NET 4.5 EventHandler<TEventArgs> no longer constrains TEventArgs to derive from EventArgs, so the analyzer can no longer take the second argument's type for granted.
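A minimal sketch of a declaration that keeps the rule happy (type names are hypothetical): make sure the type argument you pass to EventHandler<TEventArgs> explicitly derives from EventArgs.

using System;

public class ProgressEventArgs : EventArgs
{
    public int Percent { get; set; }
}

public class Downloader
{
    // void return, (object sender, EventArgs-derived e): exactly the expected signature
    public event EventHandler<ProgressEventArgs> ProgressChanged;

    protected void OnProgressChanged(int percent)
        => ProgressChanged?.Invoke(this, new ProgressEventArgs { Percent = percent });
}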

This is another post discussing a static analysis rule that made me learn something new. SonarSource rule RSPEC-3898 says: If you're using a struct, it is likely because you're interested in performance. But by failing to implement IEquatable<T> you're losing performance when comparisons are made, because without IEquatable<T>, boxing and reflection are used to make comparisons.

There is a StackOverflow entry that discusses just that, and the answer to this particular problem is not actually the accepted one. In pure StackOverflow fashion, I will quote the relevant bit of the answer, just in case the site goes offline in the future: I'm amazed that the most important reason is not mentioned here. IEquatable<> was introduced mainly for structs for two reasons:
  1. For value types (read structs) the non-generic Equals(object) requires boxing. IEquatable<> lets a structure implement a strongly typed Equals method so that no boxing is required.
  2. For structs, the default implementation of Object.Equals(Object) (which is the overridden version in System.ValueType) performs a value equality check by using reflection to compare the values of every field in the type. When an implementer overrides the virtual Equals method in a struct, the purpose is to provide a more efficient means of performing the value equality check and optionally to base the comparison on some subset of the struct's fields or properties.

I thought this was worth mentioning, for those performance-critical struct equality scenarios.
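A minimal sketch of what the rule asks for, using a hypothetical Point struct:

using System;

public struct Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // Strongly typed comparison: no boxing, no reflection
    public bool Equals(Point other) => X == other.X && Y == other.Y;

    // Keep the object overloads consistent with the typed Equals
    public override bool Equals(object obj) => obj is Point other && Equals(other);
    public override int GetHashCode() => unchecked((X * 397) ^ Y);
}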