Long story short, check out this answer to a StackOverflow question: Package destination of restore of .net-core projects is always global package directory

The scenario is this: you are used to .NET Framework projects, for which Visual Studio restored NuGet packages in a packages folder inside the solution folder, and then you switch to .NET Core. No packages folder! You google it and you find that there is a global folder in your user profile where NuGet downloads all of these packages, that .NET Core uses it by default and also that you might change this behavior using a property in nuget.config. So here are the issues you have to take into account:
  1. There are two ways of configuring NuGet packages for your projects: a packages.config file and PackageReference elements in your .csproj file
  2. The name of the property you need to set is different based on the type of configuration: repositoryPath and globalPackagesFolder, respectively
  3. There are two formats for configuring nuget properties: using the add element inside the config element and using the repositoryPath element inside the settings element
  4. The format that worked for me in VS 2017 was the config element
  5. There are two locations for the nuget.config file: in a .nuget folder inside your solution folder and directly in the solution folder (or any of its parent folders)
  6. The location that is accepted by the latest versions of NuGet is directly in the solution folder
  7. Sometimes you need to restart Visual Studio for the change in nuget.config files to be considered
  8. The path you specify is relative to the nuget.config file, no matter where it is

A bit of overkill, but try this as the beginning of the nuget.config file that sits next to the .sln file in your solution:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="repositoryPath" value="./packages" />
    <add key="globalPackagesFolder" value="./packages" />
  </config>
  <settings>
    <repositoryPath>./packages</repositoryPath>
  </settings>
  ...
</configuration>

We already know how to load types in .NET Framework and we know what they say we should use in .NET Core. But what about Standard? Is that a trick question? Sort of. Right now we have two .NET Standard and three .NET Core versions, although .NET Core 3 is still in preview. The signature of AssemblyLoadContext and the way it is used have changed dramatically. Core 3 enables context unloading, but Standard 2 does not. So you are either forced to build your library as Core 3, or you have to avoid unloading contexts altogether, or you have to use reflection, which is not robust and will probably not be needed anyway if a Standard 3 arrives.

But there are subtler issues at work. One of them is that, at least with .NET Core 3 Preview6, when you reference System.Runtime.Loader in a Standard library, so you can access AssemblyLoadContext, you get conflicts between the System.Runtime you are using and the one referenced by System.Runtime.Loader. The only solution is to use the System.Runtime.Loader NuGet package, but that returns you to the Standard 2 version of AssemblyLoadContext, even if the library version is higher!

The setup is this: I have an ITestInterface interface which resides in TestInterfaceLibrary.dll. I also have a TestImplementation class that can be found in TestImplementationLibrary.dll and implements ITestInterface. My program either does not reference any of these libraries or it only references the interface one. The task is to load both these types and then simply convert one instance of TestImplementation to ITestInterface. A simple test would be loading the types and then expecting interfaceType.IsAssignableFrom(implementationType) to be true.
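
To make the setup concrete, here is a minimal sketch of what the two libraries might contain (the Describe member is an illustrative assumption of mine; the post only fixes the type and assembly names):
// TestInterfaceLibrary.dll
namespace TestInterfaceLibrary
{
    public interface ITestInterface
    {
        // illustrative member; its content is irrelevant to the loading tests
        string Describe();
    }
}

// TestImplementationLibrary.dll, which references TestInterfaceLibrary.dll
namespace TestImplementationLibrary
{
    public class TestImplementation : TestInterfaceLibrary.ITestInterface
    {
        public string Describe() => "loaded dynamically";
    }
}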

Core 3


Let's first try the Core 3 way:
var context = new AssemblyLoadContext("testContext", true);
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface");
Console.WriteLine(interfaceType?.ToString()??"interface type not loaded");
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation");
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
Console.WriteLine("implementation implements interface: "+interfaceType.IsAssignableFrom(implementationType));
 
context.Unload();
The output is:
TestInterfaceLibrary.ITestInterface
TestImplementationLibrary.TestImplementation
implementation implements interface: True

It works! But only because the interface assembly is loaded first. If you try to load just the implementation type first, it will come up as empty. There are no exceptions thrown unless you get all the assembly types or specify the throwOnError parameter in GetType. The exception is "System.IO.FileNotFoundException: 'Could not load file or assembly 'TestInterfaceLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'".

In order to solve this, we need to use the Resolving event of the AssemblyLoadContext class. Let's try this:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
 
private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
    var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
    return context.LoadFromAssemblyPath(expectedPath);
}

And now it works again, by assuming the assembly name is the same as the assembly file name and that it is found in the same place.
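
If either of those assumptions does not hold, the handler can probe more than one folder. A minimal sketch, assuming you maintain a list of plugin folders (the ProbeDirectories field is my own addition, not part of the original code):
// hypothetical variant of the handler: probe a list of known folders instead of
// assuming everything sits in the application base directory
private static readonly string[] ProbeDirectories =
{
    AppDomain.CurrentDomain.BaseDirectory
    // add plugin folders here as needed
};

private static Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
{
    foreach (var folder in ProbeDirectories)
    {
        var candidate = Path.Combine(folder, assemblyName.Name + ".dll");
        if (File.Exists(candidate))
        {
            return context.LoadFromAssemblyPath(candidate);
        }
    }
    // returning null lets the load fail with the usual FileNotFoundException
    return null;
}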

But... if we try this in different contexts:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var implementationAssembly = context.LoadFromAssemblyPath(implementationAssemblyPath);
var implementationType = implementationAssembly.GetType("TestImplementationLibrary.TestImplementation", true);
Console.WriteLine(implementationType?.ToString() ?? "implementation type not loaded");
 
context.Resolving -= Context_Resolving;
context.Unload();
context = new AssemblyLoadContext("testContext2", true);
context.Resolving += Context_Resolving;
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine("implementation implements interface: " + interfaceType.IsAssignableFrom(implementationType));
 
context.Resolving -= Context_Resolving;
context.Unload();
the output will show
implementation implements interface: False

This means that if we want to encapsulate this in a TypeLoader class or something, we cannot use different contexts for dynamically loading types. Even if we had one context that we would unload in order to refresh all the types, it could still be different from the main context, in case the interface is loaded twice or referenced directly in the project.

For example, if you reference TestInterfaceLibrary directly and you load TestImplementation dynamically it will work as expected, because ITestInterface is resolved automatically from the main context. However, if you load ITestInterface dynamically, too, it will be a different type from the referenced ITestInterface, even if they apparently have the same name and full name and assembly qualified name! So it kind of makes sense to not load a type twice. Is this where the context unloading comes in? Not really. Let's define a method that counts the number of types with a certain name in the current domain as
private static int CountTypes(string typeName)
{
    return AppDomain.CurrentDomain.GetAssemblies()
        .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
        .Count();
}

And now let's run this code:
var context = new AssemblyLoadContext("testContext", true);
context.Resolving += Context_Resolving;
 
var referencedInterfaceType = typeof(ITestInterface);
Console.WriteLine(referencedInterfaceType?.ToString() ?? "interface type not loaded");
 
var interfaceAssembly = context.LoadFromAssemblyPath(interfaceAssemblyPath);
var interfaceType = interfaceAssembly.GetType("TestInterfaceLibrary.ITestInterface", true);
Console.WriteLine(interfaceType?.ToString() ?? "interface type not loaded");
 
Console.WriteLine($"Types are the same: {interfaceType==referencedInterfaceType}");
 
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");
 
context.Resolving -= Context_Resolving;
context.Unload();
Console.WriteLine($"Number of types with name {interfaceType.FullName}: {CountTypes(interfaceType.FullName)}");

There is the referenced type, then we load the type dynamically again, inside a new context. We count the types loaded in the current domain, we unload the context, we count the types again. The result is
TestInterfaceLibrary.ITestInterface
TestInterfaceLibrary.ITestInterface
Types are the same: False
Number of types with name TestInterfaceLibrary.ITestInterface: 2
Number of types with name TestInterfaceLibrary.ITestInterface: 2
The types are always 2!

Bottom line, even when unloading the AssemblyLoadContext, the types used are not unloaded and trying to find a type by name will result in duplicates.

OK, so let's just agree that types with the same name, once loaded, should remain there and no other type with the same name should be loaded. Let's try to incorporate this into a TypeLoader class:
public class TypeLoader : IDisposable
{
    private readonly AssemblyLoadContext _context;

    public TypeLoader()
    {
        _context = new AssemblyLoadContext(GetType().FullName, true);
        _context.Resolving += Context_Resolving;
    }

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes().Where(t => t.FullName == typeName))
            .FirstOrDefault();
        if (type != null)
        {
            return type;
        }
        var assembly = _context.LoadFromAssemblyPath(assemblyPath);
        return assembly.GetType(typeName, true);
    }

    public void Dispose()
    {
        // _context is readonly and assigned in the constructor, so no null check is needed
        // (the null-conditional operator cannot be used on the left side of -= anyway)
        _context.Resolving -= Context_Resolving;
        _context.Unload();
    }
}

The code in our test is now much clearer:
var interfaceAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestInterfaceLibrary.dll");
var implementationAssemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestImplementationLibrary.dll");
var interfaceTypeName = "TestInterfaceLibrary.ITestInterface";
var implementationTypeName = "TestImplementationLibrary.TestImplementation";

using (var loader = new TypeLoader())
{
    Type referencedType = typeof(TestInterfaceLibrary.ITestInterface);
    var interfaceType = loader.LoadType(interfaceTypeName, interfaceAssemblyPath);
    var implementationType = loader.LoadType(implementationTypeName, implementationAssemblyPath);
    Console.WriteLine($@"
referenced type: {referencedType}
interface type: {interfaceType}
implementation type: {implementationType}
referenced and loaded interfaces are the same: {referencedType == interfaceType}
interface implemented: {interfaceType.IsAssignableFrom(implementationType)}"
    );
}
and the result is
referenced type: TestInterfaceLibrary.ITestInterface
interface type: TestInterfaceLibrary.ITestInterface
implementation type: TestImplementationLibrary.TestImplementation
referenced and loaded interfaces are the same: True
interface implemented: True

But we still use Unload. Maybe it will work some day as I want it to work, but until then, why not get rid of Unload and make TypeLoader a class in a Standard 2 library?

Standard 2


For this I will create a new Standard 2 library project and then reference it in our test Core 3 project. Then I will move the TypeLoader class into the library project.

The errors in the library project are related to not knowing what an AssemblyLoadContext is, therefore the first solution is to reference System.Runtime.Loader from the framework. I get the immediate error "Assembly 'System.Runtime.Loader' with identity 'System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' uses 'System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version than referenced assembly 'System.Runtime' with identity 'System.Runtime, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".

Solution 2: load the System.Runtime.Loader NuGet package, which at the time of writing this, is version 4.3.0. The error is now gone, but several things are immediately apparent:
  1. the Unload method doesn't exist anymore
  2. the constructor doesn't receive a name and a bool anymore
  3. AssemblyLoadContext is now abstract

In order to solve this I am creating a DynamicAssemblyLoadContext class that inherits from AssemblyLoadContext, just returns null from the Load method override, and gets an Unload method and a constructor with a string and a bool that don't do anything. And it works again. The updated TypeLoader class is now:
public class TypeLoader : IDisposable
{
    private readonly DynamicAssemblyLoadContext _context;

    public TypeLoader()
    {
        _context = new DynamicAssemblyLoadContext(GetType().FullName, true);
        _context.Resolving += Context_Resolving;
    }

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
            .FirstOrDefault();
        if (type != null)
        {
            return type;
        }
        var assembly = _context.LoadFromAssemblyPath(assemblyPath);
        return assembly.GetType(typeName, true);
    }

    public void Dispose()
    {
        // _context is readonly and assigned in the constructor, so no null check is needed
        _context.Resolving -= Context_Resolving;
        _context.Unload();
    }


    private class DynamicAssemblyLoadContext : AssemblyLoadContext
    {
        public DynamicAssemblyLoadContext(string name, bool isCollectible)
        {
        }

        protected override Assembly Load(AssemblyName assemblyName)
        {
            return null;
        }

        public void Unload()
        {
        }
    }
}

The safe way


The code above has an issue, though. If the interface type is dynamically loaded before its referenced type is used, this fails again. This is the case when you use dependency injection. You dynamically load the types in order to register the implementation relationship to the interface, but then, when you ask for a resolution for the interface type, now referenced by the main project, you get another type named just the same.
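
To illustrate the dependency injection scenario, here is a hedged sketch using Microsoft.Extensions.DependencyInjection (the registration code is mine; the type and assembly names are the ones from the test setup):
var services = new ServiceCollection();
var loader = new TypeLoader();

// both types are loaded dynamically and registered by name
var interfaceType = loader.LoadType("TestInterfaceLibrary.ITestInterface", interfaceAssemblyPath);
var implementationType = loader.LoadType("TestImplementationLibrary.TestImplementation", implementationAssemblyPath);
services.AddTransient(interfaceType, implementationType);

var provider = services.BuildServiceProvider();
// if the loader resolved ITestInterface in its own context, this returns null, because the
// registered service type is a different Type instance than the referenced ITestInterface
var instance = provider.GetService(typeof(TestInterfaceLibrary.ITestInterface));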

The safe way: considering that we don't really use Unload and we don't count on it ever working, why not use the default context, the one where everything loads anyway, and be done with it? When you do that, the code becomes a little uglier, but it works in all situations.

Final version.
public class TypeLoader
{
    private readonly object _resolutionLock = new object();

    private Assembly Context_Resolving(AssemblyLoadContext context, AssemblyName assemblyName)
    {
        var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
        return context.LoadFromAssemblyPath(expectedPath);
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        var context = AssemblyLoadContext.Default;
        lock (_resolutionLock)
        {
            context.Resolving += Context_Resolving;
            try
            {
                var type = AppDomain.CurrentDomain.GetAssemblies()
                    .SelectMany(ass => ass.GetTypes().Where(t => t.FullName == typeName))
                    .FirstOrDefault();
                if (type != null)
                {
                    return type;
                }
                var assembly = context.LoadFromAssemblyPath(assemblyPath);
                return assembly.GetType(typeName, true);
            }
            finally
            {
                // make sure the handler is detached even when the type is found early or loading throws
                context.Resolving -= Context_Resolving;
            }
        }
    }
}

You just gotta hate that adding and removing the event inside a lock, right? Well, if you find a better solution, let me know.
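
If you do, one possible direction, a sketch of my own rather than something from the original post, is to attach the handler to the default context only once, for the lifetime of the process, so the event never has to be touched inside the lock:
public class TypeLoader
{
    private static readonly object _resolutionLock = new object();

    static TypeLoader()
    {
        // attached once per process; the default context lives as long as the application anyway
        AssemblyLoadContext.Default.Resolving += (context, assemblyName) =>
        {
            var expectedPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyName.Name + ".dll");
            return File.Exists(expectedPath) ? context.LoadFromAssemblyPath(expectedPath) : null;
        };
    }

    public Type LoadType(string typeName, string assemblyPath)
    {
        lock (_resolutionLock)
        {
            var type = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(assembly => assembly.GetTypes())
                .FirstOrDefault(t => t.FullName == typeName);
            if (type != null)
            {
                return type;
            }
            return AssemblyLoadContext.Default
                .LoadFromAssemblyPath(assemblyPath)
                .GetType(typeName, true);
        }
    }
}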

This is more of a backup of the extensions I have installed on the two main IDEs I'm using for my job: Visual Studio and Visual Studio Code.

Visual Studio Code


In order to list the installed extensions, use the command code --list-extensions. For me, this results in the following list, already prefixed with the install command so it can be replayed directly:

code --install-extension aeschli.vscode-css-formatter
code --install-extension Angular.ng-template
code --install-extension christian-kohler.npm-intellisense
code --install-extension cmstead.jsrefactor
code --install-extension cssho.vscode-svgviewer
code --install-extension danwahlin.angular2-snippets
code --install-extension dbaeumer.jshint
code --install-extension dbaeumer.vscode-eslint
code --install-extension donjayamanne.jquerysnippets
code --install-extension donjayamanne.jupyter
code --install-extension DotJoshJohnson.xml
code --install-extension ecmel.vscode-html-css
code --install-extension eg2.vscode-npm-script
code --install-extension esbenp.prettier-vscode
code --install-extension fabianlauer.vs-code-xml-format
code --install-extension fknop.vscode-npm
code --install-extension HookyQR.beautify
code --install-extension humao.rest-client
code --install-extension joelday.docthis
code --install-extension k--kato.docomment
code --install-extension michelemelluso.code-beautifier
code --install-extension minhthai.vscode-todo-parser
code --install-extension mohsen1.prettify-json
code --install-extension mrmlnc.vscode-scss
code --install-extension ms-mssql.mssql
code --install-extension ms-vscode.csharp
code --install-extension ms-vscode.typescript-javascript-grammar
code --install-extension ms-vscode.vscode-typescript-tslint-plugin
code --install-extension msjsdiag.debugger-for-chrome
code --install-extension naumovs.color-highlight
code --install-extension pmneo.tsimporter
code --install-extension rbbit.typescript-hero
code --install-extension robinbentley.sass-indented
code --install-extension wayou.vscode-todo-highlight

In order to install them, use code --install-extension [extension name] for each line.

Visual Studio


For Visual Studio, funny enough, in order to export and import your extensions you need to use an extension: Extension Manager 2017, which on my system exports a file in .vsext format:

{
"id": "5f191824-b8a6-47c0-9f96-f607dfd3c09b",
"name": "My Visual Studio extensions",
"description": "A collection of my Visual Studio extensions",
"version": "1.0",
"extensions": [
{
"name": ".NET Portability Analyzer",
"vsixId": "55d15546-28ca-40dc-af23-dfa503e9c5fe"
},
{
"name": "Advanced Installer for Visual Studio 2017",
"vsixId": "Caphyon.AdvancedInstaller.23debb5a-cff4-4b91-88bf-6280f72a7ebb"
},
{
"name": "Azure Data Lake and Stream Analytics Tools",
"vsixId": "1e906ff5-9da8-4091-a299-5c253c55fdc9"
},
{
"name": "Azure Functions and Web Jobs Tools",
"vsixId": "Microsoft.VisualStudio.Web.AzureFunctions"
},
{
"name": "BuildVision",
"vsixId": "837c3c3b-8382-4839-9c9a-807b758a929f"
},
{
"name": "Clean Code .NET",
"vsixId": "CleanCode.NET.9ecfa9bb-0775-48d0-9898-4dbbbd529fe3"
},
{
"name": "Cloud Explorer for VS 2017",
"vsixId": "Microsoft.VisualStudio.CloudExplorer"
},
{
"name": "Code Cracker for C#",
"vsixId": "CodeCracker.Vsix..5b99e64c-1418-4a06-990c-fd4cf01f4f63"
},
{
"name": "Code Graph",
"vsixId": "CodeAtlasVSIX.Company.df5456fb-08ea-4256-b5ff-ecdb3a512ad3"
},
{
"name": "CodeMaid",
"vsixId": "4c82e17d-927e-42d2-8460-b473ac7df316"
},
{
"name": "CommentCop",
"vsixId": "CommentCop..0521EE68-1A5D-4C78-9970-B6A46B03FA6D"
},
{
"name": "EntityFramework Reverse POCO Generator",
"vsixId": "EntityFramework_Reverse_POCO_Generator..d542a934-8bd6-4136-b490-5f0049d62033"
},
{
"name": "Extension Manager 2017",
"vsixId": "e83d71b8-8bfc-4e06-b145-b0388910c016"
},
{
"name": "Fix Mixed Tabs",
"vsixId": "FixMixedTabs.9f1d3050-b986-4b10-ae36-97c6efc5e968"
},
{
"name": "Fix Namespace",
"vsixId": "f073da8c-bb52-41f8-b95a-a6346b1a0b52"
},
{
"name": "MetricsAnalyzer",
"vsixId": "MetricsAnalyzer..8026235d-7afc-401b-8f45-ba8624a07ef5"
},
{
"name": "Microsoft Code Analysis 2017",
"vsixId": "4db2d63d-3320-4fbd-bf80-07f8d1500bd3"
},
{
"name": "Moq.Analyzers",
"vsixId": "Moq.Analyzers..c3c7e3f8-2407-428d-beef-c4557253517b"
},
{
"name": "Object Exporter",
"vsixId": "07fb5b16-f4be-4488-9a19-b4f36d2c05a6"
},
{
"name": "Output enhancer",
"vsixId": "VSOutputEnhancer.Nikolay Balakin.a06be4c3-f97e-425c-8a0d-bdef08ac2abb"
},
{
"name": "Power Commands for Visual Studio",
"vsixId": "PowerCommands.3ecdd89b-f985-483d-8c94-be37de4dc083"
},
{
"name": "Ref12",
"vsixId": "SLaks-Ref12-086C4CE4-7061-4B1F-BC77-B64E4ED71B8E"
},
{
"name": "Reference Conflicts Analyser",
"vsixId": "ff477521-e67b-4ca3-931f-3edf36125d28"
},
{
"name": "Regular Expression Tester Extension",
"vsixId": "a65d58d2-ead8-4eea-a47d-fa60865a6043"
},
{
"name": "ResolveUR - Resolve Unused References",
"vsixId": "637ba02c-3388-4e54-9051-3eea7c51b054"
},
{
"name": "Roslyn Security Guard",
"vsixId": "RoslynSecurityGuard..45fa56c2-16f1-4395-8c10-a5a460084018"
},
{
"name": "Roslynator 2017",
"vsixId": "9289a8ab-1bb6-496b-9992-9f7ea27f66a8"
},
{
"name": "Security Code Scan (for VS2017 and newer)",
"vsixId": "955196A7-ACBF-4F6B-820B-51B8507CE853"
},
{
"name": "Solution Error Visualizer",
"vsixId": "SolutionErrorVisualizer.a392f96b-6b33-4b53-b4bb-3376a05f986c"
},
{
"name": "SonarLint for Visual Studio 2017",
"vsixId": "SonarLint.36871a7b-4853-481f-bb52-1835a874e81b"
},
{
"name": "SQL Search",
"vsixId": "Redgate.SQLSearch.VSExtension.9BD7AEDA-C291-4702-8191-4189B099F3A9"
},
{
"name": "Target Framework Migrator",
"vsixId": "TargetFrameworkMigrator..4f7666b9-e62c-46a1-af25-21ab8742ef00"
},
{
"name": "Trailing Whitespace Visualizer",
"vsixId": "4c1a78e6-e7b8-4aa9-8812-4836e051ff6d"
},
{
"name": "Unit Test Boilerplate Generator",
"vsixId": "UnitTestBoilerplate.RandomEngy.ca0bb824-eb5a-41a8-ab39-3b81f03ba3fe"
},
{
"name": "Visual Studio IntelliCode",
"vsixId": "IntelliCode.VSIX.598224b2-b987-401b-8509-f568d0c0b946"
},
{
"name": "Visual Studio Spell Checker (VS2017 and Later)",
"vsixId": "43EA967E-0DE2-4136-8E52-C6DCFB5C2748"
},
{
"name": "Wix Toolset Visual Studio 2017 Extension",
"vsixId": "WixToolset.VisualStudioExtension.Dev15"
}
]
}


The Problem


Phew, that's a mouthful. But the issue is that trying to serialize a FileInfo or a DirectoryInfo object with Newtonsoft's Json library in .NET Core fails with a vague exception:
Newtonsoft.Json.JsonSerializationException: Unable to serialize instance of 'System.IO.DirectoryInfo'.
at Newtonsoft.Json.Serialization.DefaultContractResolver.ThrowUnableToSerializeError(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonContract.InvokeOnSerializing(Object o, StreamingContext context)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.OnSerializing(JsonWriter writer, JsonContract contract, Object value)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)

It doesn't say why it fails, just that a method called ThrowUnableToSerializeError threw um... an unable to serialize error?

The Cause


Looking at the Newtonsoft code, we eventually get to this piece of code:
// serializing DirectoryInfo without ISerializable will stackoverflow
// https://github.com/JamesNK/Newtonsoft.Json/issues/1541
if
(Array.IndexOf(BlacklistedTypeNames, objectType.FullName) != -1)
{
contract.OnSerializingCallbacks.Add(ThrowUnableToSerializeError);
}

Later, another piece of code will execute the serializing callbacks and throw the exception. We can get rid of this functionality by using a custom contract resolver, like this:
var settings = new JsonSerializerSettings
{
    ContractResolver = new FileInfoContractResolver()
};

private class FileInfoContractResolver : DefaultContractResolver
{
    protected override JsonContract CreateContract(Type objectType)
    {
        var result = base.CreateContract(objectType);
        if (typeof(FileSystemInfo).IsAssignableFrom(objectType))
        {
            result.OnSerializingCallbacks.Clear();
        }
        return result;
    }
}

Yet now, when trying to serialize, we get the stack overflow exception described in the original Newtonsoft.Json issue. It stems from the difference between the .NET Framework implementation and the .NET Core implementation of ISerializable in FileSystemInfo, which in Core just throws PlatformNotSupportedException. It's still not clear why it goes to a StackOverflowException, probably some conflict with Newtonsoft code, but it's clear Microsoft does not intend to make these classes serializable. If you think about it, those classes suck for so many reasons!

The Solution


So, in order to solve it, we will use a custom JSON converter:
private class FileSystemInfoConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return typeof(FileSystemInfo).IsAssignableFrom(objectType);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        if (reader.TokenType == JsonToken.Null)
            return null;
        var jObject = JObject.Load(reader);
        var fullPath = jObject["FullPath"].Value<string>();
        return Activator.CreateInstance(objectType, fullPath);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        var info = value as FileSystemInfo;
        if (info == null)
        {
            // JToken.FromObject would throw on null, so write the null token directly
            writer.WriteNull();
            return;
        }
        var obj = new
        {
            FullPath = info.FullName
        };
        var token = JToken.FromObject(obj);
        token.WriteTo(writer);
    }
}
And we use it like this:
var settings = new JsonSerializerSettings
{
    Converters = new List<JsonConverter>
    {
        new FileSystemInfoConverter()
    }
};
var json = JsonConvert.SerializeObject(dir, settings);
var info = JsonConvert.DeserializeObject<DirectoryInfo>(json, settings);

Why FileInfo and DirectoryInfo suck


The answer of a senior developer to any question should be "Why?" or "Why on Earth or anywhere in the Solar System would you want to do a dumb thing like that?!?!". Why would you want to serialize a directory or file info object? The answer is that you should not. The info objects are defined by only one thing: a path, but they have so much baggage: properties that access the file system, unsafe methods, no interfaces or factory methods that can allow them to be mocked in unit tests. They might look like data objects, but they are not!

Imagine a scenario where you have a list of all the files in your drive. You enumerated them all and now you want to serialize them. Should the serializer save Exists or Length, for example? Because that means it will access the file system for each of them in the process of serialization, leading to a lot of work, propensity to access errors and so on.

Best practices say you should use some model classes to move data around, like a simple FileSystemInfoModel with Type and FullPath and maybe Attributes or Size properties or whatever you want to save, which you populate yourself as a separate responsibility. And if you want to use the functionality of the Info classes, use System.IO.Abstractions or the new Core IFileProvider abstraction to get implementations of interfaces that you can mock in unit tests.
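
As an illustration only, such a model class could be as simple as the sketch below (the class and member names are mine):
public class FileSystemInfoModel
{
    public string Type { get; set; }          // "File" or "Directory"
    public string FullPath { get; set; }
    public long? Size { get; set; }           // null for directories
    public FileAttributes Attributes { get; set; }

    public static FileSystemInfoModel From(FileSystemInfo info) => new FileSystemInfoModel
    {
        Type = info is DirectoryInfo ? "Directory" : "File",
        FullPath = info.FullName,
        Size = info is FileInfo file ? file.Length : (long?)null,
        Attributes = info.Attributes
    };
}
Serializing this with any JSON library is trivial, and you decide explicitly when (and if) the file system gets touched to fill it.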

Tell me what you think.

This post starts from a simple question: how do I start a task with a timeout? You go to StackOverflow, of course, and find this answer: Asynchronously wait for Task<T> to complete with timeout. It's an elegant solution: also start a Task.Delay and continue when either task completes. However, in order to cancel the initial operation, one needs to pass a cancellation token to the original task and handle it manually, polluting the entire business code with cancellation logic. This might be OK, yet are there alternatives?
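
For reference, the gist of that StackOverflow approach looks something like the following sketch (the extension method name and exact shape are mine, not taken verbatim from the answer):
public static class TaskExtensions
{
    public static async Task<TResult> WithTimeout<TResult>(this Task<TResult> task, TimeSpan timeout, CancellationToken token = default)
    {
        using (var timeoutSource = CancellationTokenSource.CreateLinkedTokenSource(token))
        {
            var delay = Task.Delay(timeout, timeoutSource.Token);
            var completed = await Task.WhenAny(task, delay);
            if (completed == delay)
            {
                throw new TimeoutException("The operation has timed out.");
            }
            // stop the delay timer, then propagate the result (or exception) of the original task
            timeoutSource.Cancel();
            return await task;
        }
    }
}
Note that when the timeout hits, the original task keeps running in the background; stopping it is exactly the problem discussed below.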

But, isn't there the Task.Run(action) method that also accepts a CancellationToken? Yes, there is, and if you thought this runs an action until you cancel it, think again. Here is what Task.Run says it does: "Queues the specified work to run on the thread pool and returns a Task object that represents that work. A cancellation token allows the work to be cancelled." and if you scroll down to Remarks, here is what it actually does: "If cancellation is requested before the task begins execution, the task does not execute. Instead it is set to the Canceled state and throws a TaskCanceledException exception". You read that right: the token is only taken into account when the task starts running, not while it is actually executing.
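
A minimal sketch of my own that demonstrates this behavior:
var cts = new CancellationTokenSource();
var task = Task.Run(() =>
{
    // this delegate knows nothing about the token, so nothing can interrupt it
    for (var i = 0; i < 10; i++)
    {
        Thread.Sleep(1000);
        Console.WriteLine($"still working... {i}");
    }
}, cts.Token);

Thread.Sleep(1500);   // make sure the task has actually started
cts.Cancel();         // too late: the loop happily runs to completion
task.Wait();          // completes normally, no TaskCanceledException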

Surely, then, there must be a way to cancel a running Task. How about Task.Dispose()? Dispose throws a funny exception if you try it: "System.InvalidOperationException: 'A task may only be disposed if it is in a completion state (RanToCompletion, Faulted or Canceled).'". In normal speech, it means "Fuck you!". If you think about it, how would you abort a task execution? What if it does nasty things, leaves resources occupied, has to clean up after it? The .NET team took the safe path and refused to give you an out of the box unsafe cancelling mechanism.

So, what is the solution? The recommended one is that you pass the token to all methods that can be cancelled and then check inside whether cancellation was requested. Of course, this only works if
  1. you control what the task does
  2. you can split the operation into small chunks that are either executed sequentially or in a loop, so you can interrupt their flow
If you have something like an external process that is being executed, or a long running operation, you are almost out of luck. Why almost? Well, CancellationTokenSource and CancellationToken do not expose events, but the token exposes a "wait handle" that you can wait on synchronously. And here it gets funky. Check out an example of a method that executes some long running action and can react to the token being cancelled:
/// <summary>
/// Executes the long running action and cancels it when needed
/// </summary>
/// <param name="token"></param>
private void LongRunningAction(CancellationToken token)
{
    // instantiate a container and keep its reference
    var container = new IdentificationContainer();
    Task.Run(() =>
    {
        // wait until the token gets cancelled on another thread
        token.WaitHandle.WaitOne();
        // this will use the information in the container to kill the action
        // (presumably by interrupting external processes or sending some kill signal)
        KillLongRunningAction(container);
    });
    // this executes the action and populates the identification container if needed
    RunLongRunningAction(container);
}
This introduces some other issues, like what happens to the monitoring task if you never cancel the token or dispose of the cancellation source, but that's a bit too deep.

In the code above we get a sort of a solution if we can control the code and we can actually cancel things gracefully inside of it. But what if I can't (or won't)? Can I get something that does what I wanted Task.Run to do: execute something and, when I cancel it, stop it from executing?

And the answer, using what we learned above, is yes, but as explained at the beginning, it may have effects like resource leaks. Here it is:
/// <summary>
/// Run an action and kill it when canceling the token
/// </summary>
/// <param name="action">The action to execute</param>
/// <param name="token">The token</param>
/// <param name="waitForGracefulTermination">If set, the task will be killed with delay so as to allow the action to end gracefully</param>
private static Task RunCancellable(Action action, CancellationToken token, TimeSpan? waitForGracefulTermination = null)
{
    // we need a thread, because Tasks cannot be forcefully stopped
    var thread = new Thread(new ThreadStart(action));
    // we hold the reference to the task so we can check its state
    Task task = null;
    task = Task.Run(() =>
    {
        // task monitoring the token
        Task.Run(() =>
        {
            // wait for the token to be canceled
            token.WaitHandle.WaitOne();
            // if we wanted graceful termination we wait
            // in this case, action needs to know about the token as well and handle the cancellation itself
            if (waitForGracefulTermination != null)
            {
                Thread.Sleep(waitForGracefulTermination.Value);
            }
            // if the task has not ended, we kill the thread
            if (!task.IsCompleted)
            {
                thread.Abort();
            }
        });
        // simply start the thread (and the action)
        thread.Start();
        // and wait for it to end so we return to the current thread
        thread.Join();
        // throw exception if the token was canceled
        // this will not be reached unless the thread completes or is aborted
        token.ThrowIfCancellationRequested();
    }, token);
    return task;
}

As you can see, the solution is to run the action on a thread and then manually kill the thread. This means that any control of where and how the action is executed is wrested from the default TaskScheduler and given to you. Also, in order to force the stopping of the task, you use Thread.Abort, which may have nasty side effects. Microsoft's documentation explicitly warns against using it and, in .NET Core, Thread.Abort is not supported at all: it throws PlatformNotSupportedException at runtime.

Bummer! .NET Core doesn't want you to kill threads. However, if you are really determined, there is a way :) Use ThreadEx.Abort(thread);


Bonus code: How do you get the cancellation token if you have the task?
var token = new TaskCanceledException(task).CancellationToken;
It might not help too much, especially if you want to use it inside the task itself, but it might help clean up the code.

Conclusion


Just like async/await, using the provided cancellation token method will only pollute your code to little effect. However, if you want a common interface for the purpose, use RunCancellable instead of Task.Run and handle the token manually whenever you feel resources have been allocated and need to be cleaned up first.
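
Usage would look something like this (a sketch; SomeLongRunningExternalCall is a placeholder for your own blocking operation):
var cts = new CancellationTokenSource();
var task = RunCancellable(() => SomeLongRunningExternalCall(), cts.Token, TimeSpan.FromSeconds(5));

// later, from another thread, a timer or a timeout handler
cts.Cancel();

try
{
    task.Wait();
}
catch (AggregateException ex) when (ex.InnerException is OperationCanceledException)
{
    // the thread was aborted and the task reports cancellation
}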

This should be a simple question with a simple answer, but if you google for Angular Material tooltip styling or length or width you get different answers that are not always complete or even correct. The problem: you have something like <a matTooltip="some message" matTooltipClass="myTooltipClass" ... as per the examples easy to find. However, it doesn't seem to be working. The styling for myTooltipClass does not apply. Here are some points to check off while searching for the problem:
  1. Check whether myTooltipClass is defined in the component or the global CSS file. It should either be in the global CSS file (so it applies to everything) or your component should declare encapsulation: ViewEncapsulation.None
  2. Check that the class declaration is specific enough. DO NOT use !important to fix this, although it would work. Try something like this: mat-tooltip-component .mat-tooltip.myTooltipClass {...}

To see if the class is applied, set the background-color property to red or something. If the class applies, you managed to define it correctly. If it applies partially, it's not specific enough. To change the width, use max-width. To make the tooltip wrap, use white-space: normal; and word-wrap: break-word;

As a reference, this is how the HTML looks for a rendered tooltip:
<div class="cdk-overlay-container">
  <div class="cdk-overlay-connected-position-bounding-box" dir="ltr" style="top: 0px; left: 0px; height: 100%; width: 100%;">
    <div id="cdk-overlay-1" class="cdk-overlay-pane mat-tooltip-panel" style="pointer-events: auto; top: 8px; left: 417.625px;">
      <mat-tooltip-component aria-hidden="true" class="ng-tns-c34-15 ng-star-inserted" style="zoom: 1;">
        <div class="mat-tooltip ng-trigger ng-trigger-state" style="transform-origin: left center; transform: scale(1);">Info about the action</div>
      </mat-tooltip-component>
    </div>
  </div>
</div>

I've stumbled on an article called Create A Dark/Light Mode Switch with CSS Variables which explains how to style your site with CSS variables which you can then change for various "themes". Her solution is to use a little Javascript - and let's be honest, there is nothing wrong with that, everything is using Javascript these days, but what if we mix this idea with "the CSS checkbox hack" as explained in Creating useful interactive elements using only CSS, or "The Checkbox Hack"? Then we could have a controllable theme in our website without any Javascript.

Just in case these sites disappear, here is the gist of it:
  1. Define your colors in the root of the document with something like this:
    :root {
    --primary-color: #302AE6;
    --secondary-color: #536390;
    --font-color: #424242;
    --bg-color: #fff;
    --heading-color: #292922;
    }
  2. Define your themes based on attributes, like this:
    [data-theme="dark"] {
    --primary-color: #9A97F3;
    --secondary-color: #818cab;
    --font-color: #e1e1ff;
    --bg-color: #161625;
    --heading-color: #818cab;
    }
  3. Use the colors in your web site like this:
    body {
    background-color: var(--bg-color);
    color: var(--font-color);

    /*other styles*/
    .....
    }
  4. Change theme by setting the attribute data-theme to 'dark' or 'light'
Now, to use the CSS checkbox hack, you need to do the following:
  1. Wrap all your site in an element, a div or something
  2. Add a checkbox input on the same level as the wrapper and styled to be invisible
  3. Add a label with a for attribute having the value the id of the checkbox
  4. Add in the label whatever elements you want to trigger the change (button, fake checkbox, etc)
  5. Change the theme CSS to work not with [data-theme="dark"], but with #hiddenCheckbox:checked ~ #wrapper, which means "the element with id wrapper that is on the same level as the checkbox with id hiddenCheckbox"
This means that whenever you click on the label, you toggle the hidden checkbox, which in turn changes the CSS that gets applied to the entire wrapper element.

And here is a CodePen to prove it:

See the Pen rbzPmj by Siderite (@siderite) on CodePen.

Sometimes you get an annoying error after updating your .NET Framework or some of the packages or libraries in your project: "Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: <name-of-nuget-package>".

The problem stems from the fact that NuGet packages have variants for different .NET flavors and in your project they are "hinted" at by the <HintPath> child element in the <Reference> elements in your .csproj file. Somehow, the hint still points to a different variant than the one you need and that's why you get this error. The explanation in length can be found in this great post: Why, when and how to reinstall NuGet packages after upgrading a project, but just in case his blog disappears (as so many great ones did in the past), here is the gist of the solution:

In Visual Studio go to Tools → NuGet Package Manager → Package Manager Console and type:
Update-Package <name-of-nuget-package> -Reinstall -ProjectName <name-of-project>

To add some value to Derriey's post: you can solve all the similar issues in your solution at once. Copy the entire list of errors from all projects by going to the Output pane, selecting them all and right-clicking Copy, then run search and replace in your favorite editor with this regular expression:
^.*?Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information.  Packages affected: ((?:[^,\s]+(?:, )?)+)\t([^\t]+)\t\t\d+\t\t$
and replacement pattern
Update-Package $1 -Reinstall -ProjectName $2

Then make sure there is only one project on each line, copy paste the result into the Package Manager Console window and the entire solution will get fixed.

Example: Error Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected: Microsoft.Extensions.Configuration, Serilog MyProject.Common 0

Turns into: Update-Package Microsoft.Extensions.Configuration, Serilog -Reinstall -ProjectName MyProject.Common

Since Update-Package only supports one package and regex replace doesn't have a syntax for multiple captures in the same group, you will have to manually turn this into:
Update-Package Microsoft.Extensions.Configuration -Reinstall -ProjectName MyProject.Common
Update-Package Serilog -Reinstall -ProjectName MyProject.Common

Copy paste the result and the two packages will be reinstalled in the affected project of your solution.

I spent hours trying to manually fix the assembly redirects in a web.config, only to give up and use the default Add-BindingRedirect in the NuGet package manager. And it worked! I have no idea if this won't break something else, but I got it from Rick Strahl's blog and it worked for me. More in his article. Thanks, Rick!

One thing to remember is that you first have to delete the dependentAssembly elements from the .config file in order for the command to work.

Update: there is an issue related to NuGet packages. I was recommending to run MsBuild with the command line option /t:Restore;Rebuild which should restore packages and rebuild the solution. However, as detailed here, the MSBuild Restore option only restores packages defined in the project PackageReference elements, not the ones in packages.config. In order to restore those, you still need to manually run nuget restore. Where do you get nuget.exe from? Obviously not from the Visual Studio Build Tools... but from the NuGet Gallery.

Now, for the original post.

So I had this medium size Visual Studio solution, in .NET Framework 4.6.1, containing a bunch of projects, including a Wix setup and a web API and I wanted to build it on a machine that did not have Visual Studio, for Continuous Deployment reasons. Since Visual Studio uses MSBuild to compile, I thought it would be a five minute job. Boy, was I wrong!

First of all, the command to build a solution is clear:
MSBuild You.sln
and since it was a .NET Framework project, it made sense to use the MSBuild.exe from C:\Windows\Microsoft.NET\Framework64\v4.0.30319. Well, enter the first error: CS1617: Invalid option 'latest' for /langversion; must be ISO-1, ISO-2, 3, 4, 5 or Default. This is caused by the project using C# version 7, which is NOT supported by the MSBuild version in the .NET Framework; you need MSBuild version 15, which comes with Visual Studio. I didn't want to install Visual Studio.

The solution is to install Visual Studio Build Tools, preferably using the Visual Studio Installer. Now, the correct MSBuild version is found at C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe. Note that it is located in a Visual Studio folder, not the MSBuild folder, which is also there.

An issue that occurred here was that messages saying the target framework is 4.6.1 while the installed framework is 4.7.2, which were previously warnings, now became errors. The solution is to install the .NET Framework 4.6.1 SDK or to upgrade all your projects to 4.7.2. Warning: you need the Developer Pack, not just the Runtime.

Second error: error MSB4036: The "GetReferenceNearestTargetFrameworkTask" task was not found. The problem? The NuGet package manager and/or the NuGet targets and build tasks are not installed. In order to install them, run Visual Studio Installer and look under the Individual Components tab, in the Code Tools section. See this Stack Overflow question for more details.

Next problem: The type 'IDisposable' is defined in an assembly that is not referenced. You must add a reference to assembly 'netstandard'. This is a weird one, since the compilation in Visual Studio had no issues whatsoever. This is related to the framework version, though, as .NET 4.6 uses netstandard 1.0 and 4.7 uses 2.0. The solution is to add a <Reference Include="netstandard" /> tag in your .csproj (tip: Search and replace <Reference Include="System" /> with <Reference Include="System" /><Reference Include="netstandard" /> in all your .csproj files)

Another problem similar to the one above is Predefined type 'System.ValueTuple`2' is not defined or imported and that is because ValueTuple is not in .NET Framework 4.6.2 or earlier and you need to install the System.ValueTuple package in your project (using the NuGet package manager, more details here)

For both problems above as well as for the issue with the framework conflict further up a possible solution is to upgrade all projects to .NET 4.7+ or whatever is latest.

Next, targets errors: error MSB4226: The imported project "Microsoft.WebApplication.targets" was not found. and error MSB4057: The target "_WPPCopyWebApplication" does not exist in the project. This is because even if Visual Studio Build Tools is installed, the targets for it are not. The solution is to copy the folders Web and WebApplications from C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\Microsoft\VisualStudio\v15.0 to "\\BuildMachine\C$\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Microsoft\VisualStudio\v15.0". You may need to copy the NuGet targets as well; I don't remember if that is what I did or whether the NuGet package manager installation solved it.

Last but not least, Wix errors. Obviously, for the Setup project to compile you need to install the Wix Toolset. However, you may still run into this error: error MSB3073: The command "heat dir ..blah blah blah" exited with code 9009. If you were trying to execute the build from a command prompt and you installed Wix while it was open, you need to open another one in order to refresh the changes the installer made to your PATH environment variable.

Finally, in order to compile for a specific platform and configuration, use the flags: /property:Configuration=Release /property:Platform=x64.

Then just run the line:
nuget restore
"c:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe" /t:Restore;Rebuild Your.sln /property:Configuration=Release /property:Platform=x64

Hope it helps.

So you have a product that has an MSI installer and installation works perfectly. But you want to install the product using the /quiet flag, in order to automate the installation or for whatever other reason, and now it fails all the time. First of all, do the normal thing when you have an MSI installation error:
  • Install the package by manually running msiexec and add verbose logging: msiexec /i My.Setup.msi /L*VX theinstall.log ... (your extra parameters, including /quiet here)
  • Open theinstall.log file and look for the string "Value 3" which is the line where the install actually failed
  • See if you can see what caused the fail

My setup would take properties from the registry and only need the user password to be supplied, so I used it like this:
msiexec /i My.Setup.msi /L*VX theinstall.log /quiet USERPASSWORD="thepassword"
For me, the reason for the failure was very unclear: "CreateUser returned actual error code 1603". I had the user, I had the password, what was going on?

The solution was to add ALL the properties needed. With a silent installation, it seems some of the actions in the UI are ignored. It's not just quiet, it's skipping things. In order to get a list of all possible properties, open the same install log and look for "SecureCustomProperties", which should list all the properties that you can set from the command line, separated by semicolons. I am sure I didn't need to set them all in order for it to work. In fact I didn't. I only used the ones used in the UI.

Hope it helps.

There is this old joke about how having a senior developer around is like having a little child sitting next to you. "How do I do that?", you ask. And the SD asks "Why?". "Well, the boss asked me to." "But why?" "The client wants it." "Why?" "Because he needs this other thing." "Why?" I am afraid the joke is very true. Usually, at the end of the exchange you get to the real need at the base of the request and understand that it is the thing that must be fulfilled, not the request as it came through several filters, each adding to or removing from the real purpose of the task. And if there is something I want to impart from my extensive experience as a software developer it is that you must always start from the core need and go from there.

However, I am not here to discuss how to write code, for once, but on how to organize your team (or yourself as a team of one) from this need driven perspective. And please don't call it NDD and read it NEED or whatever, because this is just a general principle that can help you in many domains. So let's analyse Agile. It's been done to death, so what I am trying to do here is not summarize what others have done, but to simplify things until they don't even need a name anymore.

First, what is Agile Development? Wikipedia says: "Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). It advocates adaptive planning, evolutionary development, empirical knowledge, and continual improvement, and it encourages rapid and flexible response to change. The term Agile was popularized, in this context, by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban."

So Agile is not Scrum, but Scrum is underpinned by Agile principles and values. That is important, because many shops want to do Agile and so they do Scrum or Kanban and they either try to follow them to the letter (purists) or adapt them to their requirements (which is more in line with Agile principles, but the implementations usually suffer). One wonders how come there are not a lot of different methodologies out there, just like there are software patterns or programming languages. The answer is that no one starts off with understanding their need for Agile and in fact many don't even know or care what it is. They do Scrum so that they get the Agile badge, they tell everyone they "do Agile" and pompously ask their candidates if they "know Agile".

But if someone asks "Why do you want to do Agile?" the answers are not so easy to get. Of course there are advantages in the approach, that is why it's so popular, but you don't get advantages for your collection, they need to drive you to some kind of goal. One might say that for every process you need metrics in order to analyse your progress. Scrum does that. You don't need metrics for the efficacy of the metric system, do you? But still you need some way of gauging the usefulness of your methodology itself, otherwise we would still be using imperial units.

I am more accustomed to Scrum, but I don't want to analyse it, instead I want to measure the usefulness of Agile as a concept by asking the very simple question "Why?". As we've seen, its attributes include:
  • self organizing teams
  • cross functional teams
  • collaborative effort between teams and customers
  • adaptive planning
  • evolutionary development
  • empirical knowledge
  • continual improvement
  • rapid and flexible response to change

Why self organizing teams? That's the first and one of the most important and complex aspects of Agile. If the team self organizes, then everyone in it shares responsibility. There is no need for micromanagement from above, the team acts as a unit, you get it as a neat package that you don't care about. In theory when a team is inefficient from the organization's point of view, you should fire the team, not individual people. If one member is not performing as expected, then it's the responsibility of the team to fix them or get rid of them. Also theoretically, an Agile team doesn't need a manager.

But in practice I haven't seen this implemented except in small startup teams. There is always a manager, a director or more of them. There is always micromanagement. There is always some tendril of HR or organizational graph that affects people individually. Why is that? Because teams are cross functional, too. A developer doesn't need or want in any way to have to measure the work efficiency of their colleagues, have to tell them how to do things or get to the point where they have to push for firing them. A developer doesn't want to administrate the team. So what is a manager in the Agile world? That's right, a member of the team. If devs would not have a manager, they would probably hire one, as part of the self organization of the team. Moreover, for teams inside larger organizations like corporations, the "customer" is the corporation, so the manager role is often combined with the one for "customer representative" or "product owner" or "producer" or something like that. They're the people you go to in order to understand what it is you have to do.

And since I've started talking about cross functional teams, this means that Agile doesn't need or indeed recommend that people be interchangeable, having the same skill sets. Whenever you hear someone say "everyone should know everything" that's not Agile. It's just a sign that someone from above wants to micromanage the team without having to know anyone in it or ever decide the direction it should work in. It's the sign of a top-down approach that is ultimately inefficient from both directions: the team feels pressure it doesn't need while being restricted in what it can do and the management has to perform tasks that they don't need to do. I haven't seen one team of people treated as interchangeable units be efficient, nor have I heard of any. So why cross functional teams? Because people are individual beings and are different and, for every skill they have, they put in effort to first reach a basic level of knowledge, then more effort to become good or even expert at them. An Agile team needs to be flexible enough to accommodate for any type of person, within reason.

As an aside, think of the skills of a challenged person. The legislation of many countries requires now that software be accessible to a wide variety of disabled people. So whenever you hear that people should be versed in all aspects of the team's business ask them for Braille courses.

Cross functional also includes the skill of knowing what the customer wants, whether they are a member of your organization or someone sent by the client. It's one of the most necessary roles to be filled in a team. If a team were a living thing, the product owner would be the eyes and ears. You can be strong, fast, deadly, but it doesn't help if you can't find your prey.

So why collaborative effort between team and customers? Because you need to know why you do what you do. This entire post is dedicated to the question of why, so I won't press this. Just remember that if the team doesn't know why something is required or necessary, then it is not yet fit to perform that task. People are going to hate me for this, but we've already established the need to understand the client and that in large organizations the customer is the organization, so the logical conclusion is clear: office politics, psychopathic displays of power, idiotic decisions coming from up high are all part of the requirements. There is a caveat, though: the need of the customer needs to be clear, not the want. If the liaison to the customer is withholding or unaware of the real reasons something needs to be done, then the team suffers.

As an aside to all manager or high corporate types out there, it's way better to explain to a team that they have to scrap a project because you hate the guts of the person who proposed it, than to invent business reasons that are clearly bullshit for destroying months of loving work. In the first case you honestly admit you are an asshole, but in the second you prove you are one.

Now, what's this about adaptive planning? If you already know what is needed, someone should let you do your job in peace. You come to them when it's done. That was the fallacy of the Waterfall system. That is also why Waterfall still works. In a situation where needs are clear and do not change, there is no need for agility. Unfortunately, these situations are rare, especially since I've included in the list of customer needs the legacy of us being descended from apes. That means adapting quickly, without regret, changing direction, sometimes turning back completely, abandoning projects, starting new ones that you recommended starting years ago and were laughed out of the room for, and so on.

It's not only that. Adapting to your client's need is something that you would have to do anyway, in any realistic scenario. But it also means that you have to adapt to the technological changes in the world. Maybe you have to rewrite your code with another technology or explore new ones that just appeared out of thin air. Adaptation means that a big part of your job as an Agile team is exploration. This is emphasized by the empirical knowledge attribute above. These two go hand in hand. You adapt by understanding and for this you must be in the loop, you need to know things. Well, one of the members of the team needs to, at least. Adapting means "rapid and flexible response to change".

Therefore, whenever someone asks you if you are Agile or demands that you are, you can tell them about all the new tech that is rumored to come out this year and how you can barely contain your enthusiasm to be working for someone who enables exploration in the true spirit of Agile. Preferably after you've signed a contract, maybe.

But I talked about adapting to new requirements and changes in the environment, not adaptive planning. Adaptive planning is planning for all the things above. In other words, when you start a task, plan for the possibility of it changing, having to be abandoned or having to be redone in a different shape. And here is a point of much contention, because you can do this at any level. If you do it whenever you have to change the color of a button, you will overestimate all your work. If you do it at the level of the entire enterprise suite of products, you might as well do Waterfall. In any case, it is the entire team that needs to know and decide how they plan for drastic changes.

Yet there are two ways of reacting to change: planned, or chaotic panic. Adaptive planning refers to planning for change, not preparing for working in complete chaos. That is why Scrum, as an example, splits planning into small periods of time (2-4 weeks) called sprints, for which the work planned needs to be finished. If anything drastic (like a task losing its relevance) happens, the sprint is aborted. The sprints themselves can be in relative chaos, but inside one, planning and order prevail. It doesn't mean planning for an irate boss man to come and tell you every day to change this and that. It means having a list of tasks that you can edit, whether by removing or adding tasks, rearranging their priority, or even adding tasks like "how to deal with this insane man coming into the office and spouting obscenities at me", then executing them in order. In no way does it mean that you do things by ear or plan for something and then ignore it.

Two aspects remain to be discussed: evolutionary development and continuous improvement. Both strongly imply awareness of the development processes through which the team goes. It's the team equivalent of introspection, commonly referred to as retrospective. What did we do? How does it compare with what we had planned? What does it mean for the future? How can we do better? And the Why of it is obvious: you've got a good thing, how can you get more of it? Yet this is the point that is hardest to put in a box. Various methodologies attempt to introduce metrics to measure progress and performance. Then, with the function of the team properly computed and numbers assigned to its yield, one can scientifically tweak the processes in order to get the most out of the resources the team has. This is a wonderful concept and I agree with it in spirit, but in practice, frankly, I think it's bollocks. It's pure shit. It's circular logic.

Here we are, discussing a beautiful philosophy of doing work for maximum benefit by letting a team of very diverse people organize themselves towards a common goal, being ready for anything, adapting to everything, pushing through no matter what by leveraging their individual strengths. And then we come and say "Wait, I can take all this, name it, measure it, put it into equations and then tell you how to improve it!". Well, why didn't you do that from the start, then? Why bother with all the self expression and being able to adapt to anything new? If all is old under the sun, why do Agile at all?!

I believe this is the part where Agile methodologies fail utterly and miserably and can't even admit to themselves that they have. I agree that progress needs to be visible and the ability to compare it with other instances is essential, but from there to measuring it numerically is a long way. Forget the usual arguments against measuring team success: the team composition is not fixed, the type of work is different, team dynamics means people behave differently, personal and local events, and so on. It's bigger and simpler than that. I submit that since the entire concept of Agile rests on the principles of flexibility, the method of gauging success should also be flexible, and especially the way the team processes are tweaked for the future.

And I have proof. In every company I've ever been in or heard of, the local Agile methodology didn't quite work in various ways, and that was apparent when people laughed at or disrespected that part to the point of ignoring the rules in place. And the common bit that didn't work for all of them is the retrospective. The idea of asking "what problems did we face?" and then immediately finding and enforcing solutions with "what can we do about it next time?" has become such a mantra that it is impossible to ask these questions separately. It's become reflex (to the horror of wives everywhere) when you hear "there is a problem" to ask immediately "what can we do to fix it?". There is an intermediate question that must be inserted there: "Is this really a problem or do I just have to be aware it happened? Can we ignore it?". This is another topic, though.

A short summary would be:
  • responsibility is shared between all the members of the team and decisions should be taken as a team
  • any member of the team is valued for their contribution towards reaching the team goal
  • the manager is not your boss, it's just the role of a member of the team
  • the Scrum master or Kanban master or whoever does the accounting in whatever methodology you use is still a member of the team
  • having a member of the team understanding the needs of the customer is essential; they are the people you go to ask "Why?" all the time
  • adaptation needs to be written into the DNA of the methodology; the team must plan to change its plans, but still do things in an orderly fashion
  • experience should guide the team to define progress and improve itself
  • transparency of the team goal is necessary for the team's success
  • there is no improvement of the development method (or the code) without exploration and the intentional participation of the entire team
  • never be afraid to ask why; if you work in an environment where you are afraid, consider changing it

Let me tell you the typical story of the development team. First there is chaos, someone decides they are a quick and dirty team and they can do anything, fast, cutting all corners into a straight line to hell. Mistakes are made, blame flies around, people are fired, emotions get high, results go nowhere. There is need for order, all cry out, we need to use the holy grail of software development and the greatest human invention since fire (that week, at least). And then a specialist is called. They either become part of the team, with the role of teaching and enforcing rules, or just teach a few courses, get their money and leave before hapless fools try to put into practice whatever it is they thought they understood. The result is the same: developers first like it, then start grumbling about the indecency of having to fill out documentation before any code is written, or updating it, or adding tasks in whatever software is used (or Post-Its on a whiteboard, if you are a purist and still haven't figured out developers can't use a pen to write). So either morale plummets and the operations continue with diminishing results, or the grumbling becomes a rallying cry to make a change. And the change is... chaos again.

In short, just like a revolutionary South American country, they can't decide if they want freedom or dictatorship, because both options have big damn flaws and anything in between is sucking their life away.

But why?

It's because even methodologies are subject to change, to human whim, to altering conditions. If you look at the beginning of this enormous post, it all started with the main goal and meandered from that, just like an Agile methodology in a real firm does. What is the goal of the team? And don't you look at your manager to tell you. As a team, responsibility is shared, remember? If the purpose is to turn perfectly enthusiastic software developers into soulless drones... you should run away from there. But if it is not, think about the things you are doing. Are they helping or are they not helping? This week at least.

In conclusion (or retrospective?), I believe Agile is a great idea, but its underpinning principles of always being aware of the situation and adapting to it are a more general and useful concept to remember and live by. Truth and knowledge come before adapting, and many software shops fail there, so the methodology is irrelevant. Then the method of work chosen should reflect the flexibility it enables. I don't believe in a pure anything, and certainly not pure Scrum, pure Kanban, pure Agile, but whatever you cook up, it should work towards the real goal of the team, as known and acknowledged by everyone in it. What's your function in life?

SonarSource static code analysis rule RSPEC-3906 states:
Delegate event handlers (i.e. delegates used as type of an event) should have a very specific signature:
  • Return type void.
  • First argument of type System.Object and named 'sender'.
  • Second argument of type System.EventArgs (or any derived type) and is named 'e'.


The problem was that I was getting the warning on a simple event declared as EventHandler<TEventArgs>. Going to its source code page revealed the reason in a comment: // Removed TEventArgs constraint post-.NET 4.
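
To illustrate (the class and event names below are my own invention), the gist is that post-.NET 4 nothing stops you from using any type argument with EventHandler&lt;T&gt;, so the analyzer complains when that type does not derive from EventArgs:

using System;

public class Publisher
{
    // Non-compliant for RSPEC-3906: the payload type does not derive from EventArgs,
    // which the compiler no longer enforces
    public event EventHandler<string> MessageReceived;

    // Compliant: the second argument of the handler is now an EventArgs-derived type
    public event EventHandler<MessageReceivedEventArgs> MessageArrived;
}

public class MessageReceivedEventArgs : EventArgs
{
    public MessageReceivedEventArgs(string message) { Message = message; }
    public string Message { get; }
}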

This is another post discussing a static analysis rule that made me learn something new. SonarSource Rule 3898 says: If you're using a struct, it is likely because you're interested in performance. But by failing to implement IEquatable<T> you're losing performance when comparisons are made because without IEquatable<T>, boxing and reflection are used to make comparisons.

There is a StackOverflow entry that discusses just that and the answer to this particular problem is not actually the accepted one. In pure StackOverflow fashion I will quote the relevant bit of the answer, just in case the site goes offline in the future: I'm amazed that the most important reason is not mentioned here. IEquatable<> was introduced mainly for structs for two reasons:
  1. For value types (read structs) the non-generic Equals(object) requires boxing. IEquatable<> lets a structure implement a strongly typed Equals method so that no boxing is required.
  2. For structs, the default implementation of Object.Equals(Object) (which is the overridden version in System.ValueType) performs a value equality check by using reflection to compare the values of every field in the type. When an implementer overrides the virtual Equals method in a struct, the purpose is to provide a more efficient means of performing the value equality check and optionally to base the comparison on some subset of the struct's field or properties.

I thought this was worth mentioning, for those performance critical struct equality scenarios.
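
For reference, here is a minimal sketch of the pattern the rule asks for; the struct and its fields are made up, but the idea is a strongly typed Equals plus a consistent object override:

using System;

public struct Point : IEquatable<Point>
{
    public Point(int x, int y) { X = x; Y = y; }
    public int X { get; }
    public int Y { get; }

    // Strongly typed equality: no boxing, no reflection
    public bool Equals(Point other) => X == other.X && Y == other.Y;

    // Keep the untyped overload consistent with the typed one
    public override bool Equals(object obj) => obj is Point other && Equals(other);

    public override int GetHashCode() => (X * 397) ^ Y;

    public static bool operator ==(Point left, Point right) => left.Equals(right);
    public static bool operator !=(Point left, Point right) => !left.Equals(right);
}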

Intro


Visual Studio has a very interesting feature called Rule Sets. You basically create a file where you declare which warnings from analyzers will be ignored, or displayed as info, warning or error. With the built-in code analysis, but also with the help of a plethora of extensions and NuGet packages, this can be a very powerful tool. I am using VS2017 Professional for this post.

Create a rule set


Let's start with creating a new project (a .NET Framework console app) called Rulesets, which will create the standard Program.cs file and a solution for the project. Next, right click the solution in Solution Explorer and go to Add → New Item, go to the General category and select Code Analysis Rule Set. Name it whatever you want to name it, I will call it default.ruleset, then save it.



At this point you should be in a rule set editor, showing you a list of rules grouped by code analyzer id. Press F4 or go to the little wrench icon so that the Properties window is open, then give your ruleset a name (Default Rule Set) and save the file (ctrl-S).



You can create as many of these files as you want, they will be saved in Solution Items or as a file in a project (I recommend the former), and associate them with any number of projects. Let's assign the new rule set to our project: go to the solution properties, select Common Properties → Code Analysis Settings, then select as many projects as you want in the list on the right. Then click on the little dropdown arrow and you should be prompted with a list of possible rule sets, including Default Rule Set. Select it.



Obviously, you can just choose a Microsoft included rule set instead, but those do not take into account your own extensions/packages.

Note: in order to start an incremental process, let's say starting with the minimum recommended settings from Microsoft and then adding stuff to it, use the Include element in a .ruleset file. The files coming by default with Visual Studio can be found at %ProgramFiles(x86)%/Microsoft Visual Studio/2017/Professional/Team Tools/Static Analysis Tools/Rule Sets. Example:
<Include Path="minimumrecommendedrules.ruleset" Action="Default" />

From the Visual Studio GUI you can click on the folder icon in the rule set editor top bar to include other sets and the wrench icon to open the settings.

This helps a lot with having small variations between your projects. For example a tests project might have different settings. Or the data access layer project might have auto generated files that don't respect your coding standards. You can just create a new ruleset that includes the default, then disables some of the rules, like mandatory class documentation.
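
As a sketch, such a derived rule set could look like the following. The included file is the default.ruleset from above; the rule ID is just an example (SA1600 is StyleCop's "elements should be documented" rule):

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Data Access Rule Set" Description="Default rules, minus mandatory documentation" ToolsVersion="15.0">
  <Include Path="default.ruleset" Action="Default" />
  <Rules AnalyzerId="StyleCop.Analyzers" RuleNamespace="StyleCop.Analyzers">
    <!-- turn off mandatory documentation for this project only -->
    <Rule Id="SA1600" Action="None" />
  </Rules>
</RuleSet>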

Note that the rule set editor is not perfect. It will only show the rules as defined in the current file, ignoring the included sets. That is why, if the default for a rule is None, you set it to Warning in your base set, and you then include that set in another set where you set the rule back to None, the change will not be saved correctly. Some manual checks are required to ensure correctness.

A list of analyzers


Now, I've come across a number of Visual Studio extensions that use this system. Here is a list of the ones I thought were good enough, free and useful.
Visual Studio extensions:
  • Microsoft Code Analysis 2017 - Live code analysis rules and code fixes addressing API design, performance, security, and best practices for C# and Visual Basic.
  • Security Code Scan - Detects various security vulnerability patterns: SQL Injection, Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), XML eXternal Entity Injection (XXE), etc.
  • MetricsAnalyzer - analyzer extension to check if your code follows metrics rules
  • Moq.Analyzers - Visual Studio extension that helps to write unit tests using Moq mocking library by highlighting typical errors and suggesting quick fixes
  • Code Cracker for C# - analyzer library for C# that uses Roslyn to produce refactorings, code analysis, and other niceties
  • Visual Studio Intellicode - this is interesting in the sense that it uses AI to improve your code and intellisense
  • Roslynator 2017 - A collection of 500+ analyzers, refactorings and fixes for C#, powered by Roslyn.
  • SonarLint for Visual Studio 2017 - Roslyn based static code analysis: Find and instantly fix nasty bugs and code smells in C#, VB.Net, C, C++ and JS.
  • clean-code-net - Set of C# Roslyn analyzers to improve code correctness
  • CommentCop - Analyzes (mostly) xml comments and provides code fixes. Uses Roslyn C# code analyser.

NuGet packages:
  • StyleCop.Analyzers - there is also a StyleCop extension, but weirdly it does not use the Rule Set system; you just run it manually and it gives you warnings that you can control only through the extension configuration
  • Public API Analyzer - An analyzer for packages with public APIs.
  • UnityEngineAnalyzer - Roslyn Analyzer for Unity3D
  • AsyncAwaitAnalyzer - A set of Roslyn Diagnostic Analyzers and Code Fixes for Async/Await Programming in C#.
  • ef-perf-analyzer - EntityFramework Performance Analyzer
  • Asyncify-CSharp - an analyzer and codefix that allows you to quickly update your code to use the Task Asynchronous Programming model.

The packages above are NuGets you install in your project. Just check out the list from NuGet: NuGet packages containing Analyzer. Many extensions also have a NuGet counterpart. It's your choice whether you want to keep Visual Studio lean and install analyzers on a per-project basis instead. There is one more advantage to using project packages: the analysis pops up at build time, so you can enforce not being able to compile without following the rules in the set.
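
As a rough sketch, wiring an analyzer package and a rule set into a PackageReference-style project looks something like this (the version number and the relative path are placeholders):

<PropertyGroup>
  <!-- point the project at the shared rule set sitting next to the .sln file -->
  <CodeAnalysisRuleSet>..\default.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<ItemGroup>
  <!-- PrivateAssets="all" keeps the analyzer from flowing to projects referencing this one -->
  <PackageReference Include="StyleCop.Analyzers" Version="1.1.118" PrivateAssets="all" />
</ItemGroup>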

Curating your rule sets


So you've learned how to choose a rule set for your projects, how to create your own, but what do you use them for? My suggestion is to work on a complete rule set (or sets) for your entire company and then you can just enforce a coding style without having to manually code review everything.

If you are like me, then you probably installed everything in that list above and then tried it on your project... and you got tens of thousands of warnings and errors. How do you curate a rule set without having to go through every single message? I see several ways of doing this.

The perfect project


Some people/companies have a flagship technical project that they are very proud of. It uses all the latest technologies, it is perfectly written, thoroughly code reviewed by all the members of the team. It can do no wrong. If you have such a project, just enable all possible rules, then disable all the ones that suggest changes to it. In the end you will have a rule set enforcing your coding style, for better or worse.

Start from scratch


The other solution is to start with a new project, then review every message until you get none. Then start coding. An iterative process, for sure, one you will never finish, but it will be good enough after a while and it will also engage your team in technical discussions on how to improve their code, which can't possibly hurt.

Add analyzers one by one


Start with a rule set where all rules are disabled, then start reviewing them one by one, as you either choose to disable them or refactor your code. This solution may be the worst, because it gives excuses to just stop the process midway, but it might be the only one available. Anyway, try to install extensions and packages one by one, too.

Start from the coding standards


Perhaps you have a document describing the coding standards in your team. You might start from it, then look for the rules that enforce it. I think that this will only make you see how woefully inadequate your coding standards are, but it might work.

Other notes


I've had the situation where I created derived rulesets from the default one (using Include) and somehow they ended up with an absolute path for the default ruleset file. It might be an issue with how the editor saves the file.

By default, rules in the ruleset editor are grouped by analyzer ID, but multiple analyzers might manage the same rules, so always manage rules individually, else you will see that enabling or disabling an entire group will change other groups as well and you won't know where you started from.

The ruleset editor is not perfect. One very annoying issue is with ruleset inheritance (it doesn't load the parent rules). One example is that you want to have a general ruleset that does NOT include an analyzer (let's say the XUnit one) and then you want something inheriting the base ruleset to DO include the XUnit analyzer. While you can do it by hand, the ruleset editor will not allow you to make these changes, as the original ruleset will have every XUnit rule disabled, but the editor for the unit test one will not know this. There are a few solutions for this:
  1. Have a very inclusive basic rule set, then remove rules from the others. This is not perfect, as there could be circular needs (one set has one rule and not the other, while the other has them the other way around)
  2. Have a lot of basic rule sets, split by topic. Then manage every rule set that you need as just includes (see the sketch after this list). This works in every situation, but requires you to edit the used rulesets by hand. In the case above you could have a special DoNotUseXunitAnalyzers.ruleset, for example.
  3. A third solution, which I do not recommend, is to never include anything and instead just copy-paste the content of the basic ruleset into every inheriting one. While this works and allows the editor and whatever engine is behind it to work well, it would be a nightmare to maintain.
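
To make option 2 concrete, here is a sketch of a rule set composed purely from includes; the file names are invented for illustration:

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Unit Tests Rule Set" Description="Composed only from topic rule sets" ToolsVersion="15.0">
  <Include Path="general.ruleset" Action="Default" />
  <Include Path="xunit.ruleset" Action="Default" />
</RuleSet>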

Conclusion


I've discussed how to define and control static code analysis in your Visual Studio projects. If nothing of what is available is up to your standards, Roslyn now allows making your own code analyzers in a very simple way. Using code analyzers (and refactorings) can improve productivity, engage the team in technical analysis of standards, enforce some coding standards and help you find hard to detect errors in your code.
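
To give an idea of the shape of a custom analyzer, here is a hedged sketch; the diagnostic ID, the message and the check itself are invented for illustration:

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class NoPublicFieldsAnalyzer : DiagnosticAnalyzer
{
    // made up rule: flag public non-const fields and suggest properties instead
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "DEMO001",
        title: "Avoid public fields",
        messageFormat: "Field '{0}' should be a property",
        category: "Design",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        // look at every field symbol in the compilation
        context.RegisterSymbolAction(AnalyzeField, SymbolKind.Field);
    }

    private static void AnalyzeField(SymbolAnalysisContext context)
    {
        var field = (IFieldSymbol)context.Symbol;
        if (field.DeclaredAccessibility == Accessibility.Public && !field.IsConst)
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, field.Locations[0], field.Name));
        }
    }
}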

Hope it helps.