
  You are writing some code and you find yourself needing to call an async method in your event handler. The event handler, obviously, has a void return type and is not async, so using await in it produces a compile error. I actually had a special class to execute async methods synchronously and I used that one, but I didn't actually need it.

  The solution is extremely simple: just mark the event handler as async.

  You should never write async void methods; use async Task or async Task&lt;T&gt; instead. The exception, apparently, is event handlers. And it kind of makes sense: an event handler is fire-and-forget by design, as the code raising the event never awaits its result.
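
  For example, a minimal sketch with hypothetical method and control names:

private async void SaveButton_Click(object sender, EventArgs e)
{
    try
    {
        // the await now compiles, because the handler is marked async, even though it returns void
        await SaveDataAsync();
    }
    catch (Exception ex)
    {
        // exceptions from async void methods escape to the synchronization context, so guard them here
        ShowError(ex.Message);
    }
}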

  More details here: Tip 1: Async void is for top-level event-handlers only

  And bonus: but what about constructors? They can't be marked as async, they have no return type!

  First, why are you executing code in your constructor? And second, if you absolutely must, you can also create an async void method that you call from the constructor. But the best solution is to make the constructor private and instead use a static async method to create the class, which will execute whatever code you need and then return new YourClass(values returned from async methods).

  That's a very good pattern, regardless of asynchronous methods: if you need to execute code in the constructor, consider hiding it and using a creation method instead.
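
  A minimal sketch of that creation pattern, with hypothetical names:

using System.Threading.Tasks;

public class ReportViewer
{
    private readonly string _data;

    // the constructor stays private and does no real work
    private ReportViewer(string data)
    {
        _data = data;
    }

    // all async initialization happens in a static creation method:
    // var viewer = await ReportViewer.CreateAsync(42);
    public static async Task<ReportViewer> CreateAsync(int reportId)
    {
        var data = await LoadReportDataAsync(reportId);
        return new ReportViewer(data);
    }

    // stand-in for whatever async work the constructor would have needed
    private static Task<string> LoadReportDataAsync(int reportId)
        => Task.FromResult($"report {reportId}");
}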

As you possibly know, I am sending automated Tweets whenever I write a new blog post, containing the URL of the post. And since I've changed the blog engine, I was always seeing the Twitter card associated with the URL show the same image, the blog's default tile. So I made changes to display a separate image for each post, by populating the og:image meta property. It didn't work. The Twitter card now refused to show any image at all. And it was all because the URL I was using was relative to the blog site.

First of all, in order to test how a URL will look in Twitter, use the Twitter Card validator. Second, the Open Graph doesn't specify that the URL in og:image should be absolute, but in practice both Twitter and Facebook expect it to be.

So whenever populating Open Graph meta tags with URLs, make sure they are absolute, not relative.
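
If your blog engine only gives you paths relative to the site, making them absolute is trivial; a minimal sketch in C#, with a hypothetical base address and image path:

// combine the site's base address with the relative image path
var baseUri = new Uri("https://example.com/");
var ogImage = new Uri(baseUri, "posts/my-post/card.png").AbsoluteUri;
// ogImage is now "https://example.com/posts/my-post/card.png", safe to use in og:image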


Recently I found out about custom task schedulers and I wanted to blog about all the wonderful things you can do with them. I also imagined new ways of doing await/async by tweaking task schedulers. After hours of attempts, I've come to the conclusion that custom task schedulers are incompatible with await/async and should not be used. Here is why:

  • a task scheduler is used to execute synchronous code inside tasks while async/await code is already asynchronous
  • while async/await code is transformed by the compiler into a state machine, with the code following an await turned into a continuation scheduled on the captured context (SynchronizationContext.Current or, failing that, TaskScheduler.Current), the state machine itself has nothing to do with a custom task scheduler (see Dissecting the async methods in C#)
  • there are no methods that are both aware of await/async code and a custom task scheduler; by design they are incompatible (see Task.Run vs Task.Factory.StartNew)
  • while a stubborn developer could reproduce the functionality of Task.Run and specify a custom task scheduler, or detect tasks that return tasks and Unwrap them, there are easier and safer ways of doing the same thing without a custom task scheduler (a sketch of that stubborn approach follows this list)
  • as the scheduler will be used not only by the tasks run by the developer, but also by the code separated by await boundaries, the results will be unpredictable except in the most simple of scenarios
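
A sketch of what that stubborn approach would look like, just to show what it involves (not something I recommend):

// requires System.Threading and System.Threading.Tasks
// run an async delegate on a custom scheduler; Unwrap turns the resulting
// Task<Task> into a task that represents the actual asynchronous work
Task RunOnScheduler(Func<Task> asyncWork, TaskScheduler scheduler)
{
    return Task.Factory.StartNew(
            asyncWork,
            CancellationToken.None,
            TaskCreationOptions.DenyChildAttach,
            scheduler)
        .Unwrap();
}

Even then, what ends up running on the custom scheduler after each await depends on the captured context, which is exactly the unpredictability described above.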

There is also a pretty diagram from Microsoft representing the order of the operations and how complex they are. It's not just a case of a method executed somewhere, but a complex flow that uses the ThreadPoolTaskScheduler as the default task scheduler, a fundamental low level piece of functionality that should not be changed.

If you need more convincing, consider that the code after an await instruction may not even execute on the same thread (or indeed the same thread pool) as the one before, even if, as written, it appears to be part of the same method (see async - stay on the current thread? for more details). More on thread pools from Jon Skeet here: The Thread Pool and Asynchronous Methods.

I have been asking this of people at the interviews I am conducting and I thought I should document the correct answer and the expected behavior. And yes, we've already filled the position for this interview question, so you can't cheat :)

The question is quite banal: given two tables (TableA and TableB) both having a column ID, select the rows in TableA that don't have any corresponding row in TableB with the same ID.

Whenever you are answering an interview question, remember that your thinking process is just as important as the answer. So saying nothing, while better than "so I am adding 1 and 1 and getting 2", may not be your best option. Assuming you don't know the answer, a reasonable way of tackling any problem is to take it apart and try to solve every part separately. Let's do this here.

As the question requires the rows in A, select them:

SELECT * FROM TableA

Now, a filter should be applied, but which one? Here are some ideas:

  1. WHERE ID NOT IN (SELECT ID FROM TableB)
  2. WHERE NOT EXISTS (SELECT * FROM TableB WHERE TableA.ID=TableB.ID)
  3. EXCEPT SELECT ID FROM TableB -- this requires selecting only ID from TableA as well (EXCEPT and INTERSECT have been available since SQL Server 2005)

Think about it. Any issues with any of them? Any other options?

To test performance, I've used two tables with approximately 35 million rows. Here are the results:

  1. After 17 minutes I had to stop the query. Also, NOT IN has issues with NULL, as a value is neither equal nor unequal to NULL. SELECT * FROM Table WHERE Value NOT IN (NULL), for example, will always return no rows.
  2. It finished within 4 seconds. There are still issues with NULL, though, as a simple equality would not work with NULL. Assuming we wanted the non-null values of TableA, we're good.
  3. It finished within 5 seconds. This doesn't have any issues with NULL. SELECT NULL EXCEPT SELECT NULL will return no rows, while SELECT 1 EXCEPT SELECT NULL will return a row with the value 1. The syntax is pretty ugly though and works badly if the tables have other columns

What about another solution? We've exhausted simple filtering, how about another avenue? Whenever we want to combine information from two tables we use JOIN, but is that the case here?

SELECT * FROM TableA a
JOIN TableB b
ON a.ID = b.ID -- again, while I would ask people in the interview about null values, we will assume for this post that the values are not nullable

I've used a JOIN keyword, which translates to an INNER JOIN. The query above will select rows from A, but only those that have a correspondence in B. A funny solution to a slightly different question: count the items in A that do not have corresponding items in B:

SELECT (SELECT COUNT(*) FROM TableA) - (SELECT COUNT(*) FROM TableA a JOIN TableB b ON a.ID = b.ID)

However, we want the inverse of the INNER JOIN. What other types of JOINs are there? Bonus interview question! And the answers are:

  • INNER JOIN - returns only rows that have been successfully joined
  • OUTER JOIN (LEFT, RIGHT or FULL) - returns all rows of one table (or both, for FULL) joined to the corresponding values of the other table, or NULLs if there are none
  • CROSS JOIN - returns the Cartesian product: every row in A paired with every row in B

INNER would not work, as demonstrated, so what about a CROSS JOIN? Clearly not, as it would generate over a quadrillion rows (35 million times 35 million) before filtering anything. SQL Server would optimize a lot of the query, but it would look really weird anyway.

Is there a solution with OUTER JOIN? RIGHT OUTER JOIN will get the rows in B, not in A, so LEFT OUTER JOIN, by elimination, is the only remaining possible solution.

SELECT a.* FROM TableA a 
LEFT OUTER JOIN TableB b
ON a.ID=b.ID

This returns ALL the rows in table A and, for each of them, the rows in table B that have the same ID. When a row in table A has no correspondence in table B, though, we get NULL values for all the TableB columns. So all we have to do is filter for those. We know that there are no NULLs in the tables, so here is another working solution, solution 4:

SELECT a.* FROM TableA a 
LEFT OUTER JOIN TableB b
ON a.ID=b.ID
WHERE b.ID IS NULL

This solves the problem as well, in about 4 seconds. However, the other solution that finishes in the same time (solution 2 above) only performs as well because newer versions of SQL Server optimize the execution. Maybe it's a personal preference from the times when solution 4 was clearly the best in terms of performance, but I would choose it as the winner.

Summary

  • You can either use NOT EXISTS (and not NOT IN!) or a LEFT OUTER JOIN with a filter on NULL b values.
  • It's important to know if you have NULL values in the joining columns, and you get extra points for asking your interviewer about it
  • If you don't ask, I would penalize solutions that do not take NULL values into consideration: they add code complexity, as one cannot simply check for NULL in solution 4, and a decision has to be made on the expected behavior when NULL values are involved
  • When trying to find the solution to a problem in an interview:
    • think of concrete examples of the problem so you can test your solutions
    • break the problems into manageable bits if possible
    • think aloud, but to the point
  • Also, there is nothing more annoying than doing that thing pupils in school do: looking puppy eyed at the teacher while listing solutions to elicit a response. You're not in school anymore. Examples are dirty, time is important, no one cares about your grades.
  • Good luck out there!

I got this exception that doesn't appear anywhere on Google while I was debugging a .NET Core web app. You just have to enable Windows Authentication in the project properties (Debug tab). Duh!

System.InvalidOperationException: The Negotiate Authentication handler cannot be used on a server that directly supports Windows Authentication. Enable Windows Authentication for the server and the Negotiate Authentication handler will defer to it.
   at Microsoft.AspNetCore.Authentication.Negotiate.PostConfigureNegotiateOptions.PostConfigure(String name, NegotiateOptions options)
   at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
   at Microsoft.Extensions.Options.OptionsMonitor`1.<>c__DisplayClass11_0.<Get>b__0()
   at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
   at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
   at System.Lazy`1.CreateValue()
   at System.Lazy`1.get_Value()
   at Microsoft.Extensions.Options.OptionsCache`1.GetOrAdd(String name, Func`1 createOptions)
   at Microsoft.Extensions.Options.OptionsMonitor`1.Get(String name)
   at Microsoft.AspNetCore.Authentication.AuthenticationHandler`1.InitializeAsync(AuthenticationScheme scheme, HttpContext context)
   at Microsoft.AspNetCore.Authentication.AuthenticationHandlerProvider.GetHandlerAsync(HttpContext context, String authenticationScheme)
   at Microsoft.AspNetCore.Authentication.AuthenticationService.ChallengeAsync(HttpContext context, String scheme, AuthenticationProperties properties)
   at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)

This translates to a change in Properties/launchSettings.json like this:

{
  "iisSettings": {
    "windowsAuthentication": true,
    "anonymousAuthentication": true,
    //...
  },
  //...
}

I had this annoying warning that appeared whenever I would build my Wix setup project. I know, Wix sucks, but what can you do? The warning was warning MSB3270: There was a mismatch between the processor architecture of the project being built "MSIL" and the processor architecture of the reference "SomeProject\Some.dll", "AMD64". This mismatch may cause runtime failures. Please consider changing the targeted processor architecture of your project through the Configuration Manager so as to align the processor architectures between your project and references, or take a dependency on references with a processor architecture that matches the targeted processor architecture of your project.

What is worse, the warning would come up in the VS Errors window with no warning code, so I couldn't even suppress it. And yes, the text above does show a warning code, but it just wouldn't be interpreted by Visual Studio.

In order to fix it, apparently I had to set ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch to None, according to this StackOverflow answer. But where should I add this wonderful property? It didn't work in the .wixproj file, and it didn't make a difference in the projects mentioned in the warning either.

In the end, I decided to customize my build! Final solution:

  • add a file called Directory.Build.props to your solution folder
  • paste the following into it:
<Project>
 <PropertyGroup>
   <ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>None</ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>
 </PropertyGroup>
</Project>


I will show you some code, like in an interview question. Try to figure out what happens, then read on. The question is: what does the following code write to the console:

class Program
{
    static void Main(string[] args)
    {
        var c = new MyClass();
        c.DoWork();
        Console.ReadKey();
    }
}
 
class BaseClass
{
    public BaseClass()
    {
        DoWork();
    }
 
    public virtual void DoWork()
    {
        Console.WriteLine("Doing work in the base class");
    }
}
 
class MyClass:BaseClass
{
    private readonly string _myString = "I've set the string directly in the field";
 
    public MyClass()
    {
        _myString = "I've set the string in the constructor of MyClass";
    }
 
    public override void DoWork()
    {
        Console.WriteLine($"I am doing work in MyClass with my string: {_myString}");
    }
}

The trick is that the BaseClass constructor calls a virtual method that MyClass overrides, and it does so before the MyClass constructor body runs. The order is: MyClass field initializers, then the BaseClass constructor (which calls the overridden DoWork), then the MyClass constructor body. So the program first prints "I am doing work in MyClass with my string: I've set the string directly in the field" and then, for the explicit c.DoWork() call, "I am doing work in MyClass with my string: I've set the string in the constructor of MyClass". The moral: avoid calling virtual methods from constructors.


Let's say you have a Type and you want to refer to it by its simple name, without the namespace. So for bool, for example, you want Boolean, not System.Boolean. And if you try in your code typeof(bool).Name you get "Boolean", while for typeof(bool).FullName you get "System.Boolean".

However, for generic types, that is not the case. Try typeof(int?). For FullName you get "System.Nullable`1[[System.Int32, System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]]", but for Name you get "Nullable`1".

So the "name" of the type is just that, the name. In the case of generics, the name of the type is the same as the name of its generic definition. I find this a bit disingenuous, because the name encodes whether it is a generic type and how many generic type arguments it has, but you don't get the arguments themselves.

I admit that if I had to make a choice, I couldn't have come up with one to satisfy all demands, either. Just a heads up that Type.Name should probably not be relied upon, at least not for generic types.
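
If you do need a readable simple name that includes the type arguments, you have to build it yourself; here is a minimal sketch (a hypothetical helper, not something in the framework):

using System;
using System.Linq;

public static class TypeNameHelper
{
    // turns typeof(int?) into "Nullable<Int32>" instead of "Nullable`1"
    public static string GetSimpleName(Type type)
    {
        if (!type.IsGenericType)
        {
            return type.Name;
        }
        var name = type.Name.Substring(0, type.Name.IndexOf('`'));
        var arguments = string.Join(", ", type.GetGenericArguments().Select(GetSimpleName));
        return $"{name}<{arguments}>";
    }
}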

FizzBuzz is a programming task that is often used for job interviews, because it shows the thinking of the candidate in an isolated and concrete case. The requirements may vary slightly, but it goes like this: a loop goes through the numbers from 1 to 100. The candidate must write the code in the loop that will display, for each number, a string. This string is "Fizz" for numbers divisible by 3, "Buzz" for numbers divisible by 5 and "FizzBuzz" for numbers divisible by both 3 and 5. For all other numbers, just display that number.

There have been many implementations of this, for example my own in JavaScript from a while ago and FizzBuzz Enterprise Edition, which is always a good read about how not to complicate your code. However, since I've last written about it, JavaScript has changed, so I felt compelled to write an updated version. And here it is:
(d=n=>('0369'.includes((f=n=>n>9&&f([...''+n].reduce((s,v)=>+v+s,0))||n)&&f(n))&&'Fizz'||'')+(/[05]$/.test(n)&&'Buzz'||'')||n)*(i=n=>n>100||console.log(n+': '+d(n))+i(n+1))*i(1)

Among the abominations there, some of them inspired by C++ because why not, there are some of the new JavaScript constructs, like arrow functions and the spread operator. I know it seems pointless, but it's not: try to understand what the code does. If you want to see it in action, open Dev Tools in any browser and copy paste it in the console.

A funny feature that I've encountered recently. It's not something most people would find useful, but it helps tremendously with tracing and debugging what is going on. It's easy: just add .TagWith(someString) to your Entity Framework Core LINQ query and it will generate a comment in the resulting SQL. More details here: Query tags.
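
A quick sketch of what that looks like, with a hypothetical DbContext and entity:

// requires Microsoft.EntityFrameworkCore; TagWith is available since EF Core 2.2
var overdueOrders = context.Orders
    .TagWith("Loading overdue orders for the dashboard")
    .Where(o => o.DueDate < DateTime.UtcNow)
    .ToList();
// the generated SQL now starts with a comment line:
// -- Loading overdue orders for the dashboard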


One of the best things you can do in software is unit testing. There are tons of articles, including mine, explaining why people should take the time to write code in a way that makes it easily split into independent parts that can then be automatically tested. The part that is painful comes afterwards, when you've written your software, put it in production and you are furiously working on the second iteration. Traditionally, unit tests are great for refactorings, but when you are changing existing code, you need not only to "fix" the tests, but also to cover the new scenarios and allow for changes and expansions of existing ones.

Long story short, you will not be able to be confident your test suite covers the code as it changes until you can compute something called Code Coverage, or the amount of your code that is traversed during unit tests. Mind you, this is not a measure of how much of your functionality is covered, only the lines of code. In Visual Studio, they did a dirty deed and restricted the functionality to the Enterprise edition. But I am here to tell you that in .NET Core (and possibly for Framework, too, but I haven't tested it) it's very easy to have all the functionality and more even for the free version of Visual Studio.

These are the steps you have to take:

  • Add coverlet.msbuild NuGet to your unit tests project
  • Add ReportGenerator NuGet to your unit tests project
  • Write a batch file that looks like

    @ECHO OFF
    REM You need to add references to the nuget packages ReportGenerator and coverlet.msbuild
    IF NOT EXIST "..\..\packages\reportgenerator" (
      ECHO You need to install the ReportGenerator by Daniel Palme nuget
      EXIT 1
    )
    IF NOT EXIST "..\..\packages\coverlet.msbuild" (
      ECHO You need to install the coverlet.msbuild by tonerdo nuget
      EXIT 1
    )
    IF EXIST "bin/CoverageReport" RMDIR /Q /S "bin/CoverageReport"
    IF EXIST "bin/coverage.opencover.xml" DEL /F "bin/coverage.opencover.xml"
    dotnet test "Primus.Core.UnitTests.csproj"  --collect:"code coverage" /p:CollectCoverage=true /p:CoverletOutputFormat=\"opencover\" /p:CoverletOutput=\"bin/coverage\"
    for /f "tokens=*" %%a in ('dir ..\..\packages\reportgenerator /b /od') do set newest=%%a
    "..\..\packages\reportgenerator\%newest%\tools\netcoreapp3.0\ReportGenerator.exe" "-reports:bin\coverage.opencover.xml" "-targetdir:bin/CoverageReport" "-assemblyfilters:-*.DAL*" "-filefilters:-*ServiceCollectionExtensions.cs"
    start "Primus Plumbing Code Coverage Report" "bin/CoverageReport/index.htm"


    and save it in your unit test project folder
  • Optional: follow this answer on StackOverflow to be able to see the coverage directly in Visual Studio


Notes about the batch file:

  • newest is set to the latest installed version of ReportGenerator; if that doesn't work, replace it with whatever version you have (e.g. 4.3.0)
  • the DAL filter tells the report to ignore projects with DAL in their name. You shouldn't have to unit test your data access layer.
  • the ServiceCollectionExtensions.cs filter is for files that should be ignored; extension methods, for example, rarely need to be unit tested


Running the batch should start dotnet test and save the results both in coverage.opencover.xml and in some files in the TestResults folder. Then ReportGenerator will generate an HTML file with the coverage report, which will be opened at the end. However, if you followed the optional answer, you are now able to open the files in TestResults and see which parts of your code are covered directly in the Visual Studio editor! I found this less useful than the HTML report, but some of my colleagues liked the option.

I hope this helps you.

There is one basic functionality of all main programming languages: throwing exceptions. Determining in code that something is wrong, one throws an exception of a certain type with extra messages and values. The problem this solves is breaking an execution flow that has entered an invalid state and being aware of what happened. Traditionally, errors are then caught in higher levels of the application and decisions are made: ignore the error, log it, encapsulate it into another exception with extra information, throw it as it is after some cleanup, etc.

But as the joke goes, now you have two problems. When developing .NET code you have to ask yourself what type of exception you are going to throw, what data to add to it and think of what will catch it above and how will it interpret what you sent. Some people create a different exception type for each little issue, in view of the multiple catch(SpecificExceptionType) functionality, so they can choose later what to do at a higher level. Others try to use the out of the box Microsoft exception types, a clear case of stuffing square pegs in round holes. Inevitably someone will just give up in frustration and throw a new Exception("Something went wrong!"); and be done with it. And recently, in order to solve the problems with the above approaches, I envisioned (with full documentation and implementation) a dependency injected IExceptionFactory which I thought was the greatest invention since fire only to discover it was so unwieldy to use that I despaired and deleted the entire thing.

Discussion


Discussing with friends about this deceptively complicated problem, I think I found a solution that covers all major scenarios. But before doing that, I can just feel that some of you thought "Hey! I am doing that and there is nothing wrong with it!", so let's discuss what's wrong with the approaches above. If you want to skip this, go to The Solution section.

Multiple Exception implementations


Extending Exception is not simple. There are four constructors and I dare you to say off the top of your head what Exception(SerializationInfo, StreamingContext) is and where it is used. There are numerous code analyzers that spew a lot of warnings about how Exceptions should be implemented correctly. That's another story: here is a nice article about it. More importantly, doing all this work for every possible exception takes time and effort and produces duplicated code. In the end, you will get to the next scenario, but with a larger set of hole shapes.

Also, the try/catch block in C# 6.0 has been updated with the catch when syntax, so you can have multiple catch blocks with the same Exception type and different conditions.
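
For example (a trivial sketch with a made-up condition and a hypothetical method):

try
{
    ProcessPayment();
}
catch (InvalidOperationException ex) when (ex.Message.Contains("timeout"))
{
    // only timeouts are handled here
}
catch (InvalidOperationException)
{
    // every other InvalidOperationException ends up here
}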

Using existing exception types


If you get an empty string from a method and you expected something there, you should throw an exception, but which one? The value is not null, so ArgumentNullException is kind of not applicable. Is ArgumentOutOfRangeException better? I mean, empty string is not in the range of accepted values for the parameter, maybe it could be it. Or is it just ArgumentException? You decide on ArgumentException and you smugly add the name of the variable with nameof(yourLocalVariable), because you are knowledgeable in the ways of code... and you get a warning that yourLocalVariable is not the name of any parameter of the method you are in. That's right, the value was invalid, but ArgumentExceptions are used specifically for the current method arguments.

You don't want to use multiple custom exception types, because you've read this post and abandoned it half way, but you agreed with the first point. Or maybe you are just lazy. You ignore the warning, you use ArgumentException anyway. Later on you are reading the logs and trying to remember where in the code you used yourLocalVariable and why it matters that it's empty.

Admit it, the Microsoft exception types were not really meant to help you throw exceptions, they are there for Microsoft's internal code and use. Most of the few cases when the exception type is spot on are probably not what the makers of that exception type envisioned when they made it.

Using Exception and a meaningful message


You are done with pointless standards. You just use throw new Exception($"This really specific thing happened with variable {yourVariable}"); and let God sort them out! You can use catch when to look into the string, parse it for information and make decisions on it. It actually works, you're rightly satisfied with yourself. You've showed them all how it's done. Boom! And then a junior developer comes along, decides your wording is not quite right for a native English speaker and changes the string. Suddenly everything goes boom, as exceptions end up where they shouldn't and flows change unexpectedly.

After warning the entire team to never change the exception strings, as they are used in the functionality of the application, and even considering a resource system for exception strings so that decisions can be made regardless of content (and inevitably hating the way you need to store format strings and remember which value goes where), a member of the UI team comes and says "Hey, I need to show the user the reason the flow failed. And I need to translate it to their language". And you despair.

Using a single type of Exception that has everything you need


A slight variation on all of the points above, this involves creating only one type of custom exception and adding to it whatever is needed to determine flow, string resource ids, etc. This is actually a pretty decent idea, as it puts the control back into the developer's hands. Why depend on Microsoft types or on parsing strings? Context is for kings and you are a king amongst kings.

However, whenever you want to change something, like adding a value to an enum that defines the type of the exception or changing the way in which a certain exception is handled, you have to change all the code that uses that exception. It's a single point of use, but not a single point of change.

Moreover, other devs in the team think it is cumbersome to work with. The exception type is stored in the basest of libraries and they all want to add something to it. It becomes bloated and soon enough it grows into a huge mess that is handled differently in different code and is not easy to maintain, understand or use.

Another layer of indirection


So why not use an exception factory? Everything else in your code works on the premise that "if you want something, you inject an ISomething in the constructor and worry about the implementation never". Why not inject IExceptionFactory everywhere where you need exceptions, then do something magic with it? The result of the operation is determined by the implementation, too. If you want another implementation, you just inject something else. It's genius!

Only then you have to use it. How do you inject the factory into static methods, extension methods, stuff so basic that it is used as utility classes all over the code, now that you have to add an extra dependency to everything that uses those classes? Everybody hates you, hates having to add an extra constructor parameter, an extra field, then throw exceptions with something like throw _exceptionFactory.New("Something went wrong!", new ParameterEmptyExceptionData(nameof(localVariable), localVariable)); while adding a dependency to the logging library that the factory uses to log generated exceptions.

Oh, it's just crap!

The Solution


Let's start from an existing piece of code: throw new ArgumentException($"{nameof(localVariable)} is null or empty");. Optimally, we would just want to change this code slightly to solve several issues:
  • formalize that it is an argument empty exception
  • make it clear it's localVariable that was empty
  • maybe add the actual value of localVariable
  • declare the context in which the exception was thrown
  • declare the message that should be used in the exception
  • throw a meaningful exception type
  • decide if this exception should be ignored or thrown
  • log the exception
  • minimize developer effort
  • minimize dependencies
  • use a solution that is closed for modification, but open for extension

A tall order, especially since we've already decided that we don't want to use the factory idea. Some of the issues above are also non-issues in most cases. What if I don't care about the language of the message, or whether it comes from a resource or not; it's something used internally in our code. localVariable is empty, I don't need its value. The context is clear from the stack trace. The exception is meaningful enough as an ArgumentException. In other words, we need to address one more requirement: all of the issues above must be optional.

The software pattern that covers this scenario and has been used extensively by library makers is the builder pattern. For the sake of exploration, let's see how this would work:
var exception = new ArgumentException($"{nameof(localVariable)} is null or empty");
var builder = new ExceptionBuilder(exception, logger)
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .ShouldBeIgnored();
throw builder.Build(); // this also logs and returns an exception of a type the builder decides is relevant

This looks promising, considering that every method above is optional, except the builder instantiation and the build at the end, but it's still too close to the factory idea above. Why use new in a project that is based on dependency injection? Why call .Build() everywhere you need to throw an exception? Where does the logger come from?

So here is the solution I am proposing, using several resources we have at our disposal in C#:
  • the Exception type has a Data Dictionary property for additional data
  • extension methods can be defined in multiple places for the same type
  • there is no need for a builder instance when throwing an exception, nor for a mandatory Build call

The code will look like this:
throw new ArgumentException($"{nameof(localVariable)} is null or empty")
    .SetError(Error.EmptyValue)
    .SetName(nameof(localVariable))
    .AddValue(localVariable)
    .SetOrigin("Getting the localVariable in order to save the world")
    .SetMessageId(Messages.EmptyWorldNameWhenTryingToSaveIt)
    .TryToIgnore()
    .Build();

Each method above is an extension method on the Exception type. Any of them can decide to return the original object or a different one, but they all return an instance that extends Exception. The information attaching methods use the Data property to hold the information. The Build method is designed to take every information attached to an Exception and perform more complex actions, like logging or constructing a completely different object to be returned, however that step is also optional.
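
On the consuming side, the "try to ignore" flag can then be honored with a catch when filter; a sketch, where SaveTheWorld is a hypothetical method that throws exceptions decorated as above:

try
{
    SaveTheWorld(localVariable);
}
catch (Exception ex) when (ex.ShouldBeIgnored())
{
    // the thrower explicitly marked this exception as non-breaking, so log it and move on
}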

And here is the source for an ExceptionBuilder static class that acts as both container for the more common extension methods as well as the point where dependencies are being registered:
/// <summary>
/// Add data to exceptions, then build a Custom exception
/// using registered <see cref="IExceptionBuildHandler"/> and optional logging
/// </summary>
public static class ExceptionBuilder
{
private const string CustomPrefix = "Custom.";
 
private static readonly List<IExceptionBuildHandler> _handlers = new List<IExceptionBuildHandler>();
private static ICustomLogger _logger;
 
#region Extended Data
 
/// <summary>
/// Attaches a custom <see cref="Error"/> to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="error"></param>
/// <returns></returns>
public static Exception SetError(this Exception ex, Error error)
{
_logger?.LogTrace($"Setting error {error} in exception {ex}");
return ex.SetData(nameof(Error), error);
}
 
/// <summary>
/// Attaches an object as the exception origin to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="origin"></param>
/// <returns></returns>
public static Exception SetOrigin(this Exception ex, object origin)
{
_logger?.LogTrace($"Setting exception origin {origin} in exception {ex}");
return ex.SetData("origin", origin);
}
 
/// <summary>
/// Attaches a name parameter to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="name"></param>
/// <returns></returns>
public static Exception SetName(this Exception ex, string name)
{
_logger?.LogTrace($"Setting exception name {name} in exception {ex}");
return ex.SetData("name", name);
}
 
/// <summary>
/// Declare an exception as not breaking the execution flow.
/// Implement catch blocks for exceptions like this to support this scenario.
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static Exception TryToIgnore(this Exception ex)
{
_logger?.LogTrace($"Declaring exception {ex} as not breaking execution flow");
return ex.SetData("tryToIgnore", true);
}
 
/// <summary>
/// True if this exception is declared as not breaking execution flow
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static bool ShouldBeIgnored(this Exception ex)
{
return object.Equals(ex.GetData("tryToIgnore"), true);
}
 
/// <summary>
/// Attaches a type to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="type"></param>
/// <returns></returns>
public static Exception AddType(this Exception ex, Type type)
{
_logger?.LogTrace($"Attaching type {type} in exception {ex}");
return ex.AddData("types", type);
}
 
 
/// <summary>
/// Attaches a value to the exception
/// </summary>
/// <param name="ex"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception AddValue(this Exception ex, object value)
{
_logger?.LogTrace($"Attaching type {value} in exception {ex}");
return ex.AddData("values", value);
}
 
/// <summary>
/// Gets data from exception based on key.
/// Returns null if not found.
/// </summary>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <returns></returns>
public static object GetData(this Exception ex, string key)
{
key = $"{CustomPrefix}{key}";
if (ex.Data?.Contains(key)!=true)
{
return null;
}
return ex.Data[key];
}
 
/// <summary>
/// Attaches an object to exception data replacing any previous one with the same key
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception SetData<T>(this Exception ex, string key, T value)
{
key = $"{CustomPrefix}{key}";
var result = ex.AsCustomException();
ex.Data[key] = value;
return result;
}
 
/// <summary>
/// Adds an object to a list that resides in the exception data at the given key
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="ex"></param>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public static Exception AddData<T>(this Exception ex, string key, T value)
{
key = $"{CustomPrefix}{key}";
var result = ex.AsCustomException();
var alreadyExists = ex.Data.Contains(key);
if (!alreadyExists || !(ex.Data[key] is List<T> list))
{
if (alreadyExists)
{
_logger?.LogWarning($"Overwriting data {ex.Data[key]} with key {key} with an empty list of {typeof(T).Name} in exception {ex}.");
_logger?.LogWarning($"Are you using Add* and Set* builder methods at the same time or adding objects of different types?");
}
list = new List<T>();
ex.Data[key] = list;
}
lock (list)
{
list.Add(value);
}
return result;
}
 
#endregion Extended Data
 
/// <summary>
/// Builds the exception from the Data and the provided base exception
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static CustomException Build(this Exception ex)
{
var CustomException = ex.AsCustomException();
lock (_handlers)
{
for (var index = _handlers.Count - 1; index >= 0; index--)
{
var handler = _handlers[index];
var result = handler.Build(CustomException, _logger);
if (result != null)
{
CustomException = result.AsCustomException();
break;
}
}
}
_logger?.LogTrace($"Built exception {CustomException}");
return CustomException;
}
 
 
#region Registration
 
/// <summary>
/// Register an <see cref="IExceptionBuildHandler"/>. The last handler to be added will take precedence.
/// </summary>
/// <param name="handler"></param>
public static void RegisterBuildHandler(IExceptionBuildHandler handler)
{
lock (_handlers)
{
_logger?.LogTrace($"Registering exception build handler {handler}");
_handlers.Add(handler);
}
}
 
/// <summary>
/// Register a logger
/// </summary>
/// <param name="logger"></param>
public static void RegisterLogger(ICustomLogger logger)
{
_logger = logger;
_logger?.LogTrace($"Registered logger in the exception builder");
}
 
#endregion Registration
 
private static CustomException AsCustomException(this Exception ex)
{
return ex is CustomException CustomException
? CustomException
: new CustomException(ex);
}
}


Note a few things:
  • All of the extension methods are returning the same object, with the exception of Build, which returns a CustomException that maybe writes the extra Data values in ToString
  • The external dependencies are being registered via methods. In the class I use, I even replaced those methods with a RegisterServiceProvider method that sets up everything it needs, including the list of handlers, from dependency injection
  • One doesn't need to call Build, and every extension method naturally continues normal existing code like throw new WhateverException();
  • When using Build, though, you can change the exception object that is being thrown just by injecting another instance of IExceptionBuildHandler
  • In my project, I've devised a method of injecting code via a text configuration file. That means that you can change what happens when an exception is being thrown without recompiling your existing code.

Finally, there is one design decision that I am not sure about: to use throw exception, or to use exception.Throw()? The former is natural to all devs, but it needs special catch blocks to be able to resume execution; whatever the builder returns, it will always throw something. The latter needs a change in all code that throws exceptions, but it could handle the decision of whether to throw anything at all without a recompile.

I lean toward the first, just because changes in an existing code base can be done incrementally and the code can be understood by all devs, regardless of seniority.

I find this to be a wonderful idea, clear, useful and flexible. I hope you do, too!

It all started with the source code for NonCapturingTimer, a static factory class that was creating a System.Threading.Timer without capturing the execution context and was described as "A convenience API for interacting with System.Threading.Timer in a way that doesn't capture the ExecutionContext. We should be using this (or equivalent) everywhere we use timers to avoid rooting any values stored in asynclocals.". What did that even mean?

An issue opened by David Fowler sheds some light on this: "Any lazy activation of timers will capture the ExecutionContext. Combining this with a lazy initialization of the HttpClient and the handler graph may end up holding onto AsyncLocals for longer than expected. This could end up looking like a memory leak". This follows a Twitter thread from Fowler declaring AsyncLocal as evil.

There are also multiple issues that have crystallized into a proposal for a future version of .NET: "Timer static Create methods that make rooting behavior explicit".

And if you look at the ASP.Net sources on GitHub, they do use the class mostly for one-time timer calls and periodic cleanup calls. I should mention that Ben Adams from Microsoft calls this way of creating timers ugly.
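
For reference, the idea behind the class is roughly this (a sketch from memory, not the actual source; see the ASP.NET Core repository for the real thing):

using System;
using System.Threading;

public static class NonCapturingTimerSketch
{
    public static Timer Create(TimerCallback callback, object state, TimeSpan dueTime, TimeSpan period)
    {
        bool restoreFlow = false;
        try
        {
            // keep the ExecutionContext (and any AsyncLocal values) from being captured by the timer
            if (!ExecutionContext.IsFlowSuppressed())
            {
                ExecutionContext.SuppressFlow();
                restoreFlow = true;
            }
            return new Timer(callback, state, dueTime, period);
        }
        finally
        {
            // restore the flow so the calling code is unaffected
            if (restoreFlow)
            {
                ExecutionContext.RestoreFlow();
            }
        }
    }
}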

I don't have the time to go down further on this rabbit hole, but maybe people will find answers here when looking into this and comment on their findings.


Update:

The problem described here was not entirely accurate. The issue was that IOptionsSnapshot was registered as Scoped and I was just getting the service from the root IServiceProvider. The solution is to call provider.CreateScope() and use ActivatorUtilities with that scope's service provider. Even better, create a scope, then use it to get an instance of a business class that now supports Scoped services as well as Transient ones, just like a Controller would.

Warning, though: you need to dispose of the scope, but make sure you don't use any service created in it outside the scope (after disposing).
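
A minimal sketch of that, where MyObject stands for whatever business class you need:

// requires Microsoft.Extensions.DependencyInjection
using (var scope = _serviceProvider.CreateScope())
{
    // scoped services like IOptionsSnapshot<T> resolve fine from the scope's provider
    var myObject = ActivatorUtilities.CreateInstance<MyObject>(scope.ServiceProvider);
    myObject.DoWork(); // hypothetical method; only use the instance inside the scope
}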

I guess another solution would be to somehow register IOptionsSnapshot<> as transient, but I haven't tried it.

And now for the original post

I was trying to create an instance of an object from a service provider to resolve any dependencies, using ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider) and I was getting the exception:

System.InvalidOperationException
HResult=0x80131509
Message=Cannot resolve scoped service 'Microsoft.Extensions.Options.IOptionsSnapshot`1[ExternalConfiguration]' from root provider.


My object was receiving a parameter of type IOptionsSnapshot<ExternalConfiguration> and, upon further investigation, my service provider (which I had resolved from dependency injection as IServiceProvider) was actually a ServiceProviderEngineScope which just refused to resolve any IOptionsSnapshot! Funnily enough, if I replaced IOptionsSnapshot with IOptionsMonitor, which in my mind is a heavier interface, it worked without issues. Further still, the problem appeared only inside an IHostedService (a BackgroundService hooked up with services.AddHostedService<T>); if I wrote the same code in a controller action, for instance, it worked fine.

The .NET Core 2+ implementation of IOptionsSnapshot<T> is OptionsManager<T>. If I manually resolved an instance of OptionsManager before my object, then added it as a parameter, the code worked:

var optionsSnapshot = ActivatorUtilities.CreateInstance<OptionsManager<TestOptions>>(_serviceProvider);
var myObject = ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, optionsSnapshot);


So, specifically, the issue is that in .NET Core, the service provider implementation cannot resolve IOptionsSnapshot interfaces in worker services. You can still do that manually, but I suspect it is a bug, since there is no problem using an IOptionsMonitor instead of IOptionsSnapshot.

A possible solution is to use an additional service provider only for IOptionsSnapshot. Warning, this will not work in a general situation if the dependencies from the additional service provider also need parameters that would be found in the original service provider:

// initialization code
var serviceCollection = new ServiceCollection();
serviceCollection.AddSingleton(
typeof(IOptionsSnapshot<>),
typeof(OptionsManager<>)
);
serviceCollection.AddSingleton(
typeof(IOptionsFactory<>),
typeof(OptionsFactory<>)
);
_additionalServiceProvider = serviceCollection.BuildServiceProvider();
 
// resolution code
var constructor = typeof(MyObject).GetConstructors()
.Where(ci=>ci.IsPublic)
.Single();
var args = constructor.GetParameters()
.Select(p =>
{
try
{
return _serviceProvider.GetService(p.ParameterType);
}
catch
{
return _additionalServiceProvider.GetService(p.ParameterType);
}
})
.ToArray();
return ActivatorUtilities.CreateInstance<MyObject>(_serviceProvider, args);

.NET Core comes with its own dependency injection engine, separated in the Microsoft.Extensions.DependencyInjection package, and ASP.Net Core uses it by default. In a very simplistic description, it uses an IServiceCollection to add services to, then it builds an IServiceProvider from that list, an interface which returns an implementation based on a type, or null if it finds none. Changing the list of services afterwards is not supported. There are situations, though, where you want to add new services. One of them is dynamically resolving new types.

Therefore I set out to create a custom implementation of IServiceProvider that fixes that, using the mechanisms already existing in .NET Core. Note that this is just something I did from frustration, "because I could". Most people choose to replace the entire IServiceProvider with an implementation that uses some other DI container, like StructureMap.

My first attempt was to proxy a normal ServiceProvider and keep a reference to the collection. Then I would just change the collection and recreate the service provider. That has two major problems. One is that the previous service provider is not disposed. If you dispose it, you automatically dispose all services already resolved, and if you do not, you keep references to the created services. The second, and more dire, is that recreating the service provider will generate new instances for services, even if they are registered as singletons. That is not good.

I thought of a solution:
  1. keep a list of service providers, instead of just one
  2. use a custom service collection which will let us know when changes occurred
  3. whenever new services are added, add them to a list of new services
  4. whenever a service is resolved, go through the list of providers
  5. if any provider returns a value, provide it
  6. else if any new service create a new provider from the new services and add it to the list
  7. else return null
  8. when disposing, dispose all providers in the list

This works great, except that the newly added providers are separate from the existing ones, so when you try to resolve a type with the second provider and that type has in its constructor a type that was registered in the first provider, you get nothing.

One solution would be to add all services to the second provider, not only the new ones, but then we get back to the original issue of the singletons, only a bit more subtle:
  1. register type1 as a singleton
  2. get an instance of type1 (1)
  3. build the provider
  4. get an instance of type1 (2)
  5. register type2 which receives a type1 in its constructor
  6. get an instance of type2
  7. now, type1 (1) is the same as type1 (2), because it was resolved by the same provider
  8. type1 is different from type2.type1, though, because that was resolved as a different singleton by the second provider in the list

One solution would be to add all previous services as factories, then. For IType1, instead of returning typeof(Type1), return a factory method that resolves the value with our system. And it works... until it reaches a definition (like IOptions) that was registered as an open generic: services.AddSingleton(typeof(IType3<>),typeof(Type3<>)). In the case of open generics, you cannot use a descriptor with a factory, because it returns an object, regardless of the generic type argument used. It would not do to return a Type3<Banana> for a requested IType3<int>.

So, final version is this:
  1. keep a list of service providers, instead of just one
  2. keep a dictionary of the last object resolved for a type
  3. use a custom service collection which will let us know when changes occurred
  4. whenever new services are added, add them to a list of new services
  5. whenever a service is resolved, go through the list of providers
  6. if any provider returns a value, return it
  7. if no new services registered return null
  8. create a new provider from all the services like this:
    • if it's a new registration, use it as is
    • if it's an open generic definition type:
      • if singleton, add first all the existing resolutions for types that are defined by it
      • use the original descriptor afterwards
    • use a registration that proxies to the advanced resolution mechanism we created
  9. when disposing, dispose all providers in the list

This implementation also has a flaw: if a dependency that has an open generic (generic type definition) descriptor was resolved as a singleton by an additional service provider, and is then requested directly and can be resolved by a previous provider, you will get a different instance. Here is the scenario:
  1. the initial provider knows to map I<> to M<>
  2. you add a new singleton mapping from X to Y and Y gets a constructor parameter of type I<Z>
  3. you request an instance of X
  4. the first provider cannot resolve it
  5. the second provider can resolve it, therefore it will also resolve a I<Z> as an M<Z> singleton instance
  6. you request an instance of I<Z>
  7. the first provider can resolve it, therefore it will return a NEW singleton instance of M<Z>

This is an edge case that I don't have the time to solve. So, with the caveat above, here is the final version.
Use it like this:
// IAdvancedServiceProvider either injected 
// or resolved via serviceProvider.GetService<IAdvancedServiceProvider>
// or even serviceProvider as IAdvancedServiceProvider
advancedServiceProvider.ServiceCollection.AddSingleton...

And this is the source code:
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public interface IAdvancedServiceProvider : IServiceProvider
{
/// <summary>
/// Add services to this collection
/// </summary>
IServiceCollection ServiceCollection { get; }
}
 
/// <summary>
/// Service provider that allows for dynamic adding of new services
/// </summary>
public class AdvancedServiceProvider : IAdvancedServiceProvider, IDisposable
{
private readonly List<ServiceProvider> _serviceProviders;
private readonly NotifyChangedServiceCollection _services;
private readonly object _servicesLock = new object();
private List<ServiceDescriptor> _newDescriptors;
private Dictionary<Type, object> _resolvedObjects;
 
/// <summary>
/// Initializes a new instance of the <see cref="AdvancedServiceProvider"/> class.
/// </summary>
/// <param name="services">The services.</param>
public AdvancedServiceProvider(IServiceCollection services)
{
// registers itself in the list of services
services.AddSingleton<IAdvancedServiceProvider>(this);
 
_serviceProviders = new List<ServiceProvider>();
_newDescriptors = new List<ServiceDescriptor>();
_resolvedObjects = new Dictionary<Type, object>();
_services = new NotifyChangedServiceCollection(services);
_services.ServiceAdded += ServiceAdded;
_serviceProviders.Add(services.BuildServiceProvider(true));
}
 
private void ServiceAdded(object sender, ServiceDescriptor item)
{
lock (_servicesLock)
{
_newDescriptors.Add(item);
}
}
 
/// <summary>
/// Add services to this collection
/// </summary>
public IServiceCollection ServiceCollection { get => _services; }
 
/// <summary>
/// Gets the service object of the specified type.
/// </summary>
/// <param name="serviceType">An object that specifies the type of service object to get.</param>
/// <returns>A service object of type serviceType. -or- null if there is no service object of type serviceType.</returns>
public object GetService(Type serviceType)
{
lock (_servicesLock)
{
// go through the service provider chain and resolve the service
var service = GetServiceInternal(serviceType);
// if service was not found and we have new registrations
if (service == null && _newDescriptors.Count > 0)
{
// create a new service collection in order to build the next provider in the chain
var newCollection = new ServiceCollection();
foreach (var descriptor in _services)
{
foreach (var descriptorToAdd in GetDerivedServiceDescriptors(descriptor))
{
((IList<ServiceDescriptor>)newCollection).Add(descriptorToAdd);
}
}
var newServiceProvider = newCollection.BuildServiceProvider(true);
_serviceProviders.Add(newServiceProvider);
_newDescriptors = new List<ServiceDescriptor>();
service = newServiceProvider.GetService(serviceType);
}
if (service != null)
{
_resolvedObjects[serviceType] = service;
}
return service;
}
}
 
private IEnumerable<ServiceDescriptor> GetDerivedServiceDescriptors(ServiceDescriptor descriptor)
{
if (_newDescriptors.Contains(descriptor))
{
// if it's a new registration, just add it
yield return descriptor;
yield break;
}
 
if (!descriptor.ServiceType.IsGenericTypeDefinition)
{
// for a non open type generic singleton descriptor, register a factory that goes through the service provider
yield return ServiceDescriptor.Describe(
descriptor.ServiceType,
_ => GetServiceInternal(descriptor.ServiceType),
descriptor.Lifetime
);
yield break;
}
// if the registered service type for a singleton is an open generic type
// we register as factories all the already resolved specific types that fit this definition
if (descriptor.Lifetime == ServiceLifetime.Singleton)
{
foreach (var servType in _resolvedObjects.Keys.Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == descriptor.ServiceType))
{
 
yield return ServiceDescriptor.Describe(
servType,
_ => _resolvedObjects[servType],
ServiceLifetime.Singleton
);
}
}
// then we add the open type registration for any new types
yield return descriptor;
}
 
private object GetServiceInternal(Type serviceType)
{
foreach (var serviceProvider in _serviceProviders)
{
var service = serviceProvider.GetService(serviceType);
if (service != null)
{
return service;
}
}
return null;
}
 
/// <summary>
/// Dispose the provider and all resolved services
/// </summary>
public void Dispose()
{
lock (_servicesLock)
{
_services.ServiceAdded -= ServiceAdded;
foreach (var serviceProvider in _serviceProviders)
{
try
{
serviceProvider.Dispose();
}
catch
{
// singleton classes might be disposed twice and throw some exception
}
}
_newDescriptors.Clear();
_resolvedObjects.Clear();
_serviceProviders.Clear();
}
}
 
/// <summary>
/// An IServiceCollection implementation that exposes a ServiceAdded event for added service descriptors
/// The collection doesn't support removal or inserting of services
/// </summary>
private class NotifyChangedServiceCollection : IServiceCollection
{
private readonly IServiceCollection _services;
 
/// <summary>
/// Fired when a descriptor is added to the collection
/// </summary>
public event EventHandler<ServiceDescriptor> ServiceAdded;
 
/// <summary>
/// Initializes a new instance of the <see cref="NotifyChangedServiceCollection"/> class.
/// </summary>
/// <param name="services">The services.</param>
public NotifyChangedServiceCollection(IServiceCollection services)
{
_services = services;
}
 
/// <summary>
/// Get the value at index
/// Setting is not supported
/// </summary>
public ServiceDescriptor this[int index]
{
get => _services[index];
set => throw new NotSupportedException("Inserting services in collection is not supported");
}
 
/// <summary>
/// Count of services in the collection
/// </summary>
public int Count { get => _services.Count; }
 
/// <summary>
/// Obviously not
/// </summary>
public bool IsReadOnly { get => false; }
 
/// <summary>
/// Adding a service descriptor will fire the ServiceAdded event
/// </summary>
/// <param name="item"></param>
public void Add(ServiceDescriptor item)
{
_services.Add(item);
ServiceAdded.Invoke(this, item);
}
 
/// <summary>
/// Clear the collection is not supported
/// </summary>
public void Clear() => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// True is the item exists in the collection
/// </summary>
public bool Contains(ServiceDescriptor item) => _services.Contains(item);
 
/// <summary>
/// Copy items to array of service descriptors
/// </summary>
public void CopyTo(ServiceDescriptor[] array, int arrayIndex) => _services.CopyTo(array, arrayIndex);
 
/// <summary>
/// Enumerator for service descriptors
/// </summary>
public IEnumerator<ServiceDescriptor> GetEnumerator() => _services.GetEnumerator();
 
/// <summary>
/// Index of item in the list
/// </summary>
public int IndexOf(ServiceDescriptor item) => _services.IndexOf(item);
 
/// <summary>
/// Inserting is not supported
/// </summary>
public void Insert(int index, ServiceDescriptor item) => throw new NotSupportedException("Inserting services in collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public bool Remove(ServiceDescriptor item) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Removing items is not supported
/// </summary>
public void RemoveAt(int index) => throw new NotSupportedException("Removing services from collection is not supported");
 
/// <summary>
/// Enumerator for objects
/// </summary>
IEnumerator IEnumerable.GetEnumerator() => ((IEnumerable)_services).GetEnumerator();
}
}