I've built an application and, like any lazy dev out there, I focused on the business logic, the project structure, the readability, comments, the dependency injection, the unit tests, you know... the code. My preference is to start from top to bottom, creating more and more detailed implementations of interfaces while going down to the metal. The bottom of this chain is the repository, that class which handles database access, and I'd spent little time trying to understand or optimize that code. I mean, it's DB access, you read or you write stuff, how difficult can it be?

  When it was time to actually test it, the performance of the application was unexpectedly bad. I profiled it and I was getting reasonable percentages for different types of code, but it was all taking too long. And suddenly my colleague says "well, I tried a few things and now it works twice as fast". Excuse me?! You did WHAT?! I have been trying a few things too, and managed to do diddly squat! Give me that PR to see what you did! And... it was nothing I could see.

  He didn't change the code, he just added or altered the attributes decorating the properties of models. That pissed me off, because I had previously gone to the generated SQL with the SQL Profiler and it was all OK. So I executed my code and his code and recorded the SQL that came out:

  • was it the lazy loading? Nope. The number of instructions and their order was exactly the same
  • was it the explicit declaration of the names of indexes and foreign keys? Nope. Removing those didn't affect performance.
  • was it the ChangeTracker.LazyLoadingEnabled=false thing? Nope, I wasn't using child entities in a way that could be affected.
  • was there some other structure of the generated SQL? No. It was exactly the same SQL! Just my code was using thousands of CPU units and his was using none.
  • was it magic? Probably, because it made no sense whatsoever! Except...

Entity Framework generates simple SQL queries, but it doesn't execute them as you and I would. It constructs a string, then uses sp_executesql to run it. Something like this:

exec sp_executesql N'SELECT TOP(1) [p].[ID], [p].[TXT], [p].[LUP_TS]
FROM [sch].[table] AS [p]
WHERE [p].[ID] = @__p_0',N'@__p_0 nvarchar(64)',@__p_0='xxxx'

Do you see it? I didn't until I started to compare the same SQL in the two versions. And it was the type of the parameters! Note that the aptly named parameter @__p_0 is an NVARCHAR, while the actual column in the database was VARCHAR! Since NVARCHAR has higher type precedence, SQL Server was implicitly converting the column value for every row in order to compare them, which can also defeat any index on that column. The waste of resources was staggering!

How do you declare the exact database type of your columns? Multiple ways. In my case there were three different problems:

  • no [Unicode(false)] attribute on the string columns - meaning EF expected the columns to be NVARCHAR
  • no TypeName parameter in the Column attribute where the columns were NTEXT - meaning EF expected them to be NVARCHAR(MAX)
    • I guess one could skip the Unicode thing and instead just specify the type name, but I haven't tested it
  • using MaxLength instead of StringLength - because even if their descriptions are very similar and MaxLength sounds like it applies in more cases, it's StringLength that EF wants (all three fixes are sketched below)
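
Putting it together, here is a minimal sketch of how such a model might look with the types declared explicitly. The names are illustrative (borrowed from the SQL above), not the actual project's, and the [Unicode] attribute lives in Microsoft.EntityFrameworkCore and requires EF Core 6 or later:

[Table("table", Schema = "sch")]
public class Post
{
    [Key]
    [Column("ID")]
    [Unicode(false)]   // VARCHAR instead of the default NVARCHAR
    [StringLength(64)] // VARCHAR(64), matching the actual column size
    public string Id { get; set; }

    // explicit type name for a column that is not a plain (N)VARCHAR
    [Column("TXT", TypeName = "ntext")]
    public string Text { get; set; }

    [Column("LUP_TS")]
    public DateTime LastUpdated { get; set; }
}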

From 40-50ms per processing loop, it dropped to 21ms just by fixing these.

Long story short: parametrized SQL executed with sp_executesql hides a possible performance issue if the columns that you compare or extract have slightly different types than those of the parameters.

Go figure. I hate Entity Framework!

Intro

  This post is about the System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable. exception, which may be because you shared your SqlConnection or you tried to SaveChanges twice, and all of the other issues that you can google for. I was not so lucky. I spent a day and a half trying to understand what was going on and only with the help of another dev did I get close to the issue.

TL;DR

I used a column with identity generation, but it wasn't also a primary key and EF sucks.

Details

  Imagine my scenario first: I wanted to use a database to assign a unique integer to a string. I would first search for the entry in the DB and, if not found, just insert a new one. The SQL Server IDENTITY(1,1) setting would ensure I got a new unique value for the inserted row. So the table would look like this:

CREATE TABLE STR_ID (
  STR NVARCHAR(64) PRIMARY KEY,
  ID INT IDENTITY(1,1)
)

Nothing fancy about this. Now for the C# part, using Entity Framework Core 6.

I created an entity class for it:

[Table("STR_ID")]
public class StrId {

  [Column("STR")]
  [Key]
  public string Text { get; set; }

  [Column("ID")]
  [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
  public int Id { get; set; }

}

And then I proceeded to test it in the following way:

  • create a DbContext instance
  • search for a value by STR/Text in the proper DbSet
  • if it doesn't exist, insert a new row and SaveChanges
  • retrieve the generated id
  • dispose the context

I also ran this 20 times in parallel (well, as Tasks - a minor distinction, but it was using the thread pool).
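
For context, here is a minimal sketch of what each task was doing (MyContext and the StrIds set name are hypothetical stand-ins for my actual code; text is the input string):

using (var context = new MyContext())
{
    // find-or-insert: look for the string first
    var entry = context.StrIds.FirstOrDefault(s => s.Text == text);
    if (entry == null)
    {
        entry = new StrId { Text = text };
        context.StrIds.Add(entry);
        context.SaveChanges(); // EF wraps this in its own transaction
    }
    var id = entry.Id; // populated by EF from the IDENTITY column
}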

The result was underwhelming. It would fail EVERY TIME, with either an exception about deadlocks or 

System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
   at Microsoft.Data.SqlClient.SqlTransaction.ZombieCheck()
   at Microsoft.Data.SqlClient.SqlTransaction.Commit()

I did what every sane developer would do in this situation and bought myself a shotgun (we all know it's the most effective against zombies) then googled for other people having this issue. I mean, it would be common, right? You do some EF stuff in parallel and you get some errors.

No. This happens in a parallelism scenario, but parallelism is not the cause. It's not really about transactions either: EF will wrap SaveChanges operations in a transaction and that transaction is the one throwing the error, but whatever completed the transaction early did so inside EF - no, it's not your code!

I tried everything I could think of. I disabled the EF transaction and made my own, using every type of IsolationLevel. I tried EnableRetryOnFailure with hilarious results: I was monitoring the values inserted in the database with NOLOCK and they were going up to 20, then back to 10, then 15, then back to 1, and it was taking ages trying to retry operations that apparently had dependencies on each other, only for almost all of them to fail after a long time. I even disabled connection pooling, which probably works, but would have made everything slow.

Solution

While I can't say what EXACTLY caused the problem (I would have to look into the Microsoft code and I don't feel like it now), the solution was ridiculously simple: just make the IDENTITY column a primary key instead:

CREATE TABLE STR_ID (
  ID INT PRIMARY KEY IDENTITY(1,1),
  STR NVARCHAR(64)
)

-- because this is what I am searching for
CREATE UNIQUE INDEX IX_STR_ID_STR ON STR_ID(STR)

And the updated entity class:

[Table("STR_ID")]
public class StrId {

  [Column("ID")]
  [Key]
  public int Id { get; set; }

  [Column("STR")]
  public string Text { get; set; }

}

I was about to use IsolationLevel.ReadUncommitted for the select or just set AutoTransactionsEnabled to false (which also would have solved the problem), when the other guy suggested this solution. And I refused! It was dumb! Why the hell would that work? You dummy! And of course it worked. Why? Dunno! The magical thinking in the design of EF strikes again and I am the dummy.

Conclusion

What happened is probably related to deadlocks, more specifically multiple threads trying to read/write/read again from a table and getting in each other's way. It probably has something to do with how IDENTITY columns need to lock the entire table, even if no one reads that row! But one thing is certain: this is a bug, because the database functionality of a primary key identity column and of a unique indexed identity column is identical! And yet Entity Framework handles them very differently.

So, in conclusion:

  • yay! finally a technical post
  • this had nothing to do with how DbContexts get disposed (since in my actual scenario I was getting this from dependency injection and so I lost hours ruling that out)
  • the error about transactions was misleading, since the issue was whatever closed the transaction inside the Microsoft code, not anything you did
  • the advice in some of the AggregateExceptions up the stack (An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure' to the 'UseSqlServer' call.) was even more misleading
  • the EF support for IDENTITY columns - well, it needs to know about them, because otherwise how would it know not to attempt to save values in those columns - is also misleading, because having support doesn't mean it's good support
  • while parallel access to the DB made the problem visible, it has little to do with parallelism 
  • EF knows how to handle PRIMARY KEYs so that's the solution
  • EF sucks!

I really hope this saves time for people in the same situation!


C# 3.0 introduced Object Initializer Syntax which was a game changer for code that created new objects or new collections. Here is a contrived example:

var obj = new ComplexObject
{
    // object initializer
    AnItem = new Item("text1"),
    AnotherItem = new Item("text2"),
    // collection initializer
    RestOfItems = new List<Item>
    {
        new Item("text3"),
        new Item("text4"),
        new Item("text5")
    },
    // index initializer (a C# 6 addition)
    [0]=new Item("text6"),
    [1]=new Item("text7")
};

Before this syntax was available, the same code would have looked like this:

var obj = new ComplexObject();
obj.AnItem = new Item("text1");
obj.AnotherItem = new Item("text2");
obj.RestOfItems = new List<Item>();
obj.RestOfItems.Add(new Item("text3"));
obj.RestOfItems.Add(new Item("text4"));
obj.RestOfItems.Add(new Item("text5"));
obj[0] = new Item("text6");
obj[1] = new Item("text7");

It's not like the number of lines has changed, but both the writability and readability of the code increase with the new syntax. At least that's what I think. However, outside these very simple scenarios, the feature feels like it's encumbering us or that it is missing something. Imagine you want to only add items to a list based on some condition. You might get a code like this:

var list = new List<Item>
{
    new Item("text1")
};
if (condition) list.Add(new Item("text2"));

We use the initializer for one item, but not for the other. We might as well use Add for both items, then, or use some cumbersome syntax that hurts more than it helps:

var list = new[]
{
    new Item("text1"),
    condition?new Item("text2"):null
}
.Where(i => i != null)
.ToList();

It's such an ugly syntax that Visual Studio doesn't know how to indent it properly. What to do? Software patterns to the rescue! 

Seriously now, people who know me know that I scoff at the general concept of software patterns, but the patterns themselves are useful, and in this case even the conceptual framework that I often deride helps. Because we are trying to initialize an object or a collection, which means we are attempting to build it. So why not use a Builder pattern? Here are two versions of the same code, one with extension methods (which can be used everywhere, but might pollute the member list for common objects) and another with an actual builder object created specifically for our purposes (which may simplify usage):

// extension methods
var list = new List<Item>()
    .Adding(new Item("text1"))
    .ConditionalAdding(condition, new Item("text2"));
...
public static class ItemListExtensions
{
    public static List<T> Adding<T>(this List<T> list, T item)
    {
        list.Add(item);
        return list;
    }
    public static List<T> ConditionalAdding<T>(this List<T> list, bool condition, T item)
    {
        if (condition)
        {
            list.Add(item);
        }
        return list;
    }
}

// builder object
var list = new ItemListBuilder()
    .Adding("text1")
    .ConditionalAdding(condition, "text2")
    .Build();
...
public class ItemListBuilder
{
    private readonly List<Item> list;

    public ItemListBuilder()
    {
        list = new List<Item>();
    }

    public ItemListBuilder Adding(string text)
    {
        list.Add(new Item(text));
        return this;
    }

    public ItemListBuilder ConditionalAdding(bool condition, string text)
    {
        if (condition)
        {
            list.Add(new Item(text));
        }
        return this;
    }

    public List<Item> Build()
    {
        return list.ToList();
    }
}

Of course, for just a simple collection with some conditions this might feel like overkill, but try to compare the two versions of the code: the one that uses initializer syntax and then the Add method and the one that declares what it wants to do, step by step. Also note that in the case of the builder object I've taken the liberty of creating methods that only take string parameters then build the list of Item, thus simplifying the syntax and clarifying intent.

I had this situation where I had to map an object to another object by copying some properties into collections and values of some type to other types and so on. The original code was building the output using a top-down approach:

public Output BuildOutput(Input input) {
  var output=new Output();
  BuildFirstPart(output, input);
  BuildSecondPart(output, input);
  ...
  return output;
}

public void BuildFirstPart(Output output, Input input) {
  var firstSection = BuildFirstSection(input);
  output.FirstPart=new List<Part> {
    new Part(firstSection)
  };
  if (condition) {
    var secondSection=BuildSecondSection(input);
    output.FirstPart.Add(new Part(secondSection));
  }
}

And so on and so on. I believe that in this case a fluent design makes the code a lot more readable:

var output = new Output {
  FirstPart = new List<Part>()
    .Adding(BuildFirstSection(input))
    .ConditionalAdding(condition, BuildSecondSection(input)),
  SecondPart = ...
};

The "build section" methods would also be inlined and replaced with fluent design methods. In this way the structure of "the output" is clearly shown in a method that declares what it will build and populates the various members of the Output class with simple calculations, the only other methods that the builder needs. A human will understand at a glance what thing it will build, see its structure as a tree of code and be able to go to individual methods to see or change the specific calculation that provides a value.

The point of my post is that sometimes the very out-of-the-box features that help us a lot most of the time end up complicating and obfuscating our code in specific situations. If the code starts to smell, to become unreadable, to make you feel bad for writing it, then stop, think of a better solution, then implement it so that it is the best version for your particular case. Use tools when they are useful and discard them when other solutions might prove more effective.


Intro

  Some of the most visited posts on this blog relate to dependency injection in .NET. As you may know, dependency injection has been baked into ASP.Net almost since the beginning, but it culminated with the MVC framework and the .Net Core rewrite. Dependency injection has been separated into packages from where it can be used everywhere. However, probably because they thought it was such a core concept or maybe because it is code that came along since the days of UnityContainer, the entire mechanism is sealed, internalized and without any hooks on which to add custom code. Which, in my view, is crazy, since dependency injection serves, amongst other things, the purpose of being one point of change for class instantiations.

  Now, to be fair, I am not an expert in the design patterns used in dependency injection in the .NET code. There might be some weird way in which you can extend the code that I am unaware of. In that case, please illuminate me. But as far as I went in the code, this is the simplest way I found to insert my own hook into the resolution process. If you just want the code, skip to the end.

Using DI

  First of all, a recap on how to use dependency injection (from scratch) in a console application:

// you need the nuget packages Microsoft.Extensions.DependencyInjection 
// and Microsoft.Extensions.DependencyInjection.Abstractions
using Microsoft.Extensions.DependencyInjection;
...

// create a service collection
var services = new ServiceCollection();
// add the mappings between interface and implementation
services.AddSingleton<ITest, Test>();
// build the provider
var provider = services.BuildServiceProvider();

// get the instance of a service
var test = provider.GetService<ITest>();

  Note that this is a very simplified scenario. For more details, please check Creating a console app with Dependency Injection in .NET Core.

Recommended pattern for DI

  Second of all, a recap of the recommended way of using dependency injection (both from Microsoft and myself) which is... constructor injection. It serves two purposes:

  1. It declares all the dependencies of an object in the constructor. You can rest assured that all you would ever need for that thing to work is there.
  2. When the constructor starts to fill a page you get a strong hint that your class may be doing too many things and you should split it up.

  But then again, there is the "Learn the rules. Master the rules. Break the rules" concept. I've familiarized myself with it before writing this post so that now I can safely break the second part and not master anything before I break stuff. I am talking about property injection, which is generally (for good reason) frowned upon, but which one may want to use in scenarios adjacent to the functionality of the class, like logging. One of the things that always bothered me is having to declare a logger in every constructor ever, even though a logger in itself adds nothing to the functionality of the class.

  So I've had this idea that I would use constructor dependency injection EVERYWHERE, except logging. I would create an ILogger<T> property which would be automatically injected with the correct implementation at resolution time. Only there is a problem: Microsoft's dependency injection does not support property injection or resolution hooks (as far as I could find). So I thought of a solution.

How does it work?

  Third of all, a small recap on how ServiceProvider really works.

  When one does services.BuildServiceProvider() they actually call an extension method that does new ServiceProvider(services, someServiceProviderOptions). Only that constructor is internal, so you can't use it yourself. Then, inside the provider class, the GetService method is using a ConcurrentDictionary of service accessors to get your service. In case the service accessor is not there, the method from the field _createServiceAccessor is going to be used. So my solution: replace the field value with a wrapper that will also execute our own code.

The solution

  Before I show you the code, mind that this applies to .NET 7.0. I guess it will work in most .NET Core versions, but they could change the internal field name or functionality in which case this might break.

  Finally, here is the code:

public static class ServiceProviderExtensions
{
    /// <summary>
    /// Adds a custom handler to be executed after service provider resolves a service
    /// </summary>
    /// <param name="provider">The service provider</param>
    /// <param name="handler">An action receiving the service provider, 
    /// the registered type of the service 
    /// and the actual instance of the service</param>
    /// <returns>the same ServiceProvider</returns>
    public static ServiceProvider AddCustomResolveHandler(this ServiceProvider provider,
                 Action<IServiceProvider, Type, object> handler)
    {
        var field = typeof(ServiceProvider).GetField("_createServiceAccessor",
                        BindingFlags.Instance | BindingFlags.NonPublic);
        var accessor = (Delegate)field.GetValue(provider);
        var newAccessor = (Type type) =>
        {
            Func<object, object> newFunc = (object scope) =>
            {
                var resolver = (Delegate)accessor.DynamicInvoke(new[] { type });
                var resolved = resolver.DynamicInvoke(new[] { scope });
                handler(provider, type, resolved);
                return resolved;
            };
            return newFunc;
        };
        field.SetValue(provider, newAccessor);
        return provider;
    }
}

  As you can see, we take the original accessor delegate and we replace it with a version that runs our own handler immediately after the service has been instantiated.

Populating a Logger property

  And we can use it like this to do property injection now:

static void Main(string[] args)
{
    var services = new ServiceCollection();
    services.AddSingleton<ITest, Test>();
    var provider = services.BuildServiceProvider();
    provider.AddCustomResolveHandler(PopulateLogger);

    var test = (Test)provider.GetService<ITest>();
    Assert.IsNotNull(test.Logger);
}

private static void PopulateLogger(IServiceProvider provider, 
                                    Type type, object service)
{
    if (service is null) return;
    var propInfo = service.GetType().GetProperty("Logger",
                    BindingFlags.Instance|BindingFlags.Public);
    if (propInfo is null) return;
    var expectedType = typeof(ILogger<>).MakeGenericType(service.GetType());
    if (propInfo.PropertyType != expectedType) return;
    var logger = provider.GetService(expectedType);
    propInfo.SetValue(service, logger);
}

  See how I've added the PopulateLogger handler in which I am looking for a property like 

public ILogger<Test> Logger { get; private set; }

  (where the generic type of ILogger is the same as the class) and populate it.

Populating any decorated property

  Of course, this is kind of ugly. If you want to enable property injection, why not use an attribute that makes your intention clear and requires less reflection? Fine. Let's do it like this:

// Add handler
provider.AddCustomResolveHandler(InjectProperties);
...

// the handler populates all properties that are decorated with [Inject]
private static void InjectProperties(IServiceProvider provider, Type type, object service)
{
    if (service is null) return;
    var propInfos = service.GetType()
        .GetProperties(BindingFlags.Instance | BindingFlags.Public)
        .Where(p => p.GetCustomAttribute<InjectAttribute>() != null)
        .ToList();
    foreach (var propInfo in propInfos)
    {
        var instance = provider.GetService(propInfo.PropertyType);
        propInfo.SetValue(service, instance);
    }
}
...

// the attribute class
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
public class InjectAttribute : Attribute {}

Conclusion

I have demonstrated how to add a custom handler to be executed after any service instance is resolved by the default Microsoft ServiceProvider class, which in turn enables property injection, one point of change to all classes, etc. I once wrote code to wrap any class into a proxy that would trace all property and method calls with their parameters automatically. You can plug that in with the code above, if you so choose.

Be warned that this solution is using reflection to change the functionality of the .NET 7.0 ServiceProvider class and, if the code there changes for some reason, you might need to adapt it to the latest functionality.

If you know of a more elegant way of doing this, please let me know.

Hope it helps!

Bonus

But what about people who really, really, really hate reflection and don't want to use it? What about situations where you have a dependency injection framework running for you, but you have no access to the service provider builder code? Isn't there any solution?

Yes. And No. (sorry, couldn't help myself)

The issue is that ServiceProvider, ServiceCollection and all that jazz are pretty closed up. There is no solution I know of that solved this issue. However... there is one particular point in the dependency injection setup which can be hijacked and that is... the adding of the service descriptors themselves!

You see, when you do ServiceCollection.AddSingleton<Something,Something>, what gets called is yet another extension method; the ServiceCollection itself is nothing but a list of ServiceDescriptors. The Add* extension methods come from the ServiceCollectionServiceExtensions class, which contains a lot of methods that all reduce to just three different effects:

  • adding a ServiceDescriptor for a type (so associating a service type with a concrete type) with a specific lifetime (transient, scoped or singleton)
  • adding a ServiceDescriptor for an instance (so associating a service type with a specific instance of a class), by default singleton
  • adding a ServiceDescriptor for a factory method (so associating a service type with a constructor method)

If you think about it, the first two can be translated into the third. In order to instantiate a type using a service provider you do ActivatorUtilities.CreateInstance(provider, type) and a factory method that returns a specific instance of a class is trivial.
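
To illustrate the idea (my sketch, not the actual Microsoft code), here is how the first two registration flavors can be expressed as the third:

// a type registration becomes a factory that activates the concrete type
Func<IServiceProvider, object> typeFactory =
    sp => ActivatorUtilities.CreateInstance(sp, typeof(Something));

// an instance registration becomes a factory that just returns that instance
var something = new Something();
Func<IServiceProvider, object> instanceFactory = sp => something;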

So, the solution: just copy paste the contents of ServiceCollectionServiceExtensions and make all of the methods end up in the Add method using a service factory method descriptor. Now instead of using the extensions from Microsoft, you use your class, with the same effect. Next step: replace the provider factory method with a wrapper that also executes stuff.

Since this is a bonus, I let you implement everything except the Add method, which I will provide here:

// original code
private static IServiceCollection Add(
    IServiceCollection collection,
    Type serviceType,
    Func<IServiceProvider, object> implementationFactory,
    ServiceLifetime lifetime)
{
    var descriptor = new ServiceDescriptor(serviceType, implementationFactory, lifetime);
    collection.Add(descriptor);
    return collection;
}

//updated code
private static IServiceCollection Add(
    IServiceCollection collection,
    Type serviceType,
    Func<IServiceProvider, object> implementationFactory,
    ServiceLifetime lifetime)
{
    Func<IServiceProvider, object> factory = (sp)=> {
        var instance = implementationFactory(sp);
        // no stack overflow, please
        if (instance is IDependencyResolver) return instance;
        // look for a registered instance of IDependencyResolver (our own interface)
        var resolver=sp.GetService<IDependencyResolver>();
        // intercept the resolution and replace it with our own 
        return resolver?.Resolve(sp, serviceType, instance) ?? instance;
    };
    var descriptor = new ServiceDescriptor(serviceType, factory, lifetime);
    collection.Add(descriptor);
    return collection;
}

All you have to do is (create the interface and then) register your own implementation of IDependencyResolver and do whatever you want to do in the Resolve method, including the logger instantiation, the inject attribute handling or the wrapping of objects, as above. All without reflection.
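
Since I left the interface to you, here is one possible shape for it, matching the Resolve call in the factory above:

public interface IDependencyResolver
{
    // receives the provider, the registered service type and the resolved
    // instance; returns the instance to actually use (possibly wrapped)
    object Resolve(IServiceProvider provider, Type serviceType, object instance);
}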

The kick here is that you have to make sure you don't use the default Add* methods when you register your services, or this won't work. 

There you have it, bonus content not found on dev.to ;)


  So I was happily minding my own business after a production release only for everything to go BOOM! Apparently, maybe because of something we did, but maybe not, the memory of the production servers was running out. Exception looked something like:

System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
 at System.Reflection.Emit.TypeBuilder.SetMethodIL(RuntimeModule module, Int32 tk, Boolean isInitLocals, Byte[] body, Int32 bodyLength,
    Byte[] localSig, Int32 sigLength, Int32 maxStackSize, ExceptionHandler[] exceptions, Int32 numExceptions, Int32[] tokenFixups, Int32 numTokenFixups)
 at System.Reflection.Emit.TypeBuilder.CreateTypeNoLock()
 at System.Reflection.Emit.TypeBuilder.CreateType()
 at System.Xml.Serialization.XmlSerializationReaderILGen.GenerateEnd(String[] methods, XmlMapping[] xmlMappings, Type[] types)
 at System.Xml.Serialization.TempAssembly.GenerateRefEmitAssembly(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, Evidence evidence)
 at System.Xml.Serialization.TempAssembly..ctor(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, String location, Evidence evidence)
 at System.Xml.Serialization.XmlSerializer.GenerateTempAssembly(XmlMapping xmlMapping, Type type, String defaultNamespace, String location, Evidence evidence)
 at System.Xml.Serialization.XmlSerializer..ctor(Type type, XmlAttributeOverrides overrides, Type[] extraTypes,
     XmlRootAttribute root, String defaultNamespace, String location, Evidence evidence)
 at System.Xml.Serialization.XmlSerializer..ctor(Type type, XmlAttributeOverrides overrides)

At first I thought there was something else eating away the memory, but the exception was repeatedly thrown at this specific point. And I did what every senior dev does: googled it! And I found this answer: "When an XmlSerializer is created, an assembly is dynamically generated and loaded into the AppDomain. These assemblies cannot be garbage collected until their AppDomain is unloaded, which in your case is never." It also referenced a Microsoft KB886385 from 2007 which, of course, didn't exist at that URL anymore, but I found it archived by some nice people.

What was going on? I would tell you, but Gergely Kalapos explains things much better in his article How the evil System.Xml.Serialization.XmlSerializer class can bring down a server with 32Gb ram. He also explains what commands he used to debug the issue, which is great!

But since we already know links tend to vanish over time (so much for stuff on the Internet living forever), here is the gist of it all:

  • XmlSerializer generates dynamic code (as dynamic assemblies) in its constructors
  • the most used constructors of the class have a caching mechanism in place:
    • XmlSerializer.XmlSerializer(Type)
    • XmlSerializer.XmlSerializer(Type, String)
  • but the others do not, so every time you use one of those you create, load and never unload another dynamic assembly - see the sketch below
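
Here is that sketch, contrasting the constructors and showing the usual workaround of caching the serializer yourself (MyType is a placeholder; the cache assumes the same overrides are always used for a given type):

using System;
using System.Collections.Concurrent;
using System.Xml.Serialization;

public class MyType { public string Name { get; set; } }

public static class SerializerCache
{
    private static readonly ConcurrentDictionary<Type, XmlSerializer> _serializers =
        new ConcurrentDictionary<Type, XmlSerializer>();

    // workaround: keep and reuse one serializer per type
    public static XmlSerializer Get(Type type, XmlAttributeOverrides overrides) =>
        _serializers.GetOrAdd(type, t => new XmlSerializer(t, overrides));

    public static void Demo()
    {
        var safe = new XmlSerializer(typeof(MyType));   // cached by the framework
        var leaky = new XmlSerializer(typeof(MyType),   // NOT cached: a new dynamic
            new XmlAttributeOverrides());               // assembly is loaded EVERY call
        var reused = Get(typeof(MyType), new XmlAttributeOverrides()); // cached by us
    }
}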

I know this is an old class in an old framework, but some of us still work in companies that are firmly rooted in the middle ages. Also since I plan to maintain my blog online until I die, it will live on the Internet for the duration.

Hope it helps!

  I haven't been working on the Sift string distance algorithm for a while, but then I was reminded of it because someone wanted to use it to suggest corrections to user input. Something like Google's "Did you mean...?" or like an autocomplete application. And it got me thinking of ways to use Sift for bulk searching. I am still thinking about it, but in the meanwhile, this can be achieved using the Sift4 algorithm, with up to 40% improvement in speed over the naïve comparison with each item in the list.

  Testing this solution, I've realized that the maxDistance parameter did not work correctly. I apologize. The code is now fixed on the algorithm's blog post, so go and get it.

  So what is this solution for mass search? We can use two pieces of knowledge about the problem space:

  • the minimum possible distance between two strings of lengths l1 and l2 will always be abs(l1-l2)
    • it's very easy to understand the intuition behind it: one cannot generate a string of size 5 from a string of size 3 without at least adding two new letters, so the minimum distance would be 2
  • as we advance through the list of strings, we have a best distance value that we keep updating
    • this maps very well onto the maxDistance option of Sift4

  Thus armed, we can find the best matches for our string from a list using the following steps:

  1. set a bestDistance variable to a very large value
  2. set a matches variable to an empty list
  3. for each of the strings in the list:
    1. compare the minimum distance between the search string and the string in the list (abs(l1-l2)) to bestDistance
      1. if the minimum distance is larger than bestDistance, ignore the string and move to the next
    2. use Sift4 to get the distance between the search string and the string in the list, using bestDistance as the maxDistance parameter
      1. if the algorithm reaches a temporary distance that is larger than bestDistance, it will break early and report the temporary distance, which we will ignore
    3. if distance<bestDistance, then clear the matches list and add the string to it, updating bestDistance to distance
    4. if distance=bestDistance, then add the string to the list of matches

  When using the common Sift4 version, which doesn't compute transpositions, the list of matches is retrieved 40% faster on average than simply searching through the list of strings and updating the distance (about 15% faster with transpositions). Considering that Sift4 is already a lot faster than Levenshtein, this method allows searching through hundreds of thousands of strings really fast. The gained time can be used to further refine the matches list using a slower but more precise algorithm, like Levenshtein, on a much smaller set of possible matches.

  Here is a sample written in JavaScript, where we search a random string in the list of English words:

const search = getRandomString(); // this is the search string
let matches=[];             // the list of found matches
let bestDistance=1000000;   // the smallest distance to our search found so far
const maxOffset=5;          // a common value for searching similar strings
const l = search.length;    // the length of the search string
for (let word of english) {
    const minDist=Math.abs(l-word.length); // minimum possible distance
    if (minDist>bestDistance) continue;    // if too large, skip this word
    const dist=sift4(search,word,maxOffset,bestDistance);
    if (dist<bestDistance) {
        matches = [word];                  // new array with a single item
        bestDistance=dist;
        if (bestDistance==0) break;        // if an exact match, we can exit (optional)
    } else if (dist==bestDistance) {
        matches.push(word);                // add the match to the list
    }
}

  There are further optimizations that can be added, beyond the scope of this post:

  • words can be grouped by length and the minimum distance check can be done on entire buckets of strings of the same lengths
  • words can be sorted, and when a string is rejected as a match, reject all strings with the same prefix
    • this requires an update of the Sift algorithm to return the offset at which it stopped (to which the maxOffset must be added)

  I am still thinking of performance improvements. The transposition table gives more control over the precision of the search, but it's rather inefficient and resource consuming, not to mention adding code complexity, making the algorithm harder to read. If I can't find a way to simplify and improve the speed of using transpositions I might give up entirely on the concept. Also, some sort of data structure could be created - regardless of how much time and space is required, assuming that the list of strings to search is large and constant and the number of searches will be very big.

  Let me know what you think in the comments!


  Tracing and logging always seem simple, an afterthought, something to do when you've finished your code. Only then do you realize that you would have wanted it while testing your code or when an unexpected issue occurs in production. And all you have to work with is an exception, something that tells you something went wrong, but without any context. Here is a post that attempts to create a simple method to enhance exceptions without actually needing to switch the logging level to Trace or anything like that and without great performance losses.

  Note that this is a proof of concept, not production ready code.

  First of all, here is an example of usage:

public string Execute4(DateTime now, string str, double dbl)
{
    using var _ = TraceContext.TraceMethod(new { now, str, dbl });
    throw new InvalidOperationException("Invalid operation");
}

  Obviously, the exception is something that would occur in a different way in real life. The magic, though, happens in the first line. I am using (heh!) the C# 8.0 using declaration syntax so that there is no extra indentation and, I might say, this is one of the few situations where I would want to use it. In fact, this post started from me thinking of a good place to use this syntax without confusing any reader of the code.

  Also, TraceContext is a static class. That might be OK, since it is a very special class and not part of the business logic. With the new Roslyn source generators, one could insert lines like this automatically, without having to write them by hand. That's another topic altogether, though.

  So, what is going on there? Since there is no metadata information about the arguments of the currently executing method (not without huge performance costs), I am creating an anonymous object that has properties with the same names and values as the arguments of the method. This is the only thing that might differ from one place to another. Then, in TraceMethod I return an IDisposable which will be disposed at the end of the method. Thus, I am generating a context for the entire method run which will be cleared automatically at the end.

  Now for the TraceContext class:

/// <summary>
/// Enhances exceptions with information about their calling context
/// </summary>
public static class TraceContext
{
    static ConcurrentStack<MetaData> _stack = new();

    /// <summary>
    /// Bind to FirstChanceException, which occurs when an exception is thrown in managed code,
    /// before the runtime searches the call stack for an exception handler in the application domain.
    /// </summary>
    static TraceContext()
    {
        AppDomain.CurrentDomain.FirstChanceException += EnhanceException;
    }

    /// <summary>
    /// Add to the exception dictionary information about caller, arguments, source file and line number raising the exception
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    private static void EnhanceException(object? sender, FirstChanceExceptionEventArgs e)
    {
        if (!_stack.TryPeek(out var metadata)) return;
        var dict = e.Exception.Data;
        if (dict.IsReadOnly) return;
        dict[nameof(metadata.Arguments)] = Serialize(metadata.Arguments);
        dict[nameof(metadata.MemberName)] = metadata.MemberName;
        dict[nameof(metadata.SourceFilePath)] = metadata.SourceFilePath;
        dict[nameof(metadata.SourceLineNumber)] = metadata.SourceLineNumber;
    }

    /// <summary>
    /// Serialize the name and value of arguments received.
    /// </summary>
    /// <param name="arguments">It is assumed this is an anonymous object</param>
    /// <returns></returns>
    private static string? Serialize(object arguments)
    {
        if (arguments == null) return null;
        var fields = arguments.GetType().GetProperties();
        var result = new Dictionary<string, object>();
        foreach (var field in fields)
        {
            var name = field.Name;
            var value = field.GetValue(arguments);
            result[name] = SafeSerialize(value);
        }
        return JsonSerializer.Serialize(result);
    }

    /// <summary>
    /// This would require most effort, as one would like to serialize different types differently and skip some.
    /// </summary>
    /// <param name="value"></param>
    /// <returns></returns>
    private static string SafeSerialize(object? value)
    {
        // naive implementation
        try
        {
            return JsonSerializer.Serialize(value).Trim('\"');
        }
        catch (Exception ex1)
        {
            try
            {
                return value?.ToString() ?? "";
            }
            catch (Exception ex2)
            {
                return "Serialization error: " + ex1.Message + "/" + ex2.Message;
            }
        }
    }

    /// <summary>
    /// Prepare to enhance any thrown exception with the calling context information
    /// </summary>
    /// <param name="args"></param>
    /// <param name="memberName"></param>
    /// <param name="sourceFilePath"></param>
    /// <param name="sourceLineNumber"></param>
    /// <returns></returns>
    public static IDisposable TraceMethod(object args,
                                            [CallerMemberName] string memberName = "",
                                            [CallerFilePath] string sourceFilePath = "",
                                            [CallerLineNumber] int sourceLineNumber = 0)
    {
        _stack.Push(new MetaData(args, memberName, sourceFilePath, sourceLineNumber));
        return new DisposableWrapper(() =>
        {
            _stack.TryPop(out var _);
        });
    }

    /// <summary>
    /// Just a wrapper over a method which will be called on Dispose
    /// </summary>
    public class DisposableWrapper : IDisposable
    {
        private readonly Action _action;

        public DisposableWrapper(Action action)
        {
            _action = action;
        }

        public void Dispose()
        {
            _action();
        }
    }

    /// <summary>
    /// Holds information about the calling context
    /// </summary>
    public class MetaData
    {
        public object Arguments { get; }
        public string MemberName { get; }
        public string SourceFilePath { get; }
        public int SourceLineNumber { get; }

        public MetaData(object args, string memberName, string sourceFilePath, int sourceLineNumber)
        {
            Arguments = args;
            MemberName = memberName;
            SourceFilePath = sourceFilePath;
            SourceLineNumber = sourceLineNumber;
        }
    }
}

Every call to TraceMethod adds a new MetaData object to a stack and every time the method ends, the stack will pop an item. The static constructor of TraceContext will have subscribed to the FirstChanceException event of the current application domain and, whenever an exception is thrown (caught or otherwise), its Data dictionary is getting enhanced with:

  • name of the method called
  • source file name
  • source file line number where the exception was thrown.
  • serialized arguments (remember Exceptions need to be serializable, including whatever you put in the Data dictionary, so that is why we serialize it all)

(I have written another post about how .NET uses code attributes to get the first three items of information during build time) 

This way, you get information which would normally be "traced" (detailed logging which is usually detrimental to performance) in any thrown exception, but without filling some trace log or having to change production configuration and reproduce the problem again. Assuming your application does not throw exceptions all over the place, this adds very little complexity to the executed code.

Moreover, this will enhance exception with the source code file name and line number even in Release mode!

I am sure there are issues with code that fails without being caught in a try/catch, and of course the serialization code is where people should put the most effort, since different types should be serialized for inspection differently (think async methods and the like). More methods should also be added so that people can trace whatever they like in thrown exceptions. Yet, as I said, this is a POC, so I hope it gets you inspired.


I got this exception at my work today, a System.ArgumentException with the message "Argument passed in is not serializable.", that I could not quite understand. Where does it come from, since the .NET source repository does not contain the string? How can I fix it?

The stack trace ended up at System.Collections.ListDictionaryInternal.set_Item(Object key, Object value) in a method where, indeed, I was setting a value in a dictionary. But this is not how dictionaries behave! The dictionary in question was the Exception.Data property. It makes sense, because Exception objects are supposed to be serializable, and I was adding a value of type HttpMethod which, even if it's extremely simple and almost always used like an enum, is actually a class of its own and not serializable!

So, there you have it, always make sure you add serializable objects in an exception's Data dictionary.
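
A quick sketch of the trap and the fix (HttpMethod here is System.Net.Http.HttpMethod, which on .NET Framework is not marked [Serializable]):

var ex = new InvalidOperationException("request failed");

// throws System.ArgumentException: "Argument passed in is not serializable."
ex.Data["method"] = HttpMethod.Get;

// safe: store a serializable representation instead
ex.Data["method"] = HttpMethod.Get.ToString(); // "GET"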

But why is this happening? The implementation of the Data property looks like this:

public virtual IDictionary Data { 
  [System.Security.SecuritySafeCritical]
  get {
    if (_data == null)
      if (IsImmutableAgileException(this))
        _data = new EmptyReadOnlyDictionaryInternal();
      else
        _data = new ListDictionaryInternal();
    return _data;
  }
}

Now, EmptyReadOnlyDictionaryInternal is just a dictionary you can't add to. The interesting class is ListDictionaryInternal. Besides being an actual linked list implementation (who does that in anything but C++ classrooms?) it contains this code:

#if FEATURE_SERIALIZATION
  if (!key.GetType().IsSerializable)                 
    throw new ArgumentException(Environment.GetResourceString("Argument_NotSerializable"), "key");                    
  if( (value != null) && (!value.GetType().IsSerializable ) )
    throw new ArgumentException(Environment.GetResourceString("Argument_NotSerializable"), "value");                    
#endif

So both key and value of the Data dictionary property in an Exception instance need to be serializable.

But why didn't I find the string in the source reference? While the Microsoft reference website doesn't seem to support simple string search, it seems Google does NOT index GitHub code pages either. You have to:

  • manually go to GitHub and search
  • get no results
  • notice that the "Code" section of the results has a question mark instead of a number next to it
  • click on it
  • then it asks you to log in
  • and only then you get results!

So bonus thing: if you are searching for some string in the .NET source code, first of all use the GitHub repo, then make sure you log in when you search.


The point of regular expression character classes is to simplify your expressions, but they can introduce subtle bugs or efficiency issues.

Let's check out this StackOverflow answer to the question "\d less efficient than [0-9]":

\d checks all Unicode digits, while [0-9] is limited to these 10 characters. For example, Persian digits, ۱۲۳۴۵۶۷۸۹, are an example of Unicode digits which are matched with \d, but not [0-9].

This makes sense, only it has never occurred to me until this very moment. I would never use a [0-9] notation and I would replace it with a \d if found in code.

What does that mean?

One simple consequence of such a class is performance: matching against a large set of characters is less efficient. Another is introducing the possibility of bugs or even malicious attacks. Let's see the code for a calculator that adds two numbers. It's a silly piece of code, but imagine that a more complex one would take the user content and save it into a database, try to process it or display it.

static void Main(string[] args)
{
    Console.InputEncoding = Encoding.Unicode;
    var firstNumber = GetNumberString();
    var secondNumber = GetNumberString();
    Console.WriteLine("Sum = "+(int.Parse(firstNumber) + int.Parse(secondNumber)));
}

private static string GetNumberString()
{
    string result=null;
    var isNumber = false;
    while (!isNumber)
    {
        Console.Write("Enter a number: ");
        result = Console.ReadLine();
        isNumber = Regex.IsMatch(result, @"^\d+$");
        if (!isNumber)
        {
            Console.WriteLine($"{result} is not a number! Try again.");
        }
    }
    return result;
}

This will try to get numbers as a string and test it using the regular expression ^\d+$, which means the string has to consist of one or more digits. Note that I had to set the console input encoding to Unicode in order to be able to paste Persian numbers. This code works fine until I use Arabic or Persian digits, where it breaks in the int.Parse method. Using ^[0-9]+$ as the regular expression pattern would solve this issue.
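
For clarity, the fix in GetNumberString is just that one pattern change:

// restrict the match to the ten ASCII digits instead of all Unicode digits
isNumber = Regex.IsMatch(result, @"^[0-9]+$");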

The same issue will occur with \w (warning: \w matches letters, digits AND the underscore) and with [a-zA-Z] (or just [a-z] plus RegexOptions.IgnoreCase).

If one uses code to determine the number of matches for each regular expression pattern

var regexPattern = @"\d";
var nr = 0;
for (int i = 0; i < ushort.MaxValue; i++)
{
    string str = Convert.ToChar(i).ToString();
    if (Regex.IsMatch(str, regexPattern))
        nr++;
}
Console.WriteLine(nr);

we get this:

  • for \d : 370
    • ALL digits
  • for \w : 50320
    • ALL word characters (including digits) 
  • for [^\W\d] : 49950
    • ALL word characters, but not the digits 
  • for \p{L} : 48909
    • ALL letters
  • for [A-Za-z] : 52
    • letters from a to z
  • for [0-9] : 10
    • digits from 0 to 9

I hope this helps.


Let's start with a simple requirement and the obvious first draft of the code. The requirement is "build a method that receives a string and writes to the console a URL using that string as a parameter".

The obvious first attempt and working in every version of C# would be:

private static void ShowUrl(string something)
{
    var url = "https://siderite.dev/blog/search/formattablestring?param1="+something;
    Console.WriteLine(url);
}

But then you get some problems. You want to use a parameter that has strange characters in it, like an email, another URL, arithmetic symbols, etc. So you realize that you need to URL encode it. The next version is:

private static void ShowUrl(string something)
{
    var url = "https://siderite.dev/blog/search/formattablestring?param1="+Uri.EscapeDataString(something);
    Console.WriteLine(url);
}

Now that works for a second. One might argue about the difference between System.Web.HttpUtility.UrlEncode, System.Net.WebUtility.UrlEncode and System.Net.Uri.EscapeDataString, yet this is not the post to do that. Follow the above links for more information.

Then someone decides to use a null for the parameter, you get an ArgumentNullException and you rage in frustration: "This is complicated and ugly! The first version looked much better and handled nulls, too. And now I have to add this long Uri.EscapeDataString to every URL parameter in every URL building code I write and also check for null!"

But... what if you could write the method just like the first version and get rid of every problem?

First of all, let's get rid of that plus sign and use interpolated strings. They've been around for a while:

private static void ShowUrl(string something)
{
    var url = $"https://siderite.dev/blog/search/formattablestring?param1={something}";
    Console.WriteLine(url);
}

Now, this has the same problem as the first version: no URL encoding. And you don't want to add an escape function to every parameter (imagine there were 10 parameters in this URL). How about we use the fact that the URL is an interpolated string?

Check out this code:

private static void ShowUrl(string something)
{
    var url = UrlHelper.Url($"https://siderite.dev/blog/search/formattablestring?param1={something}");
    Console.WriteLine(url);
}

It uses an UrlHelper class to fix the problems with the URL in the string we generate. Or is it a string? A long time ago I wrote a post about FormattableString and how I didn't see a real use scenario for it. Well, this is it! Let's see what UrlHelper looks like:

public static class UrlHelper
{
    public static string Url(FormattableString url)
    {
        return string.Format(
            url.Format, 
            url.GetArguments()
                .Select(a => Uri.EscapeDataString(a?.ToString()??""))
                .ToArray());
    }
}

The Url method receives not a string, but a FormattableString. When an interpolated string is used as a FormattableString, it gives the developer access to a Format string and a list of arguments via GetArguments. In order to convert it to a string, one must only do String.Format(url.Format, url.GetArguments()). In our case, the Format property will have the value "https://siderite.dev/blog/search/formattablestring?param1={0}" and the arguments list will contain one value: the value of the something parameter.

Therefore, the Url method will be able to format the string, but first URL encode every single parameter in it.

I admit, this might not be the most readable code in the world, which is my pet peeve with FormattableString. Reading the ShowUrl method you have no clue that UrlHelper.Url receives a FormattableString as parameter. One might even look at that string and say "Hey, wait a minute! That's a bug waiting to happen!" and then proceed to fix it. Perhaps the name of the method could be made more intuitive like EncodeUrlParameters.

But wait! We can do better. I don't know how it might help in the future, but perhaps someone might want to process the parameters of the URL further. By returning a string we eliminate some of the information of the incoming parameter and we don't want to do that.

Also, careful people will notice that using String.Format introduces a subtle bug (or feature): we format the string without specifying a culture. This might be fine, using the current one by default, but it might also cause some issues. The FormattableString class does provide the static CurrentCulture and Invariant methods and, of course, the ToString method would work just fine as well, but then we lose the ability to process the parameters.

So here is the final version of the UrlHelper class:

public static class UrlHelper
{
    public static FormattableString EncodeUrlParameters(FormattableString url)
    {
        return FormattableStringFactory.Create(
            url.Format,
            url.GetArguments()
                .Select(a => Uri.EscapeDataString(a?.ToString() ?? ""))
                .ToArray());
    }
}

By using FormattableStringFactory from System.Runtime.CompilerServices we get a FormattableString as an argument and we return one, with parameters URL encoded. Now, if the result of the method is used as a string, it will be automatically handled by ToString and if it will be used as a FormattableString it will contain all the information provided by the original parameters.

Hope that helps!

Bonus time! There's more!

What if you could make this behavior built in by introducing a new type? Methods from, let's say, a REST client class could use FormattableStrings as URL parameters. But what if we could specify the type of the parameters and get the implicit (hint, hint!) result already URL encoded?

public class UrlString
{
    private FormattableString _formattableString;

    public UrlString(FormattableString formattableString)
    {
        _formattableString = formattableString;
    }

    public static implicit operator FormattableString(UrlString urlString)
    {
        if (urlString == null) return null;
        return UrlHelper.EncodeUrlParameters(urlString.ToFormattableString());
    }

    public static implicit operator string(UrlString urlString)
    {
        if (urlString == null) return null;
        return ((FormattableString)urlString).ToString();
    }

    public static implicit operator UrlString(FormattableString formattableString)
    {
        return new UrlString(formattableString);
    }

    private FormattableString ToFormattableString()
    {
        return _formattableString;
    }
}

This UrlString class will implicitly be converted into a FormattableString or a string and a FormattableString will be converted into a UrlString. So now your code might look like this:

// used like this
RestClient.Get($"https://test.com?x={something}");

// this method does not URL encode anything
public RestResponse Get(string url) {
  ...
}

// this method does URL encode everything, just by changing the type
public RestResponse Get(UrlString url) {
  ...
}

Intro

Dependency injection is baked into ASP.Net Core projects (yes, I still call it Core), but it's missing from console app templates. And while it is easy to add, it's not that clear cut how to do it. I present here three ways to do it:

  1. The fast one: use the Worker Service template and tweak it to act like a console application
  2. The simple one: use the Console Application template and add dependency injection to it
  3. The hybrid: use the Console Application template and use the same system as in the Worker Service template

Tweak the Worker Service template

It makes sense that if you want a console application you would select the Console Application template when creating a new project, but as mentioned above, it's just the default template, as old as console apps are. Yet there is another template, called Worker Service, which does almost the same thing, only it has all the dependency injection goodness baked in, just like an ASP.Net Core Web App template.

So start Visual Studio, add a new project and choose the Worker Service template.

It will create a project containing a Program.cs, a Worker.cs and an appsettings.json file. Program.cs holds the setup and Worker.cs holds the code to be executed.

Worker.cs has an ExecuteAsync method that logs some stuff every second, but even if we remove the while loop and add our own code, the application doesn't stop. This might be a good thing, as sometimes we just want stuff to work until we press Ctrl-C, but it's not a console app per se.

In order to transform it into something that works just like a console application you need to follow these steps:

  1. inject an IHost instance into your worker
  2. specifically instruct the host to stop whenever your code has finished

So, you go from:

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

to:

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;
    private readonly IHost _host;

    public Worker(ILogger<Worker> logger, IHost host)
    {
        _logger = logger;
        _host = host;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Console.WriteLine("Hello world!");
        _host.StopAsync();
    }
}

Note that I did not "await" the StopAsync method because I don't actually need to. You are telling the host to stop and it will do so whenever it sees fit.
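
If the compiler warns that the call is not awaited (CS4014), a discard makes the fire-and-forget intent explicit:

// explicitly discard the returned task to document the fire-and-forget intent
_ = _host.StopAsync();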

If we look into the Program.cs code we will see this:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                services.AddHostedService<Worker>();
            });
}

I don't know why they bothered creating a separate method and then writing it as an expression body, but that's the template. You see that there is a lambda adding dependencies (by default just the Worker class), but everything starts with Host.CreateDefaultBuilder. In the .NET source code, this leads to HostingHostBuilderExtensions.ConfigureDefaults, which adds a lot of stuff:

  • environment variables to config
  • command line arguments to config (via CommandLineConfigurationSource)
  • support for appsettings.json configuration
  • logging based on operating system

That is why, if you want these things by default, your best bet is to tweak the Worker Service template.
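
As a quick illustration of those defaults, the worker can simply ask for an IConfiguration and read values from appsettings.json; the Greeting key below is hypothetical:

public class Worker : BackgroundService
{
    private readonly IConfiguration _config;

    public Worker(IConfiguration config)
    {
        _config = config;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // "Greeting" is a hypothetical key defined in appsettings.json
        Console.WriteLine(_config["Greeting"]);
        return Task.CompletedTask;
    }
}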

Add Dependency Injection to an existing console application

Some people want to have total control over what their code is doing. Or maybe you already have a console application doing stuff and you just want to add Dependency Injection. In that case, these are the steps you want to follow:

  1. create a ServiceCollection instance (needs a dependency on Microsoft.Extensions.DependencyInjection)
  2. register in it all the dependencies you want to use
  3. create a starting class that will execute stuff (just like Worker above)
  4. register the starting class in the service collection
  5. build a service provider from the collection
  6. get an instance of the starting class and execute its one method

Here is an example:

class Program
{
    static void Main(string[] args)
    {
        var services = new ServiceCollection();
        ConfigureServices(services);
        services
            .AddSingleton<Executor>()
            .BuildServiceProvider()
            .GetService<Executor>()
            .Execute();
    }

    private static void ConfigureServices(IServiceCollection services)
    {
        services
            .AddSingleton<ITest, Test>();
    }
}

public class Executor
{
    private readonly ITest _test;

    public Executor(ITest test)
    {
        _test = test;
    }

    public void Execute()
    {
        _test.DoSomething();
    }
}

The only reason we register the Executor class is to get an instance of it later, complete with constructor injection support. You can even make Execute an async method to get full async/await support, as sketched below. Of course, for this example, appsettings.json configuration and logging won't work until you add them.
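
Here is what the async variant might look like; DoSomethingAsync is an assumed asynchronous counterpart of DoSomething:

class Program
{
    static async Task Main(string[] args)
    {
        var services = new ServiceCollection();
        services.AddSingleton<ITest, Test>();
        services.AddSingleton<Executor>();
        await services
            .BuildServiceProvider()
            .GetRequiredService<Executor>()
            .ExecuteAsync();
    }
}

public class Executor
{
    private readonly ITest _test;

    public Executor(ITest test)
    {
        _test = test;
    }

    public async Task ExecuteAsync()
    {
        // assumes ITest exposes an async method
        await _test.DoSomethingAsync();
    }
}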

Mix them up

Of course, one could try to get the best of both worlds. This would work kind of like this:

  1. use Host.CreateDefaultBuilder() anyway (needs a dependency on Microsoft.Extensions.Hosting), but in a normal console app
  2. use the resulting service provider to instantiate a starting class
  3. start it

Here is an example:

class Program
{
    static void Main(string[] args)
    {
        Host.CreateDefaultBuilder()
            .ConfigureServices(ConfigureServices)
            .ConfigureServices(services => services.AddSingleton<Executor>())
            .Build()
            .Services
            .GetService<Executor>()
            .Execute();
    }

    private static void ConfigureServices(HostBuilderContext hostContext, IServiceCollection services)
    {
        services.AddSingleton<ITest,Test>();
    }
}

The Executor class would be just like in the section above, but now you get all the default logging and configuration options from the Worker Service section.
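
For example, Executor can now simply ask for a logger, with no extra registration (a sketch):

public class Executor
{
    private readonly ITest _test;
    private readonly ILogger<Executor> _logger;

    public Executor(ITest test, ILogger<Executor> logger)
    {
        _test = test;
        _logger = logger;
    }

    public void Execute()
    {
        _logger.LogInformation("Executing...");
        _test.DoSomething();
    }
}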

Conclusion

Which solution is quickest and best depends on your situation. One could just as well start with a Worker Service template, then tweak it to never Run the builder and instead configure it as above. One could create a Startup class, complete with Configure and ConfigureServices as in an ASP.Net Core Web App template, or even start with such a template, then tweak it to work as a console app/web hybrid. In .NET Core everything is a console app anyway; it just depends on which packages you load and how you configure them.


  We have been told again and again that our code should be split into smaller code blocks that are easy to understand and read. The usual example is a linear large method that can easily be split into smaller methods that just execute one after the other. But in real life just selecting a logical bit of our method and applying an Extract Method refactoring doesn't often work because code rarely has a linear flow.

  Let's take a classic example of "exiting early", another software pattern that I personally enjoy. It says that you extract the functionality of retrieving some data into a method and, inside it, return from the method as soon as you fail a necessary condition for extracting that data. Something like this:

public Order OrderCakeForClient(int clientId) {
  var client = _clientRepo.GetClient(clientId);
  if (client == null) {
    _logger.Log("Client doesn't exist");
    return null;
  }
  var preferences = _clientRepo.GetPreferences(client);
  if (preferences == null) {
    _logger.Log("Client has not submitted preferences");
    return null;
  }
  var existingOrder = _orderRepo.GetOrder(client, preferences);
  if (existingOrder != null) {
    _logger.Log("Client has already ordered their cake");
    return existingOrder;
  }
  var cake = _inventoryRepo.GetCakeByPreferences(preferences);
  if (cake == null) {
    cake = MakeNewCake(preferences);
  }
  if (cake == null) {
    _logger.Log("Cannot make cake as requested by client");
    return null;
  }
  var order = _orderRepo.GenerateOrder(client, cake);
  if (order == null) {
    _logger.Log("Generating order failed");
    throw new OrderFailedException();
  }
  return order;
}

This is a long method that appears to be linear, but in fact it is not. At every step there is an opportunity to exit the method, so this is like a very thin tree, not a line. And it is a contrived example. A real life flow would have multiple decision points, branching and then merging back, with asynchronous logic and so on and so on.

If you want to split it into smaller methods you will have some issues:

  • most of the code is not actual logic, but logging code - one could make a method called getPreferences(client), but it would just proxy the _clientRepo method and gain nothing
  • one could try to extract a method that gets the preferences, logs and decides to exit - but that doesn't work directly, because you cannot make the exit decision from inside a child method

Here is a way that this could work. I personally like it, but as you will see later on, it's not sufficient. Let's encode the exit decision as a boolean and get the information we want as out parameters. Then the code would look like this:

public Order OrderCakeForClient(int clientId) {
  Order existingOrder = null;
  if (clientExists(clientId, out Client client)
      && clientSubmittedPreferences(client, out Preferences preferences)
      && orderDoesntExist(client, preferences, out existingOrder)
      && getACake(preferences, out Cake cake)) {
    return generateOrder(client, cake);
  }
  return existingOrder;
}

private bool clientExists(int clientId, out Client client) {
  client = _clientRepo.GetClient(clientId);
  if (client == null) {
    _logger.Log("Client doesn't exist");
    return false;
  }
  return true; 
}

private bool clientSubmittedPreferences(Client client, out Preferences preferences) {
  preferences = _clientRepo.GetPreferences(client);
  if (preferences == null) {
    _logger.Log("Client has not submitted preferences");
    return false;
  }
  return true;
}

private bool orderDoesntExist(Client client, Preferences preferences,
                               out Order existingOrder) {
  existingOrder = _orderRepo.GetOrder(client, preferences);
  if (existingOrder != null) {
    _logger.Log("Client has already ordered their cake");
    return false;
  }
  return true;
}

private bool getACake(Preferences preferences, out Cake cake) {
  cake = _inventoryRepo.GetCakeByPreferences(preferences);
  if (cake == null) {
    cake = MakeNewCake(preferences);
  }
  if (cake == null) {
    _logger.Log("Cannot make cake as requested by client");
    return false;
  }
  return true;
}

private Order generateOrder(Client client, Cake cake) {
  var order = _orderRepo.GenerateOrder(client, cake);
  if (order == null) {
    _logger.Log("Generating order failed");
    throw new OrderFailedException();
  }
  return order;
}

Now reading OrderCakeForClient is a joy! Other than the somewhat weird assignment of existingOrder through an out parameter inside the condition (out variables are a C# 7 feature and not that easy to read), one can grasp the logic of the method at a single glance: a process that only runs if four conditions are met.

However, there are some problems here. One of them is that the out syntax does not work for async methods and is generally frowned upon these days. Another is that the method assumes a cake can just be made instantly, when we know it takes time, care, effort and kitchen cleaning. Also, modern code rarely features synchronous database access.

So how do we fix it? The classic solution for getting rid of the out syntax is to create custom return types that contain everything the method returns. Let's try to rewrite the method with async in mind:

public async Task<Order> OrderCakeForClient(int clientId) {
  var clientExistsResult = await clientExists(clientId);
  if (!clientExistsResult.Success) return null;
  var client = clientExistsResult.Client;
  var preferencesResult = await clientSubmittedPreferences(client);
  if (!preferencesResult.Success) return null;
  var preferences = preferencesResult.Preferences;
  var orderDoesntExistResult = await orderDoesntExist(client, preferences);
  if (!orderDoesntExistResult.Success) return orderDoesntExistResult.Order;
  var getACakeResult = await getACake(preferences);
  if (!getACakeResult.Success) return null;
  return await generateOrder(client, getACakeResult.Cake);
}

private async Task<ClientExistsResult> clientExists(int clientId) {
  var client = await _clientRepo.GetClient(clientId);
  if (client == null) {
    _logger.Log("Client doesn't exist");
    return new ClientExistsResult { Success = false };
  }
  return new ClientExistsResult { Client = client, Success = true };
}

// the rest omitted for brevity

Wait... this doesn't look good at all! We wrote a ton of extra code and we got... something that is closer to the original code than to the easy to read version. What happened? Several things:

  • because we couldn't return a bool, we had to revert to exiting early - not bad in itself, but it adds code that feels like it duplicates the logic in the condition methods
  • we added a return type for each condition method - extra code that provides no extra functionality, not to mention it's ugly
  • returning bool values was already awkward; returning these result objects is a lot more so
  • the only reason we needed to return a result value (bool or something more complex) at all was to move the logging behavior into the smaller methods

"Step aside!" the software architect will say and quickly draw on the whiteboard a workflow framework that will allow us to rewrite any flow as a combination of classes that can be configured in XML. He is dragged away from the room, spitting and shouting.

The problem in the code is that we want to do three things at the same time:

  1. write a simple, easy to read, flow-like code
  2. make decisions based on partial results of that code
  3. store the partial results of that code

There is a common software pattern normally used for flow-like code: the builder pattern. However, it feels unintuitive to use it here. First of all, the builder pattern needs some masterful code writing to make it work with async/await. Plus, most people, when thinking about a builder, think of a separate reusable class that handles logic independent from the place where it is used. But it doesn't need to be so. Let's rewrite this code using builder-like logic:

public async Task<Order> OrderCakeForClient(int clientId)
{
    SetClientId(clientId);
    await IfClientExists();
    await IfClientSubmittedPreferences();
    await IfOrderDoesntExist();
    await IfGotACake();
    return await GenerateOrder();
}

private bool _continueFlow = true;
private int _clientId;
private Client _client;
private Preferences _preferences;
private Order _existingOrder;
private Cake _cake;

public void SetClientId(int clientId)
{
    _clientId = clientId;
    _existingOrder = null;
    _continueFlow = true;
}

public async Task IfClientExists()
{
    if (!_continueFlow) return;
    _client = await _clientRepo.GetClient(_clientId);
    test(_client != null, "Client doesn't exist");
}

public async Task IfClientSubmittedPreferences()
{
    if (!_continueFlow) return;
    _preferences = await _clientRepo.GetPreferences(_client);
    test(_preferences != null, "Client has not submitted preferences");
}

public async Task IfOrderDoesntExist()
{
    if (!_continueFlow) return;
    _existingOrder = await _orderRepo.GetOrder(_client, _preferences);
    test(_existingOrder == null, "Client has already ordered their cake");
}

public async Task IfGotACake()
{
    if (!_continueFlow) return;
    _cake = await _inventoryRepo.GetCakeByPreferences(_preferences)
        ?? await MakeNewCake(_preferences);
    test(_cake != null, "Cannot make cake as requested by client");
}

public async Task<Order> GenerateOrder()
{
    if (!_continueFlow) return _existingOrder;
    var order = await _orderRepo.GenerateOrder(_client, _cake);
    if (order == null)
    {
        _logger.Log("Generating order failed");
        throw new OrderFailedException();
    }
    return order;
}

private void test(bool condition, string logText)
{
    if (!condition)
    {
        _continueFlow = false;
        _logger.Log(logText);
    }
}

No need for a new builder class, just add builder like methods to the existing class. No need to return the instance of the class, since it's only used inside it. No need for chaining because we don't return anything and chaining with async is a nightmare.

The ugly secret is, of course, using fields of the class instead of local variables. That may add other problems, like multithread safety issues, but those are not in the scope of this post.

I've shown a pretty stupid piece of code that was nevertheless difficult to split properly into smaller methods. I've demonstrated several ways to do that; the choice between them depends on the actual requirements of your code. Hope that helps!

  Every month or so I see another article posted by some dev, usually with a catchy title using words like "demystifying" or "understanding" or "N array methods you should be using" or "simplify your Javascript" or something similar. It has become so mundane and boring that it makes me mad someone is still trying to cash in on these tired ideas to appear smart. So stop doing it! There is no need to explain methods that were introduced in 2009!

  But it gets worse. These articles are partially misleading because Javascript has evolved past the need to receive or return data as arrays. Let me demystify the hell out of you.

  First of all, the methods we are discussing here are .filter and .map. There is of course .reduce, but that one doesn't necessarily return an array. Ironically, one can write both .filter and .map as a reduce function, so master that one and you can get far. There is also .sort, which for performance reasons works a bit differently: it sorts the array in place and returns that same mutated array rather than a new one, so chaining it has side effects the other methods don't have. All of these methods of the Array object have something in common: they receive functions as parameters that are then applied to all of the items in the array. Read that again: all of the items.

  Having functions as first class citizens of the language has always been the case for Javascript, so that's not a great new thing to teach developers. And now, with arrow functions, these methods are even easier to use because there are no scope issues that caused so many hidden errors in the past.

  Let's take a common use example for these methods for data display. You have many data records that need to be displayed. You have to first filter them using some search parameters, then you have to order them so you can take just a maximum of n records to display on a page. Because what you display is not necessarily what you have as a data source, you also apply a transformation function before returning something. The code would look like this:

var colors = [
  { name: 'red',   R: 255, G: 0,   B: 0 },
  { name: 'blue',  R: 0,   G: 0,   B: 255 },
  { name: 'green', R: 0,   G: 255, B: 0 },
  { name: 'pink',  R: 255, G: 128, B: 128 }
];

// it would be more efficient to get the reddish colors in an array
// and sort only those, but we want to discuss chaining array methods
colors.sort((c1, c2) => c1.name > c2.name ? 1 : (c1.name < c2.name ? -1 : 0));

const result = colors
  .filter(c => c.R > c.G && c.R > c.B)
  .slice(page * pageSize, (page + 1) * pageSize)
  .map(c => ({
      name: c.name,
      color: `#${hex(c.R)}${hex(c.G)}${hex(c.B)}`
  }));

This code takes a bunch of colors that have RGB values and a name and returns a page (defined by page and pageSize) of the colors that are "reddish" (more red than blue and green), ordered by name. The resulting objects have a name and an HTML color string.

This works for an array of four elements, and it works fine for arrays of thousands of elements too, but let's look at what it is doing:

  • we pushed the sort up, thus sorting all colors in order to get the nice syntax at the end, rather than sorting just the reddish colors
  • we filtered all colors, even if we needed just pageSize elements
  • we created an array at every step (three times), even if we only needed one with a max size of pageSize

Let's write this in a classical way, with loops, to see how it works:

const result = [];
let i = 0;
for (const c of colors) {
  if (c.R < c.G || c.R < c.B) continue;
  i++;
  if (i <= page * pageSize) continue;
  result.push({
    name: c.name,
    color: `#${hex(c.R)}${hex(c.G)}${hex(c.B)}`
  });
  if (result.length >= pageSize) break;
}

And it does this:

  • it iterates through the colors array, but it has an exit condition
  • it ignores non-reddish colors
  • it ignores the colors from previous pages, without storing them anywhere
  • it stores the reddish colors directly in the result, in their transformed form
  • it exits the loop once the result reaches the size of a page, thus going through at most (page+1)*pageSize reddish colors

No extra arrays, no extra iterations, only some ugly ass code. But what if we could write this as nicely as the first example and make it work as efficiently as the second? Thanks to ECMAScript 6, we actually can!

Take a look at this:

const result = Enumerable.from(colors)
  .where(c => c.R > c.G && c.R > c.B)
  //.orderBy(c => c.name)
  .skip(page * pageSize)
  .take(pageSize)
  .select(c => ({
      name: c.name,
      color: `#${hex(c.R)}${hex(c.G)}${hex(c.B)}`
  }))
  .toArray();

What is this Enumerable thing? It's a class I made to encapsulate the methods .where, .skip, .take and .select; we will examine it later. Why these names? Because they mirror similar method names in LINQ (Language Integrated Query in .NET) and because I wanted to clearly separate them from the array methods.

How does it all work? If you look at the "classical" version of the code you see the new for..of loop introduced in ES6. It uses the concept of an "iterable" to go through all of the elements it contains. An array is an iterable, but so is a generator function, also an ES6 construct. A generator function is a function that generates values as it is iterated, the advantage being that it doesn't need to hold all of the items in memory (like an array does) and any operation that needs doing on the values is performed only on the ones requested by code.

Here is what the code above does:

  • it creates an Enumerable wrapper over the array (performs no operation, just assignments)
  • it filters by defining a generator function that only returns reddish colors (but performs no operation) and returns an Enumerable wrapper over the function
  • it ignores the items from previous pages by defining a generator function that counts items and only returns items after the specified number (again, no operation) and returns an Enumerable wrapper over the function
  • it then takes a page full of items, stopping immediately after, by defining a generator function that does that (no operation) and returns an Enumerable wrapper over the function
  • it transforms the colors into output items by defining a generator function that iterates existing items and returns the transformed values (no operation) and returns an Enumerable wrapper over the function
  • it iterates the generator function in the current Enumerable and fills an array with the values (all the operations are performed here)

And here is the flow for each item:

  1. .toArray enumerates the generator function of .select
  2. .select enumerates the generator function of .take
  3. .take enumerates the generator function of .skip
  4. .skip enumerates the generator function of .where
  5. .where enumerates the generator function that iterates over the colors array
  6. the first color is red, which is reddish, so .where "yields" it and it passes as the next item in the iteration
  7. the page is 0, let's say, so .skip has nothing to skip, it yields the color
  8. .take still has pageSize items to take, let's assume 20, so it yields the color
  9. .select yields the color transformed for output
  10. .toArray pushes the color in the result
  11. go to 1.

If for some reason you only needed the first item, not the entire page (imagine using a .first method instead of .toArray), only the steps from 1 to 10 would be executed. No extra arrays, no extra filtering, mapping or assigning.

Am I trying too hard to seem smart? Well, imagine that there are three million colors, a third of them are reddish. The first code would create an array of a million items, by iterating and checking all three million colors, then take a page slice from that (another array, however small), then create another array of mapped objects. This code? It is the equivalent of the classical one, but with extreme readability and ease of use.

OK, what is that .orderBy thing that I commented out? It's a possible method that orders items as they come, at the moment of execution (so when .toArray is executed). It is too complex for this blog post, but there is a full implementation of Enumerable that I wrote containing everything you will ever need. There, .orderBy would only order the minimal number of items required to extract the page ((page+1) * pageSize). The implementation can use custom sorting algorithms that take into account the .take and .skip operators, just like in LiNQer.

The purpose of this post was to raise awareness on how Javascript evolved and on how we can write code that is both readable AND efficient.

One doesn't actually need an Enumerable wrapper; the methods can be added to the prototype of all generator functions as well (see LINQ-like functions in JavaScript with deferred execution). As you can see, that was written 5 years ago, and still people "teach" others that .filter and .map are the Javascript equivalents of .Where and .Select from .NET. NO, they are NOT!

The immense advantage for using a dedicated object is that you can store information for each operator and use it in other operators to optimize things even further (like for orderBy). All code is in one place, it can be unit tested and refined to perfection, while the code using it remains the same.

Here is the code for the simplified Enumerable object used for this post:

class Enumerable {
  constructor(generator) {
    this.generator = generator || function* () { };
  }

  static from(arr) {
    return new Enumerable(arr[Symbol.iterator].bind(arr));
  }

  where(condition) {
    const generator = this.generator();
    const gen = function* () {
      let index = 0;
      for (const item of generator) {
        if (condition(item, index)) {
          yield item;
        }
        index++;
      }
    };
    return new Enumerable(gen);
  }

  take(nr) {
    const generator = this.generator();
    const gen = function* () {
      let nrLeft = nr;
      for (const item of generator) {
        if (nrLeft > 0) {
          yield item;
          nrLeft--;
        }
        if (nrLeft <= 0) {
          break;
        }
      }
    };
    return new Enumerable(gen);
  }

  skip(nr) {
    const generator = this.generator();
    const gen = function* () {
      let nrLeft = nr;
      for (const item of generator) {
        if (nrLeft > 0) {
          nrLeft--;
        } else {
          yield item;
        }
      }
    };
    return new Enumerable(gen);
  }

  select(transform) {
    const generator = this.generator();
    const gen = function* () {
      for (const item of generator) {
        yield transform(item);
      }
    };
    return new Enumerable(gen);
  }

  toArray() {
    return Array.from(this.generator());
  }
}

The post is filled with links; for whatever you don't understand from it, I urge you to search and learn.


A few years ago I wrote an article about using RealProxy to intercept method and property calls in order to log them. It was only for .NET Framework and suggested you inherit all intercepted classes from MarshalByRefObject. This one is a companion piece that shows how interception can be done in a more general way and without the need for MarshalByRefObject.

To do that I am going to give you two versions of the same class, one for .NET Framework and one for .NET Core which can be used like this:

//Initial code:
IInterface obj = new Implementation();

//Interceptor added:
IInterface obj = new Implementation();
var interceptor = new MyInterceptor<IInterface>();
obj = interceptor.Decorate(obj);

//Interceptor class (every method there is optional):
public class MyInterceptor<T> : ClassInterceptor<T>
{
    protected override void OnInvoked(MethodInfo methodInfo, object[] args, object result)
    {
        // do something when the method or property call ended successfully
    }

    protected override void OnInvoking(MethodInfo methodInfo, object[] args)
    {
        // do something before the method or property call is invoked
    }

    protected override void OnException(MethodInfo methodInfo, object[] args, Exception exception)
    {
        // do something when a method or property call throws an exception
    }
}

This code would be the same for .NET Framework or Core. The difference is in the ClassInterceptor code and the only restriction is that your class has to implement an interface for the methods and properties intercepted.

Here is the .NET Framework code:

public abstract class ClassInterceptor<TInterface> : RealProxy
{
    private object _decorated;

    public ClassInterceptor()
        : base(typeof(TInterface))
    {
    }

    public TInterface Decorate<TImplementation>(TImplementation decorated)
        where TImplementation:TInterface
    {
        _decorated = decorated;
        return (TInterface)GetTransparentProxy();
    }

    public override IMessage Invoke(IMessage msg)
    {
        var methodCall = msg as IMethodCallMessage;
        var methodInfo = methodCall.MethodBase as MethodInfo;
        OnInvoking(methodInfo,methodCall.Args);
        object result;
        try
        {
            result = methodInfo.Invoke(_decorated, methodCall.InArgs);
        } catch(Exception ex)
        {
            OnException(methodInfo, methodCall.Args, ex);
            throw;
        }
        OnInvoked(methodInfo, methodCall.Args, result);
        return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall);
    }

    protected virtual void OnException(MethodInfo methodInfo, object[] args, Exception exception) { }
    protected virtual void OnInvoked(MethodInfo methodInfo, object[] args, object result) { }
    protected virtual void OnInvoking(MethodInfo methodInfo, object[] args) { }
}

In it, we use the power of RealProxy to create a transparent proxy. For .NET Core we use DispatchProxy, Microsoft's replacement for RealProxy. Here is the code:

public abstract class ClassInterceptor<TInterface> : DispatchProxy
{
    private object _decorated;

    public ClassInterceptor() : base()
    {
    }

    public TInterface Decorate<TImplementation>(TImplementation decorated)
        where TImplementation : TInterface
    {
        var proxy = typeof(DispatchProxy)
                    .GetMethod("Create")
                    .MakeGenericMethod(typeof(TInterface),GetType())
                    .Invoke(null,Array.Empty<object>())
            as ClassInterceptor<TInterface>;

        proxy._decorated = decorated;

        return (TInterface)(object)proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        OnInvoking(targetMethod,args);
        try
        {
            var result = targetMethod.Invoke(_decorated, args);
            OnInvoked(targetMethod, args,result);
            return result;
        }
        catch (TargetInvocationException exc)
        {
            OnException(targetMethod, args, exc);
            throw exc.InnerException;
        }
    }


    protected virtual void OnException(MethodInfo methodInfo, object[] args, Exception exception) { }
    protected virtual void OnInvoked(MethodInfo methodInfo, object[] args, object result) { }
    protected virtual void OnInvoking(MethodInfo methodInfo, object[] args) { }
}

DispatchProxy is a weird little class. Look how it generates an object which can be cast simultaneously to TInterface and to ClassInterceptor<TInterface>!

There are many other things one can do to improve this class:

  • the base class could distinguish between a method call and a property call. In the latter case the MethodInfo object will have IsSpecialName set to true and its name will start with set_ or get_ (see the sketch after this list)
  • for async/await scenarios (and not only), the result of a method will be a Task<T>; if you want to log the actual result you should check for that, await the task, get its result, then log it. This class could make that functionality available out of the box
  • support for Dependency Injection scenarios could also be added, as the perfect place to use interception is when you register an interface-implementation pair. An extension method like container.RegisterSingletonWithLogging could be used instead of container.RegisterSingleton, registering a factory which replaces the implementation with a logging proxy
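
For the first bullet, a minimal sketch of the detection might look like this:

private static bool IsPropertyAccessor(MethodInfo methodInfo)
{
    // property getters and setters are compiler generated methods
    // marked with IsSpecialName and prefixed with get_ or set_
    return methodInfo.IsSpecialName
        && (methodInfo.Name.StartsWith("get_") || methodInfo.Name.StartsWith("set_"));
}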

I hope this helps!

P.S. Here is an article helping to migrate from RealProxy to DispatchProxy: Migrating RealProxy Usage to DispatchProxy

Definition

So, the task at hand is the subject of a common interview question: implement an algorithm to get all valid (opened and closed) combinations of n pairs of parentheses. This means that for n=1 there is only one solution: "()"; "((" or "))" are not valid. For n=2 you will have "(())" and "()()", and so on. The question is trying to test how the interviewee handles recursion and what is commonly called backtracking. But as usual, there's more than one way to skin a cat, although for the life of me I can't see why you would want to do that.

The solutions here will be in C# and the expected result is an enumeration of strings containing open and closed parentheses. The code can be easily translated into other languages, including Javascript (ECMAScript 2015 introduced iterators and generator functions), but that's left to the reader. Let's begin.

Analysis

Before we solve any problem we need to analyse it and see what the constraints and expected results are. In this case there are several observations to be made:

  • the resulting strings will be of length n*2 (n pairs)
  • they will contain n '(' characters and n ')' characters
  • they cannot start with a ')' or end in a '('
  • in order to generate such a string, we can start with a smaller string to which we add '(' or ')'
  • we cannot add a ')' if there isn't at least one corresponding unclosed '(' 
  • if we add a '(' we need to have enough characters left to close the parenthesis, so the number of unclosed parentheses cannot exceed the characters left to fill
  • we could count both the open and the closed parentheses, but we only care about the number of unclosed ones, so instead of separate "closed" and "open" values we can use a single "open" count for the unclosed parentheses

Let's go for some variables and calculations:

  • n = number of pairs
  • open = number of unclosed parentheses in a string
  • open cannot be negative
  • one cannot add ')' if open = 0
  • one cannot add '(' if open >= n*2 - substring.length

Recursive solution

A simple implementation of these requirements can be done with recursion:

public IEnumerable<string> GenerateRecursive(int n, string root = "", int open = 0)
{
    // substring is long enough, return it and exit
    if (root.Length == n * 2)
    {
        yield return root;
        yield break;
    }
    // if we can add '(' to existing substring, continue the process with the result
    if (open < n * 2 - root.Length)
    {
        // if only C# could just 'yield IteratorFunction()' this would look sleeker
        foreach (var s in GenerateRecursive(n, root + "(", open + 1))
        {
            yield return s;
        }
    }
    // if we can add ')' to existing substring, continue the process with the result
    if (open > 0)
    {
        foreach (var s in GenerateRecursive(n, root + ")", open - 1))
        {
            yield return s;
        }
    }
}

However, every time you see recursion you have to ask yourself: could n be large enough to cause a stack overflow? For example, this fails for n=3000. The nice thing about this method, though, is that it can be limited to the number of items you want to see. For example, var firstTen = GenerateRecursive(1000).Take(10) is very fast, as the generation is depth first and only computes the first ten values before exiting.

So, can we replace the recursion with iteration?

Iterative solution

In order to do things iteratively, we need to store the results of the previous step and use them in the current step. This means breadth first generation, which has its own problems. Let's see some code:

public IEnumerable<string> GenerateIteration(int n)
{
    // using named tuples to store the unclosed parentheses count with the substring
    var results = new List<(string Value,int Open)>() { ("",0) };
    for (int i = 0; i < n*2; i++)
    {
        // each step we compute the list of new strings from the list in the previous step
        var newResults = new List<(string Value, int Open)>();
        foreach (var (Value, Open) in results)
        {
            if (Open < n * 2 - Value.Length)
            {
                newResults.Add((Value + "(", Open + 1));
            }
            if (Open > 0)
            {
                newResults.Add((Value + ")", Open - 1));
            }
        }
        results = newResults;
    }
    return results.Select(r=>r.Value);
}

It's pretty sleek, but if you try something like var firstTen = GenerateIteration(1000).Take(10) now, it will take forever, since all combinations of 1000 parenthesis pairs need to be computed and stored before taking the first 10! BTW, we can write this much more nicely with LINQ, but be careful about the gotcha in the comment:

public IEnumerable<string> GenerateLinq(int n)
{
    // looks much nicer with LINQ
    IEnumerable<(string Value, int Open)> results = new[] { ("", 0) };
    for (var i = 0; i < n * 2; i++)
    {
        results =
            results
                .Where(r => r.Open < n * 2 - r.Value.Length)
                .Select(r => (Value: r.Value + "(", Open: r.Open + 1))
            .Concat(results
                .Where(r => r.Open > 0)
                .Select(r => (Value: r.Value + ")", Open: r.Open - 1))
            );  // but if you do not end this with a .ToList(), results remains a deferred
                // query that references the previous step twice, building a huge expression
                // that is then (very painfully) evaluated at runtime! Oops!
    }
    return results.Select(r => r.Value);
}

But can't we do better? One solution is going to overflow the stack, the other is going to overflow the memory, and the last one kind of does both.

Incremental solution

They did say this requires an incremental solution, right? So why don't we take that literally? '(' and ')' are like 0 and 1, as ')' must always follow a '('. If you view a parentheses string as a binary number, then all possible combinations can be encoded as numbers. This means that we could conceivably write a very fast function that computes all possible combinations using bit operations, maybe even special processor instructions that count bits. However, since a string of n pairs needs 2n bits, this would only work for n<=16 or n<=32, depending on the processor architecture, and we don't want to get into that. But we can still use the concept!
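
Just to make the analogy concrete, here is what a validity check over such an encoding might look like (a sketch only, treating a 0 bit as '(' and a 1 bit as ')'):

private static bool IsValid(ulong bits, int n)
{
    var open = 0;
    // read the 2*n bits from the most significant position down
    for (var i = n * 2 - 1; i >= 0; i--)
    {
        // a 0 bit opens a parenthesis, a 1 bit closes one
        open += ((bits >> i) & 1UL) == 0 ? 1 : -1;
        // a closing parenthesis without a matching open one invalidates the string
        if (open < 0) return false;
    }
    // at the end all parentheses must be closed
    return open == 0;
}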

If a string represents a fictional number, then you can start with the smallest one, increment it and check for validity. If you combine the increment operation with the validity check, you don't need to go through all 2^(2n) candidates to get the result. It doesn't use any memory except the current string and it is depth first generation. The best of both worlds! Let's see some code:

public IEnumerable<string> GenerateIncrement(int n)
{
    // the starting point is n open parentheses and n closing ones
    // we use the same array of characters to generate the strings we display
    var arr = (new string('(', n) + new string(')', n)).ToCharArray();
    // iteration will stop when incrementation reaches the "maximum" valid combination
    var success = true;
    while (success)
    {
        yield return new string(arr);
        success = Increment(arr, n);
    }
}

private bool Increment(char[] arr, int n)
{
    // we begin with a valid string, which means there are no unclosed parentheses
    var open = 0;
    // we start from the end of the string
    for (var i = arr.Length - 1; i >= 0; i--)
    {
        // ')' is equivalent to a 1. To "increment" this string we need to go to the previous position
        // incrementing 01 in binary results in 10
        if (arr[i] == ')')
        {
            open++;
            continue;
        }

        // '(' is equivalent to a 0. We will turn it into a ')' to increment it,
        // but only if there are unclosed parentheses to close
        open--;
        if (open == 0) continue;

        // we 'increment' the value
        arr[i] = ')';
        // now we need to reset the rest of the array
        var k = n - (open + i) / 2;
        // as many opening parenthesis as possible
        for (var j = i + 1; j < i + 1 + k; j++)
        {
            arr[j] = '(';
        }
        // the rest are closing parentheses
        for (var j = i + 1 + k; j < n * 2; j++)
        {
            arr[j] = ')';
        }
        return true;
    }
    // if we reached this point it means we got to a maximum
    return false;
}

Now, GenerateIncrement(1000000).Take(10) takes longer to display the results than to actually compute them.

More solutions

As this is a classic interview question, there are a billion solutions to it on LeetCode. Yet the purpose of interview questions is to find out how one thinks, not what the solution of the problem actually is. I hope this helps.