I was playing with code analysis rule sets in Visual Studio (see my blog post about it) and I got hit by some conflicting rules. I will discuss only SonarSource rules, but a lot of other analyzers have similar rules.

OK, one of them is something that I intuitively thought was universally good: RSPEC-3962: "static readonly" constants should be "const" instead. Makes sense, right? A constant is resolved at compile time, accessed with no overhead, and nothing can change it. It's a constant! Anyway, this rule was marked as a minor improvement to the code.

Then, bam!, RSPEC-2339: Public constant members should not be used. Critical rule! Basically it says the opposite: turn your constant into static readonly. What's going on?!

This is not one of those pairs of rules that contradict each other based on user preference, like using var instead of the type name when the type is obvious and vice versa. These are two different, apparently conflicting, yet complementary concepts.

But what is really the difference between a static readonly field and a constant, other than the fact that constants can only be value types or strings? Constant values are retrieved at compile time, as an optimization, since they are not expected to change, while static readonly values are retrieved at runtime. This means that if you use a library in your project, the constants it declares will be baked into your application when you compile it. You may change the .dll of the library afterwards, with inconsistent results: the static readonly fields will have the new values, but the constants will keep the old, compiled-in ones.

Here is an example. In the creatively named project Library there is a Container class with a public constant ingeniously named Constant and a public static readonly field that has the same value as Constant.
namespace Library
{
    public class Container
    {
        public const int Constant = 1;
        public static readonly int StaticReadonly = Constant;
    }
}

Then there is a program that uses these two values to display them:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine($"Container.Constant: {Container.Constant} Container.StaticReadonly: {Container.StaticReadonly}");
        Console.ReadKey();
    }
}

The expected output is Container.Constant: 1 Container.StaticReadonly: 1. Now change the value of Constant to 2, right click the Library project and build only it, not the program. Then take the resulting .dll, copy it into the bin folder of the program and run the executable manually. The output is now... Container.Constant: 1 Container.StaticReadonly: 2, and that from code that literally says StaticReadonly = Constant;.

Conclusion: public constants should be avoided if they are used between projects and since you don't know where they will be used, better to avoid them at all times. This will really annoy people who like to create separate classes to store constants, but that's OK, because the feeling is mutual.

So I was watching this Entity Framework presentation and I noticed one example that looked like this:
db.ExecuteSqlCommand($"delete from Log where Time<{time}");

Was this an invitation to SQL injection? Apparently not, since the resulting SQL was something like DELETE FROM Log WHERE Time < @_p0. But how could that be? Enter FormattableString, a class implementing the venerable IFormattable interface, available in .NET Framework only from version 4.6 and in .NET Core from the very beginning. When an interpolated string is assigned to a FormattableString, the compiler creates an instance that holds the format and the argument values separately, before any formatting takes place. In our case ExecuteSqlCommand had a FormattableString overload. Note that the method is an extension method from RelationalDatabaseFacadeExtensions, not Database.ExecuteSqlCommand.

Let's test this with a little program:
class Program
{
    static void Main(string[] args)
    {
        var timeDisplay = new TimeDisplay();
        Test($"Time display:{timeDisplay}");
        Console.ReadKey();
    }

    private static void Test(string text)
    {
        Console.WriteLine(text);
    }

    private class TimeDisplay
    {
        public override string ToString()
        {
            return DateTime.Now.ToString("s");
        }
    }
}

Here I create an instance of TimeDisplay and then use it in an interpolated string which is then sent to the Test method, which Console.WriteLines it. The ToString method of TimeDisplay is overridden to display the current time. The result is predictable: Time display:2018-12-13T11:24:02. I will then change the type of the parameter of Test to be FormattableString. It still works and it displays the same thing. Note that if I have both a FormattableString and a string version of the same method, string will be used first when an interpolated string is sent as a parameter!
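
To see that overload resolution quirk in isolation, here is a tiny sketch of my own (not part of the original test program):

private static void Test(string text)
{
    Console.WriteLine("string overload");
}

private static void Test(FormattableString text)
{
    Console.WriteLine("FormattableString overload");
}

// Test($"{DateTime.Now}"); // prints "string overload"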

But what do I get in that instance? Let's change the Test method even more:
private static void Test(FormattableString text)
{
    Console.WriteLine($"Format: {text.Format} " +
        $"ArgumentCount: {text.ArgumentCount} " +
        $"Arguments: {string.Join(", ", text.GetArguments())}");
}

The displayed result of the program is now Format: Time display:{0} ArgumentCount: 1 Arguments: 2018-12-13T11:28:35. Note that the argument is in fact a TimeDisplay instance and it is displayed as a time stamp because of the ToString override.

What does this mean?

Well, we can do great things like Entity Framework does, interpreting the intent of the developer and providing a more informed output. I am considering this as a solution for logging. Logger.LogDebug($"{someObjectWithAHeavyToString}") now doesn't have to execute the ToString() method of the object unless the Debug log level is enabled, for example.
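
Here is a minimal sketch of that logging idea; the Logger class and its IsDebugEnabled flag are my own inventions for illustration, not an existing library API:

using System;

public class Logger
{
    public bool IsDebugEnabled { get; set; }

    public void LogDebug(FormattableString message)
    {
        if (!IsDebugEnabled) return;
        // the arguments' ToString methods only execute here, when Debug is enabled
        Console.WriteLine(message.ToString());
    }
}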

But we can also really mess things up. I will skip past the possible yet unlikely security problem where you believe you are passing an object's .ToString() value when in fact the entire object is passed, allowing a malicious library to do whatever it wants with it. Let's consider more probable scenarios.

One is that a code reviewer will tell you "put magic strings in their own variables or constants", so you immediately take the string sent to Test and automatically move it to a local variable (which Visual Studio will declare as FormattableString), then you replace the declared type with var (because the type is obvious, right?). Suddenly the variable is a string and the wrong overload gets called.

Another is even worse, although if you decided to code like this you have other issues. Let's get back to something similar to the original example:
db.ExecuteSqlCommand($"delete from Log where Id = {id}");

And let's change it:
var sql = $"delete from Log where Id = {id}";
db.ExecuteSqlCommand(sql);

Now sql is a string, its value computed from the id, which might be provided by the user. Replace this with Bobby Tables and you've got a nice SQL injection.

Conclusion: an interesting, if somewhat confusing, concept. Other than the logging idea, which I admit is pretty interesting, I have yet to find a good place to use it.

Intro


An adapter is a software pattern that exposes functionality through an interface different from the original one. Let's say you have an oven with the function Bake(int temperature, TimeSpan time), and you expose a MakePizza() interface. It still bakes at a specific temperature for an amount of time, but you use it differently. Sometimes we have similar libraries with a common goal but different scopes, and one is tempted to hide them under a common adapter. You might want to just cook things, not bake or fry.

So here is a post about good practices of designing a library project (complete with the use of software patterns, ugh!).



Examples


An example in .NET would be the WebRequest.Create method. It receives a URI as a parameter and, based on its scheme, returns a different implementation that will handle the resource in the way declared by WebRequest. For HTTP, it will use an HttpWebRequest, for FTP an FtpWebRequest, for file access a FileWebRequest and so on. They are all implementations of the abstract class WebRequest, which would be our adapter. The Create method itself is an example of the factory method pattern.
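
For illustration, a few calls to the factory method (using System.Net):

// the same factory method returns a different implementation based on the URI scheme
WebRequest httpRequest = WebRequest.Create("http://example.com");       // HttpWebRequest
WebRequest ftpRequest = WebRequest.Create("ftp://example.com/file");    // FtpWebRequest
WebRequest fileRequest = WebRequest.Create("file:///C:/temp/a.txt");    // FileWebRequest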

But there are issues with this. Let's assume that we have different libraries/projects that handle a specific resource scope. They may be so different as to be managed by different organizations. A team works on files, another on HTTP and the FTP one is an open source third party library. Your team works on the WebRequest class and has to consider the implications of having a Create factory method. Is there a switch there? "if URI starts with http or https, return new HttpWebRequest"? In that case, your WebRequest library will need to depend on the library that contains HttpWebRequest! And it's just not possible, since it would be a circular reference. Had your project control over all implementations, it would still be a bad idea to let a base class know about a derived class. If you move the factory into a factory class it still means your adapter library has to depend on every implementation of the common interface. As Joe Armstrong would say You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

So how did Microsoft solve it? Well, they did move the implementation of the factory in another creator class that would implement IWebRequestCreate. Then they used configuration to associate a prefix with an implementation of WebRequest. Guess you didn't know that, did you? You can register your own implementations via code or configuration! It's such an obscure feature that if you Google WebRequestModulesSection you mostly get links to source code.
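
Here is a minimal sketch of the code route; WebRequest.RegisterPrefix and IWebRequestCreate are the real extension points, while MyWebRequest and MyWebRequestCreator are hypothetical:

using System;
using System.Net;

public class MyWebRequest : WebRequest
{
    private readonly Uri _uri;
    public MyWebRequest(Uri uri) { _uri = uri; }
    public override Uri RequestUri => _uri;
    // override GetResponse and friends to actually handle the resource
}

public class MyWebRequestCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri) => new MyWebRequest(uri);
}

// somewhere at application startup:
WebRequest.RegisterPrefix("custom", new MyWebRequestCreator());
// from now on, WebRequest.Create("custom://whatever") returns a MyWebRequest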

Another very successful example of an adapter library is jQuery. Yes, the one they now say you don't need anymore; it took the industry only 12 years to catch up, after all. Anyway, at the time there were very different implementations of what people thought a web browser should be. The way the DOM was represented, the Javascript objects and methods, the way they actually worked compared to the way they should have worked, everything was different. So developers were often either favoring a browser over the others or were forced to write code for each possible version. Something like "if Internet Explorer, do A, if Netscape, do B". The problem with this is that if you used a browser that was neither Internet Explorer nor Netscape, the site would either break or show you one of those annoying "browser not supported" messages.

Enter jQuery, which abstracted access over all these different interfaces with a common (and very nicely designed) one. Not only did it have a fluent interface that allowed you to do multiple things with a single target (stuff like $('#myElement').show().css({opacity:0.7}).text('My text');), but it was extensible, allowing third parties to add modules that would allow even more functionality ($('#myElement').doSomethingCool();). Sound familiar? Extensibility seems to be an important common feature of well designed adapters.

Speaking of jQuery, one heavily used feature was jQuery.browser, which told you what browser you were using. It had very sophisticated and complex code to get around the quirks of every browser out there. Now you had the ability to do something like if ($.browser.msie) say('OMG! You like Microsoft, you must suck!');. Guess what, the browser extension was deprecated in jQuery 1.9, and not because it was not inclusive. Well, that's the actual reason, but from a technical point of view, not political correctness. You see, you now had this brand new interface that worked great on all browsers and yet still your browser couldn't access a page correctly: it was either an untested version of a particular browser, or a different type of browser, or the conditions for letting the user in were too restrictive.

The solution was to rely on feature detection, not product versions. For example you use another Javascript library called Modernizr and write code like if (Modernizr.localstorage) { /* supported */ } else { /* not-supported */ }. There are so many possible features to detect that Modernizr lets you pick and choose the ones you need and then constructs the library that handles each instead of bundling it all in one huge package. They are themselves extensible. You might ask what all this has to do with libraries in .NET. I am getting there.

The last example: Entity Framework. This is a hugely popular framework for database access from Microsoft. It abstracts the type of the database behind a very nice (also fluent) interface in .NET code. But how does it do that? I mean, what if I need SQL Server? What if I want MongoDB or PostgreSQL?

The way it works is through different "providers" that translate .NET code Expressions into whatever the storage needs. The individual providers are added as dependencies to your project, without the need for Entity Framework to know about them. Then they are configured for use in code, because they implement some common interfaces, and they are ready for use.
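
As a minimal Entity Framework Core sketch of that idea (Log and LogContext are made up for the example; UseSqlServer comes from the Microsoft.EntityFrameworkCore.SqlServer provider package, which your project references while EF Core itself knows nothing about it):

using System;
using Microsoft.EntityFrameworkCore;

public class Log
{
    public int Id { get; set; }
    public DateTime Time { get; set; }
}

public class LogContext : DbContext
{
    public DbSet<Log> Logs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // swap UseSqlServer for UseNpgsql, UseSqlite etc. and nothing else changes
        optionsBuilder.UseSqlServer("Server=.;Database=Logs;Trusted_Connection=True");
    }
}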

Principles for adapters


So now we have some idea about what is good in an adapter:
  • Ease of use
  • Common interface
  • Extensibility
  • No direct dependency between the interface and what is adapted
  • An interface per feature

Now that I wrote it down, it sounds kind of weird: the interface should not depend on what it adapts. It is correct, though. In the case of Entity Framework, for example, the provider for MySql is an adapter between MySql's own interface and the .NET interfaces declared by Entity Framework; interfaces are just declarations of what something should do, not implementations.

Picture time!


The factory and the common interface go into one library, and it is that library that you will reference in your project. Each individual adapter depends on it as well, but your project doesn't need to know about any of them until needed.

Now, it's your choice whether you register the adapters dynamically (say, you load the .dll, extract the objects that implement a specific interface, and they know by themselves what they apply to, like FtpWebRequest for ftp: strings, as in the sketch below) or you add dependencies to individual adapters in your project and then register them yourself, strongly typed. The important thing is that referencing the factory library does not automatically force all the possible implementations into your project.
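
A rough sketch of the dynamic variant; IAdapter, its Prefix property and the registry are assumptions of this example, not an existing API:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public interface IAdapter
{
    string Prefix { get; } // the adapter itself declares what it applies to, e.g. "ftp:"
}

public static class AdapterRegistry
{
    private static readonly Dictionary<string, IAdapter> _adapters
        = new Dictionary<string, IAdapter>();

    public static void Register(IAdapter adapter) => _adapters[adapter.Prefix] = adapter;

    public static IAdapter Get(string prefix)
        => _adapters.TryGetValue(prefix, out var adapter) ? adapter : null;

    // at startup, load a .dll and register whatever adapters it contains
    public static void LoadFrom(string assemblyPath)
    {
        var types = Assembly.LoadFrom(assemblyPath).GetTypes()
            .Where(t => typeof(IAdapter).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract);
        foreach (var type in types)
        {
            Register((IAdapter)Activator.CreateInstance(type));
        }
    }
}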

It seems I've covered all points except the last one. That is pretty important, so read on!

Imagine that the things you want to adapt are not really that similar. You want to force them into a common shape, but there will be bits that are specific to one domain only and you might want them. Now here is an example of how NOT to do things:
var target = new TargetFactory().Get(connectionString);
if (target is SomeSpecificTarget specificTarget)
{
    specificTarget.Authenticate(username, password);
}
target.DoTargetStuff();
In this case I use the adapter for Target, but then bring in the knowledge of a specific target called SomeSpecificTarget and use a method that I just know is there. This is bad for several reasons:
  1. For someone to understand this code they must know what SomeSpecificTarget does, invalidating the concept of an adapter
  2. I need to know that for that specific connection string a certain type will always be returned, which might not be the case if the factory changes
  3. I need to know how SomeSpecificTarget works internally, which might also change in the future
  4. I must add a dependency to SomeSpecificTarget to my project, which is at least inconsistent as I didn't add dependencies to all possible Target implementations
  5. If different types of Target will be available, I will have to write code for all possibilities
  6. If new types of Target become available, I will have to change the code for each new addition to what is essentially third party code

And now I will show you two different versions that I think are good. The first is simple enough:
var target = new TargetFactory().Get(connectionString);
if (target is IAuthenticationTarget authTarget)
{
    authTarget.Authenticate(username, password);
}
target.DoTargetStuff();
No major change other than I am checking if the target implements IAuthenticationTarget (which would best be an interface in the common interface project). Now every target that requires (or will ever require) authentication will receive the credentials without the need to change your code.

The other solution is more complex, but it allows for greater flexibility:
var serviceProvider = new TargetFactory()
    .GetServiceProvider(connectionString);
var target = serviceProvider.Get<ITargetProvider>()
    .Get();
serviceProvider.Get<ICredentialsManager>()
    ?.AddCredentials(target, new Credentials(username, password));
target.DoTargetStuff();
So here I am not getting a target, but a service provider (which is another software pattern, BTW), based on the same connection string. This provider will give me implementations of a target provider and a credentials manager. Now I don't even need to have a credentials manager available: if it doesn't exist, this will do nothing. If I do have one, it will decide by itself what needs to be done with the credentials for a target. Does it need to authenticate now or later? You don't care. You just add the credentials and let the provider decide what needs to be done.

This last approach is related to the concept of inversion of control. Your code declares intent while the framework decides what to do. I don't need to know of the existence of specific implementations of Target or indeed of how credentials are being used.

Here is the final version, using extension methods in a method chaining fashion, similar to jQuery and Entity Framework, in order to reinforce that Ease of use principle:
// your code
var target = new TargetFactory()
    .Get(connectionString)
    .WithCredentials(username, password);

// in a static extensions class
public static Target WithCredentials(this Target target, string username, string password)
{
    target.Get<ICredentialsProvider>()
        ?.AddCredentials(target, new Credentials(username, password));
    return target;
}

public static T Get<T>(this Target target)
{
    return target.GetServiceProvider()
        .Get<T>();
}
This assumes that a Target has a method called GetServiceProvider which will return the provider for any interface required so that the whole code is centered on the Target type, not IServiceProvider, but that's just one possible solution.

Conclusion


As long as the principles above are respected, your library should be easy to use and easy to extend without the need to change existing code or consider individual implementations. The projects using it will only use the minimum amount of code required to do the job and will themselves depend only on interface declarations. As long as those are respected, the code will work without change. It's really meta: if you respect the interface described in this blog post then all interfaces will be respected in the code all the way down! Only some developer locked in a cellar somewhere will need to know how things are actually getting done.

I was attempting to add very detailed logging (tracing) to my application. In order to do that, I had to change hundreds of objects. That wouldn't do. Fortunately, .NET has a nice feature called RealProxy. Let's see a quick and dirty example:
class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = new Test();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException("Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

So I defined an object with a field, a property and a method. That's what my application does. It then displays the object and the result of the method call. Here is the result:
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

I would like to log everything that happens with my Test object. Well, as such I can't do anything, I need to change the code a bit, like this:
class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = (Test)new LoggingProxy(new Test()).GetTransparentProxy();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test : MarshalByRefObject
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException("Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            var result = methodCall.MethodBase.Invoke(_target, arguments);
            return new ReturnMessage(
                result,
                arguments,
                arguments.Length,
                methodCall.LogicalCallContext,
                methodCall);
        }
        return null;
    }
}

So this is what I did above:
  • I've inherited my Test class from MarshalByRefObject
  • I've created a LoggingProxy class that inherits from RealProxy and implements the Invoke method
  • I've replaced new Test(); with (Test)new LoggingProxy(new Test()).GetTransparentProxy();

Running it we get the same result.

Time to add some logging. I will write stuff on the Console, too, for this demo. Here are the changes to the LoggingProxy class:
class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            string typeName;
            try
            {
                typeName = Type.GetType(methodCall.TypeName).Name;
            }
            catch
            {
                typeName = methodCall.TypeName;
            }
            try
            {
                Console.WriteLine($"Called {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)})");
                var result = methodCall.MethodBase.Invoke(_target, arguments);
                Console.WriteLine($"Success for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{JsonConvert.SerializeObject(result)}");
                return new ReturnMessage(
                    result,
                    arguments,
                    arguments.Length,
                    methodCall.LogicalCallContext,
                    methodCall);
            }
            catch (Exception exception)
            {
                Console.WriteLine($"Error for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{exception}");
                return new ReturnMessage(exception, methodCall);
            }
        }
        return null;
    }
}

It's the same as before, but with a try/catch block and some extra Console.WriteLines. Here is the output:
Called Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
])
Success for Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.set_Property([
  "Test"
])
Success for Test.set_Property([
  "Test"
]): null
Called Test.Method([
  "test1",
  "test2",
  "ref",
  null
])
Success for Test.Method([
  "test1",
  "test2",
  "ref reffed",
  "outed"
]): "test1 : test2"
Called Object.GetType([])
Success for Object.GetType([]): "RealProxyTests2.Test, RealProxyTests2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
Called Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  null
])
Success for Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.get_Property([])
Success for Test.get_Property([]): "Test"
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

A bounty of information. The first lines are the setting of the field, Property and the execution of Method. But then there are a GetType, a field getter and a Property getter. What's that about? That's JsonConvert, serializing the Test proxy.
Warning: I've copied methodCall.Args into a local variable called arguments. At first glance it might appear superfluous, but methodCall.Args is not really an array of object, even if it appears that way in the debugger. It is read-only; changing any of its items has no effect.

Remember how we defined the proxy in Program.Main? We can encapsulate that in a static method on the proxy class, let's call it Wrap:
public static T Wrap<T>(T obj)
{
    if (obj is MarshalByRefObject marshalByRefObject)
    {
        return (T)new LoggingProxy(marshalByRefObject).GetTransparentProxy();
    }
    return obj;
}
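
With that in place, the line in Main simply becomes:

Test test = LoggingProxy.Wrap(new Test());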

Conclusion:

While this works, there are a number of issues with it:
  1. RealProxy is only available for .NET Framework, not .NET Core (see DispatchProxy for that)
  2. Serialization is not as straightforward as in this demo (see my blog post about Newtonsoft serialization)
  3. You need to inherit from MarshalByRefObject, so this solution doesn't work for classes that already have a base class
  4. Performance wise, you should make sure you are not logging or executing anything unless the correct log level is set (something like logger.LogTrace(JsonConvert(...)) would not work, because the JSON serialization would occur before LogTrace even executes, regardless of log level; see the sketch after this list)
  5. Also, this wrapping is not free. With a simple proxy that did nothing but execute the code, it took 43 times longer to run. Of course, that's because the actual execution of setting properties or returning a string is basically zero. When I added a Thread.Sleep(1) in the method, it took almost the same amount of time. Just don't use it in performance-sensitive applications
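
Regarding point 4, the guard is simply checking the level before doing any work; logger here is a hypothetical ILogger in the style of Microsoft.Extensions.Logging:

if (logger.IsEnabled(LogLevel.Trace))
{
    // the expensive serialization only happens when Trace is actually enabled
    logger.LogTrace(JsonConvert.SerializeObject(arguments));
}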

Final thoughts: in the same namespace as RealProxy there is a ProxyAttribute class that at first glance seems even better: you just decorate a class with the attribute and BANG! instant AOP. But it's not that simple. First of all it works only on objects that inherit from ContextBoundObject, which itself inherits from MarshalByRefObject. And while it seems like a good idea to just replace MarshalByRefObject with ContextBoundObject in the code above, know that no generic class can inherit from it. There might be other restrictions, too. If you make Test inherit ContextBoundObject, the debugger will already show new Test() as being a transparent proxy, without wrapping it with any code. It might still be usable in certain conditions, though.

I was trying to log some stuff (a lot of stuff) and I noticed that my memory usage went through the roof (16GB in a few seconds), then an OutOfMemoryException was thrown. I've finally narrowed it down to the JSON serializer from Newtonsoft.

First of all, some introduction on how to serialize any object into JSON: whether you use JsonConvert.SerializeObject(yourObject, new JsonSerializerSettings { <settings> }) or new JsonSerializer { <settings> }.Serialize(writer, object) (where <settings> are some properties set via the object initializer syntax), you will need to consider these properties: Formatting, ReferenceLoopHandling, NullValueHandling and PreserveReferencesHandling.
We will use these classes to test the results:
class Parent
{
    public string Name { get; set; }
    public Child Child1 { get; set; }
    public Child Child2 { get; set; }
    public Child[] Children { get; set; }
    public Parent Self { get; internal set; }
}

class Child
{
    public string Name { get; set; }
    public Parent Parent { get; internal set; }
}

For this piece of code:
JsonConvert.SerializeObject(new Child { Name = "other child" }, settings)
you will get either
{"Name":"other child","Parent":null}
or
{
  "Name": "other child",
  "Parent": null
}
based on whether we use Formatting.None or Formatting.Indented. The other properties do not affect the serialization (yet).

Let's set up the following objects:
var child = new Child
{
    Name = "Child name"
};
var parent = new Parent
{
    Name = "Parent name",
    Child1 = child,
    Child2 = child,
    Children = new[]
    {
        child, new Child { Name = "other child" }
    }
};
parent.Self = parent;
child.Parent = parent;

As you can see, not only does parent have multiple references to child and one to itself, but the child also references the parent. If we try the code
JsonConvert.SerializeObject(parent, settings)
we will get an exception Newtonsoft.Json.JsonSerializationException: 'Self referencing loop detected for property 'Parent' with type 'Parent'. Path 'Child1'.'. In order to avoid this, we can use ReferenceLoopHandling.Ignore, which tells the serializer to ignore circular references. Here is the output when using
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};

{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child",
      "Parent": null
    }
  ]
}

If we add NullValueHandling.Ignore we get
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child"
    }
  ]
}
(the "Parent": null bit is now gone)

The default for the ReferenceLoopHandling property is ReferenceLoopHandling.Error, which throws the serialization exception above, but besides Error and Ignore we can also use ReferenceLoopHandling.Serialize. In that case we get a System.StackOverflowException: 'Exception of type 'System.StackOverflowException' was thrown.' as it tries to serialize ad infinitum.

PreserveReferencesHandling is rather interesting. It creates extra properties for objects like $id, $ref or $values and then uses those to define objects that are circularly referenced. Let's use this configuration:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};

Then the result will be
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": [
    {
      "$ref": "2"
    },
    {
      "$id": "3",
      "Name": "other child"
    }
  ],
  "Self": {
    "$ref": "1"
  }
}

Let's try PreserveReferencesHandling.Arrays:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Arrays
};

The result will then be
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": {
    "$id": "1",
    "$values": [
      {
        "Name": "Child name"
      },
      {
        "Name": "other child"
      }
    ]
  }
}
which annoyingly adds an $id to the Children array. There is one more possible value, PreserveReferencesHandling.All, which causes this output:
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": {
    "$id": "3",
    "$values": [
      {
        "$ref": "2"
      },
      {
        "$id": "4",
        "Name": "other child"
      }
    ]
  },
  "Self": {
    "$ref": "1"
  }
}

I personally recommend using PreserveReferencesHandling.Objects, which doesn't need setting the ReferenceLoopHandling property at all. Unfortunately, it adds an $id to every object, even if it is not circularly defined. However, it creates an object that can be safely deserialized back into the original, but if you just want a quick and dirty output of the data in an object, use ReferenceLoopHandling.Ignore with NullValueHandling.Ignore. Note that object references cannot be preserved when a value is set via a non-default constructor such as types that implement ISerializable.
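
As a quick round-trip check (using the Parent and Child classes and the PreserveReferencesHandling.Objects settings from above), deserializing restores the shared references:

var json = JsonConvert.SerializeObject(parent, settings);
var clone = JsonConvert.DeserializeObject<Parent>(json, settings);
// Child1 and Child2 point to the same instance again, thanks to $id/$ref
Console.WriteLine(ReferenceEquals(clone.Child1, clone.Child2)); // True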

Warning, though, this is still not enough! In my logging code I had used ReferenceLoopHandling.Ignore and the exception was quite different, an OutOfMemoryException. It seems that even with circular references handled, JsonSerializer will still mess up sometimes.

The culprits? Task<T> (or async lambdas sent as parameters) and an Entity Framework context object. The solution I employed was to check the type of the objects I send to the serializer and, if they were any of the offending types, replace them with the full names of their types.
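
In a gist, the check looked something like this sketch; the exact list of offending types is whatever bites you, and DbContext here stands for the Entity Framework context (uses System.Linq):

private static object MakeSerializable(object obj)
{
    // Task<T>, delegates (async lambdas) and EF contexts blow up the serializer,
    // so log their type names instead of their content
    if (obj is Task || obj is Delegate || obj is DbContext)
        return obj.GetType().FullName;
    return obj;
}

// before serializing:
var safeArguments = arguments.Select(MakeSerializable).ToArray();
var json = JsonConvert.SerializeObject(safeArguments, settings);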

Hope it helps!

This is a simple gotcha related to changing the color of a control. Let's say you have a label that you want to present in a different color. Normally you would do something like this:
<!-- I put it somewhere in Product.wxs -->
<TextStyle Id="WixUI_Font_Normal_Red" FaceName="Tahoma" Size="8" Red="255" Green="55" Blue="55" />

<!-- somewhere in your UI -->
<Control Id="LabelRed" Type="Text" X="62" Y="200" Width="270" Height="17" Property="MYPROPERTY">
  <Text>{\WixUI_Font_Normal_Red}!(loc.MYPROPERTY)</Text>
</Control>

Yet for some reason, it doesn't work when the control is a checkbox, for example. The simple explanation is that this is by design: only text controls can change color. The solution is to split your control into the edit control without a text, then add a text control next to it with the color you need.

Here is an example of a checkbox that changes the label color based on the check value:
<Control Id="DoNotRunScriptsCheckbox" Type="CheckBox" X="45" Y="197" Height="17" Width="17" Property="DONOTRUNSCRIPTS" CheckBoxValue="1"/>

<Control Id="DoNotRunScriptsLabel" Type="Text" X="62" Y="200" Width="270" Height="17" CheckBoxPropertyRef="DONOTRUNSCRIPTS">
  <Text>!(loc.DoNotRunScriptsDescription)</Text>
  <Condition Action="hide"><![CDATA[DONOTRUNSCRIPTS]]></Condition>
  <Condition Action="show"><![CDATA[NOT DONOTRUNSCRIPTS]]></Condition>
</Control>

<Control Id="DoNotRunScriptsLabelRed" Type="Text" X="62" Y="200" Width="270" Height="17" CheckBoxPropertyRef="DONOTRUNSCRIPTS">
  <Text>{\WixUI_Font_Normal_Red}!(loc.DoNotRunScriptsDescription)</Text>
  <Condition Action="hide"><![CDATA[NOT DONOTRUNSCRIPTS]]></Condition>
  <Condition Action="show"><![CDATA[DONOTRUNSCRIPTS]]></Condition>
</Control>

So you have one of those annoyingly XMLish setups from Windows Installer and you want to preserve the values you input so they are prefilled at future upgrades. There are a lot of articles on the Internet on how to do this, but all of them seem to be missing something. I am sure this one will too, but it worked for me.

Let's start with a basic setup.
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Fragment>
    <UI Id="DatabaseAuthenticationDialogUI">
      <Property Id="DATABASEDOMAIN" Secure="yes"/>
      <Dialog
          Id="DatabaseAuthenticationDialog"
          Width="370"
          Height="270"
          Title="[ProductName] database authentication"
          NoMinimize="yes">
        <Control Id="DatabaseDomainLabel" Type="Text" X="45" Y="110" Width="100" Height="15" TabSkip="no" Text="!(loc.Domain):" />
        <Control Id="DatabaseDomainEdit" Type="Edit" X="45" Y="122" Width="220" Height="18" Property="DATABASEDOMAIN" Text="{80}"/>

So this is a database authentication dialog, with only the relevant lines in it. We have a property defined as DATABASEDOMAIN and then an edit control that edits this property. Ideally, we would want to make sure this property is saved somewhere at the end of the install and retrieved before the install to be populated. To do this we will first define a SAVED_DATABASEDOMAIN property and load/save it in the registry, then link it with DATABASEDOMAIN.

First, there is the issue of where to put this code. Personally, I put it all under Product, as a separate mechanism for preserving and loading values. I am sure there are other places in your XML files where you can do it. Here is how my Product.wxs code looks (just the relevant lines):
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"
     xmlns:util="http://schemas.microsoft.com/wix/UtilExtension">
  <Product ... >
    <Property Id="SAVED_DATABASEDOMAIN" Secure="yes">
      <RegistrySearch Id="FindSavedDATABASEDOMAIN"
                      Root="HKLM"
                      Key="SOFTWARE\MyCompany\MyProduct"
                      Name="DatabaseDomain"
                      Type="raw" />
    </Property>
    <SetProperty Id="DATABASEDOMAIN" After="AppSearch" Value="[SAVED_DATABASEDOMAIN]" Sequence="ui">
      <![CDATA[SAVED_DATABASEDOMAIN]]>
    </SetProperty>
  </Product>

  <Fragment>
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFiles64Folder">
        <Directory Id="MyFolder" Name="MyFolder">
          <Directory Id="INSTALLFOLDER" Name="MyProduct">
            <Component Id="InstallFolderComponent" Guid="c5ccddcc-8442-49e8-aa17-59f84feb4deb">
              <RegistryKey Root="HKLM"
                           Key="SOFTWARE\MyCompany\MyProduct">
                <RegistryValue Id="DatabaseDomain"
                               Action="write"
                               Type="string"
                               Name="DatabaseDomain"
                               Value="[DATABASEDOMAIN]" />
              </RegistryKey>
              <CreateFolder/>
            </Component>
          </Directory>
        </Directory>
      </Directory>
    </Directory>
  </Fragment>
</Wix>

This is what happens:
  1. We search in the registry and set the value for SAVED_DATABASEDOMAIN.
  2. We set the DATABASEDOMAIN value from the SAVED_DATABASEDOMAIN value, if that is set. Note that Sequence is set to "ui". This is very important, as the default value is "both". In my case I spent hours figuring out why the values were written in the registry, but then would never change again. It was because there are two sequences: "ui" and "execute". The code would read the value from the registry, the user would then change the values, then, right before installing anything, the value would be read from the registry AGAIN and would overwrite the user input.
  3. Finally, when we install the product we save in the registry the value of DATABASEDOMAIN, whatever it is.

This should be it, but there are a few gotchas. One of them is checkboxes. For Windows Installer the value of a checkbox either is or isn't. It's not set to 0 or 1, true or false or anything like that. So if you save the value attached to an unchecked checkbox control, when it is read back, even though it is empty, the property will count as set, and your checkbox will always be checked from then on. The solution I used was adding a prefix when saving, then setting the value of the checkbox property only if the saved value is what I expect it to be. Here it is, in a gist:

<!-- Product.wxs -->
<!-- this doesn't change -->
<Property Id="SAVED_DONOTRUNSCRIPTS" Secure="yes">
  <RegistrySearch Id="FindSavedDONOTRUNSCRIPTS"
                  Root="HKLM"
                  Key="SOFTWARE\MyCompany\MyProduct"
                  Name="DoNotRunScripts"
                  Type="raw" />
</Property>
<!-- here, however, I check for Val1 to set the value of the property to 1 -->
<SetProperty Id="DONOTRUNSCRIPTS" After="AppSearch" Value="1" Sequence="ui">
  <![CDATA[SAVED_DONOTRUNSCRIPTS = "Val1"]]>
</SetProperty>
<!-- Note the Val prefix when saving the value -->
<RegistryValue Id="DoNotRunScripts"
               Action="write"
               Type="string"
               Name="DoNotRunScripts"
               Value="Val[DONOTRUNSCRIPTS]" />

<!-- DatabaseSetup.wxs -->
<!-- Note the checkbox value -->
<Control Id="DoNotRunScriptsCheckbox" Type="CheckBox" X="45" Y="197" Height="17" Width="17" Property="DONOTRUNSCRIPTS" CheckBoxValue="1"/>

I hope that helps people.


Intro


I had this situation where an application running in production was getting bigger and bigger until it devoured all memory. I had to fix the problem, but how could I identify the cause? Running it on my computer wouldn't lead to the same increase in memory; what I wanted to see was why a simple web API would take 6GB of memory and keep growing.

The process is simple enough:
  1. Get a memory dump from the running process
  2. Analyze the memory dump

While this post relates specifically to process memory dumps, one can get whole memory dumps and analyze those. There are two types of process memory dump formats: mini and full. I will be talking about the full format.

Fortunately, for our process, there are only two steps. Unfortunately, both are fraught with problems. Let's go through each.

Getting a memory dump from a running process


Apparently that is as simple as opening Task Manager, right clicking on the running process in the Details tab and selecting Create dump file. However, what this does is suspend the process and then copy all of its memory into a file in %LocalAppData%\Temp\[name of process].dmp. If the process is being watched or some other issue occurs, the dumping will fail. Personally, I couldn't dump anything over 2GB when I was trying to get the info from w3wp.exe (the IIS process), as the process would restart.

There are a lot of other ways of getting a process memory dump, including crash dumps, periodic dumps and event driven dumps. ProcDump, one of the utilities in the free SysInternals suite, can do all of those. The Windows Error Reporting (WER) system also generates dumps for faulty or unresponsive applications or the kernel. Other options for collecting a dump are userdump.exe and Windows Debugger (ntsd or windbg). Process Explorer, of course, can create two types of memory dumps for a process: mini and full.

Analyzing a process memory dump


This is where it gets tricky. What you would like is something that opens the file and shows a nice interactive interface explaining exactly what hogs your memory. Visual Studio does that, but only the Ultimate version gives you the memory analysis option. dotMemory from JetBrains is great, but it also costs a lot of money.

My personal experience was using dotMemory in the five days in which it is free and it was as seamless as I would have wanted: I opened the file, it showed me a pie graph with what objects were hogging my memory, I clicked on them, looked where and how they were stored and even what paths could be taken to create those instances. It was beautiful.

Another option is SciTech's .NET Memory Profiler, but it is also rather expensive. I am sure RedGate has something, too, but their trials are way too intrusive for me to download.

I have been looking for free alternatives and frankly all I could find is WinDBG, which is included in Windows Debugging Tools.

Windows Debugger is the name of the software. It is a very basic tool, but there are visualizers which use it to show information. One of them is MemoScope, which appears unmaintained at the moment and is kind of weird looking, but it works. Unfortunately, it doesn't even come close to what dotMemory was showing me. Where dotMemory, analyzing a random .NET app dump, showed that the biggest memory usage was coming from MainWindow (makes sense), which held an lvItems field that used most of the memory (it was a ListView with a lot of data), WinDbg and therefore MemoScope show the biggest usage coming from arrays of bytes. And that also makes sense, physically, but there is no logical context. A step by step memory profiling guide using WinDbg can be found here.

Conclusion


I am biased towards JetBrains software, which is usually amazing, although I haven't used it in quite a while because of their obnoxious licensing system (you basically rent the software). I loved how quickly I got to the problem I had with dotMemory. .NET Memory Profiler from SciTech is way cheaper and seems to point in the right direction, but doesn't really compare in terms of quality. Anything else I've tried was quite subpar.


Doing it yourself


As they say, if you want to do something right, you gotta do it yourself.

There is a Windows API called MiniDumpWriteDump that you can use programmatically to create the dumps. However, as you've seen, that's not the problem; the dump analysis is.
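
For completeness, a bare-bones P/Invoke sketch of the dumping part (error handling omitted; 0x2 is the MiniDumpWithFullMemory flag, matching the full format discussed above):

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class Dumper
{
    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
        SafeFileHandle hFile, int dumpType, IntPtr exceptionParam,
        IntPtr userStreamParam, IntPtr callbackParam);

    public static void Dump(Process process, string fileName)
    {
        using (var fs = new FileStream(fileName, FileMode.Create))
        {
            MiniDumpWriteDump(process.Handle, (uint)process.Id,
                fs.SafeFileHandle, 0x2 /* MiniDumpWithFullMemory */,
                IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
        }
    }
}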

And it's hard to find anything that is not very deep and complicated. Memory dumps are usually analysed for forensic reasons. There are specialists who have written scores of books on memory dump analysis. What's worse, most people focus on crash dumps, with memory leak analysis usually done while debugging an application.

I certainly don't have the time to work on a tool to help me with this particular issue. I barely had the time to look for existing tools. But if you, guys, can add information to this blog post, I am sure it will help a lot of people going through the same thing.

Pretty Pictures


I didn't find a nice way of putting the images in the post's text, I am adding them here.








MemoryCache has a really strange behavior where all of a sudden, without any warning or exception, it stops caching. You use Set or Add, it doesn't matter. It returns true for Add, it throws no error for Set, but the cache stubbornly remains empty. This behavior can be replicated easily by running this line:
MemoryCache.Default.Dispose();
From that moment on, MemoryCache.Default will not cache anything anymore.

Of course, you will say, why the hell would you dispose the default object? First of all, why give me the option? It's not like I can set the Default instance after I dispose it, so this breaks a lot of things. Second of all, why pretend to continue working when in fact it is not? Most disposable classes throw exceptions if they are used after being disposed. And third of all, it might not be you who does the disposing.

In my case, I was trying to be a good boy and declare all my dependencies upfront. I was using StructureMap as an IoC framework and for my class that was using a MemoryCache I declared a constructor parameter of the basest type I could: ObjectCache. Then I registered MemoryCache.Default as the instance to use when an ObjectCache is needed. The problem is that each unit test would initialize and tear down the StructureMap container, and the teardown takes all the used instances that implement IDisposable and disposes them, including our friend MemoryCache.Default. As I couldn't find a simple way to tell StructureMap not to dispose the object, I had to create a MemoryCacheWrapper that implemented ObjectCache, was NOT IDisposable and delegated all methods to MemoryCache.Default, then use that as the singleton instance for ObjectCache in StructureMap.
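
Here is a minimal sketch of such a wrapper, delegating every ObjectCache member to MemoryCache.Default and, crucially, not implementing IDisposable:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class MemoryCacheWrapper : ObjectCache
{
    // deliberately NOT IDisposable, so an IoC container cannot dispose the default cache
    private readonly MemoryCache _cache = MemoryCache.Default;

    public override DefaultCacheCapabilities DefaultCacheCapabilities => _cache.DefaultCacheCapabilities;
    public override string Name => _cache.Name;

    public override object this[string key]
    {
        get => _cache[key];
        set => _cache[key] = value;
    }

    public override CacheEntryChangeMonitor CreateCacheEntryChangeMonitor(IEnumerable<string> keys, string regionName = null)
        => _cache.CreateCacheEntryChangeMonitor(keys, regionName);

    public override object AddOrGetExisting(string key, object value, CacheItemPolicy policy, string regionName = null)
        => _cache.AddOrGetExisting(key, value, policy, regionName);
    public override CacheItem AddOrGetExisting(CacheItem value, CacheItemPolicy policy)
        => _cache.AddOrGetExisting(value, policy);
    public override object AddOrGetExisting(string key, object value, DateTimeOffset absoluteExpiration, string regionName = null)
        => _cache.AddOrGetExisting(key, value, absoluteExpiration, regionName);

    public override bool Contains(string key, string regionName = null) => _cache.Contains(key, regionName);
    public override object Get(string key, string regionName = null) => _cache.Get(key, regionName);
    public override CacheItem GetCacheItem(string key, string regionName = null) => _cache.GetCacheItem(key, regionName);
    public override long GetCount(string regionName = null) => _cache.GetCount(regionName);
    public override IDictionary<string, object> GetValues(IEnumerable<string> keys, string regionName = null)
        => _cache.GetValues(keys, regionName);
    public override object Remove(string key, string regionName = null) => _cache.Remove(key, regionName);

    public override void Set(string key, object value, CacheItemPolicy policy, string regionName = null)
        => _cache.Set(key, value, policy, regionName);
    public override void Set(CacheItem item, CacheItemPolicy policy) => _cache.Set(item, policy);
    public override void Set(string key, object value, DateTimeOffset absoluteExpiration, string regionName = null)
        => _cache.Set(key, value, absoluteExpiration, regionName);

    protected override IEnumerator<KeyValuePair<string, object>> GetEnumerator()
        => ((IEnumerable<KeyValuePair<string, object>>)_cache).GetEnumerator();
}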

If you are working with a .NET version prior to .NET 4.5, you may encounter the same issue with a bug in .NET: MemoryCache Empty : Returns null after being set

You are trying to install an application and in the detailed log you get something like this:
InstallSqlData:  Error 0x8007007a: Failed to copy SqlScript.Script: SqlWithWindowsAuthUpv2.9.5UpdateGetInstallJobBuildByInstallJobBuildIdScript
or
InstallSqlData:  Error 0x8007007a: failed to read SqlScripts table

For me it was related to the length of an <sql:SqlScript> tag Id property. If you are trying to use more than 55 characters for the BinaryKey property, you get a build error that you should use a max of 55 characters. For the Id you get no such warning, but instead it fails on install.

I hope it helps people (I lost two hours figuring it out).

Sometimes we need to execute an async method synchronously. And before you bristle up and try to deny it, think of the following scenarios:
  • event handlers are by default synchronous
  • third party software might require you to implement a synchronous method (like a lot of Microsoft code, like Windows services)

All I want is to take public async Task ExecuteStuff(param1) and run it with ExecuteSynchronously(ExecuteStuff, param1).

"What's the problem?", you might ask, "Just use SomeMethod().Result instead of await SomeMethod()". Well, it doesn't work because of the dreaded deadlock (a dreadlock). Here is an excerpt of the .Net C# code for Task<T>.InternalWait (used by Result and Wait):
// Alert a listening debugger that we can't make forward progress unless it slips threads.
// We call NOCTD for two reasons:
// 1. If the task runs on another thread, then we'll be blocked here indefinitely.
// 2. If the task runs inline but takes some time to complete, it will suffer ThreadAbort with possible state corruption,
// and it is best to prevent this unless the user explicitly asks to view the value with thread-slipping enabled.
Debugger.NotifyOfCrossThreadDependency();
As you can see, it only warns the debugger; it doesn't crash, it just freezes. And this is just in case it catches the problem at all.
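
For reference, the classic shape of the deadlock, as a sketch; it needs a single-threaded SynchronizationContext (UI or classic ASP.NET), so a console application will not reproduce it:

public async Task<string> GetDataAsync()
{
    await Task.Delay(1000); // the continuation is queued back to the captured context
    return "data";
}

public string GetData()
{
    // blocks the context's only thread; the continuation above can never run on it,
    // so Result never completes
    return GetDataAsync().Result;
}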

What about other options? Task has a method called RunSynchronously, it should work, right? What about Task<T>.GetAwaiter().GetResult() ? What about .ConfigureAwait(false)? In my experience all of these caused deadlocks.

Now, I am giving you two options, one that I tried and works and one that I found on a Microsoft forum. I don't have the time and brain power to understand how this all works behind the scenes, so I am asking for assistance in telling me why this works and other things do not.

First of all, two sets of extensions methods I use to execute synchronously an async method or func:
/// <summary>
/// Execute an async function synchronously
/// </summary>
public static TResult ExecuteSynchronously<TResult>(this Func<Task<TResult>> asyncMethod)
{
    TResult result = default(TResult);
    Task.Run(async () => result = await asyncMethod()).Wait();
    return result;
}

/// <summary>
/// Execute an async function synchronously
/// </summary>
public static TResult ExecuteSynchronously<TResult, TParam1>(this Func<TParam1, Task<TResult>> asyncMethod, TParam1 t1)
{
    TResult result = default(TResult);
    Task.Run(async () => result = await asyncMethod(t1)).Wait();
    return result;
}

/// <summary>
/// Execute an async function synchronously
/// </summary>
public static void ExecuteSynchronously(this Func<Task> asyncMethod)
{
    Task.Run(async () => await asyncMethod()).Wait();
}

/// <summary>
/// Execute an async function synchronously
/// </summary>
public static void ExecuteSynchronously<TParam1>(this Func<TParam1, Task> asyncMethod, TParam1 t1)
{
    Task.Run(async () => await asyncMethod(t1)).Wait();
}

I know, the ones returning a result would probably be better as return Task.Run(...).Result or even Task.Run(...).GetAwaiter().GetResult(), but I didn't want to take any chances. Because of how the compiler works, you can't really use them as extension methods like SomeMethod.ExecuteSynchronously(param1) unless SomeMethod is a variable defined as Func<TParam1, Task<TResult>>. Instead you must use them like a normal static method: MethodExtensions.ExecuteSynchronously(SomeMethod, param1).

Second is the solution for this proposed by slang25:

public static T ExecuteSynchronously<T>(Func<Task<T>> taskFunc)
{
    var capturedContext = SynchronizationContext.Current;
    try
    {
        SynchronizationContext.SetSynchronizationContext(null);
        return taskFunc.Invoke().GetAwaiter().GetResult();
    }
    finally
    {
        SynchronizationContext.SetSynchronizationContext(capturedContext);
    }
}

I haven't tried this, so I can only say that it looks OK.

A third option, I am told, works by configuring every await to run with ConfigureAwait(false). There is a NuGet package that does this for you. Again, untried.

Kind of obvious, but I wanted to make it clear and to remember it for the future. You want to make something appear or not based on a toggle button, usually done by adding a click handler on an element with some code to determine what happens. But you can do it with HTML and CSS only by using the label element pointing to a hidden checkbox with its for attribute. Example:

[live demo: clicking a "Toggle me!" button toggles the text "I have been toggled!"]

Here is the HTML and CSS for it:
<style>
  /* you have to use a caption element instead of a control element inside the label,
     so a button would not work and we use a span to look like a button */
  .button {
    border: solid 1px gray;
    padding: 5px;
    display: inline-block;
    user-select: none;
    cursor: pointer;
  }

  /* the element after the unchecked checkbox is visible or not based on the status of the checkbox */
  #chkState + span { display: none; }
  #chkState:checked + span { display: inline-block; }
</style>
<label for="chkState">
  <span class="button">Toggle me!</span>
</label>
&nbsp;&nbsp;&nbsp;
<input type="checkbox" style="display:none" id="chkState" />
<span>I have been toggled!</span>

Update: You might want to use an anchor instead of a span, as browsers and assistive software will interpret it as a clickable, but not an input, control.

Update: You can, in fact, instruct the browser to ignore click events on a button by styling it with pointer-events: none;, however that doesn't stop keyboard events, so one could still navigate to the button using keys and press Space or Enter, defeating the trick. Similarly one could try to add an overlay over the button, which fails for the same reason, but that's already beside the point. Anyway, either of these solutions would disable the visual style of the clicked button, and web sites usually do not use the default button style anyway.

There is one reason why you should not use this, though, and that is usability. I don't know enough about it, though. In theory, a label should be interpreted the same as the checkbox by software for people with disabilities.

There is this channel I am subscribed to, with various people discussing or demonstrating software development concepts, with various degrees of success. I want to tell you about this series called CSS3 in 30 Days which is very well organized and presented by a clearly talking and articulate developer, Brad Hussey. He's nice, he's Canadian.

And before you go all "I am a software programmer, not a web designer", try watching it for a bit. Personally I am sure I would have solved a lot of what he demos using Javascript. Changing your perspective is a good thing. Note it is about CSS3, which had quite a few improvements over the classical CSS you may know already.

Here is the first "day" of the tutorial:

[embedded video: CSS3 in 30 Days, day 1]
There is a way to execute an installation using msiexec, like this: msiexec /i MySetup.msi /l*v "mylog.log", but what if you routinely install stuff on a machine and want to be able to read the log only when there is a problem? Then you can use the group policy editor to set it up:

  1. Run "Edit group policy" (gpedit.msc)
  2. Go to Computer Configuration → Administrative Templates → Windows Components → Windows Installer
  3. Select "Specify the types of events Windows Installer records in its transaction log"
  4. Select "Enabled"
  5. Type any of the letters in 'voicewarmupx' in the Logging textbox
  6. Click OK





This will create the following registry entry:
[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Installer]
"Logging"="voicewarmupx"
"Debug"=dword:00000007

Warning: this setting will add time to the installation process based on the options selected. Here is a list of the possible options:
  • v - Verbose output
  • o - Out-of-disk-space messages
  • i - Status messages
  • c - Initial UI parameters
  • e - All error messages
  • w - Non-fatal warnings
  • a - Start up of actions
  • r - Action-specific records
  • m - Out-of-memory or fatal exit information
  • u - User requests
  • p - Terminal properties
  • + - Append to existing file
  • ! - Flush each line to the log
  • x - Extra debugging information. The "x" flag is available only on Windows Server 2003 and later operating systems, and on the MSI redistributable version 3.0, and on later versions of the MSI redistributable.

The log files will be found in your %TEMP% folder, usually C:\Users\[your user]\AppData\Local\Temp.


I wrote some unit tests today (to see if I still feel) and I found a pattern that I believe I will use in the future as well. The problem was that I was testing a class that had multiple services injected in the constructor. Imagine a BigService that needs to receive instances of SmallService1, SmallService2 and SmallService3 in the constructor. In reality there will be more, but I am trying to keep this short. While a dependency injection framework handles this for the code, for unit tests these dependencies must be mocked.
C# Example (using the Moq framework for .NET):

// setup mock for SmallService1 which implements the ISmallService1 interface
var smallService1Mock = new Mock<ISmallService1>();
smallService1Mock
    .Setup(srv => srv.SomeMethod())
    .Returns("some result");
var smallService1 = smallService1Mock.Object;
// repeat for 2 and 3
var bigService = new BigService(smallService1, smallService2, smallService3);


My desire was to keep tests separate. I didn't want methods that populate fields of the test class, so that tests running in parallel wouldn't be a problem. But still, I didn't want to copy paste the same test a few times only to change some parameter or another. And the many local variables for mocks and objects really bothered me. Moreover, there were some operations that I really wanted in all my tests, like a mock of a service that executed an action it received as a parameter, with some extra work before and after. I wanted that in all test cases, the mock would just execute the action.

Therefore I got to writing something like this:

public class BigServiceTester {
  public Mock<ISmallService1> SmallService1Mock {get;private set;}
  public Mock<ISmallService2> SmallService2Mock {get;private set;}
  public Mock<ISmallService3> SmallService3Mock {get;private set;}
 
  public ISmallService1 SmallService1 => SmallService1Mock.Object;
  public ISmallService2 SmallService2 => SmallService2Mock.Object;
  public ISmallService3 SmallService3 => SmallService3Mock.Object;
 
  public BigServiceTester() {
    SmallService1Mock = new Mock<ISmallService1>();
    SmallService2Mock = new Mock<ISmallService2>();
    SmallService3Mock = new Mock<ISmallService3>();
  }
 
  public BigServiceTester SetupDefaultSmallService1() {
    SmallService1Mock
      .Setup(ss1=>ss1.Execute(It.IsAny<Action>()))
      .Callback<Action>(a=>a());
    return this;
  }
 
  public IBigService GetService() =>
    new BigService(SmallService1, SmallService2, SmallService3);
}
 
// and here is how to use it in an XUnit test
[Fact]
public void BigServiceShouldDoStuff() {
  var pars = new BigServiceTester()
    .SetupDefaultSmallService1();
 
  pars.SmallService2Mock
    .Setup(ss2=>ss2.SomeMethod())
    .Returns("some value");
 
  var bigService=pars.GetService();
  var stuff = bigService.DoStuff();
  Assert.Equal("expected value", stuff);
}

The idea is that everything related to BigService is encapsulated in BigServiceTester. Tests will be small because one only creates a new instance of tester, then sets up code specific to the test, then the tester will also instantiate the service, followed by the asserts. Since the setup code and the asserts depend on the specific test, everything else is reusable, but also encapsulated. There is no setup code in the constructor but a fluent interface is used to easily execute common solutions.

As a further step, tester classes can implement interfaces, so for example if some other class needs an instance of ISmallService1, all it has to do is implement INeedsSmallService1 and the SetupDefaultSmallService1 method can be written as an extension method. I kind of like this way of doing things. I hope you do, too.
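
Here is a sketch of that last idea; INeedsSmallService1 is my made-up name from the paragraph above:

public interface INeedsSmallService1
{
  Mock<ISmallService1> SmallService1Mock { get; }
}

public static class TesterExtensions
{
  public static T SetupDefaultSmallService1<T>(this T tester)
    where T : INeedsSmallService1
  {
    tester.SmallService1Mock
      .Setup(ss1 => ss1.Execute(It.IsAny<Action>()))
      .Callback<Action>(a => a());
    return tester;
  }
}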