So I was watching this Entity Framework presentation and I noticed one example that looked like this:
db.ExecuteSqlCommand($"delete from Log where Time<{time}");

Was this an invitation to SQL injection? Apparently not, since the resulting SQL was something like DELETE FROM Log WHERE Time < @_p0. But how could that be? Enter FormattableString, a class implementing the venerable IFormattable interface, available in the .NET Framework only from version 4.6 and in .NET Core from the very beginning. When an interpolated string is assigned to a FormattableString, the compiler produces an instance that holds the format string and the argument values separately, before any formatting takes place. In our case, ExecuteSqlCommand has a FormattableString overload. Note that the method is an extension method from RelationalDatabaseFacadeExtensions, not Database.ExecuteSqlCommand.

Let's test this with a little program:
using System;

class Program
{
    static void Main(string[] args)
    {
        var timeDisplay = new TimeDisplay();
        Test($"Time display:{timeDisplay}");
        Console.ReadKey();
    }

    private static void Test(string text)
    {
        Console.WriteLine(text);
    }

    private class TimeDisplay
    {
        public override string ToString()
        {
            return DateTime.Now.ToString("s");
        }
    }
}

Here I create an instance of TimeDisplay and then use it in an interpolated string which is then sent to the Test method, which Console.WriteLines it. The ToString method of TimeDisplay is overridden to display the current time. The result is predictable: Time display:2018-12-13T11:24:02. I will then change the type of the parameter of Test to FormattableString. It still works and displays the same thing. Note that if you have both a FormattableString and a string overload of the same method, the string overload wins when an interpolated string is sent as a parameter!
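A quick way to verify the overload preference (my own check, not from the original presentation):

private static void Test(string text)
{
    Console.WriteLine("string overload");
}

private static void Test(FormattableString text)
{
    Console.WriteLine("FormattableString overload");
}

// Test($"{DateTime.Now}"); prints "string overload",
// because the natural type of an interpolated string is string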

But what do I get in that instance? Let's change the Test method even more:
private static void Test(FormattableString text)
{
    Console.WriteLine($"Format: {text.Format} " +
        $"ArgumentCount: {text.ArgumentCount} " +
        $"Arguments: {string.Join(", ", text.GetArguments())}");
}

The displayed result of the program is now Format: Time display:{0} ArgumentCount: 1 Arguments: 2018-12-13T11:28:35. Note that the argument is in fact a TimeDisplay instance and it is displayed as a time stamp because of the ToString override.

What does this mean?

Well, we can do great things like Entity Framework does: interpret the intent of the developer and provide a more informed output. I am considering this as a solution for logging. Logger.LogDebug($"{someObjectWithAHeavyToString}") would then not have to execute the ToString() method of the object unless the Debug log level is enabled, for example.
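One possible shape for that, as a minimal sketch using Microsoft.Extensions.Logging (the extension method name LogDebugDeferred is mine, not part of any library):

using System;
using Microsoft.Extensions.Logging;

public static class DeferredLoggingExtensions
{
    public static void LogDebugDeferred(this ILogger logger, FormattableString message)
    {
        // the expensive ToString() calls only happen when Debug is actually enabled,
        // because composing the final string is what triggers the formatting
        if (logger.IsEnabled(LogLevel.Debug))
        {
            logger.LogDebug(message.ToString());
        }
    }
}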

But we can also really mess things up. I will skip past the possible yet unlikely security problem where you believe you are passing an object's string representation and in fact you are passing the entire object, allowing a malicious library to do whatever it wants with it. Let's consider more probable scenarios.

One is that a code reviewer will tell you "put magic strings in their own variables or constants", so you take the string sent to Test and automatically move it to a local variable (which Visual Studio will create as a FormattableString), then you replace the explicit type with var (because the type is obvious, right?). Suddenly the test variable is a string.
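Sketched on the earlier example (hypothetical refactoring steps, following the scenario above):

// before: binds to Test(FormattableString)
Test($"Time display:{timeDisplay}");

// after "extract variable": still works, the type is explicit
FormattableString message = $"Time display:{timeDisplay}";
Test(message);

// after replacing the type with var: the natural type of an interpolated
// string is string, so this binds to Test(string) and the values are
// formatted into the string on the spot
var message2 = $"Time display:{timeDisplay}";
Test(message2);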

Another is even worse, although if you decided to code like this you have other issues. Let's get back to something similar to the original example:
db.ExecuteSqlCommand($"delete from Log where Id = {id}");

And let's change it:
var sql=$"delete from Log where Id = {id}";
db.ExecuteSqlCommand(sql);

Now sql is a string and its value is computed from the id, which might be provided by the user. Replace the id with Bobby Tables and you've got a nice SQL injection.

Conclusion: an interesting, if somewhat confusing, concept. Other than the logging idea, which I admit is pretty interesting, I have yet to find a good place to use it.

The name says it all: "The Everything Creative Writing Book: All you need to know to write novels, plays, short stories, screenplays, poems, articles, or blogs". Maybe too much. In this book, Wendy Burt-Thomas takes a holistic approach to writing, discussing everything from how to write poetry, children's books, blogs and technical specs to how to find an agent, self-publish and so on. It covers writing techniques and editing advice, writer's block solutions, how to deal with rejection (or success, for that matter) and much more. In that regard, the book is awesome: it shows everything you might want to know a little about in order to decide what you actually choose to do. But, like that Nicholas Butler quote, "An expert is one who knows more and more about less and less until he knows absolutely everything about nothing", the book is probably not very useful to someone who has already started working on things.

That said, the book is compact and to the point and can help a lot at the very beginning of a writer's journey. It can be used as a reference: whenever a particular subject or concern appears, you just flip to that chapter and see what Wendy recommends. Is it good advice? I have no idea. I've certainly read books that go more in depth on topics that interested me more, like how to write a novel or how to set up a scene, but a panoramic view of the business is not bad either. The material also felt a little dated for something released in 2010, especially in the technical sections.

You choose if you find it useful or not.

Intro


An adapter is a software pattern that exposes functionality through an interface different from the original one. Let's say you have an oven with the function Bake(int temperature, TimeSpan time) and you expose a MakePizza() interface. It still bakes at a specific temperature for an amount of time, but you use it differently. Sometimes we have similar libraries with a common goal but different scopes, and one is tempted to hide them under a common adapter. You might want to just cook things, not bake or fry.

So here is a post about good practices of designing a library project (complete with the use of software patterns, ugh!).



Examples


An example in .NET would be the WebRequest.Create method. It receives a URI as a parameter and, based on its type, returns a different implementation that will handle the resource in the way declared by WebRequest. For HTTP it will use an HttpWebRequest, for FTP an FtpWebRequest, for file access a FileWebRequest and so on. They are all implementations of the abstract class WebRequest, which would be our adapter. The Create method itself is an example of the factory method pattern.

But there are issues with this. Let's assume that we have different libraries/projects that each handle a specific resource scope. They may be so different as to be managed by different organizations. A team works on files, another on HTTP, and the FTP one is an open source third party library. Your team works on the WebRequest class and has to consider the implications of having a Create factory method. Is there a switch in there? "If the URI starts with http or https, return a new HttpWebRequest"? In that case, your WebRequest library would need to depend on the library that contains HttpWebRequest! And that's just not possible, since it would be a circular reference. Even if your project had control over all implementations, it would still be a bad idea to let a base class know about a derived class. If you move the factory into a factory class, it still means your adapter library has to depend on every implementation of the common interface. As Joe Armstrong would say: "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

So how did Microsoft solve it? Well, they moved the implementation of the factory into separate creator classes that implement IWebRequestCreate. Then they used configuration to associate a prefix with an implementation of WebRequest. Guess you didn't know that, did you? You can register your own implementations via code or configuration! It's such an obscure feature that if you Google WebRequestModulesSection you mostly get links to source code.
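The code route uses the real WebRequest.RegisterPrefix API; MyWebRequestCreator below is a hypothetical class of mine:

// MyWebRequestCreator must implement IWebRequestCreate, whose single
// Create(Uri) method returns your custom WebRequest subclass
WebRequest.RegisterPrefix("custom:", new MyWebRequestCreator());

// from now on, Create dispatches "custom:" URIs to your implementation
var request = WebRequest.Create("custom://example/resource");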

Another very successful example of an adapter library is jQuery. Yes, the one they now say you don't need anymore; it took the industry only 12 years to catch up, after all. Anyway, at the time there were very different implementations of what people thought a web browser should be. The way the DOM was represented, the Javascript objects and methods, the way they actually worked compared to the way they should have worked: everything was different. So developers were often either favoring one browser over the others or were forced to write code for each possible version. Something like "if Internet Explorer, do A; if Netscape, do B". The problem with this is that if you tried to use a browser that was neither Internet Explorer nor Netscape, it would either break or show you one of those annoying "browser not supported" messages.

Enter jQuery, which abstracted access over all these different interfaces with a common (and very nicely designed) one. Not only did it have a fluent interface that allowed you to do multiple things with a single target (stuff like $('#myElement').show().css({opacity:0.7}).text('My text');), but it was extensible, allowing third parties to add modules that would allow even more functionality ($('#myElement').doSomethingCool();). Sound familiar? Extensibility seems to be an important common feature of well designed adapters.

Speaking of jQuery, one heavily used feature was jQuery.browser, which told you what browser you were using. It had very sophisticated and complex code to get around the quirks of every browser out there. Now you had the ability to do something like if ($.browser.msie) say('OMG! You like Microsoft, you must suck!');. Guess what: the browser extension was deprecated in jQuery 1.9, and not because it was not inclusive. Well, that's the actual reason, but from a technical point of view, not political correctness. You see, now you have all this brand new interface that works great on all browsers, and yet your browser still can't access a page correctly. It's either an untested version of a particular browser, or a different type of browser, or the conditions for letting the user in were too restrictive.

The solution was to rely on feature detection, not product versions. For example you use another Javascript library called Modernizr and write code like if (Modernizr.localstorage) { /* supported */ } else { /* not-supported */ }. There are so many possible features to detect that Modernizr lets you pick and choose the ones you need and then constructs the library that handles each instead of bundling it all in one huge package. They are themselves extensible. You might ask what all this has to do with libraries in .NET. I am getting there.

The last example: Entity Framework. This is a hugely popular framework for database access from Microsoft. It would abstract the type of the database behind a very nice (also fluent) interface in .NET code. But how does it do that? I mean, what if I need SQL Server? What if I want MongoDB or PostgreSQL?

The answer is having different "providers" translate .NET code Expressions into whatever the storage needs. The individual providers are added as dependencies to your project, without the need for Entity Framework to know about them. Then they are configured for use in code, because they implement some common interfaces, and they are ready for use.
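As a sketch in Entity Framework Core terms (connection strings are placeholders), swapping storage amounts to swapping the provider package and one configuration call:

using Microsoft.EntityFrameworkCore;

public class MyContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // the SQL Server provider lives in the Microsoft.EntityFrameworkCore.SqlServer package
        optionsBuilder.UseSqlServer("Server=.;Database=MyDb;Trusted_Connection=True;");
        // for PostgreSQL you would reference the Npgsql provider package instead
        // and call optionsBuilder.UseNpgsql("...") here
    }
}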

Principles for adapters


So now we have some idea about what is good in an adapter:
  • Ease of use
  • Common interface
  • Extensibility
  • No direct dependency between the interface and what is adapted
  • An interface per feature

Now that I wrote it down, it sounds kind of weird: the interface should not depend on what it adapts. It is correct, though. In the case of Entity Framework, for example, the provider for MySql is an adapter between MySql's own interface and the .NET interfaces declared by Entity Framework; interfaces are just declarations of what something should do, not implementations.

Picture time!


The factory and the common interface live together in one library, which is the one your project will use. Each individual adapter depends on that library as well, but your project doesn't need to know about any particular adapter until it's needed.

Now, it's your choice whether you register the adapters dynamically (so, let's say, you load the .dll and extract the objects that implement a specific interface, and they know themselves what they apply to, like FtpWebRequest for ftp: strings) or you add dependencies on individual adapters to your project and then manually register them yourself, strongly typed; the dynamic approach is sketched below. The important thing is that you don't reference the factory library and automatically get all the possible implementations added to your project.
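A hypothetical sketch of the dynamic approach; ITargetAdapter, its Prefix property and TargetFactory.Register are my inventions, mirroring this article's naming, and only illustrate the shape:

using System;
using System.Linq;
using System.Reflection;

public static class AdapterLoader
{
    public static void RegisterAdaptersFrom(string dllPath)
    {
        // scan a plugin assembly for concrete adapter implementations
        var assembly = Assembly.LoadFrom(dllPath);
        var adapterTypes = assembly.GetTypes()
            .Where(t => typeof(ITargetAdapter).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface);
        foreach (var type in adapterTypes)
        {
            var adapter = (ITargetAdapter)Activator.CreateInstance(type);
            // each adapter knows what it applies to, e.g. "ftp:"
            TargetFactory.Register(adapter.Prefix, adapter);
        }
    }
}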

It seems I've covered all points except the last one. That is pretty important, so read on!

Imagine that the things you want to adapt are not really that similar. You want to force them into a common shape, but there will be bits that are specific to one domain only and you might want them. Now here is an example of how NOT to do things:
var target = new TargetFactory().Get(connectionString);
if (target is SomeSpecificTarget specificTarget)
{
    specificTarget.Authenticate(username, password);
}
target.DoTargetStuff();
In this case I use the adapter for Target, but then bring in the knowledge of a specific target called SomeSpecificTarget and use a method that I just know is there. This is bad for several reasons:
  1. For someone to understand this code they must know what SomeSpecificTarget does, invalidating the concept of an adapter
  2. I need to know that for that specific connection string a certain type will always be returned, which might not be the case if the factory changes
  3. I need to know how SomeSpecificTarget works internally, which might also change in the future
  4. I must add a dependency to SomeSpecificTarget to my project, which is at least inconsistent as I didn't add dependencies to all possible Target implementations
  5. If different types of Target will be available, I will have to write code for all possibilities
  6. If new types of Target become available, I will have to change the code for each new addition to what is essentially third party code

And now I will show you two different versions that I think are good. The first is simple enough:
var target = new TargetFactory().Get(connectionString);
if (target is IAuthenticationTarget authTarget)
{
    authTarget.Authenticate(username, password);
}
target.DoTargetStuff();
No major change, other than I am checking whether the target implements IAuthenticationTarget (which would best be an interface in the common interface project). Now every target that requires (or will ever require) authentication will receive the credentials without the need to change your code.
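The interface itself could be as small as this (my sketch, following the article's naming):

public interface IAuthenticationTarget
{
    void Authenticate(string username, string password);
}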

The other solution is more complex, but it allows for greater flexibility:
var serviceProvider = new TargetFactory()
    .GetServiceProvider(connectionString);
var target = serviceProvider.Get<ITargetProvider>()
    .Get();
serviceProvider.Get<ICredentialsManager>()
    ?.AddCredentials(target, new Credentials(username, password));
target.DoTargetStuff();
So here I am not getting a target, but a service provider (which is another software pattern, BTW), based on the same connection string. This provider will give me implementations of a target provider and a credentials manager. Now I don't even need to have a credentials manager available: if it doesn't exist, this will do nothing. If I do have one, it will decide by itself what to do with the credentials for a target. Does it need to authenticate now or later? You don't care. You just add the credentials and let the provider decide what needs to be done.

This last approach is related to the concept of inversion of control. Your code declares intent while the framework decides what to do. I don't need to know of the existence of specific implementations of Target or indeed of how credentials are being used.

Here is the final version, using extension methods in a method chaining fashion, similar to jQuery and Entity Framework, in order to reinforce that Ease of use principle:
// your code
var target = new TargetFactory()
    .Get(connectionString)
    .WithCredentials(username, password);

// in a static extensions class

public static Target WithCredentials(this Target target, string username, string password)
{
    target.Get<ICredentialsManager>()
        ?.AddCredentials(target, new Credentials(username, password));
    return target;
}

public static T Get<T>(this Target target)
{
    return target.GetServiceProvider()
        .Get<T>();
}
This assumes that a Target has a method called GetServiceProvider which will return the provider for any interface required so that the whole code is centered on the Target type, not IServiceProvider, but that's just one possible solution.

Conclusion


As long as the principles above are respected, your library should be easy to use and easy to extend without the need to change existing code or consider individual implementations. The projects using it will only use the minimum amount of code required to do the job and will themselves depend only on interface declarations. As long as those are respected, the code will work without change. It's really meta: if you respect the interface described in this blog post, then all interfaces will be respected in the code all the way down! Only some developer locked in a cellar somewhere will need to know how things are actually getting done.

No natural phenomenon, except maybe fire, seems more alive than the mist. But while fire is young, angry, destructive, mist is an old grumpy creature, moving slowly, hiding itself in contradictions. It doesn't hide distant features as much as it reveals close ones through contrast; it doesn't absorb light as much as it lets itself glow around sources of illumination; it makes sounds crystal clear by covering the constant hum of far off noise; dense and yet immaterial, its blanket-like qualities offset by its cold embrace. Never more lifelike than when it clings to a still surface of water, where the slightest gust of wind prompts annoyed tendrils and every move of another living thing elicits mirrored acts, dreamlike, half finished motions forgotten before they even end. If mist could only remember...

I started reading The Things They Carried as a recommendation for writing style and it is, indeed, a very deep personal work. Tim O'Brien writes about the Vietnam war in most if not all of his work, but this novella is a collection of short stories all brought together under the mantle of a sort of a confession. It's a mosaic, each piece beautiful, but together creating the artistic vision of the true war.

I liked the subtlety most of all. The characters are not overly complex, but they are portrayed in a very personal manner, with details that are important for the overall meaning of the book. I loved how O'Brien described soldiers going to war (instead of running away to Canada, as he almost did) because they were too embarrassed not to. They died in the war because they were afraid to die of shame. Too cowardly to run.

At just 150 pages, the book shows not how the training went or how the shooting was; it presents everything from the viewpoint of the people there. How war takes over every feeling you have, how it changes you into a creature that is completely different from the man (or woman) who left. It's not about maneuvers or tactical prowess or strategies of survival. They are all meaningless. The important part is to keep a semblance of sanity.

The title refers to the trinkets people carry to remind them of who they are. And they carry much more: hopes, wounds, fears, diseases, the ever-growing arsenal of pointless weapons and ammunition and so on. A bit depressing, but a damn good read.

I was attempting to add very detailed logging (tracing) to my application. In order to do that, I had to change hundreds of objects. That wouldn't do. Fortunately, .NET has a nice feature called RealProxy. Let's see a quick and dirty example:
using System;
using Newtonsoft.Json;

class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = new Test();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException(nameof(parameter1), "Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

So I defined an object with a field, a property and a method. That's what my application does. It then displays the object and the result of the method call. Here is the result:
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

I would like to log everything that happens with my Test object. Well, as such I can't do anything, I need to change the code a bit, like this:
using System;
using System.Linq;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;
using Newtonsoft.Json;

class Program
{
    static void Main(string[] args)
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
            NullValueHandling = NullValueHandling.Ignore,
            Formatting = Formatting.Indented
        };

        Test test = (Test)new LoggingProxy(new Test()).GetTransparentProxy();
        test.field = "test";
        test.Property = "Test";
        string outObject;
        string refObject = "ref";
        var result = test.Method("test1", "test2", ref refObject, out outObject);
        Console.WriteLine(JsonConvert.SerializeObject(new object[] {
            test,
            result,
            refObject,
            outObject
        }));
        Console.ReadKey();
    }
}

class Test : MarshalByRefObject
{
    public string field;
    public string Property { get; set; }

    public string Method(string parameter1, string parameter2,
        ref string refObject, out string outObject)
    {
        if (parameter1 == null)
            throw new ArgumentNullException(nameof(parameter1), "Parameter one cannot be null");
        refObject += " reffed";
        outObject = "outed";
        return $"{parameter1} : {parameter2}";
    }
}

class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            var result = methodCall.MethodBase.Invoke(_target, arguments);
            return new ReturnMessage(
                result,
                arguments,
                arguments.Length,
                methodCall.LogicalCallContext,
                methodCall);
        }
        return null;
    }
}

So this is what I did above:
  • I've inherited my Test class from MarshalByRefObject
  • I've created a LoggingProxy class that inherits from RealProxy and implements the Invoke method
  • I've replaced new Test(); with (Test)new LoggingProxy(new Test()).GetTransparentProxy();

Running it we get the same result.

Time to add some logging. I will write stuff on the Console, too, for this demo. Here are the changes to the LoggingProxy class:
class LoggingProxy : RealProxy
{
    private readonly object _target;

    public LoggingProxy(object obj) : base(obj?.GetType())
    {
        _target = obj;
    }

    public override IMessage Invoke(IMessage msg)
    {
        if (msg is IMethodCallMessage methodCall)
        {
            var arguments = methodCall.Args.ToArray();
            string typeName;
            try
            {
                typeName = Type.GetType(methodCall.TypeName).Name;
            }
            catch
            {
                typeName = methodCall.TypeName;
            }
            try
            {
                Console.WriteLine($"Called {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)})");
                var result = methodCall.MethodBase.Invoke(_target, arguments);
                Console.WriteLine($"Success for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{JsonConvert.SerializeObject(result)}");
                return new ReturnMessage(
                    result,
                    arguments,
                    arguments.Length,
                    methodCall.LogicalCallContext,
                    methodCall);
            }
            catch (Exception exception)
            {
                Console.WriteLine($"Error for {typeName}.{methodCall.MethodName}" +
                    $"({JsonConvert.SerializeObject(arguments)}): " +
                    $"{exception}");
                return new ReturnMessage(exception, methodCall);
            }
        }
        return null;
    }
}

It's the same as before, but with a try/catch block and some extra Console.WriteLines. Here is the output:
Called Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
])
Success for Object.FieldSetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.set_Property([
  "Test"
])
Success for Test.set_Property([
  "Test"
]): null
Called Test.Method([
  "test1",
  "test2",
  "ref",
  null
])
Success for Test.Method([
  "test1",
  "test2",
  "ref reffed",
  "outed"
]): "test1 : test2"
Called Object.GetType([])
Success for Object.GetType([]): "RealProxyTests2.Test, RealProxyTests2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
Called Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  null
])
Success for Object.FieldGetter([
  "RealProxyTests2.Test",
  "field",
  "test"
]): null
Called Test.get_Property([])
Success for Test.get_Property([]): "Test"
[
  {
    "field": "test",
    "Property": "Test"
  },
  "test1 : test2",
  "ref reffed",
  "outed"
]

A bounty of information. The first lines are the setting of the field, Property and the execution of Method. But then there are a GetType, a field getter and a Property getter. What's that about? That's JsonConvert, serializing the Test proxy.
Warning: I've copied methodCall.Args into a local variable called arguments. At first glance it might appear superfluous, but methodCall.Args is not really an array of objects, even if it appears that way in the debugger. It is read-only: changing any of its items has no effect.

Remember how we defined the proxy in Program.Main? We can hide that behind a static method on our proxy class; let's call it Wrap:
public static T Wrap<T>(T obj)
{
    if (obj is MarshalByRefObject marshalByRefObject)
    {
        return (T)new LoggingProxy(marshalByRefObject).GetTransparentProxy();
    }
    return obj;
}
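The call site in Program.Main then becomes (assuming Wrap is declared on LoggingProxy):

Test test = LoggingProxy.Wrap(new Test());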

Conclusion:

While this works, there are a number of issues with it:
  1. RealProxy is only available for .NET Framework, not .NET Core (see DispatchProxy for that)
  2. Serialization is not as straightforward as in this demo (see my blog post about Newtonsoft serialization)
  3. You need to inherit from MarshalByRefObject, so this solution doesn't work for classes that already have a base class
  4. Performance wise, you should make sure you are not logging or serializing anything unless the correct log level is set (something like logger.LogTrace(JsonConvert.SerializeObject(...)) would not work, because the JSON serialization occurs regardless, before LogTrace even executes; see the sketch after this list)
  5. Also, this wrapping is not free. With a simple proxy that did nothing but execute the code, it took 43 times longer to run. Of course, that's because the actual execution of setting properties or returning a string is basically zero. When I added a Thread.Sleep(1) in the method, it took almost the same amount of time. Just don't use it in performance sensitive applications
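For point 4, the guard could look like this (a minimal sketch, assuming a Microsoft.Extensions.Logging-style ILogger; the important part is that serialization is skipped entirely when tracing is off):

if (logger.IsEnabled(LogLevel.Trace))
{
    // only now do we pay for the serialization
    logger.LogTrace(JsonConvert.SerializeObject(arguments));
}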

Final thoughts: in the same namespace as RealProxy there is a ProxyAttribute class that at first glance seems even better: you just decorate a class with the attribute and BANG! instant AOP. But it's not that simple. First of all, it works only on objects that inherit from ContextBoundObject, which itself inherits from MarshalByRefObject. And while it seems like a good idea to just replace MarshalByRefObject with ContextBoundObject in the code above, know that no generic class can inherit from it. There might be other restrictions, too. If you make Test inherit from ContextBoundObject, the debugger will already show new Test() as being a transparent proxy, without wrapping it with any code. It might still be usable in certain conditions, though.

I was trying to log some stuff (a lot of stuff) and I noticed that my memory usage went through the roof (16GB in a few seconds), then an OutOfMemoryException was thrown. I finally narrowed it down to the JSON serializer from Newtonsoft.

First of all, some introduction on how to serialize any object into JSON: whether you use JsonConvert.SerializeObject(yourObject, new JsonSerializerSettings { <settings> }) or new JsonSerializer { <settings> }.Serialize(writer, object) (where <settings> are some properties set via the object initializer syntax), you will need to consider these properties: Formatting, ReferenceLoopHandling, NullValueHandling and PreserveReferencesHandling.
We will use these classes to test the results:
class Parent
{
    public string Name { get; set; }
    public Child Child1 { get; set; }
    public Child Child2 { get; set; }
    public Child[] Children { get; set; }
    public Parent Self { get; internal set; }
}

class Child
{
    public string Name { get; set; }
    public Parent Parent { get; internal set; }
}

For this piece of code:
JsonConvert.SerializeObject(new Child { Name = "other child" }, settings)
you will get either
{"Name":"other child","Parent":null}
or
{
  "Name": "other child",
  "Parent": null
}
based on whether we use Formatting.None or Formatting.Indented. The other properties do not affect the serialization (yet).

Let's set up the following objects:
var child = new Child
{
    Name = "Child name"
};
var parent = new Parent
{
    Name = "Parent name",
    Child1 = child,
    Child2 = child,
    Children = new[]
    {
        child, new Child { Name = "other child" }
    }
};
parent.Self = parent;
child.Parent = parent;

As you can see, not only does parent have multiple references to child and one to itself, but the child also references the parent. If we try the code
JsonConvert.SerializeObject(parent, settings)
we will get an exception: Newtonsoft.Json.JsonSerializationException: 'Self referencing loop detected for property 'Parent' with type 'Parent'. Path 'Child1'.'. In order to avoid this, we can use ReferenceLoopHandling.Ignore, which tells the serializer to ignore circular references. Here is the output when using
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};

{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child",
      "Parent": null
    }
  ]
}

If we add NullValueHandling.Ignore we get
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": [
    {
      "Name": "Child name"
    },
    {
      "Name": "other child"
    }
  ]
}
(the "Parent": null bit is now gone)

The default for the ReferenceLoopHandling property is ReferenceLoopHandling.Error, which throws the serialization exception above, but besides Error and Ignore we can also use ReferenceLoopHandling.Serialize. In that case we get a System.StackOverflowException: 'Exception of type 'System.StackOverflowException' was thrown.' as it tries to serialize ad infinitum.

PreserveReferencesHandling is rather interesting. It creates extra properties for objects like $id, $ref or $values and then uses those to define objects that are circularly referenced. Let's use this configuration:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};

Then the result will be
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": [
    {
      "$ref": "2"
    },
    {
      "$id": "3",
      "Name": "other child"
    }
  ],
  "Self": {
    "$ref": "1"
  }
}

Let's try PreserveReferencesHandling.Arrays:
var settings = new JsonSerializerSettings
{
    Formatting = Formatting.Indented,
    NullValueHandling = NullValueHandling.Ignore,
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.Arrays
};

The result will then be
{
  "Name": "Parent name",
  "Child1": {
    "Name": "Child name"
  },
  "Child2": {
    "Name": "Child name"
  },
  "Children": {
    "$id": "1",
    "$values": [
      {
        "Name": "Child name"
      },
      {
        "Name": "other child"
      }
    ]
  }
}
which annoyingly adds an $id to the Children array. There is one more possible value, PreserveReferencesHandling.All, which causes this output:
{
  "$id": "1",
  "Name": "Parent name",
  "Child1": {
    "$id": "2",
    "Name": "Child name",
    "Parent": {
      "$ref": "1"
    }
  },
  "Child2": {
    "$ref": "2"
  },
  "Children": {
    "$id": "3",
    "$values": [
      {
        "$ref": "2"
      },
      {
        "$id": "4",
        "Name": "other child"
      }
    ]
  },
  "Self": {
    "$ref": "1"
  }
}

I personally recommend using PreserveReferencesHandling.Objects, which doesn't need the ReferenceLoopHandling property set at all. Unfortunately, it adds an $id to every object, even if it is not circularly referenced. However, it creates an object that can be safely deserialized back into the original. If you just want a quick and dirty output of the data in an object, use ReferenceLoopHandling.Ignore with NullValueHandling.Ignore. Note that object references cannot be preserved when a value is set via a non-default constructor, such as for types that implement ISerializable.

Warning, though: this is still not enough! In my logging code I had used ReferenceLoopHandling.Ignore and the exception was quite different, an OutOfMemoryException. It seems that even with circular references handled, JsonSerializer will sometimes mess up.

The culprits? Task<T> (or async lambdas sent as parameters) and an Entity Framework context object. The solution I employed was to check the type of the objects I send to the serializer and, if they are any of the offending types, replace them with the full names of their types, as in the sketch below.
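Something along these lines (a sketch of my approach; the exact type checks depend on what blows up in your code, and DbContext here is the Entity Framework context base class):

private static object Sanitize(object value)
{
    // Task (which also covers Task<T>) and DbContext instances make the
    // serializer walk huge or self-referencing graphs, so log only their type names
    if (value is Task || value is DbContext)
    {
        return value.GetType().FullName;
    }
    return value;
}

// usage: JsonConvert.SerializeObject(arguments.Select(Sanitize), settings)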

Hope it helps!

This is a simple gotcha related to changing the color of a control in a WiX installer UI. Let's say you have a label that you want to present in a different color. Normally you would do something like this:
<!-- I put it somewhere in Product.wxs -->
<TextStyle Id="WixUI_Font_Normal_Red" FaceName="Tahoma" Size="8" Red="255" Green="55" Blue="55" />

<!-- somewhere in your UI -->
<Control Id="LabelRed" Type="Text" X="62" Y="200" Width="270" Height="17" Property="MYPROPERTY">
  <Text>{\WixUI_Font_Normal_Red}!(loc.MYPROPERTY)</Text>
</Control>

Yet for some reason it doesn't work when the control is a checkbox, for example. The simple explanation is that this is by design: only text controls can change color. The solution is to split your control into the functional control without text, then add a text control next to it with the color you need.

Here is an example of a checkbox that changes the label color based on the check value:
        <Control Id="DoNotRunScriptsCheckbox" Type="CheckBox" X="45" Y="197" Height="17" Width="17" Property="DONOTRUNSCRIPTS" CheckBoxValue="1"/>
 
<Control Id="DoNotRunScriptsLabel" Type="Text" X="62" Y="200" Width="270" Height="17" CheckBoxPropertyRef="DONOTRUNSCRIPTS">
<Text>!(loc.DoNotRunScriptsDescription)</Text>
<Condition Action="hide"><![CDATA[DONOTRUNSCRIPTS]]></Condition>
<Condition Action="show"><![CDATA[NOT DONOTRUNSCRIPTS]]></Condition>
</Control>
 
<Control Id="DoNotRunScriptsLabelRed" Type="Text" X="62" Y="200" Width="270" Height="17" CheckBoxPropertyRef="DONOTRUNSCRIPTS">
<Text>{\WixUI_Font_Normal_Red}!(loc.DoNotRunScriptsDescription)</Text>
<Condition Action="hide"><![CDATA[NOT DONOTRUNSCRIPTS]]></Condition>
<Condition Action="show"><![CDATA[DONOTRUNSCRIPTS]]></Condition>
</Control>

So you have one of those annoyingly XMLish setups from Windows Installer and you want to preserve the values you input so they are prefilled at future upgrades. There are a lot of articles on the Internet about how to do this, but all of them seem to be missing something. I am sure this one will, too, but it worked for me.

Let's start with a basic setup.
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Fragment>
    <UI Id="DatabaseAuthenticationDialogUI">
      <Property Id="DATABASEDOMAIN" Secure="yes"/>
      <Dialog
        Id="DatabaseAuthenticationDialog"
        Width="370"
        Height="270"
        Title="[ProductName] database authentication"
        NoMinimize="yes">
        <Control Id="DatabaseDomainLabel" Type="Text" X="45" Y="110" Width="100" Height="15" TabSkip="no" Text="!(loc.Domain):" />
        <Control Id="DatabaseDomainEdit" Type="Edit" X="45" Y="122" Width="220" Height="18" Property="DATABASEDOMAIN" Text="{80}"/>

So this is a database authentication dialog, with only the relevant lines in it. We have a property defined as DATABASEDOMAIN and then an edit control that edits this property. Ideally, we would want to make sure this property is saved somewhere at the end of the install and retrieved before the install to be populated. To do this, we will first define a SAVED_DATABASEDOMAIN property, load/save it in the registry, then link it with DATABASEDOMAIN.

First, there is the issue of where to put this code. Personally, I put it all under Product, as a separate mechanism for preserving and loading values. I am sure there are other places in your XML files where you can do it. Here is how my Product.wxs code looks (just the relevant lines):
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"
     xmlns:util="http://schemas.microsoft.com/wix/UtilExtension">
  <Product ... >
    <Property Id="SAVED_DATABASEDOMAIN" Secure="yes">
      <RegistrySearch Id="FindSavedDATABASEDOMAIN"
                      Root="HKLM"
                      Key="SOFTWARE\MyCompany\MyProduct"
                      Name="DatabaseDomain"
                      Type="raw" />
    </Property>
    <SetProperty Id="DATABASEDOMAIN" After="AppSearch" Value="[SAVED_DATABASEDOMAIN]" Sequence="ui">
      <![CDATA[SAVED_DATABASEDOMAIN]]>
    </SetProperty>
  </Product>

  <Fragment>
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFiles64Folder">
        <Directory Id="MyFolder" Name="MyFolder">
          <Directory Id="INSTALLFOLDER" Name="MyProduct">
            <Component Id="InstallFolderComponent" Guid="c5ccddcc-8442-49e8-aa17-59f84feb4deb">
              <RegistryKey
                Root="HKLM"
                Key="SOFTWARE\MyCompany\MyProduct">
                <RegistryValue Id="DatabaseDomain"
                               Action="write"
                               Type="string"
                               Name="DatabaseDomain"
                               Value="[DATABASEDOMAIN]" />
              </RegistryKey>
              <CreateFolder/>
            </Component>
          </Directory>
        </Directory>
      </Directory>
    </Directory>
  </Fragment>
</Wix>

This is what happens:
  1. We search the registry and set the value for SAVED_DATABASEDOMAIN.
  2. We set the DATABASEDOMAIN value from the SAVED_DATABASEDOMAIN value, if that is set. Note that Sequence is set to "ui". This is very important, as the default value is "both". In my case I spent hours figuring out why the values were written in the registry but then would never change again. It was because there are two sequences: "ui" and "execute". The code would read the value from the registry, the user would then change the values, then, right before installing anything, the value would be read from the registry AGAIN and would overwrite the user input.
  3. Finally, when we install the product we save in the registry the value of DATABASEDOMAIN, whatever it is.

This should be it, but there are a few gotchas. One of them is checkboxes. For Windows Installer the value of a checkbox either is or isn't: it's not set to 0 or 1, true or false or anything like that. So if you save the value attached to an unchecked checkbox control, when read back, even though empty, the property will be set. Your checkbox will always appear checked from then on. The solution I used was adding a prefix when saving, then setting the value for the checkbox only if that saved value is what I expect it to be. Here it is, in a gist:

<!-- Product.wxs -->
<!-- this doesn't change -->
<Property Id="SAVED_DONOTRUNSCRIPTS" Secure="yes">
  <RegistrySearch Id="FindSavedDONOTRUNSCRIPTS"
                  Root="HKLM"
                  Key="SOFTWARE\MyCompany\MyProduct"
                  Name="DoNotRunScripts"
                  Type="raw" />
</Property>
<!-- here, however, I check for Val1 to set the value of the property to 1 -->
<SetProperty Id="DONOTRUNSCRIPTS" After="AppSearch" Value="1" Sequence="ui">
  <![CDATA[SAVED_DONOTRUNSCRIPTS = "Val1"]]>
</SetProperty>
<!-- Note the Val prefix when saving the value -->
<RegistryValue Id="DoNotRunScripts"
               Action="write"
               Type="string"
               Name="DoNotRunScripts"
               Value="Val[DONOTRUNSCRIPTS]" />

<!-- DatabaseSetup.wxs -->
<!-- Note the checkbox value -->
<Control Id="DoNotRunScriptsCheckbox" Type="CheckBox" X="45" Y="197" Height="17" Width="17" Property="DONOTRUNSCRIPTS" CheckBoxValue="1"/>

I hope that helps people.

The Martian was a total surprise as it took the world by storm and became beloved by all science, tech and fast talking pun lovers in the world. Artemis, on the other hand, will not be. Andy Weir writes another book about a supersmart tech savvy character with a great sense of humor, this time solving technical problems on the Moon. Is it as good as The Martian? I don't know, because now I have a template and knew what to expect. Without the element of surprise, it's difficult to compare the two books. Yet I finished reading the novel in a day. That's gotta count for something.

Stylistically it is first person, filled with dialogue and action. It's incredibly easy to read. This time, the main character is a young woman and the location is a city on the Moon. Andy Weir is nothing if not optimistic, so in his book we do get there. I thought the action was following the pattern of danger-action-solution too closely, so much in fact that at one time I saw the entire story as a big adventure game. I will bet you lots of slugs that Weir loved Sierra games as much as the next geek (that being me).

Bottom line is that I liked the book, I gobbled it up and enjoyed it to the end. I didn't like Jazz Bashara as much as Mark Watney and, while the technological descriptions kept me interested, I still think that space mechanics and Martian farming trump EVA shenanigans and vacuum welding any time. It was still damn entertaining, though.

All in all I liked Oryx and Crake. Margaret Atwood writes in a very technical manner, or at least it felt like that to me, with scenes and characters constructed in a certain way. That is both instructional and a bit off-putting, because once you become aware of it, it distracts from the plot. The plot itself is simple enough: the world has ended and there is this half crazed old man reminiscing over what happened. At first it is all unnervingly vague, with things being kept from the reader while the feelings and mental issues of the main (and single) character of the book are being declared. Then, when the vagueness cannot be sustained anymore, it is all told as a linear story of what happened, with clear explanations of everything. No mystery left.

To me, it felt like Atwood wanted to warn about the way we treat our world and she just exaggerated some things while letting others be whatever she wanted them to be. From a scientific and sociological standpoint some of the things described are spot on, others are completely bogus. I would go into details, but I don't want to spoil the story. One example is the marketing brands that she cooked up, which are all (and I mean all) phonetically spelled descriptions or sometimes puns. All companies, all product names, all new animal names are like that, and it gets really tiresome after a while. Also, people are able to do wonderful (or horrible) new things in some areas and not advance at all in others. The timeline is a little screwed up, too.

My favorite quote, the one that made me think the most, was: "All it takes," said Crake, "is the elimination of one generation. One generation of anything. Beetles, trees, microbes, scientists, speakers of French, whatever. Break the link in time between one generation and the next, and it's game over forever". He is referring to technology after a post apocalyptic event. There are few people who understand any instructions left by the makers of advanced technology, and the ones who can are ill-equipped for survival.

The book does feel like it was written as a standalone with the possibility of being continued, which is a good thing: no ending cliffhangers and not too much setting up of the next books. The title is weirdly misleading, though, as both Oryx and Crake appear very little in the memories of the main character. Why was the book named so? It makes little sense. I am not sure if I will be reading the next books in the MaddAddam trilogy.

Bottom line: good, although a tad bland, writing. A single character reminiscing is all that happens in the book. Grand ideas that most of the time make sense, but sometimes fall flat. Scientifically, things don't work the way she describes.


Intro


I had this situation where an application running in production was getting bigger and bigger until it would devour all memory. I had to fix the problem, but how could I identify the cause? Running it on my computer wouldn't lead to the same increase in memory; what I wanted to see was why a simple web API would take 6GB of memory and keep growing.

The process is simple enough:
  1. Get a memory dump from the running process
  2. Analyze the memory dump

While this post relates specifically to process memory dumps, one can also get whole system memory dumps and analyze those. There are two process memory dump formats: mini and full. I will be talking about the full format.

Fortunately, for our process, there are only two steps. Unfortunately, both are fraught with problems. Let's go through each.

Getting a memory dump from a running process


Apparently that is as simple as opening Task Manager, right clicking the running process in the Details tab and selecting Create dump file. However, what this does is suspend the process and then copy all of its memory into a file in %AppData%/Local/Temp/[name of process].dmp. If the process is being watched or some other issue occurs, the dumping will fail. Personally, I couldn't dump anything over 2GB when I was trying to get the info from w3wp.exe (the IIS process), as the process would restart.

There are a lot of other ways of getting a process memory dump, including crash dumps, periodic dumps and event driven dumps. ProcDump, one of the utilities in the free SysInternals suite, can do all of those. The Windows Error Reporting (WER) system also generates dumps for faulty or unresponsive applications or the kernel. Other options for collecting a dump are userdump.exe and the Windows debuggers (ntsd or windbg). Process Explorer, of course, can create both types of memory dumps for a process: mini and full.
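For example, ProcDump can write a full dump of a running process with a single command (here w3wp.exe stands in for whatever your process is; -ma asks for a dump with all process memory):

procdump -ma w3wp.exe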

Analyzing a process memory dump


This is where it gets tricky. What you would like is something that opens the file and shows a nice interactive interface explaining exactly what hogs your memory. Visual Studio does that, but only the Ultimate version gives you the memory analysis option. dotMemory from JetBrains is great, but it also costs a lot of money.

My personal experience was using dotMemory in the five days in which it is free and it was as seamless as I would have wanted: I opened the file, it showed me a pie graph with what objects were hogging my memory, I clicked on them, looked where and how they were stored and even what paths could be taken to create those instances. It was beautiful.

Another option is SciTech's .NET Memory Profiler, but it is also rather expensive. I am sure RedGate has something, too, but their trials are way too intrusive for me to download.

I have been looking for free alternatives and frankly all I could find was WinDbg, which is included in the Windows Debugging Tools.

Windows Debugger is the full name of the software. It is a very basic tool, but there are visualizers that use it to show information. One of them is MemoScope, which appears unmaintained at the moment and is kind of weird looking, but it works. Unfortunately, it doesn't even come close to what dotMemory was showing me. Where dotMemory, analyzing a random .NET app dump, showed that the biggest memory usage came from MainWindow (makes sense), whose lvItems held most of the memory (it was a ListView with a lot of data), WinDbg, and therefore MemoScope, shows that the biggest usage comes from arrays of bytes. That also makes sense, physically, but there is no logical context. A step by step memory profiling guide using WinDbg can be found here.
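If you do go the WinDbg route, the usual starting point for managed dumps is loading the SOS extension and working the heap from there (the commands below are the standard SOS ones; the type name and <address> are placeholders):

$$ load SOS from the CLR found in the dump
.loadby sos clr
$$ managed heap usage, summarized by type
!dumpheap -stat
$$ list instances of a suspect type
!dumpheap -type System.Byte[]
$$ find what keeps a given instance alive
!gcroot <address>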

Conclusion


I am biased towards JetBrains software, which is usually amazing, although I haven't used it in quite a while because of their obnoxious licensing system (you basically rent the software). I loved how quickly I got to the problem I had with dotMemory. .NET Memory Profiler from SciTech is way cheaper and seems to point in the right direction, but doesn't really compare in terms of quality. Anything else I've tried was quite subpar.


Doing it yourself


As they say, if you want to do something right, you gotta do it yourself.

There is a Windows API called MiniDumpWriteDump that you can use programmatically to create the dumps. However, as you've seen, that's not the problem; the dump analysis is.
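For reference, a minimal sketch of calling it from C# via P/Invoke (error handling omitted; MiniDumpWithFullMemory produces the full format discussed above):

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class Dumper
{
    // 0x2 = MiniDumpWithFullMemory, the "full" dump format
    private const int MiniDumpWithFullMemory = 2;

    [DllImport("dbghelp.dll", SetLastError = true)]
    private static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
        SafeFileHandle hFile, int dumpType, IntPtr exceptionParam,
        IntPtr userStreamParam, IntPtr callbackParam);

    public static void Dump(Process process, string path)
    {
        using (var file = File.Create(path))
        {
            MiniDumpWriteDump(process.Handle, (uint)process.Id, file.SafeFileHandle,
                MiniDumpWithFullMemory, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
        }
    }
}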

And it's hard to find anything that is not very deep and complicated. Memory dumps are usually analyzed for forensic reasons. There are specialists who have written scores of books on memory dump analysis. What's worse, most people focus on crash dumps, with memory leak analysis usually done while debugging an application.

I certainly don't have the time to work on a tool to help me with this particular issue. I barely had the time to look for existing tools. But if you guys can add information to this blog post, I am sure it will help a lot of people going through the same thing.

Pretty Pictures


I didn't find a nice way of putting the images in the post's text, I am adding them here.

MemoryCache has a really strange behavior where all of a sudden, without any warning or exception, it stops caching. You use Set or Add: it doesn't matter. Add returns true, Set throws no error, but the cache stubbornly remains empty. This behavior can be replicated easily by running this line:
MemoryCache.Default.Dispose();
From that moment on, MemoryCache.Default will not cache anything anymore.

Of course, you will say, why the hell would you dispose the default object? First of all, why give me the option? It's not like I can set the Default instance after I dispose it, so this breaks a lot of things. Second of all, why pretend to continue working when in fact it is not? Most disposable classes throw exceptions if they are used after being disposed. And third of all, it might not be you who does the disposing.

In my case, I was trying to be a good boy and declare all my dependencies upfront. I was using StructureMap as an IoC framework and, for my class that was using a MemoryCache, I declared a constructor parameter of the basest type I could: ObjectCache. Then I registered MemoryCache.Default as the instance used when an ObjectCache was needed. The problem is that each unit test would initialize and tear down the StructureMap container, which would take all the used instances that implement IDisposable and dispose them, including our friend MemoryCache.Default. As I couldn't find a simple way to tell StructureMap not to dispose the object, I had to create a MemoryCacheWrapper that implemented ObjectCache, was NOT IDisposable, used MemoryCache.Default for all methods, and use that as the singleton instance for ObjectCache in StructureMap. A sketch of such a wrapper is below.
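My reconstruction of the wrapper, delegating every ObjectCache member to MemoryCache.Default (it compiles against System.Runtime.Caching; the point is that it is not IDisposable, so container teardown has nothing to dispose):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class MemoryCacheWrapper : ObjectCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public override DefaultCacheCapabilities DefaultCacheCapabilities => Cache.DefaultCacheCapabilities;
    public override string Name => Cache.Name;

    public override object this[string key]
    {
        get { return Cache[key]; }
        set { Cache[key] = value; }
    }

    public override CacheEntryChangeMonitor CreateCacheEntryChangeMonitor(IEnumerable<string> keys, string regionName = null)
        => Cache.CreateCacheEntryChangeMonitor(keys, regionName);
    protected override IEnumerator<KeyValuePair<string, object>> GetEnumerator()
        => ((IEnumerable<KeyValuePair<string, object>>)Cache).GetEnumerator();
    public override bool Contains(string key, string regionName = null)
        => Cache.Contains(key, regionName);
    public override object AddOrGetExisting(string key, object value, DateTimeOffset absoluteExpiration, string regionName = null)
        => Cache.AddOrGetExisting(key, value, absoluteExpiration, regionName);
    public override CacheItem AddOrGetExisting(CacheItem value, CacheItemPolicy policy)
        => Cache.AddOrGetExisting(value, policy);
    public override object AddOrGetExisting(string key, object value, CacheItemPolicy policy, string regionName = null)
        => Cache.AddOrGetExisting(key, value, policy, regionName);
    public override object Get(string key, string regionName = null)
        => Cache.Get(key, regionName);
    public override CacheItem GetCacheItem(string key, string regionName = null)
        => Cache.GetCacheItem(key, regionName);
    public override void Set(string key, object value, DateTimeOffset absoluteExpiration, string regionName = null)
        => Cache.Set(key, value, absoluteExpiration, regionName);
    public override void Set(CacheItem item, CacheItemPolicy policy)
        => Cache.Set(item, policy);
    public override void Set(string key, object value, CacheItemPolicy policy, string regionName = null)
        => Cache.Set(key, value, policy, regionName);
    public override IDictionary<string, object> GetValues(IEnumerable<string> keys, string regionName = null)
        => Cache.GetValues(keys, regionName);
    public override object Remove(string key, string regionName = null)
        => Cache.Remove(key, regionName);
    public override long GetCount(string regionName = null)
        => Cache.GetCount(regionName);
}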

If you are working with a .NET version prior to .NET 4.5, you may encounter the same issue with a bug in .NET: MemoryCache Empty : Returns null after being set

This guy can write! I mean, Peter Brannen is writing about paleontology and climate change and the words just sing to you, make you see worlds that we actually know very little about and feel for ammonoids as if they were the cutest kittens, looking at you with big eyes and begging not to go extinct. As someone who wants to write himself someday, I really hate this guy. He is that good.

That being said, The Ends of the World is a book about the major extinctions in Earth's history, their causes and how they relate to our present lives. It's a captivating read, evoking lost worlds and very carefully analyzing how they disappeared and why. There is humor, drama, interesting interviews with fascinating people. Dinosaurs? Puh-lease. This takes you back to the good old days of the Ediacaran and slowly brings you around to our time and even speculates on what could come after us, the hubris-filled species that, geologically speaking, was barely born yesterday - or a few seconds ago, depending on how you measure it - and has the chance to outshine all the other species that came, ruled and went.

There is no way I can do it justice other than to say that I loved the book. In itself it's a summary of life on Earth so summarizing it myself would be pointless. I highly recommend it.

Here is the author, presenting his book at Google Talks:

You are trying to install an application and in the detailed log you get something like this:
InstallSqlData:  Error 0x8007007a: Failed to copy SqlScript.Script: SqlWithWindowsAuthUpv2.9.5UpdateGetInstallJobBuildByInstallJobBuildIdScript
or
InstallSqlData:  Error 0x8007007a: failed to read SqlScripts table

For me it was related to the length of the Id attribute of an <sql:SqlScript> tag. If you try to use more than 55 characters for the BinaryKey attribute, you get a build error telling you to use a maximum of 55 characters. For the Id you get no such warning; instead, it fails on install.

I hope it helps people (I lost two hours figuring it out).