I have been working in software development for a while now, and one of the most distracting aspects of this business is that it reinvents itself almost every year. Yet, as opposed to computer science, which uses principles to build a strategy for understanding and using computers and to derive further principles, in business things are mostly defined by fashion. The same concept appears again and again under different names, different languages, different frameworks, with a fanatic user base ready to defend it to the death from other, often very similar, fads.

That is why, most of the time, junior developers do not understand basic data object concepts and where they apply; they try to solve a problem that has been solved before, fail, then start hating their own work. Am I the definitive authority on data objects? Hell, no! I am the guy who always tries to reinvent the wheel. But I've tried so many times in this area alone that I think I've gotten some ideas right. Let me walk you through them.

The mapping to Hell


One of the most common mistakes that I've seen is the "Database first" vs "Code first" debate getting out of hand and somehow infecting an entire application. It assumes there is a direct mapping between what the code wants to do and what the database does. Which is wrong, by the way. This kind of thinking is often coupled with an Object Relational Mapping framework that wants to abstract database access. When doing something simple, like a demo for your favorite ORM, everything works perfectly:

  • The blog has a Blog entity which has Post children and they are all saved in the database without you having to write a single line of SQL.
  • You can even use simple objects (POCOs, POJOs and other "plain" objects that will bite you in the ass later on), with no methods attached.

That immediately leads one to think they can just use the data access objects retrieved from the database as data transfer objects in their APIs, which then fails in a myriad of ways. Attempting to solve every single little issue as it comes up leads to spaghetti code and a complete disintegration of the separation of responsibilities. In the end, you just get to hate your code. And it is all for nothing.

The database first or code first approaches work just fine, but only in regard to the data layer. The concepts do not spread to other parts of your application. And even if I have my pet peeves with ORMs, I am not against them in principle. It is difficult not to blame them, though, when I see so many developers trying to translate the ease of working with a database through such a framework to their entire application. When Entity Framework first appeared, for example, all entity objects had to inherit from DbEntity. It was impossible to serialize such an object or use it without hauling a lot of the EF codebase along with it. After a few iterations, the framework learned how to use POCOs, simple objects holding nothing but data and no logic. EF didn't advocate using those objects to pass data around your entire application, but it made it seem easy.

What was the root problem of all that? Thinking that database access and Data Access Objects (DAOs) have a simple and direct bidirectional mapping to Data Transfer Objects (DTOs). DTOs, though, just transfer data from process to process. You might want to generalize that a database is a process and your application another, and that therefore DAOs are DTOs, and you might even be right, but the domain of these objects is different from the domain of DTOs used to transfer data for business purposes. In both cases the idea is to limit costly database access or API calls and get everything you need in one call, but that is where the similarity ends. Let me explain.

DAOs should be used to get data from the database in an efficient manner. For example, you want to get all the posts written by a specific author inside a certain time interval. You also want the blog entities associated with those posts, as well as their metadata: author biography, blog description, linked Facebook account and so on. It wouldn't do to write separate queries for each of these and then combine them in code later. API DTOs, on the other hand, are used to transfer data through the HTTP interface and, even if they contain most of the data retrieved from the database, they are not the same thing.
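As a minimal sketch (assuming an Entity Framework Core style DbContext with Posts, and Blog, Author and SocialAccounts navigation properties; all names are hypothetical), getting everything in one call could look like this:

// Hypothetical single query: posts by author in an interval,
// with their Blog, Author and the author's social accounts eagerly loaded.
var posts = await context.Posts
    .Where(p => p.Author.Name == authorName
             && p.PublishedOn >= start && p.PublishedOn < end)
    .Include(p => p.Blog)
    .Include(p => p.Author)
        .ThenInclude(a => a.SocialAccounts)
    .ToListAsync();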

Quick example: let's say I want to get the posts written in the last year on a blog platform (multiple blogs) that contain a certain word. I need that because I want to display a list of titles that appear as links, with the image of the author next to them. In order to get that image, I need the user account of the post author and, if it has no associated image, I try to get the image from any linked social media accounts. In this case, the DAO is an array of Post objects, each with properties of type Blog and Author, with Author having a further property holding an array of SocialAccount. Of course, SocialAccount is an abstract class or interface which is implemented differently by persisted entities like FacebookAccount and TwitterAccount. In contrast, what I want as a DTO for the API that feeds the list is an array of objects holding a title, a link URL and an image URL. While there is a mapping between certain properties of certain DAO objects and the properties of the DTO objects, it is tiny. The reverse mapping is ridiculous.
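To make the contrast concrete, here is a minimal sketch of the two shapes (hypothetical classes, not taken from any particular framework):

// DAO side: rich, interconnected entities as they come out of the database (hypothetical names)
public class Post { public string Title { get; set; } public Blog Blog { get; set; } public Author Author { get; set; } }
public class Blog { public string Name { get; set; } public List<Post> Posts { get; set; } }
public class Author { public string Name { get; set; } public string ImageUrl { get; set; } public List<SocialAccount> SocialAccounts { get; set; } }
public abstract class SocialAccount { public abstract string GetImageUrl(); }

// DTO side: exactly what the list on the screen needs, nothing more
public class PostListItemDto { public string Title { get; set; } public string LinkUrl { get; set; } public string ImageUrl { get; set; } }

Only a handful of properties from the DAO graph end up in the DTO; going the other way is hopeless.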

Here are some issues that could be encountered by someone who decides they want to use the same objects for database access and API data transfer:

  • serialization issues: Post entities have a Blog property, which in turn contains a list of Post entities; when serializing the object to JSON for API transfer, a circular reference is found
  • information leaks: clients using the API get the Facebook and Twitter information of the post's author, their real name or even their username and password for the blog platform
  • resource waste: each Post entity is way bigger than what was actually requested and the data transferred becomes huge
  • pointless code: since the data objects were not designed with a graphical interface in mind, more client code is required to clean up the title or use the correct image format

Solutions that become their own problems


A common one is using automatic mapping between client models and DTOs. It hides the use of members from static analysis (think trying to find all the places where a certain property is used in code and missing the "magical" spots where automatic conversion is done), it wastes resources by digging through class metadata to figure out what it should map and how (reflection is great, but it's slow), and in cases where someone wants a more complex mapping relationship you get to write what is essentially code in yet another language (the mapper DSL) in order to get things right. It also either introduces unnecessary bidirectional relationships or becomes unbalanced when the conversion from DAO to DTO differs from the one in the other direction.
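As an illustration (an AutoMapper-style configuration, using the hypothetical classes from the sketch above), even a modest mapping ends up written in the mapper's own mini-language:

// Hypothetical mapper DSL: logic that belongs to the business layer,
// hidden in configuration that static analysis and refactoring tools cannot follow.
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<Post, PostListItemDto>()
       .ForMember(d => d.LinkUrl, o => o.MapFrom(s => "/" + s.Blog.Name + "/" + s.Title))
       .ForMember(d => d.ImageUrl, o => o.MapFrom(s => s.Author.ImageUrl)));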

Another is to just create your own mapping, which takes the form of orphan private methods that convert from one entity to another and become hard-to-maintain code when you add another property to your entity and forget to handle it in all that disparate code. Some try to fix this by inheriting DAOs from DTOs or the other way around. Ugh! A very common solution that is always in search of a nonexistent problem is adding innumerable code layers: mapper objects appear that need access to both types of data objects and that are really just a wishful, brainless replacement of the business logic.

What if the only way to get the image for the author is to access the Facebook API? How does that change your mapping declaration?

Let me repeat this: the business data objects that you use in your user facing application are in a completely different domain (serve a different purpose) than the data objects you use in the internal workings of your code. And there are never just two layers in a modern application. This can also be traced to overengineering and reliance on heaps of frameworks, but it's something you can rarely get away from.

Don't get me wrong, it is impossible not to have code that converts from one object to another, but instead of writing the mapping code before you even need it, try to write it where you actually need it. It's easier to write a conversion method from Post (with Blog, Author and SocialAccount) to a title with two links, and use it right where you display the data to the user, than to imagine an entire fantasy realm where one entity is the reflection of the other. The useful mapping between database and business objects is the role of the business layer. You might see it as a transformation layer that uses algorithms and external services to make the conversion, but get your head out of your code and remember that when you started all of this, you wanted to create an app with a real life purpose. Technically, it is a conversion layer. Logically, it is your software, with a responsibility towards real life needs.
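For the earlier example, a minimal sketch of that conversion (using the hypothetical classes from above), written right next to the code that builds the list, could be:

// Hypothetical conversion method, living in the layer that actually needs the DTO.
private static PostListItemDto ToListItem(Post post) => new PostListItemDto
{
    Title = post.Title.Trim(),
    LinkUrl = "/" + post.Blog.Name + "/" + post.Title,
    // fall back to a linked social account when the author has no image of their own
    ImageUrl = post.Author.ImageUrl
        ?? post.Author.SocialAccounts.Select(a => a.GetImageUrl()).FirstOrDefault(url => url != null)
};

When requirements change, you edit this one ordinary method, in plain sight, right where it is used.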

Example number 2: the N-tier application.


You have the Data Access Layer (DAL) which uses a set of entities to communicate with the database. Each such entity is mapped to a database table. This mapping is necessary for the working of the ORM you are using to access said database. The value of the DAL is that you can always replace it with another project that implements the same interfaces and uses the same entities. In order to abstract the specific database you use, or maybe just the communication implementation, you need two projects: the entity and interface project and the actual data code that uses those entities and accesses a specific database.

Then you have the business layer. This is not facing the user in any way; it is just an abstraction of the application logic. It uses the entities of the business domain and, smart as you are, you have built an entirely different set of entities that don't (and shouldn't) care how data is accessed or stored in the database. The temptation to map them in a single place is great, since you've designed the database with the business logic in mind anyway, but you've read this blog post that told you it's wrong, so you stick to it and just do it manually. But you make one mistake: you assume that if you have a project for interfaces and DAO entities, you can put the interfaces and entities for the business layer there as well. There are business services that use the business data objects, translate DAOs into business data objects and so on. It all feels natural.

Then you have the user interface layer. The users are what matters here: what they see, what they should see, what they can do is all in this layer. You have yet another set of entities and interfaces, but this one is mostly designed with the APIs and the clients that will use them in mind. There are more similarities between the TypeScript entities in the client code and this layer's DTOs than between them and the business layer objects. You might even use something like Protobuf to define your entities for multiple programming languages, and that's a good thing, because you actually want to transfer an object to the same type of object, just in another programming language.

It works great, it makes you proud, you get promoted, people come to you for advice, but then you need to build an automated background process that does some work using the business layer directly. Now you need to reference the entity and interfaces project in your new background process project, and you can't do it without bringing in information from both the business and the data layer. You are pressed for time, so you add code that uses business services for their logic, but also database entities for accessing data that has no implementation in the business layer yet.

Your background process now bypasses the business layer for some little things, which breaks the consistency of the business layer, which now cannot be trusted. Every bit of code you write needs extra checks, just in case something broke the underlying data. Your business code becomes more and more concerned with data consistency rather than business logic. It is the equivalent of writing code assuming the data you use is being changed by a hostile adversary: paranoid code. The worst thing is that it spends resources on data, which should not be its focus. This would not have happened if your data entities were invisible to the new service you've built.

Client vs server


Traditionally, there has always been a difference between client and server code. The server was doing most of the work while the client was lean and fit, usually just displaying the user interface. Yet, as computers have become more powerful and the client-server architecture ubiquitous, the focus has shifted to lean server code that just provides the necessary data, with the complex logic on the client machine. If just a few years ago using Model-View-Controller frameworks on the server, with custom DSLs to embed data in web pages rendered there, was state of the art, now the server is supposed to provide data APIs only, while the MVC frameworks have moved almost exclusively to the browser that consumes those APIs.

In fact, a significant piece of code has jumped the network barrier, but the architecture has remained essentially the same. The tiers are still there; the separation, such as it was, is still there. The fact that the business logic now resides partly or even completely in the web browser code is irrelevant to this post. However, this shift explains why some of the problems described above happen more often, even if they are as old as me. It has to do with the slim API concept.

When someone is designing an application and decides to go all modern, with a complex framework on the client consuming a very simple data API on the server, they remember what they were doing a few years back (or what they were taught in school then): the UI, the business logic, the data layer. Since business and UI reside in the browser, it feels as if the API has taken over the role of the data layer. Ergo, the entities that are sent as JSON through HTTP are something like DAOs, right? And if they are not, surely they can be mapped to them.

The reality is that the business logic did not cross all the way to the client; it is now spread across both realms. Physically, instead of a three-tier app you now have a four-tier app, with an extra business layer on the client. But since both business layers share the same domain, the interfaces and entities from one are the interfaces and entities from the other. They can be safely mapped because they are the same. They are two physical layers, but still just one logical business layer. Think authentication: it has to be done on the server, even though most of the logic of your application has moved to the client, where the user interacts directly with it (using their own computer resources). The same business layer is spread over client and server alike.

The way


What I advocate is that layers are defined by their purpose in life, and the entities that pass between them (transfer objects) are defined even more strictly by the necessary interaction between those domains. Somehow, through wishful thinking, framework worship or other reasons, people end up with Frankenstein objects that want to perforate these layers and be neither here nor there. Think about it: two different object classes sharing a definition and a mapping are actually the same object, only mutated across several domains and suffering for it. Think of the ending of The Fly 2! The very definition of separation means that you should be able to build an N-layered application with N developers, each handling their own layer. The only things they should care about are the specific layer entities and the interdomain interfaces. By necessity, these should reside in projects that are separate from said layers and used only by the layers that communicate with each other.

I know it is hard. We choose to develop software not because it is easy, but because it should be! Stop cutting corners and do it right! I know it is difficult to write code that looks essentially the same. The Post entity might have the same form in the data layer and the business layer, but they are not the same in spirit! Once the application evolves into something more complex, the classes will look less like twins and more like Cold War agents from different sides of the Iron Curtain. I know it is difficult to consider that a .NET class should have a one-to-one mapping not to another .NET class, but to a JavaScript object or whatever the client is written in. It is right, though, because they are the same in how you use them.

If you approach software development considering that each layer of the application is actually a separate project, not just physically but logically as well, with its own management, roadmap and development, you will not get more work, but less. Instead of a single mammoth application that you need to understand completely, you have three or more projects that are only marginally connected. Even if you are the same person, your role as a data access layer developer is different from your role when working on the business layer.

I hope that my thoughts have come through clearly and that this post will help you avoid the same mistakes I made and that I see so many others make. Please let me know where you think I was wrong or if you have anything to add.

I was trying to create an entity with several child entities and persist it to the database using Entity Framework. I was generating the entity, setting its entry state to Added and saving the changes in the DbContext, everything right. However, in the database I ended up with one parent entity and one child entity. I suspected it had something to do with the way I created the tables, the foreign keys, something related to the way I was declaring the connection between the entities in the model. It was none of that.

If you look at the way EF generates entities from database tables, it creates a bidirectional relationship from any foreign key: the parent entity gets an ICollection<Child> property to hold the children and the child entity gets a virtual property of type Parent to hold the parent. Moreover, the parent entity instantiates the collection in its constructor as a HashSet<Child>. It doesn't have to be a HashSet, though; it works just as well if you overwrite it with something like a List when you create the entity. However, the HashSet approach tells us something important about the way EF behaves when considering collections of child objects.
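The generated classes look roughly like this (a sketch; the names are mine and the exact generated code depends on the EF version):

// Sketch of what EF database-first generation typically produces for a foreign key.
public partial class Parent
{
    public Parent()
    {
        Children = new HashSet<Child>(); // the collection starts out as a HashSet
    }
    public int Id { get; set; }
    public virtual ICollection<Child> Children { get; set; }
}

public partial class Child
{
    public int Id { get; set; }
    public int ParentId { get; set; }
    public virtual Parent Parent { get; set; } // back-reference to the parent
}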

In my case, I was doing something like
var parent = new Parent {
    Children = Enumerable
        .Repeat(new Child { SomeProperty = SomeValue }, 3)
        .ToList()
};
Can you spot the problem? When I changed the code to
var parent = new Parent();
Enumerable
    .Repeat(new Child { SomeProperty = SomeValue }, 3)
    .ToList()
    .ForEach(child => parent.Children.Add(child));
I was shocked to see that my Parent now had a Children collection with only one element. Why? Because Enumerable.Repeat takes the instance of the object you give it and repeats it:

Enumerable.Repeat(something,3).Distinct().Count() == 1 !

The problem was not that I was setting the Children collection to a List instead of a HashSet; when saving the children, Entity Framework was only considering the distinct instances of Child.

The solution here was to generate a separate instance for each child, something like
var parent = new Parent {
    Children = Enumerable.Range(0, 3)
        .Select(i => new Child { SomeProperty = SomeValue })
        .ToList()
};
or, to make it clearer for this case:
var parent = new Parent {
    Children = Enumerable
        .Repeat<Func<Child>>(() => new Child { SomeProperty = SomeValue }, 3)
        .Select(generator => generator())
        .ToList()
};

So I was writing this class and I was frustrated with the time it took to complete an operation compared to another class that I wanted to replace. Anyway, I am using Visual Studio 2017 and I got this very nice grayed out font on one of my fields and a nice suggestion, complete with an automatic fix: "Use auto property". Why should I? Because I have a Count property that wraps a _count field, and simply using an auto property is clearer and takes fewer lines of code. It makes sense, right?

However, let's remember the context here: I was going for performance. So when I did ten million operations with Count++ it took 11 seconds, but when I did ten million operations with _count++ it took only 9 seconds. That's a staggering 20% increase in execution time for wrapping one field into a property.
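Here is a minimal sketch of the kind of measurement I mean (a hypothetical class; absolute numbers will differ on your machine, and a Release build may inline the property and erase the difference):

// Compare incrementing through a wrapping property vs. the backing field directly.
using System;
using System.Diagnostics;

class Counter
{
    private int _count;
    public int Count { get { return _count; } set { _count = value; } }

    static void Main()
    {
        var c = new Counter();
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < 10000000; i++) c.Count++;
        Console.WriteLine("Property: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (var i = 0; i < 10000000; i++) c._count++;
        Console.WriteLine("Field: " + sw.ElapsedMilliseconds + " ms");
    }
}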

Just a heads up on a really useful blog post: Script to Delete Data from SQL Server Tables with Foreign Key Constraints

Basically it creates a stored procedure to display the dependencies based on a table name and a condition, then some other code to generate hierarchical delete code.

A lot of people are using Visual Studio Code to work with TypeScript and are getting annoyed by tslint warnings. TSLint is a piece of software that statically checks your code to find issues with how you wrote it. At first one tries to fix the warnings, but since you probably started from an existing project or template, there are a lot of warnings and you have work to do. Even worse, the warnings appear only when you open a file, so you think you're done until you start working in another area and get red all over your project files. It's annoying! Therefore a lot of people just disable tslint.

But what if there was a way to check all the linting errors in your entire code? Even better, what if there were some magical way of fixing everything that can be fixed? Well, both of these exist. Assuming you have installed tslint globally (npm install tslint -g) you have these two commands to check and fix, respectively, all the errors in the current project:

tslint "src/**/*.ts?(x)" >lint_result
tslint "src/**/*.ts?(x)" --fix >fix_result

Note the src part, which tells tslint to look in the src folder and not in node_modules :-). If you prefer a more unsafe version that checks everything in the current directory, just replace src with a dot.

The fix option is only available from TSLint 4.0.

Update 2024: Sometimes you want to display a JavaScript object as a string and when you use JSON.stringify you get an error: Converting circular structure to JSON. The solution is to use a replacer function that keeps a record of the objects already found and returns a text explaining where the circular reference appeared.

Here is a function like that:

function fixCircularReferences(o) {

  const weirdTypes = [
    Int8Array,
    Uint8Array,
    Uint8ClampedArray,
    Int16Array,
    Uint16Array,
    Int32Array,
    Uint32Array,
    BigInt64Array,
    BigUint64Array,
    //Float16Array,
    Float32Array,
    Float64Array,
    ArrayBuffer,
    SharedArrayBuffer,
    DataView
  ];

  const defs=new Map();
  return (k,v) => {
    if (k && v==o) return '['+k+' is the same as original object]';
    if (v===undefined) return undefined;
    if (v===null) return null;
    const weirdType=weirdTypes.find(t=>v instanceof t);
    if (weirdType) return weirdType.toString();
    if (typeof(v) == 'function') {
      return v.toString();
    }
    if (v && typeof(v) == 'object') {
      const def = defs.get(v);
      if (def) return '['+k+' is the same as '+def+']';
      defs.set(v,k);
    }
    return v;
  }
}

And you use it like this:

const fixFunc = fixCircularReferences(someObject);
const json = JSON.stringify(someObject,fixFunc,2);

Possible improvements:

  • If you want to do more than just search through it, like actually deserialize it into something, you might not want to return the strings about "the same as" and return null instead.
  • If you want to serialize the circular references somehow, take a look at Newtonsoft's JSON PreserveReferencesHandling setting. You will need to also get the right id, not just the immediate property key name, which is more complicated.

My colleague showed me something today that I found interesting. It involves (sometimes unwittingly) casting an awaitable method to Action. In my opinion, the cast itself should not work. After all, an awaitable method is a Func<Task>, which should not be castable to Action. Or is it? Let's look at some code:
var method = async () => { await Task.Delay(1000); };
This does not work, as compilation fails with error CS0815: Cannot assign lambda expression to an implicitly-typed variable. This means we need to set the type explicitly. But what is it? It receives no parameter and returns nothing, so it must be an Action, right? But it is also an async/await method, which means it's a Func<Task>. Let's try something else:
Task.Run(async () => { await Task.Delay(1000); });
This compiles. If we hover over or go to the implementation of the Task.Run method, we reach the public static Task Run(Func<Task> function) signature. So that settles it, right? It IS a Func<Task>! Let's try something else, though.
Action action = async() => { await Task.Delay(1000); };
Task.Run(action);
This compiles again! So it IS an Action, too!

What is my point, though? Consider you would want to create a method that receives an Action as a parameter. You want something done, then to execute the function, something like this:
public void ExecuteWithLog(Action action)
{
    Console.WriteLine("Start");
    action();
    Console.WriteLine("End");
}

And then you want to use it like this:
ExecuteWithLog(async () => {
    Console.WriteLine("Start delay");
    await Task.Delay(1000);
    Console.WriteLine("End delay");
});

The output will be:
Start
Start delay
End
End delay
There is NO WAY of awaiting the original method in the ExecuteWithLog method, as it is received as an Action, and while it waits for a second, execution returns to ExecuteWithLog immediately. Write the method like this:
public async void ExecuteWithLog(Func<Task> action)
{
    Console.WriteLine("Start");
    await action();
    Console.WriteLine("End");
}
and now the output is as expected:
Start
Start delay
End delay
End

Why does this happen? Well, as mentioned above, you start with an Action, then you need to await some method (because now everybody NEEDS to use await/async), and then you get an error that your method is not marked with async. Now it's suddenly something else, not an Action anymore. Perhaps that would be annoying, but this ambiguity in defining what an anonymous async parameterless void method really is feels even worse.

Visual Studio has a nice little feature called Code Coverage Results, which you can find in the Test menu. However, it does absolutely nothing unless you have the Enterprise edition. If you have it, then probably this post is not for you.

How can we test unit test coverage on .NET projects without spending a ton of money?

I first tried AxoCover, which is a quick install from NuGet. However, while it looks like it does the job, it requires xUnit 2.2.0 and the developer doesn't have the resources to upgrade the extension to the latest xUnit version (which in my case is 2.3.1). In case you don't use xUnit, AxoCover looks like a good solution.

Then I tried OpenCover, which is what AxoCover uses in the background anyway. It is a library that suspiciously shows up in the Visual Studio Manage NuGet Packages window as published on Monday, February 8, 2016. However, the latest commits on the GitHub site are from April, only three months ago, which means the library is still being maintained. Unfortunately, OpenCover doesn't have any graphical interface. This is where ReportGenerator comes in, along with some batch file coding :)

Bottom line, these are the steps that you need to go through to enable code coverage reporting on your projects:
  1. install the latest version of OpenCover in your unit test project
  2. install the latest version of ReportGenerator in your unit test project
  3. create a folder in your project root called _coverage (or whatever)
  4. add a new batch file in this folder containing the code to run OpenCover, then ReportGenerator. Below I will publish the code for xUnit
  5. run the batch file

Batch file to run coverage and generate a report for xUnit tests:
@echo off
setlocal EnableDelayedExpansion
 
for /d %%f in (..\..\packages\*.*) do (
    set name=%%f
    set name="!name:~15!"
    if "!name:~1,9!" equ "OpenCover" (
        set opencover=%%f
    )
    if "!name:~1,15!" equ "ReportGenerator" (
        set reportgenerator=%%f
    )
    if "!name:~1,20!" equ "xunit.runner.console" (
        set xrunner=%%f
    )
)
SET errorlevel=0
if "!opencover!" equ "" (
    echo OpenCover not found in ..\..\packages !
    SET errorlevel=1
) else (
    echo Found OpenCover at !opencover!
)
if "!reportgenerator!" equ "" (
    echo ReportGenerator not found in ..\..\packages !
    SET errorlevel=1
) else (
    echo Found ReportGenerator at !reportgenerator!
)
if "!xrunner!" equ "" (
    SET errorlevel=1
    echo xunit.runner.console not found in ..\..\packages !
) else (
    echo Found xunit.runner.console at !xrunner!
)
if %errorlevel% neq 0 exit /b %errorlevel%
 
set cmdCoverage="!opencover:\=\\!\\tools\\opencover.console -register:user -output:coverage.xml -target:\"!xrunner:\=\\!\\tools\\net452\\xunit.console.exe\" \"-targetargs: ..\\bin\\x64\\Debug\\MYPROJECT.Tests.dll -noshadow -parallel none\" \"-filter:+[MYPROJECT.*]* -[MYPROJECT.Tests]* -[MYPROJECT.IntegrationTests]*\" | tee coverage_output.txt"
set cmdReport="!reportgenerator:\=\\!\\tools\\reportgenerator -reports:coverage.xml -targetdir:.\\html"
 
powershell "!cmdCoverage!"
powershell "!cmdReport!"
start "MYPROJECT test coverage" .\html\index.htm

Notes:
  • replace MYPROJECT with your project name
  • the filter says that I want to calculate the coverage for all the assemblies starting with MYPROJECT, but not Tests and IntegrationTests
  • most of the code above is used to find the OpenCover, ReportGenerator and xunit.runner.console folders in the NuGet packages folder
  • powershell is used to execute the command strings and for the tee command, which displays on the console while also writing to a file
  • the "little square" characters in the original batch file (which may not have survived copying here) are Escape characters defining ANSI sequences for showing red text in case of error.

Please feel free to comment with the lines you would use for other unit testing frameworks, I will update the post.

I was under the impression that .NET Framework can only reference .NET Framework assemblies and .NET Core can only reference .NET Core assemblies. After all, that's why .NET Standard appeared, so you can create assemblies that can be referenced from everywhere. However, one can actually reference .NET Framework assemblies from .NET Core (and not the other way around). Moreover, they work! How does that happen?

I chose a particular functionality that works only in Framework: AppDomain.CreateDomain. I've added it to a .NET 4.6 assembly, then referenced the assembly in a .NET Core program. And it compiled!! Does that mean that I can run whatever I want in .NET Core now?

The answer is no. When running the program, I get a PlatformNotSupportedException, meaning that the IL code is executed by .NET Core, but in its own context. It is basically a .NET Standard cheat. Personally, I don't like this, but I guess it's a measure to support adoption of the Core concept.

What goes on behind the scenes is that .NET Core implements .NET Standard, which can reference .NET Framework assemblies. For this to work you need .NET Core 2.0 and Visual Studio 2017 15.3 or higher.

Caveat lector: while this works, meaning it compiles and runs, you might have problems when you are trying to package your work into npm packages. When ng-packagr is used with such a system (often found in older versions of Angular as index.ts files) it throws a really unhelpful exception: TypeError: Cannot read property 'module' of undefined, somewhere in bundler.js. The bug is being tracked here, but there doesn't seem to be much desire to address it. Apparently, I have rediscovered the concept of a barrel, which now seems to have been abandoned and even completely expunged from the Angular docs.

Have you ever seen enormous lists of imports in Angular code and wondered why they couldn't be more compact? My solution is to re-export from a module all the classes that I will need in other modules. Here is an example from LocationModule, which contains a service and a model for locations:
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { LocationService } from './services/location.service';
import { LocationModel } from './models/location-model';
 
@NgModule({
  imports: [ CommonModule ],
  providers: [ LocationService ]
})
export class LocationModule { }
 
export { LocationModel, LocationService };

Thing to note: I am exporting the model and the service right away. Now I can do something like
import { LocationModule, LocationModel, LocationService } from '../../../location/location.module';
instead of
import { LocationModule } from '../../../location/location.module';
import { LocationModel } from '../../../location/models/location-model';
import { LocationService } from '../../../location/services/location.service';

In .NET APIs we usually adorn the action methods with [Http<HTTP method>] attributes, like HttpGetAttribute or AcceptVerbsAttribute, to set the HTTP methods that are accepted. However, there are conventions on the names of the methods that make them work even when such attributes are not used. How does ASP.NET determine which methods on a controller are considered "actions"? The documentation explains this, but the information is hidden in one of the paragraphs (a minimal sketch follows the list):
  1. Attributes as described above: AcceptVerbs, HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut.
  2. Through the beginning of the method name: "Get", "Post", "Put", "Delete", "Head", "Options", or "Patch"
  3. If none of the rules above apply, POST is assumed
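As a minimal sketch of rule 2 (an ASP.NET Web API style controller with the default route template; all names are hypothetical):

// Hypothetical controller relying only on method name prefixes, no [Http*] attributes.
public class ProductsController : ApiController
{
    // "Get" prefix: responds to GET api/products
    public IEnumerable<string> GetProducts() { return new[] { "apple", "pear" }; }

    // "Delete" prefix: responds to DELETE api/products/5
    public void DeleteProduct(int id) { /* remove the product */ }
}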

I've just published the package Siderite.StreamRegex, which allows you to use regular expressions on Stream and TextReader objects in .NET. Please leave any feedback or feature requests or whatever in the comments below. I am grateful for any input.

Here is the GitHub link: Siderite/StreamRegex
Here is the NuGet package link: Siderite.StreamRegex

How many times did you find some code that looked like this:
// previous code
let someValue1 = 'something';
let someValue2 = 'something else';
let someValue3 = 'something blue';
let someValue4 = 'something entirely different and original';
 
// our code
const parentObject=new ParentObject();
parentObject.childObject.someValue1=someValue1;
parentObject.childObject.someValue2=someValue2;
parentObject.childObject.someValue3=someValue3;
parentObject.childObject.someValue4=someValue4;
and you really wanted to make it look nicer? Well now you can! Introducing the ECMAScript 6 Object.assign method.

And thanks to some other features of ES6, we can write the same code like this:
// our code
const parentObject=new ParentObject();
Object.assign(parentObject.childObject,{someValue1, someValue2, someValue3, someValue4});

Note that { someVariable } is equivalent to the ES5 { someVariable: someVariable } notation.

You created a library in .NET Core and you want it to be usable for .NET Framework or Xamarin as well, so you want to turn it into a .NET Standard library. You learn that it is simple: a matter of changing the TargetFramework in the .csproj file, so you do this for all projects in the solution.

But .NET Standard is only designed for libraries, so it makes no sense to change the TargetFramework for other types of projects, including some that are in fact libraries, but are not used as such, like unit test projects.

For example, if you attempt to run xUnit tests in Visual Studio you will see that the tests are discovered by Test Explorer, but when you try to run them it says "No test is available in [your project]. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.". While this is more likely an issue with xUnit, it is also a non-issue, since your test project should use the .NET Standard libraries, not be one itself.
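To put it another way, here is a minimal sketch of the relevant parts of the two .csproj files (SDK-style projects; the names and versions are just an example): the library targets .NET Standard, while the test project keeps a runnable target framework.

<!-- MyLibrary.csproj: the reusable library -->
<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>

<!-- MyLibrary.Tests.csproj: the xUnit test project stays on a runnable framework -->
<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
</PropertyGroup>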

In ECMAScript 6 there is a Map class that looks and feels like a .NET Dictionary. Since TypeScript extends JavaScript, code like this is totally valid:
let map:Map<number,string> = new Map<number,string>();
map[1]="Category One";
let name:string = map[1]; //name is now "Category One"

However, the code above is wrong. What it does is create a string property named "1" on the map object, with a value of "Category One". The map's "size" remains 0. The correct code would be like this:
let map:Map<number,string> = new Map<number,string>();
map.set(1,"Category One");
let name:string = map.get(1); //name is now "Category One", map.size is 1

For plain ECMAScript 6 the code is similar; you can just declare the map as let map = new Map();

Just in case you wonder why you would use this harder-to-use type instead of a normal object: it's a matter of choice. Most of the pros involve the type of the key (object keys can only be strings or symbols) and the ease of counting and iterating, but these issues can be solved trivially in code, if ever needed.