Update 29 August 2017 - Version 3.0.4: The extension has been rewritten in EcmaScript6 and tested on Chrome, Firefox and Opera.

Update 03 March 2017 - Version 2.9.3: added a function to remove marketing URLs from all created bookmarks. Enable it in the Advanced settings section. Please let me know of any particular parameters you need purged. So far it removes utm_*, wkey, wemail, _hsenc, _hsmi and hsCtaTracking.

Update 26 February 2017 - Version 2.9.1: added a customizable URL comparison function. People can choose what makes pages different, either in general or for specific URL patterns.
Update 13 June 2016 - Stable version 2.5.0: added a Settings page, Read Later functionality, an undelete bookmarks page and much more.
Update 8 May 2016: The extension has been rewritten from scratch, with unit testing.
Update 28 March 2016: The entire source code of the extension is now open sourced on GitHub.

Whenever I read my news, I open a bookmark folder containing my favorite news sites, Twitter, Facebook, etc. I then proceed to open new tabs for each link I find interesting, closing the originating links when I am done. I usually end up with 30-60 open tabs, which wreaks havoc on my memory and on the computer's responsiveness. And it's really stupid, because I only need to read them one by one. In the end I decided to fight my laziness and create my first browser extension to help me out.

The extension is published here: Siderite's Bookmark Explorer. What it does is check whether the current page is found in any bookmark folder, then let you go forward or backward inside that folder.

So this is my scenario on using it:
  1. Open the sites that you want to get the links from.
  2. Open new tabs for the articles you want to read, the YouTube videos you want to watch, etc.
  3. Bookmark all tabs into a folder.
  4. Close all the tabs.
  5. Navigate to the bookmark folder and open the first link.
  6. Read the page, then press the Bookmark Navigator button and then the right arrow (support for the context menu and keyboard shortcuts has now been added).
  7. If you went too far by mistake, press the left arrow to go back.

OK, let's talk about how I did it. In order to create your own Chrome browser extension you need to follow these steps:

1. Create the folder


Create a folder and put inside it a file called manifest.json. Its possible structure is pretty complex, but let's start with what I used:
{
    "manifest_version" : 2,

    "name" : "Siderite's Bookmark Explorer",
    "description" : "Gives you a nice Next button to go to the next bookmark in the folder",
    "version" : "1.0.2",

    "permissions" : [
        "tabs",
        "activeTab",
        "bookmarks",
        "contextMenus"
    ],
    "browser_action" : {
        "default_icon" : "icon.png",
        "default_popup" : "popup.html"
    },
    "background" : {
        "scripts" : ["background.js"],
        "persistent" : false
    },
    "commands" : {
        "prevBookmark" : {
            "suggested_key" : {
                "default" : "Ctrl+Shift+K"
            },
            "description" : "Navigate to previous bookmark in the folder"
        },
        "nextBookmark" : {
            "suggested_key" : {
                "default" : "Ctrl+Shift+L"
            },
            "description" : "Navigate to next bookmark in the folder"
        }
    }
}

The manifest version must be 2. You need a name, a description and a version number. Start with something small, like 0.0.1, as you will want to increase the value as you make changes. The other mandatory item is the permissions object, which tells the browser which Chrome APIs you intend to use. I've set activeTab, because I want to know what the active tab is and what its URL is; tabs, because I might want to get a tab by id, and without this permission I wouldn't get info like its URL; bookmarks, because I want to access the bookmarks; and contextMenus, because I want to add items to the page context menu. More on permissions here.

Now we need to decide how the extension should behave.

If you want to click on it and get a popup that does stuff, you need to specify the browser_action object, where you specify the icon that you want to have in the Chrome extensions bar and/or the popup page that you want to open. If you don't specify this, you get a default button that does nothing on click and presents the standard context menu on right click. You may only specify the icon, though. More on browserAction here.

If you want an extension that reacts to background events, monitors URL changes on the current page or responds to commands, you need a background page. Here I specify that the page is a JavaScript file, but you can add HTML, CSS and other resources as well. More on background here.

Obviously, the files mentioned in the manifest must be created in the same folder.

The last item in the manifest is the commands object. For each command you need to define an id, a keyboard shortcut (unfortunately only 0..9 and A..Z are usable, plus modifiers) and a description. In order to respond to commands you need a background page, as shown above.

2. Test the extension


Next you open a Chrome tab and go to chrome://extensions, click on the 'Developer mode' checkbox if it is not checked already, and you get a Load unpacked extension button. Click it, point the dialog that appears to your folder and test that everything works OK.

3. Publish your extension


In order to publish your extension you need to have a Chrome Web Store account. Go to the Chrome Web Store Developer Dashboard and create one. You will need to pay a one-time $5 fee to open it. I know, it kind of sucks, but I paid it and was done with it.

Next, you need to Add New Item, where you will be asked for a packed extension, which is nothing but the ZIP archive of all the files in your folder.

That's it.

Let's now discuss actual implementation details.

Adding functionality to popup elements


Getting the popup page elements is easy with vanilla Javascript, because we know we are building for only one browser: Chrome! So getting elements is done via document.getElementById(id), for example, and adding functionality is done via elem.addEventListener(event, handler, false);

One can use the elements as objects directly to set values related to those elements. For example, my prev/next button functionality takes the URL from the button itself and changes the location of the current tab to that value. Code executed when the popup opens sets the 'url' property on the button object.

Just remember to do it after the popup has finished loading (with document.addEventListener('DOMContentLoaded', function () { /*here*/ });).
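For instance, a minimal sketch of the whole pattern might look like this (the 'nextButton' id and the stored 'url' property are just illustrations, not the extension's actual code):
document.addEventListener('DOMContentLoaded', function () {
    var nextButton = document.getElementById('nextButton');
    // some initialization code ran before this and stored the target URL on the button object
    nextButton.addEventListener('click', function () {
        chrome.tabs.update({ url : nextButton.url });
    }, false);
});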

Getting the currently active tab


All the Chrome APIs are asynchronous, so the code is:
chrome.tabs.query({
    'active' : true,
    'lastFocusedWindow' : true
}, function (tabs) {
    var tab = tabs[0];
    if (!tab) return;
    // do something with tab
});

More on chrome.tabs here.

Changing the URL of a tab


chrome.tabs.update(tab.id, {
    url : url
});

Changing the icon in the Chrome extensions bar


if (chrome.browserAction) chrome.browserAction.setIcon({
    path : {
        '19' : 'anotherIcon.png'
    },
    tabId : tab.id
});

The icons are 19x19 PNG files. browserAction may not be available if it was not declared in the manifest.

Get bookmarks


Remember you need the bookmarks permission in order for this to work.
chrome.bookmarks.getTree(function (tree) {
    //do something with bookmarks
});

The tree is an array of items that have title and url or children. The first tree array item is the Bookmarks Bar, for example. More about bookmarks here.
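For example, here is a minimal sketch of how one might walk the tree to find the folder containing a given URL (my own helper for illustration, not the extension's actual code):
function findFolder(nodes, url) {
    for (var i = 0; i < nodes.length; i++) {
        var node = nodes[i];
        if (node.children) {
            // a folder: check whether any direct child bookmark matches the URL
            if (node.children.some(function (c) { return c.url === url; })) return node;
            // otherwise recurse into subfolders
            var found = findFolder(node.children, url);
            if (found) return found;
        }
    }
    return null;
}

chrome.bookmarks.getTree(function (tree) {
    var folder = findFolder(tree, 'http://example.com/');
    if (folder) console.log('Found in folder: ' + folder.title);
});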

Hooking to Chrome events


chrome.tabs.onUpdated.addListener(refresh);
chrome.tabs.onCreated.addListener(refresh);
chrome.tabs.onActivated.addListener(refresh);
chrome.tabs.onActiveChanged.addListener(refresh);
chrome.contextMenus.onClicked.addListener(function (info, tab) {
    navigate(info.menuItemId, tab);
});
chrome.commands.onCommand.addListener(function (command) {
    navigate(command, null);
});

In order to get extended info on the tab object received by tabs events, you need the tabs permission. For access to the contextMenus object you need the contextMenus permission.

Warning: if you install your extension from the store and then disable it so you can test your unpacked version, you will notice that keyboard commands do not work. It seems to be a bug in Chrome. The solution is to remove the store extension completely so that the other version can hook into the keyboard shortcuts.

Creating, detecting and removing menu items


Creating a menu item is very simple:
chrome.contextMenus.create({
    "id" : "menuItemId",
    "title" : "Menu item description",
    "contexts" : ["page"] //where the menu item will be available
});
However, there is no way to 'get' a menu item, and if you blindly try to remove one with .remove(id) it will throw an exception. My solution was to use an object that records when I created and when I destroyed each menu item, so I can safely call .remove().
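A minimal sketch of that bookkeeping idea (the menuItems object is my own illustration, not the extension's exact code):
var menuItems = {};

function createMenuItem(id, title) {
    if (menuItems[id]) return; // already created
    chrome.contextMenus.create({ "id" : id, "title" : title, "contexts" : ["page"] });
    menuItems[id] = true;
}

function removeMenuItem(id) {
    if (!menuItems[id]) return; // never created or already removed, so don't call .remove and throw
    chrome.contextMenus.remove(id);
    delete menuItems[id];
}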

To hook into the context menu events, use chrome.contextMenus.onClicked.addListener(function (info, tab) { });, where info contains the menuItemId property, which is the same as the id used when creating the item.

Again, to access the context menu API, you need the contextMenus permission. More about context menus here.

Commands


You use commands basically to define keyboard shortcuts. You define them in your manifest and then you hook to the event with chrome.commands.onCommand.addListener(function (command) { });, where command is a string containing the key of the command.

Only modifiers, letters and digits can be used. Amazingly, you don't need any permission to use this API, but since commands are defined in the manifest, that would be superfluous, I guess.

That's it for what I wanted to discuss here. Any questions, bug reports, feature requests... use the comments in the post.

Here is a very informative presentation about the internals of await/async, which makes things a lot clearer when you are trying to understand what the hell is going on there: [video embedded in the original post]

I have found a new addiction: prowling StackOverflow and answering questions. It does teach a lot, because you must provide, in record time, a quality answer that is appreciated both by the person asking the question and by the evil reviewers who hunt you down and downvote you if you mess up. OK, they're not evil, they're necessary. Assholes! :) Anyway, in honor of my 1000th point, I want to share with you the code that I have been working on for one of the questions.

The question had a misleading title: How to inherit a textblock properties to a custom control in c#, and had a 500 point bounty on it (that's a lot), placed there by someone other than the original asker. In fact, the question was more about how to use a normal TextBlock control, but have it display outlined text, with a specific "stroke" and thickness. Funny thing, I had already answered this question a few days before. The bounty, though, was set on a more formal answer, one that would cover any graphical transformation of a TextBlock, considering that the control has sealed its OnRender override and there is no way of reaching its drawing context.

We need to consider that WPF was designed to be modular, unlike ASP.Net or Windows Forms, for which inheritance was the preferred way to go; WPF favors composition instead. That is why controls seal their OnRender implementation: there is another way of getting to the drawing context, and that is an Adorner. It is also possible to use an Effect, but for the life of me I couldn't understand how to easily write one.

Anyway, adorners have their pros and cons. The pro is that you get to still use a TextBlock or whatever control you want to use and you just adorn it with what you need. It receives a UIElement in the constructor and in its OnRender method you get access to the drawing context of the control. Here is the code of the adorner that I presented in the StackOverflow question:
using System;
using System.ComponentModel;
using System.Globalization;
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Media;

public class StrokeAdorner : Adorner
{
    private TextBlock _textBlock;

    private Brush _stroke;
    private ushort _strokeThickness;

    public Brush Stroke
    {
        get
        {
            return _stroke;
        }
        set
        {
            _stroke = value;
            _textBlock.InvalidateVisual();
            InvalidateVisual();
        }
    }

    public ushort StrokeThickness
    {
        get
        {
            return _strokeThickness;
        }
        set
        {
            _strokeThickness = value;
            _textBlock.InvalidateVisual();
            InvalidateVisual();
        }
    }

    public StrokeAdorner(UIElement adornedElement) : base(adornedElement)
    {
        _textBlock = adornedElement as TextBlock;
        ensureTextBlock();
        // invalidate the adorner whenever a property that affects rendering changes on the TextBlock
        foreach (var property in TypeDescriptor.GetProperties(_textBlock).OfType<PropertyDescriptor>())
        {
            var dp = DependencyPropertyDescriptor.FromProperty(property);
            if (dp == null) continue;
            var metadata = dp.Metadata as FrameworkPropertyMetadata;
            if (metadata == null) continue;
            if (!metadata.AffectsRender) continue;
            dp.AddValueChanged(_textBlock, (s, e) => this.InvalidateVisual());
        }
    }

    private void ensureTextBlock()
    {
        if (_textBlock == null) throw new Exception("This adorner works on TextBlocks only");
    }

    protected override void OnRender(DrawingContext drawingContext)
    {
        ensureTextBlock();
        base.OnRender(drawingContext);
        var formattedText = new FormattedText(
            _textBlock.Text,
            CultureInfo.CurrentUICulture,
            _textBlock.FlowDirection,
            new Typeface(_textBlock.FontFamily, _textBlock.FontStyle, _textBlock.FontWeight, _textBlock.FontStretch),
            _textBlock.FontSize,
            Brushes.Black // this brush does not matter since we use only the geometry of the text
        );

        formattedText.TextAlignment = _textBlock.TextAlignment;
        formattedText.Trimming = _textBlock.TextTrimming;
        formattedText.LineHeight = _textBlock.LineHeight;
        formattedText.MaxTextWidth = _textBlock.ActualWidth - _textBlock.Padding.Left - _textBlock.Padding.Right;
        formattedText.MaxTextHeight = _textBlock.ActualHeight - _textBlock.Padding.Top; // - _textBlock.Padding.Bottom;
        while (formattedText.Extent == double.NegativeInfinity)
        {
            formattedText.MaxTextHeight++;
        }

        // Build the geometry object that represents the text.
        var textGeometry = formattedText.BuildGeometry(new Point(_textBlock.Padding.Left, _textBlock.Padding.Top));
        var textPen = new Pen(Stroke, StrokeThickness);
        drawingContext.DrawGeometry(Brushes.Transparent, textPen, textGeometry);
    }
}

The first con is that you need to use it from code; there is no native way of using it from XAML. The second con, and the most brutal one, is that when the control changes what it renders, the adorner doesn't follow suit! Someone answered this better than I can describe it here. Guess where? On StackOverflow, of course.

The first problem I have solved with another great WPF contraption: attached properties. Here is the code for the properties:
public static class Adorning
{
    public static Brush GetStroke(DependencyObject obj)
    {
        return (Brush)obj.GetValue(StrokeProperty);
    }
    public static void SetStroke(DependencyObject obj, Brush value)
    {
        obj.SetValue(StrokeProperty, value);
    }
    // Using a DependencyProperty as the backing store for Stroke. This enables animation, styling, binding, etc...
    public static readonly DependencyProperty StrokeProperty =
        DependencyProperty.RegisterAttached("Stroke", typeof(Brush), typeof(Adorning), new PropertyMetadata(Brushes.Transparent, strokeChanged));

    private static void strokeChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var stroke = e.NewValue as Brush;
        ensureAdorner(d, a => a.Stroke = stroke);
    }

    private static void ensureAdorner(DependencyObject d, Action<StrokeAdorner> action)
    {
        var tb = d as TextBlock;
        if (tb == null) throw new Exception("StrokeAdorner only works on TextBlocks");
        EventHandler f = null;
        f = new EventHandler((o, e) =>
        {
            var adornerLayer = AdornerLayer.GetAdornerLayer(tb);
            if (adornerLayer == null) throw new Exception("AdornerLayer should not be empty");
            var adorners = adornerLayer.GetAdorners(tb);
            var adorner = adorners == null ? null : adorners.OfType<StrokeAdorner>().FirstOrDefault();
            if (adorner == null)
            {
                adorner = new StrokeAdorner(tb);
                adornerLayer.Add(adorner);
            }
            tb.LayoutUpdated -= f;
            action(adorner);
        });
        tb.LayoutUpdated += f;
    }

    public static double GetStrokeThickness(DependencyObject obj)
    {
        return (double)obj.GetValue(StrokeThicknessProperty);
    }
    public static void SetStrokeThickness(DependencyObject obj, double value)
    {
        obj.SetValue(StrokeThicknessProperty, value);
    }
    // Using a DependencyProperty as the backing store for StrokeThickness. This enables animation, styling, binding, etc...
    public static readonly DependencyProperty StrokeThicknessProperty =
        DependencyProperty.RegisterAttached("StrokeThickness", typeof(double), typeof(Adorning), new PropertyMetadata(0.0, strokeThicknessChanged));

    private static void strokeThicknessChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        ensureAdorner(d, a =>
        {
            if (DependencyProperty.UnsetValue.Equals(e.NewValue)) return;
            a.StrokeThickness = (ushort)(double)e.NewValue;
        });
    }
}
and an example of use:
<TextBlock
    x:Name="t1"
    Width="600"
    HorizontalAlignment="Stretch"
    FontSize="40"
    FontWeight="Bold"
    local:Adorning.Stroke="Red"
    local:Adorning.StrokeThickness="2"
    Text="Some text that needs to be outlined"
    TextAlignment="Center"
    TextWrapping="Wrap">
    <TextBlock.Foreground>
        <LinearGradientBrush StartPoint="0,0" EndPoint="1,1">
            <GradientStop Offset="0" Color="Green" />
            <GradientStop Offset="1" Color="Blue" />
        </LinearGradientBrush>
    </TextBlock.Foreground>
</TextBlock>

Now for the second problem: StrokeAdorner already contains a fix for it, but I need to be more specific about it, because as written I believe it leaks memory. Nothing terribly serious, but still. The code I am talking about is in the constructor:
foreach (var property in TypeDescriptor.GetProperties(_textBlock).OfType<PropertyDescriptor>())
{
    var dp = DependencyPropertyDescriptor.FromProperty(property);
    if (dp == null) continue;
    var metadata = dp.Metadata as FrameworkPropertyMetadata;
    if (metadata == null) continue;
    if (!metadata.AffectsRender) continue;
    dp.AddValueChanged(_textBlock, (s, e) => this.InvalidateVisual());
}
Here I am enumerating each property of the target (the TextBlock), checking whether it is a dependency property with the AffectsRender flag in its metadata, and if so I add a property change handler that calls InvalidateVisual on the adorner. Notice that in no part of the code do I remove those handlers. However, at this time I don't think it is a problem. Anyway, the code itself is more about the principles of the thing than the implementation.

If I were to talk about the implementation, I would say that this code doesn't always work. Even if I use the padding of the element and its actual dimensions, the FormattedText sometimes renders things differently from the TextBlock, especially if one plays with TextWrapping and TextTrimming. But that is another subject altogether. Yay! 1000 points on StackOverflow! "And what do you do with the points?" [my wife :(]

Inspired by my own post about simulating F# active patterns in C# and remembering an old crazy post about using try/catch to emulate a switch on types, I came up with this utility class that acts and looks like a switch statement, but can do a lot more. The basic idea was to use a fluent interface to get the same functionality of switch, but also add the possibility of using complex objects as case values or even code conditions.


First, here is the source code:
namespace Constructs
{
    public static class Do
    {
        public static Switch<T> Switch<T>(T value)
        {
            return Constructs.Switch<T>.From(value);
        }
    }

    public class Switch<T>
    {
        private bool _isDone;
        private T _value;
        private Type _valueType;

        private Switch(T value)
        {
            this._value = value;
        }

        public static Switch<T> From(T value)
        {
            return new Switch<T>(value);
        }

        public Switch<T> Case(Func<T> valueFunc, Action<T> action, bool fallThrough = false)
        {
            if (_isDone) return this;
            return Case(valueFunc(), action, fallThrough);
        }

        public Switch<T> Case(T value, Action<T> action, bool fallThrough = false)
        {
            if (_isDone) return this;
            return If(v => object.Equals(value, v), action, fallThrough);
        }

        public void Default(Action<T> action)
        {
            if (_isDone) return;
            action(_value);
        }

        public Switch<T> If(Func<T, bool> boolFunc, Action<T> action, bool fallThrough = false)
        {
            if (_isDone) return this;

            if (boolFunc(_value))
            {
                action(_value);
                _isDone = !fallThrough;
            }

            return this;
        }

        private Type getValueType()
        {
            if (_valueType != null) return _valueType;
            if (object.Equals(_value, null)) return null;
            _valueType = _value.GetType();
            return _valueType;
        }

        public Switch<T> OfStrictType<TType>(Action<T> action, bool fallThrough = false)
        {
            if (_isDone) return this;
            if (getValueType() == typeof(TType))
            {
                action(_value);
                _isDone = !fallThrough;
            }
            return this;
        }

        public Switch<T> OfType<TType>(Action<T> action, bool fallThrough = false)
        {
            if (_isDone) return this;
            if (getValueType() == null) return this;
            if (typeof(TType).IsAssignableFrom(getValueType()))
            {
                action(_value);
                _isDone = !fallThrough;
            }
            return this;
        }
    }
}
I use the static class Do to very easily get a Switch<T> object based on a value, then run actions on that value. The Switch class has a _value field and an _isDone field. When _isDone is set to true, no further action is executed (like breaking from a switch block). The class has the methods Case and If, as well as OfType and OfStrictType, all of which execute an action if the value, the function result, the condition or the type provided matches the initial value. Default always comes last, executing its action only if nothing has matched before.

Here is an example of use:
for (var i = 0; i < 25; i++)
{
    Do.Switch(i)
        .Case(10, v => Console.WriteLine("i is ten"), true)
        .Case(() => DateTime.Now.Minute / 2, v =>
            Console.WriteLine($"i is the same with half of the minute of the time ({v})"), true)
        .If(v => v % 7 == 0, v => Console.WriteLine($"{v} divisible by 7"))
        .Default(v => Console.WriteLine($"{v}"));
}
where the numbers from 0 to 24 are compared with 10 and with half of the minute value of the current time, then checked for divisibility by 7; otherwise they are simply displayed. Note that the first two Case methods receive an optional bool parameter that allows the check to fall through, so the value is checked for equality with 10, but also for equality with half the minute value and for divisibility by 7. On the other hand, if the value is divisible by 7, it will not be displayed by the Default method.

Here is an example that solves the type check with the same construct:
var f = new Action<object>(x =>
    Do.Switch(x)
        .OfType<string>(v => Console.WriteLine($"{v} is a string"))
        .OfType<DateTime>(v => Console.WriteLine($"{v} is a DateTime"))
        .OfType<int>(v => Console.WriteLine($"{v} is an integer"))
        .OfType<object>(v => Console.WriteLine($"{v} is an object"))
);
f(DateTime.Now);
f("Hello, world!");
f(13);
f(0.45);

And finally, here is the solution to the famous FizzBuzz test, using this construct:
for (var i = 0; i < 100; i++)
{
    Do.Switch(i)
        .If(v => v % 15 == 0, v => Console.WriteLine($"FizzBuzz"))
        .If(v => v % 3 == 0, v => Console.WriteLine($"Fizz"))
        .If(v => v % 5 == 0, v => Console.WriteLine($"Buzz"))
        .Default(v => Console.WriteLine($"{v}"));
}

Now, the question is how this fares against the traditional switch. How much overhead does it add if we were to, let's say, take all switch/case blocks and replace them with this construct? This brings me to the idea of using AOP to check whether the construct has a certain shape and then replace it with the most efficient implementation of it. With the new Roslyn compiler I think it is doable, but that is beyond the scope of this post.

I have tried, in the few minutes that it took to write the classes, to think of performance. That is why I cache the value type, although I don't think it really matters that much. Also note there is a difference between Case(() => SomeMethodThatReturnsAValue(), DoSomething) and Case(SomeMethodThatReturnsAValue(), DoSomething). In the first case, SomeMethodThatReturnsAValue will only be executed if the switch has not matched anything previously; in the second, the method is executed up front to get a value and then, when the time comes, the switch compares it with the initial value. The first form is better, and costs only 4 extra characters.

Hope it helps someone.

F# has an interesting feature called Active Patterns. I liked the idea and started thinking how I would implement this in C#. It all started from this StackOverflow question to which only Scala answers were given at the time.

Yeah, if you read the Microsoft definition you can almost see the egghead that wrote it so that you can't understand anything. Let's start with a simple example that I have shamelessly stolen from here.
// create an active pattern
let (|Int|_|) str =
    match System.Int32.TryParse(str) with
    | (true, int) -> Some(int)
    | _ -> None

// create an active pattern
let (|Bool|_|) str =
    match System.Boolean.TryParse(str) with
    | (true, bool) -> Some(bool)
    | _ -> None

// create a function to call the patterns
let testParse str =
    match str with
    | Int i -> printfn "The value is an int '%i'" i
    | Bool b -> printfn "The value is a bool '%b'" b
    | _ -> printfn "The value '%s' is something else" str

// test
testParse "12"
testParse "true"
testParse "abc"

The point here is that you have two functions that each return a parsed value, either int or bool, plus a matching-success flag. That's a problem in C#, because it is strongly typed, and if you want to use anything other than values boxed in objects, you need to define some sort of class that holds both values. I've done that with a class I called Option<T>. You might want to see the code; it is basically a kind of Nullable class that accepts any type, not just value types.
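For reference, here is a minimal sketch of what such an Option<T> class could look like, containing just the members used in this post (the actual class in the demo solution may differ):
public class Option<T>
{
    private readonly T _value;

    // the Empty option: holds no value
    private Option()
    {
    }

    public Option(T value)
    {
        _value = value;
        HoldsValue = true;
    }

    public bool HoldsValue { get; private set; }

    public T Value
    {
        get
        {
            if (!HoldsValue) throw new InvalidOperationException("The option holds no value");
            return _value;
        }
    }

    public static Option<T> Empty { get; } = new Option<T>();

    // helper used below to declare "active pattern" functions fluently
    public static Func<TInput, Option<T>> From<TInput>(Func<TInput, Option<T>> func)
    {
        return func;
    }
}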

Then I wrote code that did what the original code did and it looks like this:
var apInt = new Func<string, Option<int>>(s =>
{
    int i;
    if (System.Int32.TryParse(s, out i)) return new Option<int>(i);
    return Option<int>.Empty;
});
var apBool = new Func<string, Option<bool>>(s =>
{
    bool b;
    if (System.Boolean.TryParse(s, out b)) return new Option<bool>(b);
    return Option<bool>.Empty;
});

var testParse = new Action<string>(s =>
{
    var oi = apInt(s);
    if (oi.HoldsValue)
    {
        Console.WriteLine($"The value is an int '{oi.Value}'");
        return;
    }
    var ob = apBool(s);
    if (ob.HoldsValue)
    {
        Console.WriteLine($"The value is a bool '{ob.Value}'");
        return;
    }
    Console.WriteLine($"The value '{s}' is something else");
});

testParse("12");
testParse("true");
testParse("abc");

It's pretty straightforward, but I didn't like the verbosity, so I decided to write it in a fluent way. Using another class called FluidFunc that I created for this purpose, the code now looks like this:
var apInt = Option<int>.From<string>(s =>
{
    int i;
    return System.Int32.TryParse(s, out i)
        ? new Option<int>(i)
        : Option<int>.Empty;
});

var apBool = Option<bool>.From<string>(s =>
{
    bool b;
    return System.Boolean.TryParse(s, out b)
        ? new Option<bool>(b)
        : Option<bool>.Empty;
});

var testParse = new Action<string>(s =>
{
    FluidFunc
        .Match(s)
        .With(apInt, r => Console.WriteLine($"The value is an int '{r}'"))
        .With(apBool, r => Console.WriteLine($"The value is a bool '{r}'"))
        .Else(v => Console.WriteLine($"The value '{v}' is something else"));
});

testParse("12");
testParse("true");
testParse("abc");

Alternatively, one might use a Tuple<bool,T> to avoid the Option class, and the code might look like this:
var apInt = FluidFunc.From<string, int>(s =>
{
    int i;
    return System.Int32.TryParse(s, out i)
        ? new Tuple<bool, int>(true, i)
        : new Tuple<bool, int>(false, 0);
});

var apBool = FluidFunc.From<string, bool>(s =>
{
    bool b;
    return System.Boolean.TryParse(s, out b)
        ? new Tuple<bool, bool>(true, b)
        : new Tuple<bool, bool>(false, false);
});

var testParse = new Action<string>(s =>
{
    FluidFunc
        .Match(s)
        .With(apInt, r => Console.WriteLine($"The value is an int '{r}'"))
        .With(apBool, r => Console.WriteLine($"The value is a bool '{r}'"))
        .Else(v => Console.WriteLine($"The value '{v}' is something else"));
});

testParse("12");
testParse("true");
testParse("abc");

As you can see, the code is now almost as compact as the original F# code. I do not pretend this is the best way of doing it, but it is what I would do. It also reminds me of the classical situation where you want a switch, but with dynamically calculated values or with complex object values, like doing something based on the type of a parameter or on the result of a more complicated condition. I find this fluent format quite useful.

One crazy cool idea is to create a sort of Linq provider for regular expressions, creating the same type of fluidity in generating regular expressions, but in the end getting a ... err... regular compiled regular expression. But that is for other, more epic posts.

The demo solution for this is now hosted on Github.

Here is the code of the FluidFunc class, in case you were wondering:
public static class FluidFunc
{
    public static FluidFunc<TInput> Match<TInput>(TInput value)
    {
        return FluidFunc<TInput>.With(value);
    }

    public static Func<TInput, Tuple<bool, TResult>> From<TInput, TResult>(Func<TInput, Tuple<bool, TResult>> func)
    {
        return func;
    }
}

public class FluidFunc<TInput>
{
    private TInput _value;
    private static FluidFunc<TInput> _noOp;
    private bool _isNoop;

    public static FluidFunc<TInput> NoOp
    {
        get
        {
            if (_noOp == null) _noOp = new FluidFunc<TInput>();
            return _noOp;
        }
    }

    private FluidFunc()
    {
        this._isNoop = true;
    }

    private FluidFunc(TInput value)
    {
        this._value = value;
    }

    public static FluidFunc<TInput> With(TInput value)
    {
        return new FluidFunc<TInput>(value);
    }

    public FluidFunc<TInput> With<TNew>(Func<TInput, Option<TNew>> func, Action<TNew> action)
    {
        if (this._isNoop)
        {
            return this;
        }
        var result = func(_value);
        if (result.HoldsValue)
        {
            action(result.Value);
            return FluidFunc<TInput>.NoOp;
        }
        return new FluidFunc<TInput>(_value);
    }

    public FluidFunc<TInput> With<TNew>(Func<TInput, Tuple<bool, TNew>> func, Action<TNew> action)
    {
        if (this._isNoop)
        {
            return this;
        }
        var result = func(_value);
        if (result.Item1)
        {
            action(result.Item2);
            return FluidFunc<TInput>.NoOp;
        }
        return new FluidFunc<TInput>(_value);
    }

    public void Else(Action<TInput> action)
    {
        if (this._isNoop) return;

        action(_value);
    }
}

In the previous post I discussed Firebase, used from Javascript, covering initialization, basic security, reading all records and inserting. In this post I want to discuss complex queries: filtering, ordering, limiting, indexing, etc. For that I will get inspiration (read: copy with impunity) from the Firebase documentation on the subject, Retrieving Data, but make it quick and dirty... you know, like sex! Thank you, ma'am!

OK, the fluid interface for getting the data looks a lot like C# LINQ and I plan to work on a Linq2Firebase thing, but not yet. Since LINQ itself got its inspiration from SQL, I was planning to structure the post in a similar manner: how to do order by, top/limit, select conditions, indexing and so on, so we can really use Firebase like a database. An interesting concept to explore is joining, since this is an object database, but we still need it, because we want to filter by the results of the join before we return the result, like getting all the transactions of users named 'Adam'. Aggregation is another thing that I feel Firebase needs to support. I don't want to download a billion records just to compute the sum of a property.

However, the Firebase API is rather limited at the moment. You get .orderByChild, then stuff like .equalTo, .startAt and .endAt and then .limitToFirst and .limitToLast. No aggregation, no complex filters, no optimized indexing, no joining. As far as I can see, this is by design, so that the server is as dumb as possible, but think about that 1GB for the free plan. It is a lot.

So, let's try a complex query, see where it gets us.
ref.child('user')
    .once('value', function (snapshot) {
        var users = [];
        snapshot.forEach(function (childSnapshot) {
            var item = childSnapshot.val();
            if (/adam/i.test(item.name)) {
                users.push(item.userId);
            }
        });
        getInvoiceTotalForUsers(users, DoSomethingWithSum);
    });

function getInvoiceTotalForUsers(users, callback) {
    var sum = 0;
    var count = 0;
    for (var i = 0; i < users.length; i++) {
        var id = users[i];
        ref.child('invoice')
            .equalTo(id, 'userId')
            .orderByChild('price')
            .startAt(10)
            .endAt(100)
            .once('value', function (snapshot) {
                snapshot.forEach(function (childSnapshot) {
                    var item = childSnapshot.val();
                    sum += item.price;
                    count++;
                    if (count == users.length) callback(sum);
                });
            });
    }
}

First, I selected the users that have 'adam' in the name. I used .once instead of .on because I don't want to wait for new data to arrive, I want the data so far. I used .forEach to enumerate the data from the value event. With the array of userIds I call getInvoiceTotalForUsers, which gets all the invoices for each user with a price greater than or equal to 10 and less than or equal to 100, and finally calls a callback with the resulting sum of invoice prices.

For me this feels very cumbersome. I can think of several methods to simplify it, but the vanilla code would probably look like the above.

I had been looking for this kind of service for a long time, mainly because I wanted to monitor and persist stuff for my blog. Firebase is all of that and more and, with a free plan of 1GB, it's pretty awesome. However, as it is a NoSQL database accessed via Javascript, it may be a bit difficult to get at first. In this post I will be talking about how to use Firebase as a traditional database using their Javascript library.

So, first off, go to the main website and sign up with Google. Once you do, you get a page with a 5 minute tutorial, quickstarts, examples, API docs... but you want the ultra-quick start! Copy-pasted working code! So click on the Manage App button.

Take note of the URL where you are redirected. It is the one used for all data usage as well. Ok, quick test code:
var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.push({
    val1: "any object you like",
    val2: 1,
    val3: "as long as it is not undefined or some complex type like a Date object",
    val4: "think of it as JSON"
});
What this does is take that object there and save it in your database, in the "test" container. Let's say it's like a table. You can also save objects directly in the root, but I don't recommend it, as the path of the object is the only one telling you what type of object it is.

Now, in order to read inserted objects, you use events. It's a sort of reactive way of doing things that might be a little unfamiliar. For example, when you run the following piece of code, you will get, after you connect, all the objects you ever inserted into "test".
var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.on('child_added', function (snapshot) {
    var obj = snapshot.val();
    handle(obj); //do what you want with the object
});

Note that you can use either child_added or value as the retrieve event. While 'child_added' is fired for each retrieved object, 'value' returns one snapshot containing all data items, then proceeds to fire on each added item with full snapshots. Beware! That means if you have a million items and you do a value query, you get all of them (or at least an attempt is made, I think there are limits), then on the next added item you get a million and one. If you use .limitToLast(50), for example, you will get the last 50 items, then, when a new one is added, you get another 50-item snapshot. In my mind, 'value' is to be used with .once(), while 'child_added' goes with .on(). More details in my Queries post.
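A minimal sketch of the distinction, reusing the handle function from above:
// one-off query: a single snapshot of the last 50 items
testRef.limitToLast(50).once('value', function (snapshot) {
    snapshot.forEach(function (childSnapshot) {
        handle(childSnapshot.val());
    });
});
// continuous stream: fires once per existing item, then once for every new item
testRef.on('child_added', function (snapshot) {
    handle(snapshot.val());
});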

Just by using that, you have created a way to insert and read values from the database. Of course, you don't want to leave your database unprotected; anyone could read or change your data this way. You need some sort of authentication. For that, go to the left and click on Login & Auth, then go to Email & Password and configure which users can log in to your application. Notice that every user has a UID defined. Here is the code to use to authenticate:
var testRef = new Firebase('https://*******.firebaseio.com/test');
testRef.authWithPassword({
    email : "some@email.com",
    password : "password"
}, function (error, authData) {
    if (error) {
        console.log("Login Failed!", error);
    } else {
        console.log("Authenticated successfully with payload:", authData);
    }
});
There is an extra step you want to take: secure your database so that it can only be accessed by logged-in users. For that you have to go to Security & Rules. A very simple structure to use is this:
{
    "rules": {
        "test": {
            ".read": false,
            ".write": false,
            "$uid": {
                // grants write access to the owner of this user account whose uid must exactly match the key ($uid)
                ".write": "auth !== null && auth.uid === $uid",
                // grants read access to any user who is logged in with an email and password
                ".read": "auth !== null && auth.provider === 'password'"
            }
        }
    }
}
This means that:
  1. It is forbidden to read from or write to test directly.
  2. Writing to test/$uid is allowed only for the user with that same uid (remember the user UID from when you created the email/password pair).
  3. Reading from test/$uid is allowed as long as you are authenticated in any way.

Gotcha! This rule list allows you to read and write whatever you want on the root itself. Anyone could just waltz on your URL and fill your database with crap, just not in the "test" path. More than that, they can just listen to the root and get EVERYTHING that you write in. So the correct rule set is this:
{
    "rules": {
        ".read": false,
        ".write": false,
        "test": {
            ".read": false,
            ".write": false,
            "$uid": {
                // grants write access to the owner of this user account whose uid must exactly match the key ($uid)
                ".write": "auth !== null && auth.uid === $uid",
                // grants read access to any user who is logged in with an email and password
                ".read": "auth !== null && auth.provider === 'password'"
            }
        }
    }
}

In this particular case, in order to get to the path /test/$uid you can use the .child() function, like this: testRef.child(authData.uid).push(...), where authData is the object you retrieve from the authentication method and which contains your logged-in user's UID.

The rule system is easy to understand: use ".read"/".write" and a Javascript expression to allow or deny that operation, then add children paths and do the same. There is a lot more you can learn about authentication: one can authenticate with Google, Twitter, Facebook, or even with custom tokens. Read more at Email & Password Authentication, User Authentication and User Based Security.

But because you want to do a dirty little hack and just make it work, here is one way:
{
    "rules": {
        ".read": false,
        ".write": false,
        "test": {
            ".read": "auth.uid == 'MyReadUser'",
            ".write": "auth.uid == 'MyWriteUser'"
        }
    }
}
This tells Firebase that no one is allowed to read/write except in /test, and only if their UID is MyReadUser or MyWriteUser, respectively. In order to authenticate for this, we use this piece of code:
testRef.authWithCustomToken(token,success,error);
The handlers for success and error do the rest. In order to create the token, you need to do some cryptography, but never mind that, there is an online JsFiddle where you can do just that without any thought. First you need a secret, for which you go into your Firebase console and click on Secrets. Click on "Show" and copy-paste that secret into the JsFiddle "secret" textbox. Then enter MyReadUser/MyWriteUser in the "uid" textbox and create the token. You can then authenticate into Firebase using that ugly string that it spews out at you.
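If you prefer doing it yourself instead of using the JsFiddle, the same thing can be done in Node with the firebase-token-generator library; a minimal sketch (treat the exact API as my assumption):
var FirebaseTokenGenerator = require('firebase-token-generator');
var tokenGenerator = new FirebaseTokenGenerator('YOUR_FIREBASE_SECRET'); // the secret from the console
var token = tokenGenerator.createToken({ uid: 'MyWriteUser' });          // the uid checked by the rules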

Done, now you only need to use the code. Here is an example:
var testRef = new Firebase('https://*****.firebaseio.com/test');
testRef.authWithCustomToken(token, function (err, authData) {
    if (err) alert(err);
    testRef.on('child_added', function (snapshot) {
        var message = snapshot.val();
        handle(message);
    });
});
where token is the generated token and handle is a function that will run with each of the objects in the database.

In my case, I needed a way to write messages on the blog for users to read. I left read access open for everyone (true) and used the token idea above to restrict writing. An HTML page that I run locally uses the authentication to write the messages.

There you have it. In the next post I will examine how you can query the database for specific objects.

I was reading this summary of a talk that Dr. Gerard Holzmann held at the USENIX Hot Topics in System Dependability mini-conference on 7 Oct 2012 in Hollywood, California. In it there is a link to what the people at JPL decided to use as the core of their coding standard: The Power of 10. Yeah, it sounds like a self-help system for addicts, but in fact it is a very smart idea. You see, when you code for the JPL you are talking about code that you design and test on Earth, then run in space, often years after it was first developed. It needs to be robust, it needs to be as safe as possible, and it needs to make detecting problems early on easy. They tried with a style coding standard, but they failed, mostly because people were not able to follow all the rules they decided on. Here comes the brilliant idea: take the ten most risk-alleviating coding rules and make them a kind of core of the development style. A form of software ten commandments, if you will.

Some of the rules there are quite counterintuitive. You may check them out in link format here and in PDF format here. I was particularly interested in rules 2 and 3: allocating everything you need before you run the program (eliminating things like dynamic memory allocation or garbage collection) and giving all loops an upper bound (making sure there will never be an infinite loop). The others are either common sense or already enforced by modern programming languages.

If I were to implement this, I would try to encapsulate the idea of finite loops, so instead of foreach/for loops I would use a class with Foreach/For methods (akin to Parallel), as sketched below. The memory allocation thing is trickier in .NET, where the garbage collector is built into the system. The third rule in P10 says "Memory allocators, such as malloc, and garbage collectors often have unpredictable behavior that can significantly impact performance". I wonder if there is any way to quantify the performance losses coming from the framework's memory allocation and garbage collection. As for disabling this behavior, I doubt it is even possible. What I could do is instantiate all classes used for data storage (all data models, basically) that I will ever need at some initialization stage, then eliminate any usage of new or declaration of new objects and variables of that sort. It kind of goes against the tenets of OOP (and against P10's rule number 6, BTW), but it could be interesting to experiment with.
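Here is a minimal sketch of what I mean by encapsulating finite loops (my own naming, not something from the P10 document): every loop receives an explicit upper bound and fails loudly when it is exceeded.
using System;
using System.Collections.Generic;

public static class Bounded
{
    public static void For(int from, int to, int maxIterations, Action<int> body)
    {
        var iterations = 0;
        for (var i = from; i < to; i++)
        {
            // enforce the upper bound instead of trusting the loop condition
            if (++iterations > maxIterations)
                throw new InvalidOperationException($"Loop exceeded its upper bound of {maxIterations} iterations");
            body(i);
        }
    }

    public static void Foreach<T>(IEnumerable<T> source, int maxIterations, Action<T> body)
    {
        var iterations = 0;
        foreach (var item in source)
        {
            // guards against enumerables that never end
            if (++iterations > maxIterations)
                throw new InvalidOperationException($"Loop exceeded its upper bound of {maxIterations} iterations");
            body(item);
        }
    }
}
// usage: Bounded.Foreach(items, 1000, item => Process(item));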

What do you think? Anyway, feel free to ignore my post, but read the document. People at JPL are not stupid! I loved this minimalist idea they used: just reduce all coding rules to the ten most important ones.

I have implemented a system that logs what people do on my blog, with the intent of making it more useful to my readers. In doing so I created a live dashboard where people going and leaving are displayed in real time. The conclusion is pretty humbling, but I have also noticed a pattern that might reflect badly on the state of the Internet today.

The conclusion I was talking about is that, even if I write about a lot of things, from books to software, from WPF to Javascript, the most visited posts by far are about why the Bittorrent client gets stuck, how to remove ads by installing Privoxy, and Sift3, my string comparison algorithm. All of that info one can get from the Popular posts column on the right of the blog, but I had no idea how many people visit it only to find out how they can download their movies faster!

And then there are the programming blog posts. I am filled with pride when people open a link to learn something from my experiences. And then I see that they are looking at the posts about Crystal Reports, AjaxControlToolkit and the old ASP.Net Ajax calls. Occasionally they come for the WPF bit, which is great, but the conclusion is clear: people are mostly interested in the old posts, the ones describing older technologies that no one is talking about anywhere anymore. True, I have not posted anything significant in the last two years, but still, I feel disappointed. My blog's merit here seems to be that it is still online!

But then I realized something else. Sometimes I feel joy at seeing that a visitor opens a post that no one has opened recently. Yet, in a very short time, other people are starting to open the same link. It has happened repeatedly several days in a row, so it can't be a coincidence. And people are coming from all over: Canada, US, Brazil, Mozambique, Ghana! I can only explain it with the theory that once visited, a link increases in visibility, its Google rank goes up, thus passing a threshold that makes it appear on the first search pages. It is a snowball effect, which in part I understand and agree with, but can't stop wondering if it doesn't apply everywhere. Instead of going for the relevance that Google and other big search engines aspire towards, they cheat by treating each click as a Facebook Like! More people read it, so more people should read it, which they do, and so on and so on.

The bottom line is that I wouldn't want to see a race towards a common goal be treated as a common race towards a goal. Let all pages share the glory, rank them based on content, not the preferences of people searching for stuff. How long before Google will helpfully suggest to me to go download a movie rather than search for something for work?

On the 9th of February I held basically the same talk I did at Impact Hub, only I did better, this time presented to the ADCES group. Unbeknownst to me, my colleague Andrei Rînea had also held a similar presentation with the same organization, more than two years before, and it is quite difficult to claim I was not inspired by it when one notices how similar they really were :) Anyway, that means there is no way people can say they didn't get it now! Here is his blog entry about that presentation: Bing it on, Reactive Extensions! – story, code and slides

The code, as well as a RevealJS slideshow that I didn't use the first time, can be found at Github. I also added a Javascript implementation of the same concept, using a Wikipedia service instead, since DictService doesn't support JSON.

This post discusses the solution to the NotSupportedException "This type of CollectionView does not support changes to its SourceCollection from a thread different from the Dispatcher thread" and also to the InvalidOperationException "An ItemsControl is inconsistent with its items source".

In the first case, you want to bind a collection property from your viewmodel in Windows Presentation Foundation and it says no. What happens is that you are using a BindingList<T> or an ObservableCollection<T>, and behind the scenes the binding system wraps it in a CollectionView that does not support changes from multiple threads. The solution is rather simple: use this piece of code:
BindingOperations.EnableCollectionSynchronization(collection, lockObject);
This short blog post from Florent Pellet explains things a little, but it ends on a dire note: "The ViewModel becomes dependent on the view". It also suggests that you need to create a lock object for each UI thread, if you have more than one.

This works in .NET 4.5, and that is the reason why, when you look the exception up, you get all kinds of answers that either suggest you invoke any changes on the Dispatcher UI thread (which I believe goes against the idea of having a viewmodel) or weird bastardizations of the collection classes used, like trying to invoke the list change events on the dispatcher of the invoking delegate. I've tried that and I got the second exception, the InvalidOperationException, which I will be covering later on :)

But let's go further and examine what is going on. If you look at the method declaration, EnableCollectionSynchronization also allows specifying a synchronization callback, something that you could use to manage weird custom collection classes. The Remarks section says "When you call this overload of the EnableCollectionSynchronization(IEnumerable, Object) method, the system locks the collection when you access it", which implies you lose some performance, but not much else. In case you have many parallel threads modifying your collection, you need to lock it anyway. You may, of course, create your own high performance system of changing a collection and, maybe, run a separate method to marshal changes from your private data structure to the UI-bound one.

Now, the InvalidOperationException "An ItemsControl is inconsistent with its items source" is thrown when the ItemsSource property has become out of sync with the Items property, which is usually generated by the ItemsControl. So when I tried to create my own badass collection class, I managed to avoid the first exception and got this one. The same solution applies to both cases:
BindingOperations.EnableCollectionSynchronization(collection, lockObject);
Funnily enough, you have to run this piece of code on the Dispatcher UI thread.


But where to use it? It would be rather simple to use it in the viewmodel constructor, using the ((ICollection)collection).SyncRoot object, or even in the constructor of a class that inherits from either BindingList or ObservableCollection and does nothing but this type of initialization. I believe that, since this is a binding issue, something within WPF, the binding system should handle it, like some type of synchronizing Binding. For a second I thought that the IsAsync property of the Binding class would solve this by itself, but it doesn't. Also, Binding doesn't have any methods to override, and BindingBase is an abstract class with internal methods to implement, which of course doesn't work. Otherwise it would have been OK, I believe, to create a special SynchronizedCollectionBinding class that enables collection synchronization at bind time. BTW, if you are thinking of implementing everything starting from MarkupExtension, forget it. The Binding class is a bit hardcoded in Visual Studio and it wouldn't actually work as expected.
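For illustration, a minimal sketch of the viewmodel constructor approach (the class and property names are mine, not from any particular project):
using System.Collections.ObjectModel;
using System.Windows.Data;

public class MainViewModel
{
    private readonly object _itemsLock = new object();

    public ObservableCollection<string> Items { get; } = new ObservableCollection<string>();

    public MainViewModel() // must run on the Dispatcher UI thread, as noted above
    {
        BindingOperations.EnableCollectionSynchronization(Items, _itemsLock);
    }

    // can now be called from any thread
    public void Add(string item)
    {
        lock (_itemsLock)
        {
            Items.Add(item);
        }
    }
}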

That's it, folks!

Today I was the third presenter at the ReactiveX in Action event, held at Impact Hub, Bucharest. The presentation did not go as well as planned, but it was relatively OK. I have to say that probably, after a while, giving talks to so many people turns from terrifying to exciting and then to addictive. Also, you really learn things better when you are preparing to teach them, rather than just perusing them.

I will be holding the exact same presentation, hopefully with a better performance, on the 9th of February, at ADCES.

For those interested in what I did, it was a code-only demo of a dictionary lookup WPF application written in .NET C#. In the code that you can download from Github, there are three projects that do the exact same thing:
  1. The first project is a "classic" program that follows the requirements.
  2. The second is a Reactive Extensions implementation.
  3. The third is a Reactive Extensions implementation written in the MVVM style.

The application has a text field and a listbox. When the text of the field changes, a web service is called to return a list of all words starting with the typed text and display them in the listbox, on the UI thread. It has to catch exceptions, throttle the input (so that the web service is only accessed when you stop typing), implement a timeout if the call takes too long, make sure that no two subsequent calls are made with the same text argument, and retry the network call three times if it fails with any uncaught exception. There is a "debug" listbox as well as a button that should also trigger a web service query.
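To give an idea of what the Rx version looks like, here is a minimal sketch of such a pipeline (the control names and the LookupWordsAsync service call are placeholders, not the actual demo code; it assumes System.Reactive.Linq and the Rx XAML scheduler are referenced):
IObservable<string> textChanges = Observable
    .FromEventPattern<TextChangedEventArgs>(textBox, "TextChanged")
    .Select(_ => textBox.Text);

textChanges
    .Throttle(TimeSpan.FromMilliseconds(500))    // only query when the user stops typing
    .DistinctUntilChanged()                      // no two subsequent calls with the same text
    .Select(text => Observable
        .FromAsync(() => LookupWordsAsync(text)) // the web service call (placeholder)
        .Timeout(TimeSpan.FromSeconds(5))        // give up if the call takes too long
        .Retry(3))                               // retry the network call three times
    .Switch()                                    // keep only the results of the latest query
    .ObserveOn(DispatcherScheduler.Current)      // marshal the results to the UI thread
    .Subscribe(
        words => listBox.ItemsSource = words,    // display the words
        ex => Debug.WriteLine(ex));              // catch exceptions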

Unfortunately, the code that you are downloading is the final version, not the simple one that I write live during the presentation. In effect, that means you don't see the massive size reduction and simplification of the code, because of all the extra debugging code. Join me at the ADCES presentation (and together we can rule the galaxy) for the full demo.

Also, I intend to add something to the demo if I have the time and that is unit testing, showing the power of the scheduler paradigm in Reactive Extensions. Wish me luck!

Long story short: I thought FOSDEM 2016 was terribly non-technical.

The entire conference took place at the ULB Solbosch Campus in Brussels, Belgium, which is composed of several buildings, with many rooms used for presentations. That meant that not only did you have to plan the talks you wanted to attend, but you also had to consider the time it took to move from one building to another (in the cold and rain). Add to this the fact that the space was still insufficient for most talks: if you didn't get there before a talk started, it wasn't uncommon to find the room full and be turned away at the door for security reasons (meaning fire hazards and the like, not stupid terrorism). I thought the mobile app FOSDEM Companion was very helpful in keeping track of what was where and when.

The talks themselves, though, were mostly 20-25 minutes long. While some reached 45 minutes, most of them were short presentations of one product or another. Someone would speak in front of a PowerPoint (or some alternative) slide deck, and the most common template was: "I am X, I work at Y and we are doing product Z. Here is a history of the product, here is what it can do for you, and you can find more at these links." They were open source and free, alright, but other than that it felt like a marketing conference, not a technical one. I have seen only one presentation that included actual code.

This doesn't mean I didn't enjoy myself. I've met old friends and some of the presentations were really interesting. I was particularly impressed by something called Ring, which is a completely peer-to-peer, securely encrypted communication system. Basically it allows you to find people and talk to them (via text, sound or video) while having no central server. It was something I had been looking for, and it uses DHT as a discovery mechanism.

So my conclusion is that if you are not there for a specific project or topic, so that you end up finding the people that are interested in the same thing and network with them, FOSDEM is pretty superficial. The talks were recorded and the videos will slowly appear on the FOSDEM video archive site, so actually going there just to see the presentations alone might not be necessary. Being from a slightly different technical domain, I wasn't interested in socialization, and I think that was my biggest mistake.

The people there looked interesting. A friend of mine summarized it well: "one of the few places where there is a queue at the men's bathrooms and not at the women's". There were of course plenty of facially haired, pony-tailed, black leather wearing, Linux laptop carrying hackers running around, but most of the people there didn't look that young or that "hacky". In fact, I think the age average was probably around 40.

That's about it for my FOSDEM report. If you need any more information, leave me a comment and I will fill any holes in the description.

Update:
The talks that I went to and liked were these:

Stephen Toub wrote this "document", as he calls it, but it is so full of useful information that it can be considered a reference book. A 118-page PDF, Patterns for Parallel Programming taught me a lot of things about .NET parallel programming (although most of them I should have known already :-().

Toub is a program manager lead on the Parallel Computing Platform team at Microsoft, the smart people that gave us Task<T> and Parallel, but also await/async. The team was formed in 2006 with the responsibility of helping Microsoft get ready for the shift to multicore and many-core. They had broad responsibility around the company, but were centered in the Developer Division, because they believed the impact of this fundamental shift in how programming is done would mostly fall on software developers.

It is important to understand that this document was last updated in 2010, and still some of the stuff in there was new to me. However, some of the concepts detailed there are timeless, like what is important to share and distribute in a parallel programming scenario. The end of the document is filled with advanced code that I would have trouble understanding even after reading it; that is why I believe you should keep this PDF somewhere close, in order to reread relevant parts when doing parallel programming. The document is free to download from Microsoft and I highly recommend it to all .NET developers out there.

Date Published: 7/16/2010 File Size: 1.5 MB

On the 3rd and 9th of February I will be presenting a demo of Reactive Extensions in action, on a Windows Presentation Foundation app that I am going to be building as I speak, first without and then with Rx. The presentation should be about 30-45 minutes long, in Romanian, but I am sure we can accommodate foreign speakers by doing it in English if you request it. These events are all free, but you must register so the organizers know how many people to prepare for. Here are the Meetup links:
ReactiveX in Action, Wednesday, February 3 2016, 19:00, Impact Hub, Strada Halelor 5, Bucharest
MsSql / Reactive Extensions, Tuesday, February 9 2016, 19:00, Electronic Arts - Afi Park 2, Bulevardul General Vasile Milea 4F, București 061344
See you there!