I am starting a new blog series called Blog Question, following the successful addition of a blog chat that works, is free and does exactly what it should: Chatango. Well, everything except letting me know in real time when a question has been posted :(. Because of that, I often don't realize that someone is asking me things, so I fail to answer. As a solution, I will try to answer questions in blog posts, after doing my research. The new label associated with these posts is 'question'.

First off, some assumptions. I will assume that the person who asked "I'm working on this project of address validation. Using CRF models is my concern." was talking about Conditional Random Fields and meant postal addresses. If you are reading this, please let me know if that is correct. Also, since I am a .NET developer, I will use concepts related to .NET.

I knew nothing about CRFs before writing this post, so bear with me. The Wikipedia article about them is hard to understand for anyone without mathematical (specifically probability and statistics) training. However, the first paragraph is pretty clear: Conditional random fields (CRFs) are a class of statistical modelling method often applied in pattern recognition and machine learning, where they are used for structured prediction. Whereas an ordinary classifier predicts a label for a single sample without regard to "neighboring" samples, a CRF can take context into account. In other words, a CRF classifies data while taking neighboring samples into account.

A blog post that clarified the concept much better was Introduction to Conditional Random Fields. It describes how one uses so-called feature functions to extract a score from a data sample, then aggregates the scores using weights. It also explains how those weights can be computed automatically (machine learning).

In the context of postal address parsing, one would create an interface for feature functions, implement a few of them based on domain-specific knowledge, like "if it's an English or American address, the word before St. is a street name", then compute the weights of the features by training the system on a manually tagged set of addresses. I imagine the feature functions can also ignore the neighboring words and do things like "if this regular expression matches the address, then this fragment is a street name".
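To make the idea concrete, here is a minimal sketch of feature functions and weighted scoring. Everything in it - function names, labels, weights, the example address - is made up for illustration; a real CRF would also normalize the score into a probability and learn the weights from the tagged data, typically using a library rather than code like this.

```python
# A toy linear-chain CRF-style scorer for address parsing (illustration only).

STREET_SUFFIXES = {"st.", "ave.", "rd."}

def word_before_suffix(words, i, label, prev_label):
    # domain knowledge: "the word before St. is a street name"
    next_is_suffix = i + 1 < len(words) and words[i + 1].lower() in STREET_SUFFIXES
    return 1.0 if next_is_suffix and label == "STREET" else 0.0

def is_number(words, i, label, prev_label):
    # digits at the start of an address are usually a house number
    return 1.0 if words[i].isdigit() and label == "NUMBER" else 0.0

def score(words, labels, feature_functions, weights):
    # unnormalized score: weighted sum of feature values over all positions
    total = 0.0
    for i in range(len(words)):
        prev_label = labels[i - 1] if i > 0 else None
        for f, w in zip(feature_functions, weights):
            total += w * f(words, i, labels[i], prev_label)
    return total

features = [word_before_suffix, is_number]
weights = [2.0, 1.5]  # in a real CRF these would be learned from tagged addresses

words = ["221", "Baker", "St."]
good = score(words, ["NUMBER", "STREET", "SUFFIX"], features, weights)
bad = score(words, ["STREET", "NUMBER", "SUFFIX"], features, weights)
print(good > bad)  # True: the plausible labeling scores higher
```

Note how buggy or useless feature functions are harmless here: training would simply assign them weights near zero.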

I find this concept really interesting (thanks for pointing it out to me), since it successfully combines expert-defined feature extraction and machine learning. Actually, the expert part is not that critical, since the automated weighting will simply assign weights close to 0 to any stupid or buggy feature functions.

Of course, one doesn't need to do it from scratch; other people have done it in the past. One blog post that discusses this, applying several probabilistic methods specifically to postal addresses, can be found here: Probabilistic Postal Address Elementalization. Out of Hidden Markov Models, Maximum-Entropy Markov Models, Transformation-Based Learning and Conditional Random Fields, she found that the Maximum-Entropy Markov Model and the Conditional Random Field taggers consistently had the highest overall accuracy of the group. Both consistently had accuracies over 98%, even on partial addresses. The code for this is public on GitHub, but it is written in Java.

While researching this post, I found a lot of references to a specific piece of software called the Stanford Named Entity Recognizer, also written in Java, but which has a .NET port. I haven't used the software, but it seems to be a very thorough implementation of a Named Entity Recognizer. Named Entity Recognition (NER) labels sequences of words in a text which are the names of things, such as person and company names, or gene and protein names. It comes with well-engineered feature extractors for Named Entity Recognition and many options for defining your own. Included with the download are good named entity recognizers for English, particularly for the 3 classes (PERSON, ORGANIZATION, LOCATION). Perhaps this would also come in handy.

This is as far as I am willing to go without discussing existing code or actually writing some. For more details, contact me and we can work on it.

More random stuff:
The primary advantage of CRFs over hidden Markov models is their conditional nature, resulting in the relaxation of the independence assumptions required by HMMs in order to ensure tractable inference. Additionally, CRFs avoid the label bias problem, a weakness exhibited by maximum entropy Markov models (MEMMs) and other conditional Markov models based on directed graphical models. CRFs outperform both MEMMs and HMMs on a number of real-world sequence labeling tasks. - from Conditional Random Fields: An Introduction

Tutorial on Conditional Random Fields for Sequence Prediction

CRFsuite - Documentation

Extracting named entities in C# using the Stanford NLP Parser

Tutorial: Conditional Random Field (CRF)

I often find a new thing that I have never heard of, so I google it. A lot of the time, the first link returned is the Wikipedia article about that concept, and I open it to get a general idea of what it is about. Most of the time I understand it immediately, but in some cases - mostly involving hard science like high-level mathematics - that page is just a bunch of gibberish that means less to me than what I was looking to clarify in the first place. I mean, when I am searching for something, I usually use plain words, so there: much clearer.

However, that doesn't mean that I don't want to understand what is described on that page. One idea I had is that of "generational concepts", in other words the concepts that one needs to understand before tackling a new one. They are not "related concepts", they are not links to terms used in the description, they are the general concepts that you need to get first. I find it interesting and useful for several reasons:
  • I could open the links to those concepts and, if I understand them, I could come back and get the one that I wanted
  • If I don't understand the basic concepts, they would also have generational concepts to investigate
  • No one actually needs to create an entire chain of pages, like a teacher in a class, but just edit an existing page and link to the base concepts for it, yet the result is like a course that one can follow up and down
  • It would add context (and thus interest) to Wikipedia, which is now used as a collection of disparate tidbits
  • It would answer the question that I always ask myself when I open an incomprehensible page: what do I need to know in order to understand this crap?

So now I should put some time aside for fixing Wikipedia.

Just noting this for future reference. I wanted to replace GetDate() default column values with SysUtcDatetime(). This is the script used:
-- declare a string that will hold the actual SQL to be executed
DECLARE @SQL NVARCHAR(MAX) = ''

SELECT @SQL = @SQL +
N'ALTER TABLE [' + t.name + '] DROP CONSTRAINT [' + o.name + '];
ALTER TABLE [' + t.name + '] ADD DEFAULT SYSUTCDATETIME() FOR [' + c.name + '];
'
-- drop each default value constraint, then add another with SYSUTCDATETIME() as the default value
FROM sys.all_columns c -- the columns
INNER JOIN sys.tables t -- the tables containing the columns
ON c.object_id = t.object_id
INNER JOIN sys.default_constraints o -- we are only interested in default value constraints
ON c.default_object_id = o.object_id
WHERE o.definition = '(getdate())' -- only columns with getdate() as the default value

-- execute the generated SQL
EXEC sp_executesql @SQL

Note that the script assumes the tables are in the default schema; for other schemas, join sys.schemas as well and include the schema name in the generated ALTER TABLE statements.

Recently I created a framework for translating JSON requests from a REST API to entities sent to the database. For simple objects, it was very easy, just create an SQL parameter for each property. However, for complex objects - having other objects as properties - this was not a solution. So I used a DataContractSerializer to transform the object to XML, send it as an XML SQL parameter and get the values from it in the stored procedures. Then I noticed date time inconsistencies between the two approaches. What was going on?

Let's start with the code. The DateTime object created from the JSON is a date and time value with a timezone, like 16:00 UTC+1. That is 15:00 in universal time. Once you send it as a parameter to a stored procedure, the value received by the stored procedure is 16:00 (the server is in the same timezone). In SQL Server, the DATETIME and DATETIME2 types don't store timezone information. However, when sent through XML, the value looks like this: 2015-03-09T16:00:00.0000000+01:00. Using SELECT [Time] = T.Item.value('@Time','DATETIME2') FROM @Xml.nodes('//Location/SDO') as T(Item), the value returned is 15:00! You only get 16:00+01 if you extract it as DATETIMEOFFSET.

So let's recap: When you send a DateTime with timezone offset as an SQL parameter, the value reaching the SQL server is the local time. When you extract a textual value with timezone offset from an XML into a DATETIME, using the .value method, the value you get back is basically the Universal Time.
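Purely as an illustration of the recap (using Python's datetime instead of .NET or T-SQL, since the behavior is easier to show in a few lines), here are the two ways an offset-aware value can lose its timezone:

```python
from datetime import datetime, timezone

# an offset-aware value, like the one serialized into the XML
value = datetime.fromisoformat("2015-03-09T16:00:00+01:00")

# dropping the offset keeps the local wall-clock time
# (what the SQL parameter path did): 16:00
as_parameter = value.replace(tzinfo=None)

# converting to UTC first, then dropping the offset
# (what the XML .value('@Time','DATETIME2') path did): 15:00
from_xml = value.astimezone(timezone.utc).replace(tzinfo=None)

print(as_parameter.hour, from_xml.hour)  # 16 15
```

The same wall-clock value arrives at the database as two different times, one hour apart, depending on the path it takes.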

Solutions? Well, if you are going to use DateTime, you might as well consider that servers and devices are not always in the same timezone. Always translating values to universal time might be a good idea. Another solution is to extract from XML to a DATETIMEOFFSET type, which holds both the datetime and the timezone offset. Converting that value to DATETIME or DATETIME2 removes the timezone (Warning: it does NOT give the local time, unless you are in the same zone as the timezone in the datetimeoffset value).

I was reading this BBC article a few days ago about Philip Hammond, a British conservative politician, saying that terror apologists must share the blame. This comes together nicely with all the recent changes in political stance that push otherwise modern democratic countries towards ideological extremism. The UK is a prime example. After they invested immense resources into surveilling their own citizens, after they started blocking sites on the Internet, and after their media became more and more xenophobic, now they are moving towards this... I don't even know what to call it... opinion control. In other words, you are allowed to speak your mind, but only if it is made up in a certain way. Akin to outlawing crazy people from denying the Holocaust, the political discourse is now pushing towards banning all kinds of other opinions.

And I just had to write this article to say that this is completely idiotic. People do things not because they heard about them somewhere, but because they have a drive to do them. If they are not sure about it, they start talking about it before they commit to action. Simply gagging a point of view - beyond being a very clear violation of the spirit of free speech - only pushes that opinion underground, where only like-minded people will engage in the conversation. Even assuming you could quash an opinion just like that, through some legislative method, people who cannot discuss an idea will just implement it directly. The lone-wolf terrorist concept - one that has been profusely used by political media, but proven to be an unfounded myth - will become a self-fulfilling prophecy.

I remember arguing that outlawing types of philosophies, like the Nazi one for a classic example, is bad, while others would use the same example to justify shutting people up. It is a bad thing that, writing these words, I feel vindicated. I shouldn't feel that way; instead, I should be proven wrong. When an entire society chooses to close its ears (and punish mouths), it should be for a good reason, not something that can predictably be abused later on and extended to ridiculous degrees.

One has to remember that when subscribing to some weird theory, one that is not generally accepted, people are just asking "what if?", an essential question for finding solutions to your problems, for thinking outside the box, for developing into a mature human being. If someone is asking "what if terrorism is good?", there should be a lot of people there to listen and argue back and forth until a conclusion is reached, one that in this case seems obvious, but still needs discussing. One could just as well ask "what if the Earth is not at the center of the universe?" - they punished people for that, too.

The principle of free speech as it is understood nowadays is less about the freedom to speak and more about the principle of harm: you can say whatever you like, unless it hurts someone. But we have exaggerated this idea so much that everything is now considered harmful. This doesn't strengthen us, it weakens us. Are we so fragile that we cannot take a few nutcases expressing their opinion? Are we children or are we adults, that we must be protected from things we might hear for fear of somehow being contaminated by them? If you think about it, it is a ridiculous idea that an intelligent, educated person would ever become a Nazi or a terrorist just because he stumbles upon some page on the Internet.

I just want to scream to these idiotic governments: "treat me like a human being, not like a mentally challenged child!". So yeah, rant over.

Just a few links from yesterday, all in the same edition of the BBC site:
EU plans new team to tackle cyber-terrorism
Access to blocked sites restored by Reporters Without Borders
UK ISPs block Pirate Bay proxy sites
Banning Tor unwise and infeasible, MPs told

The News: A User's Manual is a short book that reads like a thesis for improving the way news is reported. Why, asks Alain de Botton, is news trying harder to be "accurate" than to tell the entire story, so that people can understand and feel it? Basically it is the old Star Trek trope where Spock or Data or Seven of Nine tell the time in milliseconds when all that was actually needed for the purpose at hand was how many hours, more or less. Just as in there, the news, as seen by the author, understands neither what the whole story is (lazy reporting) nor what people need (or indeed what the purpose at hand is). Like a global organization struck by autism, it just repeats the same terrifying and intimidating bits of human suffering, only to ignore the good, the humane, the inspiring and the overall effect on the audience.

I will put it clearly: Botton is right. However, he is discussing news from the perspective of human betterment. News organizations, though, are paid by the same audiences that eat too much and exercise too little; they cannot suddenly do what is right as opposed to what brings in the money or what the audience likes to see. Some of the points he makes could, presumably, be applied by national televisions, the ones that should be apolitical and tasked with the education of the audience, not with making a profit. Alas, such televisions do not exist anymore, I think. I believe, however, that the book was never designed as a how-to manual for news organizations, but for the people watching them. Imagining the news style that Botton describes can make us, the viewers, understand not only why we watch the news as it is, but also what it does to us.

The book is split into several chapters, each containing sections that include at least an introduction, a description, a comparison, an analysis, a damage report and a suggestion for change:
  • Politics
  • World News
  • Economics
  • Celebrity
  • Disaster
  • Consumption
  • Conclusion

What I found interesting was the psychological analysis of why we are attracted to some types of news items and what effect they have on us. I especially liked the comparison between "terrible tragedies" and the original Greek tragedies. According to Botton, telling what happened in 100 "unbiased" words is less engaging and instructive than going deeper and explaining the situation and the motivations of the people who did terrible things. Why, it is so much easier and more comfortable to condemn a murderer of children as "sick" than to try to imagine what he has in common with you and in what situation you would snap that horribly. The latter, however, teaches and educates you more about life.

The Botton line (heh heh) is that without context, information doesn't mean anything and makes us feel nothing. To overcome this, news makers show the most brutal and shocking things they are allowed to show, just to elicit some semblance of interest. Instead, giving us the whole story, making us aware of how people in distant places live before stuffing down our throats how they died, might be more memorable and more likely to make us feel something useful.

I found myself comparing news media to the justice system. There, a trial without representation and due process is considered a sham. Both sides need to tell their story as well as possible. If every news item is like a trial, its purpose to make the audience judge a situation, surely the same must be true. I do believe Botton would have made his point more popular had he taken the stance of the lawmaker rather than that of the psychologist. On the other hand, that would have deprived me of an instructive book that exposed many of the mechanisms through which the news makes us feel good while causing so much (hopefully) unintentional damage.

Not everybody is happy about his book, especially professional reporters. Here is one review from The Guardian: The News: A User's Manual by Alain de Botton – review

More helpful, here is a video of Alain de Botton himself discussing some points made in the book:

If you are interested in astronomy and the kind of space science that can be applied now, not in some distant future, this is the book for you. It describes the technical aspects of asteroid mining, an industry that is in its infancy (or should we say still in the womb?), but which is the only thing that can plausibly connect humanity to space. There will be no habitats on Mars, no colonization of the solar system, no interstellar travel - not for humans, not for robots - without the resources contained in asteroids. It is a short book, but filled with information, and, as Lewis himself says, "You presumably did not buy this book to be hyped by some huckster. If you did, I hope you will be sorely disappointed and not recommend the book to like-minded friends."

Dr. John S. Lewis is the chief scientist of Deep Space Industries, a space mining company that deserves a separate blog post just to familiarize people with it. He is a world-renowned asteroid resources scientist, with many papers written in the field, and also the author of Rain of Iron and Ice and of Mining the Sky. I hear these two may be considered part of the same series as this one, so you should probably try to read them before this book, even if it stands alone nicely.

Asteroid Mining 101 is filled with many pages on geology, minerals and general chemistry. I have to admit it is not what I expected, however true to its title it may be. I thought I would read a little about asteroids, familiarize myself with the general concept beyond my existing knowledge of it, then read about DSI's technical designs for spacecraft that would be used for prospecting and mining asteroids. Instead, it is a description of the concept of asteroid mining, followed by a deep analysis of the issues involved and their possible solutions. Reading it, one realizes how far we are from designing robotic miners when we haven't even developed mining techniques that would work in space. Almost universally, the methods used on Earth rely on gravity or on heavy use of air, water or liquid fuels. It was therefore my first instinct to criticize the book for being too geological in nature, but I ended up praising it for exactly that.



The book is structured as follows:

  • a very short introduction on the structure of the Solar System and on various spacecraft that can help prospect asteroids
  • a heavy geological description of asteroid composition, mineralogy and origins
  • classification of asteroids, including a very nice list of techniques used to calculate the various characteristics used
  • actual statistics on asteroids in the solar system
  • economical analysis of a space mining based economy
  • actual scenarios for finding, landing on and mining asteroids
  • appendixes with even more detailed information


Of these, mineralogy and classification take up more than half of the book. The mining scenarios section is small, but understandably so: Lewis tried to make this book as free of speculation as possible, and I have to admire him for that. This is not a book to make you dream, it's a book to make you think. This has the downside that there are no discussions on the politics of the matter, with the exception of nuclear fission energy not being politically feasible for spacecraft propulsion. Even if it requires speculation, I would have welcomed a discussion on the possible uses of asteroids as planetary weapons, on conflicts in space, or even on the legal chaos of who owns what and who enforces the law. But the author is neither a military man nor a lawyer, so these are subjects for other people.

Several ideas stand out in the book. One of them is that the truly valuable resource in space is water. It is abundant and useful for everything from propulsion to radiation shielding and the sustaining of life. The so-called precious minerals are completely different in space: bringing platinum metals to Earth would have a very small profit margin, and a short-lived one, lasting only until the market on the planet stabilizes. On the opposite side of the spectrum, nitrogen would be the limiting factor of an industry that could theoretically sustain millions of billions of people, while fissionable materials like uranium or plutonium would be almost missing. Energy has the same problem. In space, solar power would be the main, if not the only, source of energy, while the types of fuel used on Earth would be either too expensive, impossible to produce, or irrational to produce (like high-energy fuels containing nitrogen). Metals like titanium and aluminum would require too much energy to extract from the stable compounds they are found in, and are of little general use in space. Return-on-investment cycles would be long in space, maybe longer than the average political cycle. And so on.

Actually, I would say that this is the main idea of the book: how different a space economy would be, from the technical to the administrative. Problems that are insurmountable on Earth are easy in space, and the other way around. What we need to make this work is to develop the required techniques from the ground up (I know the expression presupposes gravity and a planetary surface, but let's go with it), because out there we need to relearn everything from the beginning. The book shows the potential of the asteroids in the solar system, the possibility of expanding human civilization to millions of times its current size, then presents the difficulty of planning all of this from Earth, where everything is different. It is one of those books that demonstrate unequivocally why we need to go out into space and why we need to stay there: we need to begin to "get it".

In a way - and this is my speculative contribution on the subject - it is also a sad book. It makes it obvious how difficult, if not impossible, it is for the average Joe, commuting to work every day and worrying about mortgages and child education options, to understand what awaits us in space. By extension, how impossible it is for politicians to do anything about it, even if they understood the concept and wanted to act on it. Therefore, the need for private initiative is made clear and evident.

I was trying to solve another problem: operations that create, alter or drop databases cannot be performed inside a .NET transaction. Therefore I created a new TransactionScope inside the main one, using the TransactionScopeOption.Suppress option, but then I started getting this weird exception: The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements. But I did Complete and Dispose all my transaction scopes, so what was going on? Long story short, this is a really confusing message for a simple problem: your transaction probably timed out. Create the transaction scope with a large TimeSpan as the Timeout and it will get through. If you can't use a using construct, use a try/catch/finally block in which you Dispose the transaction (just remember that a using construct is basically a try/finally block). In my case, I was conditionally creating this new TransactionScope, so I wasn't using using. The transaction would time out and bleed into the outer transaction, where the confusing exception was being thrown.

I have another SQL quirk for you today, particularly in Microsoft's SQL Server. It concerns the AVG function when the SUM of the averaged values would overflow that type: AVG fails with the same error. Let's imagine for a moment that we didn't have an AVG function. In that case, our code would use SUM and COUNT to average values. For an integer column x, the naive SUM(x)/COUNT(*) would fail when the sum goes over the INT maximum value (I am not sure SUM should fail either, but that is another discussion). So our solution would be something like CONVERT(INT,SUM(CONVERT(BIGINT,x)))/COUNT(*). Obviously, the same fix applies to the AVG issue: just average the values converted to the biggest type available, as in AVG(CONVERT(BIGINT,x)). So it seems that in their flagship SQL Server, Microsoft implemented the naive version, hmm... To be fair, this behaviour is documented in the T-SQL function page: If the sum exceeds the maximum value for the data type of the return value an error will be returned.
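The failure mode is easy to simulate outside of SQL. The following is just a Python sketch with an explicit 32-bit bounds check standing in for the INT type (SQL Server raises an arithmetic overflow error when the running sum leaves the INT range; the names and values here are invented for the demonstration):

```python
# Simulate AVG over an INT column: the running sum is kept in 32 bits.
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def sum32(values):
    # a running sum constrained to a 32-bit signed integer, like SUM over INT
    total = 0
    for v in values:
        total += v
        if total > INT32_MAX or total < INT32_MIN:
            raise OverflowError("Arithmetic overflow")  # what SQL Server does
    return total

values = [2_000_000_000, 2_000_000_000]  # each fits in INT, their sum does not

try:
    naive_avg = sum32(values) // len(values)   # like AVG(x): fails
except OverflowError:
    naive_avg = None

widened_avg = sum(values) // len(values)       # like AVG(CONVERT(BIGINT, x)): works

print(naive_avg, widened_avg)  # None 2000000000
```

The average itself (2,000,000,000) fits comfortably in an INT; only the intermediate sum overflows, which is exactly why handling the widening internally would have been reasonable.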

As I see it, it is a bug, as this should have been handled internally, and one is indeed logged on Microsoft Connect here: AVG function causes Arithmetic overflow error. Read the funny comment from a Microsoft employee who says a solution has been proposed, but they don't have time to implement it in SQL Server 2008, so they will probably do it in a future version. I'm seeing the same behaviour on SQL Server 2014; it's 7 years later, Microsoft!

Just for kicks, I tried the same in MySQL and it works there. More than that, it doesn't fail for SUM either, as SUM automatically returns a BIGINT value, pushing the issue to an eventual insert or update into an INT column.

Just days after saying how great Star Trek New Voyages/Phase II was, I stumbled upon this gem of a story: Star Trek Aurora. It is a 3D-animated full-length movie set in the Star Trek universe. Even if the animation is primitive, give it a few minutes. The acting is good and the story is really nice and original, with a believable female character and a fresh perspective on the Star Trek universe. I am embedding the full movie here, but also go to their web site and YouTube channel, since they have more work coming!

A while ago I wrote a post about "unofficial" Star Trek series and movies, made by fans for the fans. Some are quite awful, to my chagrin, but some are quite engaging, with good production values. And I have to say that Star Trek New Voyages is just wonderful. Yet another series continuing the original, with Captain Kirk at the helm of the Enterprise, it has some of the best Star Trek scripts I have ever seen; the acting is not bad, considering they are all amateurs, and the production values are good, considering it is made by German studios (in English, with American actors - have no fear!).

When I fell in love with Star Trek, it was in the time of Jean-Luc Picard and The Next Generation. As much as it irks me to admit it, I liked it for the same reason my father did: the stories! The sci-fi was great, but it only enhanced what was already there: great stories about real people in real situations, focusing on positive traits like friendship, loyalty, love, intelligence, skill, courage, happiness and passion. With Star Trek, people found that there is a higher ideal they can aspire to.

Well, New Voyages has all of that: dedicated people doing the series out of passion, having the courage to get together and do something out of friendship and loyalty, but with intelligence, skill and soul. I present to you a team of people actually living their dream, and making the fans' dreams come true as well. Just great stuff! And if that isn't enough, all the episodes are free to watch on their website, in high definition and with subtitles in several languages. And just in case you were wondering, actors from the original series, like George Takei (Sulu) and Walter Koenig (Chekov), are helping out.

Just watch it, it's just fantastic! I leave you with one of the best episodes (so far): World Enough and Time

House of Cards - Psychology and Psychotherapy Built on Myth is a very good book that deserves more recognition. It describes and thoroughly criticizes the lack of scientific method in psychology, and debunks the myth of the experienced psychologist as well as many others that are now taken for granted in the field. Unfortunately the book is also very detailed, filled with expositions, repetitions of concepts and statistical information on the studies that prove the author's point, so it is rather difficult to read; it is certainly not a book you pick up to relax. Every psychologist in the world should read it, though, as well as any aspiring students or people considering going to therapy.

To make it clear, this is not an anti-psychology book. It says, repeatedly, that therapy helps. What it also says is that the amount of training and experience of the therapist is statistically irrelevant. That irritates the author tremendously, as he is a psychologist himself and desires that his chosen field evolve and... well... become a real science. Robyn Dawes unfortunately died in 2010, at the ripe age of 74. During his life he studied human irrationality, intuitive expertise and statistical applications in medicine and psychology. No wonder that in House of Cards he rants against the practice of psychology as it is today.

A few concepts in the book are very interesting and quite frightening. After WW2, a lot of people came back traumatized and needed mental health care. At the time, a psychologist also needed to be a psychiatrist, having gone through university and studied medicine; they were all doctors, with a Ph.D. degree. So what the US did was create another type of degree, called the Psy.D., letting people without medical training enter the field with only minimal instruction. This created the myth of the intuitive expert who can tell things about people because he has experience, having little else. Dawes proceeds to mercilessly debunk this myth.

In order to do that, he uses - what else - the scientific method. He gathers data as objectively as possible and then looks for correlations. One correlation that is not found is between the amount of experience (or, indeed, formal training) and positive results. One that is found, though, is that therapy does help. We just don't know why (or, better said, we don't know how to quantify why). One obvious reason would be that, in order to come to therapy, people need to accept they have a problem and then make the first step towards solving it: showing up. This alone shows that the person is already actively pursuing healing, a major step towards healing themselves. He also analyzes diagnosis, often done using standardized tests that can presumably help a specialist determine mental issues and their type. However, presented only with test results, the experts don't really reach any useful conclusion.

Dawes does not stop at psychology, even if that is his main focus. In one chapter he discusses studies proving that doing a thing for a long time doesn't necessarily teach you anything, especially if there is no immediate feedback on whether what you did was good or bad. This also applies to some types of medical diagnosis. And yes, those people went through school - whose main purpose is to promote people who can get through it rather than to provide a comprehensive body of knowledge - and graduated, but when faced with ambiguous symptoms, they pretty much randomly guess what the patient is afflicted by. Think about that when you go to just one doctor and he tells you that he knows what you have because he's experienced.

Anyway, as I said, the book is difficult to read - it is more like a scientific paper - and, as much as I wanted to finish it, I realized that I am not an aspiring psychologist, nor am I planning to go to therapy soon. Also, since I have people close to me interested in the field, it wouldn't help to tell them how they don't use the scientific method and are not real doctors ;). Joking aside, this book is invaluable for anyone in the field. Not for me, though, and so I decided to indefinitely postpone reading it to the end.

In this post I want to talk to you about new stuff that links to the good old stuff of our own youth. You probably know what Kickstarter is, but just as an introduction, it is a place where people ask for money for future work. It's like a crowdsourced financing scheme for your public elevator pitch (just imagine a planet-sized elevator, though). And when I say Kickstarter, I mean the actual site and all the other similar things out there. Like... Kickstarter-like, like it?

First stop: Underworld Ascendant. The team that made Ultima Underworld, one of my all-time favourite games, is making a new one. As you can see on the Kickstarter page, the campaign is two weeks from completion. If you loved the Ultima Underworld games (NOT the Ultima games), you could consider pitching in.

Second stop: Hero-U. Remember Quest for Glory? It was made by Sierra and the entire series was awesome! The designers of the game, the Coles, have been working on Hero-U, a modern take on the QG universe. They planned to release in the spring of 2014, but scope creep and public feedback turned it from a simple little game into a complex and interesting project, now planned for release in the autumn of 2015 and well on schedule. Check it out! They are in their second Kickstarter round.

Turning to movies and series, this time works made by and for Star Trek fans. And I am not even talking about random people doing really weird, low-quality stuff; I mean real movie business people doing great work. Check out Star Trek Continues, a continuation of the original Star Trek series, as well as Star Trek Axanar, which looks set to become a really cool movie! I can't wait for it to come out.

Update June 27th 2016:
The Axanar story has become a poster child for corporate greed and stupidity. Soon after the trailers for Axanar were released, Paramount and CBS - the corporations owning the Star Trek franchise - sued the producers for copyright infringement. Funnily enough, they did this before anything real was released. Their problem? The production was too big.

Having received more than 1.2 million US dollars on Kickstarter, the show was actually starting to look great. Top production values, professional actors, good CGI and - most of all - passionate people. Paramount and CBS alleged that with such a budget this was already a commercial venture, even if it would be released freely on the Internet after production. To me, it feels as if Hollywood started to feel the heat. They realized that if this production and distribution model catches on, they will be left trying to combat piracy and hiring armies of lawyers to arrange and check distribution contracts while "the opposition" just releases free on the Internet once the production budget is met. Consider the implications! This would be huge.

It felt like entrapment. First you let legions of people use the Star Trek moniker and universe, then you jump with a lawsuit on the people who make the most money. So the studios tried to deflect the anger and consternation of fans and independent producers with dirty tricks: instructing J.J. Abrams to say in an interview that the lawsuit would go away, only for it to continue anyway, and finally issuing a set of guidelines for independent productions to which the studios would not object. The terms are ridiculous and pretty much break the entire concept of serialized Star Trek. Check this out: “The fan production must … not exceed 30 minutes total, with no additional seasons, episodes, parts, sequels or remakes.”


A long time ago I wrote a post about Vodo, which I thought was the future of cool little indie movies and series. Vodo didn't quite live up to my expectations, but Kickstarter has taken its place and, since it is not only about movies but all kinds of projects, it has a better chance of surviving and changing the way the world works. Not all is rosy, though. There are voices saying that the Kickstarter ecosystem is more about promises than about delivery. Also, some governmental and commercial agencies are really displeased with the way money is exchanged directly between customers and producers, bypassing borders and intermediaries like banks and tax collectors. If you combine this with Bitcoin-type currency, their job of overseeing all commercial transactions and taking their cut does become more difficult. I sympathise... not really.

I leave you with some videos of the projects above. Think about looking for others that are working on something you want to sponsor. You might be surprised not only by the ingenious ideas that are out there, but also by how it would make you feel to support people with the same passions as yourself.

Underworld Ascendant trailer:


Game play for Hero-U:


The full first episode of Star Trek Continues from the creators themselves:


Prelude to Axanar, a small mockumentary about the events that will be the context of Axanar:

I am relatively new to the entire NuGet ecosystem. What I expected was for things to just work. You know... Microsoft. However, the web of interdependencies seems to be too much even for them. The problems that appear when updating MVC versions, .NET Framework versions, etc., are as annoying as they are unclear. One example: I was trying to publish a project that worked perfectly on my system. I moved it to the server machine, and weird things began to happen. The most annoying of all is that the errors occur at runtime instead of at compile time. Such an error was "Could not load file or assembly System.Web.WebPages, Version=3.0.0.0...". The project reference for WebPages was 2.0.0.0. If I removed it and tried to add a reference, only versions 1.0.0.0 and 2.0.0.0 were available. Meanwhile, Razor was complaining that it couldn't find the 3.0.0.0 version of WebPages.

Long story short: don't try to resolve only the library dependencies, but also the framework dependencies. In this case, System.Web.WebPages 3.0.0.0 is only available for .NET Framework 4.5.1, while the project was configured for 4.5. Updating the MVC framework after the change to .NET 4.5.1 solved it.

Steps:
  • Change the project's target framework to 4.5.1 (or whatever the newest usable .NET Framework version is)
  • Go to the NuGet Package Manager Console in Visual Studio
  • Run the command Update-Package -reinstall Microsoft.AspNet.Mvc

This, of course, is not a panacea for all problems, but just remember that the .NET framework is important in these scenarios.
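If the runtime error still appears after the package reinstall, the usual companion fix is an assembly binding redirect in web.config. The fragment below is only a sketch: the version numbers match my scenario, and while the publicKeyToken shown is the standard one for System.Web.WebPages, you should verify it against your own packages folder. Note that Update-Package -reinstall normally regenerates these redirects for you.

```xml
<!-- Sketch of the relevant web.config sections after retargeting to .NET 4.5.1.
     The bindingRedirect forces any reference to an older System.Web.WebPages
     version to resolve to 3.0.0.0 at runtime. -->
<configuration>
  <system.web>
    <compilation targetFramework="4.5.1" />
    <httpRuntime targetFramework="4.5.1" />
  </system.web>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.WebPages" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```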

I met this situation when I wanted to implement an http interceptor: a piece of JavaScript code that would do something on unauthorized access to my API. The way to do this is irrelevant here (and differs between jQuery and AngularJS sites), but there is a problem that affects every setup, and that is accessing the API from a different domain than the API's own. You see, the browser needs the API server to authorize CORS for every Ajax request that hits the server from another domain. You might think you already did that in your API, but let me ask you: when there is a problem, like unauthorized access, are you sending the CORS headers with your error response? Because if you do not, everything you send, including the http code, will not be exposed by the browser, and any interception will just see a status code of 0. The situation is further confused by the fact that the browser does announce the CORS problem in its console, and even displays the status message, which in this case would be "Unauthorized access". This might make you think you can access that message or the status code from your script. Well, you cannot.
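To illustrate the symptom on the client side, here is a minimal sketch. The helper name `classifyAjaxError` is my own invention, and the jQuery hookup is shown only in comments, since the actual interceptor mechanism varies between libraries:

```javascript
// Hypothetical helper: decide what an Ajax interceptor should do with a failure.
// When the error response lacks CORS headers, the browser hides everything
// from the script: xhr.status reads 0 and the status text is empty.
function classifyAjaxError(status) {
  if (status === 0) return "cors-or-network"; // real HTTP code unknown to the script
  if (status === 401) return "unauthorized";  // e.g. redirect to the login page
  if (status >= 500) return "server-error";
  return "other";
}

// jQuery-style global interceptor (sketch):
// $(document).ajaxError(function (event, xhr) {
//   if (classifyAjaxError(xhr.status) === "unauthorized") {
//     window.location.href = "/login";
//   }
// });
```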

The solution is to send the CORS headers - Access-Control-Allow-Origin, Access-Control-Allow-Methods, Access-Control-Allow-Headers - with your error response as well. Set their value to "*" and now your JavaScript can read the error code and message. (One caveat: a "*" origin does not work for requests that send credentials such as cookies; in that case the server must echo back the specific origin instead.)