  Few people know this, but for a while now I've kept tabs on what happens in outer space, specifically Solar System colonization and asteroid mining. This means I've been connected to the wonderful revolution that is silently unfolding in human understanding of, and access to, our small patch of the universe.

  But this entry is not about space news; rather, it is about my own thoughts on a subject that keeps bugging me: our own place in the world. You might have heard about the Fermi paradox. It is the observation that, given how big the universe is and how much time has passed, the probability that life and intelligence arose somewhere else is very close to 1. Not only that, but it remains close to 1 even if we look at the universe a few billion years back. The Fermi paradox then asks: how come we haven't heard from anybody else? Look at how fast we are evolving technologically; given a billion years, surely we would invent at least the grey goo (although admittedly we would have the good taste to make it in more beautiful colors). What is going on?

  You might think this is not a real problem, but it truly is. To believe that no one in the whole of the universe thought to create self-reproducing probes is as ridiculous as believing we alone are intelligent enough to do it. Even at non-relativistic speeds (stuff drifting aimlessly in the void), such machines should have spread across the galaxy by now. Why aren't they here already?

  I am writing this piece just as we have received confirmation that Voyager, one of the space probes launched in the '70s, run by computers with 4 KB of memory and spending the power of a small light bulb to send information back to Earth, has reached interstellar space. It took more than three decades, but it is there. Another few tens of thousands of years and it leaves the Solar System. Triple that time and it reaches our nearest star. Billions of years have passed, though, and a thousand centuries are nothing on that timescale. And we built Voyager in the '70s! Of course, one could get pissed that no 20-watt light bulb ever survived 30 years here on Earth, but that's another issue entirely. So where are the Voyagers of other species?

  There are several schools of thought on the subject. One, which I won't even discuss, is that we are the chosen people, the only ones intelligent or even alive. Some versions of panspermia, suggesting that the ingredients of life came from meteors and are extremely rare on planets, seem equally implausible to me.

  Another one, which I find really interesting, is that as technology advances, we are bound to create ever more complex virtual worlds which, as comfort goes, are much easier to live in than "real" worlds. And I put the word in quotes because, when a simulation is advanced enough, its inhabitants will make simulations of their own. In this view, we are most likely creatures that have evolved on the spare CPU cycles of some machine, which is itself a simulation. It's turtles all the way down, man.

  Anyway, I find this theory fascinating because it also fights the law of entropy and the infinity of time. You see, in a simulated world, time would run faster than in real life. There is no need to wait 4 billion years for life to evolve if a computer can simulate it faster. Do this until you reach the quantum limit beneath which time and space cannot be divided into smaller units anymore (if that even exists in the "realest" world) and you get universes that run at the fastest possible speed. Duplicate one of these virtual machines and two universes live in parallel. It's a wonderful concept. Also, the quantum nature of our universe is eerily similar to the bits in a computer. Of course, once we go down that path, anything becomes possible. We might be alone in the universe because it was designed that way. Or because the developers of our world are using a very common trick: rendering the graphics of a virtual world only where there are people playing. Another eerie similarity with the quantum world, which changes behavior when someone is watching.

  There is also the concept of the multiverse. It says that all the possible states the universe can take are actually taken. We just happen to live in one version. If a particle can go one way or another, it will go both ways, once in each of two universes. Universal constants take every value in a range, with a universe for each. We haven't met aliens yet, and we were not destroyed by the culture shock, because we live in the version where we are here, not destroyed. It's a sort of circular definition.

  Then there is the quarantine hypothesis. Aliens are not only present, but very much involved in our lives. They are waiting patiently for us to discover space flight or quantum matrix decomposition or whatever, before they make contact. They could even make contact just to tell us to stay away: the whole universe is full, we are just late to the party. I guess it's possible, why not?

  Another idea, a more morbid one, is that no civilization survives beyond a certain threshold, one that we have not reached yet. When a global problem arises, there are people who are quick to associate this idea with that problem. Nuclear weapons, global warming, terrorism, sexting, Justin Bieber, twerking, etc. In the universal landscape these are irrelevant, at least so far and for the foreseeable future. Still, there is always the possibility that a game-changing technology, what we call disruptive, will emerge naturally and make any civilization disappear, or simply obsolete, or completely pointless. Just like the others above, this idea assumes a kind of absolute. We could have a tiny chance to escape doom, only its probability is as close to 0 as the probability that there is a whole lot of life in the universe is close to 1. It's a bit terrifying but, because of its inevitability, also pointless to worry about.

  This idea of a chance, though, is interesting because it makes one think further ahead. If such a disruptive event or technology (Kurzweil's singularity, maybe) is about to come, what will it be? When did we burst technologically? When we developed mass production of commodities. When did we explode as a populace? When we developed mass production of food. When will we become more than we are? When we develop mass production of people, perhaps. That's one scenario I was thinking about today, the one that spurred this blog post. After all, it would be horrible to live in a world where every cell phone or computer is designed and/or built individually. Instead we take the best models and duplicate them. We could take the smartest, most productive and most beautiful of us and duplicate them. The quality of the human race (as measured by the current one, unfortunately) would increase exponentially. Or we don't do that, and instead create intelligent machines that surpass us completely. Only we design them to take care of us, not to evolve themselves. Lacking the pressure to survive, we devolve into unthinking pets that robots feed and clean. That is another of these scenarios. What is both exciting and worrying is that there are a number of such scenarios that seem very logical and plausible. We can already see a multiverse of doom, and we are not doing anything about it.

  This brings me back to the colonization of the Solar System. For most of us, that sounds pointless. Why go to Mars when we haven't really finished colonizing the high mountains, the deep seas or even the African desert? All of these are orders of magnitude friendlier to human life than Mars. But the overwhelming advantage, the one reason why we must do it (there are other reasons, but they are merely compelling, not necessary), is to spread humanity across more than one basket. It is the right thing to do precisely because we don't know what is going to happen: just make a backup, for crying out loud, otherwise our simulated worlds of increasing complexity will simply cease to exist when a large enough meteor hits the planet.

  And speaking of meteors, I have met people who had no idea that a meteor recently exploded over Chelyabinsk. What does it take for people to notice this very plausible threat? A meteor crashing into the new World Trade Center?

  This last point of view is, of all the ones I have discussed, the most liberating and the only one worthy of consideration. Not because it is the best idea ever, but because it leaves us a way out. We can, if we are careful, see the threat before it becomes unavoidable, or spread ourselves wide enough that we don't get wiped out instantly. It gives us both hope and vision. All the others are absolutes, ideas that, just like the concept of an almighty god, are pointless to consider simply because we can do nothing about them. All of our voyages, the most treasured discoveries and realizations of humankind, start with a thought. Let us think ahead.

Your Inner Fish is a very nice book, popularizing the science behind paleontology and anatomy and making a surprising and thorough connection between the two. In short, Neil Shubin describes the way bodies are built and how our ancestry, from single-celled organisms through fish and amphibians to primates, influences our design. It is a rather short book, and an easy read. From field stories of discovering fossils in the wild to the anatomy classes he teaches at university, its pages take one on a journey of true discovery and make it easy to understand things that very few people would consider simple.

I could review Your Inner Fish for you, but someone put a lot more effort into doing that here. Also, the University of California Television YouTube channel released a one-hour video presentation of the book, which I am attaching to this blog post, along with what seems to be the book's Facebook page. What I can say is that I liked the book a lot and I recommend it to everybody, science-minded or not.

I have been interested in the asteroids of the Solar System lately and, while perusing the vast amount of data now available on the Internet on the subject, I've stumbled upon a video showing the asteroids humans discovered over 30 years (1980-2010). It is a simple bird's-eye view of the Solar System, with the planets and the small objects we knew at the time to exist, together with a highlighted view of the objects we were observing from Earth at any given moment.

You should watch the video full screen and at a high resolution, as the objects are pretty dim. If you can only see the highlighted objects, you should increase your video brightness or gamma settings. Enjoy!



The video is from Scott Manley's YouTube page, and there are more interesting asteroid videos there as well. I urge you to see them. The ones I enjoyed best I will include below.

Density Of Asteroids in the Orbital Plane of the Solar System

Asteroids In Resonance With Jupiter

Asteroid Belt - Edge On View

There is this children's game called "cordless phone" (which, funnily enough, is older than any possible concept of wireless telephony), where a message is passed through a large group of people by each person whispering it to their neighbour. Since humans are not network routers, small mistakes creep into the message as it is copied and resent (hmm, there should be a genetic reference here somewhere as well).

The point is that, given enough people with their own imperfections and/or agendas, a message gets distorted as the number of middlemen increases. The same happens in the world of news. Some news company invests in news by paying investigative reporters. The news is created by a human interpreting anything from eyewitness accounts to scientific papers, but then it is reported by other news agencies, whose main source is not the original information but the previous news report. Then marketing rears its ugly head, as titles need to be shockier and more impressive, forcing the hapless reader to open that link, pick up that paper, etc. Occasionally there are translation errors, but mostly it is about idiots who don't and can't understand what they are reporting on, so the original message gets massacred!

So here is one of today's news items, re-reported by Romanian media after translation and obfuscation and marketization (and retranslated by me, sorry): "Einstein was wrong? A particle that travels at more than the speed of light has been discovered". In the body, written a little better, "elementary subatomic particle" got translated as "elementary particle of matter". Dear "science" reporters, the neutrino is not a particle that needed discovering, and it is not part of normal matter, with which it interacts very little. What is new is only the strange faster-than-light behaviour, which is merely hinted at by some data that may or may not be correct, and contradicted by other data, like supernova observations, information that you haven't even bothered to copy-paste into your article. And, as if this were not enough, the comments of the readers, ranting kind of like I do here, make the reporter seem brilliant in comparison.

Is there a solution? Not really. People should try to find the original source of a message as much as possible, or at least a reporting source professional enough not to skew the information too much when summarizing it for the general public. A technical solution could also work: software that would analyse news reports, group them by topic, remove copies and translations, red-flag emotional language or hidden divergent messages, and ignore the titles altogether, maybe generating new ones. And while I know this is possible to do, it would be very difficult (but possibly rewarding) as software goes. One thing is for certain: reading the titles and assuming that they correctly summarize the complete articles is a terrible mistake, alas, a very common one.

A colleague of mine asked a question that seemed trivial, but then revealed interesting layers of complexity: how would you build an algorithm that returns a random number in any integer interval, assuming you already have a function that returns a random binary bit? The distribution of the bit is perfectly random, and so should be that of your function.



My first attempt was to divide the interval in two, then choose the first or second half based on the random bit function. This works perfectly for intervals of even length, but there are issues with odd-sized intervals. Let's take the most basic version there is: we want a random number between 7 and 9. The interval has a size of 3, which is not divisible by 2.



One solution is to split it in half anyway, leaving one number aside, then use the random bit function one more time to determine to which half the remaining number should be added. For example, the random bit yields 1, so we add the odd number out to the second half: 7,8,9 -> 7 and 8,9. Now the random bit is 0, choosing the first half, which is 7. This sounds good enough; let's see how it works:



Possible random bit results:
  • 0 (7,8|9)
    • 0 (7|8)
      • 0 (=7)
      • 1 (=8)
    • 1 (=9)
  • 1 (7|8,9)
    • 0 (=7)
    • 1 (8|9)
      • 0 (=8)
      • 1 (=9)
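
Since the post contains no code, here is a minimal sketch in Python of how this first algorithm might look; random_bit is a stand-in for the assumed perfect bit source, and the function names are my own:

```python
import random

def random_bit():
    # Stand-in for the perfectly random bit source the problem assumes.
    return random.getrandbits(1)

def random_in_interval(low, high):
    # First algorithm: repeatedly halve [low, high]; on odd sizes an extra
    # coin toss decides which half receives the leftover middle number.
    while low < high:
        size = high - low + 1
        half = size // 2
        if size % 2 == 1 and random_bit() == 0:
            half += 1                  # the first half gets the extra number
        if random_bit() == 0:
            high = low + half - 1      # keep the first half
        else:
            low = low + half           # keep the second half
    return low
```

For the interval [7, 9] this reproduces exactly the decision tree above: the first toss decides the split, the second chooses the half.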




The interesting part comes when deciding (pun not intended) what kind of probability we should consider. From the tree above, if we take the terminal leaves and count them, there are exactly 6, and each number in the interval appears in exactly two of them. There is a perfectly balanced probability of each number appearing among the leaf nodes. But if we decide that each random bit run divides the total probability by two, then we have a 50% chance for 0 or 1, and thus the probability that 7 is chosen is 1/4 + 1/8 (that is, 3/8), the same for 9; but then 8 has only a 2/8 probability of being chosen, so not so perfect.



What is the correct way to compute it? As I see it, counting the terminal leaves is the external method: the algorithm can end in just 6 possible states, and an external observer does not care about its inner workings. The second method is an internal view of how the "coin toss" function is used inside the algorithm. The two could be reconciled by continuing the algorithm even after it has produced a result, until all possible paths have the same length, something akin to splitting the 7 leaf into two 7 nodes, so that the probability is computed over all 2-to-the-power-of-the-maximum-tree-height options. If the random bit yielded 0, then 0, we still toss the coin to get 000 and 001; now there are 8 terminal nodes, divided 3, 2 and 3 among the numbers in the interval. But if we force this method, we never get an exact answer: no power of two is divisible by 3.



Then I came up with another algorithm. What if we could split even an odd-sized interval in two, by first doubling it? So instead of solving for 7,8,9, what if we solved for 7,7,8,8,9,9? Now things become interesting, because even for a small finite interval length like 3, the algorithm no longer has a deterministic running length. Let's run it again:



Possible random bit results:
  • 0 (7,7,8)
    • 0 (7,7,7)
    • 1 (7,8,8)
      • 0 (7,7,8)... and so on
      • 1 (8,8,8)
  • 1 (8,9,9)
    • 0 (8,8,9)
      • 0 (8,8,8)
      • 1 (8,9,9)... and so on
    • 1 (9,9,9)




As you can see, the tree looks similar, but the algorithm never truly completes. At every step there are exactly two branches on which the algorithm continues. Now, the algorithm does end most of the time, with the probability of ending growing exponentially with each step, but its maximum theoretical length is infinite. We are getting into Cantoresque sets of infinite sequences, where we want to calculate the probability that a random infinite sequence belongs to one set or another. Ugh!
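
A naive, list-based sketch of this second algorithm might look like this in Python (again my own illustration, reusing the random_bit stand-in from above); note that the loop has no guaranteed upper bound on iterations, exactly as described:

```python
def random_in_interval_doubling(low, high):
    # Second algorithm: duplicate every value so the list can always be
    # halved exactly; stop when only copies of a single value remain.
    values = list(range(low, high + 1))
    while values[0] != values[-1]:
        values = [v for v in values for _ in range(2)]  # 7,8,9 -> 7,7,8,8,9,9
        half = len(values) // 2
        if random_bit() == 0:
            values = values[:half]     # keep the lower half
        else:
            values = values[half:]     # keep the upper half
    return values[0]
```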



And even so, for the small example above, it does seem that each number ends up with a 25% probability, with another 25% chance that the algorithm continues; yet if you look at the previous stage, you have a 25% chance for 7 or 9, but no chance for 8 at all. If we arbitrarily stop in the middle of the algorithm, not only do we invalidate the result, but it makes no sense to compute any probability at all.



You can look at it another way: this new algorithm splits the probability into three equal integer parts, then throws the rest into the future. It is a funny use of the equivalence of time and space, as we are trading interval space for time. (See the third and last algorithm in this post.)



My conclusion is that the internal method of computing the probability of the result was flawed. As a black-box operator of the algorithm, I don't really care how it spews its output, only that it does so with as perfect a probability as possible (pun, again, not intended). That means that if I use the algorithm twice, there is no way it can output equal amounts of three values; the probability simply can't be computed like that. If we use it a million times, we would expect roughly 333,333 occurrences of each value, but the counts would still be off one way or another. So the two algorithms are just as good.



Also, some people might ask: how can you possibly use the second algorithm for large intervals? You are not going to work with arrays of millions of items for million-sized intervals, are you? In fact, you only need five values for the algorithm: the limits of the interval (a and b), the number of copies of the lower edge value (p), the number of copies of the higher edge value (r), and the number of copies of each number in between (q). Example: 7778888888899999 is a=7, b=9, p=3, q=8, r=5. You split this in two and (for a coin toss of 0) you get 77788888: a=7, b=8, p=3, q (don't care at this point, as there are no numbers in between), r=5. In the next step of the algorithm you multiply p, q and r by two and you go on until a=b.
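
As an illustration of this compact representation, here is how the same algorithm might be sketched with just the five values (my own code, under the same assumptions as above, not from the original post):

```python
def random_in_interval_compact(low, high):
    # Second algorithm with the five-value representation: interval [a, b]
    # with p copies of a, r copies of b and q copies of every number
    # strictly in between.
    a, b, p, q, r = low, high, 1, 1, 1
    while a != b:
        p, q, r = 2 * p, 2 * q, 2 * r    # double the virtual list
        mid = b - a - 1                  # how many interior values there are
        half = (p + q * mid + r) // 2    # size of each half (total is even)
        if random_bit() == 0:            # keep the lower half
            if half <= p:
                b = a                    # only copies of a survive
            elif half <= p + q * mid:
                k, rem = divmod(half - p, q)
                b, r = (a + k, q) if rem == 0 else (a + k + 1, rem)
            else:
                r = half - p - q * mid   # cut lands among the copies of b
        else:                            # keep the upper half, symmetric
            if half <= r:
                a = b
            elif half <= r + q * mid:
                k, rem = divmod(half - r, q)
                a, p = (b - k, q) if rem == 0 else (b - k - 1, rem)
            else:
                p = half - r - q * mid
    return a
```

For [7, 9] the first step doubles to p=q=r=2 and, for a coin toss of 0, collapses to a=7, b=8, p=2, q=2, r=1, which is exactly the (7,7,8) node of the tree above.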



You can consider a simpler version, though: there are three values in the interval, so we need a power of two that is at least three. That means four, or two coin tosses. If the coin tosses give 00, the result is 7; for 01, the result is 8; for 10, the result is 9. What happens when you get 11? Well, you run the algorithm again.
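
A sketch of this third, rejection-based version (names again my own): draw just enough bits to cover the interval and retry whenever the result falls outside it:

```python
def random_in_interval_rejection(low, high):
    # Third algorithm: draw ceil(log2(size)) bits; if the resulting number
    # falls outside the interval, throw it away and try again.
    size = high - low + 1
    bits = (size - 1).bit_length()       # smallest k with 2**k >= size
    while True:
        value = 0
        for _ in range(bits):
            value = (value << 1) | random_bit()
        if value < size:                 # e.g. 00, 01, 10 for size 3
            return low + value           # 11 falls through and we retry
```

Like the second algorithm, this one has no deterministic running length, but each retry happens with probability below 1/2, so in practice it ends quickly.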

It appears that a British project, secretly conducted by the Rutherford Appleton Laboratory, has produced a method of encapsulating hydrogen in microparticles of porous material. The result is something that acts like a liquid, burns like hydrogen and can be used inside normal cars without any engine modification. The price they propose is $1.50 per gallon, which is $0.396 per liter, or 0.2915 euros. What is cool about it is that they don't need to extract any resource in order to produce this miracle fuel.

Could THIS be the end of oil? Frankly, I am amazed that this news reached me, and not the one about Stephen Bennington being found dead in a ditch somewhere. I can only hope that the secrecy of the project paid off and that the guys at Cella Energy have really managed to find the solution while under the radar of Big Oil. Or maybe it is simply that the dependency on oil has become a bigger threat to national security than the lack of funding coming from oil companies.

Link to the original news: Breakthrough promises $1.50 per gallon synthetic gasoline with no carbon emissions

Update: I may have spoken too soon. A NewScientist article explains the process in a slightly different light. The beads do store hydrogen, but they must be heated in order to release it, and the hydrogen would then be used in fuel cells. That is at odds with the idea that you can use it as gasoline in a regular petrol car. Oh well, I hope they get it right someday.

My personal opinion is that the freedom women now enjoy owes a lot to the humble contraceptive pill. Indeed, who would have the resources to pursue a career, fight for their rights or have a life of their own if only men carried the responsibility for sexual protection? The pill allowed women to break the recursive loop, so to speak.

Now, men have used condoms for a long time, though with varying degrees of effectiveness. Sometimes they break, sometimes mysterious holes appear in them, sometimes they are so annoying they are not used at all. The search for a contraceptive pill for men is ongoing, but even though some progress has been made, it is not yet a usable solution.

Here come James Tsuruta and Paul Dayton with "A Sound Decision", a method which would involve placing your balls in a liquid, zapping them with ultrasound and becoming sterile for a specified period of time.

Can you imagine the social impact this could have?

I am a great fan of TED Talks, where most of the talks are impressively smart, useful and well presented, but this one I just had to embed. The title is a little misleading, if you ask me: The hidden influence of social networks. It is more about how statistical analysis of social connections yields all sorts of interesting information about the human condition. Enough of this, watch the talk:

Just two months ago I was blogging about the cracking of 768-bit RSA, and now 1024-bit has been cracked using only 81 Pentium 4 CPUs in 104 hours. There is a catch: they needed to fluctuate the power supplied to the CPUs. Here is more detailed information.

Update: Web security attack 'makes silicon chips more reliable'

In previous entries I have described how I got hooked on The Teaching Company courses, especially the ones lectured by the mathematician Edward B. Burger. After Introduction to Number Theory and The Joy of Thinking, I had only one such math course left to watch, and that was From Zero to Infinity: A History of Numbers.

At first I thought it was a tame version of the course about number theory, only a bit more historical. It started with how people moved from counting things to an abstract understanding of numbers, then covered the evolution of the concept of number with the advent of zero, negative numbers, rationals, irrational numbers, complex numbers, π, etc. However, towards the end, the story split off quite a bit and became a course in its own right, so now I am pretty glad I watched it as well.

It did start to bother me that the level of understanding required for these classes is pretty low, and as such the lecturer is forced to repeat and over-exemplify things and to avoid math notation and equations as much as possible. The model makes no sense to me. If the people watching were uneducated, would they really want to watch the courses? If they did, would they have the money to spare for them? And if they were not stupid, or if they were young people interested in the basics of science, wouldn't it be safe to raise the bar a bit? I mean, it's not TV. People actually have to make an effort to purchase and then watch these courses.

Anyway, Mr. Burger was cool as always, but I had issues with some of the concepts presented in the course and how they were presented. After a plethora of information about the Pythagoreans and natural numbers and π, the lecture about the number e was really basic. No real proofs, no examples of use; it was as if it didn't belong in the course at all.

Then there was the thing about 0.(9) being equal to 1. I understood the theory behind it, but it got me wondering: what about the integer part of 0.(9)? And, if one could use the reasoning behind the idea, then how come S = sum(x^n) for n = 0..infinity is not always 1/(1-x), regardless of x? And how come it is considered possible for a real number to have different decimal expansions? Shouldn't there be a theorem about the uniqueness of the decimal expansion of a specific number, just as there is one about prime factorization, in order for some of the proofs in the course to make sense? I intend to write an email about it to Burger himself and, if he answers me (with a godly voice from the sky :) ), I will be able to complete this entry.

That being said, I thoroughly enjoyed the course, although the one about number theory remains my favourite from this lecturer.

Update: Mr. Burger was kind enough to answer my questions. Here is his reply:
You are correct, there are examples for which the decimal expansion is not unique (and it only happens when we have an infinite tail of 9s). Here are two quick ways of convincing yourself about 0.(9):

1) I bet you feel very comfortable with the identity: 1/3 = 0.(3). Now multiply by 3: 1 = 0.(9)! Fun.

2) Suppose that 0.(9) does NOT equal 1. Then I'm sure you would guess it would be SMALLER than 1. Now recall that if we have two DIFFERENT numbers and we AVERAGE them, then the average will be larger than the smaller number and also smaller than the larger number (the average is in between them). So let's find the average: add: 1 + 0.(9) = 1.(9). Now divide by 2 and we see the average is 0.(9)... but that's one of the numbers we were averaging! Whoops.. therefore the numbers must be equal.
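
As a small aside of my own (not part of Burger's reply), the geometric series view also answers my earlier question about S = sum(x^n): the closed form only holds where the series converges, that is for |x| < 1, which is exactly the case for decimal expansions:

```latex
% The geometric series converges only for |x| < 1:
S = \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}, \qquad |x| < 1.
% Applied to the repeating decimal 0.(9), with x = 1/10:
0.(9) = \sum_{n=1}^{\infty} \frac{9}{10^n}
      = 9 \cdot \frac{1/10}{1 - 1/10}
      = 9 \cdot \frac{1}{9} = 1.
```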

Using RSA encryption means basing your security on the lack of resources of the people trying to break your code. Well, that says very little, as the required computational power is indeed not accessible to most individuals, yet the same is not true for some organizations. And when we talk about "organizations trying to break your code" we are, of course, going beyond the few hackers who employ a few thousand bots and who normally would have no reason to crack your communication, straight to the more likely culprits, mainly governmental organizations. And given their propensity for secrecy and paranoia, maybe even 1024-bit RSA is not really safe. After all, in "lands of freedom" there are laws against exporting software that employs too powerful an encryption, like 1024-bit RSA. And that's an old law.

Anyway, here is a news link about the 768-bit RSA cracking and, for the math inclined, a link to the actual paper. A list of the different RSA bit lengths and the known efforts to break them is found here.

A little quote from Wikipedia, showing that the limit is not really 768: As of 2010, the largest (known) number factored by a general-purpose factoring algorithm was 768 bits long, using a state-of-the-art distributed implementation. RSA keys are typically 1024–2048 bits long. Some experts believe that 1024-bit keys may become breakable in the near term (though this is disputed); few see any way that 4096-bit keys could be broken in the foreseeable future. Therefore, it is generally presumed that RSA is secure if n is sufficiently large. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits have been shown to be practically breakable in 1999 when RSA-155 was factored by using several hundred computers and are now factored in a few weeks using common hardware. A theoretical hardware device named TWIRL and described by Shamir and Tromer in 2003 called into question the security of 1024 bit keys. It is currently recommended that n be at least 2048 bits long.

I've found this article on BBC News and I thought it was very interesting. It is a timeline of the most important science news of the year 2009. Read it here.

Here is the corresponding article for 2010.

Joy of Thinking: The Beauty and Power of Classical Mathematical Ideas is an earlier math course, starring a long-haired Ed Burger and his Texas U colleague Michael Starbird.

Many of the ideas in Introduction to Number Theory obviously originated here; however, I didn't find this course as interesting, maybe because it was not as well thought through, or maybe because it was clearly targeted at a lower level of understanding and the many repetitions of basic ideas kind of turned me off.

The content of the course is structured into three parts: Numbers, Geometry and Probability. The first part contains very little that was not covered in Introduction to Number Theory. The geometry section is a bit more interesting, as Michael Starbird takes us through some topology, talking about the Möbius strip and the Klein bottle. The last part is basic probability, although there are some interesting problems studied there.

Overall, a fun course, better suited for people who are really not into maths, but more into interesting ways of thinking. The last lecture summarises the life and thought "lessons" learned from this trip into mathematics.

...and no, I am not talking about the Ashes to Ashes spinoff of the British series, I am talking about actual life on Mars.

Remember 1996, when everybody was talking about finding signs of life in a meteorite that came from Mars? At the time the theory was dismissed because other causes for the structures in the meteorite were thought to be valid. Now here comes a new study from December 2009 that invalidates the proposed non-organic processes by which the features on the Martian rock could have been formed.

Yay! Merry Christmas, green guys!

Here is a small funny video combining music and science in a geeky mix. I know, the music could have been less '80s rap and the dancing... well, could have been dancing. I mean, if even I noticed a lack thereof, it must really have been awful. But then it wouldn't have been geeky enough, right? ;)