Category Archives: General

The end of Google Wave

On May 28th, 2009, Google announced Wave to much fanfare. Wave was going to change the world by merging blogs, wikis and IM, and finally replacing email as the digital world’s main collaborative tool.

On August 4th, 2010, just 14 months later, Google announced that it had stopped working on Wave due to a lack of users.

Despite my interest in the underlying technology, I never used Wave for anything productive. The main reason is that I didn’t know anyone else using it regularly. Of course, this is the catch-22 faced by every new communications technology, and it is at the heart of Wave’s failure. The underlying problem is that Wave didn’t add enough value for users working on a document alone. Wave’s revolutionary power would eventually have come from its collaborative features, but the problem with focusing on these aspects early is that they do not add any value until there are users to collaborate with. In episode #68 of This Week in Startups, Marco Zappacosta defines the amount of value a service brings to users without network effects as network-independent value (NIV). Wave had next to zero NIV.

It didn’t have to be this way. There are several aspects of Wave that could have been developed into a superior, or at least unique, single-user experience. The most obvious would have been to make basic document editing and management a better experience. Alternatively, they could have focused on use cases not supported by other document systems. The Wave API allows third-party applications (bots) to contribute to documents at the same time as users. This could have been used to build bots that automate tasks which are time-consuming or annoying in Word or Google Docs. On the collaborative side, Google could have concentrated on a specific, practical use case such as a shared whiteboard (there are several bots which do whiteboarding) or built interesting applications such as Gravity by SAP.

Wave’s failure saddens me because Google was really doing this right. They were publishing the protocol specifications and most of the source code. Even more importantly, they architected Wave around a distributed/federated model which allowed Wave servers to exist within every organization, just like email servers do. This is a much harder problem than a centralized, all-Google architecture, but it is critical that no single organization controls (what could have become) a core Internet protocol.

One also has to wonder where the revolutionary innovation required to replace email will come from if even Google gives up after only 14 months. Wave represents a large change, one that requires time to diffuse and for infrastructure to be built up around it. It is completely unreasonable to expect that Wave would have had large success in just 14 months. One has to feel for the Wave team, who, it seems, were given the impossible mission of changing the world in 14 months.

I believe that years from now we’ll look back on Google Wave and realize that it was closer to the solution than we thought. One of the key features that makes me believe this is the bot API. The idea of allowing third-party applications equal access to a live document is very powerful and could spawn a huge amount of innovation. For example, there is no competition to Microsoft’s grammar checker in Word or Google’s spell checker in Google Docs. There cannot be, as these functions are part of the application. Now imagine a world where a document system like Wave is the norm. Any user could select which spell checker to use just by adding a different bot to the wave. I believe this flexibility would spawn competition that would drive a great deal of innovation. This speaks to the power of decentralized systems such as Wave.
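
To make the idea concrete, here is a purely hypothetical sketch, in Python, of what a pluggable spell-checking bot could look like. The class and method names are invented for illustration only; this is not the real Wave robots API.

# Hypothetical sketch only: these names are invented for illustration and
# are not the real Google Wave robots API.
class SpellCheckBot:
    """A third-party bot that annotates misspelled words in a live document."""

    def __init__(self, dictionary):
        self.dictionary = dictionary

    def on_document_changed(self, document):
        # The (hypothetical) server calls this whenever the document changes.
        for word, start, end in document.words():
            if word.lower() not in self.dictionary:
                document.annotate(start, end, style="misspelled")

# Swapping spell checkers would just be a matter of adding a different bot
# to the wave; the document and the client application stay the same.
# wave.add_participant(SpellCheckBot(load_dictionary("en_CA")))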

There may also be an opportunity for a startup or open source project to take the Wave ecosystem and run with it. A lot of the hard work has already been done.

Blackberry Torch

I really hope that RIM has a successful device in the Torch and Blackberry OS 6. It would be such a sad story for technology in Canada if RIM continues to ride the slow train to irrelevance.

That said, what is the deal with having a touch screen, a trackpad and a keyboard?

This screams weak, unprincipled design. Take a stand! Lead instead of trying to mash together the best of every device on the market into some Franken-input system.

Next is Now

Just stumbled on this video created by Rogers.

One of a few good quotes:

“10 years ago it took 72 hours to download Godfather… – Today it takes 10 minutes – It still takes 3 hours to watch”

ChangeCamp London

Thanks to the organizers and participants involved with ChangeCamp London yesterday. It was amazing to see such a strong turnout of people interested in making London better. I hope everyone got as much out of it as I did.

For anyone who couldn’t attend, you can get a feel for the event by looking at the #ccldn tweets and following up with the actions which will be posted on the website.

Django/mod_wsgi on Fedora 12

I recently deployed a Django application with mod_wsgi on my server, which runs Fedora 12. Since this required a bit more configuration than a standard Apache virtual host, I thought it might be useful to document the configuration for others.

SELinux

While SELinux can be a little annoying if you don’t understand how it works, it is a very powerful security layer that should not be disabled. In order to get the Django/mod_wsgi application working I had to enable a couple of SELinux booleans that give Apache extra permissions. Adding the -P flag to setsebool makes the change persist across reboots.

setsebool httpd_tmp_exec on
setsebool httpd_can_network_connect on

mod_wsgi configuration

The default configuration tries to create the mod_wsgi sockets in a directory that SELinux does not allow Apache access to. You can change this by adding the following line to /etc/httpd/conf.d/wsgi.conf.

WSGISocketPrefix run/mod_wsgi

Apache virtual host configuration

Below is the Apache virtual host configuration. Note that I have chosen to use mod_wsgi’s daemon mode with multiple processes and a single thread because some of the libraries I’m using are not thread-safe.

<VirtualHost *:80>
 ServerAdmin dan@example.com
 DocumentRoot /home/vhosts/example.com/
 ServerName www.example.com

 Alias /robots.txt /home/vhosts/example.com/example/web/static/robots.txt
 Alias /favicon.ico /home/vhosts/example.com/example/web/static/favicon.ico

 # Static files.
 Alias /static /home/vhosts/example.com/example/web/static

 # Admin static files.
 Alias /media /home/vhosts/example.com/dependencies/Django-1.2.1/django/contrib/admin/media

 WSGIScriptAlias / /home/vhosts/example.com/example/web/example/django.wsgi
 WSGIDaemonProcess example.com processes=15 threads=1 display-name=%{GROUP}
 WSGIProcessGroup example.com

 ErrorLog logs/example.com-error_log
 LogFormat "%a %l \"%u\" %t %m \"%U\" \"%q\" %p %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" custom_log
 CustomLog logs/example.com-access_log combinedio
</VirtualHost>
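
For completeness, the WSGIScriptAlias above points at a django.wsgi script. A minimal sketch of such a script for Django 1.2 is shown below; the sys.path entries and settings module name are assumptions based on the paths in the virtual host and need to match your actual project layout.

# Minimal django.wsgi sketch. The paths and settings module below are
# assumptions based on the virtual host configuration above; adjust them
# to match your project layout.
import os
import sys

# Make the project and the bundled Django release importable.
sys.path.append('/home/vhosts/example.com/example/web')
sys.path.append('/home/vhosts/example.com/dependencies/Django-1.2.1')

os.environ['DJANGO_SETTINGS_MODULE'] = 'example.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()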

New OpenPGP key

For the two people who care, I’m migrating to a new OpenPGP key. I created the old key way back in 2001, so it is time to move to a longer RSA key instead of DSA. The new key also uses the stronger SHA-256 hash.

You can find the signed (with both the old and new key) transition note at:

http://www.coverfire.com/files/key-transition.txt

Here are a few useful links for anyone else making this transition.

http://www.debian-administration.org/users/dkg/weblog/48

http://keyring.debian.org/creating-key.html

Python tutorial and advice

A friend at work recently pointed me to a Python tutorial called Learn Python the Hard Way. It’s very basic, but the later part has a short opinion chapter titled Advice from an old programmer which is worth taking the time to read. Below is a quote from this chapter.

Programming as an intellectual activity is the only art form that allows you to create interactive art. You can create projects that other people can play with and you can talk to them indirectly. No other art form is quite this interactive. Movies go out to the audience. Paintings don’t move. Code goes both ways.

Programming as a profession is only moderately interesting. It can be a good job, but if you want to make about the same money and be happier you could actually just go run a fast food joint. You are much better off using code as your secret weapon in another profession.

Canada 3.0 Twitter graph

The other day I found Gephi, which was used to create these amazing graphs based on GitHub data. So I thought it might be fun to pull some data into Gephi and play with it. I decided to use the Twitter API to obtain all of the Tweets related to the upcoming Canada 3.0 conference in Stratford, ON, CA. I used the ‘can30’ hashtag as the search term, but since Twitter search only returns Tweets less than seven days old, the history is limited.

I used Python and igraph to create the graph and exported it to GraphML, which Gephi can import. Here’s the resulting GraphML file if you are interested.

I also used igraph to export PNG and SVG versions.

The nodes in the graph are Twitter users. The size of each node is proportional to the number of new Tweets with the #can30 hashtag; by ‘new’ I mean Tweets that are not re-Tweets. The edges represent re-Tweets, and the width of each edge is proportional to how many times the source user re-Tweeted the destination.
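
For anyone curious about the igraph side, a rough sketch of how such a graph can be assembled and exported is shown below. It assumes the Tweets have already been collected from the search API into a list of (author, re-tweeted user) pairs; the sample data and sizing factors are illustrative, not the exact script used for the graphs above.

# Rough sketch of the igraph portion. It assumes the Tweets have already
# been collected from the Twitter search API; the sample data and the
# sizing factors are illustrative only.
from collections import Counter
import igraph

# tweets: list of (author, re-tweeted user or None) pairs for #can30.
tweets = [("Canada3Forum", None), ("tobidh", None), ("someuser", "Canada3Forum")]

new_tweets = Counter(a for a, rt in tweets if rt is None)
retweets = Counter((a, rt) for a, rt in tweets if rt is not None)

users = sorted({a for a, _ in tweets} | {rt for _, rt in tweets if rt})
edges = list(retweets.keys())

g = igraph.Graph(directed=True)
g.add_vertices(users)                     # vertex names become the 'name' attribute
g.vs["label"] = users
g.vs["size"] = [10 + 5 * new_tweets[u] for u in users]   # node size ~ new Tweets

g.add_edges(edges)                        # edge endpoints can be given by vertex name
g.es["width"] = [retweets[e] for e in edges]             # edge width ~ re-Tweet count

g.write_graphml("can30.graphml")          # import this file into Gephi
# igraph.plot(g, "can30.svg")             # PNG/SVG export (requires cairo)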

Based on the graph, Canada3Forum is the largest source of new Tweets, followed by tobidh, and there are lots of users re-Tweeting Canada3Forum’s messages.

Canada 3.0 on Twitter

Linux x86_64 and Javascript

The competition between browsers in the area of Javascript performance has led to some pretty dramatic performance increases in the last couple of years. A lot of this has been accomplished through Javascript just-in-time (JIT) compilers. A JIT converts the Javascript into native machine instructions, which execute a lot faster than more abstract forms. The one downside to this approach is that each native architecture must be supported separately to get the speed boost.

If you follow Javascript performance you know that recent versions of Firefox have a JIT. What you may not know is that there is no JIT in Firefox for x86_64. This isn’t that big of a problem on Windows, since there are so few 64-bit Windows users, but Linux distributions have been natively 64-bit for quite some time. So if you’ve installed a 64-bit version of your favourite Linux distribution, you are getting far slower Javascript performance in Firefox than if you had installed the i686 version. How much slower?

The following results were gathered on an i7-930 running Fedora 12, with Firefox 3.5.8 and Epiphany 2.28.2, using the SunSpider and V8 Javascript benchmarks.

Browser/arch        V8 score (higher is better)   SunSpider (lower is better)
Firefox i686 PAE    402                           1002.6 ms
Firefox x86_64      277                           2131.2 ms
Epiphany x86_64     887                           1261.0 ms

These results show that the Javascript performance of i686 Firefox is a lot better than that of x86_64 Firefox. The Epiphany web browser is based on WebKit which, based on these results, I’m guessing does have an x86_64 JIT.