Sunday Selection 2009-07-19

I’m still at Virginia Tech for a few more weeks, but all the other students I was with have left. So I got moved to the really nice rooms for graduate students, the only downside being that I’m practically locked in since my card won’t open the doors (I was let in by a really nice grad student). Stupid technology. While I get that fixed, here’s another installment of Sunday Selection.


The Cathedral and the Pirate A funny but thought-provoking look at the Microsoft-Google competition in light of the announcement of the Google Chrome OS.


Ubiquity in Depth Ubiquity is a creation from Mozilla Labs, in the form of a Firefox extension, that aims to bring the web's different components and services into an easy-to-reach form in the browser. There's a six-minute demo video about halfway through the article.


Ubiquity It takes some getting used to, but it's a great tool. My personal favorite is the ability to quickly look up snippets of Wikipedia without leaving the page I'm on. I would recommend getting the latest beta.

Why I’m not giving up on Firefox yet

Google is upping the ante on pretty much every major producer of operating systems and web technologies. The release of Google Chrome might not have been as ground-shattering as a lot of people made it out to be, but it was certainly a clear message that Google was taking control of the web seriously. The announcement of Chrome OS has ruffled even more feathers and set the rumor mills working overtime, and if you look around the web you'll see all sorts of opinions on the whole situation. I've previously said that I wasn't ready to take a side until I saw an actual release of Chrome OS and could decide firsthand whether it met my needs. I'm going to stand by my word, and for the time being at least, my browser of choice is still Firefox on all platforms. Here's why:

1. Extensions

Chrome is growing support for extensions, and I hear they're going to be really easy to make. But it's going to be a while before they match the community support that already exists around Firefox. I use a number of extensions on a regular basis, including Zotero, the Diigo toolbar, Twitterfox, DownThemAll, FireFTP and Firebug. Some of these let me use stand-alone web applications without actually visiting the website (Twitterfox and the Diigo toolbar). Others let me easily perform common web-related tasks without requiring a separate program (FireFTP and Firebug). Some could easily be their own programs, but even compared to full desktop equivalents they are outstanding tools (Zotero). Equally important are more experimental extensions like Mozilla's Weave and Ubiquity, which already provide useful functionality that I expect to use more and more as they mature. Until Chrome can sport equally compelling tools, I'll stay put.

2. I don’t use Windows all that much

My use of Windows varies from about once a month to once a week; my computer time is divided almost equally between OS X and Linux. Though the open source components of Chrome compile and run on both of them, the ports are still far from complete. I don't mind trying out something experimental to get a feel for it (or if I have an active interest in it), but considering that Firefox already offers a high-quality browsing experience, I don't see a need to switch.

3. I don’t quite agree with the ‘every tab is a process’ model

A large part of Chrome's innovation is in treating the browser more like an operating system and each tab as a separate web application. While this is probably a good idea in terms of safety and reliability, it also turns the browser into a very memory-heavy application. Combine that with Vista's own inefficiencies and the fact that I often run other heavy programs (compilers, IDEs and the like), and I really want my browser to take up no more memory than it has to. As the article I linked to shows, Firefox does much better in memory terms than other popular browsers. The web applications that I leave open for any length of time are trusted, and most don't stay open for more than an hour or so at a time, so I'm not sure I need a fully multiprocess browser for now.

4. Firefox is catching up

When Chrome first came out, it did introduce a number of cool new features. Besides the memory model, it sported an incredibly fast JavaScript engine and (in my opinion) a really clean interface. Firefox was left in the dust to some extent, but it's quickly catching up. The new 3.5 release has seen improvements in JavaScript performance and support for the HTML 5 video tag. JavaScript still isn't quite as fast as on Chrome (for example, Chrome Experiments don't quite work right), but it's fast enough that the difference is unnoticeable for most people (including myself). Thanks to theming, users can make the Firefox interface as clean as they want it to be. There's even a good theme that closely mimics Chrome.

5. Chrome still has bugs that need fixing

While Chrome sports some great new technologies, there are still some problems that really need to be fixed. In particular, there are issues with image sizing (which I saw while viewing this blog with Chrome), as well as issues with the implementation of the HTML 5 canvas tag. The latter is especially important to get right, as it's going to matter a lot for the future of web apps. One example that I think will become really important is the Bespin code editor, which doesn't work right under Chrome (and not at all under Chromium). Until these bugs get fixed, I'm not going to be able to consider moving to Chrome for full-time use.

Though I'm not ready to leave Firefox yet, I'm certainly not making any promises for the future. I'm sure that Chrome will continue to push innovation in the browser sphere, and all net users will benefit as a result. At the same time, Firefox is also taking the initiative in a number of areas (especially with respect to HTML 5), and it will be interesting to see how the two keep up the pace. I would personally like to see some cross-fertilization between the open source browser communities. In particular, it might be worthwhile for Firefox to consider adopting the V8 JavaScript engine; in return, Chrome could learn a few lessons from Firefox with regard to HTML 5. The months and years ahead are certainly going to be an interesting period for web technology, and I'll be giving any browser move careful consideration.

The Chrome Wars have begun

I came to work this morning to find the intertubes shaking with Google's latest announcement: the coming of its Linux-based, web-oriented operating system for netbooks, Google Chrome OS. You've probably already read a lot of the other posts about Chrome OS and know something about how it works. It's an operating system at the core, but more importantly it's a platform tuned to running web apps. It's a clear signal from Google to pretty much every other operating system maker out there, including Microsoft and Apple, but also Linux distribution providers like Red Hat and Canonical. The message is simple and clear: move over, OS makers, the browser is the new application platform.

Google Chrome as the operating system for web apps

Though the reactions from around the web are mostly positive, some articles raise real issues. ZDNet Australia criticizes Chrome OS on the grounds that it will further fragment the Linux community (which will be contributing the kernel of the new OS) and that a better solution would have been to join with Ubuntu, which has already pushed Linux to new heights. A prediction from The Next Web claims that Chrome OS will be "the beginning of the end for Ubuntu & co" and that the real battle will be between Apple and Google, leaving everyone else in the dust. There's also concern about the fact that Google already has an operating system for the web: Android, even though it's only for mobile devices (though it has been ported to x86). Personally, I feel that these criticisms and fears surrounding Chrome contain the more interesting food for thought.

Microsoft isn't beat yet (image from Engadget)

Chrome OS is undoubtedly going to be interesting, both in terms of technology and in terms of the market forces it will affect. Also certain is that Google is more clearly than ever taking a swipe at Microsoft. Even as Google has become the most powerful player in the web sphere, the desktop operating system stronghold has undoubtedly been held by Microsoft; even many of Google's own applications (including Chrome) target Windows as the primary platform. Microsoft is still a force to be reckoned with. Windows 7 is shaping up well, and they have a few tricks up their sleeve, including a new browser project, Gazelle, and even a cloud-centered operating system called Midori in the works. They also have a powerful research wing that does some really interesting work, and a very big budget (enough for them to sit things out for a few years while they make a better product). Whether they will actually do so is still questionable, but let's not write them off just yet.

And then there is Apple. The last few years have seen Apple's gradual re-rise to stardom, starting with the beautiful new OS X and continuing today with its dominance of the online music store arena and the strength of the iPhone platform. Not many people seem interested in pitting Google against Apple, especially since Apple has stayed out of the mainstream operating system and netbook markets. However, when it comes to the internet, Apple has a considerable stake. The iPhone is as much a portable internet device as it is a phone. And though Apple has carefully stayed away from the low-cost netbook market, it's unlikely that they'll sit by while Google plays its hand in the portable computer market.

Apple may be the best suited to withstand Google (from The Next Web)

However, Apple's strength in the current situation probably stems directly from the closed, proprietary nature of its technology. Apple has a reputation for both creating and supporting great desktop apps. Good design has always been a hallmark of software running on a Mac, and most web apps are still far from matching the polish that Apple has to offer. The user experience offered by the complete OS X operating system, by virtue of the way it can tie together information across different apps, is still something that web applications (even suites like Google Apps) have largely not matched. I agree with The Next Web post that Apple probably has the best chance of retaining its user base as Google begins its foray into the operating system arena. With the iPhone they've shown that they're still capable of market-shaking innovation, and that will probably help them survive the coming OS wars.

One more important player in this market is Linux. Thanks in no small measure to Canonical, desktop Linux has gained some ground in the last few years. However, it still holds a very small piece of the desktop market. It's a valid concern that Google's entry might eat into the Linux market share. Though that's certainly possible, I'm not quite sure it will come to pass. A lot will depend on how easy it is to get things working on Chrome OS besides the browser and web apps, and on what new web apps have to offer. I personally have never been very hopeful about Linux's position on the consumer desktop. It's great for hacker types like me, but I'm still not fully convinced that I would recommend it for everybody; in my opinion, most Linux desktop apps still lack some amount of external polish. That being said, I wouldn't recommend Vista either, and I do think that OS X is the best OS for most users. I don't see Chrome OS as contributing to the 'fragmentation' of the Linux distribution scene, because I expect it to be very different from traditional distros, but in this case only time will tell.

So what can we expect in the months to come, before and after Chrome OS hits the market? Undoubtedly Google's announcement will cause the other big players in the field to sit up and take notice. I think this move might prompt other companies, especially Microsoft, to push out web-centric products sooner than they otherwise would have. Google is clearly looking to shake things up in the near future, and it would be folly not to plan to do something about it. However, it's also worth keeping in mind that Chrome OS is still some time away and there is a lot of work to be done: Chromium works on Linux, but only just.

Before we make any declarations about drastic change in the OS market, it would be prudent to wait and see what Chrome OS actually looks like when it's released. There is also the fact that Google will have to get people to actually use it, and that may be easier said than done (considering that most netbooks run Windows XP). Of course, as the iPhone has shown, there is room in the market for a sleek new product if it is made right. I will be interested to see how Chrome OS turns out, but I certainly won't be giving up my Linux laptop or my Mac Mini anytime soon. I wish Chrome OS luck and hope to see some good ideas implemented. As Yoda would say, 'Begun the Chrome Wars have', but I'm not ready to pick sides just yet.

Getting social networks under control

Social networks. We all use them (to some extent) and there are a lot of them. I appreciate the services they provide and enjoy using them, but it can be hard not to spend too much time and energy micromanaging them. For people who use just one social utility there isn't much of a problem: everything they do goes into that one environment. However, chances are you use more than one. It's perfectly fine to keep them completely separate from each other, and I'm sure that works for a lot of users. But I personally would like to present a coherent image of myself across the sites that I do use. As I found out over the weekend, that's not as simple as it sounds.

The Facebook Factor

Before I talk about how I actually went about organizing my social networks, there are a few things about the networks themselves that need to be said. First off, Facebook is the unstoppable juggernaut of social networks. I didn't really think about this until I started on my quest, but Facebook does (in a limited way) a lot of the things that other social networks individually focus on. You can use Facebook to keep in touch with friends and make new ones. You can post photos to Facebook albums (I haven't heard of any limits beyond 60 photos per album). There's the status line, where you can broadcast what you're doing to everyone who wants to listen; conversely, the news feed lets you keep track of everyone else. You can write notes and use them as a lightweight blog. Interesting links and videos you find around the web can also go on Facebook. And everything you do has a chance to become the basis of a long-winded, interactive discussion between your friends.

Facebook's purpose as a social network is essentially to surround all the other social networking utilities. Posted a video on YouTube or saved an interesting link on Delicious? Put a link to it on Facebook. Wrote a blog post? Post a link, or have it automatically mirrored in your notes (I learned this trick from a new friend of mine). Want to share photos of your hike with everyone who went with you? Put them in a Facebook album, tag your friends and they'll be automatically notified; no need to decide between Flickr, Picasa and Zooomr, or to worry about how you're going to let others know that their photos are up. If you need a place to discuss something, you can start a Facebook group and invite your friends rather than setting up a mailing list or forum. Though Facebook does all these things decently well, it doesn't do them quite as well as the sites built specifically for those purposes. And Facebook is definitely an 'inside-out' community: to use it well, you need to have a network of friends first. If your goal is to reach out and find new people to connect with, you're better off using another social site.

Facebook was my first problem. I tend to think of Facebook as a place for connecting with people I already know fairly well. It's a good way to keep my friends updated on what I'm doing, especially if they aren't using Twitter or Delicious or following my blog regularly. I really want Facebook to be part of my social network, but since it won't directly help me connect with new people, I want to use it as passively as possible. To be fair, I suppose Facebook could be used for 'reaching-out' purposes, but I personally don't want to do that, since in my opinion there are better tools for the job. Luckily for me, Facebook seems to understand its role as an aggregator of sorts: it makes it simple to integrate your blogs as notes and to post your Delicious links and shared Google Reader feeds to the news feed. So even though most of my online activity takes place elsewhere, almost all of it is seamlessly mirrored on Facebook for my friends to see and discuss.

A Network of Networks

The second part of the problem involves the many services that I use. The ones I use most frequently are Twitter, Delicious, this blog, and Facebook (not necessarily in that order). I do have a Picasa account, but I mainly use it for sharing pictures with my parents and close family, so it's not really part of my social net. For a while I really wanted to integrate all of this together, and I found a cross-posting service that let me do just that: I could post something once and it would be sent to Twitter and Facebook, with any links saved to Delicious. Using Twitterfeed, I could have blog updates routed through it as well. This setup worked pretty well, and there was nothing wrong with the implementation itself. However, I found some flaws in working this way that involved the very concept of trying to pull everything together.

The services I've named above are all very different. That's probably part of the reason why Facebook only offers generic clones of them (for now, at least). Twitter is good at communicating small snippets of information, while my blog is for far more detailed writing (my posts are routinely over a thousand words). Saving links through the cross-posting service is quick, but it means I don't fully utilize Delicious's tagging functionality, which helps greatly when trying to find something later. By using one service to quickly post to everything, I was losing out on the focused functionality that each of them offered.

I found myself faced with a classic dilemma: I wanted people who 'followed' me to be able to see everything I did socially, but I also wanted to use the individual services to their full potential. I found the answer (to some extent) in Friendfeed. It's an aggregator for all your social content, not entirely unlike the role Facebook fills. Friendfeed gathers your activity from as many as 58 services and provides a single unified feed that people can follow and interact with. My Friendfeed collects information from all the services listed above, as well as Google Reader's shared feeds. In an ideal world, everyone would see my Friendfeed and interact mainly via comments on it (or on the original sources).

However, it's not an ideal world. In particular, most people I know (and who know me) don't actually use Friendfeed, unfortunately. To get around the fact that not everyone is going to see the collected feed I present, the most reasonable solution is to maintain a careful amount of redundancy. I say careful because if I were to simply link everything to everything, not only would people see the same thing many times over, I could also potentially set up infinite loops, with messages propagated from service to service and back ad infinitum. My first step in this careful-redundancy setup is to keep Facebook separate: I use its native integration features to pull information from the other services, but nothing pulls from Facebook. I generate very little original content on Facebook, so it's mainly just for collection and discussion.
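One way to reason about this careful redundancy is to treat the cross-posting rules as a directed graph and check it for cycles before turning anything on: a cycle is exactly the infinite-loop scenario I'm worried about. Here's a hypothetical sketch in Python; the service names and forwarding rules are illustrative choices of my own, not any real API.

```python
# Hypothetical cross-posting setup: each service lists the services
# it automatically forwards new items to.
links = {
    "blog":       ["twitter", "friendfeed"],
    "delicious":  ["friendfeed"],
    "twitter":    ["friendfeed"],
    "friendfeed": [],   # nothing pulls from the aggregator back out
    "facebook":   [],   # pulls from others, pushes to nothing
}

def has_cycle(graph):
    """Depth-first search for a cycle in the forwarding graph.
    A cycle means a post could bounce between services forever."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / in progress / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GREY
        for nxt in graph.get(node, []):
            if color[nxt] == GREY:    # back edge: we looped around
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

print(has_cycle(links))  # False: this routing is safe to enable
```

Adding, say, a rule that mirrors Friendfeed back to Twitter would immediately make `has_cycle` return True, which is exactly the configuration to avoid.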

The second step is to take into account the distinctions between the services and use them accordingly. For example, Twitter is for status updates, so it gets used the most, while Delicious is for saving stuff from the web. Previously I had everything I saved in Delicious appear on Twitter as well, but I've come to realize that I save a fair amount of stuff for my research work that I don't think everyone wants to know about. By separating the services, I can post to Twitter only the things I think people will find interesting. I can also use Twitter to elicit responses and feedback from followers, while I use Delicious more for classification. (I'm also considering moving to Diigo, but that's a subject for another post.) My blog still gets mirrored to Twitter via Twitterfeed, because I think that's an important part of what I want people to know about. Of course, if you follow me on Friendfeed, you will see all my bookmarks and some duplicates. That's something I'm still working to solve; I'm not comfortable with having my followers see the same things multiple times.

The third and final piece of the puzzle involves Twitter. Though I use it mainly to send updates to my followers, it also becomes a medium for discussion, with people retweeting and replying to what I've said. Unfortunately, the native Twitter interface isn't the best for managing a multi-way communication stream, and I've found that I can miss a lot if I don't pay careful attention. Luckily, there are a number of Twitter clients that do a good job of managing your Twitter traffic for you. The one I currently use is TweetDeck, which provides a nice multi-panel layout for seeing general tweets, replies, mentions of your name and direct messages side by side.

In conclusion, managing your social networks is still a tricky problem and requires some careful thought to get right. Even then, it's not a perfect solution by any means (especially if your friends aren't on the same networks). As time progresses, it'll be interesting to see how social networks evolve. Facebook in particular is looking to place itself at the center of the social web. Personally, I'm not a big fan of having Facebook be the center of everything, though it could have its benefits. For the time being, Facebook does have serious competitors, each with their own strengths, and there are some things (like this blog) that I would prefer to happen outside any single social network.

Taking a look at 3D interfaces

I'm spending the better part of my summer working on a software engineering research project at Virginia Tech's computer science department. Coming from a small liberal arts college, it's quite an interesting experience for me to learn about all the cool things that the various groups, faculty members and grad students are doing. The department has a well-established Research Experience for Undergraduates program in the area of human-computer interaction (HCI). Though my work isn't really HCI, I tag along with the other students in the program and as a result get to attend interesting presentations and discussions by faculty members working on HCI-related problems.

Today we got to listen to a presentation by Dr. Nicholas Polys, the Director of the Visual Computing Group. His presentation was mainly about 3D interfaces: how they are being implemented (on open W3C standards) and how people are using them to solve real problems. I thought the projects he showed us were interesting, but I had one major problem with what we were shown: even though these 3D environments were really well thought out, users' interactions with them would still be via flat 2D monitors and devices like mice and keyboards, which aren't the best for navigating a complex 3D environment. The way I'd like to see a 3D environment work would be like the Iron Man movie, where Tony Stark manipulates a 3D projection of his suit design using simple, intuitive, direct hand movements.

Dr. Polys answered my question by suggesting that 3D interfaces help primarily by letting you fit more information into the display, and that a fully immersive 'move-your-hands' interaction system might be too cumbersome for day-to-day use. But he acknowledged that I had brought up valid questions worth looking into. The presentation was fun, but even better was the demonstration we got later of the Virginia Tech CAVE. The CAVE is an immersive virtual reality environment, but instead of being used for entertainment or simulation, it's used for visualization of scientific data.

The system is actually quite simple in concept. Projectors cast images onto large screens on three sides of the user and on the floor below. The images take advantage of the fact that our two eyes see slightly different views of the world. Combined with special glasses that synchronize with the projectors to block the corresponding image for each eye, they give the illusion of being in a fully three-dimensional environment. Any number of people can wear these glasses inside the CAVE, but there is one 'pilot' whose glasses carry sensors that allow head-movement tracking: the images change to adjust for where the user is looking. Movement through the environment is via a simple pointing device tracked in six degrees of freedom. The CAVE information page has more details, as well as videos.
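Under the hood, the head-tracked stereo described above comes down to two small pieces of arithmetic: offsetting the tracked head position by half the interpupillary distance to get each eye, and building an asymmetric (off-axis) frustum for each wall from that eye's position relative to the wall. Here's a minimal Python sketch, assuming a single axis-aligned wall lying in the z = 0 plane with the viewer at positive z; the function names and the 6.5 cm eye separation are my own illustrative choices, not anything from the CAVE's actual software.

```python
import math

def eye_positions(head_pos, right_dir, ipd=0.065):
    """Offset the tracked head position along the viewer's right
    vector to get the two eye positions (ipd is the interpupillary
    distance in metres; 0.065 is a commonly used default)."""
    norm = math.sqrt(sum(c * c for c in right_dir))
    rx, ry, rz = (c / norm for c in right_dir)
    hx, hy, hz = head_pos
    h = ipd / 2.0
    left_eye = (hx - rx * h, hy - ry * h, hz - rz * h)
    right_eye = (hx + rx * h, hy + ry * h, hz + rz * h)
    return left_eye, right_eye

def offaxis_frustum(eye, screen_lo, screen_hi, near=0.1):
    """Asymmetric view frustum for one wall, modelled as a screen in
    the z = 0 plane spanning screen_lo..screen_hi in x and y, seen by
    an eye at positive z.  Returns (left, right, bottom, top) extents
    at the near plane, the numbers a glFrustum-style call expects."""
    ex, ey, ez = eye
    scale = near / ez   # similar triangles: near plane vs. wall distance
    return ((screen_lo[0] - ex) * scale,
            (screen_hi[0] - ex) * scale,
            (screen_lo[1] - ey) * scale,
            (screen_hi[1] - ey) * scale)
```

Each frame, the pilot's tracked head pose would feed `eye_positions`, and each eye would then get its own `offaxis_frustum` per wall, which is why the scene appears to shift correctly as the pilot moves around.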

After interacting with various environments and simulations, I made a number of observations (which most of my friends agreed with). The most realistic simulations aren't the fully immersive environments, but rather projections of smaller, discrete objects. Our first demonstration was projections of various insects. These seemed very real, and it was quite easy to believe that they weren't projected onto the walls but actually occupied the space in between them. I think this was because they were small, discrete objects rendered with a very high level of detail compared to some of the later demonstrations.

The most interesting demo we saw was a 3D model of the myoglobin molecule, representative of the scientific visualization work the CAVE is primarily used for. It was interesting to fly around the molecule and see how the different atoms and ions connect together. The remaining simulations were all of real-world environments. One was a model of a solar-powered house that Virginia Tech students had designed; another was a model of a city used for early detection of Alzheimer's by testing subjects' abilities to navigate streets and complete day-to-day tasks. They were both very interesting, but seemed less real than the bug or molecule simulations. For one thing, the graphics were of much lower quality so that they could be rendered fast enough to be responsive, and because they took up the entire field of view, it was easier to notice the edges where the walls met, which hindered the illusion considerably.

We all had a really good time, and I'm sure it's an experience many of us will remember for a long while. In the end, I have mixed feelings about 3D interfaces. The bug simulation convinced me that they're very useful for looking at small objects or designs where you'd like to move the object around and look at it from all angles. They're also good for larger models (such as molecules) as long as you're not looking for photo-realism. I think it's definitely worth using when a simple 2D image, or even a 3D model on a small screen, just doesn't cut it. However, the technology isn't yet at the stage where a full-blown immersive simulation like a city can be made to look real enough to be truly satisfying (especially if your standards are the 3D environments in modern computer games). Immersive virtual reality technologies like the CAVE are definitely important, and I'm sure more and more scientists will be using them for modelling and visualization work in the near future.

3D environments on the desktop are a somewhat different matter. Using 3D models for things like design is of course very helpful, but as a general paradigm for interaction, I think 3D on the desktop isn't a very good idea, at least not with the current interface tools we have. Controlling a 3D setup with a mouse can be very tiresome, and I don't like the idea of having to 'walk' through a virtual space to find something when I could find it much faster laid out as a simple menu or set of buttons. Things like BumpTop and Shock Desktop 3D look really cool, but I wonder how easy they would be to use in day-to-day work. Then again, I'm pretty much a confirmed minimalist, so I'm probably not unbiased. The way modern desktops work in 2.5D, with 3D-like effects (piling windows on top of each other, transparency, gradients) but still a fundamentally 2D interface, is a good way for people to work while they're looking at 2D screens. Of course, automating those features and making them easy to use is wonderful; Apple's Exposé is a good example.

What I'd really like to see is some sort of projection technology that lets people interact with 3D representations in a simple way. That being said, I don't think 3D interfaces are going to take over any time soon. The truth is that the simple 2D format is deeply entrenched and works well enough for most intents and purposes. The keyboard and plain text are also a very efficient way of communicating with a computer, and they're not going away anytime soon (though I would really like to see large touchscreens become cheap). 3D interfaces are an interesting technology and I would love to see them evolve. I have my doubts about whether they are ready for the mainstream, but they're certainly worth looking at, especially if you do data visualization of any kind.