Google Wave has already spoiled me

Google Wave is ostensibly supposed to be what email would look like if it were invented today. Personally, I’m finding that to be very much the case. Case in point: a few days ago I got an email from another student wanting some information on one of my previous summer projects. I promptly forgot all about it until last evening, when my professor (who had supervised the project) emailed me and one of my collaborators and politely told us to reply to the original email. I went back to the original email, replied and CC’d my professor. While I was doing this, I was thinking about how much simpler it would be if we were all on Wave.

Let’s look at all the different communications that took place as this happened:

  1. The new student emailed me and my collaborator separately asking if we would get him up to speed. He may also have emailed my professor separately. That’s 2, maybe 3 emails that went out.
  2. Both my collaborator and I neglected to reply, which led my professor to send out one more email (the same one to both of us).
  3. I replied to the original email and CC’d it to my professor. He now has one more email.

In the end, we each have between 2 and 4 emails in our inboxes, in at least two separate threads. Gmail with its conversation view makes things a bit cleaner to handle, but Wave would make it easier still. Plus there is the human element. I didn’t CC my collaborator on the reply to the original email and probably won’t know what arrangements he makes with the new student. The student might end up meeting separately with both of us when it would make sense to meet all together.

In Wave, the new student would create a new Wave and add himself, my professor, my collaborator and me to it. We would then forget about it until my professor replied on the Wave, which would float it back up to the top of the Inbox. I would post my reply and so would my collaborator. We would all have an idea of how we were going to meet and get each other up to speed. There would be no problem of “forgetting to CC”.

I don’t mean to say that Wave is the solution to all problems. A lot of the time, the problem with new technologies like this is social, not technological. In the previous example, we wouldn’t really have needed Wave if the student had sent out a single email to all of us and my professor had just replied to that to remind us. In that case, Gmail with its conversation view (or any other client with threading) would have been sufficient. On the other hand, Wave would also have failed if multiple Waves had been created or if my professor hadn’t been included on the original Wave. But even there Wave would have had a slight advantage because of how easy it is to add another person to an existing Wave – much easier than CC-ing someone on an entire email conversation.

I should say that a lot of the functionality I mentioned above could be handled by any modern wiki software. Instead of using a Wave for the discussion, use a wiki and send people the URL for it. Admittedly, there are some administrative questions (how hosting and permissions for the wiki would work), but the essence is the same: instead of sending packages around, have a wall where everyone posts.

But the similarity in strengths between Wave and modern wikis also highlights what I consider the biggest challenge to Wave’s popularity. When we used wikis for my Software Engineering class, we kept all our team information in the wiki (and kept it fairly up to date once we got used to it), but whenever anyone made a change, we would send out an email to everyone else asking them to look it over. Wave will be useless if I have to send out an email to people telling them to check their Waves. It will become just one more thing that I have to devote mental bandwidth to instead of focusing on my job. Need I reiterate that this is again a social problem, not a technological one? Facebook has managed to beat this (at least among my fellow college students). We check Facebook out of habit often enough that we don’t need email reminders telling us to go check it. Twitter also seems to have beaten the social inertia, but I suspect that the brevity of its messages changes the game somewhat.

It’s still too early to tell whether Wave will succeed in its professed goal of being what email would be if it were invented today. I like the way it’s going, but I’m still not sure whether I’m ready to give up email for it any time soon. There are still some organizational issues that need to be solved. In particular, long conversations in a single Wave can be hard to follow, especially if multiple discussions are taking place in the same Wave. It’ll also be interesting to see whether other solutions like email or wikis start to learn some lessons from Wave and incorporate its features. Google Wave has a lot of potential and it’ll be interesting to see how it changes the web and the way we communicate on it.

What programmers can learn from Ender’s Game

Ender’s Game is one of my favorite books. It’s not exactly the pinnacle of human literary achievement, but Ender’s Game and its parallel novel, Ender’s Shadow, both have really good storylines and powerful characters. I think they are books that everyone should read at least once. In particular, all programmers should read both of them.

Though the book is about children who are fighting a war for humanity’s survival, it’s also about soldiers learning to be generals and all-round leaders. At first read, it doesn’t seem to have anything to do with programmers. But a few months ago I read an OSNews article about the problem with separating the design and implementation of software. After sitting in the back of my head and germinating for the past few months, the one paragraph that came back to me yesterday morning is this:

The major problem with this is that ALL of software is design. 100% of software is design from the high level architect-like design to the low-level design of a for-loop. The implementers of software are not human! I knew you suspected as much given how odd many programmers are. No, the implementers of software are actually ‘perfect’ machines. They are the compilers (interpreters, preprocessors… are all included in the generic use of the this word). For almost all purposes, the compiler is perfect. I’ve yet to run into a situation where I’ve written code and the compiler has not followed my instructions and that is the reason something broke. It hasn’t happened yet.

One of the core themes of Ender’s Game and Ender’s Shadow is how the main character, Ender, learns to become the commander of an elite fighting force. This force is special because, unlike other commanders, Ender trusts his soldiers and gives them the independence to develop their own strategies and techniques. What I realized is that just as Ender is the commander of his fighting force, programmers are the commanders of their own programming forces. Instead of snipers or demolition experts, we have our compilers and source control systems. And like Ender, we must learn how to use our teams well and get the work done.

As programmers, especially novice programmers, we tend to anthropomorphize. A lot. Often enough we say things like “My program isn’t doing what I tell it to do” or “My program is misbehaving again”. In many ways, anthropomorphizing is a coping mechanism: it gives us a comfortable way of thinking about something that is very different from most other things we experience in life. While it might help us to think about computers and programs in human-like terms, it can also be a pretty limiting point of view.

The OSNews article above is about how thinking in terms of separate implementation and design phases is detrimental to writing quality software. The “perfect machines” idea that he talks about is an argument against the way we humanize computers. Because we think about our programs as people, we assume that they make mistakes just like people do. We assume that our programs aren’t doing what we want because they don’t like us or that they’re just misbehaving. I’ve been programming for a good few years now and I find myself falling into this trap more often than I would like. However, the truth is that we’re not dealing with people: we’re dealing with perfect machines. A mistake in our program is much more likely to be our own fault than it is to be a problem in the compiler or libraries.
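To make that concrete, here are two small Python examples (my own, not from the OSNews article) of moments where it’s tempting to blame the machine even though the interpreter is doing exactly what the language rules say it should:

    # A classic "the language is broken" moment: binary floating point
    # cannot represent 0.1 exactly, so the machine is doing precisely
    # what we asked, just not what we meant.
    print(0.1 + 0.2 == 0.3)    # False
    print(0.1 + 0.2)           # 0.30000000000000004

    # The same goes for mutable default arguments: the default list is
    # created once, exactly as the language specifies, not "wrongly".
    def append_item(item, bucket=[]):
        bucket.append(item)
        return bucket

    print(append_item(1))      # [1]
    print(append_item(2))      # [1, 2] -- surprising, but not a bug in Python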

Just as Ender learned to trust his soldiers to act on their own in battle situations, we programmers need to trust our machines. We need to understand that our machines are right more often than not. Sure, compiler errors aren’t perfect and it sucks when GCC spits out three pages of errors to tell you that you missed a semicolon, but that doesn’t mean that your program doesn’t “want” to compile. It’s not easy, because the world of computers is really very different from the world of people, and thinking of programs in “people terms” helps us form a bridge. But like all such bridges, this one is ultimately a set of training wheels that we need to lose. And you can’t ride a mountain bike if you’re not ready to lose the training wheels.

Personally, I’m going to try to make a conscious effort not to blame the machine when something goes wrong. Problem-solving is what we do as programmers, and it really helps to look in the right place. The right place is my own program, because I can trust that my troops have done what I’ve told them to do; if things are going wrong, it’s because I’m telling them to do the wrong thing. I could go on waxing eloquent about how we need to take responsibility for our programs, but I think I’ve made my point. Our implementers are perfect machines, but we’re certainly not perfect and we need to keep that in mind.

The value of the network

According to Metcalfe’s Law, the value of any network is proportional to the square of the number of connected users. As the Web becomes an increasingly important part of our lives, it’s important to keep that in mind. It’s especially important to remember if you’re planning on running a business based on the Web or if you plan on using some web-based technology as a core part of your workflow.

I’ve had a Google Wave account for a few weeks and I practically never use it. This is despite the fact that I communicate a lot online. Even with Facebook, Twitter and this blog, the fact remains that the one tool I use more than anything else is email. It’s not because email is the most efficient or powerful medium (it’s better than Facebook’s messages, but less powerful than Wave) but rather because it’s the network with the largest number of connected users, and the square of that number is huge. Everyone has email. Everyone checks their email. The main reason I can see for Wave not catching on outside of internal corporate/team networks is that the email network is massively more valuable and powerful.
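To get a feel for how quickly that value grows, here’s a rough back-of-the-envelope sketch in Python (the user counts are completely made up, purely for illustration):

    # Metcalfe's Law: a network's value grows roughly as the square of the
    # number of connected users -- more precisely, as the number of possible
    # links, n * (n - 1) / 2.
    def metcalfe_value(users):
        return users * (users - 1) // 2

    email_users = 10 ** 9   # hypothetical: "everyone has email"
    wave_users = 10 ** 6    # hypothetical: a young, invitation-only network

    ratio = metcalfe_value(email_users) / metcalfe_value(wave_users)
    print("The email network is roughly %.0f times more valuable." % ratio)
    # A network 1000x larger comes out roughly 1,000,000x more valuable.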

The same principle holds even outside the Internet. Lately I’ve started looking for people who can be creative partners – people with whom I can work on innovative and fun projects. I’ve been looking outside my friend circle (though some of my friends are my creative partners too) because I want to create a larger, well-connected network with a specific purpose. Looking back, I can trace the start of this network to about a year or so ago. Of course, I had no idea at the time that I was creating such a network, but now that I can see it happening, I can see the value of the network growing as I add more people to it. As this network grows, we all benefit by having access to more opportunities and coming up with better ideas than we would have with fewer members. We’re using technology and the Web to sustain the network, but its power is independent of the medium.

I only really started thinking about networks when I read this post by Anil Dash where he talks about how he loves New York and its startup scene:

New York City startups are as likely to be focused on the arts and crafts as on the bits and bytes, to be influenced by our unparalleled culture as by the latest browser features, and informed by the dynamic interaction of different social groups and classes that’s unavoidable in our city, but uncommon in Silicon Valley. Best of all, the support for these efforts can come from investors and supporters that are outside of the groupthink that many West Coast VC firms suffer from. When I lived in San Francisco, it was easy to spend days at a time only interacting with other web geeks; In New York, fortunately, that’s impossible.

The reason that New York is becoming such a hotbed of new social technology startups is the networks it fosters. Similarly, the reason its startups are so different from Silicon Valley’s is that the composition of the network is different. Both networks have lots of nodes and lots of connections, and hence great value, but their different compositions make them, and the creations they engender, very different.

Even when it comes to pure technology, the network is powerful. The lone hacker in the basement may be the stuff of legend, but the most famous hackers are also the best networkers. Linus Torvalds put Linux out on the Internet and leveraged the power of that network. Jamie Zawinski helped build Netscape, which in turn was instrumental in the creation of the modern Internet. He was also responsible for XEmacs – a text editor with a large and powerful network of its own.

The moral of this article is this: don’t neglect the network. It might seem heroic and even fun at times to just lock yourself in your room and crank out code (or other works of art), but your work will only become powerful if there are lots of people using and appreciating it. And the fact is that no matter what kind of introverted and antisocial tendencies you might have, networking is fun if you find the right people to network with. At the same time, networking for networking’s sake is a mistake and can be very frustrating. Start out with a purpose – making a great free operating system, for example – and then go find people who are into the same kind of thing. Happy networking.

Computing is still in the dark ages

Despite all the talk of Web 2.0 and the shiny multicore machines with their gigabytes of RAM and billions of cycles per second, I sometimes can’t help feeling that we are still very much in the dark ages of computing. This time around, my dark, gloomy feelings have been brought about by this message to a mailing list, which in turn was sparked off by the announcement of the Go Programming Language. As a computer user and a programmer, I feel that the actual use of computers is far below their potential.

As the years go by, it seems like we keep piling layer on top of layer while the results aren’t proportional to what we have to learn to get things done. Now, I’m not proposing that we all start writing down-to-the-metal code or force everyone to become a programmer, but things are starting to look like a mess. Web programming is an interesting development, but it adds yet another layer on top of the existing kernel, operating system, libraries and GUI toolkits. Add to that the fact that all browsers are still a bit different from each other, and you can start to understand why I’ve yet to make a serious foray into web programming.

But even without the web and the many formats and barely interoperating systems out there, there’s enough on the desktop to get you depressed. Start with the fact that there are currently three major operating systems out there, and if you want to write a program that runs on all three of them, you don’t have an easy task. You either embrace three different toolkits and programming methodologies and maintain three very different codebases, or you use something like Java, which works on all three but screams non-native on each one. Even though there are languages like Python that run on all three, it really puts me off that there is still no top-notch multiplatform GUI library. wxWidgets tries pretty hard, but if you look at the screenshots you can see pretty easily that they don’t look quite right. It’s not very surprising that lots of smart developers are flocking to the web, where things are a lot smoother in comparison.
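As a small illustration of the trade-off, here’s a minimal Python/Tkinter sketch: Tkinter ships with Python and the same code runs on Windows, OS X and Linux, but the widgets are drawn by Tk rather than by the native toolkit, which is exactly the “works everywhere, looks native nowhere” problem I’m complaining about:

    # A minimal cross-platform window using Tkinter (bundled with Python).
    # It runs unchanged on all three major desktops, but it doesn't look
    # quite native on any of them.
    import tkinter as tk

    root = tk.Tk()
    root.title("Cross-platform, but not quite native")
    tk.Label(root, text="The same code runs on all three desktops.").pack(padx=20, pady=10)
    tk.Button(root, text="Quit", command=root.destroy).pack(pady=(0, 10))
    root.mainloop()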

There is also the fact that all programming languages, like all other major pieces of software, suck – some just more than others. I still stand by what I said in my last post, that it’s an exciting time for language enthusiasts, but I also feel that there are some lessons we really need to learn. I’m starting to suspect that there may not be any true general-purpose language, simply because there are so many different types of problems to be solved. I think we need to start creating broader categories: a set of systems languages similar to C, going in the direction of D and Go; a set of hyper-optimizing VM-based languages designed for long-running, parallel server applications (the current JVM is a good example); and a set of languages for writing end-user apps that are significantly higher-level, but are still compiled to pretty fast native code (maybe not C-fast or even optimized-VM-fast, but better than today’s Python or Ruby). I’m thinking Python in its Unladen Swallow incarnation might fill this gap.

As a programmer, I find the state of the tools we have to use really quite depressing. Tools like Emacs and Vi are powerful and all, but let’s face it: we could have much more powerful IDE technology by now. We should have full-blown incremental compilation with autocompletion and inline documentation for every major language out there. We should also have seamless version control with granularity down to the undo level. Every change I make should be saved, and I should be able to visually browse all these changes, see what they are and restore an older state (or commit them if I want to). We have the raw computing power needed to do all this, and yet we remain stuck doing mostly batch-style edit-compile-debug cycles and mucking around in plain text. Eclipse with its incremental compiler makes things much easier, but there’s so much more we could be using our machines for.
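As a rough sketch of the “version control down to the undo level” idea, here’s a toy Python class (entirely hypothetical, nowhere near a real editor) that records every edit as a revision that can be browsed and restored:

    # A toy model of "every change is a revision": each edit is snapshotted,
    # and any earlier state can be inspected or restored. A real tool would
    # store diffs and hook into the editor, but the idea is the same.
    class VersionedBuffer:
        def __init__(self, text=""):
            self.history = [text]              # every state the buffer has ever had

        @property
        def text(self):
            return self.history[-1]

        def edit(self, new_text):
            self.history.append(new_text)      # an implicit "save" on every change

        def revisions(self):
            return list(enumerate(self.history))

        def restore(self, revision):
            self.history.append(self.history[revision])

    buf = VersionedBuffer("hello")
    buf.edit("hello world")
    buf.edit("hello, world!")
    buf.restore(1)                             # back to "hello world", nothing lost
    print(buf.text)
    print(buf.revisions())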

As a user, what irritates me is the amount of manual labor we still have to do on a daily basis. We still have to carefully name and place files so that we can find them later. I have to manually hit the save button (see the version control bit above). Even with the Internet, collaboration is a mess, with most people throwing around emails with ever-larger attachments. Add to that the fact that most email clients are pretty dumb pieces of software. Google Wave is a step in the right direction, if enough people get around to actually using it (and if it can integrate, at least to some extent, with the desktop). I also think the web and the desktop need to be brought closer together. Ideally, I would be able to sit down at any computer with a live Internet connection and have my full custom work environment (or at least the most important parts of it).

I’m fully aware that none of the things I’ve mentioned are trivial. In fact, they’re probably very hard projects that will take expert teams a good few years to complete. One day I would like to seriously work on some of the programmer-related issues, especially the IDE part. I love Emacs, but there are some parts of Eclipse I really like too. For the time being I’m going to have to make do with what I have, but I’ll be sure to keep an eye out for interesting things and movements in the right direction.

It’s a great time to be a language buff

I make no secret of the fact that I have a very strong interest in programming languages. So I was naturally very interested when news of the Go Programming Language hit the intertubes. Go is an interesting language. It pulls together some very powerful features with a familiar but clean syntax, and it has lightning-fast compile times. It certainly takes a place on my to-learn list along with Haskell and Scala. But even as Go became the latest hot piece of language news, it dawned on me that over the past few years we’ve seen a slew of interesting languages offering compelling alternatives to the industry “mainstream”.

I guess it all started with the rise of scripting languages like Python, PHP, Ruby and the poster boy of scripting: Perl. For me, these languages, with their dynamic typing, “batteries included” design and interesting syntax, provided a breath of fresh air after the likes of C++ and Java. Not that C++ and Java are necessarily bad languages, but they aren’t the most interesting of modern languages. In the early years of this decade, computers were just getting fast enough to make writing large-scale software in scripting languages practical. Things have changed a lot since then.

Dynamic languages are no longer reserved for small scripts. Software like Ruby on Rails has proved that you can write really robust back-end infrastructure with them. The languages, for their part, have kept on growing, adding features and making changes that keep them interesting and downright fun to use. Python 3.0 was a brave break from backwards compatibility in order to do interesting things, and it goes to show that these languages are far from ossifying or degrading.
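A few of the deliberate, backwards-incompatible changes in Python 3.0 give a sense of the kind of break involved:

    # print is now a function; Python 2's `print "hello"` is a syntax error.
    print("hello")

    # Division of integers gives a float by default; floor division is //.
    print(1 / 2)      # 0.5 (Python 2 gives 0)
    print(1 // 2)     # 0, the old behaviour, now spelled explicitly

    # Strings are Unicode text by default; raw bytes are a separate type.
    text = "héllo"
    data = b"hello"
    print(type(text), type(data))    # <class 'str'> <class 'bytes'>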

Then there is JavaScript, which was supposed to die a slow death by attrition as web programmers moved to Flash or Silverlight. But we all know that didn’t happen. JavaScript has stayed in the background since the rise of Netscape, but it’s only recently, with advances in browser technology and growing standards support, that it has really come into its own. I’ve only played with it a little, but it’s a fun little language which makes me feel a lot of the same emotions I felt when discovering Python for the first time. Thanks to efforts like Rhino, you can even use JavaScript outside the browser for non-web-related programming.

Of course, if you want to do really interesting things with these languages, then performance is not optional. Within the last year or two there’s been a strong push in both academia and industry to find ways to make these languages faster and safer. Google in particular seems to be in the thick of it. Chrome’s V8 JavaScript engine is probably the fastest client-side JavaScript environment, and their still-experimental Unladen Swallow project has already made headway in improving Python performance. V8 has already enabled some amazing projects and I’m waiting to see what Unladen Swallow will do.

While we’re on the topic of performance, mentioning the Java Virtual Machine is a must. The language itself seems to have fallen from grace lately, but the JVM is home to some of the most powerful compiler technology on the planet. It’s no wonder, then, that the JVM has become the target for a bunch of interesting languages. There are the ports of popular languages – JRuby, Jython and Rhino. But the more interesting ones are the JVM-centric ones. Scala is really interesting in that it was born of an academic research project but is becoming the strongest contender for Java’s position as the premier JVM language. Clojure is another language that I don’t think many people saw coming. It brings the power of Lisp to a modern VM, unleashing a wide range of possibilities. It has its detractors, but it’s certainly done a fair bit to make Lisp a well-known name again.

Academia has always been a hotbed when it comes to language design. It’s produced wonders like Lisp and Prolog and is making waves again with creations like Haskell (whose goal is ostensibly to avoid popularity at all costs) and the ML family of languages. These powerful functional languages with wonderful type inference are a language aficionado’s dream come true in many ways, and they still have years of innovation ahead of them.

Almost as a corollary to the theoretically grounded functional languages, systems languages have been getting some love too. D and now Go are languages that acknowledge that C and C++ have had their heyday, and that systems programming does not have to be synonymous with bit-twiddling. D has gotten some flak recently for not evolving very cleanly over the last few years, but something is better than nothing. And a real shift away from manual memory management is a welcome change.

As someone who intends to seriously study language design and related concepts in the years to come, I think it’s a really great time to be getting involved in learning about languages. At the moment I’m trying to teach myself Common Lisp, and I have a Scala book sitting on the shelf too. One of these days, I plan on sitting down and making a little toy language to get used to the idea of creating one. Till then, it’s going to be really interesting just watching how things work out in an increasingly multilingual world.
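For a taste of what that first toy language might look like, here’s a tiny recursive-descent evaluator for arithmetic expressions in Python (my own sketch; a real language would parse into an AST and add variables, functions and so on):

    # A tiny "toy language": a recursive-descent evaluator for arithmetic
    # expressions with +, -, *, / and parentheses.
    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")

    def tokenize(src):
        for number, op in TOKEN.findall(src):
            yield ("num", int(number)) if number else ("op", op)
        yield ("end", None)

    def evaluate(src):
        tokens = list(tokenize(src))
        pos = 0

        def peek():
            return tokens[pos]

        def advance():
            nonlocal pos
            tok = tokens[pos]
            pos += 1
            return tok

        def factor():                  # factor := NUMBER | "(" expr ")"
            kind, value = advance()
            if kind == "num":
                return value
            if value == "(":
                result = expr()
                advance()              # consume the closing ")"
                return result
            raise SyntaxError("unexpected token: %r" % (value,))

        def term():                    # term := factor (("*" | "/") factor)*
            result = factor()
            while peek() in (("op", "*"), ("op", "/")):
                _, op = advance()
                result = result * factor() if op == "*" else result / factor()
            return result

        def expr():                    # expr := term (("+" | "-") term)*
            result = term()
            while peek() in (("op", "+"), ("op", "-")):
                _, op = advance()
                result = result + term() if op == "+" else result - term()
            return result

        return expr()

    print(evaluate("2 + 3 * (4 - 1)"))    # 11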