The exploding computer test

I came across an article about the exploding computer test. The gist of it is this: if your current computer met with some sort of catastrophic failure and was irrecoverable, how long would it take you to get up and running on a new machine? For me the test is a bit less demanding, since I have multiple computers set up that could be pressed into service at a moment’s notice. But for argument’s sake, let’s say I’m given a fresh new machine. Here’s what I’d do.

Install OS X or Linux

If I got a blank OS X machine, things would go a bit faster. But if I got a stock Windows machine (like both my main computers were), I’d start by putting Linux on it. If I were in a rush it would be Ubuntu, which AFAIK has the fastest, simplest install. If I had a little more time I’d install Arch Linux, since that’s what I’m more comfortable with, even though it has a longer initial setup time.

Install Git, Emacs and Chrome/Firefox

These are the three basic programs I need to get work done. I’m assuming I already have some kind of terminal program, since both Linux and OS X ship with one by default. On Ubuntu or Arch, all of these are just a few commands away. They’d take a bit longer on OS X, but not long enough to make much of a difference. With that done, I can move on to retrieving all my data.
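To make that concrete, here’s roughly what that bootstrap step looks like as a script. This is just a sketch assuming stock apt or pacman package names (Chrome would have to come from elsewhere); adjust to taste:

    import shutil
    import subprocess

    PACKAGES = ["git", "emacs", "firefox"]

    def install(packages):
        # Pick whichever package manager this system has.
        if shutil.which("apt-get"):                    # Ubuntu/Debian
            cmd = ["sudo", "apt-get", "install", "-y"] + packages
        elif shutil.which("pacman"):                   # Arch
            cmd = ["sudo", "pacman", "-S", "--noconfirm"] + packages
        else:
            raise RuntimeError("no known package manager found")
        subprocess.run(cmd, check=True)

    install(PACKAGES)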

Cloning git repositories or getting stuff off Dropbox

Once I have Git, getting back to a working state is mostly a matter of cloning the repositories I need from my server. Everything I work on alone is in a Git repository, and all my group work is in Dropbox. Getting all of this back is a few minutes’ work at most. If I were going to be working a lot on stuff living in Dropbox, I’d install the client too. If nothing else, I would definitely get my system configuration repo and my Emacs setup repo, since I consider them both absolutely essential to getting work done quickly.
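A little script like this would do the bulk of it (the server address and repository names here are placeholders, not my real setup):

    import subprocess

    SERVER = "me@myserver.example"               # placeholder address
    REPOS = ["system-config", "emacs-setup"]     # placeholder repo names

    for repo in REPOS:
        # Clone each repository over SSH from my server.
        subprocess.run(["git", "clone", SERVER + ":" + repo + ".git"], check=True)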

Install compilers, interpreters, debuggers etc.

By this point my generic setup is done. The last part is installing whatever project-specific things I need. For example, if I need to work on my thesis I’d get GCC, GDB, Flex, Bison and Ruby. If I wanted to work on one of my JavaScript experiments, I’m all set, because all I need is a browser and js2-mode for Emacs. But most of these are things I’d install as I needed them, so I could get started on work again without waiting for everything to be in place.
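A quick way to see what’s still missing for a given project is to check the PATH. Here’s a sketch using the thesis tool list as an example:

    import shutil

    TOOLS = ["gcc", "gdb", "flex", "bison", "ruby"]

    # shutil.which returns None for anything not on the PATH.
    missing = [tool for tool in TOOLS if shutil.which(tool) is None]
    if missing:
        print("Still need to install: " + ", ".join(missing))
    else:
        print("All set.")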

How long would it take?

The biggest variable here is the OS install time. For a Mac it’s basically zero, while for Arch it could be an hour or two. But like I said, if I were in a hurry I’d just install Ubuntu and be done with it. Even factoring in that uncertainty, the whole process shouldn’t take more than a handful of hours. I’m hesitant to give an exact figure, but if I started right after lunch I would certainly be getting work done well before dinner time. Without an OS install to do, I could probably be up and running within the hour.

It’s been a long, long time since I’ve actually had to set up a work machine from scratch, and I’m a bit surprised at how little I need to get back to a working setup. I remember that in high school, getting back to working meant installing Windows, Office, Firefox, a Java environment called BlueJ and a bunch of other little things, all of which took a good amount of time. I’m definitely at the stage where my tools and workflow are very different from the average user’s. As the days go by I’m only becoming better and faster at getting stuff done, but that’s a matter for another blog post.

A summer’s worth of work

Summer’s almost at an end, and all across the United States (and I suppose some other countries too) students will be going back to school and college. I’ll be heading back a little early, in a week, because I have Residence Advisor training. It seems like a good moment to look over my summer and think about the semester ahead. Of course, this being a technical blog, I’m going to focus on computer science related activities.

My main activity this summer has been my research project, where I’ve been working with two friends and a professor to develop a system for automating urban planning and design. It’s certainly been interesting work. It’s my first experience working as part of a real team, and we’ve been developing from opposite ends of the planet using email, web conferencing and, of course, version control. Since I had decided to take up the human interaction portion, I had to come up with an interaction language. I got to learn about language design and basic parsing techniques, and I implemented a recursive descent parser in Python. I had been interested in programming languages for a while, but this has really fired my interest and I want to work more in this area. I also learned about interacting with users and how to mold your software to fit their requirements. All in all, it was good real-world experience, which I think is just as important as any class I will take.
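To give a flavor of the technique, here’s a minimal recursive descent parser: one function per grammar rule, each consuming tokens and delegating to the rules it depends on. The grammar is a toy I made up for this post, not our project’s actual language:

    import re

    # Toy grammar (hypothetical):
    #   command := "place" NAME "at" NUMBER NUMBER

    def tokenize(text):
        return re.findall(r"\d+|\w+", text)

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def expect(self, pred, what):
            # Consume one token if it matches, else report a parse error.
            tok = self.peek()
            if tok is None or not pred(tok):
                raise SyntaxError("expected %s, got %r" % (what, tok))
            self.pos += 1
            return tok

        def command(self):
            # command := "place" NAME "at" NUMBER NUMBER
            self.expect(lambda t: t == "place", "'place'")
            name = self.expect(str.isalpha, "a name")
            self.expect(lambda t: t == "at", "'at'")
            x = int(self.expect(str.isdigit, "a number"))
            y = int(self.expect(str.isdigit, "a number"))
            return ("place", name, x, y)

    print(Parser(tokenize("place park at 3 7")).command())
    # -> ('place', 'park', 3, 7)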

Besides my project, I’ve been reading up on computer languages and compilers: online articles and some videos about recent developments in the field. I also started the Dragon book and have found it to be very good, though it will take some time and effort to get through. I fully intend to continue reading it next semester.

I had been looking for an open source project to contribute to, since I don’t want to limit myself to classwork and one-man projects. Considering my current interests, I think the Low Level Virtual Machine (LLVM) is a good target. It seems to be a very dynamic project implementing some very interesting concepts, which means I can learn a lot. At the same time, there’s still plenty of work to be done, and it’s separated well enough for a newbie to get to work and contribute without much trouble. I’m going to start by doing ‘code cleanups’ (removing extraneous code that’s already been flagged as such) once I’m a little more grounded in the fundamentals of compiler technology. I’d like to work on it over the rest of this year and next so that I can get a Google Summer of Code project for it next summer.

Since I started using a Mac at the end of last year, I’ve come to admire it both as a user system and as a development platform. I dabbled in some Objective-C and UI development, mainly as a front end to my summer project. Though I think Cocoa is an excellent platform, and I would enjoy using it, it’s not something I want to pursue for the time being. I’ll still be using a Mac a lot, but it’s not something I’ll be coding for at the moment.

College education can often be just a case of following the requirements and graduating on time with a decent GPA. However, since I’m only going to be in college full time once, I’m determined to make the most of the experience. As a result I’ve been looking to cut out the courses I don’t think I will benefit from. Of course, I still need to do something in their place, so I’ve been designing an independent study for myself. We have a Logic course which is generally required for computer science. Instead of taking it, I’ve been designing a course based on Turing machines and the lambda calculus, essentially the historic foundations of computer science. My professor is suggesting that I change it somewhat to include more programming language concepts and get credit for that instead, so there’s still some work to be done on it.
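For a taste of what that material looks like in practice, here’s a toy Turing machine simulator (just an illustration I threw together, not part of the actual course design):

    def run(transitions, tape, state="start", blank="_"):
        # The tape is stored sparsely; unwritten cells read as blank.
        cells = dict(enumerate(tape))
        pos = 0
        while state != "halt":
            symbol = cells.get(pos, blank)
            state, write, move = transitions[(state, symbol)]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # (state, read) -> (next state, write, head move)
    # This machine flips every bit and halts at the first blank.
    FLIP = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }

    print(run(FLIP, "0110"))  # -> 1001_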

Finally, I’ve been brushing up on my web design skills. I’m going to be making an info page for our project and redesigning my college’s CS department website. I also need to update my own website, which has been frozen for months. That should keep me busy until it’s time to go back next week.

So much for the summer. For next semester, I have a few things planned. I’m hoping to continue my research work, as I have some ideas for what to implement next, and my independent study should prove interesting too. I definitely want to write this blog more often, and my studies should provide ample ideas (I’m taking a digital circuits class too). I should also devote some time to making my website more professional. I’m not sure when I’ll be posting again, since I have people to visit and packing to do, but at the latest I should have something new up by the end of next week. So till then: goodbye, take care and have fun.

Should we make heavy software?

When I showed off Firefox 3’s smart address bar to a friend of mine a few days ago, his first reaction was “Wow, that’s cool” and half a smile. A second later the smile was gone and he let out a disappointed “Oh, but that’s going to take up so many resources.” Let’s leave aside for the moment that my friend is not the most tech-savvy person in the world and probably couldn’t list exactly what those “resources” were supposed to be. What his remark got me thinking was simply: is it really a bad thing if my software takes up a lot of resources?

I suppose the core issue here is that computer users don’t really care about resources. Most of my friends couldn’t tell me the specs of their computers, much less what those specs mean. What they are interested in is, quite simply, speed. They want their software to be zippy and fast; so do I, and so does everyone (I hope). When my friend said that Firefox 3 would take up more resources, what he really meant was “my computer will run slower.” He quite innocently equated fewer features with lightweight, and hence faster. It’s an honest mistake, but that doesn’t make it correct. Tweaks and customizations can often increase a program’s actual size and complexity yet give better performance. Apple has been doing this with OS X for years now. Many users and benchmarks will vouch for the fact that though OS X has been gaining features, it has also been using computer resources more efficiently.
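Here’s a toy illustration of that trade-off (nothing to do with Firefox or OS X internals, just the general principle of spending memory and code on speed):

    from functools import lru_cache

    def fib_slow(n):
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    @lru_cache(maxsize=None)
    def fib_fast(n):
        # Same function, but results are cached: more machinery and more
        # memory used, yet exponentially fewer recursive calls.
        return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

    print(fib_fast(35))  # effectively instant; fib_slow(35) takes seconds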

Of course, it’s undeniable that the general trend is towards programs that use more resources. After all, more resources means more maneuvering room, more stuff to build with. And resources keep increasing: our computers today are vastly more powerful than those available at even the start of the decade. We’ve been reaping the benefits with more sophisticated software: better visuals, increasingly powerful desktop search and ever higher resolution data formats.

At the same time, many people, programmers and non-programmers alike, are becoming increasingly worried that modern software is bloated and unwieldy. Just as Moore’s Law has been giving us faster processors, our software acts as a Moore’s Law compensator. Our computers still take a long time to boot up and become usable, and most programs don’t start in the blink of an eye; in essence, the user doesn’t really see all the power that’s tucked under the hood. This has led to a growing trend towards lightweight software, especially in the more tech-savvy community.

Among Linux users this dual nature of modern software is very evident. On the one hand, Linux systems can now sport powerful 3D window managers and task switchers good enough to rival Vista or Leopard. On the other hand, there has been a wave of new minimalistic window managers lacking any graphical splendor whatsoever.

I feel personally affected by this double trend. I use a tiling window manager called Awesome. I shun IDEs, preferring the very lightweight Vim in a simple terminal. Firefox is just about the only graphical program I run. At the same time, I love OS X Leopard. I think the UI is quite beautiful, and I’ve become an avid user of both Exposé and Spaces. Not to mention that fonts somehow look much better on OS X than on any other operating system I’ve used. Performance-wise, OS X beats Vista, which takes much longer to start up and respond even on far superior hardware.

Being a programmer myself, this issue is particularly important to me because I think about performance almost all the time. Whether I’m deciding between a high-level language like Python and something closer to the metal like C, or choosing a graphics toolkit, I have to factor in resource use. I’m still not sure which way is better. Software bloat is bad. Very bad. At the same time, it doesn’t make much sense to let all that computing power sit there unused, and the benefits very often outweigh the price to be paid. For the time being, I’m calling it a truce and letting the decider be something less tangible than performance benchmarks: user experience. If you can use heavy resources but deliver a solid user experience, then go for it. An incomplete user experience for the sake of a lighter program might occasionally be justified, but not all the time. A bad user experience along with heavier system requirements, however, is definitely a bad thing, to be avoided at all costs. After all, the best software is not the one that uses the least RAM or has the prettiest interface; it’s the one that gets the job done without getting in the user’s way.