Creating a comfortable computer workspace

I just started work at the KnowledgeWorks building at Virginia Tech’s Corporate Research Center yesterday. The CRC is a nice place with lots of wide open spaces between buildings, lots of greenery and a quite lovely little lake in the middle of it all. The place I’ll be working is a floor that houses a large portion of Tech’s Computer Science department. The floor has an interesting layout. There seem to be faculty offices around the outer walls of the building and the central space is dedicated to grad students. They aren’t quite cubicles, but it’s not open space either. There are numerous curving partitions with about 3-4 workstations along each partition. There’s also a small area with couches for a more relaxed setting. It’s different from the large open computer labs that I’m used to working in, but I think it’ll grow on me. Each individual work area is just enough for one person to work. It’s quite possible to ignore everyone else (especially with a pair of headphones) but there’s ample opportunity for interaction. I only wish there were more whiteboards.

Yesterday was also the first day I actually got my Linux laptop to use an external monitor correctly. I’ve tried hooking up an external monitor to my laptop on numerous occasions, but I don’t think I ever got it to work quite right. There were a number of things working together to stop me every time: drivers that didn’t work quite right, window managers that didn’t handle multiple monitors quite how I wanted them to, etc. I think I came close on a number of occasions, but I wanted something that ‘just worked’ and none of the solutions I tried were quite like that. However, a combination of XRandR, the open source radeonhd driver and the Xmonad window manager finally let me hook up an external monitor in a way I really liked. Xmonad lets each monitor act as a viewport onto a different virtual screen, which I think is the best way to do multiple monitors. By comparison, the Mac Spaces approach of treating both monitors as a single huge desktop is something I can’t stand. I only wish that the Xmonad documentation were updated to say that its support works with XRandR instead of the deprecated Xinerama.
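For anyone fighting the same battle, the core of the setup boils down to a single XRandR call. This is just a sketch: the output names LVDS and VGA-0 are examples of what a driver might report, not what yours will necessarily be called, so check what xrandr -q prints on your own machine first.

```shell
# List connected outputs and their supported modes first; the names your
# driver reports may differ from the LVDS/VGA-0 examples used below.
xrandr -q
# Enable the external monitor at its preferred resolution, placed to the
# right of the laptop panel.
xrandr --output LVDS --auto --output VGA-0 --auto --right-of LVDS
```

With that in place, Xmonad treats each output as a viewport onto its own virtual screen.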

The previous two paragraphs might seem unrelated, but they both have to do with creating comfortable, efficient computer-centric workspaces. An increasing number of people are spending significant amounts of time in front of computers (for business and for pleasure), so it’s important to have a space that is enjoyable to be in. After coming back from work I remembered that I had read about a really amazing home office setup which I thought was a very good place for a computer professional to work out of. Some googling around turned up three such well thought-out office layouts that I think are good examples of what a comfortable workspace should be like. They are:

The spaces these people have made are pretty close to a computer geek’s dreams. Not only do they have an amazing amount of power under the hood, they also look good. These are all spaces that I would love to work in. Though I would probably run Linux with a tiling window manager of some sort.

Since I started college, I’ve learned to appreciate the value of a good working space more and more. I’ve also learned that building a good space isn’t easy, especially if you work with other people. It’s hard to get the right combination of openness and privacy. Back at Lafayette I live in a single room. It’s nice to be able to shut the door, but my desk is pretty small and the chair really sucks. The labs have much more space and better chairs but it can get a bit distracting if there are a lot of people in the room. The library is well lit and spacious and a good environment on the whole, but again too many people are a problem.

A lot of these problems are solved by simply having a personal office (with a door) with coworkers within walking distance. Beyond that, the problem becomes one of setup, equipment and space utilization. I think the setups described above are really good at serving the owner’s needs. At this point, a lot of the descriptions are subjective. I personally would have no idea what to do with 5 or 6 monitors, but I could definitely get used to 2 or 3. In fact, I’m starting to think that two large monitors is the bare minimum for anyone who does serious work with a computer. A comfortable chair and a decently ergonomic setup are essential unless you want to shell out a fair amount of cash for medical treatment later. I don’t have much of an opinion on tables and desks, but I’m fairly picky about chairs and keyboards.

On a bit of a tangent, I’ve been hearing about this new trend called coworking where freelance workers (mostly software people) get together to work in a semi-public space. Now, I love to have people near me to bounce ideas off, but I wouldn’t be comfortable working in a semi-public setting all the time. I love my laptop, but don’t like to be working off it all the time. If I ever become that sort of a freelance/work-from-home worker, I might cowork once or twice a week, but I’ll still want a decently tricked out home office.

On an ending note, let me just reiterate how important I think it is to have a good working environment. I still have a long way to go before I can put together anything as neat as the spaces you can find online, but till then I’m going to keep my eyes open for interesting and comfortable setups and environments. Though one thing is certain, I’m definitely going to have a bunch of monitors.

The Documentation Problem

Over the past year and a half I’ve come to realize that writing documentation for your programs is important. Not only is documentation helpful for your users, it forces you to think about and explain the workings of your code. Unfortunately, I’ve found the tools used to create documentation (especially user-oriented documentation) to be somewhat lacking. While we have powerful programmable IDEs and equally powerful version control and distribution systems, the corresponding tools for writing documentation aren’t quite at the same level.

Let me start by acknowledging that there are different types of documentation for different purposes. In-code documentation in the form of comments is geared toward people who will be using your code directly, either editing it or using the API it exposes. In this area there are actually good tools available. Systems like Doxygen, Epydoc or Javadoc can take formatted comments in code and turn them into API references in HTML or other formats. With the API info right in the code, it’s easier to make sure that changes in one are reflected in the other.

User-oriented documentation has slightly different needs. As a programmer you want a system that is easy to learn and is fast to use. You also want to be able to publish it in different formats. At the very least you want to be able to create HTML pages from a template. But you also want the actual source to be human-readable (that’s actually a side-effect of being easy to create) because that’s probably what you, as the creator, will be reading and editing the most.

Then there are documents that are created as part of the design and coding process. This is generally wiki territory. A lot of this is stuff that will be rewritten over and over as time progresses. At the same time, it’s possible that much of this will eventually find its way into the user docs. In this case, ease of use is paramount. You want to get your thoughts and ideas down as quickly as possible so that you can move on to the next thought. Version control is also good to have so that you can see the evolution of the project over time. You might also want some sort of export feature so that you can get a dump of the wiki when necessary.

Personally, I would like to see the user doc and development wikis running as two parts of the same documentation system. Unfortunately, I haven’t found tools that are quite suitable. I would like all the documentation to be part of the same repository where all my code is stored. However, this same documentation needs to be easily exported to decent looking web pages and PDFs and placed online with minimal effort on my part. The editing tools also need to be simple and quick with a minimal learning curve.

There are several free online wiki providers out there such as PBworks and WikiDot which allow the easy creation of good looking wikis. But I’m hesitant to use any of them since there isn’t an easy way to tie them into Git. Another solution is to make use of Github’s Pages feature. Github lets you host your git repositories online so that others can get them easily and start hacking on them. The Pages feature allows you to create simple text files in either the Textile or Markdown formatting systems and have them automatically turned into good looking HTML pages. This is a good idea on the whole and the system seems fairly straightforward to use, with some initial setup. The engine behind Pages, called Jekyll, is also free to download and use on your own website (and doesn’t require a Git repository).
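To make that concrete, here’s roughly what a Pages-style source file looks like: a plain text file with a small metadata header that Jekyll renders into an HTML page. The file path and layout name here are invented for illustration.

```shell
# Create a hypothetical Jekyll source page; the 'docs' directory and
# 'default' layout name are made up for this sketch.
mkdir -p docs
cat > docs/index.markdown <<'EOF'
---
layout: default
title: Project Documentation
---

# My Project

Plain *Markdown* in, good looking HTML out.
EOF
```

Running Jekyll over a directory of files like this produces the static HTML site, which is what Github serves for you.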

In addition to these ‘enterprise-quality’ solutions, there are also a number of smaller, more home-grown solutions (though it could be argued that Jekyll is very homegrown). There’s git-wiki, a simple wiki system written in Ruby using Git as the backend. Ikiwiki is a Git or Mercurial based wiki compiler, in that it takes in pages written in a wiki syntax and creates HTML pages. These are viable solutions if you like to have complete control of how your documentation is generated and stored.

Though each of these is great in and of itself, I still can’t help feeling that something is missing. In particular, there is no common consensus on how documentation should be created and presented. Some projects have static websites, others have wikis, a few have downloadable PDFs. Equally importantly, there isn’t even a moderately common system for creating this documentation. There are all the ways I’ve noted above, which seem to be the most popular. There are also more formal standards like DocBook. Finally, let’s not forget man and info pages. You can also create your own documentation purely by hand using HTML or LaTeX. Contrast this to the way software distribution works (at least in open source): there are binary packages and source tarballs and in many cases some sort of bleeding-edge repository access. There are some exceptions and variations in detail, but by and large things are similar across the board.

Personally, I still can’t make up my mind as to how to manage my own documentation. I like the professional output that LaTeX provides and DocBook seems like a well-thought-out standard, but I’d rather not deal with the formatting requirements, especially in documents that can change easily. I really like wikis for their ease of use and anywhere editability, but I must be able to save things to my personal repository and I don’t want to host my own wiki server. I’ve previously just written documentation in plain text files and though this is good for just getting the information down, it’s not really something that can be shown to the outside world. For the time being, I’ll be sticking to my plain text files, but I’m seriously considering using Github Pages. For me this offers the dual benefit of easy creation in the form of text files as well as decent online output for whatever I decide to share. I lose the ability to edit from anywhere via the internet, but that’s a price I’m willing to pay. I can still use Google Docs as an emergency temporary staging area. I’m interested in learning how other developers organize their documentation and would gladly hear any advice. There’s a strong chance that my system will change in some way in the future, but that’s true of any system I might adopt.

Sunday Selection 2009-05-24

I realize that a lot of people around the world won’t be seeing this until Monday, and I apologize; the trip to Virginia Tech seems to have thrown off my timing somewhat.

They Write the Right Stuff: This is actually a much older article, but it showed up on Reddit a while ago and I thought it was really interesting. It’s about how the software for the Space Shuttle was written. I think it’s worth reading for anyone who builds software professionally. I wonder what the world would be like if all software were written this way.

Tim Berners-Lee on the Next Web: Websites are great, but sometimes you really want to get your hands on the raw data that other people have used to draw conclusions and build the web pages you read. One of the creators of the Web is pushing an effort to do exactly that.

Moblin v2 Beta for Netbooks: Moblin is a new Linux system designed for netbooks. Its main feature is a new graphical interface that makes the most of the limited screen real estate available on netbooks. I’m considering getting a netbook at the end of summer and Moblin looks like something I might put on it.

Plans for Summer 2009

It’s that time of year when classes are finally over, the last exam has been taken and it’s time to do things which aren’t at all related to school and schoolwork. Last summer I stayed on campus working on a bunch of interesting projects and learning about programming languages (parsers in particular). This year I’m off to Virginia Tech to work with Dr. Barbara Ryder’s software engineering group. I’ll be working on performance analysis of large framework-based Java programs such as Eclipse and IBM’s WebSphere. I’m not exactly sure what my work will be, but it seems like I’ll be doing a fair amount of data gathering (by running said applications in different configurations) and then playing with the data I get. I also hope to learn a ton of useful things from Dr. Ryder and the other people working with her and get a feel for what research at a large university might be like.

My primary interest is still programming languages (it’s been that way for the better part of a year now) and I’m definitely interested in learning more about them, in particular about compiler backends. Last winter I implemented a tiny domain specific language for experimenting with pattern generation via context free grammars. One thing that I’ve been thinking about is extending this language of mine (which doesn’t quite have a name yet). I have a few ideas of what sort of things I’d like to see it do, but I want to put those ideas down in a concrete form before I start messing with the code. My software engineering class last semester quite firmly taught me the value of doing at least some amount of planning and design before starting a programming project.

Thanks to my Digital Circuits class I’ve been exposed to some really low-level assembler and C code. We’ve been programming really down to the metal on microprocessors and it’s been a useful experience. Programming low-level systems software is very different from building a programming language, but that’s something for another blog post. I’d really like to do something useful on a small embedded microprocessor (or similar) but I don’t have enough experience in the area to come up with an interesting, feasible project all on my own. Writing my own embedded operating system would surely be an interesting project (not to mention some serious geek street cred) but I’m not sure if I’m up to something like that just yet.

All that being said, I still have my copy of Structure and Interpretation of Computer Programs sitting around, only about half-read. I’ve always harbored a special fascination with Lisp and its derivatives and I’m starting to feel guilty that I still haven’t done much more than dabble in it. At the beginning of the year, I really wanted to learn low-level C and high-level Lisp in parallel, but due to a number of reasons it didn’t work out that way. I learned a fair amount of C and a somewhat larger amount of C++ (and C/C++ is a very misleading term), but barely touched Lisp at all.

Looking back at the few months since the beginning of the year, I’ve learned two very important life lessons:

  1. Having some sort of plan is essential to getting the most out of your time (and consequently, your life)
  2. Having too little on your plate leads to boredom, but having too much leads to burnout.

Since I really do want to get the most out of the summer (and the rest of the year) I think it’s important to have a plan that is based on a proper selection of activities and a clear division of my time and energy between them. Luckily for me, my summer is already clearly broken into two phases: I’m spending about two months at Virginia Tech and then almost a month back at Lafayette. Right now, the best way for me to take advantage of that fact would be to work on different things at the two places. At Virginia Tech, my main focus will be my work with Dr. Ryder’s group, but I think that as a second project my programming languages experiment would be a good fit. The group I’ll be working with seems to have done programming language-related work in the past, so I might find myself in the company of some good brains to pick if problems crop up (which inevitably they will).

Once I’m back at Lafayette, I think it’ll be time to buckle down and finally learn some real Lisp. However, I might not want to just look at nested parentheses all day long, so a schedule of C in the morning and Lisp in the evening might turn out to be the best solution. C and Lisp sit at opposite ends of programming language philosophy (and implementation), but I feel that each has its place and I really do like them both.

Summer is a good time to let your hair down after the hectic semester, but it’s also a great time to learn stuff that isn’t really taught in class. I had a great experience last year and I’m looking forward to having a similarly fulfilling experience this year as well. By the end I’d like to have a functioning little language of my own (which I hope to write more about) as well as some solid Lisp and C experience under my belt. I won’t put a definite tag on what I expect out of my Virginia Tech experience, because I would like to keep an open mind and maybe just go with the flow, but I will do my best to make it a worthwhile experience. And of course, I will be blogging about as much of it as I can so that everyone else out there can perhaps learn some of what I am learning.

I’m glad it’s finally summer!!

Refactoring my personal Git repository

People usually talk about refactoring when they’re talking about code. Refactoring generally involves reorganizing and restructuring code so that it maintains the same external functionality, but is better in some non-functional way. In most cases refactoring results in code that is better structured, easier to read and understand and on the whole easier to work with. Now that my exams are over, I decided to undertake a refactoring of my own. But rather than refactoring code, I refactored my entire personal source code repository.

About a year ago I started keeping all my files under version control. I had two Subversion repositories, one for my code and another for non-code related files (mostly school papers). A few months ago I moved from Subversion to Git, but my workflow and repository setup was essentially the same. When I moved to Git, I had a chance to change my repo structure, but I decided to keep it. The single repo would serve as a single storage unit for all my code. Development for each project would take place on separate branches which would be routinely merged back into the main branch. The files in the repo were divided into directories based on the programming language they were written in. Probably not the most scientific classification scheme, but it worked well enough.

Fast forward a few months and things aren’t quite as rosy. It turns out that having everything in one repo isn’t really a good idea after all. The single most significant reason is that the history is a complete mess. Looking back at the log I have changes to my main Python project mixed in with minor Emacs configuration changes that I made as well as any random little experiment that I did and happened to commit. Not very tidy. Secondly, using a separate branch for each project didn’t quite work. I’d often forget which branch I had checked out and start hacking on some random thing. If I was lucky I could switch branches before committing and put the changes where they belonged. If I was unlucky, I was faced with the prospect of moving changes between branches and cleaning up the history. Not something I enjoyed doing. Finally, organization by language wasn’t a good scheme, especially since I took a course in programming languages and wanted to save the textbook exercises for each language. The result is that now I have a number of folders with just 2-3 files in them and I won’t be using those languages for a while. More importantly, getting to my important project folders meant digging 3-4 levels down from my home directory.

I decided last week that things had to change. I needed a new organization system that satisfies the following requirements:

  1. Histories of my main projects are untangled.
  2. My main projects stand out clearly from smaller projects and random experiments.
  3. If I start a small project and it gets bigger it should be easy to give it main project status.
  4. An archive for older projects that I won’t be touching again (or at least not for the foreseeable future).
  5. Some way to keep my schoolwork separate from the other stuff.
  6. Everything is version controlled and I should be able to keep old history.

I’ve used a combination of Git’s repository splitting functionality and good old directory organization to make things cleaner. Everything is still tucked into a top-level src directory, but that’s where the similarities with my old system end. Each major project is moved to its own repo. Since I already had each major project in its own subdirectory, I could use Git’s filter-branch command to cleanly pull them out, while retaining history. Every active project gets its own subdirectory under ~/src which has a working copy of the repo. There is a separate archive subdirectory which contains the most recent copy of the projects that I’ve decided to file away. I technically don’t need this since the repositories are stored on my personal server, but I like having all my code handy.
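The extraction step is worth sketching. The repository layout and project names below are invented for the demo, but the shape of the operation is the same: clone the big repo, then have filter-branch rewrite history so that one subdirectory becomes the new project root.

```shell
set -e
# Silence the loud warning newer versions of Git print for filter-branch.
export FILTER_BRANCH_SQUELCH_WARNING=1
# Build a toy monolithic repo standing in for the old setup (names invented).
rm -rf /tmp/split-demo
mkdir -p /tmp/split-demo/bigrepo && cd /tmp/split-demo/bigrepo
git init -q
git config user.email demo@example.com && git config user.name Demo
mkdir -p python/myproject emacs
echo 'print("hello")' > python/myproject/main.py
echo '(setq foo 1)' > emacs/init.el
git add . && git commit -qm 'everything in one repo'
# Clone, then rewrite history so python/myproject becomes the project root,
# keeping its commit history and discarding everything else.
git clone -q /tmp/split-demo/bigrepo /tmp/split-demo/myproject
cd /tmp/split-demo/myproject
git filter-branch -f --subdirectory-filter python/myproject -- --all
```

After the rewrite, the new repo contains only the project’s files and the commits that touched them.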

I pooled together all my experimentation into a single repo called scratch. This also gets its own subdirectory under src. It currently holds a few simple classes I wrote while trying out Scala, some assembly code and a few Prolog files. My schoolwork also gets a separate repo and subdirectory. This contains code for labs in each class as well as textbook exercises (with subdirectories for each class and book). Large projects get their own repo and aren’t part of this schoolwork repo. Since I’m on break they’re all stashed under archive.

The process of actually splitting the repo into the new structure was smooth for the most part. I followed the steps outlined by this Stack Overflow answer to extract each of my main projects into its own repo. I cloned my local repo to create the individual repos, but I still had to set up remotes for each of them on my server. I followed a really good guide to set up the remotes, but first I had to remove the existing remotes (which pointed to the local repo I had cloned from). A simple git remote rm origin took care of that.
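In miniature, the remote surgery looks like this. A local bare repository stands in for my server here, and every path is invented for the demo.

```shell
set -e
rm -rf /tmp/remote-demo && mkdir -p /tmp/remote-demo
# A bare repo standing in for the repository on the server.
git init -q --bare /tmp/remote-demo/server.git
# A freshly split local repo with one commit in it.
mkdir -p /tmp/remote-demo/myproject && cd /tmp/remote-demo/myproject
git init -q
git config user.email demo@example.com && git config user.name Demo
echo hello > README && git add README && git commit -qm 'first commit'
# Drop the leftover origin (if any), point at the server, and push.
git remote rm origin 2>/dev/null || true
git remote add origin /tmp/remote-demo/server.git
git push -q -u origin HEAD
```

With a real server the remote URL would be an ssh path instead of a local directory, but the rm/add/push sequence is the same.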

Things started to get a little more complicated when it came to extracting things that were spread out (and going into scratch). I wasn’t sure if filter-branch could let me do the kind of fine-tuned extraction and pooling I wanted to do. So I decided instead to create a scratch directory in my existing repo and then make that into a separate repo. I used the same process for extracting code that would go into my schoolwork repo.

The whole process took a little over 2 hours with the third Pirates of the Caribbean movie playing at the same time. I’m considering doing the same thing with my non-code repo, though I’ll need to think out a different organization structure for that. Things were made a lot easier and faster by the two guides I found, and now that I have a good idea of what needs to be done, I’ll probably have an easier time next time around. I’ve come to learn a little more about the power and flexibility of Git. I still think I’m a Git newbie, but at least I know one more trick now. If any of you Git power users have any suggestions or alternate ways to get the same effect, do let me know.