Salvaging Dead Time and Procrastiworking

The last few weeks have been another continuous episode of “too much to do, too little time”. Graduate school is a very interesting environment from a work and productivity standpoint. On the one hand I don’t really have a fixed schedule (outside of a few hours of class a week) and can work whenever I want. I also live close to campus, so commuting isn’t an issue. However, distractions abound. I’m not meeting with professors on as regular a basis as I was, but there are still lots of talks, colloquia and seminars that I find really interesting and want to go see. It’s very easy to have the day be perforated by lots of little things and never get anything done. However, there’s one trick I’ve learned in the past week or so that mitigates this fragmentation and helps me get things done: salvaging dead time.

Salvaging Dead Time

I currently have a class that runs from 10:10am to 11:25am, followed by a lunchtime talk at noon. Taking out the 5 minutes or so it takes to get back to my office, that leaves about half an hour that would normally be wasted on Hacker News or Twitter. As a graduate student I need pretty long blocks of time to sit, think and get work done. Thirty minutes generally isn’t enough to make progress on brain-work, so this would be “dead time” – time that is just lost.

However half an hour is more than enough time to knock off errands. Today I filed two helpdesk tickets, processed email down to inbox zero, paid my power bill and wrote out my rent check. Not only did I get actual work done (and a little high from crossing items off my checklist), it means I don’t have to devote separate time chunks to errands later and can allocate that time to actual research work. I think that counts as an all-round win.


While knocking off errands works great for salvaging small blocks of dead time (up to about half an hour), there are sometimes larger blocks of 1–2 hours that also need salvaging. This generally happens around dinner – I don’t have a fixed dinner time. Hence there’s often this awkward state where I won’t be having dinner till a little later, but don’t have anything planned before. Normally that time would evaporate into nothingness, but I’ve been trying out a different technique to salvage it.

While an hour isn’t enough time to do real research work, it definitely is enough to do some programming exercises or go through a few more pages of Real World Haskell. Earlier this week I decided to finally sit down and learn Haskell seriously, and I’m now familiar enough with it that I can get up and running in a few minutes. Doing exercises is challenging enough that it requires real thinking and learning. At the same time I don’t feel bad about leaving in the middle for dinner (I can generally finish the program I’m working on before leaving). This is classic procrastiworking: I’m slacking off on what I really should be doing (research) but instead of digesting Twitter I’m doing something beneficial.

There’s also the small matter of me being lazy and using dead time as an excuse for slacking off. Even though I know I could use an hour for programming exercises, I’m tempted to slack off anyway. I’ve been trying to use procrastiworking for that too. I start off doing something that isn’t really work, like updating all my git repos or cleaning up my Emacs config. But once that’s over – since I’m already at the computer, in a terminal, dealing with scripts and code – I just quietly move myself over to a Haskell file and start hacking. It helps if I leave an unfinished function that I can then fill in (or a TODO note).

In Conclusion

Salvaging dead time and procrastiworking isn’t a catch-all solution for time management but I’ve found that it works great for the small blocks of time that I would have been wasting otherwise. Of course, you can’t fill in the blanks unless you have things to fill them with. Personally I use OmniFocus to keep a list of errands that I can go through in sequence. I also have a “project” for the longer blocks – working on Haskell – that easily decomposes into chunks just a few minutes in length that can be taken up and put down without too much buildup. Finally, I hope that in this case practice makes perfect and that I get better at making use of dead time the more I consciously do it.

Mind Expansion

I think the world owes a debt of gratitude to Malcolm Gladwell. To my knowledge his book “Outliers” was the first popular work to expound at length on the 10,000 hour rule: the scientifically grounded theory that achieving expert level in most fields requires about 10,000 hours of “deliberate practice”. Geoff Colvin’s “Talent is Overrated” is similar, but in my opinion not quite as well written. However, both books are elements of what I see as a growing trend: the idea that improving yourself, becoming more than what you are, becoming what you always wished you were, is not only possible but actually achievable given a proper strategy and adequate dedication.

This idea of gradual but consistent self-improvement isn’t just applicable to becoming a top athlete or musician. Tim Ferriss’ “The Four Hour Body”, including the increasingly popular slow-carb diet, applies a similar idea to health and fitness. Cal Newport, a recent MIT doctorate and now a professor at Georgetown, seems to be applying similar ideas to becoming a top academic researcher. My personal goal is to apply it to become the best programmer I can possibly be (as most readers of this blog already know).

For a few weeks now I’ve had a two-fold concern. First, I didn’t know what exactly I was or wanted to be. Was I a programmer? A computer scientist? A software engineer? A technologist? A writer who just happened to use code as well as words? Secondly (and more importantly) I couldn’t see anything resembling a clear path from my current “not totally incompetent code slinger” phase to getting to the point where I could understand the works of the great masters (and maybe create something of similar lasting value). And that really, really bugged me. Combining my current indecision with no idea of how to move forward put me on a path leading straight to madness.

Here I was, a recent college graduate with two rather expensive degrees and a certain amount of knowledge about my field of interest. And yet my four years of education seemed like a piddly little drop compared to the proverbial ocean of knowledge and capabilities ahead of me. Not only could I not cross said ocean, I couldn’t even comprehend its boundaries. Sure I could write you a topological sort of some arbitrary graph structure. I could also bit-twiddle my way into a reasonably resource-constrained embedded application. I’d even wager that I could build a reasonably stable concurrent system that isn’t too embarrassing. But beyond that? I just barely grasped functional programming, and my knowledge of type systems was rudimentary at best. I only had the foggiest notion of what a monad was, and yet I had signed up for a good few years working on advanced programming languages. What on earth was I thinking?

One of the great dichotomies of our field is that on one hand it is possible to be completely self-taught but on the other there are few systematic guides on going from novice to expert. Most advice on the matter seems to reduce to “Read code. Write code. Think. Repeat.” Wise words to be sure, but not exactly a 12-step program. So what is one to do? What am I to do?

Though Gladwell and his ilk may have shown us that tangible self-improvement is possible, it’s not a quest to be taken lightly for there is no end to it. Until you come up against fundamental physical limits there’s always farther you can go, harder you can push yourself. And yet, after a point I’d argue that a law of diminishing returns kicks in, most victories past that point are Pyrrhic. Which is why it is important that I do what I do for fun, first and foremost. Because I find our field infinitely fascinating, constantly stimulating and there’s nothing else I’d enjoy more (though writing comes a close second).

After the why comes the how. What is the path from point A to point B? Where is point B in the first place? Do I even want to go there? Ostensibly I’m going to graduate school for Computer Science. I have a hard time calling myself a scientist or even an engineer. There’s a formality and heaviness about those terms that seems a little off, in the way that “colonist” has a different connotation than “explorer”. A colonist has a definite, long-term purpose – an unknown world to tame, a new land to cultivate and defend. An explorer is looking around, seeing the way things are, understanding and learning, eventually moving on. I’m content to explore for now. I think I’ll stay “just” a programmer for a while, or maybe even a writer; those two seem the best fit.

Though I do this for fun and computer technology has far-reaching effects on the human race, I’ve grown to see programming as a mind expanding activity in its own right, independent of other motivations and effects. I’m looking at our technology as a medium for creative expression. Programming is a way to become familiar with that medium, a way to increase our creative powers. And so the direction to explore is the one which will expand my mind the most, increase my creativity the greatest. It’s easy to stay within my boundaries, to stick to the stateful, imperative programming styles I’m familiar with. To go beyond that, to throw myself into functional programming, to use advanced type systems, to write compilers and virtual machines, to do all that, is hard. It is also a form of “deliberate practice” and essential to becoming better.

For fields and pursuits that aren’t easily quantified, deliberate practice is hard. However one heuristic is to look at how much a potential problem bends your mind. An interesting, worthwhile problem changes the way you think about your field in a general way, but requires you to acquire specific new techniques and skills. Once you solve the problem you’ve increased the size of your toolbox, but you’ve also changed how you will look at and approach problems in the future. With each problem you gain the capability to solve a wide range of new problems and inch closer to “expert” status.

A few weeks ago I tried looking for a 12-step program – an organized, decently reliable path to becoming an expert programmer. I wanted a way to put in my 10,000 hours with a reasonable guarantee of a good payoff. I haven’t found such a plan, but I have found some principles – what I’m doing needs to be fun and must be mind-expanding. I have to keep programming, keep reading and writing, keep thinking. I’ve reopened my computation theory books, I’m taking baby steps in Haskell, I’m scripting Emacs on a more regular basis. I’m trying to continually expand my mind and abilities, I’m trying to keep getting better. I hope in 10 years I’ll get close to where I want to be. But for right now, I’m drained.

Languages abound

After almost a month and a half I’m back in a position to write The ByteBaker on a regular basis again. Instead of a lengthy explanation and reintroduction, I’m going to dive right in.

At the end of August I’m going to be starting my PhD program in Computer Science at Cornell University. Over the last few years of college I’ve developed an interest in programming languages and so I’m spending the next few years pursuing that interest (and hopefully writing some good software in the process). Programming languages are an intensely mathematical and logical area of study. In fact, I’ll admit that I am a bit intimidated by the amount of knowledge about the logical foundations of PL I’ll have to gather before being able to make a meaningful contribution. But on a personal level, it’s not really the mathematical rigor or the logical elegance of these systems that I find interesting. For me, programming languages, just like real languages, are a medium of expression.

In the end, what we’re really trying to do is express ourselves. We start by expressing the problems we want to solve. If we do it well (and our languages are expressive enough), the expression of the problem leads to the solution. If we do it not-so-well, or if the problem is particularly complicated, we have to express the solution explicitly. In addition to problems and solutions, we can express ideas, data, relationships between data and complex interacting systems of relationships and data. Again, the greater the expressive power of our languages, the easier our job becomes. We want to express the way information should flow through our system. We want to constrain what relationships and flows are possible. We want to specify the dependencies (and independencies) between parts of our systems, and we’d like to build our systems out of well-defined components that behave according to fixed and automatically enforced rules.

Just as there are an infinity of things we’d like to say, there are a multitude of languages to speak in. And just as we want to choose the right tool for the job, we also want the language that gives us the right level of expressiveness. That’s not to say that there is a linear scale of expressiveness. At one level all Turing-complete languages let you say the same things — if you’re willing to bend over backwards to varying extents. Some languages allow us to say more things with less effort than others. But some languages just let us say different things. And even though you could twist and turn almost any language to say almost any thing, I’ve come to feel that all languages have a character — a set of assumptions, principles and decisions at the core that affects how the language and the systems built with it work.
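To make the “expression of the problem leads to the solution” point concrete, here’s the classic (and admittedly inefficient) Haskell quicksort. It’s little more than a restatement of what a sorted list is, and the definition turns out to be the algorithm:

```haskell
-- A sorted list is: the smaller elements (sorted), then the pivot,
-- then the larger elements (sorted).
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort smaller ++ [x] ++ qsort larger
  where
    smaller = [a | a <- xs, a <= x]
    larger  = [a | a <- xs, a >  x]

main :: IO ()
main = print (qsort [3, 1, 4, 1, 5, 9, 2, 6])  -- prints [1,1,2,3,4,5,6,9]
```

This version is famously not how you’d sort in production (it copies lists and picks a poor pivot), but as a demonstration of expressing the problem rather than the solution, it’s hard to beat.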

Languages don’t stand on their own. And despite my love of languages for themselves, they’re only as important as the systems we build with them. Human languages are important and beautiful in and of themselves, but they’re far more important because of the stories, histories and wisdom they embody and allow to be expressed and recorded. Our computer languages are valuable because they let us express important thoughts, solve important problems and build powerful systems to make our lives better. Again, we have done great things with the most primitive of tools, but if you value your time and energy it’s a good idea to pick the right tool for the job.

Personally, I’ve always been something of a language dabbler, probably a result of being in college and needing to move to a different language for another course every few months. In my time, the languages I’ve done significant amounts of work in are C, C++, Java, Python, Ruby, various assemblers and Verilog (not a programming language per se, but a medium for expression nonetheless). I wouldn’t consider myself an expert in any of them but I can definitely hold my own in C and Python, maybe Ruby too if I had to. Then there are the languages I want to learn — Lisp in general (Scheme in particular) and more recently Haskell and Scala (a newly discovered appreciation of functional programming and strong type systems). They’re all different mediums of expression, each letting you say different things, and there’s certainly a lot I want to say.

As a PhD student in programming languages, my job is not to become an expert in different languages (though I guess that could lead to an awesome consulting gig). My job is eventually to make some contribution to the field and push the boundaries (and that involves convincing the people standing on the boundary that it actually has been pushed). Luckily for me there are definitely lots of ways in which the state of the art for programming languages can be pushed. However, to do that I first need to know where the state of the art currently lies. Over the next few months (years?) I want to get deeper into studying languages and the ideas behind them. To start off, I want to explore functional programming, type systems and macros. And I’m sure those roads will lead to more roads to explore. Yes, you are all invited along for the ride.

Language paralysis

It’s winter break which means that I have a good amount of free time on my hands. Though I’m all in favor of sitting around and doing nothing, I do get bored after a few days of that and tend to look for something to keep my mind occupied. I decided that this time I would sit down and learn a new programming language, something I’ve been wanting to do for a while. But the thing is I can’t make up my mind as to which one.

I’ve considered learning one of three languages, each of which is a powerful yet somewhat quirky and niche language. My choices are Common Lisp, Scheme or Haskell. Common Lisp and Scheme are both Lisp dialects, but with different purposes and hence a different feel. From what I’ve learned Common Lisp is a full-fledged industrial strength, general purpose programming language while Scheme has a thriving research community surrounding it and is a great test-bed for implementing programming-language related ideas. Both share Lisp’s defining characteristics such as powerful dynamism and macro facilities. Both are inherently functional languages but are also capable of playing host to other programming paradigms (Common Lisp in particular with its Common Lisp Object System).

Haskell on the other hand is quickly becoming one of the most powerful programming languages on the planet and may be coming close to threatening Lisp’s throne. It’s a purely functional programming language with an increasingly powerful and capable type system. It’s an excellent tool for language and type-system related research thanks to great parsing facilities, and it seems to me that Haskell is at the forefront of computer science research today. Haskell doesn’t have a Lisp-style macro system (though Template Haskell offers compile-time metaprogramming), but I’ve never heard that be an issue.

All my choices are powerful languages with strong communities, but I simply can’t make up my mind as to which one to learn. I admire all of them and can see the strengths of each, but none of them is really compelling enough for me to sit down and decide to learn it. It’s time to explore some of the reasons behind my current paralysis and see if I can figure out a solution.

Looking back on my history of learning programming languages all the ones I’ve learned to any depth have been motivated by external cause. I learned Java because it was used in my basic CS courses. I learned C++ because I used it in my software engineering class. I learned C for operating systems and digital circuits courses. I learned Python because we use it for most of our research code at school (and it’s become the language I’m most familiar with). I’ve also picked up some JavaScript because I wanted to use it to give some dynamism to my website.

Unfortunately I don’t have similar motivations to help me make my current decision. My current research is being done in Ruby because of its flexible object system. I have some ideas for a side project to pursue next semester but it’s not likely to be something requiring Lisp or Haskell’s particular talents. I’m not going to be doing any research into languages or type systems until the summer at least (and maybe not until later this year). As of this moment, I have zero external motivation to pick and learn any of these languages.

The thing is I really do want to learn one (and eventually all) of these languages. I think it’s a good idea for programmers to be continually learning new languages and expanding the ways in which we can think of our problems. However, I’m coming to realize that simply sitting down and going through a tutorial isn’t enough, at least not for me. I need an actual problem that I intend to solve in the given language. It doesn’t have to be anything fancy, but it should be something that gives me a well-rounded view of the language and its capabilities (especially when the language is Lisp or Haskell).

I consider myself a language buff. But it’s one thing to say that I’m interested and read about them and another to sit down, learn them and write code in them. Right now, I’m very interested in learning about Common Lisp, Scheme and Haskell and read both blogs and papers about them. But I can’t take that interest and use it to bridge the gap to learning and using them. Motivation has always been a bit of a problem for me and I’m rather annoyed that it’s preventing me from learning what I want to.

Since I still have about two and a half weeks of vacation left I’m going to give some serious thought as to what sort of programs I want to write in the near future and how I can choose the language that is most beneficial along those lines. At this point I’m open to suggestions for Lisp/Haskell projects that would be interesting as well as hearing about how other people motivate themselves to learn languages that they aren’t actively using.

Enforcing coding conventions

It’s Black Friday which means that in most of the United States people are out shopping, taking advantage of supposedly great deals. I’m not indulging because I try to buy things only when I absolutely need them, instead of getting something just because it’s cheap. However, the one thing I did buy was the Kindle edition of the Joel Spolsky-edited “The Best Software Writing”. It’s a collection of talks and articles about software. I could probably have gotten all the material for free online, but I decided to pay the $9.99 to actually get the Kindle edition. It’s the first actual Kindle book that I’ve bought and I spent a good hour today reading it.

The very first article in the collection is “Style is Substance” by Ken Arnold. It is a suggestion to do away with differing coding formats and conventions in programming languages and instead have a single style for each language that is written into the language’s grammar. Thus if you write a program that violates the format rules, it’s not just ugly or bad form — it’s actually an incorrect program that will not compile. The suggestion is probably not a very popular one. Programmers tend to be rather defensive about these sorts of personal preferences — coding style, editor, version control tool, all have been known to trigger religious wars (and still often do).

Personally, I can understand the lure of having a single, undisputed code format that is enforced by the compiler. No more spending time figuring out where one block ends and another begins. No more spending precious mental cycles figuring out someone else’s conventions. One of the reasons I like Python is that it’s such a clean yet flexible language, and its enforced indentation is a big part of that. But on the flip side, I’ve generally been a fan of letting programmers use whatever tools they like as long as they get the job done. From that perspective, coding convention is just another tool and people should be allowed to use whichever one makes them work better.

However, the argument that code formats are just another personal preference doesn’t really hold up. Unlike editor color schemes and other such preferences, code is meant to be shared with other people. One of the greatest lessons I’ve learned from SICP is that programs must be written primarily for people to read and only incidentally for computers to execute. Since code will be shared, it makes sense to take measures to ensure that it will be easily readable and comprehensible by other people. Having different coding conventions is less like everyone using a different editor and more like everyone using a different XML schema to exchange documents. OK, it’s not quite so extreme, but it can be close.

If you’re someone writing a compiler or a language and you decide to enforce a single format, the next question is: which one? I would say that for current popular languages like C, C++ or Java trying to answer such a question is a futile effort. It’s not that you couldn’t change a compiler to implement a particular style. The problem is rather that there are already too many conventions in place and you’d have civil war if anyone tried to enforce a particular format at the compiler level. If we are to take the idea of a single correct format seriously, it has to be implemented in a new, or at least not-completely-mainstream, language. Haskell already has whitespace sensitivity built into the language: instead of using curly braces to enclose the statements of a definition, you can use indentation in a manner similar to Python (in fact layout is the default, with explicit braces and semicolons as the alternative). This is part of the syntax, and violating the layout rules will cause compilation to fail.
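As a small illustration of Haskell’s layout rule, here’s a definition whose block structure comes entirely from indentation. The names here are my own made-up example; the point is that the bindings under `where` must all start in the same column, and shifting one of them out of that column is a parse error, not merely bad style:

```haskell
-- Layout in action: "small" and "big" must align in the same column.
-- De-denting "big" to the left would make this fail to parse entirely,
-- not just look ugly -- the format is part of the grammar.
describe :: Int -> String
describe n
  | n < small = "small"
  | n < big   = "medium"
  | otherwise = "big"
  where
    small = 10
    big   = 100

main :: IO ()
main = putStrLn (describe 42)  -- prints "medium"
```

Explicit braces and semicolons are still legal Haskell, but almost nobody uses them, so in practice the community writes in one compiler-checked style.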

Go takes a different approach. Instead of writing format rules into the grammar and compiler, there is a program called gofmt which will reformat any syntactically correct Go program into the standard Go style. It can also be used as a syntax translator, meaning that as the Go language changes, gofmt can automatically upgrade programs written in an older version of Go to a newer one, as long as the changes can be described as syntax transformation rules. Gofmt is a powerful program and sets a high standard for language- and developer-oriented tools.

So will the languages of the future have strict formatting that minimizes the amount of time we waste mentally translating between formats? Maybe. But I doubt it will become a standard part of mainstream languages any time soon. As long as we have curly brace languages, people will continue to expect that the various conventions of the current curly brace languages will also be applicable. Stylistically different languages like Haskell (and to some extent Go) will probably lead the way in changing how we think about conventions and programming style. It’s interesting and a bit intimidating to see how far the bar for new programming languages has been raised in recent years, but that’s a topic for another blog post.