Let’s kill Click Here

Click here to go to my last post.

Let’s stop doing that. As much as I love hyperlinks and the Web, I think it’s a bit unnecessary (and poor form) to have explicit link text saying something like “click here”. If you’re not really interested in the links, these phrases just break the flow of your reading.

I’m not sure how this convention started, but I can imagine it being useful in the early days of the web. Before the idea of linking became ubiquitous it was a good idea to explicitly call out a link, especially if it was important. But I think we’re at the point now where most users can tell from the styling whether a piece of text is a link. Think of how the movie Inception didn’t go to great lengths to explain how people get into others’ dreams – the Matrix movies have made the concept of “jacking in” pretty familiar. The details aren’t very relevant to the story; the basic concept is well known, and moviemakers can focus on more important things.

By and large, the web conventions of the last two decades have established that underlined text in a different color is a link. This isn’t universally true, of course. Thanks to CSS I can make my links look however I want; I can even make them look like plain text. But why would I want to? If I’m trying to attract attention to something, I want to do it clearly without being obnoxious. Using different colors and styles gets the point across perfectly well: this text is different and merits further attention; you might want to click on it.

Let’s look at natural speech. If we want to say something important we don’t preface it with “I’m going to say something important now”. We don’t end with “I’m done saying important things now”. Instead we speak slower and louder, with greater emphasis, to show that what we’re saying is important. We don’t talk in a monotone all the time. We vary our tone, speed and volume to convey the different meanings of our speech. Web design (including designing links) should be similar: let’s put in the effort to make our links stand out without having to spell them out.

Aside: Along those lines, in daily speech if you’re saying “My point is” or “What I’m trying to say is” a lot, you should slow down and think carefully about what you want to say before you say it. I think public speaking and rhetoric should be a mandatory part of education for similar reasons, but that’s a whole other blog post.

I’ve been putting more links in my posts recently (especially since I ditched the WordPress web editor in favor of the excellent org2blog Emacs mode). My posts are often the result of stuff I’ve read on the Web fermenting in my head along with other ideas I’ve had. I want to link to relevant readings and I try to do that inline as much as possible. In an ideal world we would have intelligent, automatically generated links as well as manual ones. For example, whenever I mentioned a person, a link would be created to their personal website or their Wikipedia page. Lacking that, inline links are the next best thing I can think of. In doing so I’ve been trying to avoid making said links explicit. So far I’ve been pretty successful; it’s not that hard once you get used to it.

As with all communication, there’s a lot to be said for brevity, precision and flow. I want my posts to be readable as pieces of writing even if someone is not interested in the links. By keeping links inline and using design choices to make them visible, I think we can create online articles that are easy to read as well as being well linked to relevant resources – just the way it was meant to be.

Compilers and nuclear reactors

This summer I’m doing an internship at a small, research-y software shop called GrammaTech. They do a lot of research-based projects and have a static analysis tool called CodeSonar that is really quite nifty. I’ve been poking around the CodeSonar codebase as part of my work and I’m pretty impressed by the kind of things they do. Most of it is in C and I can easily say I’ve never seen C written this way before. They leverage C’s close-to-the-metal nature and the preprocessor to create some pretty powerful abstractions. It makes me painfully aware of how much I still have to learn. While marveling at what a deep knowledge of C can do I came across “Moron why C is not Assembly” by James Iry – an explanation of why C is not just syntactic sugar for assembly language.
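
To give a flavor of the kind of preprocessor abstraction I’m talking about (this is my own toy example, not anything from the CodeSonar codebase), here is the classic X-macro trick: a single list of node kinds generates both an enum and a matching table of names, so the two can never drift out of sync.

    #include <stdio.h>

    /* One master list; each entry is emitted through whatever X currently means. */
    #define NODE_KINDS \
        X(NODE_CONSTANT) \
        X(NODE_VARIABLE) \
        X(NODE_CALL)

    /* First expansion: the enum itself. */
    #define X(name) name,
    enum node_kind { NODE_KINDS NODE_KIND_COUNT };
    #undef X

    /* Second expansion: a parallel table of printable names. */
    #define X(name) #name,
    static const char *node_kind_name[] = { NODE_KINDS };
    #undef X

    int main(void) {
        for (int k = 0; k < NODE_KIND_COUNT; k++)
            printf("%d -> %s\n", k, node_kind_name[k]);
        return 0;
    }

It’s a small thing, but scale the idea up and the preprocessor starts to look like a crude metaprogramming system.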

As I learn more about programming in general and programming languages in particular, I’m starting to think that there is something of a Law of Conservation of Power, vaguely akin to the conservation of matter and energy (but not quite as absolute). For example, Iry talks about how C enforces the stack abstraction and hides any parallelization in the hardware (or in the generated code). By moving from the assembly world to the C world you’re trading one form of power for another – you obey the constraints of the stack and function calls get much simpler, but you lose the ability to fine-tune with dedicated hardware instructions.
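
Here’s a rough sketch of that last point, using x86 SSE intrinsics rather than raw assembly (my own illustration, assuming an SSE-capable target): the plain C loop says nothing about the vector hardware, while the intrinsic version spells out the four-wide add at the cost of portability.

    #include <immintrin.h>

    /* Portable C: whether this gets vectorized is entirely the compiler's call. */
    void add_plain(float *a, const float *b, int n) {
        for (int i = 0; i < n; i++)
            a[i] += b[i];
    }

    /* Explicit SSE: four additions per instruction, but tied to one architecture. */
    void add_sse(float *a, const float *b, int n) {
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(a + i, _mm_add_ps(va, vb));
        }
        for (; i < n; i++)   /* leftover elements */
            a[i] += b[i];
    }

Neither version is “better”; they just sit at different points on the tradeoff.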

This remains true as you continue exploring more languages. Give up the looseness of C’s weak type system for something stricter and stronger (ML with its algebraic datatypes, for example) and you can have the machine enforce invariants and guarantee certain properties of your code; you can perform easier and more elegant matches and actions on the type of your data. But you give up the flexibility that comes from raw data and fundamentally unstructured bits. If you choose strict immutability and pure functions (like Haskell or Clojure) you get to interact with your datatypes in a more mathematically precise form, and you get to reap the benefits of concurrency without worrying about data corruption (to some extent). But you lose the ability to quickly stash some metadata into the corner of some variable data structure and pull it out at will.
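
To make the ML comparison concrete in C terms (again, just a sketch of my own): the closest C comes to an algebraic datatype is a hand-rolled tagged union, and nothing but discipline keeps the tag and the payload in sync or makes sure every case is handled.

    #include <stdio.h>

    enum shape_tag { CIRCLE, RECT };

    struct shape {
        enum shape_tag tag;
        union {
            struct { double radius; } circle;
            struct { double w, h; } rect;
        } u;
    };

    static double area(struct shape s) {
        switch (s.tag) {
        case CIRCLE: return 3.141592653589793 * s.u.circle.radius * s.u.circle.radius;
        case RECT:   return s.u.rect.w * s.u.rect.h;
        }
        /* Forget a case and at best the compiler warns; ML's pattern matching
           would flag the non-exhaustive match for you. */
        return 0.0;
    }

    int main(void) {
        struct shape c = { .tag = CIRCLE, .u.circle.radius = 2.0 };
        printf("area = %f\n", area(c));
        return 0;
    }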

If we start viewing different languages as tradeoffs in the power available to the programmer then a compiler becomes a very special tool – a power transformer, akin to a nuclear reactor (or a Hawking’s Knot if you don’t mind some futurism). A compiler, at its core, takes a stream of symbols (according to predefined syntactical rules) and transforms them into another stream of symbols (according to predefined semantic rules). Along the way, the compiler is responsible for enforcing the power transforms – ensuring that your C code doesn’t get to the underlying stack, that your Haskell code obeys the type constraints. If our languages are tools with which we build our universes, our compilers enforce the laws of physics (whatever laws they may be). The reason we’re computer scientists and not physicists is that we can create and change these laws at whim, instead of only studying and exploring them.

Just as we don’t want to be putting nuclear reactors in SUVs, there isn’t a “best” power tradeoff. Do you need to be really close to the metal on a resource-constrained embedded system? Use C. Do you need a guarantee that a successful compilation will rule out a certain class of runtime errors? Use ML. If you need a fuel-efficient vehicle to take you to work every day, get a Prius. If you need a self-sufficient, water-borne weapons platform that only needs to be refueled every few decades and can rain down vengeance on your enemies should the time come, then invest in an aircraft carrier or a nuclear submarine. Don’t bring a knife to a gunfight.

There are two big elephants in the room: Turing completeness and Lisp. All Turing-complete languages are strictly equivalent in their computational power, but that misses the point of this discussion. Most programmers are not writing code for machines at all: they are writing programs for programmers (including themselves) to read, understand and use (and only incidentally for machines to execute; thank you, SICP). When the rules of the game are not strict computational power but expressiveness and understandability to another human, this Conservation of Power becomes much more important. Choosing the correct set of tradeoffs and balances (and hence the correct language and related toolsets) becomes a decision with far-reaching impact on your team and project. Make the wrong choice and the Turing tarpit will swallow you alive and surrender your fossilized remains to a future advanced insectoid race.

Lisp is another matter entirely. The so-called “programmable programming language” has been used to build everything from operating systems to type systems that are Turing complete in themselves. As Manuel Simoni puts it, in the Lisp world there is no such thing as too much power. Lisp laughs flippantly at the Law of Conservation of Power by placing the enforcer of the Law, the compiler itself, in the programmer’s hands. By virtue of S-expressions and macros, Lisp allows and encourages you to play God. The natural way to program in Lisp is to “write code that writes code”, creating your own minilanguages tailored to the task at hand. With great power comes great responsibility, of course, so the Lisp programmer must be particularly careful.

I haven’t explored Lisp as much as I’d like to and I’m only just starting to look into ML and Haskell. But as a career programmer (or some approximation thereof) I think it’s a good idea to routinely move between various points on the power spectrum. That’s not to imply that it’s a perfectly linear scale, but that’s a matter for another post. As my experience at GrammaTech is showing, there are delights, wonders and challenges no matter where you decide to plant your feet.

Control the flow

There is an abundance of information in the world. You might even say there is an overabundance of information. I’d argue that the problem is not that the information exists, but rather that it’s all too easy to get to. In fact, you don’t even have to go to it. Information comes to you, all the time, through multiple channels at once. And often it’s just too much. Compounding the problem is the lack of automated, intelligent and accurate filtering systems: the only way to deal with incoming information is to look at it yourself and set up filters by hand. Combine the abundant influx of information with the lack of ways to automatically parse and filter it, and you end up with a debilitating information overload.

If we ever want to get anything done, there is only so much time we can spend each day on absorbing information. To create, design, build or produce anything of value we need to temporarily cut ourselves off from the stream. Unfortunately the increasingly ubiquitous presence of the Internet combined with email, RSS and Twitter make such disconnection a hard proposition to swallow. For me at least, the temptation is strong to just compulsively check the streams all day long. It’s like constantly refreshing an inbox, but more addictive because everything is coming in faster. Furthermore, the addiction is real. Our information streams leverage variable reinforcement to keep us hooked. Every time there is something new we get a little dopamine high that makes us want to come back for more.

The price we pay, the price I pay, for paying attention to the stream is all the things that could have been created, but aren’t. And as the days go by, that price only gets steeper. You can’t have a brain concentrating on creative work if it’s hovering over multiple inboxes, hoping that something interesting will come through. Something’s got to give. At the end of the day we either give up hope of ever accomplishing anything worthwhile (and many of us do) or we constrict the streams, control the flow, reduce the inboxes and do the work.

We could go all the way – give up email, connect to the Internet sparingly, focus inward instead of outward. But let’s be clear: the Internet is pretty darn amazing and I love having the combined knowledge of humanity a few keystrokes away. You can have the information superhighway when you pry it from my cold, dead, RSI-crippled hands. Till then, a little prudence is in order.

I’m giving up on blogs and RSS feeds that update more than once a day (with a few, very select exceptions). That cuts away most of the “news” blogs that I skip over anyway. I want to read things where I get a view of the author’s mind and thoughts, the expression and intelligence of another human being. I want their words to come to me because I’ve already read some of them and determined that I don’t want to miss them. If I want raw information, I’ll go and find it. I’ll read when I want to: at the end of a long, fruitful day, or on a lazy Saturday morning, not compulsively every hour for fear of missing something.

I’m giving up on all the blogs that sound the same (this seems true of a lot of technology blogs, unfortunately). It’s great that the Internet gives you a voice, but that doesn’t mean I’m obliged to listen to it. Twitter is a flowing, meandering river – it comes, it goes; if I’m taking a dip it’s to be refreshed, not to be carried away. If I find a nugget of gold I stash it in Instapaper for later. I already have automated filters for email. I see messages under two conditions: they’re urgent, or they’re unlike anything my system has seen so far. It’s not a personal AI, but it’ll do for now.

This mindset also extends to production. I write words in a plain text editor and publish them by automated, low-friction, no-fiddle means. The system is open source, programmable and transferable between platforms. I can have it grow with me, and I can file bug reports and submit patches so that it becomes better for others too. I write code in the same editor, hooked up to compilers, debuggers and source-code managers with similar low-friction scripts and commands. Nary a clickable button in sight. For hashing out ideas I rely on pen, paper, whiteboards and intelligent human beings. All this means I have the time and opportunity to slow down, reflect and revise. Every time I put up something for others to see, I want to ensure that it sucks a little less.

And so I am trying to constrict the inflow and filter the outflow. Never before in our history has it been so easy to get to things. Never before has it been so easy to create and publish. A side effect is that there is a lot of crap to consume; it’s just so easy to produce mediocre widgets. It’s about time we stopped sabotaging ourselves (and by we, I mean I), stopped drowning in the sea of information, and started doing things we’d be proud to have our names on. We’ve figured out the technology to create; let’s figure out how to filter and refine.

In the presence of gods

Via Wikipedia, James Gleick in Genius: The Life and Science of Richard Feynman:

This was Richard Feynman nearing the crest of his powers. At twenty-three … there was no physicist on earth who could match his exuberant command over the native materials of theoretical science. It was not just a facility at mathematics (though it had become clear … that the mathematical machinery emerging from the Wheeler–Feynman collaboration was beyond Wheeler’s own ability). Feynman seemed to possess a frightening ease with the substance behind the equations, like Albert Einstein at the same age, like the Soviet physicist Lev Landau—but few others.

Also, last week I went to a lecture by Jon Kleinberg, Tisch University Professor of Computer Science at Cornell University and winner of a MacArthur Foundation Fellowship (also known as a Genius Award), whose early research formed a large part of Google’s success as a search engine.

Some days we are reminded that we walk among giants, that we live in the presence of gods. On those days, we are humbled and uplifted at the same time.

Just for fun

Let me tell you about Sunday. Sunday was, among other things, uncomfortably warm here in Ithaca, NY. And a combination of being woken up much earlier than I wanted and skipping breakfast guaranteed that I was quite cranky all morning. But anyway, by 1pm I was decently well fed and had a mini-conversation with my advisor while standing in the sandwich line (which of course involved a good amount of programming language talk). Being in a considerably better mood I decided to avoid the heat by heading over to Starbucks and writing some code.

For most of last semester I had been working on an Actor library in Ruby to do some fun little concurrency experiments. I had been wanting to take my code (which is very rough, in the way that only research projects can be) and turn it into a proper Ruby actor library. On Sunday I started down that path.

I had been aware of some prior art in this area, in particular the excellent Revactor library. Until yesterday I hadn’t actually taken the time to dig deep into it. When I did, I was devastated. Revactor is beautiful, well thought-out, flexible, powerful and elegantly implemented. With the exception of a few aesthetic details it’s everything I wanted my own library to be and more. It was devastating because it seemed like all I could do was reinvent the wheel. What’s the point of writing or making something if someone’s already done it before, and better? It’s the kind of crushing hopelessness I feel sometimes as a language researcher: it seems like Lisp did everything, 30 years ago, and did it better.

So after banging my head against the table for the better part of an hour (and wondering why the girl sitting next to me had two straws in her iced tea) I decided to go for a walk. At almost 6pm it was still uncomfortably warm but after ten minutes I found myself at the steps of the computer science building (unsurprisingly). I walked in, claimed a couch and started writing some code.

My new project isn’t brilliant research, it’s not scalable and high-performance, and it’s not an infinitely reusable library with unit and regression tests. It’s just a little hackish thing I threw together in an hour. It doesn’t do very much yet, but it offers the promise of many hours of fun hacking ahead. It’s a personal project that scratches a little itch and is a lot of fun to code up. I’m writing a combination of C and Ruby, I’m living in Emacs and my terminal, and I’m enjoying every minute of it. After a long, long time I’m having fun writing code and remembering why I got into this gig in the first place.

The thing is, at the end of the day, I sling code because I like to. Because it’s fun. If it stops being fun I might as well just give it up and hang up the keyboard (or keep the keyboard and write words instead of code). As Andrew Appel says, not all of us want to be logicians, some of us just want to be hackers. I like math and logic and performance analysis as much as the next guy, but I also like just the pure, raw feeling of code. I remember the allure of the machine as a mysterious black box – a well of infinite potential if only you could figure out how to bend it to your will. I remember a time when we used to be explorers – poking and prodding our systems, seeing how they reacted, how they bent, how we could change them and restore them, how we could make them do what they weren’t meant to do. I remember that it used to be a whole lot of fun. Yesterday I remembered all that and had a lot of fun doing it.

I don’t know why you do what you do, but I hope it’s fun. I hope that when you go to work everyday it’s because you really, really want to, because you can’t see yourself doing anything else that’s as fun. Feel free to change the world, to make it a better place to live in, to support and help the people around you, but please have fun while doing it.