Sunday Selection 2011-12-11

Around the Internet

More shell, less egg It’s always a joy to see two masters at the top of their craft engaged in a respectful but determined duel. This is a short commentary on Donald Knuth and Doug McIlroy’s approaches to literate programming. Worth reading even if you’re not a big fan of literate programming.

Selective use of technology I firmly believe that science and technology are good things and that our world is better because of them. However, I also understand that technology cannot do everything for us. In particular, there are a lot of decisions it cannot make for us (yet). I also tend to do a lot of my best work when I am at least partially disconnected and can hold the full force of the Internet at bay. All things in moderation.

Why sugar makes us sleepy (and protein wakes us up) As much as many of us would like to live as if we were disembodied brains surviving on anything that barely resembles food, that is definitely not the case. Since we are stuck with our flesh-and-blood physical bodies for the foreseeable future, it is a good idea to figure out how it all works and make the most of it.

From the Bookshelf

Do the Work While I’m not entirely a fan of Steven Pressfield’s use of vaguely “spiritual” ideas and terms, this book is still worth reading for everyone. It’s especially useful if you have that big project you’ve been thinking about but never got around to actually starting. At $1.99 for the Kindle edition, it’s a steal.

Video

What we actually know about software development Despite the importance of software development, most developers are largely unaware of the scientific studies in the area and rely mostly on anecdote. Luckily there is an increasing amount of research into software development (not to be confused with computer science) and it’s worth knowing what we actually know about the field and what is myth.

The Interface Paradox

As much as I love programming and good old-fashioned text-based command lines, I have an interest in ergonomics and futuristic interfaces. A few days ago a post entitled “A Brief Rant on the Future of Interaction Design” made the rounds on the Internet. It opens with an old but interesting video and goes on to argue that our current obsession with flat touchscreens and simple gestures is doing us all a disservice. Our hands are capable of complex gripping, grasping and touching motions, and having all that expressivity confined to a small, two-dimensional surface with a limited number of motions is self-defeating. The article makes the damning statement: “Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?”

The article helped me express an uncertainty that’s been floating back and forth in my mind for some time. I use my iPod Touch on a daily basis and I’ve been loving the multitouch trackpad on the new MacBooks. I love the swiping motions for window management and moving things around. At the same time I’ve started drawing by hand again (I loved drawing as a kid) and I’ve realized that putting a pencil to paper is a rather complex but very fulfilling activity. Strangely enough, I think the pencil and the touch-based iOS interface have a lot in common. In both cases, the actual physical device almost disappears, letting you focus on the underlying application. The iPad or iPhone itself is just a thin frame around whatever app you’re using. The pencil is basically just a simple pointer, yet it lets us create an infinite range of images.

However, in both cases the expressiveness offered by the device is not enough. Pencils are not enough to express all the images we might want to create. That’s why we have pens, brushes, chalk, crayons and a variety of papers and canvases. The flat touch interface is also not enough, especially if we are confined to a small surface that fits in one hand. The question, then, is how we can take the simplicity of our current touch interfaces and extend it to a larger set of expressions and interactions.

Case in point: the camera interface on the iPhone. For a long time there was only a software button that you had to touch to take a picture. But that meant sticking your finger in the middle of the picture. Normal cameras have a better interface: there is a shutter button on top that keeps your hands away from the actual image (even if you’re using an LCD screen instead of a traditional viewfinder). This deficient interface on the iPhone led to the Red Pop, a giant red shutter button, and now iOS 5 turns one of the hardware volume buttons into a shutter button.

The Red Pop camera interface for the iPhone

Having a fluid, upgradeable, customizable software interface is nice, and I like smooth gradients and rounded corners as much as the next guy. But our hands evolved to manipulate actual physical matter, and before computer interfaces we built a lot of interesting physical ones. Apple has hooked us on the idea of sleek, smooth devices with nothing extraneous. While it’s great to lose unnecessary knobs and edges, the Apple design philosophy might not be best in the long run, especially if your device’s UI doesn’t neatly fit into the touch-drag-swipe system of gestures.

Ultimately it would be great to have “smart matter” physical interfaces – the flexibility and programmability of software combined with the physical usability of solid matter. Imagine some sort of rearrangeable material (based on some form of nano- or micro-technology, maybe?) that can be a simple, smooth shell around your device but can change to form buttons, sliders, knobs or big red shutter buttons as your application requires. But in the years (decades?) between now and then we need other solutions. The range of accessories and extensions available for the iPhone (including the Red Pop, tripods, lenses etc.) suggests that enterprising young device makers could use the iPhone (and its successors and competitors) as a computing core to which they attach their own physical extensions. With a more open and hackable platform (an Android-Arduino hybrid, perhaps) we might see a thriving device market as well as an app market. Am I a dreamer? Hell yeah, but as the projects I’ve linked to show, I’m certainly not the only one.

Ubuntu should zig to Apple’s zag

It’s another October and that means it’s time for another Ubuntu release. Before I say anything, I want to make it clear that I have the utmost respect for Mark Shuttleworth, Canonical and the Ubuntu project in general. I think they’ve done wonderful things for the Linux ecosystem as a whole. However, today I’m siding with Eric Raymond: I have deep misgivings about the direction Ubuntu is going, especially in terms of user interface.

I’m not a UI or UX designer. I’m sure there are people at Canonical who have been studying these areas for longer than I have. But I am a daily Linux user. In fact, I would say that I’m a power user. I’m no neckbeard, but I think that by now I have a fair grasp of the Unix philosophy and try to follow it (my love for Emacs notwithstanding). The longer I watch Ubuntu’s development, the more it seems that they are shunning the Unix philosophy in the name of “user friendliness” and “zero configuration”. I think that’s absolutely the wrong way to go.

It seems that Canonical is trying very hard to be Apple while not being a total ripoff. Apple is certainly a worthy competitor (and a great source to copy from) but this is a game that Ubuntu is not going to win. The thing is, you can’t be Apple. That game has been played, that ship has sailed. Apple pretty much has the market cornered when it comes to nice shiny things that just work for most people, regardless of prior computer experience. Unless Canonical somehow sprouts an entire ecosystem of products overnight, they are not going to wrest that territory from Apple.

That’s not to say that Canonical shouldn’t be innovating and building good-looking interfaces. But they should play to the strengths of both Linux the system and Linux the user community instead of fighting them. Linux users are power users. In fact I think Linux has a tendency to encourage average computer users to become power users once they spend some time with it. I would love to see Ubuntu start catering to power users instead of shooing them away.

It’s becoming increasingly clear that Apple does not place its developers above its customers. That’s a fine decision for them to make. It’s their business and their products and they can do whatever they like. However as a programmer and hacker I am afraid. I’m scared that we’re getting to the point where I won’t be able to install software of my choosing without Apple standing in the way. I’m not talking about just stuff like games and expensive proprietary apps, but even basic programming tools and system utilities. That’s not something that I’m prepared to accept.

Given the growing lockdown of Apple’s systems, Canonical should be pouring resources into making Ubuntu the best damn development environment on the planet. That means that all the basics work without me tinkering with drivers and configurations (something they’ve largely accomplished). It means that there’s a large pool of ready-to-install software (which they also have) and that it’s possible (and easy) to install esoteric third-party tools and libraries. Luckily, the Unix heritage means that the system is designed to allow this. Instead of trying to sugarcoat and “simplify” everything, there should be carefully thought-out defaults that I can easily override and customize. Programmability and flexibility grounded in well-tuned defaults should be the Ubuntu signature.

It makes even more sense for Canonical to take this angle because Apple seems to be actively abandoning it. A generation of hackers may have started with BASIC on Apple IIs, but getting a C compiler on a modern Mac is a 4GB Xcode download. Ubuntu can easily ship with a default arsenal of programming tools. Last I checked, the default install already includes Python. Ubuntu can be the hands-down, no-questions-asked platform of choice for today’s pros and tomorrow’s curious novices. Instead of a candy-coated, opaquely configured Unity, give me a sleek, fully programmable interface. Give me a scripting language for the GUI with first-class hooks into the environment. Make it dead simple for people to script their experience. Encourage them and give them a helping hand. Hell, gamify it if you can. Apple changed the world by showing a generation the value of good, clean design. Canonical can change the world by showing the value of flexibility, programmability and freedom.

Dear Canonical, I want you to succeed, I really do. I don’t want Apple to be the only competent player in town. But I need an environment that I can bend to my will instead of having everything hidden behind bling and “simplification”. I know that being a great programming environment is at the heart of Linux. I know that you have the people and the resources to advance the state of computing for all of us. So please zig to Apple’s zag.

PS. Perhaps Ubuntu can make a dent in the tablet and netbook market, if that’s their game. But the netbook market is already dying and let’s be honest, there’s an iPad market, not a tablet market. And even if that market does open up, Android has a head start and Amazon has far greater visibility. But Ubuntu has already gone where no Linux distro has gone before. For most people I know it’s the distribution they reflexively reach for. That developer-friendliness and trust is something they should be actively leveraging.

Making multiple monitors work for you

A few days ago I happened upon a post regarding large screens and multiple monitors, or rather the lack thereof. The article was a well-written but sloppily thought-out piece in which the author claims that having a single, small display makes him “meticulous”, “instantly less distracted” and generally more productive. I take exception to this claim.

I want to make the argument that if using multiple monitors is not boosting your productivity, then either you are not using them correctly or your work does not require two monitors in the first place (which is perfectly fine). But going back to one small screen is not suddenly going to make you more productive. Productivity is a function of both the work you do and the tools you use, and it annoys me when so-called “technology bloggers” hold up interesting tools as the be-all and end-all.

Rants aside, the issue of multiple monitors is something that I’ve been thinking about on and off for the past few years. At various points I’ve gone between one and two screens. Over the summer I had a dual monitor desktop setup. Now I have my Macbook Air at home and a single 22″ monitor at work. Since I have a standing desk at work I don’t hook my Macbook up to the monitor. If there is a way to make the Air float in mid-air please let me know.

Here is my thesis: multiple monitors are most useful when your work actually needs you to be looking at multiple pieces of information at the same time. Any time you find yourself quickly switching between 2 or more applications, tabs or windows, you could probably benefit from adding a monitor (or two). Personally this happens to me all the time when I’m programming. I have one monitor devoted to code and one devoted to documentation about libraries I’m using. When I was taking a screenwriting class I would have my script on one monitor and the other would be devoted to research (mostly Wikipedia because I was writing an alternative history piece involving the British Empire).

I’ve always used multiple monitors with virtual desktops (or workspaces, as they’re called now). They blew my mind when I first started using Linux and I couldn’t live without them now. Some environments (OS X in particular) treat multiple monitors as one big extended desktop. I think that’s the wrong way to go about it. I prefer using XMonad with multiple monitors – it turns each monitor into a “viewport” onto my set of up to 9 single-monitor desktops. That means I can control and move what’s on each monitor independently, which gives me much finer control over what I’m seeing at any time. If I have code, IRC and documentation open in three different workspaces, I can keep code open on one monitor and move between docs and IRC on the other when I need questions answered. This level of control is also why I think two smaller monitors are better than one large monitor.
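
For what it’s worth, the “viewport” behavior barely needs any configuration: it’s how XMonad treats multiple monitors out of the box. Here’s a minimal xmonad.hs sketch along the lines of my setup (the workspace names are purely illustrative, and the keybindings mentioned in the comments are just XMonad’s defaults):

    -- Minimal xmonad.hs: named workspaces shared across all monitors.
    -- With Xinerama, each monitor becomes a viewport onto this one pool
    -- of workspaces rather than part of a single extended desktop.
    import XMonad

    main :: IO ()
    main = xmonad defaultConfig
        { workspaces = ["code", "docs", "irc", "web", "mail", "6", "7", "8", "9"]
        , modMask    = mod4Mask  -- use the Super/Windows key as the modifier
        }

    -- Default keybindings that make the viewport model work:
    --   mod+1..9   pulls that workspace onto the currently focused monitor
    --   mod+w/e/r  shifts focus to the first/second/third physical screen
    -- so code can stay put on one monitor while docs and IRC swap on the other.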

XMonad is also a tiling window manager which automatically arranges my windows to make the most efficient use of my screen space. It makes OS X’s fullscreen app mode look like a toy in comparison. Instead of futzing around laying out my windows by hand, I can have preprogrammed setups that are guaranteed to be the same every single time. This comes in very handy when you need to look at three terminals logged into three different computers at the same time. Spatial memory can be very powerful and it’s one more trick in the productivity bag. (It’s also why I love Firefox Panorama, but that’s a matter for another post.) XMonad also lets me use space that would otherwise go wasted if I had to spend time and effort setting it all up manually. Large screens actually get partitioned into usable layouts. If you’re having trouble using the space on a large monitor, you should get a window manager that will do it for you.

Two things to note about my usage: first, I do work that actually benefits from having multiple things open side by side. Second, I use tools that are molded to suit my workflow, not the other way around. And therein lies the fallacy when people start saying that going to smaller, single monitors increases their productivity. If you’re a writer who doesn’t need her research open side by side then you’re not going to benefit from another monitor. If you absolutely need to focus on Photoshop for long periods of time then a single big monitor might be best. As I’m writing this blog post I’m using one half of my piddly 13″ Air screen. If your second monitor is filled with email, Twitter, IM and other distractions then of course it’s going to make you less productive. If you’re fighting with your technology rather than shaping it to your will then it’s not surprising that you get more work done with less tech.

The last thing I want to talk about is physical positioning. The article I mentioned at the beginning does make one valid point: if you have a dual-monitor setup then either the seam between the screens is right in front of you or one monitor is off to the side. The article also claims that having a monitor off to the side means that it’s less used (or completely unused). You don’t want to be staring at the seam by default, but I’m not sure that having a monitor to the side is such a bad thing. Again, when I’m writing code I like having the code front and center, and having the documentation off to the side isn’t such a bad thing. If the monitor to the side really is going unused, you should consider the points I mentioned above. Manually dragging windows to the side can be a pain and I think this might be the biggest killer for a lot of people.

When I’m at a desktop with two monitors I tend to sit directly in front of my “main” monitor (whichever one has my code up) and have the second one off to one side. XMonad makes sure I’m never dragging anything over, and moving my head or eyes a few degrees every now and then isn’t a big deal. Your mileage may vary. But when I have a laptop connected to an external monitor, I prefer having the external one propped up so that its screen is above the laptop’s. Configurations with a small laptop side by side with a larger monitor have always struck me as odd. Having them stacked one above the other makes the workflow slightly better. In Linux (or Gnome at least) you can definitely get the vertical stacking to work right; I haven’t tried it with OS X.

In conclusion, please think carefully about your workflow if you’re considering changing your monitor setup. Use tools that will work with you, not against you. Don’t think that a particular setup will work for you just because someone on the Internet promotes it or some piece of software makes that the default. People and companies get things wrong. Even Apple. There is no substitute for using your own head, especially when it comes to personal matters like workspace setup.

PS. I know that I haven’t linked to the original article. I did it on purpose because links are the currency of the Web and I don’t want to reward what I consider to be sloppy thinking and inaccuracy. If you really want to find it, Google is your friend. If you think I should link anyway, feel free to convince me in the comments.

Sunday Selection 2011-09-25

Unfortunately, work-related activities have been taking up a lot of my time and energy over the past couple of weeks. On the good side, I’m gradually making progress towards figuring out this grad school thing. While I work on a funny and insightful blog post to blow you all away, I leave you with a brief tour of the Intertubes.

Society

It’s not gender warfare, it’s math Being a computer science graduate student, I’m regularly confronted by the fact that there are not enough women in our field (and that doesn’t seem to be changing any time soon). Here’s a look at why that needs to change and some work in the right direction.

The Fraying of a Nation’s Decency Sometimes we just need a reminder that we’re all human after all.

Web Technology

10 best @font-face fonts I think embeddable web fonts are one of the best things to have happened to the web in recent years. Think of this article as a good “getting started” guide if you’re trying to figure out what fonts to use for your own projects.

How to make a simple HTML5 Canvas game The canvas element is an even bigger improvement than web fonts. Like the name suggests, it gives you a general purpose drawing element on a web page. Combine that with fast JavaScript engines and you have a pretty decent game engine on your hands.

Video

QuakeCon 2011 John Carmack keynote If you’re interested in game engines or high-performance, down-and-dirty programming then you should take the hour and a half to listen to John Carmack — the brains behind the Doom and Quake game engines.