Ubuntu should zig to Apple’s zag

It’s another October and that means it’s time for another Ubuntu release. Before I say anything, I want to make it clear that I have the utmost respect for Mark Shuttleworth, Canonical and the Ubuntu project in general. I think they’ve done wonderful things for the Linux ecosystem as a whole. However, today I’m siding with Eric Raymond: I have deep misgivings about the direction Ubuntu is going, especially in terms of user interface.

I’m not a UI or UX designer. I’m sure there are people at Canonical who have been studying these areas for longer than I have. But I am a daily Linux user. In fact, I would say that I’m a power user. I’m no neckbeard, but I think that by now I have a fair grasp of the Unix philosophy and try to follow it (my love for Emacs notwithstanding). The longer I watch Ubuntu’s development, the more it seems that they are shunning the Unix philosophy in the name of “user friendliness” and “zero configuration”. And I think that’s absolutely the wrong way to go.

It seems that Canonical is trying very hard to be Apple while not being a total ripoff. Apple is certainly a worthy competitor (and a great source to copy from) but this is a game that Ubuntu is not going to win. The thing is, you can’t be Apple. That game has been played, that ship has sailed. Apple pretty much has the market cornered when it comes to nice shiny things that just work for most people irrespective of prior computer usage. Unless somehow Canonical sprouts an entire ecosystem of products overnight they are not going to wrest that territory from Apple.

That’s not to say that Canonical shouldn’t be innovating and building good-looking interfaces. But they should play to the strengths of both Linux the system and Linux the user community instead of fighting them. Linux users are power users. In fact I think Linux has a tendency to encourage average computer users to become power users once they spend some time with it. I would love to see Ubuntu start catering to power users instead of shooing them away.

It’s becoming increasingly clear that Apple does not place its developers above its customers. That’s a fine decision for them to make. It’s their business and their products and they can do whatever they like. However as a programmer and hacker I am afraid. I’m scared that we’re getting to the point where I won’t be able to install software of my choosing without Apple standing in the way. I’m not talking about just stuff like games and expensive proprietary apps, but even basic programming tools and system utilities. That’s not something that I’m prepared to accept.

Given the growing lockdown of Apple’s systems, Canonical should be pouring resources into making Ubuntu the best damn development environment on the planet. That means that all the basics work without me tinkering with drivers and configurations (something they’ve largely accomplished). It means that there’s a large pool of ready-to-install software (which they also have) and that it’s possible (and easy) to install esoteric third-party tools and libraries. Luckily, the Unix heritage means that the system is designed to allow this. Instead of trying to sugar-coat and “simplify” everything, there should be carefully thought-out defaults that I can easily override and customize. Programmability and flexibility grounded in well-tuned defaults should be the Ubuntu signature.

It makes even more sense for Canonical to take this angle because Apple seems to be actively abandoning it. A generation of hackers may have started with BASIC on Apple IIs, but getting a C compiler on a modern Mac is a 4GB Xcode download. Ubuntu can easily ship with a default arsenal of programming tools. Last I checked, the default install already includes Python. Ubuntu can be the hands-down, no-questions-asked platform of choice for today’s pros and tomorrow’s curious novices. Instead of a candy-coated, opaquely-configured Unity, give me a sleek, fully programmable interface. Give me a scripting language for the GUI with first-class hooks into the environment. Make it dead simple for people to script their experience. Encourage them and give them a helping hand. Hell, gamify it if you can. Apple changed the world by showing a generation the value of good, clean design. Canonical can change the world by showing the value of flexibility, programmability and freedom.
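
As a toy sketch of what “first-class hooks into the environment” could feel like, here is a few lines of Python. Everything in it is hypothetical: no real Ubuntu or Unity API looks like this, and the names are made up. It only illustrates the kind of scriptability I’m wishing for.

```python
# A toy model of a scriptable desktop: the shell exposes named events and
# user scripts register plain Python callbacks. All names are hypothetical.

class DesktopShell:
    def __init__(self):
        self._hooks = {}  # event name -> list of registered callbacks

    def on(self, event, callback):
        """Let a user script hook into a desktop event."""
        self._hooks.setdefault(event, []).append(callback)

    def fire(self, event, **info):
        """Called by the shell when something happens; runs every hook."""
        return [hook(info) for hook in self._hooks.get(event, [])]

shell = DesktopShell()

# A one-line "user script": send IRC windows to workspace 9 as they open.
shell.on("window-opened",
         lambda info: "move %s to workspace 9" % info["title"]
                      if info.get("app") == "irc" else None)

print(shell.fire("window-opened", app="irc", title="#haskell"))
# ['move #haskell to workspace 9']
```

If something like this shipped by default, “scripting your experience” would be a couple of lines in a dotfile rather than a window-manager rewrite.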

Dear Canonical, I want you to succeed, I really do. I don’t want Apple to be the only competent player in town. But I need an environment that I can bend to my will instead of having everything hidden behind bling and “simplification”. I know that being a great programming environment is at the heart of Linux. I know that you have the people and the resources to advance the state of computing for all of us. So please zig to Apple’s zag.

PS. Perhaps Ubuntu can make a dent in the tablet and netbook market, if that’s their game. But the netbook market is already dying and let’s be honest, there’s an iPad market, not a tablet market. And even if that market does open up, Android has a head start and Amazon has far greater visibility. But Ubuntu has already gone where no Linux distro has gone before. For most people I know it’s the distribution they reflexively reach for. That developer-friendliness and trust is something they should be actively leveraging.

Making multiple monitors work for you

A few days ago I happened upon a post regarding large screens and multiple monitors, or rather the lack thereof. The article was a well-written but sloppily thought-out piece in which the author claims that having a single, small display makes him “meticulous”, “instantly less distracted” and generally more productive. I take exception to this claim.

I want to make the argument that if using multiple monitors is not boosting your productivity, then either you are not using them correctly, or your work does not require two monitors in the first place (which is perfectly fine). But going back to one small screen is not suddenly going to make you more productive. Productivity is a function of both the work you do and the tools you use, and it annoys me when so-called “technology bloggers” hold up interesting tools as the be-all and end-all.

Rants aside, the issue of multiple monitors is something that I’ve been thinking about on and off for the past few years. At various points I’ve gone between one and two screens. Over the summer I had a dual monitor desktop setup. Now I have my Macbook Air at home and a single 22″ monitor at work. Since I have a standing desk at work I don’t hook my Macbook up to the monitor. If there is a way to make the Air float in mid-air please let me know.

Here is my thesis: multiple monitors are most useful when your work actually needs you to be looking at multiple pieces of information at the same time. Any time you find yourself quickly switching between 2 or more applications, tabs or windows, you could probably benefit from adding a monitor (or two). Personally this happens to me all the time when I’m programming. I have one monitor devoted to code and one devoted to documentation about libraries I’m using. When I was taking a screenwriting class I would have my script on one monitor and the other would be devoted to research (mostly Wikipedia because I was writing an alternative history piece involving the British Empire).

I’ve always used multiple monitors with virtual desktops (or workspaces, as they’re called now). They blew my mind when I first started using Linux and I couldn’t live without them now. Some environments (OS X in particular) treat multiple monitors as one big extended desktop. I think that’s the wrong way to go about it. I prefer using XMonad with multiple monitors: it turns each monitor into a “viewport” onto my set of up to 9 single-monitor desktops. That means that I can control and move what’s on each monitor independently, which gives me much finer control over what I’m seeing at any time. If I have code, IRC and documentation open in three different workspaces, I can keep code open on one monitor and move between docs and IRC on the other if I need questions answered. This level of control is also why I think smaller dual monitors are better than larger single monitors.

XMonad is also a tiling window manager, which automatically arranges my windows to make the most efficient use of my screen space. It makes OS X’s fullscreen app mode look like a toy in comparison. Instead of futzing around laying out my windows by hand, I can have preprogrammed setups that are guaranteed to be the same every single time. This comes in very handy when you need to look at three terminals logged into three different computers at the same time. Spatial memory can be very powerful and it’s one more trick in the productivity bag. It’s also why I love Firefox Panorama, but that’s a matter for another post. XMonad also lets me use space that would otherwise go wasted: large screens actually get partitioned into usable layouts without my spending time and effort manually setting it all up.
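
XMonad itself is configured and extended in Haskell, so as a language-neutral illustration, here is a small Python sketch of the idea behind its default master/stack (“Tall”) arrangement. The function and its proportions are my own simplification, not XMonad’s actual code:

```python
# Sketch of a master/stack tiling computation: one big "master" pane on the
# left, and the remaining windows split the right-hand column evenly. This
# is a simplification of what tiling window managers like xmonad automate.

def tall_layout(n_windows, screen_w, screen_h, master_frac=0.6):
    """Return an (x, y, w, h) rectangle for each of n_windows."""
    if n_windows == 0:
        return []
    if n_windows == 1:
        return [(0, 0, screen_w, screen_h)]
    master_w = round(screen_w * master_frac)
    rects = [(0, 0, master_w, screen_h)]           # master pane
    stack_n = n_windows - 1
    stack_h = screen_h // stack_n                  # split the rest evenly
    for i in range(stack_n):
        rects.append((master_w, i * stack_h, screen_w - master_w, stack_h))
    return rects

# Three windows on a 1920x1080 screen: code gets the master pane,
# docs and IRC share the stack. Same input, same layout, every time.
for rect in tall_layout(3, 1920, 1080):
    print(rect)
# (0, 0, 1152, 1080)
# (1152, 0, 768, 540)
# (1152, 540, 768, 540)
```

The determinism is the point: because the layout is computed rather than dragged into place, the preprogrammed setup comes back identical every session.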

Two things to note about my usage: first, I do work that actually benefits from having multiple things open side-by-side. Second, I use tools that are molded to suit my workflow, not the other way around. And therein lies the fallacy when people start saying that going to smaller, single monitors increases their productivity. If you’re a writer who doesn’t need her research open side-by-side then you’re not going to benefit from another monitor. If you absolutely need to focus on Photoshop for long periods of time then a single big monitor might be best. As I’m writing this blog post I’m using one half of my piddly 13″ Air screen. If you’re having trouble using space on a large monitor, get a window manager that will do it for you. If your second monitor is filled with email, Twitter, IM and other distractions then of course it’s going to make you less productive. If you’re fighting with your technology rather than shaping it to your will then it’s not surprising that you get more work done with less tech.

The last thing I want to talk about is physical positioning. The article I mentioned at the beginning does make one valid point: if you have a dual monitor setup then either the seam between the screens is right in front of you, or one monitor is off to the side. The article also claims that having a monitor off to the side means that it’s less used (or completely unused). You don’t want to be staring at the seam by default, but I’m not sure that having the monitor to the side is such a bad thing. Again, when I’m writing code I like having the code front and center, and having the documentation off to the side isn’t such a bad thing. If your monitor to the side is really unused, you should consider the points I mentioned above. Manually dragging windows to the side can be a pain, and I think this might be the biggest killer for a lot of people.

When I’m at a desktop with two monitors I tend to sit directly in front of my “main” monitor (whichever has my code up) and have the second one off to one side. XMonad makes sure I’m never dragging anything over, and moving my head or eyes a few degrees every now and then isn’t a big deal. Your mileage may vary. But when I have a laptop connected to an external monitor, I prefer having the external one propped up so that its screen is above the laptop’s. Configurations with a small laptop side by side with a larger monitor have always struck me as odd. Stacking them one above the other makes the workflow slightly better. In Linux (or Gnome at least) you can definitely get the vertical stacking to work right; I haven’t tried it with OS X.

In conclusion, please think carefully about your workflow if you’re considering changing your monitor setup. Use tools that will work with you, not against you. Don’t think that a particular setup will work for you just because someone on the Internet promotes it or some piece of software makes that the default. People and companies get things wrong. Even Apple. There is no substitute for using your own head, especially when it comes to personal matters like workspace setup.

PS. I know that I haven’t linked to the original article. I did it on purpose because links are the currency of the Web and I don’t want to reward what I consider to be sloppy thinking and inaccuracy. If you really want to find it, Google is your friend. If you think I should link anyway, feel free to convince me in the comments.

The operating system for your brain

Last Friday I finished my summer internship at GrammaTech. A few days before that (I forget when exactly) the discussion on our IRC channel turned to cybernetic implants. We’re a company full of pretty hardcore software types; what do you expect? Though to be honest, I was the chief instigator. Anyway, the conversation quickly moved to the question of securing such implants. The questions raised are summarized by one coworker’s comment: “Which software vendor do you trust to write the operating system for your brain?” Given that regular implant technology probably isn’t too far in the future, the question is a valid one. For now my answer is: no one.

Let’s be honest: most of our computer systems are hopelessly insecure. And making them secure isn’t as simple as installing antivirus software from a big vendor. Depending on just how secure you need or want to be, you potentially have to go very, very deep. In a lot of cases the trouble is not worth it. Want to take down my VPS running my personal website and storing my Git repos? Go ahead, it’ll take me all of five minutes to shut it down and spin it back up, maybe half an hour to restore everything. That’s far easier than statically analyzing every line of the Linux kernel, the GNU utilities and the web stack for vulnerabilities (and then fixing them without introducing new ones or breaking things). This is not to say that these aren’t worthwhile, important activities; they’re just not top priority for most users.

However, it’s another matter entirely when the systems are mission-critical (banks, defense, the Internet backbone) or running inside our bodies. Coming back to the original problem, medical technology is quickly progressing to the point of us having fully functional implants replacing faulty organs. Insulin pumps are just the start. Cochlear implants and artificial limbs have been around for a while. Bionic eyes are slowly pushing forward and real cyborgs exist. We’re not going to see full cyberbrains just yet, and we’re definitely not throwing out the wetware for full synthetic bodies. But as the number of computers inside our bodies gradually increases, it’s never too early to start thinking about how we’re going to keep them safe, especially if we want them connected to the Internet (and we will).

Having our implants connected to the Net is a matter of convenience as well as health and safety. Real-time monitoring, remote diagnostics and over-the-air software updates would greatly cut down on the amount of time you spend in your doctor’s waiting room. However, if you want your arm or eyes hooked up to the Internet, you definitely want to be careful about who can connect to them. Asymmetric encryption and signing for all communications (especially updates) would be necessary, just for starters. I can see some kind of code-signing for the software itself being beneficial. But it raises the question of whether users can, or should be able to, hack their own organs. I really don’t want to jailbreak a critical organ if there is a possibility of bricking it. But at the same time I do have a right to my own body parts, biological or synthetic.
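
To make the sign-then-verify idea concrete, here is a toy sketch in Python using textbook RSA with deliberately tiny primes. This is purely illustrative: real firmware signing would use a vetted cryptography library and far larger keys, and the update name below is made up.

```python
# Toy asymmetric signing for implant updates: the vendor signs with a
# private key, the implant verifies with the public key before installing.
# Textbook RSA with tiny primes; wildly insecure, illustration only.

import hashlib

p, q = 61, 53                  # toy primes (a real key would be ~2048 bits)
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent (shipped inside the implant)
d = pow(e, -1, phi)           # private exponent (stays with the vendor)

def digest(update: bytes) -> int:
    # Hash the update, reduced mod n only because this toy key is tiny.
    return int.from_bytes(hashlib.sha256(update).digest(), "big") % n

def sign(update: bytes) -> int:
    """Vendor side: sign the update's digest with the private key."""
    return pow(digest(update), d, n)

def verify(update: bytes, signature: int) -> bool:
    """Implant side: refuse any update whose signature doesn't check out."""
    return pow(signature, e, n) == digest(update)

firmware = b"hypothetical-implant-update-v2"
sig = sign(firmware)
print(verify(firmware, sig))   # True: the genuine update is accepted
```

The same check is what makes the jailbreaking question sharp: whoever holds the private key, not the patient, decides what the implant will run.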

Aside: I wonder why cars don’t come with 3G connections for remote software upgrades. If the Kindle can do it, it can’t be that hard. Then again, car manufacturers haven’t exactly been the most innovative and forward-thinking in recent years. Maybe I should be talking to Elon Musk.

Even if the proper technical measures are in place, there is still the question of just who we trust to provide and potentially control our body parts. I don’t mind Apple storing my music, and Amazon can store and sync my books. I do mind them locking me in, which is why I’m still hesitant to go completely digital. But do I trust either of them (or any for-profit corporate entity) with my vital organs, or even non-vital ones? Furthermore, do they get keys to shut down “malfunctioning” organs, for some definition of “malfunctioning”? What safeguards are in place to prevent them from misusing these keys? Given the life-threatening consequences that such shutdowns might have, requiring a complex legal procedure to overturn them would be dangerous and ethically negligent.

When implants start becoming mainstream and popular we’re going to start seeing issues and problems similar to the ones with computer systems. There are always going to be people who want differing degrees of control over their technology, whether that technology be cars, computers or prosthetics. It would be interesting to see something like a “homebrew” implant scene come up, though I doubt it would rival the popularity of the homebrew computer scene. Like many important problems the questions are both technical and social in nature. So, who do you trust to write the operating system for your brain?

The Age of the Maker is here

Last week a friend sent me a link to the world’s first sub-$1000 PCR machine. PCR stands for Polymerase Chain Reaction, a method of replicating a section of DNA billions of times. This means you can now study the building blocks of life to your heart’s content, in your basement, for less than the price of a top-of-the-line computer. As the announcement says: DNA is now DIY.

OpenPCR joins a list of recent technological milestones including 3D printing, cheap embedded microcontrollers, ubiquitous computing and broadband Internet connections. The technological scene is supported by social phenomena like the open source movement, coworking and hacker spaces and organizations like Kiva and Kickstarter. The rise of increasingly powerful DIY technology and the surrounding social systems is pushing us toward what can best be described as the Age of the Maker.

Going from idea or innovation to self-sustaining product doesn’t require large factories or upfront investments anymore. As projects like OpenPCR and Coffee Joulies show, it’s feasible to create a truly novel, popular product with nothing more than talented, hard-working creators and willing customers. I’d like to believe that this is the beginning of a new industrial age, one that produces a similar improvement in the quality of human life without many of the bad side-effects of the last one. This revolution focuses on the individual and the small team rather than on the factory. Sure, there are businesses and there is manufacturing, but the point of it all is not just profit. Profit is important, but a lot of the people and groups I just mentioned are doing this largely because it’s fun and exciting.

Technology and the means of production are becoming increasingly democratic. What can be accomplished by small groups of focused individuals leveraging modern technology is truly amazing. The software industry has already shown that small groups of people can create products and services that change the world. Today’s generation of makers and hackers are taking that a step further, showing that such world-changing innovation doesn’t have to be limited to software.

I’m not an economist, but I’d argue that in many ways we’re seeing a reinvention of capitalism. Financial capital doesn’t have to be concentrated in the hands of a few; it can be widely distributed among millions of customers around the world. What is needed are people with the ideas and skills to bring that capital together just in time to create a product: the makers. And we now have the services required to bring the capital in (the Internet, Kickstarter, Kiva) and the cheap infrastructure needed to get the product out (UPS, FedEx, etc.). With OpenPCR, Arduinos, 3D printers and the like, we’re democratizing and distributing the means of production.

If you’re someone who likes building cool, interesting things, there has never been a better time to be alive. The Industrial Revolution brought about mass production and cheap commoditized goods, but it also decimated independent artisans and craftsmen. Today we’re getting ready to put all the manufacturing power of modern industrialization back in the hands of individuals with ideas and skills. With today’s technology, Leonardo da Vinci might have been able to build his flying machines.

What have you made today?

Separating work from play

A post by Seth Godin has shown up multiple times in my feed reader recently and ignited some old ideas. As with most of Seth’s posts, this one is short and tight with a good lesson tucked into the end. While you should read the whole thing if you’re in any sort of creative profession, here’s the pithy one-liner you need to remember:

Simple but bold: Only use your computer for work. Real work. The work of making something.

This ties in well with a tweet by the erstwhile _why the lucky stiff that I came across a few weeks ago:

when you don’t create things, you become defined by your tastes rather than ability. your tastes only narrow & exclude people. so create.

Creating and making things is important. And not just in a one-off, once-in-a-while manner, but on a regular, consistent, day-to-day basis. The reason that most of us get into programming, writing, designing and related fields is that we love building things. Let’s face it: the joy of making something is pure, unadulterated crack. Sure, it’s hard to get started and it can be even harder to keep going when things don’t go the way you want them to. And by the time we get done, we’re drained and tired and just want to sleep. But the rush of taking something out of our minds, something that was just a thought, and putting it into a definite shape and form is unequaled.

Unfortunately, as Seth Godin says, we’re using the same tools for both work and play, and that doesn’t turn out well. It’s hard to concentrate on writing or hacking when there are email and Twitter alerts clamoring for our attention. And it’s not just the momentary interruptions. Even if you aren’t getting bothered by notifications, it’s hard to gather the mental energy to create when it’s easier to play a game or check the latest Internet happenings. Seth Godin’s solution is deceptively simple: use separate machines for work and play. In fact, this is something I had written about in my rules for computing happiness.

Originally I had planned to wait until graduate school to put this division into effect. Since I was going to get a work machine from the department, I would use it for work only. There would be no social software on it, no Facebook or Twitter, no RSS feeds and maybe not even email. I would have a separate Macbook for my non-work stuff, social or not. I’ve heard horror stories about graduate students hemorrhaging time until suddenly it’s five years later and the thesis is only half done. I did not plan on being one of them.

I considered keeping my current setup, but Seth’s post led me to wonder whether I could make any quick, effective changes. The answer was staring me in the face. I’ve had a Google Chrome netbook for a few months now that comes with just ChromeOS. However, there is a developer switch that you can use to unlock it. Yesterday I flipped the switch and installed Ubuntu. I now have a lightweight, portable, lightning-fast machine that I can use for getting work done. Also, since this is a clean install, I can consciously avoid installing stuff that has no place on a work machine. I have the standard Gnome terminal, Emacs and Firefox 4.0, and that’s it. There isn’t even a music or media player. Since I always carry my iPod Touch, that can be my ‘play’ machine. It has all the distractions that I indulge in and my entire music library (which isn’t that big).

I’ve been playing the productivity game long enough to know that no technological tool or setup is a silver bullet for the problem of wasting time. The new setup is going to work only if I use it properly and consistently. There is going to be some work involved to break my old habits and set new, better ones but this is a start. Someday I’ll get around to reforming my other machines but till then this work/play setup will do nicely.