The Shape of the Future

I love Star Trek. As a kid, Kirk was my hero and Spock was always fascinating (and just a little bit mysterious). I loved the idea of starships exploring the galaxy, of alien worlds and strange beings. The technology of tomorrow was just amazing – communicators, tricorders, warp drives, phasers and even hyposprays. I’m pretty certain that Star Trek was what started my love of science and technology. If I hadn’t grown up watching the crew of the Enterprise (both Kirk’s and Picard’s) using science and technology to save the day, I would probably have been a historian or a writer. So basically, I owe Gene Roddenberry a huge debt.

A lot has happened since I was a five-year-old watching Kirk slug it out with the Klingons and Picard battle the Borg. I’ve read a lot more science fiction – Asimov, Heinlein, Clarke, recently Charlie Stross – and I’ve watched a lot more too – Star Wars, Battlestar Galactica, Doctor Who (though that’s not really science fiction), any number of scifi movies. More importantly, the world around me has changed: ubiquitous connectivity, portable supercomputers, massively distributed computation systems and the first steps towards cybernetic implants.

A lot of what we once considered science fiction we now accept as part of daily life without batting an eyelid. Our smartphones are much cooler than Kirk’s communicator (though “smartphone” is a misnomer; that’s a matter for another post). Any large datacenter probably has more computing power than the Enterprise. On the other hand we’re also far behind some science fiction classics. Warp drive, or any other form of faster-than-light travel, is still only a fantasy (though an ion-powered starship may be much closer to reality). It will probably be thousands of years before we achieve the technology (and harness the sheer resources) to create stable traversable wormholes.

That being said, there are areas in which we will probably surpass science fiction. I find it interesting that most “popular” science fiction shows have some form of interstellar travel and starships but little by way of advanced robotics, artificial intelligence, genetic engineering or nanotechnology. Even in shows that do have them (the later Star Trek series, BSG, Andromeda) they’re still mundane and boring. Robotics is mostly limited to personal butlers or killer drones. AIs are either hell-bent on destroying humanity or they’re our loyal servants. Human genetic engineering is either outlawed (Star Trek) or again rather banal (Andromeda). There are no interesting political or economic systems. There are no uploads, no interesting artificial life and very little by way of actual space-time engineering.

For better or for worse, our future is going to be far more interesting (and much less neat and tidy) than what scifi television would have us believe. Most science fiction literature paints a far more interesting vision of things to come. Ubiquitous computation and connectivity is just the beginning. We’re barely using any of the computation capacity in our pockets for our benefit. Within a decade or so I’d like to see a more subtle merging of man and machine as our technology becomes better at monitoring our behavior, actions and needs and steps in to take over when we’re under stress. With 3D printing getting better and cheaper we’re well on our way to another manufacturing revolution. I won’t be surprised if startups of the near future start shipping products as 3D-printable templates instead of physical products.

Any attempt to look into the future carries with it the danger of being hopelessly wrong. After all we were promised flying cars and we got a high-bandwidth globally distributed data and computation net instead. Not a bad bargain if you ask me. Luckily, while the future may be hard to foresee, it is also something we have a direct hand in shaping. “Invent the Future” has an inspiring ring to it. Perhaps for the first time in human history it’s actually possible for large sections of the human race to invent their own future. While interstellar travel and hard AI are still a dream and a hope away there’s a lot of interesting stuff between here and there.

The shape of the future is being designed on portable supercomputers, communicated over fast data nets and brought into being by affordable 3D printers. And we’re going to have a hand in making it happen.


Where is the computation?

I’m pretty happy with my Nexus S so far. It’s a decent phone with some solid apps and services. More importantly, it’s a well-equipped little pocket computer. However the more I use smartphones (and similar devices like the iPod Touch) the more I feel a nagging sense that I’m not really using these devices well, at least not to their full potential.

While the devices in our pockets might be increasingly powerful general purpose computers I feel like we use them more for communication than for computation. That’s not to say that communication does not require computation (it does, lots of it), but we’re not using our devices with the goal of solving problems via computation.

This is perhaps a very programmer-centric viewpoint of mobile technology, but one that is important to consider. Even someone like me, who writes code on a regular basis to solve a variety of both personal and research problems, does very little computation on mobile devices. In fact, the most I’ve been using my Nexus for is email, RSS reading, Twitter, Facebook and Foursquare. While all those services definitely have good uses, they are all cases where most of the computation happens far away on massive third-party datacenters. The devices themselves act as terminals (or portals if you prefer a more modern-sounding term) onto the worlds these services offer.

Just to be clear, I’m not saying that I want to write programs on these devices. Though that would certainly be neat, I can’t see myself giving up a more traditional computing environment for the purposes of programming anytime soon. However, I do want my device to do more than help me keep in touch with my friends (again, that’s a worthy goal but just the beginning). So the question is, what kind of computation do we want our mobile devices to do?

Truth be told, I’m not entirely sure. One way to go is to have our phones become capable personal assistants. For example, I would like to be able to launch an app when I walk into a meeting (or better yet, have it launch itself based on my calendar and geolocation). The app would listen in on the conversation, apply natural language processing and generate a list of todos, reminders and calendar items automatically based on what was said in the meeting. Of course there are various issues (privacy, technology, politics, corporations playing nicely with each other) but I think it’s a logical step forward.
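As a rough illustration of the idea (and nothing more – a real assistant would need speech-to-text and proper natural language processing), here’s a minimal sketch of pulling action items out of a meeting transcript with simple pattern rules. All the patterns and the sample transcript are made up:

```python
import re

# Hypothetical sketch: extract todos and calendar items from a meeting
# transcript using naive keyword patterns. A real assistant would apply
# actual NLP; this only illustrates the shape of the idea.
ACTION_PATTERNS = [
    (re.compile(r"\b(?:I'll|I will|we should|remember to)\s+(.+?)(?:\.|$)",
                re.IGNORECASE), "todo"),
    (re.compile(r"\b(?:meet|meeting)\s+(?:again\s+)?on\s+(\w+)",
                re.IGNORECASE), "calendar"),
]

def extract_items(transcript):
    """Return a list of (kind, text) items found in the transcript."""
    items = []
    for line in transcript.splitlines():
        for pattern, kind in ACTION_PATTERNS:
            for match in pattern.finditer(line):
                items.append((kind, match.group(1).strip()))
    return items

transcript = """Alice: I'll send the budget draft by Friday.
Bob: We should review the prototype next week.
Alice: Let's meet again on Tuesday."""

for kind, text in extract_items(transcript):
    print(kind, "-", text)
```

Keyword matching like this would obviously miss most of what people actually say in meetings; the interesting (and hard) part is exactly the NLP that this sketch waves away.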

As payment systems in phones become more popular, I’d like my phone to become my banker too (and I’m not just talking about budgeting and paying bills on time). For example if I walk into a coffee shop my phone should check if I’m on budget as far as coffee shops go and check coffee shops around the area to suggest a cheaper (or better, for some definition of better) alternative. And it doesn’t just have to be limited to coffee shops.
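The core of that banker logic is simple enough to sketch. Here’s a toy version, assuming the phone already knows my monthly coffee budget, what I’ve spent, and the prices of nearby shops (all the names and numbers below are invented for illustration):

```python
# Hypothetical sketch of the "phone as banker" idea: suggest a cheaper
# nearby shop only when the current purchase would bust the budget.
def suggest_alternative(budget, spent, current_price, nearby):
    """Return a cheaper nearby shop dict, or None if the purchase fits
    the budget or nothing cheaper is around."""
    if spent + current_price <= budget:
        return None  # within budget, no nagging needed
    cheaper = [shop for shop in nearby if shop["price"] < current_price]
    if not cheaper:
        return None  # nothing better around
    return min(cheaper, key=lambda shop: shop["price"])

nearby = [
    {"name": "Bean There", "price": 3.50},
    {"name": "Grind House", "price": 2.25},
]
# $18 of a $20 budget already spent, about to pay $4: time to intervene.
pick = suggest_alternative(budget=20.00, spent=18.00,
                           current_price=4.00, nearby=nearby)
print(pick["name"])
```

The hard parts are everything around this function – knowing where I am, what I’m about to buy, and what the nearby prices are – which is exactly where payment systems and geolocation come in.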

Mobile technology is sufficiently new that most of us don’t have a very clear idea of what to do with it (or a vision of what it should do). Most so-called “future vision” videos focus more on interfaces than actual capabilities. However this technology is evolving fast enough that I think we’re going to see the situation improving quickly. With geolocation-based services, NFC and voice commands becoming more ubiquitous and useful the stage is becoming set for us to make more impactful uses of the processors in our pockets. As a programmer I would love to be able to hook up my phone to any cloud services or private servers I’m using and be able to interact with them. The mobile future promises to be interesting and I’m definitely looking forward to it.

A New Year, A New Phone

This year I’ve decided to make a foray into the future by finally getting myself a proper smartphone. I’ve had an iPod Touch for a while but also had a simple Nokia not-smart phone to make actual phone calls. It’s always been somewhat annoying to have to manage two devices: a real phone for calls or texts and the iPod for any Internet and data-related work. A large part of my resistance to getting an actual smartphone was that I simply didn’t want to spend a lot of money on a cell phone plan when I was surrounded by wi-fi all the time and barely made actual phone calls. But now that there are finally both reasonably cheap unlocked smartphones and contract-free data plans I decided to bite the bullet.

The unlocked iPhone 4S would end up costing me a tad over $800 after tax and Applecare. I was also getting bored of the iOS ecosystem and its closed, silo system for apps. So instead I got myself a much cheaper unlocked Android phone – the Google/Samsung Nexus S. I’m pairing that with a $30 a month T-Mobile data and phone plan. I’m still waiting for a new SIM card to show up but till then I’m making use of the ample wifi coverage that’s a side-effect of living in a college town. For now, I’m only going to talk about my first impressions on the Nexus S itself.

Google Nexus S (via Wikipedia)

The Nexus S is Google’s previous flagship phone. Its current flagship is the Galaxy Nexus, which Google is also selling unlocked. However it’s almost twice the price I paid for the Nexus S and, in my opinion, isn’t enough of an upgrade to justify the price. Even though the Nexus S is about a year old by now (and technically running an older version of Android), I haven’t had a problem with it so far.

It looks pretty different from the iPhone and the plastic feel takes some getting used to. I also think it slips more easily, but that might just be a personal problem. The back of the phone has something of a ridge at the bottom which I guess is supposed to make it easier to hold. Though the build quality does feel inferior to the iPhone’s, I like it and have no major complaints.

The Android software feels like a breath of fresh air compared to the iPhone. It is considerably more customizable and I like the presence of both traditional apps and “widgets” that add functionality directly to your home screens. I’ve found widgets great for quickly looking up data like the weather, Twitter mentions or what system services are currently running.

The tinkerer in me loves how customizable the Android system is. Changing the look and feel is just the beginning. There are a lot of bells and whistles and options and sometimes it can be rather confusing. For now I’ve only stuck to the usual set of apps (Twitter, Foursquare, Camera) but I’m looking forward to trying out new and interesting apps in the future. More than that, I feel like Android would be a really good platform if I decide to get into mobile dev anytime soon.

There are a few things about the Nexus S that I’m concerned with. I think the battery life is a tad too short, especially with the geolocation services on all the time. Luckily, the battery monitor widget makes it simple to turn off services with a touch, so some manual management might make it better. While the Google apps are really well integrated (especially Google Voice) and apps from large companies are well done, third-party apps seem to be of considerably lower quality than their iOS equivalents. I don’t really blame the developers given the multitude of devices, but it does mean that finding good apps for simple things like RSS is more difficult than it should be.

Despite the glitches and minor annoyances I really like the Nexus S. The hardware is pretty solid and I like Android so far. Right now having a fully functional smartphone is still pretty new to me, but I’m hoping that when the novelty wears off I’ll dive into actually programming the powerful computer in my pocket.

The Interface Paradox

As much as I love programming and good old-fashioned text-based command lines, I have an interest in ergonomics and futuristic interfaces. A few days ago a post entitled “A Brief Rant on the Future of Interaction Design” made the rounds on the Internet. It opens with an old, but interesting video and goes on to make the argument that our current obsession with flat touchscreens and simple gestures is doing us all a disservice. Our hands are capable of complex gripping, grasping and touching motions, and having all that expressivity confined to a small, two-dimensional surface with a limited number of motions is self-defeating. The article makes the damning statement: “Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?”

The article helped me express an uncertainty that’s been floating back and forth in my mind for some time. I use my iPod Touch on a daily basis and I’ve been loving the multitouch trackpad on the new Macbooks. I love the swiping motions for window management and moving things around. At the same time I’ve started drawing by hand again (I loved drawing as a kid) and I realize that putting a pencil to paper is a rather complex but very fulfilling activity. Strangely enough, I think that both the pencil and the touch-based iOS interface have a lot in common. In both cases, the actual physical device almost disappears, letting you focus on the underlying application. The iPad or iPhone itself is just a thin frame around whatever app you’re using. The pencil is basically just a simple pointer but allows us to create an infinite range of images with it.

However in both cases, the expressiveness offered by the device is not enough. Pencils are not enough to express all the images we might want to create. That’s why we have pens, brushes, chalk, crayons and a variety of papers and canvases. The flat touch interface is also not enough, especially if we are confined to a small surface that fits in one hand. The question then is how we can take the simplicity of our current touch interfaces and extend them to a larger set of expressions and interactions.

Case in point is the camera interface on the iPhone. For a long time there was a software button that you had to touch to take a picture. But that meant sticking your finger in the middle of the picture. Normal cameras have a better interface: there is a shutter button on the top that keeps your hands far from the actual image (even if you’re using an LCD screen instead of a traditional viewfinder). This deficient interface on the iPhone led to the Red Pop, a giant red shutter button, and now iOS 5 turns one of the hardware volume buttons into a shutter button.

The Red Pop camera interface for the iPhone

Having a fluid, upgradeable, customizable software interface is nice and I like smooth gradients and rounded corners as much as the next guy. But our hands evolved to use actual physical matter, and before computer interfaces we built a lot of interesting physical interfaces. Apple has hooked us on the idea of sleek, smooth devices with nothing extraneous. While it’s great to lose unnecessary knobs and edges, the Apple design philosophy might not be best in the long run, especially if your device’s UI doesn’t neatly fit into the touch-drag-swipe system of gestures.

Ultimately it would be great to have “smart matter” physical interfaces – the flexibility and programmability of software with the physical usability that solid matter offers. Imagine some sort of rearranging material (based on some form of nano- or micro-technology maybe?) that can be a simple smooth shell around your interfaces but can change to form buttons, sliders, knobs or big red shutter buttons as your application requires. But in the years (decades?) between now and then we need other solutions. The range of accessories and extensions available for the iPhone (including the Red Pop, tripods, lenses etc.) seems to suggest that an enterprising young device maker could use the iPhone (and its successors and competitors) as a computing core to which they can attach their own physical extensions. With a more open and hackable platform (an Android-Arduino hybrid perhaps) we might see a thriving device market as well as an app market. Am I a dreamer? Hell yeah, but as the projects I’ve linked to show, I’m certainly not the only one.

Ubuntu should zig to Apple’s zag

It’s another October and that means it’s time for another Ubuntu release. Before I say anything, I want to make it clear that I have the utmost respect for Mark Shuttleworth, Canonical and the Ubuntu project in general. I think they’ve done wonderful things for the Linux ecosystem as a whole. However, today I’m siding with Eric Raymond: I have deep misgivings about the direction Ubuntu is going, especially in terms of user interface.

I’m not a UI or UX designer. I’m sure there are people at Canonical who have been studying these areas for longer than I have. But I am a daily Linux user. In fact I would say that I’m a power user. I’m no neckbeard, but I think that by now I have a fair grasp of the Unix philosophy and try to follow it (my love for Emacs notwithstanding). The longer I watch Ubuntu’s development, the more it seems that they are shunning the Unix philosophy in the name of “user friendliness” and “zero configuration”. I think that’s absolutely the wrong way to go.

It seems that Canonical is trying very hard to be Apple while not being a total ripoff. Apple is certainly a worthy competitor (and a great source to copy from) but this is a game that Ubuntu is not going to win. The thing is, you can’t be Apple. That game has been played, that ship has sailed. Apple pretty much has the market cornered when it comes to nice shiny things that just work for most people irrespective of prior computer usage. Unless somehow Canonical sprouts an entire ecosystem of products overnight they are not going to wrest that territory from Apple.

That’s not to say that Canonical shouldn’t be innovating and building good-looking interfaces. But they should play to the strengths of both Linux the system and Linux the user community instead of fighting them. Linux users are power users. In fact I think Linux has a tendency to encourage average computer users to become power users once they spend some time with it. I would love to see Ubuntu start catering to power users instead of shooing them away.

It’s becoming increasingly clear that Apple does not place its developers above its customers. That’s a fine decision for them to make. It’s their business and their products and they can do whatever they like. However as a programmer and hacker I am afraid. I’m scared that we’re getting to the point where I won’t be able to install software of my choosing without Apple standing in the way. I’m not talking about just stuff like games and expensive proprietary apps, but even basic programming tools and system utilities. That’s not something that I’m prepared to accept.

Given the growing lockdown of Apple’s systems, Canonical should be pouring resources into making Ubuntu the best damn development environment on the planet. That means that all the basics work without me tinkering with drivers and configurations (something they’ve largely accomplished). It means that there’s a large pool of ready-to-install software (which they also have) and that it’s possible (and easy) to install esoteric third-party tools and libraries. Luckily the Unix heritage means that the system is designed to allow this. Instead of trying to sugarcoat and “simplify” everything there should be carefully thought-out defaults that I can easily override and customize. Programmability and flexibility grounded in well-tuned defaults should be the Ubuntu signature.

It makes even more sense for Canonical to take this angle because Apple seems to be actively abandoning it. A generation of hackers may have started with BASIC on Apple IIs, but getting a C compiler on a modern Mac is a 4GB Xcode download. Ubuntu can easily ship with a default arsenal of programming tools. Last I checked, the default install already includes Python. Ubuntu can be the hands-down, no-questions-asked platform of choice for today’s pros and tomorrow’s curious novices. Instead of a candy-coated, opaquely-configured Unity, give me a sleek, fully programmable interface. Give me a scripting language for the GUI with first-class hooks into the environment. Make it dead simple for people to script their experience. Encourage them and give them a helping hand. Hell, gamify it if you can. Apple changed the world by showing a generation the value of good, clean design. Canonical can change the world by showing the value of flexibility, programmability and freedom.
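Some of this scriptability already exists in embryonic form: GNOME (and hence Ubuntu) exposes desktop settings through the real `gsettings` command-line tool. As a rough sketch of what a “script your desktop” helper might look like, here’s a toy Python snippet that composes `gsettings set` invocations; the particular schemas and values are examples and may vary between GNOME versions:

```python
import shlex

# Hedged sketch: build (but don't run) the shell commands a desktop
# customization script might execute via GNOME's gsettings CLI.
def gsettings_cmd(schema, key, value):
    """Compose a `gsettings set` invocation as a safely quoted string."""
    return " ".join(shlex.quote(p) for p in ["gsettings", "set", schema, key, value])

# Example: switch the GTK theme and disable hot corners in one script.
commands = [
    gsettings_cmd("org.gnome.desktop.interface", "gtk-theme", "Adwaita-dark"),
    gsettings_cmd("org.gnome.desktop.interface", "enable-hot-corners", "false"),
]
for cmd in commands:
    print(cmd)
```

That’s a far cry from first-class hooks into the environment, but it shows the seed is there: a text-driven, composable interface to the GUI that a distribution could actually build on.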

Dear Canonical, I want you to succeed, I really do. I don’t want Apple to be the only competent player in town. But I need an environment that I can bend to my will instead of having everything hidden behind bling and “simplification”. I know that being a great programming environment is at the heart of Linux. I know that you have the people and the resources to advance the state of computing for all of us. So please zig to Apple’s zag.

PS. Perhaps Ubuntu can make a dent in the tablet and netbook market, if that’s their game. But the netbook market is already dying and let’s be honest, there’s an iPad market, not a tablet market. And even if that market does open up, Android has a head start and Amazon has far greater visibility. But Ubuntu has already gone where no Linux distro has gone before. For most people I know it’s the distribution they reflexively reach for. That developer-friendliness and trust is something they should be actively leveraging.