Where is the computation?

I’m pretty happy with my Nexus S so far. It’s a decent phone with some solid apps and services. More importantly, it’s a well-equipped little pocket computer. However, the more I use smartphones (and similar devices like the iPod Touch), the more I feel a nagging sense that I’m not really using these devices well, at least not to their full potential.

While the devices in our pockets might be increasingly powerful general-purpose computers, I feel like we use them more for communication than for computation. That’s not to say that communication does not require computation (it does, lots of it), but we’re not using our devices with the goal of solving problems via computation.

This is perhaps a very programmer-centric viewpoint of mobile technology, but one that is important to consider. Even someone like me, who writes code on a regular basis to solve a variety of both personal and research problems, does very little computation on mobile devices. In fact, the most I’ve been using my Nexus for is email, RSS reading, Twitter, Facebook and Foursquare. While all those services definitely have good uses, they are all cases where most of the computation happens far away in massive third-party datacenters. The devices themselves act as terminals (or portals, if you prefer a more modern-sounding term) onto the worlds these services offer.

Just to be clear, I’m not saying that I want to write programs on these devices. Though that would certainly be neat, I can’t see myself giving up a more traditional computing environment for the purposes of programming anytime soon. However, I do want my device to do more than help me keep in touch with my friends (again, that’s a worthy goal but just the beginning). So the question is, what kind of computation do we want our mobile devices to do?

Truth be told, I’m not entirely sure. One way to go is to have our phones become capable personal assistants. For example, I would like to be able to launch an app when I walk into a meeting (or better yet, have it launch itself based on my calendar and geolocation). The app would listen in on the conversation, apply natural language processing and generate a list of todos, reminders and calendar items automatically based on what was said in the meeting. Of course there are various issues (privacy, technology, politics, corporations playing nicely with each other) but I think it’s a logical step forward.

As payment systems in phones become more popular, I’d like my phone to become my banker too (and I’m not just talking about budgeting and paying bills on time). For example, if I walk into a coffee shop, my phone should check whether I’m on budget as far as coffee shops go and scan nearby shops to suggest a cheaper (or better, for some definition of better) alternative. And it doesn’t have to be limited to coffee shops.
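Just as a toy illustration, the budget check I have in mind boils down to something like the following; the spending figures, budget and shop prices in it are all invented:

```python
# Toy sketch of the budget-aware suggestion logic described above.
# All data (spending, budget, nearby shop prices) is hypothetical.

def suggest_alternative(spent_this_month, monthly_budget,
                        current_price, nearby_prices):
    """Return the name of a cheaper nearby shop if this purchase would
    blow the category budget, or None if we're fine where we are.

    spent_this_month: total spent in this category so far
    monthly_budget:   the cap for this category
    current_price:    price at the shop we just walked into
    nearby_prices:    mapping of shop name -> price for alternatives
    """
    if spent_this_month + current_price <= monthly_budget:
        return None  # still on budget: no need to suggest anything
    cheaper = {shop: p for shop, p in nearby_prices.items()
               if p < current_price}
    if not cheaper:
        return None  # nothing cheaper around anyway
    return min(cheaper, key=cheaper.get)  # cheapest alternative
```

So with $38 already spent against a $40 budget, a $4.50 coffee would trigger a suggestion for the cheapest nearby shop, while the same coffee early in the month would pass silently.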

Mobile technology is sufficiently new that most of us don’t have a very clear idea of what to do with it (or a vision of what it should do). Most so-called “future vision” videos focus more on interfaces than on actual capabilities. However, this technology is evolving fast enough that I think we’re going to see the situation improve quickly. With geolocation-based services, NFC and voice commands becoming more ubiquitous and useful, the stage is being set for us to make more meaningful use of the processors in our pockets. As a programmer, I would love to be able to hook up my phone to any cloud services or private servers I’m using and interact with them directly. The mobile future promises to be interesting and I’m definitely looking forward to it.

I miss my server

Instead of the usual Sunday Selection, I’m going to present a more personal article today. I’ve been busy moving for most of the last week, and by Thursday night I finally had my college room mostly set up the way I wanted it. What better way to celebrate a new room and relax after a few days of heavy lifting than to completely re-install a server OS and restore everything from backups, right? My server is an old G4 Powermac with 768MB of RAM and, currently, 80GB of hard disk space. I used it as a desktop for about a year before I got my hands on a Mac mini and decided to turn it into a server.

Digital Disaster

I had been running OS X Leopard on it for the last year, and though it worked well enough, some things were starting to get to me. I didn’t like having to use a VNC client to get in and change settings, and I wanted a simple command-line way to install programs. Though tools like Fink would have let me do that, I decided that for a production server I wanted to run Linux. Luckily for me, Arch Linux has a PowerPC port, so I decided to bite the bullet, wipe the hard disk clean and install Arch on it. Unfortunately, that didn’t quite go according to plan. Though I managed to start the installer and partition the hard drive, I simply could not install anything. I also tried an over-the-network install, but I couldn’t get things off the software repository servers either. After a good three hours of fighting, I decided to throw in the towel.

I’ve decided to use the Fedora 11 PowerPC distribution instead, but I’ll probably strip it down to the bare essentials. I have to wait until Monday to get physical access to my server, and till then I’m stuck with just my Arch laptop and my Mac mini desktop. I’m just coming to realize how much I’ve become used to having a personal server. I’ve started using it as something of a personal cloud: I use it to store and sync data between my machines (using Git) and to test out my web designs. Since I move between computers regularly, I’m feeling rather crippled without my server online.

The Revelation

The thing is, I value mobility and security very highly. I’ve gotten used to starting a document or some other project on my Mini, working on it till my back hurts from sitting on the terrible school chair, and then moving to the lounge to sit on the couches. Just before the move, I commit my work to a Git repository, push it to a remote mirror on the server, and then pull it down to my laptop, and I’m up and running like nothing happened. If need be, I can use Google Docs, which I do if I’m on a lab machine, but I really like to have a secure, safe, version-controlled copy of anything important that I’m working on, and I simply can’t bring myself to rely on Google Docs exclusively. My server is both a safe haven and an electronic freight route.
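That commit-push-pull handoff is simple enough to script. Here is a rough sketch in Python; the remote name `origin` and the default commit message are my own placeholders, and `dry_run` just prints the commands instead of running them:

```python
import subprocess

# Sketch of the desk-to-couch handoff described above. Assumes the
# working directory is a Git checkout whose "origin" remote points at
# a bare repository on the server, reached over SSH.

def sync_commands(message="wip: moving machines"):
    """Commands to run on the machine being left behind."""
    return [
        ["git", "add", "-A"],
        ["git", "commit", "-m", message],
        ["git", "push", "origin", "HEAD"],
    ]

def resume_commands():
    """Commands to run on the machine being picked up."""
    return [["git", "pull", "origin", "HEAD"]]

def run_all(commands, dry_run=False):
    """Run each command in order; with dry_run, only print them."""
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Calling `run_all(sync_commands())` before getting up and `run_all(resume_commands())` after sitting down captures the whole routine.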

As you can well imagine, having the server down throws that workflow out the window. All of yesterday (and most of today) I’ve been very lethargic and reluctant to do any real writing. I dislike being tied to my laptop or desktop and I’m afraid to start anything lest I feel the need to move and have a hard time actually making the move. That in turn means that I have stuff on just one machine, something which I’ve learned to be paranoid about. The effects of this paranoia manifested in the form of writer’s block. It took me a good few hours to get myself to actually sit down and write this.

I’ll be the first to admit that I’m probably going overboard with all this. The world won’t end in two days (hopefully), and even if I start something new, I probably won’t have anything that really needs to be backed up. I suppose the real truth is that I’ve become used to a rather eccentric and, for me, efficient way of working. I’ve found and learned to use tools that aren’t used by many people, including many tech enthusiasts. It is cloud computing in a way, but for the most part in my own personal cloud. I’ve made a trade-off, and it hasn’t been an easy one to make: I’ve traded a rock-solid, well-managed solution (like Google Docs) for something that is more flexible and suitable, but that I have to manage myself. It’s not a choice I regret, except maybe at times like this. In fact, I don’t think regret is quite the word. A few changes would make the situation much better (i.e. using stock x86 hardware with Arch Linux), but I don’t make them for reasons that are not really technical. I’ve become my own cloud; I have a few dark and gloomy patches, but I could become bright and shiny without too much trouble.

Looking ahead

I’ll have my server back up and running, hopefully by lunchtime on Monday, with a fully functional Linux install. But this little incident has shown me that I need to give some serious thought to how I maintain my server in the future. The electrical engineer in me likes to have the actual physical device around (I’m putting in another hard drive once this problem is solved). But when I leave college in two years’ time, I’m not dragging the fairly heavy Powermac with me. At that point I will almost certainly switch over to a hosted virtual server of some sort, maybe a Linode, depending on my budget. That will remove the need to personally maintain the hardware and deal with any resulting downtime.

The other question that needs answering is what software to run. Till Friday I ran OS X, but I’m pretty certain I’ll keep my servers on Linux from now on. I’ll run Fedora for now, but when I transition to a virtual server, I would like to run Arch Linux to have a more uniform computing environment. I intend to keep running an Apache server and using Git via SSH. However, if I keep using public computers the way I do, I’m going to have to find a way to get things into Git without having to save them in Google Docs first and then manually put them in the repo. Luckily for me, most of the work I do is text, and the Bespin text editor is coming along nicely. It’s a really nice online text editor, and the backend already has support for version control systems like Mercurial and Subversion. I plan on running a local copy on my server, and when it grows Git support, I’ll try to jump ship from Google Docs. Of course, for non-text files I’ll still need to use a proper Git install, but I work with those rarely enough that it shouldn’t be a problem.

By the end of tomorrow I should have a fully operational Linux install ready to go. It’ll probably take a few more days of work to get everything I want installed and configured right, and to have all my Git repositories set up properly again. After that I’ll begin looking into running Bespin and any other interesting personal cloud software I might find.

CherryPal: Is cloud computing finally here?

Slashdot just linked to an article about the CherryPal, a lightweight computer designed to run applications off the Internet rather than locally installed ones. The CherryPal is light both in terms of size and power: it weighs only 10.5 ounces and draws just 2 watts, powered by a low-power 400MHz Freescale processor with just 4GB of storage and 256MB of RAM. That’s incredibly low-powered by today’s standards. However, unlike other lightweight machines such as the OLPC or the Eee PC, the CherryPal isn’t meant to be a stripped-down generic computer.

The CherryPal runs an embedded version of Debian Linux, but the operating system is effectively hidden from the user. While there are a few locally installed applications like OpenOffice and Firefox, along with multimedia support, the bulk of the CherryPal’s applications run online and are augmented by 50GB of online storage. Essentially, the CherryPal is meant to be just a gateway to the cloud. I’ve talked about cloud computing before and I’m gradually coming to believe that cloud computing is going to be one of the core infrastructures of 21st century computer technology. While I am a bit skeptical about the CherryPal’s ultra-light specs, I think it is certainly a good concept. The question is, will it all actually work?

Two days ago Infoworld ran an article comparing four cloud computing providers. The article was interesting for showing that while cloud computing is already a feasible option for those who want to try it, there is still no lowest common denominator for the industry as a whole. The four services are quite different, and if you should ever be in the position of moving from one to another, I don’t think the experience would be entirely painless (or even possible, in some cases).

When cloud computing becomes an everyday reality (which won’t be too long, looking at how things are going), there will likely be a three-tier structure. The first tier will be the user: you, running a low-cost, low-power computer, perhaps not very different from the CherryPal. It will have just enough power to run a browser with full Javascript and Flash, and maybe a very minimal set of on-site software. You’ll mainly use this machine to access online services, such as Zoho’s suite for office work, Gmail for email, MP3tunes to store your music and so on. These services form the second tier, with all your data stored online behind layers of encryption and multiple backups. Some of these providers will have their own servers with their own back-end infrastructures. But a lot of them, especially smaller, newer startups, will simply lease computing power and storage from the third tier: the even larger clouds provided by companies such as Amazon and Google (S3 and App Engine, for example).

Of course, cloud computing won’t completely replace normal computers. I’d hate to play a 3D FPS over the cloud, and more intensive graphics applications will still need powerful on-site hardware for acceptable performance (though Photoshop Express does show promise). There’s also the fact that many people will simply not trust their data to another party. However, for these people, it would be possible to run their own cloud. With the falling price of hardware and storage, you can even today easily put together a powerful home server and connect to it remotely, thus not having to lug your data with you everywhere. In fact, this will be one of my projects for next semester at school.
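As a taste of how little code the most basic version of that home-server idea takes, here is a minimal sketch using Python’s standard-library HTTP server. The port number is arbitrary, and a real setup would of course need authentication and encryption on top of this:

```python
import functools
import http.server
import socketserver
import threading

# Minimal sketch of the "run your own cloud" idea: serve a local
# directory over HTTP so other machines can reach your files. This
# has no security whatsoever; it only shows the bare mechanism.

def serve(directory, port=8765):
    """Serve `directory` over HTTP in a background thread."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=directory)
    server = socketserver.TCPServer(("", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # call server.shutdown() when done
```

Point it at a directory, and any machine on the network can fetch files from `http://your-server:8765/` with a browser.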

So is cloud computing finally here? Not quite. The industry is still in a growing stage and needs to become much more mature before it gets there. We will need to see standardization, in the same way that the web has (mostly) standardized around HTML and CSS. Different cloud computing providers will have to work together so that it is possible to pull features from different clouds and change providers if necessary. Hopefully this time around we’ll see a more civilized growth of a powerful new computing paradigm and not a rerun of the browser wars of the 90s.

Cloud Computing: coming full circle

Google Operating System has a recent article about a presentation on cloud computing by Google China’s president, Dr. Kai-Fu Lee. Cloud computing is what is gradually making Google a powerful force on the planet. The idea is to reduce a user’s dependence on any one computer or device. All data and applications are stored “in the cloud”, so to speak. Everything is backed up on multiple servers and is easily accessible from an Internet-connected device. This means that you can use your data, and the applications to manipulate that data, from almost anywhere in the world (anywhere with a decently fast internet connection, at least). It’s an extremely powerful idea, with numerous benefits both for the users and for whoever supplies your software.

As the user, the most obvious benefit is that you can access your (often very important) data from many different places. Also important is that you can take advantage of the massive amount of powerful hardware and computing resources available in the cloud. You no longer need a 3GHz processor with 4GB of RAM just to edit that letter to your boss; you just need enough power to run a modern browser over a fast internet connection. The result of having so much power is of course obvious right now: any web-based search is much faster than any desktop search, thanks to much more hardware and powerful algorithms that take advantage of it (Google’s MapReduce comes to mind).

For the company providing the cloud services, the advantages are just as attractive. There aren’t a dozen different versions to support, with all the related compatibility issues. Once bugs are tracked down and fixed, the patches can be rolled out without bothering the user. Google has been using this in its gradual upgrades to Gmail and Google Documents, and it has been working quite well. For every piece of software that the user sees, there is a lot more that is carefully hidden away. When you’re running a cloud, you have control over what the user sees as well as what is behind the scenes. With some care you can make sure that the two software systems remain essentially separate but still work well together (unless you don’t control the cloud software, but that’s a different matter entirely).
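For the curious, the MapReduce idea mentioned above can be illustrated in a few lines of single-process Python: a map step emits (word, 1) pairs, a shuffle groups the pairs by key, and a reduce step sums each group. Google’s real implementation distributes these same steps across thousands of machines, but the shape of the program is identical:

```python
from collections import defaultdict
from itertools import chain

# Toy single-process illustration of the MapReduce pattern,
# counting words across a collection of documents.

def map_phase(document):
    """Map: turn one document into a list of (word, 1) pairs."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: collapse each group of values into a single count."""
    return {key: sum(values) for key, values in groups.items()}

def word_count(documents):
    pairs = chain.from_iterable(map_phase(doc) for doc in documents)
    return reduce_phase(shuffle(pairs))
```

For example, `word_count(["the cloud", "the mainframe"])` yields `{"the": 2, "cloud": 1, "mainframe": 1}`; since each map and reduce call is independent, they can be farmed out to separate machines, which is exactly where the speed comes from.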

Cloud computing seems to be the wave of the future, and with ubiquitous broadband, I think this will be a very efficient way of utilizing powerful computing resources. At the same time, cloud computing reminds me of the old days of time-sharing computing (which I was born far too late to be a part of). Data and software lived on a large mainframe (later minicomputers), either at a company’s computing headquarters or on a college campus. To access them, users could log in from a remote terminal, sometimes dialing in over the phone lines. Of course, in those days users shared processor time and memory on just one computer, and data transfer speeds were slow, even for the plain text that went around. But the user’s data and applications were still centrally located, accessible uniformly from any terminal connected to the system.

How different is this from today’s cloud computing? Well, very. Data speeds are higher, communication is available not just to hard-wired terminals but to any internet-connected device, and there is far more computing power available to users. But the core concept is nevertheless essentially the same: data located in one place, but uniformly accessible from multiple separate locations. In many ways it’s like the saber-tooth: a design that has appeared many times throughout evolution precisely because it is a good design. Cloud computing too is a good design, one that is going to stay with us for a long time to come.