The Age of the Cyborg is upon us

And they’re nothing like what the movies make them out to be. Today’s (and tomorrow’s) cyborgs are not a random and gruesome mix of metal and flesh out to destroy the rest of us. Rather, today’s cyborgs are… us. Each and every one of us, in some form or another. So what am I talking about and how did this come to pass?

For starters, technology, especially computer technology, has permeated every aspect of our lives. And along with the computer has come the network. Within the next decade mobile broadband will become ubiquitous (at least in urban areas), meaning that we will always be connected to the full knowledge and collective intelligence of the internet. As a direct result we are all gradually becoming cyborgs: our machines, especially in the form of mobile, network-connected devices, are becoming an inseparable part of us. Sure, we may not be jacking in with our brains as part of the regular morning routine, but connecting to the global network of computers (and hence indirectly to everyone else using those computers) is already a routine occurrence to which we don’t give a second thought.

A recent Wired article talks about how average chess players, combined with the right machine assistance, can beat out better human players as well as other players with better software. The key is in the human’s ability to make the most of their machine assistants: figuring out which machine results to accept, which to reject and how to ask the right questions. Our current technology is in exactly the same position. The talent of the person using a computer or the computational power of the machine is less important than being able to combine the two properly.

Leaving chess aside, there are more practical areas where this combination of man and machine is producing great payoffs. Successful blogger and author Tim Ferriss makes no secret of the fact that he uses analytics extensively to fine-tune how his website operates and is viewed in order to maximize his earnings. In earlier days, Paul Graham created what was effectively the world’s first web application, Viaweb, and successfully beat out better-funded competitors by placing powerful tools (Common Lisp) in the hands of experienced users (himself and his team).

People my age and younger have never lived in a world where we couldn’t connect with people across the globe at the click of a mouse. All that has ever stood between us and the vast stores of information on the Internet has been a single text box with a button titled some variation of “Search”. We’re cyborgs in the sense that the use of our machines is natural and reflexive, requiring little explicit mental bandwidth. Who needs a port in the back of the skull when you have a copy of Google Hacks tucked into your brain?

Of course, not all cyborgs are made equal. Even among people my age there are both those who revel in technology and its gifts and those who would prefer to keep it at arm’s length. And I’m not talking about the difference between computer science graduate students and theater majors. I’m talking about the people who are content to use Microsoft Word’s default font and paragraph spacing and those who spend hours tinkering with their websites to get things looking just right. I’m talking about the people who tweet a dozen times a day and those who log in to Facebook once a week. I’m talking about those who have three different email accounts and those who pull all their email into Gmail. I’m talking about… you get the point.

On the flip side, there’s a careful balance between using technology to achieve a further goal (Tim Ferriss’ website tweaks) and technology for technology’s sake (the hours spent tweaking the CSS on a blog only your mum reads). The Wired article says that there is a difference between people who use technology productively and hence feel smarter and more focused, and the people who seem lost and intimidated by online life. I would add a third category: those who feel smarter, but really aren’t any better than the baseline. Cyborgization may be becoming ubiquitous, but that doesn’t mean that it’s easy.

The growing cyborgization of our society is also the reason why I’m excited about the second coming of tablet computers: the iPad and whatever Chrome-based offering Google throws its weight behind. Take a few minutes to check out the new guided tours of the iPad and you might get a hint of what I feel. The interface is completely different from how we use computers today and I think that’s a great idea. Let’s face it: most people today don’t really need a real computer. They basically need two devices: an internet connection device and some sort of glorified typewriter/calculator for writing reports and spreadsheets. Of course the iPad doesn’t excite those of us who type hundreds of words a minute or write code for a living. That’s because we’ve already crossed the line of cyborgization: we know (or are at least trying to find out) what we can do with our machines. The iPad is for the people on the other side, those who couldn’t care less about how many cores or how much RAM they have. It’s for people who are more than willing to trade their freedom (and their wallets) for a computing experience that they can relate to more easily. It’s for the mum who wants to snuggle up in bed with her kid and Winnie the Pooh. It’s for the people who still consider reading a newspaper in the morning a holy rite. It’s for the people who have by and large been on the outskirts of the computer technology revolutions of the last few decades. It’s for a new generation of cyborgs who stop thinking of their machines as computers and instead view them as constant, unobtrusive, electronic companions.

With some luck, my children will grow up in a world where they are surrounded from birth by the warm embrace of the internet. For them, actually sitting down in front of a computer will be quaint and outmoded, in the same way we don’t go to a landline phone to talk to someone anymore. And it will be devices like the iPad, connecting remotely to powerful servers running recommendation engines and personalized search databases, that will be their first connection to the world of computation. As Pranav Mistry says, people don’t really care about computation; they care about knowledge and information. We’ve been able to bring people closer to information by erasing its physicality and making everything available remotely. Our children will get that information without the burden of thinking about a browser or keyboard or URLs. For them, all sorts of data will be all around them, accessible at the tap of a touchscreen (or hopefully without requiring even that).

Here’s looking forward to the Age of Cyborgs, of which we are the heralds and first citizens. We live in exciting times.

Treat your education as a business

I’ve been thinking about business a lot lately, thanks in no small part to the hype surrounding the book Rework (which I haven’t bought yet, but am sorely tempted to). Also to blame is the startup visa, which is interesting to me, to say the least. As romantic and exciting as starting a business may seem, I’m a full-time student right now and will be for a good few years to come. I’m really enjoying my student career, but I’m seriously thinking about starting a business someday. Over the past few weeks, I’ve been putting together the theory that though I won’t start a business now, I can certainly apply business-style thinking to my student life.

Before I dig in, there’s a disclaimer due: I’ve never run a business and I hope to never get formal training in business. I’m also very much a fan of start-ups and I like businesses that sell premium products with large profit margins more than ones that sell tons of cheap stuff. Feel free to provide your own examples. What I’m going to talk about will be informed by what I’ve read and heard, backed up with healthy amounts of common sense (which often seems to be rather lacking in the business world). As an additional clarification, when I talk about getting returns from education, I’m talking about things you can cash in on today, not some vacuous future where you get a nice job and pay back your hundreds of thousands of dollars worth of student loans.

Your job is to turn a profit

If your business isn’t turning a profit, then you fail and deserve to be cut down by the Invisible Hand. At least stop calling what you’re doing a business. By the same logic, if you’re going to college and not learning, you’re doing it wrong. Living on a really diverse college campus, it becomes abundantly clear that a fair share of students are perfectly willing to coast along and graduate with the bare minimum credit and effort. It’s similar to how many businesses seem to think that it’s ok to give away their product for free with no clue as to how to get into the black. If you’re paying thousands of dollars and spending hundreds of hours in class, make sure that you are actually learning something that you are interested in and want to learn about. Of course you can’t do that if you don’t realize that

Showing up is half the battle

This isn’t so much a business maxim as it is general life advice. Getting down to work every day and actually starting on the important tasks is essential to running a business. You won’t be making any money if you’re not actually producing something. Similarly, don’t expect to be learning things if you’re not going to put in the effort of going to class, paying attention and doing the assignments. It’s tempting to sleep in and just study for the exam, but we all know that in most cases it just doesn’t work that way.

Though showing up is necessary, it’s not sufficient to keep you on target. In particular, there’s no point showing up if you’re not showing up for the right things. Which is why it’s important to …

Decide what your product is

Microsoft does a lot of things, but it still makes almost all its money from Windows and Office. Walmart knows what it does: sell lots of stuff for cheap. It irritates me no end when college students get to the end of their second year with no idea of what they want to major in. Keeping an open mind and exploring is good, but you can’t expect to get a good education if you can’t decide what it is you want to study. If you’re not going to take charge and make your own decisions, someone else is going to make them for you and you probably won’t like it. After all,

No one is going to run your business for you

As the people at 37signals make clear, you can’t just be “the idea guy”: your ideas count for nothing without good execution. If you want to get a good education, you’re going to have to stand up and get it yourself. You’ll have to take hard classes, study hard and smart and really immerse yourself in the material (as opposed to the night-before-exam cram routine). If you take easy classes and do the bare minimum needed to pass, then you’ll get the bare minimum back — a degree that thousands of other people have as well, with nothing to set you apart.

I can tell that I’m starting to make the whole thing sound really gloomy, but here’s the kicker:

You have to enjoy what you do

No one ever succeeded in a business that they didn’t believe in, but a lot of people get stuck in jobs they hate. A lot of students think that college is stressful and boring and a drag because they haven’t figured out what their product is. The happiest students I know are the ones that really love what they’re studying and tie it into their activities and daily lives. They may be insanely busy, in the same way that people at successful companies can work insanely hard, but they don’t regret it. In contrast, the students that are the most stressed are the ones that don’t like their major (and hence put off studying till the last minute) and would rather be doing something else. If you’re in it just for the money (or the degree) you’re doing it wrong and should seriously consider doing something else.

On an ending note, I haven’t yet taken all the rules to heart myself. I enjoy what I do and I have a good idea as to what my product is. I show up most of the time, but about once a week I’ll expect someone else to watch the shop for a prolonged period of time, and my profits aren’t as high as I’d like them to be. But putting these thoughts down has given me a better idea of what I’m doing wrong and how I should restructure. I’m looking forward to strong second-quarter earnings.

Meditations on Moore’s Law

As part of my study on parallel programming I’ve been thinking a lot about how processor architectures have evolved and are going to evolve in the not-too-distant future. No discussion of processor architectures is complete without some talk about Moore’s Law. The original formulation of Moore’s Law is:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase.

The original statement is a bit more complicated than the generic “the number of transistors on a chip doubles every two years” that is the common phrasing. Firstly, the original states that the doubling occurs every year. That’s a bit too optimistic: the historical rate has been a doubling every 1.5 to 2 years. But the more subtle (and generally missed) point that Moore made was that it is density at minimum cost per transistor that increases. It’s not just the density of transistors, but rather the density at which cost per transistor is lowest. We can put more transistors on a chip, but as we do so the chance that a defect will cause the chip to not work properly increases.
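To see how much the exact doubling period matters, here’s a quick back-of-the-envelope sketch of my own (not from Moore’s paper; the starting transistor count is just a convenient reference point roughly in line with an early-1970s microprocessor):

```python
# Toy projection of transistor counts under different doubling periods.

def transistors(initial, years, doubling_period):
    """Project a transistor count assuming a fixed doubling period (in years)."""
    return initial * 2 ** (years / doubling_period)

start = 2_300  # rough transistor count of an early-1970s microprocessor, used as a baseline
for period in (1.0, 1.5, 2.0):
    print(f"doubling every {period} years -> {transistors(start, 10, period):,.0f} after a decade")
```

Over a single decade, the doubling-every-year projection comes out roughly 32 times larger than the doubling-every-two-years one, which is why the difference between the popular phrasing and the historical record is not just a nitpick.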

There are a number of corollaries and by-laws that go along with Moore’s Law. For one, the cost per transistor decreases over time, but the manufacturing cost per area of silicon increases over time as the number of components crammed onto a chip increases (as does the cost of the materials, energy and supporting technology required to create that chip). But the one consequence that has hit the chip industry in recent years is that the power consumption of chips keeps climbing as densities and clock speeds rise. And that is something that directly affects the bottom line: no one is going to buy a 6 GHz chip if they need an industrial-strength cooling solution to use it. Even though transistor densities (and as a result processor speeds) have been progressing steadily over the past few decades, memory speeds simply haven’t kept up. The maximum speed at which a modern processor can operate is far greater than the speed at which it can pull in the data to operate on.

These two factors, increased power consumption and lagging memory bandwidth, have prompted a significant course change for chip manufacturers. We have the technology to pack a lot of processing power on a single chip, but we quickly hit diminishing returns if we keep aiming for raw speed. So instead of making the fastest, meanest processors, the industry is turning to leaner, slower, parallel processors. Computing power is still increasing, but it’s increasing in width, not depth. Multi-core CPUs and their cousins, the GPUs, exploit Moore’s Law by bundling multiple processing units (cores) onto a single piece of silicon. The top-of-the-line GPUs boast hundreds of individual cores capable of running thousands of concurrent execution threads. Intel’s newest Core i7-980X Extreme has clock speeds of up to 3.6 GHz, but also boasts 6 cores capable of running 12 threads. Parallel computation is here to stay and it’s only going to keep increasing. Moore’s Law should be good for another decade at least (and maybe more until we hit the limits imposed by quantum mechanics) and it’s a safe bet to assume that all those extra transistors are going to find their place in more cores.
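As a rough illustration of what “increasing in width” looks like from the software side, here’s a minimal sketch of my own using Python’s standard multiprocessing module; the workload itself is just a stand-in for any CPU-bound task:

```python
# Split the same CPU-bound work across however many cores the machine reports.

import math
import multiprocessing as mp

def busy_work(n):
    """A stand-in compute-heavy task: sum a few million square roots."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8                   # eight independent chunks of work
    with mp.Pool(mp.cpu_count()) as pool:    # one worker process per available core
        results = pool.map(busy_work, jobs)  # chunks run concurrently across the cores
    print(f"finished {len(results)} chunks using {mp.cpu_count()} cores")
```

Nothing here makes any single chunk run faster; whatever speedup you get comes entirely from having more cores to hand the chunks to, which is exactly the trade the industry is making.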

Having dozens or hundreds of cores is one thing, but knowing how to use them is quite another. Software makers still don’t really know how to use all the computational width that Moore’s Law will continue to deliver. There are a number of different ideas on how to run programs across multiple cores (including shared-state threads and message passing), but there doesn’t seem to be a consensus on how best to go about it. Then there is the problem that we have billions of lines of serial code that will probably not benefit from multi-core unless they are rebuilt to exploit parallelism. Anyone who’s tried to re-engineer an existing piece of software knows that it’s not an easy task. It’s also expensive, not a good state of affairs for a multi-trillion-dollar industry.
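To make the two approaches mentioned above a little more concrete, here is a toy sketch of each (again my own illustration, not anyone’s recommended practice): shared state guarded by a lock, and message passing over a queue.

```python
# Shared-state threads vs. message passing, in miniature.

import threading
import queue

# --- Shared state: every thread mutates one counter, serialized by a lock ---
counter = 0
lock = threading.Lock()

def add_shared(n):
    global counter
    for _ in range(n):
        with lock:              # without the lock, concurrent updates can be lost
            counter += 1

# --- Message passing: workers only send messages; one place sums them up ---
results = queue.Queue()

def add_messaging(n):
    results.put(n)              # no shared mutable state, just a message

shared_threads = [threading.Thread(target=add_shared, args=(10_000,)) for _ in range(4)]
message_threads = [threading.Thread(target=add_messaging, args=(10_000,)) for _ in range(4)]
for t in shared_threads + message_threads:
    t.start()
for t in shared_threads + message_threads:
    t.join()

total_from_messages = sum(results.get() for _ in range(4))
print(counter, total_from_messages)  # both report 40000
```

(Worth noting: in CPython the global interpreter lock keeps these threads from executing Python bytecode truly in parallel, so this is an illustration of the two programming models rather than of real multi-core speedup.)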

Luckily there are a lot of really smart people working on the matter from a number of different angles. The next few years are going to be an exciting time as operating systems, programming languages, compilers, network technologies and all the people working on them try to answer the question of what to do with all the cheap computing cores that are lying around. The downside is that software won’t run any faster for a while as clock speeds stagnate and we figure out how to work around that. For the next 2 months I’ll continue reading up and thinking about the different ways in which we can keep using Moore’s Law to our benefit. The free lunch may be over, but that doesn’t mean that we shouldn’t eat well.

Lessons learned from a course with Edward Tufte

Ever since I became interested in data representation and visualization a few months ago, I’ve been actively trying to seek out interesting people in the field so that I can learn directly from them. A few weeks ago I had a chance to meet and talk to Loren Madsen, who’s done some really interesting data-driven art pieces. Yesterday I went down to Philadelphia to attend a one-day course taught by Edward Tufte: a professor emeritus at Yale and author of four great books on data analysis, presentation and visualization. He’s also a great observer of user interfaces and presentations and a sworn enemy of PowerPoint, Excel and the “lowest common denominator” school of design. The course is a bit pricey ($200 for students and $380 for everyone else) but you get all four of his books and you get to attend one of the best presentations you’ll ever see. Here are some of the things that I learned (to learn everything, you really need to take the course).

If you’re going to a course taught by a famous person, get there early

I was under the impression that the course would be a fairly small affair, say 50 to 100 people. Blame it on going to a liberal arts school with very small class sizes. There turned out to be more like 400 people there, and I happened to arrive just as the thing was getting started. I ended up getting a seat right at the back, though I quickly made it up to the middle. Edward Tufte is a great presenter and the way he conducts the course means that it doesn’t matter very much where you sit, but it’s nice to actually see the man as he talks. Also, take something to write with.

Keep your ears, eyes and mind open

The course is presented in a rather informal way. It’s not disorganized, but there’s no simple outline either. He says a lot of simple but important things, and if you try to write down everything, you’ll be writing a lot and not really taking in what’s being talked about. Write down the important things, but also pay attention. Open the books and look at the examples he points at. There’s a lot you can learn by just following along. Since you’ll have all the books, you can pop them open later and refresh your memory.

Leave your preconceptions at the door

The course teaches you a lot of things about how to think about and visualize data that run contrary to popular opinion. You’re not going to get much out of the course unless you go in open to giving other ideas a try. Also, don’t worry immediately about how you are going to apply what you learned to your problem. That will probably prevent you from getting the most out of the general principles taught. Think about and absorb the principles first and then think about the specifics. If you’re someone who keeps an eye on the internet, especially in the web and interface design worlds, you’ll also find some of his advice conflicting with what you read online (such as emphasizing content over design). Do be your own judge, but make sure you’re judging on the basis of actual merits as opposed to hearsay and group-think.

You’ll need to deal with your problems yourself

This course isn’t about giving out prepackaged solutions. Like many high-level thinkers, Tufte is more concerned with identifying the overarching principles and then applying them, rather than focusing myopically on niche issues. He will give you some solutions (especially regarding preparing and giving presentations) with some specifics (like how to use paper and avoid PowerPoint), but they’re templates that need to be filled in with information specialized to the task. I also think that it’s important that you think about your own problem and bring your own sense of creativity to the issue (without which you’ll just be cloning someone else’s stuff).

Read, read, read and think. A lot.

Edward Tufte is a very well-read and very intelligent man. He draws on examples from people in all sorts of fields throughout history (from Euclid to Feynman to people you’ve probably never heard of). It’s not expected that you know everything about all the things he shows you (if you did, you wouldn’t be going to the course). But if you want to understand things the way he does and come up with new ideas of your own, you’re going to want to keep reading about the things he refers to. It’s also important to keep exploring other things and actively playing around with and implementing the things you learn. And that means going out there and actually giving presentations and creating graphics based on what you’ve learned.

I’m at the stage where I can understand most of the principles that he’s talked about, but I’m not sure about how to apply them to the problems I have at hand. Some of the issues he talked about are similar to ones that I’ve had myself (and some I haven’t encountered at all). I love all the great historical examples he used and I intend to read up on them more. What I need to do now is to look harder at my own problems with Tufte’s examples as a guideline. As he said, it’s generally a good idea to take a strong model and copy it for your own purposes.

Sunday Selection 2010-03-14

Reading:

Books in the age of the iPad is a very well thought out (and very well designed) article on how books and print media might evolve to be useful in a generation when lightweight, connected digital displays will become ubiquitous. It’s an old topic, but the article is definitely worth reading.

Media:

Reddit.com interviews Peter Norvig, who, as some of you might know, is something of a hero. He’s a prominent AI researcher, Lisp hacker and the author of a must-read essay titled Teach Yourself Programming in 10 Years.

Software:

nginx is a relatively new, but rising member of the server world. It’s a lightweight, high performance server that currently holds about 7% of the market and is being used by sites like WordPress.com and Hulu. It’s also what I plan to put on my server once I get around to putting Arch Linux on it.