We’ll always have native code

A few days ago my friend Greg Earle pointed me to an article with the provocative title of “Hail the return of native code and the resurgence of C++”. While the article provides a decent survey of the popular programming language landscape, it misses the point when it comes to why VM-based and interpreted languages have become popular (and will continue to be so).

The reason languages running on managed runtimes are popular (and should continue to be) is not quite as simple as relief from manual memory management or the “syntactic comforts” the author mentions. While both are fair reasons in and of themselves (and personally I think syntax matters more than is generally acknowledged), the benefits of managed runtimes are more far-reaching.

As Steve Yegge put it, “virtual machines are obvious”. For starters, most virtual machines these days come with just-in-time compilers that generate native machine code instead of running everything through an interpreter. When you have complete control over your language’s environment there are a lot of interesting and powerful things you can do, especially with regard to the JIT. No matter how much you optimize statically compiled machine code ahead of time, at runtime you have far more information about how your program actually behaves. All that information allows you to make much smarter decisions about what kind of code to generate.

For example, trace-based virtual machines like Mozilla’s SpiderMonkey look for “hotspots” in your code: sections that get run over and over again. The VM can then generate custom code tuned to be fast for those cases and run that instead of the original correct-but-naive code. Since JavaScript is dynamically typed and allows for very loose, dynamic object creation, naive code can be very inefficient. That makes runtime profiling all the more important, and it can lead to massive speed improvements.
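To make that concrete, here’s a minimal, hypothetical sketch (the function and the inputs are mine, not the article’s) of the kind of JavaScript a tracing JIT thrives on:

    // A hot loop: after enough iterations a tracing JIT can observe
    // that xs[i] is always a number and compile the loop down to
    // tight native code with unboxed arithmetic.
    function sum(xs) {
      var total = 0;
      for (var i = 0; i < xs.length; i++) {
        total += xs[i]; // type-stable: always number + number
      }
      return total;
    }

    // Fast path: the observed types never change, so the traced
    // native code keeps running.
    sum([1.5, 2.5, 3.5]);

    // Mixing types violates the assumptions baked into the trace:
    // a guard fails and execution falls back to the slower,
    // generic code.
    sum([1, "2", 3]);

A static compiler has to emit the generic path for every call; the JIT can gamble on the common case and keep a guard around as insurance.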

If you’re running a purely natively compiled language you miss all the improvements you could get by reacting to what you see at runtime. If you’re writing an application that is meant to run for long periods of time (say, a webserver for a large, high-traffic service) it makes all the more sense to use a VM-based language. There is a lot of information the VM can use to optimize, and a lot of time for those optimizations to kick in and deliver a definite performance benefit. In robust virtual machines with years of development behind them you can get performance quite comparable to native code. That is, unless you’re pulling all sorts of dirty tricks to carefully profile and optimize your handwritten native code, in which case all bets are off. But honestly, would you rather spend your days hand-optimizing native code or writing code that actually does interesting things?

Personally, I hardly think a new C++ standard is going to usher in a widespread revival of native code. C++ has problems of its own, not the least of which is that it is simply a very big language that very few people fully understand (and adding features and constructs doesn’t help). One fallout of being a complex language without a runtime is that writing debugging or analysis tools for it is hard. In Coders at Work both Jamie Zawinski and Brendan Eich denounce C++ to some extent, but Eich adds that his particular grievance is the state of debugging tools, which have been practically stagnant for the last twenty-odd years. Being fast, being “native”, or having a long feature list is not nearly sufficient to make a good programming language.

All that being said, there is definitely a place for native code. But given the expressive power of modern languages and the performance of their runtimes, I expect that niche to keep dwindling. Even Google’s new Go programming language, which is ostensibly for systems programming and generates native code, has some distinctly high-level features (automatic memory management, for one). But if you’re building embedded systems or writing the lowest levels of operating system kernels you probably still want to reach for a fast, dangerous language like C. Even then, I wouldn’t be surprised if some of these places move towards low-level unmanaged kernels fused with managed runtimes for most other code. Microsoft’s Singularity project comes to mind.

We’ll always have native code (unless we go back to making hardware Lisp machines, or Haskell machines) but that doesn’t mean we’re going to see (or should see) a widespread revival of low-level programming. I want my languages to keep getting closer to my brain, and if that means getting away from the machine, then so be it. I want powerful machinery seamlessly analyzing and transforming source code and generating fast native code. I want to build towers on top of towers. You could argue that there was a time when our industry needed a language like C++. But I would argue that time has passed. And now I shall get back to my Haskell and JavaScript.

The operating system for your brain

Last Friday I finished my summer internship at GrammaTech. A few days before that (I forget when exactly) the discussion on our IRC channel turned to cybernetic implants. We’re a company full of pretty hardcore software types; what do you expect? Though to be honest, I was the chief instigator. Anyways, the conversation quickly moved to the question of securing such implants. The questions raised are summarized by one coworker’s comment: “Which software vendor do you trust to write the operating system for your brain?” Given that everyday implant technology probably isn’t too far in the future, the question is a valid one. For now my answer is: no one.

Let’s be honest: most of our computer systems are hopelessly insecure. And making them secure isn’t as simple as installing antivirus software from a big vendor. Depending on just how secure you need or want to be, you potentially have to go very, very deep. In a lot of cases the trouble is not worth it. Want to take down my VPS running my personal website and storing my Git repos? Go ahead; it’ll take me all of five minutes to shut it down and spin it back up, maybe half an hour to restore everything. That’s far easier than statically analyzing every line of the Linux kernel, the GNU utilities and the web stack for vulnerabilities (and then fixing them without introducing new ones or breaking things). This is not to say that these aren’t worthwhile, important activities; they’re just not top priority for most users.

However, it’s another matter entirely when the systems are mission critical (banks, defense, the Internet backbone) or running inside our bodies. Coming back to the original problem, medical technology is quickly progressing to the point of fully functional implants replacing faulty organs. Insulin pumps are just the start. Cochlear implants and artificial limbs have been around for a while. Bionic eyes are slowly pushing forward and real cyborgs exist. We’re not going to see full cyberbrains just yet, and we’re definitely not throwing out the wetware for fully synthetic bodies. But as the number of computers inside our bodies gradually increases, it’s never too early to start thinking about how we’re going to keep them safe, especially if we want them connected to the Internet (and we will).

Having our implants connected to the Net is a matter of convenience as well as health and safety. Real-time monitoring, remote diagnostics and over-the-air software updates would greatly cut down on the amount of time you spend in your doctor’s waiting room. However, if you want your arm or eyes hooked up to the Internet, you definitely want to be careful about who can connect to them. Asymmetric encryption and signing for all communications (especially updates) would be necessary, just for starters. I can also see some kind of code signing for the software itself being beneficial. But that raises the question of whether users can, or should be able to, hack their own organs. I really don’t want to jailbreak a critical organ if there is a possibility of bricking it. But at the same time I do have a right to my own body parts, biological or synthetic.
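As a rough illustration, here’s a hypothetical sketch of what signed updates might look like, using Node.js’s built-in crypto module (the function names and the vendor/device split are my own assumptions, not any real implant protocol):

    var crypto = require('crypto');

    // At the vendor: sign the firmware image with the vendor's
    // private key before shipping the update over the air.
    function signUpdate(firmware, vendorPrivateKeyPem) {
      var signer = crypto.createSign('RSA-SHA256');
      signer.update(firmware);
      return signer.sign(vendorPrivateKeyPem);
    }

    // On the implant: accept an update only if its signature
    // verifies against the vendor's public key baked into the
    // device. A failed check means the image is corrupt or did
    // not come from the vendor.
    function verifyUpdate(firmware, signature, vendorPublicKeyPem) {
      var verifier = crypto.createVerify('RSA-SHA256');
      verifier.update(firmware);
      return verifier.verify(vendorPublicKeyPem, signature);
    }

The crypto primitives are the easy part; the real question is key management, and deciding who holds those keys is exactly the trust problem at hand.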

Aside: I wonder why cars don’t come with 3G connections for remote software upgrades. If the Kindle can do it, it can’t be that hard. Then again, car manufacturers haven’t exactly been the most innovative and forward-thinking in recent years. Maybe I should be talking to Elon Musk.

Even if the proper technical measures are in place, there is still the question of just who we trust to provide, and potentially control, our body parts. I don’t mind Apple storing my music, and Amazon can store and sync my books. I do mind them locking me in, which is why I’m still hesitant to go completely digital. But do I trust either of them (or any for-profit corporate entity) with my vital organs, or even non-vital ones? Furthermore, do they get keys to shut down “malfunctioning” organs, for some definition of “malfunctioning”? What safeguards are in place to prevent them from misusing those keys? Given that such shutdowns could be life-threatening, requiring a complex legal procedure to overturn them is dangerous and ethically negligent.

When implants become mainstream we’re going to see issues and problems similar to the ones we have with computer systems today. There are always going to be people who want differing degrees of control over their technology, whether that technology be cars, computers or prosthetics. It would be interesting to see something like a “homebrew” implant scene spring up, though I doubt it would rival the popularity of the homebrew computer scene. Like many important problems, these questions are both technical and social in nature. So, who do you trust to write the operating system for your brain?

Mind Expansion

I think the world owes a debt of gratitude to Malcolm Gladwell. To my knowledge his book “Outliers” was the first popular work to expound at length on the 10,000 hour rule: the scientifically grounded theory that achieving expert level in most fields requires about 10,000 hours of “deliberate practice”. Geoff Colvin’s “Talent is Overrated” is similar, but in my opinion not quite as well written. However, both books are part of what I see as a growing trend: the idea that improving yourself, becoming more than what you are, becoming what you always wished you were, is not only possible but actually achievable given a proper strategy and adequate dedication.

This idea of gradual but consistent self-improvement isn’t just applicable to becoming a top athlete or musician. Tim Ferriss’s “The Four Hour Body”, including the increasingly popular slow-carb diet, applies a similar idea to health and fitness. Cal Newport, a recent MIT doctorate and a professor at Georgetown, seems to be applying similar ideas to becoming a top academic researcher. My personal goal is to apply it to becoming the best programmer I can possibly be (as most readers of this blog already know).

For a few weeks now I’ve had a twofold concern. First, I didn’t know what exactly I was or wanted to be. Was I a programmer? A computer scientist? A software engineer? A technologist? A writer who just happened to use code as well as words? Secondly (and more importantly), I couldn’t see anything resembling a clear path from my current “not totally incompetent code slinger” phase to the point where I could understand the works of the great masters (and maybe create something of similar lasting value). And that really, really bugged me. Indecision about what I was, combined with no idea of how to move forward, put me on a path leading straight to madness.

Here I was, a recent college graduate with two rather expensive degrees and a certain amount of knowledge about my field of interest. And yet my four years of education seemed like a piddly little drop compared to the proverbial ocean of knowledge and capabilities ahead of me. Not only could I not cross said ocean, I couldn’t even comprehend its boundaries. Sure, I could write you a topological sort of some arbitrary graph structure. I could also bit-twiddle my way through a reasonably resource-constrained embedded application. I’d even wager that I could build a reasonably stable concurrent system that isn’t too embarrassing. But beyond that? I had just barely grasped functional programming, and my knowledge of type systems was rudimentary at best. I had only the foggiest notion of what a monad was, and yet I had signed up for a good few years working on advanced programming languages. What on earth was I thinking?

One of the great dichotomies of our field is that on the one hand it is possible to be completely self-taught, but on the other there are few systematic guides for going from novice to expert. Most advice on the matter seems to reduce to “Read code. Write code. Think. Repeat.” Wise words, to be sure, but not exactly a 12-step program. So what is one to do? What am I to do?

Though Gladwell and his ilk may have shown us that tangible self-improvement is possible, it’s not a quest to be taken lightly, for there is no end to it. Until you come up against fundamental physical limits there’s always farther you can go, harder you can push yourself. And yet, after a point I’d argue that a law of diminishing returns kicks in; most victories past that point are Pyrrhic. Which is why it is important that I do what I do for fun, first and foremost. Because I find our field infinitely fascinating and constantly stimulating, and there’s nothing else I’d enjoy more (though writing comes a close second).

After the why comes the how. What is the path from point A to point B? Where is point B in the first place? Do I even want to go there? Ostensibly I’m going to graduate school for Computer Science. I have a hard time calling myself a scientist or even an engineer. There’s a formality and heaviness about those terms that seems a little off, in the way that “colonist” has a different connotation than “explorer”. A colonist has a definite, long-term purpose: an unknown world to tame, a new land to cultivate and defend. An explorer is looking around, seeing the way things are, understanding and learning, eventually moving on. I’m content to explore for now. I think I’ll stay “just” a programmer for a while, or maybe even a writer; those two seem the best fit.

Though I do this for fun, and though computer technology has far-reaching effects on the human race, I’ve grown to see programming as a mind-expanding activity in its own right, independent of other motivations and effects. I’m looking at our technology as a medium for creative expression. Programming is a way to become familiar with that medium, a way to increase our creative powers. And so the direction to explore is the one that will expand my mind and increase my creativity the most. It’s easy to stay within my boundaries, to stick to the stateful, imperative programming styles I’m familiar with. To go beyond that, to throw myself into functional programming, to use advanced type systems, to write compilers and virtual machines, to do all that, is hard. It is also a form of “deliberate practice” and essential to becoming better.

For fields and pursuits that aren’t easily quantified, deliberate practice is hard to define. However, one heuristic is to look at how much a potential problem bends your mind. An interesting, worthwhile problem changes the way you think about your field in a general way, while requiring you to acquire specific new techniques and skills. Once you solve the problem you’ve increased the size of your toolbox, but you’ve also changed how you will look at and approach problems in the future. With each problem you gain the capability to solve a wide range of new problems and inch closer to “expert” status.

A few weeks ago I tried looking for that 12-step program: an organized, decently reliable path to becoming an expert programmer. I wanted a way to put in my 10,000 hours with a reasonable guarantee of a good payoff. I haven’t found such a plan, but I have found some principles: what I’m doing needs to be fun and must be mind-expanding. I have to keep programming, keep reading and writing, keep thinking. I’ve reopened my computation theory books, I’m taking baby steps in Haskell, and I’m scripting Emacs on a more regular basis. I’m trying to continually expand my mind and abilities; I’m trying to keep getting better. I hope in 10 years I’ll get close to where I want to be. But for right now, I’m drained.

The importance of a good environment

On Monday, after a few months of wandering around, I finally moved into my apartment for the next year. The next day I picked up my new MacBook Air. So after a good amount of time I’m back to having a good working environment, in more ways than one.

Over the last two months I’ve realized that it’s vital to have a decent environment if you want to get things done. It doesn’t have to be the best, it doesn’t have to have all the amenities, it doesn’t have to be perfect. In fact, overly indulgent environments are probably less conducive to good work than merely adequate ones. However, your environment does have to be good enough for you to sit down and do your work without constantly thinking or worrying about other things.

I’ve grown to like working in coffee shops and similar semi-public spaces. I also like my current internship office, and I’m looking forward to setting up a nice office space once I start at Cornell. Still, it’s good to have a pleasant home to come back to. It’s nice to have options when it comes to work locations and spaces, but it’s even better to feel that you’re not forced to choose. I had romantic notions of being a true techno-nomad, able to work from wherever, whenever. Unfortunately I’ve found out the hard way that I’m not quite that hardy. I’m all for frugality and minimalism, but a good work and living environment is definitely worth investing in.

Speaking of being a techno-nomad, my computing environment is just as important to me as my physical living environment. Just as it’s hard to get anything done if you’re constantly worrying about your living conditions, it’s hard to do anything if your machine is fighting against you instead of cooperating. Since almost all my work involves a computer in some shape or form, it’s all the more important that I have a stable, working and adequate computing environment. Admittedly, getting a MacBook Air was a bit indulgent. But I wanted something that would last a few years, was close to the high end and could serve as my only machine day in, day out. Since I had some money to spare (by virtue of the previously mentioned internship) I decided it was worth it. I like the decision so far.

I’m considering the last few months to be part of my leaving-college-and-growing-up experience. And the value of a good environment is one very important lesson I’ve learned. I always knew, in theory, that a good environment helps you create good work. Now I know the practical effect of that theory firsthand. I’m sure there is some amount of personal preference involved; I know people who have done great work in pretty bad conditions. But if you have the resources to set up and maintain a good environment, there are very few reasons why you shouldn’t do so.