Sunday Selection 2013-10-13

Around the Web

Advice to a Young Programmer

I’ve learned a lot about system design and programming since I started grad school two years ago. I’m still learning a lot as I use new tools and techniques. This post does a good job of summarizing an experienced programmer’s advice to someone younger and newer to the craft.

Why Microsoft Word Must Die

I’m quite happy to say that I haven’t used Word in years. I don’t have a copy installed and I can’t remember the last time I needed to edit a Word document. I use LaTeX for most of my writing (everything from applications and academic papers to my resume). For the rare occasion that I need to open a Word document, Google Docs is more than adequate. Charlie Stross is one of my favorite newer science fiction authors, and like most of his technology-related writing, this piece is on point about why the modern Microsoft Word is simply bad.

Less is Exponentially More

This article about why Go hasn’t attracted more C++ programmers is over a year old, but as a student of language design I find it interesting to see how language features interact with programmers’ needs. If you’re interested in programming languages or write a lot of C++ code, this is a worthwhile read.

Video

Jiro Dreams of Sushi

I’ve been meaning to watch this documentary for a long time, but finally got around to seeing it last night. It’s about Jiro Ono, an 85-year-old sushi master and owner of a tiny three-Michelin-star sushi restaurant in Japan. At its heart it’s the story of one man’s quest for perfection and devotion to his craft. Though it’s ostensibly about the art of sushi, I think there’s a lot that any professional can learn from it. It reflects a way of life and devotion to purpose that we rarely see in day-to-day life. You can catch it on Netflix streaming and on Amazon Instant Video (it’s not free for Prime members though).

Not so Svbtle

A few weeks ago I got an invitation to Dustin Curtis’ hip new(ish) blogging platform called Svbtle. The original announcement created a bit of a stir around the Intertubes. It was supposed to be both a clean, minimalist writing environment and a fresh new platform for vetted, competent writing. Here’s a relevant excerpt (emphasis mine):

I wrote this engine entirely for myself, without the intention of opening it up to other people. But since realizing that it has improved the way I think and write, I’ve decided to open it up to a small number of vetted bloggers. At least at first. The goal is simple: when you see the Svbtle design, you should know that the content is guaranteed to be great. Network bloggers are encouraged to keep quality high at the expense of everything else.

If it sounds provocative, that’s probably because it was meant to be. The emphasized line, in particular, is fighting words, as they say. It’s been about a year and a half since that post (at least that’s how long I think it’s been; Svbtle posts don’t seem to have visible timestamps). Now that I have an invite, I thought it would be interesting to see how things have held up. Is Svbtle really all that Mr. Curtis cracks it up to be?

At face value, the original claim seems to have fallen flat. The idea for a minimalist writing platform was copied and open-sourced almost immediately and there’s also a Svbtle-like WordPress theme. Given that Svbtle will let you use your own domain name, it’s hard to tell that you’re reading a Svbtle post unless you care to look. So much for seeing and recognizing the Svbtle design. But what about the rest of the claim? Are we really guaranteed that the content is great?

Svbtle currently positions itself as a “new kind of magazine”. The current About page reads as follows:

We’re a network of great people mixed with a platform that takes the best things from traditional publishing and combines them with the best parts of the web. We want to make it easier for people to share and discover new ideas.

The Svbtle blog announced that they received an undisclosed amount of VC money (good for them). They currently have over 200 writers and hope to build “the future of journalism”. Svbtle is building us up to expect not only good writing, but great writing and journalism. The current state of Svbtle doesn’t give me much confidence. As of this writing, many of the posts on the Svbtle front page would probably only be of interest to a certain section of Silicon Valley residents. Posts like “The 3 competitive Defenses of Enduring SaaS Companies” and “The Single Best Content Marketing Channel for your Startup” make me think that Svbtle is more a thinly veiled mirror of Hacker News than a magazine devoted to ground-breaking journalism.

To me at least, Svbtle is not so much subtle as confusing. Who are these 200 writers? Why did they get invitations? They claim to span “at least eight disciplines”, but journalism doesn’t seem to be one of them. If Svbtle is supposed to take the best things from traditional publishing, then where are the editors and expert photographers? If Svbtle is going to be “an important place for the sharing of ideas”, then where are the comments, and where do I send Letters to the Editor?

Furthermore, this confusion isn’t just on the outward, public face of the endeavor. As a writer, it’s not clear to me what I get from publishing on Svbtle. A group of 200 writers is not exactly exclusive, especially when I have no idea what the invitation criteria are. I don’t see any Terms of Service, or an Export button for that matter. The invitation email claims “One of our main goals is to help members become better writers”, but there’s no mention of how that’s supposed to happen. Is there a peer review or editorial process? If there is, what are the qualifications of the editors and reviewers? I just wrote and published a short post, and there doesn’t seem to be anything of the sort. Can I be kicked out and my posts deleted at a moment’s notice?

I suppose that for people dissatisfied with their current blogging platform, Svbtle might be an interesting alternative. But it’s not for me. I’m perfectly content with WordPress when it comes to actual writing and Tumblr when it comes to everything else. I’ve never been distracted from my writing by the various controls and buttons, and Svbtle lacks too many of what I’d consider the essentials of a modern blogging platform.

Of course, it’s certainly possible that I simply don’t get it and that Mr. Curtis has some grand scheme that I don’t grasp. For the time being, though, it seems like Svbtle is just yet another blogging platform. It’s a different flavor than WordPress, Tumblr, or Medium, and some will be drawn to it for that reason. At this point, someone will no doubt point out that I won’t get it unless I try it. While I’m skeptical of that line of reasoning, I would like to give Svbtle a fair chance. Maybe the writing experience really is that much better. If I can think of something that needs publishing and isn’t relevant to The ByteBaker, then my Svbtle blog is where it will go.

(As an aside, I’ve been thinking of starting a research blog, along the lines of Lindsey Kuper’s Composition.al. I’d use Svbtle for that, but there seems to be no support for inserting syntax-highlighted code snippets.)

In the meantime, if you’re looking for modern, journalistic writing that covers a variety of topics, I recommend a publication like New Republic.

Uncertainty about the future of programming

I finally got around to watching Bret Victor’s “The Future of Programming” talk at DBX. It did the rounds of the Intertubes about two months ago, but I was having too much fun at the Oregon Programming Languages Summer School to watch it (more on that later). Anyway, it’s an interesting talk and if you haven’t seen it already, here it is:

You really should watch it before we continue. I’ll wait.

Done? Great. Moving on.

While the talk generated a lot of buzz (as all of Bret Victor’s talks do), I’m not entirely sure what to take away from it. It’s interesting to see the innovations we achieved 40 years ago, and a tad depressing to think that maybe we haven’t really progressed all that much since then (especially in terms of programmer-computer interaction). While I’m grateful to Mr. Victor for reminding us of the wonderful power of computation combined with human imagination, at the end of the talk I was left wondering: “What now?”

The talk isn’t quite a call to arms, but it feels like that’s what it wants to be. Victor’s four points about what will constitute the future of programming show us both how far we’ve come and how far we have left to go. However, as with his other talks, I can’t help but wonder whether he has really thought through all the consequences of the points he’s making. He talks about direct manipulation of information structures, spatial and goal-directed programming, and concurrent computation. His examples seem interesting and even inspiring. But how do I translate the broad strokes of Mr. Victor’s brush into the fine keystrokes of my everyday work? And does that translation even make sense for more than small examples?

For my day-to-day work I write compilers that generate code for network devices. While I would love to see spatial programming environments for networks and compilers, I have no idea what they would look like. If I’m building sufficiently complex systems (like optimizing compilers or distributed data stores), the spatial representations are likely to run to hundreds of pages of diagrams. Is such a representation really any easier to work with than lines of code in plain text files?

While I’m all for more powerful abstractions in general, I’m skeptical about building complicated systems with the kinds of abstractions that Bret Victor shows us. How do you patch a binary kernel image if all you have and understand is some kind of graphical representation? Can we build the complex computation that underlies graphical design systems (like Sketchpad or CAD tools) without resorting to textual code that lets us get close to the hardware? Even the Smalltalk system had significant chunks of plain text code front and center.

Plain text, for all its faults, has one big thing going for it — uniformity. We might have thousands of different languages and dozens of different paradigms, models, and frameworks, but they’re all expressed as source code. The tools of software development — editors, compilers, debuggers, profilers — are essentially similar from one language to the next. For programmers trying to build and deploy real systems, I believe there is a certain benefit in such uniformity. Spatial programming systems would have to be incredibly more powerful to be seriously considered as an alternative.

I am doing the talk some injustice by focusing on spatial programming, but even in the other areas I’m not quite sure what Mr. Victor is talking about. As for goal-directed programming, while I understand and appreciate the power of that paradigm, it’s just one among many. Concurrent and multicore computation is great, but a lot of modern computation is done across clusters of loosely connected machines. Where does the “cloud” fit into this vision? Mr. Victor talks about the Internet and machines connected over a network, but there’s no mention of actually computing over this network.

I believe that Mr. Victor’s point in this talk was to remind us of an era of incredible innovation in computer technology, the good ideas that came out of it, and how many of those ideas were never realized (and that there hasn’t really been such an era of imagination since). I can appreciate that message, but I’m still asking the question: “So what?”

Building complicated systems is a difficult job, and even with plain old text files we’ve managed to do it pretty well so far. The innovations between then and now have been less flashy but, for me at least, more helpful. Research into distributed systems, machine learning, and programming languages has given us powerful languages, tools, and platforms which may look mundane and boring but let us get the job done fairly well. Personally, I’m more interested in type systems and static analyses that let me rule out large classes of bugs at compile time than in interfaces where I connect boxes to link functionality. While I hope that Mr. Victor’s vision of more interactive and imaginative interactions with computers becomes a reality, it’s not the most important one and it’s not the future I’m working towards.
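
(To make the type-system point concrete, here’s a small, purely illustrative sketch in OCaml; none of it comes from the talk and all of the names are made up. Because the lookup below returns an option, the compiler forces me to handle the “key not found” case up front, so an entire class of null-dereference-style bugs is ruled out before the program ever runs.)

(* Hypothetical example: looking up a host's port in an association list.
   List.assoc_opt returns an option, so both the Some and None cases must
   be handled or the program will not compile. *)
let lookup_port (table : (string * int) list) (host : string) : int option =
  List.assoc_opt host table

let describe table host =
  match lookup_port table host with
  | Some port -> Printf.sprintf "%s -> port %d" host port
  | None -> Printf.sprintf "%s has no port assigned" host

let () =
  let table = [ ("alpha", 8080); ("beta", 9090) ] in
  print_endline (describe table "alpha");
  print_endline (describe table "gamma")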

Pacific Rim is a work of art

Over the weekend I went with some fellow graduate students to see Pacific Rim. It’s not a particularly complicated movie: there are some gaping plot holes, the technobabble reaches facepalm levels, and it fails the Bechdel Test. All that being said, Pacific Rim was one of the most enjoyable science fiction movies I’ve seen in a long, long time. It’s much better than the current crop of superhero flicks (with the possible exception of The Avengers and the Batman movies), and the last movie I liked this much was probably District 9.

So why do I like this movie so much? It’s hard to put my finger on it, exactly. The concept is simple but interesting: giant monsters rise out of the depths of the Pacific Ocean, and humanity assembles giant robots piloted via a neural link. Crucially, these machines must be piloted by two pilots at a time, leading to interesting interactions between the characters who share the neural link. Over time, humanity grows complacent, the robot program gets scrapped, and the defenses are left to rot until finally the apocalypse is nigh and only a handful of fighters stand between us and oblivion. Not the most novel premise in existence, but the magic is in the details.

The characters are at once both larger than life and fatally flawed. The imagery is classic Guillermo del Toro: beautifully detailed while being rough and gritty, resulting in something that is clearly imaginary yet strangely believable. The giant robots have been neglected for years: they’re banged up, dented, rusty, constantly being repaired (think more Matrix Revolutions and less Oblivion). The last line of defense is a small, cramped base in Hong Kong. Everyone lives in cramped, mostly dirty conditions. This is not humanity’s finest hour. And with that as the background, humanity’s defenders are provocatively international: American, Russian, German, Australian, Chinese, Japanese, and more. At the end of the day, scientists and engineers prove to be just as important as the gunslingers and military commanders. A smuggler and gangster helps put in place a core piece of the puzzle. Families are broken, important characters fall, and fathers live to see their sons die. Accept the premise and forgive the technological stumblings, and the movie is oddly human compared to your standard sci-fi flick.

And then there are the fight scenes. They are, to say the least, interesting. Though we see giant robots battling sea monsters, the battles are more martial arts than technological warfare. It’s as much about the pilots in the machines as it is about the machines themselves. Through it all, the anime influence is clear. The robots, termed Jaegers, are equipped with plasma cannons and missiles as well as swords and spinning blades. The camera angles are often imperfect and the lens is often wet or scratched. Crazy? Yes. Fun to watch? Absolutely.

In many ways, Pacific Rim stands out because of what it is not. It’s not your run-of-the-mill action hero story; the characters and actors aren’t well known and are hence open to both interpretation and evolution. You know there’s going to be an epic battle at the end, but there’s enough unknown in between to keep you from getting bored. It has more in common with an old western than with a modern superhero or sci-fi movie. It’s a reminder that people are still capable of coming up with an original screenplay that’s good and worth watching.

Should you go watch it? Absolutely. If you’re a sci-fi buff, keep in mind that it takes the word “science” very liberally and definitely doesn’t take itself very seriously. If you’re not, then don’t worry: the science isn’t really a key part of the movie. The characters, their histories, and their interactions carry the movie as much as the action sequences do. Pacific Rim definitely makes it onto my list of science fiction that I’ll recommend to people looking for something new.

(PS. If you’re interested in getting some insight into what went into the movie, this interview with Guillermo del Toro is definitely worth reading.)

To thine own reading habits be true

It’s been about two weeks since the untimely demise of our dearly beloved Google Reader. Since then, many replacements have been stepping up to the plate. I’ve been using Feedly, but I hear good things about Digg Reader too. A few days after the shutdown, Anil Dash wrote a post entitled “The Golden Age of RSS” where, among other things, he provides a very long list of RSS readers across various platforms. He also makes four suggestions for improving the state of the RSS ecosystem, and two of those four are about the actual reading experience. While I have immense respect for Mr. Dash (and Dave Winer), I’m not excited by either of those two suggestions.

First off, Mr. Dash doesn’t seem to be a big fan of the mailbox style of displaying feeds (a la Google Reader) or the magazine style (a la Pinterest and Feedly); he seems to favor Winer’s river-of-news style instead. Secondly, he says that he wants a blog reader — essentially a single-site RSS reader that kicks in when you visit a site and gives you a content-focused, style-independent view of it. While both of these suggestions seem interesting (and I hope someone picks them up and does cool things with them), neither of them is particularly appealing to me.

Personally, I like the mailbox style of reading feeds. I like to be able to look through a list of titles, read the ones that sound interesting, and get rid of the rest (currently by mass-marking them as “read” — not the best interface, but it gets the job done). I don’t want a river of news — I want a digest of interesting things that I can read at my own leisure, irrespective of when the author posted them. My RSS reading list isn’t a source of news; it’s a selection of authors who write interesting pieces and whose posts I don’t want to miss. Now, an argument could be made that if some post is really good, it will filter through my Twitter or Facebook circles and I’ll hear about it. But I have neither the time nor the energy to sift through those streams to find the interesting things my friends are posting. I’d rather just have the good stuff come directly to a single known location.

And this brings me to Mr. Dash’s second recommendation (and why I disagree with it). I don’t see much personal value in the sort of site-specific reader he wants. The whole point of RSS, for me, is that I don’t have to visit the website. See the above arguments for a central location for posts from approved sources.

Does this mean that river-of-news or site-specific RSS readers are a bad idea? No, of course not. Anil Dash and Dave Winer are both very intelligent people with proven track records, and if they’re advocating something, it’s worth looking into. All I’m saying is that they’re not the best ideas for me. Reading habits are a very personal thing. We like to read different sorts of things and we like to read them in different ways. Dave Winer likes to be plugged into a river of news; I prefer to have a stack of articles waiting for me at the end of the day.

I truly believe that the web is a democratic medium — it allows us to define both how we publish and consume content (within limits). While we’ve explored the publishing aspect in lots of different ways (sites, blogs, tumblelogs, podcasts, microblogs, photoblogs, vlogs), the consumption side has perhaps seen a little less action. The death of Google Reader seems to have sparked a new burst of RSS-related innovation. Once we’re done picking our favorite clone, moving our lists and syncing our devices, maybe we can think about how to make the consumption experience as democratic as the publishing experience.