What if the Singularity already happened?

I’ve been re-reading one of my favorite science fiction books, Accelerando by British author Charlie Stross. In one passage I particularly enjoy, some of the characters are sitting around talking about their belief in the Singularity, and one of them makes the following claim about when the Singularity happened:

“Au contraire. It happened on June 6, 1969, at 1100 hours, Eastern Seaboard Time,” Pierre counters. “That was when the first network control protocol packets were sent from the data port of one IMP to another — the first ever Internet connection. That’s the Singularity. Since then we’ve all been living in a universe that was impossible to predict from events prior to that time.”

While it’s typical to equate the Singularity with the future advent of superhuman artificial intelligences, I think this definition makes a lot more sense. The Internet has had more impact on our world in the recent past than any other technology (especially since the advent of mobile, pocket-sized connected computing devices), and furthermore, it came almost completely out of left field. Few of the “classic” science fiction stories I remember reading (particularly by Isaac Asimov) prominently feature networked computers, even though they have faster-than-light spaceflight, aliens, robots and the like. Perhaps we should take that as a warning: the most disruptive technologies are the ones we’re least cognizant of, until the disruption is well under way.

Moshe Vardi on Humans, Machines and Work

Yesterday evening I had the pleasure of listening to Professor Moshe Vardi talk about the effects of automation on the workforce and the economy, and how the continued development of AI and machine learning technologies might further tip that balance. This post is based on the notes I took during the talk, so please read it as a high-level summary rather than a transcript. The talk contained more links and references than I have had time to chase down, so any inaccuracies and misquotations are probably my own fault.

Moshe Vardi is a professor at Rice University, currently the Editor-in-Chief of the Communications of the ACM, and the winner of numerous awards, including the ACM Gödel Prize and the EATCS Distinguished Achievements Award. He is also an excellent speaker, and it was wonderful to see him form a coherent narrative out of many disparate threads.

The talk started with a brief mention of Turing’s thesis, which can be read as a compelling philosophical argument for thinking machines, and the intellectual questions it raises. The early history of artificial intelligence was characterized by unbridled optimism (expectations that general-purpose AI would arrive within a generation), punctuated by several AI winters (1974-80 and 1987-93) when funding and support for AI research dried up. However, 1997 started a new era in AI research when a chess-playing computer, IBM’s Deep Blue, defeated Garry Kasparov. More recently, DeepMind’s AlphaGo defeated European Go champion Fan Hui. Crucially, AlphaGo combines machine learning techniques (including deep learning) with search space reduction, resulting in an approach that could be termed “intuition”.

With the resurgence of AI research, automated driving has been the holy grail for about a decade. Cars were one of the most important developments of the 20th century: the automobile shaped geography, changed history, and led to enormous infrastructure development. By some estimates, there are over 4 million truck drivers in the US, and 15 million jobs that involve operating a vehicle. Today there are about 30 companies working on self-driving vehicles, attacking an estimated market of $2 to $5 trillion a year. Experts predict that the main technical issues will be resolved in 5-15 years. While this will be a great technological achievement, it will produce profound business disruption. For starters, there is likely to be major industrial contraction (cars are idle 90% of the time), and a major loss of business for the insurance, legal, and medical fields as automobile accidents are drastically reduced.

Unfortunately, this industrial disruption follows a trend that has already been in progress for a while now. The last 40 years have had a harsh negative impact on the middle and working classes. For much of the 20th century there was a “Great Coupling” between productivity, private employment, median income and GDP growth: they all followed a linked upward trend. Since the 70s, however, this trend has “decoupled”, a fact observable in many datasets. In particular, there has been increasing inequality: a massive decline in income for the bottom 50% of earners, and a massive increase for the top 1%. There is a declining chance that a person in their early 30s will be better off than their parents.

This in turn has resulted in an “Age of Precariousness”: half of Americans would have trouble affording $400 for an emergency, and two-thirds would have trouble dealing with a $1000 emergency. Labor force participation for men aged 25-54 has dropped from 97% to 88%, and those with high school degrees or less have been hit hardest: almost 20% are not working.

Technology is eating jobs from the “inside out”. High-paying and low-paying jobs are both growing, but middle-class jobs are declining. According to a 2016 Bloomberg report, as we move towards more automation we need fewer people in manufacturing, so more people go into the service sector, historically a low-wage sector.

All this paints a pretty bleak future, and it was unclear from Prof. Vardi’s talk what the way forward is. Universal Basic Income is one idea for offsetting this dangerous trend, but UBI remains a hotly contested topic. The discussion that followed raised some interesting questions, including what the role of “work” and employment is in a mostly-automated society, and what role and responsibility educational institutions will have in the near future.

Personally, I feel lucky to be in a field where jobs are currently booming. Most of my work is creative and non-routine, and thus not yet amenable to automation. At the same time, I am very concerned about a future where the majority of people hold poorly paid service-sector jobs where they can barely eke out a living. I am also afraid that jobs that seem more secure today (administrators, doctors, lawyers, app developers) will be gradually pushed into obsolescence as machine learning techniques improve. Again, there is no good solution, but lots to think about, and hopefully work on, in the near future. As the supposed Chinese curse goes, we live in interesting times.

Investing in the Open Web

It seems like every few days there’s a new post lamenting the death of the Open Web and the corresponding rise of ad-driven social media machines and clickbait. Recent examples include this lament about the Cult of the Attention Web (prompted by Instagram moving to an algorithmic feed, away from a chronological timeline), and Brendan Eich’s response to online news publishers strongly objecting to his ad-blocking browser, Brave.

At the risk of beating a dead horse, we seem to have collectively struck a number of Faustian bargains: free services in exchange for our personal information; free articles, audio and video in exchange for advertising, more personal information and ugly, slow sites; walled gardens, in whose operation we have little say, in exchange for ease-of-use. And while I would love to pit advertisers and social media giants against brave independent bloggers and developers in a black-and-white contest, the reality is never quite so simple.

If we really want a vibrant, independent, open web, we need to invest in it with our time, money, effort and technical know-how. But I don’t know if that investment exists, or if the people complaining about the state of the open web are ready to make it. Examples abound: the above piece about Instagram is posted on Medium, which might join said Cult of the Attention Web any day. WordPress, which powers a significant fraction of the open web (and on which this site is built), would rather pretend that it’s a feed reader and encourage me to “follow” other blogs than make it simple and quick to write or edit posts (it takes me four clicks from the WordPress.com page to start editing a draft). And I myself would rather rant about investing in the open web than build the CMS that I actually want, and would enjoy using.

If we seriously care about preserving an open web outside of walled gardens and free of ugly, privacy-destroying advertising, we need to be an active part of it. We need to publish to our own domains, backed by services that won’t turn into advertising machines tomorrow, maybe even pay for hosting. We need to vote with our wallets and actually subscribe to publications we want to read and support. We need to write code and build publication platforms that embody our ideals and values, and make it easier for others to do the same.

I do two of those three, though not as often as I would like to. I don’t exaggerate when I say I wouldn’t be where I am in my life without the open web. I would like to invest in it so that others can say the same in the future.

To Compete with Medium

Dave Winer is encouraging bloggers (or really anyone with something to say) to post anywhere but Medium. He argues that Medium is becoming a “consensus platform” for posting longform writing on the web, especially for people who don’t have a regular place to post. In doing so, Medium becomes a single point of failure, much like Twitter is for real-time short posts, or Google Reader was for RSS. That means Medium becomes increasingly capable of unilaterally changing how writing on the web works, for whatever purposes it desires. Medium could decide what you write, how it looks, who sees it, and whether or not you can take it elsewhere. And if Medium shuts down, you could lose everything you wrote.

Winer says that the reason people don’t just set up their own blog (even if they won’t write regularly) is that it feels wasteful to set something up and then not use it. This holds people back, even though a pure text blog takes up negligible space and bandwidth compared to videos or images. While he’s right about the minuscule size requirements of plain text, I think there’s more to users’ reluctance to set up their own blog: there is a cognitive cost and mental overhead that Medium side-steps. To set up a blog on WordPress or Tumblr, you need to create a user account by providing a username, email address and password. Then you need to create the actual blog by picking a domain name and title (and optionally, a theme). Only then can you start to write.

Medium, on the other hand, lets you sign in via Twitter, automatically selecting your username and other account details (which you can change). After that you can just start writing. To be fair, you are asked to follow other users and tags, but you can just click a button and move on; that’s exactly what I did before writing this post. There are options to sign up with Facebook or email as well, but I’m assuming they’re equally streamlined. To break Medium’s hold on casual writing on the web, a competing service would have to be just as streamlined and painless.

So how would one go about competing with Medium? First, reuse identity from an existing social network or identity provider. Second, make writing and publishing a post super-simple. Finally, to address Winer’s concerns, the competing service should come from an entity whose main business isn’t written content, but for whom hosting it naturally falls out of (or can be built atop) the core service. Luckily, there is already a service that fits: GitHub.

GitHub is a code-sharing and hosting service that is very popular with programmers (and increasingly, with non-programmers). By default, GitHub hosts repositories of code, but it has an adjacent service called GitHub Pages that hosts simple websites. As a GitHub user, you can create a specially named repository, and any HTML pages in that repository are served at username.github.io. Anyone with a GitHub account (which these days is pretty much anyone who writes code) can post writing to their own repository and have it served as a webpage from GitHub. This only completes one part of the puzzle, since there’s no Medium-like interface for actually writing your posts; you would have to write them in a text editor and push them to your GitHub Pages repo. However, such an interface could be created by anyone, not necessarily by GitHub. It would just need your GitHub credentials, temporarily, to post your writing from the editor to the repository.
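To make that last step concrete, here is a minimal sketch (in Python, using GitHub’s contents API) of how such an editor might push a post into a GitHub Pages repository. The function name, file layout and token handling are my own assumptions, not a description of any existing tool:

    import base64
    import requests

    GITHUB_API = "https://api.github.com"

    def publish_post(username, token, path, html, message="Publish post"):
        """Commit an HTML file to the user's <username>.github.io repository.

        Once committed, GitHub Pages serves it at
        https://<username>.github.io/<path>.
        """
        repo = username + ".github.io"
        url = "{}/repos/{}/{}/contents/{}".format(GITHUB_API, username, repo, path)
        payload = {
            "message": message,
            # The contents API expects the file body to be base64-encoded.
            "content": base64.b64encode(html.encode("utf-8")).decode("ascii"),
        }
        # A personal access token stands in for the user's credentials here.
        # Updating an existing file would also require its current "sha".
        response = requests.put(url, json=payload, auth=(username, token))
        response.raise_for_status()
        return "https://{}.github.io/{}".format(username, path)

    # For example:
    # publish_post("alice", "<personal access token>",
    #              "first-post.html", "<h1>Hello, open web!</h1>")

A hypothetical editor could run something like this every time you hit “Publish”, and discard the token as soon as the request completes.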

In conclusion: part of Medium’s attractiveness comes from having a streamlined path to posting irregular writing on the web, which has helped make it a large and powerful publishing platform. GitHub Pages provides part of the puzzle for a neutral competitor that offers many of the same benefits. All that is needed is a writing interface that uses GitHub Pages as a backend.

I haven’t talked about Medium’s social media and promotional features, and I’m not sure how to replicate them. My goal with this post was to propose an alternative home for the publish-and-forget style of writing that Medium enables, and I think GitHub Pages is a step in that direction. Since Winer published his post, Medium has posted a response that addresses many of his concerns. The takeaway seems to be: if you’re afraid of Medium having too much control over your content, post both to your own blog and to Medium.

Sunday Selection 2015-08-16

Around the Web

Mad as Hell: How a Generation Came of Age While Listening to Jon Stewart

Last week was Jon Stewart’s final week on The Daily Show. I enjoyed his last few episodes, but part of me was really wishing that his last show would include a takedown of the Republican debate. This isn’t the most in-depth post about his years at The Daily Show, but I think it effectively captures how many people my age feel about Jon Stewart and the show.

Meditation vs Medication: A Comic Essay on Facing Depression

I’ve been meditating more regularly in recent months, as well as reading more about meditation, mindfulness and Buddhist philosophy in general. At the same time, depression is a growing concern, especially among people in technology, and for me personally as well. I’ve also come to realize that there is a certain taboo surrounding anti-depressants: a latent fear that medication will fundamentally change who we are. I don’t think any one article can completely tackle this complicated bundle of issues, but this is a good place to start.

Programming Sucks

If you’ve ever wondered what the day-to-day life of a programmer is like, or what state our technology is in, this post gives an only-half-joking look behind the digital scenes. If you’ve lived in the trenches yourself, you will find yourself nodding along, and maybe shedding a tear or two. There should probably be a trigger warning attached to this article.

From the Bookshelf

Radical Acceptance by Tara Brach

Speaking of meditation, my most recent foray into that world came in the form of this book. It’s not about meditation per se; rather, it uses meditation as a tool to become more comfortable with our lives, face our inner demons, and accept the way things are, as a foundation for living a better life. The book is replete with personal stories from the author’s life (and those of her patients) and includes helpful guided meditations to get you started.

Video

Forging the Hattori Hanzo Katana

I’ve always had a fascination with Japanese culture and martial arts, and Hattori Hanzo’s monologue is probably my favorite part of the Kill Bill movies. The movie doesn’t actually show you how the sword is forged, so here is a video that does. The narration could have been better, but it’s still a very entertaining (and educational) video.

Quick Notes on the OnePlus One

I’ve been a happy owner and user of a Nexus 4 for about two years (and the Nexus S before that), but in the last few months my phone was starting to show its age. I was barely getting a full day’s usage out of the battery, and after the Lollipop updates things seemed generally more sluggish. It was time for an upgrade, and following my usual habit of skipping at least a generation when it comes to tech, I was really hoping to get a Nexus 6. Unfortunately, the $650+ price point placed it more than a little out of my reach. I’ve never owned a non-Nexus smartphone, but it seemed like it was finally time to move on to something else.

There’s been a lot of hype and news about the OnePlus One that I won’t bother recapping here. In short, the OnePlus One is a reasonably priced, state-of-the-art Android smartphone that comes unlocked and runs a version of the CyanogenMod ROM. It’s not stock Android like the Nexus line, but there’s no bloatware either, and it works just fine with the full suite of Google apps and (as far as I can tell) most popular Android apps in general. After being invite-only for several months, the phone can now be bought from the OnePlus website, but only on Tuesdays. I’ve had mine for about two weeks now and thoroughly enjoy it. Yesterday a friend asked me about my experience with the device, so I thought I’d collect the points I made in that conversation and share them here.

For starters, I really like the device. It’s much snappier than my Nexus 4, the large screen is gorgeous, and the design in general is well executed. I got the 64GB “Black Sandstone” version. As the name suggests, the back of the phone has a black, sandstone-like texture that makes the device quite pleasant to hold. Time will tell if the texture holds up to daily wear and tear. The battery life is really good: I can easily get almost two days of moderate use on a full charge, and well over a day even with heavy usage. It’s really nice to know that I have a good few hours of usage left even if I forget to plug it in overnight.

I was a little concerned about the large 5.5″ screen, which is pretty massive compared to the smartphone screens I’m used to. However, after a few weeks I’ve gotten used to it, and it feels really comfortable to use on a daily basis. By and large I can use it with one hand (even for input with the swipe keyboard), but it is definitely easier to use with two. In fact, the device is light and slim enough that it actually feels lighter and less of a burden to carry around than my Nexus 4. I do a lot of reading on my iPad Air (RSS, websites and Instapaper), but I’ve barely used it over the last two weeks. I’ve been testing out the One as a tablet replacement, at least for format-independent reading, and it’s been working out quite well so far.

I only have two main gripes about the One. First, the CyanogenMod ROM it runs is still based on KitKat, and I had gotten used to Lollipop on the Nexus 4. In all fairness, though, there’s nothing I seriously miss or can’t live without, and there’s a Lollipop-based ROM in the works. Second, the swipe keyboard seems noticeably less accurate than what I’ve gotten used to. However, that might just be muscle memory from using the swipe keyboard on a smaller phone.

In summary, I think the OnePlus One is currently the best option for an unlocked, reasonably priced smartphone, especially given how expensive the Nexus 6 is.

Sunday Selection 2015-02-22

Around the Web

Oliver Sacks on Learning He Has Terminal Cancer

There is said to have been a Roman tradition in which a victorious general parading through the streets of Rome would have a servant whisper in his ear: “Respice post te! Hominem te esse memento! Memento mori!” (“Look behind you! Remember that you are but a man! Remember that you will die!”) We don’t have Roman generals parading through the streets anymore, but we do have talented writers reflecting on their impending deaths in the context of their lives.

My Prescribed Life

While the anti-vaccination “movement” has gotten a lot of press recently, there are other kinds of drugs administered to children that can significantly impact their lives. This piece traces the author’s use of anti-depressants from a young age and discusses how it affected her life and her growth as a person.

Squid can recode their genetic make-up on-the-fly

From the “truth is stranger than fiction” section: “A new study showcases the first example of an animal editing its own genetic makeup on-the-fly to modify most of its proteins, enabling adjustments to its immediate surroundings.”

From the Bookshelf

The Defining Decade

As someone approaching the tail end of their twenties, a book with the tagline “Why your twenties matter and how to make the most of them now” sounds like something I should have read five years ago. Oh well, better late than never I suppose. In this book, psychologist Dr. Meg Jay explores psychology, neuroscience, sociology and economics to make a compelling case for why the twenties can be an important time for growth and development and explains how the choices made (or not made) then can affect the rest of our lives. She combines personal anecdotes, interviews with numerous twenty-somethings and a host of solid evidence to write a narrative that is often hopeful, sometimes scary, but always compelling.

Video

BlackBerry 10 OS Vintage QNX Demo Floppy

I spent the better part of an hour today learning about QNX, a real-time operating system first developed in the 80s that sports a practical microkernel architecture and a POSIX API, and forms the core of a multitude of high-availability software (including the BlackBerry 10 OS, various automotive systems, and some Cisco routers). Best of all, it fits on an old-school floppy disk, complete with GUI and a web browser. QNX represents a great technical achievement and an interesting part of computer history.