The need for a common configuration system

I’ve been using tiling window managers for a good few months now, and my manager of choice so far has been awesome. Version 3.0 came out just a few days ago, but for the first time I’m seriously considering not upgrading, even though I’m a fan of keeping my system up-to-date and fairly bleeding edge. I’m sure that the new release packs a number of nice features and would probably be a good thing to have. There’s just one thing keeping me from upgrading right now: the fact that the config files are now in Lua.

What that little detail means is that my old configuration is useless and that I now have to devote time and energy to getting to grips with a new config format and recreating something I already had working for months. Before people start with the arguments regarding infinite customizability, here’s my take: I really don’t care if the new system lets my config file turn my window manager into a whistling teapot. If I have to invest time and energy with nothing to gain in the process, that’s not a good thing. I’m not criticizing the technical decision to use Lua (since I have no experience with it), and I can understand how a lot of people will appreciate the greater flexibility. But most users, even people like me who are decently tech savvy, don’t want something to be fixed if it isn’t broken. The ideal thing to do (if a change to Lua was a certainty) would have been to provide a script that migrates an existing config file to something the new system can use. I hope someone out there is working on something of the sort, and I’ll put off updating until then.

Understanding that this is open source, there’s certainly the whole “if you don’t like it, do it yourself” argument, which I think people are justified in making. If I knew more about Lua and awesome and was sufficiently interested in the project, I might actually have done it (considering that I am interested in programming languages and compilers, and that translation is a big part of those fields). That being said, isn’t the whole point of open source to make software that people can actually use without having to pay a heavy price for it? Let’s remember that the price can be in terms of time and effort as well as money.

However, there is a larger issue at stake here than just awesome’s decision to move to Lua. There is a proliferation of config file formats out there for applications, especially for Linux applications that have traditionally had human-editable plain text config files. While the good thing is that you don’t need a special program to edit your configuration file, the bad news is that you have to get used to a myriad of different config syntaxes which are only vaguely similar to each other. Throw in config files that are actually little programs written in scripting languages, and you quickly have a sharp increase in what you have to learn just to get your programs to work the way you want them to.

To be fair, on the other side you have full-blown GUIs which write to closed binary configuration systems. This means that you can’t edit the files with a simple text editor, and you might not be able to write programs that automatically edit the configs either. So where do the two ends meet?

We could start with a uniform configuration syntax. XML is a viable choice, but anyone who has spent time actually editing XML by hand knows that it is not a painless experience, and XML would be overkill for the simple configuration needs of most small programs. Perhaps some sort of CSS-like syntax with nested property lists would cover a lot of ground. Without a large-scale survey of current config syntaxes and what sort of things programs allow to be configured, it would be hard to come up with something that works for a lot of people.
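To make the idea concrete, here is a hypothetical sketch of what such a CSS-like syntax might look like for a window manager. Every option name here is invented for illustration; none of this is taken from any real program:

```
window {
    border-width: 2;
    colors {
        focus:  #6699cc;
        normal: #333333;
    }
}

keys {
    mod: Mod4;
    launch-terminal: Mod4+Return;
}
```

Nested blocks group related options, while each leaf is a simple `key: value;` pair that stays easy to read, edit and diff by hand.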

For the sake of argument, let’s assume that there is some such syntax. We would need libraries that can easily parse the syntax and make the data available to the program. These libraries would have to exist in multiple languages with a fairly consistent interface. They would also have to be small and lightweight so that simple programs can include them if necessary. Finally, there has to be GUI support. Editing text files might be the simplest way to do things, but it’s not that simple if people aren’t used to it already. Given a uniform syntax, it might be possible to generate GUIs automatically (especially since there will already be libraries to do the parsing). This would mean that developers no longer have to worry about writing configuration code at all. They just decide what options they want to be user-configurable and how the config file maps to internal variables and options, and then let the libraries worry about reading the files and building an interface.
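As a rough illustration of how small such a parsing library could be, here is a minimal sketch in Python for a hypothetical CSS-like syntax of nested `name { ... }` blocks containing `key: value;` pairs. The syntax itself, and every identifier in the code, is an invented assumption for this post, not an existing standard or library:

```python
# A minimal sketch of a parser for a hypothetical CSS-like config
# syntax: nested "name { ... }" blocks containing "key: value;" pairs.
import re

# Tokens are braces, colons, semicolons, or any run of other non-space characters.
_TOKEN = re.compile(r"[{};:]|[^\s{};:]+")

def parse_config(text):
    """Parse the config text into nested dictionaries."""
    return _parse_block(_TOKEN.findall(text))

def _parse_block(tokens):
    result = {}
    while tokens:
        tok = tokens.pop(0)
        if tok == "}":            # end of the current block
            break
        sep = tokens.pop(0)
        if sep == ":":            # simple "key: value;" pair
            result[tok] = tokens.pop(0)
            if tokens and tokens[0] == ";":
                tokens.pop(0)     # consume the trailing semicolon
        elif sep == "{":          # nested block: recurse
            result[tok] = _parse_block(tokens)
    return result

cfg = parse_config("window { border: 2; colors { focus: red; } }")
print(cfg)  # {'window': {'border': '2', 'colors': {'focus': 'red'}}}
```

Because the result is plain nested dictionaries, a GUI generator could walk the same structure and emit one form field per leaf key, which is exactly the automatic interface-building idea described above.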

I’m not sure how much effort this would take, but my feeling is that it would be doable with a good team and enough time. In fact this might be a good summer project to work on. I do hope that something like this comes along because I’m starting to get tired of learning configuration syntaxes for every new program I want to use.

How much crowdsourcing do we need?

With the rise of Web 2.0 and the push to make the users of the internet producers and not just consumers, there has been a rise in “crowdsourcing”: if you need an answer to a question, ask a large number of people and someone will give you the right one. Websites like Digg and Reddit (and to some extent, Wikipedia) have risen to power on the basis of this concept, and a good argument could be made that crowdsourcing actually does work. A few days ago, another crowdsourcing project went into public beta: StackOverflow. This is the brainchild of two software engineer/bloggers whom I have a fair amount of respect for: Joel Spolsky and Jeff Atwood. The basic idea is that there is a large amount of very good and useful knowledge out there on the web, but most of it is in the form of random blog posts, long forum threads, IRC transcripts, and so on, so it’s not really accessible to people looking for a straight answer. The goal of StackOverflow is to unlock that knowledge by being a hybrid of a forum, a wiki, a blog, and Digg/Reddit-style rating systems.

While I certainly understand the goal of the project and its value, I’m not quite convinced that it will work the way it’s supposed to. As a programmer, a lot of the time I really don’t want to read through detailed documentation or long mailing list discussions. I just want a simple “What does this button do?” style answer. And as any programmer who has gone digging through documentation knows, that’s not an easy thing to come by. StackOverflow hopes to foster a community of developers in the hopes that someone has already solved the problem you’re facing and is ready to come out and help. There has been some criticism that StackOverflow will only attract mediocre developers looking for quick fixes and superficial knowledge, thereby lowering the competence of the people who are there. I think that this argument holds for crowdsourcing in general. There has been a pretty good answer made to this criticism, and I suggest you go read it for a fair evaluation of the problem.

But let’s explore the alternative. If you don’t want to be pulled down to the lowest common denominator of competence, you’ll have to learn from someone better than you, not someone worse. Of course, the best way to do that is probably to find an expert in the problem you’re looking at. While this looks like a good solution, there are, as always, a number of problems. The one you’re likely to run into first is that the so-called experts are often far too busy to answer your questions. After all, they didn’t become experts without spending a lot of time actually working on stuff. Secondly, I have personally found that experts are often prone to giving you a packaged answer without telling you all the inner details. This isn’t because they are arrogant or trying to confuse you, but rather because the underlying reasons are so clear to them that they don’t bother mentioning them. They are telling you their approach to the problem, which may not be the same as your own. Finally, just because they are considered experts doesn’t necessarily mean that they are the fountain of all knowledge. They can be wrong, imprecise, or may just not have the answers.

So can crowdsourcing actually provide the easy-to-get, mostly correct information that so many of us are looking for so much of the time? I don’t think there is a real consensus, and I can’t quite tell if there can ever be a definitive answer to this question. In a perfect world we would each have our own personal AI that plugged into various search mechanisms and databases and gave us the information we wanted, tuned to our needs. Of course this AI would keep evolving to better anticipate what we want to know. Think of it as a digital butler to serve our informational needs. I have personally not had to rely on crowdsourcing very much. Living in a small liberal arts college environment, I’ve had professors at arm’s reach, most of whom are very willing to help out and explain things to me. However, I understand that I am probably a special case in the programming world. Probably as a direct result of that, I have come to favor direct communication between me, the person asking a question, and someone who has the answer (or can at least point me in the right direction). At the same time, I find the idea of digital communities very interesting, and I would love to see StackOverflow grow into something genuinely good and useful. I will be keeping my eyes on it, and maybe in a few months it will be time to revisit my opinion of crowdsourcing.

Is Chrome really so important?

So the intertubes are resounding with word of Google’s latest offering: a next-generation browser called Chrome. Chrome certainly embodies some really cool ideas and could just pave the way for a new generation of browsers that are designed to support rich web apps and not just the text and images of Web 1.0. But honestly, how far is Chrome going to go, and how soon?

Chrome is being touted by some as a full-blown web operating system that will soon supersede Windows. Ahem. Allow me to respectfully disagree. Sure, the cloud is becoming an important part of everyone’s computing experience, but the desktop isn’t going anywhere soon. Let’s keep in mind that the majority of computer users aren’t really tech savvy and aren’t continuously on the move. The most common uses people have for computers are word processing, spreadsheets, maybe presentations, email and Facebook. Let’s face it: your grandmother doesn’t really want or need her cookie recipes to be kept on remote servers using Amazon S3.

Though personally I do quite like Google Chrome, there are some things that really trouble me. First up is memory usage. Google Chrome takes up about 267MB of memory, which is more than IE8, which in turn is more than Windows XP. Seriously, all that for a browser, so that I can run webapps which for the most part have features I could find in mid-90s software? Wasn’t the promise of cloud computing that we would have trillions of clock cycles and terabytes of storage at our fingertips just for the asking? Webapps still have quite some way to go before I can justify a quarter of a gigabyte just to run a browser. Let’s not even talk about things for which there aren’t any webapps yet. I’ve recently begun moving away from word processors to LaTeX for my writing, and there isn’t an online LaTeX environment. Nor are there full-scale online development environments (though CodeIDE is pretty cool). As a programmer, I’m not quite ready to move onto the cloud full time.

So the desktop is here to stay. After all, it doesn’t really make sense to scale everything to thousands of processors just because you can. All that being said, I do think Google Chrome brings some interesting ideas. Separating the workload across multiple processes is interesting, and the V8 JavaScript virtual machine could prove to be a good step forward. Only time will tell if Google Chrome really does have a substantial impact on the state of the web. But a web operating system is still some time away.

Dual monitors is addictive

For the past two weeks I’ve been running an interesting dual monitor setup. I have an old Mirror Door PowerMac G4 which has outputs for both standard DVI and Apple’s own ADC. I had been using an ADC monitor that came with it for a while, but over the summer I switched to a smaller Dell monitor because I needed some extra desk space. Since moving into my new room, I decided to go ahead and set up both monitors. I was a bit skeptical of the experience because I had never used a dual monitor setup before, especially not with two different types of monitors.

However, after using a dual monitor setup for about two weeks, I’ve become so used to it that I would find it difficult to work with only one monitor (unless maybe it was a very large one). I’ve been using my Dell monitor as my main working monitor, with a browser or text editor open, and the ADC monitor as a ‘monitor’ for iTunes, feeds, email or documentation that I need (including API documentation, tutorials and school assignments). It takes some getting used to, and occasionally applications will pop up on the wrong monitor, but once you get used to it, it is really worth it. One major benefit is that you are no longer continually swapping between your work and whatever you are using as reference material. Combine that with something like virtual desktops and you can neatly compartmentalize all your work and not have your IM bleed into your school paper or vice versa.

Unfortunately I might have to go back to one monitor. My school’s new network policy makes it impossible for students to run servers from their rooms. However, we have a colocation facility where we can move our servers and leave them running. I’ve decided to go that route because I’ve become very dependent on my personal Subversion server, which is great for keeping track of multiple versions of documents. That means that my G4 with its dual outputs has to go. I do have an old Core Duo Mac Mini with a single DVI output which I’ll be using as my desktop machine, which means that the ADC monitor will become effectively useless. Apple does have an ADC-to-DVI converter, but it costs $99. I would much rather spend $50 more and get a newer, larger monitor.

I’m pretty determined not to stay with just a single monitor. After realizing how much easier it is to work with two, I would probably find myself constantly wishing for another one. I’m currently torn between buying a nice widescreen monitor for under $180 and simply scavenging a smaller monitor (like my current Dell) from the IT department’s throw-aways. Considering my status as a starving college student who has to move back and forth about twice a year, a smaller old monitor would be the most reasonable choice, but a larger monitor would come in handy for having more stuff open at the same time (which can be quite important when you’re a developer using a number of different technologies at once), and it’s certainly better for watching movies and the like.

I’ll probably put off the decision until I actually move my Mac into the server room. The best thing at the moment would probably be to get a smaller Dell and try it out to see if it meets my needs. Unless there is some pressing need for a larger monitor, I’ll just stick to the smaller setup.