How to build reliable software

    Software today may be great and it sure can do a lot of things. But there’s a catch: reliability. Face it, no software in the world is perfectly reliable. The ’98’ in a certain computer operating system’s name has often been touted as the number of times it crashes in an hour, and even the toughest Linux- or BSD-based systems will fail given enough time and effort. This would be nothing more than an irritating headache if people only used computers to read email, surf the net, write documents and watch the occasional movie. But they don’t only do that, do they? Computers control some of the most precision-demanding tasks humanity has taken on, like air traffic control. The software that runs on those computers can’t afford even the smallest glitch, and a full crash is out of the question.

    A recent article, linked on one of my favourite tech sites, OSNews, tried to present a way to tackle the problem of reliability using today’s technology. The author lays the problem squarely at the feet of the way we write software today: using algorithms. Algorithms, for those who don’t know, are simply step-by-step instructions for completing a given task. The problem is not algorithms themselves, but the way they are implemented in software. The author’s basic claim is that algorithmic software is bound to fail because there is no way to keep track of all the data at the same time, so a programmer constantly has to track every small change that is made and how it might affect all the other data being used. That might work for a small program, but it certainly won’t work for big software like operating systems, where you have thousands of programmers writing code.
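    To make that worry concrete, here’s a tiny Python sketch (my own illustration, not something from the article) of how a change in one corner of an algorithmic program can quietly break an assumption made in another corner:

```python
# A shared record that several parts of a program read and rely on.
order = {"price": 100.0, "discount": 0.10, "total": None}

def recompute_total(order):
    # One corner of the program keeps 'total' consistent with the rest.
    order["total"] = order["price"] * (1 - order["discount"])

def change_price(order, new_price):
    # Another corner changes the price -- and forgets that 'total' depends on it.
    order["price"] = new_price

recompute_total(order)
change_price(order, 200.0)
print(order["total"])  # still 90.0: stale, because nobody remembered to recompute it
```

    In a ten-line script you spot the mistake instantly; spread the same dependency across thousands of files and programmers and keeping it all in your head becomes exactly the burden the author is describing.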

     The author attempts to provide a solution, and if you browse around the site, you’ll find that a system is being developed to put the author’s strategy into practice. The strategy centres on making sure that everything in the software system works according to a common clock, and on automatically propagating changed information throughout the system. Now I’m not much of a computer specialist (hell, I’m not much of an anything yet), but here’s my two cents anyway.

    The article does spark some interest, but after you go through the site, you realize that there’s really not much in the way of getting the job done. The system they are trying to develop is still in its infancy, and it’s hard to see how one would actually use it. To write the sort of programs the author talks about, you would first and foremost need a proper programming language, one where the automatic signal processing the author describes is handled implicitly by the language itself. That in turn requires a compiler or interpreter which takes the programmer’s instructions, along with all the implicit instructions of the language, and translates them into something that can actually run on a real CPU. Finally, you would need a kernel capable not only of handling the hardware, but also of simulating the synchronous nature of the software system. In case you’re wondering what I’m talking about, let me try to explain.
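    I can only guess at what such a language would feel like, but here’s a rough Python sketch of the general flavour: a “cell” that knows what depends on it and pushes updates to its dependents automatically, so the programmer never has to remember to propagate a change by hand. (This is my own toy illustration, not the author’s actual design.)

```python
class Cell:
    """A value that automatically notifies its dependents when it changes."""
    def __init__(self, value=None):
        self.value = value
        self.dependents = []   # cells computed from this one

    def set(self, value):
        self.value = value
        for dep in self.dependents:
            dep.recompute()

class Derived(Cell):
    """A cell whose value is computed from other cells."""
    def __init__(self, func, *sources):
        super().__init__()
        self.func = func
        self.sources = sources
        for src in sources:
            src.dependents.append(self)
        self.recompute()

    def recompute(self):
        self.value = self.func(*(s.value for s in self.sources))
        for dep in self.dependents:
            dep.recompute()

price = Cell(100.0)
discount = Cell(0.10)
total = Derived(lambda p, d: p * (1 - d), price, discount)

price.set(200.0)
print(total.value)  # 180.0 -- updated without anyone remembering to do it
```

    A real system would also have to decide exactly when these updates fire relative to one another, which is presumably where the author’s insistence on a common clock comes in.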

    For all their might and the billions of calculations per second that our computers can do, they can still only do one calculation at a time. Even when it looks like your computer is doing a dozen different things at once, deep down in the kernel it’s actually playing a delicate balancing game, sharing the CPU between all the programs asking for it. The author of the article says that a lot of the problems of faulty software come from the fact that things don’t happen at the same time: data that is supposed to be in a specific form isn’t, because it hasn’t been processed yet. Making multiple things happen at the same time, and making sure that changes are reflected everywhere, would solve a lot of the problems.
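    If you want a feel for that balancing game, here’s a very rough Python sketch (the details are my own, and a real kernel is vastly more complicated) of round-robin time slicing: one CPU, handed to each task for a short slice, which only looks simultaneous from the outside.

```python
from collections import deque

def make_task(name, steps):
    # A task is just a generator; each yield hands the CPU back to the scheduler.
    def task():
        for i in range(steps):
            print(f"{name}: step {i}")
            yield
    return task()

def run(tasks):
    # The "kernel": run each task for one slice, then move on to the next.
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)           # give this task one time slice
            queue.append(task)   # not finished, back of the line
        except StopIteration:
            pass                 # task finished, drop it

run([make_task("editor", 3), make_task("mp3 player", 3), make_task("download", 3)])
```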

    Now, the idea itself isn’t bad, but it’s the author’s solution that worries me: he wants to create a software system that will emulate a properly synchronous system. Firstly, there is no real implementation of this idea, only a few pages of theory, even though the project appears to have been around for a while. Secondly, as the comments on the OSNews report page show, the idea isn’t exactly novel, and the author does not seem to have all his facts right. Thirdly, this person is not actually writing the software system himself. He seems to have no formal education or practical experience in computer science, and seems to lack any knowledge of the mathematics behind computer programming and algorithms. Does “practice what you preach” ring a bell, anyone? Finally, the author seems to take no notice of other advances in computer science, like parallel processing, neural networks and cellular automata, to name a few.
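    Cellular automata, for instance, already have the “everything updates at once” character the author is after: every cell’s next state is computed from the current snapshot, and only then does the whole grid advance. A minimal sketch of one synchronous step of Conway’s Game of Life (again, my own illustration) looks like this:

```python
def step(grid):
    """One synchronous Game of Life update: every new cell is computed
    from the old grid, and only then is the whole grid replaced, so
    nothing ever sees a half-updated state."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(grid[rr][cc]
                   for rr in range(max(0, r - 1), min(rows, r + 2))
                   for cc in range(max(0, c - 1), min(cols, c + 2))
                   if (rr, cc) != (r, c))

    return [[1 if (grid[r][c] and live_neighbours(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbours(r, c) == 3)
             else 0
             for c in range(cols)]
            for r in range(rows)]

# A 'blinker': a row of three live cells flips to a column and back.
grid = [[0, 0, 0],
        [1, 1, 1],
        [0, 0, 0]]
print(step(grid))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```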

    And in the end, we should remember that a lot of the unreliability of computer programs can be cured by good old hard work. The reason the Linux kernel is much more stable and secure than Windows is simply that there are a lot more people looking for bugs and contributing code. All in all, the author’s idea is not bad and it might work, but it is hopelessly short-sighted, and without any semblance of a working model, I’m not taking the bait. Still, I would encourage you to read the article and draw your own conclusions. (No, you don’t have to be a CS graduate, but some experience might be nice.)
