Hundreds of Little Things

Last week I came across a blog post about the new release of an image editor called Acorn. I don’t use Acorn, but one part of the post appealed to me. In a section titled “Hundreds of Little Things” the author talked about fixing bugs in Acorn:

It took months and months of work, it was super boring and mind numbing and it was really hard to justify, and it made Acorn 5 super late. But we did it anyway, because something in us felt that software quality has been going downhill in general, and we sure as heck weren’t going to let that happen to Acorn.

Most of my past week was spent fixing lots of small, annoying bugs in my summer project. Some of them were edge cases in the core functionality of the system, but a lot of them were little bits and pieces and rough edges that I would normally have just let pass. I do agree that software quality seems to be going downhill in general. One way to fix that is to pay attention to all the little things we usually let slip past. On a related note, I think that writing software in a way that doesn’t allow these little things to slip past us is still unnecessarily difficult and complicated, but that’s a matter for another post.


Is code reuse a lie?

I took my first real programming class in high school. Someone up in the higher echelons of the Indian education system decided that we should all start learning Java, even though it’s a perfectly terrible language for beginners. Anyway, there we were, some twenty-odd high school kids being indoctrinated in the many wonderful benefits of object-oriented programming, despite Java not really being an honest-to-goodness OO language. One of the things we memorized (and later spilled onto our exam papers) is that OO enables this really good, absolutely awesome thing called ‘code reuse’. I don’t remember anyone actually defining what code reuse is. I can’t really blame them, because it seems rather self-explanatory, doesn’t it? You use old code instead of writing the same stuff over again. Until recently, I didn’t realize all the nuances that come with the idea of code reuse.

I had always thought that code reuse meant being able to lift code unchanged from one project and drop it into another. And I couldn’t think of any case over the last few years where I’d actually done something like that. With what experience I had with OO programming, it didn’t seem to me that reuse in that sense fell naturally out of using OO. In fact, it seemed quite the opposite: in each project, my classes were fairly specialized to the task at hand, and I’d only be reusing them if I was doing a rewrite of the project along the same basic lines. It struck me that code reuse might just be yet another one of the myths that software engineering has picked up in its short but interesting lifetime. Or as tycho garen tweeted to me: “cake is a lie. code reuse? just a little fib”.

However, I didn’t want to base broad generalizations like this on my limited personal experience alone. I first tried to see if there were any published statistics regarding code reuse across large corporate projects. I found some academic papers on how code reuse could be encouraged, but nothing in the way of real hard data. I next turned to the Arch Linux forums, whose community contains a fair number of programmers. This turned out to be a really good move, as the resulting discussion brought up a number of points I hadn’t thought about regarding the different forms of code reuse.

1. Code reuse between projects

This was the type of reuse that I had in mind. It seems that this sort of reuse is mostly a myth, and understandably so. In most cases you want to get a project done as simply as you can (though simplicity can be measured in different ways). That in turn means that your code will most likely be designed to solve the problem at hand rather than being suited for reuse. At the same time, I think that if you have a group of similar projects, you can be careful to make sure you can share code between them. However, the chance that you’ll be working on sufficiently similar projects at the same time is pretty slim.

2. Code reuse through libraries

Even though copy-and-paste reuse between projects may be a myth, there’s still a lot of code being reused out there. If there weren’t, our field wouldn’t get very far. A lot of this reuse is done via libraries: collections of code that solve a particular problem and are designed to be used as part of larger programs. No matter what sort of programming you do, it’s almost inevitable that you’ve used some sort of third-party library. This sort of reuse has been around a lot longer than object orientation, and I personally think that the appearance of OO hasn’t made library use more common or even easier. The ease with which code can be distributed and reused depends heavily on the support a particular language has for it, which is orthogonal to how much it supports object orientation (or any other paradigm, for that matter).

3. Code reuse through class hierarchies

I’ve come to realize that this is probably what is actually meant by code reuse in OO languages. You don’t reuse classes between projects, but rather structure your class hierarchy so that you minimize the amount of code you have to write and try to eliminate copy-paste duplication (which can be a nightmare to maintain). A proper design means that you can isolate common functionality and place it in superclasses, leaving subclasses to deal with the variation. The code in the superclass gets reused without actually being copied into the subclasses. Trent Josephsen gave a more detailed answer in the Arch Linux thread:

Imagine three classes A, B, C that share an interface.  The status() method is the same for all three, but the serialize() method is different in each class.  Without code reuse, you would copy identical code from A.status to B.status and C.status, which could be a problem if the code in A has a bug in it which becomes a serious problem in B or C.  With OOP-style code reuse, you derive A, B, and C from an abstract class Z which contains an implementation of status() and a declaration of serialize().  Then each child “re-uses” the code in Z without having to write it in each class.

Furthermore, imagine that you derive class AA from A, and you need to modify the serialize() method ever so slightly.  If most of the code is identical, you may be able to call A.serialize() from AA.serialize(), hence “re-using” the code in A.serialize() without copying it into AA.  You might argue this is a redefinition of “reuse”, but I think it is the generally accepted definition in OOP.
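Here’s how that might look in Python. This is only a minimal sketch: it borrows the class and method names from the comment above, and the method bodies are made up purely for illustration.

```python
from abc import ABC, abstractmethod


class Z(ABC):
    """Abstract base: status() is written once, serialize() is left to subclasses."""

    def status(self):
        # Shared implementation, reused by every subclass without copying.
        return f"{self.__class__.__name__}: ok"

    @abstractmethod
    def serialize(self):
        ...


class A(Z):
    def serialize(self):
        return "A-specific format"


class B(Z):
    def serialize(self):
        return "B-specific format"


class C(Z):
    def serialize(self):
        return "C-specific format"


class AA(A):
    def serialize(self):
        # Reuse A's implementation and tweak the result slightly.
        return super().serialize() + " (with AA's extra field)"
```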

It’s possible to take this view of code reuse and apply it even if you’re not working in an object-oriented language. Functional languages with higher-order functions allow just this: if you have a bunch of functions that do almost the same thing with a small amount of variation, you can isolate the common part into its own higher-order function and pass in smaller functions as arguments (each implementing the differing behavior).
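A quick sketch of the same idea, again in Python; the function names and data here are invented for the example.

```python
# The shared skeleton lives in make_report(); the part that varies is
# passed in as a function instead of being copied into each variant.
def make_report(records, format_record):
    header = "=== report ==="
    body = "\n".join(format_record(r) for r in records)
    return header + "\n" + body


def as_csv(record):
    return ",".join(str(v) for v in record.values())


def as_text(record):
    return "; ".join(f"{k}={v}" for k, v in record.items())


records = [{"name": "foo", "count": 3}, {"name": "bar", "count": 7}]
print(make_report(records, as_csv))   # the skeleton reused with CSV rows
print(make_report(records, as_text))  # reused again with plain-text rows
```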

So it turns out that code reuse isn’t a lie after all, but it didn’t mean what I thought it did. Another comment that came out of the discussion was “Code reuse is a choice”, which I think is a very appropriate way of putting it. You can have code reuse in a number of different ways: using libraries, proper class or function organization, or even writing your code as a set of reusable libraries tied together with some application-specific code. There really is more than one way to do it, and it’s up to the programmer to decide which path to take.

Creating extensible programs is hard

As part of my research work I’m building a program that needs to have a pluggable visualizer component. I would like users to be able to create and use their own visualization components so that the main output from the program can be viewed in a variety of ways: simple images, 3D graphics, maybe even sound. My program is in Python, but I would rather not limit my users to writing Python visualization code. Ideally, they should be able to use any language or toolkit they prefer. The easiest way to do this (as far as I can see) is to have the visualizers be completely separate standalone programs. The original program would save its output as a simple text file, and the visualizer would then be responsible for reading the file and acting on it. At the same time, however, I would like to make it easy to write Python visualizers that don’t have to supply their own file-reading code. These goals mean that I need to design and implement a stable framework that can handle all this extensibility.

I only really started working on this last night, but in that time I’ve realized that it’s harder than it sounds. Here are some of the things I’ve figured out I have to do (a rough sketch follows the list):

  1. Determine whether the visualizer is a Python module or a standalone program
  2. If it’s a standalone program, save the output as a file and then call the program with the output file as a parameter
  3. If it’s a Python module, there should be a class (or a number of classes) corresponding to visualizers.
  4. The visualizer objects should have a clean API that the main program can use.
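Here’s the rough sketch I mentioned of what that dispatch might look like. To be clear, none of this is settled code: the `Visualizer` base class, the `render()` method and the way the output file is handed over are all assumptions I’m still playing with.

```python
# Sketch only: dispatch between a Python-module visualizer and a
# standalone external program. All names here are provisional.
import importlib
import subprocess


class Visualizer:
    """Base class a Python visualizer module would be expected to subclass."""

    def render(self, output_path):
        raise NotImplementedError


def run_visualizer(spec, output_path):
    """`spec` is either an importable module name or a path to an executable."""
    try:
        module = importlib.import_module(spec)      # step 1: is it a Python module?
    except ImportError:
        # Step 2: not importable, so treat it as a standalone program and
        # pass it the saved output file on the command line.
        subprocess.run([spec, output_path], check=True)
        return
    # Steps 3 and 4: instantiate every Visualizer subclass the module
    # defines and drive it through the (still hypothetical) render() API.
    for name in dir(module):
        obj = getattr(module, name)
        if isinstance(obj, type) and issubclass(obj, Visualizer) and obj is not Visualizer:
            obj().render(output_path)
```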

While none of these tasks is impossible or requires advanced computer science knowledge, they do require a considerable amount of care and planning. Firstly, the mechanism for detecting whether the visualizer is a Python module has to be robust and accept user input, which means there has to be error checking and recovery. There also needs to be a stable interface between the main program and the modules that are loaded: clear communication between the parts, without the modules being able to interfere with the main program.

Allowing third-party (or even second-party) code to run inside your framework is not something to be taken lightly. Malicious, or even sloppily written, code can have very damaging effects on your own code. In my case, I’ll be directly translating my program’s output into API calls on the Python visualizer objects. Calling non-existent methods would throw exceptions, and at the very least I need to make sure those exceptions are caught. I also need to decide just how much information the visualizers should get. Luckily for me, I won’t be allowing the modules to send any information back, which makes my job easier.
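The sketch below shows the kind of defensive wrapper I have in mind, assuming the main program drives visualizers through named method calls; the method names and logging behavior are just placeholders, not a real API.

```python
# Sketch: every call into third-party visualizer code goes through one
# wrapper so a missing method or a crash inside the plugin can't take
# down the main program.
import logging


def safe_call(visualizer, method_name, *args):
    method = getattr(visualizer, method_name, None)
    if method is None:
        logging.warning("visualizer %r has no %s(); skipping", visualizer, method_name)
        return
    try:
        method(*args)
    except Exception:
        # Sloppy plugin code shouldn't crash the host program.
        logging.exception("visualizer %r failed in %s()", visualizer, method_name)


# e.g. safe_call(viz, "draw_point", 3, 4) instead of viz.draw_point(3, 4)
```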

Writing an extensible program like this one is an interesting experience. I’ve been interested in software engineering for quite a while now, and though I’ve written large programs before, this is the first time I’ve made one specifically geared towards extensibility. Extensibility brings to the forefront a number of issues that other types of development can sweep under the carpet: modularity, security, a clean API, interface design, all of it is a necessity for a properly extensible system. Furthermore, having an extensible system means that you are never quite sure what is going to happen or how your software will be used. Since this is the first time I’m building such a system, I’m going to be very careful. I’m putting more time into the design phase because I don’t want to do a rewrite partway through the project. Let’s hope the experience proves to be a good one.

Software is Forever Beta

Word on the web is that Google just pulled the Beta label off its Chrome browser. As Google Operating System has noted, it’s not that Chrome is fully ready for mass consumption, but rather that it’s just good enough to enter the fray with Firefox and Internet Explorer, and that Google is in the process of sealing bundling deals with some OEMs. There is still work to be done, there are still bugs, and there are important features in the works (including an extension system). But the event raises a question that I don’t think has ever been convincingly answered: when is the right time to take the Beta label off a piece of software?

Wikipedia says that a beta version of a software product is one that has been released to a limited number of users for testing and feedback, mostly to discover potential bugs. Though this definition is mostly accurate, it’s certainly not complete. Take Gmail, for example: it’s open to anyone and everyone and isn’t really out there for testing, yet it’s still labeled beta years after it was first released. You could say that in some ways Gmail changed software culture by being ‘forever beta’. On the other hand, Apple and Microsoft regularly send beta versions of their new products to developers expressly for the purpose of testing.

Corporate branding aside, everyone probably agrees that a piece of software is ready for public release if it generally does what it claims to do and doesn’t have any show-stopping bugs. Unfortunately this definition isn’t as clear-cut as it seems. It’s hard to cover all the use cases of a software product until it is out in the real world being actively used; after all, rigorous testing can only prove the presence of bugs, not their absence. It’s hard to tell what a show-stopping bug is until the show is well under way. And going by this definition, should the early versions of Windows have been labeled betas because they crashed so much? During exam week I’ve seen college library iMacs choke and grind to a halt (spinning beachball of doom) as student after student piles on resource-intensive multimedia. Is it fair to call these systems beta because they crack under intense pressure?

Perhaps the truth is that any non-trivial piece of software is destined to always be in beta. The state of software engineering today means that it is practically impossible to guarantee that software is bug-free or will not crash fatally if pushed hard enough. As any developer on a decent-sized project knows, there’s always that one obscure bug that needs to be fixed, but fixing it potentially introduces a few more. That said, the reality is that you still have to draw the line somewhere and actually release your software at some point. There’s no hard and fast rule for when your software is ready for public use. It generally depends on a number of factors: What does your software do? Who is your target audience? How often will you release updates, and how long will you support a major release? Obviously the cut-off point for a game for grade-schoolers is very different from that for air traffic control software. Often it’s a complicated mix of market and customer demands, the state of your project and the abilities of your developers.

But Gmail did more than bring about the concept of ‘forever beta’. It introduced the much more powerful idea that if you don’t actually ‘ship’ your software, but just run it off your own servers, the release schedule can be much less demanding and much more conducive to innovation. Contrast Windows Vista, with its delayed releases, cut features, hardware issues and generally negative reception, with Gmail and its slow but continuous rollout of new features. Gmail can afford to be forever beta, whereas Windows (or OS X, for that matter) simply cannot. The centralized nature of online services means that Google doesn’t need a rigid schedule with all-or-nothing release dates. It’s perfectly alright to release a bare-bones product and then gradually add new features. Since Google does all updates automatically, early adopters don’t have to worry about upgrading on their own later. People can jump on the bandwagon at any time, and if it’s free, more people will do so earlier, in turn generating valuable feedback. It also means that features or services that are disliked can be cut (Google Answers and Browser Sync). That in turn means you don’t have to waste valuable developer time and effort in places where it won’t pay off.

In many ways the Internet has allowed developers to embrace the ‘forever beta’ nature of software instead of fighting it. But even if you’re not a web developer, you can still take measures to avoid being burned by the endless cycle of test-release-bugfix-test. It’s important to understand that your software will change in the future and might grow in unexpected directions; all developers can save themselves a lot of hardship by taking this into account. Software should be made as modular as possible so that new features can be added, or old ones taken out, without drastic rewrites. Extensive testing before release can catch a large number of potential bugs. Use dependency injection to make your software easier to test (more on that in a later post). Most importantly, listen to your users. Let your customers guide the development of your products, and don’t be afraid to cut back on features if that is what will make your software better. After all, it doesn’t matter what you label your software; it matters what your users think of it. People will use Gmail even if it stays beta forever because it has already proved itself as a reliable, efficient and innovative product. Make sure the same can be said of your software.
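As a small preview of that later post, here’s a tiny sketch of what I mean by dependency injection; the `ReportGenerator` class is invented for the example and isn’t from any real project.

```python
import time


class ReportGenerator:
    def __init__(self, clock=time.time):
        self._clock = clock          # the dependency is injected, not hard-coded

    def header(self):
        return f"Report generated at {self._clock():.0f}"


# Production code uses the real clock:
print(ReportGenerator().header())

# A test injects a fixed clock so the output is deterministic:
assert ReportGenerator(clock=lambda: 0).header() == "Report generated at 0"
```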

Use your own software

Also known as “eat your own dog food”, this is the concept behind one of the most successful software engineering projects of modern times: the Windows NT kernel. Windows NT was written by a highly talented team led by a man who is arguably one of the best software engineers of all time: Dave Cutler, who was also the lead developer of another groundbreaking operating system, Digital’s VMS. But there was more to the project than talented developers: the whole team actually used Windows NT every day, as soon as it was usable. This meant the developers exposed themselves to the problems an average user would encounter, and could fix those problems before actually shipping the finished product.

Let’s face it: software is buggy. We still have no clue how to reliably write bug-free software, so we’re stuck writing buggy software and then wrangling the bugs out of it. A lot of bugs are removed simply in the process of getting a piece of software working. Automated testing gets rid of a fair number more. However, there are some things that no amount of debugging or automated testing can get rid of. Modern software systems are large and complicated, and it’s hard to tell exactly how all the different parts will interact until you actually start using it all.

Even if you’re certain that your code is relatively bug-free, it’s still important to use the software you’ve written. There are a lot of things about software, like look-and-feel, intuitiveness and ease of use, that can’t be determined automatically. The only way to see whether your program has an elegant and smooth interface, or is powerful but clunky, is to use it repeatedly. When you start using software you’ve written on a regular basis, you start to think about how it can be improved: what the bottlenecks and hard-to-use features are, what features are missing and which are unnecessary. This constant evaluation is a key ingredient of making better software.

Unfortunately, most software is made by developers who don’t really know how it is going to be used in the real world. After all, Photoshop wasn’t made by a team of artists, and most users of Microsoft Word have never written a line of working code. So how do programmers go about creating software that they might never use themselves? Enter the beta tester. Beta testers are given pre-release versions of software to evaluate, and their suggestions are folded back into the next release. The best beta testers are the very people you’re writing the software for in the first place. If you’re writing a software package for a specific client, it is essential to keep a continuous dialog open: test versions should routinely be given out for feedback, and that feedback should be incorporated into the next edition. If your software is for a mass market, your user community is your pool of beta testers; encourage them to give feedback and take those opinions into account. Eating your own dog food is a good idea, but it doesn’t hurt to give it to a bunch of other dogs and see what they think of it too.