The Ackermann Function in Java: Why Computers are Stupid

I’ve started teaching myself Java, right from the basics, and as a guide I’m using the Java version of the How to Think Like a Computer Scientist book. One of the exercises at the end of the fifth chapter (called Fruitful Functions) is to implement the Ackermann function as a recursive method. The Ackermann function is mathematically defined as:

    A(m, n) = n + 1                    if m = 0
    A(m, n) = A(m - 1, 1)              if m > 0 and n = 0
    A(m, n) = A(m - 1, A(m, n - 1))    if m > 0 and n > 0

Now, the Ackermann function is quite well suited to computerization: it takes little real intelligence to solve for any two numbers, being mostly repetitive calculation (which computers are good at). It took me less than a minute to implement the function as a Java method as follows:

public static int ackerman(int ack1, int ack2) {
    if (ack1 == 0)
        return ack2 + 1;
    else if (ack1 > 0 && ack2 == 0)
        return ackerman(ack1 - 1, 1);
    else if (ack1 > 0 && ack2 > 0)
        return ackerman(ack1 - 1, ackerman(ack1, ack2 - 1));
}

I passed it to the compiler and the compiler replied with a cheery “missing return statement”. Since I already had three return statements, that meant there was a possible path through the method where it would end without a value being returned. The Ackermann function doesn’t work with negative numbers, so I had already implemented a check for negatives before the function call. I tried putting another check for negatives in the function itself, but that didn’t work. By this point I was getting rather frustrated, because the above code would catch all non-negative numbers and produce appropriate returns. I ran the code in my mind with a few small numbers and everything seemed to check out: the values of ack1 and ack2 would keep reducing until ack1 hit zero and the method would end with a proper return.

Finally, on a hunch, I decided to remove the last if-else and make the last return statement free-standing. Semantically it was the same thing, because there was only one possible case left if the first two conditions were not satisfied. And for some reason the compiler thought that this new version was perfectly passable. I haven’t entirely ruled out the possibility that there really was some path that would have resulted in no return. But I think it is far more probable that the compiler for some reason couldn’t handle the multiple recursions and simply gave up. Of course, I’m not an expert in these things, and if someone knows of a proper explanation please let me know. Until then I’ll stick to the knowledge that a computer is still quite some distance away from what would be common sense to a human (or at least the Java compiler is).
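For reference, here is a reconstruction of the version the compiler accepted, with the last if-else removed so the final return stands free. (As far as I can tell, Java’s definite-return analysis is purely syntactic: it doesn’t evaluate the conditions, so it can’t see that the three branches together cover every case, but an unconditional final return satisfies it.)

```java
public class Ackermann {

    public static int ackerman(int ack1, int ack2) {
        if (ack1 == 0)
            return ack2 + 1;
        else if (ack1 > 0 && ack2 == 0)
            return ackerman(ack1 - 1, 1);
        // Free-standing return: the compiler now sees a return on every path.
        return ackerman(ack1 - 1, ackerman(ack1, ack2 - 1));
    }

    public static void main(String[] args) {
        System.out.println(ackerman(2, 3)); // prints 9
        System.out.println(ackerman(3, 3)); // prints 61
    }
}
```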

Cross-platform Programming: The Details

In the last article, I took a general look at cross-platform programming. From a developer’s point of view, there are a number of ways of programming cross-platform. While maintaining separate source code trees for each platform would make your program “fit in” the most with each platform, it would be almost like maintaining a different program for each platform, a lot of work indeed. One solution is to keep the same source code and use a different compiler to create a different executable for each platform. While this reduces the amount of work you have to do, using this method means that you have to be careful not to use platform-specific methods. And the performance of your program will vary according to the quality of the compiler used.

Probably the simplest solution for a cross-platform developer would be to write in an interpreted language. The upshot is that you can (mostly) forget about dealing with the platform, because the interpreter will be the same irrespective of the platform and you can be pretty confident that your code will run the same on whichever platform you make it work. On the other hand, this method relies heavily on the proper performance of the interpreter. However, if you use one of the more common interpreted languages like Python or Perl, that is unlikely to be a problem. These interpreters have been out there for years and have been thoroughly tested on all of the more common platforms.

And now for the Java Virtual Machine. The JVM is just what it sounds like: a sort of pseudo-computer which conveniently lets you ignore the real computer. But there are two important things that set the Java Virtual Machine apart from interpreted languages. Firstly, the JVM is independent of the actual Java language. Though Java is the primary language, there are a number of other languages which can be translated into bytecode that runs on the JVM. There are also implementations of Python and Ruby for the JVM, though these are considerably less complete than the reference implementations. The second reason will be more appealing to developers focussing on GUI applications: Java has its own GUI toolkit called Swing which can provide a consistent look-and-feel across all platforms. Your program not only functions the same across all platforms, it looks the same as well. Most other interpreted languages require an external GUI toolkit, which means that the end user has to make sure the relevant libraries are installed (or you, as a developer, have to package the libraries with your program).
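As a small illustration (my own sketch, not tied to any particular application), the consistent appearance comes from Swing’s cross-platform look-and-feel, which is the same class on every operating system:

```java
import javax.swing.UIManager;

public class LookAndFeelDemo {
    public static void main(String[] args) {
        // The cross-platform look-and-feel is identical on every OS,
        // which is why a Swing app can look the same everywhere.
        System.out.println(UIManager.getCrossPlatformLookAndFeelClassName());
        // prints javax.swing.plaf.metal.MetalLookAndFeel

        // The system look-and-feel, by contrast, mimics the host platform,
        // so this value differs from OS to OS.
        System.out.println(UIManager.getSystemLookAndFeelClassName());
    }
}
```

Setting one or the other with `UIManager.setLookAndFeel(...)` is how you choose between “looks the same everywhere” and “fits in with the platform”.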

Talking about GUI toolkits, there are a number of toolkits out there for you to choose from. Qt and GTK are probably the more popular ones, but wxWidgets is gaining popularity. GTK and Qt are both non-native toolkits, in that they use their own drawing engines, though Qt can now use the drawing API of the platform to give a more native look-and-feel, and GTK uses a number of different engines, some of which emulate the look of native widget sets. wxWidgets, however, uses the native drawing system of the platform itself and only provides a thin abstraction layer on top of it. This means that your apps will look like native apps because they are effectively using the same set of graphics widgets. GTK is written in C while Qt and wxWidgets are in C++, but bindings for all of them exist in many popular cross-platform languages, so you can pick and choose at will.

Though I’ve presented cross-compiling and interpreted languages as two different solutions, you can take a middle path: writing part of your program in code that will be compiled separately for each platform and part in an interpreted language. The Firefox browser uses this technique. The Gecko layout engine is written in C++, but a large part of the application is built with interpreted technologies like XUL, CSS and JavaScript, allowing it to be easily ported. Besides Windows, Mac and Linux, ports are available for BeOS, SkyOS, FreeBSD and OS/2.

Ultimately it comes down to you as a developer to decide which method and technologies will make your work easier and the end product better. My personal favourite is Java/Swing for heavy projects and Python/wxWidgets for smaller personal work.

Cross-platform Programming: A Primer

Microsoft Windows is still the most popular desktop operating system in the world, but open source alternatives (mainly Linux and BSD systems) are gaining a wider user base and the Apple Mac, with its undeniable cool factor, is making a decent comeback. A lot of people have a lot of different opinions about this, but for a software developer this means one thing: there are now three main platforms, each with a reasonably large number of users, that you might want to develop software for. The question is: which one do you choose?

The most obvious choice would be Windows, because that way your software gets out to more people (especially important if you’re actually selling your software). However, that obvious advantage is offset by the fact that there is already a large amount of Windows software, and your application might just get lost in the crowd unless it’s very exceptional (or you’re a ruthless marketer). The problem with the Mac is that not only is its user base smaller, but Mac users are also generally quite discerning. Linux and the BSDs (which are binary compatible) have a slew of their own problems, like different GUI toolkits, different software packaging systems and a general aversion to proprietary software. The upside is that there are plenty of early adopters there, so a good application can gather momentum quickly.

You could do some research, weigh the pros and cons and then take the gamble, or you could sidestep the issue altogether. How? By developing cross-platform. You write your program so that it will run on all of the above without the need for drastic changes. Now the problem with this idea is that all of the platforms are different from each other. A program written for Windows will not run under Linux or Mac OS, unless you use some sort of emulator or virtualization. And most users will not go to so much trouble just to run your program. However, developing cross-platform can be done, and without too much of a fuss. There are a number of ways to do this, but I’m going to be focussing on the two that are most efficient for software developers.

Cross-compiling

You write your program in a compiled language like C or C++ and then just use a compiler specific to the platform you’re interested in to compile your program into a binary for that platform. This way you only have to write your source code once and only have to maintain a single source tree. This also gives you access to platforms that are more esoteric than the three common ones mentioned. The GNU compiler can target almost 50 different processor architectures (assuming of course that there is an operating system to actually run your program on). The downside is that you have to avoid using libraries and system calls specific to a platform. This can be something of a problem, especially regarding GUIs.

Abstraction

Here you don’t worry about compiling for different platforms. Rather, you write your program in an interpreted language which will run without any change on every platform for which an interpreter exists. You can use a full-fledged virtual machine like the Java Virtual Machine, or you can use a more lightweight interpreter solution like Python or Perl. The obvious restriction with this method is that an interpreter must exist for your target platform (and your users must have the interpreter installed). However, this shouldn’t be a problem if you stick to the three platforms listed above.
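To illustrate the idea (a sketch of my own, not from any particular project), the same compiled Java bytecode runs unchanged on any platform with a JVM; the platform differences only surface through standard system properties:

```java
public class PlatformDemo {
    public static void main(String[] args) {
        // The same .class file runs on Windows, Mac and Linux alike;
        // the JVM fills in the platform-specific values at runtime.
        System.out.println("OS: " + System.getProperty("os.name"));
        System.out.println("File separator: " + System.getProperty("file.separator"));
        System.out.println("Line separator length: " + System.lineSeparator().length());
    }
}
```

Code that always asks the JVM for these values (instead of hard-coding `\` or `\n`) is exactly the kind of code that needs no changes from platform to platform.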

Tomorrow I’ll take a look at ways to implement each of the above methods, explore the potential difficulties of each and take a look at solutions and workarounds.

Perl vs Python: Why the debate is meaningless

Competition is a common thing in hackerdom, and arguments over which of two (or more) things is the better are common and keep coming and going (but never really go away). Some of the most common ones are: Java vs C#, Emacs vs Vi (and its clones), Linux vs BSD, Closed Source vs Open Source and everyone’s favorite: Microsoft vs everyone else. The internet is littered with evidence of these great wars (which none too rarely take on epic proportions), and some digging will throw up evidence on forums, websites, bulletin boards, Usenet archives and of course, blogs. Today I’m taking a look at one of these wars that I’ve had personal experience with: Python vs Perl.

The last few years of my life have been filled with an as yet unfulfilled desire to learn some real programming (see last post). A few months ago I sat down to really learn Python, because I had read a lot online about how it is a great beginner’s language (though I’m not really a complete newbie at this point). Anyway, I’ve been at it for a while and I’m liking how things are going.

But before I started Python, I had a brief brush with Perl a year ago. At first glance it seemed like something I had dreamed of. After struggling through memory management in C++ and Java and sitting through long compile times (which are really irritating if you’re a new programmer making lots of mistakes), Perl’s write-and-run cycle was a dream, and built-in memory management made me a lot happier. But it didn’t take too long for me to get dissatisfied. Reading other people’s code was never very easy, and variables behaving differently in different contexts was not something that I liked. I left Perl in just over a month.

Looking for an alternative, I quickly found out about the Python vs Perl debate and was especially inspired by Eric Raymond’s Linux Journal article. I started Python and quickly fell in love with it. It had all the things I liked in Perl, but none of the eccentricities that I had wanted to leave behind. For a long time I didn’t give Perl another thought. But being an avid netizen, open source fan and Linux lover, I kept reading about Perl here and there. So, last December, I decided to look into Perl again.

What I’ve found out since then is that the Perl vs Python debate is rather like apples vs oranges. One of the biggest complaints about Perl is that it is untidy. Perl’s philosophy of “There’s More Than One Way To Do It” means that there are numerous ways to twist your code syntax around, and it can be made absolutely unreadable. And it’s not just the syntax. Perl’s later features, especially object-orientation, have a distinctly kludge-like feel about them, as if they were simply slapped on later rather than integrated. Python, on the other hand, is much cleaner and better designed. But there’s a twist to this tale. It’s this: Perl and Python are very different languages.

Firstly, Perl was not meant to be a general purpose language. It was meant to be a sort of shell script on steroids. It’s a glue language, aptly called a Swiss Army chainsaw. If you have a myriad of different programs and data sources and all you need to do is bind them with the electronic equivalent of duct tape, use Perl. Keeping Perl to its original uses makes most of the problems disappear. You don’t come into contact with the kludginess of the more advanced features, because you will rarely (if ever) use them, and readability isn’t so big a problem, because your programs will not generally be more than a hundred or two lines.

Python, on the other hand, is a more general purpose language. Features like object orientation are better implemented because you’re expected to use them all the time. And there is a definite emphasis on writing clean code, because your programs will be big, with complex tasks of their own rather than just delegation to other programs.

So the final word is this: choose your tools carefully. If you need something that will update a web page with a list of recently played songs, use Perl. A good example of how to use Perl effectively would be Xterminus’ changes page, which uses a Perl script to gather data from his del.icio.us account and his wiki software and create a pure HTML page. If you want to write anything with more than a few layers of abstraction and of more than medium complexity, grab Python. If you have any experience with these two, let me know.

Tomorrow’s debate: Emacs vs Vi

Update: I had said above that I found memory management in Java and C++ tedious. I admit that was something of a typo: I had meant to say just C++. Thanks to Digg and Reddit users for pointing this out.

The 5 year programming plan

I’ve been trying to learn programming for a number of years and I really haven’t gotten too far. My first brush with programming was with BASIC, and I never learnt much beyond FOR. My first real programming language was Java. I learnt some more, but nothing beyond simple inheritance. Along the way I tried twiddling with C++ and failed miserably (I won’t go into details). The reasons for my failures were numerous, including bad teachers, bad syllabi and my own inability to settle down with a good text editor rather than IDE-hopping. But enough is enough. I’m finishing school in a few months and then I hope to get a Computer Science degree in college. So I’m starting a 5 year plan to really learn programming, starting now and going up to 2011, when I graduate.

Before I outline the plan, here are the basic premises on which it is based: I should learn at least one programming language and one programming tool each year. By “learn” I don’t mean knowing everything that there is to know about the language, or learning all the extensions and toolkits that are available. By the end of the year, I should be able to write a significantly large program without having to look up documentation too regularly. I should also be able to read other people’s code and understand what is going on without much difficulty. Each year I’ll also be implementing at least one large project. As you may have realized by now, all this leaves room for a certain amount of ambiguity and interpretation, but that’s OK with me, because I think it’s good to have some flexibility in one’s plans. Without any further delay, here goes:

2007

  • Languages: Python/Perl/Shell Script
  • Tools: Emacs
  • Project: A PIM with internet backup and synchronization

2008

  • Language: Java
  • Tools: Eclipse
  • Project: Undecided

2009

  • Language: C/C++
  • Tools: Undecided
  • Project: Perhaps some sort of database application

2010

  • Language: Ruby
  • Tools: Rails
  • Project: Perhaps a reimplementation of my previous apps to a web framework.

2011

  • Language: LISP
  • Tools: SLIME
  • Project: Undecided

As you can see, there is a fair amount of stuff that is currently undecided. I intend to fill in the blanks as I learn more about the languages and programming in general, and where to use what. Another thing I should mention is that since I’m going to be using Emacs, I suppose I will have to dabble in Elisp a bit, but I’d like to keep that to a minimum until I actually start learning LISP. If you’re a programmer and have had experience with anything listed above, any advice would be welcome.