on the edge

computers & technology, books & writing, civilisation & society, cars & stuff


Greg Black

gjb at gbch dot net


If you’re not living life on the edge, you’re taking up too much space.


FQE30 at speed





Worthy organisations

Amnesty International Australia — global defenders of human rights

Médecins Sans Frontières — help us save lives around the world

Electronic Frontiers Australia — protecting and promoting on-line civil liberties in Australia






Software resources


GNU Emacs


blosxom


The FreeBSD Project

Wed, 08 Aug 2007

Code Craft falls down hard

I know it’s not possible to write a big book without having any errors fall through the cracks, and I don’t make a habit of public excoriation of people for things that can be forgiven — but there are unforgivable things.

Take Code Craft by Pete Goodliffe, published by No Starch Press, as an illustration. Here we have a 580-page tome dedicated to the practice of writing excellent code, and on page 13 it has an egregious example of unforgivable content.

Before getting to the details, I would mention that neither the book nor the website offers any way that I could find, in a reasonable amount of time, to report errata. Had there been such an avenue, I’d have taken it. As it is, this seems the easiest approach.

This is in Chapter 1, On the Defensive, subtitled Defensive Programming Techniques for Robust Code. Under the heading Use Safe Data Structures, he gives the following example of some C++ code:

    char *unsafe_copy(const char *source)
    {
        char *buffer = new char[10];
        strcpy(buffer, source);
        return buffer;
    }

He then gives the correct explanation of the problem with this code when the length of the string in source exceeds 9 characters. After some discussion, he then says it’s easy to avoid this trap by using a so-called “safe operation” and offers this idiotic solution:

    char *safer_copy(const char *source)
    {
        char *buffer = new char[10];
        strncpy(buffer, source, 10);
        return buffer;
    }

In case the reader doesn’t know how the C string library (which is what is being used here, despite the otherwise C++ content) works, let me point out that strncpy is guaranteed not to solve the problem under discussion. The strncpy function copies at most the specified number of characters, but — in the critical case where the source string is too long — it will not add the very important NUL-terminator character. And so users of the returned buffer will still fall off the end of it and cause breakage.
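
For the record, and purely as a sketch of my own (not something from the book), one way to keep strncpy and still return a usable string is to supply the terminator yourself; I’ll use plain C allocation here rather than the book’s new[]:

    #include <stdlib.h>
    #include <string.h>

    char *actually_safer_copy(const char *source)
    {
        char *buffer = malloc(10);

        if (buffer == NULL)
            return NULL;
        strncpy(buffer, source, 9);   /* copy at most 9 characters */
        buffer[9] = '\0';             /* and always terminate the string */
        return buffer;
    }

The BSD strlcpy and the standard snprintf are other common ways of getting a guaranteed terminator, but the point is the same: the terminator has to come from somewhere, and strncpy alone won’t supply it.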

Every C or C++ programmer who has been paying attention knows what is wrong with the C string library and knows how to use it correctly. So an error of substance like this should simply never have happened. It’s not a typo. It’s not a trivial error. It’s just plain wrong. And there’s no excuse for it.

I’m sure the author has many good things to say in this book and many of the sentences I have skimmed certainly do make sense. But stuff like this makes it impossible for me to suggest that it has any place on the budding programmer’s bookshelf. That’s a shame, because we need books that do what this book purports to do.

What irritates me most about this is that none of the book’s reviewers spotted this glaring error and none of the online reviews that I found noticed it either. This means that nobody with even a tiny clue has been looking at it.


Wed, 04 Jul 2007

Python 3000

About three years ago, I announced my plan to move away from Python for future development work. I returned to that theme twelve months ago in a couple of posts about recent experiences with Python.

It seems time to update things now. I have just been reading Guido van Rossum’s Python 3000 Status Update in an attempt to understand what the future holds for Python.

Clearly, the Python people have decided to make major changes to Python, such that software written for Python-2.x will need work if it’s to be expected to run on Python-3. Equally clearly, a great deal of work has gone into creating mechanisms to assist programmers with the necessary translations when the time comes and that’s something I applaud.

However, I have long been unhappy with Python’s continual introduction of what I see as gratuitous changes and have been looking at alternatives. Now seems like the time to jump ship. My plan now is to do some serious testing with alternative languages so that—when the time comes for me to write some new thing—I will be ready to do it in some non-Python language.

This post is just to mark the point where that decision was finally made and to link to the Python 3000 paper that marked the tipping point.


Wed, 30 May 2007

A new approach to spam filtering

About three years ago, I first considered DSPAM as a potential solution to the incoming tide of spam that was drowning me and that was increasingly overwhelming SpamAssassin, my then tool of choice. I wrote a couple of blog entries that discussed my research and included references to papers by Jonathan A. Zdziarski (the author of DSPAM) and Gordon Cormack (who, with Thomas Lynam, wrote an evaluation of anti-spam tools). I also mentioned some of my discussions with both Zdziarski and Cormack and said I would report more when I had more information.

Much time has passed and the spam problem has, as we all know, continued to get worse. Last December, having become completely fed up with the worsening performance of SpamAssassin, I decided to install DSPAM for testing. I elected not to bother training it, but allowed it to do its thing and contented myself with informing it of its errors. The downside was that I had to look at every incoming message, whether spam or not, to be sure of the classification. I have examined 82,931 messages in the last five months and I’m amazed at how well DSPAM works.

Overall, it has caught 98.92% of all spam and its false positive rate has been 0.02%. Most of the errors were in the first and second months while it was learning. Now, it is catching over 99.2% of spam with a false positive rate below 0.01% and there have been no false positives at all for a couple of months. For my wife, the learning was a little slower because she receives much less total email than me and her legitimate email volume is so small that it’s a bit of a challenge to get enough for training. However, even in her case, the detection rate is up to 98.90% and false positives have also disappeared.

I was going to modify qmail to reject messages that were deemed to be spam, but I’ve decided that it’s too much work, given the ickiness of the qmail code and the excellent performance of DSPAM. I also toyed with the idea of changing MTA, but I have not found an MTA that I would be willing to use that also has the ability to do what I want. I may one day decide to write my own MTA for in-house use, but for now I’m going to stick with qmail and the other modifications I had made to it in the past and—starting right now—I’m going to stop my practice of reviewing incoming spam in case any legitimate email is lurking there.

In other words, from now on, if anybody emails us and DSPAM thinks it’s spam, nobody will ever see the message. There will be no bounce, there will be no error message, there will be no sign that the message was lost. But it will be irretrievably lost. I have decided that the time spent on reviewing the spam is not worth the rewards, when the chances of finding a real message seem to be less than one in a million now. This is especially so given that anybody who might need to contact us, and whom we would care to hear from, has other ways of reaching us.

I am really delighted to have got to the point where my spam load consists of hitting ‘S’ once a day to tell DSPAM about something it has missed.


Fri, 20 Oct 2006

Software quality

It’s hard to find examples where the two words in my title belong together. People of all kinds–users of software, software developers, and those who teach the next generation of developers–have been pontificating about both the problems with software and various approaches that might help to solve the problems for decades. But, as a general rule with almost no exceptions, software still sucks. And it’s getting worse, not better.

Anybody who happens to read this already knows that software is a problem since they have to be using quite a bit of software just to be reading a blog–and it’s my contention that just using software is enough to drive you to drink.

I’m a software developer, so I am fully aware of the difficulty of creating high quality software. It is indeed difficult to produce software with no bugs and, for most software, it’s probably impractical–or at least not worth the cost. But that’s not to say that the quantity of bugs in the stuff we all have to deal with every day is justifiable.

Here are a couple of examples from some of my appliances. I have a DVR. It has a 60G hard disk and a DVD writer. And it doesn’t have to do much. So why does it take 30 seconds to boot up? Why does it have to boot up after making a recording? Why can’t I set it to record something that starts less than five minutes after the previous item? Why can’t it sort titles alphabetically using the same rules as anybody else?

To elaborate on the last point, as it’s a classic example, consider the following list of titles:

  • Apple
  • Apple Tree
  • Azimuth

That’s sorted in the way that any sane person would expect. Getting software to sort it that way is child’s play. The Unix sort program will sort it that way by default. So what does my DVR do? Behold:

  • Apple
  • Azimuth
  • Apple Tree
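
To labour the point, a dozen lines of throwaway C (my sketch, obviously nothing to do with the DVR’s firmware) produce the sane ordering using nothing fancier than qsort and strcmp:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Plain lexicographic ordering: exactly what strcmp already gives you. */
    static int by_title(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    int main(void)
    {
        const char *titles[] = { "Apple", "Azimuth", "Apple Tree" };
        size_t i, n = sizeof titles / sizeof titles[0];

        qsort(titles, n, sizeof titles[0], by_title);
        for (i = 0; i < n; i++)
            printf("%s\n", titles[i]);   /* Apple, Apple Tree, Azimuth */
        return 0;
    }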

I’ve had enough time to study its behaviour now, so I can choose titles for things that it can sort the way I want–but it’s completely crazy that I should ever have been driven to think about this. I don’t want to belabour the point about this one appliance too much, so I’ll limit myself to one other bizarre fault. It has, as I mentioned earlier, a 60G disk. So it can store quite a number of off-air programs for viewing at more convenient times. Well, it could do that if it didn’t have a limit of 7 recording slots. That’s right–seven. The first VCR I ever owned, more than 20 years ago, could be programmed for more than seven recordings.

I’m sorry, I lied. I’m going to mention one more thing about this device–because it’s faintly possible that this is a deficiency in the hardware rather than the software (not that I believe that for a minute). The advertising material and the manual for my DVR claim that it can copy between media at accelerated speed. That seems to be a reasonable capability, given what we know about other such equipment, so I expected it to work. But it doesn’t. Copy an hour of video from the hard disk to a DVD and it takes exactly one hour. Copy an hour of video from a DVD to the hard disk and that also takes exactly one hour.

And, for another instance that might be hardware but probably isn’t, consider the fact that it makes DVDs that quite likely can’t be read by at least a few of the 8 other DVD readers in the house. Sometimes, to show just how clever it is, it can make a DVD that it can’t read itself. And when that happens, the only way to get control of it again is to remove the power and physically remove the offending disk.

That device has a flotilla of other hideous bugs that make it a nightmare. My wife just leaves it to me to drive it. She is pretty smart.

Now, let’s turn to my phone. No, let’s not. At least, not beyond mentioning that it’s slow and as buggy as hell too. Having once worked as a consultant for one of the mobile phone manufacturers–where my task was to teach their programmers C and to help them with some of the interesting bits of their software–I’m not surprised to see that this phone has software that I wouldn’t spit on.

The trouble is that software is ubiquitous in our world. Even if you never touch a thing called a computer, you can’t escape it. So you’d imagine that we–the software creators of the world–would have figured out some of the basics of making software by now. But we simply haven’t come close. And nothing I see in our tertiary institutions makes me think that’s about to change all of a sudden.

One of the good things about the world of software is Free Software. (For anybody who doesn’t recognise why those two words were capitalised, have a look at this definition for some insight.) Sadly, the Free Software people are at least as bad as the rest of the software community when it comes to quality. The Free Software crowd have a bunch of silly slogans written by would-be philosophers without much insight, such as the famous “Given enough eyeballs, all bugs are shallow”, usually attributed to one of the worst poseurs in the community.

The problem is that bad software is much easier to get done than good software. Of course, if you consider the subsequent investment of time by the software authors while they try to address the worst bugs, then the apparent speed of the favoured method seems less of a sure thing. And, heaven forbid, if we considered the time lost by the unfortunate users of this software, then the equation becomes ridiculous. Time spent on whatever process we can find that results in fewer bugs going into the product will be amply rewarded. And, as many people are now showing, it is almost certain that taking the time to get it right in the first place will be quicker than rushing bug-infested rubbish out the door–certainly once the developers have had time to become established in this new pattern of work.

Just as I avoided listing brand names of my appliances above, I’m not going to single out individual pieces of software for criticism here. But it’s pretty safe to say that any high-profile piece of software is almost certainly riddled with thousands of maddening bugs. And the bigger the software, the worse it will be, for reasons that will be obvious.

As a final comment on the sad state of things, I’m going to look at the state of programming languages–again without naming names. I’m not talking about the suitability of our languages for developing great software, although that is indeed an important matter. I’m just talking about the woeful state of the software in the interpreters and compilers themselves. I recently acquired a 64-bit computer, not so much because I needed the extra capabilities it had, but as a platform for weeding out any little buglets in my own code that might be exposed by a 64-bit machine. As it happens, I have not found any. But I have been amazed at the number of languages that I wanted to use in my testing that are simply not able to run on a 64-bit platform, despite the fact that 64-bit systems have been around for years. And that’s not to mention all the other applications that are not yet available for my 64-bit platform. This is really sad.

And it’s so bad that, in the next few weeks when my operating system comes out with its next release, I’m going to install the 32-bit version on my workstation so that I’ll be able to use all the stuff I want to use.

I’ve been working all my life at continuing to improve the way I do things. I will keep doing that. I’m happy to talk with people about ways of improving software. And I really think it’s way past time for the software development community to get off its collective butt and to start looking hard at injecting quality into software.


Sat, 15 Jul 2006

Further thoughts on Python

I posted an article recently that took a swipe at the direction of Python development. Had I realised that it would be seen by people on the Python developers list, I’d have phrased things differently—I treat this blog as a private repository of my thoughts about various things and assume it will mostly be read by people who know me. This post is intended to explain my position a bit more clearly and to take into account some of the responses from the Python developers.

I’m going to start by outlining my position and my expectations so that those people who seemed baffled by my stance can have a better opportunity to understand this stuff. Then I’m going to discuss a small fragment of the responses on the mailing list. And I’ll finish up with my thoughts about the future.

At the outset, it would be useful to understand that there’s nothing personal in anything I say here—I’m simply stating my take on things. I know I’m far from a typical Python user, and I’m certainly not trying to suggest that everybody should do things my way or even agree with me. Although I would hope that people would agree that I’m entitled to hold my own opinions.

I’ve been developing software for commercial clients for about 25 years and some of the code I wrote twenty years ago is still in daily use in business environments. I take a lot of care to make my software into a stable and useful tool that allows its users to conduct their normal processes in the way they want—I do not believe that customers should twist themselves in knots to adapt their way of doing things to the software that somebody tosses on their lap.

Sometimes, this whole process is straightforward. The customer has a stable hardware platform and I provide stable software and their process undergoes minimal change—and their entire platform is isolated from the big bad world. In these cases, we might see the same old hardware running for 15 years or more with unchanged operating system and application software. Nothing will go wrong there.

In other circumstances, customers run their business processes on systems that are exposed to the Internet and need to keep their operating system and basic utilities up to date in order to avoid exploits. This can result in unexpected updates of things that my software might depend on—such as a new version of Python or, to take another real example, an updated Unix C compiler that introduced a gratuitous change in the format of floating point values that resulted in a database where all values were suddenly multiplied by 4.

Since I use a large number of software packages on my systems (over 500 at present), it’s completely impossible for me to keep fully informed about all the evolution that goes on in all of them. I am a contributor to a small number of free software projects and I do take seriously my responsibility to test them. But I just can’t do that for everything I use if I am also going to do my day job. So I have an expectation that my tools won’t introduce gratuitous change into my world.

What puzzled me about some of the responses on the Python developers list was that people felt entitled to take a swipe at me for expecting bug-free software, despite the fact that I had clearly explained that I was not complaining about a bug—all software has bugs and I understand that they must be fixed when found. My complaint was about a change in behaviour from a function that had no bug in it.

Fortunately, Guido van Rossum (the Python benevolent dictator) is a lot smarter than the chief Perl weenie and knows how to read for the real content. He recognised that the issue I was complaining about was something that had bitten him in the past and he requested that it now be fixed. I understand that a fix is scheduled for Python-2.5 when it comes out.

I understand the desire of the Python community to continue to develop their language. (I think they’re wrong, but I’m in a tiny minority and I have no intention of trying to convert the majority to my opinion.) What I find problematical, however, is their willingness to break working code as part of this process. I complained the other day about the change in behaviour of time.strftime(). I was previously bitten by the bizarre change to the fcntl interface. Where once you had to import both FCNTL and fcntl and then use a constant called FCNTL.LOCK_EX (as one example), things changed so that the FCNTL module disappeared and the constant changed to fcntl.LOCK_EX. I never bothered to discover the official reason for this change, but I stick to my view that it was completely unnecessary.

It’s one thing to extend the language and its support libraries. And I have no argument with that at all. And it’s fine to fix actual bugs in the existing code. But making changes that are guaranteed to break existing correct code is just insane, as far as I’m concerned.

As another example of inexplicable change, I would mention the change in meaning of the division operator. It doesn’t matter if, in hindsight, you see that it would have been nicer to do something differently—once people are using your language, you have to leave it alone. Or else they will do what I’m going to do. I’m going to lock my customers into Python-2.3 for now and then I’m going to migrate all my Python code to a language that doesn’t go in for this kind of breakage.

Ironically, had I used awk for the software in question, I’d have had no problems at all. But Python was new at the time and arguably nicer to write and had two minor but useful features that were missing from awk, so I decided to develop a collection of software using Python. I don’t regret doing that, but it’s definitely time for me to move on now. That’s not just because of the small number of issues that I’ve discussed here, but because of the looming arrival of Python 3000 which sounds like far too dramatic a change for me to want to keep up with it. If I have to deal with that level of change, I’m going to be far better served by choosing a more stable environment for the future work.


Sun, 09 Jul 2006

Python loses the plot

In Python’s early days, I saw it as a fine addition to the programmer’s toolkit—it seemed to offer the good things that Perl offered, but without the gruesome syntax and other Perl perversions, and the Python benevolent dictator and community seemed to have a good plan for the future.

As a result, I began developing most new large applications in Python, with the occasional bit of heavy lifting in C as needed. This approach worked well for some years. But then the wheels slowly started falling off—and yesterday’s experience has pushed me to the point where I’ve decided not to use Python for any new development. This leaves me with a dilemma, of course, as I don’t have any suitable candidate for a replacement.

So what’s my beef? In a nutshell, it’s gratuitous changes that break code that was once correct when it’s exposed to a newer Python release. This disease has afflicted Python for some time, although I have been lucky enough to have only been bitten once before. Yesterday was my second experience with this kind of breakage and I’m going to make it my last.

For those who care, the behaviour of time.strftime() changed in Python-2.4 in a really silly manner. I have a module that asks the user for a date and then parses the elements of the provided date to ensure that we in fact have a valid day, month and year. I then used time.strftime() to format it in a specific manner. Because I had only 3 of the 9 possible elements of the argument passed to this function, I had been using what used to be the documented values as placeholders. As of Python-2.4, that is suddenly no good.

Clearly, I can work around this. But then they’ll break some other standard function that I’ve been using and I’ll have to work around something else. And so on. There is no legitimate excuse for this kind of arbitrary change. It’s impossible to code in such a way that you won’t be bitten, and there’s too much new software coming out every day for developers to have the time to waste reading all the fine print just in case some idiot has broken some standard API.

My interim solution is to change the first line of all my scripts from #!/usr/bin/env python to #!/usr/bin/env python2.3 and, in the longer term, I’ll find a suitable alternative language to work with. I’m certainly finished with Python.


Tue, 30 May 2006

Native driver for nVidia NIC saves the day

As distributed, FreeBSD for the AMD64 platform comes with a rather dodgy driver for the on-board nVidia nForce MCP NIC that appears on many motherboards for this CPU. In my case, the symptoms were trillions of device timeouts and weird unresponsiveness under Gnome and bizarre keyboard malfunctions—lost keystrokes and occasional cases of keys repeating hundreds or thousands of times.

Some research quickly established that these issues were well-known and that there was a revised version of if_nve.c that was supposed to address these concerns. Unfortunately, simply replacing that file with the updated version resulted in a kernel build failure, as other stuff was required. Since that other stuff was supposed to live in a directory that doesn’t even exist on my machine, I decided to try plan B.

Shigeaki Tagahira has developed a FreeBSD native driver, based on the OpenBSD driver. Today, I built that and patched my kernel with his ciphy patch for the Cicada PHY and am pleased to say that the odd behaviour I was seeing now seems to have been cured. This is great news.

And yes, this is a bit boring, but I wanted to record the essential data in case I blow away my installation and forget how to rescue it. Aren’t blogs wonderful?


Fri, 26 May 2006

Rewrite it or fix it?

I’ve been reading Adrian’s series of articles on XP with considerable interest. I’ve found it interesting to see how somebody I know has got involved with this approach to software development and I’ve felt that there were lots of good lessons there.

But I was a bit taken aback by something in a recent entry, Framing The XP Principles:

Netscape has all but gone out of business because of one bad technical decision to rewrite their entire browser instead of taking small steps and fixing the existing one.

Now, before I launch into my speculations here, I should point out that I’ve never worked at Netscape and I’ve never worked on any of the Mozilla products—although I have occasionally skimmed parts of their source code. So what follows is purely my guesses, based on what I can see from outside.

On the other hand, I have worked on a number of big projects where a decision was made to throw out some existing implementation in favour of a complete rewrite—and I’ve seen such projects succeed.

I am satisfied that the old Netscape code base was rubbish. I am also convinced that it would have been insane to attempt to fix it. Where I think Netscape went wrong was in failing to learn anything from the first time through. From where I stand, they seem to have embarked on another gigantic piece of junk without getting hold of the right people or developing a sensible plan. It’s clear to anybody who uses the current version of Firefox, or who has the misfortune of needing to build it, that the new code is pretty nearly as bad as the old code. Of course, it is better—but not significantly better. It’s much more like the old rubbish than like something that we’d all be proud to be a part of.

I have no way of proving my point, of course. And I don’t care that much about the specific case. And I certainly agree that it’s better to refine a code base as a general principle rather than to automatically throw it all away. But I think it’s important to recognise that there are real cases where it is better to discard even a huge code base than to get lost in a vain attempt to “fix” it.

Naturally, if the decision to rewrite is taken, it should only be done if there is a clear commitment to first learn the lessons from the failed implementation and to create a design and a methodology that have a reasonable likelihood of success.

Having said that, I now await Adrian’s next instalment with interest.


C is harder than it looks

I recently saw a nice example on a mailing list of the kind of problem dilettantes run into when they play with C. The coder naturally failed to show what he’d done, but described his problem in terms of “what’s wrong with the implementation of sockets in this operating system?” and went on to describe an impossible scenario. This led to a variety of responses from people who like to be seen to be able to help and don’t strongly feel the need to be right.

It was pretty obvious that the original coder had made a standard beginner’s mistake with C syntax and had compounded his error by failing to turn on warnings in his compiler. In essence, he claimed that the socket(2) call would return a descriptor that was already in use because, after opening his socket, data written to the descriptor would appear instead on the standard output.

Fortunately this was seen by somebody who actually knows C in time to stop too much silly speculation. This guy suggested that the coder must have done something like:

    if (sd = socket(AF_INET, SOCK_STREAM, 0) != -1) {
        /* do stuff */
    }

That code will always assign the value 1 to sd, except for the rare case where it’s not possible to open a socket (in which case the reader, having been alerted to the error, will now know what value it will get). Had our coder turned on compiler warnings in gcc, he would have been told, “warning: suggest parentheses around assignment used as truth value” which might not have been enough, but would have suggested that he needed to get help.
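
For completeness, the fix is nothing more than an extra set of parentheses, so that the assignment happens before the comparison. A minimal sketch (mine, not the original poster’s code):

    #include <sys/socket.h>

    int open_stream_socket(void)
    {
        int sd;

        /* Parenthesised assignment: sd gets the real descriptor,
         * and only then is it compared against -1. */
        if ((sd = socket(AF_INET, SOCK_STREAM, 0)) != -1) {
            /* do stuff */
        }
        return sd;    /* -1 on failure, otherwise a genuine descriptor */
    }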

Real C programmers have that extra set of parentheses burned into their fingertips and don’t need to think about them—and, on the rare occasions when they do forget them, know instantly what that warning means. And real C programmers always run their compilers with all warnings turned on.

Unfortunately, too many people take a quick look at C, see something quite simple, and decide that they can safely work with it. It is indeed a simple language, but it’s also a demanding language that provides no training wheels for learners. It takes time to really understand it and it takes regular practice to make good use of it. It’s my opinion that the time required is well spent because you then have a very powerful tool at your disposal—but, like all powerful tools, it can hurt the unskilled operator.


Thu, 27 Apr 2006

Customers can drive you to distraction

Everybody who deals with customers knows that my title is a truism, but sometimes it can be interesting to examine a case.

Recently, I told the story of a customer who was in a state because the power went out suddenly and they had been seriously inconvenienced as a result of lying to me about having finally got all their equipment protected by UPSes. Yesterday, they arranged for the electricians to come back to do properly what had been arranged so long ago. As instructed, they shut down everything while the electricians were on the premises.

Late this afternoon, they finally rang me to ask what I was doing about getting everything going again. I was waiting to be told the electrical work had been done. They said they had emailed me yesterday. I asked why they had not followed up, or at least checked that the email had left their systems. This upset them, so I let it go, beyond telling them that they knew how to do these things and that they also knew how to check that their ADSL modem was functioning—which it wasn’t.

To cut a very long story short, they said they were desperate to have all their machines running now that the UPS stuff was in hand. (I had told them that I’d only sanction one machine running until the work was done, because that one machine was indeed on its UPS.)

So I said that we now needed to get a lamp or something else that didn’t draw too much current and go around to check that the new outlets were really connected to the UPS. At this point the customer went ballistic, saying that they’d already spent three days with most of their computers unavailable and they just couldn’t spare the time to do this silly checking. Of course, the sole reason they had a problem was their earlier refusal to follow the agreed plans and get all the machines protected by the various UPSes that sit there in their building for that purpose.

I then said, “OK, if you really want to power up the machines without us first checking the electrical work, all you need to do is to sit down for a minute and write me a quick email stating that you are happy with the electrical work and that you take full responsibility for anything that may happen in future if it turns out that the work was not done right.”

“But that’s not fair,” protested the customer. So I explained that what wasn’t fair was the fact that they expected me to drop everything and jump through all kinds of hoops because they refused to follow my advice. I said that they could get the machines running in a couple of minutes if they sent me the email.

And then, surprise! They decided that I’d better step one of their staff through the process of testing the wiring after all. This time the job had been done right, so now all their machines are humming away and I can be comfortable that the next power outage won’t result in me having to spend hours solving problems that should never happen.

Of course, the owners of the business have gone home in a snit and they will be even more cranky tomorrow when they tell their staff to chase me up about other things that were put on hold while the crisis was fixed—as I told the staff who stayed behind to sort out the UPS testing, tomorrow is my wedding anniversary and I just won’t be willing to take their calls.


Fri, 21 Apr 2006

That went well

Got a call in the car this afternoon, with a customer in a panic and Friday peak hour traffic buzzing about me. Their building managers had announced they’d be “working on the power” over the weekend and gave them twenty minutes to prepare. According to the customer, they then immediately cut the power. (Based on what happened later, I think they probably got the twenty minutes and failed to act promptly.)

But not to worry, since they had finally got around to doing the necessary wiring and now all their equipment was protected by their UPS devices. So I commented on their good fortune. And then discovered that they’d lied. Perhaps the wiring was done but they hadn’t moved the machines over, or perhaps I’m going to find out next week that the wiring wasn’t done at all. In any case, the machines were not protected by the UPS boxes that are sitting there humming away in their offices.

Since it was Friday and since the building’s power will be on and off over the weekend, I told them to run around the offices immediately and yank the power cords from every bit of equipment and then call me back. I stressed the “immediate” bit several times.

When they called back, about fifteen minutes later, they asked if they should keep pulling the cords now that the power was back on. Clearly, “immediate” had a different meaning for them than it has for me. Eventually, as the power went out again, and the machines all went down again in the middle of starting up, they got around to pulling the power cords.

I told them we’d sort things out on Monday morning. Fortunately, they’re in the same time zone as me. But they have a shock coming. We’re going to get one machine, their main server, running and we’re going to prove that it’s connected to its UPS. And that’s all we’re going to do.

I’m going to explain that none of the others will go back into service until they are connected to a UPS. And I’ll make it clear that I will be able to verify this and that the deal is absolutely non-negotiable. We have stuffed about for years over this issue and I’m just not going to be the idiot who runs around in a panic because they are too lazy or too pig-headed or too stubborn about what they imagine are their own priorities to manage something so simple and so essential to their own well-being.

They are going to be angry with me on Monday. That’s OK with me. But this time I’ve been handed a lever to use to make them do the only sane thing and I’d be remiss if I failed to get something done.


Face Time and Free Stuff

Other software developers frequently ask me about the issues one faces in moving away from a job and into the scary world of running one’s own business.

Today I saw a nice post by Christopher Hawkins of Cogeian Systems entitled Face Time and Free Stuff. You’ll have to scroll down a bit to get to it, as he seems to have combined two posts in one permalink for some reason. I think he has a lot of good ideas there and I recommend it to all the people who have been asking me questions.


Fri, 24 Mar 2006

Respect must be earned

I offended a customer today by failing to show respect for some really appalling software.

The customer asked me to look at some code that they acquired from somebody cheaper than me. Unsurprisingly, it was not up to the standard they had come to expect after using my software for some years.

As is my policy, I reminded them that I don’t fix bugs in other people’s software—but I agreed to have a quick look at it. After a couple of minutes, I exclaimed, “This is a complete load of crap.”

The customer asked me not to be so judgemental about somebody I’d never met.

I reminded them that they asked me to look at the software because it failed to perform its intended function. I also reminded them that they probably asked me to look at the software because they trusted my understanding of software after years of perfect satisfaction with it. Then I explained that the software I was looking at was so bad that it must have been written by a family friend or relative for free or else it was written by a complete charlatan if they had actually paid any money for it.

There was no point in trying to explain to them why the software was broken because these people, like too many of my customers, take pride in remaining as ignorant as possible about everything to do with the computers that their business depends upon. But I did get my point across when I successfully predicted a couple of the faults that it exhibited after my quick glance at the source code.

Inevitably, this took us to: “Can you fix it for us, please?” I reminded them, once again, that fixing crap takes longer than writing good software from scratch and that I just don’t fix crap software. As so often happens, they don’t want to pay for quality. But they can’t live with what they want to pay for. Since they insist on posing in megabuck cars and flaunting their wealth in various other ways, I don’t find much sympathy for their concerns about expensive software.

I offered to write something that worked and that came with a guarantee for a fair price, but they have decided to find somebody else to fix it. Since that means I won’t have to waste time writing code for them, I thought I could spend the time ranting here.


Wed, 01 Mar 2006

Somebody please kill all the Perl weenies

If ever there was a day when it was going to become unambiguously clear to me that all the “Perl programmers” need to be taken out and shot, today was that day.

Sure, go ahead and use Perl if it’s the only hammer in your toolbox and you just need to whip up a quick personal script and can’t be bothered learning how to do things properly and haven’t got the time or money to get somebody else to do it properly. But please, all of you, just stop writing big programs in Perl and passing them off as serious software.

My specific beef today is with the complete barking idiots who are responsible for SpamAssassin. Not only do they introduce command line incompatibilities for no good reason, but they make it barf over no-longer-supported options in such a way that a message gets delivered containing only the contents of the SMTP MAIL FROM and RCPT TO commands. No headers, no body, and no sign that there’s a problem. How utterly stupid.

And then, when the command line is fixed, it barfs because the new version uses some alternate database storage for its Bayes stuff. And this barfage is even more brilliant. It spams the MTA’s logs with lengthy messages, but exits with a success code and this time outputs an entirely empty message.

Even a child could work out the appropriate failure modes for such a tool. But not those idiots.

After missing out on a chance to find out what was in the 350-odd messages that arrived while I upgraded my workstation, I can tell you that I was mightily pissed off. And it’s nice to see the Apache crowd have indeed brought something else into their fold that seems to match their approach to software. Be a shame if something good had wandered into that hole by mistake.


Sat, 04 Feb 2006

Seven Secrets of Successful Programmers

In a recent blog post, Lars Wirzenius took a swipe at a post entitled Seven Secrets of Successful Programmers by Duncan Merrion. Lars “found it to be simplistic enough to be inane” and I certainly agree with that assessment. And I mostly agree with his other criticisms of the original article.

But then Lars goes on to give his own list and that’s always going to lead to people wanting to disagree—something I’m about to do.

His fourth point is to develop debugging muscles. Lars suggests that this is difficult, since nobody teaches it and that does seem to be true, to some degree at least. But then he says “you’ll be spending most of your time doing it.” And that’s where we diverge. In my view, if you’re a “successful programmer”, then you’ll spend most of your time programming. It’s the unsuccessful programmers who spend most of their time debugging.

So, if you’re one of the people who knows your debugger better than your editor, you really need to learn more about the practice of programming—from books like The Practice of Programming by Kernighan and Pike, Bentley’s Programming Pearls, or even a classic like The Elements of Programming Style by Kernighan and Plauger.

That’s not intended to be an exhaustive list, of course. You need a more complete grounding than that and you need domain-specific knowledge as well. But, regardless of where you go to learn your craft, you need to develop your skills to a level where you hardly know how to drive the debugger because it’s so long since you last used it.


Sat, 31 Dec 2005

Technical writing is harder than people think

I’m always on the lookout for good technical writing—whether it be in books, magazines or online. I have several motivators for this:

  • my own continuing education;
  • source material I can use in teaching;
  • reference material to quote in my writing;
  • staying in touch with potential publishers for my writing.

It won’t surprise anybody that I find plenty of technical writing that fails to impress. I don’t write about everything I find that I don’t like—I’d rather be spending my time on things that interest and challenge me. And I have previously written about flawed technical papers and about the stupidity of treating C and C++ as interchangeable. But those items were written nearly 16 months ago, so it seems reasonable to revisit the themes now.

Right now, I’m particularly interested in looking at the technical book publishers—I have a couple of books planned [1] and I’m thinking about how to pitch them to the publishers. So I was pleased to see an article in Linux Journal entitled The Arnold Robbins Book Series: A Review. The review covers the recent Open Source Development Series, edited by Arnold Robbins and published by Prentice Hall. It incorporates an interview with Arnold Robbins and reviews of two books in the series. My interest is in the series as a whole and one of the books, Linux Programming by Example: The Fundamentals, also written by Arnold Robbins.

After skimming the interview and the review, I was sufficiently interested to have a look at the book. Although the Prentice Hall link above claims the book is available on Safari Books Online, it was not available via any of my Safari subscriptions. However, Prentice Hall do offer a sample chapter online. Since that chapter covers memory management, one of the three basics that almost all books covering C programming get wrong, I thought I’d have a look.

Memory management is not only one of the big three for technical errors, but it is also frequently associated with another of the big three—imagining that it’s possible to write usefully about both C and C++ in the same place. And so it is here. As so often happens, Robbins recommends unwarranted casts in his discussion of malloc(3). Oddly enough, he incorporates some sample code from Geoff Collyer that eschews the cast and comments unfavourably on that, rather than learning from it.

This is not the place for a detailed explanation of the reasons why a cast is useless and potentially harmful, but it is worth mentioning that in C—as distinct from C++, a quite different and arguably significantly inferior language—use of the cast operator is almost always a sign of programmer ignorance. It is rarely needed in correct code, and the circumstances where it is needed are clearly understood by competent practitioners.
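
By way of illustration only (this is my own sketch, not code from the book under review), the cast-free allocation idiom in C is simply:

    #include <stdlib.h>

    double *make_buffer(size_t count)
    {
        /* No cast: malloc returns void *, which converts implicitly to any
         * object pointer type in C.  In old C89 code a cast here could even
         * hide the error of a missing <stdlib.h> declaration of malloc. */
        double *p = malloc(count * sizeof *p);

        if (p == NULL)
            return NULL;    /* let the caller deal with the failure */
        return p;
    }

In C++ the same assignment will not compile without a cast, which is a large part of why writing about the two languages as if they were one goes wrong so quickly.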

Part of the problem is caused by people who dabble in both C and C++ and who have fallen for the strange propaganda from the C++ zealots that C++ is an “improved C”. It’s not. As a wild guess, I’m willing to bet that the reason C++ requires all those casts is another consequence of the fact that Stroustrup, despite purporting to write a successor to C, never managed to master that simple and elegant language. (Last time I speculated on something like that, the interested party wrote to me and put me straight; expect a correction here if that happens this time around.)

Of course, Arnold Robbins is not the only person who has made this mistake of thinking it’s a good idea to cast every pointer returned by malloc. The mistake was most famously made in the second edition of the C bible, known to all as K&R2. When I first pointed out the error to Dennis Ritchie, he defended the book; but, in the face of my persistent nagging, he re-thought it and eventually decided that the book was wrong. This is now documented in the Errata for The C Programming Language, Second Edition. (Search for the references to malloc on pages 142 and 167 for the details.)

Returning to the book, the other concerns I had were that the author spent a lot of time discussing things that he suggested (quite rightly, in general) you should not do, even giving some lengthy and detailed example code. This seems pointless to me. And he failed to explain the reasons for other things he recommended. For instance, he suggests that it’s always a good idea to zero newly allocated memory. (In fact, I don’t agree with the “always” part, but it is often true.) But he fails to explain either of the fundamental reasons why it’s sometimes a good idea. That seems unforgivable.

The final issue from the sample chapter is more difficult to be sure about, because it’s impossible for me to read this book with the same state of mind as its intended audience. But, from my reading, it lacked bite. I think I’d have wanted something that stated its goals clearly, tackled the useful issues head on, explained good practice and equally explained the reasons for the possible pitfalls. This chapter seemed weak in those areas—and the specific failings I described certainly leave me disinclined to recommend the book or the series (although I’d have to look at some of the other books before I came out strongly against the whole series).


[1] As a renowned procrastinator, I may well never get around to completing these books; but I like to prepare for things in case I do go on to finish them.


Fri, 04 Nov 2005

Shell horrors

I borrowed my title from Sarah’s story, but I’m not sure I share the angst.

Of course, since we don’t get to see the “22 lines of code including if’s and for’s”, it’s impossible to know if there is reason for concern beyond the obvious—what happens to the output file if we bail in the middle because of a bug or unexpected data?

As a general matter, wrapping up relatively lengthy script sections inside a loop or a sub-shell and collecting all the output into a single redirection or pipe is common shell programming practice. Whether it’s advisable depends on many factors:

  • Is this a quick and dirty script that will be run manually and where errors will be apparent to the user and easy for her to fix?
  • Is it something that gets run in the dead of night by cron that everybody expects to work perfectly (and relies upon)?
  • Does it have multiple paths to failure, or is it something that will always work as planned? (And can we be sure of that?)
  • Will it cause us grief if it doesn’t contain recovery code to handle an unexpected full disk or other similar show-stopper?

There are more questions, but that list gives the flavour of the ones that would need to be addressed.

My own shell horror from yesterday was a script, used to start a binary, that built a loop with:

    while /bin/true ; do
        # stuff
    done

What’s so bad about that? It’s bad because the installer didn’t bother to discover if there was a /bin/true, but just blindly assumed it. (I will say that the decision by the FreeBSD folk to move true to /usr/bin—which happened many years ago—seems to me to have been an error, but it happened.) And it’s bad because the “standard” idiom is so much better for several reasons:

    while : ; do
        # stuff
    done

This is faster and neater and it will always work—certainly on any sh that came out in the last 20 years (and that’s got to be good enough). (I’ve run out of time to research just when “:” first appeared, but it’s in at least one 1987 reference next to my desk.)


Tue, 04 Oct 2005

del.icio.us

Well, I’ve finally got with the program and signed up with del.icio.us.


Thu, 30 Jun 2005

Ruby is not the way

When I was a kid, I loved my mother’s engagement ring with its brilliant ruby surrounded by a cloud of tiny diamonds. Ever since, I’ve had a soft spot for rubies. And I was predisposed favourably towards the Ruby language when it appeared, in particular because at least one person I respected wrote some words of praise about it a few years ago. I thought that person was Henry Spencer, but my googling can’t find the article now, so either I misremembered or it was in a print medium that has escaped Google.

Recently, I’ve been playing with some software written in Ruby and found that it met my needs rather well—modulo some minor modifications I’d need to make to fit it into my way of working. And I’d also read quite a bit of praise for the language and its “Ruby on Rails” web framework which had made me consider it as a possible tool for some new development I wanted to do.

So, as is my way, I went looking for books about Ruby and found two on the web. I’ve now read one and about half of the other one. I’ll probably finish my reading, but I don’t see myself adopting Ruby for my own work and I don’t want to use software written in Ruby either. One of the great crimes of the Perl push was the idiotic mantra “there’s more than one way to do it.” And the Ruby bunnies seem to have taken this to extreme lengths—only to end up with a language that cannot be parsed reliably by human readers.

Regardless of practices that people may have adopted in the distant past, in today’s world the single most important thing in software development is simple, clear, obvious code—code that anybody with an appropriate background can just read and understand. Consistency is an important factor in readability, not just in the usual areas of white space and indentation, but in the overall syntax of the language. And a language that encourages variant syntax (or even allows it) is just a menace in terms of reliable code and practical maintenance.

I’m going to give just one example, but there are many similar cases in Ruby. The syntax for a function call is free (except where it might confuse the parser, meaning that the programmer has to know far too much about the internals of the parser for comfort). So you might have a call to the foo function in the common form:

    foo(bar, baz)

But you can drop the parentheses if you like and if (insert some complex and rather ridiculous set of rules here). I might have mis-described that slightly, and can’t be bothered checking the syntax now, but the bit about the optional parentheses is correct, as is the bit about bizarre rules for when you might need them. And, even if your call is right today, a modification elsewhere in the file might make the code wrong tomorrow. When you think how simple it would have been to declare a single, simple and unambiguous syntax for a function call, this kind of design just makes me weep.

Maybe I’ll have to learn to like Python more (or Scheme or Common Lisp or Erlang)…


Wed, 25 May 2005

Firefox still has issues

While gathering the facts for my previous post, I opened a new Firefox window with a few tabs to the articles of interest. Shortly afterwards, my gkrellm monitors started yelling at me about my CPU temperature rising. It had suddenly jumped to 42C (from the 29 to 30 degrees it’s running at now). A quick look at a top display also showed that we were running at about 97% CPU utilisation (where 5% would be normal). Those figures remained until I closed the extra Firefox window, at which point everything returned to normal. At least things did go back to normal and I was able to leave Firefox running, so this is better than earlier releases—although there is still plenty of room for improvement. In fairness, I’m not running the latest release today. This is 1.0.2; I do have 1.0.4 and would normally be using it, but this particular instance of 1.0.2 has been running since 11 April, so I just put it to use out of laziness.


Mon, 07 Mar 2005

Too many ways to do it

As a longstanding member, I generally read the Usenix magazine, ;login:, from cover to cover. Most articles are interesting and most are written by people with some reputation in the field. But every so often I am disappointed.

The February 2005 issue contains an article entitled Error Handling Patterns in Perl, which I thought might be interesting. Although I don’t use Perl myself (for reasons that I have frequently enumerated and won’t repeat here), I’m quite happy to read about good programming practices, regardless of the language chosen for exposition, because ideas usually translate to other languages as well.

If one were to write an article that was intended to show the superiority of one language over another, it would make sense to provide examples in both languages—unless one was Bjarne Stroustrup writing one of his silly things purporting to show the superiority of C++ over C. But, where the objective is to demonstrate good practice in one language, it would be wise to restrict oneself to examples in that language rather than illustrating one’s lack of knowledge of other languages.

In what follows, I’m going to assume the Perl is correct on the basis that the author claims to know Perl, but the correctness of the Perl has no impact on what I’m about to say.

To get things started, the author gives an example of what he thinks is the C idiom for opening a file (slightly cleaned up to avoid confusing the issue with extraneous material):

    FILE *fp;
    fp = fopen("/my/file", "r");
    if (fp == NULL)
        return -1;

Then he explains that you can do the same thing, using the same idiom, in Perl. The example code is obvious, so I won’t repeat it here. Then he explains why a simpler idiom might be useful and he gives the following Perl code as an illustration of a better way to do it:

    open(my $fh, "/my/file") or die "Cannot open '/my/file'\n";

Fair enough. But it’s just silly to try to make that look better than the C code when anybody who is capable of getting a job as a C programmer would have used the real C idiom below:

    if ((fp = fopen("/my/file", "r")) == NULL)
        err(1, "cannot open '/my/file'");

Clearly, that is identical to the Perl example. Why make a fuss about this? Had the point been “don’t do it the way we showed in the first Perl example,” that would have made some kind of sense. But the real point was obscured by the silly pretence of showing something that purported to be better than the C way, turning this into a choice-of-language issue rather than a matter of simple good practice independent of language.

Perhaps it doesn’t really matter all that much, but I find it difficult to take seriously somebody who takes the time to give example code in a language he doesn’t use without making the slightest effort to discover if what he’s saying is plain silly. And that makes me doubtful about the value of the rest of his paper—which might just have some good stuff in it.

As it happens, I did read the rest of the paper because I still had coffee in my cup. That’s when I came across some of the real reasons why I hate Perl (and found the title for this rant). There really are too many ways to do things, and some of them affect you even if you don’t specifically choose to use them, because they are built in to the language. Nobody is forced to use C’s ternary operator, and many people just don’t use it. Everybody who uses C understands the concepts of true and false as expressed in the language. But Perl has so many ways of “helping” you that it also has a million ways of expressing false, some of which work all the time and some of which don’t. How silly is that?
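For contrast, C’s rule fits in one sentence: a value that compares equal to zero is false, and everything else is true. A trivial sketch of my own shows the whole story:

    #include <stdio.h>

    int main(void)
    {
        int values[] = { 0, 1, -1, 42 };
        int i;

        /* In a boolean context, only zero is false. */
        for (i = 0; i < 4; i++)
            printf("%3d is %s\n", values[i],
                   values[i] ? "true" : "false");
        return 0;
    }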


Mon, 03 Jan 2005

Testing software

Some Humbug bloggers have recently discussed software testing and made some interesting points. I’m keen to play devil’s advocate on some of these issues and will do so, but perhaps not for another week—by then, my study’s grand re-organisation should be finished and it might be possible for me to find a few minutes to put together some sort of case.


Wed, 29 Dec 2004

Plone plonk

Somebody recommended Plone, a “user friendly and powerful open source Content Management System” according to its web site. (Plone is built on top of Zope and Python.) From there, I had a look at The Definitive Guide to Plone, which is available on-line as well as in print.

So this is a book about using some software to create web sites. You might expect some competence in its own web presence. You might be disappointed, as I was. Stuff that smacked me in the face in the first few seconds included URLs without links; forced use of fonts that look ugly on my screen rather than leaving it to my common sense to select suitable fonts; and pages wider than my screen, so that I had to scroll horizontally as well as vertically to read them. Well, thanks for the effort, but this is such a poor advertisement for the content that I’ll skip trying to read it and I’ll probably never look at Plone again.


Wed, 08 Dec 2004

Gmail fails to excite

I accepted a Gmail invite for several reasons:

  • curiosity
  • a spare email address for times when I didn’t want to use a “real” one
  • a way of subscribing to mailing lists and being able to use Google search to find interesting stuff

I had never really thought I’d use it for private or important email—that’s stuff I’d rather keep on my own servers, both for privacy and reliability reasons. But it had seemed to me that Gmail might be useful.

Given Google’s reputation, I had expected Gmail to be pretty well thought out. That expectation turned out to be wrong. It has an astonishingly poor user interface, bringing new levels of clumsiness to the web. Once I had allowed my test mailing lists to generate a few hundred messages, I was able to confirm that what looked like a poor design was in fact really dreadful. And it has very poor customisation capabilities, which render it pretty well useless to me for many purposes.

I’ll keep the account, for now, but I won’t be using it for anything unless they adopt all of the suggestions I have put to them—and a few that I haven’t bothered to ask for.


Tue, 23 Nov 2004

Progress with SpamAssassin

I’ve been using SpamAssassin for two years now, but only switched to a version with Bayesian code recently—mainly because installing the required version of Perl was more trouble than it was worth until the mail machine was upgraded. The results with SpamAssassin 2.63 have been pleasantly surprising and have convinced me of the value of the Bayesian approach.

Using versions 2.20 and 2.44 (with dozens of custom rules), things had deteriorated to the point where it was detecting less than 50% of the spam addressed to my inbox. In the first week with 2.63, that improved to 88%; after two weeks, it was 93%; and now, after seven weeks, it’s over 96%. So, instead of getting 150 or more spam messages in my inbox each day, it’s now down to 12 or 13. That’s still more than I’d like, but it’s low enough that email is once again useful to me. And it’s done without any custom rules.

There is still a downside with the way I’m handling it. Because I’m using qmail, I can’t make it refuse the email during the SMTP transaction; and I certainly can’t set it up to bounce the stuff I want to reject because of the prevalence of forged sender addresses in spam. This means I have to take a few minutes each day to manually scan the sender and subject of all the suspect messages to make sure that there aren’t any false positives lurking in there.

However, I plan to put that behind me fairly soon. I’m going to switch from SpamAssassin to Dspam (to get the performance of a C library instead of Perl code) and I’m going to patch qmail to reject mail at the SMTP transaction when the spam assessment is positive or when the recipient address is either a spam trap or a non-existent address. The SMTP failure notice will point people at a web page that explains the particular reason for the failure and, for legitimate senders, gives instructions for sneaking past the filters. Of course, people with some Microsoft-based systems won’t ever see the failure reasons but will instead be confused by ridiculous made-up reasons inserted by utterly broken software. I suppose I’ll just have to put an easy-to-find page up that explains that, but I’m not all that fussed if Microsoft-using people can’t email me. My customers are not allowed to use Microsoft software and they know how to contact me; my family can always ring me; people who use my free software know how to contact me; if I lose a tiny amount of email, I think I can live with that.


Fri, 12 Nov 2004

Software packaging

In general, I like the FreeBSD ports system (which has equivalents in the other BSDs). If you have the CDs for a release, it’s as simple as typing “pkg_add emacs-21*” and it’s there. If you don’t have the package, it’s still simple (albeit slower). You just “cd /usr/ports/category/program; make all install”. The system fetches the distribution archive, adds any FreeBSD-specific patches, configures, builds and installs the software. This is all nice to use.

There are, however, circumstances where it doesn’t work so well. If you use databases such as PostgreSQL or MySQL, you can’t just build and install a new version—first you have to “dump” your current data using the old version of the software; then you build and install the new version; finally you “load” your data and test for breakage. And, if your own code was written in a language like Python and used your own C extension to talk to the SQL database, then you’ll have to rebuild that so that it links against the new database code.

This becomes important if you decide to use the marvellous portupgrade program to update all your ports by magic: it will cause you real grief if you took a short cut and used a port for one of these critical and fragile components of your system. So the ports system is good, but it’s not perfect and probably never will be, as there are just too many possibilities out there for the ports maintainers to cover them all.

There are other problems that are harder to understand. Lately, I’ve been evaluating FreeBSD-5.x, as it has a number of features that I really want to use. The recent 5.3 release seemed to be the first that might be ready for production use, so I was hoping to manage a simple install for testing. One thing I want to deploy is a web server that can do more than static pages without requiring me to become an expert in apache, which I regard as unsuitable software to expose to the outside world. My intention was to try yaws. For that, I needed first to install Erlang.

There’s no erlang package on my CD, so I go to /usr/ports/lang/erlang and type “make”, only to be rewarded with: “erlang-r9c2_1,1 is marked as broken: Does not compile on FreeBSD >= 5.x”. It seems strange, but I decide to try it by hand. Some time later, I have the source (for a later version of Erlang, but I might as well have the latest), it’s configured and built and passes my tests. So then I build and install yaws and it handles my existing web site fine, as well as doing SSL and running yaws code (in much the same way that mod_php handles PHP code in apache). Today I will test it with CGI and PHP code and then I’ll make a decision. But if I’d relied on the port maintainer’s verdict, I’d have rejected FreeBSD-5.3 for a task that it will probably handle with ease.


Another update on the Apple NTP issue

Since writing about this previously, I’ve discovered that—even though I restarted ntpd with the same magic as before—it has so far failed to synchronise with the NTP server. Clearly, something is rotten here. I’m going to reboot the laptop shortly, to accommodate some security upgrades, so it will be interesting to see what happens when it wakes up.

Update: Several reboots later—for software upgrades—and nothing has changed. Apple’s NTP is not working. I’ve since seen the announcement by the OpenBSD people for their OpenNTPD implementation (released on 2 November). It’s a lightweight utility, lacking some of the support tools (e.g., ntpdate and ntptrace) and offering slightly less accuracy than the huge implementation usually used. Their goals are to provide an implementation that is secure, lean, easy to configure, reliable, and sufficiently accurate for most purposes. So my next step will be to set it up on the laptop and see how it goes. Watch this space for further info.


Thu, 21 Oct 2004

Complexity makes mutt an ugly dog

I’ve been using mutt as my MUA for quite some time and I like it. Originally I used it with my existing MH mail folders, as I had a vast store of mail that I wanted to preserve. One of the reasons for adopting mutt was its inbuilt support for MH folders.

However, I have also been considering switching to Dan Bernstein’s maildir mail storage system to allow safe storage on NFS. Built-in support for maildir was another reason for trying mutt.

Recently, as part of an abortive attempt at using IMAP, I wrote a small utility to convert my MH folders to maildir and I have been working with mutt and maildir for the past couple of weeks. But I have become irritated with some of maildir’s deficiencies and this entry is inspired by that.

As is common with Bernstein’s great ideas, the mechanics of his solution and the solidity of its implementation are hard to fault—but there are gaping holes in the aspects of it that affect the individual user. My principal complaint is the way he has reserved names that I want to use. Each folder is required to have three sub-directories, named “tmp”, “new” and “cur”. The directories really are needed, but it’s idiotic to steal from the name space that users might want. At the very least, they should be called “.tmp”, “.new” and “.cur”, since the maildir format explicitly suggests that client programs ignore names beginning with a dot.
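To make the layout concrete, here is a minimal sketch of my own (plain POSIX mkdir(2), nothing more) of what creating one of these folders involves; the dot-prefixed variant suggested above would differ only in the three names:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Create a maildir-style folder: the folder itself plus the three
     * sub-directories that the format reserves. */
    static void make_maildir(const char *folder)
    {
        const char *subs[] = { "tmp", "new", "cur" };
        char path[1024];
        int i;

        if (mkdir(folder, 0700) == -1) {
            perror(folder);
            exit(1);
        }
        for (i = 0; i < 3; i++) {
            snprintf(path, sizeof path, "%s/%s", folder, subs[i]);
            if (mkdir(path, 0700) == -1) {
                perror(path);
                exit(1);
            }
        }
    }

    int main(void)
    {
        make_maildir("testfolder");
        return 0;
    }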

Since I’m going to have to write a working IMAP server from scratch, I thought I might create a gbmaildir format and use that. As a first step, I then thought I might as well make the small change to mutt so that it could work with this updated maildir scheme.

Usually, when I’m about to make a change like this to some existing software, I browse the source code—to get a feel for how it was written and to reach some conclusions about the ease of accomplishing my goals. Since I use mutt every day, and since I entrust it with many tens of thousands of stored messages, I don’t want to introduce any instability into it. And since mutt, like all MUAs that I’ve ever heard about, is full of security bugs, I want my patches to be simple and clean so that I can reapply them easily each time the mutt maintainers issue a new bunch of bug fixes.

It turns out that mutt is huge. There are almost 75,000 lines of C code—enough to fill over 1,000 A4 pages in 9 point Courier. To put this into context, the entire qmail suite, including daemons, ancillary programs, etc., is only 16,644 lines—and that includes Bernstein’s replacements for most of the standard libraries and my various patches to qmail. That is far too much code to just sit down and read. And, once I started looking for the bits that handle maildirs, I discovered that mutt is also a total mess.

Imagine that you’re writing a MUA and you know, as you must if you’re awake, that you’ll have to handle multiple mail storage formats, both local and remote. If you’ve got half a clue, you’ll have an abstraction layer between your code that presents messages to the user and the storage component. Then, when new storage schemes appear, you just add some simple code to implement them. With such a design, I could have made my changes in a few minutes and been quite confident that I hadn’t broken anything. But it’s not like that at all. And it’s such a mess that I won’t bother modifying it. Instead, I’ll write my IMAP server and test mutt against it. Let’s hope that it can make a decent fist of that. Yet another piece of open source software that should never have seen the light of day.
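For what it’s worth, here is the kind of abstraction layer I mean. This is a sketch of my own design, not anything taken from mutt: each storage back-end fills in a small table of function pointers, the user-interface code is written entirely against that table, and supporting a new format means writing one new table and nothing else.

    #include <stdio.h>

    /* Opaque handle for an open mail folder; each back-end defines
     * its own representation behind the scenes. */
    struct folder;

    /* Everything the user-interface code is allowed to ask of a
     * storage back-end.  mbox, MH, maildir, IMAP or my proposed
     * gbmaildir would each supply one of these tables. */
    struct mail_store {
        const char *name;
        struct folder *(*open)(const char *path);
        int (*count)(struct folder *f);
        int (*fetch)(struct folder *f, int msgnum, FILE *out);
        int (*close)(struct folder *f);
    };

    /* UI code like this never needs to know which format it is
     * talking to. */
    int show_message(const struct mail_store *store,
                     const char *path, int msgnum)
    {
        struct folder *f = store->open(path);

        if (f == NULL)
            return -1;
        if (store->fetch(f, msgnum, stdout) == -1) {
            store->close(f);
            return -1;
        }
        return store->close(f);
    }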


Fri, 15 Oct 2004

Linux dump(8)

Jason has written about some problems with dump(8) under Linux:

I am just now discovering why dump isn’t up to the task: It lacks a simple way to write a backup image to multiple destinations at once. […] My current workaround is something like this:

      dump -0aq -f - | tee /dev/ntape | bzip2 > file

Unfortunately this has the side effect of reducing the speed of the dump to about one quarter its usual speed; given that in general one wants to keep the tape moving to reduce wear, this is decidedly suboptimal.

While I can see that the proposed solution is far from ideal, I’m surprised that Jason has not, apparently, tried the canonical solution to the tape writing problem—dd(1). I would be trying something like the following:

      mkfifo /tmp/fifo
      dump -0a -f - / 2>/tmp/dump.log | 
          tee /tmp/fifo | nice -20 bzip2 >/tmp/dump.bz2 &
      dd if=/tmp/fifo of=/dev/ntape bs=8m

In the above, you’d substitute suitable paths and you’d experiment with the blocksize value for dd until you had your tape streaming.

It’s also quite possible that Jason’s particular hardware just cannot manage both the bzip2 work and the tape writing fast enough to stream the tape, no matter how clever we are with tricks such as the above. There is another solution, of course. Since any sane backup strategy involves reading back the tape to make sure that it’s usable, he could just stream to the tape with dd(1) and—while reading the tape back for his verification run—he could save the data to a file, test it, and only then compress it. This won’t work if the disk storage is inadequate, but that is at least easy and cheap to fix.


Fri, 08 Oct 2004

IMAP is such a crock

I’ve got this far in my life without needing IMAP. Today, I wasted most of the day discovering that it’s an utter crock and is not capable of performing the modest task I had in mind for it. I can’t believe that somebody actually designed this junk, especially after they got it completely wrong once before with POP. Now, when I get back from my trip north—for which I’m ill-prepared after losing so much time with bloody IMAP—I’m going to have to sit down and design and implement something that actually works. I had other stuff that I wanted to write about, but it will have to wait; I’m beat.


Tue, 28 Sep 2004

Version control systems (update)

The last time I wrote about this, I mainly discussed CVS, arch and Subversion. I noted issues with CVS that had been the principal factor in getting people interested in developing alternatives, and stated my reluctance to use CVS. I noted two issues with arch that bothered me—file naming and overall complexity. Those issues with arch seem unlikely to go away, and recent comments by Tom Lord make me reluctant to get on that train. I do agree that some of the issues that some people have noted with Subversion have some substance, although most of them don’t seem likely to affect me much. My original, and continuing, concern with Subversion is its reliance on the Berkeley DB software for repository storage.

Since then, I’ve also had a look at Darcs and Monotone. I have not installed either of them, but have read all their documentation. I see that Martin has collected his various blog items on version control here, and I have found his thoughts very helpful. Unfortunately, although he admits to having tried Monotone, I could not discover what he thought of it. I’m almost tempted by Monotone, but I’d want to hear from people I trust before spending time on it. I’m not at all convinced by Martin’s claims of simplicity for Darcs. Perhaps it is simple, for the simplest cases; but I found myself having to think hard about lots of things when I was considering trying it. I’m going to leave it aside for now.

And, while researching this, I just discovered that the 1.1 release of Subversion provides an alternative to the BDB storage that was bothering me so much. I think I’ll put this on hold for a little while and try out the newest Subversion and then make a decision. For now, I’m leaning more towards Subversion than any of the others. It seems to do what I want; it’s actively developed; it’s in use by big projects so it’s likely to get tested and developed further; and it no longer requires BDB.


Mon, 27 Sep 2004

Apple does weird things with NTP

I’ve whined about the MacOSX weirdness with NTP previously, but today I spotted a new piece of utter stupidity from their setup. To recap, the original thing that got up my nose was the fact that, although you can indeed enter the name of your local NTP server in the dialogue box to override one of the far distant Apple servers, the UI doesn’t tell you that you have to reboot before it will take effect. Of course, you don’t have to reboot, but unless you know how to go and find and kill and restart the ntpd that’s running, the change won’t happen until you reboot. The bad thing is that, although they must know all this, they don’t tell you.

Anyway, I now have it using the local NTP server, known by the name “ntp”. Today, I noticed that they haven’t done anything to cope with the fact that ntpd, as delivered, doesn’t cope well with an environment where it spends days in close contact with a server and then spends some time alone—as happens with laptops. Yesterday, I noticed that the clock was about 15 seconds out, as a result of the hours I spent at Humbug on Saturday. This morning, it had drifted to 27 seconds and it was clear that I’d have to fix it manually. This rather defeats the otherwise simple business of moving from one network to another, but I’ll pass over that for now.

So I use ps and kill to stop the ntpd process. Then I run “ntpdate ntp” to set the clock. Short pause, then “Segmentation fault”. Nice work, Apple. On all the FreeBSD boxes on this network, that exact incantation works. OK, maybe we can try “ntpdate 172.16.11.99”. Great, that at least works and I can restart the daemon. But what earthly excuse is there to segfault without explanation on a perfectly sane invocation of an old program?


Sat, 18 Sep 2004

Flawed technical papers

In the early 1980s, I read a lot of doctoral theses—some of them because they were written by friends who wanted to know what I thought; some because I thought they might be interesting; and some for other reasons that we can gloss over for now. Mostly, they were pretty boring; some, written by friends in fields that I knew next to nothing about, were fairly incomprehensible; and a surprising number were just plain bad. But then life was kind to me and I have only read the rare thesis, out of interest in its subject matter, over the past twenty years.

Recent investigations into programming languages, however, have prompted me to read some more. I find that I get quite irritated when I come across silly mistakes or, worse still, plain wrong material in a thesis. After all, theses are not rushed out; they are the result of a great deal of work and considerable review by others before being published. Here’s an example of a silly mistake that should have been fixed early on. It’s taken from Making reliable systems in the presence of software errors:

Strings […] are written as doubly quoted lists of characters, this is syntactic sugar for a list of the integer ASCII codes for the characters in the string, thus for example, the string "cat" is shorthand for [97,99,116].

Leaving aside the poor punctuation, the thing that just slaps you in the face is that the integers listed there cannot possibly be right; you don’t have to have the ASCII code in your head for this to be obvious. So why didn’t somebody tell him to fix this? It seems ironic that a dissertation with such a title, especially when marked “final version (with corrections)”, should have such obvious errors. There are others, but I won’t belabour the point here, as I have another example to discuss.
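For the record, a few lines of C of my own are enough to produce the right list, which is [99,97,116]; the list quoted in the thesis actually spells “act”:

    #include <stdio.h>

    int main(void)
    {
        const char *s = "cat";

        /* Print the ASCII code of each character in the string;
         * the output is 99, 97, 116. */
        for (; *s != '\0'; s++)
            printf("%d\n", *s);
        return 0;
    }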

To be fair, this next target is not a doctoral thesis, but an honours paper—but it’s available online and is recommended from time to time by people who are normally discerning. The paper is In Search of the Ideal Programming Language. I first came across it following a recommendation from a contributor to a site that specialises in programming languages.

Early in the paper, the author provides some code examples to support his contention that C and Pascal are superior to Java in terms of simple expressiveness. The code implements the famous “Hello, world!” example. I don’t know Java or Pascal well enough to comment authoritatively, but those examples look similar to stuff I remember. I do know C, though, and his example code for this tiny program is plain wrong. Considering that the correct version is well known, that’s just inexcusable. Ironically, when the C is corrected, it’s not at all clear that it’s significantly superior to the Java version. In any case, being wrong, it does not make much of an argument.
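For reference, the well-known correct version is this:

    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world!\n");
        return 0;
    }

It is hard to see that as a crushing argument for or against anything.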

Later, we see that this early praise of C was not serious. He goes to great lengths to criticise elements of C that are so utterly trivial that they don’t deserve discussion; most often, what he demonstrates is merely that he does not understand C. There is a lengthy section that purports to show that C’s lack of a string type is an impediment to somebody writing, e.g., a word processor—when anybody writing such a thing in C would simply use a string library that provided whatever facilities seemed useful. Considering that our author specifies easy integration with extension libraries as an essential feature of any useful language, it’s odd that he doesn’t seem to know when one might be used.

Towards the end of the paper, he talks about portability and gives examples of code that might give different results if compiled with different compilers. Leaving aside for now the question of whether his specific examples are “undefined” or “implementation-defined”, the real point is that only mad people would write code in this way—if a human reading the code can’t possibly guess what the programmer had in mind, then it is simply bad code and discussion of what a compiler might do with it is irrelevant.
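I don’t have his examples to hand, but the flavour is something like this illustration of my own (strictly, the order of evaluation here is unspecified rather than undefined, which only underlines how fine these distinctions are): the language does not say which of the two calls below happens first, so r may legitimately be 3 with one compiler and 1 with another. Nobody who wants to be understood writes code whose meaning hangs on that.

    #include <stdio.h>

    static int n = 0;

    static int next(void)  { return ++n; }
    static int twice(void) { return n * 2; }

    int main(void)
    {
        /* The order of evaluation of the two operands is unspecified:
         * r is 3 if next() runs first, 1 if twice() runs first. */
        int r = next() + twice();

        printf("%d\n", r);
        return 0;
    }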

The important concept that seems to have been completely missed in this long paper is that programming is a discipline, and programmers have to learn how to approach it in such a way that their code is understandable by humans—regardless of the specific programming language they may choose to adopt. While it’s certain that some languages are better suited to some kinds of software than others, it’s also true that pretty much anything can be written in any general purpose language—which is not to say that you can write an operating system in awk, or any real program in Pascal.

At the end, the author asserts that no current language offers the facilities that a programming language should offer, although he stops short of specifying what his ideal language would look like. This is an odd conclusion, for two reasons. On the one hand, he seems to ignore the fact that a great deal of software has been developed successfully with the tools we have (admittedly with some classic disasters along the way); and on the other he completely fails to discuss many serious languages that have been developed to address some of his concerns. Some of those languages might be a bit new for his 1997 paper, but most of the modern languages were known before Java hit the headlines; and some important languages which have been around forever, such as the Lisp family, do not even get mentioned in passing. This is a very strange paper, in my opinion, and it’s hard to understand why people would point to it as a starting point for an investigation into programming languages.

As a final note, I did not read either of these papers in full. The Erlang paper is something that I’ll come back to and read fully, because the content seems useful and reasonably well written, even if the silly mistakes detract from it. I’m not likely to return to the ideal language paper because it seems to have insufficient merit on any level to be worth the time. So I may have missed some details, but probably not enough to require me to revise my thoughts here.


Mon, 06 Sep 2004

Why do SourceForge think C and C++ are the same thing?

I was reading somebody’s profile on SourceForge tonight and saw that he was listed as a wizard in a skill described as “C/C++”. What’s wrong with these people? Don’t they know that C and C++ are different things? I am certainly a wizard at C. As for C++, I’m ignorant—although the lowest category they have is “want to learn”, which certainly does not describe me. I feel really uncomfortable even having a developer account under an umbrella that is so opaque.

And why won’t Blosxom allow legal Unix file names to be used?


Sun, 05 Sep 2004

Sender ID hits another roadblock

It’s good to see that Debian have announced that they are unable to deploy Sender ID under the current Microsoft Royalty-Free Sender ID Patent License Agreement terms. The Debian announcement is based on an earlier announcement by the Apache Software Foundation. With a bit of luck, this will mean that Sender ID (and SPF) will bite the dust before the whole useless scheme achieves any critical mass.


Tue, 24 Aug 2004

Erlang, Haskell, Scheme

It has been my intention to return to the Lisp family for some time and one of my big plans for this year was to get somewhere with that. I even bought a couple of books (which I’ll discuss later when I’ve read more than I’ve so far had time for). And I’ve installed implementations of Common Lisp and Scheme on my workstation so that I can play.

But, as always, other things come along to distract me. Andrae has also been a bad influence on me at Humbug meetings—his readiness to talk about other interesting languages has recently had me reading books on Haskell and Erlang as well as my planned reading.

Haskell looks like an interesting language, and if I was a twenty-something student I’d want to spend some time on it. But there’s not time for everything and, since Haskell fails my beauty test badly, I’ve decided to drop it—at least for now.

Erlang is still in there with a chance. I’m finding its syntax hard to swallow, although I can see that it might improve with exposure. The jury is still out on that. What it has going for it is plenty of easy-to-find documentation and freely-available implementations of some interesting software, in particular a web server that offers Erlang as its extension language and supports the basic stuff like SSL and PHP.

It will be a while before I decide which of these languages I’ll use for my next large project, but I expect that one of them will find itself in my toolbox in the next few months—and it will have to be a language that can step into the breach caused by my decision not to develop any more big applications in Python and tkinter.

Just in case somebody cares, the beauty test is the way I evaluate the “look” of a language. To take more well-known languages, Perl fails; Python passes. The idea is that programming languages exist for the humans who have to write and (more importantly) read them. A language that is easy to read is a good language. A hard-to-read language is, other things being equal, less good.

Beauty is not my sole criterion, of course. Other things matter a lot, in particular expressiveness. If you can express a program in three lines of code that’s easy to read, then that’s usually going to be better than 300 lines or 3,000 lines because it will be easier for you to get it right and easier for others to see that you’ve got it right. This means that high-level languages such as Perl or Python or Scheme will mostly be “better” than C—except for those times when C is the best or only choice. Fortunately, it’s easy to write beautiful C.


Tue, 10 Aug 2004

Scripting versus C

In a recent article, Adrian asks why people like scripting languages. While exploring this, he says:

I don’t see why they should be considered the be all and end all solution that people seem to think they are.
And later on:
The most common reason I hear people giving for why they like scripting languages is because they “just flow better”.

I’m not sure that I agree with either of these remarks. My experience with so-called scripting languages is that people use them when they are an appropriate tool for the task at hand. People in the Unix world have been writing scripts since long before Perl and other modern scripting languages were invented. Those scripts made sense then; they still do. But this doesn’t make scripting languages the be all and end all. And scripts are quicker to write than programs written in C or Fortran (and arguably perform rather different tasks).

These are quibbles, of course, and I agree with most of what Adrian says—at least if we exclude this:

I wouldn’t consider C an option unless performance was absolutely critical for server systems however because it leaves open the risk of buffer overruns and similar security holes that can be completely eliminated automatically by most other languages.

This is one of those popular “reasons” for eschewing C, but it’s only advanced by people who haven’t learned how to write C.

Already, there are probably plenty of people saying that they do know C and that I’m being arrogant or that I’m plain wrong.

Let’s consider this a bit. If somebody drops a copy of The Elements of Programming Style in front of me, open at one of the Fortran examples, and asks me to explain it, I’ll be quite able to provide the explanation and do it well enough that it appears that I know Fortran. And, to some extent, I do—it was the first “high level” language that I learned. But nobody in their right mind would hire me to write or maintain any serious program in Fortran. I’m just not up to it. Apart from reading examples in my favourite books, I haven’t done anything with Fortran since 1965.

Lots of people who claim to know C have skills on a par with my Fortran (or my Algol, to take another language that I once worked with). Those people do tend to write C with buffer overflows. After all, one of C’s strengths is its small size—anybody can learn all the C language in a few days; and can then start writing bad C programs immediately. And, because there’s a ton of bad sample code lying around to copy from, it’s a matter of moments to create a nice big broken C program.

But real C programmers don’t work that way. They have developed their own tool box of useful code over the years and they build their software from scratch, using their tools to make that process simpler. One of the utilities that every real C programmer has developed is a library that completely eliminates any possibility of buffer overruns. These are easy to write—after all, the scripting languages are written in C and have these things—and they are a good exercise for developing one’s skills with the language.
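As a sketch of the sort of thing I mean (illustrative only, not a copy of my library or anybody else’s), a counted-string type with a bounds-checked append takes a few dozen lines of C:

    #include <stdlib.h>
    #include <string.h>

    /* A counted string: the length and the allocated size travel with
     * the data, so no routine ever has to guess how much room it has. */
    struct gbstr {
        char  *data;
        size_t len;    /* bytes in use, excluding the terminator */
        size_t cap;    /* bytes allocated */
    };

    #define GBSTR_INIT { NULL, 0, 0 }

    /* Append n bytes from src, growing the buffer as required.
     * Returns 0 on success, -1 if memory runs out; the result is
     * always NUL-terminated and an overrun simply cannot happen. */
    int gbstr_append(struct gbstr *s, const char *src, size_t n)
    {
        if (s->len + n + 1 > s->cap) {
            size_t newcap = (s->len + n + 1) * 2;
            char *p = realloc(s->data, newcap);

            if (p == NULL)
                return -1;
            s->data = p;
            s->cap = newcap;
        }
        memcpy(s->data + s->len, src, n);
        s->len += n;
        s->data[s->len] = '\0';
        return 0;
    }

    /* Typical use:
     *     struct gbstr s = GBSTR_INIT;
     *     gbstr_append(&s, source, strlen(source));
     */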

The fact that the roads are filled with people who have never learned properly how to drive and who go about killing themselves off in great numbers is no proof that cars are especially dangerous. Similarly, the fact that there are lots of examples of C code with buffer overruns is no proof that C is at fault. Both problems can be largely solved by suitable training. The obstacles to that are pretty similar, too.

And, while it’s true that many scripting languages will protect programmers from buffer overruns, it’s not true that they can turn bad programmers into good ones. Programming is hard. Software is almost infinitely malleable, but many of the shapes that it can adopt are not desirable. Bad programmers can write buggy code in any language.

The real strength of scripting languages is found in those languages that provide useful abstractions that allow the programmer to work towards the solution to a problem by writing fewer lines of code. The computer doesn’t give a toss what language we use, or how elegant our code is; it will digest it all. We write software in some chosen language because we need to be able to understand where we’re going with it; and because we want other people to be able to come along later and understand it. If a scripting language helps us do that better, it’s a good tool to use. If we later find that we have performance bottlenecks, we can use our profilers to measure things and then we can turn to more efficient tools like C to help us work around the problems. But we have to find people who really know C to do that.

Whatever language(s) we choose to use, we need to take the time to study the language, to develop real software with it, and to apply the many lessons of software design to the work that we do. The lessons come from good books—still regrettably much fewer in number than bad books—and from the study of good code and from mentors and colleagues; and many of the best lessons come from experience. No language will ever free us from the need to study and to work hard.


Wed, 04 Aug 2004

$170 per line plus a company

To follow up on my earlier comments on this story, Adrian Sutton has directed me to a much better version of the story by Ken Coar, in which it appears that the $US85M was the price paid for the company which originally developed the software. Just goes to show that news organisations can’t ever be trusted to get a story, even a simple one, right. It’s still an utterly absurd way to ascribe value to software.


$170 per line

I’ve now seen several news stories that claim that IBM has announced that it is going to donate half a million lines of its source code to an open source software group. That’s neither here nor there. The bit that startled me in the stories was the claim that IBM valued this code at $US85M—which works out to $US170 per line. That’s incredibly expensive, even for IBM.

On one hand, it makes me wonder what my customers would have said if I’d costed my code like that—a recent project that I did for $AU50k should have cost $AU2.6M at that rate. Might have been fun for me, but the client could not have paid—and would never have been stupid enough to contemplate paying that much. And it’s inconceivable that IBM corporate code is that much more valuable than the stuff I write. I spent enough time as a consultant to IBM to know about their code quality. It could not possibly be valued in the way this has been reported. Perhaps the news stories are wrong. Maybe it’s all part of the anti-SCO program. But it’s crazy.


Sun, 01 Aug 2004

The great hackers furore

I’ve already made mention of Paul Graham’s recent paper, Great Hackers. Since then, I’ve discovered that everybody on earth seems to have an opinion on it and can’t wait to share it. I might as well join in.

I’m sure that there are people out there who think Paul Graham was right, or largely right, in this paper; but the articles that I’m seeing seem to be exclusively on the theme: “Paul Graham is wrong” or the alternate theme: “Paul Graham is an idiot.”

It’s easy to dispose of the “idiot” proposal. Paul Graham is not an idiot, at least not as an overall judgement. But he’s certainly wrong, in most of his papers, and more than usual in this one. The curious thing is that the critics all seem to see different devils in the great hackers paper. One could easily conclude, especially if one were only to read the critics rather than the paper, that the critics were all feeling seriously on the defensive because Paul Graham managed to press some of their sensitive buttons. I’m pretty certain that this is so to some degree.

The worst thing about the paper—and this applies to all his papers in the form they appear on the web—is that he really is an idiot when it comes to publishing web pages. Every paper is published in an extremely narrow column that just makes it a chore to read. That’s plain silly. And it’s inexcusable in somebody who is intelligent and who has been around the web a bit. The printed form, as found in his recent book Hackers & Painters, is quite attractive (largely thanks to LaTeX).

But back to great hackers. It’s easy to find stuff to disagree with in this paper; and it’s easy enough to find stuff that you can feel comfortable describing as just plain wrong—but I think many of the critics who are doing this might have missed the point. This paper is not a scholarly treatise, such as he would have submitted for his PhD. It was a keynote speech at a hackers’ conference. It was meant to be at least mildly controversial; it was meant to make people argue the toss about its intentions. It seems to me that it succeeded admirably in that.

Of course, since I said it was easy to find fault with it, I suppose I should do that too. Let’s take one quote that rankled with many people:

The programmers you’ll be able to hire to work on a Java project won’t be as smart as the ones you could get to work on a project written in Python.

That’s obviously just silly. Anybody can learn Python in an hour or two, assuming it’s not their first programming language. So it will be easy to get Python programmers. But there’s no guarantee that they’ll be very smart. On the other hand, since nobody can learn Java, anybody who has got their head around enough of Java to look like a Java programmer just has to be fairly smart. Naturally, the smartest programmers in the set who learned Java will have been smart enough to move on—and their skill levels will have made the moving on simple enough to do. What will the smartest programmers be using? They will be skilled in C; they’ll be strong in Python and/or Perl; they’ll have some interesting and powerful languages at their command, such as Lisp, Scheme, Erlang, and a bunch of others that will be obvious enough.

Here’s another one that provoked quite a lot of resentment:

The mere prospect of being interrupted is enough to prevent hackers from working on hard problems.

This claim seems to be more a matter of personal preference. I know that some people seem to work well in the midst of chaos. I know that I need quiet time when I’m doing some serious programming. When I had to work in noisy offices so that I could fulfil my role of expert on call, I made it my business to find some way to give myself quiet time to do the programming part of my job. Sometimes that meant doing it at home; sometimes, starting very early in the morning or working late at night; and sometimes it involved weekend work. But it was always possible to arrange things so that I could manage the high concentration I needed for programming some of the time, while also providing the other services that I was being paid to provide.

There’s lots more, but it’s obvious from what I’ve already covered that this is too easy a target. I think a large part of the problem is that, not only have people failed to take into account the purpose of the paper, but they have also managed to get confused over definitions. There are lots of definitions of “hacker”; there are lots of people who fit one or more of those definitions; but it seems to me that a lot of people are letting their definition get in the way of understanding the intent of the paper. It has some interesting and provocative ideas. Nobody is going to agree with all of it. Probably most people will disagree with a lot of it. But if it makes you think about what it is to be a programmer and about where you want to go with your work, then it has served its purpose.