Now that we’ve covered the elements that make programs high-quality, here’s a non-exhaustive list of ways to actually achieve it. None of them is strictly necessary for achieving quality, but they certainly help a lot.
Many people confuse these measures with quality itself. “This software has God-damn awful code, so it’s a pile of crap.” Well, what do the users care how bad the code is, as long as it is functional, has all the necessary features, and is (mostly) bug-free? It’s important not to confuse quality with the ways to achieve it.
The aim of this section is to briefly cover as many measures for achieving good quality as possible.
The more modular a project’s code is, the easier it is to change, understand and extend, and the faster development will go. Refactoring is the name given to the process of transforming code that is sub-optimal and “ugly”, but still mostly functional and bug-free, into code that is equally functional but more modular and clean. See the excellent Joel on Software article “Rub a dub dub” for some of the motivation and practices of good refactoring, as opposed to throwing away the code and restarting from scratch.
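To make the idea concrete, here is a minimal (and entirely hypothetical) sketch of the kind of transformation refactoring performs: the same behaviour, restructured into small, separately testable pieces. The function and variable names are illustrative, not taken from any real project.

```python
# Before: one monolithic function mixing parsing, computation and output.
def report(raw):
    total = 0
    for line in raw.splitlines():
        name, value = line.split(",")
        total += int(value)
    return "Total: %d" % total

# After: the same behaviour, split into small, reusable, testable pieces.
def parse_records(raw):
    """Yield (name, value) pairs from comma-separated lines."""
    for line in raw.splitlines():
        name, value = line.split(",")
        yield name.strip(), int(value)

def sum_values(records):
    """Sum the numeric part of each record."""
    return sum(value for _, value in records)

def format_report(total):
    """Render the final report string."""
    return "Total: %d" % total

def report_refactored(raw):
    return format_report(sum_values(parse_records(raw)))
```

The point is that both versions produce identical output for identical input; the refactored one is simply easier to extend (say, to add a new output format) because each concern lives in its own function.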
A reviewer of an early version of this article told me about an early and relatively large PHP codebase of hers that was badly written and relatively buggy, yet proved popular among some of her clients, who have deployed it on many hosts and won’t effectively upgrade it. So she still has to maintain it, even though she’d rather not recommend it. She claimed this is an indication that low quality in code is itself a criterion of low quality.
She has a point: badly-written or non-modular code usually results in more external quality problems such as bugs, security holes and lack of extensibility. However, even if the code were extremely well-written, it would likely still need to be maintained, extended, and corrected. And if the clients in question don’t have, or don’t want, a good way to pull changes from a central place or install updates properly, that’s a procedural and organisational problem.
Organisational quality deserves its own separate article (or arguably a book, a web-site, or even more than one book), but external software quality (much less internal quality) is not a substitute for it. Please refer to a partial list I prepared in a different article, and to my software “gurus” links on my home site’s links page.
Automated tests aim to test the behaviour of code automatically, instead of by manual testing. The classical example is that if we wish to write a function called add(i, j) that aims to add two integers, then we should check that add(2, 3) == 5, that add(2, 0) == 2, that add(0, 2) == 2, that add(5, -2) == 3, that add(10, -24) == -14, etc.
Then we can run all the tests and, if any of them fail, fix the code. Later, after we write or modify code, we can run the tests again to see if there are any regressions.
Writing automated tests before we write the actual code or fix a bug, and accumulating such tests (the so-called “Test-driven development” paradigm), is a good practice that helps maintain high-quality code, and it both facilitates refactoring and makes it safer.
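The add() example above can be turned into a real automated test; here is a minimal sketch using Python’s standard unittest module (the add() implementation is, of course, a stand-in for the code under test).

```python
import unittest

def add(i, j):
    """The function under test: add two integers."""
    return i + j

class TestAdd(unittest.TestCase):
    # In test-driven development, these assertions would be written
    # first, fail, and then drive the implementation of add().
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(2, 0), 2)
        self.assertEqual(add(0, 2), 2)
        self.assertEqual(add(5, -2), 3)
        self.assertEqual(add(10, -24), -14)
```

Saving this as, say, test_add.py and running `python -m unittest test_add` executes every test method and reports any failures, which is exactly the “run them again to check for regressions” step described above.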
If you have beta testers for the code, or publish development versions frequently, you can get a lot of feedback from the various platforms and configurations your code runs on. These beta testers can run the automated tests and also use the beta code for their own testing, or even in production.
The more frequent your releases are, the more people can test your code, the easier it is for them to upgrade to the latest version, and the quicker bugs that disturb your users get fixed.
Naturally, there are advantages to slower release cycles, or to predictable release cycles like the one GNOME 2.x has. I won’t voice a definite opinion on which methodology is best, but the decision deserves careful consideration.
There are several sources, online and offline, on good software management for “shrinkwrap” software (open-source, commercial or otherwise distributed) and for other types of software development (embedded, in-house, etc.), from which good advice can be drawn on how best to run a software project. While they are sometimes contradictory, and often wrong, they still make a good read and are thought-provoking.
Here are some links:
It certainly helps for a project’s community to have good social engineering skills. From tactfulness, to humour, to controlling one’s temper, to saying “Thanks” and congratulating people on their contributions, to applying patches and fixing bugs in a timely fashion - all of these make contributing to a project and using the program more fun and less frustrating.
Often, social engineering should be made part of the design of the software, or of the web-sites dedicated to it. For example, on the GNU Savannah software hub it took me several iterations of filling in the same project-submission form, only to have it rejected and have to go through the whole process again. The admins were polite, but it was still annoying.
Eventually, they implemented a way to save previous project submissions and to re-send them, so future users won’t become as frustrated as I did.
Again, some projects have succeeded despite having had, or even still having, bad social engineering. But adopting a good social-engineering policy can certainly help a lot.
Bad politics in a software project is a lot like subjectivity: it can never be fully eliminated, but we should still strive to reduce it to a minimum. If bad political processes become common in a project, then important features are dropped, bugs are left unfixed, patches stall, external projects get stalled or killed, people become frustrated or angry and possibly leave - and the project risks being forked.
So it’s a good idea to keep bad politics at bay. How to do that is outside the scope of this document, but it’s usually up to the leaders to maintain a policy that frustrates as few people as possible and does not put off external contributors. And naturally, for open-source and similar projects, a fork is often an option in this case, or in similar ones.
A project leader and the other participants should have good communication skills: very good English; pleasantness and tact; proper grammar, syntax and capitalisation; clear phrasing and writing; patience and tolerance; etc. If they don’t, the project may run into problems, as people will find its developers hard to understand or tolerate and, thus, hard to work with.
Contrary to common belief, I think that the less hype and general noise there is around a software project, the better off it is. For example, as Paul Graham notes regarding Java:
[Java] has been so energetically hyped. Real standards don’t have to be promoted. No one had to promote C, or Unix, or HTML. A real standard tends to be already established by the time most people hear about it. On the hacker radar screen, Perl is as big as Java, or bigger, just on the strength of its own merits.
In fact, I would argue that if your project receives a lot of negative hype, that is an indication it is successful. Perl, for example, has received (and still receives) a lot of criticism, and Perl aficionados often tire of hearing the same repetitive arguments from its opponents. However, the perl 5 interpreter is in good shape: it has many automated tests (many more than most competing dynamic languages), an active community, many modules on CPAN (the Comprehensive Perl Archive Network) providing a lot of third-party, open-source functionality, and relatively few critical bugs. It is still in heavy, active use and has many fans.
Similar criticism has been voiced against the Subversion version control system, Linux, etc. One thing you can notice about such highly-criticised projects is that they tend not to be bothered by it too much. Rather, what their developers say is: “If you want to use a competing project, I won’t stop you. It is probably good, and may be better in some respects. I like my own project, and it’s what I’m used to and what I use.”
This is by all means the right policy on “hyping” to adopt, if you want your project to succeed on its own merits. Some projects compete for the same niche without voicing much hype for themselves or against each other, and this is a better indication that they are all healthy.
Finally, a project should have a good name. One example of a project with an awful name is CVSNT. There are two problems with the name:
It is based on CVS, whose limitations most people have run into; CVS is considered passé and unloved, and people would rather avoid it.
The “NT” part implies it only runs on Windows-NT, which is both misleading and undesirable.
On the other hand, the competing project “Subversion” has a much better name, since it has nothing to do with CVS, or Windows NT, and since it is an English word and sounds cool.
Some projects are successful despite being badly named, while some have a very cool name, but languish. Still, a good name helps a lot.
Also consider what Linus Torvalds said about Linux and 386BSD (half jokingly):
No. That’s it. The cool name, that is. We worked very hard on creating a name that would appeal to the majority of people, and it certainly paid off: thousands of people are using linux just to be able to say “OS/2? Hah. I’ve got Linux. What a cool name”. 386BSD made the mistake of putting a lot of numbers and weird abbreviations into the name, and is scaring away a lot of people just because it sounds too technical.
Entire books (and web-sites) have been filled with the various measures for achieving software quality, and what I described here is just a sample of them. The point is that these are not aspects of quality in themselves, but rather measures that help achieve it. None of them is a necessary or sufficient condition for the success of a project, but the more of them are implemented, the easier, faster, and more enjoyable working on the project will be.