Carl, over at cysquatch, who is more types of ninja than most people are aware exist, has written about something I think is pretty important in his recent article: metrics. This is awesome because I've been looking for a place to start talking about metrics for a while.
Metrics in a project are super-important, because anything you can't measure, you don't have. Every important aspect of your project needs to have a system for tracking its performance, preferably an automated system. There's all manner of ways to think about this, but at a very basic level, if you can't measure it, you can't put it on a brochure, and brochures sell stuff and make you money.
This was expressed to me somewhat more succinctly by one of my superiors some time ago as "you get what you measure". I've seen this phenomenon first hand, because my employer has historically been seriously concerned with performance. This came right from the top, so what it meant in practical terms is that our CEO would stalk the cubicles randomly asking engineers "how fast does it go?", and to answer that we needed to measure the speed. The net result was that everyone understood there was a priority on speed, and we had this extremely simple test against which we could measure every change we made. Over time, people ran hundreds or thousands of experiments with this single variable, speed, as the result. The software got faster, and people got better at writing fast software.
Now that's pretty awesome when you think about it: the simple act of asking how fast software went produced, only slightly indirectly, fast software. It communicated the most important goals for the project more efficiently and more effectively than any spec ever could. And it's a motivator as well -- having a number that says "my code this week made us 10% better" is fantastic -- you can see tangible results from your work.
The catch is that when you ask only for fast software, you get fast software, but you might also get unstable software or inaccurate software, or resource-hungry software, and odds are, unstable, inaccurate, or resource-hungry software isn't software that's good enough.
The other problem is that you might think all that stuff will somehow "just be OK"; you might even think that some of those variables aren't so important for your target market. You might even be right, but those are all things you should keep an eye on; thinking isn't good enough, you need to know.
Back to Carl's article -- he's listed some bitchin' tools to monitor code complexity. The goal, then, is to reduce code complexity through the act of measuring it. This is a killer idea, it really does work, and better yet it's easy to implement: you can set it up as an artifact of your nightly build system and get up-to-date data on this stuff every day! Indeed, the nightly build system itself is a massively useful metric -- the question of "can you build release packages?" is pretty important, because one day you're gonna want to do that.
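Carl's article names the actual tools, so go read it for those; purely as an illustration, here's a minimal sketch of what the nightly end of this might look like. It assumes the Python `lizard` complexity analyser and C sources under src/, both of which are placeholders for whatever your project actually uses:

```python
#!/usr/bin/env python3
"""Nightly complexity snapshot -- a minimal sketch, not Carl's recipe.

Assumes the 'lizard' analyser (pip install lizard) and C sources under
src/; both are placeholders for whatever your project actually uses.
"""
import csv
import datetime
import glob
import os

import lizard  # assumed third-party complexity analyser


def snapshot(pattern="src/**/*.c", log="metrics/complexity.csv"):
    # Gather per-function complexity across the whole tree.
    functions = []
    for path in glob.glob(pattern, recursive=True):
        functions.extend(lizard.analyze_file(path).function_list)

    worst = max((f.cyclomatic_complexity for f in functions), default=0)
    average = (sum(f.cyclomatic_complexity for f in functions) / len(functions)
               if functions else 0.0)

    # Append one date-stamped row per night; the CSV becomes your trend graph.
    os.makedirs(os.path.dirname(log), exist_ok=True)
    with open(log, "a", newline="") as fh:
        csv.writer(fh).writerow([datetime.date.today().isoformat(),
                                 len(functions), round(average, 2), worst])


if __name__ == "__main__":
    snapshot()
```

Hang that off the end of the nightly build and the trend line takes care of itself.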
So what else? Memory leaks are a good one to keep an eye on, and similarly straightforward -- run your unit-tests under valgrind, and under Electric Fence for good measure. And while we're talking about unit-tests, there are metrics for those as well. Code-coverage tools let you measure how much of your code your unit-tests are actually checking; gcov is a good tool for this stuff. Making sure you have unit-tests for every line of code gives you much greater confidence in the quality of your code, and the coverage figure itself becomes a metric of code functionality. Setting up a metric for whether the unit-tests actually cover the spec is one of those domain-specific problems, but it can be done, and you should think about how you could do it for your project.
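As a rough sketch of how those two numbers might get collected each night -- the test binary path, the source file, and the output parsing are all assumptions on my part, not anyone's official recipe -- something like this works if the tests are built with -fprofile-arcs -ftest-coverage:

```python
#!/usr/bin/env python3
"""Leak and coverage numbers from a nightly test run -- a rough sketch.

Assumes a test binary at ./tests/run_tests built with gcov support
(-fprofile-arcs -ftest-coverage); the binary and source file names are
placeholders for whatever your project actually builds.
"""
import re
import subprocess


def leak_bytes(test_binary="./tests/run_tests"):
    """Run the tests under valgrind and pull out the 'definitely lost' total."""
    result = subprocess.run(
        ["valgrind", "--leak-check=full", test_binary],
        capture_output=True, text=True)
    match = re.search(r"definitely lost: ([\d,]+) bytes", result.stderr)
    return int(match.group(1).replace(",", "")) if match else 0


def coverage_percent(source="src/engine.c"):
    """Run gcov on one source file and pull out the 'Lines executed' figure."""
    result = subprocess.run(["gcov", source], capture_output=True, text=True)
    match = re.search(r"Lines executed:\s*([\d.]+)%", result.stdout)
    return float(match.group(1)) if match else 0.0


if __name__ == "__main__":
    print("definitely lost:", leak_bytes(), "bytes")
    print("coverage:", coverage_percent(), "%")
```

Two more numbers for the nightly log, and two more graphs you can point at.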
There are other metrics you should watch as well: tracking the deficit between a programmer's estimate of the time to fix a bug and the time it actually took is honestly the only way to start getting good time-estimates. Which is another important point: metrics don't just let you improve your product now, they help your staff get better at their jobs.
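A minimal sketch of that deficit calculation, with made-up bug records standing in for whatever your tracker can actually export:

```python
#!/usr/bin/env python3
"""Estimate-vs-actual deficit -- a minimal sketch.

The bug records below are made-up placeholders; in practice you'd pull
(estimate, actual) pairs out of your bug tracker.
"""


def estimate_deficit(bugs):
    """Average of (actual - estimated) hours per bug, grouped by programmer."""
    deficits = {}
    for bug in bugs:
        deficits.setdefault(bug["owner"], []).append(
            bug["actual_hours"] - bug["estimated_hours"])
    return {owner: sum(d) / len(d) for owner, d in deficits.items()}


if __name__ == "__main__":
    sample = [
        {"owner": "alice", "estimated_hours": 2, "actual_hours": 5},
        {"owner": "alice", "estimated_hours": 1, "actual_hours": 1},
        {"owner": "bob",   "estimated_hours": 4, "actual_hours": 12},
    ]
    print(estimate_deficit(sample))  # {'alice': 1.5, 'bob': 8.0}
```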
Similarly, though it's imperfect, time-lines of open bug counts and the rate of new bugs opened per unit time show interesting trends. I've found these to be good indicators of project completeness in the past, but these metrics are dangerous: it's easy to be misled by this stuff, so treat them with suitably large salt-grains.
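And a similarly minimal sketch of the opened-per-week rate, again with placeholder data in place of a real bug-tracker export:

```python
#!/usr/bin/env python3
"""Bugs opened per week -- a minimal sketch of the trend line.

The dates are placeholders; you'd export the real opened dates
from your bug tracker.
"""
import collections
import datetime


def opened_per_week(opened_dates):
    """Count how many bugs were opened in each ISO (year, week)."""
    counts = collections.Counter(d.isocalendar()[:2] for d in opened_dates)
    return dict(sorted(counts.items()))


if __name__ == "__main__":
    dates = [datetime.date(2009, 3, d) for d in (2, 3, 9, 10, 11, 17)]
    print(opened_per_week(dates))
    # {(2009, 10): 2, (2009, 11): 3, (2009, 12): 1}
```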
And most important of all is that you need to have metrics for all the stuff you're going to sell your software on. If you're going to push out huge ads that claim your software is user-friendly, you wanna be damn sure you've conducted the usability studies to ensure that it actually is. Secure? Get some kick-ass pen-testers, and measure holes found per unit time. Fast? Buy an Avalanche. Sexy? Well, I'll leave that one as an exercise for the reader.
I've been pretty lucky in my thus-far-short management career and have worked with universally awesome people who grasped these benefits right off the bat and embraced any new metrics I tried to put in place. I imagine not everyone will be so lucky, so introducing these things slowly, and getting people used to the idea that they're there to help everyone, are likely the strategies you want to use.
I don't have a reference to the article at hand, but Joel has previously submitted that there are some metrics you should not measure. The example he gave was in reference to his company's bug-tracking software: he submitted that once you can measure bugs-closed per unit time, programmers will start artificially inflating their count by lying to the bug-tracker and to you. I frankly find this position a bit offensive, but if such situations exist they reflect poorly on management rather than the engineers. You need to be accepting of metrics that show problems, because if people hide the problems, you can't solve them; but similarly, if you can't measure the problems, or don't look for them, you can't solve them either. Now that takes some discipline, but building that kind of trust is the only way you'll be able to make your metrics valuable, and as I alluded to before: if your metrics can't demonstrate value, your program is without value.
Anyway, I've spent enough time on this one already, though there's probably still much more to say. So I'll suggest this as something to try: make a list of all the things that are important to you about your product. Speed? Stability? Memory footprint? If there are any items on your list that you can't put a number next to, have a ponder about why that is, and whether you could. If nothing else you'll get some killer graphs out of it.