Originally Posted By: wfaulk
I'm far from a professional developer, but maybe instead of raw numbers you could look at an increase in numbers.

That's actually the best use for metrics -- looking at changes over time. Raw numbers are meaningless without context. Is a complexity of 10 good or bad? It's good if the rest of your software has complexity numbers in the 50s. It might be bad if the rest of your software is in the low single digits. It might even be fine on its own, if it's encapsulating a complex piece of business logic in a minimally atomic piece of code. But it's definitely worth noting if it doubled since the last time you collected metrics.
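For what it's worth, here's a minimal sketch of what that periodic snapshotting could look like, assuming a Python codebase and the third-party radon package; the CSV log file and source layout are just illustrative:

```python
# Sketch: snapshot per-function cyclomatic complexity so trends can be
# compared run-over-run. Assumes a Python codebase and the "radon"
# package; the CSV schema is made up for illustration.
import csv
import datetime
import pathlib

from radon.complexity import cc_visit

SNAPSHOT = pathlib.Path("complexity_history.csv")  # hypothetical log file

def snapshot_complexity(src_root: str) -> None:
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with SNAPSHOT.open("a", newline="") as out:
        writer = csv.writer(out)
        for path in pathlib.Path(src_root).rglob("*.py"):
            source = path.read_text(encoding="utf-8")
            try:
                # cc_visit returns one result per function/method/class,
                # each with .name and .complexity attributes.
                blocks = cc_visit(source)
            except SyntaxError:
                continue  # skip files that don't parse
            for block in blocks:
                writer.writerow([stamp, str(path), block.name, block.complexity])

if __name__ == "__main__":
    snapshot_complexity("src")  # placeholder source root
```

Run that on a schedule (or in CI) and diffing two snapshots tells you which functions doubled, not just which ones are "big."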

It's also a good way of collecting data on under-performing teammates. Isolate their change sets and note the complexity of the functions being touched, pre- and post-edit. Did it go up? By a lot? Are they consistently writing code with complexity metrics significantly higher than the project average, even after accounting for the cases where the complexity is a necessity? Does the high-complexity code correspond to where you're finding all the bugs? It probably will (existing research backs this up), but if the bugs are turning up in low-complexity code, that's a darn good indicator that someone is doing a sloppy job of testing.
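Something like this could do the pre-/post-edit comparison, again assuming a Python project under git and the radon package; the commit and file path are placeholders:

```python
# Sketch: compare per-function complexity before and after a commit.
import subprocess

from radon.complexity import cc_visit

def file_at(rev: str, path: str) -> str:
    # "git show REV:PATH" prints the file as it existed at that revision;
    # an empty string stands in for a file that didn't exist yet.
    proc = subprocess.run(["git", "show", f"{rev}:{path}"],
                          capture_output=True, text=True)
    return proc.stdout if proc.returncode == 0 else ""

def complexity_map(source: str) -> dict[str, int]:
    return {block.name: block.complexity for block in cc_visit(source)}

def complexity_delta(commit: str, path: str) -> None:
    before = complexity_map(file_at(f"{commit}^", path))  # parent commit
    after = complexity_map(file_at(commit, path))
    for name in sorted(set(before) | set(after)):
        b, a = before.get(name), after.get(name)
        if b != a:
            print(f"{name}: {b} -> {a}")

complexity_delta("HEAD", "src/billing.py")  # placeholder commit and path
```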

Compare the numbers for people who follow TDD and write short, modular methods with the numbers for those who don't. All of a sudden there's hard data to back up the assertion that TDD and short, modular methods are a cornerstone of writing quality software, and it's no longer a subjective argument.
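And once you've collected those numbers, the group comparison is just arithmetic. A sketch, with the CSV schema (author, function, complexity) and the author-to-practice mapping both made up for illustration:

```python
# Sketch: summarize complexity by practice group from a per-author CSV.
import csv
import statistics
from collections import defaultdict

TDD_AUTHORS = {"alice", "bob"}  # hypothetical: who follows TDD

def group_averages(csv_path: str) -> None:
    groups: dict[str, list[int]] = defaultdict(list)
    with open(csv_path, newline="") as f:
        for author, _function, complexity in csv.reader(f):
            key = "tdd" if author in TDD_AUTHORS else "non-tdd"
            groups[key].append(int(complexity))
    for key, values in groups.items():
        print(f"{key}: mean {statistics.mean(values):.1f}, "
              f"median {statistics.median(values)}, n={len(values)}")

group_averages("complexity_by_author.csv")  # hypothetical input file
```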