A good commercial static analysis tool will probably find more bugs in your company's code than you have time to fix. This sounds like bad news, but take a moment for some attitude adjustment: The bugs were there all along, so this is actually a good thing, albeit embarrassing. But it means that your top priority is prioritizing the bugs—which, unfortunately, is a weak point of these tools.
So tweak the heck out of your settings. There's a lot you can configure, and the vendor’s support staff probably left you with a fairly generic configuration—something that won’t cause problems for them, but which is suboptimal for your needs.
Triage a representative sample of the defects, and if a checker doesn't seem to be producing a lot more signal than noise, move it to the bottom of your public priority list. (More on this later.) If a checker seems totally bogus (which is fairly rare) or is irrelevant to your company's concerns (far more common), turn it off completely. Do this sparingly, as it may be politically impossible to reenable a checker once you've turned it off. (Conversely, it's also worth looking at the checkers your vendor turns off by default—some of them produce surprisingly important defects.) If a checker seems flaky, check the configuration and error logs. It's regrettably easy to waste a lot of time on false positives caused by build errors. And if build configuration problems are causing obvious false positives, they're probably causing invisible false negatives as well.
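If your tool can export defects to CSV, a quick script helps with the tally. Here's a minimal sketch in Python; the `checker` and `verdict` column names are hypothetical, and the verdicts come from your own manual triage pass:

```python
import csv
from collections import Counter

# Hypothetical export: one row per triaged defect, with a "checker" column
# and a hand-filled "verdict" column ("real" or "false_positive").
true_hits = Counter()
totals = Counter()

with open("triaged_sample.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["checker"]] += 1
        if row["verdict"] == "real":
            true_hits[row["checker"]] += 1

# Rank checkers by observed precision; the noisy ones sink to the bottom.
for checker in sorted(totals, key=lambda c: true_hits[c] / totals[c], reverse=True):
    print(f"{checker}: {true_hits[checker]}/{totals[checker]} real")
```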
Be cynically realistic about which part of your codebase you're actually going to fix soon. Define components so you can skip analysis on the code you're not going to fix. For instance, if your product uses third-party code, in theory you're responsible for its quality, and should patch bugs in it. In practice, such bugs rarely get fixed. And if it's mature third-party code, the false positive rate will be high.
Conversely, in-house code that everyone knows is going to be replaced can be a rich source of accurate bug reports that nobody cares about. Try to get buy-in on excluding such code from analysis. Skipping don't-care code will also speed up the analysis. Components also help with assigning individual responsibility for defects, which I've found crucial to getting them fixed.
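As a rough sketch of the component idea (the directory names and regexes here are made up; substitute your own tree's layout), you can filter defects by path before they ever hit a report:

```python
import re

# Hypothetical component map: name -> path regex. Which directories count
# as third-party or doomed legacy code is specific to your tree.
EXCLUDED = {
    "third_party": re.compile(r"^(vendor|third_party)/"),
    "legacy":      re.compile(r"^old_ui/"),  # everyone knows it's being replaced
}

def in_dont_care_code(path):
    """True for defects in code we've agreed not to fix soon."""
    return any(rx.search(path) for rx in EXCLUDED.values())

# Example: drop don't-care defects before reporting.
defects = [
    {"cid": 101, "file": "vendor/zlib/inflate.c"},
    {"cid": 102, "file": "server/auth.c"},
]
actionable = [d for d in defects if not in_dont_care_code(d["file"])]
```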
Give a lot of thought to how you report defects, and in what order; the commercial tools aren't going to share your priorities. Their primary means of reporting defects is on a web page, but their page won't fit your concerns. Spend a little time reverse-engineering the URL format for browser display of defects. Do a little URL scripting to generate a simple web page with the links your organization cares about, in priority order. Give this page a short, simple URL and publicize it internally. (I once used a lovely, scary Coverity poster of a bug under a microscope for this; I put the URL and poster over the corporate pinball machine.)
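Here's a minimal sketch of that URL scripting, in Python. The URL format shown is a placeholder; use whatever format you actually reverse-engineered from your tool's defect browser:

```python
import html

# Hypothetical URL format; substitute the one you reverse-engineered.
DEFECT_URL = "http://coverity.example.com/query/defects.htm?cid={cid}"

def link(defect):
    url = DEFECT_URL.format(cid=defect["cid"])
    return f'<li><a href="{url}">{html.escape(defect["summary"])}</a></li>'

# Defects already sorted into *your* priority order, not the vendor's.
defects = [
    {"cid": 101, "summary": "NULL deref in auth path"},
    {"cid": 102, "summary": "Resource leak in logger"},
]
page = "<html><body><h1>Bugs we actually care about</h1><ul>\n"
page += "\n".join(link(d) for d in defects)
page += "\n</ul></body></html>"

with open("bugs.html", "w") as f:
    f.write(page)
```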
If different parts of your organization own different components, segment the report so that an engineer can just look at what he or she cares about. Be as fine-grained as possible in the report; ideally, have a separate row for each engineer, with his or her name prominently displayed. (Programmers' pride is one of your most important tools.)
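Continuing the sketch above, the per-engineer grouping is only a few lines; the `owner` field is hypothetical and would come from your component map or OWNERS files:

```python
from collections import defaultdict

# Hypothetical "owner" field, derived from components or OWNERS files.
defects = [
    {"cid": 101, "owner": "Alice", "summary": "NULL deref in auth path"},
    {"cid": 102, "owner": "Bob",   "summary": "Resource leak in logger"},
]

by_owner = defaultdict(list)
for d in defects:
    by_owner[d["owner"]].append(d)

# One section per engineer, name first: pride does the rest.
for owner in sorted(by_owner):
    print(f"== {owner} ({len(by_owner[owner])} open) ==")
    for d in by_owner[owner]:
        print(f"  CID {d['cid']}: {d['summary']}")
```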
Within each section, put the most important links first. This may just be a live link for defects found by the checker(s) that do best on your code—and deciding which checkers those are takes judgment and experience. It's a tradeoff between how rarely a checker is wrong and how serious the bug is when it's right.
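One way to make that tradeoff explicit is a crude score: the checker's observed precision (from your triage sample) times the severity of a true hit. A sketch with made-up numbers; both inputs are judgment calls, not vendor-supplied facts:

```python
# Hypothetical per-checker data: observed precision from your own triage,
# and a 1-10 severity guess for a true hit from that checker.
checkers = {
    "USE_AFTER_FREE": (0.90, 9),
    "RESOURCE_LEAK":  (0.70, 4),
    "DEAD_CODE":      (0.95, 1),
}

def score(name):
    precision, severity = checkers[name]
    return precision * severity

# Highest-scoring checkers earn the top links on the report.
for name in sorted(checkers, key=score, reverse=True):
    print(f"{name}: {score(name):.1f}")
```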
Another candidate for top billing on your report is new bugs. If you've started with an unmanageable number of bugs in your old code, you may at least be able to forbid adding new ones. One last thing you might want on the report is a manually maintained Top Ten List of particularly amusing bugs. Used carefully, humor can be an important tool for getting developers' attention, and it can take the sting out of facing harsh truths—which static analysis is embarrassingly good at turning up.
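If your tool assigns stable IDs to defects (many do, though it varies), you can flag regressions mechanically by diffing two runs. A hedged sketch, assuming each run can be exported as a JSON list of records with a `cid` field:

```python
import json

# Assumes defect IDs are stable across runs; check this for your tool.
with open("last_week.json") as f:
    old_run = {d["cid"] for d in json.load(f)}
with open("today.json") as f:
    new_run = {d["cid"] for d in json.load(f)}

introduced = new_run - old_run
if introduced:
    print(f"{len(introduced)} new defects since last run: {sorted(introduced)}")
```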
Acknowledgements
This posting is partly based on a talk I gave at the Stanford Computer Science Department and a posting at Stack Overflow. My main debt is to "Weird Things that Surprise Academics Trying to Commercialize a Static Checking Tool" by Dawson Engler, Andy Chou, Ben Chelf, Chris Zak, et al., and to "The Benefits of Source Code Analysis" by Andy Yang. After I wrote this post, a follow-on to "Weird Things" was published, which covers some of the same issues, and which I highly recommend: "A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World."