Static analysis tools are often used to help detect defects in source code. The concept is simple: the tool analyzes the code and reports potential defects. Of course, what everyone wants is a tool with zero false positives and zero unreported defects (false negatives). Developers in particular will resist using a tool they do not see as accurate. Improving a tool's accuracy often requires looking under the hood.
Like most build engines, your static analysis tool creates a log somewhere of the steps it takes to analyze the code. Most tools I'm familiar with call it a build log. The build log, I believe, is the first important place to look. Why? Some errors are reported in the build log and nowhere else. Specifically, errors are reported by the compiler (for each file it looks at) and then by the analysis engine itself. In their default configurations, not all compilation errors are visible in the defect list.
This means that if you run an analysis and look only at the results these tools present, you may not be getting the whole picture. For example, if an entire file fails to compile, static analysis tools will often record this as an error and duly report it to the end user or administrator. But if a single function or data structure cannot be resolved, or in some cases when include files cannot be found, the failure may go unreported except in the log files. And when details are lost, so too is analysis fidelity.
To take a recent example: I was working with a customer whose analysis build summary reported zero errors. However, as is my practice, I searched the build log for error messages from the analysis compiler. It had reported 75 errors where it could not find certain include files. When we added those include files to the compiler configuration, 315 of the 1,362 defects reported by the original analysis became "fixed", reducing the defect density by approximately 25%.
So what's an analysis administrator to do? I recommend counting the errors in your build log at the end of every build and, if any occur, evaluating them and attempting to fix them. Usually it's straightforward. Sometimes it takes more effort, and sometimes it's not worth the effort, but usually the payoff is worthwhile. In this case, it took a couple of hours to resolve 315 defect reports. As a result, I cut the number of defects this team needed to triage by 25% (without any loss of real bugs), saving significant time.
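As a minimal sketch, counting error lines in a build log can be a one-line grep. The log file name (`build-log.txt`) and the `Error:` message prefix here are assumptions for illustration; check your own tool's documentation for where its build log lives and how its compiler errors are formatted.

```shell
# Fabricate a small sample log for illustration; in practice your
# analysis tool writes this file itself during the build capture.
cat > build-log.txt <<'EOF'
[COMPILE] src/main.c ... ok
Error: cannot find include file "config.h"
[COMPILE] src/util.c ... ok
Error: unresolved type "handle_t"
[ANALYZE] 2 translation units processed
EOF

# Count lines that the compiler flagged as errors. A nonzero count
# means some files were not fully analyzed and fidelity was lost.
grep -c '^Error:' build-log.txt
# → 2
```

Wiring this count into the end of the analysis job (and failing or alerting when it is nonzero) makes missing-include problems visible on the very first build instead of surfacing later as inflated defect counts.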
-- Kenneth Kron