Saturday, February 27, 2010

Who's Guarding the Henhouse When Fixing Source Code Analysis Bugs?

What happens when you have a large development team where each developer is responsible for their own quality? On the face of it, this is what every development organization wants. Developers should feel ownership of their code. However, without the right minimum standards in place, you end up with inconsistency and a false sense of security.

Development organizations are heterogeneous. You have great developers, and you have developers that are, well, not so great. You have experienced developers and you have junior developers. One thing is for sure: everyone has their own sense of what is sufficient. Training, code review, and the institution of clear standards are all important ways to raise the bar at least to what is acceptable. This is why transparent acceptance criteria are so important.

Yet, source code analysis often misses the bar. Many organizations have started to integrate source code analysis into their development process and have put acceptance criteria gates in place. The most common are "no new static analysis defects should be introduced" or the more stringent "no static analysis defects in code." Both are good criteria that greatly improve the quality and security of the developed product while also improving developer efficiency.
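
As a rough illustration, a "no new defects" gate can be as simple as diffing the current scan against a committed baseline. The sketch below is a minimal Python version, assuming a hypothetical one-finding-per-line export and a (checker, file, function) fingerprint; real tools supply their own stable defect identifiers that make this more robust.

    # Sketch of a "no new defects" gate. The export format (one finding
    # per line as "checker,file,function") is an assumption for the
    # example -- real analysis tools provide their own identifiers.

    def load_findings(path):
        """Parse one finding per line: checker,file,function."""
        with open(path) as f:
            return {tuple(line.strip().split(",")) for line in f if line.strip()}

    def new_defects(baseline_path, current_path):
        """Return findings present in the current scan but not the baseline."""
        return load_findings(current_path) - load_findings(baseline_path)

    if __name__ == "__main__":
        import sys
        introduced = new_defects("baseline.txt", "current.txt")
        for finding in sorted(introduced):
            print("NEW DEFECT:", ",".join(finding))
        sys.exit(1 if introduced else 0)  # fail the build if anything is new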

But many organizations fall down in execution. Developers analyze code, see the results, and fix the bugs. All seems good. But I've seen many high-end software development organizations where developers routinely mark reported defects as "false positive" or "intentional" (meaning "this is how I intended it to be; not a bug"). By auditing these defects, we find that they are indeed real bugs, some of them quite nasty.
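
To make the pattern concrete, here is a minimal, hypothetical example of the kind of report that gets waved off as "intentional." The analyzer is right: the None check exists, but it comes after the value has already been used.

    # Hypothetical example of a defect often dismissed as a false
    # positive: the guard exists, but only after the value is used.

    def flush_and_close(conn):
        conn.flush()            # crashes with AttributeError if conn is None
        if conn is not None:    # the check the developer points to when
            conn.close()        # marking the report "false positive"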

Developers under time pressure just mark these problems away. Other developers, who may have their own biases against source code analysis, simply don't take the time to understand what the tool is uncovering. Good bugs that could be caught at their earliest point in the development cycle get missed, only to be found later in more expensive downstream processes.

The problem occurs when you have developers deciding on their own criteria. Unlike bugs in a bug database, which typically get triaged by someone else, static analysis results in many organizations are self-prioritized by the very developers who wrote the code. In doing so, the "bad" developers do whatever they can to shut the tool up and move on, while the good developers take the time to understand. Source code analysis is great for good developers, but it is even more needed for the bad ones.

What Can Be Done?
Training is useful. Prioritization by a different developer or a review team is important and can be tied to release criteria. Code review can create good learning experiences. Spot-checking and other types of auditing can help catch bugs and present new learning opportunities.
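
Spot-checking doesn't need heavy tooling. Below is a sketch that randomly samples dismissed findings for a second pair of eyes; the CSV export with checker, file, and status columns is an assumption, as each tool has its own export format and triage states.

    # Sketch of a simple spot-check: sample dismissed findings for audit.
    # The CSV layout and the "status" values are assumptions for the example.

    import csv
    import random

    def sample_dismissals(export_path, n=10, seed=None):
        """Pick n findings marked False Positive or Intentional for re-review."""
        with open(export_path, newline="") as f:
            dismissed = [row for row in csv.DictReader(f)
                         if row["status"] in ("False Positive", "Intentional")]
        random.seed(seed)
        return random.sample(dismissed, min(n, len(dismissed)))

    if __name__ == "__main__":
        for finding in sample_dismissals("triage_export.csv", n=10):
            print(finding["checker"], finding["file"], finding["status"])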

A good way to start is with a thorough audit of your "false positives" and "intentionals." At Code Integrity Solutions, we provide audits of companies' analysis results so that companies can
  • get a sense for how pervasive the problem is
  • catch a few good bugs that they would have missed
  • identify problem areas for training
  • identify learning opportunities to help developers know what is acceptable
  • help provide needed data for coding standards and acceptance criteria
Having developers take ownership of quality through static analysis is critical to creating a good, scalable process. But without the right checks and balances in place, you defeat the benefit of improving the areas that need it the most.

1 comment:

  1. Audits are also good for fine-tuning the analysis tool itself. Too many developers don't realize that they can adjust the configuration of the analysis to reduce or eliminate the "don't care" categories of defects. Of course, auditing false positives can also uncover opportunities for easy additional configuration, which can help find new real bugs as well as reduce false alarms.
