At Code Integrity Solutions, we've seen backlogs ranging from a few hundred defects on a small codebase to over 50,000 on some of the largest codebases in the world. Average backlogs are in the thousands. Getting through a backlog takes time, effort and skill. The rewards are usually worthwhile, because buried in the backlog are often some genuine showstoppers. And yet software development teams are always short on resources. Rarely will a product owner stop all development to go through the backlog and find those gems hidden among the other high, medium, low and false positive results.
Oftentimes, we see engineering management suggest, "Why don't we have our offshore team do it?" The offshore team may already be charged with maintenance, so it seems a logical group to address a big backlog of static analysis defects. It certainly is a cost-effective solution, and it will make the backlog go away. Problem neatly solved, and management is happy.
Of course, successful development through offshore teams is a discipline all unto its own. We won't repeat any of that here. Instead, we discuss the static analysis flavors of these challenges below. These problems exist on any team but are amplified in an offshore environment.
- The backlog must be prioritized. The question is: who should own the prioritization? Who has a vested interest in quality that you trust to make that decision? The backlog will go away, but will it go away in a way that significantly improves quality?
- Teams are heterogeneous, and thus the quality of the prioritization and the quality of the fixes will be heterogeneous.
- Configuration and optimization almost always beat brute force. If a static source code analysis tool misunderstands a construct in your code, the result can be a cascade of false positives. Oftentimes a quick configuration change can wipe away hundreds to thousands of defects, saving time (now and in the future) and helping the analysis better understand the codebase.
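As a hypothetical illustration of hunting for those cascades, grouping a backlog export by checker quickly shows whether a handful of misunderstood constructs dominate the results. The export format, checker names and fields below are invented for the sketch; a real tool's CSV or JSON export would take their place.

```python
from collections import Counter

# Hypothetical backlog export: one (checker, file, function) tuple per finding.
# In practice this would be parsed from your static analysis tool's export.
findings = [
    ("NULL_RETURNS", "net.c", "parse_hdr"),
    ("NULL_RETURNS", "net.c", "parse_body"),
    ("NULL_RETURNS", "io.c", "read_frame"),
    ("RESOURCE_LEAK", "io.c", "open_log"),
    ("UNINIT_VAR", "cfg.c", "load_cfg"),
]

# Count findings per checker: a checker that dominates the backlog is a
# strong candidate for a configuration fix rather than one-by-one triage.
by_checker = Counter(checker for checker, _, _ in findings)
for checker, count in by_checker.most_common():
    print(f"{checker}: {count}")
```

In a real backlog, seeing one checker account for thousands of results is usually the signal to tune the analysis (or model the misunderstood construct) before assigning anything to the team.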
Any good manager knows that they need to provide the right support, infrastructure and structure to make their team successful. For example:
- Set good standards for diagnosis of issues. Everyone needs to understand what a critical, high, medium or low categorization means. Some teams prioritize by checker type (e.g. memory leaks, concurrency, etc.). Some prioritize by whether the bug is found on a critical path, a conditional path or an exception path. Some prioritize by what part of the code it is found in. Some use the built-in prioritization mechanisms of the static analysis tool. Most use a combination.
- Train the team. We've found that mentoring the team through real bugs is the best way to get everyone on the same page. Expectation setting is critical here, and these sessions help build consistency in results. In addition, finding the "ah-ha" moments gets the team energized to do good work.
- Set up a trust-but-verify model where results are audited. Use these audits as learning opportunities to steer the team in the right direction and fine-tune further. We've audited results and identified many opportunities to improve standards, train the team on specific topics and open discussion on how best to handle certain situations.
- Have an expert on hand who is aggressively seeking ways to tune the analysis. This is much more efficient than brute-forcing through the backlog, and it sets up the analysis to be much more accurate in the future. A simple configuration change can save weeks of effort. Paul Anderson, the VP of Engineering at Grammatech, recently gave a presentation at the ESC Show in San Jose suggesting that the more false positives there are, the disproportionately higher the rate of misdiagnoses.
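The diagnosis standards described above can be made concrete as a simple scoring rubric. The sketch below combines checker type and code-path criticality, two of the prioritization signals mentioned earlier; the specific categories, weights and bucket thresholds are assumptions for illustration, not a standard.

```python
# Illustrative triage rubric: score = checker weight + path weight.
# All weights and thresholds here are assumed values a team would tune.
CHECKER_WEIGHT = {"memory-leak": 3, "concurrency": 3, "null-deref": 2, "style": 0}
PATH_WEIGHT = {"critical": 3, "conditional": 2, "exception": 1}

def triage_score(checker: str, path: str) -> int:
    """Higher score = fix sooner. Unknown checkers or paths default to 1."""
    return CHECKER_WEIGHT.get(checker, 1) + PATH_WEIGHT.get(path, 1)

def bucket(score: int) -> str:
    """Map a numeric score onto the shared critical/high/medium/low vocabulary."""
    if score >= 5:
        return "critical"
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# A memory leak on a critical path outranks a style nit in an exception path.
print(bucket(triage_score("memory-leak", "critical")))  # critical
print(bucket(triage_score("style", "exception")))       # low
```

Writing the rubric down, even this crudely, gives a heterogeneous team a single vocabulary to audit against, which is exactly what the trust-but-verify audits need.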