Engineers have differing expectations of static source code analysis tools. Opinions of their usefulness vary widely but generally fall into one of two extreme camps.
On the one hand, some engineers think that program analysis couldn't possibly produce anything worthwhile. They believe all the results are useless and not worth the effort to examine. They may have had experience in the past with products such as Lint, which produced copious amounts of “useless” warnings. These “naysayer” engineers will rarely give static analysis a real chance and instead avoid using it, or worse yet, dismiss every result as a “false positive” or “never going to happen”.
Conversely, some engineers have the unrealistic expectation that static analysis will only produce valid, critical defects that must be fixed. These “naive” engineers may not have a good grounding in how static analysis technologies work and what their limitations are. They may equate static analysis tools with dynamic tools, which almost always report real bugs but lack the coverage and ease of use of static tools. These engineers will very quickly be disappointed. Static analysis finds potential problems based on sophisticated algorithms that operate on source code and build-level information, and it may not have access to all of the run-time assumptions. Highly complex or very custom code can also cause the tool to misfire unless it has been properly tuned. Naive engineers may also “fix” issues that don’t really need fixing, either on blind faith or simply to quiet the tool, and those attempted fixes can break other areas.
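To make the point about run-time assumptions concrete, here is a minimal C sketch (purely illustrative, not taken from any particular tool or codebase) of a report whose validity hinges on a convention the analyzer cannot see:

```c
#include <stdio.h>

struct record { int id; int value; };

static struct record table[] = { {0, 10}, {1, 20}, {2, 30} };

/* Returns NULL for unknown ids; by project convention, callers in this
 * module only ever pass ids that exist in the table. */
static struct record *lookup(int id)
{
    if (id >= 0 && id < 3)
        return &table[id];
    return NULL;
}

static int get_value(int id)
{
    struct record *r = lookup(id);
    /* A NULL-tracking checker may warn that r can be NULL here. Whether
     * that is a real defect or a false positive depends on a caller-side
     * convention that is not visible in this function. */
    return r->value;
}

int main(void)
{
    printf("%d\n", get_value(1));  /* prints 20 */
    return 0;
}
```

Neither camp is entirely right about a report like this: it is not noise, but it is also not a must-fix defect until someone decides whether the convention actually holds everywhere.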
Nearly every organization has these polar extremes and everything in between. Naysayers cause others to lose confidence in the tool. The naive engineers may lose credibility by overstating the tool’s capability. Somewhere in between lies the truth: static analysis produces good and useful, but not perfect, results. There are going to be good bugs you really want to fix. There are also going to be low-priority issues that you don’t really need to fix (similarly, very few organizations fix even a quarter of the bugs logged in their bug tracking system). Other issues will be plainly wrong (false positives), which signal that the tool couldn’t understand that part of the code well or that the analysis needs to be configured.
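By contrast, the “good bugs you really want to fix” are typically ones whose defect is visible from the code’s control flow alone. A small sketch (again purely illustrative, with hypothetical names) of the kind of finding most teams accept without debate:

```c
#include <stdio.h>

/* Illustrative only: a file-handle leak on an error path. A resource-leak
 * checker can report this from the function's control flow alone, with no
 * run-time knowledge required, which is why findings like this are rarely
 * disputed in triage. */
int copy_first_line(const char *path, char *out, int out_len)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;

    if (fgets(out, out_len, f) == NULL)
        return -1;          /* bug: 'f' is never closed on this path */

    fclose(f);
    return 0;
}
```

The fix here is a one-line fclose(f) before the early return; the low-priority and false-positive categories rarely have such a clean resolution.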
Engineers need to understand not just what the tool can do, but also what it can’t do. When an engineer is trying to interpret a result, it helps to understand what the tool did to produce that result. There are many strategies for getting a group of engineers to understand the range of value a static tool provides. Training, documentation, and process changes are necessary, but not every engineer learns by reading a manual or attending a training class. One of the most effective ways for engineers to learn is to do some joint triaging of results. For Code Integrity Solutions, we’ve run very successful sessions by simply going through bugs with a group of engineers in the room. We discuss a handpicked set of bugs, debate them, and resolve them together. These sessions (even as short as an hour) enable everyone to reach a general conclusion about a common set of results and form the basis of a standard by which to triage defects. Senior engineers shine in these types of environments because they not only quickly understand the tool’s capabilities but are also able to impart their standards to the less experienced team members.
We often hear these points made:
* This is not really a bug, but the code is poorly written and needs to be redone.
* Ah, I see why the tool is reporting this incorrectly. All I have to do is make a modification to the analysis to fix that.
* I wouldn’t have thought that was a bug. After taking a deeper look, I now see why it is.
* If we change this assumption slightly, then this would become a bug. For defensive coding reasons we should change it, especially since ownership of the code changes occasionally.
* I understand why this tool is not catching this type of problem. I need to use a different method for finding these types of problems.
Invariably the naysayer comes to understand the value the tool can provide to them and to the rest of the organization. Reviewing false positives also shows the naive engineers the tool’s limitations. Everyone involved sees the nuances of the tool and the level of effort required to evaluate results effectively. A few hours of time makes for weeks of productivity gains.