Wednesday, July 22, 2009

Do Your Software Quality Practices Have Teeth?

These days, no one has time for the extra things that should be done; we only have time for the things that absolutely must be done. We don't strive for perfection – we strive for barely acceptable. Working in business today is all about compromise: prioritizing our efforts on the things that are important AND urgent, with little to no room for anything else. Over time, urgent takes precedence over important. Quality clearly falls into the important but nonurgent category, so proactive measures, such as using static analysis to improve quality, are abandoned or delayed to address the need of the day.

Organizations see static analysis (or source code analysis) as a great way to:
  • find costly bugs and security vulnerabilities earlier in the development process
  • save developer time and money, and shorten delivery cycles
  • detect severe defects before they escape into production
  • increase code coverage along many more paths than dynamic testing and code review
  • improve software as the code is being written to get better working software faster
While organizations may rightly spend significant money to purchase the best static analysis tools, they frequently don't set up the right process and role structure for success. Ownership of the tool becomes a side responsibility for an already harried tools engineer, and the tool is simply made available for developers to use on a voluntary basis. Time and again, the engineers who least need static analysis end up using it the most – these are the engineers who use everything to their advantage to improve the quality of their code. While it's a generalization, the engineers who complain the most are often the ones who should be using it more. Usage then degrades rapidly over time, and the product becomes an expensive tool that is run only occasionally, delivering only a fraction of the benefit.

In most organizations, addressing the results from a static analysis tool is frankly not considered urgent. Resolving static analysis issues provides great value, but it is not putting out fires. When urgent AND important issues come up (which is the status quo), the use of static analysis falls by the wayside, or never even gets started. Why?
  • static analysis reports flaws in source code, and the effect on a customer isn't always apparent. For instance, a memory leak detected on a specific line of code doesn't carry the same weight as a bug report from your biggest customer.
  • static analysis defects are reported from a tool, not from customer support or a sales person trying to win a deal. Problems found by a tool in the development process don’t have nearly as much urgency.
  • static analysis tools aren’t always right. In fact, some tools have very high false positive rates where the tool is more wrong than it is right. This makes it very hard for many developers to trust the tool and feel it is a good use of their time.
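To make the bullets above concrete, here is a small, hypothetical C fragment (all function and file names are invented for illustration). The first function leaks memory on an error path that a test suite is unlikely to exercise but that a static analyzer can flag on an exact line; the second shows a pattern that often draws a false positive from weaker checkers.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical defect: returns a heap-allocated copy of the first
 * line of a file, or NULL on failure. On the fgets() failure path,
 * buf is never freed -- a leak that only occurs when the read
 * fails, a path most test suites never exercise but a static
 * analyzer can trace without running the code. */
char *read_first_line(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;

    char *buf = malloc(256);
    if (buf == NULL) {
        fclose(fp);
        return NULL;
    }

    if (fgets(buf, 256, fp) == NULL) {
        fclose(fp);
        return NULL; /* BUG: buf leaks on this error path */
    }

    fclose(fp);
    return buf;
}

/* Hypothetical false positive: some checkers warn of a possible
 * NULL dereference of p, unable to see that n > 0 guarantees the
 * loop body assigned it. The code is actually safe. */
int last_element(const int *a, int n)
{
    const int *p = NULL;
    for (int i = 0; i < n; i++)
        p = &a[i];
    return (n > 0) ? *p : 0;
}
```

The fix for the first defect is a single free(buf) before the error return, and a good tool points at the exact line – yet, as noted above, that report rarely carries the urgency of a customer escalation. The second fragment illustrates why developer trust erodes when a tool's false positive rate is high.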
What's a development organization to do? How does a software development group save time overall while improving quality with static analysis? As with any new process you want to introduce to a group of people, it has to have some teeth to it. If using static analysis is purely voluntary, then there's little chance you'll achieve the benefits you expected.

If you want your use of static analysis to have some teeth, you need both tangible incentives and consequences – or, if you prefer, the carrot and the stick. Because every organization varies in people's skill levels and motivations, having real incentives and consequences is critical to moving the whole organization in a consistent direction.

Some examples of incentives are:
  • educating the team on the benefits of static analysis. This is typically the job of the champion or managers, who should make sure that the benefits to the organization and to the individual are spelled out.
  • peer pressure is also a powerful way to motivate people to use the tool. If the top developers are bought into the process and are using the tool, then more people in the organization are likely to follow. Cultivating these influential leaders is essential.
  • public recognition drives many developers, oftentimes more than salary and benefits. Recognizing proper usage of the tool helps the rest of the organization see the benefits of static analysis and drives developers toward the right behaviors.
  • lower the barriers for usage. Make it simple to use. Automate everything as much as possible so that you lower the cost for a developer to use static analysis. Integrate with the workflow and minimize change to the process flow.
  • don't penalize people for using it. Some developers are afraid of exposing flaws in their code. After all, it's a tool that points out mistakes, sometimes really dumb ones. Set up usage models that let developers find all of their mistakes up front, giving them a chance to fix problems before everyone knows.
Some examples of consequences are:
  • make it a check-in criterion that all static analysis issues be inspected and addressed. Developers take a lot of pride in their work; nobody wants to be the developer who holds up the release.
  • managers typically set MBOs for their employees. Managers can set specific static analysis goals, such as meeting thresholds for fixing critical problems and minimizing regressions.
  • published metrics can create peer pressure. Breaking out reports by individual developer can promote healthy, productive competition. Negative reinforcement techniques, however, have to be coupled with the right culture, and metrics can also create the wrong behavior. For instance, publishing defect-fix counts could cause gaming – the creation of intentional defects in order to produce a high fix rate. Instead, focus metrics on the overall desired results, such as the number of times developers checked in code with unaddressed defects.
You either want the tool to be used or you don't. Without the proper incentives and consequences the tool will likely languish. There's a reason why people deride the phrase, "If you build it, they will come." Simply putting a tool out there for use doesn't mean it's going to be used. If the tool isn't used, then what are the consequences? Are there any ramifications for choosing not to use the tool?

There are too many things to do with too few hours to do them. Change is only worthwhile if you are going to follow through with it. Setting yourself up for success with static analysis is not hard, but because it falls into the important but nonurgent bucket, you need to set up the right structure so that it is addressed to the level where it will provide good value.
