If you take a look at all the software written in the world, you'd find a vastly heterogeneous set of applications: medical devices, military/aerospace, banking, network telecommunications, automotive, gaming, accounting, wireless, and much more.
As for the importance of quality, security, and reliability, even a gaming application can demand good quality and high performance. While a game crashing may be a trivial event for an individual customer, the publisher may be faced with an expensive patch to roll out or a poor review that hurts sales. Good software quality and security are simply good business.
Static source code analysis is up to the task of helping software developers improve the quality and security of their code in the earliest part of the development process, where critical problems are simply easier and faster to fix. However, codebases are very different. They use different libraries and contain autogenerated code. They have different memory allocation, concurrency, and locking mechanisms. They adhere to different internal coding standards. They support many different platforms. They may come in very different flavors like C, C++, C#, and Objective-C, some of which bear very little resemblance to the others. Even within C, there are different implementations of the C standard. What's a static analyzer to do?
Static source code analysis does its best to interpret the code at the cost of potential false positives. Clearly, if the static analyzer cannot understand the codebase, it has less information with which to make good decisions. Static analysis vendors ship their best configuration, but it's one-size-fits-all. A medical device company analyzing 50,000 lines of code for a pacemaker gets the same settings as a networking company with an 18,000,000-line operating system. What's important for one company may be extremely different for another. For instance, a company that provides desktop accounting software may care little about memory leaks, since the application is frequently exited and restarted. Contrast that with network routing software that must be up 99.999% of the time and may run continuously for years. A crash on an embedded system in an airplane or automobile may carry a very different priority than a crash in a web application where automatic failover to a different server is built into the infrastructure.
In addition, the way an application is coded may differ. The application may use the standard memory allocators provided by the operating system, or it may use its own proprietary memory management techniques.
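As a sketch of why that matters, consider a hypothetical codebase that routes its allocations through an in-house pool allocator. The names and code below are invented for illustration; they aren't from any particular tool or codebase:

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical in-house allocator pair. In a real codebase these would
     * live in another translation unit and carve memory out of a custom
     * pool rather than simply delegating to malloc()/free(). */
    void *pool_alloc(size_t size) { return malloc(size); }
    void  pool_release(void *ptr) { free(ptr); }

    void handle_request(const char *payload)
    {
        char *buf = pool_alloc(strlen(payload) + 1);
        if (buf == NULL)
            return;
        strcpy(buf, payload);
        /* ... process buf ... */
        /* Missing pool_release(buf): a genuine leak. An analyzer that only
         * models malloc() and free() out of the box walks right past it. */
    }

    int main(void)
    {
        handle_request("hello");   /* leaks one buffer per call */
        return 0;
    }

Most analyzers have some mechanism for describing functions like these; until pool_alloc() and pool_release() are described as an allocator pair, every allocation routed through the pool is invisible to the leak checker. That description is exactly the kind of codebase-specific tuning this post is about.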
The analysis may also be more conservative or aggressive than desired. A medical device company that wants to find every potential bug has a very different tolerance for "noise" than a large gaming shop that has to deal with tens of thousands of bugs and only wants to hit the highest-priority ones.
One size cannot fit all, yet only one version of the software comes from the provider, with the same factory settings for everybody. That same version is used across all industries, and even in vendor presales situations. This is understandable: vendors must ship the tool with the best possible settings for as many different situations as possible.
But clearly it can't address all situations optimally. With some smart tuning, the analysis can be much smarter about the specific codebase it is looking at, making the results more relevant and accurate. In some cases, we've been able to tune away thousands of defects with a single configuration change, saving weeks of collective developer time. In other cases, tuning has resulted in hundreds of critical bugs being reported that wouldn't have surfaced with the factory settings. The payoff from good expert tuning is both immediate and long term.
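To make the payoff concrete, here is one hypothetical example of the kind of single change that can suppress a flood of false positives: telling the analyzer that an in-house fatal-error handler never returns. The function names are invented; the attribute is the standard GCC/Clang one, which most analyzers honor:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical in-house fatal-error handler. Without a noreturn
     * annotation (or an equivalent model in the analysis tool), the
     * analyzer assumes execution can continue past it. */
    __attribute__((noreturn))
    void app_fatal(const char *msg)
    {
        fprintf(stderr, "fatal: %s\n", msg);
        abort();
    }

    void process(int *data)
    {
        if (data == NULL)
            app_fatal("null data");
        /* With the annotation, the analyzer knows this line is reachable
         * only when data is non-NULL. Without it, every call site that
         * follows this pattern gets flagged as a possible NULL
         * dereference -- one false positive per call site, which across
         * a large codebase can add up to thousands of spurious reports. */
        *data = 42;
    }

    int main(void)
    {
        int value = 0;
        process(&value);
        return value;
    }

One line of annotation, and thousands of reports can disappear, leaving results the developers actually trust.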