We were recently working with a company that had a very large codebase. It took 2 hours to build, and another 4 to 24 hours to run a variety of tests, including unit tests and static analysis. Developers were doing full builds locally on their sad, underpowered machines. In this age of multi-core computing, developers were doing parallel computing the old-fashioned way: using the lengthy build and test time to catch up on email.
In the worst case, you find a problem that you need to fix. Guess what? Verifying whether the fix was sufficient creates yet another cycle. Not only that, if the build systems themselves have a problem, the turnaround on fixing those problems can fritter away critical days of the development cycle. Enlightened companies realize that this not only wastes a lot of time but, more importantly, adds latency to the information you need to move code toward the finish line. If you are going to fail, you should fail fast so you can fix it quickly.
Software is complex and brittle. Not surprisingly, the infrastructure to build, test, and manage the software can be similarly unreliable. There is good reason: software systems must support many operating systems, integrations, and devices, and development cycle times have been compressed by Agile methodologies.
After this company spent effort (and some hardware) to bring their build down to 15 minutes and all of their tests down to 5 hours, their developers could at least get same-day response times. One more iteration per day meant higher-quality features could be checked in sooner, making their developers more productive.
They created a rough matrix:
Build/Test time
===============
15 minutes = "immediate" feedback
30 minutes = basically "on demand"
1-3 hours = run multiple times per day
4-6 hours = potentially same-day response time
7-14 hours = overnight process (ready by next morning)
Their next goal, through further optimization, was to get build-and-test turnaround into the "multiple times per day" range.
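To make the matrix concrete, here is a minimal sketch in Python of how a team might map a combined build-and-test duration onto the feedback tiers above. This is our own illustration, not something the company built; the feedback_tier function name, the tier labels, and the exact thresholds are assumptions drawn from the rough matrix.

# Illustrative only: classify a build+test duration (in minutes) into
# feedback tiers based on the rough matrix above. Names and thresholds
# are our own assumptions, not from the original team.
def feedback_tier(total_minutes: float) -> str:
    if total_minutes <= 15:
        return "immediate feedback"
    if total_minutes <= 30:
        return "on demand"
    if total_minutes <= 3 * 60:
        return "multiple times per day"
    if total_minutes <= 6 * 60:
        return "same day"
    if total_minutes <= 14 * 60:
        return "overnight"
    return "longer than overnight"

# Worst case before: 2 h build + 24 h of tests
print(feedback_tier(2 * 60 + 24 * 60))   # longer than overnight
# After optimization: 15 min build + 5 h of tests
print(feedback_tier(15 + 5 * 60))        # same day

Plugging in the numbers from the story shows why the optimization mattered: the original worst case landed beyond even the overnight tier, while the improved pipeline reached the same-day tier described above.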