Testing, 1, 2, 3 . . . 25

These tips go beyond the "test early and often" mantra and will improve your IT organization's testing capabilities - not to mention the quality of the software you release.

Next to requirements, testing is the most overlooked, most underfunded, most rushed, yet most critical aspect of the software development cycle. Here are 25 ways to boost the level of success.

Three years ago, Station Casinos came up with a great promotion to lure customers: $25 worth of free slot play on their electronic loyalty cards. It worked like a charm too. Gamblers flocked to the casino in droves.

That should have been a good thing.

But one Friday night, shortly after the promotion began, when players inserted their cards into the slot machines, nothing happened. The sheer number of people trying to access the machines - at the same time the accounting department was running a number of financial applications - caused the servers that stored all the promotional information to freeze. Irate, players threw their loyalty cards on the floor and raised a ruckus.

That was a bad thing.

The source of the problem? Testing. Marshall Andrew, Station Casinos' VP of information technology and CIO, says Station Casinos never anticipated such an overwhelming response to the promotion. Consequently, IT did not test the system for such large volumes of activity, and certainly not while other programs were running. Station lost the revenue it would have made that Friday, alienated customers and had to run another campaign to apologize; the casino invited some customers to return another weekend for $50 worth of free slots.

The moral: Testing is essential to developing high-quality software and to ensuring smooth business operations. It can't be given short shrift; the consequences are too dire. Businesses - and, in some cases, lives - are at risk when a company fails to adequately and effectively test software for bugs and performance issues, or to determine whether the software meets business requirements or end users' needs (see "The High Cost of Flawed Testing", page 66).

"The important thing when you roll out a system is to make sure it works," says Andrew, who has made significant changes to his testing organization (known as quality assurance, or QA) since then. First, he changed the testing process itself. Previously, developers had a great deal of freedom to change code while it was being tested to keep the project moving. Now, there are tight controls on the developers' access to test code. To keep everyone honest, Andrew had the QA specialists begin reporting to the business analyst group rather than to the development group, whose work QA evaluates. Next, he hired more QA specialists - with business training - and involved them in the development process earlier, when business analysts are creating requirements documents, so that they can then develop test scripts based on business specifications right from the beginning.

The following list of best practices for testing software and running your testing organization was gleaned from interviews with companies that have rigorous testing needs and standards.

    1. Respect your testers. In many companies, testing is an entry-level job. As a result, testing isn't done well. Instead of hiring people off the turnip truck, recruit candidates who are detail-oriented, methodical and patient. Look for people who know how to code. Your developers will respect them more, and they can code some of their own testing tools. "If the development organization and the QA organization don't respect each other, we won't be able to achieve our high-level quality goals," says eBay's VP in charge of QA, David Pride.

    2. Collocate your testers and developers. Putting developers and testers together goes a long way toward improving communication between two groups that often lock horns (after all, testers are paid to find fault with developers' work). Physical proximity "facilitates the nuances of testing" that are best communicated through personal interaction rather than by e-mail or an application development workflow tool, says Pride.

    3. Set up an independent reporting structure. Testing should not report to any group that's evaluated on meeting deadlines or keeping costs down for a project, according to John Novak, senior VP of hotel chain La Quinta. Having testers report to the development group is the worst choice of all, Novak says. If developers are behind or having trouble with code, they will be tempted to keep testers out of the loop. Instead, Novak has testers report directly to him. Andrew has testing report into his business analyst group as a way to foster communication and to get testers involved in the development life cycle early.

    4. Dedicate testers to specific systems. At Barnes & Noble, one group of testers focuses on store systems, while others tackle financial and warehouse systems. Barnes & Noble CIO Chris Troia says focusing testers on one set of systems deepens their understanding of how those systems are supposed to work and gives them the expertise to identify problems that might not show up in a formal test document. eBay takes the same approach, but goes one step further. The company has three distinct testing groups: one for site functionality, one for payments and one for data warehousing applications.

    5. Give them business training. Station Casinos' Andrew makes members of his testing department work the front desk, the casino floor and different corporate departments so they can learn the lingo and better understand the systems they're testing. (Most of his 125-person IT staff had never placed a bet on a sporting event at a casino prior to joining the company.)

    6. Allow business users to test too. Most testing involves banging on systems and fiddling with code - technical stuff - which can tempt IT to leave business users out of the loop. Bad mistake. At La Quinta, "the testers are always coming out of the business community", says Novak, to ensure that the systems IT is developing meet their specs. For some applications, especially those that run in hospitals, getting end users to test applications is a matter of life and death. "Technology people can only go so far," says Patricia Skarulis, vice president of information systems and CIO of Memorial Sloan-Kettering Cancer Center. "We need to have users involved."

    7. Involve network operations. Nate Hayward, vice president and director of quality management with HomeBanc Mortgage, says that during testing, his company's network operations group uses a software tool to monitor servers for performance issues that could originate from the way hardware or software is configured. Involving the network operations experts in testing also gives them the opportunity to rehearse a deployment before a system goes into production, ensuring that the actual implementation will proceed smoothly.

    8. Build a lab that replicates your business environment. Four years ago, Station Casinos built a costly test lab that looks like a mini casino with slot machines, point-of-sale terminals and Web-based kiosks that simulate the computing environments at all 13 of Station Casinos' properties. Ninety percent of the applications the company runs, including wireless apps, are duplicated in the test lab. For the other 10 percent of applications, which are too big or complex to create an exact testing replica, Andrew comes up with a scaled-down subset of the app to predict how it will run when it's fully rolled out. Or he gets help. With Station Casinos' last system roll-out, he used Microsoft's test labs to run simulation models.

    9. Develop tests during the requirements phase. Companies traditionally have waited to do testing until requirements have been established and coding has begun - or finished. A growing school of thought says that testing can still be done effectively even if the requirements have not been developed fully. Fans of "agile programming" (see "Fixing the Requirements Mess", page 54) believe that testing should be done continually from the beginning of the project until the end.
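To make the idea concrete, here is a minimal sketch of the test-first style those agile proponents describe, using a hypothetical hotel-rate requirement (the requirement number, function name and discount rule are all invented for illustration):

```python
# Hypothetical "Requirement 4.2": a stay of 7 or more nights earns a
# 10 percent discount on the nightly rate. The test below can be written
# during the requirements phase, before any production code exists.

def nightly_rate(base_rate, nights):
    # Implementation written afterward, to make the test pass.
    discount = 0.10 if nights >= 7 else 0.0
    return round(base_rate * (1 - discount), 2)

def test_weekly_discount():
    assert nightly_rate(100.0, 7) == 90.0   # boundary: exactly 7 nights
    assert nightly_rate(100.0, 6) == 100.0  # just below the boundary

test_weekly_discount()
print("requirement 4.2 verified")
```

The test comes straight from the requirements document and fails until developers write code that satisfies it, which is what lets testing start before coding does.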

    10. Test the old with the new. eBay uses a statistical analysis tool it built in-house to compare defects discovered by testers to the code that was tested during a particular testing cycle. The goal is to make sure that previously tested pieces of software still work properly when new features are added. Pride says the statistical analysis tool pinpoints where testers need to add test cases in the current project and also helps determine the overall effectiveness of current regression tests for forthcoming software projects. eBay needs to continually refine the tests because some new projects may contain the same functionality as previous projects. The better those tests are, the better future projects will be.
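eBay's tool is proprietary, but the core comparison it performs can be sketched simply: match each new defect against the code areas the regression suite already covers, and rank the areas where defects slipped through (the module names and defect IDs below are invented):

```python
from collections import Counter

def regression_gaps(defects, previously_tested):
    """Flag defects found in code areas the regression suite already covers.

    defects: list of (defect_id, module) pairs logged this testing cycle.
    previously_tested: modules the existing regression suite exercises.
    A defect surfacing in a previously tested module means the suite
    missed it, so new test cases belong there first.
    """
    gaps = Counter(module for _, module in defects
                   if module in previously_tested)
    return gaps.most_common()   # worst-covered modules first

cycle_defects = [("D-101", "checkout"), ("D-102", "search"),
                 ("D-103", "checkout"), ("D-104", "listings")]
covered = {"checkout", "search"}
print(regression_gaps(cycle_defects, covered))
# [('checkout', 2), ('search', 1)] -- add regression cases for checkout first
```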

    11. Apply equivalence class partitioning. This is a mathematical technique that testers can use to identify additional functional requirements that business analysts and users might have overlooked or not articulated, says Magdy Hanna, chairman and CEO of the International Institute for Software Testing. He says equivalence class partitioning gives testers a clear picture of the number of test cases they need to run to adequately exercise all of a system's functional requirements. Pride says equivalence class partitioning is one way his group can determine all the ways in which eBay's 157 million users might use its online auction platform.
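The technique itself is straightforward: divide an input's domain into classes whose members all trigger the same behavior, then test one representative per class plus the boundaries. A minimal sketch, using a hypothetical age-field rule:

```python
def accepts_age(age):
    """Hypothetical system under test: valid ages run from 0 to 120 inclusive."""
    return 0 <= age <= 120

# Partition the input domain into equivalence classes: every value inside a
# class exercises the same behavior, so one representative per class (plus
# the boundary values) covers the requirement without testing every input.
partitions = {
    "invalid_below": [-1],       # any negative value behaves the same
    "valid": [0, 35, 120],       # both boundaries plus one interior value
    "invalid_above": [121],      # any value above 120 behaves the same
}

for value in partitions["valid"]:
    assert accepts_age(value)
for value in partitions["invalid_below"] + partitions["invalid_above"]:
    assert not accepts_age(value)
print("five test cases cover the whole input domain")
```

Three classes yield five test cases, and the exercise of naming the invalid classes often surfaces requirements nobody wrote down (what should happen for negative ages?).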

    12. Involve testers early in the development cycle. HomeBanc Mortgage's Hayward says his quality assurance workers meet with business analysts and business users before developers even start writing code - while requirements are being written - to determine what requirements they ought to test and to develop test cases for each requirement.

    13. Establish quality checkpoints or milestones throughout the entire development cycle. Pride says these milestones are one way eBay fosters a culture of quality among its development and testing groups. Before coding begins, eBay's first milestone occurs when the QA and product development groups review requirements. The second milestone occurs before development ends, when eBay's product development and project management groups review the QA team's test plan to make sure it's adequate. Just before QA begins testing, the third checkpoint occurs as the development group shows QA that its code meets all functional and business requirements, that developers have tested the code in their environment and that it's now ready for QA to test.

    14. Write a tech guide. "A lot of the problems that come up when you're testing software are a result of people not knowing the right way to do certain things," says Mike Fields, State Farm Insurance's technology lead for claims. To crack down on bugs that can be prevented, IT workers inside State Farm's Claims department developed a technology guide filled with practical advice, templates, documentation and how-to information on the right way to go about certain design, development and testing activities. If anyone in Claims IT has a question about the best way to approach a specific task, they can refer to the tech guide.

    15. Centralize your test groups. At The Hartford Property & Casualty Company, employees who do functional testing (that is, those who test the functionality of systems and applications, as opposed to those who do bench-testing, usability testing or integration testing) are centralized in one group. Functional testers are deployed directly to a project and then they return to the central organization when their work on a particular project is complete, according to John Lamb, The Hartford Property & Casualty's assistant vice president of technology infrastructure. Centralizing testers into one group - as opposed to staffing testers by application area - ensures that testers share best practices and lessons learned when they come off a project. If the group wasn't centralized, says Lamb, each of the testers would have their own methodologies, and communicating lessons learned from projects would be much more difficult.

    16. Raise testers' awareness of their value. State Farm created a poster and Web site that highlighted the number of defects that testers and developers found early on in the development process and the amount of money (over a million dollars) they were saving by finding those defects sooner rather than later. Highlighting the importance of testers' work and its impact on the company improves morale and makes testers approach their jobs with even more diligence.

    17. Don't forget about negative testing. So-called negative testing ensures that the proper error messages show up on screen when a user, say, fails to fill out required fields on a form or types in data that the application can't understand.
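A minimal sketch of what such a negative test looks like in practice (the form fields, error messages and validator are invented for illustration):

```python
def validate_form(fields):
    """Hypothetical signup-form validator: returns a list of error messages."""
    errors = []
    if not fields.get("email"):
        errors.append("Email is required.")
    age = fields.get("age", "")
    if age and not str(age).isdigit():
        errors.append("Age must be a whole number.")
    return errors

# Negative tests deliberately feed bad input and assert that the *right*
# error message comes back - not merely that the application doesn't crash.
assert validate_form({"email": "", "age": "30"}) == ["Email is required."]
assert validate_form({"email": "a@b.com", "age": "abc"}) == ["Age must be a whole number."]
assert validate_form({"email": "a@b.com", "age": "30"}) == []  # the happy path still passes
print("negative tests passed")
```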

    18. Tell programmers to chill out. More than one source CIO interviewed for this story talked about the friction that exists between programmers and testers, and how sensitive programmers can be when it comes time for quality assurance specialists, who are evaluated on their ability to find bugs, to put developers' work to the test. When testers find fault with their applications, programmers tend to get their knickers in a twist. You can't blame them: After all, they're worried that the problems testers find with their work will reflect poorly on them and that they'll be penalized for making mistakes. While you want hardworking programmers who take pride in their work in your IT organization, you have to make them understand that the tester's role is to find fault with their work and that testers are just doing their jobs when they do so. You also have to assure them that if they are truly diligent developers who make few mistakes and learn from the mistakes they do make, you won't hold it against them in performance reviews.

    19. Cross-train developers and testers in each other's roles. Cross-training is an excellent way to foster understanding between testers and developers and thus improve relations between the two groups. The Hartford Property & Casualty Company's Lamb says it also leads to better quality applications because each group approaches their task with a new and broader understanding of the larger software development life cycle.

    20. Test in a locked-down environment. Don't let developers into your testing environment because they'll inevitably want to modify code they've written to improve it. If developers meddle with code while QA specialists are trying to test it, keeping track of what code has changed and what's been adequately tested becomes impossible for QA. This practice is also known as code control.

    21. Analyze the impact of changes to code/make sure testers and developers are in constant communication. Test managers must speak with development managers on a regular basis to find out what changes developers have made to code after it's been tested so testers know to re-test that code, since changes can impact the entire application, says the International Institute for Software Testing's Hanna. "Analyzing the impact of changes can greatly improve the reliability of software," he says.

    22. Ensure that test cases are run against any code that developers have changed or added. This is called code coverage. Code coverage tools track the number of new or modified lines of code that have actually been tested and, in this manner, give you an idea of the effectiveness of your testing. Code coverage is also a way to ensure that you're actually testing the changes you made, since modifications often lead to bugs. Before State Farm began doing code coverage, its unit test cases covered approximately 34 percent of all changes to code. Since the insurance company started doing code coverage, its test cases cover between 70 and 90 percent of all changes.
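Commercial and open-source coverage tools do this line tracking automatically, but the underlying mechanism can be sketched in a few lines of Python using the standard trace hook (the function being tested is invented for illustration):

```python
import sys

def changed_function(x):
    if x > 0:
        return "positive"
    return "non-positive"   # this branch is covered only if a test uses x <= 0

def run_with_coverage(func, *args):
    """Minimal line-coverage sketch: record which lines of `func` execute."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

only_positive = run_with_coverage(changed_function, 5)
both = only_positive | run_with_coverage(changed_function, -5)
# Testing only x=5 misses the x <= 0 branch; adding x=-5 covers it.
print(len(both) > len(only_positive))  # True
```

Real coverage tools apply the same idea across a whole codebase and then report which changed lines no test case ever touched.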

    23. Scan your source code for known problems. State Farm's Mike Fields says vendors sell tools that will scan source code for known problems and generate reports based on that analysis. For instance, the tools will detect and report that doing X always leads to a memory leak or assigning a variable in a particular manner is not an industry best practice.

    Although tools are widely available on the market, State Farm developed its own tool for scanning source code because the ones on the market weren't adequate for State Farm's needs, says Fields.
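As a sketch of how such a scanner works, the short program below uses Python's standard ast module to flag one well-known problem pattern, mutable default arguments (the sample code it scans, and the rule text, are invented; commercial scanners check hundreds of rules this way):

```python
import ast

RULE = "mutable default argument (state shared across calls; a known bug source)"

def scan_source(source):
    """Sketch of a static scanner: flag functions with list/dict/set defaults."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append((node.lineno, node.name, RULE))
    return findings

code_under_review = """
def add_claim(claim, claims=[]):
    claims.append(claim)
    return claims

def record_claim(claim, claims=None):
    return [claim] if claims is None else claims + [claim]
"""

findings = scan_source(code_under_review)
for lineno, name, rule in findings:
    print(f"line {lineno}: {name}: {rule}")
# flags add_claim only; record_claim uses the safe None idiom
```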

    24. Identify patterns. State Farm uses a Pareto tool that looks for patterns in data about defects. The tool helps the company identify root causes for defects in software, such as not getting accurate enough requirements or not doing good documentation.
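The Pareto idea is that a few root causes usually account for most defects, so fix those first. A minimal sketch of the analysis (the defect log and root-cause labels are invented):

```python
from collections import Counter

# Hypothetical defect log: each entry is the root cause a tester assigned
# to one defect after analysis.
defect_causes = [
    "vague requirements", "vague requirements", "vague requirements",
    "vague requirements", "missing documentation", "missing documentation",
    "coding error", "environment mismatch",
]

def pareto(causes, threshold=0.8):
    """Return the smallest set of root causes that together account for
    `threshold` of all defects - the classic 80/20 view of where to focus."""
    counts = Counter(causes)
    total = sum(counts.values())
    running, vital_few = 0, []
    for cause, n in counts.most_common():
        vital_few.append((cause, n))
        running += n
        if running / total >= threshold:
            break
    return vital_few

result = pareto(defect_causes)
print(result)
# [('vague requirements', 4), ('missing documentation', 2), ('coding error', 1)]
```

Here two of the four causes account for three-quarters of the defects, which is exactly the kind of pattern that points back at the requirements process rather than at individual programmers.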

    25. Develop a Plan B. When it comes to testing, you can never be too careful. Since there will be times when applications fail in spite of your best efforts to test and re-test, it's always a good idea to have a contingency plan in place in the event a system doesn't work the way it's supposed to when it goes into production. You need to know what you're going to do if a worst-case scenario takes place. Station Casinos' Andrew spells out in his contingency plans how the company can back the system in question out of production and return to the way it did things before the system was put in place, as well as how Station Casinos will handle whatever impact the failure has on customers.

SIDEBAR: The High Cost of Flawed Testing

A brief, sad but instructive history of futility and failure

• Bugs in connections between Hewlett-Packard's legacy order-entry system and SAP systems caused a backlog of customer orders for servers beginning in June 2004. The computer problems and resulting backlog cost the company $US40 million in lost revenue.

• A failure to test for specific conditions contributed to the August 2003 blackout that affected much of the north-eastern United States and parts of Canada.

• Insufficient testing was one of the causes of Nike's failed i2 demand forecasting software implementation in June 2000, which reportedly cost the company more than $100 million in lost sales.

• eBay's 22-hour outage in 1999 prompted the online auctioneer to re-engineer its technology organization, including its systems architecture and its development and testing approaches.

• Glitches in the software controlling London's emergency response system resulted in ambulances being dispatched to the wrong locations and citizens not getting proper medical care in a timely manner in 1992.

• During the 1980s, the user interface of a computerized radiation therapy machine, the Therac-25, was not adequately tested, and undetected bugs in the device's radiation administration engine made it possible for technicians to program the wrong doses of radiation. As a result, several patients died or sustained serious injury from overexposure.

Stories by Meridith Levinson
