Business Analysis Guidebook/System Acceptance, Test Planning and Strategy

System Acceptance, Test Planning and Strategy

The Testing Role and Responsibilities

The primary purpose of testing is to ensure that system development work has as few defects as possible before implementation. Developing a communication plan is essential to ensure defects are found, addressed and fixed. Communication tools for testing and implementation include the following: Test Strategy, Test Plan, UAT Checklist and Defect Tracking. Each document serves a specific purpose in communicating the testing needs to the project team.

A Defect Tracking tool is the primary means of communicating testing results. Most IT shops have an OTS (Off-The-Shelf) product that serves as a repository to record, report and provide metrics on defects. In the absence of a defect management product, and if a project is small enough, a spreadsheet can be used to capture defects and their resolution. Each of the tester’s tools will be discussed below.
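
As a minimal sketch, such a lightweight log could be kept as a CSV file. The example below is an illustration in Python, not a prescribed format: the column set and the log_defect helper are assumptions made for this example.

  import csv
  from datetime import date

  # Illustrative column set for a lightweight defect log; adjust to the project.
  FIELDS = ["id", "date_found", "found_by", "severity", "description",
            "assigned_to", "status", "resolution", "date_closed"]

  def log_defect(path, defect):
      """Append one defect record to the CSV log, writing a header if the file is new."""
      with open(path, "a", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=FIELDS)
          if f.tell() == 0:          # empty file: write the header row first
              writer.writeheader()
          writer.writerow(defect)

  log_defect("defects.csv", {
      "id": "D-001", "date_found": date.today().isoformat(),
      "found_by": "Tester", "severity": "High",
      "description": "Zip lookup returns the wrong city",
      "assigned_to": "Developer", "status": "Open",
      "resolution": "", "date_closed": "",
  })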

NOTE: Although testing, test planning and the development of test cases are usually performed by the Business Analyst, these functions will not earn experiential credit towards the IIBA CBAP or CCBA certifications. Review the certification requirements in the IIBA Handbooks at http://www.iiba.org.

See Also: Software Testing

The Test Strategy Document

The Test Strategy document determines what, when and how testing is performed; it identifies and documents everything that should be tested before the system can be considered complete. The Test Strategy defines the scope of the testing. It is a high-level plan which identifies all systems and processes that should be considered. It is the tester’s responsibility to write the Test Strategy, but it should be presented to the team for feedback, and to the team leads for input and final approval.

The Test Strategy identifies who is responsible for testing the different system components within the stages of development. It includes the date the product is expected to be received from the development team, ready for testing to begin. The Test Strategy identifies how long testing will take and what tools and methods will be used to confirm the expected results.

The Test Strategy is a communication plan as it relates to testing. It identifies the parties responsible for testing, approval, and User Acceptance Testing (UAT). Team members should be kept informed of the status of defect tracking and have confidence that defects are fixed and retested before release to production. Testers and other team members need assurance that the Test Strategy is sufficient and correct and that other systems are not negatively affected.

See Also: Testing Strategy Planning
See Also: Templates

Developing the Test Case Plan

The Test Plan is developed once the business rules and functional specifications are complete. It may be created by the Business Analyst, the Tester, or the Programmer, or collectively. The Test Plan identifies the change being tested. The Test Case Plan can be as simple as a chart of the components (below) in a Word or Excel document; a minimal sketch of one such case follows the list.

  • The Business Rule to be tested
  • The Condition to be tested
  • The Activity performed to run the test
  • Test Record data needed to perform the test
  • The Expected Results of the test
  • The Actual Results of the tests
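
One such test case, expressed in Python, might look like the sketch below; the data class and its sample values are illustrative assumptions, and in practice the same columns would simply be rows in the Word or Excel chart.

  from dataclasses import dataclass

  @dataclass
  class TestCase:
      """One row of the Test Case Plan; fields mirror the components above."""
      business_rule: str        # the Business Rule to be tested
      condition: str            # the Condition to be tested
      activity: str             # the Activity performed to run the test
      test_record: str          # the Test Record data needed for the test
      expected_result: str      # the Expected Results of the test
      actual_result: str = ""   # the Actual Results, filled in during testing

  case = TestCase(
      business_rule="A valid zip code populates city and state",
      condition="User enters a valid zip code and presses Enter",
      activity="Type '12208' in the zip field of the address form",
      test_record="Online address form with empty city/state fields",
      expected_result="City = Albany, State = NY",
  )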

The Tester has a unique role: they have technical knowledge of the system, but also the perspective and insights of a non-technical user. A good test plan tests the proper technical function of the business rule, but also exercises functions that are not in the normal path, because non-technical users use systems in ways that are unexpected and unplanned.

Priority Testing

All business rules should be tested, but sometimes there isn’t enough time to test each and every possible scenario for every business rule. Risk assessment and prioritization are part of the considerations in developing the test plan. Test cases and systems carrying the risks with the highest cost of failure should be given priority. In prioritizing the test cases, professional judgment should be used, feedback should be sought from the developers, and approval should be obtained from business unit decision-makers.
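
One common way to prioritize, sketched below as an assumption rather than a prescribed method, is to score each case as likelihood of failure multiplied by cost of failure and test in descending order of score. The sample cases and the 1-5 scales are illustrative.

  # Hypothetical risk scoring: priority = likelihood of failure x cost of failure.
  cases = [
      {"name": "Payment calculation",  "likelihood": 4, "cost": 5},
      {"name": "Address autocomplete", "likelihood": 3, "cost": 2},
      {"name": "Report footer text",   "likelihood": 2, "cost": 1},
  ]

  for case in cases:
      case["priority"] = case["likelihood"] * case["cost"]

  # Test the highest-risk cases first.
  for case in sorted(cases, key=lambda c: c["priority"], reverse=True):
      print(f'{case["priority"]:>2}  {case["name"]}')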

Regression Testing

The tester also needs to ensure that the rest of the system has not been changed or affected in unexpected ways. Regression testing is performed for this purpose. For example, when a single module is updated and should affect only a certain transaction or flow, the tester should verify that other, unchanged transactions that share that module still function as they did before the change was implemented.

Regression testing is a form of testing that ensures that processes that should not have changed have not. It is a critical part of system testing and should be part of the Test Strategy planning. Automated testing tools, such as HP QuickTest Professional (QTP), are used for this purpose. The automated tests are called scripts and are developed before any changes have been migrated to the test region. The scripts should be developed to run through a large variety of test cases. When they are developed, they record the expected output as a baseline of the system. When running regression scripts for integration testing, the tester can easily identify unexpected results in the regression test analysis and report the bugs found accordingly. Scripts will need to be re-recorded and/or re-baselined as a result of program changes.
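
Commercial tools such as QTP record scripts through the user interface; the baseline idea itself can be sketched in plain Python, as below. Here, transaction is a hypothetical stand-in for the system under test, the first run records the baseline, and later runs are compared against it.

  import json
  from pathlib import Path

  BASELINE = Path("baseline.json")

  def transaction(amount):
      """Hypothetical system under test: compute a fee for a transaction."""
      return round(amount * 0.03, 2)

  # Inputs the regression script runs through on every execution.
  INPUTS = [10.00, 99.99, 1500.00]

  results = {str(x): transaction(x) for x in INPUTS}
  if not BASELINE.exists():
      # First run: record the expected output as the baseline.
      BASELINE.write_text(json.dumps(results))
      print("Baseline recorded.")
  else:
      expected = json.loads(BASELINE.read_text())
      for key, actual in results.items():
          if expected[key] != actual:
              print(f"REGRESSION: input {key}: expected {expected[key]}, got {actual}")

After an intentional program change, the baseline file would be deleted and re-recorded, mirroring the re-baselining step described above.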

Positive / Negative Testing

This testing perspective looks at a scenario to ensure that a function does what it should (positive) and does not do what it should not (negative). For example, if we want to test a scenario where entering a zip code in an online form automatically populates the city and state fields after hitting Enter, our positive test case may say “Enter the zip code ‘12208’, with an expected result of Albany, NY.” A negative test case would say “If the user enters a valid zip code, the Error Message ‘Invalid Zip Code Entered’ will not display.”

Positive and negative test cases are important because, together, they ensure both that a single business rule works as expected when the process is performed correctly, and that the case behaves as expected when the transaction or process leaves the valid test path. The error paths need to perform the correct functions when the correct path is not followed.
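
The zip code scenario above can be expressed as positive and negative cases in a short unittest sketch. Here, lookup_zip is a hypothetical stand-in for the form’s behavior, not real application code.

  import unittest

  def lookup_zip(zip_code):
      """Hypothetical zip lookup; returns city/state or an error message."""
      zips = {"12208": ("Albany", "NY")}   # illustrative data
      if zip_code in zips:
          city, state = zips[zip_code]
          return {"city": city, "state": state, "error": None}
      return {"city": None, "state": None, "error": "Invalid Zip Code Entered"}

  class ZipLookupTests(unittest.TestCase):
      def test_positive_valid_zip_populates_city_and_state(self):
          result = lookup_zip("12208")
          self.assertEqual(result["city"], "Albany")
          self.assertEqual(result["state"], "NY")

      def test_negative_valid_zip_does_not_show_error(self):
          # The negative case from the text: the error message must NOT display.
          self.assertIsNone(lookup_zip("12208")["error"])

      def test_negative_invalid_zip_shows_error(self):
          self.assertEqual(lookup_zip("00000")["error"], "Invalid Zip Code Entered")

  if __name__ == "__main__":
      unittest.main()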

Volume / Load Testing

Other forms of testing include Volume and Load testing. Volume testing checks the system’s response to requests when the system holds large amounts of data. For example, if a user runs a complex query against a database with a significant quantity of records, the query may bog down the system and take longer to return results than a simple query would. Load testing tests the system’s capability to handle high-demand situations: when traffic is at its peak, the load test measures how the system handles the condition. For example, during a great online bargain, the large number of concurrent users trying to access the shopping cart at once puts demand on the system; that demand from many simultaneous users is what a Load test exercises. Load testing is also sometimes called ‘Stress’ testing.
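
Dedicated tools such as Apache JMeter are normally used for load testing, but the concurrency idea can be sketched in a few lines of Python. In this sketch, add_to_cart is a hypothetical stand-in for the request being tested, and the simulated delay and user count are assumptions.

  import time
  from concurrent.futures import ThreadPoolExecutor

  def add_to_cart(user_id):
      """Hypothetical shopping-cart request; returns its response time in seconds."""
      start = time.perf_counter()
      time.sleep(0.05)                      # simulated server work
      return time.perf_counter() - start

  CONCURRENT_USERS = 100                    # illustrative peak demand

  with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
      times = list(pool.map(add_to_cart, range(CONCURRENT_USERS)))

  print(f"requests: {len(times)}, "
        f"avg: {sum(times) / len(times):.3f}s, max: {max(times):.3f}s")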

Environments

Depending upon the IT shop and the processes set up, the tester may or may not be the gatekeeper for code migration coordination, but testers should be part of the communication chain that approves code migrations. Programmers develop and test in their own region, usually called the development region. The developer is responsible for unit testing the code in the development environment to ensure it works properly. As code is tested and approved, it moves through to the test region and eventually to the production region.

System/integration testing is usually performed in a separate test environment staged between development and production. The test region should be a copy of production plus any programming changes that have not yet been migrated to production. Some IT shops may have additional regions, such as Training, Volume, and even a Pre-Production region; each serves specific testing needs of the IT shop. For example, an entire duplicate of the production region may be kept for the sole use of running ad-hoc reports. Keeping a separate region for this purpose provides the needed information to users while keeping the load off of the production region.
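
The migration path can be pictured as a simple ordered pipeline. The sketch below is an assumption about how such a promotion step might be modeled; region names vary by IT shop, as the note below explains.

  # Illustrative promotion pipeline; region names vary by IT shop.
  PIPELINE = ["development", "test", "pre-production", "production"]

  def promote(change, current_region):
      """Move an approved change to the next region in the pipeline."""
      i = PIPELINE.index(current_region)
      if i == len(PIPELINE) - 1:
          raise ValueError(f"{change} is already in production")
      print(f"Migrating {change}: {current_region} -> {PIPELINE[i + 1]}")
      return PIPELINE[i + 1]

  region = "development"
  region = promote("CR-1042", region)   # development -> test
  region = promote("CR-1042", region)   # test -> pre-production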

Note: A discussion on regions is warranted, since IT shops may call their regions something different; they may be called 'regions', 'environments', or something else. Also, as mentioned above, there are different regions for the various stages of code development.

See Also: Development Environment

Defect Management and Tracking

Defect Management and tracking is an important communication tool used by the tester. Defect tracking tools allow the tester to capture and report on found defects to developers and the appropriate team members. Not only are tracking tools good for logging and tracking defects, but they also provide value to the project and users. Defect management tools can link defects to business rules, test cases and regression scripts. They also allow the user to create custom reports to identify metrics such as defect trends and time-to-production reports.

Tools exist for this purpose that provide tracking and management capabilities while also keeping a repository of all recorded defects. One widely used tool is HP Quality Center. The main purpose of defect management is visibility and accountability. Through the use of filters, users can easily see the defects they need to see without spending time sorting through non-pertinent information. Sometimes a business analyst may not have access to a defect management system and will then need to track and manage defects manually.

Whichever method is used, defect tracking is an important process for the tester. Defects are logged to record incorrect processes or other issues found. Subsequently, a defect report serves as a checklist of what needs to be fixed and re-tested. Defect tracking provides a good sense of the status of a project. Depending upon how far along a project is in the lifecycle, the number of defects found could be an indication of the actual level of completion of the project. As part of the risk assessment of each defect found, it may be determined that some defects are minor enough to wait to be fixed until after implementation to production so that the project is not delayed.
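
As a minimal sketch of the filtering and metrics described above, the CSV log from the defect tracking example earlier can be read back and summarized. The field names are the same illustrative assumptions used there.

  import csv
  from collections import Counter

  with open("defects.csv", newline="") as f:
      defects = list(csv.DictReader(f))

  # Filter: only the open defects a given developer needs to see.
  mine = [d for d in defects
          if d["status"] == "Open" and d["assigned_to"] == "Developer"]
  print(f"Open defects assigned to 'Developer': {len(mine)}")

  # Metric: defect counts by severity, a simple trend indicator.
  for severity, count in Counter(d["severity"] for d in defects).most_common():
      print(f"{severity}: {count}")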

User Acceptance Testing (UAT)

The UAT sessions are critical to a project’s success because they allow the project team to assess whether their objectives have been met and/or whether further changes are necessary prior to implementation. The involvement of users in creating Test Plans/Checklists for UAT is critical to its success. Users are the stakeholders with solid knowledge of what the system should do and of its business purposes. They offer valuable insight into identifying test records and testing scenarios, and into identifying issues with systems from the perspective of non-technical users. These users will use the system, possibly for the majority of their workday, and as such should be consulted on the following:

  • Use of shortcuts and/or adaptability to Windows shortcuts
  • Whether the functionality is visually ergonomic
  • Navigational ease with multiple input capabilities, such as keyboard vs. mouse input

When designing a Test Plan/Checklist for UAT, its designers need to consider not only the performance of any new functionality being added, but also how it may impact existing processing. It is advisable that one member of the team working on the Test Plan/Checklist take the lead in organizing and leading any meetings related to the documents, serving as a Test Coordinator. The Test Coordinator should also be responsible for consolidating the group’s work into one final draft. In addition, the Test Coordinator must keep the Project Manager and the project’s technical leads apprised of the UAT group’s progress and any issues that arise. The following testing types should be reviewed and incorporated as necessary when designing the test cases:

  • Positive Testing – Any new “processing” scenarios are confirmed to work as expected
  • Negative Testing – Any new “failure” or “error” scenarios are confirmed to work as expected
  • Regression Testing – Existing functionality is not adversely impacted by changes
  • Volume Testing – New/existing functionality can withstand a high volume of transaction processing

Much like the format of the Test Plan used by the tester, the Test Plan/Checklist for UAT is a list of itemized test scenarios written from the perspective of the user of the system. The document is not technical, and it should be written in everyday language. It is also written in a format where the successful outcome of each test is answered with a “Yes”. The UAT checklist has the following components, and a sketch of one such row follows the list:

  • Numbering System
  • System Being Tested
  • Description of Transaction or Test Scenario
  • Test Record
  • Yes / No
  • Test Result & Error Message (if result from previous column is “No”)
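
One checklist row might be sketched in Python as follows; the data class and its sample values are illustrative assumptions mirroring the components above.

  from dataclasses import dataclass

  @dataclass
  class UATChecklistItem:
      """One row of the UAT checklist; fields mirror the components above."""
      number: str          # Numbering System
      system: str          # System Being Tested
      description: str     # Description of Transaction or Test Scenario
      test_record: str     # Test Record
      passed: str = ""     # Yes / No, filled in during the session
      result: str = ""     # Test Result & Error Message if "No"

  item = UATChecklistItem(
      number="UAT-014",
      system="Online address form",
      description="Entering a valid zip code populates city and state",
      test_record="Zip code 12208",
  )
  item.passed = "Yes"      # a successful outcome is answered with a "Yes"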

The users’ time spent on UAT is time that is taken away from supporting their normal functions within their work unit, so it is crucial to use the time as efficiently as possible. With that in mind, the following should be taken into consideration:

  • Have all testers confirmed availability to test according to the testing schedule?
  • Do all testers clearly understand the objectives of the testing?
  • Have all desired testing scenarios been identified?
  • Have all necessary accounts/permissions for testers been established and confirmed to be working prior to testing sessions?
  • Do sufficient numbers of test records exist so that multiple testers can test at the same time?
  • Is a clear communication plan in place and shared with testers to report issues with testing/test results?
  • Is a defect management process in place to track and prioritize any defects found during UAT?

System Acceptance

System Acceptance is the final approval and sign-off of the system testing and UAT. For documentation purposes, the sign-off should be written (such as an email) rather than verbal, if possible. It is obtained from the person or people identified in the project documentation. Once the system has received sign-off and approval, it is ready for implementation. Scheduling the implementation date should not wait until system acceptance; rather, going live on that date should be contingent upon system acceptance.