Business Analysis Guidebook/System Acceptance, Test Planning and Strategy

Test Strategy, Test Planning and System Acceptance

Testing Role and Responsibilities

The primary purpose of testing is to ensure that the finished product has as few defects as possible before implementation. Developing a comprehensive test approach is essential to ensure that defects are found, addressed, and fixed. Tools related to developing and executing the test approach include the Test Strategy, Test Plan, UAT Checklist, Testing Results Tracker, and Defect Management software. Each serves a specific purpose in providing thorough, carefully documented test coverage for the project team. These tools are described in more detail below.

NOTE: Although testing, test planning and the development of test cases can be performed by the Business Analyst, these functions will not earn experiential credit towards the IIBA CBAP or CCBA certifications. Review the certification requirements in the IIBA Handbooks at http://www.iiba.org.

See Also: Software Engineering/Testing

Test Strategy

Project testing is typically much more complex than non-project testing because multiple systems/components are often involved, so it is critical to document all test planning activities. A Test Strategy document can be used to outline the test preparation tasks needed to successfully complete project testing prior to implementation. All project personnel must be aware of the location and contents of this document, and once it has been drafted they must review it to ensure they can commit to any assigned responsibilities. Work on the Test Strategy document should begin as soon as possible after the business rules for a project have been defined. The document identifies the systems/components to be tested and the associated test preparation tasks. As the project progresses, the Test Strategy should be reviewed and updated as necessary. Below is a general summary of common Test Strategy fields.

  • Project Information: This section defines the testing scope and provides basic project information (including but not limited to personnel, general areas of testing, project documentation).
  • Test Approach: This section provides the location of the Testing Results Tracker and the User Acceptance Testing (UAT) plans for the project.
  • Defect Management and Performance Measures: This section documents project-specific procedures for tracking defects. It also tracks testing performance measures and their impact on a testing schedule.
  • Test Strategy Signoff: This section is where the IT Lead indicates their agreement with the testing approach for the project.
  • Glossary: This section is used to document project-specific terminology.

See Also: Test Strategy Template

Test Plan

A manual test plan is commonly used for Unit, System, and Integration testing.

Unit Testing

Unit testing is done to verify the functioning of a unit of code and to ensure that adequate statement, condition, decision, and path coverage is achieved, as expected by the Developer.
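
To illustrate, below is a minimal sketch in Python of unit tests for a hypothetical late_fee function (the function, its rules, and its dollar amounts are invented for this example). Each test exercises a different branch of the code so that statement and decision coverage are achieved:

  import unittest

  def late_fee(days_overdue: int) -> float:
      """Hypothetical unit under test: $0.50 per day overdue, capped at $10.00."""
      if days_overdue <= 0:
          return 0.0
      return min(days_overdue * 0.50, 10.00)

  class LateFeeUnitTests(unittest.TestCase):
      def test_not_overdue(self):
          # Exercises the days_overdue <= 0 branch.
          self.assertEqual(late_fee(0), 0.0)

      def test_typical_fee(self):
          # Exercises the uncapped calculation path.
          self.assertEqual(late_fee(4), 2.00)

      def test_fee_is_capped(self):
          # Exercises the cap condition.
          self.assertEqual(late_fee(30), 10.00)

  if __name__ == "__main__":
      unittest.main()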

System Testing

System testing is done to verify proper configuration, implementation and execution of all the components of a specific application or system against all documented requirements (business, system, software, hardware).

Integration Testing

Integration testing is done to verify all interfaces or data exchange points are functioning as designed.

The Test Plan is developed once the business rules/requirements and functional specifications are complete. It may be created by the Business Analyst, Tester, Developer, or collectively, and it identifies the change being tested. It can be as simple as a table of the fields described below, maintained in a Word or Excel document. The Test Plan should be developed to test the business requirements that have been identified: it should take every business requirement and define the different ways that it can be verified. All business requirements should be tested, but sometimes there is not enough time to test every possible scenario, so risk assessment and prioritization are part of developing the test plan. The cases and systems with the highest cost of failure should be given testing priority, as determined by the project team. Below is a general summary of common Test Plan fields; a brief sketch of a test plan structure follows the list.

  • Description: Contains a reference to both the business rule and its related functional specification, so that the Tester can verify not only that the transaction is processing correctly, but also that a database update, for example, is correct. The business rule/functional specification can be spelled out in full, or the entry can simply list the business rule/functional specification number, with the Tester consulting the appropriate document for the full text.
  • Condition: Designed to validate both the business rules AND the functional specifications. The test cases could include both a “positive” test and a “regression” test. Other tests besides “positive” and “regression” tests could be included if applicable.
  • Test Activity: Instructs the Tester what activity they need to perform to validate the business rule and functional specification.
  • Expected Results: The correct outcome of the Test Activity.
  • Actual Results: The Tester will record “OK” to indicate the test passed, or will enter comments describing any errors or unexpected responses to the test.
  • Test Data: The test records to be used to run the Test Activity. If there are multiple Testers, try to avoid all Testers using the same set of records.
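
To make the table structure concrete, below is a minimal sketch in Python that writes a one-row Test Plan to a CSV file that can be opened in Excel. The business rule number, test activity, and test data shown are hypothetical placeholders:

  import csv

  # Field names mirror the list above; the row contents are hypothetical.
  TEST_PLAN_FIELDS = ["Description", "Condition", "Test Activity",
                      "Expected Results", "Actual Results", "Test Data"]

  rows = [{
      "Description": "BR-12 / FS-12.3: duplicate claims are rejected",
      "Condition": "Positive test",
      "Test Activity": "Submit a claim that duplicates an existing claim",
      "Expected Results": "Claim is rejected with a duplicate-claim message",
      "Actual Results": "",  # completed by the Tester: "OK" or error comments
      "Test Data": "Test records 101-105",
  }]

  with open("test_plan.csv", "w", newline="") as f:
      writer = csv.DictWriter(f, fieldnames=TEST_PLAN_FIELDS)
      writer.writeheader()
      writer.writerows(rows)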

The purpose of the Test Plan is to make sure that all of the business requirements have been defined to a level at which they can be tested. If a way to test a requirement cannot be identified, more clarification of the business requirement is necessary. There may be multiple scenarios for each requirement. When designing scenarios, it is important to consider the following common types of conditions to ensure a comprehensive set of tests has been performed (a small code sketch of boundary and date checks follows the lists):

INPUTS

  • Screens – design/layout/navigation
  • Edits – boundary conditions (high-low range values)
  • Data Integrity checks (alpha-numeric data/file content)
  • Error Checking - positive and negative tests/Message content
  • Dates – leap year/if date checks in program – day before, exact day, day after

PROCESSING

  • Transaction security
  • Test all possible paths of flow, including unchanged code
  • Transaction testing – negative exceptions and positive conditions
  • Fees, Calculations, $ amounts
  • Data Updates
  • Audit Trails
  • Backup/Restore procedures
  • Run Times/Response Times/Batch Windows
  • New JCL/Changes to JCL
  • Any Batch Dependencies on changes
  • Batch Commit/Restart capabilities
  • Cancel/back out transactions
  • Shared/Reused Code and Dependencies – Test other processes that use the same code
  • Differences in production environment from test

OUTPUTS

  • Reports, messages, documents
  • Files sent external, FTP, database loads (DTS)
  • Verify that all data is in sync
  • Procedures/User Manuals updated
  • Program/system documentation updated
  • Logs
  • Document Prints
  • Response Times
  • Report/letter layouts and content
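
Below is the small code sketch referenced above: minimal Python tests for two of the INPUTS conditions, boundary values and leap-year dates. The validate_age edit and its 0-120 range are assumptions invented for the example:

  from datetime import date
  import unittest

  def validate_age(age: int) -> bool:
      """Hypothetical field edit: ages 0 through 120 are valid."""
      return 0 <= age <= 120

  class InputEditTests(unittest.TestCase):
      def test_boundary_values(self):
          # High-low range values: just below, on, and just above each boundary.
          self.assertFalse(validate_age(-1))
          self.assertTrue(validate_age(0))
          self.assertTrue(validate_age(120))
          self.assertFalse(validate_age(121))

      def test_leap_year_dates(self):
          # February 29 exists in 2024 (a leap year) but not in 2023.
          self.assertEqual(date(2024, 2, 29).day, 29)
          with self.assertRaises(ValueError):
              date(2023, 2, 29)

  if __name__ == "__main__":
      unittest.main()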

The Tester has a unique role in that they have technical knowledge of the system, but also have the perspective and insights of a non-technical user. A good test plan will test the proper technical function of the business rule, but also perform functions that are not in the normal path. Non-technical users use systems in ways that are unexpected and unplanned.

See Also:
Test Plan
Test Plan Template

Other types of testing that must be considered are:

Regression Testing

Regression testing is done to validate that previously tested functionality still performs as expected when new or changed functionality is introduced into the system under test. For example, when a single module is updated and should affect only a certain transaction or flow, the Tester should also test that other transactions sharing that module still function as they did before the change was implemented.

Automated testing tools, such as HP Unified Functional Testing (UFT), are commonly used for regression testing. The automated tests are called scripts and are developed before any changes have been migrated to the test environment. The scripts should be developed to run through a large variety of test cases. When they are developed, they record the expected output as a baseline of the system. When running regression scripts, the Tester can easily identify unexpected results in the regression test analysis and report the defects found accordingly. Scripts will often need to be re-recorded and/or re-baselined as a result of program changes.
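
Below is a minimal sketch in Python of the record-then-compare idea behind regression scripts. The run_transaction function stands in for the system under test, and the JSON file serves as the recorded baseline; both are assumptions for illustration, not how any particular tool such as UFT works internally:

  import json
  import os

  def run_transaction(case: str) -> str:
      """Stand-in for the system under test; returns the output for a test case."""
      return {"login": "welcome screen", "logout": "goodbye screen"}[case]

  BASELINE_FILE = "regression_baseline.json"
  CASES = ["login", "logout"]

  if not os.path.exists(BASELINE_FILE):
      # First run: record current outputs as the expected baseline.
      with open(BASELINE_FILE, "w") as f:
          json.dump({case: run_transaction(case) for case in CASES}, f)
  else:
      # Later runs: flag any output that differs from the recorded baseline.
      with open(BASELINE_FILE) as f:
          baseline = json.load(f)
      for case in CASES:
          actual = run_transaction(case)
          if actual != baseline[case]:
              print(f"REGRESSION in {case!r}: expected {baseline[case]!r}, got {actual!r}")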

See Also: Regression Testing

User Acceptance Testing (UAT)

UAT sessions are critical to a project's success because they allow the project team to assess whether their objectives have been met and whether further changes are necessary prior to implementation. The involvement of users in creating test plans for UAT is critical to testing success. Users are the stakeholders with solid knowledge of what the system should do and the business purposes it serves. They offer valuable insight into identifying test records and testing scenarios, and into spotting issues with systems from the perspective of non-technical users. These users will use the system, possibly for the majority of their workday, and as such should be consulted on the following:

  • Use of shortcuts and/or adaptability to Windows shortcuts.
  • Visual ergonomics of the functionality.
  • Navigational ease with multiple input capabilities, such as keyboard vs. mouse input.

One possible format for user test plans is the User Checklist. Much like a Test Plan, the User Checklist is a list of itemized test scenarios, but from the perspective of the user of the system. Unlike the Test Plan, the User Checklist should be written in non-technical terminology so that any user can easily understand what is being accomplished. One possible format is Yes/No questions, where “Yes” is the desired outcome for the test activity. This makes it easier for the Tester to determine if there is a problem with any of the tests: a “no” response means that something is wrong and needs to be reviewed with the project team. The test cases could include both a “positive” test and a “regression” test, and other tests could be included if applicable. Included for each test case is the test activity that the Tester needs to perform to validate the business rule and functional specification. Below is a general summary of common User Checklist fields; a brief sketch follows the list.

  • Description: Contains a reference to the business rule, so that the business unit Tester performing the test can verify that the transaction is processing correctly according to that rule. The business rule can be spelled out in full, or the entry can list only the business rule number, with the Tester consulting the business rules document for the full text. The test cases appear immediately beneath the business rule as the Yes/No questions described above.
  • Test Record(s): The test records to be used to run the Test Activity. If there are multiple Testers, try to avoid all Testers using the same set of records.
  • Result (Y/N): The Tester will record “Y” to indicate the test passed, or “N” to indicate that it failed.
  • Test Result: If the test did not perform as expected, the Tester will use this column to enter comments describing the errors or unexpected responses.
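
To make the format concrete, below is a minimal sketch in Python of a User Checklist with Y/N results; the questions, test records, and comments are hypothetical. Any “N” result is flagged for review with the project team:

  # Each entry mirrors the User Checklist fields described above.
  checklist = [
      {"question": "Does the claim screen open from the main menu?",
       "test_records": "Accounts 201-203", "result": "Y", "comments": ""},
      {"question": "Is the confirmation letter addressed correctly?",
       "test_records": "Account 204", "result": "N",
       "comments": "Letter shows the prior mailing address"},
  ]

  # A "no" response means something is wrong and needs project team review.
  for item in checklist:
      if item["result"] == "N":
          print(f"REVIEW NEEDED: {item['question']} ({item['comments']})")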

When designing a User Checklist for UAT, the designers need to consider not only the performance of any new functionality being added, but also how it may impact existing processing. It is advisable that one member of the team working on the User Checklist serve as a Test Coordinator, taking the lead in organizing and running any meetings related to the documents. The Test Coordinator should also be responsible for consolidating the group's work into one final draft. In addition, the Test Coordinator must keep the Project Manager and IT Leads for the project apprised of the UAT group's progress and any issues that arise.

The users’ time spent on UAT is time that is taken away from supporting their normal functions within their work unit, so it is crucial to use the time as efficiently as possible. With that in mind, the following should be taken into consideration:

  • Have all Testers confirmed availability to test according to the testing schedule?
  • Do all Testers clearly understand the objectives of the testing?
  • Have all desired testing scenarios been identified?
  • Have all necessary accounts/permissions for Testers been established and confirmed to be working prior to testing sessions?
  • Do sufficient numbers of test records exist so that multiple Testers can test at the same time?
  • Is a clear communication plan in place and shared with Testers to report issues with testing/test results?
  • Is a defect management process in place to track and prioritize any defects found during UAT?

See Also:
User Acceptance Testing
User Checklist Template

Performance Testing

Performance Testing is done to determine how applications perform in terms of responsiveness and stability under a particular workload.

Load Testing

Load Testing is a type of Performance Testing. It is done to ensure that the solution can withstand an anticipated peak load of users.

Volume Testing

Volume Testing is a type of Performance Testing. It is done to ensure that the solution can withstand a high volume of transaction processing.
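
To illustrate the idea, below is a minimal load-test sketch in Python that fires concurrent requests at a hypothetical endpoint (the URL and user count are assumptions) and reports response times. Real performance testing would normally use a dedicated tool, but the principle is the same:

  import time
  from concurrent.futures import ThreadPoolExecutor
  from urllib.request import urlopen

  URL = "http://test-environment.example/app/login"  # hypothetical endpoint
  PEAK_USERS = 50                                    # assumed anticipated peak load

  def timed_request(_):
      # Issue one request and return how long it took to complete.
      start = time.perf_counter()
      with urlopen(URL, timeout=30) as resp:
          resp.read()
      return time.perf_counter() - start

  # Simulate PEAK_USERS concurrent users hitting the solution at once.
  with ThreadPoolExecutor(max_workers=PEAK_USERS) as pool:
      durations = list(pool.map(timed_request, range(PEAK_USERS)))

  print(f"avg response: {sum(durations) / len(durations):.2f}s, "
        f"max response: {max(durations):.2f}s")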

See Also: Performance Testing

Security Testing

Security testing is done to assess whether a system meets specific security objectives, whether predefined or derived from the context.

See Also: Security Testing

Experience Testing

Experience testing is done to ensure that the solution is logical and accessible to an average intended user of the system.

See Also: Experience Testing

Environments

Depending upon the IT organization and the processes set up, the Tester may or may not be the gatekeeper for code migration coordination, but Testers should be part of the communication chain that approves code migrations. Developers develop and test in their own environment, usually called the Development environment, and are responsible for unit testing the code there to ensure it works properly. As code is tested and approved, it moves through to the Test environment and eventually to the Production environment.

System/integration testing is usually performed in a separate Test environment staged between Development and Production. The Test environment should be a copy of Production plus any programming changes that have not yet been migrated to Production. Some IT organizations may have additional environments, such as Training, Volume, or even Pre-Production, each serving a specific testing need of the organization. For example, an entire duplicate of the Production environment may be kept for the sole purpose of running ad-hoc reports, both to provide the needed information to users and to keep the reporting load off the Production environment.

Note: A discussion of environments is warranted, since IT organizations may use different names for them, such as 'regions'. Also, as mentioned above, environments differ across the various stages of code development.

See Also: Development Environment

Defect Management

Defect Management is the tracking, prioritization, and resolution of defects found during testing. Defect tracking tools allow the Tester to capture defects and report them to Developers and the appropriate team members. Beyond logging and tracking defects, these tools provide broader value to the project and its users: they can link defects to business rules/requirements, test cases, and regression scripts, and they allow the user to create custom reports for metrics such as defect trends and time-to-production.

Tools for this purpose provide tracking and management capabilities while also keeping a repository of all defects recorded; one widely used tool is HP Quality Center. The main purpose of defect management is visibility and accountability. Using filters, users can easily see the defects relevant to them without spending time sorting through non-pertinent information. Sometimes a Test Coordinator may not have access to a defect management system and will need to track and manage defects manually.
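
As an illustration of manual tracking, below is a minimal sketch in Python of a defect log with a simple filter, similar in spirit to the filters in a tracking tool. The fields, statuses, and defects shown are hypothetical:

  # The fields, statuses, and defects below are hypothetical.
  defects = [
      {"id": 1, "summary": "Fee calculation off by $0.50", "severity": "high",
       "status": "open", "linked_rule": "BR-12"},
      {"id": 2, "summary": "Typo on confirmation screen", "severity": "low",
       "status": "fixed", "linked_rule": "BR-03"},
  ]

  def filter_defects(status=None, severity=None):
      """Show only pertinent defects, like the filters in a tracking tool."""
      return [d for d in defects
              if (status is None or d["status"] == status)
              and (severity is None or d["severity"] == severity)]

  for d in filter_defects(status="open"):
      print(f"#{d['id']} [{d['severity']}] {d['summary']} (rule {d['linked_rule']})")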

Whichever method is used, defect tracking is an important process for the Tester. Defects are logged to record incorrect processes or other issues found. Subsequently, a defect report serves as a checklist of what needs to be fixed and re-tested. Defect tracking provides a good sense of the status of a project. Depending upon how far along a project is in the lifecycle, the number of defects found could be an indication of the actual level of completion of the project. As part of the risk assessment of each defect found, it may be determined that some defects are minor enough to wait to be fixed until after implementation to production so that the project is not delayed.

See Also: Quality Center

Tracking Test Results

In order to test effectively, it is important that each testing phase is documented: what was tested, when, and by whom. This helps reduce the number of overall defects found and gives the Tester(s) of each testing type transparency as to whether a previous testing type is complete. A Testing Results Tracker is recommended to document the proposed testing schedule as well as the results of the various testing phases. It outlines the target testing dates for each type of testing being done (Unit, Integration, System, Regression, User Acceptance, etc.) and tracks the responsible Testers for each testing type. The document can also track items such as the number of tests run, the number of requirements tested, and the number of defects found.

A testing phase that has a dependency on a previous phase should not begin until that previous phase has been tested and documented in the Testing Results Tracker. For example, a Tester would not want to begin System testing if Unit testing had not been performed and properly documented.
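
Below is a minimal sketch in Python of such a dependency gate; the phase names follow the testing types above, while the tracker entries and completion flag are assumptions for illustration:

  PHASE_ORDER = ["Unit", "Integration", "System", "Regression", "User Acceptance"]

  # Hypothetical tracker entries; "complete" means tested and documented.
  tracker = {
      "Unit":        {"complete": True,  "tests_run": 42, "defects_found": 3},
      "Integration": {"complete": False, "tests_run": 0,  "defects_found": 0},
  }

  def may_begin(phase: str) -> bool:
      """A phase may begin only when every earlier phase is complete and documented."""
      for earlier in PHASE_ORDER[:PHASE_ORDER.index(phase)]:
          if not tracker.get(earlier, {}).get("complete", False):
              return False
      return True

  print(may_begin("Integration"))  # True: Unit testing is complete and documented
  print(may_begin("System"))       # False: Integration testing has not finished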

See Also: Test Results Tracker Template

System Acceptance

System Acceptance is the final approval and sign-off from all Testers of the testing and UAT. For documentation purposes, the sign-off should be written (such as an email) rather than verbal, if possible. Once the system has received sign-off and approval, it is ready for implementation.