In the agile release lifecycle, the user acceptance testing (UAT) stage plays a short-lived yet prominent role at the end of a sprint, or at the end of the development cycle for hybrid models. UAT can play multiple roles, but its primary focus should be to keep potential defects and experience gaps from reaching customers and users. An end-to-end UAT stage starts with a clear understanding of the experiences and includes defining a testing plan, designing test cases, documenting desired outcomes and testing steps, and ensuring system and environmental readiness.
The following are seven lapses to avoid during user acceptance testing:
Lapse #1: Inadequate evaluation of the functionality from the user's point of view
Successfully creating new functionality is, without a doubt, a great achievement. Functional testing ensures that the code runs without hiccups and behaves as expected. A key failure of user testing is tunnel vision: rushing the functionality into production without ensuring that the experience also works for the users. For a feature to be successful in the marketplace, it must function correctly, but it must also add value and help users reach their goals with minimum hassle. For example, after a purchase, the customer is issued an invoice. The feature may work correctly and generate an invoice. However, if the invoice is hard to explain or doesn't show the complete charges, returns, and credits, it will confuse users. Confused users will call support and may lose trust in the product. So, user acceptance testing should give equal weight to the acceptance of the experience from the user's perspective as it does to functional acceptance criteria.
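As a minimal sketch of this idea, a UAT check can assert on the user-facing content of the invoice rather than only on its existence. The invoice structure and field names below are illustrative assumptions, not drawn from any particular system:

```python
# Hypothetical UAT-style check: the invoice must be understandable to the
# user, not merely generated. The field names here are assumptions.

def validate_invoice_for_user(invoice: dict) -> list:
    """Return a list of user-experience gaps found in the invoice."""
    gaps = []
    # Functional check: an invoice exists with a total.
    if "total" not in invoice:
        gaps.append("missing total")
    # Experience checks: charges, returns, and credits must all be itemized.
    for section in ("charges", "returns", "credits"):
        if section not in invoice:
            gaps.append(f"missing {section} section")
    # Experience check: the itemized amounts must explain the total.
    if not gaps:
        explained = (sum(invoice["charges"])
                     - sum(invoice["returns"])
                     - sum(invoice["credits"]))
        if explained != invoice["total"]:
            gaps.append("total does not match itemized amounts")
    return gaps

invoice = {"total": 80, "charges": [100], "returns": [15], "credits": [5]}
print(validate_invoice_for_user(invoice))  # [] -> no experience gaps found
```

A purely functional test would stop at "an invoice was generated"; the experience checks are what catch the confusing, incomplete invoice before it reaches a customer.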
Lapse #2: Focusing on the “sunny day” scenarios
Test cases must simulate the real world and the actual customer experiences as much as possible. The "sunny day" scenarios, also referred to as minimum viable product path or happy path scenarios, only check that the intended functionality works rather than testing real-world conditions. Customers will use the services and apps to solve their real-world problems. Creating real-world scenarios requires knowledge of how customers use the functionality within the context of the platform. To create the right scenarios with the right level of intricacy, it's important to look into historical support tickets and to discuss with subject matter experts in business and development how users interact with the platform to satisfy their ongoing needs. In addition to functional and integration testing, UAT must focus on critical and high-importance user scenarios that simulate the real world.
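One way to keep the happy path from dominating is to encode scenarios as data, pairing each with its expected outcome. The `checkout()` stub and the scenario names below are assumptions for demonstration only; the point is the mix of one sunny-day case with cases drawn from support tickets and SME interviews:

```python
# Illustrative sketch: a UAT scenario table that goes beyond the happy path.
# checkout() is a toy stand-in for the system under test.

def checkout(cart, payment_ok=True, coupon=None):
    """Toy stand-in for a purchase flow."""
    if not cart:
        return {"status": "error", "message": "cart is empty"}
    if not payment_ok:
        return {"status": "declined", "message": "payment failed"}
    total = sum(cart)
    if coupon == "SAVE10":
        total -= 10
    return {"status": "ok", "total": max(total, 0)}

# One happy-path scenario plus real-world scenarios sourced from
# historical support tickets and subject matter experts.
scenarios = [
    ("happy path: simple purchase", checkout([50]), "ok"),
    ("real world: declined card", checkout([50], payment_ok=False), "declined"),
    ("real world: empty cart", checkout([]), "error"),
    ("real world: coupon larger than total", checkout([5], coupon="SAVE10"), "ok"),
]

for name, result, expected_status in scenarios:
    assert result["status"] == expected_status, name
print("all scenarios passed")
```

Keeping scenarios in a table like this also makes it easy to review the list with business and development and to spot when the sunny-day cases outnumber the real-world ones.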
Lapse #3: Starting user testing before validating the scenarios with both development and business
The business owns the experience once the development work is completed. It's always easier, and more cost-effective, to fix defects during the development phase. After the release, it's more difficult to fix experiences once everyone has moved on to other priorities. As a result, to get the best outcomes, test plans, test cases, and expected outcomes must be validated and signed off by both the business and development prior to the start of testing, including validation of release-blocking scenarios.
Lapse #4: Releasing a feature without end-to-end UAT
At the end of an agile sprint, completed user stories are functionally tested to ensure that the feature is done. User testing, on the other hand, requires keeping the end-to-end experience in mind, with no shortcuts or surrogates. The users will go through the full end-to-end experience, and they expect the feature to work within it. In agile development, teams may work on only one feature, and continuous integration testing is used to ensure the end-to-end system works. UAT must test the end-to-end experience to ensure the user experience works, prevent higher volumes of support calls, and drive new business results. UAT's role is not to check the box but to test the feature from the user's perspective within the end-to-end experience.
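The contrast can be sketched in a few lines. Below, a feature-level check proves a new discount function works in isolation, while the end-to-end scenario walks the whole journey the user actually takes; all function names are hypothetical stand-ins:

```python
# Illustrative contrast: feature-level check vs. end-to-end UAT scenario.
# apply_discount() and end_to_end_purchase() are hypothetical stand-ins.

def apply_discount(total, percent):
    return round(total * (1 - percent / 100), 2)

# Feature-level check: the new discount function works in isolation.
assert apply_discount(100, 20) == 80.0

# End-to-end UAT scenario: browse -> cart -> discount -> invoice,
# the full journey the user actually experiences.
def end_to_end_purchase():
    cart = [40, 60]                        # browse and add items
    total = apply_discount(sum(cart), 20)  # discount applied at checkout
    invoice = {"items": cart, "total": total}
    return invoice

invoice = end_to_end_purchase()
assert invoice["total"] == 80.0 and invoice["items"] == [40, 60]
print("end-to-end scenario passed")
```

The feature-level assertion can pass even when the discount is never wired into checkout or never shows up on the invoice; only the end-to-end scenario catches that class of gap.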
Lapse #5: Delving into testing without clearly articulated start and acceptance criteria
UAT starts at the end of the development cycle, after functional and component testing are done. Development teams, business owners, and the testing team must all sign off on the UAT start criteria and UAT acceptance criteria. At kick-off, the production/test environment must be ready for UAT. Changing code during user testing requires additional regression testing to keep technical debt from creeping into the functionality. At the conclusion of testing, it should be clear who has sign-off authority and who makes the final go/no-go decision. The status of each key scenario must be reported so that at go/no-go there can be a data-driven decision to release or delay based on the criticality of the user experience. It's best to gain agreement on the UAT start and sign-off criteria before testing starts.
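A data-driven go/no-go decision can be as simple as a rule applied to reported scenario statuses. The criticality levels and the blocking rule below are assumptions standing in for whatever the teams agree on before testing starts:

```python
# Minimal sketch of a data-driven go/no-go decision. The criticality
# levels and blocking rule are assumed sign-off criteria, agreed up front.

CRITICALITY_BLOCKS_RELEASE = {"critical"}  # agreed before testing starts

def go_no_go(scenario_results):
    """Return ('go' or 'no-go', list of blocking scenario names).

    scenario_results maps scenario name -> (status, criticality).
    """
    blockers = [name
                for name, (status, criticality) in scenario_results.items()
                if status == "fail" and criticality in CRITICALITY_BLOCKS_RELEASE]
    return ("no-go" if blockers else "go", blockers)

results = {
    "purchase and invoice": ("pass", "critical"),
    "refund flow": ("fail", "critical"),
    "profile photo upload": ("fail", "low"),
}
decision, blockers = go_no_go(results)
print(decision, blockers)  # no-go ['refund flow']
```

Because the rule is agreed in advance and applied to reported statuses, the release-or-delay conversation is about the data, not about who gets to decide.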
Lapse #6: Confrontational relationship between development teams and UAT
UAT and development teams may develop distrust of one another. There can be differences over the priority of defects and the impact of defects on release timing and costs versus driving for experience quality. Undoubtedly, there is some conflict inherent in the relationship between UAT and development teams, but excessive conflict leads to finger-pointing. Therefore, testers should understand the desired experiences in detail, put the experience in context when explaining defects and test objectives, and help with understanding the root causes of defects. The test team must maintain high integrity and transparency and avoid seeing defects as the 'path to glory'. Testers find defects so that customers don't. Testers must not give up easily or get frustrated by ambiguities in the functionality or issues with the test environment. Good testers must also be team players, willing to work with developers, business owners, and other testers to learn, teach, and work together to attain the best results for the customers and users.
Lapse #7: Not asking enough questions during testing
A new feature must work per the requirements, and UAT relies on the requirements to set up and execute its test scenarios. To ensure the quality of the experience and to understand the risks of the release and operational readiness needs, UAT must ask a lot of proactive questions about the scope, functionality, and experience so that the development teams can correct system and design defects. When requirements gaps are caught during testing, the cost of making corrections goes up. Before the start of UAT, there should be opportunities for the UAT team to pose questions about possible gaps and unclear requirements directly to the development and business teams. When reviewing the requirements, the test team must ask "What is the user trying to achieve?" and "How well is the user able to reach their objectives?" There are also compliance and regulatory requirements that some software must meet. Likewise, accessibility testing for individuals with temporary or permanent disabilities starts with building requirements for usability and accessibility. As a result, testers must ask plenty of questions about the functionality, users' objectives, and the expected outcomes before and during testing.