The Seven Slip-Ups to Avoid in User Acceptance Testing

In the lifecycle of releasing a new feature, user acceptance testing (UAT) plays a short-lived yet prominent role. In agile development, and in hybrid models, UAT comes at the end of the sprint or at the end of the development cycle. The goals of UAT are manifold, but its primary goal is to keep potential defects and issues from reaching customers. An end-to-end UAT lifecycle starts with a clear understanding of the requirements, followed by defining a test plan, designing test cases with testing steps and expected outcomes, and preparing the test environment.
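
To make this concrete, here is a minimal sketch, in Python, of what a structured UAT test case might look like, with testing steps, an expected outcome, and a priority captured explicitly before execution begins. The field names and the sample scenario are illustrative, not tied to any particular test-management tool.

```python
from dataclasses import dataclass, field
from enum import Enum


class Priority(Enum):
    P0 = 0  # must pass before release
    P1 = 1  # high importance
    P2 = 2  # nice to have


@dataclass
class UatTestCase:
    """A minimal UAT test case: steps, expected outcome, and priority."""
    case_id: str
    title: str
    priority: Priority
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_outcome: str = ""


# Hypothetical example: a purchase-to-invoice scenario written up before testing.
invoice_case = UatTestCase(
    case_id="UAT-001",
    title="Invoice shows charges, returns, and credits after purchase",
    priority=Priority.P0,
    preconditions=["Test environment seeded with a customer account"],
    steps=[
        "Complete a purchase with one returned item",
        "Open the generated invoice",
    ],
    expected_outcome="Invoice lists all charges, returns, and credits with a clear total",
)
```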

Following are seven slip-ups to avoid in user acceptance testing:

Slip-Up #1: Inadequate evaluation of the functionality from the user's point of view

Successfully building new functionality is, without a doubt, a great achievement. Functional testing ensures that the code runs as expected without hiccups. A key failure of user testing is tunnel vision: rushing the functionality into production without ensuring that the experience also works for the customer. For a feature to succeed in the marketplace, it must function correctly, but it must also add value and help users reach their goals with minimum hassle. For example, after a purchase the customer is issued an invoice. If the invoice is hard to understand and doesn't show the complete charges, returns, and credits, it will confuse customers, generate support calls, and possibly cost business. Testing should also include accessibility testing for individuals with temporary or permanent disabilities, and testing for accessibility starts with building accessibility into the requirements. So give equal weight to accepting the functionality from the customer's perspective.
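
As a rough illustration, a customer-perspective acceptance check can assert on what the customer actually sees, not just that an invoice was produced. The generate_invoice() helper and the invoice fields below are hypothetical stand-ins for whatever the real system exposes; this is a sketch, not a prescribed implementation.

```python
# A minimal pytest-style sketch of a customer-perspective acceptance check.

def generate_invoice(order: dict) -> dict:
    """Stand-in for the real invoice service; returns an itemized invoice."""
    total = order["charges"] - order["returns"] - order["credits"]
    return {**order, "total": round(total, 2)}


def test_invoice_shows_complete_charges_returns_and_credits():
    order = {"charges": 120.00, "returns": 20.00, "credits": 10.00}
    invoice = generate_invoice(order)

    # Beyond "an invoice was produced", verify the customer can see
    # every component of what they were billed and the net total.
    assert invoice["charges"] == 120.00
    assert invoice["returns"] == 20.00
    assert invoice["credits"] == 10.00
    assert invoice["total"] == 90.00
```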

Slip-Up #2: Only simulating the sunny-day scenarios during testing

Test cases must simulate the real world and actual customer experiences as much as possible. Sunny-day scenarios, the minimum viable product path, only check that the intended functionality works; they do not test real-world conditions. Customers use services and apps to solve real-world problems, so creating realistic scenarios requires knowing how customers actually use the system. To create the right scenarios with the right level of intricacy, look into support tickets and talk to subject matter experts on the business and engineering teams about how customers use the system and the intricacies their ongoing needs create. Testing must focus on critical and high-importance scenarios that simulate the real world.
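
One lightweight way to move beyond the sunny-day path is to parameterize a scenario over realistic variations. The sketch below uses pytest; the apply_discount() rule and the specific cases are hypothetical examples of the kind of edge conditions that support tickets and subject matter experts would surface.

```python
import pytest


def apply_discount(subtotal: float, discount_pct: float) -> float:
    """Stand-in pricing rule: apply a percentage discount, never below zero."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(max(subtotal * (1 - discount_pct / 100), 0.0), 2)


@pytest.mark.parametrize(
    "subtotal, discount_pct, expected",
    [
        (100.00, 10, 90.00),   # sunny-day path
        (100.00, 0, 100.00),   # no discount applied
        (100.00, 100, 0.00),   # full discount, free order
        (0.00, 50, 0.00),      # zero-value order
    ],
)
def test_discount_scenarios(subtotal, discount_pct, expected):
    assert apply_discount(subtotal, discount_pct) == expected


def test_invalid_discount_is_rejected():
    # Rainy-day path: bad input must be rejected, not silently accepted.
    with pytest.raises(ValueError):
        apply_discount(100.00, 120)
```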

Slip-Up #3: Test design not validated with Engineering and the Business before the start of test execution

The Business owns the experience once the development work is completed. It is always easier, and more cost effective, to fix defects during the development phase than after the release, when everyone is busy with other priorities. To get the best outcomes, test plans, test cases, and expected outcomes must be validated and signed off by both the Business and the developers before testing starts, including validation of the Priority 0 and Priority 1 test cases.
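
One possible way to make the P0/P1 sign-off concrete is to tag test cases by priority so everyone reviews the same release-gating set. The sketch below assumes pytest with custom p0/p1 markers, which is just one convention; a test-management tool serves the same purpose.

```python
import pytest

# Tagging cases by priority so the Business and Engineering can review and
# sign off on exactly which P0/P1 cases gate the release. The marker names
# are a hypothetical convention, not a pytest built-in.

@pytest.mark.p0
def test_checkout_completes_successfully():
    ...


@pytest.mark.p1
def test_invoice_emailed_after_checkout():
    ...
```

Registering the markers in pytest.ini keeps the convention explicit, and running `pytest -m "p0 or p1"` executes only the release-gating cases.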

Slip-Up #4: UAT not taking an end-to-end view of the feature release

In an agile environment, testing can come at the end of a sprint. The user stories completed during the sprint are functionally tested to confirm the feature is done. User testing, however, must keep the entire end-to-end experience in mind, without shortcuts or surrogates. Customers will go through the full end-to-end experience and expect everything to work. This is especially true in an agile environment, where a team may work on only one feature and only through continuous integration can testing cover the entire end-to-end experience. If the end-to-end experience doesn't work, the result is higher volumes of support calls and an impact on business results. UAT's role is not to check a box or focus on the happy path, but to look at the problem from the customer's perspective and take an end-to-end view of the customer experience.
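
As a sketch of the difference between story-level checks and an end-to-end view, the example below walks a single customer journey from cart to invoice. The Storefront class and its methods are toy stand-ins for the deployed system; the point is that the assertions span the whole chain, not one story in isolation.

```python
class Storefront:
    """Toy stand-in for the deployed application under test."""

    def __init__(self):
        self.cart = []
        self.orders = []

    def add_to_cart(self, sku: str, price: float):
        self.cart.append((sku, price))

    def checkout(self) -> dict:
        order = {"items": list(self.cart), "total": sum(p for _, p in self.cart)}
        self.orders.append(order)
        self.cart.clear()
        return order

    def invoice_for(self, order: dict) -> dict:
        return {"lines": order["items"], "total": order["total"]}


def test_end_to_end_purchase_journey():
    app = Storefront()

    # Step through the journey the customer actually takes.
    app.add_to_cart("SKU-1", 25.00)
    app.add_to_cart("SKU-2", 15.00)
    order = app.checkout()
    invoice = app.invoice_for(order)

    # The whole chain has to hold together, not just the individual stories.
    assert order["total"] == 40.00
    assert invoice["total"] == order["total"]
    assert app.cart == []  # cart is emptied after checkout
```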

Slip-Up #5: Start and acceptance criteria not clearly articulated before testing starts

UAT comes at the end of the development cycle, after functional and component testing are done. The development teams, the Business, and the testing team must all sign off on the criteria for starting UAT and the acceptance criteria for ending it. At kick-off, the test environment must be ready for UAT. Changing code during user testing means the user tests are not final, and without regression testing, technical debt creeps into the functionality. It should also be clear up front who has sign-off authority and who makes the final go/no-go decision. When testing is done, the progress report must include the status of each scenario. At go/no-go, open critical bugs should force a decision to delay, and the release should proceed only if the remaining bugs are not critical and do not impact the user experience. It is best to articulate the entry and exit criteria before testing starts.
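
Entry and exit criteria are easier to enforce when they are written down as explicit checks rather than remembered informally. The thresholds and field names in this sketch are hypothetical examples of what a team might agree on at kick-off.

```python
# A sketch of making entry/exit criteria explicit and checkable.

ENTRY_CRITERIA = {
    "functional_testing_complete": True,
    "test_environment_ready": True,
    "p0_p1_cases_signed_off": True,
}


def ready_to_start_uat(status: dict) -> bool:
    """Entry gate: every agreed precondition must be met."""
    return all(status.get(key, False) for key in ENTRY_CRITERIA)


def ready_to_release(results: dict) -> bool:
    """Exit gate: no open critical bugs and all P0/P1 scenarios passed."""
    return results["open_critical_bugs"] == 0 and results["p0_p1_pass_rate"] == 1.0


if __name__ == "__main__":
    kickoff = {"functional_testing_complete": True,
               "test_environment_ready": True,
               "p0_p1_cases_signed_off": False}
    print(ready_to_start_uat(kickoff))   # False: sign-off still missing

    final = {"open_critical_bugs": 0, "p0_p1_pass_rate": 1.0}
    print(ready_to_release(final))       # True: exit criteria met
```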

Slip-Up #6: Lack of trust between development teams and UAT

A lack of trust can come from a number of different places: how bugs are addressed, how UAT bugs are viewed, and the types of bugs filed. A confrontational or competitive relationship between the development teams and UAT is unproductive. There is undoubtedly some conflict built into the relationship between Testing and Development, but excessive conflict becomes corrosive and leads to distrust and finger pointing. Testers should therefore document defects and reproduction steps in detail. When faced with pushback from developers, testers should set the context, explain the test objectives, and help with root cause analysis. The test team must maintain high integrity and transparency by focusing on P0 and P1 scenarios. Testers must not give up easily or get frustrated by ambiguities in the functionality or issues with the test environment; testers find issues so the customers don't. Good testers are also team players, willing to work with developers, business owners, and other testers to learn, teach, and attain the best results for the customers.
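
Detailed, reproducible defect reports are the best antidote to finger pointing. The sketch below captures one possible structure for a defect record; the fields are an illustrative convention, not any specific bug tracker's schema.

```python
from dataclasses import dataclass, field


@dataclass
class DefectReport:
    """A structured defect record with explicit repro steps and outcomes."""
    defect_id: str
    title: str
    severity: str                 # e.g. "P0", "P1"
    environment: str              # build and test environment under test
    repro_steps: list[str] = field(default_factory=list)
    expected: str = ""
    actual: str = ""

    def is_actionable(self) -> bool:
        """A report a developer can act on has steps plus expected vs. actual."""
        return bool(self.repro_steps and self.expected and self.actual)


# Hypothetical example of a well-documented UAT defect.
bug = DefectReport(
    defect_id="UAT-BUG-042",
    title="Invoice omits credit for returned item",
    severity="P1",
    environment="UAT environment, build 2024.06.1",
    repro_steps=["Purchase two items", "Return one item", "Open the invoice"],
    expected="Invoice shows the return as a credit line",
    actual="Invoice total ignores the returned item",
)
assert bug.is_actionable()
```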

Slip-Up #7: Not asking enough questions as part of testing

For a new feature, one goal is to make sure that everything works per the requirements; product managers and UAT testers rely on those requirements to set up and execute their testing. Another goal of UAT is to ensure that proper quality is built into the feature. A third goal is to understand the risks and the readiness requirements after the release. Asking plenty of proactive questions about the scope, functionality, and experience before testing starts helps development teams make corrections early; too often, requirements issues are caught during testing, which raises the cost of corrections. Before UAT begins, the UAT team should have the opportunity to pose questions about gaps and unclear requirements directly to the Engineering and Business teams. When reviewing the requirements, the test team must ask "What is the customer trying to achieve?" and "How well can the customer reach that goal?". Some software must also meet compliance and regulatory requirements, and usability and accessibility requirements deserve consideration as well. Testers must ask a lot of questions about the functionality and the expected outcomes both before testing starts and while it is underway.
