User acceptance testing (UAT) automation

User acceptance testing is your last line of defence before each release, which sounds like a good reason to take UAT seriously. But is this reason good enough to warrant investing time and effort into UAT automation?

Very few QA teams actually automate acceptance tests. Only ~3% of software testing teams automate the UAT process, according to TestDrive UAT. While this figure might lack precision, it illustrates a clear trend: most teams rely on manual testing.

So what is wrong with automated user acceptance tests, and what can we do about it? Based on what I’ve seen in a good dozen teams, working with code-based solutions is what turns UAT into a headache for most people. To see why, let’s take a deep dive into user acceptance testing and how people automate it.

What is user acceptance testing all about?

Definitions are boring, so instead of defining user acceptance testing, let’s briefly look at what everyone needs to know about UAT:

  • User acceptance testing verifies the user-facing functionality of a software product in real-world scenarios.
  • Each user acceptance test reflects a piece of functionality described in the software requirements.
  • Scope-wise, UAT strives for comprehensive coverage of the product in its entirety. This is one of the factors that make automating acceptance testing so difficult.
  • Process-wise, UAT follows system testing. As mentioned earlier, user acceptance testing is the final stage of testing before the software goes live.
  • Running acceptance tests only makes sense after you’ve identified and fixed all major defects during unit and system testing.
  • Automated user acceptance testing can be a part of regression testing where teams rerun UAT suites before major releases.

With these points in mind, there are two important things that explain why teams fail at automating user acceptance testing with hand-written code.

1. User acceptance testing is not for techies

First, UAT mirrors requirements specifications, which means it’s conceptually as close to the expertise of product owners as to that of test developers. With UAT, we’re not just testing if a feature works, we’re testing if it works for the end user.

In fact, many teams consider it a good practice to have product owners and requirements writers specify user acceptance tests. Given that we’re essentially talking about end user testing, having people from product/business run and manage acceptance tests seems like a logical next step. That said, very few product owners or requirements managers have the technical skills needed to work with hand-coded or BDD-style UAT suites.
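To make the BDD point concrete, here’s a minimal sketch of what such a suite involves behind the readable scenario text, using the behave library with Selenium (the scenario, URL, and element IDs are all illustrative). The Gherkin part reads like plain English, but every step still needs hand-written glue code that someone has to program and maintain:

    # Gherkin scenario, as a product owner would read it:
    #
    #   Scenario: Registered user logs in
    #     Given I am on the login page
    #     When I sign in as "alice@example.com"
    #     Then I should see my dashboard
    #
    # The step definitions below are the part non-programmers can't own.
    from behave import given, when, then
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    @given("I am on the login page")
    def step_open_login(context):
        context.browser = webdriver.Chrome()
        context.browser.get("https://example.com/login")  # illustrative URL

    @when('I sign in as "{email}"')
    def step_sign_in(context, email):
        context.browser.find_element(By.ID, "email").send_keys(email)
        context.browser.find_element(By.ID, "password").send_keys("s3cret")
        context.browser.find_element(By.ID, "submit").click()

    @then("I should see my dashboard")
    def step_check_dashboard(context):
        assert "Dashboard" in context.browser.title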

2. Handwritten user acceptance tests are unproductive

Second, the tests written for UAT essentially provide a second layer of coverage on top of what unit tests and integration tests already cover. Basically, we’re talking about 200% test coverage: 100% comes from unit and integration tests, and an additional 100% from UAT. Writing this much test code is far too time-consuming.
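As a hedged illustration of this duplication, consider a single business rule covered twice: once by a fast unit test, and once by a UAT-level browser test that exercises the same rule through the UI (the function, page, and messages below are hypothetical):

    import unittest

    def password_is_valid(password: str) -> bool:
        # Business rule: at least 8 characters, at least one digit.
        return len(password) >= 8 and any(c.isdigit() for c in password)

    class PasswordUnitTest(unittest.TestCase):
        # Layer 1: the rule itself, tested in milliseconds.
        def test_short_password_rejected(self):
            self.assertFalse(password_is_valid("abc1"))

    class PasswordAcceptanceTest(unittest.TestCase):
        # Layer 2: the same rule, re-tested through the browser.
        # A real implementation would drive Selenium here: open the
        # signup page, type "abc1", submit, and assert that the
        # on-screen validation error appears.
        def test_short_password_shows_error(self):
            ...

    if __name__ == "__main__":
        unittest.main()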

To make things worse, up to 30% of initial requirements typically change mid-project as the product evolves, increasing the test maintenance burden. Knowing this, it’s no wonder that so many teams choose not to automate acceptance testing in the first place. But are they right to take this route?

Are manual tests the right way to do UAT?

Running pre-release UAT suites manually may take 2–3 testers up to a week in a typical project. As projects grow, so does the time needed to run user acceptance tests. At some point, the ever-increasing pressure to deliver fast makes companies skip some UAT runs or abandon UAT altogether.

Basically, manual user acceptance testing is a false economy that far too many testers buy into without considering its long-term consequences. In reality, the right approach to UAT doesn’t come down to a choice between manual and automated acceptance tests. Automation is possible in ~99% of cases; the question is how to make automation work for your project.

Knowing that hand-coded tests are unproductive in UAT, it might make more sense to go with record-playback. Let’s see what advantages this approach can offer.

User acceptance testing with record-playback: what you get

One small digression before we proceed. Historically, record-playback solutions earned a bad reputation due to their numerous disadvantages. In particular, old versions of Ranorex and QTP made you edit auto-generated code, had a hefty installation footprint, and were a pain to manage.

The good news is that a lot has changed in record-playback testing over the past years. Modern visual testing solutions have ditched cumbersome desktop apps in favor of lightweight web-based platforms. What’s more, these platforms pair more robust, pain-free test maintenance with rich visual testing functionality.

Using these tools to automate user acceptance testing makes QA operations more productive in a number of ways.

Codeless UAT automation is accessible to product owners and requirements managers

I’m sure you’ve seen teams where part of the QA workload falls on the shoulders of product owners and product managers. This makes a lot of sense with UAT, where the tester has to focus on user-facing functionality.

With codeless automation of user acceptance tests, non-programmers gain more control over the processes of creating, running, and managing test suites. In teams that follow this approach, testers still own UAT, yet people outside of QA can handle automated user acceptance tests too.

Keeping codeless UAT suites in sync with requirements and implementation code is easier

Keeping user acceptance tests in sync with specifications is challenging in Agile projects, where changes happen often. Because of its heavy focus on end user testing, UAT also overlaps with UI testing, where tests tend to be fragile. In particular, constantly working around brittle selectors is a major factor that slows you down when testing an ever-changing UI.
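To illustrate what “brittle” means here, compare two ways of locating the same button in a Selenium sketch (the URL and markup are hypothetical): the first locator breaks as soon as a designer reshuffles the page structure, while the second is tied to a dedicated test attribute and survives most UI changes, assuming the app exposes one:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    browser = webdriver.Chrome()
    browser.get("https://example.com/login")  # illustrative URL

    # Brittle: welded to the exact DOM structure; any layout change breaks it.
    submit = browser.find_element(
        By.XPATH, "/html/body/div[2]/div/form/div[3]/button[1]"
    )

    # More change-proof: a dedicated test hook that survives restyling
    # (assumes developers add data-testid attributes to the markup).
    submit = browser.find_element(By.CSS_SELECTOR, "[data-testid='login-submit']")
    submit.click()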

One of the key advantages of modern codeless platforms is the speed at which you can create and edit tests. Not having to hand-code your tests makes it possible to cover new features quickly.

In addition, some platforms address the problem of brittle locators with proprietary solutions. If the acceptance testing tool you’re using has this feature, maintaining your tests becomes a lot less burdensome.

Testing what the user sees

A major issue with coded tests is that they only check a series of UX touchpoints defined by a user story. In doing so, these tests ignore dozens of potential issues.

Open a page, locate a form, input the right user credentials, check if the action takes you to the correct UI state. Doesn’t it seem like there should be more to it? While testing this routine, a human tester will also notice if anything is wrong with the layout, fonts, images, or content. More importantly, that’s what human users will notice too.
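In code, that routine might look like the following minimal Selenium sketch (URL and locators are illustrative). Every assertion here is functional, so the test stays green even if the stylesheet fails to load and the page renders as unstyled text:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    browser = webdriver.Chrome()

    # Open a page and locate the form.
    browser.get("https://example.com/login")
    form = browser.find_element(By.ID, "login-form")

    # Input the right user credentials.
    form.find_element(By.NAME, "email").send_keys("user@example.com")
    form.find_element(By.NAME, "password").send_keys("correct-horse")
    form.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

    # Check if the action takes you to the correct UI state...
    assert browser.current_url.endswith("/dashboard")
    # ...and that's it: a broken layout, missing images, or garbled
    # fonts would all pass unnoticed.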

Visual testing is also something that you get with modern record-playback solutions. While some tools don’t go beyond simplistic screenshot comparison, there are platforms that offer advanced features like content verification and handling of dynamic UI regions.
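For reference, “simplistic screenshot comparison” amounts to little more than a pixel diff, roughly like this Pillow sketch (file names are illustrative). It flags every changed pixel, including harmless ones such as a ticking clock widget, which is exactly the gap that content verification and dynamic-region handling are meant to close:

    from PIL import Image, ImageChops

    def screens_match(baseline_path: str, current_path: str) -> bool:
        """Naive pixel-level comparison: any difference at all fails."""
        baseline = Image.open(baseline_path).convert("RGB")
        current = Image.open(current_path).convert("RGB")
        if baseline.size != current.size:
            return False
        diff = ImageChops.difference(baseline, current)
        return diff.getbbox() is None  # None means no differing pixels

    print(screens_match("baseline.png", "current.png"))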

In the end, having the right acceptance testing tool is what matters

Moving from hand-coded to recorded tests can make automated UAT work for you — if you have the right tool. If your project revolves around a web application, our very own solution Screenster can become the tool that will help you fix UAT automation.

Screenster offers low-code automation of visual testing, with a record-playback-verification mechanism at its core. The tool enables you to record baselines of user sessions, capturing screenshots and the DOM structure of every page and UI state.

When recording a test, the platform automatically determines the optimal waiting time for complex page loading scenarios and identifies UI regions with dynamic content. When comparing new UI samples with their baselines, Screenster can distinguish between visual and content-related changes, and it lets you easily integrate expected changes into your suites.
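Screenster’s actual mechanism is proprietary, but the general idea of ignoring dynamic regions during comparison can be sketched as follows (the region coordinates and helper names are made up for illustration): both the baseline and the new screenshot are blanked out in regions known to change, so only meaningful differences remain:

    from PIL import Image, ImageChops

    # Regions known to hold dynamic content, as (left, top, right, bottom);
    # say, a timestamp and an ad banner. Coordinates are illustrative.
    DYNAMIC_REGIONS = [(10, 10, 210, 40), (600, 0, 800, 90)]

    def mask_dynamic_regions(image: Image.Image) -> Image.Image:
        masked = image.copy()
        for box in DYNAMIC_REGIONS:
            masked.paste((0, 0, 0), box)  # blank the region out
        return masked

    def screens_match_ignoring_dynamic(baseline_path: str, current_path: str) -> bool:
        baseline = mask_dynamic_regions(Image.open(baseline_path).convert("RGB"))
        current = mask_dynamic_regions(Image.open(current_path).convert("RGB"))
        return ImageChops.difference(baseline, current).getbbox() is None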

Does this sound like functionality you could use to optimise your UAT automation? If so, try recording a demo test. The platform is simple enough for a non-technical person, and it’ll take you no longer than five minutes to record a basic test. So why not give it a try? :)

 
