What is User Acceptance Testing?
Over the years, a variety of definitions have been applied to User Acceptance Testing (UAT). Your success in
validating that a system or application is "fit for use" by the user
depends on how you define this phase of testing.
For example, if you see UAT as a functional test based
solely on user requirements, you will probably miss the same things in testing
that were missed in specifying the requirements. Likewise, if
you see UAT as the tests that can be automated in agile testing,
then you may overlook the "hands-on" assessment by actual users that
determines how well the application really meets their needs.
I want to be clear that I'm not saying you must use my
definition of UAT or else be hopelessly doomed to project failure. What
I am saying is that there are a number of views of UAT that may or may
not fit your needs -- and that you had better be certain you understand
the differences in the ways UAT is defined.
I'm a purist, so I suggest that the actual current and/or
future users do the planning, design, and performance of acceptance testing. There
are people who see it otherwise. Some would rather have testers take
the role of users. Others have a UAT team composed of users who currently do
only testing. In other organizations, UAT might be performed by business
analysts.
I like to have real users perform UAT because 1) they are going to be using the system anyway, 2)
they understand the current ways of doing their tasks and so can tell when
something won't work for them, and 3) they should know what they will be
receiving in terms of system quality and features.
This isn't without
challenges. Here are a few of the reasons typically given for not
involving users in UAT:
1) Not enough time due to performing regular job
responsibilities
2) No training on the new system to be tested
3) No interest
4) Insufficient testing knowledge or experience
These are all significant challenges, but they can typically be
managed.
What is "Acceptance"?
By this point, contracts have been signed and money has been spent,
so "acceptance" is generally not an "accept or reject"
proposition. UAT is more about discovering gaps between the way the system
works and the way operational processes are performed.
A Word About Validation
It seems to me that the distinction between verification and
validation has been lost in recent decades. It's important that we understand
the difference between these two types of testing so that we can
get a complete and accurate appraisal of what we're testing.
I'll refer to the ISTQB glossary at this point, which references ISO 9000:
Validation
"1 - In design and development, validation concerns the
process of examining a product to determine conformity with user needs."
"2 - Validation is normally conducted on the final
product under defined operating conditions. It could be required in earlier
phases."
Verification
"1 - In design and development, verification concerns
the process of examining the consequence of a certain activity to ascertain
conformity with the specified requirement for this activity."
So, let me paraphrase a bit. Validation determines whether
something meets the user's actual needs. Verification determines whether
something was built according to specifications.
Those are two vastly
different activities!
UAT is usually regarded as validation. In fact, it is
typically the only time validation is performed on a project. System testing,
integration testing, unit testing, and reviews are all examples
of verification, because they are based on specifications and requirements.
Therefore, it is very important that in the one validation opportunity we have,
we get it right.
Now let's examine some of the differing perspectives of UAT.
The Beta Test
In this definition, software is given (or sold) to
customers for them to test as they perform their normal activities. Some beta
testers go beyond that and actually try to break the software.
The problem with beta testing is that you never know
how much testing was actually done. Even worse, you never know in advance
how much testing will be done. If you're relying on people to beta test your
software, you're probably going to miss a lot of things.
Beta testing does serve a practical function in finding
configuration conditions that might not otherwise be found in your testing.
It also provides the chance to get feedback about the product early. But
this still does not meet the bar of validation, because beta testing
does not imply acceptance and it often lacks the rigor of a controlled test.
Agile Acceptance Testing
In agile development, acceptance testing is the functional
testing that's based on a user's stated needs. Functional tests are
designed based on those needs. Some of the functional tests are
automated, while others are performed manually. In agile processes, the
developer might be the one actually performing these tests. This means
that the user may or may not be observing the results of the test.
In the absence of defined requirements (at least to the
extent they are seen traditionally), these acceptance tests are close to
the functional tests that would be considered system testing in other
development approaches. It's great that these tests are performed, but
they are still more verification than validation. Both of these types of testing
concentrate on finding defects rather than confirming fitness for use.
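As a sketch of what such an agile acceptance test looks like in practice, here is a minimal example derived from one stated user need. The shopping-cart class and the need itself are hypothetical stand-ins, not taken from any particular project:

```python
# Hypothetical system under test: a minimal shopping cart.
# Stated user need: "When I add the same item twice, the cart
# shows one line with quantity 2, not two separate lines."

class Cart:
    def __init__(self):
        self.lines = {}  # maps SKU -> quantity

    def add(self, sku):
        # Adding an existing SKU increments its quantity.
        self.lines[sku] = self.lines.get(sku, 0) + 1


def test_adding_same_item_twice_merges_lines():
    # Given an empty cart
    cart = Cart()
    # When the user adds the same item twice
    cart.add("SKU-1")
    cart.add("SKU-1")
    # Then there is one line with quantity 2
    assert len(cart.lines) == 1
    assert cart.lines["SKU-1"] == 2
```

Note that this test checks the stated need, not actual fitness for use -- which is exactly why the article classifies such tests as verification rather than validation.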
The Traditional View
There is also much confusion in the traditional view of
acceptance testing. Some consider UAT to mean that the system or application is
tested by users to confirm that documented requirements are fulfilled. Others,
like myself, see UAT as a real-world validation of the system
performed by users.
The distinction lies in what the test is based upon, both for
test design and for the evaluation of outcomes. The problem with basing UAT on requirements is that
1) many times we do not have well-defined requirements on projects, and 2) even
if we do have well-defined requirements, they can have flaws in them.
Now, many people ask, "Then, what do you base your tests upon?"
In fact, there are several ways to test without specified
requirements. You can refer to my post, Testing Without Defined Requirements, to
see a further listing of those approaches.
For the purposes of UAT, a very effective way to design
tests is to base them on user processes. These can be workflows or
other process-driven activities that people will use the software to
accomplish.
Not only is this a different basis for testing, but it is a
process-driven view that's usually not achievable from requirements alone. In
reality, this might be the only chance to perform any type of business process
validation. One of the most critical issues in any system deployment is whether
the system will work immediately after deployment to support what the
users do.
I also combine process-driven test design with data-driven
test design to achieve a test that models the real world. To illustrate this, I
use an analogy of plumbing. The pipes represent processes, the water
represents the data that flows through the processes, and the taps represent the
controls in a system.
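A minimal sketch of that combination, using a hypothetical expense-approval process: the routing function is the pipe, each row of test data is the water flowing through it, and the approval threshold acts as the tap. All names and values here are illustrative assumptions:

```python
# Hypothetical expense-approval process (the "pipe").
def route_expense(amount, threshold=500.00):
    """Route an expense claim: auto-approve small claims,
    send large ones to a manager. The threshold is the "tap"
    controlling what flows where."""
    if amount <= threshold:
        return "auto-approved"
    return "manager-review"


# Data-driven cases (the "water"): each row is one realistic claim
# a user might submit, paired with the outcome the process should produce.
cases = [
    (25.00,   "auto-approved"),
    (500.00,  "auto-approved"),   # boundary value: exactly at the threshold
    (500.01,  "manager-review"),  # just over the threshold
    (9999.00, "manager-review"),
]

for amount, expected in cases:
    result = route_expense(amount)
    assert result == expected, f"{amount}: got {result}, expected {expected}"
```

The point of the structure is that adding a new real-world scenario means adding a data row, not writing a new test.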
I'm a big believer in requirements-based testing. I simply
believe that the users need a second, independent perspective of testing to
compensate for the mistakes and gaps that are commonly seen in requirements
-- even when they have been reviewed.
In any event, UAT should be based on a specified set of
acceptance criteria defined at the start of the project.
Regrettably, UAT is often performed at the worst possible
time in a project -- at the very end. If you wait until this stage to discover
where the system fails to meet user needs, there's a huge risk that the
project will be late or not approved at all. To mitigate this risk, I
advise involving users in the reviews and testing that precede UAT.
In my experience, there really isn't much value in a
surprise factor in testing. In other words, it is fine for users to see
what is coming their way long before they have to test it.
Traditional UAT can also be performed manually. I prefer this
approach because 1) UAT is typically performed only once, which means there is
minimal return on investment for test automation, and 2) users need to be
seeing what the software really looks like and how it performs. Automation
takes this perspective away from people.
There is perhaps a role for automation in UAT. There may
be mundane testing which can be readily automated. However, the user needs to
first understand how the software is performing these functions. Additionally,
users might be called on to carry out some amount of functional and
regression testing, but it is the rare case that this can be carried out without
assistance from people who understand how to do this kind of testing. So I
consider these the exceptions rather than the rule.
There is also the possibility of automating repeated UAT.
This is occasionally seen when multiple releases are delivered. A user may
have to test new functions along with each of the functions
tested in a previous release, as in a regression test. This is typically
found in the agile world, as well as in any kind of iterative development
approach.
Thus, there may be an opportunity for automating some
acceptance tests. The main concern is that users get to experience firsthand how
the system supports their needs.
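One lightweight way to automate that repetition is to keep the scenarios accepted in earlier releases in a list and rerun them unchanged alongside the new ones. The sketch below is illustrative only: the scenario names are invented, and the trivial `run_scenario` stub stands in for actually driving the application:

```python
# Illustrative harness for repeated UAT across releases: scenarios
# accepted in earlier releases are rerun as a regression suite.

def run_scenario(name):
    # Stand-in for exercising the real application; here every
    # scenario trivially "passes" so the sketch is runnable.
    return True


release_1_scenarios = ["create order", "cancel order"]
release_2_scenarios = ["split shipment"]  # new in this release

# The release-2 UAT run covers the new work plus the old suite.
suite = release_1_scenarios + release_2_scenarios
results = {name: run_scenario(name) for name in suite}

failed = [name for name, ok in results.items() if not ok]
assert not failed, f"Regression failures: {failed}"
```

The suite grows with each release, which is precisely where automation starts paying back the investment that a one-time UAT run does not.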
Summary
No matter which perspective of UAT is applied on your
projects, keep in mind that having real users test the
software in their own world, with their own processes, is not only helpful
but necessary.
Irrespective of your strategy or development approach, there
is a critical need to verify that the system fulfills requirements and to
validate that the system meets user needs. Always keep in mind that defined
requirements may not reflect real user needs. Consequently, you need
both verification and validation.