
Acceptance testing

From Wikipedia, the free encyclopedia


Acceptance testing of an aircraft catapult

Six of the primary mirrors of the James Webb Space Telescope being prepared for acceptance
testing

In engineering and its various subdisciplines, acceptance testing is a test conducted to determine
if the requirements of a specification or contract are met. It may involve chemical tests, physical
tests, or performance tests.

In systems engineering it may involve black-box testing performed on a system (for example: a
piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior
to its delivery.[1]

In software testing, the ISTQB defines acceptance testing as: formal testing with respect to user needs,
requirements, and business processes conducted to determine whether a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to determine
whether or not to accept the system.[2] Acceptance testing is also known as user acceptance
testing (UAT), end-user testing, operational acceptance testing (OAT) or field (acceptance)
testing.

A smoke test may be used as an acceptance test prior to introducing a build of software to the
main testing process.[not verified in body]

Contents

1 Overview
2 Process
3 User acceptance testing
4 Operational acceptance testing
5 Acceptance testing in extreme programming
6 Types of acceptance testing
7 List of acceptance-testing frameworks
8 See also
9 References
10 Further reading
11 External links

Overview
Testing is a set of activities conducted to facilitate discovery and/or evaluation of properties of
one or more items under test.[3] Each individual test, known as a test case, exercises a set of
predefined test activities, developed to drive the execution of the test item to meet test objectives,
including correct implementation, error identification, quality verification and other valued
details.[3] The test environment is usually designed to be identical, or as close as possible, to the
anticipated production environment. It includes all facilities, hardware, software, firmware,
procedures and/or documentation intended for or used to perform the testing of software.[3]

UAT and OAT test cases are ideally derived in collaboration with business customers, business
analysts, testers, and developers. It is essential that these tests cover both business logic and
operational environment conditions. The business customers (product owners) are the primary
stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria,
the stakeholders are reassured that development is progressing in the right direction.[4]

User acceptance test (UAT) criteria (in agile software development) are usually created
by business customers and expressed in a business domain language. These are high-level
tests to verify the completeness of a user story or stories 'played' during any
sprint/iteration.
Operational acceptance test (OAT) criteria (regardless of whether agile, iterative or
sequential development is used) are defined in terms of functional and non-functional
requirements, covering key quality attributes of functional stability, portability and
reliability.

Process
The acceptance test suite may need to be performed multiple times, as all of the test cases may
not be executed within a single test iteration.[5]

The acceptance test suite is run using predefined acceptance test procedures that direct the testers
on which data to use, the step-by-step processes to follow, and the expected result of each
execution. The actual results are retained for comparison with the expected results.[5] If the actual
results match the expected results for a test case, that test case is said to pass. If the number of
failing test cases does not breach the project's predetermined threshold, the test suite is said to
pass; if it does breach the threshold, the system may be rejected or accepted on conditions
previously agreed between the sponsor and the manufacturer.

The anticipated result of a successful test execution:


test cases are executed, using predetermined data
actual results are recorded
actual and expected results are compared, and
test results are determined.

The objective is to provide confidence that the developed product meets both the functional and
non-functional requirements. The purpose of conducting acceptance testing is that, once it is
completed and the acceptance criteria are met, the sponsors are expected to sign off on the
product development or enhancement as satisfying the defined requirements (previously agreed
between the business and the product provider/developer).
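
The pass/fail logic described above can be expressed compactly in code. The following is a minimal sketch, not tied to any particular standard or tool: it assumes each executed test case records its predetermined expected result alongside the actual result, and that the project's threshold is a simple count of allowed failures. All names (TestCase, evaluate_suite, failure_threshold) are illustrative.

# Minimal sketch of the acceptance-suite pass/fail logic described above.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    expected: str   # predetermined expected result
    actual: str     # actual result recorded during execution

def evaluate_suite(cases: list[TestCase], failure_threshold: int) -> bool:
    """A suite passes when the number of failing cases stays within the agreed threshold."""
    failures = [c for c in cases if c.actual != c.expected]
    for c in failures:
        print(f"FAIL: {c.name}: expected {c.expected!r}, got {c.actual!r}")
    return len(failures) <= failure_threshold

if __name__ == "__main__":
    suite = [
        TestCase("login with valid credentials", "dashboard shown", "dashboard shown"),
        TestCase("export monthly report", "PDF generated", "error 500"),
    ]
    print("Suite passed" if evaluate_suite(suite, failure_threshold=0) else "Suite rejected")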

User acceptance testing


User acceptance testing (UAT) consists of a process of verifying that a solution works for the
user.[6] It is not system testing (ensuring software does not crash and meets documented
requirements), but rather ensures that the solution will work for the user (i.e., tests that the user
accepts the solution); software vendors often refer to this as "Beta testing".

This testing should be undertaken by a subject-matter expert (SME), preferably the owner or
client of the solution under test, who provides a summary of the findings so that confirmation to
proceed can be given after trial or review. In software development, UAT, as one of the final
stages of a project, often occurs before a client or customer accepts the new system. Users of the
system perform tests in line with what would occur in real-life scenarios.[7]

It is important that the materials given to the tester be similar to the materials that the end user
will have. Testers should be given real-life scenarios such as the three most common or difficult
tasks that the users they represent will undertake.[citation needed]

The UAT acts as a final verification of the required business functionality and proper functioning
of the system, emulating real-world conditions on behalf of the paying client or a specific large
customer. If the software works as required and without issues during normal use, one can
reasonably extrapolate the same level of stability in production.[8]

User tests, usually performed by clients or by end-users, do not normally focus on identifying
simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software
crashes; testers and developers identify and fix these issues during earlier unit testing, integration
testing, and system testing phases.

UAT should be executed against test scenarios.[citation needed] Test scenarios usually differ from
System or Functional test cases in that they represent a "player" or "user" journey. The broad
nature of the test scenario ensures that the focus is on the journey and not on technical or system-
specific details, staying away from "click-by-click" test steps to allow for a variance in users'
behaviour. Test scenarios can be broken down into logical "days", which are usually where the
actor (player/customer/operator) or system (backoffice, front end) changes.[citation needed]
In industry, a common UAT is a factory acceptance test (FAT). This test takes place before
installation of the equipment. Most of the time testers not only check that the equipment meets
the specification, but also that it is fully functional. A FAT usually includes a check of
completeness, a verification against contractual requirements, a proof of functionality (either by
simulation or a conventional function test) and a final inspection.[9][10]

The results of these tests give clients confidence in how the system will perform in production.
There may also be legal or contractual requirements for acceptance of the system.

Operational acceptance testing


Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a
product, service or system as part of a quality management system. OAT is a common type of
non-functional software testing, used mainly in software development and software maintenance
projects. This type of testing focuses on the operational readiness of the system to be supported,
and/or to become part of the production environment.

Acceptance testing in extreme programming


Acceptance testing is a term used in agile software development methodologies, particularly
extreme programming, referring to the functional testing of a user story by the software
development team during the implementation phase.[11]

The customer specifies scenarios to test when a user story has been correctly implemented. A
story can have one or many acceptance tests, whatever it takes to ensure the functionality works.
Acceptance tests are black-box system tests. Each acceptance test represents some expected
result from the system. Customers are responsible for verifying the correctness of the acceptance
tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance
tests are also used as regression tests prior to a production release. A user story is not considered
complete until it has passed its acceptance tests. This means that new acceptance tests must be
created for each iteration or the development team will report zero progress.[12]
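
To make this concrete, the following is a minimal, hypothetical example of how an XP team might automate an acceptance test for a user story such as "a customer can apply a discount code at checkout", written here in Python with pytest. The Checkout class and its methods are invented stand-ins for the real system under test, which the acceptance test treats as a black box; the expected totals stand for results agreed with the customer.

import pytest

class Checkout:
    # Stand-in for the real system under test; an actual acceptance test would
    # exercise the deployed system through its public interface.
    def __init__(self, subtotal: float):
        self.subtotal = subtotal
        self.discount = 0.0

    def apply_discount(self, code: str) -> None:
        if code == "SAVE10":
            self.discount = 0.10
        else:
            raise ValueError("unknown discount code")

    @property
    def total(self) -> float:
        return round(self.subtotal * (1 - self.discount), 2)

def test_valid_discount_code_reduces_total():
    checkout = Checkout(subtotal=100.00)
    checkout.apply_discount("SAVE10")
    assert checkout.total == 90.00   # expected result agreed with the customer

def test_unknown_discount_code_is_rejected():
    checkout = Checkout(subtotal=100.00)
    with pytest.raises(ValueError):
        checkout.apply_discount("BOGUS")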


Types of acceptance testing



Typical types of acceptance testing include the following:

User acceptance testing


This may include factory acceptance testing (FAT), i.e. the testing done by a vendor
before the product or system is moved to its destination site, after which site acceptance
testing (SAT) may be performed by the users at the site.[13]
Operational acceptance testing
Also known as operational readiness testing, this refers to the checking done to a system
to ensure that processes and procedures are in place to allow the system to be used and
maintained. This may include checks done to back-up facilities, procedures for disaster
recovery, training for end users, maintenance procedures, and security procedures.
Contract and regulation acceptance testing
In contract acceptance testing, a system is tested against acceptance criteria as
documented in a contract, before the system is accepted. In regulation acceptance testing,
a system is tested to ensure it meets governmental, legal and safety standards.
Alpha and beta testing
Alpha testing takes place at developers' sites, and involves testing of the operational
system by internal staff, before it is released to external customers. Beta testing takes
place at customers' sites, and involves testing by a group of customers who use the
system at their own locations and provide feedback, before the system is released to other
customers. The latter is often called "field testing".

List of acceptance-testing frameworks



Concordion, Specification by example (SbE) framework
  o Concordion.NET, acceptance testing in .NET
Cucumber, a behavior-driven development (BDD) acceptance test framework
  o Capybara, acceptance test framework for Ruby web applications
  o Behat, BDD acceptance framework for PHP
  o Lettuce, BDD acceptance framework for Python
Fabasoft app.test for automated acceptance tests
Framework for Integrated Test (Fit)
  o FitNesse, a fork of Fit
iMacros
ItsNat, a Java Ajax web framework with built-in, server-based functional web testing capabilities
Mocha, a popular web acceptance test framework based on JavaScript and Node.js
Ranorex
Robot Framework
Selenium
Specification by example (Specs2)
Watir
Gauge (software)
See also

Software Testing portal

Acceptance sampling
Conference room pilot
Development stage
Dynamic testing
Engineering validation test
Grey box testing
Test-driven development
White box testing

References
1. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 0-470-40415-9.
2. Standard Glossary of Terms Used in Software Testing, Version 2.1. ISTQB. 2010.
3. ISO/IEC/IEEE 29119-1:2013, Software and Systems Engineering - Software Testing - Part 1: Concepts and Definitions. ISO. 2013. Retrieved 2014-10-14.
4. ISO/IEC/IEEE DIS 29119-4, Software and Systems Engineering - Software Testing - Part 4: Test Techniques. ISO. 2013. Retrieved 2014-10-14.
5. ISO/IEC/IEEE 29119-2:2013, Software and Systems Engineering - Software Testing - Part 2: Test Processes. ISO. 2013. Retrieved 2014-05-21.
6. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. Chapter 2. ISBN 9780132702621.
7. Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step-by-Step Guide. BCS Learning & Development Limited. ISBN 9781780171678.
8. Pusuluri, Nageshwar Rao (2006). Software Testing Concepts and Tools. Dreamtech Press. p. 62. ISBN 9788177227123.
9. "Factory Acceptance Test (FAT)". Tuv.com. Retrieved September 18, 2012.
10. "Factory Acceptance Test". Inspection-for-industry.com. Retrieved September 18, 2012.
11. "Introduction to Acceptance/Customer Tests as Requirements Artifacts". agilemodeling.com. Agile Modeling. Retrieved 9 December 2013.
12. Wells, Don. "Acceptance Tests". Extremeprogramming.org. Retrieved September 20, 2011.
13. Prasad, Durga (2012-03-29). "The Difference Between a FAT and a SAT". Kneat.com. Retrieved 2016-07-27.

Electronic test equipment


From Wikipedia, the free encyclopedia



Tektronix 7854 oscilloscope with curve tracer and time-domain reflectometer plug-ins. Lower module
has a digital voltmeter, a digital counter, an old WWVB frequency standard receiver with phase
comparator, and function generator.

Electronic test equipment is used to create signals and capture responses from electronic
devices under test (DUTs). In this way, the proper operation of the DUT can be proven or faults
in the device can be traced. Use of electronic test equipment is essential to any serious work on
electronics systems.

Practical electronics engineering and assembly requires the use of many different kinds of
electronic test equipment ranging from the very simple and inexpensive (such as a test light
consisting of just a light bulb and a test lead) to extremely complex and sophisticated such as
automatic test equipment (ATE). ATE often includes many of these instruments in real and
simulated forms.

Generally, more advanced test gear is necessary when developing circuits and systems than is
needed when doing production testing or when troubleshooting existing production units in the
field.[citation needed]

Contents

1 Test equipment switching


2 Types of test equipment
o 2.1 Basic equipment
o 2.2 Advanced or less commonly used equipment
2.2.1 Probes
2.2.2 Analyzers
2.2.3 Signal-generating devices
o 2.3 Miscellaneous devices
3 Platforms
o 3.1 GPIB/IEEE-488
o 3.2 LAN eXtensions for Instrumentation
o 3.3 VME eXtensions for Instrumentation
o 3.4 PCI eXtensions for Instrumentation
o 3.5 Universal Serial Bus
o 3.6 RS-232
o 3.7 Test script processors and a channel expansion bus
4 See also
5 References
6 External links

Test equipment switching


The addition of a high-speed switching system to a test system's configuration allows for faster,
more cost-effective testing of multiple devices, and is designed to reduce both test errors and
costs. Designing a test system's switching configuration requires an understanding of the signals
to be switched and the tests to be performed, as well as the switching hardware form factors
available.

Types of test equipment


Basic equipment

Agilent commercial digital voltmeter checking a prototype


The following items are used for basic measurement of voltages, currents, and components in the
circuit under test.

Voltmeter (Measures voltage)


Ohmmeter (Measures resistance)
Ammeter, e.g., galvanometer or milliammeter (Measures current)
Multimeter e.g., VOM (Volt-Ohm-Milliammeter) or DMM (Digital Multimeter) (Measures all of
the above)
RLC meter (Measures resistance, inductance, and capacitance)

The following are used for stimulus of the circuit under test:

Power supplies
Signal generator
Digital pattern generator
Pulse generator

A digital multimeter

The following analyze the response of the circuit under test:

Oscilloscope (Displays voltage as it changes over time)


Frequency counter (Measures frequency)

And connecting it all together:

Test probes

Advanced or less commonly used equipment

Meters

Solenoid voltmeter (Wiggy)


Clamp meter (current transducer)
Wheatstone bridge (Precisely measures resistance)
Capacitance meter (Measures capacitance)
LCR meter (Measures inductance, capacitance, resistance and combinations thereof)
EMF Meter (Measures Electric and Magnetic Fields)
Electrometer (Measures voltages, sometimes even tiny ones, via a charge effect)

Probes

A multimeter with a built-in clamp facility. Pushing the large button at the bottom opens the lower jaw
of the clamp, allowing the clamp to be placed around a conductor (wire). Depending on the sensor, some
can measure both AC and DC current.

RF probe
Signal tracer

Analyzers

Logic analyzer (Tests digital circuits)


Spectrum analyzer (SA) (Measures spectral energy of signals)
Protocol analyzer (Tests functionality, performance and conformance of protocols)
Vector signal analyzer (VSA) (Like the SA but it can also perform many more useful digital
demodulation functions)
Time-domain reflectometer (Tests integrity of long cables)
Semiconductor curve tracer

Signal-generating devices

Leader Instruments LSG-15 signal generator.

Signal generator, usually distinguished by frequency range (e.g., audio or radio frequencies) or
waveform type (e.g., sine, square, sawtooth, ramp, sweep, modulated)
Frequency synthesiser
Function generator
Digital pattern generator
Pulse generator
Signal injector

Miscellaneous devices

Boxcar averager
Continuity tester
Cable tester
Hipot tester
Network analyzer (used to characterize an electrical network of components)
Test light
Transistor tester
Tube tester

Platforms

Keithley Instruments Series 4200 CVU

Several modular electronic instrumentation platforms are currently in common use for
configuring automated electronic test and measurement systems. These systems are widely
employed for incoming inspection, quality assurance, and production testing of electronic
devices and subassemblies. Industry-standard communication interfaces link signal sources with
measurement instruments in rack-and-stack or chassis-/mainframe-based systems, often under
the control of a custom software application running on an external PC.

GPIB/IEEE-488

The General Purpose Interface Bus (GPIB) is an IEEE-488 (a standard created by the Institute of
Electrical and Electronics Engineers) standard parallel interface used for attaching sensors and
programmable instruments to a computer. GPIB is a digital 8-bit parallel communications
interface capable of achieving data transfers of more than 8 Mbytes/s. It allows daisy-chaining
up to 14 instruments to a system controller using a 24-pin connector. It is one of the most
common I/O interfaces present in instruments and is designed specifically for instrument control
applications. The IEEE-488 specifications standardized this bus and defined its electrical,
mechanical, and functional specifications, while also defining its basic software communication
rules. GPIB works best for applications in industrial settings that require a rugged connection for
instrument control.
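
As a concrete illustration, the following minimal sketch controls a GPIB-attached instrument from a PC using the PyVISA library. It assumes PyVISA and a VISA backend are installed; the GPIB address (primary address 14) is arbitrary, and the DC-voltage query is a typical SCPI command whose exact form varies between instrument models.

import pyvisa

rm = pyvisa.ResourceManager()
dmm = rm.open_resource("GPIB0::14::INSTR")   # hypothetical instrument at primary address 14

print(dmm.query("*IDN?"))                    # *IDN? identification query defined by IEEE 488.2
reading = dmm.query("MEAS:VOLT:DC?")         # common SCPI measurement query; command sets vary
print(f"DC voltage: {float(reading):.6f} V")

dmm.close()
rm.close()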

The original GPIB standard was developed in the late 1960s by Hewlett-Packard to connect and
control the programmable instruments the company manufactured. The introduction of digital
controllers and programmable test equipment created a need for a standard, high-speed interface
for communication between instruments and controllers from various vendors. In 1975, the IEEE
published ANSI/IEEE Standard 488-1975, IEEE Standard Digital Interface for Programmable
Instrumentation, which contained the electrical, mechanical, and functional specifications of an
interfacing system. This standard was subsequently revised in 1978 (IEEE-488.1) and 1990
(IEEE-488.2). The IEEE 488.2 specification includes the Standard Commands for Programmable
Instrumentation (SCPI), which define specific commands that each instrument class must obey.
SCPI ensures compatibility and configurability among these instruments.

The IEEE-488 bus has long been popular because it is simple to use and takes advantage of a
large selection of programmable instruments and stimuli. Large systems, however, have the
following limitations:

Driver fanout capacity limits the system to 14 devices plus a controller.


Cable length limits the controller-device distance to two meters per device or 20 meters total,
whichever is less. This imposes transmission problems on systems spread out in a room or on
systems that require remote measurements.
Primary addressing limits the system to 30 devices. Modern instruments rarely use secondary
addresses, so this puts a 30-device limit on system size.[1]

LAN eXtensions for Instrumentation

The LAN eXtensions for Instrumentation (LXI) standard defines the communication protocols for
instrumentation and data acquisition systems using Ethernet. These systems are based on small, modular instruments,
using low-cost, open-standard LAN (Ethernet). LXI-compliant instruments offer the size and
integration advantages of modular instruments without the cost and form factor constraints of
card-cage architectures. Through the use of Ethernet communications, the LXI Standard allows
for flexible packaging, high-speed I/O, and standardized use of LAN connectivity in a broad
range of commercial, industrial, aerospace, and military applications. Every LXI-compliant
instrument includes an Interchangeable Virtual Instrument (IVI) driver to simplify
communication with non-LXI instruments, so LXI-compliant devices can communicate with
devices that are not themselves LXI compliant (i.e., instruments that employ GPIB, VXI, PXI,
etc.). This simplifies building and operating hybrid configurations of instruments.
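
As a concrete illustration, the sketch below queries an Ethernet-connected instrument over a raw SCPI socket using only the Python standard library. The IP address is hypothetical; many LAN-based instruments expose a raw command socket on port 5025, but the port and the supported commands depend on the specific instrument.

import socket

def scpi_query(host: str, command: str, port: int = 5025, timeout: float = 2.0) -> str:
    # Open a TCP connection, send one SCPI command, and return the instrument's reply.
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        return sock.recv(4096).decode("ascii").strip()

if __name__ == "__main__":
    print(scpi_query("192.168.1.50", "*IDN?"))   # hypothetical instrument address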

LXI instruments sometimes employ scripting using embedded test script processors for
configuring test and measurement applications. Script-based instruments provide architectural
flexibility, improved performance, and lower cost for many applications. Scripting enhances the
benefits of LXI instruments, and LXI offers features that both enable and enhance scripting.
Although the current LXI standards for instrumentation do not require that instruments be
programmable or implement scripting, several features in the LXI specification anticipate
programmable instruments and provide useful functionality that enhances scripting capabilities
on LXI-compliant instruments.[2]

VME eXtensions for Instrumentation

The VME eXtensions for Instrumentation (VXI) bus architecture is an open standard platform
for automated test based on the VMEbus. Introduced in 1987, VXI uses all Eurocard form
factors and adds trigger lines, a local bus, and other functions suited for measurement
applications. VXI systems are based on a mainframe or chassis with up to 13 slots into which
various VXI instrument modules can be installed.[3] The chassis also provides all the power
supply and cooling requirements for the chassis and the instruments it contains. VXI bus
modules are typically 6U in height.

PCI eXtensions for Instrumentation

PCI eXtensions for Instrumentation, (PXI), is a peripheral bus specialized for data acquisition
and real-time control systems. Introduced in 1997, PXI uses the CompactPCI 3U and 6U form
factors and adds trigger lines, a local bus, and other functions suited for measurement
applications. PXI hardware and software specifications are developed and maintained by the PXI
Systems Alliance.[4] More than 50 manufacturers around the world produce PXI hardware.[5]

Universal Serial Bus

The Universal Serial Bus (USB) connects peripheral devices, such as keyboards and mice, to PCs.
USB is a plug-and-play bus that can handle up to 127 devices on one bus and has a theoretical
maximum throughput of 480 Mbit/s (high-speed USB, as defined by the USB 2.0 specification).
Because USB ports are standard features of PCs, they are a natural evolution of conventional
serial port technology. However, USB is not widely used for building industrial test and
measurement systems, for several reasons: USB cables are rarely industrial grade, are noise
sensitive, are not positively attached and so are rather easily detached, and the maximum distance
between the controller and the device is limited to a few meters. Like some other connections,
USB is primarily used for applications in a laboratory setting that do not require a rugged bus
connection.

RS-232

RS-232 is a specification for serial communication that is popular in analytical and scientific
instruments, as well as for controlling peripherals such as printers. Unlike GPIB, the RS-232
interface can connect and control only one device at a time. RS-232 is also a relatively slow
interface, with typical data rates of less than 20 kbytes/s. RS-232 is best suited for laboratory
applications compatible with a slower, less rugged connection.
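
As a concrete illustration, the following minimal sketch sends a query to a single RS-232-connected instrument using the pySerial library. The port name, baud rate, line termination and command string are assumptions that depend on the particular instrument's documentation.

import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    port.write(b"*IDN?\r\n")   # many serial instruments accept SCPI-style queries
    response = port.readline().decode("ascii", errors="replace").strip()
    print(f"Instrument reply: {response}")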

Test script processors and a channel expansion bus


One of the most recently developed test system platforms employs instrumentation equipped
with onboard test script processors combined with a high-speed bus. In this approach, one
master instrument runs a test script (a small program) that controls the operation of the various
slave instruments in the test system, to which it is linked via a high-speed LAN-based trigger
synchronization and inter-unit communication bus. Scripting is writing programs in a scripting
language to coordinate a sequence of actions.

This approach is optimized for small message transfers that are characteristic of test and
measurement applications. With very little network overhead and a 100 Mbit/s data rate, it is
significantly faster than GPIB and 100BaseT Ethernet in real applications.

The advantage of this platform is that all connected instruments behave as one tightly integrated
multi-channel system, so users can scale their test system to fit their required channel counts
cost-effectively. A system configured on this type of platform can stand alone as a complete
measurement and automation solution, with the master unit controlling sourcing, measuring,
pass/fail decisions, test sequence flow control, binning, and the component handler or prober.
Support for dedicated trigger lines means that synchronous operations between multiple
instruments equipped with onboard Test Script Processors that are linked by this high speed bus
can be achieved without the need for additional trigger connections.[6]
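
As a rough, hypothetical sketch of this approach, the fragment below downloads a small script to a script-capable master instrument over the LAN using PyVISA. The VISA address, the script text, and the assumption that script text can simply be written to the instrument are illustrative only; actual script-processor instruments define their own scripting language and download procedure, so the instrument documentation governs the details.

import pyvisa

# Hypothetical on-instrument script body; a real script would step the master's
# source, trigger the linked slave units, collect readings, and report pass/fail.
SCRIPT_TEXT = "-- illustrative script placeholder"

rm = pyvisa.ResourceManager()
master = rm.open_resource("TCPIP0::192.168.1.60::inst0::INSTR")  # illustrative address
master.write(SCRIPT_TEXT)   # send the script text to the master instrument
master.close()
rm.close()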

See also
Automatic test equipment
List of electrical and electronic measuring equipment
Load pull, a colloquial term applied to the process of systematically varying the impedance
presented to a device under test

References
1. ICS Electronics. "Extending the GPIB Bus". Retrieved December 29, 2009.
2. Franklin, Paul; Hayes, Todd A. "Benefits of LXI and Scripting". LXI Connection. July 2008. Retrieved January 5, 2010.
3. "Hardware Mechanical Components: VXI Chassis and Case Manufacturers". Retrieved December 30, 2009.
4. PXI Systems Alliance. "Specifications". Retrieved December 30, 2009.
5. PXI Systems Alliance. "Specifications". Archived 2010-09-05 at the Wayback Machine. Retrieved December 30, 2009.
6. Cigoy, Dale. "Smart Instruments Keep Up With Changing R&D Needs". R&D Magazine. Retrieved January 4, 2009.