Software testing includes activities such as test planning, analyzing, designing, and implementing tests, executing tests, reporting test progress and results, and evaluating the quality of a test object.

The Foundation in SW Testing

Objectives of Testing

Evaluate - work products such as requirements, user stories, design, and code

Verify - specified requirements have been fulfilled

Validate - test object is complete and works as expected

Build Confidence - in the level of quality of the test object

Prevent - defects

Find - failures

Inform - stakeholders of the level of quality of the test object so they can make informed decisions

Reduce - risk of inadequate software quality

Comply with - contractual, legal, or regulatory requirements or standards

Testing

Finds bugs

Typically done by testers

Can be automated

Can be done by anyone, not just the code's author

A major portion of testing can be done without detailed design knowledge

Debugging

Corrects bugs

Done by the programmer

Cannot be automated

Can be done only by the programmer or someone with access to the code

Cannot be done without proper design knowledge

Why Is Testing Necessary?

Rigorous testing reduces the risk of failure in operation, increases product quality, and helps meet contractual or legal requirements and industry-specific standards.

Testers in the Development Process

In requirements reviews or user story refinement, testers detect defects in work products; eliminating these defects reduces the risk of incorrect or untestable functionality being developed.

In the design phase, testers working with system designers increase each party's understanding of the design and how to test it, reducing the risk of fundamental design defects and enabling tests to be identified at an early stage.

In code development, testers working with developers increase each party's understanding of the code and how to test it.

Verifying and validating the software prior to release can detect failures that might otherwise have been missed, and can support the process of removing the defects that caused those failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality Assurance - Process oriented - prevents defects by ensuring that the processes used to manage and create deliverables work. QA is about engineering processes that assure quality is achieved in an effective and efficient way.

Quality Control - Product oriented - asks whether the product is what was expected. QA is proactive; QC is reactive. QC means checking the product against a predetermined set of requirements and validating that the product meets those requirements, e.g., technical reviews, software testing, and code inspections.

Testing - a subset of QC. The process of executing a system in order to detect bugs in the product so that they can be fixed. Testing is an integral part of QC, as it helps demonstrate that the product runs the way it is expected and designed to.

Errors, Defects, and Failures

An error (mistake) leads to defects (faults or bugs) in software code or other work products. A defect in the product does not necessarily trigger a failure in all circumstances; failures may occur rarely or never.

Reasons for Errors

Time pressure

Human Fallibility

Inexperienced or insufficiently skilled project participants

Miscommunication between project participants, including miscommunication about requirements and design

Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used

Misunderstandings about intra-system and inter-system interfaces, especially when such interactions are large in number

New, unfamiliar technologies

Failures can also be caused by environmental conditions. For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions

Not all unexpected test results are failures. False positives may occur due to errors in the way tests were executed, or due to defects in the test data, the test environment, or other testware, or for other reasons. The inverse situation can also occur, where similar errors or defects lead to false negatives. False negatives are tests that do not detect defects that they should have detected; false positives are reported as defects, but aren't actually defects.

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced

Testing Principles

Test Process

There is no universal test process, but there is a common set of test activities without which testing is less likely to achieve its established objectives: this is the test process. Which test activities are involved, how they are implemented, and when they occur depend on the context.

Contextual Factors

Software development lifecycle model and project methodologies being used

Test levels and test types being considered

Product and project risks

Business domain

Operational constraints, including but not limited to:

  • Budgets and resources

  • Timescales

  • Complexity

  • Contractual and regulatory requirements

Organizational policies and practices

Required internal and external standards

It is very useful if the test basis (for any level or type of  testing that is being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis. Each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
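
As a rough illustration, the sketch below computes that kind of coverage KPI from executed tests. The requirement IDs, device names, and test case IDs are hypothetical, not from the source.

```python
# A minimal sketch of coverage criteria as a KPI, assuming hypothetical
# requirement IDs and device names as the elements of the test basis.

requirements = ["REQ-1", "REQ-2", "REQ-3"]   # test basis elements
devices = ["Phone-A", "Phone-B"]             # also test basis elements

# Hypothetical traceability: which elements each executed test case covers.
executed_tests = {
    "TC-01": ["REQ-1", "Phone-A"],
    "TC-02": ["REQ-2", "Phone-A"],
}

covered = {e for elems in executed_tests.values() for e in elems}
basis = requirements + devices

coverage_pct = 100 * sum(e in covered for e in basis) / len(basis)
gaps = [e for e in basis if e not in covered]

print(f"Coverage: {coverage_pct:.0f}%")   # Coverage: 60%
print("Uncovered elements:", gaps)        # ['REQ-3', 'Phone-B']
```

The uncovered-elements list is exactly the "coverage gap" the criteria are meant to expose to stakeholders.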

Test Activities and Tasks

  • Test planning

  • Test monitoring and control

  • Test analysis

  • Test design

  • Test implementation

  • Test execution

  • Test completion 

Although many of these activities may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by ongoing planning, so test activities also happen on an iterative, continuous basis within this development approach. Even in sequential development, the stepped logical sequence of activities will involve overlap, combination, concurrency, or omission, so tailoring these main activities within the context of the system and the project is usually required.

Test Basis

The test basis is defined as the source for the creation of test cases. It can be the application itself or requirements documents such as an SRS (Software Requirement Specification) or a BRS (Business Requirement Specification).

Proof of concept (PoC) is a realization of a certain method or idea in order to demonstrate its feasibility, or a demonstration in principle with the aim of verifying that some concept or theory has practical potential. A proof of concept is usually small and may or may not be complete

Test Implementation Work Products

  • Test procedures and the sequencing of those test procedures

  • Test suites

  • A test execution schedule

Test achievements can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions. Implementation activities may involve test automation scripting and verification of the test data, the test environment, etc. Test data serve to assign concrete values to the inputs and expected results of test cases.

These values, together with explicit directions about their use, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results associated with concrete test data are identified by using a test oracle. In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly.
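
To make the high-level vs. low-level distinction concrete, here is a minimal sketch in Python (pytest). The discount rule, the threshold, and all names are assumptions for illustration; each parametrized row is one set of concrete test data with its expected result.

```python
# A minimal sketch, assuming a hypothetical loyalty-discount rule, of how
# concrete test data turns one high-level test case ("a loyal customer gets
# a discount") into several executable low-level test cases.

import pytest

def discount(years_as_customer: int) -> float:
    """Toy implementation under test (an assumption, not from the source)."""
    return 0.10 if years_as_customer >= 5 else 0.0

# Each tuple is one set of concrete test data plus its expected result,
# the expected result here coming from the rule acting as a simple oracle.
@pytest.mark.parametrize("years, expected", [
    (4, 0.0),    # just below the loyalty threshold
    (5, 0.10),   # exactly at the threshold
    (20, 0.10),  # long-standing customer
])
def test_loyalty_discount(years, expected):
    assert discount(years) == expected
```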


Test conditions defined in test analysis may be further refined in test implementation.

Test Execution Work Products

Include:

  • Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)

  • Defect reports

  • Documentation about which test item(s), test object(s), test tools, and testware were involved in the testing

 

Ideally, once test execution is done, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s).

This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.

Test Completion Work Products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations (e.g., following an Agile project retrospective), change requests or product backlog items, and finalized testware.

Test Oracle

A test oracle is a mechanism that determines whether software executed correctly for a test case. We define a test oracle to contain two essential parts: oracle information that represents the expected output, and an oracle procedure that compares the oracle information with the actual output.
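
A minimal sketch of those two parts, assuming a toy doubling specification; all function names are illustrative, not a standard API.

```python
# A minimal sketch of a test oracle as described above: oracle information
# (the expected output) plus an oracle procedure that compares it with the
# actual output. The system under test is a hypothetical example.

def oracle_information(test_input: int) -> int:
    # Expected output, e.g. derived from a specification or a trusted
    # reference model: the output should be double the input.
    return test_input * 2

def oracle_procedure(actual: int, expected: int) -> bool:
    # Compares the actual output with the oracle information.
    return actual == expected

def system_under_test(x: int) -> int:
    return x + x  # implementation being checked

for x in (0, 1, -3):
    verdict = oracle_procedure(system_under_test(x), oracle_information(x))
    print(x, "pass" if verdict else "fail")
```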

1.4.4 Traceability between the Test Basis and Test Work Products

To implement effective test monitoring and control, traceability between the test basis and test work products is essential. Good traceability supports:

  • Analyzing the impact of changes

  • Making testing auditable

  • Meeting IT governance criteria

  • Improving the understandability of test progress reports and test summary reports, so that they can include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests; see the sketch after this list)

  • Relating the technical aspects of testing to stakeholders in terms that they can understand

  • Providing information to assess product quality, process capability, and project progress against business goals 
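
As a rough sketch of the status reporting mentioned above, the snippet below derives the backward trace (test case to requirement) from a forward requirement-to-test mapping and reports each requirement as passed, failed, or pending. All IDs are hypothetical.

```python
# A minimal sketch of bi-directional traceability used for reporting,
# assuming hypothetical requirement and test case IDs.

req_to_tests = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],                 # no test yet: pending
}
test_results = {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"}

# Backward direction: which requirement(s) each test case traces to.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

# Per-requirement status, in stakeholder-friendly terms.
# Expected output: REQ-1 passed, REQ-2 failed, REQ-3 pending
for req, tests in req_to_tests.items():
    if not tests:
        status = "pending"
    elif all(test_results[tc] == "pass" for tc in tests):
        status = "passed"
    else:
        status = "failed"
    print(req, status)
```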

acceptance testing

alpha testing

beta testing

commercial off-the-shelf (COTS)

component integration testing

component testing

confirmation testing

contractual acceptance testing

functional testing

impact analysis

integration testing

interoperability testing - checks whether software can interoperate with other software components, products, or systems

maintenance testing

non-functional testing

operational acceptance testing

regression testing

regulatory acceptance testing

sequential development model

system integration testing

system testing

test basis

test case

test environment

test level

test object

test objective

test type

user acceptance testing

white-box testing

The Psychology of Testing

Human psychology has important effects on software testing

1.5.1 Human Psychology and Testing

Identification of an error or bug may be perceived as criticism of the product and of its author. Confirmation bias, an element of human psychology, can make it difficult to accept information that disagrees with a currently held belief. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news. To overcome these tendencies, information about defects and failures should be communicated in a constructive way. Testers need good interpersonal skills to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues.

  • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.

  • Emphasize the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organization, defects found and fixed during testing will save time and money and reduce overall risk to product quality.

  • Communicate test results and other findings in a neutral, fact-focused way without criticizing the person who created the defective item. Write objective and factual defect reports and review findings.

  • Try to understand how the other person feels and the reasons they may react negatively to the information.

  • Confirm that the other person has understood what has been said and vice versa.

Typical test objectives were discussed earlier (see section 1.1). Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviors with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

1.5.2 Tester’s and Developer’s Mindsets

Developers and testers often think differently. Developers aim to design and build a product; testers focus on verifying and validating it, and on finding defects as early as possible. Bringing these mindsets together helps to achieve a higher level of product quality.

A tester’s mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester’s mindset tends to grow and mature as the tester gains experience.

A developer's mindset may include some of the elements of a tester's mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for developers to find mistakes in their own work.

With the right mindset, developers are able to test their own code. Different software development lifecycle models often have different ways of organizing the testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective which is different than that of the work product authors (i.e., business analysts, product owners, designers, and programmers), since they have different cognitive biases from the authors.

Software Development Life Cycle Models and Testing 

 Sequential Development Models

-  Linear, sequential flow of activities - any phase in the development process should begin only when the previous phase is complete

-  In the Waterfall model, the development activities (e.g., requirements analysis, design, coding, testing) are completed one after another. In this model, test activities only occur after all other development activities have been completed.

- V-model integrates the test process throughout the development process, implementing the principle of early testing. Further, the V-model includes test levels associated with each corresponding development phase, which further supports early testing

- Sequential development models deliver software that contains the complete set of features, but typically require months or years for delivery to stakeholders and users.

2.1.2 Software Development Lifecycle Models in Context​

Depending on the context of the project, it may be necessary to combine or reorganize test levels and/or test activities. For example, for the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

Software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the back-end systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.

 Iterative and incremental development models

- Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally.

- Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration.

  • Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features

  • Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features

  • Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once 

  • Spiral (or prototyping): Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work

Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for the development of Internet of Things system versions. Additionally, the software development lifecycle of such objects places stronger emphasis on the later phases of the lifecycle, after the objects have been introduced to operational use (e.g., the operate, update, and decommission phases).

Test Levels

A test level is a group of test activities that are organized and managed together. Each level consists of the entire test process, from planning through reporting, and the levels vary from unit testing to user acceptance testing. Here we discuss component, integration, system, and acceptance testing. Each level is characterized by the following attributes: specific objectives; a test basis, referenced to derive test cases; a test object (i.e., what is being tested); typical defects and failures; and specific approaches and responsibilities.

For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.

Component

Unit or Module Testing

Verify whether the functional and non-functional behaviors of the component are as designed and specified

Preventing defects from escaping to higher test levels

In incremental and iterative development models, automated component regression tests play a key role in building confidence

Component testing may cover functionality (e.g., correctness of calculations), non-functional characteristics (e.g., searching for memory leaks), and structural properties (e.g., decision testing)

Test Basis

Work products that can be used as a test basis

Detailed design, Code, Data model, Component specifications

Test objects

Components, units or modules

Code and data structures

Classes

Database modules

Typical defects and failures

Incorrect functionality (e.g., not as described in design specifications)

Data flow problems

Incorrect code and logic

Defects are typically fixed as soon as they are found, often with no formal defect management. However, when developers do report defects, this provides important information for root cause analysis and process improvement

Specific approaches and responsibilities

Typically performed by the developer who wrote the code

Developers will often write and execute tests after having written the code for a component. However, in Agile development especially, writing automated component test cases may precede writing application code.
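
A minimal sketch of such an automated component test, using pytest. The is_leap_year component and its cases are assumptions for illustration; in a test-first workflow the cases below would be written before the function body.

```python
# A minimal sketch of an automated component (unit) test, written in the
# test-first style mentioned above. The component is hypothetical.

import pytest

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

@pytest.mark.parametrize("year, expected", [
    (2000, True),   # divisible by 400
    (1900, False),  # divisible by 100 but not 400
    (2024, True),   # divisible by 4
    (2023, False),  # not divisible by 4
])
def test_is_leap_year(year, expected):
    assert is_leap_year(year) == expected
```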

Integration

Objectives are similar to those of component testing

Automated integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems

Two different levels of integration: 

Component integration testing - focuses on the interactions and interfaces between integrated components.

System integration testing - focuses on the interactions and interfaces between systems, packages, and microservices. System integration testing can also cover interactions with, and interfaces provided by, external organizations (e.g., web services)

Test Basis

Software and system design, Sequence diagrams, Interface and communication protocol specifications, Use cases

Architecture at component or system level, Workflows, External interface definitions

Test objects

Subsystems, Databases

Infrastructure, Interfaces

APIs, Microservices

Typical defects and failures

component integration

Incorrect data, missing data, or incorrect data encoding

Incorrect sequencing or timing of interface calls

Interface mismatch

Failures in communication between components

Unhandled or improperly handled communication failures between components

Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components

system integration

Inconsistent message structures between systems

Incorrect data, missing data, or incorrect data encoding

Interface mismatch

Failures in communication between systems

Unhandled or improperly handled communication failures between systems

Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems

Failure to comply with mandatory security regulations

Component integration tests and system integration tests should concentrate on the integration itself, not on the functionality of the individual modules (see the sketch below)
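
A minimal sketch of that focus in Python: the test exercises the value passed across the interface between two hypothetical components (one produces cents, the other formats them), rather than the internals of either, targeting the "incorrect assumptions about units" defect class listed above.

```python
# A minimal sketch of a component integration test between two hypothetical
# components: an order component that supplies amounts in cents, and a
# price formatter that expects cents.

def order_total_cents(items: list[int]) -> int:
    """Component A: returns the order total in cents."""
    return sum(items)

def format_price(cents: int) -> str:
    """Component B: expects cents, renders dollars."""
    return f"${cents / 100:.2f}"

def test_order_total_interfaces_with_formatter():
    # Integration focus: the value passed across the interface, not the
    # internal logic of either component. If one side assumed dollars
    # instead of cents, this test would fail.
    total = order_total_cents([199, 250])
    assert format_price(total) == "$4.49"
```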

System

System testing focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks. The objectives of system testing are similar to those of other test levels.

Verifying data quality may be an objective for certain tests

Test basis

System and software requirement specifications (functional and non-functional)

Risk analysis reports

Use cases

Epics and user stories

Models of system behavior

State diagrams

System and user manuals

Test objects

Applications

Hardware/software systems

Operating systems

System under test (SUT)

System configuration and configuration data

Typical defects and failures

Incorrect calculations

Incorrect or unexpected system functional or non-functional behavior

Incorrect control and/or data flows within the system

Failure to properly and completely carry out end-to-end functional tasks

Failure of the system to work properly in the production environment(s)

Failure of the system to work as described in system and user manuals

Specific approaches and responsibilities

System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional. System testing should use the most appropriate techniques for the aspect(s) of the system to be tested. For example, a decision table may be created to verify whether functional behavior is as described in business rules.
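
A minimal sketch of a decision table driving such checks, assuming a hypothetical free-shipping business rule; each row of the table is one combination of conditions with its expected action.

```python
# A minimal sketch of decision-table-driven system checks for a
# hypothetical business rule: an order ships free if the customer is a
# member OR the order value is at least 50.

import pytest

def free_shipping(is_member: bool, order_value: float) -> bool:
    return is_member or order_value >= 50

# Decision table: (is_member, order_value >= 50) -> free shipping?
DECISION_TABLE = [
    (True,  True,  True),
    (True,  False, True),
    (False, True,  True),
    (False, False, False),
]

@pytest.mark.parametrize("member, big_order, expected", DECISION_TABLE)
def test_free_shipping_rules(member, big_order, expected):
    value = 50 if big_order else 49.99  # concrete data for each condition
    assert free_shipping(member, value) == expected
```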

Independent testers typically carry out system testing. Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behavior. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively. Early involvement of testers in user story refinement or static testing activities, such as reviews, helps to reduce the incidence of such situations.

Acceptance

Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include:

Establishing confidence in the quality of the system as a whole

Validating that the system is complete and will work as expected

Verifying that functional and non-functional behaviors of the system are as specified

Acceptance testing may produce information to assess the system's readiness for deployment and use by the customer (end-user). Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.

Common forms of acceptance testing include user acceptance testing, operational acceptance testing, contractual and regulatory acceptance testing, and alpha and beta testing.

Test basis

Business processes, User or business requirements

Regulations, legal contracts and standards, Use cases, System requirements, System or user documentation, Installation procedures, Risk analysis reports

In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the following work products can be used:

Backup and restore procedures, Disaster recovery procedures, Non-functional requirements, Operations documentation, Deployment and installation instructions, Performance targets, Database packages, Security standards or regulations

Typical test objects

System under test, System configuration and configuration data, Business processes for a fully integrated system, Recovery systems and hot sites (for business continuity and disaster recovery testing), Operational and maintenance processes, Forms, Reports, Existing and converted production data

Typical defects and failures

System workflows do not meet business or user requirements

Business rules are not implemented correctly

System does not satisfy contractual or regulatory requirements

Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform

Test Types

1. Functional Testing

Functional testing evaluates the functions that a system should perform. The functions may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented. The functions are "what" the system should do.

Functional testing should be performed at all test levels (e.g., tests for components may be based on a component specification), though the focus is different at each level. Functional testing considers the behavior of the software, so black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system.

The thoroughness of functional testing can be measured through functional coverage. Functional coverage is the extent to which some type of functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and functional requirements, the percentage of these requirements that are addressed by testing can be calculated, potentially identifying coverage gaps.

Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves.

2. Non-functional Testing

Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance efficiency, or security. Refer to ISO/IEC 25010 for a classification of software product quality characteristics. Non-functional testing is the testing of "how well" the system behaves. It can, and often should, be performed at all test levels and as early as possible; the late discovery of non-functional defects can be extremely dangerous to the success of a project.

Black-box techniques may be used to derive test conditions and test cases for non-functional testing. For example, boundary value analysis can be used to define the stress conditions for performance tests.
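
For instance, a rough sketch of that use of boundary value analysis, assuming a hypothetical rated capacity of 500 concurrent users:

```python
# A minimal sketch of boundary value analysis used to pick load levels for
# a performance test. The rated capacity is an assumed requirement.

RATED_CAPACITY = 500  # concurrent users (hypothetical requirement)

def boundary_values(limit: int) -> list[int]:
    """Classic BVA around a single upper limit: just below, at, just above."""
    return [limit - 1, limit, limit + 1]

stress_levels = boundary_values(RATED_CAPACITY)
print(stress_levels)  # [499, 500, 501] -> load levels for the stress test
```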

The thoroughness of non-functional testing can be measured through non-functional coverage. Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using traceability between tests and supported devices for a mobile application, the percentage of devices which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps

Non-functional test design and execution may involve special skills or knowledge, such as knowledge of the inherent weaknesses of a design or technology (e.g., security vulnerabilities associated with particular programming languages) or the particular user base (e.g., the personas of users of healthcare facility management systems).

3. White-box Testing

White-box testing derives tests based on the system’s internal structure or implementation. Internal structure may include code, architecture, work flows, and/or data flows within the system

The thoroughness of white-box testing can be measured through structural coverage. Structural coverage is the extent to which some type of structural element has been exercised by tests, and is expressed as a percentage of the type of element being covered

At the component testing level, code coverage is based on the percentage of component code that has been tested, and may be measured in terms of different aspects of code (coverage items), such as the percentage of executable statements tested in the component or the percentage of decision outcomes tested. These types of coverage are collectively called code coverage. At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests.

White-box test design and execution may involve special skills or knowledge, such as the way the code is built (e.g., to use code coverage tools), how data is stored (e.g., to evaluate possible database queries), and how to use coverage tools and correctly interpret their results.
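
A minimal sketch of the statement vs. decision coverage distinction described above, using a toy function (an assumption for illustration):

```python
# A minimal sketch showing why statement coverage and decision coverage can
# differ: one test can reach 100% statement coverage while leaving a
# decision outcome untested.

def apply_fee(amount: float, is_premium: bool) -> float:
    fee = 2.0
    if is_premium:      # one decision with two outcomes
        fee = 0.0
    return amount - fee

# This single test executes every statement (100% statement coverage),
# but only the True outcome of the decision (50% decision coverage).
assert apply_fee(10.0, True) == 10.0

# A second test exercises the False outcome, raising decision coverage
# to 100%.
assert apply_fee(10.0, False) == 8.0
```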

4. Change-related Testing

When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.

Confirmation testing: After a defect is fixed, the software may be tested with all test cases that failed due to the defect, which should be re-executed on the new software version. The software may also be tested with new tests if, for instance, the defect was missing functionality. At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.

Regression testing: It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behavior of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Regression testing involves running tests to detect such unintended side effects of changes.

Confirmation testing and regression testing are performed at all test levels.

Especially in iterative and incremental development lifecycles (e.g., Agile), new features, changes to existing features, and code refactoring result in frequent changes to the code, which also requires change-related testing. Due to the evolving nature of the system, confirmation and regression testing are very important. This is particularly relevant for Internet of Things systems, where individual objects (e.g., devices) are frequently updated or replaced.

Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation. Automation of these tests should start early in the project.
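
A minimal sketch of such an automated regression suite in pytest, assuming a hypothetical slugify utility; the last case began as a confirmation test for an assumed defect fix and was then kept in the regression suite.

```python
# A minimal sketch of an automated regression suite. The slugify utility
# and the defect it once had are assumptions for illustration.

import pytest

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

@pytest.mark.parametrize("title, expected", [
    ("Hello World", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    # Confirmation test for a hypothetical fixed defect: tab characters
    # were previously not collapsed. Kept here as a regression test.
    ("tab\tseparated", "tab-separated"),
])
def test_slugify_regression(title, expected):
    assert slugify(title) == expected
```

Because the whole suite is re-run on every change, an unintended side effect in slugify would surface immediately, which is exactly why such suites are strong candidates for early automation.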