SW Testing

Software testing encompasses planning, analyzing, designing, and implementing tests; executing them; reporting test progress and results; and evaluating the quality of a test object.

Objectives of Testing


- Evaluate: requirements, user stories, design, and code
- Verify: specified requirements have been fulfilled
- Validate: the test object is complete and works as expected
- Build confidence: in the level of quality of the test object
- Prevent: defects
- Find: failures
- Inform: stakeholders of the level of quality of the test object
- Reduce: the risk of inadequate software quality
- Comply with: contractual, legal, or regulatory requirements or standards



Testing (finding bugs):
- Done by testers
- Can be automated
- Can be done by anyone
- A major portion of testing can be done with design knowledge alone



Debugging (correcting bugs):
- Done by the programmer
- Cannot be automated
- Only the programmer, or someone with access to the code, can do it
- Cannot be done without proper design knowledge

Testing Necessary?

Relevance of Testing

- Reduce the risk of failure
- Increase product quality
- Meet contractual or legal requirements, or industry-specific standards

Testers in Development Process?

Need for Testers

- In requirements review or user story refinement: detect defects in work products; their elimination reduces the risk of incorrect or untestable functionality being developed
- In the design phase: increase understanding of the design and how to test it; reduce the risk of fundamental design defects and enable tests to be identified at an early stage
- In code development: increase each party's understanding of the code and how to test it
- Verify and validate: testing prior to release can detect failures that might otherwise have been missed, and supports the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.

Quality Assurance


Process oriented - preventing defects by ensuring the processes used to manage and create deliverables work. QA is about engineering processes that assure quality is achieved in an effective and efficient way.

Quality Control


Product oriented - checking the product against a predetermined set of requirements and validating that it meets those requirements, i.e., delivers what was expected. QA is proactive; QC is reactive. Examples: technical reviews, software testing, and code inspections.



Testing is a subset of QC: the process of executing a system in order to detect bugs in the product so that they can be fixed. Testing is an integral part of QC, as it helps demonstrate that the product runs the way it was expected and designed to.

Errors, Defects, and Failures


An error (mistake) leads to defects (faults or bugs) in software code or other work products. A defect in the product does not necessarily trigger a failure in all circumstances; a failure may occur rarely or never.

Reasons for Errors


- Time pressure
- Human fallibility
- Inexperienced or insufficiently skilled project participants
- Miscommunication between project participants, including miscommunication about requirements and design
- Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used
- Misunderstandings about intra-system and inter-system interfaces, especially when such interactions are large in number
- New, unfamiliar technologies

False positives


Not all unexpected test results are failures. They may occur due to errors in the way tests were executed, defects in the test data, the test environment, or other testware, or for other reasons. False positives are reported as defects, but aren't actually defects.

False negatives


Tests that do not detect defects that they should have detected
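The difference between a false negative and a genuine detection can be illustrated with a small sketch (a hypothetical function and tests, not from the syllabus). The weak test checks only the result's type, so it passes despite a real defect, i.e., a false negative; the strong test compares against the expected value and catches it.

```python
def average(values):
    # Defect: integer division truncates, e.g. average([1, 2]) -> 1 instead of 1.5
    return sum(values) // len(values)

def weak_test():
    # False negative: this check passes even though average() is defective.
    return isinstance(average([1, 2]), int)

def strong_test():
    # A proper expected-result comparison detects the failure.
    return average([1, 2]) == 1.5

assert weak_test() is True    # the defect goes unnoticed
assert strong_test() is False # the defect is detected
```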

Defects, Root Causes and Effects

The root causes of defects are the earliest actions or conditions that contributed to creating the defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar defects in the future. By focusing on the most significant root causes, root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced

Testing Principles 1

Testing shows the presence of defects, not their absence

Reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness

Testing Principles 2

Exhaustive testing is impossible

Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts

Testing Principles 3

Early testing saves time and money

Early testing is sometimes referred to as shift left. Testing early in the software development life cycle helps reduce or eliminate costly changes

Testing Principles 4

Defects cluster together

A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort

Testing Principles 5

Beware of the pesticide paradox

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written.

Testing Principles 6

Testing is context dependent

Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential life cycle project

Testing Principles 7

Absence-of-errors is a fallacy

It is a fallacy (i.e., a mistaken belief) to expect that merely finding and fixing a large number of defects will ensure the success of a system; hence the need for usability engineering.

Test Process


There is no universal process, but there are common sets of test activities without which testing is less likely to achieve its established objectives - the test process. What matters is which test activities are involved, how they are implemented, and when they occur.

Contextual Factors


- Software development lifecycle model and project methodologies being used
- Test levels and test types being considered
- Product and project risks
- Business domain
- Operational constraints, including but not limited to: budgets and resources, timescales, complexity, contractual and regulatory requirements
- Organizational policies and practices
- Required internal and external standards

Measurable Coverage Criteria

It is very useful if the test basis (for any level or type of testing being considered) has measurable coverage criteria defined. The coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives.

For example, for a mobile application, the test basis may include a list of requirements and a list of supported mobile devices. Each requirement is an element of the test basis; each supported device is also an element of the test basis. The coverage criteria may require at least one test case for each element of the test basis. Once executed, the results of these tests tell stakeholders whether specified requirements are fulfilled and whether failures were observed on supported devices.
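The mobile-app example above can be sketched as a small coverage computation (all identifiers and data here are hypothetical): each requirement and each supported device is an element of the test basis, and the KPI is the fraction of elements covered by at least one test case.

```python
# Hypothetical test basis: requirements plus supported devices.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
devices = ["Android-13", "iOS-17"]
test_basis = requirements + devices

# Which test basis elements each (hypothetical) test case covers.
test_cases = {
    "TC-01": ["REQ-1", "Android-13"],
    "TC-02": ["REQ-2", "iOS-17"],
}

# Coverage KPI: fraction of test basis elements with at least one test case.
covered = {element for elements in test_cases.values() for element in elements}
coverage = len(covered & set(test_basis)) / len(test_basis)
uncovered = sorted(set(test_basis) - covered)

print(f"Coverage: {coverage:.0%}, uncovered: {uncovered}")
# REQ-3 has no test case, so the criterion "one test case per element" is not met.
```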

Proof of concept (PoC)


It is a realization of a certain method or idea in order to demonstrate its feasibility, or a demonstration in principle with the aim of verifying that some concept or theory has practical potential. A proof of concept is usually small and may or may not be complete

Test Activities and Tasks


- Test planning
- Test monitoring and control
- Test analysis
- Test design
- Test implementation
- Test execution
- Test completion

Although many of these activities may appear logically sequential, they are often implemented iteratively. E.g., Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by on-going planning; so test activities also happen on an iterative, continuous basis within this development approach. Even in sequential development, the stepped logical sequence of activities will involve overlap, combination, concurrency, or omission, so tailoring these main activities within the context of the system and the project is usually required.

Test Planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context

Test monitoring and control


Test monitoring involves the on-going comparison of actual progress against the test plan using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which includes:
- Checking test results and logs against specified coverage criteria
- Assessing the level of component or system quality based on test results and logs
- Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)
Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.
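The exit-criteria evaluation described above can be sketched as a simple check of test results against a threshold (the results, the 90% pass-rate criterion, and all names here are hypothetical):

```python
# Hypothetical test execution log.
results = {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail", "TC-04": "pass"}

required_pass_rate = 0.90  # hypothetical exit criterion from the test plan

executed = len(results)
passed = sum(1 for outcome in results.values() if outcome == "pass")
pass_rate = passed / executed

# Test control decision: are more tests or fixes needed before stopping?
exit_criteria_met = pass_rate >= required_pass_rate
more_tests_needed = not exit_criteria_met

print(f"pass rate {pass_rate:.0%}, exit criteria met: {exit_criteria_met}")
```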

Test Analysis


Test analysis determines what to test in terms of measurable coverage criteria. Activities involved:
- Analyzing the test basis appropriate to the test level being considered: requirement specifications, design and implementation information, the implementation of the component itself, risk analysis reports
- Evaluating the test basis and test items to identify defects of various types, such as ambiguities, omissions, inconsistencies, inaccuracies, contradictions, and superfluous statements
- Identifying features and sets of features to be tested
- Defining and prioritizing test conditions for each feature based on analysis of the test basis, considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risk
- Capturing bi-directional traceability between each element of the test basis and the associated test conditions

Test Design


Test design answers how to test. It includes the following major activities:
- Designing and prioritizing test cases and sets of test cases
- Identifying necessary test data to support test conditions and test cases
- Designing the test environment and identifying any required infrastructure and tools
- Capturing bi-directional traceability between the test basis, test conditions, test cases, and test procedures (add link to 1.4.4)
The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques (add link to chapter 4). As with test analysis, test design may also result in the identification of similar types of defects in the test basis; this identification of defects during test design is an important potential benefit.

Test Implementation


Test implementation answers: do we now have everything in place to run the tests? Testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. Test implementation includes the following major activities:
- Developing and prioritizing test procedures, and, potentially, creating automated test scripts
- Creating test suites from the test procedures and (if any) automated test scripts
- Arranging the test suites within a test execution schedule in a way that results in efficient test execution (see section 5.2.4)
- Building the test environment (including, potentially, test harnesses, service virtualization, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
- Preparing test data and ensuring it is properly loaded in the test environment
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites (add link to section 1.4.4)
Test design and test implementation tasks are often combined. In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented (add link to section 4.4.2).

Test Execution


Test execution involves running test suites as per the test execution schedule. It includes:
- Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
- Executing tests either manually or by using test execution tools
- Comparing actual results with expected results
- Analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives also may occur [see section 1.2.3])
- Reporting defects based on the failures observed (see section 5.6)
- Logging the outcome of test execution (e.g., pass, fail, blocked)
- Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results
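The core execution loop (run each test, compare actual against expected results, log the outcome) can be sketched as follows; the test object, test cases, and identifiers are hypothetical:

```python
def add(a, b):
    # Hypothetical test object.
    return a + b

# Hypothetical low-level test cases with concrete inputs and expected results.
test_cases = [
    {"id": "TC-01", "inputs": (2, 3), "expected": 5},
    {"id": "TC-02", "inputs": (-1, 1), "expected": 0},
]

log = []
for tc in test_cases:
    actual = add(*tc["inputs"])  # execute the test
    # Compare actual result with expected result, then log the outcome.
    outcome = "pass" if actual == tc["expected"] else "fail"
    log.append({"id": tc["id"], "actual": actual, "outcome": outcome})

print(log)
```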

Test Completion


Collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished (e.g., as part of a retrospective meeting), a test level is completed, or a maintenance release has been completed. Test completion includes:
- Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
- Creating a test summary report to be communicated to stakeholders
- Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later reuse
- Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
- Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
- Using the information gathered to improve test process maturity

Test Implementation Work Products

Work Products

- Test procedures and the sequencing of those test procedures
- Test suites
- A test execution schedule

Test achievements can be demonstrated via bi-directional traceability between test procedures and specific elements of the test basis, through the test cases and test conditions. The activities may involve test automation scripting, verification of test data, the test environment, etc. Test data serve to assign concrete values to the inputs and expected results of test cases. These values, together with explicit directions about their use, turn high-level test cases into executable low-level test cases. The same high-level test case may use different test data when executed on different releases of the test object. The concrete expected results associated with concrete test data are identified by using a test oracle. In exploratory testing, some test design and implementation work products may be created during test execution, though the extent to which exploratory tests (and their traceability to specific elements of the test basis) are documented may vary significantly. Test conditions defined in test analysis may be further refined in test implementation.
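How concrete test data turns a high-level test case into executable low-level test cases can be sketched like this (the login scenario, the stand-in test object, and all credentials are hypothetical):

```python
# High-level test case: "logging in with invalid credentials is rejected".

def login(username, password):
    # Hypothetical stand-in test object with a single valid account.
    return username == "alice" and password == "s3cret"

# Concrete test data: each row turns the high-level case into an
# executable low-level test case with specific inputs.
invalid_credentials = [
    ("alice", "wrong-password"),
    ("", ""),
    ("bob", "s3cret"),
]

results = [login(user, pwd) for user, pwd in invalid_credentials]
assert not any(results)  # expected result: every invalid login is rejected
```

A different release of the test object could reuse the same high-level case with a different data set, as the paragraph above notes.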

Test Execution Work Products


- Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped)
- Defect reports (add link to section 5.6)
- Documentation about which test item(s), test object(s), test tools, and testware were involved in the testing

Ideally, once test execution is done, the status of each element of the test basis can be determined and reported via bi-directional traceability with the associated test procedure(s). This enables verification that the coverage criteria have been met, and enables the reporting of test results in terms that are understandable to stakeholders.

Test completion work products

Test completion work products include test summary reports, action items for improvement of subsequent projects or iterations (e.g., following a project Agile retrospective), change requests or product backlog items, and finalized testware.

Test Oracle


A test oracle is a mechanism that determines whether software executed correctly for a test case. We define a test oracle to contain two essential parts: oracle information that represents expected output; and an oracle procedure that compares the oracle information with the actual output
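The two parts of the definition above can be sketched directly: oracle information (the expected output) and an oracle procedure (the comparison). The sorting example and the use of Python's built-in sorted() as a reference are hypothetical illustrations:

```python
def oracle_procedure(actual, oracle_information):
    # Oracle procedure: compare actual output against the oracle information.
    return actual == oracle_information

def sort_under_test(values):
    # Hypothetical stand-in for the implementation being tested;
    # in practice this would be the test object's own sort routine.
    return sorted(values)

inputs = [3, 1, 2]
oracle_information = [1, 2, 3]  # oracle information: the expected output

verdict = oracle_procedure(sort_under_test(inputs), oracle_information)
print("pass" if verdict else "fail")
```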

Traceability between the Test Basis and Test Work Products

To implement effective test monitoring and control, traceability between the test basis and test work products is essential. Good traceability supports:
- Analyzing the impact of changes
- Making testing auditable
- Meeting IT governance criteria
- Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, and requirements that have pending tests)
- Relating the technical aspects of testing to stakeholders in terms that they can understand
- Providing information to assess product quality, process capability, and project progress against business goals
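Bi-directional traceability can be sketched as a pair of mappings between test basis elements and test cases, which directly supports the change-impact analysis listed above (all identifiers here are hypothetical):

```python
# Forward traceability: requirement -> test cases covering it (hypothetical IDs).
requirement_to_tests = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
}

# Derive the reverse direction so traceability is bi-directional.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

# Impact analysis: which tests must be re-run if REQ-1 changes?
impacted = requirement_to_tests["REQ-1"]
print(impacted)
```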

The Psychology of Testing


Identification of an error or bug may be perceived as criticism of the product and its author. Confirmation bias, an element of human psychology, can make it difficult to accept information that disagrees with a currently held belief. Further, it is a common human trait to blame the bearer of bad news, and information produced by testing often contains bad news. To overcome these shortcomings, information about defects and failures should be communicated in a constructive way. Good interpersonal skills are needed to communicate effectively about defects, failures, test results, test progress, and risks, and to build positive relationships with colleagues:
- Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
- Emphasize the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills. For the organization, defects found and fixed during testing will save time and money and reduce overall risk to product quality.
- Communicate test results and other findings in a neutral, fact-focused way without criticizing the person who created the defective item. Write objective and factual defect reports and review findings.
- Try to understand how the other person feels and the reasons they may react negatively to the information.
- Confirm that the other person has understood what has been said, and vice versa.
Typical test objectives were discussed earlier (see section 1.1). Clearly defining the right set of test objectives has important psychological implications. Most people tend to align their plans and behaviors with the objectives set by the team, management, and other stakeholders. It is also important that testers adhere to these objectives with minimal personal bias.

Tester's and Developer's Mindsets

Developers and testers often think differently. A developer's goal is to design and build a product; a tester's is verifying, validating, and finding defects as early as possible. Bringing these mindsets together helps achieve a higher level of product quality.

A tester's mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communications and relationships. A tester's mindset tends to grow and mature as the tester gains experience.

A developer's mindset may include some of the elements of a tester's mindset, but successful developers are often more interested in designing and building solutions than in contemplating what might be wrong with those solutions. In addition, confirmation bias makes it difficult for them to find mistakes in their own work. With the right mindset, however, developers are able to test their own code.

Different software development lifecycle models often have different ways of organizing testers and test activities. Having some of the test activities done by independent testers increases defect detection effectiveness, which is particularly important for large, complex, or safety-critical systems. Independent testers bring a perspective different from that of the work product authors (i.e., business analysts, product owners, designers, and programmers), since they have different cognitive biases from the authors.

The Foundation in SW Testing
