Defect, Failure, and Error

Defect

  • In ISTQB, a Defect (also known as a Bug) is a key concept that refers to any variance between the expected and actual behavior of a software product. Defects are central to software testing as they directly impact the quality of the software, and the goal of testing is to identify, track, and manage these defects effectively.

  • A Defect is an issue in the software that causes the system to behave incorrectly or unexpectedly. According to ISTQB, a defect is defined as: "A flaw in a component or system that can cause the component or system to fail to perform its required function."

Key Concepts

  • Failure: The inability of a system or component to perform its required function. Failures are usually the result of one or more defects.

Types of Defects

  • Defects can be categorized based on their impact, origin, and characteristics. For example:
  • Functional defects: Defects that occur when the software does not meet functional requirements, such as incorrect calculations, broken functionality, or missing features.

Defect Life Cycle (Bug Life Cycle)

  • The Defect Life Cycle (or Bug Life Cycle) refers to the various stages that a defect goes through from its initial discovery to its eventual resolution. The defect life cycle can vary slightly between organizations, but the general stages are as follows:
  • New: The defect is discovered and logged in a defect tracking system.
  • Retest: After a fix is delivered, the tester retests the defect to ensure the fix is effective.

Defect Severity

  • Severity refers to the impact a defect has on the functionality of the system. The severity is usually assigned by the tester and indicates how critical the defect is.

Severity Levels

  • Critical: A critical defect causes system failure or data loss and prevents the system from functioning. Immediate attention is required.

Defect Priority

  • Priority indicates how urgently a defect should be fixed. This is usually assigned by the project manager or product owner, based on business needs or customer impact.

Priority Levels

  • High: The defect must be fixed immediately because it impacts core functionality or a business-critical process.

Defect Reports

  • A Defect Report (also called a Bug Report) is a document that details the defect encountered during testing. The purpose of a defect report is to provide developers with enough information to reproduce, investigate, and fix the issue.

Key Components of a Defect Report

  • Defect ID: A unique identifier for tracking the defect.

  • Summary / Title: A brief description of the defect.

  • Description: Detailed information about the defect, including what was expected and what actually occurred.

  • Steps to Reproduce: The exact steps to replicate the defect.

  • Actual Result: The behavior of the system when the defect occurs.

  • Expected Result: What the system should do according to the requirements or design.

  • Screenshots / Logs: Visual evidence or logs to assist in reproducing or diagnosing the defect.

  • Severity and Priority: The criticality and urgency of the defect.

  • Environment: The environment where the defect was found, e.g., OS, browser, and version.
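
As a minimal sketch of how these components might be captured in a tracking system, the following Python structure mirrors the fields above; the field names and example values are illustrative assumptions, not any particular tool's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    """Illustrative record mirroring the key components of a defect report."""
    defect_id: str                 # unique identifier, e.g. "DEF-1024"
    summary: str                   # brief title of the defect
    description: str               # expected vs. actual behavior in detail
    steps_to_reproduce: List[str]
    actual_result: str
    expected_result: str
    severity: str                  # e.g. "Critical", "Major", "Minor"
    priority: str                  # e.g. "High", "Medium", "Low"
    environment: str               # OS, browser, version, etc.
    attachments: List[str] = field(default_factory=list)  # screenshots / logs

# Example with hypothetical values:
report = DefectReport(
    defect_id="DEF-1024",
    summary="Cart shows wrong price after applying discount",
    description="Expected a 10% discount to be applied; the total was unchanged.",
    steps_to_reproduce=["Add item to cart", "Apply code SAVE10", "View total"],
    actual_result="Total remains 100.00",
    expected_result="Total is 90.00",
    severity="Major",
    priority="High",
    environment="Windows 11, Chrome 126",
    attachments=["cart_total.png"],
)
```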

Defect Tracking System

  • A Defect Tracking System (or Bug Tracking Tool) is used to log, manage, and track defects through their life cycle. It helps in organizing and prioritizing defects, making collaboration between testers, developers, and managers more efficient.

Common Defect Tracking Tools

  • Jira: A widely used issue and project tracking tool for managing defects, tasks, and Agile workflows.

  • Bugzilla: An open-source bug tracking tool that allows detailed tracking and reporting of bugs.

  • Redmine: A flexible project management tool that includes bug tracking features.

  • Mantis: A simple, open-source defect tracking tool with a focus on ease of use.

Defect Metrics

  • Defect Metrics are quantitative measures used to assess the effectiveness of the defect management process. These metrics help in understanding the rate of defect identification, fixing, and closure, which in turn indicates the quality of the product and the efficiency of the testing process.

Common Defect Metrics

  • Defect Density: The number of defects found per unit of software, e.g., per thousand lines of code (KLOC).

  • Defect Removal Efficiency (DRE): The percentage of defects found and removed before release, compared to the total number of defects, including those found later in production.

  • Defect Arrival Rate: The number of defects identified in a given time period.

  • Defect Fix Rate: The rate at which defects are being fixed over a specific period.

  • Defect Leakage: The number of defects missed during testing but found in production.

  • Mean Time to Repair (MTTR): The average time taken to fix a defect once it is identified.
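
As a rough sketch, the arithmetic behind several of these metrics can be expressed directly; the sample figures below are made-up assumptions used only to show the calculations.

```python
def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def defect_removal_efficiency(found_in_testing: int, found_in_production: int) -> float:
    """Percentage of all defects that were caught before release."""
    total = found_in_testing + found_in_production
    return 100.0 * found_in_testing / total

def defect_leakage(found_in_production: int, found_in_testing: int) -> float:
    """Percentage of all defects that escaped testing and reached production."""
    total = found_in_testing + found_in_production
    return 100.0 * found_in_production / total

# Hypothetical project figures:
print(defect_density(45, 30.0))            # 1.5 defects per KLOC
print(defect_removal_efficiency(90, 10))   # 90.0 % DRE
print(defect_leakage(10, 90))              # 10.0 % leakage
```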

Defect Prevention

  • Defect Prevention focuses on identifying the root causes of defects and implementing measures to prevent them from occurring in the future. This is typically done through process improvements and proactive actions.

Key Techniques

  • Root Cause Analysis (RCA): A method used to determine the underlying causes of defects.

  • Preventive Actions: Steps taken to improve processes, training, or tools to reduce the likelihood of future defects.

  • Retrospectives: Sessions held after testing or development cycles to review what went wrong and how to prevent similar defects in the future.

Defect Leakage

  • Defect Leakage occurs when a defect is not identified during testing and is discovered after the product has been released. High defect leakage rates indicate a weakness in the testing process.

Key Concepts

  • Leakage Rate: A metric used to track how many defects escape the testing phase and are found in production.

  • Post-Release Defects: Defects found after the software has been deployed to production, often reported by end-users or customers.

Defect Triage

  • Defect Triage is the process of prioritizing and assigning defects based on their severity, impact, and priority. The goal is to determine which defects should be fixed first based on business and technical needs.

Key Concepts

  • Triage Meetings: Regular meetings where stakeholders (testers, developers, managers) review and prioritize defects.

  • Triage Criteria: Factors such as business impact, customer importance, criticality, and resource availability are used to prioritize defects.

Failure

  • In the ISTQB (International Software Testing Qualifications Board) framework, failure is a key term related to software testing. Understanding failure helps improve the quality of software and the effectiveness of testing. A failure is a deviation of the software from its expected behavior during operation. In other words, it occurs when the system behaves in a way that is inconsistent with the requirements or user expectations.

  • Example: A web page should display the correct price when an item is added to the cart, but it shows an incorrect price, indicating a failure.

  • Relation to Fault: Failures are caused by faults (or defects) in the software, but a fault does not always lead to a failure unless it is triggered.

Error

  • An error is a human action or mistake that produces an incorrect result. Errors made by developers, such as writing incorrect code, are often the root cause of faults in the system.

Defect (Bug or Fault)

  • A defect is a flaw in the software that can cause the system to fail during execution. It is the physical manifestation of an error made by a developer.

Incident

  • An incident is any event that requires investigation, potentially related to failure. When the system behaves unexpectedly, an incident is logged for further analysis to determine whether it is due to a failure.

Risk

  • Risk refers to the potential for a negative outcome, such as a failure, based on identified defects or errors. It can be technical, operational, or related to business outcomes.

Test Case

  • A test case is a set of inputs, execution conditions, and expected results developed for a particular objective, such as testing a specific path through the code.

Test Result

  • The result of executing a test case can either be a pass or a fail. A failure means that the system did not behave as expected based on the test.

Error

  • In ISTQB (International Software Testing Qualifications Board) terminology, error is closely tied to several key concepts in software testing.

  • An error refers to a human mistake made during the development or maintenance of the software. These mistakes can occur during requirements specification, design, coding, or other phases of software development.

  • Example: A developer accidentally writes incorrect logic in the code, such as using the wrong formula for a calculation.

  • Relation to Other Concepts: Errors lead to defects (also known as bugs or faults), which can cause the software to fail when executed.

Defect (Bug or Fault)

  • A defect is a flaw or imperfection in the software that results from an error. It is the physical manifestation of the mistake (error) made by the developer or other stakeholders.

Failure

  • Failure is the observable deviation of the software from its expected behavior during operation. A failure occurs when the software executes code containing a defect, which may have been caused by an earlier error.

Incident

  • An incident refers to any observed event that requires further investigation, often indicating a potential failure. The root cause of an incident might be a defect caused by an error.

Error Guessing

  • Error guessing is a test technique where the tester uses experience and intuition to guess where errors might occur in the system and design tests to uncover those errors.

Test Case

  • A test case is a set of input conditions, execution steps, and expected outcomes designed to verify specific functionality in the software.

Test Result

  • The outcome of executing a test case, which can either pass or fail. A failed test case could indicate an error in the software that has led to a defect.

Root Cause Analysis

  • This is the process of investigating the underlying cause of a defect or failure. The goal is to trace back the failure to the original error in the development process.

Error-based Testing

  • Error-based testing focuses on specific types of errors that commonly occur in certain types of software, such as memory leaks in embedded systems or boundary errors in calculations.

Test Case

  • A test case is a specific set of conditions under which a tester determines whether a feature or function in a software application is working as expected. It includes preconditions, inputs, actions, expected results, and postconditions.
  • Test Case ID: A unique identifier for each test case.

  • Description: The purpose or objective of the test.

  • Preconditions: The state the system must be in before the test can be executed.

  • Test Steps: A series of actions or inputs performed during the test.

  • Expected Result: The expected outcome based on the requirements.

  • Actual Result: The outcome observed when executing the test.

  • Status: Pass, fail, or blocked depending on the actual result.
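
As a small sketch, the components above could be recorded as a structured entry like the following; the identifier, steps, and wording are illustrative assumptions rather than any particular tool's format.

```python
test_case = {
    "id": "TC-001",                       # hypothetical identifier
    "description": "Verify login with valid credentials",
    "preconditions": ["User account 'alice' exists and is active"],
    "steps": ["Open the login page",
              "Enter a valid username and password",
              "Click Login"],
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,                # filled in during execution
    "status": "Not run",                  # Pass / Fail / Blocked after execution
}
```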

Test Scenario

  • A test scenario is a high-level description of what needs to be tested. It outlines the functionality or business process being tested but does not go into the details of test cases.

  • Relation to Test Case: A test scenario can include multiple test cases that explore various conditions or behaviors related to that scenario.

  • Example: A test scenario could be "Test the user login functionality," and it might include different test cases like logging in with valid credentials, invalid credentials, or leaving fields blank.
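
Building on the login example above, here is a minimal sketch of how one scenario can fan out into several test cases using pytest parametrization; the login behavior and credentials are assumptions made purely for illustration.

```python
import pytest

def login(username: str, password: str) -> str:
    """Hypothetical system under test: returns a status message."""
    if not username or not password:
        return "Field required"
    return "Welcome" if (username, password) == ("alice", "secret") else "Invalid credentials"

# One scenario ("Test the user login functionality"), three test cases:
@pytest.mark.parametrize("username, password, expected", [
    ("alice", "secret", "Welcome"),              # valid credentials
    ("alice", "wrong",  "Invalid credentials"),  # invalid credentials
    ("",      "",       "Field required"),       # blank fields
])
def test_user_login(username, password, expected):
    assert login(username, password) == expected
```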

Test Plan

  • A test plan is a document that outlines the strategy, scope, objectives, resources, schedule, and activities required for the testing process. It serves as a guide for the testing team to ensure that all aspects of the testing process are planned and tracked.
  • Test Scope: What will and will not be tested.

  • Objectives: The goals of testing.

  • Resources: People, tools, and environments needed.

  • Test Criteria: Entry and exit criteria for testing.

  • Schedule: Timeline for testing activities.

  • Risks and Mitigations: Potential risks and how they will be addressed.

Test Script

  • A test script is a set of instructions or code that automates the execution of test cases. It is often used in automated testing to repeat the same steps consistently across multiple test runs.

  • Relation to Test Case: A test script is often derived from a manual test case but is written in a programming language or testing framework to allow for automation.

  • Example: A Selenium script might be written to automate the process of logging into a web application, adding an item to the cart, and checking out.
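
A hedged sketch of such a Selenium script in Python follows; the URL, element IDs, and expected page title are assumptions, and the actual locators would depend on the application under test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local Chrome/WebDriver setup
try:
    driver.get("https://shop.example.com/login")          # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    driver.find_element(By.ID, "add-to-cart").click()      # hypothetical element IDs
    driver.find_element(By.ID, "checkout").click()

    assert "Order Confirmation" in driver.title            # expected result
finally:
    driver.quit()
```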

Test Log

  • A test log is a chronological record of all relevant details about the execution of test cases. It includes information such as the time of test execution, the environment used, test case results, and any anomalies encountered. The test log helps track the progress of testing, identify issues, and provide a history of what has been tested.

  • Example: A test log might indicate that a test case was executed on a particular date, who executed it, whether it passed or failed, and any defects logged as a result.
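
As a minimal sketch, a test log entry could be appended as one row of a CSV file; the column layout and values here are assumptions made for illustration.

```python
import csv
from datetime import datetime, timezone

def log_test_result(path: str, test_id: str, tester: str, status: str, defect_id: str = "") -> None:
    """Append one execution record (timestamp, environment, result) to a CSV test log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # time of execution
            test_id,
            tester,
            "Windows 11 / Chrome 126",               # assumed test environment
            status,                                  # Pass / Fail / Blocked
            defect_id,                               # defect logged as a result, if any
        ])

log_test_result("test_log.csv", "TC-001", "j.doe", "Fail", "DEF-1024")
```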

Test Environment

  • The test environment is the setup of software, hardware, network configurations, and other elements that are necessary for conducting tests. It should mimic the production environment as closely as possible to ensure that the results of the tests are valid.
  • Hardware: Servers, workstations, or other devices used for testing.

  • Software: Operating systems, databases, web servers, and the application being tested.

  • Network: Internet speed, firewalls, and proxies that might affect performance.

  • Tools: Testing frameworks, bug tracking systems, or monitoring tools.

Test Data

  • Test data is the data used as input during testing to verify that the software behaves as expected. It includes both positive and negative data (valid and invalid) to cover a range of scenarios.
  • Valid Data: Data that is within the expected input range.

  • Invalid Data: Data that is outside the expected input range and should trigger error handling.

  • Boundary Data: Data that is at the edge of the valid input range to test boundary conditions.
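
A small sketch of valid, invalid, and boundary test data for a hypothetical age field that accepts values from 18 to 65 inclusive; the rule and values are assumptions used only to illustrate the three categories.

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule: ages 18 through 65 are accepted."""
    return 18 <= age <= 65

valid_data    = [30, 45]           # within the expected input range
invalid_data  = [-1, 17, 66, 120]  # outside the range; should trigger error handling
boundary_data = [18, 65]           # edges of the valid input range

assert all(is_valid_age(a) for a in valid_data + boundary_data)
assert not any(is_valid_age(a) for a in invalid_data)
```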
