Defect, Failure, and Error
Defect
- In ISTQB, a Defect (also known as a Bug) refers to any variance between the expected and actual behavior of a software product. Defects are central to software testing because they directly affect the quality of the software, and the goal of testing is to identify, track, and manage them effectively.
- A defect is an issue in the software that causes the system to behave incorrectly or unexpectedly. ISTQB defines a defect as "a flaw in a component or system that can cause the component or system to fail to perform its required function."
Key Concepts
- Failure: The inability of a system or component to perform its required function. Failures are usually the result of one or more defects.
- Error: A human action that produces an incorrect result, such as a mistake made by a developer while coding.
- Fault: A condition that causes the software to fail or behave incorrectly. In ISTQB terminology, the terms defect and fault are often used interchangeably. The sketch below shows how the three terms connect.
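A minimal Python sketch of the chain (the `average` function and its inputs are hypothetical, chosen only to illustrate the terms): a human error introduces a fault in the code, and the fault produces a failure only when the faulty path is executed.

```python
def average(total: float, count: int) -> float:
    # Error: the developer forgot to handle count == 0.
    # That mistake leaves a fault (defect) in this function.
    return total / count

average(10.0, 2)  # Fault present but not triggered: no failure observed.

try:
    average(10.0, 0)  # Fault triggered by this input.
except ZeroDivisionError as exc:
    print(f"Failure observed: {exc}")  # The visible failure.
```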
Types of Defects
- Defects can be categorized based on their impact, origin, and characteristics:
- Functional Defects: Defects that occur when the software does not meet functional requirements, such as incorrect calculations, broken functionality, or missing features.
- Non-Functional Defects: Defects related to the quality attributes of the software, such as performance issues, security vulnerabilities, or usability problems.
- Interface Defects: Defects in the interaction between different modules or systems, such as incorrect API responses or data formatting issues.
- Regression Defects: Defects introduced into the software when changes are made, such as new features, bug fixes, or patches. These changes might unintentionally break existing functionality.
Defect Life Cycle (Bug Life Cycle)
- The Defect Life Cycle (or Bug Life Cycle) refers to the various stages that a defect goes through from its initial discovery to its eventual resolution. The defect life cycle can vary slightly between organizations, but the general stages are as follows:
- New: The defect is discovered and logged in a defect tracking system.
- Assigned: The defect is assigned to a developer or a team for investigation.
- Open / In Progress: The developer begins working on fixing the defect.
- Fixed: The defect has been resolved by the developer, and the fix is ready to be verified.
- Retest / Verification: The tester retests the defect to ensure the fix is effective.
- Closed: The tester confirms the defect is fixed, and it is marked as closed.
- Reopened: If the issue persists or the fix is ineffective, the defect is reopened for further investigation.
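One way to make these stages concrete is to model the life cycle as a small state machine. The sketch below reuses the stage names from the list above; the transition rules are an assumption, since organizations define their own workflows.

```python
from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    IN_PROGRESS = "Open / In Progress"
    FIXED = "Fixed"
    RETEST = "Retest / Verification"
    CLOSED = "Closed"
    REOPENED = "Reopened"

# Assumed transition rules; real trackers let teams customize these.
TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.IN_PROGRESS},
    DefectState.IN_PROGRESS: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.RETEST},
    DefectState.RETEST: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.ASSIGNED},
    DefectState.CLOSED: set(),
}

def move(current: DefectState, target: DefectState) -> DefectState:
    """Advance a defect to a new state, rejecting invalid jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target

state = move(DefectState.NEW, DefectState.ASSIGNED)   # OK
# move(DefectState.NEW, DefectState.CLOSED)           # raises ValueError
```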
Defect Severity
- Severity refers to the impact a defect has on the functionality of the system. The severity is usually assigned by the tester and indicates how critical the defect is.
Severity Levels
- Critical: A critical defect causes system failure or data loss and prevents the system from functioning. Immediate attention is required.
- Major: A major defect significantly impacts system functionality or usability, but the system can still operate with limited functionality.
- Moderate: A moderate defect affects some parts of the system but does not prevent the system from functioning correctly.
- Minor: A minor defect has little impact on the system’s functionality, such as cosmetic issues (e.g., UI bugs or spelling mistakes).
- Trivial: A trivial defect has little to no impact on functionality (e.g., a slight misalignment of UI elements).
Defect Priority
- Priority indicates how urgently a defect should be fixed. This is usually assigned by the project manager or product owner, based on business needs or customer impact.
Priority Levels
- High: The defect must be fixed immediately, as it impacts core functionality or a business-critical process.
- Medium: The defect should be fixed soon, but it does not block key functionality.
- Low: The defect is non-urgent and can be fixed later, often in a future release.
Defect Reports
- A Defect Report (also called a Bug Report) is a document that details the defect encountered during testing. The purpose of a defect report is to provide developers with enough information to reproduce, investigate, and fix the issue.
Key Components of a Defect Report
- Defect ID: A unique identifier for tracking the defect.
- Summary / Title: A brief description of the defect.
- Description: Detailed information about the defect, including what was expected and what actually occurred.
- Steps to Reproduce: The exact steps to replicate the defect.
- Actual Result: The behavior of the system when the defect occurs.
- Expected Result: What the system should do according to the requirements or design.
- Screenshots / Logs: Visual evidence or logs to assist in reproducing or diagnosing the defect.
- Severity and Priority: The criticality and urgency of the defect.
- Environment: The environment where the defect was found (e.g., OS, browser, and version).
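These components map naturally onto a simple record type. A minimal Python sketch (the field names and sample values are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Mirrors the components listed above."""
    defect_id: str                    # e.g. "BUG-1042"
    summary: str
    description: str
    steps_to_reproduce: list[str]
    actual_result: str
    expected_result: str
    severity: str                     # e.g. "Major"
    priority: str                     # e.g. "High"
    environment: str                  # e.g. "Windows 11, Chrome 126"
    attachments: list[str] = field(default_factory=list)  # screenshots / logs

report = DefectReport(
    defect_id="BUG-1042",
    summary="Cart shows wrong total after applying coupon",
    description="Expected the discounted total; the pre-discount total is shown.",
    steps_to_reproduce=["Add an item to the cart", "Apply coupon SAVE10", "Open the cart"],
    actual_result="Total ignores the coupon",
    expected_result="Total reflects the 10% discount",
    severity="Major",
    priority="High",
    environment="Windows 11, Chrome 126",
)
```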
Defect Tracking System
- A Defect Tracking System (or Bug Tracking Tool) is used to log, manage, and track defects through their life cycle. It helps in organizing and prioritizing defects, making collaboration between testers, developers, and managers more efficient.
Common Defect Tracking Tools
- Jira: A widely used issue and project tracking tool for managing defects, tasks, and Agile workflows.
- Bugzilla: An open-source bug tracking tool that allows detailed tracking and reporting of bugs.
- Redmine: A flexible project management tool that includes bug tracking features.
- Mantis: A simple, open-source defect tracking tool with a focus on ease of use.
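To illustrate how such tools are driven programmatically, here is a sketch that files a bug through Jira's REST API (the v2 create-issue endpoint); the base URL, project key, credentials, and field values are placeholders:

```python
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder
AUTH = ("user@example.com", "api-token")          # placeholder credentials

payload = {
    "fields": {
        "project": {"key": "SHOP"},               # placeholder project key
        "summary": "Cart shows wrong total after applying coupon",
        "description": "Expected the discounted total; got the pre-discount total.",
        "issuetype": {"name": "Bug"},
    }
}

resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created:", resp.json()["key"])             # e.g. "SHOP-123"
```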
Defect Metrics
- Defect Metrics are quantitative measures used to assess the effectiveness of the defect management process. These metrics help in understanding the rate of defect identification, fixing, and closure, which in turn indicates the quality of the product and the efficiency of the testing process.
Common Defect Metrics
- Defect Density: The number of defects found per unit of software size (e.g., per thousand lines of code).
- Defect Removal Efficiency (DRE): The percentage of defects found during testing out of the total defects found, including those that later surface in production.
- Defect Arrival Rate: The number of defects identified in a given time period.
- Defect Fix Rate: The rate at which defects are being fixed over a specific period.
- Defect Leakage: The number of defects missed during testing but found in production.
- Mean Time to Repair (MTTR): The average time taken to fix a defect once it is identified.
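Most of these metrics are simple ratios. A sketch of how they are typically computed, following the definitions above (exact conventions vary between organizations):

```python
def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects / kloc

def defect_removal_efficiency(found_in_testing: int, found_in_production: int) -> float:
    """Percentage of all defects caught before release."""
    total = found_in_testing + found_in_production
    return 100.0 * found_in_testing / total if total else 100.0

def leakage_rate(found_in_testing: int, found_in_production: int) -> float:
    """Percentage of all defects that escaped testing (see Defect Leakage below)."""
    total = found_in_testing + found_in_production
    return 100.0 * found_in_production / total if total else 0.0

# Worked example: 100 defects in 50 KLOC; 90 caught in testing, 10 leaked.
print(defect_density(100, 50.0))            # 2.0 defects per KLOC
print(defect_removal_efficiency(90, 10))    # 90.0 (%)
print(leakage_rate(90, 10))                 # 10.0 (%)
```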
Defect Prevention
- Defect Prevention focuses on identifying the root causes of defects and implementing measures to prevent them from occurring in the future. This is typically done through process improvements and proactive actions.
Key Techniques
- Root Cause Analysis (RCA): A method used to determine the underlying causes of defects.
- Preventive Actions: Steps taken to improve processes, training, or tools to reduce the likelihood of future defects.
- Retrospectives: Sessions held after testing or development cycles to review what went wrong and how to prevent similar defects in the future.
Defect Leakage
- Defect Leakage occurs when a defect is not identified during testing and is discovered after the product has been released. High defect leakage rates indicate a weakness in the testing process.
Key Concepts
- Leakage Rate: A metric used to track how many defects escape the testing phase and are found in production.
- Post-Release Defects: Defects found after the software has been deployed to production, often reported by end-users or customers.
Defect Triage
- Defect Triage is the process of prioritizing and assigning defects based on their severity, impact, and priority. The goal is to determine which defects should be fixed first based on business and technical needs.
Key Concepts
- Triage Meetings: Regular meetings where stakeholders (testers, developers, managers) review and prioritize defects.
- Triage Criteria: Factors such as business impact, customer importance, criticality, and resource availability are used to prioritize defects.
Failure
- In the ISTQB (International Software Testing Qualifications Board) framework, failure is a key term related to software testing. Understanding failure helps improve the quality of software and the effectiveness of testing. A failure is a deviation of the software from its expected behavior during operation; in other words, it occurs when the system behaves in a way that is inconsistent with the requirements or user expectations.
- Example: A web page should display the correct price when an item is added to the cart, but it shows an incorrect price, indicating a failure.
- Relation to Fault: Failures are caused by faults (or defects) in the software, but a fault does not always lead to a failure unless it is triggered.
Error
- Definition: An error is a human action or mistake that produces an incorrect result. Errors made by developers, such as writing incorrect code, are often the root cause of faults in the system.
- Relation to Failure: Errors lead to defects (or faults), which, when executed, may cause the software to fail.
- Example: A programmer accidentally writes code that causes a division by zero.
Defect (Bug or Fault)
- Definition: A defect is a flaw in the software that can cause the system to fail during execution. It is the physical manifestation of an error made by a developer.
- Relation to Failure: When a defect is executed, it can cause a failure, but not all defects result in visible failures (e.g., a defect in dead code is never executed).
- Example: A missing validation check in the code allows users to input invalid data, leading to a failure.
Incident
- Definition: An incident is any event that requires investigation and is potentially related to a failure. When the system behaves unexpectedly, an incident is logged for further analysis to determine whether it is due to a failure.
- Relation to Failure: An incident is typically the first signal of a failure in a system.
- Example: Users report that a web page crashes during checkout, which is treated as an incident to be analyzed by the development team.
Risk
- Definition: Risk refers to the potential for a negative outcome, such as a failure, based on identified defects or errors. It can be technical, operational, or related to business outcomes.
- Relation to Failure: Higher-risk areas in software have a greater likelihood of failure, which influences how testing is prioritized.
- Example: A module handling financial transactions is considered high risk because a failure in this area could have significant business impact.
Test Case
- Definition: A test case is a set of inputs, execution conditions, and expected results developed for a particular objective, such as testing a specific path through the code.
- Relation to Failure: Test cases are designed to detect failures by validating whether the actual behavior matches the expected behavior.
- Example: A test case to verify that a login page rejects incorrect passwords.
Test Result
- Definition: The result of executing a test case, which can be either a pass or a fail. A fail means that the system did not behave as expected based on the test.
- Relation to Failure: If the test fails, it indicates the presence of a failure in the software.
- Example: A test case expects a login form to reject invalid usernames, but the system accepts them, resulting in a failure.
Error
- In ISTQB terminology, error is closely tied to several key concepts in software testing.
- An error refers to a human mistake made during the development or maintenance of the software. These mistakes can occur during requirements specification, design, coding, or other phases of software development.
- Example: A developer accidentally writes incorrect logic in the code, such as using the wrong formula for a calculation.
- Relation to Other Concepts: Errors lead to defects (also known as bugs or faults), which can cause the software to fail when executed.
Defect (Bug or Fault)
- Definition: A defect is a flaw or imperfection in the software that results from an error. It is the physical manifestation of the mistake (error) made by the developer or other stakeholders.
- Relation to Error: Errors directly cause defects. A defect occurs because the software does not conform to the expected requirements due to the error.
- Example: An error in writing the logic for tax calculations could lead to a defect where the system calculates the wrong tax amount.
Failure
- Definition: Failure is the observable deviation of the software from its expected behavior during operation. A failure occurs when the software executes a defect, which may have been caused by an earlier error.
- Relation to Error: Errors lead to defects, and when those defects are executed, they may cause the system to fail visibly.
- Example: Due to an error in coding, a program crashes when a user submits a form. This crash is a failure.
Incident
- Definition: An incident refers to any observed event that requires further investigation, often indicating a potential failure. The root cause of an incident might be a defect caused by an error.
- Relation to Error: Incidents are often the result of failures, which ultimately stem from errors in the software’s development.
- Example: Users report that a page freezes during checkout. This incident is investigated, and the root cause might be an error in the code that led to a defect.
Error Guessing
- Definition: Error guessing is a test technique where the tester uses experience and intuition to guess where errors might occur in the system and designs tests to uncover them.
- Relation to Error: The focus is on identifying areas of the system where errors are likely, which could lead to defects and failures.
- Example: An experienced tester might focus on areas where errors like null pointer exceptions or boundary-condition mistakes commonly occur.
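A short pytest sketch of error guessing in practice (the `apply_discount` function under test is hypothetical): the tests deliberately probe zeros, boundaries, and out-of-range values, the places where coding errors tend to hide.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Guessed weak spots: zeros and boundary values.
@pytest.mark.parametrize("price, percent, expected", [
    (19.99, 0, 19.99),    # zero discount: price unchanged
    (19.99, 100, 0.0),    # boundary: full discount
    (0.0, 50, 0.0),       # zero price
])
def test_discount_boundaries(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_discount_rejects_out_of_range():
    # A classic spot for a missing validation check.
    with pytest.raises(ValueError):
        apply_discount(19.99, -5)
```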
Test Case
- Definition: A test case is a set of input conditions, execution steps, and expected outcomes designed to verify specific functionality in the software.
- Relation to Error: Test cases are created to detect errors and the resulting defects before the software is released. If an error has been made in the code, running the test case should expose the defect.
- Example: A test case checks whether the login page correctly handles incorrect passwords. If there is an error in the logic that checks passwords, the defect will be uncovered.
Test Result
- Definition: The outcome of executing a test case, which can be either pass or fail. A failed test case could indicate an error in the software that has led to a defect.
- Relation to Error: A failing test result often points to an error in the software that led to the execution of a defect.
- Example: A test case designed to verify the correctness of a price calculation fails, which indicates that there might be an error in the pricing logic.
Root Cause Analysis
- Definition: The process of investigating the underlying cause of a defect or failure. The goal is to trace the failure back to the original error in the development process.
- Relation to Error: In root cause analysis, the team identifies the original human error (whether in requirements, design, or code) that caused the defect, which then led to the failure.
- Example: If an issue in the system is traced back to a misunderstanding in the requirements document, this misunderstanding represents the error that caused the defect.
Error-based Testing
- Definition: Error-based testing focuses on specific types of errors that commonly occur in certain types of software, such as memory leaks in embedded systems or boundary errors in calculations.
- Relation to Error: By focusing on typical errors, this approach aims to uncover the defects that result from them.
- Example: A tester writes test cases that specifically target buffer overflows because such errors are common in low-level programming.
Test Case
- A test case is a specific set of conditions under which a tester determines whether a feature or function in a software application is working as expected. It includes preconditions, inputs, actions, expected results, and postconditions.
- Components:
- Test Case ID: A unique identifier for each test case.
- Description: The purpose or objective of the test.
- Preconditions: The state the system must be in before the test can be executed.
- Test Steps: A series of actions or inputs performed during the test.
- Expected Result: The expected outcome based on the requirements.
- Actual Result: The outcome observed when executing the test.
- Status: Pass, fail, or blocked, depending on the actual result.
- Example: A test case for a login page could involve inputting valid credentials to verify that the user can successfully log in.
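A minimal pytest sketch of how these components map onto an automated check (the `login` stub, the test case ID, and the credentials are hypothetical):

```python
def login(username: str, password: str) -> bool:
    """Hypothetical system under test: accepts one known account."""
    return username == "alice" and password == "s3cret!"

def test_tc001_valid_login():
    # Test Case ID: TC-001 | Description: valid credentials are accepted.
    # Preconditions: account "alice" exists (stubbed in login() above).
    # Test Steps: submit a valid username and password.
    actual = login("alice", "s3cret!")   # Actual Result
    assert actual is True                # Expected Result; the assert sets the Status
```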
Test Scenario
- A test scenario is a high-level description of what needs to be tested. It outlines the functionality or business process being tested but does not go into the details of test cases.
- Relation to Test Case: A test scenario can include multiple test cases that explore various conditions or behaviors related to that scenario.
- Example: A test scenario could be "Test the user login functionality," and it might include different test cases like logging in with valid credentials, invalid credentials, or leaving fields blank.
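Continuing the hypothetical `login` stub from the previous sketch, the scenario above expands into several concrete test cases:

```python
def login(username: str, password: str) -> bool:
    """Hypothetical system under test."""
    return username == "alice" and password == "s3cret!"

# Scenario: "Test the user login functionality" -- one scenario, three cases.
def test_login_with_valid_credentials():
    assert login("alice", "s3cret!") is True

def test_login_with_invalid_credentials():
    assert login("alice", "wrong") is False

def test_login_with_blank_fields():
    assert login("", "") is False
```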
Test Plan
- A test plan is a document that outlines the strategy, scope, objectives, resources, schedule, and activities required for the testing process. It serves as a guide for the testing team to ensure that all aspects of the testing process are planned and tracked.
- Components:
- Test Scope: What will and will not be tested.
- Objectives: The goals of testing.
- Resources: People, tools, and environments needed.
- Test Criteria: Entry and exit criteria for testing.
- Schedule: Timeline for testing activities.
- Risks and Mitigations: Potential risks and how they will be addressed.
- Example: A test plan for an e-commerce website might specify that testing will focus on payment processing, user registration, and search functionalities.
Test Script
- A test script is a set of instructions or code that automates the execution of test cases. It is often used in automated testing to repeat the same steps consistently across multiple test runs.
- Relation to Test Case: A test script is often derived from a manual test case but is written in a programming language or testing framework to allow for automation.
- Example: A Selenium script might be written to automate the process of logging into a web application, adding an item to the cart, and checking out.
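A sketch of such a Selenium script in Python (the URL, element IDs, and confirmation text are hypothetical and would need to match the application under test):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Log in (hypothetical page and element IDs).
    driver.get("https://shop.example.com/login")
    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("s3cret!")
    driver.find_element(By.ID, "login-button").click()

    # Add an item to the cart and check out.
    driver.find_element(By.ID, "add-to-cart").click()
    driver.find_element(By.ID, "checkout").click()

    # Verify the expected result.
    assert "Order confirmation" in driver.page_source
finally:
    driver.quit()
```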
Test Log
- A test log is a chronological record of all relevant details about the execution of test cases. It includes information such as the time of test execution, the environment used, test case results, and any anomalies encountered. The test log helps track the progress of testing, identify issues, and provide a history of what has been tested.
- Example: A test log might indicate that a test case was executed on a particular date, who executed it, whether it passed or failed, and any defects logged as a result.
Test Environment
- The test environment is the setup of software, hardware, network configurations, and other elements that are necessary for conducting tests. It should mimic the production environment as closely as possible to ensure that the results of the tests are valid.
- Components:
- Hardware: Servers, workstations, or other devices used for testing.
- Software: Operating systems, databases, web servers, and the application being tested.
- Network: Internet speed, firewalls, and proxies that might affect performance.
- Tools: Testing frameworks, bug tracking systems, or monitoring tools.
- Example: A test environment for a mobile app might include various devices with different operating systems (iOS and Android) to ensure the app works correctly across platforms.
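One common way to run the same tests across several such environments is to describe them as data and parametrize a fixture over them. A pytest sketch (the environment matrix is hypothetical; a real setup would provision actual devices or emulators):

```python
import pytest

# Hypothetical environment matrix for a cross-platform run.
ENVIRONMENTS = [
    {"os": "iOS 17", "device": "iPhone 15"},
    {"os": "Android 14", "device": "Pixel 8"},
]

@pytest.fixture(params=ENVIRONMENTS, ids=lambda env: env["os"])
def environment(request):
    # A real fixture would provision the device or emulator here;
    # this sketch just hands each configuration to the tests.
    return request.param

def test_app_launches(environment):
    # Placeholder check: every test runs once per environment.
    assert environment["os"] and environment["device"]
```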
Test Data
- Test data is the data used as input during testing to verify that the software behaves as expected. It includes both positive and negative data (valid and invalid) to cover a range of scenarios.
- Types:
- Valid Data: Data that is within the expected input range.
- Invalid Data: Data that is outside the expected input range and should trigger error handling.
- Boundary Data: Data that is at the edge of the valid input range, used to test boundary conditions.
- Example: For testing a login form, valid test data would include a correct username and password, while invalid data might include an empty username field or a password with invalid characters.
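A pytest sketch exercising all three types of test data against a hypothetical username validator (the 3-20 alphanumeric rule is assumed for illustration):

```python
import pytest

def validate_username(username: str) -> bool:
    """Hypothetical validator: 3-20 alphanumeric characters."""
    return username.isalnum() and 3 <= len(username) <= 20

@pytest.mark.parametrize("username, expected", [
    ("alice", True),      # valid data: inside the accepted range
    ("", False),          # invalid data: empty field
    ("a" * 21, False),    # invalid data: too long
    ("abc", True),        # boundary data: minimum length
    ("a" * 20, True),     # boundary data: maximum length
])
def test_username_validation(username, expected):
    assert validate_username(username) is expected
```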