Testing Types
- In ISTQB terminology, testing types are categories of testing activities, each with specific objectives, techniques, and approaches. Together they ensure that the various aspects of the software, such as functionality, performance, security, and usability, are thoroughly evaluated. Here’s an overview of the key testing types and the ISTQB terminology and concepts associated with each:
Functional Testing
- Functional testing is aimed at verifying that the software behaves according to the specified functional requirements.
- Objective: To ensure that the software performs all required functions as expected. Testing is based on the functional specification of the software, often using black-box techniques where the internal structure of the system is not considered.
- Key Concepts:
  - Black-Box Testing: Tests the system’s behavior based on inputs and expected outputs, without knowledge of the internal code.
  - Equivalence Partitioning: Divides input data into valid and invalid partitions to reduce the number of test cases (illustrated in the sketch at the end of this section).
  - Boundary Value Analysis: Focuses on testing the edge cases at the boundaries of input ranges (also illustrated in the sketch below).
  - Decision Table Testing: Uses decision tables to represent combinations of inputs and their associated outputs.
  - State Transition Testing: Verifies system behavior when transitioning between different states.
- Example:
  - Verifying that the login feature works as specified, e.g., correct username and password combinations allow access.
  - Ensuring that user input is correctly validated and processed.
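As a concrete illustration of equivalence partitioning and boundary value analysis, here is a minimal pytest sketch. The `validate_age` function, its valid range of 18-65, and every test value are assumptions invented for this example, not part of any standard:

```python
import pytest

def validate_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages in the valid partition 18-65."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
# (invalid-low, valid, invalid-high) instead of testing every possible age.
@pytest.mark.parametrize("age,expected", [
    (10, False),  # invalid partition: below the range
    (40, True),   # valid partition
    (70, False),  # invalid partition: above the range
])
def test_equivalence_partitions(age, expected):
    assert validate_age(age) == expected

# Boundary value analysis: exercise the edges of each partition,
# where off-by-one defects are most likely to hide.
@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True),  # lower boundary
    (65, True), (66, False),  # upper boundary
])
def test_boundary_values(age, expected):
    assert validate_age(age) == expected
```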
Non-Functional Testing
- Non-functional testing focuses on testing attributes of the system that are not related to specific functions or behaviors but are more concerned with how the system performs under various conditions.
- Objective: To assess how well the system meets non-functional requirements such as performance, usability, security, and reliability.
- Key Concepts:
  - Performance Testing: Evaluates how the system performs under specified conditions. Its main variants are:
    - Load Testing: Tests how the system handles expected user loads (see the sketch at the end of this section).
    - Stress Testing: Pushes the system beyond normal operational capacity to see how it behaves under extreme conditions.
    - Scalability Testing: Measures how well the system can scale to handle increased loads.
  - Security Testing: Assesses the system’s ability to protect data and resources from malicious actors, e.g., testing for vulnerabilities such as SQL injection or cross-site scripting.
  - Usability Testing: Measures how easy it is for users to interact with the system, focusing on user experience, interface design, and navigation.
  - Reliability Testing: Ensures that the system consistently performs without failure under specified conditions.
  - Compatibility Testing: Tests how well the software works in different environments, such as browsers, operating systems, or devices.
- Example:
  - Testing how the system behaves under heavy user traffic (performance testing).
  - Verifying that the application’s security protocols, such as password encryption, function correctly (security testing).
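To make load testing concrete, the sketch below fires a batch of concurrent requests at an endpoint and checks response times, using only the Python standard library. The URL, the load of 20 concurrent users, and the 2-second threshold are all assumptions for illustration; real load tests usually rely on dedicated tools such as JMeter or Locust:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"  # hypothetical endpoint under test
CONCURRENT_USERS = 20         # assumed expected load

def timed_request(_):
    """Issue one request and return its duration in seconds."""
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Simulate CONCURRENT_USERS hitting the endpoint at the same time.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    print(f"max: {max(durations):.2f}s, avg: {sum(durations) / len(durations):.2f}s")
    # Assumed acceptance criterion: every response completes within 2 seconds.
    assert max(durations) < 2.0, "response time degraded under load"
```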
Structural or White-Box Testing
- Structural or White-box testing involves testing the internal structure, logic, and flow of the software.
- Objective: To ensure that the internal workings of the system (code, algorithms, data flow) function as intended. This requires knowledge of the internal code, and tests are based on the code structure.
- Key Concepts:
  - Code Coverage: Measures how much of the code has been tested, including:
    - Statement Coverage: Ensures that each line of code is executed at least once.
    - Branch Coverage: Verifies that each branch, e.g., if/else conditions, has been tested (see the sketch at the end of this section).
    - Path Coverage: Ensures that all possible paths through the code have been tested.
  - Control Flow Testing: Verifies the correct sequence of execution of different control structures, e.g., loops and conditionals.
  - Data Flow Testing: Focuses on how data moves through the system, ensuring correct initialization and usage of variables.
- Example:
  - Testing that all branches of a decision structure, e.g., if/else conditions, are covered and work as expected.
  - Verifying that every statement in the code has been executed at least once.
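Here is a minimal sketch of branch coverage with pytest. The `classify_temperature` function and its thresholds are made up for this example; the point is that the three test cases are chosen so every branch executes at least once, which a tool such as coverage.py (`coverage run -m pytest`, then `coverage report`) can confirm:

```python
import pytest

def classify_temperature(celsius: float) -> str:
    """Hypothetical system under test with three branches."""
    if celsius < 0:
        return "freezing"
    elif celsius < 25:
        return "moderate"
    else:
        return "hot"

# One test case per branch, so branch coverage reaches 100%.
@pytest.mark.parametrize("celsius,expected", [
    (-5, "freezing"),  # covers the `if` branch
    (10, "moderate"),  # covers the `elif` branch
    (30, "hot"),       # covers the `else` branch
])
def test_all_branches(celsius, expected):
    assert classify_temperature(celsius) == expected
```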
Change-Related Testing
- Change-related testing focuses on validating changes in the system, such as bug fixes, new features, or updates, to ensure they do not introduce new defects or negatively impact existing functionality.
- Objective: To confirm that a change, such as a bug fix, new feature, or update, does not break existing functionality or introduce new defects.
- Key Concepts:
  - Regression Testing: Ensures that changes or updates to the software have not introduced new defects into previously working functionality. Regression tests are often automated because of their repetitive nature, especially in Agile or continuous integration environments (see the sketch at the end of this section).
  - Retesting: Involves re-executing the tests for a specific defect after it has been fixed to verify that the issue has been resolved.
    - Difference from Regression Testing: Retesting focuses on verifying that a specific defect has been fixed, while regression testing checks that the fix hasn’t caused new issues elsewhere.
- Example:
  - Running a set of automated tests to verify that a bug fix doesn’t break existing features (regression testing).
  - Rechecking a particular defect after a patch is applied to ensure it’s resolved (retesting).
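As a hedged sketch of an automated regression suite: suppose a crash in a made-up `apply_discount` function was just fixed. Re-running the existing parameterized tests alongside the fix confirms that previously working behavior still holds. All names and values are invented for illustration:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test; a recent fix added input validation."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Regression suite: re-run after every change to catch unintended side effects
# in previously working functionality.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 10, 90.0),
    (59.99, 25, 44.99),
    (100.0, 100, 0.0),
])
def test_discount_regression(price, percent, expected):
    assert apply_discount(price, percent) == expected
```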
Confirmation Testing or Retesting
- Confirmation testing or retesting is a subset of change-related testing and involves verifying that a specific defect has been fixed.
- Objective: To confirm that a defect reported in a previous test has been successfully fixed.
- Key Concepts: Test cases related to the specific defect are executed again on the fixed build to ensure the issue is resolved.
- Example: A tester rechecks a particular defect after a patch is applied to ensure it’s resolved (see the sketch below).
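A minimal sketch of confirmation testing: the test is named after the defect report and re-executes the exact failing scenario on the fixed build. The defect ID, the `login` function, and the scenario are all invented for this example:

```python
def login(username: str, password: str) -> bool:
    """Hypothetical system under test after the fix for defect BUG-1042,
    which reported that empty passwords were accepted."""
    if not username or not password:
        return False
    return (username, password) == ("alice", "s3cret")

def test_bug_1042_empty_password_rejected():
    # Confirmation test: repeats the steps from the defect report
    # to verify the reported issue is actually resolved.
    assert login("alice", "") is False
```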
Exploratory Testing
- Exploratory testing is an unscripted, experience-based approach where testers dynamically explore the application, looking for defects that may not be caught through formal test cases.
- Objective: To discover hidden issues, especially in areas not covered by traditional scripted testing.
- Key Concepts:
  - Simultaneous Learning and Testing: Testers explore the system without predefined test cases, learning the system as they test it.
  - Session-Based Testing: Testing is conducted within specific time-boxed sessions where the tester explores a particular area of the application.
- Example: A tester navigates through the application without a script to look for unexpected behavior or potential issues.
Smoke Testing
- Smoke testing, also known as build verification testing, involves running a quick set of tests to verify that the basic functionality of the application works and that the build is stable enough for further testing.
- Objective: To ensure that the critical functionalities of the software work and the build is stable enough for detailed testing.
- Key Concepts: Minimal Test Coverage: Focuses on testing the most critical and basic functions to determine if the software is ready for more rigorous testing.
- Example: Checking that the application launches and basic navigation works after a new build (see the sketch below).
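A minimal smoke-test sketch with pytest: a few fast checks that the build starts and its most basic functions respond. The `launch_app` and `navigate` stand-ins below are assumptions; in a real suite they would come from the application itself, and the `smoke` marker would be registered in pytest.ini:

```python
import pytest

def launch_app() -> dict:
    """Hypothetical application entry point; returns state on a clean start."""
    return {"status": "running", "version": "1.0.0"}

def navigate(app: dict, page: str) -> bool:
    """Hypothetical navigation check against the running app."""
    return app["status"] == "running" and page in ("home", "login")

@pytest.mark.smoke
def test_app_launches():
    assert launch_app()["status"] == "running"

@pytest.mark.smoke
def test_basic_navigation():
    app = launch_app()
    assert navigate(app, "home")
    assert navigate(app, "login")
```

Running `pytest -m smoke` executes only these quick checks; if any fail, the build is rejected before detailed testing begins.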
Ad-Hoc Testing
- Ad-hoc testing is an informal testing approach without any structured or documented test cases. Testers use their understanding of the system to find defects.
- Objective: To find defects quickly through random, unplanned testing.
- Key Concepts: Random Testing: Testers randomly interact with the system, without following predefined steps or test cases.
- Example: A tester clicks through different sections of the software at random to see if any unexpected errors occur (see the sketch below).
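Random testing can be partially automated as a crude monkey test. The sketch below throws random strings at a made-up `parse_quantity` function and only checks that it never crashes or returns an out-of-range value; the function and input ranges are assumptions for illustration:

```python
import random
import string

def parse_quantity(text: str) -> int:
    """Hypothetical system under test: parses a quantity field, 0 if invalid."""
    stripped = text.strip()
    if stripped.isdigit():
        return min(int(stripped), 999)
    return 0

def test_random_inputs_never_crash():
    random.seed(42)  # fixed seed so any failure is reproducible
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    for _ in range(1000):
        text = "".join(random.choices(alphabet, k=random.randint(0, 20)))
        result = parse_quantity(text)  # must not raise
        assert 0 <= result <= 999
```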