Test Levels and Types
In the Software Testing Life Cycle (STLC), different Test Levels and Test Types are used to ensure comprehensive coverage and quality of the software.
- Test levels refer to the different stages at which testing occurs in the software development process. Each level focuses on specific aspects of the software.
- Test types focus on different aspects of the software to check its functionality, performance, and security.
STLC - Test Levels
- In the Software Testing Life Cycle (STLC), Test Levels refer to the different stages at which testing is performed during the software development process. Each test level focuses on a specific part of the software and ensures that the software is thoroughly tested at various stages, from small units of code to the entire system.
Unit Testing
- What it is - Testing individual components or units of the software in isolation (like functions, methods, or classes).
- Goal - Ensure that each unit of code works correctly on its own.
- Who performs it - Typically done by developers.
- Example - Testing a function that adds two numbers to ensure it returns the correct sum (see the sketch below).
- Tools - JUnit (Java), NUnit (.NET), PyTest (Python).
- Key Points:
  - Focuses on the smallest testable parts of the application.
  - Helps catch issues early in the development process.
  - Usually automated.
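To make the addition example concrete, here is a minimal PyTest sketch of a unit test; the `add` function is a stand-in defined inline for illustration, not code from a real project.

```python
# test_calculator.py -- run with: pytest test_calculator.py
import pytest

def add(a, b):
    """Unit under test: return the sum of two numbers (normally imported from the application code)."""
    return a + b

def test_add_returns_correct_sum():
    # The unit is exercised in isolation, with no other modules involved.
    assert add(2, 3) == 5

@pytest.mark.parametrize("a, b, expected", [(0, 0, 0), (-1, 1, 0), (2.5, 2.5, 5.0)])
def test_add_handles_edge_cases(a, b, expected):
    assert add(a, b) == expected
```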
Integration Testing
- What it is - Testing the interaction between different modules or components of the software.
- Goal - Ensure that integrated components work together as expected.
- Who performs it - Testers or developers.
- Example - Testing the interaction between a login module and a database to ensure that valid user credentials are accepted (see the sketch below).
- Approaches - Top-down, bottom-up, and hybrid integration testing.
- Tools - JUnit, TestNG, Postman (for APIs).
- Key Points:
  - Focuses on verifying data flow and interaction between different modules.
  - Identifies issues like incorrect data passing or interface mismatches between components.
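A minimal sketch of the login-plus-database example, assuming an in-memory SQLite database and a hypothetical `authenticate` function standing in for the real login module (passwords are left in plain text only to keep the sketch short):

```python
# test_login_integration.py -- integration between a login function and a real
# (in-memory SQLite) database; run with: pytest test_login_integration.py
import sqlite3
import pytest

def authenticate(conn, username, password):
    """Hypothetical login module: True if the credentials exist in the users table."""
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    conn.commit()
    yield conn
    conn.close()

def test_valid_credentials_are_accepted(db):
    # The login module and the database are exercised together.
    assert authenticate(db, "alice", "s3cret") is True

def test_invalid_credentials_are_rejected(db):
    assert authenticate(db, "alice", "wrong") is False
```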
System Testing
- What it is - Testing the complete and integrated system to ensure it meets the specified requirements.
- Goal - Validate that the entire system works as a whole.
- Who performs it - Testers (QA team).
- Example - Testing an e-commerce application to ensure users can browse products, add items to the cart, and complete a purchase (see the sketch below).
- Tools - Selenium, QTP / UFT, TestRail (for test management).
- Key Points:
  - Conducted in an environment that closely mimics production.
  - Involves testing the entire software system, including hardware and interfaces.
  - Includes both functional and non-functional testing (performance, usability, etc.).
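A minimal Selenium sketch of the e-commerce example; the URL and element locators are hypothetical placeholders, and a local ChromeDriver is assumed.

```python
# test_purchase_flow.py -- end-to-end system test sketch with Selenium;
# the storefront URL and locators below are placeholders, not a real site.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_browse_and_add_to_cart():
    driver = webdriver.Chrome()  # assumes a local ChromeDriver is available
    try:
        driver.get("https://shop.example.com")  # hypothetical storefront
        driver.find_element(By.LINK_TEXT, "Products").click()
        driver.find_element(By.CSS_SELECTOR, ".product:first-child .add-to-cart").click()
        # The item should now appear in the cart indicator.
        assert driver.find_element(By.ID, "cart-count").text == "1"
    finally:
        driver.quit()
```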
Acceptance Testing
- What it is - Testing to ensure the software meets the business requirements and is ready for release.
- Goal - Validate that the software satisfies the end-user or business needs.
- Who performs it - Business users, stakeholders, or a dedicated QA team.
- Example - A client testing a banking application to ensure it meets the agreed-upon requirements before launching it.
- Types:
  - User Acceptance Testing (UAT): Done by end-users to verify that the software meets their needs.
  - Business Acceptance Testing (BAT): Done by business stakeholders to ensure the software aligns with business goals.
  - Alpha / Beta Testing: Alpha testing is done internally, while beta testing is done by real users in a production-like environment.
- Tools - Test management tools like TestRail or Zephyr for tracking acceptance tests.
- Key Points:
  - The final level of testing before the software is released.
  - Ensures the software is ready for production and meets the client’s or user’s needs.
  - Involves stakeholders to validate business requirements.
STLC - Test Types
- In the Software Testing Life Cycle (STLC), Test Types refer to the various categories of tests that focus on different aspects of the software to ensure it meets both functional and non-functional requirements. Each test type serves a specific purpose and targets specific quality attributes of the software.
Functional Testing
- Functional testing focuses on verifying that the software behaves as expected according to the functional requirements and specifications.
- Goal - Ensure that each feature of the software works correctly.
- What’s tested - User interactions, APIs, databases, security features, and business rules.
- Techniques:
  - Unit Testing: Tests individual components in isolation.
  - Integration Testing: Tests how different components interact with each other.
  - System Testing: Tests the entire application as a whole.
  - User Acceptance Testing (UAT): Tests if the software meets the business requirements.
- Example - Testing a login feature to check if the system correctly validates user credentials and redirects to the appropriate dashboard (see the sketch below).
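A minimal sketch of the login example as an HTTP-level functional test, assuming a hypothetical `/login` endpoint and the `requests` library; the payload fields and redirect target are assumptions, not a real API.

```python
# test_login_functional.py -- functional test sketch for the login example;
# the base URL, form fields, and redirect target are placeholders.
import requests

BASE_URL = "https://app.example.com"  # hypothetical application URL

def test_valid_login_redirects_to_dashboard():
    response = requests.post(
        f"{BASE_URL}/login",
        data={"username": "alice", "password": "s3cret"},
        allow_redirects=False,
    )
    # Expected behaviour from the functional requirement: redirect to the dashboard.
    assert response.status_code in (301, 302, 303)
    assert response.headers["Location"].endswith("/dashboard")
```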
Non-Functional Testing
- Non-functional testing focuses on the performance, usability, and security of the software rather than its specific functionalities.
- Goal - Ensure the software meets non-functional requirements such as speed, security, and usability.
- What’s tested - Performance, scalability, security, user experience, and reliability of the app or system.
- Techniques:
  - Performance Testing: Measures how the system performs under specific conditions (see the load-test sketch below).
    - Load Testing: Checks how the software behaves under normal load conditions.
    - Stress Testing: Tests how the software behaves under extreme conditions or loads.
    - Scalability Testing: Evaluates the system’s ability to scale up to handle increasing load.
  - Security Testing: Ensures the system is protected from vulnerabilities and threats.
    - Vulnerability Scanning: Identifies security weaknesses.
    - Penetration Testing: Simulates attacks to find vulnerabilities.
  - Usability Testing: Focuses on the user experience, ensuring the application is easy to use. Example: checking how intuitive and simple it is to navigate a website.
  - Compatibility Testing: Ensures the software works across different browsers, devices, operating systems, and networks. Example: testing whether a website displays correctly on different browsers, e.g. Chrome, Firefox, and Safari.
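As one possible way to exercise the performance and load techniques above, here is a minimal Locust sketch (assuming Locust is installed; the host and paths are placeholders):

```python
# locustfile.py -- load-test sketch using Locust (pip install locust);
# run with: locust -f locustfile.py --host https://shop.example.com
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 seconds between tasks

    @task(3)
    def browse_products(self):
        # Weighted 3:1 so browsing dominates the simulated traffic.
        self.client.get("/products")

    @task(1)
    def view_cart(self):
        self.client.get("/cart")
```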
Regression Testing
- Regression testing is performed to ensure that new changes, such as bug fixes or feature additions, do not negatively impact the existing functionality of the software.
- Goal - Verify that previously working functionality has not been broken by recent changes.
- What’s tested - Core features and functions that might be affected by updates or changes.
- Techniques:
  - Partial Regression: Focuses on testing specific areas impacted by changes.
  - Full Regression: Tests the entire application to ensure no functionality is broken.
- Example - After a bug fix in the shopping cart functionality of an e-commerce application, testing is done to ensure the checkout process, payment gateways, and product display still work as expected (see the sketch below).
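A minimal PyTest sketch of how partial and full regression runs can be selected with markers; the marker names and the tiny cart checks are illustrative only.

```python
# test_checkout_regression.py -- slicing a regression suite with PyTest markers.
import pytest

@pytest.mark.regression
@pytest.mark.cart
def test_checkout_total_is_recalculated_after_fix():
    cart = [("book", 12.50), ("pen", 2.00)]
    assert sum(price for _, price in cart) == 14.50

@pytest.mark.regression
def test_product_listing_still_renders():
    products = ["book", "pen"]
    assert len(products) > 0

# Partial regression (only the area touched by the fix):  pytest -m "regression and cart"
# Full regression (everything marked as regression):      pytest -m regression
# Custom markers should be registered in pytest.ini to avoid warnings.
```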
Smoke Testing
- Smoke testing is a preliminary test conducted to check whether the basic functionality of the application works. It is often called a "build verification test."
- Goal - Quickly verify that the main features of the software work, allowing testers to move on to deeper testing.
- What’s tested - Core functionalities like launching the application, basic navigation, and critical workflows.
- Techniques - Performed on every new build to ensure the software is stable enough for further testing.
- Example - Testing whether a mobile app launches successfully, loads the homepage, and allows basic navigation (a web-oriented sketch follows below).
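A minimal smoke-test sketch for a web build, assuming a hypothetical base URL and a handful of critical paths; it fails fast so deeper testing is skipped on an unstable build.

```python
# smoke_test.py -- build-verification sketch: check that critical pages respond.
import sys
import requests

BASE_URL = "https://app.example.com"        # placeholder for the build under test
CRITICAL_PATHS = ["/", "/login", "/products"]

def main():
    for path in CRITICAL_PATHS:
        response = requests.get(BASE_URL + path, timeout=10)
        if response.status_code != 200:
            print(f"SMOKE FAIL: {path} returned {response.status_code}")
            sys.exit(1)  # fail fast: build is not stable enough for further testing
        print(f"SMOKE OK:   {path}")

if __name__ == "__main__":
    main()
```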
Sanity Testing
- Sanity testing is a subset of regression testing. It is performed to ensure that a specific functionality or bug fix works as expected after a recent change.
- Goal - Quickly verify that a particular feature or bug fix works as expected.
- What’s tested - Only the specific modules or components affected by recent changes.
- Techniques - Less exhaustive than regression testing; focuses on the specific area of concern.
- Example - After fixing an issue with the checkout button on an e-commerce site, sanity testing would check only the checkout functionality without testing other parts of the application.
Ad-Hoc Testing
- Ad-hoc testing is an informal, unplanned testing approach where testers explore the software in an attempt to find defects that might not be covered by formal test cases.
- Goal - Identify hidden defects by testing without predefined plans or test cases.
- What’s tested - Any aspect of the application, based on the tester’s intuition or experience.
- Techniques - Testers do not follow structured test cases; testing is performed randomly.
- Example - Randomly interacting with an app’s features, like clicking on various buttons and trying different inputs, to discover any unplanned defects.
Exploratory Testing
- Exploratory testing is similar to ad-hoc testing but more structured. It involves testers actively exploring the software while designing and executing tests in real time.
- Goal - Discover defects and improve the understanding of how the application works.
- What’s tested - Different areas of the software, focusing on critical paths and features.
- Techniques - Testers learn and explore the application on the fly, creating new test cases based on their findings.
- Example - A tester exploring an app’s profile management section, testing various scenarios, and adjusting the test approach based on how the application behaves.
Recovery Testing
- Recovery testing ensures that the software can recover from crashes, failures, or other unexpected disruptions.
- Goal - Verify that the system can return to normal operation after a failure.
- What’s tested - The system’s ability to recover from power failures, crashes, or network disruptions.
- Techniques - Induce failure conditions and check whether the system recovers without data loss or corruption.
- Example - Intentionally disconnecting a server from the network during a file upload and checking whether the upload resumes after reconnection (a simple recovery check is sketched below).
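A minimal recovery-check sketch: after a failure has been induced (for example, the service was killed or the network dropped), the script polls a hypothetical health endpoint and verifies the system comes back within an assumed recovery-time objective.

```python
# recovery_check.py -- verify the system returns to normal after an induced failure;
# the health URL and the 60-second objective are assumptions for illustration.
import time
import requests

HEALTH_URL = "https://app.example.com/health"  # hypothetical health endpoint
RECOVERY_TIME_OBJECTIVE = 60                   # seconds allowed for recovery

def wait_for_recovery():
    deadline = time.time() + RECOVERY_TIME_OBJECTIVE
    while time.time() < deadline:
        try:
            if requests.get(HEALTH_URL, timeout=5).status_code == 200:
                return True   # the system came back within the objective
        except requests.RequestException:
            pass              # still down; keep polling
        time.sleep(2)
    return False

if __name__ == "__main__":
    assert wait_for_recovery(), "System did not recover within the objective"
```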
Installation Testing
- Installation testing checks whether the software installs and uninstalls properly across different environments.
- Goal - Ensure the software installs, upgrades, and uninstalls smoothly.
- What’s tested - The installation process, configuration, and removal of the software.
- Techniques - Perform the installation and look for issues such as missing files, incorrect configurations, or failures in the uninstall process.
- Example - Testing whether a desktop application installs correctly on Windows 10, including verifying file paths, system registry entries, and uninstallation.
Localization Testing
- Localization testing ensures that the software behaves correctly in different languages, regions, or locales.
- Goal - Verify that the software works as expected with specific language settings and cultural preferences.
- What’s tested - Text translations, date formats, currency formats, UI alignment, and content.
- Techniques - Can be performed manually or through automated testing (see the sketch below).
- Example - Testing a website in both English and Spanish to ensure that all content is correctly translated and the UI properly handles different text lengths.
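A minimal sketch of automated locale-format checks using the Babel library (assumed to be installed); the locales and formats shown are examples only.

```python
# test_localization.py -- locale-format checks with Babel (pip install Babel).
from datetime import date
from babel.dates import format_date
from babel.numbers import format_currency

def test_date_format_differs_between_locales():
    d = date(2024, 3, 1)
    us = format_date(d, format="long", locale="en_US")
    es = format_date(d, format="long", locale="es_ES")
    # e.g. "March 1, 2024" vs "1 de marzo de 2024"
    assert us != es

def test_currency_format_follows_locale_conventions():
    us = format_currency(1234.5, "USD", locale="en_US")
    de = format_currency(1234.5, "EUR", locale="de_DE")
    assert "$" in us and "€" in de
```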
Sample Documents
Test Plan Document Sample
1. Introduction
The Online Shopping System allows users to browse products, add them to a shopping cart, and process payments. This Test Plan outlines the testing approach for ensuring that all features meet the specified requirements and are free of defects.
1.1 Objective
The primary objective of this test plan is to:
- Define the testing strategy and scope for the Online Shopping System.
- Identify the testing resources, environments, tools, and schedule.
- Provide guidelines for risk management, test case design, and test execution.
2. Scope of Testing
This test plan covers the functional and non-functional requirements of the Online Shopping System.
2.1 In-Scope
- User Registration and Login
- Product Browsing and Searching
- Shopping Cart Management
- Payment Processing
- Order Management
- Performance and Security Testing
2.2 Out-of-Scope
- External integrations with third-party vendors outside of PayPal and Stripe.
- Database migrations.
- Localization and internationalization testing (since the system will be launched for the local market).
3. Test Strategy
The following testing types will be performed to ensure software quality:
3.1 Functional Testing
- Goal - Validate that the system functions as expected according to the SRS.
- Testing Method - Both manual and automated testing will be employed to verify functionality such as login, product search, cart management, and order placement.
3.2 Non-Functional Testing
- Performance Testing - Ensure the system meets performance requirements, such as page loads and payment processing completing within 2 seconds.
- Security Testing - Validate that the system uses HTTPS, encrypts sensitive data, and handles user authentication securely.
3.3 Regression Testing
- Goal - Ensure that new code changes do not break existing functionality.
- Testing Method - Automated tests will be set up using Selenium to perform regression testing after each build.
3.4 User Acceptance Testing (UAT)
- Goal - Validate that the system meets business requirements and provides a seamless user experience.
- Participants - Business stakeholders and end-users.
- Method - UAT will be conducted after the final round of system testing.
4. Testing Approach
4.1 Test Case Design
Test cases will be designed based on the requirements outlined in the SRS and the RTM. Each test case will specify:
- Test Case ID - A unique identifier, e.g. TC-001, TC-002.
- Preconditions - Any setup required before running the test.
- Steps - Detailed actions to be performed.
- Expected Results - The expected behavior of the system.
4.2 Test Execution
Test cases will be executed once the environment is set up and the development team delivers the code.
Test Case Document Sample
Project Name: Sample Project Name
Module: Sample Module Name
Prepared By: Sample Name
Date: Sample Date of Creation
| Test Case 1 - User Login Functionality |
|---|
| Test Case ID - TC001 |
| Test Scenario - Verify the login functionality with valid credentials. |
| Preconditions: |
| 1. The user must be registered. |
| 2. The application should be accessible. |
| Test Steps: |
| 1. Open the application. |
| 2. Navigate to the login screen. |
| 3. Enter valid username and password. |
| 4. Click on the "Login" button. |
| Expected Result: |
| 1. The user should be successfully logged into the application and redirected to the dashboard. |
| Postconditions: |
| 1. User is logged into the system. |
| Test Case 2 - Invalid Login Attempt |
|---|
| Test Case ID - TC002 |
| Test Scenario - Verify login functionality with invalid credentials. |
| Preconditions: |
| 1. Users should have access to the login screen. |
| Test Steps: |
| 1. Open the application. |
| 2. Navigate to the login screen. |
| 3. Enter an invalid username and / or password. |
| 4. Click on the "Login" button. |
| Expected Result: |
| 1. The system should display an error message stating "Invalid credentials" and remain on the login page. |
| Postconditions: |
| 1. The user is not logged in, and the login page remains active. |
| Test Case 3 - Forgot Password Functionality |
|---|
| Test Case ID - TC003 |
| Test Scenario - Verify the forgot password functionality. |
| Preconditions: |
| 1. The application should be accessible. |
| 2. The user must have an email address registered in the system. |
| Test Steps: |
| 1. Open the application. |
| 2. Navigate to the login screen. |
| 3. Click on the "Forgot Password" link. |
| 4. Enter the registered email address. |
| 5. Click on the "Submit" button. |
| Expected Result: |
| 1. The user should receive an email with instructions to reset the password. |
| Postconditions: |
| 1. User receives a password reset email. |
| Test Case 4 - User Logout Functionality |
|---|
| Test Case ID - TC004 |
| Test Scenario - Verify that a logged-in user can log out successfully. |
| Preconditions: |
| 1. The user must be logged in. |
| Test Steps: |
| 1. Click on the "Logout" button from the dashboard. |
| Expected Result: |
| 1. The user should be successfully logged out and redirected to the login screen. |
| Postconditions: |
| 1. User is logged out of the system. |
| Test Case 5 - Password Reset Validation |
|---|
| Test Case ID - TC005 |
| Test Scenario - Verify the password reset with valid input. |
| Preconditions: |
| 1. The user has requested a password reset. |
| 2. The user has received a password reset email. |
| Test Steps: |
| 1. Open the password reset link sent via email. |
| 2. Enter a valid new password that meets complexity requirements. |
| 3. Confirm the password and submit. |
| Expected Result: |
| 1. The system should update the password and confirm the change with a success message. |
| Postconditions: |
| 1. The user can log in with the new password. |
Notes:
- Test cases can be executed manually or through automated testing frameworks (see the sketch below).
- This document should be updated as the application evolves and additional scenarios arise.
- All test cases must be reviewed and approved before execution.
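For illustration, TC001 could be automated roughly as follows with PyTest and Selenium; the URL, element locators, and credentials are placeholders rather than part of the sample project.

```python
# test_tc001_login.py -- automated sketch of Test Case TC001 (valid login).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_tc001_valid_login():
    driver = webdriver.Chrome()                       # assumes a local ChromeDriver
    try:
        driver.get("https://app.example.com/login")   # Steps 1-2: open app, go to login
        driver.find_element(By.ID, "username").send_keys("registered_user")  # Step 3
        driver.find_element(By.ID, "password").send_keys("valid_password")
        driver.find_element(By.ID, "login-button").click()                   # Step 4
        # Expected result: the user lands on the dashboard.
        assert "/dashboard" in driver.current_url
    finally:
        driver.quit()
```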
Test Environment Setup Document Sample
1. Document Information
| Document Title: Test Environment Setup Document |
|---|
| Project Name: Sample Project Name |
| Prepared By: Sample Prepared By |
| Date: Sample Date |
| Version: Sample Version |
2. Purpose
The purpose of this document is to outline the setup and configuration of the test environment for the Sample Project Name. It ensures that all necessary hardware, software, and network configurations are in place and ready for test execution.
3. Scope
This document covers the configuration of the test environment for:
- Functional testing.
- Non-functional testing, e.g. performance or security.
- Automated testing.
- Integration testing.
- Any other relevant testing activities.
4. Test Environment Overview
Provide a high-level overview of the test environment setup.
- Environment Type - Development, QA, or production-like environment.
- Supported Platforms - Windows, Linux, macOS, Android, iOS.
- Number of Test Environments - e.g. QA1, QA2, or Staging.
- Test Types Supported - Functional, Regression, Performance, or Security.
5. Hardware Requirements
Details of the hardware resources needed for setting up the environment.
| Hardware Component | Configuration Details | Quantity | Remarks |
|---|---|---|---|
| Server Type | e.g. Virtual or Physical | Number of items | Additional Notes |
| CPU | e.g. 4 vCPUs | Number of items | Additional Notes |
| RAM | e.g. 16 GB | Number of items | Additional Notes |
| Disk Space | e.g. 500 GB SSD | Number of items | Additional Notes |
| Network Requirements | e.g. 1 Gbps | Number of items | Additional Notes |
6. Software Requirements
Details of the software components that will be installed/configured in the test environment.
| Software Component | Version | Installation Location | Remarks |
|---|---|---|---|
| Operating System | e.g. Windows or Linux | e.g. Virtual Machine | Additional Notes |
| Database | e.g. MySQL 5.7 or PostgreSQL | e.g. DB Server | Additional Notes |
| Application Server | e.g. Apache Tomcat or Nginx | e.g. Server | Additional Notes |
| Middleware | e.g. Redis or RabbitMQ | e.g. Middleware Server | Additional Notes |
| Testing Tools | e.g. Selenium or JIRA | e.g. Tester Machines | Additional Notes |
| Version Control System | e.g. Git | e.g. GitHub / Local Server | Additional Notes |
7. Network Configuration
Describe the network architecture and configurations needed for the environment.
- Network Type - e.g. LAN / WAN.
- Subnet Configuration - e.g. 192.168.1.0/24.
- Firewalls - Describe firewall rules, if applicable.
- Load Balancers - Describe any load balancers required.
- VPN Access - If required for remote testing, include VPN details.
8. Test Data Setup
Outline how test data will be set up and managed in the test environment.
- Data Sources - e.g. production-like data, mock data.
- Data Population Method - e.g. SQL scripts, APIs.
- Data Privacy Considerations - If sensitive data is used, mention masking or anonymization processes.
- Backup Strategy - Plan for backing up test data.
9. Access Management
Describe how access to the environment will be controlled.
| Role | Access Type | Tools / Servers | Access Method |
|---|---|---|---|
| Tester | Read / Write | e.g. Application Server or DB | SSH, HTTP, or VPN |
| Developer | Read / Write | e.g. GitHub, Jenkins | SSH, HTTP, or VPN |
| Administrator | Full Access | e.g. All servers | SSH, HTTP, VPN, and Root Access |
10. Test Environment Validation
Describe the steps to validate that the environment is working as expected.
- Smoke Testing - Outline the basic tests that will be executed to validate the environment setup, such as login or data retrieval (a minimal check script is sketched after this list).
- Environment Health Checks - Tools used for monitoring, such as logs and dashboards.
- Backup and Recovery - Procedures to ensure the environment can be recovered if a failure occurs.
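A minimal sketch of an automated environment validation script, assuming hypothetical hostnames, ports, and a health endpoint; it combines a basic smoke check with simple connectivity checks.

```python
# validate_environment.py -- environment validation sketch; hostnames, ports,
# and the health URL below are placeholders to be replaced with real values.
import socket
import sys
import requests

HTTP_CHECKS = ["https://qa1.example.com/health"]                              # application endpoint
TCP_CHECKS = [("qa1-db.example.com", 5432), ("qa1-cache.example.com", 6379)]  # DB and cache ports

def check_http(url):
    return requests.get(url, timeout=10).status_code == 200

def check_tcp(host, port):
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    ok = all(check_http(u) for u in HTTP_CHECKS) and all(check_tcp(h, p) for h, p in TCP_CHECKS)
    print("Environment ready for testing" if ok else "Environment validation failed")
    sys.exit(0 if ok else 1)
```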
11. Risk and Mitigation Plan
Identify potential risks related to the test environment and mitigation strategies.
| Risk | Impact | Probability | Mitigation Strategy |
|---|---|---|---|
| Environment Downtime | High | Medium | Set up redundancy and frequent backups. |
| Network Latency | Medium | Low | Optimize network routes and set up mirrors. |
| Hardware Failure | High | Low | Maintain spares and cloud-based backups. |
12. Environment Maintenance and Monitoring
Describe ongoing activities to maintain the environment during testing.
- Monitoring Tools - List monitoring tools such as Prometheus or Grafana.
- Performance Monitoring - Describe how the environment’s performance will be monitored.
- Routine Maintenance - Any scheduled updates or backups.
- Log Management - Tools or processes to collect and analyze logs.
13. Dependencies
List all external dependencies that must be available for the environment to function, e.g. APIs, third-party services, and external databases.
| Dependency Type | Configuration | Details | Availability |
|---|---|---|---|
| Payment Gateway | API | External API Endpoint | 9 AM - 5 PM |
| Third-Party Authentication | API | External API Endpoint | 24 / 7 |
14. Approval and Sign-off
Once the environment has been set up and validated, this section is completed by stakeholders to approve the environment for testing.
| Name | Role | Signature | Date |
|---|---|---|---|
| Name of Stakeholder | Project Manager | Signature | Date |
| Name of Stakeholder | QA Lead | Signature | Date |
| Name of Stakeholder | System Administrator | Signature | Date |