
Test Levels and Types

  • In the Software Testing Life Cycle (STLC), different Test Levels and Test Types are used to ensure comprehensive coverage and quality of the software.

  • Test levels refer to the different stages where testing occurs in the software development process. Each level focuses on specific aspects of the software.

  • Test types focus on different aspects of the software to check its functionality, performance, and security.

STLC - Test Levels

  • In the Software Testing Life Cycle (STLC), Test Levels refer to the different stages at which testing is performed during the software development process. Each test level focuses on a specific part of the software and ensures that the software is thoroughly tested at various stages, from small units of code to the entire system.

Unit Testing

  • Testing individual components or units of the software in isolation (like functions, methods, or classes).
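As a brief illustration, a unit test exercises one function in isolation. The sketch below uses pytest against a hypothetical calculate_cart_total function; the function, its rules, and the test data are assumptions made for this example, not part of the module.

```python
import pytest


# Hypothetical unit under test: totals a cart given (price, quantity) pairs.
def calculate_cart_total(items):
    if any(price < 0 or qty < 0 for price, qty in items):
        raise ValueError("price and quantity must be non-negative")
    return sum(price * qty for price, qty in items)


# Unit tests run the function in isolation, with no database or network involved.
def test_cart_total_sums_line_items():
    assert calculate_cart_total([(10.0, 2), (5.0, 1)]) == 25.0


def test_cart_total_of_empty_cart_is_zero():
    assert calculate_cart_total([]) == 0


def test_cart_total_rejects_negative_quantity():
    with pytest.raises(ValueError):
        calculate_cart_total([(10.0, -1)])
```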

Integration Testing

  • Testing the interaction between different modules or components of the software.
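For example, an integration test checks that two modules cooperate correctly. The sketch below defines two small stand-in modules (an inventory and a cart, both assumptions for illustration) and verifies the interaction between them.

```python
# Two stand-in modules whose interaction is under test (illustrative only).
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, sku, qty):
        if self.stock.get(sku, 0) < qty:
            raise RuntimeError(f"insufficient stock for {sku}")
        self.stock[sku] -= qty


class Cart:
    def __init__(self, inventory):
        self.inventory = inventory
        self.items = []

    def add(self, sku, qty):
        # The cart depends on the inventory module to reserve stock.
        self.inventory.reserve(sku, qty)
        self.items.append((sku, qty))


# Integration test: exercises the cart and inventory modules together.
def test_adding_an_item_reserves_stock():
    inventory = Inventory({"SKU-1": 5})
    cart = Cart(inventory)
    cart.add("SKU-1", 2)
    assert inventory.stock["SKU-1"] == 3
    assert ("SKU-1", 2) in cart.items
```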

System Testing

  • Testing the complete and integrated system to ensure it meets the specified requirements.

Acceptance Testing

  • Testing to ensure the software meets the business requirements and is ready for release.

STLC - Test Types

  • In the Software Testing Life Cycle (STLC), Test Types refer to the various categories of tests that focus on different aspects of the software to ensure it meets both functional and non-functional requirements. Each test type serves a specific purpose and targets specific quality attributes of the software.

Functional Testing

  • Functional testing focuses on verifying that the software behaves as expected according to the functional requirements and specifications.
  • Ensure that each feature of the software works correctly.

Non-Functional Testing

  • Non-functional testing focuses on the performance, usability, and security of the software rather than the specific functionalities.
  • Ensure the software meets non-functional requirements such as speed, security, and usability.

Regression Testing

  • Regression testing is performed to ensure that new changes, such as bug fixes or feature additions, do not negatively impact the existing functionality of the software.
  • Verify that previously working functionality has not been broken by recent changes.

Smoke Testing

  • Smoke testing is a preliminary test conducted to check whether the basic functionality of the application works. It is often called a "build verification test."
  • Quickly verify that the main features of the software work, allowing testers to move on to deeper testing.
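A smoke suite is typically a handful of fast checks run before anything else. One common approach, sketched below under the assumption of a pytest setup and a placeholder URL, is to tag such checks with a marker so they can be run on their own.

```python
import pytest
import requests

BASE_URL = "https://shop.example.com"  # placeholder application URL


@pytest.mark.smoke
def test_homepage_is_reachable():
    # Basic "is the build alive?" check before deeper testing begins.
    response = requests.get(BASE_URL, timeout=5)
    assert response.status_code == 200


@pytest.mark.smoke
def test_login_page_loads():
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
```

Running `pytest -m smoke` then executes only the tagged checks; the smoke marker would normally be registered in the project's pytest configuration.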

Sanity Testing

  • Sanity testing is a subset of regression testing. It is performed to ensure that a specific functionality or bug fix works as expected after a recent change.
  • Quickly verify that a particular feature or bug fix works as expected.

Ad-Hoc Testing

  • Ad-hoc testing is an informal, unplanned testing approach where testers explore the software in an attempt to find defects that might not be covered by formal test cases.
  • Explore any aspect of the application, guided by the tester's intuition and experience.

Exploratory Testing

  • Exploratory testing is similar to ad-hoc testing but more structured. It involves testers actively exploring the software while designing and executing tests in real time.
  • Discover defects and improve the understanding of how the application works.

Recovery Testing

  • Recovery testing ensures that the software can recover from crashes, failures, or other unexpected disruptions.
  • Verify that the system can return to normal operation after a failure.

Installation Testing

  • Installation testing checks whether the software installs and uninstalls properly across different environments.
  • Ensure the software installs, upgrades, and uninstalls smoothly.

Localization Testing

  • Localization testing ensures that the software behaves correctly in different languages, regions, or locales.
  • Verify that the software works as expected with specific language settings and cultural preferences.
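Localization checks usually assert locale-specific formatting of numbers, currencies, and dates. The sketch below uses the third-party Babel library purely for illustration; the library choice and the expected strings are assumptions, not part of this module.

```python
import datetime

from babel.dates import format_date
from babel.numbers import format_currency


def test_currency_uses_german_number_formatting():
    # German locales swap the thousands and decimal separators
    # relative to English: "1.099,99" rather than "1,099.99".
    formatted = format_currency(1099.99, "EUR", locale="de_DE")
    assert "1.099,99" in formatted


def test_long_date_uses_german_month_name():
    formatted = format_date(datetime.date(2024, 3, 5), format="long", locale="de_DE")
    assert "März" in formatted
```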

Sample Documents

Test Plan Document Sample

1. Introduction

The Online Shopping System allows users to browse products, add them to a shopping cart, and process payments. This Test Plan outlines the testing approach for ensuring that all features meet the specified requirements and are free of defects.

1.1 Objective

  • The primary objectives of this test plan are to:

    • Define the testing strategy and scope for the Online Shopping System.

    • Identify the testing resources, environments, tools, and schedule.

    • Provide guidelines for risk management, test case design, and test execution.

2. Scope of Testing

This test plan covers the functional and non-functional requirements of the Online Shopping System.

2.1 In-Scope

  • User Registration and Login

  • Product Browsing and Searching

  • Shopping Cart Management

  • Payment Processing

  • Order Management

  • Performance and Security Testing

2.2 Out-of-Scope

  • External integrations with third-party vendors outside of PayPal and Stripe.

  • Database migrations.

  • Localization and internationalization testing (since the system will be launched for the local market).

3. Test Strategy

  • The following testing types will be performed to ensure software quality:

3.1 Functional Testing

  • Goal - Validate that the system functions as expected according to the Software Requirements Specification (SRS).

  • Testing Method - Both manual and automated testing will be employed to verify functionality such as login, product search, cart management, and order placement.

3.2 Non-Functional Testing

  • Performance Testing - Ensure the system meets performance requirements, such as page loads and payment processing completing within 2 seconds.

  • Security Testing - Validate that the system uses HTTPS, encrypts sensitive data, and handles user authentication securely.
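As a rough sketch of how these two requirements could be checked automatically (the URLs and endpoints are placeholders, not the real system):

```python
import requests

BASE_URL = "https://shop.example.com"  # placeholder application URL


def test_product_page_responds_within_two_seconds():
    # Performance check against the 2-second budget stated above.
    response = requests.get(f"{BASE_URL}/products", timeout=10)
    assert response.status_code == 200
    assert response.elapsed.total_seconds() < 2.0


def test_plain_http_redirects_to_https():
    # Security check: the application should not be served over plain HTTP.
    response = requests.get("http://shop.example.com/login",
                            allow_redirects=False, timeout=10)
    assert response.status_code in (301, 302, 307, 308)
    assert response.headers.get("Location", "").startswith("https://")
```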

3.3 Regression Testing

  • Goal - Ensure that new code changes do not break existing functionality.

  • Testing Method - Automated tests will be set up using Selenium to perform regression testing after each build.
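A single automated regression check of the login flow might look like the following sketch. Selenium is named in the plan above, but the URL, element IDs, and credentials here are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_flow_still_works_after_new_build():
    # Core flow re-verified automatically after each build.
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com/login")                          # placeholder URL
        driver.find_element(By.ID, "username").send_keys("regression_user")   # assumed element IDs
        driver.find_element(By.ID, "password").send_keys("Secret123!")
        driver.find_element(By.ID, "login-button").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```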

3.4 User Acceptance Testing (UAT)

  • Goal - Validate that the system meets business requirements and provides a seamless user experience.

  • Participants - Business stakeholders and end-users.

  • Method - UAT will be conducted after the final round of system testing.

4. Testing Approach

4.1 Test Case Design

  • Test cases will be designed based on the requirements outlined in the SRS and the Requirements Traceability Matrix (RTM). Each test case will specify:

    • Test Case ID - Unique identifier, e.g., TC-001, TC-002.

    • Preconditions - Any setup required before running the test.

    • Steps - Detailed actions to be performed.

    • Expected Results - The expected behavior of the system.
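The fields listed above can be captured as a simple record when test cases are tracked in code or exported to a test-management tool. A minimal sketch follows; the class and its fields are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TestCase:
    """One row of the test case document, as it might be exported to a test tool."""
    test_case_id: str           # e.g. TC-001
    preconditions: List[str]
    steps: List[str]
    expected_results: List[str]
    status: str = "Not Run"


# Example record mirroring the login test case in the sample document below.
tc_001 = TestCase(
    test_case_id="TC-001",
    preconditions=["The user must be registered.", "The application should be accessible."],
    steps=[
        "Open the application.",
        "Navigate to the login screen.",
        "Enter valid username and password.",
        'Click on the "Login" button.',
    ],
    expected_results=["The user is logged in and redirected to the dashboard."],
)
```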

4.2 Test Execution

Test cases will be executed once the environment is set up and the development team delivers the code.

Test Case Document Sample

Project Name: Sample Project Name

Module: Sample Module Name

Prepared By: Sample Name

Date: Sample Date of Creation

Test Case 1 - User Login Functionality
Test Case ID - TC001
Test Scenario - Verify the login functionality with valid credentials.
Preconditions:
1. The user must be registered.
2. The application should be accessible.
Test Steps:
1. Open the application.
2. Navigate to the login screen.
3. Enter valid username and password.
4. Click on the "Login" button.
Expected Result:
1. The user should be successfully logged into the application and redirected to the dashboard.
Postconditions:
1. User is logged into the system.

Test Case 2 - Invalid Login Attempt
Test Case ID - TC002
Test Scenario - Verify login functionality with invalid credentials.
Preconditions:
1. Users should have access to the login screen.
Test Steps:
1. Open the application.
2. Navigate to the login screen.
3. Enter an invalid username and/or password.
4. Click on the "Login" button.
Expected Result:
1. The system should display an error message stating "Invalid credentials" and remain on the login page.
Postconditions:
1. The user is not logged in, and the login page remains active.
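TC001 and TC002 exercise the same flow with different inputs, so in an automated suite they could map onto a single parametrized test. The sketch below uses a stand-in attempt_login function; in practice this would drive the UI or call the application's real login API.

```python
import pytest


# Stand-in for the real login check (illustrative only); a real suite would
# drive the UI (e.g. with Selenium) or call the application's login API.
def attempt_login(username, password):
    valid_users = {"registered_user": "CorrectPass1!"}
    if valid_users.get(username) == password:
        return {"logged_in": True, "page": "dashboard"}
    return {"logged_in": False, "page": "login", "error": "Invalid credentials"}


@pytest.mark.parametrize(
    "username, password, expect_success",
    [
        ("registered_user", "CorrectPass1!", True),    # TC001: valid credentials
        ("registered_user", "WrongPassword!", False),  # TC002: invalid credentials
    ],
)
def test_login(username, password, expect_success):
    result = attempt_login(username, password)
    if expect_success:
        assert result["logged_in"] and result["page"] == "dashboard"
    else:
        assert not result["logged_in"]
        assert result["error"] == "Invalid credentials"
        assert result["page"] == "login"
```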

Test Case 3 - Forgot Password Functionality
Test Case ID - TC003
Test Scenario - Verify the forgot password functionality.
Preconditions:
1. The application should be accessible.
2. The user must have an email address registered in the system.
Test Steps:
1. Open the application.
2. Navigate to the login screen.
3. Click on the "Forgot Password" link.
4. Enter the registered email address.
5. Click on the "Submit" button.
Expected Result:
1. The user should receive an email with instructions to reset the password.
Postconditions:
1. User receives a password reset email.

Test Case 4 - User Logout Functionality
Test Case ID - TC004
Test Scenario - Verify that a logged-in user can log out successfully.
Preconditions:
1. The user must be logged in.
Test Steps:
1. Click on the "Logout" button from the dashboard.
Expected Result:
1. The user should be successfully logged out and redirected to the login screen.
Postconditions:
1. User is logged out of the system.

Test Case 5 - Password Reset Validation
Test Case ID - TC005
Test Scenario - Verify the password reset with valid input.
Preconditions:
1. The user has requested a password reset.
2. The user has received a password reset email.
Test Steps:
1. Open the password reset link sent via email.
2. Enter a valid new password that meets complexity requirements.
3. Confirm the password and submit.
Expected Result:
1. The system should update the password and confirm the change with a success message.
Postconditions:
1. The user can log in with the new password.

Notes:

  • Test cases can be executed manually or through automated testing frameworks.

  • This document should be updated as the application evolves and additional scenarios arise.

  • All test cases must be reviewed and approved before execution.

Test Environment Setup Document Sample

1. Document Information

Document Title: Test Environment Setup Document
Project Name: Sample Project Name
Prepared By: Sample Prepared By
Date: Sample Date
Version: Sample Version

2. Purpose

The purpose of this document is to outline the setup and configuration of the test environment for the Sample Project Name. It ensures that all necessary hardware, software, and network configurations are in place and ready for test execution.

3. Scope

This document covers the configuration of the test environment for:

  • Functional testing.

  • Non-functional testing (e.g., performance or security).

  • Automated testing.

  • Integration testing.

  • Any other relevant testing activities.

4. Test Environment Overview

Provide a high-level overview of the test environment setup.

  • Environment Type - Development, QA, or Production-like environment.

  • Supported Platforms - Windows, Linux, macOS, Android, iOS.

  • Number of Test Environments - QA1, QA2, or Staging.

  • Test Types Supported - Functional, Regression, Performance, or Security.

5. Hardware Requirements

Details of the hardware resources needed for setting up the environment.

Hardware Component | Configuration Details | Quantity | Remarks
Server Type | e.g., Virtual or Physical | Number of items | Additional Notes
CPU | e.g., 4 vCPUs | Number of items | Additional Notes
RAM | e.g., 16 GB | Number of items | Additional Notes
Disk Space | e.g., 500 GB SSD | Number of items | Additional Notes
Network Requirements | e.g., 1 Gbps | Number of items | Additional Notes

6. Software Requirements

Details of the software components that will be installed/configured in the test environment.

Software Component | Version | Installation Location | Remarks
Operating System | e.g., Windows or Linux | e.g., Virtual Machine | Additional Notes
Database | e.g., MySQL 5.7 or PostgreSQL | e.g., DB Server | Additional Notes
Application Server | e.g., Apache Tomcat or Nginx | e.g., Server | Additional Notes
Middleware | e.g., Redis or RabbitMQ | e.g., Middleware Server | Additional Notes
Testing Tools | e.g., Selenium or JIRA | e.g., Tester Machines | Additional Notes
Version Control System | e.g., Git | e.g., GitHub / Local Server | Additional Notes

7. Network Configuration

Describe the network architecture and configurations needed for the environment.

  • Network Type - e.g., LAN / WAN.

  • Subnet Configuration - e.g., 192.168.1.0/24.

  • Firewalls - Describe firewall rules, if applicable.

  • Load Balancers - Describe any load balancers required.

  • VPN Access - If required for remote testing, include VPN details.

8. Test Data Setup

Outline how test data will be set up and managed in the test environment.

  • Data Sources - e.g., production-like data, mock data.

  • Data Population Method - e.g., SQL scripts, APIs.

  • Data Privacy Considerations - If sensitive data is used, mention masking or anonymization processes.

  • Backup Strategy - Plan for backing up test data.
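If SQL scripts are chosen as the population method, the data load can be as small as the following sketch, which fills a throwaway SQLite database with mock users. The table name, columns, and values are illustrative assumptions, not the project schema.

```python
import sqlite3

# Populate a disposable SQLite database with mock users for functional testing.
connection = sqlite3.connect("test_data.db")
connection.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, username TEXT, email TEXT)"
)
mock_users = [
    (1, "test_user_1", "user1@example.com"),
    (2, "test_user_2", "user2@example.com"),
]
connection.executemany("INSERT OR REPLACE INTO users VALUES (?, ?, ?)", mock_users)
connection.commit()
connection.close()
```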

9. Access Management

Describe how access to the environment will be controlled.

Role | Access Type | Tools / Servers | Access Method
Tester | Read / Write | e.g., Application Server or DB | SSH, HTTP, or VPN
Developer | Read / Write | e.g., GitHub, Jenkins | SSH, HTTP, or VPN
Administrator | Full Access | e.g., All servers | SSH, HTTP, VPN, and Root Access

10. Test Environment Validation

Describe the steps to validate that the environment is working as expected.

  • Smoke Testing - Outline the basic tests that will be executed to validate the environment setup, such as login or data retrieval.

  • Environment Health Checks - Tools used for monitoring, such as logs and dashboards.

  • Backup and Recovery - Procedures to ensure the environment can be recovered if a failure occurs.
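The validation step can be automated with a short health-check script run before test execution starts. The sketch below assumes placeholder URLs for the QA environment.

```python
import sys

import requests

# Endpoints to verify before testing begins (placeholder URLs).
CHECKS = {
    "application home page": "https://qa1.example.com/",
    "health endpoint": "https://qa1.example.com/health",
}

failures = []
for name, url in CHECKS.items():
    try:
        response = requests.get(url, timeout=5)
        if response.status_code != 200:
            failures.append(f"{name}: HTTP {response.status_code}")
    except requests.RequestException as exc:
        failures.append(f"{name}: {exc}")

if failures:
    print("Environment validation failed:")
    for failure in failures:
        print(f"  {failure}")
    sys.exit(1)

print("Environment validation passed.")
```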

11. Risk and Mitigation Plan

Identify potential risks related to the test environment and mitigation strategies.

Risk | Impact | Probability | Mitigation Strategy
Environment Downtime | High | Medium | Set up redundancy and frequent backups.
Network Latency | Medium | Low | Optimize network routes and set up mirrors.
Hardware Failure | High | Low | Maintain spares and cloud-based backups.

12. Environment Maintenance and Monitoring

Describe ongoing activities to maintain the environment during testing.

  • Monitoring Tools - List monitoring tools such as Prometheus or Grafana.

  • Performance Monitoring - Describe how the environment’s performance will be monitored.

  • Routine Maintenance - Any scheduled updates or backups.

  • Log Management - Tools or processes to collect and analyze logs.

13. Dependencies

List all external dependencies that must be available for the environment to function, e.g., APIs, third-party services, and external databases.

Dependency | Type | Configuration Details | Availability
Payment Gateway | API | External API Endpoint | 9 AM - 5 PM
Third-Party | Authentication | External API Endpoint | 24 / 7

14. Approval and Sign-off

Once the environment has been set up and validated, this section is completed by stakeholders to approve the environment for testing.

Name | Role | Signature | Date
Name of Stakeholder | Project Manager | Signature | Date
Name of Stakeholder | QA Lead | Signature | Date
Name of Stakeholder | System Administrator | Signature | Date
