Statistical modeling

With the increasing popularity of parallel testing to shorten testing cycles, SeaLights utilizes machine learning and AI through statistical modeling to meticulously link each code method with its corresponding tests.

This process involves analyzing every build during testing to pinpoint which tests activate specific code areas, drilling down to the method level. The analysis primarily centers on the timing of tests, determining the code that was triggered during specific time intervals.

Test Impact Analysis (TIA) begins reducing test execution time and costs as early as the analysis of the second build, and it progressively improves mapping accuracy over time. Initially, each test is associated with broad code coverage, which then narrows down based on statistical insights about the code areas it triggers. The effectiveness of this mapping hinges on several factors, including how test execution is orchestrated and how testing environments and labs are set up. The more accurate the mapping, the greater the time and cost savings TIA delivers.
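
To illustrate the narrowing idea, here is a minimal Python sketch of timing-based association: each test's candidate methods are the ones whose hits fall inside that test's execution window, and the candidates are intersected across builds. The data structures and function names are illustrative assumptions, not SeaLights' actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestRun:
    name: str
    start: float  # seconds from the start of the test stage
    end: float

def candidate_methods(test: TestRun, method_hits: dict[str, list[float]]) -> set[str]:
    """Methods whose recorded hit timestamps fall inside the test's execution window."""
    return {
        method
        for method, timestamps in method_hits.items()
        if any(test.start <= t <= test.end for t in timestamps)
    }

def narrow_mapping(builds: list[tuple[list[TestRun], dict[str, list[float]]]]) -> dict[str, set[str]]:
    """Intersect each test's candidates across builds: broad at first, narrower over time."""
    mapping: dict[str, set[str]] = {}
    for tests, method_hits in builds:
        for test in tests:
            candidates = candidate_methods(test, method_hits)
            if test.name in mapping:
                mapping[test.name] &= candidates
            else:
                mapping[test.name] = candidates
    return mapping
```

Each additional build in which tests start or end at different moments removes methods that merely happened to run at the same time, which is the statistical narrowing described above.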


Efficient Code-to-Test Mapping with Parallel Testing

SeaLights utilizes machine learning and AI to establish precise connections between code methods and their corresponding tests via statistical modeling. The effectiveness of this modeling is directly impacted by how test execution is orchestrated. By clearly separating builds, testing environments, test types, and test labs, statistical modeling becomes more efficient, resulting in greater savings. Let's examine the impact of different configurations on TIA.

Single Lab (Test Environment)

When employing a single lab to run all tests concurrently, timing becomes a crucial factor. Since there can only be one test stage, all tests, regardless of type, are executed on the same stage without distinction. TIA relies solely on the start and end times of each test, comparing them to the code triggered during specific time intervals. Test order and timing variations enable the identification of which test triggers which code. Over time, these differences can enhance statistical modeling accuracy.

However, consistent test timing, meaning running the same tests in parallel with identical order and timing on every execution of the test stage, can hinder the statistical model's ability to learn the individual impact of each test. Such an approach yields no improvement over time and leaves many tests linked to numerous code pieces, resulting in larger test recommendation lists.
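
The effect is easy to see with invented numbers. In the hedged sketch below (timings, test names, and method names are assumptions), two tests that always run with identical windows keep seeing the same hits, so their candidate sets never shrink; a single run with a shuffled order is enough to separate them.

```python
def methods_in_window(start, end, hits):
    """Methods whose hit timestamps fall inside a test's execution window."""
    return {m for m, ts in hits.items() if any(start <= t <= end for t in ts)}

# Builds 1 and 2: identical parallel order and timing -> both tests see both methods.
fixed_hits = {"methodA": [3.0], "methodB": [7.0]}
login = methods_in_window(0, 10, fixed_hits)       # {'methodA', 'methodB'}
checkout = methods_in_window(0, 10, fixed_hits)    # {'methodA', 'methodB'}

# Build 3: the order is shuffled, so the execution windows no longer overlap.
shuffled_hits = {"methodA": [3.0], "methodB": [15.0]}
login &= methods_in_window(0, 10, shuffled_hits)       # narrows to {'methodA'}
checkout &= methods_in_window(12, 22, shuffled_hits)   # narrows to {'methodB'}
```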

Multiple Labs (Test Environments)

Employing multiple test labs in parallel testing can significantly elevate the effectiveness of statistical modeling, continuously refining the accuracy and efficiency of Test Impact Analysis (TIA). However, successful implementation requires efficient orchestration and monitoring of test interactions with the code. By minimizing test overlap and maximizing separation between test execution environments, statistical modeling can rapidly generate an accurate map of code and test connections. An ideal approach is to run multiple test groups/sets in parallel, where each group executes sequentially on its own distinct lab.
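
As a rough illustration of that orchestration pattern, the sketch below partitions a suite into groups, each bound to a distinct lab, with the suites inside a group running one after another. The group names, lab identifiers, and run_group helper are hypothetical; in practice this would map to CI jobs or agent configuration rather than a Python script.

```python
import concurrent.futures

# Hypothetical partition of the suite: each group is bound to its own lab
# (test environment), so tests from different groups never share a time
# window on the same lab.
TEST_GROUPS = {
    "lab-api":  ["api_smoke", "api_regression"],
    "lab-ui":   ["ui_smoke", "ui_regression"],
    "lab-perf": ["perf_baseline"],
}

def run_group(lab_id: str, suites: list[str]) -> None:
    # Placeholder for the real trigger (CI job, CLI call, etc.); the key point
    # is that suites within a group run sequentially on a single lab.
    for suite in suites:
        print(f"[{lab_id}] running {suite}")

# Groups run in parallel, one lab each; suites inside a group run in sequence.
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_group, lab, suites) for lab, suites in TEST_GROUPS.items()]
    for f in futures:
        f.result()
```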

Container technologies and orchestration platforms, such as Kubernetes and PCF, have revolutionized the creation, management, and decommissioning of test environments, making it both simpler and more cost-effective to tailor testing workflows to specific requirements and achieve superior efficiency in TIA. This flexibility empowers organizations to optimize their testing processes and maximize the benefits of TIA.
