Test Optimization Models: Bounded vs. Unbounded

Overview: Two Models for Test Optimization

Test Optimization supports two primary models for generating test recommendations, ensuring efficiency across both single-application and complex, multi-component testing environments. The model used depends on whether your test stage is coupled to a specific application build or targets a whole environment.

1. Bounded Test Stages (Build Level Optimization)

This is the traditional model for test stages that execute against a single application build.

  • Definition: A Bounded Test Stage is explicitly linked to a single application. Bounded stages run primarily within CI pipelines and are often the first testing step. Onboarding is simpler because it requires only a link to a single application repository and a unique build identifier.

  • Applicability: This model is for tests where coverage and test recommendations need to be tightly coupled to a single application's source code, such as:

    • Unit Tests: Testing a single application component.

    • Component Tests: Testing a service in isolation.

    • Any testing where the scope is limited to one application.

  • Recommendation Logic: Test Optimization calculates recommendations by comparing the current application build against the immediately preceding build of that same application. This provides a precise view of code changes only within that single application.
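
For illustration, this build-level comparison can be pictured as a diff of method-level content hashes between the two builds, mapped through test coverage. The sketch below is a rough illustration only; the Build type, its fields, and the coverage mapping are invented names, not part of any actual API.

```python
# Minimal sketch of bounded (build-level) recommendation logic.
# All type, field, and helper names here are hypothetical, not a real API.
from dataclasses import dataclass

@dataclass
class Build:
    app_name: str
    build_id: str
    method_hashes: dict[str, str]   # method/function id -> content hash

def bounded_recommendations(current: Build, previous: Build,
                            coverage: dict[str, set[str]]) -> set[str]:
    """Recommend the tests covering methods whose content changed between
    the current build and the immediately preceding build of the same app."""
    changed = {
        method for method, content_hash in current.method_hashes.items()
        if previous.method_hashes.get(method) != content_hash
    }
    # `coverage` maps a method id to the set of tests that exercise it.
    return {test for method in changed for test in coverage.get(method, set())}
```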

2. Unbounded Test Stages (Environment Level Optimization)

The Unbounded Test Stage provides a streamlined approach to testing in dynamic lab environments. It is onboarded directly onto a lab/environment and executes tests against all applications currently deployed in that environment.

  • Definition: An Unbounded Test Stage is configured directly on a lab or testing environment and is not tied to any single application or build. It is designed for Integration or System Test Environments where multiple services interact.

  • Applicability: This model is ideal for integration, system, or end-to-end (E2E) test stages that span multiple services, where the state of the entire environment dictates the required test subset.

  • Recommendation Logic: Test Optimization supports this concept by generating test recommendations at the Lab/Environment level, rather than the individual application or build level. This allows for accurate test impact analysis across complex, multi-component deployments.

  • Tracking Included Applications: You can track precisely which applications (Applications Under Test, or AUTs) were considered in the recommendation calculation. This information is visible in the UI on the Savings Breakdown page, under the AUT tab, ensuring transparency in the optimization process.
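
Conceptually, each recommendation request for an unbounded stage captures a snapshot of the environment: which AUTs are deployed and at which versions. The sketch below illustrates that idea with invented field names; it is not the product's actual data model.

```python
# Hypothetical snapshot of what an Unbounded Test Stage sees at
# recommendation time; field names are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AutVersion:
    app_name: str      # Application Under Test
    build_id: str
    branch: str        # recorded, but not used for comparison (see below)

@dataclass
class EnvironmentSnapshot:
    environment: str
    requested_at: datetime
    auts: list[AutVersion] = field(default_factory=list)

# The AUTs considered for one recommendation calculation, i.e. the kind of
# list the Savings Breakdown page's AUT tab surfaces.
snapshot = EnvironmentSnapshot(
    environment="integration-lab-1",
    requested_at=datetime.now(),
    auts=[
        AutVersion("orders-service", "build-412", "main"),
        AutVersion("payments-service", "build-977", "release/2.4"),
    ],
)
```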

Detailed Implementation: Unbounded Test Stage Recommendation Logic

This section details how Test Optimization calculates recommendations specifically for an Unbounded Test Stage.

Calculation Method

Test Optimization calculates recommendations for an Unbounded Test Stage by comparing the current state of the environment against the versions present in the previous successful test execution.

The resulting recommendations are always based on version differences between:

  1. The applications and versions available in the environment when the recommendations were requested (Current Test Cycle).

  2. The applications and versions that were part of the previous recommendation calculation consumed by that Test Stage (Previous Test Cycle).
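
One way to picture this comparison is as a diff between two version maps, one per Test Cycle. The sketch below is a simplified illustration: the function and variable names are invented, and applications with no previous baseline are simply skipped here.

```python
# Sketch of the unbounded comparison: app versions in the Current Test Cycle
# vs. the versions consumed by the Previous Test Cycle. Plain dicts of
# app_name -> build_id; all names here are illustrative assumptions.

def versions_to_compare(current_cycle: dict[str, str],
                        previous_cycle: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Per application, the (previous_build, current_build) pair whose code
    differences drive the recommendations for this Test Stage."""
    pairs = {}
    for app, current_build in current_cycle.items():
        previous_build = previous_cycle.get(app)
        if previous_build is not None and previous_build != current_build:
            pairs[app] = (previous_build, current_build)
    return pairs

current_cycle = {"orders-service": "build-413", "payments-service": "build-977"}
previous_cycle = {"orders-service": "build-412", "inventory-service": "build-88"}
# inventory-service appears only in the previous cycle, so it is ignored.
print(versions_to_compare(current_cycle, previous_cycle))
# -> {'orders-service': ('build-412', 'build-413')}
```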

Exclusions and Stable Comparison Rules

To ensure performance, consistency, and stability in dynamic environments, the following differences are intentionally not considered during the recommendation calculation process:

  • Missing Applications: Applications present in the Previous Test Cycle but not in the Current Test Cycle are ignored.

  • Late-Loading Applications: Applications that appear after the recommendation generation process has started are ignored.

  • Branch Differences: The comparison engine explicitly ignores differences in branches between the current and previous states. Test Optimization achieves this by comparing the hash of each method/function, ensuring that the comparison is based purely on code content, not branch name.
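
The branch-agnostic rule can be illustrated with a content-hash comparison. This is a minimal sketch under the assumption that each method is identified by a stable id and hashed over its source text; the helper names are invented.

```python
# Illustrative branch-agnostic comparison: a method is considered unchanged
# when the hash of its body matches, regardless of which branch each build
# came from. The hashing scheme and helper names are assumptions.
import hashlib

def method_hash(source_text: str) -> str:
    """Hash the method/function body; the branch name never enters the hash."""
    return hashlib.sha256(source_text.encode("utf-8")).hexdigest()

def changed_methods(current: dict[str, str], previous: dict[str, str]) -> set[str]:
    """Method ids whose content hash differs between the two builds."""
    return {m for m, h in current.items() if previous.get(m) != h}

# The same code built from different branches hashes identically,
# so no change is reported and no extra tests are recommended.
main_build = {"OrderService.place": method_hash("return repo.save(order);")}
feature_build = {"OrderService.place": method_hash("return repo.save(order);")}
assert changed_methods(feature_build, main_build) == set()
```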

Handling Multiple Builds for a Single Application

If an application is reported with multiple builds within a single Test Cycle, specific logic is applied to determine which build version is used for comparison. This is a conservative approach designed to maximize coverage and prevent the unintended omission of necessary tests.

  • Current Test Cycle: If there are multiple builds in the Current Test Cycle, the newer one is used for comparison. Rationale: this uses the most up-to-date version of the code present in the environment when determining changes.

  • Previous Test Cycle (Baseline): If there were multiple builds in the Previous Test Cycle used for calculation, the older one is used for comparison. Rationale: this establishes a safe, older baseline, ensuring that all changes introduced since that baseline are captured and recommended for testing.
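
Under the assumption that each reported build carries a comparable timestamp, the selection rule can be sketched as a pair of reductions; the type and field names below are invented for illustration.

```python
# Sketch of build selection when one application reports multiple builds in
# a single Test Cycle. The type and its fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReportedBuild:
    build_id: str
    reported_at: datetime

def build_for_current_cycle(builds: list[ReportedBuild]) -> ReportedBuild:
    """Current cycle: take the newest build, i.e. the latest code in the environment."""
    return max(builds, key=lambda b: b.reported_at)

def build_for_previous_cycle(builds: list[ReportedBuild]) -> ReportedBuild:
    """Previous cycle (baseline): take the oldest build, so every change made
    since that conservative baseline is captured by the recommendations."""
    return min(builds, key=lambda b: b.reported_at)
```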
