Visibility for Test Stages at the Environment Level
Following our recent move toward Test Optimization at the Environment Level, we are excited to announce a suite of new visibility tools designed to streamline how you track and analyze your testing efficiency.
SeaLights now dynamically determines relevant components for each Test Stage based on the specific execution context. Instead of being restricted by fixed application boundaries, test runners can now request recommendations tailored specifically to their current stage (e.g., Integration, Regression, or E2E).
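For illustration, here is a minimal sketch of what a stage-scoped recommendation request might look like from a test runner's side. The endpoint path, query parameter, and response shape below are assumptions made for this example, not the documented SeaLights API.

```typescript
// Hypothetical sketch: requesting test recommendations scoped to a test stage.
// The endpoint path, query parameter, and response shape are illustrative
// assumptions, not the documented SeaLights API.

interface TestRecommendation {
  testName: string;
  recommendation: "run" | "skip"; // assumed response shape
}

async function getStageRecommendations(
  baseUrl: string,
  apiToken: string,
  testStage: string // e.g. "Integration", "Regression", or "E2E"
): Promise<TestRecommendation[]> {
  // Recommendations are requested per test stage rather than per fixed
  // application boundary, matching the environment-level model.
  const url = `${baseUrl}/recommendations?testStage=${encodeURIComponent(testStage)}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!res.ok) {
    throw new Error(`Recommendation request failed: ${res.status}`);
  }
  return (await res.json()) as TestRecommendation[];
}

// Example: fetch only the tests relevant to the current Integration stage.
// getStageRecommendations("https://your-sealights-host/api", token, "Integration")
//   .then((recs) => recs.filter((r) => r.recommendation === "run"));
```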
Your Hub for Configuration and Cycle Insights
The Savings Summary and Savings Breakdown pages have evolved to support these new environment-level capabilities. While the Monthly Savings Report is now the primary destination for high-level ROI and test optimization impact, these updated views provide the control and granularity needed to manage your test stages effectively.
Savings Summary: Configuration & Access

The Savings Summary page is your starting point for monitoring your optimization footprint and accessing specific cycle data.
App vs. Stage Toggle: Seamlessly switch between an application-centric view and a stage-centric view.
New Baseline Metric – Compute Time Without TIA: We’ve added a dedicated column that quantifies the "what-if" scenario: the compute hours your tests would have consumed if every test had run without Test Impact Analysis (TIA). This provides a clear baseline against which to measure the compute hours saved by SeaLights optimization (see the sketch after this list).
Direct Drill-Down: Each entry in the Test Stage list is interactive, navigating you directly to a pre-filtered Savings Breakdown page for deep-dive analysis of individual cycles.
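To make the new baseline concrete, the sketch below shows one way to read the relationship between the columns. The formula is an assumption about how the metric composes, used purely for illustration; it is not SeaLights' internal calculation.

```typescript
// Illustrative sketch (assumed relationship, not SeaLights' internal formula):
// the "Compute Time Without TIA" column acts as the baseline, so savings can
// be read as the gap between that baseline and the time actually spent.

interface StageSavings {
  computeTimeWithoutTiaHours: number; // baseline: full suite on every cycle
  actualComputeTimeHours: number;     // what was actually executed with TIA
}

function savedComputeHours(s: StageSavings): number {
  // Hours saved = what the stage would have cost without TIA
  //             - what it actually cost with optimization applied.
  return s.computeTimeWithoutTiaHours - s.actualComputeTimeHours;
}

// Example: a stage whose baseline is 120 hours but ran in 45 hours
// saved 75 compute hours.
console.log(savedComputeHours({ computeTimeWithoutTiaHours: 120, actualComputeTimeHours: 45 }));
```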
Deep Dive: Unbounded Savings Breakdown

The Savings Breakdown page now natively supports the "Unbounded Test Stage" mode. By disabling the Application filter, the dashboard pivots to Test Stage Mode, focusing exclusively on the timeline of your test cycles and the granular impact of environment-level optimization.
Key Enhancements:
Cross-App Context: In the "Impacted Tests" tab, the table now includes an application level, so even when you are viewing a broad test stage (such as a shared Integration environment), you can still identify exactly which applications were affected.
Comparison Foundations: The AUT table now explicitly lists the Base Build and Branch used for comparison, ensuring total transparency into the optimization logic.
Granular Performance Status: In the AUT table, we have replaced the standard "Status" column with a detailed Impact on TIA column and an Impact Reason column. These provide a clear, descriptive breakdown of how optimization was applied to each component within the stage.