Group Coverage Trend Report

The Group Coverage Trend Report is a cross-portfolio analytical dashboard designed for managers and quality leaders to monitor coverage trends across a group of applications simultaneously. While it shares the underlying engine of the Coverage Trend Report, its primary purpose is to provide a side-by-side comparison of different projects, microservices, or team-owned applications.

In this report, the visualization is broken down per application, allowing users to see how various projects in a business unit are performing relative to one another, while also providing a primary aggregated view of the entire group's health.


Why It Is Useful

The Group Coverage Trend Report is essential for organizations practicing microservices architecture or managing large portfolios of applications. Its primary benefits include:

  • Cross-Project Benchmarking: Easily identify which applications are improving their coverage and which are stagnating or declining compared to their peers.

  • Portfolio Oversight: Provide executives with a single source of truth for the overall quality status of a business unit or a specific product group.

  • Efficiency at Scale: By visualizing multiple apps side-by-side, managers can quickly spot "laggards" that require additional testing resources or architectural review.

  • Unified Quality Standards: Verify if a specific test stage (e.g., Integration) is being adopted consistently across all apps in a cluster.

  • Trend Analysis: Observe long-term patterns to ensure that the introduction of new features across a group of apps isn't resulting in a "coverage debt" over time.


Key Differences from Single-App Report

The Group Coverage Trend Report is optimized for comparative analysis. Below are the functional differences you will encounter:

1. Generating the Report

  • Multiple Entities: Unlike the standard report, you select a list of multiple applications. For every application added to the group, you must select the specific branch to be tracked. This ensures that the "Main" or "Release" branches of different services are compared accurately.

  • Interval-Only Analysis (No Reference Builds): Because different applications have different release cycles, the Group Coverage Trend Report does not support the "Reference Build" view.

    • Fixed Intervals: To allow for a synchronized X-axis across multiple projects, the report strictly uses time-based intervals (1 week, 2 weeks, 3 weeks, 4 weeks, or 1 month). This ensures an "apples-to-apples" comparison of performance over the same period.

Note: The start date for the data shown in the charts depends on the selected builds and is currently aligned to the following Monday, or to the 1st of the month when the all-builds option is selected.

  • Test Stage Selection: A list of all test stages reported for the selected apps and branches during the past year is available, ordered alphabetically. Click individual stages to include or exclude them; the aggregated coverage line (bold blue) recalculates dynamically. Note that the breakdown in the charts and tables remains per app, not per test stage.
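The start-date alignment described in the note above can be sketched as follows. `align_start_date` is a hypothetical helper (not part of the SeaLights API), and the rule that a date already falling on a Monday is kept unchanged is an assumption:

```python
from datetime import date, timedelta

def align_start_date(selected: date, interval: str) -> date:
    """Snap a chart start date forward per the alignment rule:
    weekly intervals begin on the following Monday; the monthly
    interval begins on the 1st of the next month."""
    if interval == "1 month":
        # Roll over to the first day of the next month
        if selected.month == 12:
            return date(selected.year + 1, 1, 1)
        return date(selected.year, selected.month + 1, 1)
    # Weekly intervals: advance to the next Monday (weekday 0);
    # a date already on a Monday is kept as-is (assumption)
    return selected + timedelta(days=(7 - selected.weekday()) % 7)
```

For example, selecting Wednesday 2024-05-15 with a weekly interval would align the chart to Monday 2024-05-20, so all apps in the group share the same X-axis buckets.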

2. Data Representation

  • Aggregated Coverage (Main Blue Line): The most prominent feature of the chart is the bold blue line, which shows the combined coverage across all selected apps for each interval, providing a single metric for the group's quality status. Each interval point provides a snapshot of the breakdown for every app included in the report.

  • Individual App Trends: In addition to the aggregate, the chart displays a separate line for each selected application. This allows for immediate visual comparison of trends and helps identify which specific apps are driving the aggregate numbers up or down.

  • Table View: Below the chart, a detailed Table View lists the raw coverage percentages and method counts for every application in the group, sorted by interval. This table is the primary tool for auditing specific values that may be crowded in the visual chart. Here too, the breakdown is by app and not by test stage.

  • Global Test Stage Filtering: You can still filter by Test Stages (e.g., viewing only "Functional" coverage), but the filter applies globally across all apps in the group. This lets you see how Functional coverage specifically is trending across, say, 10 different services. The breakdown in the charts and tables remains per app, not per test stage.
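One way to picture how the aggregate line could be derived per interval is a method-weighted average, i.e., total covered methods divided by total methods across all selected apps. Whether SeaLights weights by method count or averages the per-app percentages is not stated here, so treat this sketch (and its field names) as an assumption:

```python
def group_coverage(apps):
    """Hypothetical per-interval aggregate: total covered methods
    divided by total methods across all selected apps (assumed
    method-weighted; field names are illustrative)."""
    covered = sum(a["covered_methods"] for a in apps)
    total = sum(a["total_methods"] for a in apps)
    return round(100 * covered / total, 1) if total else 0.0

# Sample snapshot for a single interval
interval_snapshot = [
    {"app": "billing-svc",  "covered_methods": 450, "total_methods": 600},
    {"app": "payments-svc", "covered_methods": 120, "total_methods": 400},
]
# 570 covered out of 1000 methods -> 57.0
```

Note that a method-weighted aggregate lets large apps dominate the blue line, which is exactly why the individual per-app lines matter for spotting which service is driving the group number.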

3. Interactive Legend

The legend on the right-hand side serves as a navigational hub:

  • Deep Links: Each application in the legend has a link that will automatically open that specific application's Standard Coverage Trend Report, allowing you to instantly pivot from the group "macro" view to the app "micro" view (stage-by-stage breakdown).

4. Advanced Chart Elements

Unlike the standard trend report, this report offers only the following Chart Elements:

  • Number of Methods: Shows bars representing the relative size/scope of each application in the group.

  • Production Defects: View defects correlated across the entire group's timeline.


FAQ

How does this report differ from the standard Coverage Trend Report?

The standard report focuses on a single application and breaks down the trend by test stages (Unit vs. Component vs. System). The Group report focuses on multiple applications and breaks down the trend per app.

Can I see the "Total Aggregate Coverage" for the whole group?

Yes. In addition to the individual lines for each app, the report displays a blue line representing the overall coverage health of the entire selected group.

If I filter by "Integration Tests", what happens to apps that don't have that stage defined?

Applications without the selected test stage will report 0% or "No Data" for that specific filter, making it easy to identify gaps in your testing strategy across the organization.
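The gap behavior described in this answer can be sketched as a small lookup; `stage_coverage` and its data shape are illustrative assumptions, not the SeaLights data model:

```python
def stage_coverage(app_stages, stage):
    """Return a display value for one app under a test-stage filter:
    apps that never reported the selected stage show "No Data",
    otherwise the stage's coverage percentage is formatted.
    (Illustrative only; the (covered, total) tuple shape is assumed.)"""
    if stage not in app_stages:
        return "No Data"
    covered, total = app_stages[stage]
    return f"{100 * covered / total:.1f}%" if total else "0.0%"
```

For example, an app that only reports a "Unit" stage would show "No Data" when the group is filtered to "Integration", immediately flagging the gap in that app's testing strategy.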

Does the report update in real-time?

The report updates as soon as new coverage data is processed by the SeaLights backend, typically following the completion of a build or test execution cycle.

Why can't I select "Reference Builds" in this report?

Reference builds are specific to an individual app's release lifecycle. Since different apps release at different times, the report uses time intervals to provide a synchronized comparative view.

See more related questions in the Coverage Trend Report.
