Coverage Trend Report
The Coverage Trend Report is designed to track and visualize code coverage performance for a specific application and branch over time. It provides a granular look at how different testing stages (e.g., Unit, Component, Integration) contribute to the overall quality of the codebase, allowing teams to monitor both Modified Coverage (new/changed code) and Overall Coverage (total codebase).
Why It Is Useful
This report is the primary health indicator for an application's testing lifecycle. It is particularly useful for:
Testing Efficiency Tracking: By monitoring Modified Coverage trends, teams can evaluate the efficiency and consistency of their testing process across different development cycles.
Quality Culture Benchmarking: Consistent high modified coverage over time indicates a mature quality culture where testing is integrated into the development flow.
Identifying Gaps: It highlights which test stages are underperforming or where coverage drops occur following major code refactors.
Correlation Analysis: Visualizing coverage alongside "Production Defects" allows stakeholders to identify correlations between testing efforts and production stability. Ideally, a high coverage trend should correlate with a decrease in defect density, validating the effectiveness of the testing strategy.

Detailed Functionality
1. Report Configuration & Persistence
First-Time Experience: Upon entering Quality Analytics / Coverage Trend, users can start fresh via the "Create new report" button.
Saving & Privacy: Reports must be manually saved to appear in the "Saved Reports" list. By default, newly created reports are private, though they can be shared or made public for organizational visibility.
API Support: Reports can also be generated and managed via the Quality Trend Report API.
2. Data Slicing (Filters)
App & Branch: Select a single application and a specific branch to analyze.
Date Range: Choose from presets (Last 1-12 months) or a custom range. A minimum of 4 weeks is recommended for meaningful trend analysis.
Please note that the start date of the data shown in the charts depends on the selected builds and is currently aligned to the following Monday or to the 1st of the month (for the All Builds option).
Analysis Mode (Builds):
All Builds: Aggregate data by intervals (1, 2, 3, 4 weeks or 1 month). When selecting 1 month, the data shown in the chart will start at the 1st of the relevant month. For the rest of the interval options, the data shown in the chart will start at the following Monday.
Reference Builds: Track only specific builds manually marked as "Reference" (e.g., Production or Release builds).
Test Stage Selection: A list of all test stages reported for the selected app and branch during the past year is available, ordered alphabetically. Click individual stages to include/exclude them. The aggregated coverage line (Bold Blue) recalculates dynamically.
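The start-date alignment described above (1st of the month for the 1-month interval, the following Monday for weekly intervals) can be sketched roughly as follows. This is an illustrative Python sketch, not SeaLights code; the exact boundary handling (e.g., whether a start date that already falls on a Monday or the 1st is kept as-is) is an assumption:

```python
from datetime import date, timedelta

def chart_start_date(range_start: date, interval: str) -> date:
    """Align the chart's first data point to the interval boundary.

    Sketch assumption: a 1-month interval starts on the 1st of the
    month (rolling forward if needed); weekly intervals start on the
    next Monday (a range_start that is already a Monday is kept).
    """
    if interval == "1 month":
        if range_start.day == 1:
            return range_start
        # Roll forward to the 1st of the next month.
        year = range_start.year + (range_start.month // 12)
        month = range_start.month % 12 + 1
        return date(year, month, 1)
    # Weekly intervals: roll forward to the next Monday (weekday() == 0).
    days_ahead = (7 - range_start.weekday()) % 7
    return range_start + timedelta(days=days_ahead)

print(chart_start_date(date(2024, 5, 15), "1 month"))  # 2024-06-01
print(chart_start_date(date(2024, 5, 15), "2 weeks"))  # 2024-05-20 (a Monday)
```

This is why a report whose date range starts mid-week or mid-month shows chart data beginning slightly later than the selected range.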
3. Understanding the Charts

The report presents two essential charts and their corresponding tables:
Modified Coverage Chart: Tracks coverage of changed/new code. Code changes you haven’t tested are your quality risks. The higher the coverage, the lower the chances for escaping defects. Aim for consistently high coverage for minimized risks and a strong quality culture.
Overall Coverage Chart: Monitors the coverage of your entire codebase. Strive for a positive trend, primarily driven by covering newly added or modified code. Temporary drops might occur after significant code changes without immediate coverage.
The chart data is based on the following:
Aggregated coverage of the selected test stages
All selected builds within the selected date range. There are two options for selecting builds:
All builds, aggregated based on the selected time interval: 1/2/3/4 weeks or 1 month. Each point on the X-axis represents an interval and considers all code changes from the last build in interval X-1 and all related test stage coverage reported in builds within interval X.
Reference builds. Reference build X, as defined in the SeaLights Coverage Dashboard, takes code changes from reference build X-1 and only considers coverage from test stages reported to that build. Each point on the X-axis represents a single reference build.

4. Advanced Visualization (Chart Elements)

Users can toggle additional layers of data to provide deeper context:
Coverage of All Test Stages: Displays a "Bold Green" line representing the aggregate of every reported stage, useful for comparing a specific subset against the total.
Number of Methods: Adds a bar chart representing the scope/size of the code being examined.
Coverage Quality Gate: Overlays the defined quality gate thresholds directly onto the chart to see if the team is meeting internal standards.
Production Defects: Upload a .csv file of production defects to see defects per interval, allowing a direct correlation between low coverage and production issues. The data is added to both the Modified and Overall Coverage charts as a single aggregated bar (no breakdown by the apps included in the report).
The data is correlated to the coverage using the defect's date: the production defects bar is displayed for the date range closest to the defect's date (the last date range before the defect was created).
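The "last date range before the defect was created" matching can be sketched as a simple bucketing step. This is an illustrative sketch of the described behavior, not the actual matching code; it assumes interval start dates are sorted ascending:

```python
from bisect import bisect_right
from datetime import date

def defects_per_interval(interval_starts: list[date],
                         defect_dates: list[date]) -> dict[date, int]:
    """Count defects per interval, assigning each defect to the last
    interval that starts on or before the defect's creation date.
    interval_starts must be sorted ascending."""
    counts = {start: 0 for start in interval_starts}
    for d in defect_dates:
        i = bisect_right(interval_starts, d) - 1
        if i >= 0:  # defects before the first interval are dropped
            counts[interval_starts[i]] += 1
    return counts

starts = [date(2024, 6, 1), date(2024, 7, 1), date(2024, 8, 1)]
defects = [date(2024, 6, 15), date(2024, 7, 2), date(2024, 7, 30)]
print(defects_per_interval(starts, defects))
# 1 defect in the June interval, 2 in July, 0 in August
```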

5. Automatic Insights
The report includes an Insights engine that alerts users to specific quality risks, such as:
Large Uncovered Modifications: Highlights instances where significant code changes occurred but were not properly covered by tests.
FAQ
Can I share a report with a colleague who doesn't use SeaLights?
After generating a report, you can use the share button to copy the report URL to your clipboard or to publish the report as a public report available to all users.
Using the copied URL, other users can view and work with a draft report that has the same configuration as the report you shared.
When a report is published as a public report, all users can view it from their reports list view. This is a great way to share the report with others in your organization.
What is the difference between an Interval and a Reference Build point?
An Interval aggregates all code changes and all test executions within a time window (e.g., July). A Reference Build point represents a snapshot of a single specific build, comparing it only to the previous reference build.
How do I see the raw numbers behind the lines?
You can hover over any point on the chart for a detailed tooltip, or view the data in the Table View provided below the charts.
Should I use this report for "Go/No-Go" release decisions?
While this report provides historical context, "Go/No-Go" decisions for specific releases are better served by the Test Gap Analysis (TGA), the Multi-App Report, or the SeaLights for Jira/ADO integration, which provide detailed status.
I created a report, but it is empty. Why can't I see any data on my charts?
There might be a few reasons for that:
When you are viewing a coverage trend report with 'All Builds' and an interval longer than or equal to the date range (e.g., a 1-month interval on a date range of 3 weeks).
When you are viewing a group coverage trend report with an interval longer than or equal to the date range (e.g., a 1-month interval on a date range of 3 weeks).
When you are viewing a coverage trend report with 'Reference Builds' and there is only one build selected.
When no builds were found in the selected date range.
Is it possible that some interval points on the charts have no data?
Yes. When no builds were found for a specific interval point, the point is still shown on the X-axis but is skipped on the chart line (the line becomes dotted). The same applies to the Modified Coverage chart, in both the group trend report and the coverage trend report (whether in 'All Builds' or 'Reference Builds' mode), when there were no code changes in a given interval or reference build.
Is the coverage calculated like in the Coverage Dashboard?
In the coverage trend report's 'All Builds' mode and in the group coverage report, the calculation differs from the Dashboard. Interval X takes all code changes from the last build in interval X-1, and all coverage from all related test stages (for the trend report) or from all selected apps/branches (for the group coverage report) that were reported in builds within interval X. For example, with a 1-month interval, coverage in July means all code changes since the last build in June, together with their coverage from all test stage executions reported on builds within July (for the trend report) or from all selected apps/branches (for the group trend report). In the coverage trend report's 'Reference Builds' mode, the calculation is similar to the Dashboard calculation when comparing two builds.
Does deselecting a test stage in a trend report remove its coverage from the aggregated coverage?
Yes. Deselecting a test stage in a trend report removes the test stage line from the chart and deducts the test stage data from the aggregated coverage line.
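As a rough sketch of this recalculation, assume (this is an assumption for illustration, not the documented formula) that a method counts as covered when any selected stage covered it. Deselecting a stage then shrinks the union of covered methods:

```python
def aggregated_coverage(stage_covered: dict[str, set], total_methods: int,
                        selected: set[str]) -> float:
    """Aggregated coverage over the selected stages.
    Sketch assumption: a method is covered if ANY selected stage covered it."""
    covered = set().union(*(stage_covered[s] for s in selected)) if selected else set()
    return 100 * len(covered) / total_methods

stages = {"Unit": {1, 2, 3}, "Integration": {3, 4}}
print(aggregated_coverage(stages, 10, {"Unit", "Integration"}))  # 40.0
print(aggregated_coverage(stages, 10, {"Unit"}))                 # 30.0
```

Note how deselecting "Integration" drops the aggregate from 40% to 30%: method 3 is still covered by "Unit", but method 4 is deducted.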
Why are there drops in the overall coverage?
If you are using the TIA tool when running your test suites, you should expect drops in the overall coverage whenever TIA was ON and recommended running only a subset of your tests. The drops mean that your coverage was temporarily lower than usual.
The start date of the data shown in the chart is different than the report date range. Why?
When selecting 1 month interval for your trend or group trend report, the data shown in the chart will start at the 1st of the relevant month. For other interval options, the data shown in the chart will start at the following Monday.
If your sprints start on a Wednesday, for instance, and you wish to create a report for a specific sprint, you can use the trend report with the reference builds setting as a workaround. This will require you to mark the relevant sprint builds as reference builds and select the “Reference build” option under the Builds section in your trend report Filter left pane.
Why are there differences between the number of methods shown in the report vs. the number shown in the Dashboard?
Quality Analytics reports comply with the ignored-code rules defined by the user under Settings > Data Scope > Ignored Code. So while the Dashboard shows the total number of methods in the code, the report shows only the number of non-ignored methods. TGA works similarly to Quality Analytics, so when comparing the two you should find the same number of methods reported.
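The filtering can be pictured as below. The glob-style pattern matching is an illustrative assumption; the actual rule syntax is whatever is configured under Settings > Data Scope > Ignored Code:

```python
import fnmatch

def unignored_method_count(methods: list[str], ignore_patterns: list[str]) -> int:
    """Count methods not matched by any ignored-code rule.
    Sketch only: patterns are matched as globs via fnmatch."""
    return sum(
        1 for m in methods
        if not any(fnmatch.fnmatch(m, p) for p in ignore_patterns)
    )

methods = ["src/app/main.py::run", "src/generated/pb.py::decode"]
print(unignored_method_count(methods, ["src/generated/*"]))  # 1
```

Here the Dashboard would report 2 methods, while the report (with the ignore rule applied) would show 1.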