

Coverage Dashboard


Last updated 8 months ago


New Dashboard Design Available!

We've redesigned the dashboard to be more intuitive and user-friendly, while still offering all the features you know and love.

SeaLights Coverage Dashboard provides a centralized hub for visualizing and analyzing your code coverage data. It empowers you to gain valuable insights into the effectiveness of your testing efforts and identify areas for improvement. This guide offers a detailed explanation of the key features and functionalities within the Coverage Dashboard.

Key Coverage Metrics

  • Overall Coverage: This metric represents the percentage of methods in your codebase that have been exercised by at least one test stage during the latest build. It's calculated as:

Overall Coverage (%) = (Number of Tested Methods / Total Number of Methods) × 100
  • Code Changes Coverage: This metric focuses on recently modified methods in your codebase. It reflects the percentage of these modified methods that have been covered by at least one test stage in the latest build. The calculation is as follows:

Code Changes Coverage (%) = (Number of Modified & Tested Methods / Total Number of Modified Methods) × 100
  • Untested Code Changes: This metric counts the modified methods that were not executed by any test stage during the latest build. These methods potentially lack sufficient test coverage and are considered quality risks. Aim for zero untested code changes (or at least zero critical untested code changes).
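To make the three metrics concrete, here is a minimal sketch of how they could be computed from per-method data. This is illustrative only, not SeaLights code; the data shape (a list of per-method `tested`/`modified` flags) is an assumption for the example:

```python
# Illustrative only: each entry represents one method in the latest build,
# with flags for "exercised by at least one test stage" and "recently modified".
def coverage_metrics(methods):
    """Return (overall %, code-changes %, untested code changes count)."""
    total = len(methods)
    tested = sum(1 for m in methods if m["tested"])
    modified = [m for m in methods if m["modified"]]
    modified_tested = sum(1 for m in modified if m["tested"])

    overall = 100.0 * tested / total if total else 0.0
    changes = 100.0 * modified_tested / len(modified) if modified else 0.0
    untested_changes = len(modified) - modified_tested
    return overall, changes, untested_changes

build = [
    {"tested": True,  "modified": False},
    {"tested": True,  "modified": True},
    {"tested": False, "modified": True},
    {"tested": False, "modified": False},
]
print(coverage_metrics(build))  # (50.0, 50.0, 1)
```

Here 2 of 4 methods are tested (50% overall), 1 of 2 modified methods is tested (50% code changes coverage), and 1 modified method is untested.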

Understanding Coverage Calculations

The coverage numbers displayed on the dashboard are aggregated across all test stages executed against a specific app/branch in the latest build. Importantly, this coverage is calculated at the method level, providing a granular view of how effectively your tests exercise your codebase.
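Aggregation across test stages means a method counts as covered if any stage exercised it, i.e. a set union over the stages' covered-method sets. A small sketch (stage and method names are invented for illustration):

```python
# A method is covered if at least one test stage exercised it,
# so aggregated coverage is the union of per-stage covered sets.
stage_coverage = {
    "unit":        {"Foo.bar()", "Foo.baz()"},
    "integration": {"Foo.baz()", "Foo.qux()"},
}
covered = set().union(*stage_coverage.values())
all_methods = {"Foo.bar()", "Foo.baz()", "Foo.qux()", "Foo.quux()"}
overall = 100.0 * len(covered) / len(all_methods)
print(sorted(covered), overall)  # ['Foo.bar()', 'Foo.baz()', 'Foo.qux()'] 75.0
```

Note that the union is not the sum of per-stage percentages: `Foo.baz()` is covered by both stages but is only counted once.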

SeaLights compares the current build's methods with those in the reference build to determine which methods have been added or modified. This allows you to focus on the coverage specifically for these recently changed sections of your codebase.
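The build comparison can be pictured as a diff over per-method identities: new signatures are added methods, and existing signatures whose bodies changed are modified methods. The sketch below uses a signature-to-body-hash map; representing method bodies as hashes is an assumption made for the example, not a statement about SeaLights internals:

```python
# Simplified sketch: compare two builds, each modeled as a map from
# method signature -> hash of the method body.
def changed_methods(current, reference):
    """Return (added, modified) method signatures in the current build."""
    added = [sig for sig in current if sig not in reference]
    modified = [sig for sig in current
                if sig in reference and current[sig] != reference[sig]]
    return added, modified

ref = {"Foo.bar()": "a1", "Foo.baz()": "b2"}
cur = {"Foo.bar()": "a1", "Foo.baz()": "c3", "Foo.qux()": "d4"}
print(changed_methods(cur, ref))  # (['Foo.qux()'], ['Foo.baz()'])
```

The union of the two lists is the set of "code changes" whose coverage the dashboard reports.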

Understanding Reference Builds

A reference build serves as the baseline for calculating the Code Changes Coverage and Untested Code Changes metrics.

By default, the reference build is the previous build and is defined per branch. The current reference build can be identified in the "Reference Build" column for each application.

Different builds can have different reference builds assigned. However, SeaLights does not allow the reference build of an already-processed build to be changed retroactively.

Navigating Build History

The Coverage Dashboard allows you to explore your build history, providing insights into how code coverage has evolved over time for each app/branch within your project.

Quality Gate Status

Quality Gates define thresholds that determine if a build meets your predefined quality standards. Three parameters influence the Quality Gate status:

  • Code Changes Coverage

  • Overall Code Coverage

  • Failed Tests

By clicking on the status within the dashboard, you can access a pop-up window that offers a summary of the specific build status and its associated Quality Gate definitions.

There are four potential statuses for each build reported per application. The table below summarizes these statuses and their corresponding reasons:

  • Passed: All Quality Gate conditions were met.

  • Failed: The code did not meet one or more quality thresholds.

  • Missing Data: This status is assigned when the Quality Gate criteria cannot be assessed due to missing data:

    • No Reported Tests: A criterion relies on the number of failed tests, but no tests were executed or reported.

    • Missing Test Stage Data: A criterion depends on coverage data, but no test stages were defined or executed.

    • No Detected Code Changes: A criterion is based on code changes, but no modifications were found between the current and reference builds.

  • Scan Issue: The build scan encountered errors during the reporting process.
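The status logic above can be sketched as a small decision function. This is a hedged illustration, not the SeaLights implementation: the field names, threshold values, and the exact missing-data rule are assumptions made for the example:

```python
# Hypothetical sketch of quality-gate evaluation. Field names and default
# thresholds are invented for illustration; they are not the SeaLights schema.
def quality_gate_status(build, min_overall=60.0, min_changes=80.0, max_failed=0):
    if build.get("scan_error"):
        return "Scan Issue"
    # If no criterion can be evaluated, there is no basis for a verdict.
    if (build.get("failed_tests") is None
            and build.get("overall_coverage") is None
            and build.get("changes_coverage") is None):
        return "Missing Data"
    checks = []
    if build.get("overall_coverage") is not None:
        checks.append(build["overall_coverage"] >= min_overall)
    if build.get("changes_coverage") is not None:
        checks.append(build["changes_coverage"] >= min_changes)
    if build.get("failed_tests") is not None:
        checks.append(build["failed_tests"] <= max_failed)
    return "Passed" if all(checks) else "Failed"

print(quality_gate_status({"overall_coverage": 72.0,
                           "changes_coverage": 85.0,
                           "failed_tests": 0}))  # Passed
```

A build missing only some of the data is still evaluated on the criteria that are available in this sketch; the real product's handling of partially missing data may differ.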

Applying Filters and Code Labels

While filters and code labels can help you focus on specific apps/branches within your project, note that they may also influence the coverage metric calculations for those filtered elements.

Unveiling Test Stage Details

By clicking on any line within the Coverage Dashboard table, you'll be presented with a deeper dive into the coverage data. This expanded view empowers you to understand the nuances of your testing strategy and identify potential areas for improvement. Here's what you can expect:

  • Test Stage List: Gain a comprehensive view of all the test stages executed against the selected app/branch in the latest build. This list provides a clear understanding of the various testing perspectives employed to assess your codebase.

  • Coverage per Test Stage: Analyze the coverage details for each individual test stage. This breakdown allows you to pinpoint which test stages are effectively exercising specific sections of your code and identify any potential gaps in coverage.

  • Impact Analysis: SeaLights offers functionality to understand factors that may have affected the coverage data. For example, you can see whether the build was executed with Test Impact Analysis (TIA) enabled, which can influence which tests were run. You can also directly access detailed tables showing the specific tests executed within each test stage, for even more granular insight into your testing process.

You can customize the reference build and set a specific build (e.g., the latest production release) as the baseline. This can be done through the SeaLights public API or manually within the UI. Once set, all subsequent builds within the branch will use this chosen build for comparison when calculating code changes coverage and identifying untested code changes.
