Quality Gates

The SeaLights Quality Gates feature allows you to define thresholds for build quality and ensure that only builds meeting your standards are promoted. Quality Gates act as checkpoints, evaluating builds against specific criteria and providing a clear indication of whether a build is ready for release or requires further attention.

Quality Gate Parameters

Quality Gates evaluate builds based on three key parameters; a sketch of how they combine follows the list:

  • Code Changes Coverage (Mandatory): The percentage of code changes covered by tests. This metric helps ensure that recent modifications are adequately tested. You can specify minimum coverage for all test stages or for specific ones.

  • Overall Code Coverage (Optional): The percentage of the entire codebase covered by tests. This metric provides a holistic view of test coverage across the application/component. You can specify minimum coverage for all test stages or for specific ones.

  • Failed Tests (Optional): This parameter offers two options for defining acceptable failure thresholds:

    • No Failures (Strict): The Quality Gate fails if any tests fail. This is a stringent requirement suitable for projects where zero failures are the target. You can apply this to all test stages or specific ones.

    • Percentage of Failures (Flexible): The Quality Gate passes if the percentage of failed tests is below a threshold you define. This option provides more flexibility and can be tailored to projects with varying risk tolerance. You can apply it to all test stages or specific ones.
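
To make these parameters concrete, here is a minimal sketch of how the three thresholds could be represented and checked against a build's measured values. The class and function names are illustrative assumptions, not SeaLights' actual settings schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailedTestsRule:
    # "No Failures" (strict) when max_percent is None; otherwise "Percentage of Failures" (flexible).
    max_percent: Optional[float] = None

@dataclass
class QualityGate:
    min_code_changes_coverage: float                  # Code Changes Coverage (mandatory), in percent
    min_overall_coverage: Optional[float] = None      # Overall Code Coverage (optional), in percent
    failed_tests: Optional[FailedTestsRule] = None    # Failed Tests (optional)

def meets_quality_gate(gate: QualityGate,
                       code_changes_coverage: float,
                       overall_coverage: float,
                       failed_tests_percent: float) -> bool:
    """Return True only if every configured threshold is met."""
    if code_changes_coverage < gate.min_code_changes_coverage:
        return False
    if gate.min_overall_coverage is not None and overall_coverage < gate.min_overall_coverage:
        return False
    if gate.failed_tests is not None:
        if gate.failed_tests.max_percent is None:
            if failed_tests_percent > 0:               # strict: any failure fails the gate
                return False
        elif failed_tests_percent >= gate.failed_tests.max_percent:
            return False                               # flexible: must stay below the threshold
    return True

# Example: 80% coverage on code changes, 60% overall, at most 5% failed tests.
gate = QualityGate(min_code_changes_coverage=80.0,
                   min_overall_coverage=60.0,
                   failed_tests=FailedTestsRule(max_percent=5.0))
print(meets_quality_gate(gate, 85.0, 62.0, 2.5))  # True
```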

Configuring Quality Gates

Quality Gates can be configured at different levels, resolved in the order sketched after this list:

  • SeaLights Default: SeaLights provides default settings for Quality Gates. These are automatically applied as the default for new customer accounts.

  • Customer Default: Customers can edit the SeaLights default settings to create their own customer default. This customer default can then be applied to all applications.

  • Application-Specific: It's possible to define different Quality Gates for individual applications. These settings override the customer default. Applications can revert to the customer default or the SeaLights default at any time.
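
As a minimal sketch of the override order described above, an application's effective settings can be resolved as: application-specific if defined, otherwise the customer default, otherwise the SeaLights default. The function and parameter names are illustrative only.

```python
from typing import Optional

def resolve_quality_gate(app_specific: Optional[dict],
                         customer_default: Optional[dict],
                         sealights_default: dict) -> dict:
    """Return the Quality Gate settings that apply to a given application.

    Application-specific settings override the customer default, which in
    turn overrides the SeaLights default; reverting an application simply
    means clearing the higher-priority settings.
    """
    if app_specific is not None:
        return app_specific
    if customer_default is not None:
        return customer_default
    return sealights_default
```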

Management

  • Default Settings (SeaLights & Customer)

    • Managed from Settings / Quality Gates page

  • Application Quality Gate Settings

    • Managed from the Coverage Dashboard

    • Managed from the Coverage Report

    • Managed from the Settings / Quality Gates page

    • Managed using the SeaLights Public API (see the sketch below)
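
As a sketch of API-based management, the snippet below updates an application's Quality Gate settings over HTTP using an API token (see Token Access & Management). The endpoint path and payload field names are illustrative placeholders only; refer to the SeaLights Public API documentation for the actual routes and schema.

```python
import requests

SEALIGHTS_DOMAIN = "https://your-tenant.sealights.co"  # placeholder tenant URL
API_TOKEN = "..."                                       # token from Settings / Token Access & Management

# Hypothetical endpoint and payload, shown only to illustrate the flow;
# the real Public API routes and field names may differ.
response = requests.put(
    f"{SEALIGHTS_DOMAIN}/sl-api/v1/quality-gates/my-application",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"codeChangesCoverage": 80, "overallCoverage": 60, "maxFailedTestsPercent": 5},
    timeout=30,
)
response.raise_for_status()
```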

Failed Test Counting Methodology

SeaLights offers two methods for counting failed tests:

  • All Results Method (Current Behavior): Counts every test run, so a failed run still counts as a failure even if a subsequent run of the same test passes. This can inflate the perceived number of failures.

  • Latest Result Method (New): Counts only the latest test run result for each unique test. This provides a more accurate representation of actual failures.

Example

Test    Run 1     Run 2
A       Passed    Passed
B       Failed    Failed
C       Failed    Passed
D       Failed    Failed

  • All Results Method: 5 failures out of 8 test runs.

  • Latest Result Method: 2 failures out of 4 tests.

Note: Skipped tests are not counted in either method.
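
A minimal sketch of the two counting methods, applied to the example above. The data structure and the way skipped runs are excluded are assumptions for illustration; SeaLights performs this calculation internally.

```python
# Each test maps to its run results in execution order; "Skipped" runs are ignored.
runs = {
    "A": ["Passed", "Passed"],
    "B": ["Failed", "Failed"],
    "C": ["Failed", "Passed"],
    "D": ["Failed", "Failed"],
}

def all_results_method(runs):
    """Count every failed run across all (non-skipped) executions."""
    counted = [r for results in runs.values() for r in results if r != "Skipped"]
    return sum(1 for r in counted if r == "Failed"), len(counted)

def latest_result_method(runs):
    """Count a test as failed only if its most recent (non-skipped) run failed."""
    latest = [next(r for r in reversed(results) if r != "Skipped")
              for results in runs.values() if any(r != "Skipped" for r in results)]
    return sum(1 for r in latest if r == "Failed"), len(latest)

print(all_results_method(runs))    # (5, 8) -> 5 failures out of 8 test runs
print(latest_result_method(runs))  # (2, 4) -> 2 failures out of 4 tests
```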

Configuration

This setting is currently managed manually with the help of your SeaLights customer success representative. Changing it affects all applications in the account and will likely impact the calculated percentage of failed tests, and potentially the Quality Gate status, of existing builds.

Understanding Quality Gate Status

Each build is evaluated against the defined Quality Gate criteria, resulting in one of the following statuses (a decision sketch follows the list):

  • Passed: All Quality Gate conditions were met. The build successfully satisfies the predefined quality thresholds.

  • Failed: One or more quality thresholds were not met; the build does not satisfy the minimum requirements defined in the Quality Gate.

  • Missing Data: The Quality Gate could not be fully evaluated due to one or more of the following:

    • No Reported Tests

    • Missing Test Stage Data

    • No Detected Code Changes

  • Scan Issue: The build scan encountered errors during the reporting process. This indicates a problem with the scan itself, rather than the code quality. Investigate the scan logs to identify the root cause of the issue.
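
The statuses can be thought of as the outcome of a short decision sequence, sketched below. The function and its inputs are illustrative only, not part of the product API.

```python
def quality_gate_status(scan_errors: bool,
                        has_reported_tests: bool,
                        has_all_stage_data: bool,
                        has_code_changes: bool,
                        all_thresholds_met: bool) -> str:
    """Map evaluation inputs to one of the four Quality Gate statuses."""
    if scan_errors:
        return "Scan Issue"    # problem with the scan itself, not with code quality
    if not (has_reported_tests and has_all_stage_data and has_code_changes):
        return "Missing Data"  # the gate could not be fully evaluated
    return "Passed" if all_thresholds_met else "Failed"
```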

Quality Gate Status Display

The Quality Gate status is displayed on the Coverage Dashboard and Coverage Report. Clicking on the status opens a pop-up window summarizing the build status and Quality Gate definitions, including:

  • Overall Quality Gate status.

  • Defined thresholds for each parameter.

  • Actual values measured for each parameter.

  • Clear indication of which criteria were met and which were not.

Identifying Code Changes

By default, each build is compared to its previous build to identify code modifications. However, you can also define a reference build, which can be any prior build; in that case, code modifications between the current build and the reference build are considered. For example, you can select the production release build as your reference build to ensure all code changes since the last release are accounted for.

Coverage Calculation Specificity

While code changes are identified by comparing the current build to the reference build, the coverage calculation is specific to the tests executed within the current build being measured. This means that SeaLights only considers the test coverage achieved by tests running directly against the build under evaluation, and does not include coverage from tests executed against previous builds, even builds created after the reference build.
