
Test Gaps Analysis Report


Unlock valuable insights into your code health with the SeaLights Test Gaps Analysis Report. Whether you're focusing on immediate release stability (Release Quality Improvement) or long-term code health (Over Time Quality Improvement), SeaLights will tailor your analysis for success, providing comprehensive code coverage breakdowns within your chosen timeframe.


Key Metrics

Release Quality Focus

  • Modified Coverage Gap: This key metric reveals immediate testing needs. Prioritize these gaps to ensure confident and stable releases.

  • Untested Modified Methods: A high number (>100) indicates significant testing gaps. Focus on relevant sections for efficient improvement.

Over Time Quality Focus

  • Overall Coverage Gap: Track your journey towards comprehensive code coverage.

  • Untested Methods: Gain a long-term perspective on areas needing attention for sustainable quality enhancement.


Layers of Data

Level 1: App Test Gap Summary

High-level overview of untested methods across your application within the chosen timeframe, including coverage gaps and total numbers:

  • Overall Coverage Gap: Understand the percentage of methods that lack testing across your entire application.

  • Modified Code Coverage Gap: Focus on newly changed code by seeing the percentage of untested methods specifically within recently modified areas. This helps prioritize testing for immediate release stability.

  • Total Untested Methods: Get the raw number of methods awaiting testing, along with the total number of methods in your application for context.
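To make the relationship between these summary figures concrete, here is a small worked sketch; the method counts below are invented for illustration, not SeaLights output:

```python
# Hypothetical method counts for one application in the chosen timeframe.
total_methods = 1200
untested_methods = 300            # methods with no coverage at all
modified_methods = 180            # methods changed within the timeframe
untested_modified_methods = 120   # changed methods still lacking coverage

# The report shows coverage gaps (the untested share), not coverage itself.
overall_coverage_gap = untested_methods / total_methods * 100
modified_coverage_gap = untested_modified_methods / modified_methods * 100

print(f"Overall Coverage Gap:  {overall_coverage_gap:.1f}%")   # 25.0%
print(f"Modified Coverage Gap: {modified_coverage_gap:.1f}%")  # 66.7%

# Per the guidance above, more than 100 untested modified methods
# signals a significant release-quality testing gap.
if untested_modified_methods > 100:
    print("Significant gap: prioritize tests for modified code")
```

Note that a low overall gap can coexist with a high modified gap, which is why the release-focused view prioritizes recently changed code.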

Actionable Insights at Your Fingertips:

  • Adjust the Date Range: Easily switch between different timeframes to track coverage trends or analyze specific sprints.

  • Download as CSV: Export the data to your preferred spreadsheet tool for further analysis and sharing.

  • Delete the Report: Keep your reports organized by removing outdated ones when needed.
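Once exported, the CSV can be processed with any tooling. A minimal sketch in Python; the column names (`file`, `untested_methods`, `total_methods`) are assumptions for illustration and should be matched to the actual export headers:

```python
import csv
from io import StringIO

# Stand-in for a downloaded export; a real report would be read with
# open("test_gaps.csv"). Column names here are illustrative only.
sample = """file,untested_methods,total_methods
src/billing.py,42,60
src/auth.py,5,80
src/utils.py,12,40
"""

rows = list(csv.DictReader(StringIO(sample)))
for row in rows:
    # Derive each file's coverage gap as a percentage of its methods.
    row["gap_pct"] = int(row["untested_methods"]) / int(row["total_methods"]) * 100

# Rank files by biggest coverage gap to prioritize testing effort.
for row in sorted(rows, key=lambda r: r["gap_pct"], reverse=True):
    print(f'{row["file"]}: {row["gap_pct"]:.0f}% untested')
```

The same ranking (biggest gap first) mirrors the sorting available in the Files view described below.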

Note:

  • Generating report data might take a few minutes, especially for the first time or after significant code changes.

  • The percentages represent coverage gaps, not actual coverage.

Level 2: Files

Individual files with their coverage details, contributor information, and direct access to code locations in your repository for further scrutiny.

Sort by file name, biggest gap, number of untested methods, and more. Prioritize based on importance (business logic, calculations, etc.).

Level 3: Methods

Drill down to specific methods within key files to discuss changes, ignore irrelevant ones (starting from the next build), and gain in-depth understanding.


Use Cases Overview

Test Gaps Analysis identifies untested areas, providing crucial insights for:

  • Prioritizing testing: Focus resources on newly changed code for optimal impact.

  • Identifying outdated tests: Eliminate redundant efforts and optimize testing efficiency (e.g., code that is still being tested but is no longer in use by the application or system).

  • Assessing quality risk: Gain a clear understanding of potential quality issues in recent changes.

SeaLights offers dynamic analysis that covers various test types, including Regression, Functional, API, Integration, Exploratory and manual tests. It analyzes all builds, all test stages, and all code changes within your chosen timeframe.

Essential Pre-requisites

  • Scope your code: Define relevant areas for analysis.

  • Clean irrelevant data: Focus on actionable insights by removing unnecessary information.

  • Schedule reports: Automate generation for consistent visibility.

  • Align with sprints: Tailor reports to your development cycles.

  • Set up SeaLights tools: Leverage the Code Viewer Chrome extension for deeper exploration.

Key Use Cases

  • Definition of Done for Sprint Quality (DoD): Ensure code added in the last sprint is adequately tested, by analyzing the specific sprint schedule and prioritizing test gaps in modified code.

  • Monthly Quality Report: Gain a broader understanding of your team's overall quality performance, by analyzing a full calendar month and focusing on modified code to track quality trends.

  • Test Development: Identify high-risk areas and prioritize test creation based on specific data, by choosing a month or quarter for more comprehensive analysis and looking at overall untested code to prioritize testing efforts.
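When scheduling reports for these use cases, the date range can be computed programmatically. A sketch assuming two-week sprints that end on Sundays (an assumption for illustration; align with your real sprint calendar):

```python
from datetime import date, timedelta

def last_sprint_range(today: date, sprint_days: int = 14) -> tuple[date, date]:
    """Date range of the most recently completed fixed-length sprint,
    assuming sprints end on Sundays (illustrative assumption)."""
    # Most recent Sunday on or before today.
    end = today - timedelta(days=(today.weekday() + 1) % 7)
    start = end - timedelta(days=sprint_days - 1)
    return start, end

def last_month_range(today: date) -> tuple[date, date]:
    """First and last day of the previous calendar month."""
    first_of_this_month = today.replace(day=1)
    end = first_of_this_month - timedelta(days=1)
    return end.replace(day=1), end

# For a report generated on 2024-03-20 (a Wednesday):
print(last_sprint_range(date(2024, 3, 20)))  # sprint Mar 4 - Mar 17
print(last_month_range(date(2024, 3, 20)))   # February 2024 (leap year)
```

The first helper suits the Definition of Done use case; the second suits the Monthly Quality Report.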


Ready to dive deeper? Explore the specific guides associated with your chosen quality improvement approach and unlock the power of SeaLights Test Gaps Analysis!

[Screenshots: TGA Report List · Test Gaps Analysis report · Method List per File]