Add Code Labels


Managing a complex codebase can be overwhelming. Enter code labels, your secret weapon for streamlined navigation and efficient prioritization. Benefits at a glance:

  • Meaningful categorization: Assign labels like teams, functionalities, or user stories to organize your codebase logically.

  • Enhanced collaboration: Foster productive discussions and teamwork by making it easier for everyone (Dev, QA, Product) to find relevant code sections.

  • Targeted testing: Prioritize gaps effectively by filtering based on specific labels, focusing efforts on high-impact areas.

Flexibility is key

  • Label anything: Teams, business transactions, features – the possibilities are endless; tailor the labels to your needs.

  • Granularity control: Label entire applications, classes, folders, or even individual files (though file-level labeling is not recommended, as it can miss newly added files).


Implementing Code Labels

Watch the video tutorial to learn how to label your code, or follow these steps:

  1. Go to Settings > Data Scope > Code Labels.

  2. Click on "Create New Categories/Labels/Rules" and add a new label category.

  3. Under the category, click the "+" button to create a label.

  4. Define the rules to tag the relevant code using options like exact match, starts with, ends with, contains, or label the entire app (see the illustrative sketch below these steps). Click the "+" button to save a rule.
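
The matching options are configured in the UI, but the underlying idea is simple: each rule maps a path pattern to a label within a category. The sketch below is a minimal, hypothetical illustration of that idea in Python; the match types mirror the UI options, while the data model, field names, and example rules are assumptions made for illustration only, not the actual SeaLights implementation or API.

```python
# Illustrative only: a toy model of how label rules (exact match, starts with,
# ends with, contains, or whole-app) could map code paths to label categories.
# The rule names mirror the UI options; the data model and matching logic are
# hypothetical, not the actual SeaLights implementation.

from dataclasses import dataclass


@dataclass
class LabelRule:
    category: str      # e.g. "Team"
    label: str         # e.g. "Payments"
    match_type: str    # "exact" | "starts_with" | "ends_with" | "contains" | "entire_app"
    pattern: str = ""  # matching pattern; unused for "entire_app"

    def matches(self, code_path: str) -> bool:
        if self.match_type == "entire_app":
            return True
        if self.match_type == "exact":
            return code_path == self.pattern
        if self.match_type == "starts_with":
            return code_path.startswith(self.pattern)
        if self.match_type == "ends_with":
            return code_path.endswith(self.pattern)
        if self.match_type == "contains":
            return self.pattern in code_path
        return False


# Hypothetical rules: labeling folders rather than individual files means that
# newly added files under the same folder are picked up automatically.
rules = [
    LabelRule("Team", "Payments", "starts_with", "src/payments/"),
    LabelRule("Feature", "Checkout", "contains", "/checkout/"),
    LabelRule("App", "WebStore", "entire_app"),
]


def labels_for(code_path: str) -> dict[str, list[str]]:
    """Collect all labels whose rules match a given code path."""
    result: dict[str, list[str]] = {}
    for rule in rules:
        if rule.matches(code_path):
            result.setdefault(rule.category, []).append(rule.label)
    return result


if __name__ == "__main__":
    print(labels_for("src/payments/checkout/refunds.py"))
    # {'Team': ['Payments'], 'Feature': ['Checkout'], 'App': ['WebStore']}
```

Note how the hypothetical "src/payments/" rule targets a folder rather than specific files: any file added under that folder later is labeled automatically, which is exactly why file-level labeling is discouraged above.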

Why is it necessary to rerun tests after defining code labels?

After you define code labels, they are applied automatically starting with the next run of the app. From that point onward, the coverage and test gaps data captured by SeaLights is saved with the relevant code labels; SeaLights does not label previous builds retroactively. Therefore, to start getting label-related metrics, rerun your tests and generate an updated Test Gaps report that incorporates the new labels. This ensures the data accurately reflects the current codebase and its code label assignments.

Incorporating a routine of adding Code Labels whenever a build is ready for testing helps maintain consistency and ensures efficiency in your development and testing workflows.

Generating a Test Gaps report with labels allows you to focus on specific areas of the codebase, reducing the number of test gaps that require attention, enhancing your planning process, and streamlining your testing efforts.

Remember:

  • Future-proof your labeling: Keep it scalable by avoiding overly specific labels or file-level labeling.

  • Consistency is crucial: Establish clear labeling conventions for everyone to follow.