US2_createReport.sh

This guide explains how to configure the US2_createReport.sh script, which generates coverage reports for epics based on Jira and SeaLights data.


Required Variables

  • JIRA_TOKEN: Your Jira access token (store it securely, e.g., in a CI secret rather than in the script itself).

  • JIRA_GET_LIST_JQL: JQL query to retrieve epics/features/stories for reports (default: issueType = Epic and created > startOfMonth(-2)).

    • Update this JQL based on your reporting needs (e.g., user stories instead of epics).

  • JIRA_BASE_URL: Your Jira base URL (e.g., https://yourdomain.atlassian.net).

  • SCM_BASE_URL: Your SCM/Git base URL (e.g., https://github.com).

  • TEST_STAGES: Array of individual test stage definitions, using the stage names as defined in SeaLights.

    • Define each stage with:

      • name: Name as displayed in the TGA report (underscores replace spaces).

      • reportJsonKey: Attribute name in the JSON file for this stage.

      • reportTitle: Column title in the generated HTML report.

  • GROUPED_TEST_STAGES: Optional array to define grouped test stages:

    • Define each group with:

      • name: Desired name in the temporary JSON files.

      • reportJsonKey: Attribute name in the exported JSON file.

      • reportTitle: Column title in the HTML report.

      • stages: Array of individual test stage names from TEST_STAGES to combine.

  • severity: Controls how a coverage value of "No" is highlighted in the HTML report:

    • 1: colors the result orange.

    • 2: colors the result red.
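
Putting these together, the following is a minimal sketch of how the variables might be set at the top of the script. The stage names, JSON keys, and the semicolon-separated array encoding are all hypothetical; check US2_createReport.sh for the exact format it expects.

    # Jira access -- keep the token out of the script itself
    JIRA_TOKEN="${JIRA_TOKEN:?set via environment or secret store}"
    JIRA_BASE_URL="https://yourdomain.atlassian.net"
    SCM_BASE_URL="https://github.com"

    # JQL controlling which tickets get a report
    JIRA_GET_LIST_JQL='issueType = Epic and created > startOfMonth(-2)'

    # Individual test stages, one entry per stage:
    # "name;reportJsonKey;reportTitle" (illustrative encoding)
    TEST_STAGES=(
      "Unit_Tests;unitTests;Unit Tests"
      "Integration_Tests;integrationTests;Integration Tests"
    )

    # Optional grouped stages, combining entries from TEST_STAGES:
    # "name;reportJsonKey;reportTitle;stage1,stage2"
    GROUPED_TEST_STAGES=(
      "All_Automated;allAutomated;All Automated Tests;Unit_Tests,Integration_Tests"
    )

    # 1 = orange, 2 = red for "No" coverage results in the HTML report
    severity=1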

Output Reports

  • ${TICKET_TYPE}_${TICKET}.html: HTML report for each ticket.

  • ReportInfo_${TICKET}.json: JSON file with report data for further analysis.
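
For example, with TICKET_TYPE set to Epic and a (hypothetical) ticket key PROJ-123, a run would produce:

    Epic_PROJ-123.html
    ReportInfo_PROJ-123.json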


Customization and Tips

  • Adapt the JQL query to target specific Jira issue types (see the example after this list).

  • Modify test stage definitions and grouping according to your testing processes.

  • Set severity according to how prominently you want missing coverage flagged (orange vs. red).

  • Adjust default variables and customize report appearance as needed.
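
For instance, to report on recently created user stories instead of epics, the JQL could be adapted along these lines (the issue type and time window are illustrative):

    JIRA_GET_LIST_JQL='issueType = Story and created > startOfMonth(-1)'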
