All metrics are calculated only for merged pull requests within the selected timeframe. Metrics can be filtered by repository, username, or team.

Summary

  • Merged Pull Requests: Total pull requests successfully merged.
  • Active Users: Distinct users whose pull requests were reviewed by CodeRabbit.
  • Median Time to Last Commit: Median time from review readiness to last commit.
  • Reviewer Time Saved: AI-estimated human reviewer time saved during pull request reviews.
  • CodeRabbit Review Comments: Review comments posted by CodeRabbit on merged PRs.
  • Acceptance Rate: Percentage of CodeRabbit comments accepted by developers.
  • Avg Review Comments Posted per PR: Average review comments per pull request from CodeRabbit and human reviewers.
  • Review Comments by Severity: Distribution of CodeRabbit review comments grouped by severity.
  • Severity Distribution: Radar view of CodeRabbit comments by severity, showing posted vs. accepted.
  • Review Comments by Category: Distribution of CodeRabbit review comments grouped by category.
  • Category Distribution: Radar view of CodeRabbit comments by category, showing posted vs. accepted.
  • Avg Review Iterations per PR: Average number of review iterations per pull request.
  • Tool Findings: Automated tool findings surfaced during reviews.
  • Pipeline Failures: CI/CD pipeline failures detected during reviews.
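For instance, Acceptance Rate is simply accepted comments divided by posted comments on merged PRs in the selected timeframe. A minimal sketch of that calculation (the counts and names are illustrative, not part of any CodeRabbit API):

```python
def acceptance_rate(posted: int, accepted: int) -> float:
    """Percentage of CodeRabbit comments accepted by developers.

    `posted` and `accepted` are illustrative counts for the selected
    timeframe; they are not fields from an actual CodeRabbit API.
    """
    if posted == 0:
        return 0.0
    return 100.0 * accepted / posted


# e.g. 37 accepted out of 52 posted comments -> ~71.2%
print(f"{acceptance_rate(52, 37):.1f}%")
```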

Quality Metrics

  • Acceptance Rate by Severity: Percentage of CodeRabbit comments accepted, grouped by severity.
  • Review Comment Count by Severity: Number of CodeRabbit comments posted and accepted, grouped by severity.
  • Acceptance Rate by Category: Percentage of CodeRabbit comments accepted, grouped by category.
  • Review Comment Count by Category: Number of CodeRabbit comments posted and accepted, grouped by category.
  • Tool Findings by Tool Name: Automated tool findings grouped by individual tool.
  • Tool Findings by Severity: Automated tool findings grouped by severity.
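The by-severity and by-category views apply the same posted-vs-accepted calculation after grouping each comment. A hedged sketch, assuming comments are available as (severity, accepted) pairs and using made-up severity labels:

```python
from collections import defaultdict

# Hypothetical data shape: one (severity, accepted) pair per CodeRabbit comment.
comments = [
    ("critical", True), ("critical", False),
    ("major", True), ("minor", True), ("minor", False), ("minor", False),
]

posted = defaultdict(int)
accepted = defaultdict(int)
for severity, was_accepted in comments:
    posted[severity] += 1
    if was_accepted:
        accepted[severity] += 1

for severity in posted:
    rate = 100.0 * accepted[severity] / posted[severity]
    print(f"{severity}: {accepted[severity]}/{posted[severity]} accepted ({rate:.0f}%)")
```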

Comment categories

Categories describe the type of issue identified:
  • Security & Privacy: Vulnerabilities that enable exploitation or expose sensitive data (e.g., auth bypass, injection attacks, exposed secrets)
  • Data Integrity & Integration: Problems that corrupt data or break API/schema contracts (e.g., transaction issues, schema mismatches, broken migrations)
  • Performance & Scalability: Inefficiencies impacting speed or resource usage (e.g., N+1 queries, missing caching, unoptimized loops)
  • Stability & Availability: Issues causing crashes, hangs, or resource leaks at runtime (e.g., null pointer errors, memory leaks, deadlocks)
  • Functional Correctness: Logic errors producing wrong results (e.g., off-by-one errors, incorrect conditions, algorithm mistakes)
  • Maintainability & Code Quality: Code hygiene affecting readability and future changes (e.g., unclear naming, duplication, poor structure)

Time Metrics

  • Time to Merge: Duration from PR review-ready to merge. Shown as average, median, P75, and P90.
  • Weekly Review-Ready → Merge Time: Weekly trend of time from review-ready to merge.
  • Time to Last Commit: Duration from PR review-ready to final commit (or merge if no later commit). Shown as average, median, P75, and P90.
  • Weekly Review-Ready → Last Commit Time: Weekly trend of time from review-ready to final commit.
  • Time to First Human Review: Duration from PR review-ready to first human review comment. Shown as average, median, P75, and P90.
  • Weekly Review-Ready → First Human Review Time: Weekly trend of time from review-ready to first human review.

Understanding time metrics

  • Average: Overall average duration across all PRs
  • Median: Typical duration (less affected by outliers)
  • P75: 75th percentile, helps identify PRs taking longer than usual
  • P90: 90th percentile, highlights potential bottlenecks
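These four statistics can be reproduced with Python's statistics module; a minimal sketch over a made-up list of review-ready → merge durations:

```python
from statistics import mean, median, quantiles

# Hypothetical review-ready -> merge durations, in hours.
durations = [2.5, 3.0, 4.0, 5.5, 6.0, 8.0, 9.5, 12.0, 20.0, 48.0]

# quantiles(n=20) returns 19 cut points; indices 14 and 17 approximate
# the 75th and 90th percentiles respectively.
cuts = quantiles(durations, n=20)

print(f"Average: {mean(durations):.1f} h")   # pulled up by the 48 h outlier
print(f"Median:  {median(durations):.1f} h") # typical PR
print(f"P75:     {cuts[14]:.1f} h")          # slower-than-usual PRs
print(f"P90:     {cuts[17]:.1f} h")          # potential bottlenecks
```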

  • Weekly Pull Requests: Created & Merged: Weekly counts of pull requests created and merged.
  • Weekly Avg Comments per PR: CodeRabbit & Human: Weekly average review comments posted by CodeRabbit and human reviewers.
  • Weekly Active Users: Weekly count of distinct users whose PRs were reviewed by CodeRabbit.
  • Weekly Pipeline Failures: Weekly trend of CI/CD pipeline failures detected during reviews.
  • Most Active Pull Request Authors: Top 10 contributors ranked by number of merged PRs.
  • Most Active Pull Request Reviewers: Top 10 contributors ranked by number of PRs reviewed.
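Weekly trend charts like these amount to bucketing events by ISO week. A minimal sketch, assuming each merged PR exposes a merge timestamp (the sample dates are made up):

```python
from collections import Counter
from datetime import datetime

# Hypothetical merge timestamps for merged PRs in the selected timeframe.
merged_at = [
    datetime(2024, 6, 3), datetime(2024, 6, 5), datetime(2024, 6, 12),
    datetime(2024, 6, 13), datetime(2024, 6, 14), datetime(2024, 6, 20),
]

# Bucket each PR into its ISO year/week to get a weekly merged-PR count.
weekly_merged = Counter(
    "{0}-W{1:02d}".format(*d.isocalendar()[:2]) for d in merged_at
)

for week, count in sorted(weekly_merged.items()):
    print(week, count)  # e.g. 2024-W23 2
```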

Data Metrics

  • Active User Details: Per-user summary of pull request activity and reviews.
  • Pull Request Details: Detailed review metrics for each merged pull request.
  • Tool Finding Details: Automated tool findings surfaced during review for merged pull requests.