The CodeRabbit Dashboard provides visibility into your team’s review speed, code quality, collaboration patterns, and the impact of AI-assisted reviews. Use the dashboard to track performance, identify bottlenecks, and measure the ROI of CodeRabbit across your organization. The dashboard is organized into two sections, Git platform reviews and IDE/CLI reviews, both accessible from the sidebar.

Filters

All dashboard metrics can be filtered by:
  • Timeframe: Select a date range for analysis
  • Org Name: Filter by a specific organization (self-hosted only, see Aggregate dashboard)
  • Repository: Focus on specific repositories
  • Username: View individual contributor metrics
  • Team: Filter by a specific team

Aggregate dashboard for self-hosted

If you are using a self-hosted CodeRabbit instance, the dashboard displays aggregate metrics across all organizations within your instance. Use the Org Name filter to drill down into a specific organization’s data, or select All to view combined metrics across every organization. This gives instance administrators a single view of review activity, code quality trends, and team performance across the entire deployment.

Git platform reviews

Metrics for pull request reviews on GitHub, GitLab, Azure DevOps, and Bitbucket. All metrics are calculated only for pull requests that were reviewed by CodeRabbit and merged within the selected timeframe. For detailed metric definitions, see Git platform review metrics.

Summary

Measure the ROI of AI-assisted reviews. The Summary page displays repository activity, pull request throughput, chat usage, time saved through AI-assisted reviews, comment volume and acceptance rates, and issues surfaced by automated tools. Key questions this page answers:
  • How much productivity is the team gaining from AI-assisted reviews?
  • Is AI-generated feedback trustworthy and relevant?
  • What types of review comments appear most frequently?

Quality Metrics

Understand what kinds of issues CodeRabbit catches and how often developers agree. The Quality Metrics page shows comment acceptance rates by severity and category, with drill-down into individual review comments to assess whether feedback is relevant and actionable. Key questions this page answers:
  • Are we improving code quality across critical domains?
  • Is CodeRabbit catching meaningful issues?
  • Do developers trust and act on AI suggestions?
  • Are certain teams or repositories seeing more severe problems?

Time Metrics

Identify bottlenecks in your review cycle. The Time Metrics page shows how long PRs wait before, during, and after reviews — from first human review through merge — so you can pinpoint where work stalls. Key questions this page answers:
  • Is our review process fast enough to support development velocity?
  • Where do PRs wait the longest—before, during, or after reviews?
  • Are certain repositories or teams experiencing delays unrelated to reviews?

Knowledge Base

See how your team’s accumulated knowledge and integrated tools improve reviews. The Knowledge Base page tracks how often Learnings from chat conversations and MCP server integrations are applied to pull requests, which context sources provide the most coverage, and which findings automated tools are surfacing. Key questions this page answers:
  • How many Learnings have been created and how often are they applied?
  • Which MCP servers are providing coverage across pull requests?
  • Is the team building up institutional knowledge over time?
  • Which tools are surfacing the most findings, and at what severity?

Organization Trends

Spot changes in team health over time. The Organization Trends page visualizes weekly PR throughput, reviewer participation, chat adoption, and CI/CD failures to help you catch emerging patterns early. Key questions this page answers:
  • Are we merging work consistently, or is a backlog forming?
  • Is review participation evenly distributed across the team?
  • Are weekly activity levels trending in a healthy direction?

Pre-merge Checks

Monitor how your quality gates are performing. The Pre-merge Checks page shows which built-in and custom checks are configured, how often they run, and their pass/fail outcomes so you can tune checks or catch gaps. Key questions this page answers:
  • How many Pre-merge Checks are configured and how often do they run?
  • What is the pass/fail distribution across checks?
  • Are custom checks catching issues before code is merged?

Reporting

Track report delivery across your organization. The Reporting page shows how many scheduled and on-demand reports are configured, how consistently they’re delivered, and which channels your team uses. Key questions this page answers:
  • How many reports have been delivered and through which channels?
  • How many reports are configured across the organization?

Data Metrics

Drill down to individual contributors and pull requests. The Data Metrics page is designed for auditability, coaching insights, and debugging review process issues. Key questions this page answers:
  • Which developers need more support?
  • Which PRs took unusually long to finalize, and why?
  • Are certain contributors struggling with specific issue types?
  • Which tools surface the most issues?

Data Export

Download per-PR review metrics as CSV for offline analysis, custom reporting, or integration with other tools. For export instructions and field definitions, see Data Export.
For programmatic access, use the Metrics Data API (Enterprise plan feature).
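As a minimal illustration, the exported CSV can be analyzed with any standard tooling. The sketch below assumes a hypothetical filename and column names ("repository", "time_to_merge_hours"); see Data Export for the actual field definitions.

```python
# Minimal sketch: summarize exported per-PR review metrics offline.
# The filename and column names used here are hypothetical placeholders;
# consult the Data Export field definitions for the real schema.
import csv
from collections import defaultdict

merge_times = defaultdict(list)
with open("coderabbit_pr_metrics.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Group a hypothetical time-to-merge column by repository.
        merge_times[row["repository"]].append(float(row["time_to_merge_hours"]))

for repo, hours in sorted(merge_times.items()):
    avg = sum(hours) / len(hours)
    print(f"{repo}: {len(hours)} PRs, avg time to merge {avg:.1f}h")
```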

IDE/CLI reviews

Metrics for code reviews performed through CodeRabbit IDE extensions (such as Cursor and VS Code) and the CLI. For detailed metric definitions, see IDE/CLI review metrics.

IDE/CLI Summary

Understand how your team is adopting CodeRabbit reviews outside of pull requests. The Summary page shows which IDE extensions and CLI tools are being used, how many reviews are performed, and whether Learnings carry over to local reviews. Key questions this page answers:
  • How actively are developers using IDE/CLI reviews?
  • Which IDE extensions or CLI tools are most popular?
  • Are Learnings being applied to IDE/CLI reviews?

IDE/CLI Organization Trends

Track week-over-week adoption of IDE and CLI reviews across your team. The Organization Trends page shows weekly usage trends, compares activity across extensions, and surfaces automated tool findings by name and severity. Key questions this page answers:
  • Is IDE/CLI review adoption growing week over week?
  • Are reviews distributed across team members?
  • Which tools are surfacing the most findings?

IDE/CLI Data Metrics

Drill down into individual user activity for IDE and CLI reviews to identify who’s actively using local reviews and where to focus adoption efforts. Key questions this page answers:
  • Which team members are actively using IDE/CLI reviews?
  • How does usage compare across different IDE extensions and the CLI?