Requirements: This feature is available in Preview exclusively as part of the Pro plan.
Up to 5 Custom Pre-Merge Checks are currently allowed during the Preview period. Pricing for this feature will be announced in a few weeks.
Agentic Pre-Merge Checks provide automated validation of pull requests against standard quality metrics and organization-specific requirements. Use Built-in Checks for common requirements and add your own Custom Checks with natural language instructions tailored to your team’s policies. These AI-powered checks can be configured by an Admin user at the organization or repository level, and CodeRabbit automatically validates every pull request against these requirements.

Why use Pre-Merge Checks?

  • Consistent standards: Enforce naming, documentation, and change-management hygiene across every PR.
  • Safer merges: Catch breaking API changes, security gaps, or policy violations before they land.
  • Team-specific guardrails: Encode architectural patterns, compliance rules, or business logic as custom checks.
  • Faster reviews: Surface blocking issues early in the PR Walkthrough so reviewers can act quickly.

Built-in Checks

CodeRabbit includes four standard checks that address common organizational needs:

Docstring Coverage

Verify PR docstring coverage against a configurable threshold (80% by default)

Pull Request Title

Validate PR titles accurately reflect changes made and follow your specified requirements

Pull Request Description

Verify descriptions follow the template specified in your Git platform

Issue Assessment

Verify PRs address linked issues without containing out-of-scope changes
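
All four Built-in Checks correspond to keys under reviews.pre_merge_checks in .coderabbit.yaml (shown in full under YAML Configuration below). As a sketch only, the defaults described here — warning mode everywhere and an 80% docstring threshold — would look like:

```yaml
# Sketch of the documented defaults; values are illustrative.
reviews:
  pre_merge_checks:
    docstrings:
      mode: "warning"
      threshold: 80   # default coverage threshold
    title:
      mode: "warning"
    description:
      mode: "warning"
    issue_assessment:
      mode: "warning"
```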

Custom Checks

Define your own validation logic using natural language instructions. Custom checks leverage AI to understand and validate complex requirements that go beyond standard code quality metrics.

How Custom Checks Work (scope & verification)

Custom checks run in a secure, read-only environment against your PR. They analyze the diff and context, verify details using tools, then return a clear result with reasoning.
1. Analyze

CodeRabbit interprets your Instructions against rich PR context:
  • Repository & diff: changed files, code snippets, and relevant git history
  • PR context: title/description, linked issues, and review discussion
  • Static analysis: pattern and code search (e.g., ast-grep, ripgrep)
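
As an illustration of the pattern-search step, here is the kind of scan such a check might run. The file and pattern are hypothetical, and plain grep is used so the sketch runs anywhere; the check itself would typically use ripgrep or ast-grep as noted above.

```shell
# Hypothetical changed file, standing in for a PR's diff contents
mkdir -p /tmp/pr-src
cat > /tmp/pr-src/handler.py <<'EOF'
def fetch_user(user_id):
    # TODO: add input validation
    return db.get(user_id)
EOF

# Scan the changed files for leftover TODO markers
# (ripgrep equivalent: rg -n 'TODO' /tmp/pr-src)
grep -rn 'TODO' /tmp/pr-src
```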
2. Verify

To substantiate findings, the check may:
  • Run sandboxed shell commands to inspect the repo (e.g., scan for patterns, validate configs)
  • Consult public documentation and best practices via web lookups
  • Call connected MCP tools to pull context from internal systems (docs, knowledge bases, design tools, trackers). See Integrate MCP servers for setup and behavior.
3. Decide

After analysis and verification, the check emits a result — Passed, Failed, or Inconclusive — with brief reasoning so PR authors know what to fix or why a decision couldn’t be reached.

Configuration

Enforcement Modes

Each check can be configured with one of three enforcement modes:
  • off: Check is disabled
  • warning: Display warnings but don’t block merges (default)
  • error: When paired with the Request Changes Workflow, block merges until resolved or manually overridden
    Configure: Request Changes Workflow

UI Configuration

Configure Pre-Merge Checks through the CodeRabbit dashboard:
1. Navigate to Settings

In CodeRabbit, go to Settings → Review → Pre-Merge Checks (org or repo scope)
2. Configure Built-in Checks

Update configuration and enforcement modes for Built-in Checks
3. Add Custom Checks

For each custom check, specify:
  • Name (≤ 50 chars, unique within the org)
  • Instructions (≤ 1000 chars; natural language)
  • Mode (off | warning | error)
4. Apply Changes

Click Apply Changes to save. The new checks are applied to subsequent reviews.

YAML Configuration

For version-controlled configuration, add checks to your .coderabbit.yaml file:
reviews:
  pre_merge_checks:
    docstrings:
      mode: "error"
      threshold: 85
    title:
      mode: "warning"
      requirements: "Start with an imperative verb; keep under 50 characters."
    description:
      mode: "error"
    issue_assessment:
      mode: "warning"
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: "Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the Breaking Change section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal)."

Results in the Walkthrough

Pre-Merge Check results appear alongside CodeRabbit’s analysis in the PR Walkthrough with clear visual organization for quick assessment.
Results are organized into two tables:

Failed checks

  • Prominently displayed to show errors and warnings requiring attention

Passed checks

  • Expandable to review checks that were validated successfully

Unblocking a PR

If Request Changes Workflow is enabled and a check in Error mode fails, the PR is blocked until the issue is resolved or you explicitly ignore it. To ignore, select the Ignore failed checks checkbox in the PR Walkthrough. The PR is then unblocked and the affected rows are tagged [IGNORED] for traceability.
The override applies only to that PR. Future PRs will still enforce checks as configured.

Manual commands

Trigger pre-merge checks manually using chat commands:

Run All Checks

@coderabbitai run pre-merge checks
This reruns all configured checks and updates results in the walkthrough.

Test Custom Check

@coderabbitai evaluate custom pre-merge check --name <check_name> --instructions <text> [--mode <error|warning>]
Tests custom check logic before saving to configuration.
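
For example, a trial run against the current PR might look like the following; the check name and instructions here are purely illustrative:

```
@coderabbitai evaluate custom pre-merge check --name "Changelog Entry" --instructions "Fail if a user-facing change does not add an entry to CHANGELOG.md." --mode warning
```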

Ignore failures

@coderabbitai ignore pre-merge checks
Manually ignore failed checks and unblock the PR.
See Manage code reviews for more commands and behaviors. 

Best practices

  • Write custom check instructions that are specific and actionable. Keep instructions concise and testable with one purpose per check.
  • Start new checks in warning mode to gather feedback, then move to error mode once the team is aligned on expectations.
  • Periodically review check results and inconclusive cases to refine instructions and identify gaps in validation logic.
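
As an illustration of the first point, a specific, testable instruction with a single purpose might read (paths and naming are hypothetical):

```
Pass/fail criteria: Every new migration file under db/migrations/ must have a
matching rollback file with the same base name and a .down.sql suffix. Ignore
files under db/seeds/.
```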