About CodeRabbit learnings
As your team works with CodeRabbit, it learns your team's code-review preferences based on chat interactions, and adds these preferences to an internal database that it associates with your Git platform organization. We call these internal records learnings. Learnings can include the following:
- Special instructions about reviewing particular files.
- Guidance for reviewing all of the files of one repository.
- Code-review preferences that CodeRabbit must apply across all of your organization's repositories.
Every time CodeRabbit prepares to add a comment to a pull request or issue, it loads the learnings that apply based on your configured scope. Depending on your settings, CodeRabbit applies either the repository's learnings only, or all of your organization's learnings. CodeRabbit uses these learnings as additional context or instructions for its comment.
Use learnings for incremental preferences that emerge during code reviews: preferences that don't require a formal configuration change, but that should apply to future reviews. Use review instructions or coding guidelines for formal, documented standards.
You can review and edit your organization's CodeRabbit learnings at any time, as detailed later on this page.
Add learnings
Add a learning via comment
To add learnings to the database that CodeRabbit keeps about your organization's preferences, communicate your preferences using natural language in a comment attached to any pull request or issue. If CodeRabbit decides to create one or more learnings based on a comment, then it makes this explicit in its response by adding a collapsible section titled Learnings Added to its own response comment.

Imagine a scenario where CodeRabbit has reviewed a GitHub-based pull request from your team that adds a feature to an authentication middleware library. CodeRabbit has flagged a certain file as needing improvement, attaching this comment to the relevant lines in its review:

> Consider using nested try-catch blocks here to handle different error scenarios more comprehensively.

If this critique disagrees with your team's coding practices specific to try-catch blocks, then you can reply to CodeRabbit with a comment of your own, addressed to the @coderabbitai service account on GitHub:

> @coderabbitai Actually, we've found that nested try-catch blocks make our auth middleware harder to debug in production. We prefer early returns with specific error codes - it's cleaner and our monitoring tools can track them better.

CodeRabbit considers your reply, and decides to make a learning out of it. It acknowledges this in another reply:

> You're absolutely right about the debugging benefits. Early returns with specific error codes do provide better clarity and monitoring capabilities. I'll remember this preference for your authentication middleware. Thank you for the guidance!

CodeRabbit also adds a Learnings Added section to this comment, making explicit the fact that this chat has modified its future behavior with your team. CodeRabbit then creates a new learnings record for itself. Along with metadata such as the pull request number, filename, and GitHub user associated with the learning, CodeRabbit adds this self-instructive text to the new record:

> In authentication middleware, prefer returning early with specific error codes rather than nested try-catch blocks. This improves readability and makes error tracking easier in production environments.
Add learnings from files
You can import content from any file in your repository as learnings. This is useful for:

- Converting existing team documentation into CodeRabbit learnings
- Bulk-adding multiple preferences at once
- Importing learnings from an exported CSV file

To import a file as learnings, mention the file in a pull request comment, as shown in the example below.
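For example, you can leave a natural-language comment like the following on any open pull request. The file path here is illustrative; substitute any file in your repository:

> @coderabbitai Read docs/team-standards.md and save its contents as learnings.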
If you have AI agent configuration files like .cursorrules, CLAUDE.md, or .github/copilot-instructions.md, use code guidelines instead. CodeRabbit automatically detects and applies these files without manual import.

Best practices for new learnings
When communicating with CodeRabbit during an active code review, follow these practices to create effective learnings.

Consider if it's a pattern or a one-off

Determine whether a correction represents a team-wide preference that should apply to all future reviews, or a situation specific to this pull request. Not every correction should become a learning. For one-time exceptions, such as unusual temporary code patterns during a migration, resolve the comment without creating a learning. For systemic preferences that should persist across reviews, provide feedback that CodeRabbit can store as a learning.

Explain the why, not just the what
Don't just tell CodeRabbit what to do; explain the reasoning. The "why" helps CodeRabbit apply the learning correctly in similar-but-not-identical situations, as in the example below.
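For example, a reply like this illustrative one (the project specifics are hypothetical) gives CodeRabbit both the rule and its rationale:

> @coderabbitai Please don't suggest adding retry logic to our payment handlers. We deliberately fail fast there, because duplicate retries can double-charge customers; retries are handled one layer up by the queue.

A reply that only says "don't suggest retry logic" would produce a narrower, more brittle learning.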
Reply to specific comments for maximum context

Prefer to reply directly to the comment on the specific line of code, rather than leaving general comments on the PR. This gives CodeRabbit more context when considering feedback, allowing it to create more specific learnings. A generic comment on the PR might produce a vague learning; replying to a specific line produces a learning tied to that file pattern and context.

View learnings
To view the learnings that CodeRabbit has associated with your organization, follow these steps:

- Visit the CodeRabbit web interface.
- In the sidebar, click Learnings.
Filter displayed learnings
Over time, the learnings that CodeRabbit gathers for your organization can become quite numerous. This can make manually browsing the full list difficult. The CodeRabbit web interface has search and filtering tools to help you find specific learnings, based on the topic of the learning text, or on other metadata.

To filter the displayed learnings by topic or concept, enter that topic or concept into the Similarity search field, and set Top K to the number of results you want returned. Because this is a vector-based similarity search, the returned learnings don't necessarily contain the exact text of your search terms. For example, to see the top ten learnings that have to do with error reporting, enter error reporting into Similarity search and set Top K to 10. This finds learnings about exceptions, try-catch blocks, error codes, and other semantically related topics.
To filter the displayed learnings by repository, user, or file path, click + Filters, and select additional criteria.
Edit or delete learnings
You can edit and delete learnings in two ways:

Via the web interface: If your account has the Admin CodeRabbit role with your organization, then you can freely edit the text of any stored learning, or delete it outright, through the CodeRabbit dashboard. To edit or delete a learning via the web interface:

- Click the Action menu on the learning record, which resembles three dots.
- Select Edit or Delete.

Via chat: You can also ask CodeRabbit to edit or delete a learning by leaving a pull request comment addressed to @coderabbitai and describing the change you want, as in the example below.
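For example (an illustrative request; the learning it references is hypothetical):

> @coderabbitai Remove the learning that says we prefer early returns in authentication middleware. We've revised that convention.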
Export and transfer learnings
You can export your organization's learnings and import them into another CodeRabbit account. This is useful when migrating accounts or consolidating organizations.

Export learnings

To export your learnings:

- Visit the CodeRabbit web interface.
- Navigate to Learnings in the sidebar.
- Click the export option to download your learnings as a CSV file.
Import learnings to a new account
To import learnings into a new CodeRabbit account:

- Ensure the new account is connected to your repository and has an active CodeRabbit subscription.
- Add the exported learnings CSV file to a branch in your repository.
- Create a pull request from that branch.
- Use CodeRabbit chat to request the import, as shown in the example below.
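A natural-language comment on that pull request like the following works; the CSV filename here is hypothetical:

> @coderabbitai Read learnings-export.csv on this branch and store each row as a learning.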
Configure learnings storage and application
CodeRabbit has several configuration options that modify the storage and application of learnings.

Opt out of learnings storage

CodeRabbit enables learnings by default. To disable learnings, modify one of the following configuration options:

- To disable all CodeRabbit knowledge base features for your organization or repository, which includes learnings, enable the Opt out setting.
- To disable all CodeRabbit features that require long-term data retention about your organization's use of CodeRabbit, including learnings, disable the Data retention setting.
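If you manage configuration in .coderabbit.yaml, the knowledge base opt-out is a minimal sketch like this (assuming the knowledge_base.opt_out key; verify against your schema version):

```yaml
# .coderabbit.yaml
knowledge_base:
  # Opt out of all knowledge base features, including learnings.
  opt_out: true
```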
Specify the scope of learnings
The Learnings configuration setting lets you specify the scope that CodeRabbit applies to all of the learnings it has collected about your organization. You can set this option to one of the following values:

- auto (default): When reviewing a public repository, CodeRabbit applies only the learnings specific to that repository. When reviewing a private repository, CodeRabbit applies all of your organization's learnings.
- global: CodeRabbit applies all of your organization's learnings to all code reviews.
- local: CodeRabbit applies only the learnings associated with each code review's respective repository.
When to use each scope
The default auto scope can be suboptimal for organizations with diverse repositories. Consider these scenarios:

Use 'local' for diverse tech stacks
If your organization has repositories with different conventions, such as a Python backend and a React frontend, use local scope to prevent cross-contamination of learnings. Without local scope, learnings about Python exception handling might incorrectly influence React component reviews, or vice versa.

```yaml
# .coderabbit.yaml
knowledge_base:
  learnings:
    scope: "local"
```
Use 'global' for consistent org-wide standards
If your organization maintains consistent coding standards across all repositories, such as security practices, documentation requirements, or naming conventions, use global scope to apply learnings universally.

```yaml
# .coderabbit.yaml
knowledge_base:
  learnings:
    scope: "global"
```
Use 'auto' for mixed visibility
The auto setting works well when you have both public and private repositories, and you want to:

- Keep public repository learnings isolated
- Share learnings across private repositories

This is the default behavior and requires no configuration.

Troubleshooting
Learnings appear not to be working
If CodeRabbit seems to ignore your learnings (for example, continuing to make suggestions that contradict existing learnings), try this workaround:

- Review existing learnings. Go to your project's Learnings page and verify that all relevant learnings are active and clearly phrased.
- Consider possible conflicts with path instructions or coding guidelines. Path instructions take precedence over learnings.
- Add a reinforcement rule. Introduce a new rule that explicitly tells the model to stop and reconsider the learnings before continuing the review, as in the sketch after this list.
- Save and re-test. Commit this change and observe the next few automated reviews. CodeRabbit should now respect the learnings more consistently.
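A minimal sketch of such a reinforcement rule, assuming you keep configuration in .coderabbit.yaml and use path instructions (the instruction wording is illustrative, not prescribed):

```yaml
# .coderabbit.yaml
reviews:
  path_instructions:
    # Applies to every file in the repository.
    - path: "**/*"
      instructions: |
        Before posting any comment, stop and re-read the stored learnings for
        this repository. Do not make suggestions that contradict an active
        learning.
```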
Maintaining learnings over time
Team conventions evolve, and learnings can become stale. To maintain learnings effectively:

- Quarterly review. Set a reminder to review your learnings every quarter. Look for learnings that reference deprecated patterns, old file structures, or outdated team decisions.
- Delete contradictory learnings. If you find learnings that conflict with current practices, delete them to avoid confusing CodeRabbit.
- Update rather than accumulate. When team standards change, update or delete old learnings rather than adding new ones that contradict them. Multiple conflicting learnings on the same topic can produce inconsistent behavior.
The filtering tools on the Learnings page can help with this maintenance:

- Use the similarity search to find learnings about areas where your practices have changed.
- Filter by creation date to find the oldest learnings.
- Review learnings from team members who are no longer active.
What's next
- Add review instructions for formal, path-based rules
- Configure the knowledge base for broader context integration
- Set up issue tracking integration for linked issue context