license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- code
- code-review
- software-engineering
- benchmark
- python
size_categories:
- n<1K
dataset_info:
features:
- name: instance_id
dtype: string
- name: repo
dtype: string
- name: language
dtype: string
- name: pull_number
dtype: int64
- name: title
dtype: string
- name: body
dtype: string
- name: created_at
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: resolved_issues
list:
- name: body
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: base_commit
dtype: string
- name: commit_to_review
struct:
- name: head_commit
dtype: string
- name: head_commit_message
dtype: string
- name: patch_to_review
dtype: string
- name: reference_review_comments
list:
- name: diff_hunk
dtype: string
- name: line
dtype: int64
- name: original_line
dtype: int64
- name: original_start_line
dtype: int64
- name: path
dtype: string
- name: start_line
dtype: int64
- name: text
dtype: string
- name: merged_commit
dtype: string
- name: merged_patch
dtype: string
- name: metadata
struct:
- name: difficulty
dtype: string
- name: estimated_review_effort
dtype: int64
- name: problem_domain
dtype: string
splits:
- name: dev
num_bytes: 341885132
num_examples: 7086
- name: test
num_bytes: 35656314
num_examples: 671
download_size: 137206004
dataset_size: 377541446
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
# SWE-CARE: A Comprehensiveness-aware Benchmark for Code Review Evaluation

## Dataset Description
SWE-CARE (Software Engineering - Comprehensive Analysis and Review Evaluation) is a comprehensiveness-aware benchmark for evaluating Large Language Models (LLMs) on repository-level code review tasks. The dataset features real-world code review scenarios from popular open-source Python and Java repositories, with comprehensive metadata and reference review comments.

### Dataset Summary
- Repository: inclusionAI/SWE-CARE
- Paper: CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation
- Languages: Python
- License: Apache 2.0
- Splits:
  - `test`: 671 instances (primary evaluation set)
  - `dev`: 7,086 instances (development/training set)

## Dataset Structure

### Data Instances
Each instance in the dataset represents a code review task with the following structure:
```json
{
  "instance_id": "voxel51__fiftyone-2353@02e9ba1",
  "repo": "voxel51/fiftyone",
  "language": "Python",
  "pull_number": 2353,
  "title": "Fix issue with dataset loading",
  "body": "This PR fixes...",
  "created_at": "2023-01-15T10:30:00Z",
  "problem_statement": "Issue #2350: Dataset fails to load...",
  "hints_text": "Comments from the issue discussion...",
  "resolved_issues": [
    {
      "number": 2350,
      "title": "Dataset loading error",
      "body": "When loading datasets..."
    }
  ],
  "base_commit": "abc123...",
  "commit_to_review": {
    "head_commit": "def456...",
    "head_commit_message": "Fix dataset loading logic",
    "patch_to_review": "diff --git a/file.py..."
  },
  "reference_review_comments": [
    {
      "text": "Consider adding error handling here",
      "path": "src/dataset.py",
      "diff_hunk": "@@ -10,5 +10,7 @@...",
      "line": 15,
      "start_line": 14,
      "original_line": 15,
      "original_start_line": 14
    }
  ],
  "merged_commit": "ghi789...",
  "merged_patch": "diff --git a/file.py...",
  "metadata": {
    "problem_domain": "Bug Fixes",
    "difficulty": "medium",
    "estimated_review_effort": 3
  }
}
```

### Data Fields

#### Core Fields

- `instance_id` (string): Unique identifier in the format `repo_owner__repo_name-PR_number@commit_sha_short`; see the parsing sketch below
- `repo` (string): GitHub repository in the format `owner/name`
- `language` (string): Primary programming language (`Python` or `Java`)
- `pull_number` (int): GitHub pull request number
- `title` (string): Pull request title
- `body` (string): Pull request description
- `created_at` (string): ISO 8601 timestamp of PR creation
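
The identifier format makes it straightforward to recover the repository, pull request number, and short commit SHA from an `instance_id`. The following is a minimal sketch, assuming every instance follows the `repo_owner__repo_name-PR_number@commit_sha_short` pattern above; the helper name is ours and is not part of any released tooling.

```python
def parse_instance_id(instance_id: str) -> dict:
    """Split an instance_id like 'voxel51__fiftyone-2353@02e9ba1'
    into repo, pull number, and short commit SHA (illustrative helper)."""
    repo_part, commit_sha = instance_id.rsplit("@", 1)
    repo_slug, pull_number = repo_part.rsplit("-", 1)  # repo names may contain '-'
    owner, name = repo_slug.split("__", 1)
    return {
        "repo": f"{owner}/{name}",
        "pull_number": int(pull_number),
        "commit_sha_short": commit_sha,
    }

print(parse_instance_id("voxel51__fiftyone-2353@02e9ba1"))
# {'repo': 'voxel51/fiftyone', 'pull_number': 2353, 'commit_sha_short': '02e9ba1'}
```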

#### Problem Context

- `problem_statement` (string): Combined title and body of the resolved issue(s)
- `hints_text` (string): Relevant comments from the issues prior to the PR
- `resolved_issues` (list): Resolved issues, each with:
  - `number` (int): Issue number
  - `title` (string): Issue title
  - `body` (string): Issue description

#### Code Changes

- `base_commit` (string): Base commit SHA before the changes
- `commit_to_review` (dict): The commit being reviewed:
  - `head_commit` (string): Commit SHA to review
  - `head_commit_message` (string): Commit message
  - `patch_to_review` (string): Git diff of the changes to review; see the checkout sketch below
- `merged_commit` (string): Final merged commit SHA
- `merged_patch` (string): Final merged changes (ground truth)
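
To reproduce the exact code state under review locally, one option is to check out `base_commit` in a clone of `repo` and apply `patch_to_review` on top of it. The sketch below reflects our assumption about how these fields fit together and is not an official harness; it expects an existing local clone.

```python
import subprocess

def checkout_review_state(repo_dir: str, instance: dict) -> None:
    """Check out the base commit and apply the patch under review
    in an existing local clone (illustrative sketch only)."""
    base = instance["base_commit"]
    patch = instance["commit_to_review"]["patch_to_review"]

    # Reset the working tree to the state before the PR changes.
    subprocess.run(["git", "checkout", "--force", base], cwd=repo_dir, check=True)
    # Apply the diff under review; `git apply` reads the patch from stdin here.
    subprocess.run(["git", "apply"], cwd=repo_dir, input=patch, text=True, check=True)
```

If the PR head commit is reachable in your clone (for example after fetching the pull request refs), checking out `commit_to_review.head_commit` directly should produce the same state.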

#### Reference Reviews

- `reference_review_comments` (list): Human code review comments (normalized in the sketch below), each with:
  - `text` (string): Review comment text
  - `path` (string): File path being reviewed
  - `diff_hunk` (string): Relevant code diff context
  - `line` (int): Line number in the new version
  - `start_line` (int): Start line for multi-line comments
  - `original_line` (int): Line number in the original version
  - `original_start_line` (int): Original start line
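
When scoring a model's review against these references, it is often convenient to flatten each comment into a record of file path, line span, and text. A small sketch over the fields listed above; the record layout is our own choice, not a prescribed format.

```python
def normalize_reference_comments(instance: dict) -> list[dict]:
    """Flatten each human review comment into (path, line span, text)
    for side-by-side comparison with generated comments (illustrative)."""
    records = []
    for comment in instance["reference_review_comments"]:
        records.append({
            "path": comment["path"],
            # start_line may be unset for single-line comments; fall back
            # to the comment's own line so the span is always well-formed.
            "start_line": comment.get("start_line") or comment["line"],
            "end_line": comment["line"],
            "text": comment["text"],
        })
    return records
```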

#### Metadata

- `metadata` (dict): LLM-classified attributes:
  - `problem_domain` (string): Category like "Bug Fix", "Feature", "Refactoring", etc.
  - `difficulty` (string): "Easy", "Medium", or "Hard"
  - `estimated_review_effort` (int): Scale of 1-5 for review complexity

### Data Splits

| Split | Instances | Description |
|---|---|---|
| test | 671 | Primary evaluation set for benchmarking |
| dev | 7,086 | Development set for training/fine-tuning |

## Usage

### Loading the Dataset
```python
from datasets import load_dataset

# Load the test split (default for evaluation)
dataset = load_dataset("inclusionAI/SWE-CARE", split="test")

# Load the dev split
dev_dataset = load_dataset("inclusionAI/SWE-CARE", split="dev")

# Load both splits
full_dataset = load_dataset("inclusionAI/SWE-CARE")
```
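
The usual `datasets` operations apply on top of this. For example, the snippet below filters the test split by the LLM-classified difficulty label; the comparison is done case-insensitively because the exact casing of the labels may differ from the examples above.

```python
from datasets import load_dataset

dataset = load_dataset("inclusionAI/SWE-CARE", split="test")

# Keep only instances whose metadata marks them as hard to review.
hard_instances = dataset.filter(
    lambda x: x["metadata"]["difficulty"].lower() == "hard"
)
print(f"Hard instances: {len(hard_instances)}")
```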

### Using the SWE-CARE Evaluation Framework
```python
from swe_care.utils.load import load_code_review_dataset

# Load from Hugging Face (default)
instances = load_code_review_dataset()

# Access instance data
for instance in instances:
    print(f"Instance: {instance.instance_id}")
    print(f"Repository: {instance.repo}")
    print(f"Problem: {instance.problem_statement}")
    print(f"Patch to review: {instance.commit_to_review.patch_to_review}")
    print(f"Reference comments: {len(instance.reference_review_comments)}")
```
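
As an illustration of how these fields can feed an LLM-based reviewer, the sketch below assembles a plain-text prompt from the problem context and the patch under review. The prompt template is purely hypothetical and is not the format used in the paper's evaluation.

```python
def build_review_prompt(instance) -> str:
    """Assemble an illustrative code-review prompt from one instance."""
    return (
        f"Repository: {instance.repo}\n"
        f"Problem statement:\n{instance.problem_statement}\n\n"
        f"Patch to review:\n{instance.commit_to_review.patch_to_review}\n\n"
        "Write code review comments, each anchored to a file path and line."
    )

# Works for any iterable of instances returned by the loader.
prompt = build_review_prompt(next(iter(instances)))
```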

### Running Evaluation
See the GitHub repository for detailed documentation and examples.

## Evaluation Metrics and Baseline Results
See the paper for comprehensive evaluation metrics and baseline results on various LLMs.

## Additional Information

### Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{guo2025codefusecrbenchcomprehensivenessawarebenchmarkendtoend,
  title={CodeFuse-CR-Bench: A Comprehensiveness-aware Benchmark for End-to-End Code Review Evaluation in Python Projects},
  author={Hanyang Guo and Xunjin Zheng and Zihan Liao and Hang Yu and Peng DI and Ziyin Zhang and Hong-Ning Dai},
  year={2025},
  eprint={2509.14856},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2509.14856},
}
```

### Contributions
We welcome contributions! Please see our GitHub repository for:
- Data collection improvements
- New evaluation metrics
- Baseline model results
- Bug reports and feature requests

### License
This dataset is released under the Apache 2.0 License. See LICENSE for details.

### Changelog
- v0.2.0 (2025-10): Expanded dataset to 671 test instances
- v0.1.0 (2025-09): Initial release with 601 test instances and 7,086 dev instances