alfin06/AgentIssue-Bench

AGENTISSUE-BENCH

Paper   Leaderboard

AGENTISSUE-BENCH is the first reproducible issue resolution benchmark focused on real-world agent system issues. It is designed to evaluate the efficacy of state-of-the-art software engineering (SE) agents in resolving these issues.

🗓️ Updates

  • 2025-05: Initial benchmark release

📚 Benchmark Dataset

Through a multi-step filtering process (failure reproduction, patch reproduction, and non-flakiness verification), we collect 50 reproducible agent issues, which form AGENTISSUE-BENCH.

Each issue is containerized as a Docker image and hosted on Docker Hub: 🔗 Docker Hub Repository

To retrieve the images for all issues, run:

$ python pull_images.py

To pull a specific image by tag, use:

$ python pull_images.py --tag <tag>

To remove all pulled Docker images and containers, run:

$ python remove_images.py

To remove a specific image and container by tag:

$ python remove_images.py --tag <tag>
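Internally, helper scripts like these typically shell out to the Docker CLI. The sketch below shows one plausible way the pull logic could work; the Docker Hub repository name used here is a placeholder, not the real one (see the Docker Hub link above for the actual repository and tags):

```python
import subprocess

# Placeholder repository name -- substitute the real one from Docker Hub.
DOCKER_REPO = "example-org/agentissue-bench"

def pull_command(tag: str) -> list[str]:
    """Build the `docker pull` command for one issue image."""
    return ["docker", "pull", f"{DOCKER_REPO}:{tag}"]

def pull_image(tag: str) -> None:
    """Pull one issue image; raises CalledProcessError if the tag is missing."""
    subprocess.run(pull_command(tag), check=True)
```

Removal would follow the same pattern with `docker rmi` (and `docker rm` for containers) in place of `docker pull`.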

📊 Results

Overall Resolution Rate

The following figure shows the resolution rate on AgentIssue-Bench vs. traditional software issues.

The following table presents the overall results of SE agents on AgentIssue-Bench.

The following figure shows the issue distribution of AgentIssue-Bench.

🔧 Patch Generation

We evaluate the capabilities of 3 state-of-the-art SE agents on AGENTISSUE-BENCH, collecting the patches they generate to resolve real-world agent issues.

🛠️ Setup Instructions

1. Clone the Repository

$ git clone https://github.com/To-D/AgentIssue-Bench.git

2. Run the Studied SE Agents

Note: please download the repo folder from the 🔗 Repo Link. Extract the archive and place the repo/ folder in both Agentless' root directory and AutoCodeRover's root directory before running patch generation.

Agentless

$ cd Agentless
$ conda create -n agentless python=3.12
$ conda activate agentless
$ chmod +x run_agentless.sh
$ ./run_agentless.sh

AutoCodeRover

$ cd auto-code-rover
$ conda create -n auto-code-rover python=3.12
$ conda activate auto-code-rover
$ python run_autocoderover.py

SWE-agent

$ cd SWE-agent
$ conda create -n swe_agent python=3.12
$ conda activate swe_agent
$ chmod +x gen_patches_all.sh
$ ./gen_patches_all.sh

📁 Generated Patches

The Generated Patches directory contains all patches generated by our evaluation of different SE agents and Large Language Models (LLMs). The patches are organized as follows:

Generated Patches/
├── swe-agent/         # Patches generated by SWE-agent
├── Agentless/         # Patches generated by Agentless
└── Auto-code-rover/   # Patches generated by Auto-code-rover

Each agent directory contains patches generated using two state-of-the-art LLMs:

  • claude-3-5-sonnet-20241022
  • gpt-4o
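Given the layout above, patches for a specific agent/model pair can be located programmatically. This is a sketch only: it assumes each agent directory contains one subdirectory per model name, which the README does not explicitly state.

```python
from pathlib import Path

# Map normalized agent names to the directory names shown in the tree above.
AGENT_DIRS = {
    "swe-agent": "swe-agent",
    "agentless": "Agentless",
    "autocoderover": "Auto-code-rover",
}

def patch_dir(agent: str, model: str, root: str = "Generated Patches") -> Path:
    """Directory holding the patches one agent generated with one LLM.

    Assumes (not confirmed by the README) a per-model subdirectory
    named after the LLM, e.g. "gpt-4o".
    """
    return Path(root) / AGENT_DIRS[agent] / model
```

For example, `patch_dir("agentless", "gpt-4o")` would point at `Generated Patches/Agentless/gpt-4o` under this assumption.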

About

Benchmark for issue resolution in agent systems.
