feat: benches using criterion #10805
Conversation
looks great, and probably cleaner + more flexible than the shell script approach
```rust
Command::new("forge")
    .current_dir(&self.root_path)
    .args(["build"])
```
do we want to use the `--release` flag?
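A conditional toggle could be wired into the command builder. A minimal sketch; note that both the `release` parameter and the `--release` flag itself are assumptions here, to be verified against `forge build --help` before relying on them:

```rust
use std::process::Command;

/// Build the benchmark project, optionally passing an optimization flag.
/// NOTE: `--release` is a hypothetical flag in this sketch; confirm the
/// real optimization flags with `forge build --help`.
fn build_command(root_path: &str, release: bool) -> Command {
    let mut cmd = Command::new("forge");
    cmd.current_dir(root_path);
    let mut args = vec!["build"];
    if release {
        args.push("--release"); // hypothetical flag
    }
    cmd.args(args);
    cmd
}
```

Keeping the flag behind a boolean lets the bench harness run both configurations without duplicating the builder.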
benches/src/lib.rs (outdated)
```rust
}

/// Install a specific foundry version
pub fn install_foundry_version(version: &str) -> Result<()> {
```
with local benchmarks in mind, it could be useful to first check if the version is already installed (i.e. `stable`).
if we do that though, we probably need a flag to force reinstall
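The skip-if-installed idea with a force flag could look like the sketch below. The version-matching heuristic (grepping `forge --version` output) and the `foundryup --install <version>` invocation are assumptions, not the PR's final implementation; check `foundryup --help` for the exact flag:

```rust
use std::process::Command;

/// Decide whether an install is needed. Pure logic, easy to test:
/// `installed` is whatever `forge --version` reported, if anything.
fn should_install(requested: &str, installed: Option<&str>, force: bool) -> bool {
    force || installed.map_or(true, |v| !v.contains(requested))
}

/// Install a Foundry version, skipping the work when it already appears
/// to be active (unless `force` is set).
fn install_foundry_version(version: &str, force: bool) -> Result<(), String> {
    let installed = Command::new("forge")
        .arg("--version")
        .output()
        .ok()
        .map(|o| String::from_utf8_lossy(&o.stdout).into_owned());
    if !should_install(version, installed.as_deref(), force) {
        return Ok(()); // already on the requested version
    }
    let status = Command::new("foundryup")
        .args(["--install", version]) // flag name is an assumption
        .status()
        .map_err(|e| e.to_string())?;
    if status.success() {
        Ok(())
    } else {
        Err(format!("foundryup failed for {version}"))
    }
}
```

Separating the decision from the side effect keeps the "is it already installed" logic unit-testable without a Foundry toolchain on the machine.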
```rust
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
use foundry_bench::{install_foundry_version, BenchmarkProject, BENCHMARK_REPOS, FOUNDRY_VERSIONS};

fn benchmark_forge_test(c: &mut Criterion) {
```
i think that in this case, since we are only benchmarking `forge test`, it would make sense to run `forge build` in parallel to save time
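The suggestion amounts to a parallel map over the project directories. A generic sketch using OS threads; wrapping `Command::new("forge")` with `.arg("build")` per project dir is the intended usage, but that wiring is illustrative rather than the PR's exact code:

```rust
use std::sync::Arc;
use std::thread;

/// Run `f` over each input on its own thread and collect results in
/// input order. In the bench suite, `f` would spawn `forge build` for
/// one project directory so builds overlap instead of running serially.
fn run_parallel<T, R, F>(inputs: Vec<T>, f: F) -> Vec<R>
where
    T: Send + 'static,
    R: Send + 'static,
    F: Fn(T) -> R + Send + Sync + 'static,
{
    let f = Arc::new(f);
    let handles: Vec<_> = inputs
        .into_iter()
        .map(|input| {
            let f = Arc::clone(&f);
            thread::spawn(move || f(input))
        })
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().expect("worker thread panicked"))
        .collect()
}
```

One thread per repo is fine at this scale (a handful of projects); a bounded pool would only matter if the repo list grew large.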
benches/src/lib.rs (outdated)
```rust
pub static BENCHMARK_REPOS: &[RepoConfig] = &[
    RepoConfig { name: "account", org: "ithacaxyz", repo: "account", rev: "main" },
    // Temporarily reduced for testing
    // RepoConfig { name: "solady", org: "Vectorized", repo: "solady", rev: "main" },
    // RepoConfig { name: "v4-core", org: "Uniswap", repo: "v4-core", rev: "main" },
    // RepoConfig { name: "morpho-blue", org: "morpho-org", repo: "morpho-blue", rev: "main" },
    // RepoConfig { name: "spark-psm", org: "marsfoundation", repo: "spark-psm", rev: "master" },
];
```
i think this should work just fine in most repos, since the setup also supports `npm install`. however, just flagging that we could also use a toml file for easier customization, as i did in my repo: https://github.com/0xrusowsky/foundry-benchmarks/blob/master/benchmarks.toml
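A TOML layout mirroring the `RepoConfig` fields above might look like the fragment below; the file name and schema are illustrative, loosely following the linked `benchmarks.toml` rather than copying it:

```toml
# hypothetical benchmarks.toml; field names mirror RepoConfig
[[repos]]
name = "account"
org = "ithacaxyz"
repo = "account"
rev = "main"

[[repos]]
name = "solady"
org = "Vectorized"
repo = "solady"
rev = "main"
```

An array-of-tables keeps one entry per repo, so users can add or comment out projects without touching Rust code.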
note that at the very least, we also need to support env vars (mainly RPCs), otherwise tests will fail in some repos
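One way to honor that: collect whichever RPC variables are set on the host and forward them explicitly into the benchmarked process. A sketch; the variable names (`ETH_RPC_URL`, etc.) and both helpers are illustrative, not the PR's API:

```rust
use std::process::Command;

/// Return the subset of `keys` that are actually set in the host env.
fn collect_rpc_vars(keys: &[&str]) -> Vec<(String, String)> {
    keys.iter()
        .filter_map(|k| std::env::var(k).ok().map(|v| (k.to_string(), v)))
        .collect()
}

/// Build `forge test` with the given env vars forwarded to the child.
/// Names like ETH_RPC_URL are examples; the required set depends on
/// each benchmarked repo's test configuration.
fn test_command(root_path: &str, rpc_vars: &[(String, String)]) -> Command {
    let mut cmd = Command::new("forge");
    cmd.current_dir(root_path).arg("test");
    for (key, val) in rpc_vars {
        cmd.env(key, val);
    }
    cmd
}
```

Forwarding only an explicit allowlist (rather than the whole host environment) keeps benchmark runs reproducible across machines.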
- run forge build in parallel for forge-test bench
- switch foundry versions
- README specifying prereqs
- Add `get_benchmark_versions()` helper to read versions from env var
- Update all benchmarks to use version helper for consistency
- Add `--versions` and `--force-install` flags to shell script
- Enable all three benchmarks (forge_test, build_no_cache, build_with_cache)
- Improve error handling for corrupted forge installations
- Remove complex workarounds in favor of clear error messages

The benchmarks now support custom versions via: `./run_benchmarks.sh --versions stable,nightly,v1.2.0`

🤖 Generated with Claude Code
Co-Authored-By: Claude <[email protected]>
Merged 64baae2 into yash/foundry-benchmarking-suite
* Add Foundry multi-version benchmarking suite
  - Automated benchmarking across multiple Foundry versions using hyperfine
  - Supports stable, nightly, and specific version tags (e.g., v1.0.0)
  - Benchmarks 5 major Foundry projects: account, v4-core, solady, morpho-blue, spark-psm
  - Tests forge test, forge build (no cache), and forge build (with cache)
  - Generates comparison tables in markdown format
  - Uses foundryup for version management
  - Exports JSON data for detailed analysis
* Fix benchmark script JSON data extraction and table formatting
  - Fix relative path issue causing JSON files to fail creation
  - Convert benchmark directories to absolute paths using SCRIPT_DIR
  - Improve markdown table formatting with proper column names and alignment
  - Use unified table generation with string concatenation for better formatting
  - Increase benchmark runs from 3 to 5 for more reliable results
  - Use --prepare instead of --cleanup for better cache management
  - Remove stderr suppression to catch hyperfine errors
  - Update table headers to show units (seconds) for clarity
* parallel benchmarking
* refac: mv to benches/ dir
* feat: criterion benches
* fix: install foundry versions at once
* nit
* setup benchmark repos in parallel; run forge build in parallel for forge-test bench; switch foundry versions; README specifying prereqs
* feat: shell script to run benches
* feat: ci workflow, fix script
* update readme
* feat: enhance benchmarking suite with version flexibility
  - Add `get_benchmark_versions()` helper to read versions from env var
  - Update all benchmarks to use version helper for consistency
  - Add `--versions` and `--force-install` flags to shell script
  - Enable all three benchmarks (forge_test, build_no_cache, build_with_cache)
  - Improve error handling for corrupted forge installations
  - Remove complex workarounds in favor of clear error messages
  - The benchmarks now support custom versions via: `./run_benchmarks.sh --versions stable,nightly,v1.2.0`
* latest bench
* rm notes
* remove shell based bench suite
* feat: benches using criterion (#10805) (squashes the commits listed above)
* unified benchmarker
* main.rs
* forge version is controlled by the bin
* parses criterion json to collect results - writes to LATEST.md
* parallel bench
* refac
* refac benchmark results table generation
* cleanup main.rs
* rm dep
* cleanup main.rs
* deser estimate
* nit
* cleanup CriterionResult type
* feat: specify repos via flag
* nits
* update bench ci and README
* bench fuzz tests
* fmt
* license
* coverage bench
* nits
* clippy
* clippy
* separate benches into different jobs in CI
* remove criterion
* feat: hyperfine setup in foundry-bench
* forge version details: hash and date
* run benches again - run cov with --ir-min
* del
* bench in separate ci jobs
* move combine bench results logic to scripts
* setup foundryup in ci
* setup foundryup fix
* clippy
* ci: run on foundry-runner
* ci: don't use wget
* ci: add build essential
* ci: nodejs and npm
* install hyperfine for each job
* fix
* install deps script
* add benchmark-setup, using setup-node action, remove redundant files
* fix
* fix
* checkout repo
* nits
* nit
* fix
* show forge test result in top comment
* force foundry install
* fix bench comment aggregation
* nit
* fix
* feat: create PR for manual runs, else commit in the PR itself
* fix
* fetch and pull
* chore(`benches`): update benchmark results
* fix
* chore(`benches`): update benchmark results

Co-authored-by: Claude <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: github-actions <[email protected]>
Motivation
Alternative approach to #10804 for setting up the benchmark suite.
Adds scaffolding to benchmark projects using criterion.
Solution
PR Checklist