* Add Foundry multi-version benchmarking suite
- Automated benchmarking across multiple Foundry versions using hyperfine
- Supports stable, nightly, and specific version tags (e.g., v1.0.0)
- Benchmarks 5 major Foundry projects: account, v4-core, solady, morpho-blue, spark-psm
- Tests forge test, forge build (no cache), and forge build (with cache)
- Generates comparison tables in markdown format
- Uses foundryup for version management
- Exports JSON data for detailed analysis
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
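The suite introduced above drives hyperfine over a matrix of Foundry versions, projects, and build modes, switching toolchains with foundryup and exporting JSON for later comparison. Below is a minimal Rust sketch of that matrix loop; the version and project lists come from the commit message, while the directory layout, the `forge clean` prepare step, and the exact foundryup flag are illustrative assumptions.

```rust
use std::process::Command;

/// Illustrative benchmark matrix: the versions and projects are taken from the
/// commit message; paths and helper details are assumptions.
fn main() -> std::io::Result<()> {
    let versions = ["stable", "nightly", "v1.0.0"];
    let projects = ["account", "v4-core", "solady", "morpho-blue", "spark-psm"];

    for version in versions {
        // Switch the active toolchain before benchmarking; the exact flag
        // depends on the installed foundryup version (assumption).
        Command::new("foundryup").arg("--install").arg(version).status()?;

        for project in projects {
            // Benchmark `forge build` without cache via hyperfine, exporting
            // JSON for the comparison tables generated later.
            Command::new("hyperfine")
                .current_dir(format!("benches/{project}")) // assumed layout
                .args(["--prepare", "forge clean", "--runs", "5"])
                .args(["--export-json", &format!("{project}-{version}.json")])
                .arg("forge build")
                .status()?;
        }
    }
    Ok(())
}
```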
* Fix benchmark script JSON data extraction and table formatting
- Fix relative path issue causing JSON files to fail creation
- Convert benchmark directories to absolute paths using SCRIPT_DIR
- Improve markdown table formatting with proper column names and alignment
- Use unified table generation with string concatenation for better formatting
- Increase benchmark runs from 3 to 5 for more reliable results
- Use --prepare instead of --cleanup for better cache management
- Remove stderr suppression to catch hyperfine errors
- Update table headers to show units (seconds) for clarity
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
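The JSON-extraction and table-formatting fixes above imply a small step that turns hyperfine's exported JSON into the markdown comparison table. A hedged serde sketch is below; the `results`, `command`, `mean`, and `stddev` fields follow hyperfine's JSON export schema (values in seconds), and the table layout is only illustrative.

```rust
use serde::Deserialize;

// Subset of hyperfine's --export-json output; field names assumed from
// hyperfine's documented schema (results[].command/mean/stddev, in seconds).
#[derive(Deserialize)]
struct HyperfineExport {
    results: Vec<HyperfineResult>,
}

#[derive(Deserialize)]
struct HyperfineResult {
    command: String,
    mean: f64,
    stddev: f64,
}

/// Build one markdown table row per benchmarked command (units: seconds).
fn to_markdown(json: &str) -> serde_json::Result<String> {
    let export: HyperfineExport = serde_json::from_str(json)?;
    let mut table = String::from("| Command | Mean (s) | Stddev (s) |\n|---|---|---|\n");
    for r in &export.results {
        table.push_str(&format!("| {} | {:.3} | {:.3} |\n", r.command, r.mean, r.stddev));
    }
    Ok(table)
}
```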
* parallel benchmarking
* refac: mv to benches/ dir
* feat: criterion benches
* fix: install all foundry versions at once
* nit
* - setup benchmark repos in parallel
- run forge build in parallel for forge-test bench
- switch foundry versions
- README specifying prereqs
* feat: shell script to run benches
* feat: ci workflow, fix script
* update readme
* feat: enhance benchmarking suite with version flexibility
- Add `get_benchmark_versions()` helper to read versions from env var
- Update all benchmarks to use version helper for consistency
- Add `--versions` and `--force-install` flags to shell script
- Enable all three benchmarks (forge_test, build_no_cache, build_with_cache)
- Improve error handling for corrupted forge installations
- Remove complex workarounds in favor of clear error messages
The benchmarks now support custom versions via:
`./run_benchmarks.sh --versions stable,nightly,v1.2.0`
🤖 Generated with Claude Code
Co-Authored-By: Claude <[email protected]>
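The `get_benchmark_versions()` helper mentioned above reads the version list from an environment variable so all benchmarks stay consistent. A minimal sketch, assuming a variable named `FOUNDRY_BENCH_VERSIONS` and a `stable`/`nightly` default set; only the helper's name and the comma-separated format come from the commits.

```rust
use std::env;

/// Read the Foundry versions to benchmark from the environment, falling back
/// to a default set. The variable name and defaults here are assumptions.
fn get_benchmark_versions() -> Vec<String> {
    env::var("FOUNDRY_BENCH_VERSIONS")
        .map(|v| {
            v.split(',')
                .map(|s| s.trim().to_string())
                .filter(|s| !s.is_empty())
                .collect()
        })
        .unwrap_or_else(|_| vec!["stable".to_string(), "nightly".to_string()])
}
```

Presumably the shell script's `--versions` flag sets this variable before launching the benchmarks, though that wiring is not spelled out in the commits.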
* latest bench
* rm notes
* remove shell based bench suite
* feat: benches using criterion (#10805)
---------
Co-authored-by: Claude <[email protected]>
* unified benchmarker:
* main.rs
* forge version is controlled by the bin
* parses criterion json to collect results - writes to LATEST.md
* parallel bench
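The unified benchmarker above collects criterion results and writes them to LATEST.md. Criterion stores per-benchmark statistics under `target/criterion/<bench>/new/estimates.json`; below is a hedged deserialization sketch assuming criterion's default layout, where `mean.point_estimate` is in nanoseconds. The `BenchResult` record is illustrative and not the `CriterionResult` type referenced later in the commits.

```rust
use serde::Deserialize;
use std::{error::Error, fs, path::Path};

// Subset of criterion's estimates.json; field names assumed from criterion's
// default output, with point estimates in nanoseconds.
#[derive(Deserialize)]
struct Estimates {
    mean: Estimate,
}

#[derive(Deserialize)]
struct Estimate {
    point_estimate: f64,
}

/// Illustrative result record; not the CriterionResult type from the commits.
struct BenchResult {
    name: String,
    mean_secs: f64,
}

/// Read one benchmark's mean runtime from its criterion output directory.
fn read_estimate(bench_dir: &Path) -> Result<BenchResult, Box<dyn Error>> {
    let raw = fs::read_to_string(bench_dir.join("new/estimates.json"))?;
    let est: Estimates = serde_json::from_str(&raw)?;
    Ok(BenchResult {
        name: bench_dir
            .file_name()
            .map(|n| n.to_string_lossy().into_owned())
            .unwrap_or_default(),
        mean_secs: est.mean.point_estimate / 1e9, // ns -> s
    })
}
```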
* refac
* refac benchmark results table generation
* cleanup main.rs
* rm dep
* cleanup main.rs
* deser estimate
* nit
* cleanup CriterionResult type
* feat: specify repos via flag
* nits
* update bench ci and README
* bench fuzz tests
* fmt
* license
* coverage bench
* nits
* clippy
* clippy
* separate benches into different jobs in CI
* remove criterion
* feat: hyperfine setup in foundry-bench
* forge version details: hash and date
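"forge version details: hash and date" refers to tagging each result with the exact build being benchmarked. A minimal sketch is below: it shells out to `forge --version` and keeps the raw output rather than parsing individual fields, since the version string layout (version, commit hash, build timestamp) varies across forge releases.

```rust
use std::process::Command;

/// Capture the forge build identity (version, commit hash, build date) for the
/// results table. The `forge --version` output format differs between
/// releases, so this sketch keeps the raw string instead of parsing fields.
fn forge_version_details() -> std::io::Result<String> {
    let output = Command::new("forge").arg("--version").output()?;
    Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
}
```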
* run benches again - run cov with --ir-min
* del
* bench in separate ci jobs
* move combine bench results logic to scripts
* setup foundryup in ci
* setup foundryup fix
* clippy
* ci: run on foundry-runner
* ci: don't use wget
* ci: add build essential
* ci: nodejs and npm
* install hyperfine for each job
* fix
* install deps script
* add benchmark-setup, using setup-node action, remove redundant files
* fix
* fix
* checkout repo
* nits
* nit
* fix
* show forge test result in top comment
* force foundry install
* fix bench comment aggregation
* nit
* fix
* feat: create PR for manual runs, else commit in the PR itself.
* fix
* fetch and pull
* chore(`benches`): update benchmark results
🤖 Generated with [Foundry Benchmarks](https://github.com/foundry-rs/foundry/actions)
Co-Authored-By: github-actions <[email protected]>
* fix
* chore(`benches`): update benchmark results
🤖 Generated with [Foundry Benchmarks](https://github.com/foundry-rs/foundry/actions)
Co-Authored-By: github-actions <[email protected]>
---------
Co-authored-by: Claude <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: github-actions <[email protected]>