Restructure benchmarks and use Vitest's native comparison #497
Conversation
Benchmark Restructuring:
- Put each operation in its own describe() block to prevent Vitest from treating them as alternatives for comparison
- Eliminates confusing "X times faster than Y" comparisons between unrelated operations (e.g., ping vs connection creation)
- Each benchmark now measured independently

Simplified Comparison Logic:
- Replaced custom track-performance.mjs with Vitest's built-in --outputJson and --compare flags
- Vitest natively shows ⇑/⇓ indicators and multipliers for changes
- Baseline stored in standard Vitest JSON format

Updated scripts:
- bench:baseline - Save results to baseline.json
- bench:compare - Compare with baseline
- bench:check - Check for regressions (CI mode)

Updated Documentation:
- Added benchmark structure explanation
- Documented Vitest's comparison output format
- Updated expected performance ranges

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
Summary of Changes

Hello @theturtle32, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly refines the project's performance benchmarking system. It addresses an issue where Vitest's default behavior led to misleading comparisons between distinct operations by restructuring benchmarks into independent describe() blocks.
Code Review
This pull request is a great improvement, refactoring the benchmark suite to use Vitest's native comparison features and restructuring the tests for clearer, independent measurements. This removes custom scripting and leverages the framework, which enhances maintainability. My review includes a couple of suggestions to further improve maintainability and documentation clarity in package.json and the benchmark README.md.
package.json (Outdated)
| "bench:baseline": "vitest bench --run --config vitest.bench.config.mjs --outputJson test/benchmark/baseline.json", | ||
| "bench:compare": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json", | ||
| "bench:check": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json", |
The benchmark scripts bench:baseline, bench:compare, and bench:check repeat the base command from the bench script. To improve maintainability and reduce duplication, you can reuse the bench script and pass additional arguments. This way, if the base vitest command needs to be changed in the future, you'll only need to update it in one place.
| "bench:baseline": "vitest bench --run --config vitest.bench.config.mjs --outputJson test/benchmark/baseline.json", | |
| "bench:compare": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json", | |
| "bench:check": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json", | |
| "bench:baseline": "pnpm run bench -- --outputJson test/benchmark/baseline.json", | |
| "bench:compare": "pnpm run bench -- --compare test/benchmark/baseline.json", | |
| "bench:check": "pnpm run bench -- --compare test/benchmark/baseline.json", |
test/benchmark/README.md (Outdated)
```sh
# Check for regressions (CI mode)
pnpm run bench:check
```
The description for bench:check is a bit brief. To make it clearer for other contributors, especially in a CI context, it would be helpful to explicitly state that this command will fail (exit with a non-zero code) if a performance regression is detected.
Suggested change:

```diff
-# Check for regressions (CI mode)
-pnpm run bench:check
+# Check for regressions (exits with an error on performance drops, for CI)
+pnpm run bench:check
```
- Use 'pnpm run bench --' in bench:baseline and bench:compare to avoid duplicating full command
- Add note explaining bench:check is intended for CI environments where builds should fail on performance regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
Summary
Restructured benchmarks for clearer output and replaced custom comparison logic with Vitest's built-in functionality.
Changes
Benchmark Restructuring
Problem: Vitest was treating all benchmarks in a describe block as alternative implementations (like comparing sorting algorithms), resulting in confusing output like "ping is 82x faster than connection creation."
Solution: Each operation now gets its own `describe()` block, so they're measured independently.

Before:
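A minimal sketch of the old shape, using Vitest's `bench`/`describe` API; the benchmark names and the helpers `pingOnce()` / `createConnection()` are hypothetical stand-ins, not the actual suite:

```js
import { bench, describe } from 'vitest';

// Hypothetical stand-ins for the real WebSocket operations.
const pingOnce = async () => { /* send a ping, await the pong */ };
const createConnection = async () => { /* open and tear down a connection */ };

// Unrelated operations share one describe(), so Vitest ranks them
// against each other as if they were competing implementations.
describe('websocket operations', () => {
  bench('ping round-trip', async () => {
    await pingOnce();
  });

  bench('connection creation', async () => {
    await createConnection();
  });
});
```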
After:
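The same sketch restructured, one `describe()` per operation (reusing the hypothetical helpers above):

```js
import { bench, describe } from 'vitest';

// Each operation lives in its own describe(), so each benchmark is
// measured and reported independently (no cross-operation ranking).
describe('ping round-trip', () => {
  bench('ping round-trip', async () => {
    await pingOnce();
  });
});

describe('connection creation', () => {
  bench('connection creation', async () => {
    await createConnection();
  });
});
```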
Simplified Comparison Logic
Replaced the custom `track-performance.mjs` script (125 lines of text parsing) with Vitest's native flags:

- `--outputJson` - saves results in structured JSON
- `--compare` - compares against the baseline with visual indicators

Comparison Output:
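The exact layout depends on the Vitest version; based on the ⇑/⇓ indicators and multipliers described above, a compare run looks roughly like this (names and numbers invented for illustration):

```
 ✓ ping round-trip
   · ping round-trip        112,340.21 ops/sec   ⇑ 1.08x faster than baseline
 ✓ connection creation
   · connection creation      1,204.55 ops/sec   ⇓ 1.03x slower than baseline
```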
Updated Scripts
- `bench:baseline` - now uses `--outputJson test/benchmark/baseline.json`
- `bench:compare` - uses `--compare test/benchmark/baseline.json`
- `bench:check` - uses `--compare` (same as compare, for CI)
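A typical workflow with these scripts might look like the following sketch (only the command names from this PR are used; the comments describe intent):

```sh
# Record a baseline (writes test/benchmark/baseline.json)
pnpm run bench:baseline

# ...make changes, then compare against the saved baseline
pnpm run bench:compare

# In CI: same comparison, intended to fail the build on regressions
pnpm run bench:check
```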
Files Changed

- `test/benchmark/track-performance.mjs` - removed (no longer needed)
- `baseline.json` - now uses Vitest's JSON format
- `package.json` - updated scripts to use Vitest flags
- `README.md` - documented new structure and comparison format

Test Plan
- `pnpm run bench` - all benchmarks pass
- `pnpm run bench:baseline` - creates baseline.json
- `pnpm run bench:compare` - shows ⇑/⇓ indicators
- `pnpm lint:fix` - no errors

🤖 Generated with Claude Code