@theturtle32 (Owner)

Summary

Restructured benchmarks for clearer output and replaced custom comparison logic with Vitest's built-in functionality.

Changes

Benchmark Restructuring

Problem: Vitest was treating all benchmarks in a describe block as alternative implementations (like comparing sorting algorithms), resulting in confusing output like "ping is 82x faster than connection creation."

Solution: Each operation now gets its own describe block, so they're measured independently.
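As a sketch, a restructured benchmark file now looks roughly like the following. The helper names `createConnection` and `connection` are illustrative stand-ins, not the actual code, and the file runs under the Vitest benchmark runner rather than as a standalone script:

```javascript
import { bench, describe } from 'vitest';

// Each operation gets its own describe() block, so Vitest measures it
// independently instead of ranking it against unrelated benchmarks.
describe('Connection Creation', () => {
  bench('create connection instance', () => {
    createConnection(); // illustrative stand-in for the real setup code
  });
});

describe('Send Ping Frame', () => {
  bench('send ping frame', () => {
    connection.ping(); // illustrative: assumes a pre-created connection
  });
});
```

Previously both bench() calls shared a single describe() block, which is why Vitest ranked one against the other in the summary.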

Before:

 BENCH  Summary
  send ping frame - fastest
    82.45x faster than create connection instance

After:

 BENCH  Summary
  create connection instance - Connection Creation
  send small UTF-8 message - Send Small UTF-8 Message
  send ping frame - Send Ping Frame

Simplified Comparison Logic

Replaced the custom track-performance.mjs script (125 lines of text parsing) with Vitest's native flags:

  • --outputJson - Saves results in structured JSON
  • --compare - Compares against baseline with visual indicators

Comparison Output:

· send ping frame  2,132,100.44  [1.02x] ⇑
  send ping frame  2,090,590.07  (baseline)
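
Conceptually, the comparison divides each benchmark's current throughput (ops/sec) by its baseline value and prints a multiplier with a direction indicator. The sketch below illustrates that idea; the data shape is a simplified assumption, not Vitest's exact JSON format:

```javascript
// Simplified sketch of baseline comparison (hypothetical data shape).
const baseline = { 'send ping frame': { hz: 2090590.07 } };
const current = { 'send ping frame': { hz: 2132100.44 } };

for (const [name, result] of Object.entries(current)) {
  const base = baseline[name];
  if (!base) continue; // benchmark not present in the baseline
  const ratio = result.hz / base.hz;
  const arrow = ratio >= 1 ? '⇑' : '⇓'; // faster or slower than baseline
  console.log(`${name}  ${result.hz.toLocaleString('en-US')}  [${ratio.toFixed(2)}x] ${arrow}`);
}
```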

Updated Scripts

  • bench:baseline - Now uses --outputJson test/benchmark/baseline.json
  • bench:compare - Uses --compare test/benchmark/baseline.json
  • bench:check - Uses --compare (same as compare, for CI)
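
In package.json terms, the updated scripts presumably look like the fragment below. The base `bench` command is inferred from the full commands quoted in the review comments on this PR; treat this as a sketch of the relevant fragment, not the exact file contents:

```json
{
  "scripts": {
    "bench": "vitest bench --run --config vitest.bench.config.mjs",
    "bench:baseline": "vitest bench --run --config vitest.bench.config.mjs --outputJson test/benchmark/baseline.json",
    "bench:compare": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json",
    "bench:check": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json"
  }
}
```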

Files Changed

  • Removed: test/benchmark/track-performance.mjs (no longer needed)
  • Modified: Both benchmark files restructured with individual describe blocks
  • Modified: baseline.json - Now uses Vitest's JSON format
  • Modified: package.json - Updated scripts to use Vitest flags
  • Modified: README.md - Documented new structure and comparison format

Test Plan

  • Ran pnpm run bench - all benchmarks pass
  • Ran pnpm run bench:baseline - creates baseline.json
  • Ran pnpm run bench:compare - shows ⇑/⇓ indicators
  • Verified output no longer shows confusing cross-operation comparisons
  • Ran pnpm lint:fix - no errors

🤖 Generated with Claude Code

Benchmark Restructuring:
- Put each operation in its own describe() block to prevent Vitest from
  treating them as alternatives for comparison
- Eliminates confusing "X times faster than Y" comparisons between
  unrelated operations (e.g., ping vs connection creation)
- Each benchmark now measured independently

Simplified Comparison Logic:
- Replaced custom track-performance.mjs with Vitest's built-in --outputJson
  and --compare flags
- Vitest natively shows ⇑/⇓ indicators and multipliers for changes
- Baseline stored in standard Vitest JSON format
- Updated scripts:
  - bench:baseline - Save results to baseline.json
  - bench:compare - Compare with baseline
  - bench:check - Check for regressions (CI mode)

Updated Documentation:
- Added benchmark structure explanation
- Documented Vitest's comparison output format
- Updated expected performance ranges

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@gemini-code-assist (bot)

Summary of Changes

Hello @theturtle32, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refines the project's performance benchmarking system. It addresses an issue where Vitest's default behavior led to misleading comparisons between distinct operations by restructuring benchmarks into independent describe blocks. Furthermore, it streamlines the baseline tracking and comparison process by fully adopting Vitest's native --outputJson and --compare flags, eliminating the need for a custom script and providing more integrated and visual performance insights.

Highlights

  • Benchmark Restructuring: Benchmarks are now organized into individual describe blocks to prevent Vitest from incorrectly comparing unrelated operations, leading to clearer and more accurate performance measurements.
  • Native Vitest Comparison: The custom track-performance.mjs script has been removed and replaced with Vitest's built-in --outputJson and --compare flags for saving baselines and comparing current results, simplifying the benchmarking workflow.
  • Updated Benchmark Scripts: package.json scripts (bench:baseline, bench:compare, bench:check) have been updated to leverage Vitest's native comparison features.
  • Improved Documentation: The README.md in the benchmark directory has been updated to reflect the new benchmark structure, commands, and comparison output format.

@gemini-code-assist (bot) left a comment

Code Review

This pull request is a great improvement, refactoring the benchmark suite to use Vitest's native comparison features and restructuring the tests for clearer, independent measurements. This removes custom scripting and leverages the framework, which enhances maintainability. My review includes a couple of suggestions to further improve maintainability and documentation clarity in package.json and the benchmark README.md.

package.json Outdated
Comment on lines 59 to 61
"bench:baseline": "vitest bench --run --config vitest.bench.config.mjs --outputJson test/benchmark/baseline.json",
"bench:compare": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json",
"bench:check": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json",


Severity: medium

The benchmark scripts bench:baseline, bench:compare, and bench:check repeat the base command from the bench script. To improve maintainability and reduce duplication, you can reuse the bench script and pass additional arguments. This way, if the base vitest command needs to be changed in the future, you'll only need to update it in one place.

Suggested change:

- "bench:baseline": "vitest bench --run --config vitest.bench.config.mjs --outputJson test/benchmark/baseline.json",
- "bench:compare": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json",
- "bench:check": "vitest bench --run --config vitest.bench.config.mjs --compare test/benchmark/baseline.json",
+ "bench:baseline": "pnpm run bench -- --outputJson test/benchmark/baseline.json",
+ "bench:compare": "pnpm run bench -- --compare test/benchmark/baseline.json",
+ "bench:check": "pnpm run bench -- --compare test/benchmark/baseline.json",

Comment on lines 17 to 18
# Check for regressions (CI mode)
pnpm run bench:check


Severity: medium

The description for bench:check is a bit brief. To make it clearer for other contributors, especially in a CI context, it would be helpful to explicitly state that this command will fail (exit with a non-zero code) if a performance regression is detected.

Suggested change:

- # Check for regressions (CI mode)
- pnpm run bench:check
+ # Check for regressions (exits with an error on performance drops, for CI)
+ pnpm run bench:check

- Use 'pnpm run bench --' in bench:baseline and bench:compare to avoid duplicating full command
- Add note explaining bench:check is intended for CI environments where builds should fail on performance regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@theturtle32 theturtle32 merged commit ac53fda into v2 Oct 6, 2025
4 checks passed
@theturtle32 theturtle32 deleted the restructure-benchmarks branch October 6, 2025 17:54