Conversation

@divdavem divdavem commented Jan 17, 2025

This PR is based on top of #13
It makes it easy to view benchmark results on a graph.
It works in a very simple way: results are exported to a results.js file, and a results.html file displays the average time and the time of each individual test, for each framework, on different graphs.
Also, benchmarks are run in CI and the resulting js file is uploaded with upload-artifact. (removed as mentioned in the comments below)
All tests for all frameworks are run multiple times (see the executions variable in the config.ts file, currently set to 3). The average time of each framework across all tests is displayed at the end of each execution, and the results.js file is updated at that time too.
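For reference, here is a minimal sketch of what exporting to results.js could look like. The data shape, the writeResults helper, and the test names are assumptions for illustration, not the actual code of this PR. Writing a .js file that assigns a global (rather than a plain .json file) lets results.html load the data with a simple script tag, without needing a local web server or fetch/CORS setup.

```ts
import { writeFileSync } from "node:fs";

// Hypothetical shape of one measurement; the real PR may use different field names.
interface BenchmarkResult {
  framework: string;
  test: string;
  timeMs: number;
}

// Writes results as a JS file assigning a global variable, so that a static
// results.html page can load it with a plain <script src="results.js"> tag.
function writeResults(results: BenchmarkResult[], path = "results.js"): void {
  const js = `window.benchmarkResults = ${JSON.stringify(results, null, 2)};\n`;
  writeFileSync(path, js);
}

// Example usage after one execution of all tests (values are made up):
writeResults([
  { framework: "alien-signals", test: "avoidablePropagation", timeMs: 12.3 },
  { framework: "preact-signals", test: "avoidablePropagation", timeMs: 15.1 },
]);
```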

tomByrer commented Jan 17, 2025

benchmarks are run in CI

Cool, but this is a problem, since different CPUs & JS runtimes give different results.
E.g. my tests using Bun on a recent-ish Intel i7 differ from t-b's macOS M2 results using Node. In both, alien-signals took 1st place, but the middle-of-the-pack numbers were all shuffled.

Unless you want to make a CI to run on 4-8 different setups?

divdavem commented Jan 18, 2025

@tomByrer Thank you for your feedback.
You're right that running benchmarks in different environments (hardware, JS engine, OS...) gives different results. Even running benchmarks multiple times in the same environment can give different results.
The graph displayed in the README comes from one particular environment (M3 MacBook Pro using Node.js v22.10.0). Even though it is not the environment everyone uses, it gives an idea of how the different JS libraries compare to one another, and many people refer to it.
I added the CI step to easily get some benchmark results automatically. Those results are just displayed in the logs and in the uploaded artifact (cf for example here). Having CI run on several different setups as you suggest is a good idea, and automatically publishing the results to a more visible place (such as GitHub Pages) linked from the README would be nice as well.
But running benchmarks in CI is not the main focus of this PR; it could be done in other PRs. The main focus is to easily display results visually on different graphs after running the benchmarks, which is something some people requested (cf #2). Only the last (small) commit in my PR adds the CI step, and I do not mind if that commit is discarded (it also slows down the CI).

@divdavem divdavem marked this pull request as draft January 20, 2025 13:43
@divdavem divdavem commented Jan 20, 2025

@tomByrer I have removed the commit which runs the benchmarks in CI.
I have added an option in config.js to specify the number of times all tests are executed, defaulting to 3.
Graphs now display the min, max and average time of each test for each framework.
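As a rough sketch of the aggregation described above (the aggregate helper and field names are assumptions, not the PR's actual code), the min, max and average for one framework/test pair could be computed from the per-execution timings like this:

```ts
// Hypothetical aggregation of per-execution timings into the min/max/average
// values shown on the graphs. Names are illustrative only.
interface TimingStats {
  min: number;
  max: number;
  avg: number;
}

// timesMs: one entry per execution (e.g. 3 entries when executions = 3 in the config).
function aggregate(timesMs: number[]): TimingStats {
  const min = Math.min(...timesMs);
  const max = Math.max(...timesMs);
  const avg = timesMs.reduce((sum, t) => sum + t, 0) / timesMs.length;
  return { min, max, avg };
}

// Example: three executions of the same test for one framework (made-up values).
console.log(aggregate([12.3, 11.8, 12.9])); // { min: 11.8, max: 12.9, avg: 12.33... }
```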

@divdavem divdavem marked this pull request as ready for review January 20, 2025 14:33
@divdavem divdavem force-pushed the displayGraph branch 2 times, most recently from 2feb2b5 to 879ef81 January 20, 2025 14:45