mafintosh
reviewed
May 21, 2018
| "varint": "^5.0.0" | ||
| }, | ||
| "devDependencies": { | ||
| "@andrewosh/nanobench": "^2.2.0", |
Owner
there are two nanobench in the deps
Owner
@andrewosh what's missing for landing this? Would be a cool addition!
Hey all,
Here's an initial stab at a benchmarking system that should help us get some solid numbers. Each benchmark is performed on 4 databases, with a customizable number of trials per benchmark (the default is 5). The initial set of databases is (and perhaps we want to add to this?):
The initial set of benchmarks is very simple: large batch writes, many single writes, and iteration over various subsets of a large db. The database has a single writer and is entirely local. This set will surely need to be expanded to reflect real-world use cases.
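The per-benchmark trial loop can be sketched roughly like this (a minimal sketch: `runTrials` and the placeholder workload are illustrative stand-ins, not the actual bench code, which runs against the hyperdb instances):

```javascript
// Run a workload `trials` times and collect nanosecond timings
// (process.hrtime.bigint() gives a nanosecond-resolution timestamp).
function runTrials (name, trials, workload) {
  const times = []
  for (let i = 0; i < trials; i++) {
    const start = process.hrtime.bigint()
    workload()
    times.push(Number(process.hrtime.bigint() - start))
  }
  const meanNs = times.reduce((a, b) => a + b, 0) / times.length
  return { name, trials, meanNs }
}

// Placeholder workload standing in for the real db writes/reads.
const result = runTrials('writes-random-data', 5, () => {
  let sum = 0
  for (let j = 0; j < 1e4; j++) sum += j
})

console.log(result.name, result.trials, result.meanNs)
```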
Speaking of real-world use-cases, all the data so far is randomly generated. @mafintosh suggested a dictionary as a more realistic dataset. Any other ideas for fixtures?
At the end of benchmarking, results are dumped into CSV files in bench/stats. Here are some examples of what those look like, from a recent run:
https://github.com/andrewosh/hyperdb/blob/benchmarking-2/bench/stats/writes-random-data.csv
https://github.com/andrewosh/hyperdb/blob/benchmarking-2/bench/stats/reads-random-data.csv
(Timing is in nanoseconds, so some post-processing is required to make it readable.)
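The nanosecond-to-readable post-processing is a one-liner; a helper like this (hypothetical, not part of the PR) would do:

```javascript
// Convert a nanosecond timing to a human-readable millisecond string.
function nsToMs (ns) {
  return (ns / 1e6).toFixed(3) + ' ms'
}

console.log(nsToMs(1500000)) // prints "1.500 ms"
```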
A few things of note:
- This currently depends on my fork of `nanobench`, because I started abusing it and I'm unsure whether the changes I made should be reflected upstream. Before merging, that dependency (on my `nanobench` fork) will have to be changed.