Fully Automated Benchmarking and Weight Generation #6168
Description
The long-term goal of the benchmarking/weight effort is to automate the whole process for the runtime engineer.
This means that if a runtime developer writes an accurate set of benchmarks for their extrinsics, they should be able to run a few simple commands that do all the heavy lifting: benchmarking their runtime and generating the proper weight formulas.
Overview
In order to provide full end-to-end automation of this process, we need to automate the following:
- Benchmarking Runtime Execution
- Benchmarking DB Operations (Add DB Read/Write Tracking to Benchmarking Pipeline #6386)
- Generating Weight formulas and appropriate rust files (Benchmarks Writer CLI #6567)
- Add `WeightInfo` trait to all pallets (Add `WeightInfo` to all pallets with benchmarks. #6575)
- (canceled) Update `decl_module` macro to automatically generate the `WeightInfo` trait
- Update all pallet weights to support usage of `WeightInfo`
  - System: WeightInfo for System, Timestamp, and Utility #6868
  - Identity: WeightInfo for Identity Pallet #7107
  - Balances: Update Balances Pallet to use `WeightInfo` #6610 (companion: Companion for #6610 (Balances Weight Trait) polkadot#1425)
  - Timestamp: WeightInfo for System, Timestamp, and Utility #6868
  - Vesting: WeightInfo for Vesting Pallet #7103
  - Im Online: WeightInfo for ImOnline #7128
  - Staking: Move Staking Weights to T::WeightInfo #7007
  - Session: WeightInfo for Session Pallet #7136
  - Elections Phragmen: Update elections-phragmen weight to WeightInfo #7161
  - Democracy: pallet-democracy use of weightinfo #6783
  - Collective: add generated weight info for pallet-collective #6789
  - Offences: (not needed)
  - Treasury: Bounties #5715
  - Utility: WeightInfo for System, Timestamp, and Utility #6868
  - Indices: WeightInfo for Pallet Indices #7137
  - Scheduler: WeightInfo for Scheduler #7138
  - Multisig: WeightInfo for Multisig Pallet #7154
  - Proxy: Time-delay proxies #6770
Benchmarking Runtime Execution
This process is already automated with our current benchmarking pipeline. We execute the extrinsic given some initial setup and collect data about the results of that benchmark. These data points are then put through a linear regression analysis which tells us the linear formula for this extrinsic. Currently, this information is output as text to the console, but in this end-to-end pipeline, we need to extract this data and use it for generating the weight formulas.
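The regression step above can be sketched as follows. This is a minimal, hypothetical illustration of fitting a single component via ordinary least squares, not the actual analysis code in the pipeline; the sample data is made up.

```rust
// Fit `y = intercept + slope * x` over (component value, execution time) samples,
// as the benchmarking pipeline's linear regression does for each component.
fn fit_linear(samples: &[(u32, u64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let sum_x: f64 = samples.iter().map(|&(x, _)| x as f64).sum();
    let sum_y: f64 = samples.iter().map(|&(_, y)| y as f64).sum();
    let sum_xy: f64 = samples.iter().map(|&(x, y)| x as f64 * y as f64).sum();
    let sum_xx: f64 = samples.iter().map(|&(x, _)| (x as f64).powi(2)).sum();
    let slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x);
    let intercept = (sum_y - slope * sum_x) / n;
    (intercept, slope)
}

fn main() {
    // Made-up samples: 1000 ns base cost plus 50 ns per item.
    let samples = [(0u32, 1000u64), (10, 1500), (20, 2000), (30, 2500)];
    let (base, per_item) = fit_linear(&samples);
    println!("weight(a) = {:.0} + {:.0} * a", base, per_item);
}
```

In the real pipeline this fit is performed per component, and the resulting intercept and slopes feed directly into the generated weight formulas described below.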
Benchmarking DB Operations
Currently we benchmark DB operations through an external process which inspects the state logs produced while executing the benchmarks. From these we can see the DB operations that take place during execution.
Additionally, we have special filters which take into account unique reads/writes to DB keys, and also whitelist certain keys from counting against the weight of an extrinsic. For example, if an extrinsic reads from a storage key more than once, we only count the first read as a DB operation; anything after that is "in-memory". If we write to a key and then read from it, we only count the "write" operation, as the subsequent read is free. Reads/writes to common storage items like events, the caller account, etc. are counted as free, since we know these are already accounted for in other weight calculations.
We may need to add a special DB overlay to accurately track the DB reads and writes as well as implement a Hash table so we can remove duplicate reads/writes and add any other fancy logic we want like a whitelist. This should all be enabled only for benchmarks so that normal node operation does not have this overhead.
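The tracking rules above can be sketched with a small overlay structure. This is a hypothetical illustration, not the actual Substrate overlay API: the type and method names are invented, and the "write dominates" rule is a simplification of the dedup logic described above.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical benchmark-only tracker: count only the first access to each
// key, let a write dominate later reads, and skip whitelisted keys entirely.
#[derive(Default)]
struct DbTracker {
    whitelist: HashSet<Vec<u8>>,
    // Value is `true` if the counted access is a write, `false` for a read.
    accessed: HashMap<Vec<u8>, bool>,
}

impl DbTracker {
    fn on_read(&mut self, key: &[u8]) {
        if self.whitelist.contains(key) {
            return; // whitelisted keys are free
        }
        // Only the first access counts; repeated reads are "in-memory".
        self.accessed.entry(key.to_vec()).or_insert(false);
    }

    fn on_write(&mut self, key: &[u8]) {
        if self.whitelist.contains(key) {
            return; // whitelisted keys are free
        }
        // A write dominates: any read of this key afterwards is free.
        self.accessed.insert(key.to_vec(), true);
    }

    /// Returns (unique reads, unique writes) charged to the extrinsic.
    fn counts(&self) -> (usize, usize) {
        let writes = self.accessed.values().filter(|&&is_write| is_write).count();
        (self.accessed.len() - writes, writes)
    }
}

fn main() {
    let mut t = DbTracker::default();
    t.whitelist.insert(b"System::Events".to_vec());
    t.on_read(b"key_a");
    t.on_read(b"key_a"); // duplicate read: free
    t.on_write(b"key_b");
    t.on_read(b"key_b"); // read after write: free
    t.on_write(b"System::Events"); // whitelisted: free
    println!("{:?}", t.counts()); // one charged read, one charged write
}
```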
Generating Weight Formulas and Appropriate Rust Files
Finally once we have this data, we need to automate a process that puts it all together in a usable way.
The output of this automated process should be a Rust module, `weights.rs`, for each pallet.
Each benchmark written will generate an equivalent `weight_for_*` formula.
For example, if I have:
```rust
benchmarks! {
    function_1 {
        let a in 0 .. a_max;
        let b in 0 .. b_max;
        let c in 0 .. c_max;
        ...
    }
    function_2 {
        let a in 0 .. a_max;
        let d in 0 .. d_max;
        let e in 0 .. e_max;
        ...
    }
    ...
}
```
this would result in:
```rust
fn weight_for_function_1(a: u32, b: u32, c: u32) -> Weight { ... }
fn weight_for_function_2(a: u32, d: u32, e: u32) -> Weight { ... }
```
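A generated function body might look like the following. This is a hypothetical sketch: the constants are made up, and the real pipeline would fill in the intercept and per-component slopes from the regression results.

```rust
type Weight = u64;

// Illustrative generated weight formula: a base cost (the regression
// intercept) plus one fitted slope per benchmark component.
fn weight_for_function_1(a: u32, b: u32, c: u32) -> Weight {
    (22_000 as Weight)                                     // base cost (made up)
        .saturating_add((a as Weight).saturating_mul(120)) // slope for `a` (made up)
        .saturating_add((b as Weight).saturating_mul(35))  // slope for `b` (made up)
        .saturating_add((c as Weight).saturating_mul(7))   // slope for `c` (made up)
}

fn main() {
    println!("weight_for_function_1(10, 5, 1) = {}", weight_for_function_1(10, 5, 1));
}
```

Saturating arithmetic keeps the formula from overflowing even with adversarial component values.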
Then when integrating the weight information into our runtime, we simply write:
```rust
mod weights;
...
#[weight = weights::weight_for_function_1(param_1, param_2, param_3)]
fn function_1(origin, param_1, param_2, param_3) {
    ...
}
```
This would also work for any piecewise weight calculations like so:
```rust
#[weight = if approved {
    weights::weight_for_approve()
} else {
    weights::weight_for_disapproved()
}]
fn approve_or_disapprove(origin, approved: bool) {
    ...
}
```
When we modify logic and need to update the weights, we simply run the pipeline again, and the formulas with the same names will be regenerated to reflect the new weights.