bencher.bencher
Module Contents
Classes
- bencher.bencher.Bench — A server for displaying plots of benchmark results

Attributes

- bencher.bencher.formatter

Functions

- bencher.bencher.set_xarray_multidim(data_array: xarray.DataArray, index_tuple, value: float) xarray.DataArray
- bencher.bencher.kwargs_to_input_cfg(worker_input_cfg: bencher.variables.parametrised_sweep.ParametrizedSweep, **kwargs) bencher.variables.parametrised_sweep.ParametrizedSweep
- bencher.bencher.worker_cfg_wrapper(worker, worker_input_cfg: bencher.variables.parametrised_sweep.ParametrizedSweep, **kwargs) dict
- bencher.bencher.worker_kwargs_wrapper(worker: Callable, bench_cfg: bencher.bench_cfg.BenchCfg, **kwargs) dict
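set_xarray_multidim writes a single benchmark sample into the n-d results array at a given index tuple. A minimal sketch of the idea, using a plain NumPy array in place of an xarray.DataArray (the helper name here is illustrative, not the library's implementation):

```python
import numpy as np

def set_multidim(data: np.ndarray, index_tuple: tuple, value: float) -> np.ndarray:
    # Write a single scalar result into an n-d array at the given index tuple
    data[index_tuple] = value
    return data

# 2 x 3 grid of results, one cell per combination of two swept inputs
results = np.zeros((2, 3))
set_multidim(results, (1, 2), 42.0)
```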
- class bencher.bencher.Bench(bench_name: str = None, worker: Callable | bencher.variables.parametrised_sweep.ParametrizedSweep = None, worker_input_cfg: bencher.variables.parametrised_sweep.ParametrizedSweep = None, run_cfg=None, report=None)
Bases:
bencher.bench_plot_server.BenchPlotServer
A server for displaying plots of benchmark results
- set_worker(worker: Callable, worker_input_cfg: bencher.variables.parametrised_sweep.ParametrizedSweep = None) None
Set the benchmark worker function and optionally the type the worker expects
- Parameters:
worker (Callable) – The benchmark worker function
worker_input_cfg (ParametrizedSweep, optional) – The input type the worker expects. Defaults to None.
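When a worker expects a config object rather than keyword arguments, it can be wrapped so the sweep can call it with plain kwargs. A hypothetical sketch of that pattern (the InputCfg dataclass and worker below are illustrative stand-ins, not bencher's API):

```python
from dataclasses import dataclass

@dataclass
class InputCfg:
    # Illustrative stand-in for a ParametrizedSweep input config
    x: float = 1.0
    y: float = 2.0

def worker(cfg: InputCfg) -> dict:
    # A worker receives the whole config and returns result variables as a dict
    return {"product": cfg.x * cfg.y}

def wrap_worker(worker, input_cfg_cls):
    # Adapt a config-taking worker so it can be called with kwargs, mirroring
    # the role of kwargs_to_input_cfg / worker_cfg_wrapper in this module
    def wrapped(**kwargs) -> dict:
        cfg = input_cfg_cls(**kwargs)
        return worker(cfg)
    return wrapped

kwarg_worker = wrap_worker(worker, InputCfg)
result = kwarg_worker(x=3.0, y=4.0)
```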
- sweep(input_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, result_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, const_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, time_src: datetime.datetime = None, description: str = None, post_description: str = None, pass_repeat: bool = False, tag: str = '', run_cfg: bencher.bench_cfg.BenchRunCfg = None, plot: bool = False) bencher.results.bench_result.BenchResult
- sweep_sequential(title='', input_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, result_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, const_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, optimise_var: bencher.variables.parametrised_sweep.ParametrizedSweep = None, run_cfg: bencher.bench_cfg.BenchRunCfg = None, group_size: int = 1, iterations: int = 1, relationship_cb=None, plot=True) List[bencher.results.bench_result.BenchResult]
- plot_sweep(title: str = None, input_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, result_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, const_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, time_src: datetime.datetime = None, description: str = None, post_description: str = None, pass_repeat: bool = False, tag: str = '', run_cfg: bencher.bench_cfg.BenchRunCfg = None, plot: bool = True) bencher.results.bench_result.BenchResult
The all-in-one function for running a benchmark sweep and plotting the results.
- Parameters:
input_vars (List[ParametrizedSweep], optional) – A list of input variables to sweep over. Defaults to None.
result_vars (List[ParametrizedSweep], optional) – A list of result variables to record. Defaults to None.
const_vars (List[ParametrizedSweep], optional) – A list of variables to keep constant with a specified value. Defaults to None.
title (str, optional) – The title of the benchmark. Defaults to None.
description (str, optional) – A description of the benchmark. Defaults to None.
post_description (str, optional) – A description that comes after the benchmark plots. Defaults to None.
time_src (datetime, optional) – Set a time that the result was generated. Defaults to datetime.now().
pass_repeat (bool, optional) – Set to True if you want the benchmark function to be passed the repeat number. Defaults to False.
tag (str, optional) – Use tags to group different benchmarks together.
run_cfg (BenchRunCfg, optional) – A config describing how the benchmarks are run and plotted.
- Raises:
ValueError – If a result variable is not set
- Returns:
An object containing the data used to generate the results, along with the results themselves
- Return type:
BenchResult
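Conceptually, a sweep evaluates the worker once for every combination of the input variables' sample values and stores each result in the n-d grid. A minimal sketch of that cross-product loop, with illustrative names (not bencher's implementation):

```python
import itertools

def run_sweep(worker, input_vars: dict) -> dict:
    # input_vars maps each swept variable name to its list of sample values;
    # the worker is called once per point in the cartesian product
    results = {}
    names = list(input_vars)
    for point in itertools.product(*input_vars.values()):
        kwargs = dict(zip(names, point))
        results[point] = worker(**kwargs)
    return results

res = run_sweep(lambda a, b: a + b, {"a": [0, 1], "b": [10, 20]})
```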
- convert_vars_to_params(variable: param.Parameter, var_type: str)
Check that a variable is a subclass of param.Parameter
- Parameters:
variable (param.Parameter) – the variable to check
var_type (str) – a string representation of the variable type for better error messages
- Raises:
TypeError – the input variable is not a param.Parameter
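The check amounts to an isinstance test that raises a descriptive TypeError naming which argument was wrong. A sketch of the pattern, using a stand-in base class in place of param.Parameter (so the example is self-contained):

```python
class Parameter:
    # Stand-in for param.Parameter, used only for this sketch
    pass

def check_is_param(variable, var_type: str) -> None:
    # Raise a descriptive TypeError when a sweep variable is not a Parameter,
    # using var_type (e.g. "input_vars") to say which argument was wrong
    if not isinstance(variable, Parameter):
        raise TypeError(
            f"{var_type} must be a param.Parameter subclass, "
            f"got {type(variable).__name__}"
        )
```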
- cache_results(bench_res: bencher.results.bench_result.BenchResult, bench_cfg_hash: int) None
- load_history_cache(dataset: xarray.Dataset, bench_cfg_hash: int, clear_history: bool) xarray.Dataset
Load historical data from the cache if over_time=True
- Parameters:
dataset (xr.Dataset) – Freshly calculated data
bench_cfg_hash (int) – Hash of the input variables used to generate the data
clear_history (bool) – Optionally clear the history
- Returns:
historical data as an xr dataset
- Return type:
xr.Dataset
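The cache is keyed by a hash of the input variables, so repeated runs of the same sweep accumulate history while a changed sweep starts fresh. A sketch of that merge-or-clear logic, using plain lists to stand in for xarray Datasets (names are illustrative):

```python
# A dict standing in for the on-disk cache, keyed by the input-config hash
history_cache = {}

def load_history(fresh: list, cfg_hash: int, clear_history: bool) -> list:
    # Return fresh samples merged with any cached history for this config;
    # if clear_history is set, discard the old samples first
    if clear_history:
        history_cache.pop(cfg_hash, None)
    merged = history_cache.get(cfg_hash, []) + fresh
    history_cache[cfg_hash] = merged
    return merged
```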
- setup_dataset(bench_cfg: bencher.bench_cfg.BenchCfg, time_src: datetime.datetime | str) tuple[bencher.results.bench_result.BenchResult, List, List]
A function for generating an n-d xarray from a set of input variables in the BenchCfg
- Parameters:
bench_cfg (BenchCfg) – description of the benchmark parameters
time_src (datetime | str) – a representation of the sample time
- Returns:
the new BenchResult and lists derived from the input variables
- Return type:
tuple[BenchResult, List, List]
- define_const_inputs(const_vars) dict
- define_extra_vars(bench_cfg: bencher.bench_cfg.BenchCfg, repeats: int, time_src) list[bencher.variables.inputs.IntSweep]
Define extra meta vars that are stored in the n-d array but are not passed to the benchmarking function, such as number of repeats and the time the function was called.
- Parameters:
bench_cfg (BenchCfg) – description of the benchmark parameters
repeats (int) – the number of times to sample the function
time_src (datetime) – a representation of the sample time
- Returns:
a list of the extra meta variables, such as the repeat count and sample time
- Return type:
list[IntSweep]
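These meta variables become additional dimensions of the results array alongside the swept inputs. A sketch of how a repeat dimension and a timestamp dimension might be appended (the names and dict layout are illustrative, not bencher's internals):

```python
from datetime import datetime

def define_extra_dims(repeats: int, time_src=None) -> dict:
    # Meta dimensions stored in the n-d array but never passed to the worker:
    # one coordinate per repeat, plus a single timestamp coordinate
    time_src = time_src or datetime.now()
    return {
        "repeat": list(range(1, repeats + 1)),
        "over_time": [time_src.isoformat()],
    }

dims = define_extra_dims(repeats=3, time_src=datetime(2024, 1, 1))
```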
- calculate_benchmark_results(bench_cfg, time_src: datetime.datetime | str, bench_cfg_sample_hash, bench_run_cfg) bencher.results.bench_result.BenchResult
A function for generating an n-d xarray from a set of input variables in the BenchCfg
- store_results(job_result: bencher.job.JobFuture, bench_res: bencher.results.bench_result.BenchResult, worker_job: bencher.worker_job.WorkerJob, bench_run_cfg: bencher.bench_cfg.BenchRunCfg) None
- init_sample_cache(run_cfg: bencher.bench_cfg.BenchRunCfg)
- clear_tag_from_sample_cache(tag: str, run_cfg)
Clear all samples from the cache that match a tag
- Parameters:
tag (str) – clear samples with this tag
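Tag-based invalidation works by recording the tag with each cached sample, so a whole group of related samples can be dropped together. A minimal sketch of the pattern (cache layout is illustrative):

```python
# Each cache entry remembers the tag it was created under
sample_cache = {
    "hash_a": {"value": 1.0, "tag": "experiment1"},
    "hash_b": {"value": 2.0, "tag": "experiment2"},
    "hash_c": {"value": 3.0, "tag": "experiment1"},
}

def clear_tag(cache: dict, tag: str) -> dict:
    # Drop every cached sample whose tag matches
    return {k: v for k, v in cache.items() if v["tag"] != tag}

sample_cache = clear_tag(sample_cache, "experiment1")
```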
- add_metadata_to_dataset(bench_res: bencher.results.bench_result.BenchResult, input_var: bencher.variables.parametrised_sweep.ParametrizedSweep) None
Adds variable metadata to the xarray so that it can be used to automatically plot the dimension units etc.
- Parameters:
bench_res (BenchResult) –
input_var (ParametrizedSweep) – The variable to extract metadata from
- report_results(bench_cfg: bencher.bench_cfg.BenchCfg, print_xarray: bool, print_pandas: bool)
Optionally display the calculated benchmark data as pandas, xarray or a plot
- Parameters:
bench_cfg (BenchCfg) –
print_xarray (bool) –
print_pandas (bool) –
- clear_call_counts() None
Clear the worker and cache call counts, to help debug and assert caching is happening properly
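Comparing worker calls against cache hits is how one can assert that caching is working: after a repeated input, the worker count should stay flat while the cache count rises. A minimal sketch of the counting pattern (not bencher's internal implementation):

```python
class CountingCache:
    # Track worker calls vs cache hits so tests can assert that repeated
    # inputs are served from the cache rather than recomputed
    def __init__(self, worker):
        self.worker = worker
        self.cache = {}
        self.worker_call_count = 0
        self.cache_call_count = 0

    def __call__(self, x):
        if x in self.cache:
            self.cache_call_count += 1
            return self.cache[x]
        self.worker_call_count += 1
        self.cache[x] = self.worker(x)
        return self.cache[x]

    def clear_call_counts(self):
        # Reset the counters (but not the cached values) between assertions
        self.worker_call_count = 0
        self.cache_call_count = 0

cached = CountingCache(lambda x: x * x)
cached(2)
cached(2)  # served from the cache
```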
- get_result(index: int = -1) bencher.results.bench_result.BenchResult