bencher

Subpackages

Submodules

Package Contents

Classes

Bench

A server for displaying plots of benchmark results

BenchCfg

A class for storing the arguments used to configure a benchmark protocol. If the input variables are the same, the class returns the same hash and the same filename, so that historical data can be referenced and the generated plots are unique per benchmark

BenchRunCfg

A class to store options for how to run a benchmark parameter sweep

BenchRunner

A class to manage running multiple benchmarks in groups, or running the same benchmark but at multiple resolutions

BenchPlotServer

A server for displaying plots of benchmark results

VarRange

A VarRange represents the bounded and unbounded ranges of integers. This class is used to define filters for various variable types. For example, by defining cat_var = VarRange(0,0), calling matches(0) will return True, but any other integer will not match. You can also have unbounded ranges: for example, VarRange(2,None) will match 2,3,4... up to infinity. By default the bounds define an empty range (lower_bound=0, upper_bound=-1), so matches() returns False for every value. matches() only accepts zero and positive integers.

PlotFilter

A class for representing the types of results a plot is able to represent.

CachedParams

Parent class for all Sweep types that need a custom hash

BenchResult

Contains the results of the benchmark and has methods to cast the results to various datatypes and graphical representations

PanelResult

ReduceType

Generic enumeration.

HoloviewResult

BenchReport

A server for displaying plots of benchmark results

Executors

StrEnum is a Python enum.Enum that inherits from str. The default auto() behavior uses the member name as its value.

VideoWriter

Functions

hmap_canonical_input(→ tuple)

From a dictionary of kwargs, return a hashable representation (tuple) that is always the same for the same inputs, regardless of the order the arguments were passed, e.g. {x=1,y=2} -> (1,2) and {y=2,x=1} -> (1,2). This is used so that keyword arguments can be hashed and converted to the tuple keys that are used for holomaps

get_nearest_coords(→ dict)

Given an xarray dataset and kwargs of key-value pairs of coordinate values, return a dictionary of the nearest coordinate name-value pair that was found in the dataset

make_namedtuple(→ collections.namedtuple)

Convenience method for making a named tuple

gen_path(filename, folder, suffix)

gen_image_path(→ str)

gen_video_path(→ str)

lerp(value, input_low, input_high, output_low, output_high)

add_image(np_array[, name])

class bencher.Bench(bench_name: str = None, worker: Callable | bencher.variables.parametrised_sweep.ParametrizedSweep = None, worker_input_cfg: bencher.variables.parametrised_sweep.ParametrizedSweep = None, run_cfg=None, report=None)

Bases: bencher.bench_plot_server.BenchPlotServer

A server for displaying plots of benchmark results

set_worker(worker: Callable, worker_input_cfg: bencher.variables.parametrised_sweep.ParametrizedSweep = None) None

Set the benchmark worker function and optionally the type the worker expects

Parameters:
  • worker (Callable) – The benchmark worker function

  • worker_input_cfg (ParametrizedSweep, optional) – The input type the worker expects. Defaults to None.

sweep(input_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, result_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, const_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, time_src: datetime.datetime = None, description: str = None, post_description: str = None, pass_repeat: bool = False, tag: str = '', run_cfg: bencher.bench_cfg.BenchRunCfg = None, plot: bool = False) bencher.results.bench_result.BenchResult
sweep_sequential(title='', input_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, result_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, const_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, optimise_var: bencher.variables.parametrised_sweep.ParametrizedSweep = None, run_cfg: bencher.bench_cfg.BenchRunCfg = None, group_size: int = 1, iterations: int = 1, relationship_cb=None, plot=True) List[bencher.results.bench_result.BenchResult]
plot_sweep(title: str = None, input_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, result_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, const_vars: List[bencher.variables.parametrised_sweep.ParametrizedSweep] = None, time_src: datetime.datetime = None, description: str = None, post_description: str = None, pass_repeat: bool = False, tag: str = '', run_cfg: bencher.bench_cfg.BenchRunCfg = None, plot: bool = True) bencher.results.bench_result.BenchResult

The all-in-one benchmarking and plotting function.

Parameters:
  • input_vars (List[ParametrizedSweep], optional) – A list of input variables to sweep over. Defaults to None.

  • result_vars (List[ParametrizedSweep], optional) – A list of result variables to record. Defaults to None.

  • const_vars (List[ParametrizedSweep], optional) – A list of variables to keep constant with a specified value. Defaults to None.

  • title (str, optional) – The title of the benchmark. Defaults to None.

  • description (str, optional) – A description of the benchmark. Defaults to None.

  • post_description (str, optional) – A description that comes after the benchmark plots. Defaults to None.

  • time_src (datetime, optional) – Set a time that the result was generated. Defaults to datetime.now().

  • pass_repeat (bool, optional) – Set to True if you want the benchmark function to be passed the repeat number.

  • tag (str, optional) – Use tags to group different benchmarks together.

  • run_cfg (BenchRunCfg, optional) – A config describing how the benchmarks are run and plotted

Raises:

ValueError – If a result variable is not set

Returns:

A class containing the benchmark results along with all the data used to generate them

Return type:

BenchResult
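
A minimal usage sketch. The SquareSweep class is hypothetical, and the FloatSweep/ResultVar parameters and helper methods are assumed from the wider bencher API; they are not part of this listing:

import bencher as bch

class SquareSweep(bch.ParametrizedSweep):
    # hypothetical sweep: names, bounds, and helpers are illustrative only
    x = bch.FloatSweep(default=0.0, bounds=[0.0, 2.0], doc="input value")
    y = bch.ResultVar(doc="x squared")

    def __call__(self, **kwargs):
        self.update_params_from_kwargs(**kwargs)
        self.y = self.x * self.x
        return self.get_results_values_as_dict()

bench = bch.Bench("square_demo", SquareSweep())
res = bench.plot_sweep(
    title="x squared",
    input_vars=[SquareSweep.param.x],
    result_vars=[SquareSweep.param.y],
)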

convert_vars_to_params(variable: param.Parameter, var_type: str)

Check that a variable is a subclass of param.Parameter

Parameters:
  • variable (param.Parameter) – the variable to check

  • var_type (str) – a string representation of the variable type for better error messages

Raises:

TypeError – the input variable is not a param.Parameter

cache_results(bench_res: bencher.results.bench_result.BenchResult, bench_cfg_hash: int) None
load_history_cache(dataset: xarray.Dataset, bench_cfg_hash: int, clear_history: bool) xarray.Dataset

Load historical data from a cache if over_time=True

Parameters:
  • dataset (xr.Dataset) – freshly calculated data

  • bench_cfg_hash (int) – Hash of the input variables used to generate the data

  • clear_history (bool) – Optionally clear the history

Returns:

historical data as an xr dataset

Return type:

xr.Dataset

setup_dataset(bench_cfg: bencher.bench_cfg.BenchCfg, time_src: datetime.datetime | str) tuple[bencher.results.bench_result.BenchResult, List, List]

A function for generating an n-d xarray from a set of input variables in the BenchCfg

Parameters:
  • bench_cfg (BenchCfg) – description of the benchmark parameters

  • time_src (datetime | str) – a representation of the sample time

Returns:

the initialised benchmark result and its associated variable lists

Return type:

tuple[BenchResult, List, List]

define_const_inputs(const_vars) dict
define_extra_vars(bench_cfg: bencher.bench_cfg.BenchCfg, repeats: int, time_src) list[bencher.variables.inputs.IntSweep]

Define extra meta vars that are stored in the n-d array but are not passed to the benchmarking function, such as number of repeats and the time the function was called.

Parameters:
  • bench_cfg (BenchCfg) – description of the benchmark parameters

  • repeats (int) – the number of times to sample the function

  • time_src (datetime) – a representation of the sample time

Returns:

the list of extra meta variables

Return type:

list[IntSweep]

calculate_benchmark_results(bench_cfg, time_src: datetime.datetime | str, bench_cfg_sample_hash, bench_run_cfg) bencher.results.bench_result.BenchResult

A function for generating an n-d xarray from a set of input variables in the BenchCfg

Parameters:
  • bench_cfg (BenchCfg) – description of the benchmark parameters

  • time_src (datetime) – a representation of the sample time

Returns:

the results of the benchmark run

Return type:

BenchResult

store_results(job_result: bencher.job.JobFuture, bench_res: bencher.results.bench_result.BenchResult, worker_job: bencher.worker_job.WorkerJob, bench_run_cfg: bencher.bench_cfg.BenchRunCfg) None
init_sample_cache(run_cfg: bencher.bench_cfg.BenchRunCfg)
clear_tag_from_sample_cache(tag: str, run_cfg)

Clear all samples from the cache that match a tag

Parameters:

tag (str) – clear samples with this tag

add_metadata_to_dataset(bench_res: bencher.results.bench_result.BenchResult, input_var: bencher.variables.parametrised_sweep.ParametrizedSweep) None

Adds variable metadata to the xarray so that it can be used to automatically plot the dimension units etc.

Parameters:
  • bench_res (BenchResult) – the benchmark result whose dataset is annotated

  • input_var (ParametrizedSweep) – the input variable whose metadata is added

report_results(bench_cfg: bencher.bench_cfg.BenchCfg, print_xarray: bool, print_pandas: bool)

Optionally display the calculated benchmark data as pandas, xarray, or a plot

Parameters:
  • bench_cfg (BenchCfg) – the benchmark configuration and results to report

  • print_xarray (bool) – whether to print the xarray representation of the data

  • print_pandas (bool) – whether to print the pandas representation of the data

clear_call_counts() None

Clear the worker and cache call counts, to help debug and assert caching is happening properly

get_result(index: int = -1) bencher.results.bench_result.BenchResult
class bencher.BenchCfg(**params)

Bases: BenchRunCfg

A class for storing the arguments used to configure a benchmark protocol. If the input variables are the same, the class returns the same hash and the same filename, so that historical data can be referenced and the generated plots are unique per benchmark

input_vars
result_vars
const_vars
result_hmaps
meta_vars
all_vars
iv_time
iv_time_event
over_time: param.Boolean(False, doc='A parameter to control whether the function is sampled over time')
name: str
title: str
raise_duplicate_exception: str
bench_name: str
description: str
post_description: str
has_results: bool
pass_repeat: bool
tag: str
hash_value: str
hash_persistent(include_repeats) str

Override the default hash function because the default hash function does not return the same value for the same inputs; it references internal variables that are unique per instance of BenchCfg

Parameters:

include_repeats (bool) – by default, include repeats as part of the hash, except when using the sample cache

inputs_as_str() List[str]
describe_sweep(width: int = 800) panel.pane.Markdown

Produce a markdown summary of the sweep settings

describe_benchmark() str

Generate a string summary of the inputs and results from a BenchCfg

Returns:

summary of BenchCfg

Return type:

str

to_title(panel_name: str = None) panel.pane.Markdown
to_description(width: int = 800) panel.pane.Markdown
to_post_description(width: int = 800) panel.pane.Markdown
to_sweep_summary(name=None, description=True, describe_sweep=True, results_suffix=True, title: bool = True) panel.pane.Markdown

Produce panel output summarising the title, description and sweep settings

optuna_targets(as_var=False) List[str]
class bencher.BenchRunCfg(**params)

Bases: BenchPlotSrvCfg

A class to store options for how to run a benchmark parameter sweep

repeats: bool
over_time: bool
debug: bool
use_optuna: bool
summarise_constant_inputs
print_bench_inputs: bool
print_bench_results: bool
clear_history: bool
print_pandas: bool
print_xarray: bool
serve_pandas: bool
serve_pandas_flat: bool
serve_xarray: bool
auto_plot: bool
raise_duplicate_exception: bool
use_cache: bool
clear_cache: bool
use_sample_cache: bool
only_hash_tag: bool
clear_sample_cache: bool
overwrite_sample_cache: bool
only_plot: bool
use_holoview: bool
nightly: bool
time_event: str
headless: bool
render_plotly
level
run_tag
run_date
executor
plot_size
plot_width
plot_height
static from_cmd_line() BenchRunCfg

Create a BenchRunCfg by parsing command line arguments

Returns:

the parsed command line arguments as a BenchRunCfg

Return type:

BenchRunCfg
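
For example (the worker passed to Bench is hypothetical):

import bencher as bch

# build a run config from this process's command line arguments
run_cfg = bch.BenchRunCfg.from_cmd_line()
bench = bch.Bench("demo", worker, run_cfg=run_cfg)  # hypothetical worker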

class bencher.BenchRunner(name: str, bench_class=None, run_cfg: bencher.bench_cfg.BenchRunCfg = BenchRunCfg(), publisher: Callable = None)

A class to manage running multiple benchmarks in groups, or running the same benchmark but at multiple resolutions

static setup_run_cfg(run_cfg: bencher.bench_cfg.BenchRunCfg = BenchRunCfg(), level: int = 2, use_cache=True) bencher.bench_cfg.BenchRunCfg
static from_parametrized_sweep(class_instance: bencher.variables.parametrised_sweep.ParametrizedSweep, run_cfg: bencher.bench_cfg.BenchRunCfg = BenchRunCfg(), report: bencher.bench_report.BenchReport = BenchReport())
add_run(bench_fn: Benchable) None
add_bench(class_instance: bencher.variables.parametrised_sweep.ParametrizedSweep) None
run(min_level: int = 2, max_level: int = 6, level: int = None, repeats: int = 1, run_cfg: bencher.bench_cfg.BenchRunCfg = None, publish: bool = False, debug: bool = False, show: bool = False, save: bool = False, grouped: bool = True, use_cache: bool = True) List[bencher.bencher.Bench]

This function controls how a benchmark or a set of benchmarks is run. If you are only running a single benchmark it can be simpler to just run it directly, but if you are running several benchmarks together and want them to be sampled at different levels of fidelity or published together in a single report, this function enables that workflow. If you have an expensive function, it can be useful to view low fidelity results as they are computed while continuing to compute higher fidelity results that reuse previously computed values. The parameters min_level and max_level let you specify how to progressively increase the sampling resolution of the benchmark sweep. By default use_cache=True so that previous values are reused.

Parameters:
  • min_level (int, optional) – The minimum level to start sampling at. Defaults to 2.

  • max_level (int, optional) – The maximum level to sample up to. Defaults to 6.

  • level (int, optional) – If this is set, then min_level and max_level are not used and only a single level is sampled. Defaults to None.

  • repeats (int, optional) – The number of times to run the entire benchmarking procedure. Defaults to 1.

  • run_cfg (BenchRunCfg, optional) – benchmark run configuration. Defaults to None.

  • publish (bool, optional) – Publish the results to git, requires a publish url to be set up. Defaults to False.

  • debug (bool, optional) – Run in debug mode. Defaults to False.

  • show (bool, optional) – show the results in the local web browser. Defaults to False.

  • save (bool, optional) – save the results to disk in index.html. Defaults to False.

  • grouped (bool, optional) – Produce a single html page with all the benchmarks included. Defaults to True.

  • use_cache (bool, optional) – Use the sample cache to reuse previous results. Defaults to True.

Returns:

A list of Bench instances

Return type:

List[Bench]
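
A sketch of a grouped run, reusing the hypothetical SquareSweep class from the Bench example above:

import bencher as bch

runner = bch.BenchRunner("demo_runner", run_cfg=bch.BenchRunner.setup_run_cfg(level=2))
runner.add_bench(SquareSweep())  # hypothetical ParametrizedSweep subclass
# sample progressively from level 2 up to level 4, reusing cached results
results = runner.run(min_level=2, max_level=4, save=True)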

show_publish(report, show, publish, save, debug)
shutdown()
__del__() None
class bencher.BenchPlotServer

A server for displaying plots of benchmark results

plot_server(bench_name: str, plot_cfg: bencher.bench_cfg.BenchPlotSrvCfg = BenchPlotSrvCfg(), plots_instance=None) threading.Thread

Load previously calculated benchmark data from the database and start a plot server to display it

Parameters:
  • bench_name (str) – The name of the benchmark and output folder for the figures

  • plot_cfg (BenchPlotSrvCfg, optional) – Options for the plot server. Defaults to BenchPlotSrvCfg().

Raises:

FileNotFoundError – No data was found in the database to plot

load_data_from_cache(bench_name: str) Tuple[bencher.bench_cfg.BenchCfg, List[panel.panel]] | None

Load previously calculated benchmark data from the database

Parameters:

bench_name (str) – The name of the benchmark and output folder for the figures

Returns:

benchmark result data and any additional panels

Return type:

Tuple[BenchCfg, List[pn.panel]] | None

Raises:

FileNotFoundError – No data was found in the database to plot

serve(bench_name: str, plots_instance: List[panel.panel], port: int = None, show: bool = True) threading.Thread

Launch a panel server to view results

Parameters:
  • bench_name (str) – The name of the benchmark and output folder for the figures

  • plots_instance (List[pn.panel]) – list of panel objects to display

  • port (int) – use a fixed port to launch the server

class bencher.VarRange(lower_bound: int = 0, upper_bound: int = -1)

A VarRange represents the bounded and unbounded ranges of integers. This class is used to define filters for various variable types. For example, by defining cat_var = VarRange(0,0), calling matches(0) will return True, but any other integer will not match. You can also have unbounded ranges: for example, VarRange(2,None) will match 2,3,4... up to infinity. By default the bounds define an empty range (lower_bound=0, upper_bound=-1), so matches() returns False for every value. matches() only accepts zero and positive integers.

matches(val: int) bool

Checks that a value is within the variable range. lower_bound and upper_bound are inclusive (lower_bound <= val <= upper_bound)

Parameters:

val (int) – A positive integer representing a number of items

Returns:

True if the value is within the range, False otherwise.

Return type:

bool

Raises:

ValueError – If val < 0

matches_info(val, name)
__str__() str

Return str(self).
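
A sketch of the matching rules described above:

import bencher as bch

cat_var = bch.VarRange(0, 0)
print(cat_var.matches(0))    # True: 0 is within [0, 0]
print(cat_var.matches(1))    # False

unbounded = bch.VarRange(2, None)
print(unbounded.matches(10)) # True: no upper bound

default = bch.VarRange()     # (0, -1) is an empty range
print(default.matches(0))    # False: nothing matches by default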

class bencher.PlotFilter

A class for representing the types of results a plot is able to represent.

float_range: VarRange
cat_range: VarRange
vector_len: VarRange
result_vars: VarRange
panel_range: VarRange
repeats_range: VarRange
input_range: VarRange
matches_result(plt_cnt_cfg: bencher.plotting.plt_cnt_cfg.PltCntCfg, plot_name: str) PlotMatchesResult

Checks if the result data signature matches the type of data the plot is able to display.

bencher.hmap_canonical_input(dic: dict) tuple

From a dictionary of kwargs, return a hashable representation (tuple) that is always the same for the same inputs, regardless of the order the arguments were passed, e.g. {x=1,y=2} -> (1,2) and {y=2,x=1} -> (1,2). This is used so that keyword arguments can be hashed and converted to the tuple keys that are used for holomaps

Parameters:

dic (dict) – dictionary with keyword arguments and values in any order

Returns:

values of the dictionary always in the same order and hashable

Return type:

tuple
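
A sketch of the ordering guarantee from the docstring:

import bencher as bch

# both orderings canonicalise to the same hashable tuple
print(bch.hmap_canonical_input(dict(x=1, y=2)))  # (1, 2)
print(bch.hmap_canonical_input(dict(y=2, x=1)))  # (1, 2)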

bencher.get_nearest_coords(dataset: xarray.Dataset, collapse_list=False, **kwargs) dict

Given an xarray dataset and kwargs of key-value pairs of coordinate values, return a dictionary of the nearest coordinate name-value pair that was found in the dataset

Parameters:

dataset (xr.Dataset) – the dataset to search

Returns:

nearest coordinate name-value pair that matches the input coordinate name-value pairs.

Return type:

dict
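
A minimal sketch, assuming the documented nearest-match behaviour; the dataset and the expected output are illustrative:

import numpy as np
import xarray as xr
import bencher as bch

ds = xr.Dataset({"val": ("x", np.zeros(3))}, coords={"x": [0.0, 1.0, 2.0]})
# expected: {"x": 1.0}, the coordinate value nearest the query 0.8
print(bch.get_nearest_coords(ds, x=0.8))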

bencher.make_namedtuple(class_name: str, **fields) collections.namedtuple

Convenience method for making a named tuple

Parameters:

class_name (str) – name of the named tuple

Returns:

a named tuple with the fields as values

Return type:

namedtuple
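
For example:

import bencher as bch

point = bch.make_namedtuple("Point", x=1, y=2)
print(point.x, point.y)  # 1 2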

bencher.gen_path(filename, folder, suffix)
bencher.gen_image_path(image_name: str = 'img', filetype='.png') str
bencher.gen_video_path(video_name: str = 'vid', extension: str = '.webm') str
bencher.lerp(value, input_low: float, input_high: float, output_low: float, output_high: float)
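
lerp is not documented in this listing; assuming it performs the standard linear remap of value from [input_low, input_high] to [output_low, output_high]:

import bencher as bch

# remap 5 from [0, 10] to [0, 100]; expected result: 50.0
print(bch.lerp(5, 0, 10, 0, 100))
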
class bencher.CachedParams(clear_cache=True, cache_name='fcache', **params)

Bases: bencher.variables.parametrised_sweep.ParametrizedSweep

Parent class for all Sweep types that need a custom hash

kwargs_to_hash_key(**kwargs)
in_cache(**kwargs)
cache_wrap(func, **kwargs)
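
A sketch under assumed semantics: cache_wrap(func, **kwargs) is taken to call func(**kwargs) and memoise the result, with in_cache(**kwargs) reporting whether a matching entry is stored:

import bencher as bch

class Expensive(bch.CachedParams):
    def compute(self, x: int) -> int:
        return x * x  # stand-in for a slow computation

ep = Expensive(cache_name="demo_cache")
print(ep.cache_wrap(ep.compute, x=3))  # computed on first call, then cached
print(ep.in_cache(x=3))                # assumed to report True after the call above
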
class bencher.BenchResult(bench_cfg)

Bases: bencher.results.plotly_result.PlotlyResult, bencher.results.holoview_result.HoloviewResult, bencher.results.video_summary.VideoSummaryResult

Contains the results of the benchmark and has methods to cast the results to various datatypes and graphical representations

static default_plot_callbacks()
static plotly_callbacks()
to_auto(plot_list: List[callable] = None, remove_plots: List[callable] = None, **kwargs) List[panel.panel]
to_auto_plots(**kwargs) List[panel.panel]

Given the dataset result of a benchmark run, automatically deduce how to plot the data based on the types of variables that were sampled

Parameters:

bench_cfg (BenchCfg) – Information on how the benchmark was sampled and the resulting data

Returns:

A panel containing plot results

Return type:

pn.pane

class bencher.PanelResult(bench_cfg: bencher.bench_cfg.BenchCfg)

Bases: bencher.results.bench_result_base.BenchResultBase

to_video(result_var: param.Parameter = None, **kwargs)
zero_dim_da_to_val(da_ds: xarray.DataArray | xarray.Dataset) Any
ds_to_container(dataset: xarray.Dataset, result_var: param.Parameter, container, **kwargs) Any
to_panes(result_var: param.Parameter = None, target_dimension: int = 0, container=None, **kwargs) panel.pane.panel | None
class bencher.ReduceType

Bases: enum.Enum

Generic enumeration.

Derive from this class to define new enumerations.

AUTO
SQUEEZE
REDUCE
NONE
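
A sketch of selecting a reduction when casting a result; REDUCE is assumed to collapse the repeat dimension, and res stands for a BenchResult from an earlier sweep:

import holoviews as hv
import bencher as bch

# render the sweep as a curve, collapsing repeats (assumed REDUCE semantics)
curve = res.to(hv.Curve, reduce=bch.ReduceType.REDUCE)
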
class bencher.HoloviewResult(bench_cfg: bencher.bench_cfg.BenchCfg)

Bases: bencher.results.panel_result.PanelResult

static set_default_opts(width=600, height=600)
to(hv_type: holoviews.Chart, reduce: bencher.results.bench_result_base.ReduceType = ReduceType.AUTO, **kwargs) holoviews.Chart
overlay_plots(plot_callback: callable) holoviews.Overlay | None
layout_plots(plot_callback: callable)
time_widget(title)
to_bar(result_var: param.Parameter = None, **kwargs) panel.panel | None
to_bar_ds(dataset: xarray.Dataset, result_var: param.Parameter = None, **kwargs)
hv_container_ds(dataset: xarray.Dataset, result_var: param.Parameter, container: holoviews.Chart = None, **kwargs)
to_hv_container(container: panel.pane.panel, reduce_type=ReduceType.AUTO, target_dimension: int = 2, result_var: param.Parameter = None, result_types=ResultVar, **kwargs) panel.pane.panel | None
to_line(result_var: param.Parameter = None, **kwargs) panel.panel | None
to_line_ds(dataset: xarray.Dataset, result_var: param.Parameter, **kwargs)
to_curve(result_var: param.Parameter = None, **kwargs)
to_curve_ds(dataset: xarray.Dataset, result_var: param.Parameter, **kwargs) holoviews.Curve | None
to_heatmap(result_var: param.Parameter = None, tap_var=None, tap_container=None, target_dimension=2, **kwargs) panel.panel | None
to_heatmap_ds(dataset: xarray.Dataset, result_var: param.Parameter, **kwargs) holoviews.HeatMap | None
to_heatmap_container_tap_ds(dataset: xarray.Dataset, result_var: param.Parameter, result_var_plot: param.Parameter, container: panel.pane.panel = pn.pane.panel, **kwargs) panel.Row
to_error_bar() holoviews.Bars
to_points(reduce: bencher.results.bench_result_base.ReduceType = ReduceType.AUTO) holoviews.Points
to_scatter(**kwargs) panel.panel | None
to_scatter_jitter(result_var: param.Parameter = None, **kwargs) List[holoviews.Scatter]
to_scatter_jitter_single(result_var: param.Parameter, **kwargs) holoviews.Scatter | None
to_heatmap_single(result_var: param.Parameter, reduce: bencher.results.bench_result_base.ReduceType = ReduceType.AUTO, **kwargs) holoviews.HeatMap
to_heatmap_tap(result_var: param.Parameter, reduce: bencher.results.bench_result_base.ReduceType = ReduceType.AUTO, width=800, height=800, **kwargs)
to_nd_layout(hmap_name: str) holoviews.NdLayout
to_holomap(name: str = None) holoviews.HoloMap
to_holomap_list(hmap_names: List[str] = None) holoviews.HoloMap
get_nearest_holomap(name: str = None, **kwargs)
to_dynamic_map(name: str = None) holoviews.DynamicMap

Use the values stored in the holomap dictionary to populate a dynamic map. Note that this is much faster than constructing a HoloMap, as the values are calculated on the fly

to_grid(inputs=None)
to_table()
to_surface(result_var: param.Parameter = None, **kwargs) panel.panel | None
to_surface_ds(dataset: xarray.Dataset, result_var: param.Parameter, alpha: float = 0.3, **kwargs) panel.panel | None

Given a BenchCfg, generate a 2D surface plot

Parameters:

result_var (Parameter) – result variable to plot

Returns:

A 2d surface plot as a holoview in a pane

Return type:

pn.pane.holoview

class bencher.BenchReport(bench_name: str = None)

Bases: bencher.bench_plot_server.BenchPlotServer

A server for displaying plots of benchmark results

append_title(title: str, new_tab: bool = True)
append_markdown(markdown: str, name=None, width=800, **kwargs) panel.pane.Markdown
append(pane: panel.panel, name: str = None) None
append_col(pane: panel.panel, name: str = None) None
append_result(bench_res: bencher.results.bench_result.BenchResult) None
append_tab(pane: panel.panel, name: str = None) None
save_index(directory='', filename='index.html') pathlib.Path

Saves the result to index.html in the root folder so that it can be displayed by GitHub Pages.

Returns:

save path

Return type:

Path

save(directory: str | pathlib.Path = 'cachedir', filename: str = None, in_html_folder: bool = True, **kwargs) pathlib.Path

Save the result to an html file. Note that dynamic content will not work. By passing save(__file__) the html output will be saved in the same folder as the source code, in an html subfolder.

Parameters:
  • directory (str | Path, optional) – base folder to save to. Defaults to “cachedir” which should be ignored by git.

  • filename (str, optional) – The name of the html file. Defaults to the name of the benchmark

  • in_html_folder (bool, optional) – Put the saved files in a html subfolder to help keep the results separate from source code. Defaults to True.

Returns:

the save path

Return type:

Path
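
For example (bench.report is assumed to be a BenchReport attached to a Bench instance):

# save next to the calling source file, in an html subfolder
bench.report.save(__file__)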

show(run_cfg: bencher.bench_cfg.BenchRunCfg = None) threading.Thread

Launches a webserver with plots of the benchmark results. This call blocks.

Parameters:

run_cfg (BenchRunCfg, optional) – Options for the webserver, such as the port. Defaults to None.

publish(remote_callback: Callable, branch_name: str = None, debug: bool = False) str

Publish the results as an html file by committing it to the bench_results branch in the current repo. If you have set up your repo with GitHub Pages or equivalent then the html file will be served as a viewable webpage. This is an example of a callable to publish on GitHub Pages:

def publish_args(branch_name) -> Tuple[str, str]:
    return (
        "https://github.com/dyson-ai/bencher.git",
        f"https://github.com/dyson-ai/bencher/blob/{branch_name}")
Parameters:

remote_callback (Callable) – A function that returns a tuple of the publishing urls. It must follow the signature def publish_args(branch_name) -> Tuple[str, str]. The first url is the git repo name, the second url needs to match the format for viewable html pages on your git provider. The second url can use the argument branch_name to point to the report on a specified branch.

Returns:

the url of the published report

Return type:

str
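
A hypothetical call tying the callback above to a report produced by a sweep; the branch name is illustrative:

# bench.report is assumed to be a BenchReport attached to a Bench instance
url = bench.report.publish(remote_callback=publish_args, branch_name="bench_results")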

class bencher.Executors

Bases: strenum.StrEnum

StrEnum is a Python enum.Enum that inherits from str. The default auto() behavior uses the member name as its value.

Example usage:

class Example(StrEnum):
    UPPER_CASE = auto()
    lower_case = auto()
    MixedCase = auto()

assert Example.UPPER_CASE == "UPPER_CASE"
assert Example.lower_case == "lower_case"
assert Example.MixedCase == "MixedCase"
SERIAL
MULTIPROCESSING
SCOOP
static factory(provider: Executors)
class bencher.VideoWriter(filename: str = 'vid')
append(img)
write(bitrate: int = 1500) str
create_label(label, width, height=20)
append_file(filepath, label=None)
write_png(bitrate: int = 1500, target_duration: float = 10.0, frame_time=None)
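
A usage sketch; the frame format (a height x width x 3 uint8 array) is an assumption, with append and write taken at face value from the signatures above:

import numpy as np
import bencher as bch

vw = bch.VideoWriter("demo_vid")
for i in range(10):
    # simple brightness ramp as stand-in frames
    frame = np.full((64, 64, 3), i * 25, dtype=np.uint8)
    vw.append(frame)
print(vw.write())  # returns the path of the written video file
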
bencher.add_image(np_array: numpy.ndarray, name: str = 'img')