bencher.bench_runner

Module Contents

Classes

Benchable

Base class for protocol classes.

BenchRunner

A class to manage running multiple benchmarks in groups, or running the same benchmark at multiple resolutions.

class bencher.bench_runner.Benchable

Bases: Protocol

Base class for protocol classes.

Protocol classes are defined as:

class Proto(Protocol):
    def meth(self) -> int:
        ...

Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing), for example:

class C:
    def meth(self) -> int:
        return 0

def func(x: Proto) -> int:
    return x.meth()

func(C())  # Passes static type check

See PEP 544 for details. Protocol classes decorated with @typing.runtime_checkable act as simple-minded runtime protocols that check only the presence of given attributes, ignoring their type signatures. Protocol classes can be generic; they are defined as:

class GenProto(Protocol[T]):
    def meth(self) -> T:
        ...
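Because Benchable is a Protocol, any class with a matching bench method satisfies it structurally; no inheritance is required. The sketch below illustrates this with a hypothetical HasMeth protocol (not part of bencher) using @typing.runtime_checkable, which makes isinstance() check for attribute presence:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class HasMeth(Protocol):
    """Illustrative protocol; stands in for a structural contract like Benchable."""

    def meth(self) -> int: ...


class C:
    # Never subclasses HasMeth, but matches it structurally.
    def meth(self) -> int:
        return 0


# runtime_checkable only tests that the attribute exists, not its signature.
print(isinstance(C(), HasMeth))       # True
print(isinstance(object(), HasMeth))  # False
```

Note that the runtime check is shallow: a class whose meth returned a string would still pass isinstance(), which is why Protocol is primarily a static-typing tool.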
abstract bench(run_cfg: bencher.bench_cfg.BenchRunCfg, report: bencher.bench_report.BenchReport) -> bencher.bench_cfg.BenchCfg
class bencher.bench_runner.BenchRunner(name: str, bench_class=None, run_cfg: bencher.bench_cfg.BenchRunCfg = BenchRunCfg(), publisher: Callable = None)

A class to manage running multiple benchmarks in groups, or running the same benchmark at multiple resolutions.

static setup_run_cfg(run_cfg: bencher.bench_cfg.BenchRunCfg = BenchRunCfg(), level: int = 2, use_cache=True) -> bencher.bench_cfg.BenchRunCfg
static from_parametrized_sweep(class_instance: bencher.variables.parametrised_sweep.ParametrizedSweep, run_cfg: bencher.bench_cfg.BenchRunCfg = BenchRunCfg(), report: bencher.bench_report.BenchReport = BenchReport())
add_run(bench_fn: Benchable) -> None
add_bench(class_instance: bencher.variables.parametrised_sweep.ParametrizedSweep) -> None
run(min_level: int = 2, max_level: int = 6, level: int = None, repeats: int = 1, run_cfg: bencher.bench_cfg.BenchRunCfg = None, publish: bool = False, debug: bool = False, show: bool = False, save: bool = False, grouped: bool = True, use_cache: bool = True) -> List[bencher.bencher.Bench]

This method controls how a benchmark, or a set of benchmarks, is run. If you are only running a single benchmark it can be simpler to run it directly, but if you are running several benchmarks together and want them sampled at different levels of fidelity, or published together in a single report, this method enables that workflow. If you have an expensive function, it can be useful to view low-fidelity results as they are computed while higher-fidelity results continue to be computed, reusing previously computed values. The parameters min_level and max_level specify how to progressively increase the sampling resolution of the benchmark sweep. By default use_cache=True, so previous values are reused.

Parameters:
  • min_level (int, optional) – The minimum level to start sampling at. Defaults to 2.

  • max_level (int, optional) – The maximum level to sample up to. Defaults to 6.

  • level (int, optional) – If this is set, then min_level and max_level are not used and only a single level is sampled. Defaults to None.

  • repeats (int, optional) – The number of times to run the entire benchmarking procedure. Defaults to 1.

  • run_cfg (BenchRunCfg, optional) – benchmark run configuration. Defaults to None.

  • publish (bool, optional) – Publish the results to git; requires a publish URL to be set up. Defaults to False.

  • debug (bool, optional) – Defaults to False.

  • show (bool, optional) – Show the results in the local web browser. Defaults to False.

  • save (bool, optional) – Save the results to disk as index.html. Defaults to False.

  • grouped (bool, optional) – Produce a single HTML page with all the benchmarks included. Defaults to True.

  • use_cache (bool, optional) – Use the sample cache to reuse previous results. Defaults to True.

Returns:

A list of Bench instances

Return type:

List[Bench]
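The progressive min_level/max_level sweep and cache reuse described above can be sketched in plain Python. This is an illustrative stand-in, not bencher's actual implementation; run_levels and its level semantics are hypothetical:

```python
def run_levels(bench_fn, min_level=2, max_level=6, level=None):
    """Sketch of run()'s level handling: either sample one explicit level,
    or sweep from min_level up to max_level, reusing cached results."""
    # If an explicit level is given, min_level/max_level are ignored.
    levels = [level] if level is not None else list(range(min_level, max_level + 1))
    cache = {}  # stands in for the sample cache enabled by use_cache=True
    results = []
    for lvl in levels:
        if lvl not in cache:
            # Only compute values not already cached; in bencher this lets
            # low-fidelity results be reused while higher levels are computed.
            cache[lvl] = bench_fn(lvl)
        results.append(cache[lvl])
    return results


# Sweep levels 2..6 of a toy benchmark function.
print(run_levels(lambda lvl: 2 ** lvl))      # [4, 8, 16, 32, 64]
# A single explicit level skips the sweep entirely.
print(run_levels(lambda lvl: lvl, level=3))  # [3]
```

The key design point mirrored here is that each level's results are keyed in a cache, so rerunning the sweep at a higher max_level only pays for the new, higher-fidelity samples.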

show_publish(report, show, publish, save, debug)
shutdown()
__del__() -> None