First Graphs
In week 2 of CodeSpeed2, the rewrite of CodeSpeed for pypy and the PSF, we got our first graphs up.

Malte & Matt, just having coaxed a benchmark run into the first bar chart
By the end of the week, we had the first cut on three graphs.
Revision timings
Normalised data
History graph
These are drawn from benchmark runner output data submitted via the Cyclone endpoint, persisted to MongoDB, fetched from the API, held in an Angular.js object and rendered with d3.js.
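The upload leg of that pipeline can be sketched in miniature. This is a hypothetical illustration only: an in-memory list stands in for the MongoDB collection, and the handler function, field names and payload shape are assumptions, not the real Cyclone handler or schema.

```python
# Hypothetical sketch of the upload path: runner output arrives as JSON
# and is persisted. A plain list stands in for the MongoDB collection;
# all names here are illustrative, not the real CodeSpeed2 schema.
import json

collection = []  # stand-in for a MongoDB collection

def handle_upload(body):
    """Parse a POSTed JSON document, persist it, and return its id."""
    doc = json.loads(body)
    if "results" not in doc:
        raise ValueError("runner output must contain results")
    doc["_id"] = len(collection)  # MongoDB would assign an ObjectId here
    collection.append(doc)
    return doc["_id"]

payload = json.dumps({"revision": "abc123",
                      "results": [{"benchmark": "ai", "time": 0.42}]})
doc_id = handle_upload(payload)  # document is now queryable for the graphs
```

In the real system the same idea sits behind a Cyclone request handler, with the retrieval endpoints reading the persisted documents back out for the Angular.js/d3.js front end.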
API
The API is served up via Cyclone, and so far we have the endpoints below.
Upload
/api/raw_benchmark_runner_baseline_output
/api/raw_benchmark_runner_output
Retrieval
/api/c_python_normalized_benchmarks
/api/benchmark_history
/api/benchmark_result
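To illustrate what the normalised endpoint likely computes: each pypy benchmark timing expressed relative to the CPython baseline for the same benchmark. The function and the field shapes are assumptions for illustration, not the actual API response format.

```python
# Illustrative sketch of CPython-normalised timings: each pypy time is
# divided by the CPython baseline for the same benchmark, so values
# below 1.0 mean pypy is faster. Data and names are assumptions.
def normalize(pypy_times, cpython_times):
    """Return pypy timings as a fraction of the CPython baseline."""
    return {name: pypy_times[name] / cpython_times[name]
            for name in pypy_times if name in cpython_times}

pypy = {"ai": 0.5, "django": 2.0}
cpython = {"ai": 2.0, "django": 4.0}
ratios = normalize(pypy, cpython)  # e.g. "ai" comes out at 0.25
```

Normalising this way lets the bar charts compare benchmarks of very different absolute durations on one axis.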
For usage details, see the tests in
/src/test
in the repository bitbucket.org/pypy/codespeed2.
Taxonomy
Also this week we had some meaningful discussion about what terms mean in this context, best summarised in the following sentence:
The runner runs Benchmarks (which are Modules, e.g. ai.py) and produces Benchmark_Runner_Output data, which contains [one or more] Benchmark_Results.

A key observation: the system under test is pypy, and the modules (e.g. ai) are the benchmarks being run against it.
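The taxonomy above can be written down as data structures. This is a sketch whose class and field names simply mirror the terms in the sentence; the exact shapes used in CodeSpeed2 may differ.

```python
# Sketch of the taxonomy as data structures: a Benchmark_Runner_Output
# holds one or more Benchmark_Results, each naming the benchmark module
# that was run against pypy. Field shapes are assumptions.
from dataclasses import dataclass, field

@dataclass
class BenchmarkResult:
    benchmark: str   # module name, e.g. "ai"
    time: float      # measured timing in seconds

@dataclass
class BenchmarkRunnerOutput:
    revision: str                                 # pypy revision under test
    results: list = field(default_factory=list)   # one or more BenchmarkResults

out = BenchmarkRunnerOutput(revision="abc123")
out.results.append(BenchmarkResult(benchmark="ai", time=0.42))
```

Keeping the distinction explicit in the types, the benchmark is the module, the system under test is pypy, avoids the ambiguity the discussion was about.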