testing.benchmark() function

testing.benchmark() executes a test case without comparing the test output with the expected test output. This lets you accurately benchmark a test case without the added overhead of output comparison that occurs in testing.run().

Function type signature
(case: () => {A with input: B, fn: (<-: B) => C}) => C
For more information, see Function type signatures.

Parameters

case
(Required) Test case to benchmark.


Define and benchmark a test case

The following script defines a test case for the sum() function and enables the query and operator profilers to measure query performance.

import "csv"
import "testing"
import "profiler"

option profiler.enabledProfilers = ["query", "operator"]

// Placeholder annotated CSV data for illustration; substitute your own
// input and expected output.
inData =
    "
#datatype,string,long,dateTime:RFC3339,string,double
#group,false,false,false,true,false
#default,_result,,,,
,result,table,_time,_field,_value
,,0,2021-01-01T00:00:00Z,used_percent,65.2
,,0,2021-01-02T00:00:00Z,used_percent,66.4
"

outData =
    "
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,string,double
#group,false,false,true,true,true,false
#default,_result,,,,,
,result,table,_start,_stop,_field,_value
,,0,2021-01-01T00:00:00Z,2021-01-03T01:00:00Z,used_percent,131.6
"

t_sum = (table=<-) =>
    table
        |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-03T01:00:00Z)
        |> sum()

test _sum = () => ({input: csv.from(csv: inData), want: csv.from(csv: outData), fn: t_sum})

testing.benchmark(case: _sum)
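Because testing.benchmark() skips output comparison, the same case can also be executed with testing.run(), which does compare the produced output against want. A minimal sketch, assuming the _sum test case defined above:

```flux
import "testing"

// Run the test case and compare its output to `want`
// (unlike testing.benchmark(), which skips the comparison).
testing.run(case: _sum)
```

Use testing.run() while developing a test to confirm correctness, then switch to testing.benchmark() when measuring performance.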
