Finch Docs


Data limits control how much, or how fast, Finch reads or writes data. By default, Finch has no limits, but most benchmarks use at least two: row count and runtime. Finch supports these limits and others.

Multiple limits can be specified. Execution stops (or, for throughput limits, is delayed) when any limit is reached.


The -- rows statement modifier stops the client after it inserts the configured number of rows. Combine it with multiple clients and CSV expansion to bulk insert rows.

Multi-client, multi-row inserts can exceed the configured row count by one set of multi-rows because count tracking is asynchronous. If you need exactly -- rows rows, use a single client, or submit a PR to improve this feature.
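For example, here is a hedged sketch of an insert trx file. It assumes the -- rows modifier is written as a comment line immediately before the statement and that /*!csv N (...)*/ is the CSV-expansion syntax; the table and generator names are hypothetical:

```sql
-- rows: 1000000
INSERT INTO t1 (id, n) VALUES /*!csv 1000 (NULL, @n)*/
```

Each execution of this INSERT writes 1,000 rows, so the client stops after roughly 1,000,000 rows; with multiple clients it may overshoot by a few row sets, per the caveat above.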

You can indirectly limit data access with limited iterations:

For example, iter = 100 for a single-row UPDATE limits the client to 100 updates. To update many rows quickly, use multiple clients with iter-clients, which applies a shared limit across all clients.
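As a hedged illustration of a shared iteration limit, the stage file might look like the sketch below. The layout and key names (workload, clients, iter-clients, trx) are assumptions based on this description, not copied from a working config:

```yaml
stage:
  workload:
    - clients: 8
      iter-clients: 100000   # 100,000 iterations shared across all 8 clients
      trx:
        - update-one-row.sql
```

Here each client runs the single-row UPDATE until the group as a whole has executed 100,000 iterations.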


Row counts are common but arbitrary. A thousand rows of a huge table with many secondary indexes and blob columns is significantly different from one million rows of a table with a few integer columns. How much RAM the system has (and MySQL is configured to use) is another factor: even 10 million rows might fit in RAM.

Depending on the benchmark, it might be better to generate certain data sizes, rather than row counts:

These statement modifiers are usually used in DDL stages. Combine with a parallel load and you can load terabytes of data relatively quickly. (For benchmarking, “relatively quickly” means hours and days for terabytes of data.)
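For example, a sketch of a size-limited insert, assuming a size-based statement modifier written like -- rows (the modifier name table-size is an assumption here and may differ from the actual docs):

```sql
-- table-size: 500GB
INSERT INTO t1 VALUES /*!csv 1000 (NULL, @d)*/
```

The client would keep bulk-inserting until the table reaches roughly 500 GB, regardless of how many rows that takes.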

There are currently no size-based data limits built into any data generators, but it would be possible to implement for both reading and writing data.


Finch has QPS (queries per second) and TPS (transactions per second) throughput limits.

Use stage.qps to ensure that Finch never exceeds a certain QPS regardless of the workload or other limits. This is a top-level limit; since limits are combined with logical OR, even a higher QPS limit specified in the workload is still capped by stage.qps.

The workload-specific QPS limits are:
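As a hedged illustration of how a top-level cap combines with a workload limit (the key names qps and qps-clients, and the stage layout, are assumptions; check the workload docs for the exact spelling and scopes):

```yaml
stage:
  qps: 10000              # top-level ceiling for the entire stage
  workload:
    - clients: 16
      qps-clients: 8000   # shared by these 16 clients; still capped by stage.qps
      trx:
        - reads.sql
```

Because limits combine with logical OR, the 16 clients throttle at 8,000 QPS, and the stage as a whole can never exceed 10,000 QPS.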

Finch checks every QPS limit before executing each query. Delay for QPS throttling is not measured or reported.

The TPS limits are the same as the QPS limits, just “tps” instead of “qps”.

Finch checks all TPS limits on explicit BEGIN statements, and the TPS statistic is measured on explicit COMMIT statements.
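To make the check and measurement points concrete, here is a sketch of a transaction in a trx file; the table, columns, and data generators are hypothetical, and the comments mark where, per the text above, TPS limiting and measurement happen:

```sql
BEGIN  -- all TPS limits checked here

UPDATE accounts SET balance = balance - @amt WHERE id = @from_id

UPDATE accounts SET balance = balance + @amt WHERE id = @to_id

COMMIT  -- TPS statistic measured here
```

A transaction without explicit BEGIN and COMMIT statements would be neither throttled nor counted by the TPS machinery.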


Finch automatically sets iter = 1 for a client group with any DDL in any assigned trx. See Benchmark / Workload / Auto-allocation.