BonsaiDb Commerce Benchmark

This benchmark suite is designed to simulate the types of loads that an ecommerce application might see under different levels of concurrency and traffic patterns. As with all benchmark suites, the results should not be taken as proof that any database will or will not perform well for any particular application. Each application's needs differ greatly, and this benchmark is designed to help BonsaiDb's developers notice areas for improvement.

Comparison of all backends across all suites
| Dataset Size | Traffic Pattern | Concurrency | bonsaidb-local | bonsaidb-local+lz4 | bonsaidb-quic | bonsaidb-ws | postgresql | Report |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| small | balanced | 1 | 3.585s | 3.599s | 7.016s | 5.886s | 3.663s | Full Report |
| small | balanced | 4 | 8.117s | 9.731s | 10.48s | 9.370s | 6.457s | Full Report |
| small | balanced | 8 | 16.34s | 16.53s | 18.58s | 17.89s | 10.14s | Full Report |
| small | readheavy | 1 | 1.728s | 1.789s | 5.195s | 3.856s | 2.634s | Full Report |
| small | readheavy | 4 | 4.068s | 3.931s | 6.252s | 5.316s | 4.400s | Full Report |
| small | readheavy | 8 | 8.625s | 8.191s | 10.08s | 9.399s | 6.806s | Full Report |
| small | writeheavy | 1 | 20.47s | 19.54s | 27.41s | 25.11s | 15.80s | Full Report |
| small | writeheavy | 4 | 48.89s | 40.95s | 44.94s | 43.31s | 34.14s | Full Report |
| small | writeheavy | 8 | 83.50s | 78.97s | 87.96s | 84.80s | 67.92s | Full Report |
| medium | balanced | 1 | 3.550s | 3.457s | 7.832s | 6.423s | 7.714s | Full Report |
| medium | balanced | 4 | 9.012s | 9.729s | 10.75s | 10.34s | 10.85s | Full Report |
| medium | balanced | 8 | 16.42s | 16.14s | 18.26s | 17.14s | 15.39s | Full Report |
| medium | readheavy | 1 | 1.829s | 1.855s | 4.988s | 3.917s | 6.591s | Full Report |
| medium | readheavy | 4 | 3.985s | 4.056s | 6.345s | 5.504s | 8.395s | Full Report |
| medium | readheavy | 8 | 9.129s | 7.173s | 9.092s | 8.986s | 10.96s | Full Report |
| medium | writeheavy | 1 | 22.18s | 19.82s | 29.22s | 26.45s | 22.53s | Full Report |
| medium | writeheavy | 4 | 53.46s | 47.33s | 51.84s | 51.30s | 50.79s | Full Report |
| medium | writeheavy | 8 | 80.84s | 75.41s | 84.84s | 89.36s | 94.02s | Full Report |
| large | balanced | 1 | 5.125s | 4.729s | 8.884s | 7.305s | 29.75s | Full Report |
| large | balanced | 4 | 9.414s | 9.683s | 11.76s | 10.94s | 32.78s | Full Report |
| large | balanced | 8 | 23.30s | 21.90s | 20.83s | 19.88s | 37.47s | Full Report |
| large | readheavy | 1 | 3.171s | 3.307s | 6.333s | 5.399s | 28.41s | Full Report |
| large | readheavy | 4 | 5.420s | 4.822s | 7.392s | 6.540s | 30.48s | Full Report |
| large | readheavy | 8 | 8.509s | 8.847s | 11.44s | 9.980s | 33.67s | Full Report |
| large | writeheavy | 1 | 21.78s | 22.08s | 28.68s | 25.77s | 48.35s | Full Report |
| large | writeheavy | 4 | 50.76s | 52.10s | 51.16s | 52.77s | 88.48s | Full Report |
| large | writeheavy | 8 | 96.15s | 88.80s | 88.39s | 91.73s | 144.6s | Full Report |

Dataset Sizes

The three dataset sizes are named "small", "medium", and "large". All of the databases being benchmarked can handle much larger datasets than "large", but it is impractical at this time to run larger benchmarks on a regular basis. Each run's individual page shows the breakdown of the initial dataset by type.

Traffic Patterns

This suite uses a probability-based system to generate plans for agents to process concurrently. These plans operate in a "funnel" pattern of searching, adding to cart, checking out, and reviewing the purchased items. Each stage in this funnel is assigned a probability, and these probabilities are tweaked to simulate a read-heavy traffic pattern that performs more searches than purchases, a write-heavy traffic pattern where most plans result in purchasing and reviewing the products, and a balanced traffic pattern that is meant to simulate a moderate amount of write traffic.
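
The Rust sketch below illustrates the funnel idea. The names (`TrafficPattern`, `PlanStep`, `generate_plan`) and the probability values are illustrative assumptions, not the suite's actual types or numbers: each stage is only attempted if the previous stage succeeded, and its probability decides whether the plan continues deeper into the funnel.

```rust
use rand::Rng;

/// Illustrative per-stage probabilities; the real suite's values differ.
struct TrafficPattern {
    add_to_cart: f64,
    checkout: f64,
    review: f64,
}

impl TrafficPattern {
    fn read_heavy() -> Self {
        Self { add_to_cart: 0.2, checkout: 0.1, review: 0.05 }
    }
    fn balanced() -> Self {
        Self { add_to_cart: 0.5, checkout: 0.4, review: 0.25 }
    }
    fn write_heavy() -> Self {
        Self { add_to_cart: 0.9, checkout: 0.85, review: 0.7 }
    }
}

#[derive(Debug)]
enum PlanStep {
    Search,
    AddToCart,
    Checkout,
    Review,
}

/// Builds one agent's plan by walking the funnel, stopping at the first
/// stage whose probability check fails.
fn generate_plan(pattern: &TrafficPattern, rng: &mut impl Rng) -> Vec<PlanStep> {
    let mut plan = vec![PlanStep::Search];
    if rng.gen_bool(pattern.add_to_cart) {
        plan.push(PlanStep::AddToCart);
        if rng.gen_bool(pattern.checkout) {
            plan.push(PlanStep::Checkout);
            if rng.gen_bool(pattern.review) {
                plan.push(PlanStep::Review);
            }
        }
    }
    plan
}
```

With this shape, a read-heavy pattern produces mostly search-only plans, while a write-heavy pattern pushes most plans through checkout and review.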

Concurrency

The suite is configured to run the plans at up to three concurrency levels, depending on the number of CPU cores present: 1 agent, 1 agent per core, and 2 agents per core.
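
As a rough illustration, the concurrency levels could be derived from the detected core count as in the sketch below. This assumes `std::thread::available_parallelism` for core detection; the suite itself may detect cores differently.

```rust
use std::thread::available_parallelism;

/// Returns the three concurrency levels described above: a single agent,
/// one agent per core, and two agents per core. Duplicate levels (e.g. on
/// a single-core machine) are collapsed.
fn concurrency_levels() -> Vec<usize> {
    let cores = available_parallelism().map(|n| n.get()).unwrap_or(1);
    let mut levels = vec![1, cores, cores * 2];
    levels.dedup();
    levels
}
```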