Introduction
Early in an analytical application's lifecycle, when queries run over a relatively small dataset, the application works well and delivers responses within acceptable thresholds. As the dataset and the number of users grow, time-to-response from the user's perspective often suffers heavily. This is true of both application databases and purpose-built cloud data warehousing solutions.
This leaves us with a dilemma. Application databases can deliver low-latency responses over small-to-large datasets, but they struggle with massive analytical datasets. Cloud data warehousing solutions usually make no guarantees beyond eventually delivering a response, which means latency can vary wildly from query to query.
Cube Cloud optimizes the performance of redundant queries with in-memory caching, queue management, and configurable pre-aggregation caching in Cube Store. In this blog post, we'll give an overview of Cube Cloud's caching capabilities and present load tests comparing Cube Store to a cloud data warehouse.
Caching in Cube Cloud
Cube Cloud provides a two-level caching system. The first level is an in-memory cache, which is active by default. Cube Cloud's in-memory cache acts as a buffer for your database when a burst of requests hits the same data from multiple concurrent users. The second level of caching, pre-aggregations, requires explicit configuration to activate.
Cube Cloud caches the results of executed queries in its in-memory cache. The cache key is the generated SQL statement, including any query-dependent pre-aggregations. Upon receiving an incoming request, Cube Cloud first checks the cache using this key. If nothing is found in the cache, the query is executed against the database, the result set is returned, and the cache is updated.
If a value is present in the cache and the refresh_key value for the query hasn't changed, the cached value is returned. Otherwise, the SQL query is executed against either pre-aggregation storage or the source database to populate the cache with the results and return them.
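To make this flow concrete, here is a minimal sketch of the lookup logic in JavaScript. It is not Cube's actual implementation: the cache, the refresh-key check, and the query-execution helpers below are hypothetical stand-ins for the behavior described above.

```javascript
// Minimal sketch of the lookup flow described above -- not Cube's actual code.
const cache = new Map(); // stands in for Cube Cloud's in-memory cache

async function loadWithCache(query) {
  // The cache key is derived from the generated SQL statement
  // (plus any query-dependent pre-aggregations).
  const sql = generateSql(query); // hypothetical helper
  const key = JSON.stringify({ sql });

  const cached = cache.get(key);
  const refreshKey = await evaluateRefreshKey(query); // hypothetical helper

  // Serve from cache only if an entry exists and its refresh_key hasn't changed.
  if (cached && cached.refreshKey === refreshKey) {
    return cached.result;
  }

  // Otherwise, execute against pre-aggregation storage if a matching
  // pre-aggregation exists, or fall back to the source database.
  const result = hasMatchingPreAggregation(query) // hypothetical helper
    ? await queryCubeStore(sql)                   // hypothetical helper
    : await querySourceDatabase(sql);             // hypothetical helper

  cache.set(key, { refreshKey, result });
  return result;
}
```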
Pre-aggregations are defined in the data model and stored as a layer of aggregated data that Cube Cloud builds and refreshes. You can learn more about defining pre-aggregations in the data modeling reference. They can dramatically improve query performance and support higher concurrency. Pre-aggregations are designed to strike the right balance between time to insight and query performance.
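For example, a pre-aggregation is declared in the data model next to the measures and dimensions it rolls up. The cube, table, and member names below are illustrative only, not the model used in the tests:

```javascript
// model/cubes/orders.js -- illustrative example, not the data model used in the tests
cube(`orders`, {
  sql_table: `orders`,

  measures: {
    count: {
      type: `count`,
    },
  },

  dimensions: {
    status: {
      sql: `status`,
      type: `string`,
    },
    created_at: {
      sql: `created_at`,
      type: `time`,
    },
  },

  pre_aggregations: {
    orders_by_status_and_day: {
      measures: [CUBE.count],
      dimensions: [CUBE.status],
      time_dimension: CUBE.created_at,
      granularity: `day`,
      refresh_key: {
        every: `1 hour`,
      },
    },
  },
});
```

With a definition like this, queries that only touch `count`, `status`, and `created_at` at day granularity or coarser can be served from Cube Store instead of the source database.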
Load Testing
Objective
The following tests measure query performance and peak requests per second (RPS) across three distinct scenarios.
Hypothesis
Cube Store can deliver better performance than a cloud data warehouse in a real-world analytics use case.
Methodology
Load tests were structured into three scenarios using similarly sized/priced configurations and the TPC-H dataset with a scale factor of 100. The testing framework's parameters are expressed in terms of virtual users (VUs): 1 VU corresponds to 1 concurrent request. Each test ran with 10 VUs for 20 seconds. The test parameters were chosen deliberately so that both systems could achieve a 100% success rate, rather than being overwhelmed to the point of failure.
Each load test is designed to evaluate the platforms' performance under different conditions:
- Run Case 1: This scenario involved running identical queries against both Cube Store and the cloud data warehouse to gauge baseline performance metrics.
- Run Case 2: Here, we introduced a measure, a dimension, and a random filter into the queries, simulating a more realistic analytical workload.
- Run Case 3: This scenario added complexity by incorporating random dates and granularity alongside a measure and a dimension, reflecting common real-world analytics use cases.
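The VU and duration parameters above map directly onto a k6-style load-test script. Below is a sketch of what a Run Case 2-style test could look like against Cube's REST API; the tool choice, endpoint, token, cube name, and filter values are assumptions for illustration, not the exact setup used in our tests.

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,         // 1 VU == 1 concurrent request
  duration: '20s',
};

// Placeholder endpoint and token -- substitute your own deployment's values,
// e.g. CUBE_API_URL=https://<deployment>/cubejs-api/v1
const API_URL = `${__ENV.CUBE_API_URL}/load`;
const AUTH_TOKEN = __ENV.CUBE_API_TOKEN;

// Run Case 2 style: one measure, one dimension, and a randomized filter value.
const statuses = ['completed', 'processing', 'shipped'];

export default function () {
  const status = statuses[Math.floor(Math.random() * statuses.length)];

  const query = {
    measures: ['orders.count'],
    dimensions: ['orders.status'],
    filters: [{ member: 'orders.status', operator: 'equals', values: [status] }],
  };

  const res = http.get(
    `${API_URL}?query=${encodeURIComponent(JSON.stringify(query))}`,
    { headers: { Authorization: AUTH_TOKEN } }
  );

  check(res, { 'status is 200': (r) => r.status === 200 });
}
```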
| Cube Cloud Configuration | Cloud Data Warehouse Configuration |
|---|---|
| Tier: Enterprise with Dedicated Infrastructure | Cluster size: Medium |
| API instances: 10 (min and max range set to 10) | |
| Cube Store workers: 8 | |
NOTE: Small, quick tests of query speed, extrapolated to real-world use cases, usually fail to account for query variability and the multiple layers of caching in a modern data stack. It is important to test with your own data and workloads; your results may vary.
Results
Run Case 1
In this scenario, where identical queries were executed on both Cube Store and the cloud data warehouse, we observed the following results:
| 10 VUs, 20 s | Cube Store | Cloud Data Warehouse |
|---|---|---|
| Success rate (%) | 100 | 100 |
| Total requests | 766 | 691 |
| Peak RPS | 46.67 | 46 |
| p(95) latency (ms) | 253 | 303 |
The first run case is a rigid and rather unrealistic test. In both systems, we are essentially measuring the performance of the first caching layer of Cube Store and of the cloud data warehouse. Both perform similarly, and even in a real-world scenario where the same query is executed repeatedly by different actors, performance would be determined not by the query itself but by the size of the payload the caching layer is serving.
Run Case 2
Introducing a measure, a dimension, and a random filter provided deeper insights into the platforms' performance under a more realistic workload:
| 10 VUs, 20 s | Cube Store | Cloud Data Warehouse |
|---|---|---|
| Success rate (%) | 100 | 100 |
| Total requests | 814 | 731 |
| Peak RPS | 47 | 46.67 |
| p(95) latency (ms) | 226 | 290 |
The second run case tests a scenario where queries differ from one another but the set of possible queries is small. This means the first few queries are slow; once the cloud data warehouse's caching layers are warmed up, subsequent queries run as fast as possible. Cube Cloud's caching layer is already warm because it serves queries from pre-aggregations, which is why it achieves maximum performance from the beginning of the test.
Run Case 3
The inclusion of random dates and granularity alongside other query elements further tested the platforms' capabilities:
| 10 VUs, 20 s | Cube Store | Cloud Data Warehouse |
|---|---|---|
| Success rate (%) | 100 | 100 |
| Total requests | 880 | 85 |
| Peak RPS | 46.67 | 5.67 |
| p(95) latency (ms) | 250 | 4201 |
The third run case is a more realistic scenario. All queries are different, so every query has to be executed by the respective query engine. In this scenario, Cube Store's speed advantage is much more noticeable; for the cloud data warehouse to achieve the same performance, it would need a much larger cluster size.
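For context, a Run Case 3-style query, with a measure, a dimension, and a time dimension whose date range and granularity vary between requests, could be issued through Cube's JavaScript client roughly like this; the member names, date range, and credentials are illustrative assumptions:

```javascript
import cubejs from '@cubejs-client/core';

// Placeholder credentials -- point these at your own Cube deployment's REST endpoint.
const cubeApi = cubejs(process.env.CUBE_API_TOKEN, {
  apiUrl: process.env.CUBE_API_URL, // e.g. https://<deployment>/cubejs-api/v1
});

// Vary granularity and date range between requests, as in Run Case 3.
const granularities = ['day', 'week', 'month'];
const granularity = granularities[Math.floor(Math.random() * granularities.length)];

const resultSet = await cubeApi.load({
  measures: ['orders.count'],
  dimensions: ['orders.status'],
  timeDimensions: [
    {
      dimension: 'orders.created_at',
      granularity,
      dateRange: ['2023-01-01', '2023-03-31'], // illustrative range
    },
  ],
});

console.log(resultSet.tablePivot());
```

When every such query is different, results cannot come from the warehouse's result cache, which is exactly the situation where the pre-aggregated data in Cube Store pays off.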
Conclusion
Cube Cloud's caching capabilities prove to be a powerful solution for improving query performance. With its in-memory caching, queue management, and configurable pre-aggregation caching in Cube Store, Cube Cloud addresses the latency issues that often arise as datasets and user counts grow. This is particularly beneficial because application databases tend to struggle with massive analytical datasets, while cloud data warehousing solutions can be inconsistent in their response times. Load testing Cube Cloud's caching capabilities demonstrated its effectiveness compared to a cloud data warehouse, especially once every query is different and cannot be served from the warehouse's own cache.