Deploying Cube Core with Docker
This guide walks you through deploying Cube with Docker.
This is an example of a production-ready deployment, but real-world deployments can vary significantly depending on desired performance and scale.
If you'd like to deploy Cube to Kubernetes, refer to the following resources with Helm charts: gadsme/charts or OpstimizeIcarus/cubejs-helm-charts-kubernetes.
These resources are maintained by the community, not by the Cube team. Please direct any questions about them to their authors.
Prerequisites
You'll need Docker and Docker Compose installed on the machine you're deploying to.
Configuration
Create a Docker Compose stack by creating a docker-compose.yml file. A production-ready stack would, at minimum, consist of:
- One or more Cube API instances
- A Cube Refresh Worker
- A Cube Store Router node
- One or more Cube Store Worker nodes
An example stack using BigQuery as a data source is provided below:
Using macOS or Windows? Use CUBEJS_DB_HOST=host.docker.internal instead of localhost if your database is on the same machine.
Using macOS on Apple Silicon (arm64)? Use the arm64v8 tag for Cube Store Docker images, e.g., cubejs/cubestore:arm64v8.
Note that it's a best practice to use specific locked versions, e.g., cubejs/cube:v0.36.0, instead of cubejs/cube:latest in production.
services:
  cube_api:
    restart: always
    image: cubejs/cube:latest
    ports:
      - 4000:4000
    environment:
      - CUBEJS_DB_TYPE=bigquery
      - CUBEJS_DB_BQ_PROJECT_ID=cube-bq-cluster
      - CUBEJS_DB_BQ_CREDENTIALS=<BQ-KEY>
      - CUBEJS_DB_EXPORT_BUCKET=cubestore
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_API_SECRET=secret
    volumes:
      - .:/cube/conf
    depends_on:
      - cube_refresh_worker
      - cubestore_router
      - cubestore_worker_1
      - cubestore_worker_2

  cube_refresh_worker:
    restart: always
    image: cubejs/cube:latest
    environment:
      - CUBEJS_DB_TYPE=bigquery
      - CUBEJS_DB_BQ_PROJECT_ID=cube-bq-cluster
      - CUBEJS_DB_BQ_CREDENTIALS=<BQ-KEY>
      - CUBEJS_DB_EXPORT_BUCKET=cubestore
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_API_SECRET=secret
      - CUBEJS_REFRESH_WORKER=true
    volumes:
      - .:/cube/conf

  cubestore_router:
    restart: always
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_WORKERS=cubestore_worker_1:10001,cubestore_worker_2:10002
      - CUBESTORE_REMOTE_DIR=/cube/data
      - CUBESTORE_META_PORT=9999
      - CUBESTORE_SERVER_NAME=cubestore_router:9999
    volumes:
      - .cubestore:/cube/data

  cubestore_worker_1:
    restart: always
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_WORKERS=cubestore_worker_1:10001,cubestore_worker_2:10002
      - CUBESTORE_SERVER_NAME=cubestore_worker_1:10001
      - CUBESTORE_WORKER_PORT=10001
      - CUBESTORE_REMOTE_DIR=/cube/data
      - CUBESTORE_META_ADDR=cubestore_router:9999
    volumes:
      - .cubestore:/cube/data
    depends_on:
      - cubestore_router

  cubestore_worker_2:
    restart: always
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_WORKERS=cubestore_worker_1:10001,cubestore_worker_2:10002
      - CUBESTORE_SERVER_NAME=cubestore_worker_2:10002
      - CUBESTORE_WORKER_PORT=10002
      - CUBESTORE_REMOTE_DIR=/cube/data
      - CUBESTORE_META_ADDR=cubestore_router:9999
    volumes:
      - .cubestore:/cube/data
    depends_on:
      - cubestore_router
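With docker-compose.yml in place, you can bring the stack up and check that the API instance is healthy. This is a minimal sketch: it assumes you run the commands from the directory containing docker-compose.yml and uses Cube's /readyz readiness endpoint.
docker-compose up -d
curl http://localhost:4000/readyz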
Set up reverse proxy
In production, the Cube API should be served over an HTTPS connection to ensure the security of data in transit. We recommend using a reverse proxy; as an example, let's use NGINX.
You can also use a reverse proxy to enable HTTP/2 and gzip compression.
First, we'll create a new server configuration file called nginx/cube.conf:
server {
  listen 443 ssl;
  server_name cube.my-domain.com;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ecdh_curve secp384r1;
  # Replace the ciphers with the appropriate values
  ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384 OLD_TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 OLD_TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256";
  ssl_prefer_server_ciphers on;
  ssl_certificate /etc/ssl/private/cert.pem;
  ssl_certificate_key /etc/ssl/private/key.pem;
  ssl_session_timeout 10m;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;
  ssl_stapling on;
  ssl_stapling_verify on;

  location / {
    # Proxy requests to the Cube API service defined in docker-compose.yml
    proxy_pass http://cube_api:4000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}
Then we'll add a new service to our Docker Compose stack:
services:
  # ...

  nginx:
    image: nginx
    ports:
      - 443:443
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./ssl:/etc/ssl/private
Don't forget to create an ssl directory with the cert.pem and key.pem files inside so that the Nginx service can find them.
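For local testing, you can generate a self-signed certificate with OpenSSL; the file names below match the paths referenced in nginx/cube.conf. Browsers will warn about self-signed certificates, so use a certificate issued by a real CA in production.
mkdir -p ssl
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=cube.my-domain.com" -keyout ssl/key.pem -out ssl/cert.pem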
For automatically provisioning SSL certificates with Let's Encrypt, this blog post may be useful.
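Once the nginx service is up, you can verify that it accepts the configuration:
docker-compose up -d
docker-compose exec nginx nginx -t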
Security
Use JSON Web Tokens
Cube can be configured to use industry-standard JSON Web Key Sets for securing its API and limiting access to data. To do this, we'll define the relevant options on our Cube API instance:
If you're using queryRewrite for access control, then you must also configure scheduledRefreshContexts so the refresh workers can correctly create pre-aggregations.
services:
  cube_api:
    image: cubejs/cube:latest
    ports:
      - 4000:4000
    environment:
      - CUBEJS_DB_TYPE=bigquery
      - CUBEJS_DB_BQ_PROJECT_ID=cube-bq-cluster
      - CUBEJS_DB_BQ_CREDENTIALS=<BQ-KEY>
      - CUBEJS_DB_EXPORT_BUCKET=cubestore
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_API_SECRET=secret
      - CUBEJS_JWK_URL=https://cognito-idp.<AWS_REGION>.amazonaws.com/<USER_POOL_ID>/.well-known/jwks.json
      - CUBEJS_JWT_AUDIENCE=<APPLICATION_URL>
      - CUBEJS_JWT_ISSUER=https://cognito-idp.<AWS_REGION>.amazonaws.com/<USER_POOL_ID>
      - CUBEJS_JWT_ALGS=RS256
      - CUBEJS_JWT_CLAIMS_NAMESPACE=<CLAIMS_NAMESPACE>
    volumes:
      - .:/cube/conf
    depends_on:
      - cubestore_worker_1
      - cubestore_worker_2
      - cube_refresh_worker
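With JWT validation enabled, API requests must include a token issued by your identity provider. As a quick check (the token below is a placeholder), you can call the REST API's metadata endpoint through the reverse proxy:
curl -H "Authorization: Bearer <VALID_JWT>" https://cube.my-domain.com/cubejs-api/v1/meta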
Securing Cube Store
All Cube Store nodes (both router and workers) should only be accessible to Cube API instances and refresh workers. To do this with Docker Compose, we simply need to make sure that none of the Cube Store services have any ports exposed to the host: they only need to be reachable over the internal Docker network.
Monitoring
All Cube logs can be viewed through the Docker Compose CLI:
docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------------------------
cluster_cube_1 docker-entrypoint.sh cubej ... Up 0.0.0.0:4000->4000/tcp,:::4000->4000/tcp
cluster_cubestore_router_1 ./cubestored Up 3030/tcp, 3306/tcp
cluster_cubestore_worker_1_1 ./cubestored Up 3306/tcp, 9001/tcp
cluster_cubestore_worker_2_1 ./cubestored Up 3306/tcp, 9001/tcp
docker-compose logs
cubestore_router_1 | 2021-06-02 15:03:20,915 INFO [cubestore::metastore] Creating metastore from scratch in /cube/.cubestore/data/metastore
cubestore_router_1 | 2021-06-02 15:03:20,950 INFO [cubestore::cluster] Meta store port open on 0.0.0.0:9999
cubestore_router_1 | 2021-06-02 15:03:20,951 INFO [cubestore::mysql] MySQL port open on 0.0.0.0:3306
cubestore_router_1 | 2021-06-02 15:03:20,952 INFO [cubestore::http] Http Server is listening on 0.0.0.0:3030
cube_1 | 🚀 Cube API server (vX.XX.XX) is listening on 4000
cubestore_worker_2_1 | 2021-06-02 15:03:24,945 INFO [cubestore::cluster] Worker port open on 0.0.0.0:9001
cubestore_worker_1_1 | 2021-06-02 15:03:24,830 INFO [cubestore::cluster] Worker port open on 0.0.0.0:9001
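You can also follow the logs of a single service, e.g., the API instance from the stack above:
docker-compose logs -f cube_api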
Update to the latest version
Find the latest stable release version from Docker Hub. Then update your docker-compose.yml to use a specific tag instead of latest:
services:
  cube_api:
    image: cubejs/cube:v0.36.0
    ports:
      - 4000:4000
    environment:
      - CUBEJS_DB_TYPE=bigquery
      - CUBEJS_DB_BQ_PROJECT_ID=cube-bq-cluster
      - CUBEJS_DB_BQ_CREDENTIALS=<BQ-KEY>
      - CUBEJS_DB_EXPORT_BUCKET=cubestore
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_API_SECRET=secret
    volumes:
      - .:/cube/conf
    depends_on:
      - cubestore_router
      - cube_refresh_worker
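Then pull the new images and recreate the containers; Docker Compose only restarts services whose configuration has changed:
docker-compose pull
docker-compose up -d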
Extend the Docker image
If you need to use dependencies (e.g., Python or npm packages) with native extensions inside configuration files or dynamic data models, build a custom Docker image.
You can do this by creating a Dockerfile and a corresponding .dockerignore file:
touch Dockerfile
touch .dockerignore
Add this to the Dockerfile:
FROM cubejs/cube:latest
COPY . .
# Install Python dependencies used by configuration files and dynamic data models
RUN apt-get update && apt-get install -y python3-pip
RUN pip3 install -r requirements.txt
# Install npm dependencies
RUN npm install
And this to the .dockerignore:
schema
cube.py
cube.js
.env
node_modules
npm-debug.log
Then start the build process by running the following command:
docker build -t <YOUR-USERNAME>/cube-custom-image .
Finally, update your docker-compose.yml to use your newly built image:
services:
  cube_api:
    image: <YOUR-USERNAME>/cube-custom-image
    ports:
      - 4000:4000
    environment:
      - CUBEJS_API_SECRET=secret
      # Other environment variables
    volumes:
      - .:/cube/conf
    depends_on:
      - cubestore_router
      - cube_refresh_worker
      # Other container dependencies
Note that you shouldn't mount the whole current folder (.:/cube/conf) if you have dependencies in package.json. Doing so would effectively hide the node_modules folder inside the container, where dependency files installed with npm install reside, and would result in errors like Error: Cannot find module 'my_dependency'. In that case, mount individual files:
    # ...
    volumes:
      - ./schema:/cube/conf/schema
      - ./cube.js:/cube/conf/cube.js
      # Other necessary files