Version: 3.0.0-alpha (Diátaxis)

How to vertically scale ClickHouse

This guide explains how to adjust the CPU, memory, and storage resources of your ClickHouse instance on Hikube, either via a predefined preset or by defining explicit values.

Prerequisites

  • A ClickHouse instance deployed on Hikube (see the quick start)
  • kubectl configured to interact with the Hikube API
  • The YAML configuration file for your ClickHouse instance

Steps

1. Check current resources

Review the current configuration of your ClickHouse instance:

kubectl get clickhouse my-clickhouse -o yaml

Note the values of resourcesPreset, resources, replicas, shards and size in the spec section.
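If you keep a local copy of the manifest, you can inspect those fields without re-querying the cluster. A minimal sketch; the sample manifest below is written inline for illustration (in practice you would export it with `kubectl get clickhouse my-clickhouse -o yaml > my-clickhouse.yaml`), and the values shown are assumptions, not your actual configuration:

```shell
# Write a sample manifest (illustrative values only)
cat > my-clickhouse.yaml <<'EOF'
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: my-clickhouse
spec:
  replicas: 2
  shards: 1
  resourcesPreset: small
  size: 10Gi
EOF

# Show only the fields relevant to vertical scaling
grep -E 'resourcesPreset|resources|replicas|shards|size' my-clickhouse.yaml
```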

2. Modify the resourcesPreset or explicit resources

Option A: Use a preset

Here are the available presets:

| Preset  | CPU  | Memory |
|---------|------|--------|
| nano    | 250m | 128Mi  |
| micro   | 500m | 256Mi  |
| small   | 1    | 512Mi  |
| medium  | 1    | 1Gi    |
| large   | 2    | 2Gi    |
| xlarge  | 4    | 4Gi    |
| 2xlarge | 8    | 8Gi    |
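To reason about a scaling step, it can help to express the preset table as a mapping and normalize the CPU quantities. A sketch with the values copied from the table above; the helper name is hypothetical, not part of any Hikube API:

```python
# Preset name -> (CPU, memory), as listed in the table above
PRESETS = {
    "nano":    ("250m", "128Mi"),
    "micro":   ("500m", "256Mi"),
    "small":   ("1",    "512Mi"),
    "medium":  ("1",    "1Gi"),
    "large":   ("2",    "2Gi"),
    "xlarge":  ("4",    "4Gi"),
    "2xlarge": ("8",    "8Gi"),
}

def cpu_millicores(cpu: str) -> int:
    """Convert a Kubernetes CPU quantity ('250m' or '2') to millicores."""
    return int(cpu[:-1]) if cpu.endswith("m") else int(cpu) * 1000

# Going from small to large doubles the CPU allocation (1000m -> 2000m)
print(cpu_millicores(PRESETS["small"][0]), "->", cpu_millicores(PRESETS["large"][0]))
```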

For example, to go from small (default value) to large:

clickhouse-large.yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: my-clickhouse
spec:
  replicas: 2
  shards: 1
  resourcesPreset: large
  size: 20Gi
  clickhouseKeeper:
    enabled: true
    replicas: 3
    resourcesPreset: micro
    size: 1Gi

Option B: Define explicit resources

For precise control, specify CPU and memory directly:

clickhouse-custom-resources.yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: my-clickhouse
spec:
  replicas: 2
  shards: 1
  resources:
    cpu: 4000m
    memory: 8Gi
  size: 50Gi
  clickhouseKeeper:
    enabled: true
    replicas: 3
    resourcesPreset: small
    size: 2Gi
Warning

If the resources field is defined, the resourcesPreset value is entirely ignored. Remove resourcesPreset from the manifest to avoid confusion.
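This precedence rule can be sketched in a few lines. The function and preset values below are illustrative, not the operator's actual code:

```python
# Illustrative sketch of the precedence rule: an explicit `resources`
# field overrides `resourcesPreset` entirely (not Hikube's real logic).
PRESETS = {"small": {"cpu": "1", "memory": "512Mi"},
           "large": {"cpu": "2", "memory": "2Gi"}}

def effective_resources(spec: dict) -> dict:
    # If `resources` is set, the preset is ignored entirely
    if spec.get("resources"):
        return spec["resources"]
    return PRESETS[spec.get("resourcesPreset", "small")]

# Both fields set: the explicit values win
spec = {"resourcesPreset": "large",
        "resources": {"cpu": "4000m", "memory": "8Gi"}}
print(effective_resources(spec))  # {'cpu': '4000m', 'memory': '8Gi'}
```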

3. Adjust storage if needed

ClickHouse stores data on disk (unlike Redis). Remember to increase the persistent volume (size) based on the expected data volume:

clickhouse-storage.yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: my-clickhouse
spec:
  replicas: 2
  shards: 1
  resourcesPreset: xlarge
  size: 100Gi
  storageClass: replicated
  clickhouseKeeper:
    enabled: true
    replicas: 3
    resourcesPreset: micro
    size: 1Gi
Tip

Use storageClass: replicated in production to protect data against physical node loss.

4. Apply the update

Apply the manifest you modified in the previous steps, for example:

kubectl apply -f clickhouse-large.yaml

Verification

Verify that the resources have been updated:

# Check the configuration of the ClickHouse resource
kubectl get clickhouse my-clickhouse -o yaml | grep -A 5 resources

# Check the status of the ClickHouse pods
kubectl get pods -l app.kubernetes.io/instance=my-clickhouse

Expected output:

NAME                READY   STATUS    RESTARTS   AGE
my-clickhouse-0-0   1/1     Running   0          3m
my-clickhouse-0-1   1/1     Running   0          3m
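This readiness check can also be scripted. A sketch that parses `kubectl get pods` output, with the expected listing above hard-coded for illustration (the helper name is hypothetical):

```python
# Sample `kubectl get pods` output (matching the expected output above)
output = """\
NAME                READY   STATUS    RESTARTS   AGE
my-clickhouse-0-0   1/1     Running   0          3m
my-clickhouse-0-1   1/1     Running   0          3m
"""

def all_ready(pod_listing: str) -> bool:
    """Return True when every pod is Running with all containers ready."""
    rows = pod_listing.strip().splitlines()[1:]  # skip the header row
    for row in rows:
        name, ready, status = row.split()[:3]
        ready_count, total_count = ready.split("/")
        if status != "Running" or ready_count != total_count:
            return False
    return True

print(all_ready(output))  # True
```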

Going further