Version: 2.0.2

ClickHouse API Reference

This reference details the use of ClickHouse on Hikube, whether in simple or distributed configuration with shards and replicas.


Base Structure

ClickHouse Resource

YAML Configuration Example

```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: clickhouse-name
spec:
  # parameters described in the sections below
```

Parameters

Common Parameters

| Parameter | Type | Description | Default | Required |
|-----------|------|-------------|---------|----------|
| `replicas` | int | Number of ClickHouse replicas | `2` | Yes |
| `shards` | int | Number of ClickHouse shards | `1` | Yes |
| `resources` | object | Explicit CPU and memory configuration for each replica. If empty, `resourcesPreset` is applied | `{}` | No |
| `resources.cpu` | quantity | CPU available to each replica | `null` | No |
| `resources.memory` | quantity | Memory (RAM) available to each replica | `null` | No |
| `resourcesPreset` | string | Default sizing preset (nano, micro, small, medium, large, xlarge, 2xlarge) | `"small"` | Yes |
| `size` | quantity | Persistent Volume Claim size available for application data | `10Gi` | Yes |
| `storageClass` | string | StorageClass used to store the data | `""` | No |

YAML Configuration Example

clickhouse.yaml

```yaml
replicas: 2
shards: 1
resources:
  cpu: 4000m
  memory: 4Gi
resourcesPreset: small  # ignored here, because resources is set explicitly
size: 20Gi
storageClass: replicated
```

Application-Specific Parameters

| Parameter | Type | Description | Default | Required |
|-----------|------|-------------|---------|----------|
| `logStorageSize` | quantity | Size of the Persistent Volume for logs | `2Gi` | No |
| `logTTL` | int | TTL (expiration time) for `query_log` and `query_thread_log` | `15` | No |
| `users` | map[string]object | Users configuration | `{...}` | No |
| `users[name].password` | string | Password for the user | `null` | Yes |
| `users[name].readonly` | bool | Whether the user is read-only | `false` | No |

YAML Configuration Example

clickhouse.yaml

```yaml
logStorageSize: 5Gi
logTTL: 30
users:
  analyst:
    password: analyst123
    readonly: true
  admin:
    password: adminStrongPwd
```

Backup Parameters

| Parameter | Type | Description | Default | Required |
|-----------|------|-------------|---------|----------|
| `backup` | object | Backup configuration | `{}` | No |
| `backup.enabled` | bool | Enable regular backups | `false` | No |
| `backup.s3Region` | string | AWS S3 region where backups are stored | `us-east-1` | No |
| `backup.s3Bucket` | string | S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` | No |
| `backup.schedule` | string | Cron schedule for automated backups | `"0 2 * * *"` | No |
| `backup.cleanupStrategy` | string | Retention strategy for cleaning up old backups | `"--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"` | No |
| `backup.s3AccessKey` | string | Access key for S3, used for authentication | `<your-access-key>` | Yes |
| `backup.s3SecretKey` | string | Secret key for S3, used for authentication | `<your-secret-key>` | Yes |
| `backup.resticPassword` | string | Password for Restic backup encryption | `<password>` | Yes |

YAML Configuration Example

clickhouse.yaml

```yaml
backup:
  enabled: true
  s3Region: eu-central-1
  s3Bucket: backups.hikube.clickhouse
  schedule: "0 3 * * *"
  cleanupStrategy: "--keep-last=5 --keep-daily=7 --keep-weekly=4"
  s3AccessKey: "<your-access-key>"
  s3SecretKey: "<your-secret-key>"
  resticPassword: "<password>"
```

ClickHouse Keeper Parameters

| Parameter | Type | Description | Default | Required |
|-----------|------|-------------|---------|----------|
| `clickhouseKeeper` | object | ClickHouse Keeper configuration | `{}` | No |
| `clickhouseKeeper.enabled` | bool | Deploy ClickHouse Keeper for cluster coordination | `true` | Yes |
| `clickhouseKeeper.size` | quantity | Persistent Volume Claim size available for application data | `1Gi` | Yes |
| `clickhouseKeeper.resourcesPreset` | string | Default sizing preset (nano, micro, small, medium, large, xlarge, 2xlarge) | `micro` | Yes |
| `clickhouseKeeper.replicas` | int | Number of Keeper replicas | `3` | Yes |

YAML Configuration Example

clickhouse.yaml

```yaml
clickhouseKeeper:
  enabled: true
  replicas: 3
  resourcesPreset: medium
  size: 5Gi
```

`resources` and `resourcesPreset`

The `resources` field explicitly defines the CPU and memory configuration of each ClickHouse replica.
If this field is left empty, the value of the `resourcesPreset` parameter is used instead.

YAML Configuration Example

clickhouse.yaml

```yaml
resources:
  cpu: 4000m
  memory: 4Gi
```

⚠️ Attention: if `resources` is defined, the `resourcesPreset` value is ignored.

| Preset name | CPU | Memory |
|-------------|-----|--------|
| nano | 250m | 128Mi |
| micro | 500m | 256Mi |
| small | 1 | 512Mi |
| medium | 1 | 1Gi |
| large | 2 | 2Gi |
| xlarge | 4 | 4Gi |
| 2xlarge | 8 | 8Gi |
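For instance, to rely on a preset instead of explicit limits, leave `resources` unset and name the preset; a minimal sketch (the `medium` choice and `20Gi` size here are illustrative values, not defaults):

```yaml
# Sketch: preset-based sizing.
# resources is omitted, so each replica gets the medium preset (1 CPU / 1Gi per the table above).
replicas: 2
shards: 1
resourcesPreset: medium
size: 20Gi
```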

Complete Examples

Production Cluster

clickhouse-production.yaml

```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: production
spec:
  replicas: 3
  shards: 2
  resources:
    cpu: 4000m
    memory: 8Gi
  size: 100Gi
  storageClass: replicated

  logStorageSize: 10Gi
  logTTL: 30

  clickhouseKeeper:
    enabled: true
    replicas: 3
    resourcesPreset: small
    size: 5Gi

  users:
    admin:
      password: SecureAdminPassword
    analyst:
      password: SecureAnalystPassword
      readonly: true

  backup:
    enabled: true
    schedule: "0 3 * * *"
    cleanupStrategy: "--keep-last=7 --keep-daily=7 --keep-weekly=4"
    s3Region: eu-central-1
    s3Bucket: s3.hikube.cloud/clickhouse-backups
    s3AccessKey: your-access-key
    s3SecretKey: your-secret-key
    resticPassword: SecureResticPassword
```

Development Cluster

clickhouse-development.yaml

```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
  name: development
spec:
  replicas: 1
  shards: 1
  resourcesPreset: nano
  size: 10Gi

  logStorageSize: 2Gi
  logTTL: 7

  clickhouseKeeper:
    enabled: true
    replicas: 1
    resourcesPreset: nano
    size: 1Gi

  users:
    dev:
      password: devpassword
```

Best Practices

- Odd number of Keepers: always deploy 3 or 5 Keeper replicas to ensure quorum (a majority is required for leader election).
- `logTTL`: adjust the retention period for system logs (`query_log`, `query_thread_log`) to avoid unnecessary data accumulation.
- Shards vs replicas: use shards to distribute data horizontally (more capacity) and replicas for redundancy (more availability).
- Read-only user: create a read-only user for analytics and reporting tools.
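As a sketch of the shards-versus-replicas trade-off above, the fragment below (illustrative figures, not defaults) runs four ClickHouse pods in total:

```yaml
# Sketch: 2 shards x 2 replicas = 4 ClickHouse pods.
# shards spread the data horizontally (capacity);
# replicas duplicate each shard (availability).
replicas: 2
shards: 2
```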
Warning

- Deletions are irreversible: deleting a ClickHouse resource results in permanent data loss if no backup is configured.
- Changing shards: modifying the number of shards on an existing cluster can lead to complex data redistribution.
- Keeper and quorum: with fewer than 3 Keepers, the cluster cannot maintain quorum if a node fails.