Version: 3.0.0-alpha (Diátaxis)

How to scale the RabbitMQ cluster

This guide explains how to adjust the resources of a RabbitMQ cluster on Hikube: number of replicas, CPU/memory resources, and storage.

Prerequisites

  • kubectl configured with your Hikube kubeconfig
  • A RabbitMQ cluster deployed on Hikube

Available presets

Hikube offers predefined resource presets for RabbitMQ:

Preset    CPU    Memory
nano      100m   128Mi
micro     250m   256Mi
small     500m   512Mi
medium    500m   1Gi
large     1      2Gi
xlarge    2      4Gi
2xlarge   4      8Gi

warning
warning

If the resources field (explicit CPU/memory) is defined, the resourcesPreset value is ignored entirely. Make sure to clear the resources field if you want a preset to take effect.
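To confirm which mechanism is currently in effect, you can inspect both fields directly. This is a quick check, assuming a cluster named my-rabbitmq as in the examples below:

```shell
# Print the explicit resources field; empty output or {} means the preset applies
kubectl get rabbitmq my-rabbitmq -o jsonpath='{.spec.resources}'

# Print the active preset name
kubectl get rabbitmq my-rabbitmq -o jsonpath='{.spec.resourcesPreset}'
```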

note

RabbitMQ presets differ slightly from other services (Kafka, NATS, databases). Refer to the table above for exact values.

Steps

1. Check current resources

Review the current cluster configuration:

kubectl get rabbitmq my-rabbitmq -o yaml | grep -A 5 -E "replicas:|resources:|resourcesPreset|size:"

Example output:

  replicas: 3
  resourcesPreset: small
  resources: {}
  size: 10Gi

2. Modify the number of replicas

The number of replicas determines the number of nodes in the RabbitMQ cluster.

kubectl patch rabbitmq my-rabbitmq --type='merge' -p='
spec:
  replicas: 3
'
warning

With fewer than 3 replicas, quorum queues cannot guarantee message durability in case of failure. Use 3 replicas minimum in production.
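Before reducing the replica count, it can help to check which queues are quorum queues, since those are the ones affected. A sketch using rabbitmqctl (the pod name matches the examples in this guide):

```shell
# List each queue with its type; "quorum" queues lose fault tolerance below 3 replicas
kubectl exec -it my-rabbitmq-server-0 -- rabbitmqctl list_queues name type
```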

Recommendations by environment:

Environment   Replicas   Justification
Development   1          Sufficient for testing
Staging       3          Simulates production
Production    3 or 5     High availability and quorum queues

3. Modify the preset or explicit resources

Option A: change the preset

kubectl patch rabbitmq my-rabbitmq --type='merge' -p='
spec:
  resourcesPreset: large
  resources: {}
'
note

It is important to reset resources to {} when switching to a preset; otherwise the existing explicit values continue to override the preset.

Option B: define explicit resources

For fine-grained control, directly define CPU and memory values:

kubectl patch rabbitmq my-rabbitmq --type='merge' -p='
spec:
  resources:
    cpu: 2000m
    memory: 4Gi
'

You can also modify the full manifest:

rabbitmq-scaled.yaml

apiVersion: apps.cozystack.io/v1alpha1
kind: RabbitMQ
metadata:
  name: my-rabbitmq
spec:
  replicas: 3
  resources:
    cpu: 2000m
    memory: 4Gi
  size: 20Gi
  storageClass: replicated

  users:
    admin:
      password: SecureAdminPassword

  vhosts:
    production:
      roles:
        admin:
          - admin

kubectl apply -f rabbitmq-scaled.yaml

4. Apply and verify

Monitor the rolling update of the pods:

kubectl get po -w | grep my-rabbitmq

Expected output (during rolling update):

my-rabbitmq-server-0   1/1   Running       0   45m
my-rabbitmq-server-1   1/1   Terminating   0   44m
my-rabbitmq-server-1   0/1   Pending       0   0s
my-rabbitmq-server-1   1/1   Running       0   30s

Wait for all pods to be in Running state:

kubectl get po | grep my-rabbitmq
my-rabbitmq-server-0   1/1   Running   0   10m
my-rabbitmq-server-1   1/1   Running   0   8m
my-rabbitmq-server-2   1/1   Running   0   6m

Check the RabbitMQ cluster status:

kubectl exec -it my-rabbitmq-server-0 -- rabbitmqctl cluster_status

Verification

Confirm that the new resources are applied:

kubectl get rabbitmq my-rabbitmq -o yaml | grep -A 5 -E "replicas:|resources:|resourcesPreset|size:"

Verify that the cluster is functional:

kubectl exec -it my-rabbitmq-server-0 -- rabbitmqctl node_health_check

Expected output:

Health check passed
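On recent RabbitMQ versions, rabbitmqctl node_health_check is deprecated. The rabbitmq-diagnostics tool provides equivalent checks; a sketch using the same pod name as above:

```shell
# Basic liveness: is the RabbitMQ node running?
kubectl exec -it my-rabbitmq-server-0 -- rabbitmq-diagnostics check_running

# Verify no resource alarms (memory/disk) on the local node
kubectl exec -it my-rabbitmq-server-0 -- rabbitmq-diagnostics check_local_alarms
```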

Next steps