Version: 3.0.0-alpha (Diátaxis)

How to add and modify a node group

Node groups allow you to segment your Kubernetes cluster nodes according to your workload needs. This guide explains how to add, modify, and remove node groups in your Hikube configuration.

Prerequisites

  • A deployed Hikube Kubernetes cluster (see the quick start)
  • kubectl configured to interact with the Hikube API
  • Your cluster YAML configuration file

Steps

1. Understand instance types

Hikube offers three instance series suited to different use cases:

| Series | CPU:RAM ratio | Use case |
|---|---|---|
| S (Standard) | 1:2 | General workloads, web applications |
| U (Universal) | 1:4 | Balanced workloads, databases |
| M (Memory Optimized) | 1:8 | Memory-intensive applications, caches |

Available instance details:

| Instance | vCPU | RAM |
|---|---|---|
| s1.small | 1 | 2 GB |
| s1.medium | 2 | 4 GB |
| s1.large | 4 | 8 GB |
| s1.xlarge | 8 | 16 GB |
| s1.2xlarge | 16 | 32 GB |
| s1.4xlarge | 32 | 64 GB |
| s1.8xlarge | 64 | 128 GB |
| u1.medium | 1 | 4 GB |
| u1.large | 2 | 8 GB |
| u1.xlarge | 4 | 16 GB |
| u1.2xlarge | 8 | 32 GB |
| u1.4xlarge | 16 | 64 GB |
| u1.8xlarge | 32 | 128 GB |
| m1.large | 2 | 16 GB |
| m1.xlarge | 4 | 32 GB |
| m1.2xlarge | 8 | 64 GB |
| m1.4xlarge | 16 | 128 GB |
| m1.8xlarge | 32 | 256 GB |
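To pick a size from the tables above, find the smallest instance in a series that covers your workload's CPU and memory needs. The helper below is a hypothetical sketch (not part of Hikube); the dictionary simply transcribes the published sizes, with RAM in GB.

```python
# Hypothetical helper: pick the smallest instance of a series that
# satisfies a workload's vCPU and RAM requirements.
# The dict transcribes the instance tables above (RAM in GB).
INSTANCES = {
    "s1.small": (1, 2), "s1.medium": (2, 4), "s1.large": (4, 8),
    "s1.xlarge": (8, 16), "s1.2xlarge": (16, 32),
    "s1.4xlarge": (32, 64), "s1.8xlarge": (64, 128),
    "u1.medium": (1, 4), "u1.large": (2, 8), "u1.xlarge": (4, 16),
    "u1.2xlarge": (8, 32), "u1.4xlarge": (16, 64), "u1.8xlarge": (32, 128),
    "m1.large": (2, 16), "m1.xlarge": (4, 32), "m1.2xlarge": (8, 64),
    "m1.4xlarge": (16, 128), "m1.8xlarge": (32, 256),
}

def smallest_fit(series, vcpu, ram_gb):
    """Return the smallest instance in `series` with at least vcpu/ram_gb."""
    candidates = [
        (c, r, name)
        for name, (c, r) in INSTANCES.items()
        if name.startswith(series) and c >= vcpu and r >= ram_gb
    ]
    return min(candidates)[2] if candidates else None

print(smallest_fit("u1", 4, 16))   # u1.xlarge
print(smallest_fit("m1", 8, 100))  # m1.4xlarge (m1.2xlarge has only 64 GB)
```

Note how the series choice matters: 16 GB of RAM costs 8 vCPU in the S series but only 4 vCPU in the U series.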

2. Add a node group

To add a new node group, add an entry under `spec.nodeGroups` in your cluster configuration file:

cluster-with-compute.yaml

```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
  name: my-cluster
spec:
  controlPlane:
    replicas: 3

  nodeGroups:
    # Existing node group
    general:
      minReplicas: 2
      maxReplicas: 5
      instanceType: "s1.large"
      ephemeralStorage: 50Gi
      roles:
        - ingress-nginx

    # New node group for intensive compute
    compute:
      minReplicas: 1
      maxReplicas: 10
      instanceType: "u1.4xlarge"
      ephemeralStorage: 100Gi
      roles: []
```
> **Tip:** Choose a descriptive name for your node groups (`compute`, `web`, `monitoring`, `gpu`) to make cluster management easier.
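Before applying, it can help to sanity-check the `nodeGroups` section. The sketch below assumes the configuration has been loaded into a plain Python dict (e.g. with PyYAML); the field names follow the manifest above, but the validation rules are illustrative assumptions, not Hikube's own schema.

```python
import re

# Illustrative sanity check for a nodeGroups mapping, assuming it was
# parsed from the manifest above into a dict (e.g. via PyYAML).
# The instance-name pattern is an assumption based on the tables above.
INSTANCE_RE = re.compile(r"^[sum]1\.(small|medium|large|\d*xlarge)$")

def check_node_groups(node_groups):
    """Return a list of human-readable problems (empty list = OK)."""
    errors = []
    for name, spec in node_groups.items():
        lo, hi = spec.get("minReplicas", 0), spec.get("maxReplicas", 0)
        if lo > hi:
            errors.append(f"{name}: minReplicas {lo} > maxReplicas {hi}")
        if not INSTANCE_RE.match(spec.get("instanceType", "")):
            errors.append(f"{name}: unknown instanceType {spec.get('instanceType')!r}")
    return errors

groups = {
    "general": {"minReplicas": 2, "maxReplicas": 5, "instanceType": "s1.large"},
    "compute": {"minReplicas": 1, "maxReplicas": 10, "instanceType": "u1.4xlarge"},
}
print(check_node_groups(groups))  # [] — no problems found
```

A check like this catches typos (such as a swapped `minReplicas`/`maxReplicas`) before the API server does.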

3. Modify an existing node group

To modify a node group, update the desired fields in your YAML file. For example, to change the instance type and increase ephemeral storage:

cluster-updated.yaml

```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
  name: my-cluster
spec:
  controlPlane:
    replicas: 3

  nodeGroups:
    general:
      minReplicas: 2
      maxReplicas: 5
      instanceType: "u1.xlarge"   # Modified: from s1.large to u1.xlarge
      ephemeralStorage: 100Gi     # Modified: from 50Gi to 100Gi
      roles:
        - ingress-nginx
```
> **Warning:** Changing `instanceType` triggers a rolling update of the group's nodes. Ensure your cluster has enough spare capacity to absorb the load during the update.
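A quick way to reason about that spare capacity: during a rolling update, assume one node is out of service at a time, and check that the remaining nodes can carry the group's current usage. The numbers below are illustrative, not Hikube defaults.

```python
# Back-of-the-envelope headroom check for a rolling instanceType change.
# Assumes nodes are replaced one at a time; numbers are illustrative.
def survives_rolling_update(replicas, node_vcpu, used_vcpu):
    """Can replicas - 1 nodes carry the group's current CPU usage?"""
    return (replicas - 1) * node_vcpu >= used_vcpu

# 5 x s1.large (4 vCPU each) running 14 vCPU of workload:
print(survives_rolling_update(5, 4, 14.0))  # True  — 4 nodes leave 16 vCPU
print(survives_rolling_update(5, 4, 17.5))  # False — only 16 vCPU remain
```

The same arithmetic applies to memory; compare actual usage (e.g. from `kubectl top nodes`) against the capacity of one fewer node.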

4. Remove a node group

To remove a node group, delete its block from the configuration and re-apply:

cluster-simplified.yaml

```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
  name: my-cluster
spec:
  controlPlane:
    replicas: 3

  nodeGroups:
    general:
      minReplicas: 2
      maxReplicas: 5
      instanceType: "s1.large"
      ephemeralStorage: 50Gi
      roles:
        - ingress-nginx
    # The "compute" node group has been removed
```
> **Warning:** Before removing a node group, ensure that the workloads running on it can be rescheduled onto other groups. Use `kubectl drain` on the affected nodes if necessary.

5. Apply the changes

Apply the changes with `kubectl`:

```shell
kubectl apply -f cluster-updated.yaml
```

Verification

Verify that the changes have been applied:

```shell
# Check the cluster configuration
kubectl get kubernetes my-cluster -o yaml | grep -A 15 nodeGroups

# Watch the child cluster nodes
kubectl --kubeconfig=cluster-admin.yaml get nodes -w

# Check machines being provisioned
kubectl get machines -l cluster.x-k8s.io/cluster-name=my-cluster
```

Expected output:

```
NAME                       STATUS   ROLES    AGE   VERSION
my-cluster-general-xxxxx   Ready    <none>   10m   v1.29.0
my-cluster-compute-yyyyy   Ready    <none>   2m    v1.29.0
```

Next steps