# Create Your First S3 Bucket
This guide walks you step by step through creating your first Hikube S3 bucket in about 5 minutes: a ready-to-use bucket, valid S3 credentials, and verified connectivity.
## Objective
By the end of this guide, you will have:
- A functional S3 bucket in your tenant
- An S3 access secret automatically generated
- The ability to connect with standard tools (`aws-cli`, `mc`, etc.)
## Prerequisites
Before starting, make sure you have:
- kubectl configured with your Hikube kubeconfig
- The necessary rights on your tenant to create resources
- An S3 tool of your choice installed (e.g., `aws-cli` or `mc`)
## Step 1: Create the Bucket (1 minute)
### Prepare the manifest file
Create a `bucket.yaml` file:
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Bucket
metadata:
  name: example-bucket
```
> The name in `metadata.name` identifies the Kubernetes resource. The actual S3 bucket name is generated automatically.
### Deploy the bucket
```bash
# Create the bucket
kubectl apply -f bucket.yaml

# Verify creation
kubectl get bucket example-bucket -w
```
Expected result:

```
NAME             READY   AGE
example-bucket   True    15s
```
## Step 2: Retrieve Credentials (2 minutes)
Bucket creation generates a Secret (here `bucket-example-bucket`) containing a `BucketInfo` key whose value is JSON.
```bash
# Retrieve and store the JSON in a variable
INFO="$(kubectl get secret bucket-example-bucket -o jsonpath='{.data.BucketInfo}' | base64 -d)"

# Export useful variables
export S3_ENDPOINT="$(echo "$INFO" | jq -r '.spec.secretS3.endpoint')"
export S3_ACCESS_KEY="$(echo "$INFO" | jq -r '.spec.secretS3.accessKeyID')"
export S3_SECRET_KEY="$(echo "$INFO" | jq -r '.spec.secretS3.accessSecretKey')"
export BUCKET_NAME="$(echo "$INFO" | jq -r '.spec.bucketName')"
```
`BUCKET_NAME` is the actual name of your bucket on the S3 side. Use it in the commands below.
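The same extraction can be done in code. The sketch below, purely for illustration, parses a sample `BucketInfo` payload whose field paths mirror the `jq` queries above; the endpoint, keys, and bucket name are placeholders, not real values.

```python
import base64
import json

# Illustrative base64-encoded BucketInfo, as it would appear in the Secret.
# Field paths match the jq queries in this guide; values are placeholders.
bucket_info_b64 = base64.b64encode(json.dumps({
    "spec": {
        "bucketName": "example-bucket-a1b2c3",
        "secretS3": {
            "endpoint": "https://s3.example.com",
            "accessKeyID": "ACCESS_KEY_PLACEHOLDER",
            "accessSecretKey": "SECRET_KEY_PLACEHOLDER",
        },
    },
}).encode()).decode()

# Decode and extract the same four values the shell variables hold
info = json.loads(base64.b64decode(bucket_info_b64))
s3_endpoint = info["spec"]["secretS3"]["endpoint"]
s3_access_key = info["spec"]["secretS3"]["accessKeyID"]
s3_secret_key = info["spec"]["secretS3"]["accessSecretKey"]
bucket_name = info["spec"]["bucketName"]
```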
## Step 3: Test S3 Connection (2 minutes)
With these credentials, you do not have permission to list all buckets on the endpoint. Commands like `ls` must target your bucket: `ls s3://$BUCKET_NAME/` or `ls <alias>/$BUCKET_NAME/`.
### Option A: `aws-cli`
```bash
# Configure a temporary profile
aws configure --profile hikube
# Access Key ID: $S3_ACCESS_KEY
# Secret Access Key: $S3_SECRET_KEY
# Default region name: (leave empty)
# Default output format: json

# List the contents of your bucket (empty right after creation)
aws s3 ls "s3://$BUCKET_NAME/" --endpoint-url "$S3_ENDPOINT" --profile hikube

# Upload a test file
echo "hello hikube" > /tmp/hello.txt
aws s3 cp /tmp/hello.txt "s3://$BUCKET_NAME/hello.txt" --endpoint-url "$S3_ENDPOINT" --profile hikube

# Verify
aws s3 ls "s3://$BUCKET_NAME/" --endpoint-url "$S3_ENDPOINT" --profile hikube
```
### Option B: `mc` (S3 client)
```bash
# Define an alias for the endpoint
mc alias set hikube "$S3_ENDPOINT" "$S3_ACCESS_KEY" "$S3_SECRET_KEY"

# Do NOT run `mc ls hikube` -> AccessDenied
# Target your bucket directly:
mc ls "hikube/$BUCKET_NAME/"

# Upload a test file
echo "hello hikube" > /tmp/hello.txt
mc cp /tmp/hello.txt "hikube/$BUCKET_NAME/hello.txt"

# Verify
mc ls "hikube/$BUCKET_NAME/"
```
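These tools all work against the endpoint because it speaks the standard S3 API, which authenticates requests with AWS Signature Version 4. As a minimal illustrative sketch of what `aws-cli`, `mc`, and the SDKs do under the hood, the function below derives a SigV4 `Authorization` header using only the standard library; the region and service values are assumptions, and real clients handle far more (query-string canonicalization, chunked payloads, retries).

```python
import datetime
import hashlib
import hmac


def _sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sigv4_authorization(access_key, secret_key, method, host, path,
                        payload=b"", region="us-east-1", service="s3",
                        now=None):
    """Build a minimal AWS Signature Version 4 Authorization header."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")
    payload_hash = hashlib.sha256(payload).hexdigest()

    # 1. Canonical request: method, URI, query, headers, signed headers, payload hash
    canonical_headers = (f"host:{host}\n"
                         f"x-amz-content-sha256:{payload_hash}\n"
                         f"x-amz-date:{amz_date}\n")
    signed_headers = "host;x-amz-content-sha256;x-amz-date"
    canonical_request = "\n".join(
        [method, path, "", canonical_headers, signed_headers, payload_hash])

    # 2. String to sign, scoped to date/region/service
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    # 3. Derive the signing key through the HMAC chain and sign
    k_date = _sign(("AWS4" + secret_key).encode(), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    k_signing = _sign(k_service, "aws4_request")
    signature = hmac.new(k_signing, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
            f"SignedHeaders={signed_headers}, Signature={signature}")
```

In practice you never write this yourself; it is exactly why any SigV4-capable tool or SDK can talk to the Hikube endpoint once given `$S3_ENDPOINT`, `$S3_ACCESS_KEY`, and `$S3_SECRET_KEY`.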
## Cleanup (optional)
```bash
# Delete the bucket (also erases its content)
kubectl delete bucket example-bucket
```
Deleting the bucket permanently erases all data it contains. Check your backups before proceeding.
## Next Steps
- API Reference: complete specification
- Architecture: overview
## To Remember
- The provided credentials give access only to your bucket
- Always target `s3://$BUCKET_NAME/` (or `<alias>/$BUCKET_NAME/`) in your commands
- The S3 endpoint is compatible with standard tools and SDKs
- Strict isolation per tenant, with dedicated credentials