# Troubleshooting — S3 Buckets
## AccessDenied when accessing the bucket

**Cause:** the credentials used are incorrect, or the bucket name does not match the real name in the S3 backend.

**Solution:**

- Retrieve the credentials from the Kubernetes Secret:

  ```shell
  kubectl get tenantsecret bucket-<name> -o jsonpath='{.data.BucketInfo}' | base64 -d | jq
  ```

- Use the `spec.bucketName` field as the bucket name (not `metadata.name`):

  ```shell
  aws --endpoint-url https://prod.s3.hikube.cloud s3 ls s3://<spec.bucketName>/
  ```

- Verify that `accessKeyID` and `accessSecretKey` are correctly configured in your S3 tool.
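To test the retrieved keys without touching your global AWS configuration, you can pass them as environment variables for a single invocation. This is a sketch: the placeholder values stand for the keys extracted from the `BucketInfo` JSON above.

```shell
# One-off invocation with the bucket's own credentials; replace the
# placeholders with the values from the BucketInfo JSON.
AWS_ACCESS_KEY_ID=<accessKeyID> \
AWS_SECRET_ACCESS_KEY=<accessSecretKey> \
aws --endpoint-url https://prod.s3.hikube.cloud s3 ls s3://<spec.bucketName>/
```

Setting the variables only for this one command avoids overwriting credentials already configured for other buckets.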
## ListBucket fails on the root

**Cause:** each bucket has its own isolated credentials; a single set of credentials cannot list all buckets.

**Solution:**

- Use the credentials specific to the bucket you want to list:

  ```shell
  aws --endpoint-url https://prod.s3.hikube.cloud s3 ls s3://<spec.bucketName>/
  ```

- To list all your buckets, use `kubectl`:

  ```shell
  kubectl get buckets
  ```

- For each bucket, retrieve the individual credentials from the corresponding Secret.
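The steps above can be combined into a small loop. This is a sketch, assuming every Bucket's Secret follows the `bucket-<name>` / `BucketInfo` layout shown on this page:

```shell
# For every Bucket resource, fetch its own credentials and list its contents.
for b in $(kubectl get buckets -o jsonpath='{.items[*].metadata.name}'); do
  info=$(kubectl get tenantsecret "bucket-$b" -o jsonpath='{.data.BucketInfo}' | base64 -d)
  AWS_ACCESS_KEY_ID=$(echo "$info" | jq -r '.spec.secretS3.accessKeyID') \
  AWS_SECRET_ACCESS_KEY=$(echo "$info" | jq -r '.spec.secretS3.accessSecretKey') \
  aws --endpoint-url https://prod.s3.hikube.cloud s3 ls \
    "s3://$(echo "$info" | jq -r '.spec.bucketName')/"
done
```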
## Credentials not found

**Cause:** the Secret name follows the pattern `bucket-<name>`, where `<name>` is the `metadata.name` of the Bucket resource.

**Solution:**

- List the available Secrets:

  ```shell
  kubectl get tenantsecrets | grep bucket-
  ```

- Extract the access information:

  ```shell
  kubectl get tenantsecret bucket-<name> -o jsonpath='{.data.BucketInfo}' | base64 -d | jq
  ```

- To extract only the keys:

  ```shell
  kubectl get tenantsecret bucket-<name> -o jsonpath='{.data.BucketInfo}' \
    | base64 -d \
    | jq -r '.spec.secretS3 | "\(.accessKeyID) \(.accessSecretKey)"'
  ```
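To hand the extracted keys to the `aws` CLI, you can export them as the standard environment variables. A sketch based on the Secret layout above:

```shell
# Export the bucket's credentials so subsequent aws commands pick them up.
info=$(kubectl get tenantsecret bucket-<name> -o jsonpath='{.data.BucketInfo}' | base64 -d)
export AWS_ACCESS_KEY_ID=$(echo "$info" | jq -r '.spec.secretS3.accessKeyID')
export AWS_SECRET_ACCESS_KEY=$(echo "$info" | jq -r '.spec.secretS3.accessSecretKey')
```

Exported variables take precedence over `~/.aws/credentials`, so remember to `unset` them when switching buckets.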
## Slow upload or timeout

**Cause:** network issues, large files uploaded without multipart, or a distant endpoint.

**Solution:**

- Check your connectivity to the endpoint:

  ```shell
  curl -s -o /dev/null -w "%{time_total}" https://prod.s3.hikube.cloud
  ```

- Use the regional endpoint `https://prod.s3.hikube.cloud` directly (no intermediate CDN).

- For large files, enable multipart upload:

  ```shell
  aws --endpoint-url https://prod.s3.hikube.cloud s3 cp large-file.tar.gz s3://<bucket-name>/ \
    --expected-size $(stat -c%s large-file.tar.gz)
  ```

- With `mc`, multipart is automatic for files larger than 64 MB.
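With the `aws` CLI, the multipart behavior can also be tuned globally via `aws configure set`. The values below are illustrative defaults, not recommendations specific to this platform:

```shell
# Upload files larger than 64 MB in 16 MB parts, up to 10 requests in parallel.
aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 16MB
aws configure set default.s3.max_concurrent_requests 10
```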
## Bucket not found after creation

**Cause:** the real bucket name in the S3 backend (`spec.bucketName`) differs from the Kubernetes resource's `metadata.name`.

**Solution:**

- Check the Bucket resource status:

  ```shell
  kubectl get bucket <name>
  kubectl describe bucket <name>
  ```

- Retrieve the real bucket name from the Secret:

  ```shell
  kubectl get tenantsecret bucket-<name> -o jsonpath='{.data.BucketInfo}' \
    | base64 -d \
    | jq -r '.spec.bucketName'
  ```

- Use this real name to access the bucket:

  ```shell
  aws --endpoint-url https://prod.s3.hikube.cloud s3 ls s3://<real-bucket-name>/
  ```

Do not confuse `metadata.name` (the Kubernetes name) with `spec.bucketName` (the real name in S3); only the latter works for S3 access.
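To spot a mismatch quickly, you can print both names side by side. This sketch assumes `spec.bucketName` is also populated on the Bucket resource itself; if it is not, fall back to the Secret-based command above.

```shell
# Print the Kubernetes name and, if present on the resource, the real S3 name.
kubectl get bucket <name> -o jsonpath='{.metadata.name}{" "}{.spec.bucketName}{"\n"}'
```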