Introduction
This guide explains how to deploy a fully functional, high-availability Kafka cluster in KRaft mode using Strimzi 0.48.0 on Kubernetes.
When Bitnami recently restructured its repository access, many of its popular Helm charts – including those for Apache Kafka – were no longer publicly available. That change left many Kubernetes users, myself included, looking for reliable, production-ready alternatives.
After evaluating several options, I found Strimzi to be the most robust and Kubernetes-native way to deploy Kafka. Strimzi isn’t just another Helm chart – it’s a full-featured Operator built to manage Apache Kafka clusters declaratively within Kubernetes, including the modern KRaft mode (Kafka without ZooKeeper).
In this guide, I’ll walk you through deploying a production-ready Kafka cluster using Strimzi in KRaft mode, along with fixes for common RBAC permission issues you might encounter during installation.
Key Advantages of Strimzi
- Native Kubernetes Integration – Seamlessly works with Kubernetes resources and APIs
- Automatic Cluster Management – Handles recovery, scaling, and rolling upgrades
- KRaft Mode Support – Deploy a lightweight, ZooKeeper-free Kafka cluster
- Built-in Security – Supports TLS, authentication, and fine-grained authorization
- Active Community – Backed by Red Hat and the Apache Kafka ecosystem
Prerequisites
Before starting, make sure you have:
- A Kubernetes cluster (tested on v1.34.1; compatible with v1.28+)
- At least 3 worker nodes (for high availability)
- A functional StorageClass (this guide uses rook-cephfs)
- kubectl configured to access your cluster (a quick sanity check follows this list)
- Basic understanding of Kubernetes resources (Pods, Deployments, CRDs, etc.)
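If you want to verify the environment before starting, a minimal check (assuming the rook-cephfs StorageClass named above) looks like this:
# Confirm cluster access and that at least 3 schedulable worker nodes exist
kubectl get nodes
# Confirm the StorageClass used in this guide is available
kubectl get storageclass rook-cephfs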
Architecture Overview
Here’s what we’ll deploy:
| Component | Purpose |
|---|---|
| 3 Kafka brokers | Handle message streaming and data replication |
| 3 Kafka controllers | Manage cluster metadata and consensus in KRaft mode |
| Entity Operator | Manages Kafka topics and users |
| Strimzi Cluster Operator | Automates Kafka lifecycle management |
To ensure high availability, broker and controller pods will be distributed evenly across worker nodes using pod anti-affinity rules.
1. Install Strimzi Cluster Operator 0.48.0
# Create namespace
kubectl create namespace kafka
# Download and deploy Strimzi 0.48.0
STRIMZI_VERSION="0.48.0"
curl -L "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${STRIMZI_VERSION}/strimzi-${STRIMZI_VERSION}.tar.gz" \
-o strimzi-${STRIMZI_VERSION}.tar.gz
tar -xzf strimzi-${STRIMZI_VERSION}.tar.gz
cd strimzi-${STRIMZI_VERSION}
kubectl apply -f install/cluster-operator/ -n kafka
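At this point the operator Deployment should exist, though it may not be fully healthy yet. A quick check (resource names below are the defaults from the Strimzi install manifests):
# The operator Deployment and its pod should appear in the kafka namespace
kubectl get deployment,pods -n kafka
# Tail the logs; authorization errors (e.g. "forbidden") here are what the next step addresses
kubectl logs deployment/strimzi-cluster-operator -n kafka --tail=50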
2. Fix RBAC Permissions
- Strimzi’s default installation YAML files are hardcoded to use the myproject namespace (used in their examples)
- When you install Strimzi in a different namespace (like kafka), the RoleBindings and ClusterRoleBindings still reference myproject in their subjects section
- As a result, the bindings grant permissions to the operator’s ServiceAccount in the wrong namespace (myproject instead of kafka), and the operator fails with authorization errors
- Apply the patches below to fix them; the quick check that follows confirms the mismatch first
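A minimal pre-check (assuming the default binding name from the Strimzi install files): if the command prints myproject instead of kafka, the patches below are needed.
# Show which namespace the RoleBinding's subject currently points at
kubectl get rolebinding strimzi-cluster-operator -n kafka \
  -o jsonpath='{.subjects[0].namespace}{"\n"}'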
# Fix RoleBindings namespace
kubectl patch rolebinding strimzi-cluster-operator -n kafka \
--type='json' -p='[{"op": "replace", "path": "/subjects/0/namespace", "value": "kafka"}]'
kubectl patch rolebinding strimzi-cluster-operator-leader-election -n kafka \
--type='json' -p='[{"op": "replace", "path": "/subjects/0/namespace", "value": "kafka"}]'
kubectl patch rolebinding strimzi-cluster-operator-entity-operator-delegation -n kafka \
--type='json' -p='[{"op": "replace", "path": "/subjects/0/namespace", "value": "kafka"}]'
kubectl patch rolebinding strimzi-cluster-operator-watched -n kafka \
--type='json' -p='[{"op": "replace", "path": "/subjects/0/namespace", "value": "kafka"}]'
# Fix ClusterRoleBinding
kubectl patch clusterrolebinding strimzi-cluster-operator \
--type='json' -p='[{"op": "replace", "path": "/subjects/0/namespace", "value": "kafka"}]'
# Add missing permissions
kubectl patch clusterrole strimzi-cluster-operator-global \
--type='json' -p='[{"op": "add", "path": "/rules/-", "value": {
"apiGroups": [""],
"resources": ["nodes"],
"verbs": ["get", "list", "watch"]
}}]'
kubectl patch clusterrole strimzi-cluster-operator-global \
--type='json' -p='[{"op": "add", "path": "/rules/-", "value": {
"apiGroups": ["rbac.authorization.k8s.io"],
"resources": ["clusterrolebindings"],
"verbs": ["get", "list", "watch", "create", "patch", "delete"]
}}]'
# Restart the operator to apply changes
kubectl rollout restart deployment strimzi-cluster-operator -n kafka
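After the restart, it is worth confirming the operator comes back clean. A short check (no output from the grep means no recent authorization errors):
# Wait for the restarted operator to become available
kubectl rollout status deployment/strimzi-cluster-operator -n kafka
# Scan recent logs for lingering authorization errors
kubectl logs deployment/strimzi-cluster-operator -n kafka --since=5m | grep -i forbidden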
3. Create the Kafka Cluster Resource
In Strimzi 0.48.0, KRaft is the only supported mode; ZooKeeper-based clusters are no longer supported, so the cluster below runs entirely without ZooKeeper.
Create the main cluster definition file kafka-cluster.yaml:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: kafka-cluster
namespace: kafka
spec:
kafka:
version: 4.1.0
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: external
port: 9094
type: nodeport
tls: false
configuration:
externalTrafficPolicy: Cluster
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
auto.create.topics.enable: "true"
log.flush.interval.messages: "100000"
log.flush.interval.ms: "5000"
log.retention.bytes: "2147483648"
log.retention.check.interval.ms: "300000"
log.retention.hours: "72"
entityOperator:
topicOperator:
resources:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 512Mi
cpu: 200m
userOperator:
resources:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 512Mi
cpu: 200m
4. Create Kafka Node Pools
Strimzi models KRaft clusters with KafkaNodePool resources. This guide uses separate pools for brokers and controllers; a pool can also combine both roles, but dedicated controllers are the usual layout for production.
Brokers Node Pool (kafka-nodepool-brokers.yaml)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
name: brokers
namespace: kafka
labels:
strimzi.io/cluster: kafka-cluster
spec:
replicas: 3
roles:
- broker
resources:
requests:
memory: 2Gi
cpu: 1000m
limits:
memory: 4Gi
cpu: 2000m
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 8Gi
deleteClaim: false
class: rook-cephfs
template:
kafkaContainer:
env:
- name: KAFKA_HEAP_OPTS
value: "-Xmx2048m -Xms2048m"
pod:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
strimzi.io/cluster: kafka-cluster
strimzi.io/kind: Kafka
strimzi.io/pool-name: brokers
topologyKey: kubernetes.io/hostname
Controllers Node Pool (kafka-nodepool-controllers.yaml)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
name: controllers
namespace: kafka
labels:
strimzi.io/cluster: kafka-cluster
spec:
replicas: 3
roles:
- controller
resources:
requests:
memory: 1Gi
cpu: 500m
limits:
memory: 2Gi
cpu: 1000m
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 8Gi
deleteClaim: false
class: rook-cephfs
template:
pod:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
strimzi.io/cluster: kafka-cluster
strimzi.io/kind: Kafka
strimzi.io/pool-name: controllers
topologyKey: kubernetes.io/hostname
5. Deploy the Kafka Cluster
# Deploy the main cluster
kubectl apply -f kafka-cluster.yaml
# Then deploy node pools
kubectl apply -f kafka-nodepool-controllers.yaml
kubectl apply -f kafka-nodepool-brokers.yaml
# Wait for the cluster to become ready
kubectl wait --for=condition=Ready kafka/kafka-cluster -n kafka --timeout=600s
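Once the Kafka resource reports Ready, a quick inventory should show three broker pods, three controller pods, and the Entity Operator:
# List the custom resources and the pods they produced
kubectl get kafka,kafkanodepools -n kafka
kubectl get pods -n kafka -l strimzi.io/cluster=kafka-cluster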
6. Configure External Access (NodePorts) (Optional)
To allow external clients to connect, patch the broker services with custom NodePorts:
kubectl patch svc kafka-cluster-brokers-0 -n kafka \
--type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":32501}]'
kubectl patch svc kafka-cluster-brokers-1 -n kafka \
--type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":32502}]'
kubectl patch svc kafka-cluster-brokers-2 -n kafka \
--type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":32503}]'
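You can confirm the ports were applied by listing the NodePort services; external clients then reach the cluster through any worker node's IP on the patched ports. (Strimzi also supports pinning NodePorts declaratively in the listener's configuration, via bootstrap.nodePort and brokers[].nodePort, which avoids manual patching.)
# The patched NodePorts should show up on the external services
kubectl get svc -n kafka | grep NodePort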
7. Deploy Kafdrop (Optional)
For monitoring topics and messages, deploy Kafdrop.
Create kafdrop.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafdrop
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: kafdrop
template:
metadata:
labels:
app: kafdrop
spec:
containers:
- name: kafdrop
image: obsidiandynamics/kafdrop:latest
ports:
- containerPort: 9000
env:
- name: KAFKA_BROKERCONNECT
value: "kafka-cluster-kafka-bootstrap:9092"
- name: SERVER_SERVLET_CONTEXTPATH
value: "/"
---
apiVersion: v1
kind: Service
metadata:
name: kafdrop
namespace: kafka
spec:
ports:
- port: 9000
targetPort: 9000
selector:
app: kafdrop
type: ClusterIP
Deploy it:
kubectl apply -f kafdrop.yaml
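Since the Service is ClusterIP-only, the simplest way to reach the UI from your workstation is a port-forward:
# Forward the Kafdrop UI to localhost, then open http://localhost:9000
kubectl port-forward svc/kafdrop 9000:9000 -n kafka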
Final Architecture Overview
| Component | Replicas | Role | Description |
|---|---|---|---|
| Controller Pods | 3 | Cluster metadata management | Handle coordination and leadership |
| Broker Pods | 3 | Data processing | Manage topics, partitions, and replication |
| Entity Operator | 1 | User/topic automation | Maintains Kafka users and topics |
| Kafdrop (optional) | 1 | UI | Visualizes cluster data |
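As a final smoke test, you can create a topic through the Topic Operator and push a few messages with Kafka's console clients. This is a sketch: the image tag is assumed to follow Strimzi's usual <strimzi-version>-kafka-<kafka-version> naming, and the topic name is arbitrary.
# Create a test topic declaratively (the Entity Operator reconciles it)
kubectl apply -n kafka -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: smoke-test
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  partitions: 3
  replicas: 3
EOF

# Produce a few messages (type some lines, then Ctrl+C)
kubectl -n kafka run kafka-producer -ti --rm --restart=Never \
  --image=quay.io/strimzi/kafka:0.48.0-kafka-4.1.0 -- \
  bin/kafka-console-producer.sh \
  --bootstrap-server kafka-cluster-kafka-bootstrap:9092 --topic smoke-test

# Read them back
kubectl -n kafka run kafka-consumer -ti --rm --restart=Never \
  --image=quay.io/strimzi/kafka:0.48.0-kafka-4.1.0 -- \
  bin/kafka-console-consumer.sh \
  --bootstrap-server kafka-cluster-kafka-bootstrap:9092 \
  --topic smoke-test --from-beginning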
