Installing KEDA
You can use any Kubernetes cluster. Installing KEDA is simple; I use Helm:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
Now check that the KEDA operator and the external metrics API server are up:
kubectl get po -n keda
kubectl get apiservices v1beta1.external.metrics.k8s.io
That’s it, the KEDA operator is installed. Let’s move on.
Let’s deploy “Hello World”
kubectl create ns nginx-demo
kubectl apply -n nginx-demo -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/controllers/nginx-deployment.yaml
Everything is good: this deployment has 3 pods, as it should.
Let’s scale the pods depending on the number of messages in the RabbitMQ queue.
Let’s simulate a real-world situation where RabbitMQ is not running inside Kubernetes but on a different network. But before that, let’s find the IP address of the local machine (here on macOS; interface names may differ), since we will need it soon:

ipconfig getifaddr en0
192.168.1.101
All right, let’s install RabbitMQ and start it (here via Homebrew, since the binaries live under /usr/local/sbin):

brew install rabbitmq
RABBITMQ_NODE_IP_ADDRESS=0.0.0.0 /usr/local/sbin/rabbitmq-server
Let’s make sure everything is up:
/usr/local/sbin/rabbitmqadmin -H 127.0.0.1 -u guest -p guest \
  list connections
No items
Everything is good. The only catch is that the default guest user can only connect from 127.0.0.1, so we need to add a new user (demo/demo) and create a queue (demo_queue) for testing:
/usr/local/sbin/rabbitmqadmin --host 127.0.0.1 -u guest -p guest \
  declare user name=demo password=demo tags=administrator
user declared
/usr/local/sbin/rabbitmqadmin --host 127.0.0.1 -u guest -p guest \
  declare permission vhost=/ user=demo configure=".*" write=".*" read=".*"
permission declared
/usr/local/sbin/rabbitmqadmin --host 127.0.0.1 -u guest -p guest \
  declare queue name=demo_queue
queue declared
After that, we can connect via the “external” IP address:
/usr/local/sbin/rabbitmqadmin -H 192.168.1.101 -u demo -p demo \
  list queues
RabbitMQ is ready, now we need to connect KEDA to it. For this, we need to deploy three objects:
Secret – it will store the connection string to RabbitMQ: amqp://demo:demo@192.168.1.101:5672/
TriggerAuthentication – an authentication object that uses the data from the secret above
ScaledObject – the object where we can configure various scaling parameters
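The host value in the Secret must be the base64 encoding of the connection string. A quick sanity check (a sketch; printf avoids the trailing newline that a bare echo would sneak into the encoded value):

```shell
# Encode the RabbitMQ connection string for the Secret's data.host field.
HOST='amqp://demo:demo@192.168.1.101:5672/'
ENCODED=$(printf '%s' "$HOST" | base64)
echo "$ENCODED"
# YW1xcDovL2RlbW86ZGVtb0AxOTIuMTY4LjEuMTAxOjU2NzIv

# Decode it back to double-check nothing was mangled:
printf '%s' "$ENCODED" | base64 --decode; echo
```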
The contents of objects.yaml:

---
apiVersion: v1
kind: Secret
metadata:
  name: keda-rabbitmq-secret
data:
  host: YW1xcDovL2RlbW86ZGVtb0AxOTIuMTY4LjEuMTAxOjU2NzIv # echo -n amqp://demo:demo@192.168.1.101:5672/ | base64
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-rabbitmq-conn
spec:
  secretTargetRef:
    - parameter: host
      name: keda-rabbitmq-secret
      key: host
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-scaledobject
spec:
  scaleTargetRef:
    name: nginx-deployment
  minReplicaCount: 1
  maxReplicaCount: 5
  pollingInterval: 10
  cooldownPeriod: 120
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: demo_queue
        mode: QueueLength
        value: "3"
      authenticationRef:
        name: keda-trigger-auth-rabbitmq-conn

Deploy:

kubectl apply -n nginx-demo -f objects.yaml
Let’s check that everything is working:
kubectl get scaledobject -n nginx-demo
kubectl get po -n nginx-demo
kubectl get hpa -n nginx-demo
Testing
Let’s send 5 messages to our RabbitMQ queue:
for i in {1..5}; do
  /usr/local/sbin/rabbitmqadmin --host 192.168.1.101 -u demo -p demo \
    publish exchange=amq.default routing_key=demo_queue payload="message ${i}"
done
/usr/local/sbin/rabbitmqadmin -H 192.168.1.101 -u demo -p demo \
  list queues
Wait a couple of seconds while KEDA fetches the metrics, then check our pods:
kubectl get hpa -n nginx-demo
kubectl get po -n nginx-demo
Let’s add more messages to the queue:
for i in {6..12}; do
  /usr/local/sbin/rabbitmqadmin --host 192.168.1.101 -u demo -p demo \
    publish exchange=amq.default routing_key=demo_queue payload="message ${i}"
done

/usr/local/sbin/rabbitmqadmin -H 192.168.1.101 -u demo -p demo \
  list queues
12 messages in the queue, so in theory 4 pods should be up (12 messages / target value of 3 = 4 replicas). Let’s check:
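For the record, the replica count the KEDA-managed HPA aims for is ceil(queueLength / value), clamped between minReplicaCount and maxReplicaCount. A quick sketch with the numbers from our ScaledObject:

```shell
# desired = ceil(queue_length / target_value), clamped to [MIN, MAX]
MIN=1; MAX=5; VALUE=3   # values from the ScaledObject above
for queue in 0 5 12 30; do
  desired=$(( (queue + VALUE - 1) / VALUE ))   # integer ceiling division
  if [ "$desired" -lt "$MIN" ]; then desired=$MIN; fi
  if [ "$desired" -gt "$MAX" ]; then desired=$MAX; fi
  echo "queue=$queue -> replicas=$desired"
done
# queue=0 -> replicas=1
# queue=5 -> replicas=2
# queue=12 -> replicas=4
# queue=30 -> replicas=5
```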
kubectl get hpa -n nginx-demo
kubectl get po -n nginx-demo
Now let’s clear the queue; the number of pods should drop to the minimum value:
/usr/local/sbin/rabbitmqadmin -H 192.168.1.101 -u demo -p demo \
  purge queue name=demo_queue
queue purged

/usr/local/sbin/rabbitmqadmin -H 192.168.1.101 -u demo -p demo \
  list queues
Everything is good, it scales up and down as expected.
Let’s clean up:
helm uninstall keda -n keda
kubectl delete ns keda
kubectl delete ns nginx-demo