Cluster ID Generation and Kafka Storage Preparation
The cluster ID is a unique identifier generated when the cluster is created. It is used for internal identification of cluster nodes and for metadata management. Changing the ID of an existing Kafka cluster will render the cluster inoperable.
All cluster nodes require Kafka storage preparation using the kafka-storage.sh utility, which creates the directory structure and files needed for Kafka data storage.
- On any Kafka node, execute the following command to generate a cluster_id:
$ KAFKA_CLUSTER_ID="$(JAVA_HOME=/app/jdk/ /app/kafka/bin/kafka-storage.sh random-uuid)"
Get the variable value:
$ echo $KAFKA_CLUSTER_ID
- This variable must be set on all cluster nodes. Use the value from the node where the cluster_id was first generated. Execute:
$ KAFKA_CLUSTER_ID="mK5QOoYJQFaaGLq9wV6uiA"
- On all Kafka nodes with the controller role, execute:
$ JAVA_HOME=/app/jdk/ /app/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /app/kafka/config/kraft/controller.properties
- On all Kafka nodes with the broker role, execute:
$ JAVA_HOME=/app/jdk/ /app/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /app/kafka/config/kraft/broker.properties
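To confirm that the storage was formatted correctly, the same kafka-storage.sh utility provides an info subcommand that reads back the metadata written by format. A minimal check on a controller node (use broker.properties on broker nodes):
$ JAVA_HOME=/app/jdk/ /app/kafka/bin/kafka-storage.sh info -c /app/kafka/config/kraft/controller.properties
The output should include the cluster.id value generated earlier.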
Certificate Store Configuration
This example uses a local Certificate Authority (CA). Each Kafka node receives an individual certificate. You will need the root certificate, the intermediate certificate (if applicable), and the node certificates in .pem format.
Intermediate and root certificates used for authenticating other parties are stored in the truststore. Server certificates and private keys are stored in the keystore. Truststore and keystore names can be arbitrary.
Certificate table example:
Common Name (CN) | Country (C) | State (ST) | Locality (L) | Organization (O) | Subject Alternative Name (SAN)
---|---|---|---|---|---
adminkfk | AE | Dubai | Dubai | Work | -
broker1 | AE | Dubai | Dubai | Work | DNS:broker1, IP:192.168.0.54
broker2 | AE | Dubai | Dubai | Work | DNS:broker2, IP:192.168.0.55
broker3 | AE | Dubai | Dubai | Work | DNS:broker3, IP:192.168.0.56
broker4 | AE | Dubai | Dubai | Work | DNS:broker4, IP:192.168.0.57
broker5 | AE | Dubai | Dubai | Work | DNS:broker5, IP:192.168.0.58
controller1 | AE | Dubai | Dubai | Work | DNS:controller1, IP:192.168.0.51
controller2 | AE | Dubai | Dubai | Work | DNS:controller2, IP:192.168.0.52
controller3 | AE | Dubai | Dubai | Work | DNS:controller3, IP:192.168.0.53
producer_host1 | AE | Dubai | Dubai | Work | DNS:producer_host1, IP:192.168.0.71
consumer_host1 | AE | Dubai | Dubai | Work | DNS:consumer_host1, IP:192.168.0.81
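Issuing these certificates is up to your local CA workflow and is not covered here, but as a rough sketch, a private key and a certificate signing request matching the broker1 row could be generated with openssl (the file names are illustrative; the -addext option requires OpenSSL 1.1.1 or newer):
$ openssl req -new -newkey rsa:2048 -nodes -keyout broker1-key.pem -out broker1.csr -subj "/C=AE/ST=Dubai/L=Dubai/O=Work/CN=broker1" -addext "subjectAltName=DNS:broker1,IP:192.168.0.54"
The resulting broker1.csr would then be signed by the local CA to produce broker1-cert.pem.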
Root (ca-cert.pem) and intermediate (ca-sub-cert.pem) certificates must be placed in a store named ca-truststore.jks.
To add the root certificate, execute:
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -keystore /app/certs/ca-truststore.jks -alias ca-root -importcert -file ca-cert.pem
This command creates the truststore file. The system will prompt for a new store password (required for adding further certificates and used by Kafka when accessing the truststore). When asked Trust this certificate? [no]:, answer yes. On completion, the message Certificate was added to keystore is displayed.
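If you prefer a non-interactive import (for example, in a provisioning script), keytool accepts the store password and trust confirmation on the command line; a sketch, with PasswordTruststore as a placeholder for the password you choose:
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -keystore /app/certs/ca-truststore.jks -alias ca-root -importcert -file ca-cert.pem -storepass PasswordTruststore -noprompt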
For the intermediate certificate, repeat the same step:
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -keystore /app/certs/ca-truststore.jks -alias ca-sub-root -importcert -file ca-sub-cert.pem
The ca-truststore.jks file must be identical on all Kafka nodes and must also be distributed to the other hosts by your preferred method.
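Any transfer method works; as one illustrative option, assuming SSH access between the hosts (the user name is a placeholder):
$ scp /app/certs/ca-truststore.jks user@broker2:/app/certs/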
Server certificates and their corresponding private keys must be placed in kafka.keystore.jks. Since keytool does not support importing a private key on its own, first combine the key and certificate into a .p12 (PKCS12) file:
$ openssl pkcs12 -export -in broker1-cert.pem -inkey broker1-key.pem -name broker1 -out broker1.p12
The system will prompt for an export password (Enter Export Password). Save this password: it will be required when importing the certificate into the kafka.keystore.jks store. To import the certificate into the keystore, execute:
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -importkeystore -destkeystore /app/certs/kafka.keystore.jks -srckeystore broker1.p12 -srcstoretype PKCS12
The system will prompt you to set a new password (Enter destination keystore password), which should also be saved, and will then ask for the export password set in the previous step. On completion, the system displays: Entry for alias broker1 successfully imported. Import command completed: 1 entries successfully imported, 0 entries failed or cancelled.
To verify the imported certificates, list the contents of both stores:
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -keystore /app/certs/kafka.keystore.jks -list
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -keystore /app/certs/ca-truststore.jks -list
The passwords for accessing the certificate stores must be specified in the configuration files /app/kafka/config/kraft/broker.properties and /app/kafka/config/kraft/controller.properties under the Security section, using the ssl.keystore.password and ssl.truststore.password parameters.
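As a sketch, the relevant lines in those files would look like the following; the passwords are placeholders for the values chosen when the stores were created:
ssl.keystore.location=/app/certs/kafka.keystore.jks
ssl.keystore.password=PasswordKeystore
ssl.truststore.location=/app/certs/ca-truststore.jks
ssl.truststore.password=PasswordTruststore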
The same steps for importing the node's own certificate into kafka.keystore.jks must be performed on each Kafka node, since a separate certificate was prepared for each node.
For cluster administration, a separate certificate with CN = adminkfk has been created, and a dedicated keystore needs to be created for it. This keystore can be placed on any Kafka cluster node or on an external node (in which case additional cluster management tools would be required; that scenario is not covered in this article). In this example, cluster management is performed only from the first node with the broker role. On the first broker node, execute these commands:
$ openssl pkcs12 -export -in adminkfk-cert.pem -inkey adminkfk-key.pem -name adminkfk -out adminkfk.p12
$ JAVA_HOME=/app/jdk/ /app/jdk/bin/keytool -importkeystore -destkeystore /app/certs/adminkfk.keystore.jks -srckeystore adminkfk.p12 -srcstoretype PKCS12
The passwords for accessing ca-truststore.jks and adminkfk.keystore.jks must be specified in the /app/certs/adminkfk.properties file:
security.protocol=SSL
ssl.keystore.location=/app/certs/adminkfk.keystore.jks
ssl.keystore.password=PasswordKeystore
ssl.truststore.location=/app/certs/ca-truststore.jks
ssl.truststore.password=PasswordTruststore
client.id=adminkfk
ssl.endpoint.identification.algorithm=
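With this file in place, the standard Kafka CLI tools can authenticate via their --command-config option. For example, listing topics over SSL (broker1:9093 is an assumption; substitute the host and SSL listener port from your listeners configuration):
$ JAVA_HOME=/app/jdk/ /app/kafka/bin/kafka-topics.sh --bootstrap-server broker1:9093 --command-config /app/certs/adminkfk.properties --list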
Pay attention to the ssl.endpoint.identification.algorithm= parameter. The same parameter appears in the controller.properties and broker.properties configuration files and is also used in client configurations (producer/consumer). It controls whether the hostname in the server or client certificate is checked against the actual hostname during connection establishment.
By default it has the value https, which enables hostname verification similar to HTTPS: Kafka compares the CN or SAN in the certificate against the hostname used for the connection.
When set to an empty string, hostname verification is disabled.
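For reference, the two variants in a properties file:
# hostname verification enabled (the default)
ssl.endpoint.identification.algorithm=https
# hostname verification disabled
ssl.endpoint.identification.algorithm=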