[bitnami/kafka] Kafka SASL_PLAINTEXT authentication with KRaft results in invalid password
ALLOW_PLAINTEXT_LISTENER=yes
KAFKA_ENABLE_KRAFT=yes
KAFKA_CFG_PROCESS_ROLES=broker,controller
KAFKA_BROKER_ID=1
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
KAFKA_CFG_LISTENERS=BROKER://:9092,CONTROLLER://:9093
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT
KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://localhost:9092
KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
KAFKA_CFG_LISTENER_NAME_BROKER_PLAIN_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="dev01" password="42" user_admin="42";
KAFKA_CFG_LISTENER_NAME_CONTROLLER_PLAIN_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="dev01" password="42" user_admin="42";
BITNAMI_DEBUG=true
Client properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="dev01" \
password="42";
Run this command, for example:
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer.config client.properties
What is the expected behavior?
Client is able to authenticate
What do you see instead?
Client logs:
[2023-04-05 13:03:01,634] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2023-04-05 13:03:01,637] ERROR Error when sending message to topic report-events with key: null, value: 264 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed: Invalid username or password
Server logs:
[2023-04-05 12:03:02,409] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /172.29.0.1 (channelId=172.29.0.7:9092-172.29.0.1:54250-0) (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)
Additional information
I want to set up, for local development, a KRaft Kafka with SASL_PLAINTEXT authentication. I've tried all sorts of configs I found on the internet, but I've been unable to connect the client to the server. On https://hub.docker.com/r/bitnami/kafka/ I have only found examples for KRaft properties or SASL properties, but not both together.
What is the correct set of properties to make it work?
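For reference, here is an untested editor's sketch that combines the KRaft settings above with the KAFKA_CLIENT_* variables the maintainer suggests further down in this thread (KAFKA_CLIENT_USERS, KAFKA_CLIENT_PASSWORDS and KAFKA_CLIENT_LISTENER_NAME are taken from those replies; the inter-broker credentials are placeholders):
environment:
- KAFKA_ENABLE_KRAFT=yes
- KAFKA_BROKER_ID=1
- KAFKA_CFG_PROCESS_ROLES=broker,controller
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_LISTENERS=BROKER://:9092,CONTROLLER://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://localhost:9092
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CLIENT_LISTENER_NAME=BROKER
- KAFKA_CLIENT_USERS=dev01
- KAFKA_CLIENT_PASSWORDS=42
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
- ALLOW_PLAINTEXT_LISTENER=yes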
Is my config wrong? This does not work for me:
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka3:9092,EXTERNAL://10.22.191.138:9100
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=SCRAM-SHA-256
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-256
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_LISTENER_NAME_INTERNAL_SCRAM-SHA-256_SASL_JAAS_CONFIG=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password" user_user="password";
I am also having the exact same issue. I tried recreating it with the following:
client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="dev01" \
password="42";``
environment:
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_ZOOKEEPER_CONNECT=rms_zookeeper:2181
- KAFKA_CFG_LISTENERS=BROKER://:9092,CONTROLLER://:9093
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://localhost:9092
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
- KAFKA_CFG_LISTENER_NAME_BROKER_PLAIN_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="dev01" password="42" user_dev01="42";
- KAFKA_CFG_LISTENER_NAME_CONTROLLER_PLAIN_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="dev01" password="42" user_dev01="42";
- BITNAMI_DEBUG=true
Running this from /opt/bitnami:
$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer.config consumer.properties
[2023-08-04 12:29:21,885] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
Is this auto-generated, or should I configure it myself?
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="dev01" \
password="42";``
Same issue here.
With the latest version, I receive this error message when connecting from a client: (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)
services:
  zookeeper:
    image: bitnami/zookeeper:latest
  kafka:
    image: bitnami/kafka:latest
When I used these versions, the problem was solved:
services:
  zookeeper:
    image: bitnami/zookeeper:3.8.0
  kafka:
    image: bitnami/kafka:3.1.1
@Unique201 @WhiteStart,
In order to configure usernames and passwords, you could use KAFKA_CLIENT_USERS/KAFKA_CLIENT_PASSWORDS.
In the latest version of the bitnami/kafka image, we have introduced a change to use the listener.name.<listener_name>.<sasl_mechanism>.sasl.jaas.config properties recommended by Kafka, instead of the previous kafka_jaas.conf approach.
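For example, with this change an environment variable such as:
KAFKA_CFG_LISTENER_NAME_BROKER_PLAIN_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="dev01" password="42" user_dev01="42";
ends up in server.properties as:
listener.name.broker.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="dev01" password="42" user_dev01="42";
(Note that with the PLAIN login module each accepted client user needs a matching user_<username> entry, hence user_dev01 here rather than the user_admin entry shown in the original report.)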
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=rms_zookeeper:2181
# Zookeeper SASL credentials
#- KAFKA_ZOOKEEPER_USER=zk_user
#- KAFKA_ZOOKEEPER_PASSWORD=zk_pass
#- KAFKA_ZOOKEEPER_PROTOCOL=SASL
- KAFKA_CFG_LISTENERS=BROKER://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://localhost:9092
- KAFKA_CLIENT_USERS=dev01
- KAFKA_CLIENT_PASSWORDS=41
- KAFKA_CLIENT_LISTENER_NAME=BROKER
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
Note: I omitted the CONTROLLER listener, as it is not required in Zookeeper mode.
This will configure the server.properties with the following values:
$ cat /opt/bitnami/kafka/config/server.properties | grep -Ev "^$|^#"
listeners=BROKER://:9092
advertised.listeners=BROKER://localhost:9092
listener.security.protocol.map=BROKER:SASL_PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bitnami/kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
zookeeper.connect=zookeeper:2181
controller.quorum.voters=
inter.broker.listener.name=BROKER
max.partition.fetch.bytes=
max.request.size=
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.controller.protocol=
sasl.mechanism.inter.broker.protocol=PLAIN
listener.name.broker.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="inter_broker_user" password="inter_broker_password" user_dev01="42" user_inter_broker_user="inter_broker_password";
listener.name.broker.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="inter_broker_user" password="inter_broker_password";
listener.name.broker.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="inter_broker_user" password="inter_broker_password";
And additionally, producer.properties and consumer.properties with the following:
$ cat /opt/bitnami/kafka/config/producer.properties | grep -Ev "^$|^#"
bootstrap.servers=localhost:9092
compression.type=none
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="dev01" password="42";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
$ cat /opt/bitnami/kafka/config/consumer.properties | grep -Ev "^$|^#"
bootstrap.servers=localhost:9092
group.id=test-consumer-group
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="dev01" password="42";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
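With those files generated inside the container, a quick check using the console clients (the same commands shown later in this thread) would be:
$ kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic test --producer.config /opt/bitnami/kafka/config/producer.properties
$ kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --consumer.config /opt/bitnami/kafka/config/consumer.properties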
KAFKA_BROKER_ID: '1'
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'false'
KAFKA_CLIENT_LISTENER_NAME: EXTERNAL
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
KAFKA_CFG_LISTENERS: INTERNAL://:9091,EXTERNAL://:9092
KAFKA_CFG_ADVERTISED_LISTENERS: INTERNAL://kafka:9091,EXTERNAL://kafka.local.net:30008
KAFKA_CFG_INTER_BROKER_LISTENER_NAME: INTERNAL
KAFKA_CFG_SASL_ENABLED_MECHANISMS: PLAIN
KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
KAFKA_INTER_BROKER_USER: sa
KAFKA_INTER_BROKER_PASSWORD: 000000
KAFKA_CFG_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'false'
KAFKA_CFG_SUPER_USERS: User:sa
KAFKA_CFG_LISTENER_NAME_INTERNAL_PLAIN_SASL_JAAS_CONFIG: >
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="sa"
  password="000000"
  user_sa="000000";
KAFKA_CFG_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG: >
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="sa"
  password="000000"
  user_sa="000000";
depends_on:
- zookeeper
Hi @lehong3000,
Could you please try pulling the latest version of the bitnami/kafka image?
I have just implemented a fix that affected Zookeeper SCRAM user creation (#43829).
Additionally, could you please make the following changes to your docker-compose?
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=false
- KAFKA_CLIENT_LISTENER_NAME=EXTERNAL
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
- KAFKA_CFG_LISTENERS=INTERNAL://:9091,EXTERNAL://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9091,EXTERNAL://kafka.local.net:30008
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_INTER_BROKER_USER=sa
- KAFKA_INTER_BROKER_PASSWORD=000000
- KAFKA_CLIENT_USERS=sa
- KAFKA_CLIENT_PASSWORDS=000000
- KAFKA_CFG_AUTHORIZER_CLASS_NAME=kafka.security.authorizer.AclAuthorizer
- KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND=false
- KAFKA_CFG_SUPER_USERS=User:sa
- ALLOW_PLAINTEXT_LISTENER is no longer needed.
- Use KAFKA_CLIENT_USERS and KAFKA_CLIENT_PASSWORDS to ensure SCRAM users are created in Zookeeper.
- There is no need to use KAFKA_CFG_LISTENER_NAME_INTERNAL_PLAIN_SASL_JAAS_CONFIG, as the JAAS entries will be automatically configured based on the values of KAFKA_CLIENT_USERS/KAFKA_CLIENT_PASSWORDS (see the example below).
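Based on the generated server.properties shown earlier in this thread, the auto-configured JAAS entries for this setup should look roughly like this (illustrative, not copied from a running container):
listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="sa" password="000000" user_sa="000000";
listener.name.external.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="sa" password="000000" user_sa="000000";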
@migruiz4 Thank you, your fix worked.
I was hoping someone could show me how to set it up with a different username and password for the consumer and the producer, for example:
Consumer:
Username: userC
Password: password
Consumer Group ID: grp-svc-kafka-consumer
Producer:
Username: producerC
Password: password
Consumer Group ID: grp-svc-kafka-producer
Is this something that is possible?
Basing it off of this setup, which now works: #29327 (comment)
Hi @Unique201,
You could create multiple users by passing KAFKA_CLIENT_USERS/KAFKA_CLIENT_PASSWORDS as comma-separated lists:
- KAFKA_CLIENT_USERS=user1,user2,user3
- KAFKA_CLIENT_PASSWORDS=pass1,pass2,pass3
Sadly, both consumer.properties and producer.properties will be configured with the first username of the list.
I would recommend using the automatically generated files as a reference to build your own.
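For the additional users, a separate client properties file modelled on the generated ones shown earlier in this thread can be passed to the client instead; e.g. a hypothetical user2.properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user2" password="pass2";
and then, for example:
$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config user2.properties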
Yes, that is what I tried originally. I have a consumer.properties file:
bootstrap.servers=localhost:9092
compression.type=none
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="test" password="test";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
volumes:
- "kafka_data:/bitnami"
- ${CONSUMER_PROPS}:/opt/bitnami/kafka/config/consumer.properties:ro
and configured as follows:
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=rms_zookeeper:2181
- KAFKA_CFG_LISTENERS=BROKER://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://localhost:9092
- KAFKA_CLIENT_USERS=dev01,test
- KAFKA_CLIENT_PASSWORDS=41,test
- KAFKA_CLIENT_LISTENER_NAME=BROKER
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
Adding in the custom consumer.properties seemed to break it; it was working before, but now:
[2023-08-07 15:22:29,620] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'broker.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:150)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:103)
at org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:74)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:168)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at kafka.server.BrokerToControllerChannelManagerImpl.buildNetworkClient$1(BrokerToControllerChannelManager.scala:204)
at kafka.server.BrokerToControllerChannelManagerImpl.newRequestThread(BrokerToControllerChannelManager.scala:243)
at kafka.server.BrokerToControllerChannelManagerImpl.<init>(BrokerToControllerChannelManager.scala:183)
at kafka.server.BrokerToControllerChannelManager$.apply(BrokerToControllerChannelManager.scala:149)
at kafka.server.KafkaServer.startup(KafkaServer.scala:316)
at kafka.Kafka$.main(Kafka.scala:113)
at kafka.Kafka.main(Kafka.scala)
[2023-08-07 15:22:29,621] INFO [KafkaServer id=1008] shutting down (kafka.server.KafkaServer)
Hi @Unique201,
I'm sorry, but I haven't been able to reproduce the issue. Could you please provide more information about how you are running the producer/consumer?
In case it helps, these are the steps I followed to verify the consumer/producer were properly configured:
Deploy using this docker-compose:
version: "3.6"
services:
zookeeper:
image: 'bitnami/zookeeper:latest'
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
ports:
- "9092:9092"
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CFG_LISTENERS=BROKER://:9092
- KAFKA_CLIENT_LISTENER_NAME=BROKER
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=BROKER
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
- KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://localhost:9092
- KAFKA_CLIENT_USERS=user1,user2,user3
- KAFKA_CLIENT_PASSWORDS=pass1,pass2,pass3
depends_on:
- zookeeper
Connect a producer and consumer using their properties files:
$ docker exec -it <your_kafka_container> bash
# Now inside the container
$ kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic test --producer.config /opt/bitnami/kafka/config/producer.properties
>Hi there!
>This is a test
$ kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --consumer.config /opt/bitnami/kafka/config/consumer.properties
Hi there!
This is a test
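The same check should also work from the host, pointing the client at the mapped port with a local client.properties that uses one of the KAFKA_CLIENT_USERS credentials in the SCRAM format shown above, e.g.:
$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test --producer.config client.properties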
Hi - so I managed to fix the issue I was having. Now I'm having an annoying issue where I'm trying to get my KafkaProducer in my Spring app to connect:
// KafkaConfig.java
@Configuration
public class KafkaConfig {

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Add security-related properties
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"test\" password=\"test\";");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }
}

// KafkaProducer.java
@Service
@Log4j2
@NoArgsConstructor
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @PostConstruct
    private void setUp() {
        // Set the message converter to one that converts the message type to json
        kafkaTemplate.setMessageConverter(new StringJsonMessageConverter());
    }

    public KafkaProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String topic, String message) {
        kafkaTemplate.send(topic, message);
    }
}
It fails authentication with:
(Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
I originally had the security options set in application.properties as:
spring.kafka.bootstrap-servers=host.docker.internal:9092
spring.kafka.properties.security.protocol=SASL_PLAINTEXT
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test;
I thought it was to do with the properties not being picked up by Spring Boot, so I tried configuring the same properties in the bean definitions, but still no luck. Am I misunderstanding something fundamental here?
Hi @Unique201,
I'm sorry but I lack the expertise to help you configure your Kafka producer/consumer client, as my scope is limited to the Bitnami Kafka image and chart.
In case it helps, I found this Stack Overflow question which could be related to your issue: https://stackoverflow.com/questions/60825373/spring-kafka-application-properties-configuration-for-jaas-sasl-not-working
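For reference, an application.properties equivalent of the working console client configuration would be expected to look like the following sketch (the test/test credentials are taken from the snippets above, and localhost:9092 assumes the app runs on the host where that is the advertised listener; note the closing double quote and semicolon at the end of the JAAS line):
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.properties.security.protocol=SASL_PLAINTEXT
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test";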