Hi, I'm trying to get a Kafka consumer working, but I'm running into an "SSL handshake failed" error. Any ideas?
2022-07-18 14:00:45,216 INFO [NiFi Web Server-203] o.a.n.c.s.StandardProcessScheduler Starting ConsumeKafkaRecord_2_6[id=f5ee162d-1006-1181-c1d1-1d8a7293ffb7]
2022-07-18 14:00:45,217 INFO [NiFi Web Server-203] o.a.n.controller.StandardProcessorNode Starting ConsumeKafkaRecord_2_6[id=f5ee162d-1006-1181-c1d1-1d8a7293ffb7]
2022-07-18 14:00:45,217 INFO [Timer-Driven Process Thread-5] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled ConsumeKafkaRecord_2_6[id=f5ee162d-1006-1181-c1d1-1d8a7293ffb7] to run with 1 threads
2022-07-18 14:00:45,219 INFO [Timer-Driven Process Thread-8] o.a.k.clients.consumer.ConsumerConfig ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [bootstrap-url:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-integration.cubo-transactions-consumer-20
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = integration.cubo-transactions-consumer
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 10000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = SCRAM-SHA-512
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /opt/nifi-toolkit-1.15.3/bin/target/CN=localhost_OU=NIFI.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2022-07-18 14:00:45,224 INFO [Timer-Driven Process Thread-8] o.a.k.c.s.authenticator.AbstractLogin Successfully logged in.
2022-07-18 14:00:45,291 INFO [Timer-Driven Process Thread-8] o.a.kafka.common.utils.AppInfoParser Kafka version: 2.6.3
2022-07-18 14:00:45,291 INFO [Timer-Driven Process Thread-8] o.a.kafka.common.utils.AppInfoParser Kafka commitId: c24cbd3f5eeffa1e
2022-07-18 14:00:45,291 INFO [Timer-Driven Process Thread-8] o.a.kafka.common.utils.AppInfoParser Kafka startTimeMs: 1658163645291
2022-07-18 14:00:45,291 INFO [Timer-Driven Process Thread-8] o.a.kafka.clients.consumer.KafkaConsumer [Consumer clientId=consumer-integration.cubo-transactions-consumer-20, groupId=integration.cubo-transactions-consumer] Subscribed to topic(s): integration.cubo-transactions
2022-07-18 14:00:45,386 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@558d7d23 // Another save pending = false
2022-07-18 14:00:45,532 INFO [pool-9-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2022-07-18 14:00:45,532 INFO [pool-9-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 28 records in 0 milliseconds
2022-07-18 14:00:47,314 INFO [Timer-Driven Process Thread-2] org.apache.kafka.common.network.Selector [Consumer clientId=consumer-integration.cubo-transactions-consumer-20, groupId=integration.cubo-transactions-consumer] Failed authentication with bootstrap-url (SSL handshake failed)
2022-07-18 14:00:47,314 ERROR [Timer-Driven Process Thread-2] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-integration.cubo-transactions-consumer-20, groupId=integration.cubo-transactions-consumer] Connection to node -1 (bootstrap-url:9092) failed authentication due to: SSL handshake failed
2022-07-18 14:00:47,314 WARN [Timer-Driven Process Thread-2] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-integration.cubo-transactions-consumer-20, groupId=integration.cubo-transactions-consumer] Bootstrap broker bootstrap-url:9092 (id: -1 rack: null) disconnected
2022-07-18 14:00:47,315 ERROR [Timer-Driven Process Thread-2] o.a.n.p.k.pubsub.ConsumeKafkaRecord_2_6 ConsumeKafkaRecord_2_6[id=f5ee162d-1006-1181-c1d1-1d8a7293ffb7] Exception while interacting with Kafka so will close the lease org.apache.nifi.processors.kafka.pubsub.ConsumerPool$SimpleConsumerLease@6e83a054 due to org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
↳ causes: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
↳ causes: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
↳ causes: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alert.createSSLException(Alert.java:131)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:324)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:267)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:262)
at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)
at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)
at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:547)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:245)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:480)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1261)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.nifi.processors.kafka.pubsub.ConsumerLease.poll(ConsumerLease.java:190)
at org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_2_6.onTrigger(ConsumeKafkaRecord_2_6.java:488)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1273)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:103)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:456)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:323)
at sun.security.validator.Validator.validate(Validator.java:271)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:315)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:278)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)
... 38 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:451)
... 44 common frames omitted
@Alevc
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
The TLS exception you are encountering is caused by the lack of a complete trust chain in the mutual TLS handshake.
On each side (server and client) of your TLS connection, you will have a keystore containing a PrivateKeyEntry (supporting an extended key usage (EKU) of clientAuth, serverAuth, or both) that your client or server will use to identify itself. That PrivateKeyEntry has an owner DN and an issuer DN associated with it; the issuer is the signer of the owner.
Each side will also have a truststore (just another keystore by a different name, containing a set of TrustedCertEntries) that needs to contain the TrustedCertEntry for the issuer/signer of your PrivateKeyEntry. It is also very common that this issuer/signer TrustedCertEntry has an owner DN and issuer DN that do not match. That means the issuer was only an intermediate Certificate Authority (CA) and was itself issued/signed by another CA, so the truststore must also contain the TrustedCertEntry for that next-level issuer CA. This continues until you reach the root CA TrustedCertEntry, where the owner and issuer have the same DN; that entry is the root CA for your PrivateKeyEntry. Having all the intermediate CA(s) plus the root CA means you have the complete trust chain in your truststore.
This applies in both directions of the mutual TLS handshake: the clientAuth certificate presented by your Kafka consumer must have its complete trust chain in the Kafka server's truststore, and the serverAuth certificate presented by the server must have its complete trust chain in the truststore used by your Kafka consumer client.
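To see what is actually in your truststore, here is a minimal Java sketch (the default path, password, and class name are placeholder assumptions, not taken from your configuration) that loads a PKCS12 truststore and prints the owner and issuer DN of every TrustedCertEntry, so you can walk the chain and spot where it breaks:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

public class TruststoreDump {
    public static void main(String[] args) throws Exception {
        // Placeholder path and password -- substitute your own truststore details.
        String path = args.length > 0 ? args[0] : "truststore.p12";
        char[] password = (args.length > 1 ? args[1] : "changeit").toCharArray();

        KeyStore ts = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(path)) {
            ts.load(in, password);
        }

        Enumeration<String> aliases = ts.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (ts.isCertificateEntry(alias)) {
                X509Certificate cert = (X509Certificate) ts.getCertificate(alias);
                System.out.println("alias : " + alias);
                System.out.println("owner : " + cert.getSubjectX500Principal());
                System.out.println("issuer: " + cert.getIssuerX500Principal());
                // Owner == issuer means this entry is a self-signed root CA.
                System.out.println();
            }
        }
    }
}

The same information can be obtained with keytool -list -v -keystore <truststore>. An entry whose issuer DN does not appear as the owner DN of any other entry is where your chain is incomplete.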
Note: I am oversimplifying the mutual TLS handshake here (private keys themselves are never shared, and there is more to the server and client hello exchanges), but the intent is to focus at a high level on what is specifically causing your issue.
So to get past your issue, you need to make sure the truststores used by both your client and your server contain the complete CA trust chain as TrustedCertEntries.
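If an intermediate or root CA is missing, one way to add it is via the Java KeyStore API. This is a sketch only, assuming the missing CA certificate is available as a PEM or DER file; the file names, alias, and password below are hypothetical:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class AddCaToTruststore {
    public static void main(String[] args) throws Exception {
        // Hypothetical names -- replace with your truststore, password, and CA cert.
        char[] password = "changeit".toCharArray();

        KeyStore ts = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("truststore.p12")) {
            ts.load(in, password);
        }

        // Import the missing intermediate/root CA certificate (PEM or DER both work).
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream certIn = new FileInputStream("kafka-root-ca.pem")) {
            X509Certificate ca = (X509Certificate) cf.generateCertificate(certIn);
            ts.setCertificateEntry("kafka-root-ca", ca);
        }

        // Write the updated truststore back to disk.
        try (FileOutputStream out = new FileOutputStream("truststore.p12")) {
            ts.store(out, password);
        }
    }
}

keytool -importcert -alias <alias> -file <ca-cert> -keystore <truststore> accomplishes the same from the command line. Repeat for every intermediate CA up to the root, then restart the processor.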
If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.
Thank you,
Matt