

The cluster works without any problem on OpenShift when using only internal listeners, but I need to connect to the brokers from outside, so I followed the documentation here, extracted the my-cluster-cluster-ca-cert secret, and imported it into a truststore. When I run the producer example

 /opt/kafka/bin/kafka-console-producer.sh --broker-list bootstrap-kafka.mydomain:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./truststore.jks --topic test

I get :

 ERROR [Producer clientId=console-producer] Connection to node -1 (bootstrap-kafka.mydomain/<myip>:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)

Expected behavior
The producer connects to the external listener over SSL without any problem.

Environment (please complete the following information):

  • Strimzi strimzi/kafka:0.13.0-kafka-2.3.0
  • Installation method: [OKD]
  • Kubernetes cluster: [OpenShift 3.9]
  • YAML files and logs

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        version: 2.3.0
        replicas: 2
        listeners:
         external:
           type: route
           authentication:
             type: tls
           overrides:
               bootstrap:
                    host: bootstrap-kafka.mydomain
               brokers:
               - broker: 0
                 host: broker-0.mydomain
               - broker: 1
                 host: broker-1.mydomain
        config:
          offsets.topic.replication.factor: 2
          transaction.state.log.replication.factor: 2
          transaction.state.log.min.isr: 1
          log.message.format.version: "2.3"
        storage:
          type: jbod
          volumes:
          - id: 0
            type: persistent-claim
            size: 30Gi
            deleteClaim: false
      zookeeper:
        replicas: 2
        storage:
          type: persistent-claim
          size: 30Gi
          deleteClaim: false
      entityOperator:
        topicOperator: {}
        userOperator: {}
              

    If you run the client with the Java property -Djavax.net.debug=ssl, it should tell us more about what exactly is failing. My guess from the provided files is that the broker requires TLS client authentication, but you seem to be configuring the client only for TLS server authentication: your kafka-console-producer.sh call sets only the truststore options.
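To pass that debug flag to the console producer, you can use the KAFKA_OPTS environment variable (a sketch; KAFKA_OPTS is read by kafka-run-class.sh, which all the scripts under /opt/kafka/bin delegate to):

```shell
# Enable JSSE debug output for the next producer run:
export KAFKA_OPTS="-Djavax.net.debug=ssl"
# ...then re-run the same kafka-console-producer.sh command as above;
# the handshake failure details are printed to stderr.
```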

    You most probably have two options:

  • Disable TLS client authentication by deleting the following section from your CRD:

           authentication:
             type: tls

    and try the command as you have it.

  • Create a TLS user and specify the ssl.keystore-related options in your producer as well. Let me know if you want to try this and I can put together some more details about how to do it.
  • Right. So for option 2 with mutual TLS, you will need to do the following in addition:

  • Create the KafkaUser resource:
  • apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaUser
    metadata:
      name: my-user
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: tls
  • Export the user's TLS certificate and key from the secret generated by the User Operator:
  • oc extract secret/$USERNAME --keys=user.crt --to=- > user.crt
    oc extract secret/$USERNAME --keys=user.key --to=- > user.key
    
  • Use them to create a keystore:
  • openssl pkcs12 -export -in user.crt -inkey user.key -name my-user -password pass:123456 -out user.p12
    
  • Use it in your application:
  • /opt/kafka/bin/kafka-console-producer.sh --broker-list bootstrap-kafka.mydomain:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./truststore.jks --producer-property ssl.keystore.password=123456 --producer-property ssl.keystore.location=./user.p12 --producer-property ssl.keystore.type=PKCS12 --topic test
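The keystore step can be sanity-checked locally (a sketch using a throwaway self-signed pair in place of the extracted user.crt/user.key):

```shell
# Generate a throwaway key/cert standing in for the extracted
# user.key/user.crt (hypothetical subject name):
openssl req -x509 -newkey rsa:2048 -nodes -keyout user.key \
  -out user.crt -days 1 -subj "/CN=my-user"
# Bundle them exactly as in the step above:
openssl pkcs12 -export -in user.crt -inkey user.key -name my-user \
  -password pass:123456 -out user.p12
# Confirm the PKCS#12 file parses with the same password:
openssl pkcs12 -info -in user.p12 -password pass:123456 -noout
```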
              

    Thanks a lot, I will try it soon. Is it possible to configure my CRD so that the internal listener uses plaintext and the external one SSL? I tried the configuration below, but it didn't work: the producer and consumer inside OpenShift are not able to connect to the broker without SSL:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        version: 2.3.0
        replicas: 2
        listeners:
            plain: {}
            external:
              type: route
              overrides:
                  bootstrap:
                       host: bootstrap-kafka.mydomain
                  brokers:
                  - broker: 0
                    host: broker-0.mydomain
                  - broker: 1
                    host: broker-1.mydomain
        config:
          offsets.topic.replication.factor: 2
          transaction.state.log.replication.factor: 2
          transaction.state.log.min.isr: 1
          log.message.format.version: "2.3"
        storage:
          type: jbod
          volumes:
          - id: 0
            type: persistent-claim
            size: 30Gi
            deleteClaim: false
      zookeeper:
        replicas: 1
        storage:
          type: persistent-claim
          size: 30Gi
          deleteClaim: false
      entityOperator:
        topicOperator: {}
        userOperator: {}
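For clients running inside OpenShift, the plain listener is exposed on the ClusterIP bootstrap service at port 9092, so no SSL properties are needed at all (a sketch; the topic name is an assumption):

```shell
# Strimzi names the internal bootstrap service <cluster>-kafka-bootstrap;
# the plain listener is on port 9092:
BOOTSTRAP="my-cluster-kafka-bootstrap:9092"
# From a pod in the same namespace, the producer needs no SSL flags:
# /opt/kafka/bin/kafka-console-producer.sh --broker-list $BOOTSTRAP --topic test
```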
    We are facing the same issue as this thread. Our Kafka server on OCP does not have "authentication: tls".

    We followed this blog for extracting the certificate and creating the truststore: https://strimzi.io/blog/2019/04/30/accessing-kafka-part-3/

    We followed the same steps as option #1, with the config below, in the Conduktor application on macOS.

    security.protocol=SSL
    ssl.truststore.type=JKS
    ssl.truststore.location=./truststore.jks
    ssl.truststore.password=password

    We are getting an "SSL handshake failed" error, so we must be missing something.

    Can you please suggest what to check? Thanks.

    offsets.topic.replication.factor: 3
    transaction.state.log.replication.factor: 3
    transaction.state.log.min.isr: 2
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 200m
        memory: 128Mi
    storage:
      type: ephemeral
    metrics:
      lowercaseOutputName: true
      rules:
      - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
        name: "kafka_server_$1_$2_total"
      - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
        name: "kafka_server_$1_$2_total"
        labels:
          topic: "$3"
    zookeeper:
      replicas: 3
      readinessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
      livenessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
      storage:
        type: ephemeral
      metrics:
        lowercaseOutputName: true
    entityOperator:
      topicOperator: {}
      userOperator: {}

    We have created an external bootstrap route on the service (pointing to the 3 Kafka nodes) with TLS edge termination.
    As per the blog above, we tried to use the parameters mentioned to reach the bootstrap server on port 443 through the Conduktor tool.

          external:
            type: route

    Strimzi will create its routes for you; you should use them. It will not work with edge termination: Kafka traffic is raw TCP, while the router's edge mode supports only HTTP(S). With edge termination the router terminates TLS, finds no HTTP inside, and drops the connection, which is probably causing the error. You need TLS passthrough, which hides the Kafka TCP traffic as HTTPS and gets it through the router.
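For reference, a passthrough route as Strimzi creates it looks roughly like this (a sketch; the service name and port follow Strimzi 0.13 conventions and should be checked against your cluster):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-cluster-kafka-bootstrap
spec:
  to:
    kind: Service
    name: my-cluster-kafka-external-bootstrap
  port:
    targetPort: 9094
  tls:
    # passthrough leaves the TLS/TCP stream intact for Kafka;
    # edge termination would strip TLS and expect HTTP inside
    termination: passthrough
```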
