


Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
  • OS: CentOS Linux 7
  • Kernel: Linux kubernetes-master-3302 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  • What happened:
    I used the command below to create a Pod:
    kubectl create --insecure-skip-tls-verify -f monitorms-rc.yml
    The pod ends up in: monitorms-mmqhm 0/1 ImagePullBackOff

    and upon running
    kubectl describe pod monitorms-mmqhm --namespace=sample
    it reports: Warning Failed Failed to pull image "10.78.0.228:5000/monitorms": Error response from daemon: {"message":"Get https://10.78.0.228:5000/v1/_ping: x509: certificate signed by unknown authority"}

    There is no certificate mentioned anywhere in my deployment configuration.

    10.78.0.228 is running a private, insecure Docker registry.
    Should Kubernetes not ignore the server certificate with that --insecure-skip-tls-verify flag?

    @dixudx I forgot to mention it. I installed the server certificate globally on this Kubernetes master node and then restarted the Docker service running on it. After that, I am able to pull the image manually with docker pull 10.78.0.228:5000/monitorms. Before that, a manual pull of the image failed with this same error.

    Is the error coming because the Kubernetes nodes don't have the certificate installed?

    --insecure-skip-tls-verify only skips verification of the API server's certificate, not the Docker registry's, so it cannot solve this problem. The error comes from the Docker daemon while pulling the image.

    I installed the server certificate globally on this kubernetes master node and then restarted the docker service running on it.

    Maybe you should try the command docker pull 10.78.0.228:5000/monitorms on the k8s node that hosts the pod, not the k8s master.

    That is a valid arg to kubectl create, but it only controls trust between kubectl and the API server.

    The pull error is between the node and the docker registry. The node either needs to trust the certificate, or treat that registry as an untrusted registry (which makes the node tolerate TLS verification errors).
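
    For the untrusted-registry route, a minimal sketch, assuming the node's Docker daemon is configured through /etc/docker/daemon.json (the address is the registry from this report):

    {
      "insecure-registries": ["10.78.0.228:5000"]
    }

    After editing the file, restart the daemon on that node (e.g. sudo systemctl restart docker). Note this disables TLS verification for that registry entirely, which is generally only acceptable on a trusted network.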

    (Title changed on Apr 4, 2017 from «"x509: certificate signed by unknown authority" even with "--insecure-skip-tls-verify" option» to «Failed to pull image with "x509: certificate signed by unknown authority" error».)

    You would think that this is solved by now.

    CA Certificates

    Actual recorded cases of preventing unauthorized access: ZERO.
    Amount of developer time wasted because of tooling that doesn't integrate CA certs properly: gazillions of man hours.

    Moral of the story: ditch CA certs. Such a ballache every time you have to try to get tooling to work together. Nobody knows how it works. Nobody. Software that uses it never works. In the end you just copy all the certs to every machine and your toaster, just so you don't get that god damn x509: certificate signed by unknown authority bullshit error every bloody time you try to do any tooling.

    Now I have to go and drill right down to the core of this cluster to get those certs installed, because Kubernetes's secrets handling for Docker is just plain useless.

    Just use the money that would have been spent trying to get the bloody CA certs to work and hire a henchman with an axe to cut the hardlines when the hacker comes. CA certs aren't security if they don't let authorized people in, because the entire field is just one giant BUG that will JAM your tooling.


    Should anyone face this while using gcr.io directly, one possible cause is that the CA certificates on your machine are too old.

    docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.7.2
    Trying to pull repository gcr.io/google_containers/kube-apiserver-amd64 ...
    Get https://gcr.io/v1/_ping: x509: certificate signed by unknown authority

    solution that worked for me on RH/CentOS:

    yum check-update ca-certificates; (($?==100)) && yum update ca-certificates || yum reinstall ca-certificates
    update-ca-trust extract
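
    A hedged equivalent on Debian/Ubuntu (assuming the stock ca-certificates package is what needs refreshing):

    sudo apt-get update
    sudo apt-get install --only-upgrade ca-certificates
    sudo update-ca-certificates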

    @srossross-tableau

    As far as I remember this was a docker issue, not a kubernetes one. Docker does not use linux's ca certs. Nobody knows why.

    You have to install those certs manually (on every node that could spawn those pods) so that docker can use them:

    /etc/docker/certs.d/mydomain.com:1234/ca.crt

    This is a highly annoying issue as you have to butcher your nodes after bootstrapping to get those certs in there. And kubernetes spawns nodes all the time. How this issue has not been solved yet is a mystery to me. It's a complete showstopper IMO.

    This should really be solved using the secrets mechanism of kubernetes. But somehow it is not. Who knows!?
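
    As a concrete sketch of that manual install (the registry host/port and the CA file name are illustrative; only the CA certificate needs to land under certs.d):

    # run on every node that can schedule the pod; the directory name must match
    # exactly how the registry is referenced in your image names
    sudo mkdir -p /etc/docker/certs.d/mydomain.com:1234
    sudo cp registry-ca.crt /etc/docker/certs.d/mydomain.com:1234/ca.crt
    sudo systemctl restart docker   # not always required for certs.d changes, but a common precaution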


    @pompomJuice, could this be a minikube image issue? I am not able to even curl this site

    minikube ssh -- curl -I https://storage.googleapis.com
    curl: (60) SSL certificate problem: self signed certificate in certificate chain
    $ minikube logs
    Nov 08 18:19:06 minikube localkube[3032]: E1108 18:19:06.788101    3032 remote_image.go:108] PullImage "gcr.io/google_containers/heapster:v1.3.0" from image service failed: rpc error: code = 2 desc = error pulling image configuration: Get https://storage.googleapis.com/artifacts.google-containers.appspot.com/containers/images/sha256:f9d33bedfed3f1533f734a73718c15976fbd37f04f383087f35e5ebd91b18d1e: x509: certificate signed by unknown authority
              

    Exactly my point. That curl error is just plain wrong. It is telling you that you have the certificates but they are self-signed; I find that highly unlikely (unless you hacked them in there somehow).

    That means the error message is just plain wrong, which connects with my previous point that almost nobody implements this stuff correctly.

    Try to update the certs on that box like ReSearchITEng suggested above.

    I'm running into the same issue. Certs are from digicert, kubernetes cluster running in GCE, certs installed through the host and put in /etc/docker/certs.d/, and still x509 error.

    Docker logs:
    TLS handshake error from XXXXXXXXXX: remote error: tls: bad certificate

    Kub version:
    Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

    host:
    NAME="Ubuntu"
    VERSION="16.04.3 LTS (Xenial Xerus)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 16.04.3 LTS"
    VERSION_ID="16.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    VERSION_CODENAME=xenial
    UBUNTU_CODENAME=xenial

    Please paste the entire folder name under /etc/docker/certs.d/, and the filenames of the certs.

    It should work if all your nodes have that cert installed.

    root@kubernetes-minion-group-96k7:/etc/docker/certs.d/"foo.bar.com":5000# ll
    total 16
    drwxr-xr-x 2 root root 4096 Dec 2 20:43 ./
    drwxr-xr-x 3 root root 4096 Dec 2 20:07 ../
    -rw-r--r-- 1 root root 3332 Dec 2 20:23 domain.crt
    -rw-r--r-- 1 root root 1675 Dec 2 20:43 domain.key

    So far only one node in the cluster :)

    Changed them to ca.crt and ca.key, both in the directory, and also updated the files called out in the secret. I restarted the docker service on the node and redeployed the pods, and still the same error.

    Here's more info from curl:

    curl -vvI https://foo.bar.com:5000/v2/

    *   Trying XXX.XXX.XXX.XXX...
    * TCP_NODELAY set
    * Connected to foo.bar.com (XXX.XXX.XXX.XXX) port 5000 (#0)
    * ALPN, offering h2
    * ALPN, offering http/1.1
    * Cipher selection: PROFILE=SYSTEM
    * successfully set certificate verify locations:
    *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
        CApath: none
    * TLSv1.2 (OUT), TLS handshake, Client hello (1):
    * TLSv1.2 (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (OUT), TLS alert, Server hello (2):
    * SSL certificate problem: unable to get local issuer certificate
    * stopped the pause stream!
    * Closing connection 0
    curl: (60) SSL certificate problem: unable to get local issuer certificate
    More details here: https://curl.haxx.se/docs/sslcerts.html

    curl performs SSL certificate verification by default, using a "bundle"
    of Certificate Authority (CA) public keys (CA certs). If the default
    bundle file isn't adequate, you can specify an alternate file
    using the --cacert option.
    If this HTTPS server uses a certificate signed by a CA represented in
    the bundle, the certificate verification probably failed due to a
    problem with the certificate (it might be expired, or the name might
    not match the domain name in the URL).
    If you'd like to turn off curl's verification of the certificate, use
    the -k (or --insecure) option.
    HTTPS-proxy has similar options --proxy-cacert and --proxy-insecure.
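
    A quick way to check, from the node itself, whether the chain served by the registry validates against the CA file Docker is supposed to use (hostname, port, and file path are the ones discussed above):

    openssl s_client -connect foo.bar.com:5000 \
      -CAfile /etc/docker/certs.d/foo.bar.com:5000/ca.crt </dev/null 2>/dev/null | grep "Verify return code"
    # "Verify return code: 0 (ok)" means the CA file matches what the registry serves;
    # anything else points at a wrong or incomplete chain rather than a Kubernetes problem.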

    Try the following.

    Make your own docker registry. Use GitLab for this; it is free.

    Host some images on it over HTTP. Try to start a pod with this image. Then verify that the docker daemon you are looking at is in fact running that pod. If it is, then you know you have the correct node.

    Then, like before, docker run it and explain to me what you mean by connection refused.

    Issues go stale after 90d of inactivity.
    Mark the issue as fresh with /remove-lifecycle stale.
    Stale issues rot after an additional 30d of inactivity and eventually close.

    If this issue is safe to close now please do so with /close.

    Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
    /lifecycle stale

    (Label lifecycle/stale added Mar 4, 2018.)

    Stale issues rot after 30d of inactivity.
    Mark the issue as fresh with /remove-lifecycle rotten.
    Rotten issues close after an additional 30d of inactivity.

    If this issue is safe to close now please do so with /close.

    Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
    /lifecycle rotten
    /remove-lifecycle stale

    (Label lifecycle/rotten added and lifecycle/stale removed Apr 4, 2018.)

    Rotten issues close after 30d of inactivity.
    Reopen the issue with /reopen.
    Mark the issue as fresh with /remove-lifecycle rotten.

    Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
    /close

    So what's the workaround/fix for this? I'm still getting it after upgrading from 3.9 to 3.10: Failed to pull image "docker-registry.default.svc:5000/openshift/mysql@sha256:dfd9f18f47caf290... with the error message: v2/: x509: certificate signed by unknown authority. I agree with @pompomJuice. A permanent fix that doesn't break after installs/upgrades is needed, or this needs to be reengineered completely. Otherwise this is not ready for production workloads.

    Working solution for pulling a docker image on Ubuntu from Artifactory (the certificate is self-signed); consolidated commands follow the list:

  • put all used CA certs (if there is more than one root CA) in /usr/local/share/ca-certificates
  • run update-ca-certificates
  • restart the docker daemon (sudo service docker restart)
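
    A minimal consolidated sketch of those three steps (the certificate file name is an assumption; update-ca-certificates only picks up files ending in .crt):

    sudo cp artifactory-root-ca.crt /usr/local/share/ca-certificates/artifactory-root-ca.crt
    sudo update-ca-certificates
    sudo service docker restart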
    (Quoting the gcr.io comment above: if the CA certificates on your machine are too old, refresh them; on RH/CentOS, yum update/reinstall ca-certificates followed by update-ca-trust extract.)

    This actually worked for me.

    I run Kubernetes on RancherOS as part of a Rancher 2.x setup and have a private registry that is not internet-facing, so I have to use a self-signed certificate on it, resulting in the x509 error. I read this thread and a few others and that solved the issue. Sharing in case it helps someone, if not directly then by suggesting a possible path.

    This worked for me - https://www.ctrl-alt-del.cc/2018/11/solution-rancher-2-k8s-private-registry.html

    Same issue as #72684 (minikube pods stuck in "ContainerCreating" with failed pulling image "k8s.gcr.io/pause-amd64:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority), here with a private Harbor registry.
    docker pull works with no problem.
    ls /etc/docker/certs.d/registry.myharbor.com/ shows the certificate.
    Kubernetes still fails to pull images with an ImagePullBackOff error.
    It has been 3 years and Kubernetes still has this issue. Very disappointing.

    Solved

  • Make sure you are able to do a docker pull IMAGENAME from the machine where you are running the deployments (yaml files, helm packages, etc.); see the copy loop after this list.
  • On all the Kubernetes nodes, make sure the following is present: /etc/docker/certs.d/my-private-registry.com/my-private-registry.com.crt
  • 19.03.10
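
    A rough sketch of getting that file onto every node (the node names are placeholders; adjust the paths to your registry name):

    for node in node1 node2 node3; do
      scp my-private-registry.com.crt "$node":/tmp/
      ssh "$node" 'sudo mkdir -p /etc/docker/certs.d/my-private-registry.com &&
                   sudo mv /tmp/my-private-registry.com.crt /etc/docker/certs.d/my-private-registry.com/'
    done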

    I am using JFrog Container Registry as the registry for my minikube. I am able to do the following:

  • docker login localhost:443 | or | ip-add:443
  • docker push ip-add:443/docker-local/test:latest
  • docker pull ip-add:443/docker-local/test:latest
  • I have configured Jfrog Container Registry to run behind Nginx Reverse Proxy listening on port 443. Created self-signed certs and Jfrog is using these certs.

    Configured docker to use the self-signed certs as follows:

  • Create certs, copy them to /usr/local/share/ca-certificates/
  • sudo update-ca-certificates
  • copy the certificate to /etc/docker/cert.d/192.168.0.114:443/ca.crt
  • restarted docker, just to be sure
  • Configure K8s to use the docker login secret via a .yaml file as follows:

  • base64-encode ~/.docker/config.json
  • use it in the following template
    apiVersion: v1
    kind: Secret
    metadata:
      name: myregistrykey
      namespace: awesomeapps
    data:
      .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
    type: kubernetes.io/dockerconfigjson
    

    In the deployment.yaml, I use imagePullSecrets with the name of that secret.
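
    For context, a minimal sketch of what that reference looks like in a Deployment spec; everything except myregistrykey, the namespace, and the image path (taken from this comment) is a placeholder:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-app
      namespace: awesomeapps
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: test-app
      template:
        metadata:
          labels:
            app: test-app
        spec:
          imagePullSecrets:
          - name: myregistrykey        # the Secret defined above
          containers:
          - name: test
            image: 192.168.0.114:443/docker-local/test:latest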

    Now, after all this setup where docker pull works in the terminal, I get an error on the pods about x509 IP SANs.

    I went through a lot of documentation and K8s issues and replicated the steps from #43924 (comment), but that didn't work out. Can anyone let me know what I am doing wrong and how I can correct it?

  • Generate certs for each node (workers and masters). You need the public key, the private key (unencrypted), and your root CA's public key (OpenSSL CA public key). I created a Subject Alternative Name (SAN) value for the short hostname, FQDN, and IP address of each node.
  • On each node create /etc/docker/certs.d/ (if it does not exist, create it).
  • cd /etc/docker/certs.d/ and create a directory in there that uses the SAME EXACT name you use when you communicate with the registry in your manifest. So if you reference "docker-reg.company.domain", the directory must say that. If you use "docker-reg.company.domain:443", the directory name needs to include the port. It must be exactly as referenced in your manifest.
  • Copy your root CA, public key, and private key to this directory. They must use this naming:
  • root ca = ca.crt
  • public key = client.cert
  • private key = client.key
  • Copy the root ca public key also to the cert store for the OS
  • Debian / Ubuntu
    cp certs/domain.crt /usr/local/share/ca-certificates/myregistrydomain.com.crt
    update-ca-certificates
    Rhel / Centos
    cp certs/domain.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
    update-ca-trust

  • Test to make sure you can pull directly from the node with docker pull docker-reg.company.domain:443/my-image:latest (or however you have it in your manifest; it must be exact).
  • If all this is done correctly, try doing a deployment with kubectl again. Be careful of namespaces; see the quick check below.
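
    Once redeployed, a quick way to confirm the pull went through (pod name and namespace are placeholders):

    kubectl get pods -n my-namespace
    kubectl describe pod my-pod -n my-namespace | grep -A10 Events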

    These commands work for me; run them on all nodes, including the master node:

    sudo cp /opt/certs/registry.crt /usr/local/share/ca-certificates/docker-registry.crt
    sudo update-ca-certificates
    sudo systemctl restart docker
    sudo systemctl restart containerd
    

    Ref: https://docs.docker.com/registry/insecure/#docker-still-complains-about-the-certificate-when-using-authentication
