

loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503 #5344

wang-xiaowu opened this issue Mar 28, 2022 · 18 comments

K3s Version: v1.21.7+k3s1

Node(s) CPU architecture, OS, and Version: Linux k3s-node2 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration: 2 servers

Describe the bug:

Steps To Reproduce:

  • Installed K3s:
  • master server1

    # MySQL
    export K3S_DATASTORE_ENDPOINT="mysql://root:root@tcp(192.168.56.103:3306)/k3s_xiaowu"
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    export INSTALL_K3S_VERSION=v1.21.7+k3s1
    export INSTALL_K3S_EXEC="server \
    --kube-proxy-arg proxy-mode=ipvs \
    --kube-proxy-arg masquerade-all=true \
    --kube-proxy-arg metrics-bind-address=0.0.0.0 \
    --kube-apiserver-arg service-node-port-range=30000-40000"
    

    master server2

    export K3S_DATASTORE_ENDPOINT="mysql://root:root@tcp(192.168.56.103:3306)/k3s_xiaowu"
    export K3S_TOKEN="K10d95efb2d8e6363cdaf8a09a4682ab41b0508b0a6d61ad254dafcfc671f35efc5::server:3a300490b8396cb8b8a0b64364f1ccfe"
    export K3S_URL="https://10.0.0.15:6443"
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    export INSTALL_K3S_VERSION=v1.21.7+k3s1
    export INSTALL_K3S_EXEC="server \
    --kube-proxy-arg proxy-mode=ipvs \
    --kube-proxy-arg masquerade-all=true \
    --kube-proxy-arg metrics-bind-address=0.0.0.0 \
    --kube-apiserver-arg service-node-port-range=30000-40000"
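
    The exports above only configure the install script; the actual install step is not shown in the report. Assuming the standard get.k3s.io install script is used (an assumption, since the command itself is not included), it would be run on each server after setting these variables:

    # assumption: the standard k3s install script, which reads the
    # INSTALL_K3S_* and K3S_* variables exported above
    curl -sfL https://get.k3s.io | sh -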
    

    Expected behavior:

    Actual behavior:

    master server2 shows log

    Mar 28 15:41:51 k3s-node2 k3s[8452]: I0328 15:41:51.204685    8452 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
    Mar 28 15:41:55 k3s-node2 k3s[8452]: E0328 15:41:55.263517    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object h
    Mar 28 15:42:00 k3s-node2 k3s[8452]: E0328 15:42:00.280779    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43
    Mar 28 15:42:05 k3s-node2 k3s[8452]: E0328 15:42:05.296275    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object h
    Mar 28 15:42:10 k3s-node2 k3s[8452]: E0328 15:42:10.318641    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43
    Mar 28 15:42:15 k3s-node2 k3s[8452]: E0328 15:42:15.324753    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object h
    Mar 28 15:42:20 k3s-node2 k3s[8452]: E0328 15:42:20.333974    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43
    Mar 28 15:42:25 k3s-node2 k3s[8452]: E0328 15:42:25.339387    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object h
    Mar 28 15:42:30 k3s-node2 k3s[8452]: E0328 15:42:30.348943    8452 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43
    Mar 28 15:42:31 k3s-node2 k3s[8452]: E0328 15:42:31.288031    8452 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to rea
    Mar 28 15:42:31 k3s-node2 k3s[8452]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
    

    Additional context / logs:


    Backporting

  • Needs backporting to older releases
  • You've trimmed the end off of all your error messages so I can't see what the actual failure logged by the apiserver is, but it appears that for some reason the metrics-server pod isn't running or can't be reached by the apiservers. Please check the metrics-server pod logs.

    Mar 29 09:40:25 k3s-node2 k3s[4329]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
    Mar 29 09:40:25 k3s-node2 k3s[4329]: I0329 09:40:25.988168    4329 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
    Mar 29 09:40:26 k3s-node2 k3s[4329]: E0329 09:40:26.090621    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
    Mar 29 09:40:31 k3s-node2 k3s[4329]: E0329 09:40:31.099700    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Mar 29 09:40:36 k3s-node2 k3s[4329]: E0329 09:40:36.102961    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Mar 29 09:40:41 k3s-node2 k3s[4329]: E0329 09:40:41.108659    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Mar 29 09:40:46 k3s-node2 k3s[4329]: E0329 09:40:46.116424    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
    Mar 29 09:40:51 k3s-node2 k3s[4329]: E0329 09:40:51.122950    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Mar 29 09:40:56 k3s-node2 k3s[4329]: E0329 09:40:56.130200    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
    Mar 29 09:41:01 k3s-node2 k3s[4329]: E0329 09:41:01.147269    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
    Mar 29 09:41:02 k3s-node2 k3s[4329]: E0329 09:41:02.102317    4329 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: dial tcp 10.43.233.254:443: i/o timeout
    Mar 29 09:41:02 k3s-node2 k3s[4329]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
    Mar 29 09:41:02 k3s-node2 k3s[4329]: I0329 09:41:02.102355    4329 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
    Mar 29 09:41:06 k3s-node2 k3s[4329]: E0329 09:41:06.154342    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
    Mar 29 09:41:11 k3s-node2 k3s[4329]: E0329 09:41:11.167336    4329 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.233.254:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
    
    [root@k3s-node1 log]# kubectl get pods -n kube-system|grep "metrics-server"
    metrics-server-86cbb8457f-z4n4q           1/1     Running     1          18h
    [root@k3s-node1 log]# kubectl logs metrics-server-86cbb8457f-z4n4q -n kube-system
    I0329 01:39:30.709641       1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    I0329 01:39:31.204775       1 secure_serving.go:116] Serving securely on [::]:443
    E0329 01:40:31.233549       1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node2: unable to fetch metrics from Kubelet k3s-node2 (k3s-node2): Get https://k3s-node2:10250/stats/summary?only_cpu_and_memory=true: x509: certificate is valid for k3s-node1, localhost, not k3s-node2
    E0329 01:41:31.418154       1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node2: unable to fetch metrics from Kubelet k3s-node2 (k3s-node2): Get https://k3s-node2:10250/stats/summary?only_cpu_and_memory=true: x509: certificate is valid for k3s-node1, localhost, not k3s-node2
    E0329 01:42:31.217012       1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node2: unable to fetch metrics from Kubelet k3s-node2 (k3s-node2): Get https://k3s-node2:10250/stats/summary?only_cpu_and_memory=true: x509: certificate is valid for k3s-node1, localhost, not k3s-node2
    


    does it matter that i'm using ipvs?
    because when i configured it with the default kube-proxy, it was ok

    IPVS should work, but we don't test it. You may need to do additional work to ensure that everything is open between your nodes.

    The other error you posted does concern me though. How does one of your nodes have a cert for a different node? Do you have some odd issue with reusing hostnames or IPs on your network?
    E0329 01:42:31.217012 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node2: unable to fetch metrics from Kubelet k3s-node2 (k3s-node2): Get https://k3s-node2:10250/stats/summary?only_cpu_and_memory=true: x509: certificate is valid for k3s-node1, localhost, not k3s-node2


  • k3s-node1
  • [root@k3s-node1 ~]# hostname
    k3s-node1
    [root@k3s-node1 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.1.1 k3s-node1 k3s-node1
    10.0.2.15 k3s-node1
    10.0.2.10 k3s-node2
    10.0.2.11 k3s-node3
    
  • k3s-node2
  • [root@k3s-node2 ~]# hostname
    k3s-node2
    [root@k3s-node2 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.1.1 k3s-node2 k3s-node2
    10.0.2.15 k3s-node1
    10.0.2.10 k3s-node2
    10.0.2.11 k3s-node3
    

    ps: i found one fault of mine with the environment variables (it should be export K3S_URL="https://10.0.2.15:6443", not 10.0.0.15), but after i configured it correctly it still doesn't work
    can you please check my configuration? i want to know whether i should configure K3S_URL and K3S_TOKEN on the other server node

    Why do you have a secondary loopback address for each node's hostname? The hostname in the hosts file (if you're working without DNS) should resolve to the LAN IP address, not a loopback address.

    Also, the duplicate ipv6 entries for localhost localhost.localdomain are a little weird.
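
    As a concrete illustration of that suggestion (a sketch, not taken from the thread), the hosts file on k3s-node1 would drop the 127.0.1.1 alias and keep the hostname mapped only to its LAN IP:

    # /etc/hosts on k3s-node1 (sketch): the hostname resolves to the LAN address,
    # the 127.0.1.1 alias is removed; k3s-node2 would be adjusted the same way
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.0.2.15   k3s-node1
    10.0.2.10   k3s-node2
    10.0.2.11   k3s-node3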

    i found this

    [root@k3s-node1 ~]# kubectl logs metrics-server-86cbb8457f-kvzdd -n kube-system
    I0330 01:57:47.880252       1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    I0330 01:57:48.088525       1 secure_serving.go:116] Serving securely on [::]:443
    E0330 01:59:18.793457       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Pod: Get https://10.43.0.1:443/api/v1/pods?resourceVersion=894&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp 10.43.0.1:443: connect: connection refused
    E0330 01:59:18.794049       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Node: Get https://10.43.0.1:443/api/v1/nodes?resourceVersion=875&timeout=6m4s&timeoutSeconds=364&watch=true: dial tcp 10.43.0.1:443: connect: connection refused
    [root@k3s-node1 ~]# curl -k https://10.43.0.1:443
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {
      "status": "Failure",
      "message": "Unauthorized",
      "reason": "Unauthorized",
      "code": 401
    

    ip addr

    5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
        link/ether b2:65:e8:8b:b7:af brd ff:ff:ff:ff:ff:ff
        inet 10.43.0.1/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.43.0.10/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.43.51.102/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.43.81.93/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
        inet 10.0.2.15/32 scope global kube-ipvs0
           valid_lft forever preferred_lft forever
    

    Yeah, it looks like your cluster network is kinda messed up. I would fix your hosts file, restart the nodes, and see if everything comes up in a better state.

    my question is why the certificate is only valid for k3s-node1 and localhost, not k3s-node2
    is there something wrong in my configuration?
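
    One way to see exactly which names the kubelet serving certificate on a node actually covers (a suggested check, not part of the original thread) is to pull the certificate from port 10250 and look at its Subject Alternative Names:

    # dump the SANs of the kubelet serving cert presented on k3s-node2:10250;
    # metrics-server validates the node name it connects with against this list
    echo | openssl s_client -connect k3s-node2:10250 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"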

    - --metric-resolution=30s
    - --kubelet-insecure-tls=true
    - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
  • kubectl patch deployment metrics-server --patch-file metrics_patch.yaml -n kube-system
  • and it worked (a sketch of what such a patch file might look like follows after this list)
  • [root@k3s-release-server2 ~]# kubectl logs metrics-server-7899bdb5fd-wbphf -n kube-system
    I0610 09:53:36.890370       1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    I0610 09:53:37.634498       1 secure_serving.go:116] Serving securely on [::]:443
    E0610 09:54:07.644855       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    E0610 09:54:07.644887       1 sinkprov.go:135] duplicate pod kube-system/metrics-server-7899bdb5fd-wbphf received
    E0610 09:54:37.632009       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    E0610 09:54:37.632036       1 sinkprov.go:135] duplicate pod kube-system/metrics-server-7899bdb5fd-wbphf received
    E0610 09:55:07.632141       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    E0610 09:55:07.632170       1 sinkprov.go:135] duplicate pod kube-system/metrics-server-7899bdb5fd-wbphf received
    E0610 09:55:37.627343       1 sinkprov.go:135] duplicate pod kube-system/metrics-server-7899bdb5fd-wbphf received
    E0610 09:55:37.627371       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    E0610 09:56:07.634909       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    E0610 09:56:07.634931       1 sinkprov.go:135] duplicate pod kube-system/metrics-server-7899bdb5fd-wbphf received
    E0610 09:56:37.636754       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    E0610 09:56:37.636785       1 sinkprov.go:135] duplicate pod kube-system/metrics-server-7899bdb5fd-wbphf received
    E0610 09:57:07.629617       1 sinkprov.go:135] duplicate pod kube-system/svclb-traefik-w9rqc received
    
  • but my other k3s-node got this
  • # kubectl top nodes
    W0610 18:01:27.029164    7766 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
    Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
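
    The metrics_patch.yaml mentioned in the list above is not shown in the thread. A minimal sketch of what it could contain, assuming the container in the bundled metrics-server deployment is named metrics-server, would be:

    # metrics_patch.yaml (sketch, not from the thread): a strategic merge patch
    # that sets the metrics-server container args; note that patching "args" this
    # way replaces the container's existing argument list, and the container name
    # "metrics-server" is an assumption about the bundled deployment
    spec:
      template:
        spec:
          containers:
            - name: metrics-server
              args:
                - --metric-resolution=30s
                - --kubelet-insecure-tls=true
                - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP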
    

    It sounds like one of your server nodes is not able to reach the pod running on another node. Can you confirm that the correct ports for your flannel backend are open between both nodes, and that other inter-node cluster traffic (coredns, etc) works properly?
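
    A quick way to check the CoreDNS side of that (a suggestion, not from the thread) is to run a throwaway pod and resolve a cluster service through it; if cross-node pod networking is broken this typically times out:

    # run a temporary busybox pod and resolve the kubernetes service via CoreDNS
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.35 -- \
      nslookup kubernetes.default.svc.cluster.local
    # flannel's default VXLAN backend uses udp/8472 between nodes, so that port
    # must be reachable node-to-node (the netstat check later in the thread only
    # shows it is listening locally, not that it is reachable from the other node)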

    i don't know why the certificate is not valid.. 🤔

    [root@k3s-release-server1 ~]# kubectl logs coredns-7448499f4d-9jrc8 -n kube-system
    Error from server: Get "https://k3s-release-server1:10250/containerLogs/kube-system/coredns-7448499f4d-9jrc8/coredns": x509: certificate is valid for k3s-release-server2, localhost, not k3s-release-server1
    

    the firewall has been turned off

    [root@k3s-release-server1 ~]# netstat -ano|grep 8472
    udp        0      0 0.0.0.0:8472            0.0.0.0:*                           off (0.00/0/0)
    [root@k3s-release-server2 ~]# netstat -ano|grep 8472
    udp        0      0 0.0.0.0:8472            0.0.0.0:*                           off (0.00/0/0)
              

    now the situation is like this:
    when i use 'iptables' as the kube-proxy mode for my k3s cluster, it is ok
    but if i enable ipvs, it goes wrong
    maybe there is some configuration difference between them?
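
    To compare the two setups (a suggested check, not part of the thread), it helps to confirm which proxy mode is actually in effect and what IPVS has programmed:

    # k3s runs kube-proxy embedded, so its proxier choice shows up in the k3s logs
    journalctl -u k3s | grep -i proxier
    # list the IPVS virtual servers and their backends; a missing or empty entry
    # for 10.43.0.1:443 would explain the "connection refused" errors shown earlier
    ipvsadm -Ln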

    This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.