Hi guys,

I have a Red Hat OpenShift cluster ready and am trying to connect it to Azure Arc. I followed the guide at https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli and successfully registered the resource providers and created the resource group.

However, while executing the command "az connectedk8s connect", I encountered the following error:

After checking the deployment status of the Kubernetes pods, I found that one of the pods was failing to come up:

[crc@crc ~]$ kubectl get pod --namespace azure-arc  
NAME                                         READY   STATUS              RESTARTS      AGE  
cluster-metadata-operator-74c5b94d47-jz2mf   2/2     Running             0             6m41s  
clusterconnect-agent-57496ddf98-pxdwb        2/3     CrashLoopBackOff    6 (45s ago)   6m40s  
clusteridentityoperator-5595dbf759-npgj7     2/2     Running             0             6m40s  
config-agent-85745b6f89-ktcgn                2/2     Running             0             6m40s  
controller-manager-78cf8484c4-bkdrz          2/2     Running             0             6m40s  
extension-manager-599cd7b644-c9sqw           2/2     Running             0             6m40s  
flux-logs-agent-6cbd59f69d-8sqpj             1/1     Running             0             6m40s  
kube-aad-proxy-6ddf6b7b6d-2tpxm              0/2     ContainerCreating   0             6m41s  
metrics-agent-5d985f9b9c-t6pjd               2/2     Running             0             6m41s  
resource-sync-agent-8444f5fc44-zlx8q         2/2     Running             0             6m40s  

Looking into the details, I found that the pod cannot be created because the secret "kube-aad-proxy-certificate" is not found, with the following events:

[crc@crc ~]$ kubectl describe pod kube-aad-proxy-6ddf6b7b6d-2tpxm  
Error from server (NotFound): pods "kube-aad-proxy-6ddf6b7b6d-2tpxm" not found  
[crc@crc ~]$ kubectl describe pod kube-aad-proxy-6ddf6b7b6d-2tpxm -n azure-arc  
Name:           kube-aad-proxy-6ddf6b7b6d-2tpxm  
Namespace:      azure-arc  
Priority:       0  
Node:           crc-x4qnm-master-0/192.168.126.11  
Start Time:     Mon, 14 Feb 2022 20:44:22 +0800  
Labels:         app.kubernetes.io/component=kube-aad-proxy  
                app.kubernetes.io/name=azure-arc-k8s  
                pod-template-hash=6ddf6b7b6d  
Annotations:    checksum/proxysecret: 316deeb28892b1cdebfe5c12c2cd620b5b8f29289c1ffe3d4f5fc1b2e6a4ea7d  
                openshift.io/scc: kube-aad-proxy-scc  
                prometheus.io/port: 8080  
                prometheus.io/scrape: true  
Status:         Pending  
IPs:            <none>  
Controlled By:  ReplicaSet/kube-aad-proxy-6ddf6b7b6d  
Containers:  
  kube-aad-proxy:  
    Container ID:    
    Image:         mcr.microsoft.com/azurearck8s/kube-aad-proxy:1.6.1-preview  
    Image ID:        
    Ports:         8443/TCP, 8080/TCP  
    Host Ports:    0/TCP, 0/TCP  
    Args:  
      --secure-port=8443  
      --tls-cert-file=/etc/kube-aad-proxy/tls.crt  
      --tls-private-key-file=/etc/kube-aad-proxy/tls.key  
      --azure.client-id=6256c85f-0aad-4d50-b960-e6e9b21efe35  
      --azure.tenant-id=c58bdaa9-7ab0-40c5-9b0f-64b2c1fe2ef1  
      --azure.enforce-PoP=true  
      --azure.skip-host-check=false  
      -v=info  
      --azure.environment=AZUREPUBLICCLOUD  
    State:          Waiting  
      Reason:       ContainerCreating  
    Ready:          False  
    Restart Count:  0  
    Limits:  
      cpu:     100m  
      memory:  350Mi  
    Requests:  
      cpu:      10m  
      memory:   20Mi  
    Readiness:  http-get http://:8080/readiness delay=10s timeout=1s period=15s #success=1 #failure=3  
    Environment Variables from:  
      azure-clusterconfig  ConfigMap  Optional: false  
    Environment:           <none>  
    Mounts:  
      /etc/kube-aad-proxy from kube-aad-proxy-tls (ro)  
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khrkl (ro)  
  fluent-bit:  
    Container ID:     
    Image:          mcr.microsoft.com/azurearck8s/fluent-bit:1.6.1  
    Image ID:         
    Port:           2020/TCP  
    Host Port:      0/TCP  
    State:          Waiting  
      Reason:       ContainerCreating  
    Ready:          False  
    Restart Count:  0  
    Limits:  
      cpu:     20m  
      memory:  100Mi  
    Requests:  
      cpu:     5m  
      memory:  25Mi  
    Environment Variables from:  
      azure-clusterconfig  ConfigMap  Optional: false  
    Environment:  
      POD_NAME:    kube-aad-proxy-6ddf6b7b6d-2tpxm (v1:metadata.name)  
      AGENT_TYPE:  ConnectAgent  
      AGENT_NAME:  kube-aad-proxy  
    Mounts:  
      /fluent-bit/etc/ from fluentbit-clusterconfig (rw)  
      /var/lib/docker/containers from varlibdockercontainers (ro)  
      /var/log from varlog (ro)  
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khrkl (ro)  
Conditions:  
  Type              Status  
  Initialized       True   
  Ready             False   
  ContainersReady   False   
  PodScheduled      True   
Volumes:  
  kube-aad-proxy-tls:  
    Type:        Secret (a volume populated by a Secret)  
    SecretName:  kube-aad-proxy-certificate  
    Optional:    false  
  varlog:  
    Type:          HostPath (bare host directory volume)  
    Path:          /var/log  
    HostPathType:    
  varlibdockercontainers:  
    Type:          HostPath (bare host directory volume)  
    Path:          /var/lib/docker/containers  
    HostPathType:    
  fluentbit-clusterconfig:  
    Type:      ConfigMap (a volume populated by a ConfigMap)  
    Name:      azure-fluentbit-config  
    Optional:  false  
  kube-api-access-khrkl:  
    Type:                    Projected (a volume that contains injected data from multiple sources)  
    TokenExpirationSeconds:  3607  
    ConfigMapName:           kube-root-ca.crt  
    ConfigMapOptional:       <nil>  
    DownwardAPI:             true  
    ConfigMapName:           openshift-service-ca.crt  
    ConfigMapOptional:       <nil>  
QoS Class:                   Burstable  
Node-Selectors:              kubernetes.io/arch=amd64  
                             kubernetes.io/os=linux  
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists  
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s  
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s  
Events:  
  Type     Reason       Age                   From               Message  
  ----     ------       ----                  ----               -------  
  Normal   Scheduled    17m                   default-scheduler  Successfully assigned azure-arc/kube-aad-proxy-6ddf6b7b6d-2tpxm to crc-x4qnm-master-0  
  Warning  FailedMount  15m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[kube-aad-proxy-tls], unattached volumes=[varlibdockercontainers fluentbit-clusterconfig kube-aad-proxy-tls kube-api-access-khrkl varlog]: timed out waiting for the condition  
  Warning  FailedMount  8m32s                 kubelet            Unable to attach or mount volumes: unmounted volumes=[kube-aad-proxy-tls], unattached volumes=[fluentbit-clusterconfig kube-aad-proxy-tls kube-api-access-khrkl varlog varlibdockercontainers]: timed out waiting for the condition  
  Warning  FailedMount  4m2s (x3 over 13m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[kube-aad-proxy-tls], unattached volumes=[kube-aad-proxy-tls kube-api-access-khrkl varlog varlibdockercontainers fluentbit-clusterconfig]: timed out waiting for the condition  
  Warning  FailedMount  107s (x2 over 6m18s)  kubelet            Unable to attach or mount volumes: unmounted volumes=[kube-aad-proxy-tls], unattached volumes=[kube-api-access-khrkl varlog varlibdockercontainers fluentbit-clusterconfig kube-aad-proxy-tls]: timed out waiting for the condition  
  Warning  FailedMount  59s (x16 over 17m)    kubelet            MountVolume.SetUp failed for volume "kube-aad-proxy-tls" : secret "kube-aad-proxy-certificate" not found  
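
As a quick sanity check, the secret can be looked up directly to confirm that it was never created in the azure-arc namespace:

kubectl get secret kube-aad-proxy-certificate -n azure-arc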

In addition, I have attached details of the clusterconnect-agent-xxx pod for further troubleshooting:

[crc@crc ~]$ kubectl describe pod clusterconnect-agent-57496ddf98-wxwl4 -n azure-arc  
 Name:         clusterconnect-agent-57496ddf98-wxwl4  
 Namespace:    azure-arc  
 Priority:     0  
 Node:         crc-x4qnm-master-0/192.168.126.11  
 Start Time:   Wed, 16 Feb 2022 15:49:16 +0800  
 Labels:       app.kubernetes.io/component=clusterconnect-agent  
               app.kubernetes.io/name=azure-arc-k8s  
               pod-template-hash=57496ddf98  
 Annotations:  checksum/proxysecret: 316deeb28892b1cdebfe5c12c2cd620b5b8f29289c1ffe3d4f5fc1b2e6a4ea7d  
               k8s.v1.cni.cncf.io/network-status:  
                     "name": "openshift-sdn",  
                     "interface": "eth0",  
                     "ips": [  
                         "10.217.0.180"  
                     "default": true,  
                     "dns": {}  
               k8s.v1.cni.cncf.io/networks-status:  
                     "name": "openshift-sdn",  
                     "interface": "eth0",  
                     "ips": [  
                         "10.217.0.180"  
                     "default": true,  
                     "dns": {}  
               openshift.io/scc: kube-aad-proxy-scc  
               prometheus.io/port: 8080  
               prometheus.io/scrape: true  
 Status:       Running  
 IP:           10.217.0.180  
   IP:           10.217.0.180  
 Controlled By:  ReplicaSet/clusterconnect-agent-57496ddf98  
 Containers:  
   clusterconnect-agent:  
     Container ID:   cri-o://d724fea24e4f54d6f619684ad0c7c705bc83978aa272c06962225db6841091cf  
     Image:          mcr.microsoft.com/azurearck8s/clusterconnect-agent:1.6.1  
     Image ID:       mcr.microsoft.com/azurearck8s/clusterconnect-agent@sha256:58a223db621a78d837b144d8d50f2faa8af65f2a8f46f24a3fc331deba28c33c  
     Port:           <none>  
     Host Port:      <none>  
     State:          Waiting  
       Reason:       CrashLoopBackOff  
     Last State:     Terminated  
       Reason:       Error  
       Exit Code:    137  
       Started:      Wed, 16 Feb 2022 16:00:19 +0800  
       Finished:     Wed, 16 Feb 2022 16:00:19 +0800  
     Ready:          False  
     Restart Count:  7  
     Environment Variables from:  
       azure-clusterconfig  ConfigMap  Optional: false  
     Environment:  
       CONNECT_DP_ENDPOINT_OVERRIDE:         
       PROXY_VERSION:                      v2  
       NOTIFICATION_DP_ENDPOINT_OVERRIDE:    
       TARGET_SERVICE_HOST:                KUBEAADPROXY_SERVICE_HOST  
       TARGET_SERVICE_PORT:                KUBEAADPROXY_SERVICE_PORT  
       KUBEAADPROXY_SERVICE_HOST:          kube-aad-proxy.azure-arc  
       KUBEAADPROXY_SERVICE_PORT:          443  
     Mounts:  
       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d22f5 (ro)  
   fluent-bit:  
     Container ID:   cri-o://945fac844efcb50278f4b64554ae1af8efd77fccc22e6bf1f03b0af1125c8ba9  
     Image:          mcr.microsoft.com/azurearck8s/fluent-bit:1.6.1  
     Image ID:       mcr.microsoft.com/azurearck8s/fluent-bit@sha256:a60b89ca44e1b70f205ba21920b867a000828df42ba83bde343fc3e9eed0825c  
     Port:           2020/TCP  
     Host Port:      0/TCP  
     State:          Running  
       Started:      Wed, 16 Feb 2022 15:49:20 +0800  
     Ready:          True  
     Restart Count:  0  
     Limits:  
       cpu:     20m  
       memory:  100Mi  
     Requests:  
       cpu:     5m  
       memory:  25Mi  
     Environment Variables from:  
       azure-clusterconfig  ConfigMap  Optional: false  
     Environment:  
       POD_NAME:    clusterconnect-agent-57496ddf98-wxwl4 (v1:metadata.name)  
       AGENT_TYPE:  ConnectAgent  
       AGENT_NAME:  ClusterConnectAgent  
     Mounts:  
       /fluent-bit/etc/ from fluentbit-clusterconfig (rw)  
       /var/lib/docker/containers from varlibdockercontainers (ro)  
       /var/log from varlog (ro)  
       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d22f5 (ro)  
   clusterconnectservice-operator:  
     Container ID:   cri-o://4066bf63c6a5f0f38928992986405127fcc8c76e6ba76f9fe501907e5600c1e4  
     Image:          mcr.microsoft.com/azurearck8s/clusterconnectservice-operator:1.6.1  
     Image ID:       mcr.microsoft.com/azurearck8s/clusterconnectservice-operator@sha256:6d8cc5f1798441ae322c5989dfdc34a5702ce0a8ca569926b1274aa147e66da0  
     Port:           9443/TCP  
     Host Port:      0/TCP  
     State:          Running  
       Started:      Wed, 16 Feb 2022 15:49:20 +0800  
     Ready:          True  
     Restart Count:  0  
     Limits:  
       cpu:     100m  
       memory:  400Mi  
     Requests:  
       cpu:     10m  
       memory:  20Mi  
     Environment Variables from:  
       azure-clusterconfig  ConfigMap  Optional: false  
     Environment:           <none>  
     Mounts:  
       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d22f5 (ro)  
 Conditions:  
   Type              Status  
   Initialized       True   
   Ready             False   
   ContainersReady   False   
   PodScheduled      True   
 Volumes:  
   varlog:  
     Type:          HostPath (bare host directory volume)  
     Path:          /var/log  
     HostPathType:    
   varlibdockercontainers:  
     Type:          HostPath (bare host directory volume)  
     Path:          /var/lib/docker/containers  
     HostPathType:    
   fluentbit-clusterconfig:  
     Type:      ConfigMap (a volume populated by a ConfigMap)  
     Name:      azure-fluentbit-config  
     Optional:  false  
   kube-api-access-d22f5:  
     Type:                    Projected (a volume that contains injected data from multiple sources)  
     TokenExpirationSeconds:  3607  
     ConfigMapName:           kube-root-ca.crt  
     ConfigMapOptional:       <nil>  
     DownwardAPI:             true  
     ConfigMapName:           openshift-service-ca.crt  
     ConfigMapOptional:       <nil>  
 QoS Class:                   Burstable  
 Node-Selectors:              kubernetes.io/arch=amd64  
                              kubernetes.io/os=linux  
 Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists  
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s  
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s  
 Events:  
   Type     Reason          Age                 From               Message  
   ----     ------          ----                ----               -------  
   Normal   Scheduled       11m                 default-scheduler  Successfully assigned azure-arc/clusterconnect-agent-57496ddf98-wxwl4 to crc-x4qnm-master-0  
   Normal   AddedInterface  11m                 multus             Add eth0 [10.217.0.180/23] from openshift-sdn  
   Normal   Pulled          11m                 kubelet            Container image "mcr.microsoft.com/azurearck8s/fluent-bit:1.6.1" already present on machine  
   Normal   Pulled          11m                 kubelet            Container image "mcr.microsoft.com/azurearck8s/clusterconnectservice-operator:1.6.1" already present on machine  
   Normal   Created         11m                 kubelet            Created container clusterconnectservice-operator  
   Normal   Started         11m                 kubelet            Started container clusterconnectservice-operator  
   Normal   Created         11m                 kubelet            Created container fluent-bit  
   Normal   Started         11m                 kubelet            Started container fluent-bit  
   Normal   Pulled          10m (x4 over 11m)   kubelet            Container image "mcr.microsoft.com/azurearck8s/clusterconnect-agent:1.6.1" already present on machine  
   Normal   Created         10m (x4 over 11m)   kubelet            Created container clusterconnect-agent  
   Normal   Started         10m (x4 over 11m)   kubelet            Started container clusterconnect-agent  
   Warning  BackOff         87s (x47 over 11m)  kubelet            Back-off restarting failed container  

The clusterconnect-agent container is also showing errors in its log.
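
In case it helps with reproducing, the log of the crashed container can be pulled with something along these lines (pod name taken from the describe output above; --previous shows the last terminated run):

kubectl logs clusterconnect-agent-57496ddf98-wxwl4 -c clusterconnect-agent -n azure-arc --previous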

Any help would be much appreciated. Thank you!

I have experienced identical issues lately on Azure Red Hat OpenShift (ARO) version 4.8.18.

The hack below temporarily fixed the issue with clusterconnect-agent, but it keeps reporting "Back-off restarting failed container" every 10 minutes.

Also, I'm still unable to get past the error on kube-aad-proxy: 'MountVolume.SetUp failed for volume "kube-aad-proxy-tls" : secret "kube-aad-proxy-certificate" not found'. Multiple Arc connects and pod restarts have failed identically over the last few days.

Happy to see I'm not the only one :)

I had a successful Kubernetes Arc onboarding experience earlier with agent version 1.5.9. Now I'm using the latest, 1.6.1.

We were experiencing the same issue, and it turns out that the problem lay with the configuration of our proxy server: we had not added the "https://*.his.arc.azure.com" URL (as described here) to the list of endpoints allowed by our proxy server. We were able to determine this by using oc debug node/... on a worker node, enabling the proxy on the node, and checking that the above-mentioned URL (with "weu" instead of "*") was indeed returning HTTP error "407 Proxy Authentication Required".

Once we added the https://*.his.arc.azure.com URL to the list of endpoints allowed by our proxy server, the issue was resolved. We are using ARO v4.8.18.
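
For anyone who wants to run the same check, it was roughly the following from a node debug shell (node name, proxy address and region are placeholders for your own values):

oc debug node/<worker-node-name>
# inside the debug shell, switch to the host filesystem
chroot /host
# request the regional Arc endpoint through the proxy (replace weu with your region)
curl -v -x http://<proxy-host>:<proxy-port> https://weu.his.arc.azure.com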

I'm having a similar issue.
However, it is intermittent: sometimes it works and sometimes it does not when running the same connect command against the same cluster.
I had assumed it was due to proxy authentication or network timeouts; however, this does not seem to be the case.

Note that if the clusterconnect-agent-xx pod errors within the first 10 seconds of running the command, kube-aad-proxy will never finish creating and the Arc connect will fail.

Hi @Sulien , same observation on my side. May I know whether you are able to onboard successfully now? I have tried around 20 times, with only one successful Azure Arc onboarding. I have attached more details on the clusterconnect-agent-xxx pod for further troubleshooting and hope someone from Microsoft can investigate.

G'day @Jimmy Hee Woon Siong ,
I've had success when Arc-connecting an OCP cluster on version 4.9.17 rather than the latest stable release (4.9.18). Which version are you running?
I've only tried once against this version so far; I will run the az connectedk8s delete command and re-connect a few times to check consistency.

The first two connects out of five were successful.

Not really a fix, but it seems the clusterconnect-agent pod can be healed by adding the following environment variable:
COMPlus_EnableDiagnostics with a value of '0'.

Not sure if this really is a fix, as I don't know whether it impacts other Arc functionality.

Here's a one-liner to apply the "fix":

oc patch deployment clusterconnect-agent -n azure-arc -p '{"spec":{"template":{"spec":{"containers":[{"name":"clusterconnect-agent","env":[{"name":"COMPlus_EnableDiagnostics","value":"0"}]}]}}}}'  

Give it a few minutes and the kube-aad-proxy pod will come up too.
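
If you prefer, the same environment variable can be set with oc set env (scoped to the clusterconnect-agent container), then watch the pods settle:

oc set env deployment/clusterconnect-agent -c clusterconnect-agent -n azure-arc COMPlus_EnableDiagnostics=0
oc get pods -n azure-arc -w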

Dear @Sulien ,
I am currently using OCP cluster version 4.9.8, where most attempts fail. Using the oc patch command you provided, I was able to start all the pods successfully without error. Just to mention, in my case, if the kube-aad-proxy pod does not start up, I can simply delete the pod and OpenShift will automatically create a new kube-aad-proxy pod that starts up successfully.
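
In case it helps, deleting the stuck pod by its label (taken from the describe output above) should work just as well; the ReplicaSet recreates it right away:

kubectl delete pod -n azure-arc -l app.kubernetes.io/component=kube-aad-proxy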

It might not be the fix, but it is a workaround that allows the pods to start successfully. Thank you for sharing your findings; I shall mark this as the accepted answer. If I get any input from Microsoft about a proper fix, I will update here as well. Thanks again!

Hey @Jimmy Hee Woon Siong ,

Microsoft has advised that the Arc agent has been updated. This seems to have resolved the issue for me (as well as the issue when deploying extensions).
Are your Arc connects working now without issue?

To add to the troubleshooting details: in my Arc-connected ARO case at least, the first pod with issues after running az connectedk8s connect seems to be config-agent, with the following error lines in the logs:

{"Message":"In clusterIdentityCRDInteraction status not populated","LogType":"ConfigAgentTrace","LogLevel":"Error", "Environment":"prod","Role":"ClusterConfigAgent" ...
{"Message":"get token from status error: status not populated","LogType":"ConfigAgentTrace","LogLevel":"Error", ...
{"Message":"2022/02/20 09:39:12 Error : Retry for given duration didn't get any results with err {status not populated}","LogType":"ConfigAgentTrace","LogLevel":"Information" ...
{"Message":"2022/02/20 09:39:12 Error in getting Token for clusterType: {ConnectedClusters}: error {Error : Retry for given duration didn't get any results with err {status not populated}}", ...
{"Message":"2022/02/20 09:39:12 Error: in getting auth header : error {Error : Retry for given duration didn't get any results with err {status not populated}}", ...
{"Message":"get token error: Error : Retry for given duration didn't get any results with err {status not populated}","LogType":"ConfigAgentTrace","LogLevel":"Error", ... ,"AgentName":"ConfigAgent","AgentVersion":"1.6.1",

This leaves the config-agent container in an unready state:

containers with unready status: [config-agent]

This may or may not lead to kube-aad-proxy and clusterconnect-agent pods having their own issues down the road.
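
For reference, those error lines can be tailed from the config-agent container with something like:

kubectl logs deploy/config-agent -c config-agent -n azure-arc --tail=50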

Hello

I have a Red Hat OpenShift cluster ready and am trying to connect it to Azure Arc. I followed the guide at https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli and successfully registered the resource providers and created the resource group.

PS C:\arc> az connectedk8s troubleshoot --name ais-ci-arc-oke01 --resource-group rg-arc-demo
This command is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Diagnoser running. This may take a while ...
Error: One or more agents in the Azure Arc are not fully running.
Error: We found an issue with outbound network connectivity from the cluster.
If your cluster is behind an outbound proxy server, please ensure that you have passed proxy parameters during the onboarding of your cluster.
For more details visit 'https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli#connect-using-an-outbound-proxy-server'.
Please ensure to meet the following network requirements 'https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli#meet-network-requirements'
The diagnoser logs have been saved at this path: C:\Users\Administrator.azure\arc_diagnostic_logs\ais-ci-arc-oke01-Sat-Aug-13-00.08.40-2022 .
These logs can be attached while filing a support ticket for further assistance.
PS C:\arc>
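
The quickstart's proxy section linked above suggests passing the proxy settings at onboarding time; as a sketch, with placeholder values for the proxy endpoints and excluded ranges, the connect command would look something like:

az connectedk8s connect --name ais-ci-arc-oke01 --resource-group rg-arc-demo --proxy-https https://<proxy-server>:<port> --proxy-http http://<proxy-server>:<port> --proxy-skip-range <excluded-ip-ranges>,kubernetes.default.svc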

weerayut@Weerayuts-MacBook-Pro ~ % kubectl get deployments,pods -n azure-arc
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-metadata-operator   1/1     1            1           104m
deployment.apps/clusterconnect-agent        1/1     1            1           104m
deployment.apps/clusteridentityoperator     1/1     1            1           104m
deployment.apps/config-agent                0/1     1            0           82m
deployment.apps/controller-manager          1/1     1            1           104m
deployment.apps/extension-manager           1/1     1            1           104m
deployment.apps/flux-logs-agent             1/1     1            1           104m
deployment.apps/kube-aad-proxy              0/1     1            0           6m
deployment.apps/metrics-agent               1/1     1            1           104m
deployment.apps/resource-sync-agent         1/1     1            1           104m

NAME                                             READY   STATUS              RESTARTS       AGE
pod/cluster-metadata-operator-6d4b957d65-8bcr7   2/2     Running             0              104m
pod/clusterconnect-agent-d5d6c6848-5qzt9         3/3     Running             16 (78s ago)   104m
pod/clusteridentityoperator-76bb64d65b-282cv     2/2     Running             0              104m
pod/config-agent-689cb54fc9-z7fmq                1/2     Running             0              82m
pod/controller-manager-69fd59cf7-58q7s           2/2     Running             0              104m
pod/extension-manager-6f56ffd7db-8nx67           2/2     Running             0              104m
pod/flux-logs-agent-88588c88-h4s6r               1/1     Running             0              104m
pod/kube-aad-proxy-fb444c6b9-cw6tv               0/2     ContainerCreating   0              6m
pod/metrics-agent-854dfbdc74-82qcj               2/2     Running             0              104m
pod/resource-sync-agent-77f8bb95d4-jb452         2/2     Running             0              104m

weerayut@Weerayuts-MacBook-Pro ~ % kubectl describe pods -n azure-arc config-agent-689cb54fc9-z7fmq
Name: config-agent-689cb54fc9-z7fmq
Namespace: azure-arc
Priority: 0
Node: node1.192.168.100.221.nip.io/192.168.100.221
Start Time: Fri, 12 Aug 2022 22:47:01 +0700
Labels: app.kubernetes.io/component=config-agent
app.kubernetes.io/name=azure-arc-k8s
pod-template-hash=689cb54fc9
Annotations: checksum/azureconfig: 304466be76b04e85cb4a48d705bbe4a0d40ae3b9ac288ea9a8209ccde4930ce3
checksum/proxysecret: 316deeb28892b1cdebfe5c12c2cd620b5b8f29289c1ffe3d4f5fc1b2e6a4ea7d
extensionEnabled: true
k8s.v1.cni.cncf.io/network-status:
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.130.0.57"
"default": true,
"dns": {}
k8s.v1.cni.cncf.io/networks-status:
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.130.0.57"
"default": true,
"dns": {}
openshift.io/scc: kube-aad-proxy-scc
prometheus.io/port: 8080
prometheus.io/scrape: true
Status: Running
IP: 10.130.0.57
IP: 10.130.0.57
Controlled By: ReplicaSet/config-agent-689cb54fc9
Containers:
config-agent:
Container ID: cri-o://479ea47e106961bd2ae3d34fb2ffbae9c79b533cd95f4963e8e4de55e346f3f4
Image: mcr.microsoft.com/azurearck8s/config-agent:1.7.4
Image ID: mcr.microsoft.com/azurearck8s/config-agent@sha256:09d645e1274c8d7030f95c54733b130c078b64d973a125091a430e7dc9547428
Port:
Host Port:
State: Running
Started: Fri, 12 Aug 2022 22:47:06 +0700
Ready: False
Restart Count: 0
Limits:
cpu: 50m
memory: 100Mi
Requests:
cpu: 5m
memory: 20Mi
Readiness: http-get http://:9090/readiness delay=10s timeout=1s period=15s #success=1 #failure=3
Environment Variables from:
azure-clusterconfig ConfigMap Optional: false
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xv7hf (ro)
fluent-bit:
Container ID: cri-o://7cc496e5aa7c82bd8c670a3a5cc636d732fe92c83a0b861d695590b7b5c4af0b
Image: mcr.microsoft.com/azurearck8s/fluent-bit:1.7.4
Image ID: mcr.microsoft.com/azurearck8s/fluent-bit@sha256:a4810fdfc59a38f29c1e5d3f29847e5866e719edcbb78eeb70802e820fafd02a
Port: 2020/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 12 Aug 2022 22:47:08 +0700
Ready: True
Restart Count: 0
Limits:
cpu: 20m
memory: 100Mi
Requests:
cpu: 5m
memory: 25Mi
Environment Variables from:
azure-clusterconfig ConfigMap Optional: false
Environment:
POD_NAME: config-agent-689cb54fc9-z7fmq (v1:metadata.name)
AGENT_TYPE: ConfigAgent
AGENT_NAME: ConfigAgent
Mounts:
/fluent-bit/etc/ from fluentbit-clusterconfig (rw)
/var/lib/docker/containers from varlibdockercontainers (ro)
/var/log from varlog (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xv7hf (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
HostPathType:
varlibdockercontainers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
HostPathType:
fluentbit-clusterconfig:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: azure-fluentbit-config
Optional: false
kube-api-access-xv7hf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional:
QoS Class: Burstable
Node-Selectors: kubernetes.io/arch=amd64
kubernetes.io/os=linux
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message

Normal Scheduled 82m default-scheduler Successfully assigned azure-arc/config-agent-689cb54fc9-z7fmq to node1.192.168.100.221.nip.io
Normal AddedInterface 82m multus Add eth0 [10.130.0.57/23] from openshift-sdn
Normal Pulled 82m kubelet Container image "mcr.microsoft.com/azurearck8s/config-agent:1.7.4" already present on machine
Normal Created 82m kubelet Created container config-agent
Normal Started 82m kubelet Started container config-agent
Normal Pulled 82m kubelet Container image "mcr.microsoft.com/azurearck8s/fluent-bit:1.7.4" already present on machine
Normal Created 82m kubelet Created container fluent-bit
Normal Started 82m kubelet Started container fluent-bit
Warning Unhealthy 2m53s (x384 over 82m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
weerayut@Weerayuts-MacBook-Pro ~ %

weerayut@Weerayuts-MacBook-Pro ~ % kubectl describe pods -n azure-arc kube-aad-proxy-fb444c6b9-cw6tv
Name: kube-aad-proxy-fb444c6b9-cw6tv
Namespace: azure-arc
Priority: 0
Node: node1.192.168.100.221.nip.io/192.168.100.221
Start Time: Sat, 13 Aug 2022 00:03:03 +0700
Labels: app.kubernetes.io/component=kube-aad-proxy
app.kubernetes.io/name=azure-arc-k8s
pod-template-hash=fb444c6b9
Annotations: checksum/proxysecret: 316deeb28892b1cdebfe5c12c2cd620b5b8f29289c1ffe3d4f5fc1b2e6a4ea7d
openshift.io/scc: kube-aad-proxy-scc
prometheus.io/port: 8080
prometheus.io/scrape: true
Status: Pending
Controlled By: ReplicaSet/kube-aad-proxy-fb444c6b9
Containers:
kube-aad-proxy:
Container ID:
Image: mcr.microsoft.com/azurearck8s/kube-aad-proxy:1.7.4-preview
Image ID:
Ports: 8443/TCP, 8080/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--secure-port=8443
--tls-cert-file=/etc/kube-aad-proxy/tls.crt
--tls-private-key-file=/etc/kube-aad-proxy/tls.key
--azure.client-id=6256c85f-0aad-4d50-b960-e6e9b21efe35
--azure.tenant-id=5d1751d4-0dcf-4283-8725-5f9ddf344632
--azure.enforce-PoP=true
--azure.skip-host-check=false
-v=info
--azure.environment=AZUREPUBLICCLOUD
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 350Mi
Requests:
cpu: 10m
memory: 20Mi
Readiness: http-get http://:8080/readiness delay=10s timeout=1s period=15s #success=1 #failure=3
Environment Variables from:
azure-clusterconfig ConfigMap Optional: false
Environment:
Mounts:
/etc/kube-aad-proxy from kube-aad-proxy-tls (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdcfk (ro)
fluent-bit:
Container ID:
Image: mcr.microsoft.com/azurearck8s/fluent-bit:1.7.4
Image ID:
Port: 2020/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 20m
memory: 100Mi
Requests:
cpu: 5m
memory: 25Mi
Environment Variables from:
azure-clusterconfig ConfigMap Optional: false
Environment:
POD_NAME: kube-aad-proxy-fb444c6b9-cw6tv (v1:metadata.name)
AGENT_TYPE: ConnectAgent
AGENT_NAME: kube-aad-proxy
Mounts:
/fluent-bit/etc/ from fluentbit-clusterconfig (rw)
/var/lib/docker/containers from varlibdockercontainers (ro)
/var/log from varlog (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdcfk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-aad-proxy-tls:
Type: Secret (a volume populated by a Secret)
SecretName: kube-aad-proxy-certificate
Optional: false
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
HostPathType:
varlibdockercontainers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
HostPathType:
fluentbit-clusterconfig:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: azure-fluentbit-config
Optional: false
kube-api-access-mdcfk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional:
QoS Class: Burstable
Node-Selectors: kubernetes.io/arch=amd64
kubernetes.io/os=linux
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message

Normal Scheduled 7m33s default-scheduler Successfully assigned azure-arc/kube-aad-proxy-fb444c6b9-cw6tv to node1.192.168.100.221.nip.io
Warning FailedMount 3m13s kubelet Unable to attach or mount volumes: unmounted volumes=[kube-aad-proxy-tls], unattached volumes=[varlog varlibdockercontainers fluentbit-clusterconfig kube-aad-proxy-tls kube-api-access-mdcfk]: timed out waiting for the condition
Warning FailedMount 82s (x11 over 7m33s) kubelet MountVolume.SetUp failed for volume "kube-aad-proxy-tls" : secret "kube-aad-proxy-certificate" not found
Warning FailedMount 59s (x2 over 5m31s) kubelet Unable to attach or mount volumes: unmounted volumes=[kube-aad-proxy-tls], unattached volumes=[kube-aad-proxy-tls kube-api-access-mdcfk varlog varlibdockercontainers fluentbit-clusterconfig]: timed out waiting for the condition
weerayut@Weerayuts-MacBook-Pro ~ %