```shell
# Filter by label value
kubectl get po -l creation_method=manual
# List pods that carry a given label
kubectl get po -l env
# List pods that do not carry a given label
kubectl get po -l '!env'
# Match or exclude specific values
kubectl get po -l creation_method!=manual
kubectl get po -l 'env in (prod,dev)'
kubectl get po -l 'env notin (prod,dev)'
# Multiple conditions
kubectl get po -l app=pc,env=prod
```
```shell
# 1. Use labels to categorize worker nodes
kubectl label node xxxxxxxxxxx gpu=true
```
```yaml
# 2. Schedule the pod onto a GPU node
apiVersion: v1
kind: Pod
metadata:
  name: kubia-gpu
spec:
  nodeSelector:
    gpu: "true"          # ask Kubernetes to schedule this pod only onto nodes labeled gpu=true
  containers:
  - image: luksa/kubia
    name: kubia
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 10
```
The same set of probe types is available for liveness probes (httpGet, tcpSocket, exec).
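For instance, a minimal sketch of an exec-type probe; the file path is purely illustrative:

```yaml
livenessProbe:
  exec:
    command:             # the probe succeeds if the command exits with status 0
    - cat
    - /tmp/healthy       # hypothetical file the application keeps around while healthy
```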
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia                  # name of the ReplicationController
spec:
  replicas: 3                  # number of pod replicas
  selector:
    app: kubia
  template:                    # template for the pods managed by the ReplicationController
    metadata:
      labels:
        app: kubia             # pod label
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
```
Modifications: labels and templates
Deleting a ReplicationController also deletes all the pods it manages.
```shell
# Keep the pods running by adding --cascade=false
kubectl delete rc kubia --cascade=false
```
Adding extra labels to a pod while the ReplicationController is running does not affect the ReplicationController's replica count.
If you delete or change a pod label that the ReplicationController's selector matches, that pod is removed from the ReplicationController's management, and the ReplicationController creates a new pod to keep the replica count.
Changing the ReplicationController's template does not affect existing pods; it only affects pods created afterwards (see the sketch below).
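A quick sketch of the last two points; the pod name is a placeholder:

```shell
# Overwrite the app label on one pod: the RC stops managing it and creates a replacement
kubectl label pod kubia-abcde app=foo --overwrite
# Edit the RC's pod template: existing pods are untouched, only new pods use the new template
kubectl edit rc kubia
```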
```shell
kubectl scale rc kubia --replicas=5
```
ReplicaSet is the replacement for ReplicationController.
It behaves the same as a ReplicationController, but its pod selector is more expressive. For example, a ReplicaSet can match several sets of labels at once, and it can also match on the mere presence of a label key.
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:         # simple equality matching; matchExpressions can be used instead
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
```
The enhanced label selector
```yaml
selector:
  matchExpressions:
  - key: app             # label key
    # operators: In, NotIn, Exists (key must be present; no values), DoesNotExist (key must be absent; no values)
    operator: In
    values:
    - kubia
```
A DaemonSet starts one daemon pod on every node that matches a given label.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor-ds
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd        # one pod labeled ssd-monitor is deployed on every node labeled disk=ssd
      containers:
      - name: main
        image: luksa/ssd-monitor
```
If a node's label is changed from ssd to hdd, the corresponding pod is terminated as well.
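To see this, relabel a node; the node name below is a placeholder:

```shell
# The DaemonSet then terminates the ssd-monitor pod on that node
kubectl label node <node-name> disk=hdd --overwrite
```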
When the process inside a Job's pod finishes successfully, the container is not restarted and the pod is marked Completed. If the process exits abnormally, the Job can be configured to restart it.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure   # must not be Always (the default), which would auto-restart the container
      containers:
      - name: main
        image: luksa/batch-job
```
A Job can create multiple pod instances and run them in parallel or sequentially, configured via the completions and parallelism properties.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 5
  parallelism: 1
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
```
Execution: the first pod runs to completion, then the next one is created, and so on; effectively the same workload is started 5 times in sequence.
When parallelism > 1, up to that many pods can run at the same time.
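parallelism can also be raised while the Job is running; a minimal sketch using kubectl patch:

```shell
# Allow up to 3 pods of this Job to run concurrently
kubectl patch job multi-completion-batch-job -p '{"spec":{"parallelism":3}}'
```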
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: batch-job-every-fifteen-minutes
spec:
  schedule: "0,15,30,45 * * * *"   # run at minutes 0, 15, 30 and 45 of every hour
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: luksa/batch-job
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  #sessionAffinity: ClientIP   # whether to keep session affinity: all requests from the same client IP go to the same pod
  ports:
  - name: http                 # port name
    port: 80                   # port the service exposes
    targetPort: 8080           # port the app listens on inside the container; a named pod port can be used instead
  - name: https
    port: 443
    targetPort: 8443
  selector:
    app: kubia                 # all pods labeled app=kubia belong to this service
```
```
➜ k8s-in-action kubectl get po
NAME                                               READY   STATUS      RESTARTS   AGE
batch-job-every-fifteen-minutes-1573700400-skbmv   0/1     Completed   0          10m
batch-job-r9pkj                                    0/1     Completed   0          36m
kubia-bbt27                                        1/1     Running     0          3m58s
kubia-h5cjl                                        1/1     Running     0          3m58s
kubia-ttv72                                        1/1     Running     0          3m58s
➜ k8s-in-action kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   10h
kubia        ClusterIP   10.104.37.60   <none>        80/TCP    5m33s
➜ k8s-in-action kubectl exec kubia-h5cjl -- curl -s http://10.104.37.60
You've hit kubia-bbt27
➜ k8s-in-action kubectl exec kubia-h5cjl -- curl -s http://10.104.37.60
You've hit kubia-ttv72
```
The double dash (--) marks the end of the kubectl command options; everything after it is the command to run inside the pod. If that command takes no dashed options (e.g. no -s), the double dash can be omitted.
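For example, both of the following reach the same service; only the first needs the double dash because of the -s flag:

```shell
kubectl exec kubia-h5cjl -- curl -s http://10.104.37.60
kubectl exec kubia-h5cjl curl http://10.104.37.60
```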
Cluster-internal services also have endpoints, which you can see with `kubectl describe svc kubia`.
```
➜ k8s-in-action kubectl describe svc kubia
Name:              kubia
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"ports":[{"port":80,"target...
Selector:          app=kubia
Type:              ClusterIP
IP:                10.104.37.60
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         172.17.0.6:8080,172.17.0.7:8080,172.17.0.8:8080
Session Affinity:  ClientIP
Events:            <none>
```
Manually configured endpoints are mainly used to point a service at servers outside the cluster.
The steps are:
Create a service S without a pod selector
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80
```
Create an Endpoints resource for service S
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service   # must match the service name
subsets:
- addresses:
  - ip: 11.11.11.11
  - ip: 22.22.22.22        # IP addresses of the endpoints
  ports:
  - port: 80               # target port of the endpoints
```
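With both objects in place, pods inside the cluster can reach the external IPs through the service name; a quick check (the pod name is a placeholder):

```shell
kubectl exec kubia-abcde -- curl -s http://external-service
```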
The NodePort approach
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80             # port the service exposes
    targetPort: 8080     # port the app listens on inside the container; a named pod port can be used instead
    nodePort: 30123      # a random port is assigned if this is omitted
  selector:
    app: kubia           # all pods labeled app=kubia belong to this service
```
```
➜ k8s-in-action kubectl apply -f kubia-svc-nodeport.yaml
service/kubia-nodeport created
➜ k8s-in-action kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        11h
kubia-nodeport   NodePort    10.104.247.31   <none>        80:30123/TCP   4s
```
The service can now be reached from inside the cluster at the cluster IP (10.104.247.31:80), and from outside at <node-IP>:30123 on any node.
minikube users can look up a NodePort service quickly with:
```
➜ k8s-in-action minikube service kubia-nodeport
|-----------|----------------|-------------|---------------------------|
| NAMESPACE | NAME           | TARGET PORT | URL                       |
|-----------|----------------|-------------|---------------------------|
| default   | kubia-nodeport |             | http://192.168.64.2:30123 |
|-----------|----------------|-------------|---------------------------|
🎉 Opening service default/kubia-nodeport in default browser...
```
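The URL from the table above can also be hit directly from the host:

```shell
curl http://192.168.64.2:30123
```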
The LoadBalancer approach
Note: minikube does not support this yet.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
```
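On a cloud provider that provisions load balancers, the external IP eventually shows up in kubectl get svc, and the service is then reachable on port 80; a sketch (the IP is a placeholder):

```shell
kubectl get svc kubia-loadbalancer   # wait until EXTERNAL-IP is no longer <pending>
curl http://<external-ip>
```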
Creating an Ingress resource
Every LoadBalancer service needs its own load balancer and its own public IP address, whereas a single Ingress can expose many services through one public IP. It can also provide cookie-based session affinity.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com            # map this domain name to the service
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport  # forward all requests to port 80 of the kubia-nodeport service
          servicePort: 80
```
Start the NodePort service and the ReplicaSet
Add a hosts entry to resolve the domain
```shell
$ echo "$(minikube ip) kubia.example.com" | sudo tee -a /etc/hosts
```
Visit http://kubia.example.com
Required setup for minikube:
```
# Enable the ingress addon
➜ minikube addons enable ingress
# Afterwards, check whether the controller image was pulled successfully; here the pull failed
➜ k8s-in-action kubectl get pods -n kube-system
NAME                                        READY   STATUS             RESTARTS   AGE
coredns-67c766df46-6sxxr                    1/1     Running            1          12h
coredns-67c766df46-jrmbn                    1/1     Running            1          12h
etcd-minikube                               1/1     Running            1          12h
kube-addon-manager-minikube                 1/1     Running            1          12h
kube-apiserver-minikube                     1/1     Running            1          12h
kube-controller-manager-minikube            1/1     Running            1          12h
kube-proxy-9bxng                            1/1     Running            1          12h
kube-scheduler-minikube                     1/1     Running            1          12h
nginx-ingress-controller-6fc5bcc8c9-sxf25   0/1     ImagePullBackOff   0          8m45s
storage-provisioner                         1/1     Running            1          12h
```