Error from server (Forbidden): deployments.extensions "my-super-deployment" is forbidden:
User "myuser" cannot patch resource "deployments" in API group "extensions" in the namespace
"mynamespace": requires one of ["container.deployments.update"] permission(s).
It seems to require PATCH / PUT permission under /apis/apps/v1/namespaces/{namespace}/deployments/* or the older /apis/extensions/v1beta1/namespaces/{namespace}/deployments/*.
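For reference, a quick way to confirm which verbs the affected user actually has is kubectl auth can-i (a small sketch, run as that user; in the scenario above it should answer yes for get and no for patch):
$ kubectl auth can-i get deployments -n mynamespace
$ kubectl auth can-i patch deployments -n mynamespace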
What you expected to happen:
Diff output to be shown.
How to reproduce it (as minimally and precisely as possible):
Attempt to diff a Deployment resource as a user lacking write access to the deployment. The permission named in the example above is GKE-specific. A minimal sketch of that attempt follows.
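Sketch (the kubeconfig context name viewer and the file name are illustrative; the context is assumed to be bound only to a read-only role):
$ kubectl --context viewer diff -f deployment.yaml
Instead of printing a diff, the command fails with a Forbidden error like the one quoted above.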
Anything else we need to know?:
Environment:
Kubernetes client and server versions (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.12-gke.20", GitCommit:"0ac5f81eecab42bff5ef74f18b99d8896ba7b89b", GitTreeState:"clean", BuildDate:"2020-09-09T00:48:20Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: Google, GKE
OS (e.g: cat /etc/os-release): n/a
kubectl supports +1/-1 minor versions of the API server, so your v1.19.3 client against a v1.15 server is well outside our version skew policy.
You could try using an older kubectl version or upgrading your cluster to a supported version.
Please reopen if you run into this with a supported version.
/close
Hi @eddiezane
Here is a minimal example:
This uses https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml as test.yaml.
$ kubectl apply -f test.yaml
deployment.apps/nginx-deployment created
$ kubectl get deployment
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default nginx-deployment 3/3 3 3 14s
$ kubectl create clusterrolebinding view --group=view --clusterrole=view
clusterrolebinding.rbac.authorization.k8s.io/view created
$ kubectl get deployment --as=kubernetes-admin --as-group=view
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 20m
$ kubectl apply -f test-modified.yaml --as=kubernetes-admin --as-group=view
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":88}]}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"nginx"}],"containers":[{"$setElementOrder/ports":[{"containerPort":88}],"name":"nginx","ports":[{"containerPort":88},{"$patch":"delete","containerPort":80}]}]}}}}
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-deployment", Namespace: "default"
for: "test.yaml": deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl diff -f test-modified.yaml --as=kubernetes-admin --as-group=view
Error from server (Forbidden): deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:08:32Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
/reopen
Yes this is the current expected behavior:
https://kubernetes.io/docs/reference/using-api/api-concepts/#dry-run-authorization
This issue is a duplicate of kubernetes/kubernetes#95449
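Since dry-run requests are authorized exactly like regular mutating requests, one workaround (a sketch, not a recommendation; the role and binding names are illustrative) is to grant the diffing identity patch on the resources it needs to compare:
$ kubectl create role deploy-differ --verb=get,list,patch --resource=deployments -n default
$ kubectl create rolebinding deploy-differ --role=deploy-differ --group=view -n default
This of course defeats the point of a read-only identity, which is what the rest of this thread is about.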
Reopening this. The other discussion went nowhere.
To maybe make it more clear, comparing the deployed state and a local manifest should not rely on write or patch permissions to a cluster. And I think that is where the misunderstanding is. At a minimum, diff should fall back to a client-side comparison.
The same issue of kubectl diff using PATCH on the server (even with --server-side=false) applies to other things that are astonishingly broken, like doing kubectl diff on an ingress with an admission controller. The PATCH triggers an admission controller and can cause the diff to fail.
% kubectl diff --server-side=false -f ingress-site.yaml
Error from server (BadRequest): admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "site.company.com" and path "/" is already defined in ingress dev/ingress-site
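To see which validating webhooks would intercept such a dry-run PATCH (a quick check, not a fix), you can list the configurations on the cluster:
$ kubectl get validatingwebhookconfigurations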
/reopen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
(lifecycle/stale label added, Nov 17, 2021)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
(lifecycle/rotten label added and lifecycle/stale label removed, Dec 17, 2021)
@mbrancato apologies for the friction here.
You are correct and this appears to be a valid issue. A client-side-only diff should not be making a PATCH request. This is because it's currently hard-coded to use dry run. This is a bug and should be fixed.
https://github.com/kubernetes/kubernetes/blob/23ee308ed7ed7c80f7e4d0437f38400f8df6faf8/staging/src/k8s.io/kubectl/pkg/cmd/diff/diff.go#L350
/triage accepted
/remove-lifecycle rotten
/priority backlog
One point of friction is that the --server-side flag name is misleading. The --server-side flag indicates whether the new patch is created with server-side apply or client-side apply. Ideally this would be clearer for kubectl diff with something like --server-side-apply.
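For illustration, the two modes as they exist today (test.yaml is just the example file from earlier in this thread):
$ kubectl diff -f test.yaml
$ kubectl diff --server-side -f test.yaml
The first computes a client-side apply patch but still sends it to the server as a dry-run PATCH; the second uses a server-side apply dry run. Neither works without patch permission.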
I don't think this is a bug because diff is intended to only diff a live object against the new patched object.
It sounds like this request is for a client-side version of diff, with the live object retrieved from the server diffed against a patched object that's constructed locally. I think this sounds close to #1147.
I don't think this is a bug because diff is intended to only diff a live object against the new patched object.
If that is the case, I think the help text from kubectl diff --help is misleading:
$ kubectl diff --help
Diff configurations specified by file name or stdin between the current online configuration, and the configuration as
it would be if applied.
I interpret this as being able to diff the current online configuration with some offline configuration, and be able to see what would change if the offline configuration was applied to the online configuration. As long as I have permission to get the current online configuration, I would expect to be able to see what would change as well.
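In the meantime, a rough client-side approximation that only needs get permission (a sketch; the object and file names match the earlier example) is to dump the live object and diff it locally:
$ kubectl get deployment nginx-deployment -n default -o yaml > live.yaml
$ diff -u live.yaml test-modified.yaml
The output is noisy because the live object carries status, managedFields, and defaulted fields, which is part of why a proper client-side fallback inside kubectl diff would be preferable.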
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
(lifecycle/stale label added, Jun 23, 2022)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
(lifecycle/rotten label added and lifecycle/stale label removed, Jul 24, 2022)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Reopen this issue or PR with /reopen
Mark this issue or PR as fresh with /remove-lifecycle rotten
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
If you landed here looking for a way to incorporate kubectl diff into CI with read-only access, see https://github.com/weaveworks/kubediff. It performs the diff by reading all the objects in a k8s namespace and then comparing them to a set of local manifests. No PATCH permissions necessary.
edit: the project looks dead, the markdown diff output is kind of crap (e.g. it will say "Error: Missing" if something is missing on the server), and the json output is limited.
(Labels: lifecycle/rotten, priority/backlog, triage/accepted)