kyverno: Kyverno-pre container takes hours to complete
Hi all!
I'm having some issues using Kyverno in my cluster. If for some reason the Kyverno pod is re-created, it takes hours for the init container to finish.
If I check the logs, I see a lot of messages like this.
I0401 12:49:00.252535 1 main.go:371] "msg"="successfully cleaned up resource" "kind"="ReportChangeRequest" "name"="rcr-2wz4m"
And if I check the ReportChangeRequest resources in the cluster, there are more than 20,000 items in the list. Is there a reason for that? Is this the expected behaviour?
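For reference, this is roughly how I'm counting them (just a sketch; the resource name may differ between Kyverno versions):
kubectl get reportchangerequests -A --no-headers | wc -l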
I'm running
- Kubernetes version: 1.17.5
- Kyverno version: 1.3.4
Any idea about this?
Thanks in advance!
6 Answers:
Hi @sosoriov, the ReportChangeRequest is used to generate policy reports, and these requests should be cleaned up once they are merged into reports. Ideally you should not see any ReportChangeRequest left on the cluster. Did you encounter any failure when installing Kyverno? It would be good if you could share steps to reproduce.
Link to a similar conversation on the #kyverno Slack channel where Kyverno failed to start, which caused the same issue.
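A rough way to verify whether the merge is happening is to compare the pending RCRs against the generated policy reports, for example (assuming the default resource names):
kubectl get reportchangerequests -A --no-headers | wc -l
kubectl get policyreports -A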
Hi @realshuting , thanks for your reply.
No, no errors at all. Actually, Kyverno starts running fine in the cluster and after a few days it just crashes and never comes back. And then, if I try to re-create the pod, I get this issue.
"Actually, Kyverno starts running fine in the cluster and after a few days it just crashes and never comes back."
Do we have any traces of why Kyverno crashed? I believe this is why those ReportChangeRequests were not cleaned up by Kyverno. The init container is designed to recover from this state, but it seems it took time to clean up tons of resources.
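To dig up those traces, something along these lines usually helps (the pod name below is a placeholder):
kubectl -n kyverno get pods
kubectl -n kyverno logs <kyverno-pod-name> --previous
kubectl -n kyverno describe pod <kyverno-pod-name>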
One way to recover quickly is to delete the Kyverno CRDs and re-install. You can grep the Kyverno CRDs and use kubectl delete to remove them.
✗ kubectl get crd | grep kyverno
clusterpolicies.kyverno.io 2021-03-26T22:50:34Z
clusterreportchangerequests.kyverno.io 2021-03-26T22:50:34Z
generaterequests.kyverno.io 2021-03-26T22:50:34Z
policies.kyverno.io 2021-03-26T22:50:34Z
reportchangerequests.kyverno.io 2021-03-26T22:50:34Z
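For example, to remove them all in one go (a sketch; double-check the matched names first, as deleting the CRDs also deletes the policies stored in them):
kubectl get crd -o name | grep kyverno.io | xargs kubectl delete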
If you are installing from install.yaml, you can do kubectl delete -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml.
Note that helm delete does not delete CRDs automatically, see https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations.
In this thread, we found that a policy applied to CronJobs can result in stale RCRs.
@sosoriov - do you have CronJobs running in your cluster? If so, upgrading to v1.3.5-rc3 may solve the issue. In the meantime, we will optimize the cleanup process in the init container.
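To double-check, something like this should list the CronJobs across all namespaces:
kubectl get cronjobs -A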
@realshuting Indeed, I have a few CronJobs running in our cluster.
I'll do the upgrade and let you know how it goes.
Thanks for your help.
Hey @realshuting, I can confirm that v1.3.5-rc3 solves the issue regarding the init container. It seems that ReportChangeRequests are being cleaned up properly.
It's worth mentioning that this version needs extra memory resources. I had to increase the limits of the Kyverno container, and everything is working fine. I'll keep an eye on the deployment and wait for the stable release.
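For reference, I bumped the limits roughly like this (the namespace, deployment name, and value are just what I used, not a recommendation):
kubectl -n kyverno set resources deployment kyverno --limits=memory=1Gi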
Thanks for your help.