Resolved: Removing dead Kubernetes cluster from view

Question:

I have been learning Kubernetes for some time (not a pro yet). I am using docker-desktop on Windows 11 with Kubernetes, and everything works fine. At some point I added an AKS (Azure Kubernetes Service) cluster to my test lab, and that AKS cluster was later deleted from the Azure Portal.
So when I run kubectl config view I get the following output:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://aksxxxxxxx-rg-aks-xxxxxx-xxxxxxxx.hcp.northeurope.azmk8s.io:443
  name: aksxxxxxxx
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: clusterUser_AKSRG_aksxxxxxxx
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
- name: clusterUser_RG-AKS_aksxxxxxxx
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
So what I did: I ran kubectl config get-contexts and deleted the unused AKS context using kubectl config delete-context aksxxxxxxx, and now only one context remains, docker-desktop:
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         docker-desktop   docker-desktop   docker-desktop
Which is fine, but my question is: how do I clean up the view so it does not show unused (dead) clusters and users that no longer exist? Or am I taking the wrong approach?

Answer:

kubectl config view shows your whole configuration stored in .kube/config.
kubectl config get-contexts lists your contexts. A context is the mapping between a cluster and a user.
Since you have only deleted the mapping (the context), the cluster and user entries are still in your .kube/config, which is why they still appear in the output of kubectl config view.
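To see exactly which leftover entries you need to remove, you can list them first. A quick check, assuming a reasonably recent kubectl (kubectl config get-users is only available in newer releases):

# List all cluster entries in the kubeconfig
kubectl config get-clusters

# List all user entries (newer kubectl versions only)
kubectl config get-users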
To delete the cluster you can use kubectl config delete-cluster aksxxxxxxx. To delete a user you can use kubectl config unset users.clusterUser_RG-AKS_aksxxxxxxx (run it once for each leftover user entry).
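Putting it all together, a cleanup for the example config above could look like this sketch (the aksxxxxxxx names are the placeholders from the question; substitute your real entry names):

# Remove the context mapping (already done in the question)
kubectl config delete-context aksxxxxxxx

# Remove the dead cluster entry
kubectl config delete-cluster aksxxxxxxx

# Remove both leftover user entries
kubectl config unset users.clusterUser_AKSRG_aksxxxxxxx
kubectl config unset users.clusterUser_RG-AKS_aksxxxxxxx

# Verify that only docker-desktop is left
kubectl config view

Newer kubectl releases also offer kubectl config delete-user <name> as an equivalent to the unset form.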

If you have a better answer, please add a comment about this, thank you!

Source: Stackoverflow.com