Problem
Pods fail to start or run because PVC mount attachments fail, and the pod events show "FailedAttachVolume" or "FailedMount" errors similar to the following:
Warning FailedMount 5h32m (x31004 over 6h27m) kubelet MountVolume.WaitForAttach failed for volume "pvc-[pvc-id]" : volume attachment is being deleted
Warning FailedAttachVolume 5h30m (x36 over 6h27m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-[pvc-id]" : volume attachment is being deleted
Environment
- Self-Hosted Private Cloud Director Virtualization, v2025.4 and higher.
Procedure
- Check the number of pods in the Init state to identify any pod stuck in initialisation. The failure of PVC mount attachments can cause pods to remain in an Init state. A quick way to find the affected PVCs and volume attachments is sketched after this procedure.
$ kubectl get pods -A | grep -i "init"
- Run the following command to identify the storage backend (CSI driver) in use.
$ kubectl get csidrivers
- Verify that the CSI driver pods are running. These pods can be either in a dedicated namespace or in the kube-system namespace. In the example below, the NetApp backend uses the trident namespace to host its storage backend pods.
$ kubectl get pods -n <CSI-driver-namespace>
E.g.
$ kubectl get pods -n trident
trident trident-controller-pod 0/6 ContainerCreating 0 6h44m
trident trident-node-linux-pod 0/2 CrashLoopBackOff 20 (5h33m ago) 23m
trident trident-node-linux-pod 0/2 CrashLoopBackOff 15 (5d4h ago) 23d
trident trident-node-linux-pod 0/2 CrashLoopBackOff 34 (5d3h ago) 23d
- As Calico is responsible for providing pod networking, review all Calico pods and determine why any of them are in a "CrashLoopBackOff", "ContainerCreating", "OOMKilled", "Pending", or "Error" state; see the Events section in the output of the following command.
$ kubectl describe pod <pod-name> -n <calico-namespace>
- Get more information on the failure from the pod logs using the following command (a loop that collects logs from all CSI driver pods at once is sketched after this procedure):
$ kubectl logs <pod-name> -n <CSI-driver-namespace>
- If these steps do not resolve the issue, contact your backend storage provider or reach out to the Platform9 Support Team for additional assistance.
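The "volume attachment is being deleted" events refer to a VolumeAttachment object, so it can also help to inspect the attachments and claims directly. The commands below are only a minimal sketch; the PVC ID and attachment name are placeholders taken from the pod events and from the list output.
$ kubectl get volumeattachments
$ kubectl get volumeattachments | grep "pvc-<pvc-id>"   # find the attachment whose PV column matches the volume named in the events
$ kubectl describe volumeattachment <volume-attachment-name>
$ kubectl get pvc -A | grep -v Bound   # PVCs stuck in a non-Bound state usually point at the same backend problem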
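If several CSI driver pods are failing at once, a small loop can collect recent logs from all of them in one pass. This is only a sketch, assuming the trident namespace from the example above; substitute your own CSI driver namespace, and note that containers still in ContainerCreating may not have logs yet.
$ for pod in $(kubectl get pods -n trident -o name); do echo "==== $pod ===="; kubectl logs -n trident "$pod" --all-containers --tail=50; done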
Most common causes
- The storage backend is unreachable (a quick reachability and node resource check is sketched below).
- The underlying host does not have sufficient resources to run these pods.
- The CSI driver itself is not configured correctly or has errors.
- Calico network pods are not working as expected.
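For the first two causes, a rough first check is sketched below. This is only a sketch, assuming an iSCSI or NFS backend whose data address is known; the address, port, and node name are placeholders. The ping and nc checks are run from the affected worker node, the kubectl commands from any host with cluster access, and kubectl top requires metrics-server to be installed.
$ ping -c 3 <storage-backend-address>
$ nc -zv <storage-backend-address> <3260-for-iSCSI-or-2049-for-NFS>
$ kubectl describe node <node-name> | grep -A 8 "Allocated resources"
$ kubectl top node <node-name>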