
containers - delete Kubernetes persistent volume from StatefulSet after scale down

I scaled my StatefulSet up to 4 replicas, and after scaling back down to 1, I saw that I still have 4 persistent volumes, with indexes 0 to 3.

I also saw that the status of all of them is Bound. I guess this is because I use a StatefulSet, so it doesn't delete the volumes after scale down.

I tried to manually delete one of them (the one with index 2), because I was sure it would release my volume, so I used:

kubectl delete persistentvolume <volume>

Well, that didn't help; it just left this volume stuck in a Terminating state forever... :/

I have no idea how to remove this volume, or all the other unused ones, now.

Here is the volume configuration in the StatefulSet YAML:

  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "default"
        resources:
          requests:
            storage: 7Gi

If I run

kubectl get pvc --all-namespaces

I get:

NAMESPACE    NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default      data-0         Bound    pvc-23af1aec-e385-4778-b0b0-56f1d1dfdfee   7Gi        RWO            default        4h5m
default      data-1         Bound    pvc-34625107-1352-4715-b12c-2fc6ff22ed08   7Gi        RWO            default        4h4m
default      data-2         Bound    pvc-15dbdb53-d951-465d-b9c3-ebadfcc3c725   7Gi        RWO            default        4h3m
default      data-3         Bound    pvc-d317657f-194a-4f4f-8c5f-dff2843b693f   7Gi        RWO            default        4h3m

If I run

kubectl get --no-headers persistentvolumes

I get this:

pvc-15dbdb53-d951-465d-b9c3-ebadfcc3c725   7Gi   RWO   Delete   Terminating   default/data-2            default         4h4m
pvc-23af1aec-e385-4778-b0b0-56f1d1dfdfee   7Gi   RWO   Delete   Bound         default/data-0            default         4h6m
pvc-34625107-1352-4715-b12c-2fc6ff22ed08   7Gi   RWO   Delete   Bound         default/data-1            default         4h5m
pvc-d317657f-194a-4f4f-8c5f-dff2843b693f   7Gi   RWO   Delete   Bound         default/data-3            default         4h3m

1 Answer


With a StatefulSet, Kubernetes does not delete the PVs or PVCs on its own after a pod terminates. This is deliberate, to avoid further complications and for data safety; that's why after scaling down we need to do it manually. Deleting the PVCs after the pods have terminated will trigger deletion of the respective persistent volumes, depending on the storage class and reclaim policy.
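
Your kubectl get persistentvolumes output already shows a reclaim policy of Delete on all four volumes, so the PVs should be cleaned up automatically once their claims are gone. As a sketch, you can confirm the policy per volume, or switch it to Retain first if you want to keep the underlying disk (using the PV name bound to data-3 from your output; substitute your own):

# print the reclaim policy of one volume
kubectl get pv pvc-d317657f-194a-4f4f-8c5f-dff2843b693f -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# only if you want the disk to survive deletion of the claim
kubectl patch pv pvc-d317657f-194a-4f4f-8c5f-dff2843b693f -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'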

Try deleting the persistent volume claims (PVCs) instead of the persistent volumes. If you delete a PVC, it will automatically delete the respective PV.

Just run this command in your shell:

kubectl delete pvc data-3
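
Since you scaled down to 1 replica, data-1, data-2 and data-3 are all unused, so you can delete them together. This should also unstick the volume that has been Terminating forever: a PV keeps the kubernetes.io/pv-protection finalizer while a claim is bound to it, and deleting that claim lets the pending deletion complete. A sketch, with the PVC names taken from your kubectl get pvc output:

# remove all claims left over from the scale-down
kubectl delete pvc data-1 data-2 data-3

# verify that only the volume bound to data-0 remains
kubectl get persistentvolumes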


