Deleting a Greenplum Cluster
This section describes how to delete the pods and other resources that are created when you deploy a Greenplum cluster to Kubernetes. Note that deleting these cluster resources does not automatically delete the Persistent Volume Claims (PVCs) that the cluster used to store data. This enables you to re-deploy the same cluster at a later time, to pick up where you left off. You can optionally delete the PVCs if you want to create an entirely new (empty) cluster at a later time.
Follow these steps to delete the Greenplum pods, services, and other objects, leaving the Persistent Volumes intact:
Navigate to the workspace directory of the Greenplum for Kubernetes installation (or to the location of the Kubernetes manifest that you used to deploy the cluster). For example:
$ cd ./greenplum-for-kubernetes-*/workspace
Use the kubectl delete command, specifying the manifest that you used to deploy the cluster. For example:
$ kubectl delete -f ./my-gp-instance.yaml --wait=false
kubectl stops the Greenplum for Kubernetes instance and deletes the Kubernetes resources for the Greenplum deployment.
Note: Use the optional --wait=false flag to return immediately without waiting for the deletion to complete.
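If you return immediately with --wait=false, you can still block later until the deletion finishes by using kubectl wait. A minimal sketch, assuming your cluster pods carry the greenplum-cluster=my-greenplum label as in the examples in this section:

```shell
# Start the deletion without blocking, then wait (up to 5 minutes)
# for all pods with the cluster label to be removed.
# The label value is an assumption matching the examples in this section.
kubectl delete -f ./my-gp-instance.yaml --wait=false
kubectl wait pod -l greenplum-cluster=my-greenplum --for=delete --timeout=300s
```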
Use kubectl to describe the Greenplum cluster and verify that it is being deleted:
$ kubectl describe greenplumcluster my-greenplum
[...]
Status:
  Instance Image:    greenplum-for-kubernetes:latest
  Operator Version:  greenplum-operator:latest
  Phase:             Deleting
Events:
  Type    Reason                    Age  From               Message
  ----    ------                    ---  ----               -------
  Normal  CreatingGreenplumCluster  3m   greenplumOperator  Creating Greenplum cluster my-greenplum in default
  Normal  CreatedGreenplumCluster   1m   greenplumOperator  Successfully created Greenplum cluster my-greenplum in default
  Normal  DeletingGreenplumCluster  6s   greenplumOperator  Deleting Greenplum cluster my-greenplum in default
If stopping the Greenplum instance fails for any reason, a warning message appears in the greenplum-operator logs, as shown below:
$ kubectl logs -l app=greenplum-operator
[...]
time="2019-04-10T17:52:14Z" level=info msg="DeletingGreenplumCluster my-greenplum in default"
time="2019-04-10T17:52:15Z" level=info msg="initiating shutdown of the greenplum cluster..."
time="2019-04-10T17:52:15Z" level=warning msg="gpstop did not stop cleanly. Please check gpAdminLogs for more info."
[...]
time="2019-04-10T17:52:15Z" level=info msg="DeletedGreenplumCluster my-greenplum in default"
However, the Greenplum instance is still deleted and all associated resources are cleaned up.
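To spot such warnings quickly, you can filter the operator logs for warning-level entries. A minimal sketch, shown here against a sample log excerpt; in practice, pipe the output of kubectl logs -l app=greenplum-operator through the same filter:

```shell
# Filter warning-level entries out of operator-style log lines.
# The here-document stands in for live output from:
#   kubectl logs -l app=greenplum-operator
grep 'level=warning' <<'EOF'
time="2019-04-10T17:52:14Z" level=info msg="DeletingGreenplumCluster my-greenplum in default"
time="2019-04-10T17:52:15Z" level=warning msg="gpstop did not stop cleanly. Please check gpAdminLogs for more info."
time="2019-04-10T17:52:15Z" level=info msg="DeletedGreenplumCluster my-greenplum in default"
EOF
```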
Use kubectl to monitor the progress of terminating Greenplum resources in your cluster. For example, if your cluster deployment was named my-greenplum:
$ kubectl get all -l greenplum-cluster=my-greenplum
NAME                                     READY   STATUS        RESTARTS   AGE
pod/greenplum-operator-7b5ddcb79-vnwvc   1/1     Running       0          9m
pod/master-0                             0/1     Terminating   0          5m
pod/segment-a-0                          0/1     Terminating   0          5m
pod/segment-a-1                          0/1     Terminating   0          5m
pod/segment-b-0                          0/1     Terminating   0          5m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   26m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           9m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1       9m
The deletion process is complete when the segment pods are no longer available (and no resources remain):
$ kubectl get all -l greenplum-cluster=my-greenplum
No resources found.
If the Kubernetes resources remain and do not enter the “Terminating” status after 5 minutes, check the operator logs for error messages.
$ kubectl logs -l app=greenplum-operator
The Greenplum Operator should remain for future deployments:
$ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/greenplum-operator-7b5ddcb79-vnwvc   1/1     Running   0          34m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   50m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           34m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1       34m
Deleting the Greenplum pods and other resources does not delete the associated persistent volume claims that were created for it. This is expected behavior for a Kubernetes cluster, as it gives you the opportunity to access or back up the data. If you no longer have any use for the Greenplum volumes (for example, if you want to install a brand new cluster), then follow this procedure to delete the Persistent Volume Claims (PVCs) and Persistent Volumes (PVs).
You may also need to delete Greenplum PVCs if you want to change the storage size or deploy an existing cluster to a different storage class. The Greenplum Operator maintains an association between the cluster name and its PVCs, so you cannot redeploy a cluster to a different storage class or change the storage size without first deleting both the cluster deployment and its PVCs.
Caution: If the Persistent Volumes were created using dynamic provisioning, then deleting the PVCs will also delete the associated PVs. In this case, do not delete the PVCs unless you are certain that you no longer need the data.
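Before deleting any PVCs, you can inspect each Persistent Volume's reclaim policy to see whether deleting its claim will also delete the volume and its data. A sketch using kubectl's built-in custom-columns output:

```shell
# List each Persistent Volume with its reclaim policy and bound claim.
# A "Delete" reclaim policy (typical for dynamic provisioning) means the
# PV and its data are removed when the PVC is deleted; "Retain" means
# the PV survives.
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
```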
Verify that the PVCs are present for your cluster. For example, to show the Persistent Volume Claims created for a cluster named my-greenplum:
$ kubectl get pvc -l greenplum-cluster=my-greenplum
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-greenplum-pgdata-master-0      Bound    pvc-fed2d560-cdae-11e8-a5ee-58691a3924c6   1G         RWO            standard       4d
my-greenplum-pgdata-master-1      Bound    pvc-05c953b8-cdaf-11e8-a5ee-58691a3924c6   1G         RWO            standard       4d
my-greenplum-pgdata-segment-a-0   Bound    pvc-fed8308c-cdae-11e8-a5ee-58691a3924c6   2G         RWO            standard       4d
my-greenplum-pgdata-segment-b-0   Bound    pvc-fedff4b4-cdae-11e8-a5ee-58691a3924c6   2G         RWO            standard       4d
Use kubectl to delete the PVCs associated with the cluster. For example, to delete all Persistent Volume Claims created for a cluster named my-greenplum:
$ kubectl delete pvc -l greenplum-cluster=my-greenplum
persistentvolumeclaim "my-greenplum-pgdata-master-0" deleted
persistentvolumeclaim "my-greenplum-pgdata-master-1" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-a-0" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-a-1" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-b-0" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-b-1" deleted
If the Persistent Volumes were provisioned manually, then deleting the PVCs does not delete the associated PVs. (You can check for the PVs using kubectl get pv.) To delete any remaining Persistent Volumes, execute the command:
$ kubectl delete pv -l greenplum-cluster=my-greenplum
See Persistent Volumes in the Kubernetes documentation for more information.
If you also want to remove the Greenplum Operator, follow the instructions in Uninstalling Greenplum for Kubernetes.