Deleting a Greenplum Cluster
This section describes how to delete the pods and other resources that are created when you deploy a Greenplum cluster to Kubernetes. Note that deleting these cluster resources does not automatically delete the Persistent Volume Claims (PVCs) that the cluster used to store data. This enables you to re-deploy the same cluster at a later time and pick up where you left off. You can optionally delete the PVCs if you want to create an entirely new (empty) cluster at a later time.
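The examples in this section identify a cluster's resources by the greenplum-cluster label. Before you delete anything, you can preview everything that the label selects; this is a minimal sketch, assuming a cluster deployment named my-greenplum as in the examples below:
$ kubectl get all,pvc -l greenplum-cluster=my-greenplum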
Deleting Greenplum Pods and Resources
Follow these steps to delete the Greenplum pods, services, and other objects, leaving the Persistent Volumes intact:
1. Use the Greenplum gpstop utility to stop the running cluster. For example:
$ kubectl exec -it master-0 -- bash -c "source /opt/gpdb/greenplum_path.sh; gpstop -M immediate"
20181016:23:12:58:000226 gpstop:master-0:gpadmin-[INFO]:-Starting gpstop with args: -M immediate
20181016:23:12:58:000226 gpstop:master-0:gpadmin-[INFO]:-Gathering information and validating the environment...
20181016:23:12:58:000226 gpstop:master-0:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20181016:23:12:58:000226 gpstop:master-0:gpadmin-[INFO]:-Obtaining Segment details from master...
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 5.11.3 build dev'
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:---------------------------------------------
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-Master instance parameters
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:---------------------------------------------
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Master Greenplum instance process active PID   = 183
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Database                                        = template1
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Master port                                     = 5432
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Master directory                                = /greenplum/data-1
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Shutdown mode                                   = immediate
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Timeout                                         = 120
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Shutdown Master standby host                    = On
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:---------------------------------------------
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-Segment instances that will be shutdown:
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:---------------------------------------------
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   Host          Datadir                  Port    Status
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   segment-a-0   /greenplum/data          40000   u
20181016:23:12:59:000226 gpstop:master-0:gpadmin-[INFO]:-   segment-b-0   /greenplum/mirror/data   50000   u

Continue with Greenplum instance shutdown Yy|Nn (default=N):
>
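If you prefer not to answer the confirmation prompt interactively (for example, when scripting the shutdown), gpstop also accepts the -a option, which proceeds without prompting; in that case you can skip the next step. A minimal sketch, otherwise identical to the command above:
$ kubectl exec -it master-0 -- bash -c "source /opt/gpdb/greenplum_path.sh; gpstop -a -M immediate"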
2. Enter Y when prompted to shut down the cluster.
3. Navigate to the workspace directory of the Greenplum for Kubernetes installation (or to the location of the Kubernetes manifest that you used to deploy the cluster). For example:
$ cd ./greenplum-for-kubernetes-*/workspace
4. Execute the kubectl delete command, specifying the manifest that you used to deploy the cluster. For example:
$ kubectl delete -f ./my-gp-instance.yaml
kubectl begins to delete the specified Greenplum deployment.
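If you no longer have the original manifest available, you can also delete the cluster by deleting its custom resource directly. This is a sketch under the assumption that the manifest defined a GreenplumCluster resource named my-greenplum and that the Greenplum Operator exposes that resource type to kubectl as greenplumcluster:
$ kubectl delete greenplumcluster my-greenplum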
5. Use kubectl to monitor the progress of terminating Greenplum resources in your cluster. For example, if your cluster deployment was named my-greenplum:
$ kubectl get all -l greenplum-cluster=my-greenplum
NAME                                     READY   STATUS        RESTARTS   AGE
pod/greenplum-operator-7b5ddcb79-vnwvc   1/1     Running       0          9m
pod/master-0                             0/1     Terminating   0          5m
pod/segment-a-0                          0/1     Terminating   0          5m
pod/segment-a-1                          0/1     Terminating   0          5m
pod/segment-b-0                          0/1     Terminating   0          5m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   26m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           9m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1       9m
The deletion process is complete when the segment pods are no longer available (and no resources remain):
$ kubectl get all -l greenplum-cluster=my-greenplum
No resources found.
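Rather than running the command above repeatedly, you can have kubectl block until the labeled pods are gone. This is a minimal sketch, assuming a kubectl release that includes the wait command (v1.11 or later):
$ kubectl wait --for=delete pod -l greenplum-cluster=my-greenplum --timeout=300s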
The Greenplum Operator should remain for future deployments:
$ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/greenplum-operator-7b5ddcb79-vnwvc   1/1     Running   0          34m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   50m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           34m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1       34m
Deleting Greenplum Persistent Volume Claims
Deleting the Greenplum pods and other resources does not delete the associated Persistent Volume Claims that were created for the cluster. This is expected behavior in Kubernetes, as it gives you the opportunity to access or back up the data. If you no longer have any use for the Greenplum volumes (for example, if you want to install a brand new cluster), follow this procedure to delete the Persistent Volume Claims (PVCs) and Persistent Volumes (PVs).
Caution: If the Persistent Volumes were created using dynamic provisioning, then deleting the PVCs will also delete the associated PVs. In this case, do not delete the PVCs unless you are certain that you no longer need the data.
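Before deleting anything, you can check how each Persistent Volume will be handled when its claim is removed: the RECLAIM POLICY column reported by kubectl shows whether the volume is deleted (Delete) or kept (Retain). For example:
$ kubectl get pv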
1. Verify that the PVCs are present for your cluster. For example, to show the Persistent Volume Claims created for a cluster named my-greenplum:
$ kubectl get pvc -l greenplum-cluster=my-greenplum
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-greenplum-pgdata-master-0      Bound    pvc-fed2d560-cdae-11e8-a5ee-58691a3924c6   1G         RWO            standard       4d
my-greenplum-pgdata-master-1      Bound    pvc-05c953b8-cdaf-11e8-a5ee-58691a3924c6   1G         RWO            standard       4d
my-greenplum-pgdata-segment-a-0   Bound    pvc-fed8308c-cdae-11e8-a5ee-58691a3924c6   2G         RWO            standard       4d
my-greenplum-pgdata-segment-b-0   Bound    pvc-fedff4b4-cdae-11e8-a5ee-58691a3924c6   2G         RWO            standard       4d
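If you want to record which volume a claim is bound to before deleting it (for example, to back up its data first), kubectl describe shows the bound volume and recent events. A sketch using one of the claims listed above:
$ kubectl describe pvc my-greenplum-pgdata-master-0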
2. Use kubectl to delete the PVCs associated with the cluster. For example, to delete all Persistent Volume Claims created for a cluster named my-greenplum:
$ kubectl delete pvc -l greenplum-cluster=my-greenplum
persistentvolumeclaim "my-greenplum-pgdata-master-0" deleted
persistentvolumeclaim "my-greenplum-pgdata-master-1" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-a-0" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-a-1" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-b-0" deleted
persistentvolumeclaim "my-greenplum-pgdata-segment-b-1" deleted
3. If the Persistent Volumes were provisioned manually, then deleting the PVCs does not delete the associated PVs. (You can check for the PVs using kubectl get pv.) To delete any remaining Persistent Volumes, execute the command:
$ kubectl delete pv -l greenplum-cluster=my-greenplum
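To confirm that no volumes remain for the cluster, repeat the query with the same label selector; once everything is removed, kubectl reports that no resources were found:
$ kubectl get pv -l greenplum-cluster=my-greenplum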
See Persistent Volumes in the Kubernetes documentation for more information.
Deleting Greenplum Operator
If you also want to remove the Greenplum Operator, follow the instructions in Uninstalling Greenplum for Kubernetes.