Upgrade a Greenplum Cluster
This topic describes how to upgrade a Greenplum for Kubernetes cluster to the latest version of the Pivotal Greenplum software. Upgrading a Greenplum cluster involves deleting the existing cluster and then re-creating the deployment using the same manifest file. Any existing persistent volumes are re-used when the new Operator re-creates the cluster.
Prerequisites
- Ensure that you have first upgraded the Greenplum Operator to the new version using the instructions in Upgrade the Greenplum Operator.
- Create a backup of your existing Greenplum for Kubernetes cluster.
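The backup step above can be sketched as follows. This assumes the gpbackup utility is available inside the Greenplum master pod and that the pod is named master-0; the helper name, the database name, and the KUBECTL override are illustrative, not part of the product:

```shell
# Sketch: run gpbackup inside the master pod to back up a database before
# upgrading. The pod name, database name, and gpbackup availability are
# assumptions; adjust them for your deployment.
: "${KUBECTL:=kubectl}"

backup_greenplum() {
  local master_pod="${1:-master-0}"
  local dbname="${2:-gpadmin}"
  "$KUBECTL" exec -it "$master_pod" -- bash -c \
    "source /opt/gpdb/greenplum_path.sh; gpbackup --dbname ${dbname}"
}

# Usage: backup_greenplum master-0 gpadmin
```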
Procedure
Follow these steps to upgrade a Greenplum for Kubernetes cluster to use the new Pivotal Greenplum software:
Navigate to the workspace directory of the Greenplum for Kubernetes installation (or to the location of the Kubernetes manifest that you used to deploy the cluster). For example:

```shell
$ cd ./greenplum-for-kubernetes-*/workspace
```
Execute the kubectl delete command, specifying the manifest that you used to deploy the cluster. For example:

```shell
$ kubectl delete -f ./my-gp-instance.yaml
```
kubectl stops the Greenplum for Kubernetes instance and deletes the Kubernetes resources for the Greenplum deployment.

Use kubectl to monitor the progress of terminating Greenplum resources in your cluster. For example, if your cluster deployment was named my-greenplum:

```shell
$ kubectl get all -l greenplum-cluster=my-greenplum
NAME                                     READY   STATUS        RESTARTS   AGE
pod/greenplum-operator-7b5ddcb79-vnwvc   1/1     Running       0          9m
pod/master-0                             0/1     Terminating   0          5m
pod/segment-a-0                          0/1     Terminating   0          5m
pod/segment-a-1                          0/1     Terminating   0          5m
pod/segment-b-0                          0/1     Terminating   0          5m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   26m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           9m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1       9m
```
The deletion process is complete when the segment pods are no longer available and no cluster resources remain:

```shell
$ kubectl get all -l greenplum-cluster=my-greenplum
No resources found.
```
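Rather than re-running the command by hand, the check above can be polled in a loop. This is a sketch, not part of the product: the helper name is illustrative, and the KUBECTL override exists only so the loop can be exercised without a live cluster:

```shell
# Sketch: poll until no resources carry the cluster's label.
# An empty "kubectl get ... --no-headers" listing means teardown is done.
: "${KUBECTL:=kubectl}"

wait_for_teardown() {
  local cluster="${1:-my-greenplum}"
  while [ -n "$("$KUBECTL" get all -l "greenplum-cluster=${cluster}" \
      --no-headers 2>/dev/null)" ]; do
    echo "waiting for ${cluster} resources to terminate..."
    sleep 5
  done
  echo "all ${cluster} resources deleted"
}

# Usage: wait_for_teardown my-greenplum
```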
The Greenplum Operator should remain for future deployments:

```shell
$ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/greenplum-operator-7b5ddcb79-vnwvc   1/1     Running   0          34m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   50m

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           34m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1       34m
```
Use kubectl apply and specify your manifest file to send the deployment request to the Greenplum Operator. For example, to use the sample my-gp-instance.yaml file:

```shell
$ kubectl apply -f ./my-gp-instance.yaml
greenplumcluster.greenplum.pivotal.io/my-greenplum created
```
The Greenplum Operator deploys the necessary Greenplum resources according to your specification, using the existing Persistent Volume Claims as-is with their available data.
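To confirm that the existing claims were picked up rather than re-created, you can list them directly. This sketch assumes the PVCs carry the same greenplum-cluster label used elsewhere in this topic; if yours do not, list them by name instead:

```shell
# Sketch: list the cluster's Persistent Volume Claims. A Bound status with
# an age predating the upgrade indicates the volumes were re-used.
: "${KUBECTL:=kubectl}"

list_cluster_pvcs() {
  "$KUBECTL" get pvc -l "greenplum-cluster=${1:-my-greenplum}"
}

# Usage: list_cluster_pvcs my-greenplum
```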
Use kubectl get all -l greenplum-cluster=<your Greenplum cluster instance name> and wait until all Greenplum cluster pods have the status Running:

```shell
$ watch kubectl get all -l greenplum-cluster=my-greenplum
NAME              READY   STATUS    RESTARTS   AGE
pod/master-0      1/1     Running   1          2d
pod/master-1      1/1     Running   1          2d
pod/segment-a-0   1/1     Running   0          23h
pod/segment-b-0   1/1     Running   1          2d

NAME                TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/agent       ClusterIP      None         <none>        22/TCP           2d
service/greenplum   LoadBalancer   10.99.14.1   <pending>     5432:30430/TCP   2d

NAME                         DESIRED   CURRENT   AGE
statefulset.apps/master      2         2         2d
statefulset.apps/segment-a   1         1         2d
statefulset.apps/segment-b   1         1         2d
```
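Instead of watching the listing manually, you can block until the pods report Ready. This sketch uses the standard kubectl wait command; the 600-second timeout is an arbitrary choice, and the helper name and KUBECTL override are illustrative:

```shell
# Sketch: block until every pod with the cluster label is Ready,
# or fail after 10 minutes.
: "${KUBECTL:=kubectl}"

wait_for_cluster() {
  "$KUBECTL" wait --for=condition=Ready pod \
    -l "greenplum-cluster=${1:-my-greenplum}" --timeout=600s
}

# Usage: wait_for_cluster my-greenplum
```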
At this point, the upgraded cluster is available. If you are using a persistent storageClass, the updated cluster is created with the same Persistent Volume Claims (PVCs) and data.

Describe your Greenplum cluster to observe that the update succeeded:

```shell
$ kubectl describe greenplumClusters/my-greenplum
Name:         my-greenplum
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"greenplum.pivotal.io/v1","kind":"GreenplumCluster","metadata":{"annotations":{},"name":"my-greenplum","namespace":"default"...
API Version:  greenplum.pivotal.io/v1
Kind:         GreenplumCluster
Metadata:
  Creation Timestamp:  2019-01-10T22:15:40Z
  Generation:          1
  Resource Version:    28403
  Self Link:           /apis/greenplum.pivotal.io/v1/namespaces/default/greenplumclusters/my-greenplum
  UID:                 43842f53-1525-11e9-941d-080027530600
Spec:
  Master And Standby:
    Cpu:  0.5
    Host Based Authentication:  # host all gpadmin 1.2.3.4/32 trust
                                # host all gpuser 0.0.0.0/0 md5
    Memory:              800Mi
    Storage:             1G
    Storage Class Name:  standard
    Worker Selector:
  Segments:
    Cpu:                    0.5
    Memory:                 800Mi
    Primary Segment Count:  1
    Storage:                2G
    Storage Class Name:     standard
    Worker Selector:
Status:
  Instance Image:  greenplum-for-kubernetes:latest
  Phase:           Running
Events:
  Type    Reason   Age   From               Message
  ----    ------   ---   ----               -------
  Normal  created  12s   greenplumOperator  greenplumCluster created
  Normal  updated  12s   greenplumOperator  greenplumCluster updated successfully
```
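If you only need the phase, you can read it directly rather than scanning the full describe output. This sketch uses a jsonpath query against the same GreenplumCluster resource; the helper name and KUBECTL override are illustrative:

```shell
# Sketch: print only the cluster's status phase; expect "Running"
# once the upgrade has completed.
: "${KUBECTL:=kubectl}"

cluster_phase() {
  "$KUBECTL" get greenplumcluster "${1:-my-greenplum}" \
    -o jsonpath='{.status.phase}'
}

# Usage: cluster_phase my-greenplum
```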
The Phase should be Running, and the Events should match the output shown above.

Start the cluster to complete the upgrade process:

```shell
$ kubectl exec -it master-0 -- bash -c "source /opt/gpdb/greenplum_path.sh; gpstart"
```
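After gpstart completes, a quick connectivity check confirms the cluster accepts queries. This is a sketch: it assumes psql is on the master pod's PATH after sourcing greenplum_path.sh, and the helper name and KUBECTL override are illustrative:

```shell
# Sketch: run a trivial query on the master pod to confirm the upgraded
# cluster is accepting connections.
: "${KUBECTL:=kubectl}"

verify_cluster() {
  "$KUBECTL" exec -it "${1:-master-0}" -- bash -c \
    "source /opt/gpdb/greenplum_path.sh; psql -c 'SELECT version();'"
}

# Usage: verify_cluster master-0
```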