Upgrading Greenplum for Kubernetes

This topic describes how to upgrade Pivotal Greenplum for Kubernetes from version 1.10 to version 1.11. The upgrade process involves first deleting any existing Greenplum cluster deployments, and then uninstalling the older Greenplum Operator. You then install the version 1.11 Greenplum Operator and use it to re-create earlier cluster deployments, using the same manifest files. During this process, you re-use any existing persistent volumes so that Greenplum cluster data is preserved.

Prerequisites

  • This procedure assumes that you have installed the previous minor version of Greenplum for Kubernetes (version 1.10). Greenplum for Kubernetes supports upgrades only from the prior minor version (for example, from version 1.10 to version 1.11). If your installed version is more than one minor version behind, upgrade incrementally through each intervening version to reach your target version.

  • Before you can install the newer Greenplum for Kubernetes version, ensure that you have installed all required software and prepared your Kubernetes environment as described in Prerequisites.

Procedure

Follow these steps to upgrade the Greenplum Operator resource:

  1. Navigate to the workspace directory of the Greenplum for Kubernetes installation (or to the location of the Kubernetes manifest that you used to deploy the cluster). For example:

    $ cd ./greenplum-for-kubernetes-*/workspace
    
  2. Execute the kubectl delete command, specifying the manifest that you used to deploy the cluster. For example:

    $ kubectl delete -f ./my-gp-instance.yaml
    

    kubectl stops the Greenplum for Kubernetes instance and deletes the Kubernetes resources for the Greenplum deployment.

  3. Use kubectl to monitor the progress of terminating Greenplum resources in your cluster. For example, if your cluster deployment was named my-greenplum:

    $ kubectl get all -l greenplum-cluster=my-greenplum
    
    NAME                                     READY     STATUS        RESTARTS   AGE
    pod/greenplum-operator-7b5ddcb79-vnwvc   1/1       Running       0          9m
    pod/master-0                             0/1       Terminating   0          5m
    pod/segment-a-0                          0/1       Terminating   0          5m
    pod/segment-a-1                          0/1       Terminating   0          5m
    pod/segment-b-0                          0/1       Terminating   0          5m
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   26m
    
    NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/greenplum-operator   1         1         1            1           9m
    
    NAME                                           DESIRED   CURRENT   READY     AGE
    replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1         9m
    
  4. The deletion process is complete when the master and segment pods are no longer listed and no resources remain:

    $ kubectl get all -l greenplum-cluster=my-greenplum
    
    No resources found.
    

    The Greenplum Operator should remain for future deployments:

    $ kubectl get all
    
    NAME                                     READY     STATUS    RESTARTS   AGE
    pod/greenplum-operator-7b5ddcb79-vnwvc   1/1       Running   0          34m
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   50m
    
    NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/greenplum-operator   1         1         1            1           34m
    
    NAME                                           DESIRED   CURRENT   READY     AGE
    replicaset.apps/greenplum-operator-7b5ddcb79   1         1         1         34m
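
    The kubectl delete command does not remove the cluster's Persistent Volume Claims (PVCs), so the data remains available for the upgraded cluster. You can optionally confirm that the PVCs survived the deletion. The label selector below is an assumption; if your PVCs do not carry the greenplum-cluster label, run kubectl get pvc without a selector to list all PVCs in the namespace:

    $ kubectl get pvc -l greenplum-cluster=my-greenplum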
    
  5. Repeat the previous steps for each version 1.10 cluster that you deployed.

  6. Download the new Greenplum for Kubernetes version 1.11 software from Pivotal Network. The download file is named greenplum-for-kubernetes-<version>.tar.gz.

  7. Upgrade to Helm version 3 if you have not already done so. See https://github.com/helm/helm/releases for instructions.
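
    To confirm which Helm version your shell uses, run helm version. A Helm 3 client reports a v3.x version string; the exact output varies by release:

    $ helm version --short
    v3.0.0+ge29ce2a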

  8. Go to the directory where you downloaded the new version of Greenplum for Kubernetes, and unpack the downloaded software. For example:

    $ cd ~/Downloads
    $ tar xzf greenplum-for-kubernetes-*.tar.gz
    

    The above command unpacks the distribution into a new directory named greenplum-for-kubernetes-<version>.

  9. Go into the new greenplum-for-kubernetes-<version> directory:

    $ cd ./greenplum-for-kubernetes-*
    
  10. For Minikube deployments only, configure your shell so that local Docker commands use the Docker daemon inside the Minikube VM:

    $ eval $(minikube docker-env)
    

    Note: To undo this docker setting in the current shell, run eval "$(docker-machine env -u)".
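
    To see exactly what the command changes, run minikube docker-env without eval. It prints the environment variables that point your Docker client at the Minikube VM; the values will differ in your environment:

    $ minikube docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    export DOCKER_CERT_PATH="/home/user/.minikube/certs"
    # Run this command to configure your shell:
    # eval $(minikube docker-env)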

  11. Load the new Greenplum Operator Docker image into your local Docker environment. For example:

    $ docker load -i ./images/greenplum-operator
    
    e256c39291c9: Loading layer [==================================================>]  79.69MB/79.69MB
    2250a2616dfd: Loading layer [==================================================>]  352.3kB/352.3kB
    b1e0c363fd12: Loading layer [==================================================>]  37.48MB/37.48MB
    Loaded image: greenplum-operator:v0.7.0
    
  12. Load the Greenplum for Kubernetes Docker image into your local Docker environment as well. For example:

    $ docker load -i ./images/greenplum-for-kubernetes
    
    644879075e24: Loading layer [==================================================>]  117.9MB/117.9MB
    d7ff1dc646ba: Loading layer [==================================================>]  15.87kB/15.87kB
    686245e78935: Loading layer [==================================================>]  14.85kB/14.85kB
    d73dd9e65295: Loading layer [==================================================>]  5.632kB/5.632kB
    2de391e51d73: Loading layer [==================================================>]  3.072kB/3.072kB
    4605c0a3f29d: Loading layer [==================================================>]  633.4MB/633.4MB
    c8d909e84bbf: Loading layer [==================================================>]  1.682MB/1.682MB
    7e66ff617b4c: Loading layer [==================================================>]  4.956MB/4.956MB
    db9d4b8567ab: Loading layer [==================================================>]  17.92kB/17.92kB
    223fe4d67f77: Loading layer [==================================================>]  3.584kB/3.584kB
    2e75b028b124: Loading layer [==================================================>]  43.04MB/43.04MB
    1a7d923392f7: Loading layer [==================================================>]   2.56kB/2.56kB
    2b9cc11f6cfc: Loading layer [==================================================>]  176.6kB/176.6kB
    Loaded image: greenplum-for-kubernetes:v0.7.0
    
  13. Verify that the new Docker images are now available:

    $ docker images "greenplum-*"
    
    REPOSITORY                 TAG          IMAGE ID            CREATED             SIZE
    greenplum-operator         v0.6.0       c2f5f8af7990        7 weeks ago         216MB
    greenplum-for-kubernetes   v0.6.0       63286a99e24a        7 weeks ago         785MB
    greenplum-operator         v0.7.0       1f2299e10960        28 minutes ago      232MB
    greenplum-for-kubernetes   v0.7.0       1d5b86baf556        30 minutes ago      763MB
    
  14. (Skip this step if you are using local Docker images, such as on Minikube.) If you want to push the Greenplum for Kubernetes Docker images to a different container registry:

    1. Set the project name and image repo name, and then use Docker to push the images. For example, to push the images to Google Cloud Registry using the current Google Cloud project name:

      $ gcloud auth configure-docker
      
      $ PROJECT=$(gcloud config list core/project --format='value(core.project)')
      $ IMAGE_REPO="gcr.io/${PROJECT}"
      
      $ GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:$(cat ./images/greenplum-for-kubernetes-tag)"
      $ docker tag $(cat ./images/greenplum-for-kubernetes-id) ${GREENPLUM_IMAGE_NAME}
      $ docker push ${GREENPLUM_IMAGE_NAME}
      
      $ OPERATOR_IMAGE_NAME="${IMAGE_REPO}/greenplum-operator:$(cat ./images/greenplum-operator-tag)"
      $ docker tag $(cat ./images/greenplum-operator-id) ${OPERATOR_IMAGE_NAME}
      $ docker push ${OPERATOR_IMAGE_NAME}
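
      After the pushes complete, you can optionally confirm that both images are present in the registry. For example, with Google Cloud Registry (the output format varies):

      $ gcloud container images list --repository="gcr.io/${PROJECT}"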
      
    2. Copy the values YAML file from your existing deployment, or create a new, empty YAML file in the workspace subdirectory as shown below:

      $ touch workspace/operator-values-overrides.yaml
      
    3. If you pushed the Greenplum Operator and Greenplum Database Docker images to a container registry, add two additional lines to the configuration file to indicate the registry where you pushed the images. For example, if you are using Google Cloud Registry with a project named “gp-kubernetes”, you would add the properties:

      operatorImageRepository: gcr.io/gp-kubernetes/greenplum-operator
      greenplumImageRepository: gcr.io/gp-kubernetes/greenplum-for-kubernetes
      

      Note: If you did not tag the images with a container registry prefix or project name (for example, if you are using your own local Minikube deployment), then you can skip this step.

  15. Uninstall the existing version 1.10 Greenplum Operator release:

    $ helm uninstall greenplum-operator
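
    helm should confirm the removal with a message like:

    release "greenplum-operator" uninstalled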
    
  16. Use helm to create the new Greenplum Operator release, specifying a customized YAML configuration file if you created one. For example, to create a new release with the name “greenplum-operator”:

    $ helm install greenplum-operator -f workspace/operator-values-overrides.yaml operator/
    

    If you did not create a YAML configuration file (as is the case with Minikube), omit the -f option:

    $ helm install greenplum-operator operator/
    

    Helm begins installing the new release into the Kubernetes namespace specified in the current Kubernetes context. If you want to install into a different namespace, include the --namespace option in the helm command.
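
    For example, to install the release into a namespace named gpdb-system (an illustrative name; with Helm 3 the namespace must already exist):

    $ helm install greenplum-operator operator/ --namespace gpdb-system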


    The command displays the following message and concludes with a link to this documentation:

    NAME: greenplum-operator
    LAST DEPLOYED: Fri Dec  6 08:36:27 2019
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    greenplum-operator has been installed.
    
    Please see documentation at:
    http://greenplum-kubernetes.docs.pivotal.io/
    deployment.apps/greenplum-operator condition met
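
    You can optionally confirm the new release with helm status; the STATUS field should be deployed, and the timestamps and namespace reflect your environment:

    $ helm status greenplum-operator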
    
  17. Use watch kubectl get all -l app=greenplum-operator to monitor the progress of the operator deployment. The deployment is complete when the Greenplum Operator pod is in the Running state. For example:

    $ watch kubectl get all -l app=greenplum-operator
    
    NAME                                      READY     STATUS    RESTARTS   AGE
    pod/greenplum-operator-77d6dc5f79-wfgkk   1/1       Running   0          1m
    
    NAME                                            DESIRED   CURRENT   READY     AGE
    replicaset.apps/greenplum-operator-77d6dc5f79   1         1         1         1m
    
  18. Return to the workspace directory that contains the manifest files you used to deploy your clusters (or copy those manifest files to the new Greenplum for Kubernetes version 1.11 workspace directory).

  19. Use kubectl apply and specify your manifest file to send the deployment request to the Greenplum Operator. For example, to use the sample my-gp-instance.yaml file:

    $ kubectl apply -f ./my-gp-instance.yaml 
    
    greenplumcluster.greenplum.pivotal.io/my-greenplum created
    

    The Greenplum Operator deploys the necessary Greenplum resources according to your specification, using the existing Persistent Volume Claims as-is with their available data.
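
    If you no longer have the original manifest at hand, note that the sample my-gp-instance.yaml corresponds to the Spec shown by kubectl describe in the final step of this procedure. The sketch below is reconstructed from that output and is illustrative only; use your original manifest so that the cluster is re-created with the same settings:

    apiVersion: "greenplum.pivotal.io/v1"
    kind: "GreenplumCluster"
    metadata:
      name: my-greenplum
    spec:
      masterAndStandby:
        hostBasedAuthentication: |
          # host   all   gpadmin   1.2.3.4/32   trust
          # host   all   gpuser    0.0.0.0/0   md5
        memory: "800Mi"
        cpu: "0.5"
        storageClassName: standard
        storage: 1G
      segments:
        primarySegmentCount: 1
        memory: "800Mi"
        cpu: "0.5"
        storageClassName: standard
        storage: 2G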

  20. Use watch kubectl get all -l greenplum-cluster=<your Greenplum cluster instance name> and wait until all Greenplum cluster pods have the status Running:

    $ watch kubectl get all -l greenplum-cluster=my-greenplum
    
    NAME              READY     STATUS    RESTARTS   AGE
    pod/master-0      1/1       Running   1          2d
    pod/master-1      1/1       Running   1          2d
    pod/segment-a-0   1/1       Running   0          23h
    pod/segment-b-0   1/1       Running   1          2d
    
    NAME                TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
    service/agent       ClusterIP      None         <none>        22/TCP           2d
    service/greenplum   LoadBalancer   10.99.14.1   <pending>     5432:30430/TCP   2d
    
    NAME                         DESIRED   CURRENT   AGE
    statefulset.apps/master      2         2         2d
    statefulset.apps/segment-a   1         1         2d
    statefulset.apps/segment-b   1         1         2d
    

    At this point, the upgraded cluster is available. If you are using a persistent storageClass, the updated cluster is created with the same Persistent Volume Claims (PVCs) and data.

  21. If your cluster is configured to use a standby master, connect to the master-0 pod and execute the gpstart command manually. For example:

    $ kubectl exec -it master-0 -- bash -c "source /opt/gpdb/greenplum_path.sh; gpstart"
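
    gpstart prompts for confirmation before starting the instance. To start without prompting, append the -a option to the gpstart command:

    $ kubectl exec -it master-0 -- bash -c "source /opt/gpdb/greenplum_path.sh; gpstart -a"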
    
  22. Describe your Greenplum cluster to verify that the upgrade succeeded:

    $ kubectl describe greenplumClusters/my-greenplum
    
    Name:         my-greenplum
    Namespace:    default
    Labels:       <none>
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"greenplum.pivotal.io/v1","kind":"GreenplumCluster","metadata":{"annotations":{},"name":"my-greenplum","namespace":"default"...
    API Version:  greenplum.pivotal.io/v1
    Kind:         GreenplumCluster
    Metadata:
      Creation Timestamp:  2019-01-10T22:15:40Z
      Generation:          1
      Resource Version:    28403
      Self Link:           /apis/greenplum.pivotal.io/v1/namespaces/default/greenplumclusters/my-greenplum
      UID:                 43842f53-1525-11e9-941d-080027530600
    Spec:
      Master And Standby:
        Cpu:                        0.5
        Host Based Authentication:  # host   all   gpadmin   1.2.3.4/32   trust
    # host   all   gpuser    0.0.0.0/0   md5
    
        Memory:              800Mi
        Storage:             1G
        Storage Class Name:  standard
        Worker Selector:
      Segments:
        Cpu:                    0.5
        Memory:                 800Mi
        Primary Segment Count:  1
        Storage:                2G
        Storage Class Name:     standard
        Worker Selector:
    Status:
      Instance Image:  greenplum-for-kubernetes:latest
      Phase:           Running
    Events:
      Type    Reason   Age   From               Message
      ----    ------   ----  ----               -------
      Normal  created  12s   greenplumOperator  greenplumCluster created
      Normal  updated  12s   greenplumOperator  greenplumCluster updated successfully
    

    The Phase should be Running and the Events should match the output above.