Pivotal Greenplum® for Kubernetes v0.8

Upgrade the Greenplum Operator

This topic describes how to install a new version of Pivotal Greenplum for Kubernetes and upgrade a deployed Greenplum Operator resource to the newly-installed version. After you upgrade the Greenplum Operator, you can use it to start and stop any currently-deployed Greenplum clusters; however, you cannot change the configuration of those clusters until you upgrade them to the newer version, as described in Upgrade a Greenplum Cluster.

Prerequisites

  • This procedure assumes that you have an existing Greenplum for Kubernetes installation.

  • Before you can install the newer Greenplum for Kubernetes version, ensure that you have installed all required software and prepared your Kubernetes environment as described in Prerequisites.

Procedure

Follow these steps to upgrade the Greenplum Operator resource:

  1. Download the new Greenplum for Kubernetes software from Pivotal Network. The download file is named greenplum-for-kubernetes-<version>.tar.gz.

  2. Go to the directory where you downloaded Greenplum for Kubernetes, and unpack the downloaded software. For example:

    $ cd ~/Downloads
    $ tar xzf greenplum-for-kubernetes-*.tar.gz
    

    The above command unpacks the distribution into a new directory named greenplum-for-kubernetes-<version>.

  3. Go into the new greenplum-for-kubernetes-<version> directory:

    $ cd ./greenplum-for-kubernetes-*
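
    The images, operator, and workspace subdirectories referenced in the remaining steps are located at the top level of this directory. As an optional check (not part of the original procedure), list the directory contents to confirm that those paths resolve from your current location:

    $ ls .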
    
  4. For Minikube deployments only, point your local docker commands to the Docker daemon running inside Minikube, so that the images you load in the following steps are available to the Minikube cluster:

    $ eval $(minikube docker-env)
    

    Note: To undo this docker setting in the current shell, run eval "$(docker-machine env -u)".
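
    As an optional sanity check (not part of the original procedure), you can confirm that docker commands in this shell now target the Minikube Docker daemon; a standard Minikube VM reports its daemon name as minikube:

    $ docker info --format '{{.Name}}'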

  5. Load the new Greenplum Operator Docker image into your local Docker registry. For example:

    $ docker load -i ./images/greenplum-operator
    
    e256c39291c9: Loading layer [==================================================>]  79.69MB/79.69MB
    2250a2616dfd: Loading layer [==================================================>]  352.3kB/352.3kB
    b1e0c363fd12: Loading layer [==================================================>]  37.48MB/37.48MB
    Loaded image: greenplum-operator:v0.7.0
    
  6. If necessary, also load the new Greenplum for Kubernetes Docker image into the local Docker registry. Check the Release Notes for both the currently-deployed Greenplum for Kubernetes software and the newer version to which you are upgrading. If the new release provides a newer version of Pivotal Greenplum (for example, if your current deployment uses Pivotal Greenplum 5.13 and the newer release uses Pivotal Greenplum 5.16), then load the new Greenplum for Kubernetes Docker image as well. For example:

    $ docker load -i ./images/greenplum-for-kubernetes
    
    644879075e24: Loading layer [==================================================>]  117.9MB/117.9MB
    d7ff1dc646ba: Loading layer [==================================================>]  15.87kB/15.87kB
    686245e78935: Loading layer [==================================================>]  14.85kB/14.85kB
    d73dd9e65295: Loading layer [==================================================>]  5.632kB/5.632kB
    2de391e51d73: Loading layer [==================================================>]  3.072kB/3.072kB
    4605c0a3f29d: Loading layer [==================================================>]  633.4MB/633.4MB
    c8d909e84bbf: Loading layer [==================================================>]  1.682MB/1.682MB
    7e66ff617b4c: Loading layer [==================================================>]  4.956MB/4.956MB
    db9d4b8567ab: Loading layer [==================================================>]  17.92kB/17.92kB
    223fe4d67f77: Loading layer [==================================================>]  3.584kB/3.584kB
    2e75b028b124: Loading layer [==================================================>]  43.04MB/43.04MB
    1a7d923392f7: Loading layer [==================================================>]   2.56kB/2.56kB
    2b9cc11f6cfc: Loading layer [==================================================>]  176.6kB/176.6kB
    Loaded image: greenplum-for-kubernetes:v0.7.0
    
  7. Verify that the Docker images are now available:

    $ docker images "greenplum-*"
    
    REPOSITORY                 TAG          IMAGE ID            CREATED             SIZE
    greenplum-operator         v0.6.0       c2f5f8af7990        7 weeks ago         216MB
    greenplum-for-kubernetes   v0.6.0       63286a99e24a        7 weeks ago         785MB
    greenplum-operator         v0.7.0       1f2299e10960        28 minutes ago      232MB
    greenplum-for-kubernetes   v0.7.0       1d5b86baf556        30 minutes ago      763MB
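
    The tag files in the images directory record the exact image tags that later steps in this procedure reference. As an optional cross-check, compare them against the docker images output above:

    $ cat ./images/greenplum-operator-tag
    $ cat ./images/greenplum-for-kubernetes-tag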
    
  8. For PKS or GCP deployments only:

    1. If you want to push the Greenplum for Kubernetes Docker images to a different container registry, set the project name and image repo name, and then use Docker to push the images. For example, to push the images to Google Container Registry using the current Google Cloud project name:

      $ gcloud auth configure-docker
      
      $ PROJECT=$(gcloud config list core/project --format='value(core.project)')
      $ IMAGE_REPO="gcr.io/${PROJECT}"
      
      $ GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:$(cat ./images/greenplum-for-kubernetes-tag)"
      $ docker tag $(cat ./images/greenplum-for-kubernetes-id) ${GREENPLUM_IMAGE_NAME}
      $ docker push ${GREENPLUM_IMAGE_NAME}
      
      $ OPERATOR_IMAGE_NAME="${IMAGE_REPO}/greenplum-operator:$(cat ./images/greenplum-operator-tag)"
      $ docker tag $(cat ./images/greenplum-operator-id) ${OPERATOR_IMAGE_NAME}
      $ docker push ${OPERATOR_IMAGE_NAME}
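
      As an optional check (not part of the original procedure), you can confirm that both images are now available in the registry:

      $ gcloud container images list --repository=${IMAGE_REPO}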
      
    2. Copy the Google Cloud service account key (a key.json file for a service account that has read access to Google Container Registry) to the operator subdirectory. For example:

      $ cp ~/key.json ./operator/key.json
      

      Note: See the requirements for PKS or GCP for instructions about how to obtain the key.json file.

    3. Create a new YAML file in the workspace subdirectory. For example:

      $ touch workspace/operator-values-overrides.yaml
      
    4. Add the following line to the new YAML file to identify the key.json file to use. For example:

      dockerRegistryKeyJson: key.json
      
    5. If you pushed the Greenplum Operator and Greenplum for Kubernetes Docker images to a container registry, add two additional lines to the YAML file to indicate the registry where you pushed the images. For example, if you are using Google Container Registry with a project named “gp-kubernetes”, you would add the properties shown here (a combined example of the complete file appears after the note below):

      operatorImageRepository: gcr.io/gp-kubernetes/greenplum-operator
      greenplumImageRepository: gcr.io/gp-kubernetes/greenplum-for-kubernetes
      

      Note: If you did not tag the images with a container registry prefix or project name (for example, if you are using your own local Minikube deployment), then you can skip this step.
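
      After completing sub-steps 3 through 5, the complete override file for this example would look similar to the following; the registry paths shown use the example “gp-kubernetes” project, so substitute your own values:

      $ cat workspace/operator-values-overrides.yaml
      
      dockerRegistryKeyJson: key.json
      operatorImageRepository: gcr.io/gp-kubernetes/greenplum-operator
      greenplumImageRepository: gcr.io/gp-kubernetes/greenplum-for-kubernetes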

  9. Initialize and upgrade Helm:

    $ helm init --wait --service-account tiller --upgrade
    
    $HELM_HOME has been configured at /<path>/.helm.
    
    Tiller (the Helm server-side component) has been upgraded to the current version.
    Happy Helming!
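
    As an optional check (not part of the original procedure), verify that the Helm client and the upgraded Tiller server report the same version:

    $ helm version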
    
  10. Delete the existing Greenplum Operator deployment:

    $ helm del --purge greenplum-operator
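
    Optionally, confirm that the release was removed. Because the release was purged, it should no longer be listed even when the --all flag is included (this check is not part of the original procedure):

    $ helm ls --all greenplum-operator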
    
  11. Use helm to create the new Greenplum Operator release, specifying the YAML configuration file if you created one. For example, to create a new release with the name “greenplum-operator”:

    $ helm install --name greenplum-operator -f workspace/operator-values-overrides.yaml operator/
    

    If you did not create a YAML configuration file (as is the case with Minikube), omit the -f option:

    $ helm install --name greenplum-operator operator/
    

    Helm begins installing the new release into the Kubernetes namespace specified in the current Kubernetes context. If you want to install into a different namespace, include the --namespace option in the helm command.
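
    For example, to install the release into a hypothetical namespace named gpdb-system (the namespace name is only an illustration; substitute your own):

    $ helm install --name greenplum-operator --namespace gpdb-system -f workspace/operator-values-overrides.yaml operator/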


    The command displays the following message and concludes with a link to this documentation:

    NAME:   greenplum-operator
    LAST DEPLOYED: Fri Oct  5 12:33:35 2018
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ServiceAccount
    NAME                                SECRETS  AGE
    greenplum-operator-service-account  1        3s
    
    ==> v1/ClusterRole
    NAME                             AGE
    greenplum-operator-cluster-role  3s
    
    ==> v1/ClusterRoleBinding
    NAME                                     AGE
    greenplum-operator-cluster-role-binding  3s
    
    ==> v1/Deployment
    NAME                DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    greenplum-operator  1        1        1           1          3s
    
    ==> v1/Pod(related)
    NAME                                 READY  STATUS   RESTARTS  AGE
    greenplum-operator-58dd68b9c5-frrbz  1/1    Running  0         3s
    
    ==> v1/Secret
    NAME       TYPE                            DATA  AGE
    regsecret  kubernetes.io/dockerconfigjson  1     3s
    
    NOTES:
    greenplum-operator has been installed.
    
    Please see documentation at:
    http://greenplum-kubernetes.docs.pivotal.io/
    
  12. Use watch kubectl get all -l name=greenplum-operator to monitor the progress of the operator deployment. The deployment is complete when the Greenplum Operator pod is in the Running state. For example:

    $ watch kubectl get all -l name=greenplum-operator
    
    NAME                                      READY     STATUS    RESTARTS   AGE
    pod/greenplum-operator-77d6dc5f79-wfgkk   1/1       Running   0          1m
    
    NAME                                            DESIRED   CURRENT   READY     AGE
    replicaset.apps/greenplum-operator-77d6dc5f79   1         1         1         1m
    

    Note: If you have an existing Greenplum cluster, you will see a message in your operator logs similar to:

    time="2019-01-10T21:57:35Z" level=error msg="add greenplum cluster failed: cannot process AddEvent: cluster already exists with name: my-greenplum"
    

    This message is expected when the operator starts up with a pre-existing Greenplum cluster; you can safely ignore it.
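
    To view these operator log messages yourself, you can read the logs of the operator pod; the label selector below matches the one used in the watch command above (an optional check, not part of the original procedure):

    $ kubectl logs -l name=greenplum-operator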

At this point, you can interact with the Greenplum Operator to deploy new Greenplum clusters, or to start or stop existing Greenplum clusters. Note, however, that you cannot modify the deployment properties of existing Greenplum clusters until those clusters are upgraded to the latest release. See Upgrade a Deployed Greenplum Cluster.
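
For example, to see which clusters the upgraded operator is currently managing, you can list the Greenplum cluster custom resources. This sketch assumes the GreenplumCluster resource kind defined by the operator; the exact resource name may differ in your release:

    $ kubectl get greenplumclusters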