Installing Greenplum for Kubernetes

This topic describes how to install Pivotal Greenplum for Kubernetes. The installation process involves loading the Greenplum for Kubernetes container images into your container registry, and then using the helm package manager to install the Greenplum Operator resource in Kubernetes. After the Greenplum Operator resource is available, you can interact with it to deploy and manage Greenplum clusters in Kubernetes.


Before you install Greenplum for Kubernetes, ensure that you have installed all required software and prepared your Kubernetes environment as described in Prerequisites.


Follow these steps to download and install the Greenplum for Kubernetes container images, and install the Greenplum Operator resource.

  1. Download the Greenplum for Kubernetes software from Pivotal Network. The downloaded file is named greenplum-for-kubernetes-<version>.tar.gz.

  2. Go to the directory where you downloaded Greenplum for Kubernetes, and unpack the downloaded software. For example:

    $ cd ~/Downloads
    $ tar xzf greenplum-for-kubernetes-*.tar.gz

    The above command unpacks the distribution into a new directory named greenplum-for-kubernetes-<version>.

  3. Go into the new greenplum-for-kubernetes-<version> directory:

    $ cd ./greenplum-for-kubernetes-*
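
Before continuing, you can sanity-check that the unpacked release contains the files the later steps rely on. The sketch below fabricates the release layout under a temporary directory so it runs anywhere; in a real install you would run only the final loop, from inside the greenplum-for-kubernetes-<version> directory itself.

```shell
# Hedged sketch: verify the files used in later steps are present.
# The fabricated layout stands in for a real unpacked release.
release=$(mktemp -d)/greenplum-for-kubernetes-v0.6.0
mkdir -p "$release/images" "$release/operator" "$release/workspace"
touch "$release/images/greenplum-for-kubernetes" \
      "$release/images/greenplum-operator" \
      "$release/initialize_helm_rbac.yaml"

# In a real install, run this loop from the release directory:
for f in images/greenplum-for-kubernetes images/greenplum-operator initialize_helm_rbac.yaml; do
  if [ -e "$release/$f" ]; then echo "found $f"; else echo "MISSING $f"; fi
done
```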
  4. (This step is for Minikube deployments only.) Configure your shell so that the local docker client interacts with the Docker daemon inside Minikube:

    $ eval $(minikube docker-env)

    Note: To undo this docker setting in the current shell, run eval "$(docker-machine env -u)".

  5. Load the Greenplum for Kubernetes Docker image into your local Docker environment:

    $ docker load -i ./images/greenplum-for-kubernetes
    644879075e24: Loading layer [==================================================>]  117.9MB/117.9MB
    d7ff1dc646ba: Loading layer [==================================================>]  15.87kB/15.87kB
    686245e78935: Loading layer [==================================================>]  14.85kB/14.85kB
    d73dd9e65295: Loading layer [==================================================>]  5.632kB/5.632kB
    2de391e51d73: Loading layer [==================================================>]  3.072kB/3.072kB
    4605c0a3f29d: Loading layer [==================================================>]  633.4MB/633.4MB
    c8d909e84bbf: Loading layer [==================================================>]  1.682MB/1.682MB
    7e66ff617b4c: Loading layer [==================================================>]  4.956MB/4.956MB
    db9d4b8567ab: Loading layer [==================================================>]  17.92kB/17.92kB
    223fe4d67f77: Loading layer [==================================================>]  3.584kB/3.584kB
    2e75b028b124: Loading layer [==================================================>]  43.04MB/43.04MB
    1a7d923392f7: Loading layer [==================================================>]   2.56kB/2.56kB
    2b9cc11f6cfc: Loading layer [==================================================>]  176.6kB/176.6kB
    Loaded image: greenplum-for-kubernetes:v0.6.0
  6. Load the Greenplum Operator Docker image into your local Docker environment:

    $ docker load -i ./images/greenplum-operator
    e256c39291c9: Loading layer [==================================================>]  79.69MB/79.69MB
    2250a2616dfd: Loading layer [==================================================>]  352.3kB/352.3kB
    b1e0c363fd12: Loading layer [==================================================>]  37.48MB/37.48MB
    Loaded image: greenplum-operator:v0.6.0
  7. Verify that both Docker images are now available:

    $ docker images "greenplum-*"
    REPOSITORY                 TAG          IMAGE ID            CREATED             SIZE
    greenplum-operator         v0.6.0       1f2299e10960        28 minutes ago      232MB
    greenplum-for-kubernetes   v0.6.0       1d5b86baf556        30 minutes ago      763MB
  8. (This step is for PKS and GCP deployments only.) If you want to push the Greenplum for Kubernetes docker images to a different container registry:

    1. Set the project name and image repository name, and then use Docker to tag and push the images. For example, to push the images to Google Container Registry using the current Google Cloud project name:

      $ gcloud auth configure-docker
      $ PROJECT=$(gcloud config list core/project --format='value(core.project)')
      $ IMAGE_REPO="gcr.io/${PROJECT}"
      $ GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:$(cat ./images/greenplum-for-kubernetes-tag)"
      $ docker tag $(cat ./images/greenplum-for-kubernetes-id) ${GREENPLUM_IMAGE_NAME}
      $ docker push ${GREENPLUM_IMAGE_NAME}
      $ OPERATOR_IMAGE_NAME="${IMAGE_REPO}/greenplum-operator:$(cat ./images/greenplum-operator-tag)"
      $ docker tag $(cat ./images/greenplum-operator-id) ${OPERATOR_IMAGE_NAME}
      $ docker push ${OPERATOR_IMAGE_NAME}
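
The image-naming scheme in the push commands above can be exercised without a registry or a Docker daemon. The sketch below fabricates the ./images tag files and an IMAGE_REPO value (gcr.io/my-gcp-project is a placeholder) to show how the final image names are composed:

```shell
# Runnable sketch of the image-naming scheme, with fabricated inputs.
workdir=$(mktemp -d)
echo "v0.6.0" > "$workdir/greenplum-for-kubernetes-tag"   # stands in for ./images/greenplum-for-kubernetes-tag
echo "v0.6.0" > "$workdir/greenplum-operator-tag"         # stands in for ./images/greenplum-operator-tag

IMAGE_REPO="gcr.io/my-gcp-project"   # placeholder; a real push uses gcr.io/${PROJECT}
GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:$(cat "$workdir/greenplum-for-kubernetes-tag")"
OPERATOR_IMAGE_NAME="${IMAGE_REPO}/greenplum-operator:$(cat "$workdir/greenplum-operator-tag")"
echo "$GREENPLUM_IMAGE_NAME"
echo "$OPERATOR_IMAGE_NAME"
```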
    2. Copy the Kubernetes service account key (a key.json file for an account that has read access to Google Cloud Registry) to the operator subdirectory. For example:

      $ cp ~/key.json ./operator/key.json

      Note: See the requirements for PKS or GCP for instructions about how to obtain the key.json file.

    3. Create a new YAML file in the workspace subdirectory. For example:

      $ touch workspace/operator-values-overrides.yaml
    4. Add the following line to the new YAML file to identify the key.json file to use. For example:

      dockerRegistryKeyJson: key.json
    5. If you pushed the Greenplum Operator and Greenplum Database Docker images to a container registry, add two additional lines to the configuration file to indicate where you pushed the images: one property for the operator image repository and one for the Greenplum image repository (see the operator chart's values.yaml for the exact property names). For example, if you are using Google Container Registry, set each property to the gcr.io/<project>/<repository> path you used when pushing the images.

      Be sure to replace the project and repository names with the actual names used in your deployment.

      Note: If you did not tag the images with a container registry prefix or project name (for example, if you are using your own local Minikube deployment), then you can skip this step.
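
Substeps 3 and 4 above can be collapsed into a single heredoc. Only the dockerRegistryKeyJson property shown in this guide is written here; add the registry properties from substep 5 to the same file in the same way:

```shell
# Create the overrides file in one step (equivalent to substeps 3 and 4).
mkdir -p workspace
cat > workspace/operator-values-overrides.yaml <<'EOF'
# key.json must be present in the operator/ subdirectory (see substep 2)
dockerRegistryKeyJson: key.json
EOF
cat workspace/operator-values-overrides.yaml
```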

  9. Ensure that helm has the required Kubernetes service account privileges:

    $ kubectl create -f ./initialize_helm_rbac.yaml
    serviceaccount "tiller" created
    clusterrolebinding "tiller" created

    This sets the necessary privileges for helm with a service account named tiller.
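
The initialize_helm_rbac.yaml shipped with the release is authoritative, but for reference, a tiller RBAC manifest of this kind typically consists of a ServiceAccount plus a cluster-admin ClusterRoleBinding. The sketch below is an assumption about its contents, not a copy of the shipped file:

```shell
# Hedged reconstruction of a typical tiller RBAC manifest (illustrative only;
# the initialize_helm_rbac.yaml shipped with the release is authoritative).
rbac=$(mktemp)
cat > "$rbac" <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
# The manifest defines two objects; four lines mention "kind:" in all.
grep -c 'kind:' "$rbac"
```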

  10. Initialize and upgrade helm:

    $ helm init --wait --service-account tiller --upgrade
    $HELM_HOME has been configured at /<path>/.helm.
    Tiller (the Helm server-side component) has been upgraded to the current version.
    Happy Helming!
  11. Use helm to create a new Greenplum Operator release, specifying the YAML configuration file if you created one. For example, to create a new release with the name “greenplum-operator”:

    $ helm install --name greenplum-operator -f workspace/operator-values-overrides.yaml operator/

    If you did not create a YAML configuration file (as is the case with Minikube), omit the -f option:

    $ helm install --name greenplum-operator operator/

    Helm begins installing the new release into the Kubernetes namespace specified in the current Kubernetes context. If you want to install into a different namespace, include the --namespace option in the helm command.

    The command displays the following message and concludes with a link to this documentation:

    NAME:   greenplum-operator
    LAST DEPLOYED: Fri Oct  5 12:33:35 2018
    NAMESPACE: default
    ==> v1/ServiceAccount
    NAME                                SECRETS  AGE
    greenplum-operator-service-account  1        3s
    ==> v1/ClusterRole
    NAME                             AGE
    greenplum-operator-cluster-role  3s
    ==> v1/ClusterRoleBinding
    NAME                                     AGE
    greenplum-operator-cluster-role-binding  3s
    ==> v1/Deployment
    NAME                DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    greenplum-operator  1        1        1           1          3s
    ==> v1/Pod(related)
    NAME                                 READY  STATUS   RESTARTS  AGE
    greenplum-operator-58dd68b9c5-frrbz  1/1    Running  0         3s
    ==> v1/Secret
    NAME       TYPE                            DATA  AGE
    regsecret  kubernetes.io/dockerconfigjson  1     3s
    greenplum-operator has been installed.
    Please see documentation at:
  12. Use watch kubectl get all to monitor the progress of the deployment. The deployment is complete when the Greenplum Operator pod is in the Running state and its replica set is available. For example:

    $ watch kubectl get all
    NAME                                      READY     STATUS    RESTARTS   AGE
    pod/greenplum-operator-77d6dc5f79-wfgkk   1/1       Running   0          1m
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP    <none>        443/TCP   12d
    NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/greenplum-operator   1         1         1            1           1m
    NAME                                            DESIRED   CURRENT   READY     AGE
    replicaset.apps/greenplum-operator-77d6dc5f79   1         1         1         1m
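
Instead of watching manually, the wait can be scripted as a polling loop. In this sketch, GET_PHASE is a stand-in that flips to Running on the third check so the loop logic runs without a cluster; on a real cluster you would instead derive the phase with kubectl, for example kubectl get pods -l name=greenplum-operator -o jsonpath='{.items[0].status.phase}'.

```shell
# Hedged sketch: poll until the operator pod reports the Running phase.
# GET_PHASE fakes the pod phase so the loop can run without a cluster;
# replace its body with the kubectl query above for real use.
phase="Pending"
attempts=0
GET_PHASE() {
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then phase="Running"; fi
}
until [ "$phase" = "Running" ]; do
  GET_PHASE
  sleep 1
done
echo "operator pod Running after $attempts checks"
```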
  13. Check the logs of the operator to ensure that it is running properly.

    $ kubectl logs -l name=greenplum-operator
    time="2019-01-10T21:57:35Z" level=info msg="Go Version: go1.11.4"
    time="2019-01-10T21:57:35Z" level=info msg="Go OS/Arch: linux/amd64"
    time="2019-01-10T21:57:35Z" level=info msg="creating operator"
    time="2019-01-10T21:57:35Z" level=info msg="running operator"
    time="2019-01-10T21:57:35Z" level=info msg="creating Greenplum CRD"
    time="2019-01-10T21:57:35Z" level=info msg="successfully updated greenplum CRD"
    time="2019-01-10T21:57:35Z" level=info msg="starting Greenplum InformerFactory"
    time="2019-01-10T21:57:35Z" level=info msg="running Greenplum controller"
    time="2019-01-10T21:57:35Z" level=info msg="started workers"

At this point, you can interact with the Greenplum Operator to deploy new Greenplum clusters or manage existing Greenplum clusters. See About the Greenplum Operator.