Installing Greenplum for Kubernetes
This topic describes how to install Pivotal Greenplum for Kubernetes. The installation process involves loading the Greenplum for Kubernetes container images into your container registry, and then using the helm
package manager to install the Greenplum Operator resource in Kubernetes. After the Greenplum Operator resource is available, you can interact with it to deploy and manage Greenplum clusters in Kubernetes.
Prerequisites
Before you install Greenplum for Kubernetes, ensure that you have installed all required software and prepared your Kubernetes environment as described in Prerequisites.
Procedure
Follow these steps to download and install the Greenplum for Kubernetes container images, and install the Greenplum Operator resource.
Download the Greenplum for Kubernetes software from Pivotal Network. The download file has the name:
greenplum-for-kubernetes-<version>.tar.gz
Go to the directory where you downloaded Greenplum for Kubernetes, and unpack the downloaded software. For example:
$ cd ~/Downloads
$ tar xzf greenplum-for-kubernetes-*.tar.gz
The above command unpacks the distribution into a new directory named greenplum-for-kubernetes-<version>.
Go into the new greenplum-for-kubernetes-<version> directory:
$ cd ./greenplum-for-kubernetes-*
(This step is for Minikube deployments only.) Ensure that the local docker daemon interacts with the Minikube docker container registry:
$ eval $(minikube docker-env)
Note: To undo this docker setting in the current shell, run eval "$(docker-machine env -u)".
Load the Greenplum for Kubernetes Docker image to the local Docker registry:
$ docker load -i ./images/greenplum-for-kubernetes
644879075e24: Loading layer [==================================================>] 117.9MB/117.9MB
d7ff1dc646ba: Loading layer [==================================================>] 15.87kB/15.87kB
686245e78935: Loading layer [==================================================>] 14.85kB/14.85kB
d73dd9e65295: Loading layer [==================================================>] 5.632kB/5.632kB
2de391e51d73: Loading layer [==================================================>] 3.072kB/3.072kB
4605c0a3f29d: Loading layer [==================================================>] 633.4MB/633.4MB
c8d909e84bbf: Loading layer [==================================================>] 1.682MB/1.682MB
7e66ff617b4c: Loading layer [==================================================>] 4.956MB/4.956MB
db9d4b8567ab: Loading layer [==================================================>] 17.92kB/17.92kB
223fe4d67f77: Loading layer [==================================================>] 3.584kB/3.584kB
2e75b028b124: Loading layer [==================================================>] 43.04MB/43.04MB
1a7d923392f7: Loading layer [==================================================>] 2.56kB/2.56kB
2b9cc11f6cfc: Loading layer [==================================================>] 176.6kB/176.6kB
Loaded image: greenplum-for-kubernetes:v0.6.0
Load the Greenplum Operator Docker image to the Docker registry:
$ docker load -i ./images/greenplum-operator
e256c39291c9: Loading layer [==================================================>] 79.69MB/79.69MB
2250a2616dfd: Loading layer [==================================================>] 352.3kB/352.3kB
b1e0c363fd12: Loading layer [==================================================>] 37.48MB/37.48MB
Loaded image: greenplum-operator:v0.6.0
Verify that both Docker images are now available:
$ docker images "greenplum-*"
REPOSITORY                 TAG      IMAGE ID       CREATED          SIZE
greenplum-operator         v0.6.0   1f2299e10960   28 minutes ago   232MB
greenplum-for-kubernetes   v0.6.0   1d5b86baf556   30 minutes ago   763MB
(Skip this step if you are deploying to Minikube.) If you want to push the Greenplum for Kubernetes docker images to a different container registry:
Set the project name and image repo name and then use Docker to push the images. For example, to push the images to Google Cloud Registry using the current Google Cloud project name:
$ gcloud auth configure-docker
$ PROJECT=$(gcloud config list core/project --format='value(core.project)')
$ IMAGE_REPO="gcr.io/${PROJECT}"
$ GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:$(cat ./images/greenplum-for-kubernetes-tag)"
$ docker tag $(cat ./images/greenplum-for-kubernetes-id) ${GREENPLUM_IMAGE_NAME}
$ docker push ${GREENPLUM_IMAGE_NAME}
$ OPERATOR_IMAGE_NAME="${IMAGE_REPO}/greenplum-operator:$(cat ./images/greenplum-operator-tag)"
$ docker tag $(cat ./images/greenplum-operator-id) ${OPERATOR_IMAGE_NAME}
$ docker push ${OPERATOR_IMAGE_NAME}
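Optionally, you can confirm that both images reached the registry. For example, with Google Cloud Registry (an illustrative check; the repository path assumes the tagging shown above):
$ gcloud container images list --repository="gcr.io/${PROJECT}"
The output should list both the greenplum-for-kubernetes and greenplum-operator repositories.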
Copy the Kubernetes service account key (a key.json file for an account that has read access to Google Cloud Registry) to the operator subdirectory. For example:
$ cp ~/key.json ./operator/key.json
Note: See the requirements for PKS or GCP for instructions about how to obtain the key.json file.
Create a new YAML file in the workspace subdirectory. For example:
$ touch workspace/operator-values-overrides.yaml
Add the following line to the new YAML file to identify the key.json file to use. For example:
dockerRegistryKeyJson: key.json
If you pushed the Greenplum Operator and Greenplum Database Docker images to a container registry, add two additional lines to the configuration file to indicate the registry where you pushed the images. For example, if you are using Google Cloud Registry you would add properties similar to:
operatorImageRepository: gcr.io/my-project/greenplum-operator
greenplumImageRepository: gcr.io/my-project/greenplum-for-kubernetes
Be sure to replace the project and repository names with the actual names used in your deployment.
Note: If you did not tag the images with a container registry prefix or project name (for example, if you are using your own local Minikube deployment), then you can skip this step.
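Taken together, the workspace/operator-values-overrides.yaml file for this Google Cloud Registry example would contain three lines (replace the project name with your own):
dockerRegistryKeyJson: key.json
operatorImageRepository: gcr.io/my-project/greenplum-operator
greenplumImageRepository: gcr.io/my-project/greenplum-for-kubernetes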
(This step is for deployments using Harbor container registry only.) If you want to use a private Harbor container registry:
Create a key.json file in the operator subdirectory in the following format (placeholder values are in angle brackets):
{
  "auths": {
    "<registry URL>": {
      "username": "<username>",
      "password": "<password>"
    }
  }
}
Be sure to substitute each placeholder with the correct value, and remove the angle brackets; they are used only to delineate the placeholder values.
Create a new YAML file in the workspace subdirectory. For example:
$ touch workspace/operator-values-overrides.yaml
Add the following line to the new YAML file to identify the key.json file to use. For example:
dockerRegistryKeyJson: key.json
Add two additional lines to the configuration file to indicate the registry where you pushed the images. For example, use properties similar to:
operatorImageRepository: my-harbor-url.com/my-project/greenplum-operator
greenplumImageRepository: my-harbor-url.com/my-project/greenplum-for-kubernetes
Be sure to replace the Harbor URL, project, and repository names with the actual names used in your deployment.
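As with the Google Cloud Registry example above, the completed workspace/operator-values-overrides.yaml for a Harbor deployment would then contain three lines similar to:
dockerRegistryKeyJson: key.json
operatorImageRepository: my-harbor-url.com/my-project/greenplum-operator
greenplumImageRepository: my-harbor-url.com/my-project/greenplum-for-kubernetes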
Ensure that helm has the required Kubernetes service account privileges:
$ kubectl create -f ./initialize_helm_rbac.yaml
serviceaccount "tiller" created clusterrolebinding.rbac.authorization.k8s.io "tiller" created
This sets the necessary privileges for helm with a service account named tiller.
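For reference, a Tiller service account is typically granted privileges with a manifest following the standard Helm 2 RBAC pattern, similar to the sketch below. This is only an illustration; the initialize_helm_rbac.yaml file shipped with the release is authoritative and may differ in namespace or role details:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system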
Initialize and upgrade helm:
$ helm init --wait --service-account tiller --upgrade
$HELM_HOME has been configured at /<path>/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
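You can optionally confirm that Tiller is ready before proceeding; once the server side is up, helm version reports both a Client and a Server version rather than a connection error (an illustrative check, not part of the official procedure):
$ helm version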
(Optional.) If you want to use a non-default logging level (for example, to enable debug logging), then follow the instructions in Enabling Debug Logging.
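As a hypothetical illustration, such an override is usually a single extra line in workspace/operator-values-overrides.yaml; the property name shown here is an assumption, so defer to Enabling Debug Logging for the exact setting:
logLevel: debug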
Use helm to create a new Greenplum Operator release, specifying the YAML configuration file if you created one. For example, to create a new release with the name “greenplum-operator”:
$ helm install --name greenplum-operator -f workspace/operator-values-overrides.yaml operator/
If you did not create a YAML configuration file (as is the case with Minikube), omit the -f option:
$ helm install --name greenplum-operator operator/
Helm begins installing the new release into the Kubernetes namespace specified in the current Kubernetes context. If you want to install into a different namespace, include the --namespace option in the helm command.
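For example, a hypothetical install into a namespace named gpdb (the namespace name here is illustrative) would be:
$ helm install --name greenplum-operator --namespace gpdb -f workspace/operator-values-overrides.yaml operator/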
The helm install command displays the following message and concludes with a link to this documentation:
NAME:   greenplum-operator
LAST DEPLOYED: Fri Oct 5 12:33:35 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                SECRETS  AGE
greenplum-operator-service-account  1        3s

==> v1/ClusterRole
NAME                             AGE
greenplum-operator-cluster-role  3s

==> v1/ClusterRoleBinding
NAME                                     AGE
greenplum-operator-cluster-role-binding  3s

==> v1/Deployment
NAME                DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
greenplum-operator  1        1        1           1          3s

==> v1/Pod(related)
NAME                                 READY  STATUS   RESTARTS  AGE
greenplum-operator-58dd68b9c5-frrbz  1/1    Running  0         3s

==> v1/Secret
NAME       TYPE                            DATA  AGE
regsecret  kubernetes.io/dockerconfigjson  1     3s

NOTES:
greenplum-operator has been installed.

Please see documentation at:
http://greenplum-kubernetes.docs.pivotal.io/
Use watch kubectl get all to monitor the progress of the deployment. The deployment is complete when the Greenplum Operator pod is in the Running state and the replica set is available. For example:
$ watch kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/greenplum-operator-77d6dc5f79-wfgkk   1/1     Running   0          1m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   12d

NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1         1         1            1           1m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-77d6dc5f79   1         1         1       1m
Check the logs of the operator to ensure that it is running properly.
$ kubectl logs -l app=greenplum-operator
time="2019-01-10T21:57:35Z" level=info msg="Go Version: go1.11.4" time="2019-01-10T21:57:35Z" level=info msg="Go OS/Arch: linux/amd64" time="2019-01-10T21:57:35Z" level=info msg="creating operator" time="2019-01-10T21:57:35Z" level=info msg="running operator" time="2019-01-10T21:57:35Z" level=info msg="creating Greenplum CRD" time="2019-01-10T21:57:35Z" level=info msg="successfully updated greenplum CRD" time="2019-01-10T21:57:35Z" level=info msg="starting Greenplum InformerFactory" time="2019-01-10T21:57:35Z" level=info msg="running Greenplum controller" time="2019-01-10T21:57:35Z" level=info msg="started workers"
At this point, you can interact with the Greenplum Operator to deploy new Greenplum clusters or manage existing Greenplum clusters. See About the Greenplum Operator.
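For example, a new cluster is typically deployed by applying a GreenplumCluster manifest with kubectl; the my-gp-instance.yaml file name below is the sample name used in the Greenplum Operator documentation, so substitute the manifest you actually create:
$ kubectl apply -f ./workspace/my-gp-instance.yaml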