Installing Greenplum for Kubernetes
This topic describes how to install Pivotal Greenplum for Kubernetes. The installation process involves loading the Greenplum for Kubernetes container images into your container registry, and then using the helm
package manager to install the Greenplum Operator resource in Kubernetes. After the Greenplum Operator resource is available, you can interact with it to deploy and manage Greenplum clusters in Kubernetes.
Prerequisites
Before you install Greenplum for Kubernetes, ensure that you have installed all required software and prepared your Kubernetes environment as described in Prerequisites.
Procedure
Follow these steps to download and install the Greenplum for Kubernetes container images, and install the Greenplum Operator resource.
Download the Greenplum for Kubernetes software from Pivotal Network. The download file has the name:
greenplum-for-kubernetes-<version>.tar.gz
Go to the directory where you downloaded Greenplum for Kubernetes, and unpack the downloaded software. For example:
$ cd ~/Downloads
$ tar xzf greenplum-for-kubernetes-*.tar.gz
The above command unpacks the distribution into a new directory named greenplum-for-kubernetes-<version>.
Go into the new greenplum-for-kubernetes-<version> directory:
$ cd ./greenplum-for-kubernetes-*
(This step is for Minikube deployments only.) Ensure that the local docker daemon interacts with the Minikube docker container registry:
$ eval $(minikube docker-env)
Note: To undo this docker setting in the current shell, run eval "$(docker-machine env -u)".
Load the Greenplum for Kubernetes Docker image into the local Docker registry:
$ docker load -i ./images/greenplum-for-kubernetes
91d23cf5425a: Loading layer [==================================================>]  127.3MB/127.3MB
f36b28e4310d: Loading layer [==================================================>]  11.78kB/11.78kB
6cb741cb00b7: Loading layer [==================================================>]  15.87kB/15.87kB
...
Loaded image: greenplum-for-kubernetes:v1.12.0
Load the Greenplum Operator Docker image to the Docker registry:
$ docker load -i ./images/greenplum-operator
33c58014b5a4: Loading layer [==================================================>]  65.5MB/65.5MB
a1eabe7eb601: Loading layer [==================================================>]  40.65MB/40.65MB
511680a9987d: Loading layer [==================================================>]  41.01MB/41.01MB
Loaded image: greenplum-operator:v1.12.0
Verify that both Docker images are now available:
$ docker images "greenplum-*"
REPOSITORY                 TAG       IMAGE ID       CREATED       SIZE
greenplum-operator         v1.12.0   852cafa7ac90   7 weeks ago   269MB
greenplum-for-kubernetes   v1.12.0   3819d17a577a   7 weeks ago   3.05GB
(Skip this step if you are using local docker images, such as on Minikube.) If you want to push the Greenplum for Kubernetes docker images to a different container registry, set the project name and image repo name and then use Docker to push the images. For example, to push the images to Google Cloud Registry using the current Google Cloud project name:
$ gcloud auth configure-docker
$ PROJECT=$(gcloud config list core/project --format='value(core.project)')
$ IMAGE_REPO="gcr.io/${PROJECT}"
$ GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:$(cat ./images/greenplum-for-kubernetes-tag)"
$ docker tag $(cat ./images/greenplum-for-kubernetes-id) ${GREENPLUM_IMAGE_NAME}
$ docker push ${GREENPLUM_IMAGE_NAME}
$ OPERATOR_IMAGE_NAME="${IMAGE_REPO}/greenplum-operator:$(cat ./images/greenplum-operator-tag)"
$ docker tag $(cat ./images/greenplum-operator-id) ${OPERATOR_IMAGE_NAME}
$ docker push ${OPERATOR_IMAGE_NAME}
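The naming convention in the commands above is worth spelling out: each file under ./images holds either an image ID or a version tag, and the pushed name is just the registry prefix, image name, and tag joined together. A minimal sketch of that composition, using a hypothetical project name and tag in place of the gcloud and tag-file values:

```shell
# Hypothetical stand-ins: IMAGE_REPO would come from gcr.io/${PROJECT},
# and TAG from: cat ./images/greenplum-for-kubernetes-tag
IMAGE_REPO="gcr.io/my-gcp-project"
TAG="v1.12.0"

# Compose the full image name exactly as the docker tag/push step does.
GREENPLUM_IMAGE_NAME="${IMAGE_REPO}/greenplum-for-kubernetes:${TAG}"
echo "${GREENPLUM_IMAGE_NAME}"
```

The same pattern applies to the operator image, with greenplum-operator and its own tag file.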
Create a docker-registry secret named regsecret so that pods can fetch images from remote container registries. For example:
# For GCR
kubectl create secret docker-registry regsecret \
    --docker-server=https://gcr.io \
    --docker-username=_json_key \
    --docker-password="$(cat key.json)"

# For ECR
TOKEN=$(aws ecr --region=$REGION get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2)
kubectl create secret docker-registry regsecret \
    --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
    --docker-username=AWS \
    --docker-password="${TOKEN}"

# For Harbor
kubectl create secret docker-registry regsecret \
    --docker-server=${HARBOR_URL} \
    --docker-username=${HARBOR_USER} \
    --docker-password="${HARBOR_PASSWORD}"
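The ECR pipeline above can look opaque: aws ecr get-authorization-token returns a base64-encoded string of the form AWS:<password>, so the command decodes it and keeps everything after the colon. A minimal sketch of just that decode step, using a fabricated password in place of a real token (no AWS credentials involved):

```shell
# Simulate what AWS returns: base64("AWS:<password>"), with a made-up password.
FAKE_AUTH_TOKEN=$(printf 'AWS:s3cr3t-docker-password' | base64)

# Same decode pipeline as the regsecret step: decode, then take the part after the colon.
TOKEN=$(printf '%s' "${FAKE_AUTH_TOKEN}" | base64 -d | cut -d: -f2)
echo "${TOKEN}"
```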
Verify the regsecret secret by running the provided test script:
$ ../workspace/samples/scripts/regsecret-test.bash ${OPERATOR_IMAGE_NAME}
The output of the above command should print:
GREENPLUM-OPERATOR TEST OK
If this verification fails for Google Cloud Registry, make sure the key.json file has the roles/storage.objectViewer role.
If you pushed the Greenplum Operator and Greenplum Database Docker images to a container registry, create a new YAML file in the workspace subdirectory with two lines to indicate the registry where you pushed the images. For example, if you are using Google Cloud Registry you would add properties similar to:
$ cat <<EOF >workspace/operator-values-overrides.yaml
operatorImageRepository: ${IMAGE_REPO}/greenplum-operator
greenplumImageRepository: ${IMAGE_REPO}/greenplum-for-kubernetes
EOF
Be sure to replace the project and repository names with the actual names used in your deployment.
Note: If you did not tag the images with a container registry prefix or project name (for example, if you are using your own local Minikube deployment), then you can skip this step.
(Optional.) If you want to use a non-default logging level (for example, to enable debug logging), follow the instructions in Enabling Debug Logging.
(Optional.) If you want to specify a node for the operator to run on, first apply a label to the node:
$ kubectl label node <node name> <key>=<value>
Then, edit the operator-values-overrides.yaml file to include a matching set of key/value label selectors:
operatorWorkerSelector: {
    <key>: "<value>"
    [ ... ]
}
See the documentation on the manifest's workerSelector attribute for more information on how Greenplum for Kubernetes handles label selectors.
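If you are both pulling images from a registry and pinning the operator to a labeled node, the registry properties and the selector can live in the same overrides file. A minimal sketch, using a hypothetical registry path and a hypothetical node label (worker: gpdb-operator) in place of your own values:

```shell
# Hypothetical registry path and label key/value -- substitute your own.
mkdir -p workspace
cat <<EOF > workspace/operator-values-overrides.yaml
operatorImageRepository: gcr.io/my-gcp-project/greenplum-operator
greenplumImageRepository: gcr.io/my-gcp-project/greenplum-for-kubernetes
operatorWorkerSelector: { worker: "gpdb-operator" }
EOF

# Quick sanity check that the selector made it into the file.
grep 'operatorWorkerSelector' workspace/operator-values-overrides.yaml
```

The key/value pair must match the label you applied with kubectl label node for the selector to have any effect.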
Use helm to create a new Greenplum Operator release, specifying the YAML configuration file if you created one. For example, to create a new release with the name "greenplum-operator":
$ helm install greenplum-operator -f workspace/operator-values-overrides.yaml operator/
If you did not create a YAML configuration file (as is the case with Minikube), omit the -f option:
$ helm install greenplum-operator operator/
Helm begins installing the new release into the Kubernetes namespace specified in the current Kubernetes context. If you want to install into a different namespace, include the --namespace option in the helm command.
The command displays the following message and concludes with a link to this documentation:
NAME: greenplum-operator
LAST DEPLOYED: Wed Feb 12 10:39:03 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
greenplum-operator has been installed.

Please see documentation at:
http://greenplum-kubernetes.docs.pivotal.io/
deployment.apps/greenplum-operator condition met
Use watch kubectl get all to monitor the progress of the deployment. The deployment is complete when the Greenplum Operator pod is in the Running state and the replica set is available. For example:
$ watch kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/greenplum-operator-667ccc59fd-x7dw7   1/1     Running   0          33s

NAME                                                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/greenplum-validating-webhook-service-667ccc59fd-x7dw7   ClusterIP   10.105.146.151   <none>        443/TCP   32s
service/kubernetes                                              ClusterIP   10.96.0.1        <none>        443/TCP   6m44s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/greenplum-operator   1/1     1            1           33s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/greenplum-operator-667ccc59fd   1         1         1       33s
Check the logs of the operator to ensure that it is running properly.
$ kubectl logs -l app=greenplum-operator
time="2020-02-12T18:39:04Z" level=info msg="starting greenplum validating admission webhook server"
2020-02-12T18:39:04.497Z  INFO  controller-runtime.metrics  metrics server is starting to listen  {"addr": ":8080"}
2020-02-12T18:39:04.497Z  INFO  controller-runtime.controller  Starting EventSource  {"controller": "greenplumtextservice", "source": "kind source: /, Kind="}
2020-02-12T18:39:04.498Z  INFO  controller-runtime.controller  Starting EventSource  {"controller": "greenplumpxfservice", "source": "kind source: /, Kind="}
2020-02-12T18:39:04.498Z  INFO  setup  starting manager
2020-02-12T18:39:04.498Z  INFO  controller-runtime.manager  starting metrics server  {"path": "/metrics"}
2020-02-12T18:39:04.599Z  INFO  controller-runtime.controller  Starting Controller  {"controller": "greenplumtextservice"}
2020-02-12T18:39:04.599Z  INFO  controller-runtime.controller  Starting Controller  {"controller": "greenplumpxfservice"}
2020-02-12T18:39:04.699Z  INFO  controller-runtime.controller  Starting workers  {"controller": "greenplumpxfservice", "worker count": 1}
2020-02-12T18:39:04.699Z  INFO  controller-runtime.controller  Starting workers  {"controller": "greenplumtextservice", "worker count": 1}
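When scanning the operator logs for problems, filtering by log level is a quick check. A sketch of that filter, run here against sample text in the same format as the output above (the error line is fabricated for illustration; against a real cluster you would pipe kubectl logs -l app=greenplum-operator instead):

```shell
# Two sample lines in the operator's log format; the level=error line is made up
# for illustration. Real input would come from: kubectl logs -l app=greenplum-operator
LOGS='time="2020-02-12T18:39:04Z" level=info msg="starting greenplum validating admission webhook server"
time="2020-02-12T18:40:00Z" level=error msg="example failure line"'

# Keep only error-level entries.
printf '%s\n' "${LOGS}" | grep 'level=error'
```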
At this point, you can interact with the Greenplum Operator to deploy new Greenplum clusters or manage existing Greenplum clusters. See About the Greenplum Operator.