Pivotal Container Service (PKS) running on Google Cloud Platform (GCP)
Follow this procedure to deploy Greenplum for Kubernetes to PKS.
Cluster Requirements
This procedure requires that PKS on GCP is installed and running, along with all prerequisite software and configuration. See Installing PKS for Greenplum for Kubernetes, Using Google Cloud Platform (GCP) for information about installing PKS.
In the PKS tile under Kubernetes Cloud Provider, ensure that the service accounts that are listed under GCP Master Service Account ID and GCP Worker Service Account ID have permission to pull images from the GCS bucket named artifacts.<project-name>.appspot.com.
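If either account is missing read access, one way to grant it at the bucket level (shown as a sketch; the service account emails are placeholders for the accounts configured in the PKS tile) is:
$ gsutil iam ch \
    serviceAccount:<master-service-account-email>:objectViewer \
    serviceAccount:<worker-service-account-email>:objectViewer \
    gs://artifacts.<project-name>.appspot.com
You can review the bucket's current bindings with gsutil iam get gs://artifacts.<project-name>.appspot.com.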
Obtain a GCP service account key (a key.json file) for an account that has read access (the storage.objectViewer role) to the Google Container Registry. You will need to identify this file in your configuration to pull VMware Tanzu Greenplum for Kubernetes Docker images from the remote registry. For example:
If necessary, create a new service account to use for VMware Tanzu Greenplum for Kubernetes. These example commands create a new account named greenplum-image-pull in your current GCP project:
$ export GCP_PROJECT=$(gcloud config get-value core/project)
$ gcloud iam service-accounts create greenplum-image-pull
Assign the required storage.objectViewer role to the new account:
$ gcloud projects add-iam-policy-binding $GCP_PROJECT \
    --member serviceAccount:greenplum-image-pull@$GCP_PROJECT.iam.gserviceaccount.com \
    --role roles/storage.objectViewer
Create the key for the account:
$ gcloud iam service-accounts keys create \
    --iam-account "greenplum-image-pull@$GCP_PROJECT.iam.gserviceaccount.com" \
    ~/key.json
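The deployment configuration later identifies this key.json file so that Kubernetes can pull the Greenplum images. Purely as a general illustration (the secret name gcr-image-pull is a placeholder, and the Greenplum for Kubernetes configuration may reference the key differently), a key.json can be turned into an image-pull secret for gcr.io like this:
$ kubectl create secret docker-registry gcr-image-pull \
    --docker-server=https://gcr.io \
    --docker-username=_json_key \
    --docker-password="$(cat ~/key.json)" \
    --docker-email=greenplum-image-pull@$GCP_PROJECT.iam.gserviceaccount.com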
Before you attempt to deploy Greenplum, ensure that the target cluster is available. Execute the following command to make sure that the target cluster displays in the output:
pks list-clusters
Note: The pks login cookie typically expires after a day or two.
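If the login has expired, or kubectl is not yet pointed at the target cluster, log in again and fetch the cluster credentials; the API hostname, user name, and cluster name below are placeholders:
$ pks login -a <PKS-API-hostname> -u <username> -p <password> --ca-cert <path-to-ca-cert>
$ pks get-credentials <cluster-name>
pks get-credentials also updates your kubectl context to point at the named cluster.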
The Greenplum for Kubernetes deployment process requires the ability to map the host system’s /sys/fs/cgroup directory onto each container’s /sys/fs/cgroup. Ensure that no kernel security module (for example, AppArmor) uses a profile that disallows mounting /sys/fs/cgroup.
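One quick way to check whether AppArmor is enabled on a node (a sketch; it assumes the worker instance group name used by PKS and that the stemcell exposes the standard sysfs flag) is:
$ bosh ssh worker -d <deployment> -c "cat /sys/module/apparmor/parameters/enabled"
A value of Y means the AppArmor module is active on that node, in which case confirm that no loaded profile restricts mounting /sys/fs/cgroup.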
To use pre-created disks with PKS instead of (default) automatically-managed persistent volumes, follow the instructions in (Optional) Preparing Pre-Created Disks before continuing with the procedure.
Note: If any problems occur during deployment, remove the previous deployment before retrying the Greenplum deployment.
The following setting is recommended for Greenplum for Kubernetes:
- The ability to increase the open file limit on the PKS cluster nodes
To accomplish this, do the following on each node using bosh:
#!/usr/bin/env bash
# fail fast if bosh isn't available or if any command below fails
set -euo pipefail
which bosh

deployment=$1
# verify that the target deployment exists
bosh deployment -d ${deployment}

# raise the default open file limit in the systemd configuration
bosh ssh master -d ${deployment} -c "sudo sed -i 's/#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/' /etc/systemd/system.conf"

# raise the limit for already-running processes by getting the pids of the monit and kube-apiserver processes
bosh ssh master -d ${deployment} -c 'sudo bash -c "prlimit --pid $(pgrep monit) --nofile=65535:65535"'
bosh ssh master -d ${deployment} -c 'sudo bash -c "prlimit --pid $(pgrep kube-apiserver) --nofile=65535:65535"'

# repeat the above for the other instance groups (besides master) in the deployment
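To run the script above against a PKS-provisioned cluster, look up its BOSH deployment name first; the script file name below is hypothetical:
$ bosh deployments
$ ./increase-open-file-limit.sh service-instance_<cluster-uuid>
PKS-provisioned clusters typically appear in the deployments list with names of the form service-instance_<uuid>.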
Configuring kubectl and helm
Ensure that helm has sufficient privileges via a Kubernetes service account. Use a command like:
$ kubectl create -f initialize_helm_rbac.yaml
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
This sets the necessary privileges for helm with a service account named tiller.
Initialize and upgrade helm with the command:
$ helm init --wait --service-account tiller --upgrade
$HELM_HOME has been configured at /<path>/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
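To confirm that Tiller is up before continuing with the deployment, you can check the client and server versions and the Tiller deployment (tiller-deploy is the default name used by Helm 2):
$ helm version
$ kubectl get deployment tiller-deploy -n kube-system
If helm version reports both a Client and a Server version, Tiller is reachable from your workstation.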