Pivotal Greenplum® for Kubernetes v0.8

Pivotal Container Service (PKS) running on Google Cloud Platform (GCP)

Follow this procedure to deploy Greenplum for Kubernetes to PKS.

Cluster Requirements

This procedure requires that PKS on GCP is installed and running, along with all prerequisite software and configuration. See Installing PKS for Greenplum for Kubernetes, Using Google Cloud Platform (GCP) for information about installing PKS.

In the PKS tile under Kubernetes Cloud Provider, ensure that the service accounts that are listed under GCP Master Service Account ID and GCP Worker Service Account ID have permission to pull images from the GCS bucket named artifacts.<project-name>.appspot.com.
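If these accounts need the permission, you can grant read access by binding the storage.objectViewer role at the project level, similar to the image pull account created below. For example (the project name and service account email address are placeholders for the values shown in the PKS tile; repeat the command for the worker service account):

    $ gcloud projects add-iam-policy-binding <project-name> \
        --member serviceAccount:<master-service-account-email> \
        --role roles/storage.objectViewer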

Obtain a Google Cloud service account key (a key.json file) for an account that has read access (the storage.objectViewer role) to the Google Container Registry. You will need to identify this file in your configuration to pull Greenplum for Kubernetes docker images from the remote registry. For example:

  1. If necessary, create a new service account to use for Greenplum for Kubernetes. These example commands create a new account named greenplum-image-pull in your current GCP project:

    $ export GCP_PROJECT=$(gcloud config get-value core/project)
    
    $ gcloud iam service-accounts create greenplum-image-pull
    
  2. Assign the required storage.objectViewer role to the new account:

    $ gcloud projects add-iam-policy-binding $GCP_PROJECT \
        --member serviceAccount:greenplum-image-pull@$GCP_PROJECT.iam.gserviceaccount.com \
        --role roles/storage.objectViewer
    
  3. Create the key for the account:

    $ gcloud iam service-accounts keys create \
        --iam-account "greenplum-image-pull@$GCP_PROJECT.iam.gserviceaccount.com" \
        ~/key.json
    

You will need to copy the key.json file to the operator directory of your Greenplum for Kubernetes installation. See Installing Greenplum for Kubernetes for complete instructions.
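For example, assuming the release was unpacked into a directory named greenplum-for-kubernetes-<version> (a hypothetical path; adjust it to match your installation):

    $ cp ~/key.json ./greenplum-for-kubernetes-<version>/operator/key.json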

Before you attempt to deploy Greenplum, ensure that the target cluster is available. Execute the following command and verify that the target cluster appears in the output:

pks list-clusters

Note: The pks login cookie typically expires after a day or two.
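If the login has expired, authenticate again and fetch the cluster credentials for kubectl before continuing. For example (the PKS API host, user name, certificate path, and cluster name below are placeholders):

    $ pks login -a <pks-api-hostname> -u <username> -p <password> --ca-cert <path-to-ca-cert>

    $ pks get-credentials <cluster-name>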

The Greenplum for Kubernetes deployment process requires the ability to map the host system’s /sys/fs/cgroup directory onto each container’s /sys/fs/cgroup. Ensure that no kernel security module (for example, AppArmor) uses a profile that disallows mounting /sys/fs/cgroup.
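For example, from a shell on a cluster node you can check whether AppArmor is enabled and which profiles are loaded (this assumes an Ubuntu-based node with the AppArmor utilities installed):

    $ cat /sys/module/apparmor/parameters/enabled

    $ sudo aa-status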

To use pre-created disks with PKS instead of (default) automatically-managed persistent volumes, follow the instructions in (Optional) Preparing Pre-Created Disks before continuing with the procedure.

Note: If any problems occur during deployment, retry deploying Greenplum by first removing the previous deployment.
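For example, if the Greenplum operator was installed as a helm release, you can locate and remove the previous release before retrying (the release name greenplum-operator below is an assumption; use helm ls to confirm the actual name):

    $ helm ls

    $ helm delete --purge greenplum-operator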

Greenplum for Kubernetes recommends the following settings:

  1. Increase the open file limit on the PKS cluster nodes.

To accomplish this, use bosh to run a script similar to the following against the deployment:


#!/usr/bin/env bash
# usage: <script> <bosh-deployment-name>
set -euo pipefail

# fail fast if the bosh CLI is not available
which bosh

# the BOSH deployment name for the PKS cluster is passed as the first argument
deployment=$1

# verify that the deployment exists and is reachable
bosh -d "${deployment}" instances > /dev/null

# raise the default open file limit for services started by systemd
bosh ssh master -d "${deployment}" -c "sudo sed -i 's/#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/' /etc/systemd/system.conf"

# raise the limit for processes that are already running by looking up the
# pids of monit and kube-apiserver and adjusting them with prlimit
bosh ssh master -d "${deployment}" -c 'sudo bash -c "prlimit --pid $(pgrep monit) --nofile=65535:65535"'
bosh ssh master -d "${deployment}" -c 'sudo bash -c "prlimit --pid $(pgrep kube-apiserver) --nofile=65535:65535"'

# repeat the above commands for the other instance groups in the deployment
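For example, in a PKS-provisioned deployment the other nodes are typically in an instance group named worker (an assumption; confirm the group names with bosh -d <deployment> instances). A sketch of the same change for the workers follows, where kubelet rather than kube-apiserver is assumed to be the long-running Kubernetes process to adjust:

# apply the limits file change and prlimit adjustments to every instance in the worker group
bosh ssh worker -d "${deployment}" -c "sudo sed -i 's/#DefaultLimitNOFILE=/DefaultLimitNOFILE=65535/' /etc/systemd/system.conf"
bosh ssh worker -d "${deployment}" -c 'sudo bash -c "prlimit --pid $(pgrep monit) --nofile=65535:65535"'
bosh ssh worker -d "${deployment}" -c 'sudo bash -c "prlimit --pid $(pgrep kubelet) --nofile=65535:65535"'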

Configuring kubectl and helm

  1. Ensure that helm has sufficient privileges via a Kubernetes service account. Use a command like:

    $ kubectl create -f initialize_helm_rbac.yaml
    
    serviceaccount "tiller" created
    clusterrolebinding.rbac.authorization.k8s.io "tiller" created
    

    This sets the necessary privileges for helm with a service account named tiller. (Equivalent kubectl commands are shown after this list.)

  2. Initialize and upgrade helm with the command:

    $ helm init --wait --service-account tiller --upgrade
    
    $HELM_HOME has been configured at /<path>/.helm.
    
    Tiller (the Helm server-side component) has been upgraded to the current version.
    Happy Helming!
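
For reference, the objects created by initialize_helm_rbac.yaml can also be created directly with kubectl, and running helm version afterward confirms that the client can reach Tiller. The kube-system namespace and the cluster-admin role binding shown here are conventional assumptions; check the contents of initialize_helm_rbac.yaml for the exact definitions.

    $ kubectl create serviceaccount tiller --namespace kube-system

    $ kubectl create clusterrolebinding tiller --clusterrole cluster-admin \
        --serviceaccount=kube-system:tiller

    $ helm version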