Google Kubernetes Engine (GKE) running on Google Cloud Platform (GCP)

This section describes the requirements for using Greenplum for Kubernetes with Google Kubernetes Engine (GKE) deployments.

Cluster Requirements

When creating the GKE cluster, ensure that you make the following selections on the Create a Kubernetes cluster screen of the Google Cloud Platform console:

  • For the Cluster Version option, select the most recent version of Kubernetes.
  • For the Machine Type option, select a type with at least 2 vCPUs and 7.5 GB of memory.
  • For the Node Image option, you must select Ubuntu. You cannot deploy Greenplum with the Container-Optimized OS (cos) image.
  • Set the Size to 4 or more nodes.
  • Set Automatic node repair to Disabled (the default).
  • In the Advanced Options (click More to display Advanced Options), select Enable Kubernetes alpha features in this cluster. Also select I understand the consequences to confirm the choice.
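If you prefer to script cluster creation instead of using the console, the selections above correspond roughly to the following gcloud invocation. This is a sketch: the cluster name and zone are placeholders, and available flags can vary between gcloud releases.

```shell
# Sketch: create a GKE cluster matching the console selections above.
# "my-greenplum-cluster" and the zone are placeholders.
# n1-standard-2 provides 2 vCPUs / 7.5 GB memory; the UBUNTU image type is
# required because Greenplum cannot run on the Container-Optimized OS (cos).
gcloud container clusters create my-greenplum-cluster \
  --zone us-central1-a \
  --cluster-version latest \
  --machine-type n1-standard-2 \
  --image-type UBUNTU \
  --num-nodes 4 \
  --no-enable-autorepair \
  --enable-kubernetes-alpha
```

The --enable-kubernetes-alpha flag prompts for confirmation, matching the "I understand the consequences" checkbox in the console.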

In addition to the above, the Greenplum for Kubernetes deployment process requires the ability to map the host system’s /sys/fs/cgroup directory onto each container’s /sys/fs/cgroup. Ensure that no kernel security module (for example, AppArmor) uses a profile that disallows mounting /sys/fs/cgroup.
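To check whether AppArmor is active on a cluster node (for example, after connecting to the node with gcloud compute ssh), you can inspect the kernel parameter and the loaded profiles. This is an illustrative check, not part of the official deployment procedure:

```shell
# Prints Y if the AppArmor kernel module is enabled on this node.
cat /sys/module/apparmor/parameters/enabled
# Lists loaded AppArmor profiles and their modes (enforce/complain),
# so you can spot a profile that might block the /sys/fs/cgroup mount.
sudo aa-status
```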

Obtain a Google Cloud service account key (a key.json file) for an account that has read access to the Google Container Registry. You will need to identify this file in your configuration to pull Greenplum for Kubernetes Docker images from the remote registry.
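One way to obtain such a key is with the gcloud IAM commands. The service account name (greenplum-reader) and project (my-project) below are placeholder examples:

```shell
# Placeholders: greenplum-reader, my-project.
# Create a service account for pulling images.
gcloud iam service-accounts create greenplum-reader
# Grant read access to the registry's backing storage.
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:greenplum-reader@my-project.iam.gserviceaccount.com \
  --role roles/storage.objectViewer
# Download the key.json file referenced in your configuration.
gcloud iam service-accounts keys create key.json \
  --iam-account greenplum-reader@my-project.iam.gserviceaccount.com
```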

Setting the Kubernetes Context

After creating your GKE cluster, use the gcloud utility to log in to GCP and to set your current project and cluster context:

  1. Log into GCP:

    $ gcloud auth login
  2. Set the current project to the project where you will deploy Greenplum:

    $ gcloud config set project <project-name>
  3. Set the context to the Kubernetes cluster that you created for Greenplum:

    1. Access GCP Console.
    2. Select Kubernetes Engine > Clusters.
    3. Click Connect next to the cluster that you configured for Greenplum, and copy the connection command.
    4. On your local client machine, paste the command to set the context to your cluster. For example:

      $ gcloud container clusters get-credentials <cluster-name> --zone us-central1-a --project <my-project>
      Fetching cluster endpoint and auth data.
      kubeconfig entry generated for <cluster-name>.
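After running the connection command, you can confirm that kubectl now points at the new cluster:

```shell
# Show the active kubeconfig context; it should name your Greenplum cluster.
kubectl config current-context
# Confirm the cluster's nodes are reachable and Ready.
kubectl get nodes
```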