Installing PKS for Greenplum for Kubernetes Using Google Cloud Platform (GCP)
This release of Pivotal Greenplum for Kubernetes can be deployed with Pivotal Container Service (PKS) running on Google Cloud Platform (GCP). Management scripts are provided to help you configure the required GCP and Kubernetes resources and deploy a Greenplum for Kubernetes cluster.
Follow each of the steps in this section to install required software and configure resources before you attempt to deploy Greenplum for Kubernetes.
Step 1: Install PKS
Follow the instructions in Installing PKS on GCP in the Pivotal Container Service documentation to install PKS on Google Cloud Platform. Note that this step involves completing instructions in each of these sections of the PKS documentation:
- GCP Prerequisites and Resource Requirements
- Preparing to Deploy PKS on GCP
- Deploying Ops Manager to GCP
- Configuring Ops Manager on GCP
- Installing and Configuring PKS on GCP
Completing these instructions creates a new PKS installation, including an Operations Manager URL that is referenced in later steps as <OPSMAN_IP_OR_FQDN>.
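The placeholder <OPSMAN_IP_OR_FQDN> stands for the IP address or fully-qualified domain name of your Ops Manager instance. If you want to avoid retyping it, you could record it in a shell variable (the variable name here is illustrative and not used by the scripts):

$ export OPSMAN_IP_OR_FQDN=opsman.example.com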
Step 2: Install Client Tools
Pivotal Greenplum for Kubernetes provides scripts that automate configuration tasks in Google Cloud Platform, PKS, and Kubernetes. These scripts require several tools on your client system. Follow these steps:
On MacOS systems, use homebrew to install the required client system tools:

$ brew install jq yq kubernetes-helm docker kubectl

The above command installs several tools that are required for the Greenplum for Kubernetes scripts:
- jq - for parsing JSON on the command line
- yq - for parsing YAML on the command line
- helm - the CLI for the Helm package manager for Kubernetes
- docker - the software container platform and related tools
- kubectl - the CLI for deploying and managing applications on Kubernetes
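To confirm that the tools installed correctly, you can ask each one to report its version; a minimal check using each tool's standard version flag:

$ jq --version
$ yq --version
$ helm version --client
$ docker --version
$ kubectl version --client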
Download and install the Google Cloud tools. Ensure that the gcloud utility is available before you continue.

Install the PKS CLI tool from Pivotal Network at Pivotal Container Service (PKS).
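Both tools can be verified the same way; assuming the gcloud and pks binaries are on your PATH:

$ gcloud version
$ pks --version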
Step 3: Configure the GCP Service Account
Log in to GCP with the gcloud tool and set a new project:

$ gcloud auth login
$ gcloud projects list
$ gcloud config set project <project_name>
As a best practice, create a separate service account to use for deploying Greenplum for Kubernetes.
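If you want to create the service account and its key file manually rather than with the provided script, a sketch using gcloud (the account name greenplum-sa is hypothetical):

$ gcloud iam service-accounts create greenplum-sa --display-name="Greenplum for Kubernetes"
$ gcloud iam service-accounts keys create ./key.json \
    --iam-account=greenplum-sa@<project_name>.iam.gserviceaccount.com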
Note: Update the location of the key.json file in the Values-common.yaml key dockerRegistryKeyJson.

The Greenplum for Kubernetes service account must have the following permissions, which are automatically set by the script:
- Compute Viewer
- Compute Network Admin
- Storage Object Creator
You can optionally grant these permissions to a different service account using IAM & admin in the GCP console.
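The same roles can also be granted from the command line with IAM policy bindings; a sketch, again assuming the hypothetical greenplum-sa account (repeat for roles/compute.networkAdmin and roles/storage.objectCreator):

$ gcloud projects add-iam-policy-binding <project_name> \
    --member="serviceAccount:greenplum-sa@<project_name>.iam.gserviceaccount.com" \
    --role="roles/compute.viewer"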
Activate the service account using the key file key.json:

$ gcloud auth activate-service-account --key-file=./key.json
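You can confirm which account is active at any time:

$ gcloud auth list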
Set a preferred availability zone with a command similar to:
$ gcloud config set compute/zone us-west1-a
Update the zone name as necessary for your configuration.
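If you are unsure which zones are available, gcloud can list them:

$ gcloud compute zones list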
Set up Google Container Registry to host Docker images:
- Enable the Container Registry API.
- Configure Docker to use the gcloud command-line tool as a credential helper for the activated service account:

$ gcloud auth configure-docker

For additional details, see the Google Container Registry documentation.
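The Container Registry API can also be enabled from the command line, and a test push is a quick way to confirm that the credential helper works; a sketch in which the image name is purely illustrative:

$ gcloud services enable containerregistry.googleapis.com
$ docker tag my-image:latest gcr.io/<project_name>/my-image:latest
$ docker push gcr.io/<project_name>/my-image:latest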
Step 4: Configure the GCP Firewall
To provide communication between the nodes and containers running Greenplum, you must create a firewall rule and add it to the default deployment tag. This creates routes between nodes and their containers. Follow these steps:
Access the Operations Manager configuration page at https://<OPSMAN_IP_OR_FQDN>/infrastructure/iaas_configuration/edit. Look for the field named Default Deployment Tag and record its value. If it is empty, use the value greenplum-tag.

Export a variable for the Default Deployment Tag value, as in:

$ export DEFAULT_DEPLOYMENT_TAG="greenplum-tag"

The Default Deployment Tag controls the default firewall rules for every VM that is created.

Determine the Google Parent Network name. To do so, access the Operations Manager page at https://<OPSMAN_IP_OR_FQDN>/infrastructure/networks/edit and disclose one of the networks that you created for PKS. The Google Network Name value uses the format <parent-network-name>/<subnet-name>/<region-name>. Because all of the networks are based on the underlying Google Network, the first part of the full network name, <parent-network-name>, corresponds to the Google Parent Network.

For example, if the value of the Google Network Name reads gpcloud-pks-net/foo/us-west1, then gpcloud-pks-net corresponds to the Google Parent Network.

Export an environment variable for the Google parent network:

$ export GOOGLE_PARENT_NETWORK="gpcloud-pks-net"
Finally, create a firewall rule to allow inter-container traffic, using the DEFAULT_DEPLOYMENT_TAG and GOOGLE_PARENT_NETWORK variables that you exported:

$ gcloud compute firewall-rules create greenplum-rule \
    --network=${GOOGLE_PARENT_NETWORK} \
    --action=ALLOW \
    --rules=tcp:1024-65535,tcp:22,icmp,udp \
    --source-ranges=0.0.0.0/0 \
    --target-tags ${DEFAULT_DEPLOYMENT_TAG}
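To confirm that the rule was created with the intended network and target tags:

$ gcloud compute firewall-rules describe greenplum-rule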
Step 5: Configure the Kubernetes Load Balancer
Note: This step should not be necessary after the release of PKS 1.1.
Create or re-use a load balancer for the new Kubernetes cluster. If a TCP load balancer from a previous Kubernetes cluster exists, reuse it by specifying its front-end IP address in Step 6: Deploy a Kubernetes Cluster.
If no load balancer exists, use these commands to create one:
$ export LB_NAME=greenplum-cluster1-lb
$ gcloud compute target-pools create ${LB_NAME}
$ REGION=$(gcloud config get-value compute/region)
$ gcloud compute forwarding-rules create ${LB_NAME}-forwarding-rule \
--target-pool ${LB_NAME} --ports 8443 --ip-protocol TCP --region ${REGION}
$ echo " Front-end IP Address of load balancer for ${LB_NAME}:"
$ gcloud compute forwarding-rules describe ${LB_NAME}-forwarding-rule --region ${REGION} --format="value(IP_ADDRESS)"
You will reset the backend instances to attach to the new Kubernetes cluster in the next section.
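Because Step 6 requires the load balancer's front-end IP address, it can be convenient to capture the output of the describe command in a shell variable (the variable name LB_FRONTEND_IP is illustrative):

$ export LB_FRONTEND_IP=$(gcloud compute forwarding-rules describe ${LB_NAME}-forwarding-rule \
    --region ${REGION} --format="value(IP_ADDRESS)")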
Step 6: Deploy a Kubernetes Cluster
Follow these steps to deploy a new PKS cluster using the firewall rules and load balancer you configured for Kubernetes:
Follow the instructions to Log in to PKS CLI.
Execute the create_pks_cluster_on_gcp.bash script, specifying the IP address of the load balancer's front end. The command uses the syntax create_pks_cluster_on_gcp.bash <my_cluster-name> <IP_address_of_Load_Balancer>. For example:

$ cd workspace
$ samples/scripts/create_pks_cluster_on_gcp.bash gpdb 1.1.1.1
This creates a Kubernetes cluster and assigns it to be accessed via the specified load balancer. You can use either an existing load balancer or a newly-created one, as described in Step 5: Configure the Kubernetes Load Balancer.
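Attaching the cluster to the load balancer amounts to adding the cluster's master VM to the target pool. A hedged sketch of the equivalent manual command, in case you need to re-attach a cluster yourself (the instance name and zone are placeholders, and the script's actual steps may differ):

$ gcloud compute target-pools add-instances ${LB_NAME} \
    --instances=<master-instance-name> --instances-zone=<zone>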
Use the kubectl command to show the system containers and newly-created nodes available for deploying Greenplum for Kubernetes:

$ kubectl get pods --namespace kube-system
$ kubectl get nodes
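If kubectl is not yet pointed at the new cluster, you can fetch its credentials with the PKS CLI first, using the cluster name from the example above:

$ pks get-credentials gpdb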
FAQ
What permissions are required for the Google Service Account?
The service account used by PKS must have the compute.admin role so that it can create a new service. As part of creating a service, it must create firewall rules and an external IP address for the load balancer.
How can I use bosh to obtain a shell?
To get a Bash shell on a particular node, go through Operations Manager, using it as a jump box:
In the Google Cloud web interface, look for “compute instances” and identify Ops Manager.
Click the “ssh” button to launch an ssh shell on that instance.
After launching the ssh shell, use the BOSH CLI command from the Ops Manager Interface Credentials tab. Look for a Bosh Commandline Credentials line that looks like:

BOSH_CLIENT=<SOME_CLIENT> BOSH_CLIENT_SECRET=<SOME_SECRET> BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=<SOME_IP> bosh
Create an alias named pks and use it in the commands below:

BOSH_CLIENT=<SOME_CLIENT> BOSH_CLIENT_SECRET=<SOME_SECRET> BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate bosh alias-env pks -e <SOME_IP>
For convenience, export these variables in the current shell:

export BOSH_CLIENT=<SOME_CLIENT>
export BOSH_CLIENT_SECRET=<SOME_SECRET>
export BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate
After exporting those variables, the following command should work:
bosh -e pks deployments
Determine the IP address of the node you want by listing the nodes for each pod:

kubectl get pods -o wide

Then list the BOSH VMs and open an ssh session to the one that matches:

bosh -e pks vms
bosh -e pks ssh <VM_NAME_FROM_COMMAND_ABOVE>
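To match a node IP address from kubectl against a BOSH VM, filtering the vms output can help:

bosh -e pks vms | grep <NODE_IP_FROM_KUBECTL>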
How do I set permissions for MacOS Homebrew?
If necessary, execute the command:
sudo chown -R $USER:admin /usr/local