Greenplum PL/Container Service Properties

This section describes each of the properties that you can define for a GreenplumPLService configuration in the Pivotal Greenplum manifest file.

Synopsis

apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumPLService"
metadata:
  name: <string>
spec:
  replicas: <integer>
  cpu: <cpu-limit>
  memory: <memory-limit>
  workerSelector: {
        <label>: "<value>"
        [ ... ]
  }

Description

You specify Greenplum PL/Container configuration properties to the Greenplum Operator via the YAML-formatted Greenplum manifest file. A sample manifest file is provided in workspace/samples/my-gp-with-pl-instance.yaml. The current version of the manifest supports configuring the cluster name, number of PL/Container replicas, and the memory, CPU, and worker selector settings. See also Deploying PL/Container with Greenplum for information about deploying a new Greenplum cluster with PL/Container using a manifest file.

Note: As a best practice, keep the PL/Container configuration properties in the same manifest file as Greenplum Database, to simplify upgrades or changes to the related service objects.
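
For reference, the following shows a complete GreenplumPLService configuration with each property filled in. The instance name and the resource values are illustrative; adapt them to your environment:

apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumPLService"
metadata:
  name: my-gp-pl-instance
spec:
  replicas: 2
  cpu: "1.2"
  memory: "4.5Gi"
  workerSelector: {
    worker: "gpdb-pl4k"
  }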

Keywords and Values

Cluster Metadata

name: <string>
(Required.) Sets the name of the Greenplum PL/Container instance resources. You can filter the output of kubectl commands using this name.

This value cannot be dynamically changed for an existing cluster. If you change this value and re-apply the manifest to an existing cluster, the Operator creates a new deployment.
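
For example, assuming an instance named my-gp-pl-instance (a hypothetical name), you can retrieve the corresponding resource by name:

$ kubectl get greenplumplservice my-gp-pl-instance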

Greenplum PL/Container Configuration

replicas: <integer>
(Required.) The number of PL/Container replica pods to create in the Greenplum cluster.

You can increase this value and re-apply it to an existing cluster as necessary.
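
For example, to scale up an existing deployment, you might increase the replica count in the manifest (the value 3 is illustrative) and re-apply the file with kubectl:

    ...
    replicas: 3
    ...

$ kubectl apply -f workspace/samples/my-gp-with-pl-instance.yaml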

memory: <memory-limit>
(Optional.) The amount of memory allocated to each Greenplum PL/Container pod. This value defines a memory limit; if a pod attempts to exceed the limit, it is removed and replaced by a new pod. You can specify a suffix to define the memory units (for example, 4.5Gi). If omitted, the pod either has no upper bound on the memory resources it can use, or it inherits the default limit if one is specified in the namespace where it is deployed. See Assign Memory Resources to Containers and Pods in the Kubernetes documentation for more information.

If you change this value and re-apply it to an existing cluster, the Operator immediately re-creates the existing pods, causing a service interruption.

Note: If you do not want to specify a memory limit, comment out or remove the memory: keyword from the YAML file.
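
For example, this excerpt applies the 4.5Gi limit mentioned above to each PL/Container pod:

    ...
    memory: "4.5Gi"
    ...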

cpu: <cpu-limit>
(Optional.) The amount of CPU resources allocated to each Greenplum PL/Container pod, specified as a Kubernetes CPU unit (for example, cpu: "1.2"). If omitted, the pod either has no upper bound on the CPU resources it can use, or it inherits the default limit if one is specified in the namespace where it is deployed. See Assign CPU Resources to Containers and Pods in the Kubernetes documentation for more information.

If you change this value and re-apply it to an existing cluster, the Operator re-creates the existing pods, causing a service interruption.

Note: If you do not want to specify a CPU limit, comment out or remove the cpu: keyword from the YAML file.
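
For example, this excerpt limits each PL/Container pod to the 1.2 CPU units mentioned above:

    ...
    cpu: "1.2"
    ...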

workerSelector: <map of key-value pairs>
(Optional.) One or more selector labels to use for choosing the nodes on which Greenplum PL/Container pods are scheduled. Specify one or more label-value pairs to constrain Greenplum PL/Container pods to nodes that have the matching labels. Define the selector labels as you would for a pod's nodeSelector attribute. If you do not want to use a workerSelector, remove the workerSelector attribute from the manifest file.

For example, consider the case where you assign the label worker=gpdb-pl4k to one or more nodes using the command:

$ kubectl label node <node_name> worker=gpdb-pl4k
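
You can verify that the label was applied by listing the nodes that carry it:

$ kubectl get nodes -l worker=gpdb-pl4k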

With the above labels present in your cluster, you would edit the Greenplum Operator manifest file to specify the same key-value pairs in the workerSelector attribute. The following shows the relevant excerpt from the manifest file:

    ...
    workerSelector: {
      worker: "gpdb-pl4k"
    }
    ...


This value cannot be dynamically changed for an existing cluster. If you update this value, the Operator re-creates the Greenplum cluster for the new value to take effect.

Examples

See the workspace/samples/my-gp-with-pl-instance.yaml file for an example manifest that configures the PL/Container resource.

See Also

Deploying PL/Container with Greenplum