Greenplum PXF Service Properties (Beta)
This section describes each of the properties that you can define in a Greenplum PXF manifest file.
Synopsis
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumPXFService"
metadata:
  name: <string>
  namespace: <string>
spec:
  replicas: <integer>
  cpu: <cpu-limit>
  memory: <memory-limit>
  workerSelector: {
    <label>: "<value>"
    [ ... ]
  }
  pxfConf:
    s3Source:
      secret: <Secrets name string>
      endpoint: <valid URL string>
      bucket: <string>
      folder: <string> [Optional]
Description
You specify Greenplum PXF configuration properties to the Greenplum Operator via a YAML-formatted manifest file. A sample manifest file is provided in workspace/samples/my-gp-with-pxf-instance.yaml. The current version of the manifest supports configuring the cluster name, the number of PXF replicas, the memory and cpu limits, and a remote PXF_CONF configuration source. See also Deploying PXF with Greenplum for information about deploying a new Greenplum cluster with PXF using a manifest file.
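As a concrete illustration, a minimal manifest might look like the following. The metadata names and resource values here are illustrative only; they are not taken from the shipped sample file:

```yaml
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumPXFService"
metadata:
  # Illustrative names; adjust for your deployment
  name: my-greenplum-pxf
  namespace: default
spec:
  replicas: 2
  cpu: "0.5"
  memory: 1Gi
```

You would deploy a manifest like this with kubectl apply -f <manifest-file>.yaml.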
Keywords and Values
Cluster Metadata
name: <string>
(Required.) Sets the name of the Greenplum PXF service instance. You can filter the output of kubectl commands using this name. This value cannot be dynamically changed for an existing cluster. If you attempt to change this value and re-apply it to an existing cluster, the Operator will create a new deployment.
namespace: <string>
(Optional.) Identifies the namespace in which the Greenplum PXF service is deployed. If unspecified, the current namespace of the kubectl context is used. You can set the current namespace with:
$ kubectl config set-context $(kubectl config current-context) --namespace=<NAMESPACE>
Greenplum PXF Configuration
replicas: <int>
(Required.) The number of PXF replica pods to create. You can increase this value and re-apply it to an existing cluster as necessary.
memory: <memory-limit>
(Optional.) The memory limit for each PXF pod, specified as a Kubernetes quantity (for example, 4.5Gi). If omitted, the pod has no upper bound on the memory resource it can use, or inherits the default limit if one is specified in its deployed namespace. See Assign Memory Resources to Containers and Pods in the Kubernetes documentation for more information. This value cannot be dynamically changed for an existing cluster. If you attempt to change this value and re-apply it to an existing cluster, the Operator immediately re-creates the existing pods, causing service interruptions.
Note: If you do not want to specify a memory limit, comment out or remove the memory: keyword from the YAML file.
cpu: <cpu-limit>
(Optional.) The CPU limit for each PXF pod, specified as a Kubernetes CPU unit (for example, cpu: "1.2"). If omitted, the pod has no upper bound on the CPU resource it can use, or inherits the default limit if one is specified in its deployed namespace. See Assign CPU Resources to Containers and Pods in the Kubernetes documentation for more information. This value cannot be dynamically changed for an existing cluster. If you attempt to change this value and re-apply it to an existing cluster, the Operator re-creates the existing pods, causing service interruptions.
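Memory and CPU limits use standard Kubernetes quantity notation. The following excerpt is a sketch with example values only:

```yaml
spec:
  replicas: 2
  memory: 4.5Gi   # binary suffixes Mi/Gi; e.g. 800Mi for smaller pods
  cpu: "1.2"      # fractional CPUs; "500m" would mean half a CPU
```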
Note: If you do not want to specify a cpu limit, comment out or remove the cpu: keyword from the YAML file.
workerSelector: <map of key-value pairs>
(Optional.) One or more selector labels to use for choosing the nodes on which PXF pods are scheduled. The specified labels are added to each PXF pod's nodeSelector attribute. If a workerSelector is not desired, remove the workerSelector attribute from the manifest file. For example, consider the case where you assign the label worker=gpdb-pxf to one or more nodes using the command:
$ kubectl label node <node_name> worker=gpdb-pxf
With the above label present in your cluster, you would edit the Greenplum Operator manifest file to specify the same key-value pair in the workerSelector attribute. The following shows the relevant excerpt from the manifest file:
...
workerSelector: {
worker: "gpdb-pxf"
}
...
This value cannot be dynamically changed for an existing cluster. If you update this value, it recreates the Greenplum PXF cluster for the new value to take effect.
pxfConf: <s3Source>
(Optional.) Identifies an S3 location (endpoint, bucket, and folder) and a Kubernetes Secret to use for downloading an existing PXF configuration into each PXF pod's PXF_CONF directory (/etc/pxf). You must ensure that the bucket-folder path contains the complete directory structure and customized files for one or more PXF server configurations. See Deploying PXF with the Default Configuration for information about deploying Greenplum with a default, initialized PXF configuration directory that you can customize for accessing your data sources. These values cannot be dynamically changed for an existing cluster. If you update any value, you must delete the existing cluster and recreate the cluster for the new value to take effect.
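For example, a populated pxfConf block might look like the following. The Secret, bucket, and folder names here are hypothetical:

```yaml
pxfConf:
  s3Source:
    secret: my-greenplum-pxf-configs   # hypothetical Secret name
    endpoint: s3.amazonaws.com
    bucket: my-pxf-configs             # hypothetical bucket
    folder: pxf-conf                   # optional subfolder holding the PXF_CONF tree
```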
s3Source
secret: <string>
(Required.) The name of the Kubernetes Secret that holds the S3 access and secret keys used to reach the S3 configuration source. You can create the Secret with a command similar to:
$ kubectl create secret generic my-greenplum-pxf-configs --from-literal='access_key_id=<accessKey>' --from-literal='secret_access_key=<secretKey>'
The above command creates a Secret named my-greenplum-pxf-configs using the S3 access and secret keys that you provide. Replace <accessKey> and <secretKey> with the actual S3 access and secret key values for your system. If necessary, consult your S3 implementation documentation to generate a secret access key.
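Equivalently, you can define the Secret declaratively. This sketch assumes the same my-greenplum-pxf-configs name and uses stringData so the values need not be base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-greenplum-pxf-configs
type: Opaque
stringData:
  access_key_id: <accessKey>        # replace with your S3 access key
  secret_access_key: <secretKey>    # replace with your S3 secret key
```

Apply the Secret with kubectl apply -f before creating the PXF service that references it.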
endpoint: <string>
(Required.) The URL of the S3 endpoint from which to retrieve the PXF configuration.
bucket: <string>
(Required.) The name of the S3 bucket that contains the PXF configuration files.
folder: <string>
(Optional.) The folder within the S3 bucket that contains the PXF configuration files.
Examples
See the workspace/samples/my-gp-with-pxf-instance.yaml file for an example manifest that configures the PXF resource.