Greenplum GPText Service Properties (Beta)
This section describes each of the properties that you can define in a Greenplum GPText manifest file.
Synopsis
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumTextService"
metadata:
  name: <string>
spec:
  solr:
    replicas: <integer>
    cpu: <cpu-limit>
    memory: <memory-limit>
    workerSelector: {
      <label>: "<value>"
      [ ... ]
    }
    storageClassName: <storage-class>
    storage: <size>
  zookeeper:
    replicas: <integer>
    cpu: <cpu-limit>
    memory: <memory-limit>
    workerSelector: {
      <label>: "<value>"
      [ ... ]
    }
    storageClassName: <storage-class>
    storage: <size>
Description
You specify Greenplum Text configuration properties to the Greenplum Operator via a YAML-formatted manifest file. A sample manifest file is provided in workspace/samples/my-gp-with-gptext-instance.yaml. The current version of the manifest supports configuring the number of replicas for ZooKeeper and Apache Solr Cloud, and the memory and CPU limits for those pods. See also Deploying GPText with Greenplum for information about deploying a new Greenplum cluster with GPText using a manifest file.
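Assembled from the synopsis above, a complete manifest might look like the following sketch. The instance name and the storage class name are illustrative values only; substitute names that exist in your own cluster:

```yaml
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumTextService"
metadata:
  name: my-gptext              # illustrative instance name
spec:
  solr:
    replicas: 1                # Solr supports exactly one replica
    cpu: "2.0"
    memory: "4.5Gi"
    storageClassName: standard # placeholder; use a StorageClass defined in your cluster
    storage: 100G
  zookeeper:
    replicas: 3                # ZooKeeper requires a minimum of three replicas
    cpu: "1.0"
    memory: "2Gi"
    storageClassName: standard
    storage: 10G
```

The workerSelector attribute is omitted here; as described below, you remove it from the manifest when no node selection is desired.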
Keywords and Values
Cluster Metadata
name: <string>
(Required.) The name of the Greenplum Text service instance. You reference the instance in kubectl commands using this name. This value cannot be dynamically changed for an existing cluster. If you attempt to change this value and re-apply it to an existing cluster, the Operator will create a new deployment.
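For example, assuming an instance named my-gptext (a hypothetical name), you might inspect it with kubectl. The resource type shown here follows the usual Kubernetes convention of lowercasing the kind named in the synopsis; run kubectl api-resources to confirm the exact name in your deployment:

```shell
$ kubectl get greenplumtextservice my-gptext
$ kubectl describe greenplumtextservice my-gptext
```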
Greenplum Text Configuration
replicas: <int>
(Required.) The number of replica pods to create. The replicas property for solr must be set to 1 (only one pod is created for Apache Solr Cloud). For ZooKeeper, specify a minimum of three replicas. You can increase this value and re-apply it to an existing cluster.
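A sketch of the replica settings described above (one Solr pod, at least three ZooKeeper pods):

```yaml
spec:
  solr:
    replicas: 1      # must be 1: only one Apache Solr Cloud pod is created
  zookeeper:
    replicas: 3      # minimum of three ZooKeeper replicas
```

To scale ZooKeeper up later, increase the value and re-apply the manifest with kubectl apply -f <manifest-file>.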
memory: <memory-limit>
(Optional.) The memory limit for each Solr or ZooKeeper pod, specified as a Kubernetes memory resource quantity (for example, 4.5Gi). If omitted or left empty, the pod has no upper bound on the memory resource it can use, or it inherits the default limit if one is specified in its deployed namespace. See Assign Memory Resources to Containers and Pods in the Kubernetes documentation for more information. This value cannot be dynamically changed for an existing cluster. If you change this value and re-apply it to an existing cluster, the existing pods are re-created, causing a service interruption.
Note: If you do not want to specify a memory limit, comment out or remove the memory: keyword from the YAML file, or specify an empty string for its value (memory: ""). If the keyword appears in the YAML file, you must assign a valid string value to it.
cpu: <cpu-limit>
(Optional.) The CPU limit for each Solr or ZooKeeper pod, specified as a Kubernetes CPU resource quantity (for example, cpu: "1.2"). If omitted or left empty, the pod has no upper bound on the CPU resource it can use, or it inherits the default limit if one is specified in its deployed namespace. See Assign CPU Resources to Containers and Pods in the Kubernetes documentation for more information. This value cannot be dynamically changed for an existing cluster. If you change this value and re-apply it to an existing cluster, the existing pods are re-created, causing a service interruption.
Note: If you do not want to specify a cpu limit, comment out or remove the cpu: keyword from the YAML file, or specify an empty string for its value (cpu: ""). If the keyword appears in the YAML file, you must assign a valid string value to it.
workerSelector: <map of key-value pairs>
(Optional.) One or more selector labels to use for choosing the nodes on which the Solr and ZooKeeper pods are scheduled. Define the labels as you would for a pod's nodeSelector attribute. If a workerSelector is not desired, remove the workerSelector attribute from the manifest file. For example, consider the case where you assign the label worker=gpdb-gptext to one or more nodes using the command:
$ kubectl label node <node_name> worker=gpdb-gptext
With the above labels present in your cluster, you would edit the Greenplum Operator manifest file to specify the same key-value pairs in the workerSelector
attribute. This shows the relevant excerpt from the manifest file:
...
workerSelector: {
  worker: "gpdb-gptext"
}
...
This value cannot be dynamically changed for an existing cluster. If you update this value and re-apply it, the Operator re-creates the Solr and ZooKeeper pods so that the new value takes effect.
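Before re-applying the manifest, you can confirm which nodes carry the label by listing nodes with a matching label selector:

```shell
$ kubectl get nodes -l worker=gpdb-gptext
```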
storageClassName: <storage-class>
(Required.) The Storage Class name to use for the persistent volumes created for the Solr and ZooKeeper pods. For best performance, Pivotal recommends using persistent volumes that are backed by a local SSD with the XFS filesystem and readahead cache enabled. Use the mount options rw,nodev,noatime,nobarrier,inode64 to mount the volume. See Creating Local Persistent Volumes for Greenplum for information about manually provisioning local persistent volumes. See Optimizing Persistent Disk and Local SSD Performance in the Google Cloud documentation for information about the performance characteristics of different storage types. You cannot change this value for an existing cluster unless you first delete both the deployed cluster and the PVCs that were created for that cluster. See Deleting Greenplum Persistent Volume Claims.
storage: <size>
(Required.) The storage size of the persistent volume to create for each Solr and ZooKeeper pod, specified as a Kubernetes resource quantity (for example, 100G or 1T). You cannot change this value for an existing cluster unless you first delete both the deployed cluster and the PVCs that were created for that cluster. See Deleting Greenplum Persistent Volume Claims.
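Putting the two storage properties together, a per-service storage configuration might look like the following excerpt. The storage class name standard is a placeholder; substitute a StorageClass that exists in your cluster:

```yaml
spec:
  solr:
    storageClassName: standard  # placeholder; must match a StorageClass in your cluster
    storage: 100G
  zookeeper:
    storageClassName: standard
    storage: 10G
```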
Examples
See the workspace/samples/my-gp-with-gptext-instance.yaml file for an example manifest.