Greenplum GPText Service Properties (Beta)
This section describes each of the properties that you can define in a Greenplum GPText manifest file.
Synopsis
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumTextService"
metadata:
  name: <string>
spec:
  solr:
    replicas: <integer>
    cpu: <cpu-limit>
    memory: <memory-limit>
    workerSelector: {
      <label>: "<value>"
      [ ... ]
    }
    storageClassName: <storage-class>
    storage: <size>
  zookeeper:
    replicas: <integer>
    cpu: <cpu-limit>
    memory: <memory-limit>
    workerSelector: {
      <label>: "<value>"
      [ ... ]
    }
    storageClassName: <storage-class>
    storage: <size>
Description
You specify Greenplum Text configuration properties to the Greenplum Operator via a YAML-formatted manifest file. A sample manifest file is provided in workspace/samples/my-gp-with-gptext-instance.yaml. The current version of the manifest supports configuring the number of replicas for ZooKeeper and Apache Solr Cloud, and the memory and CPU limits for those pods. See also Deploying GPText with Greenplum for information about deploying a new Greenplum cluster with GPText using a manifest file.
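For orientation, the excerpt below sketches what a filled-in spec block might look like. All values shown are illustrative placeholders, not recommendations:

```yaml
spec:
  solr:
    replicas: 1          # must be 1 (only one Solr Cloud pod is created)
    cpu: "2.0"           # example CPU limit
    memory: "4.5Gi"      # example memory limit
  zookeeper:
    replicas: 3          # optional, but must be 3 if specified
    cpu: "0.5"
    memory: "1Gi"
```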
Keywords and Values
Cluster Metadata
name: <string>
The name used to identify the GPText service instance. You can filter the output of kubectl commands using this name. This value cannot be dynamically changed for an existing cluster. If you attempt to change this value and re-apply it to an existing cluster, the Operator creates a new deployment.
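As a sketch, a metadata block using the hypothetical instance name my-gptext would look like this:

```yaml
metadata:
  name: my-gptext   # identifies this GPText instance in kubectl output
```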
Greenplum Text Configuration
replicas: <integer>
The number of pods to create for the service. The replicas property for solr must be set to 1 (only one pod is created for Apache Solr Cloud). For ZooKeeper, replicas is optional, but must be 3 if specified.
memory: <memory-limit>
The memory limit for each Solr or ZooKeeper pod, specified as a Kubernetes memory quantity (for example, 4.5Gi). If omitted, the pod has no upper bound on the memory resource it can use, or it inherits the default limit if one is specified in its deployed namespace. See Assign Memory Resources to Containers and Pods in the Kubernetes documentation for more information. This value cannot be dynamically changed for an existing cluster. If you change this value and re-apply it to an existing cluster, the Operator re-creates the existing pods with a rolling update strategy.
Note: If you do not want to specify a memory limit, comment out or remove the memory: keyword from the YAML file.
cpu: <cpu-limit>
The CPU limit for each Solr or ZooKeeper pod, specified as a Kubernetes CPU quantity (for example, cpu: "1.2"). If omitted, the pod has no upper bound on the CPU resource it can use, or it inherits the default limit if one is specified in its deployed namespace. See Assign CPU Resources to Containers and Pods in the Kubernetes documentation for more information. This value cannot be dynamically changed for an existing cluster. If you change this value and re-apply it to an existing cluster, the Operator re-creates the existing pods with a rolling update strategy.
Note: If you do not want to specify a cpu limit, comment out or remove the cpu: keyword from the YAML file.
workerSelector: <map of key-value pairs>
An optional set of one or more label key-value pairs that the Operator applies as the nodeSelector attribute of the Solr and ZooKeeper pods, restricting those pods to worker nodes that carry the matching labels. For example, consider the case where you assign the label worker=gpdb-gptext to one or more nodes using the command:
$ kubectl label node <node_name> worker=gpdb-gptext
With the above labels present in your cluster, you would edit the Greenplum Operator manifest file to specify the same key-value pairs in the workerSelector attribute. This shows the relevant excerpt from the manifest file:
...
workerSelector: {
worker: "gpdb-gptext"
}
...
This value cannot be dynamically changed for an existing cluster. Applying an update to this field will recreate the Solr and ZooKeeper pods with a rolling-update strategy for the new value to take effect.
storageClassName: <storage-class>
The name of the Kubernetes Storage Class used to provision Persistent Volumes for the Solr and ZooKeeper pods. You cannot change this value for an existing cluster unless you first delete both the deployed cluster and the PVCs that were created for that cluster. See Deleting Greenplum Persistent Volume Claims.
storage: <size>
The storage size of the Persistent Volume Claim for each pod, specified as a Kubernetes quantity (for example, 100G or 1T). You cannot change this value for an existing GPText instance unless you first delete both the deployed GPText instance and the PVCs that were created for that instance. See Deleting Greenplum Persistent Volume Claims.
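As a sketch, the storage-related keywords for the Solr pods might be filled in as follows; the Storage Class name standard is a hypothetical example that must match a Storage Class available in your cluster:

```yaml
  solr:
    storageClassName: standard   # name of an existing Storage Class
    storage: 100G                # size of each Persistent Volume Claim
```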
Examples
See the workspace/samples/my-gp-with-gptext-instance.yaml file for an example manifest.