Production Configuration
This guide provides instructions for configuring the Apache Polaris Helm chart for a production environment. For a full list of chart values, see the Chart Reference page.
Prerequisites
- A Kubernetes cluster (1.33+ recommended)
- Helm 3.x or 4.x installed
- kubectl configured to access your cluster
- A PostgreSQL or MongoDB database
Adding the Helm Repository
Add the official Apache Polaris Helm repository:
helm repo add polaris https://downloads.apache.org/polaris/helm-chart
helm repo update
Installation
Create a values.yaml file with your production configuration. See the Chart Values Reference for all available configuration options.
Create the target namespace and install the chart:
kubectl create namespace polaris
helm install polaris polaris/polaris --namespace polaris --values your-production-values.yaml
📝 Note
For Apache Polaris releases up to 1.3.0-incubating, the --devel flag is required for Helm invocations. Helm treats the -incubating suffix as a pre-release under SemVer rules and, by default, skips charts that do not follow a stable versioning scheme.
Verify the installation:
helm test polaris --namespace polaris
Production Configuration
The default Helm chart values are suitable for development and testing, but they are not recommended for production. The following sections describe the key areas to configure for a production deployment.
Authentication
Polaris supports internal authentication (with RSA key pairs or symmetric keys) and external authentication via OIDC with identity providers like Keycloak, Okta, Azure AD, and others.
By default, the Polaris Helm chart uses internal authentication with auto-generated keys. In a multi-replica production environment, all Polaris pods must share the same token signing keys to avoid token validation failures.
See the Authentication page for detailed configuration instructions.
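As a hypothetical sketch of the shared-key setup, the signing key pair could be generated once and stored in a Kubernetes Secret that all replicas reference (the secret name polaris-token-broker and the file names are placeholders; the exact chart values that consume the secret are described on the Authentication page):

```shell
# Generate an RSA key pair for token signing (file and secret names are examples)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl rsa -in private.pem -pubout -out public.pem

# Store both keys in a Secret so every Polaris replica signs and
# validates tokens with the same key pair
kubectl create secret generic polaris-token-broker -n polaris \
  --from-file=private.pem --from-file=public.pem
```

Because the keys live in a single Secret rather than being auto-generated per pod, tokens issued by one replica remain valid on all others.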
Persistence
By default, the Polaris Helm chart uses the in-memory metastore, which is not suitable for production. A persistent metastore must be configured to ensure data is not lost when pods restart.
Polaris supports PostgreSQL (JDBC) and MongoDB (NoSQL, beta) as production-ready metastores. See the Persistence page for detailed configuration instructions.
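As a rough sketch, the PostgreSQL connection can be supplied to the server through the same Quarkus properties used by the admin tool in the bootstrapping example below, mapped to environment variables. The extraEnv value name and the polaris-persistence secret layout are assumptions here; check the Persistence page and Chart Reference for the supported keys:

```yaml
# Hypothetical values.yaml fragment: wire the JDBC metastore via env vars
extraEnv:
  - name: POLARIS_PERSISTENCE_TYPE
    value: "relational-jdbc"
  - name: QUARKUS_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: polaris-persistence
        key: username
  - name: QUARKUS_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: polaris-persistence
        key: password
  - name: QUARKUS_DATASOURCE_JDBC_URL
    valueFrom:
      secretKeyRef:
        name: polaris-persistence
        key: jdbcUrl
```

Sourcing the credentials from a Secret keeps them out of values.yaml and lets the same Secret feed the admin tool during bootstrapping.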
Networking
For configuring external access to Polaris using the Gateway API or Ingress, see the Services & Networking guide.
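For orientation, a typical Helm-style Ingress configuration might look like the sketch below; the hostname and TLS secret are placeholders, and the exact value names should be confirmed against the Services & Networking guide:

```yaml
# Hypothetical values.yaml fragment exposing Polaris via an Ingress
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: polaris.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: polaris-tls
      hosts:
        - polaris.example.com
```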
Resource Management
For a production environment, it is crucial to define resource requests and limits for the Polaris pods. Resource requests ensure that pods are allocated enough resources to run, while limits prevent them from consuming too many resources on the node.
Define resource requests and limits for the Polaris pods:
resources:
  requests:
    memory: "8Gi"
    cpu: "4"
  limits:
    memory: "8Gi"
    cpu: "4"
Adjust these values based on expected workload and available cluster resources.
Scaling
For high availability, multiple replicas of the Polaris server can be run. This requires a persistent metastore to be configured as described above.
Static Replicas
Set replicaCount to the desired number of pods:
replicaCount: 3
Autoscaling
Horizontal autoscaling can be enabled to define the minimum and maximum number of replicas, and CPU or memory utilization targets:
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
Pod Topology Spreading
For better fault tolerance, topologySpreadConstraints can be used to distribute pods across different nodes, racks, or availability zones. This helps prevent a single infrastructure failure from taking down all Polaris replicas.
Here is an example that spreads pods across different zones and keeps the number of pods in each zone from differing by more than one:
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: "DoNotSchedule"
Pod Priority
In a production environment, it is advisable to set a priorityClassName for the Polaris pods. This ensures that the Kubernetes scheduler gives them priority over less critical workloads, and helps prevent them from being evicted from a node that is running out of resources.
First, a PriorityClass must be created in the cluster. For example:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: polaris-high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for Polaris service pods only."
Then, the priorityClassName can be set in the values.yaml file:
priorityClassName: "polaris-high-priority"
Bootstrapping Realms
When installing Polaris for the first time, it is necessary to bootstrap each realm using the Polaris admin tool.
For more information on bootstrapping realms, see the Admin Tool guide.
Example for the PostgreSQL metastore:
kubectl run polaris-bootstrap \
-n polaris \
--image=apache/polaris-admin-tool:latest \
--restart=Never \
--rm -it \
--env="polaris.persistence.type=relational-jdbc" \
--env="quarkus.datasource.username=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.username}' | base64 --decode)" \
--env="quarkus.datasource.password=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.password}' | base64 --decode)" \
--env="quarkus.datasource.jdbc.url=$(kubectl get secret polaris-persistence -n polaris -o jsonpath='{.data.jdbcUrl}' | base64 --decode)" \
-- \
bootstrap -r polaris-realm1 -c polaris-realm1,root,$ROOT_PASSWORD
Example for the NoSQL (MongoDB) metastore:
kubectl run polaris-bootstrap \
-n polaris \
--image=apache/polaris-admin-tool:latest \
--restart=Never \
--rm -it \
--env="polaris.persistence.type=nosql" \
--env="polaris.persistence.nosql.backend=MongoDb" \
--env="quarkus.mongodb.database=polaris" \
--env="quarkus.mongodb.connection-string=$(kubectl get secret polaris-nosql-persistence -n polaris -o jsonpath='{.data.connectionString}' | base64 --decode)" \
-- \
bootstrap -r polaris-realm1 -c polaris-realm1,root,$ROOT_PASSWORD
Both commands above bootstrap a realm named polaris-realm1, with the root password taken from the $ROOT_PASSWORD environment variable.
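Since $ROOT_PASSWORD must be set in the shell before running either command, one way to generate a strong value is:

```shell
# Generate a 32-character random password and export it so the
# bootstrap command can substitute it
export ROOT_PASSWORD="$(openssl rand -base64 24)"
```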
⚠️ Warning
Replace $ROOT_PASSWORD with a strong, unique password for the root credentials.
Uninstalling
helm uninstall --namespace polaris polaris
kubectl delete namespace polaris --wait=true --ignore-not-found