Bug description
Pod error log below. 10.168.8.60 is reachable (from other hosts), and the image tag is 6.0.0rc1-dev:

```
2025/09/03 07:56:26 Waiting for: tcp://10.168.8.60:6432
2025/09/03 07:56:26 Problem with dial: dial tcp 10.168.8.60:6432: connect: no route to host. Sleeping 1s
2025/09/03 07:56:27 Problem with dial: dial tcp 10.168.8.60:6432: connect: no route to host. Sleeping 1s
[... same line repeats once per second through 07:56:55 ...]
```
Screenshots/recordings
No response
Superset version
master / latest-dev
Python version
3.9
Node version
16
Browser
Chrome
Additional context
No response
Checklist
- [ ] I have searched Superset docs and Slack and didn't find a solution to my problem.
- [ ] I have searched the GitHub issue tracker and didn't find a similar bug report.
- [ ] I have checked Superset's logs for errors and if I found a relevant Python stacktrace, I included it here as text in the "additional context" section.
Comment From: dosubot[bot]
The "no route to host" error is not caused by Superset itself—it's a network connectivity issue between your Superset pod and the external database. Superset's connection logic surfaces these errors from the OS/driver level and does not implement special handling for them; it simply reports the failure as a generic connection error (source).
Common causes in Kubernetes/containerized environments include: network policies blocking traffic, firewall rules on the database host, missing allowed IP ranges in the database's firewall, or routing issues. Even if the database is reachable from somewhere else, the pod itself may not have network access.
Here’s a quick troubleshooting checklist:
1. Use `kubectl exec` to open a shell in the Superset pod and try `nc -vz 10.168.8.60 6432` or `telnet 10.168.8.60 6432` to test connectivity.
2. Check your Kubernetes NetworkPolicies and ensure the pod's namespace can reach the database IP/port.
3. Verify the database firewall allows connections from your pod's IP range.
4. Confirm your Helm chart is configured correctly: disable internal PostgreSQL (`postgresql.enabled: false`), set external DB parameters under `supersetNode.connections`, and use Kubernetes Secrets for passwords (source).
5. If using SSL, add `?sslmode=require` to your connection string.
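If the image ships no `nc` or `telnet` for step 1, the same TCP check can be done with a few lines of Python (which Superset images do include). This is a generic sketch, not Superset code; `can_connect` is a name chosen here, and the host/port are the ones from the log:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and performs the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:  # EHOSTUNREACH ("no route to host"), refused, timeout
        print(f"connect failed: {exc}")
        return False
```

You could run it in the pod with something like `kubectl exec <pod> -- python -c "import socket; ..."`, checking `can_connect("10.168.8.60", 6432)`.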
No recent changes in Superset 6.0.0rc1-dev affect network connectivity—these errors are always environmental, not application-level (source).
If you need more targeted help, sharing your Helm values.yaml and details about your Kubernetes networking setup would help pinpoint the issue.
Comment From: worldofchu
From other pods, 10.168.8.60 is reachable, but the Superset container has no shell to exec into. See my values.yaml.
Comment From: worldofchu
This is my values.yaml setup; 10.168.8.60 connected fine previously.
Comment From: dosubot[bot]
The most common cause for your Superset pod's "no route to host" error is a network policy, firewall rule, or routing issue that affects only this pod or its namespace, even if other pods can reach the database. Double-check that `postgresql.enabled` is set to `false` in your values.yaml, and that your external DB connection parameters (host, port, user, password, db name) are set under `supersetNode.connections` and passed via Kubernetes Secrets as required by the Helm chart (source).
Compare the Superset pod's namespace, subnet, and any applied NetworkPolicies or security groups with those of the pods that can connect. If you can't exec into the pod, you can use Kubernetes ephemeral containers or a temporary debug pod in the same namespace to run `nc -vz 10.168.8.60 6432` for connectivity testing.

Review your Helm deployment output and pod events for clues about resource limits or failed init containers. If using SSL, make sure to add `?sslmode=require` to your connection string. All DB connection details must align between values.yaml, secrets, and environment variables for connectivity to work (source).
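The ephemeral-container approach could look like this (pod and namespace names are placeholders; `kubectl debug` with `--target` requires a cluster with ephemeral containers enabled):

```shell
# Attach a throwaway busybox container to the running Superset pod
# (it shares the pod's network namespace, so it tests the same route):
kubectl debug -n <namespace> <superset-pod> -it \
  --image=busybox --target=superset -- sh -c 'nc -vz 10.168.8.60 6432'

# Or run a standalone debug pod in the same namespace:
kubectl run netcheck -n <namespace> --rm -it --restart=Never \
  --image=busybox -- sh -c 'nc -vz 10.168.8.60 6432'
```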
If you share your values.yaml and any relevant Kubernetes manifests, it will be easier to pinpoint the issue.
Comment From: worldofchu
Here is my values.yaml!!
Comment From: dosubot[bot]
Based on Superset's Helm chart requirements, double-check that your values.yaml has `postgresql.enabled: false` to disable the internal database, and that all external DB connection parameters (`db_host`, `db_port`, `db_user`, `db_pass`, `db_name`) are set under `supersetNode.connections` and match your external PostgreSQL instance. For production, database passwords should be referenced via Kubernetes Secrets, not hardcoded in values.yaml. If your database requires SSL, add `?sslmode=require` to your connection string or as an extra argument. The init container will wait for the database port to be reachable before starting Superset, so any network policy or firewall blocking traffic from the Superset pod to 10.168.8.60:6432 will cause the "no route to host" error. Review your namespace, subnet, and NetworkPolicies to ensure connectivity, and confirm that all DB connection details are aligned between values.yaml, secrets, and environment variables for a successful deployment (source).
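Put together, the relevant values.yaml fragment would look roughly like this. This is a sketch following the chart's documented keys, not a drop-in file; the Secret name `superset-db-credentials` is hypothetical:

```yaml
postgresql:
  enabled: false          # do not deploy the bundled PostgreSQL

supersetNode:
  connections:
    db_host: "10.168.8.60"
    db_port: "6432"
    db_user: superset
    db_name: superset
    # Prefer pulling the password from an existing Secret instead of
    # hardcoding db_pass here, e.g. via extraEnvRaw below.

extraEnvRaw:
  - name: DB_PASS
    valueFrom:
      secretKeyRef:
        name: superset-db-credentials   # hypothetical pre-created Secret
        key: password
```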
Comment From: worldofchu
I confirm that other pods can connect to 10.168.8.60 internally. So how should I check this, based on the values.yaml provided? The db_host parameters are all correct. (I previously deployed against the same external PG database with the bitnami/superset chart, but that image is version 5.0.0 and does not support a base path.) I am using the apache/superset Helm chart (https://github.com/apache/superset), chart version 0.15.0.
Comment From: dosubot[bot]
To check your values.yaml for external PostgreSQL connectivity issues, make sure these key points are covered:

- `postgresql.enabled` must be set to `false` to disable the internal database and avoid port conflicts or misrouting (source).
- All external DB connection parameters (`db_host`, `db_port`, `db_user`, `db_pass`, `db_name`) should be set under `supersetNode.connections`. These are templated into a Kubernetes Secret and injected as environment variables for the Superset pod (source).
- The init container waits for `$DB_HOST:$DB_PORT` to be reachable before starting Superset. If network policies or firewalls block traffic from the Superset pod, you'll see "no route to host" errors even if other pods can connect (source).
- If your database requires SSL, append `?sslmode=require` to your connection string or `db_pass` value (source).
- For Bitnami Superset, bootstrapScript is not supported; use a custom Docker image if you need extra Python packages like `psycopg2-binary` (source).

If all parameters are correct and other pods can connect, focus on differences in namespace, subnet, NetworkPolicies, or serviceAccount between Superset and the working pods. You can use an ephemeral debug pod in the same namespace to test connectivity with `nc -vz 10.168.8.60 6432`.
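For reference, the chart assembles these `supersetNode.connections` values into a standard SQLAlchemy URI; with SSL required it would take roughly this shape (illustrative only, using the host/port from this thread):

```
postgresql+psycopg2://superset:<db_pass>@10.168.8.60:6432/superset?sslmode=require
```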
If you want a more detailed review, please share the actual contents of your values.yaml.
Comment From: worldofchu
Here is my values.yaml:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Default values for superset.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# A README is automatically generated from this file to document it,
# using helm-docs (see https://github.com/norwoodj/helm-docs)
# To update it, install helm-docs and run helm-docs from the root of this chart
# -- Provide a name to override the name of the chart
nameOverride: ~
# -- Provide a name to override the full names of resources
fullnameOverride: ~
# -- Labels to be added to all resources
extraLabels: {}
# -- User ID directive. This user must have enough permissions to run the bootstrap script
# Running containers as root is not recommended in production. Change this to another UID - e.g. 1000 to be more secure
runAsUser: 0
# -- Specify whether helm should create the secret described in the secret-env.yaml template
secretEnv:
  # -- Change to false in order to support an externally created secret (Bitnami "Sealed Secrets" for Kubernetes or External Secrets Operator)
  # note: when externally creating the secret, the chart still expects to pull values from a secret with the name of the release, which defaults to release-name-superset-env
  # (full logic located in the _helpers.tpl file: define "superset.fullname")
  create: true
# -- Specify service account name to be used
serviceAccountName: ~
serviceAccount:
  # -- Create custom service account for Superset. If create: true and serviceAccountName is not provided, superset.fullname will be used.
  create: false
  annotations: {}
# -- Install additional packages and do any other bootstrap configuration in this script
# For production clusters it's recommended to build own image with this step done in CI
# @default -- see values.yaml
bootstrapScript: |
  #!/bin/bash
  if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
# -- The name of the secret which we will use to generate a superset_config.py file
# Note: this secret must have the key superset_config.py in it and can include other files as well
configFromSecret: '{{ template "superset.fullname" . }}-config'
# -- The name of the secret which we will use to populate env vars in deployed pods
# This can be useful for secret keys, etc.
envFromSecret: '{{ template "superset.fullname" . }}-env'
# -- This can be a list of templated strings
envFromSecrets: []
# -- Extra environment variables that will be passed into pods
extraEnv:
  SUPERSET_APP_ROOT: "/superset"
  # Different gunicorn settings, refer to the gunicorn documentation
  # https://docs.gunicorn.org/en/stable/settings.html#
  # These variables are used as Flags at the gunicorn startup
  # https://github.com/apache/superset/blob/master/docker/run-server.sh#L22
  # Extend timeout to allow long running queries.
  # GUNICORN_TIMEOUT: 300
  # Increase the gunicorn worker amount, can improve performance drastically
  # See: https://docs.gunicorn.org/en/stable/design.html#how-many-workers
  # SERVER_WORKER_AMOUNT: 4
  # WORKER_MAX_REQUESTS: 0
  # WORKER_MAX_REQUESTS_JITTER: 0
  # SERVER_THREADS_AMOUNT: 20
  # GUNICORN_KEEPALIVE: 2
  # SERVER_LIMIT_REQUEST_LINE: 0
  # SERVER_LIMIT_REQUEST_FIELD_SIZE: 0
  # OAUTH_HOME_DOMAIN: ..
  # # If a whitelist is not set, any address that can use your OAuth2 endpoint will be able to login.
  # # this includes any random Gmail address if your OAuth2 Web App is set to External.
  # OAUTH_WHITELIST_REGEX: ...
# -- Extra environment variables in RAW format that will be passed into pods
extraEnvRaw: []
  # Load DB password from other secret (e.g. for zalando operator)
  # - name: DB_PASS
  #   valueFrom:
  #     secretKeyRef:
  #       name: superset.superset-postgres.credentials.postgresql.acid.zalan.do
  #       key: password
# -- Extra environment variables to pass as secrets
extraSecretEnv:
  SUPERSET_SECRET_KEY: 'rwH264rfkz7UqJQw6FtDyBoRrjMAuUvULclUn3j9TAzRjQGa7fXVtM2x'
  # MAPBOX_API_KEY: ...
  # # Google API Keys: https://console.cloud.google.com/apis/credentials
  # GOOGLE_KEY: ...
  # GOOGLE_SECRET: ...
  # # Generate your own secret key for encryption. Use openssl rand -base64 42 to generate a good key
  # SUPERSET_SECRET_KEY: 'CHANGE_ME_TO_A_COMPLEX_RANDOM_SECRET'
# -- Extra files to be mounted as ConfigMap on the path specified in extraConfigMountPath
extraConfigs: {}
  # import_datasources.yaml: |
  #   databases:
  #   - allow_file_upload: true
  #     allow_ctas: true
  #     allow_cvas: true
  #     database_name: example-db
  #     extra: "{\r\n  \"metadata_params\": {},\r\n  \"engine_params\": {},\r\n  \"metadata_cache_timeout\": {},\r\n  \"schemas_allowed_for_file_upload\": []\r\n}"
  #     sqlalchemy_uri: example://example-db.local
  #     tables: []
# -- Extra files to be mounted as Secrets on the path specified in configMountPath
extraSecrets: {}
extraVolumes: []
  # - name: customConfig
  #   configMap:
  #     name: '{{ template "superset.fullname" . }}-custom-config'
  # - name: additionalSecret
  #   secret:
  #     secretName: my-secret
  #     defaultMode: 0600
extraVolumeMounts: []
  # - name: customConfig
  #   mountPath: /mnt/config
  #   readOnly: true
  # - name: additionalSecret
  #   mountPath: /mnt/secret
# -- A dictionary of overrides to append at the end of superset_config.py - the name does not matter
# WARNING: the order is not guaranteed
# Files can be passed as helm --set-file configOverrides.my-override=my-file.py
configOverrides: {}
# extend_timeout: |
# # Extend timeout to allow long running queries.
# SUPERSET_WEBSERVER_TIMEOUT = ...
# enable_oauth: |
# from flask_appbuilder.security.manager import (AUTH_DB, AUTH_OAUTH)
# AUTH_TYPE = AUTH_OAUTH
# OAUTH_PROVIDERS = [
# {
# "name": "google",
# "whitelist": [ os.getenv("OAUTH_WHITELIST_REGEX", "") ],
# "icon": "fa-google",
# "token_key": "access_token",
# "remote_app": {
# "client_id": os.environ.get("GOOGLE_KEY"),
# "client_secret": os.environ.get("GOOGLE_SECRET"),
# "api_base_url": "https://www.googleapis.com/oauth2/v2/",
# "client_kwargs": {"scope": "email profile"},
# "request_token_url": None,
# "access_token_url": "https://accounts.google.com/o/oauth2/token",
# "authorize_url": "https://accounts.google.com/o/oauth2/auth",
# "authorize_params": {"hd": os.getenv("OAUTH_HOME_DOMAIN", "")}
# }
# }
# ]
# # Map Authlib roles to superset roles
# AUTH_ROLE_ADMIN = 'Admin'
# AUTH_ROLE_PUBLIC = 'Public'
# # Will allow user self registration, allowing to create Flask users from Authorized User
# AUTH_USER_REGISTRATION = True
# # The default user self registration role
# AUTH_USER_REGISTRATION_ROLE = "Admin"
# secret: |
# # Generate your own secret key for encryption. Use openssl rand -base64 42 to generate a good key
# SECRET_KEY = 'CHANGE_ME_TO_A_COMPLEX_RANDOM_SECRET'
# -- Same as above but the values are files
configOverridesFiles: {}
  # extend_timeout: extend_timeout.py
  # enable_oauth: enable_oauth.py
configMountPath: "/app/pythonpath"
extraConfigMountPath: "/app/configs"
image:
  repository: nexus.pb.com:8082/superset/apache/superset
  tag: 6.0.0rc1-dev
  pullPolicy: IfNotPresent
imagePullSecrets: []
initImage:
  repository: nexus.pb.com:8082/superset/apache/superset
  tag: 6.0.0rc1-dockerize
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 8088
  annotations: {}
    # cloud.google.com/load-balancer-type: "Internal"
  loadBalancerIP: ~
  nodePort:
    # -- (int)
    http: nil
ingress:
  enabled: false
  ingressClassName: ~
  annotations: {}
    # kubernetes.io/tls-acme: "true"
    ## Extend timeout to allow long running queries.
    # nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    # nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    # nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
  path: /
  pathType: ImplementationSpecific
  hosts:
    - chart-example.local
  tls: []
  extraHostsRaw: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # The limits below will apply to all Superset components. To set individual resource limitations refer to the pod specific values below.
  # The pod specific values will overwrite anything that is set here.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
# -- Custom hostAliases for all superset pods
# https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
hostAliases: []
# - hostnames:
#   - nodns.my.lan
#   ip: 18.27.36.45
# Superset node configuration
supersetNode:
  replicas:
    enabled: true
    replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  # -- Sets the pod disruption budget for supersetNode pods
  podDisruptionBudget:
    # -- Whether the pod disruption budget should be created
    enabled: false
    # -- If set, maxUnavailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
    minAvailable: 1
    # -- If set, minAvailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
    maxUnavailable: 1
# -- Startup command
# @default -- See values.yaml
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; /usr/bin/run-server.sh"
connections:
# -- Change in case of bringing your own redis and then also set redis.enabled:false
redis_host: "{{ .Release.Name }}-redis-headless"
redis_port: "6379"
redis_user: ""
# redis_password: superset
redis_cache_db: "1"
redis_celery_db: "0"
# Or SSL port is usually 6380
# Update following for using Redis with SSL
redis_ssl:
enabled: false
ssl_cert_reqs: CERT_NONE
# You need to change the below configuration in case you bring your own PostgreSQL instance, and also set postgresql.enabled: false
db_host: "10.168.8.60"
db_port: "6432"
db_user: superset
db_pass: superset
db_name: superset
env: {}
# -- If true, forces deployment to reload on each upgrade
forceReload: false
# -- Init containers
# @default -- a container waiting for postgres
initContainers:
- name: wait-for-postgres
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://10.168.8.60:6432" -timeout 120s
  # -- Launch additional containers into supersetNode pod
  extraContainers: []
  # -- Annotations to be added to supersetNode deployment
  deploymentAnnotations: {}
  # -- Labels to be added to supersetNode deployment
  deploymentLabels: {}
  # -- Affinity to be added to supersetNode deployment
  affinity: {}
  # -- TopologySpreadConstrains to be added to supersetNode deployments
  topologySpreadConstraints: []
  # -- Annotations to be added to supersetNode pods
  podAnnotations: {}
  # -- Labels to be added to supersetNode pods
  podLabels: {}
  startupProbe:
    httpGet:
      path: /health
      port: http
    initialDelaySeconds: 15
    timeoutSeconds: 1
    failureThreshold: 60
    periodSeconds: 5
    successThreshold: 1
  livenessProbe:
    httpGet:
      path: /health
      port: http
    initialDelaySeconds: 15
    timeoutSeconds: 1
    failureThreshold: 3
    periodSeconds: 15
    successThreshold: 1
  readinessProbe:
    httpGet:
      path: /health
      port: http
    initialDelaySeconds: 15
    timeoutSeconds: 1
    failureThreshold: 3
    periodSeconds: 15
    successThreshold: 1
  # -- Resource settings for the supersetNode pods - these settings overwrite any existing values from the global resources object defined above.
  resources: {}
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
    #   memory: 128Mi
  podSecurityContext: {}
  containerSecurityContext: {}
  strategy: {}
    # type: RollingUpdate
    # rollingUpdate:
    #   maxSurge: 25%
    #   maxUnavailable: 25%
# Superset Celery worker configuration
supersetWorker:
replicas:
enabled: true
replicaCount: 1
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
# -- Sets the pod disruption budget for supersetWorker pods
podDisruptionBudget:
# -- Whether the pod disruption budget should be created
enabled: false
# -- If set, maxUnavailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
minAvailable: 1
# -- If set, minAvailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
maxUnavailable: 1
  # -- Worker startup command
  # @default -- a celery worker command
  command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; celery --app=superset.tasks.celery_app:app worker"
# -- If true, forces deployment to reload on each upgrade
forceReload: false
# -- Init container
# @default -- a container waiting for postgres and redis
initContainers:
- name: wait-for-postgres-redis
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://10.168.8.60:6432" -wait "tcp://{{ .Release.Name }}-redis-master:6379" -timeout 120s
# -- Launch additional containers into supersetWorker pod
extraContainers: []
# -- Annotations to be added to supersetWorker deployment
deploymentAnnotations: {}
# -- Labels to be added to supersetWorker deployment
deploymentLabels: {}
# -- Affinity to be added to supersetWorker deployment
affinity: {}
# -- TopologySpreadConstrains to be added to supersetWorker deployments
topologySpreadConstraints: []
# -- Annotations to be added to supersetWorker pods
podAnnotations: {}
# -- Labels to be added to supersetWorker pods
podLabels: {}
  # -- Resource settings for the supersetWorker pods - these settings overwrite any existing values from the global resources object defined above.
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
strategy: {}
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 25%
# maxUnavailable: 25%
livenessProbe:
exec:
# -- Liveness probe command
    # @default -- a celery inspect ping command
    command:
- sh
- -c
- celery -A superset.tasks.celery_app:app inspect ping -d celery@$HOSTNAME
initialDelaySeconds: 120
timeoutSeconds: 60
failureThreshold: 3
periodSeconds: 60
successThreshold: 1
# -- No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic)
startupProbe: {}
# -- No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic)
readinessProbe: {}
# -- Set priorityClassName for supersetWorker pods
priorityClassName: ~
# Superset beat configuration (to trigger scheduled jobs like reports)
supersetCeleryBeat:
# -- This is only required if you intend to use alerts and reports
enabled: false
# -- Sets the pod disruption budget for supersetCeleryBeat pods
podDisruptionBudget:
# -- Whether the pod disruption budget should be created
enabled: false
# -- If set, maxUnavailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
minAvailable: 1
# -- If set, minAvailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
maxUnavailable: 1
# -- Command
  # @default -- a celery beat command
  command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; celery --app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid --schedule /tmp/celerybeat-schedule"
# -- If true, forces deployment to reload on each upgrade
forceReload: false
# -- List of init containers
# @default -- a container waiting for postgres
initContainers:
- name: wait-for-postgres-redis
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://10.168.8.60:6432" -wait "tcp://{{ .Release.Name }}-redis-master:6379" -timeout 120s
# -- Launch additional containers into supersetCeleryBeat pods
extraContainers: []
# -- Annotations to be added to supersetCeleryBeat deployment
deploymentAnnotations: {}
# -- Affinity to be added to supersetCeleryBeat deployment
affinity: {}
# -- TopologySpreadConstrains to be added to supersetCeleryBeat deployments
topologySpreadConstraints: []
# -- Annotations to be added to supersetCeleryBeat pods
podAnnotations: {}
# -- Labels to be added to supersetCeleryBeat pods
podLabels: {}
  # -- Resource settings for the CeleryBeat pods - these settings overwrite any existing values from the global resources object defined above.
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
# -- Set priorityClassName for CeleryBeat pods
priorityClassName: ~
supersetCeleryFlower:
# -- Enables a Celery flower deployment (management UI to monitor celery jobs)
  # WARNING: on superset 1.x, this requires a Superset image that has flower<1.0.0 installed (which is NOT the case of the default images)
# flower>=1.0.0 requires Celery 5+ which Superset 1.5 does not support
enabled: false
replicaCount: 1
# -- Sets the pod disruption budget for supersetCeleryFlower pods
podDisruptionBudget:
# -- Whether the pod disruption budget should be created
enabled: false
# -- If set, maxUnavailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
minAvailable: 1
# -- If set, minAvailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
maxUnavailable: 1
# -- Command
  # @default -- a celery flower command
  command:
- "/bin/sh"
- "-c"
- "celery --app=superset.tasks.celery_app:app flower"
service:
type: ClusterIP
annotations: {}
loadBalancerIP: ~
port: 5555
nodePort:
# -- (int)
http: nil
startupProbe:
httpGet:
path: /api/workers
port: flower
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
successThreshold: 1
livenessProbe:
httpGet:
path: /api/workers
port: flower
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
readinessProbe:
httpGet:
path: /api/workers
port: flower
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
# -- List of init containers
# @default -- a container waiting for postgres and redis
initContainers:
- name: wait-for-postgres-redis
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://10.168.8.60:6432" -wait "tcp://{{ .Release.Name }}-redis-master:6379" -timeout 120s
# -- Launch additional containers into supersetCeleryFlower pods
extraContainers: []
# -- Annotations to be added to supersetCeleryFlower deployment
deploymentAnnotations: {}
# -- Affinity to be added to supersetCeleryFlower deployment
affinity: {}
# -- TopologySpreadConstraints to be added to supersetCeleryFlower deployments
topologySpreadConstraints: []
# -- Annotations to be added to supersetCeleryFlower pods
podAnnotations: {}
# -- Labels to be added to supersetCeleryFlower pods
podLabels: {}
# -- Resource settings for the CeleryFlower pods - these settings might overwrite existing values from the global resources object defined above.
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
# -- Set priorityClassName for supersetCeleryFlower pods
priorityClassName: ~
supersetWebsockets:
# -- This is only required if you intend to use GLOBAL_ASYNC_QUERIES in ws mode
# see https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries
enabled: false
replicaCount: 1
# -- Sets the pod disruption budget for supersetWebsockets pods
podDisruptionBudget:
# -- Whether the pod disruption budget should be created
enabled: false
# -- If set, maxUnavailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
minAvailable: 1
# -- If set, minAvailable must not be set - see https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
maxUnavailable: 1
ingress:
path: /ws
pathType: Prefix
image:
# -- There is no official image (yet), this one is community-supported
repository: superset/oneacrefund/superset-websocket
tag: latest
pullPolicy: IfNotPresent
# -- The config.json to pass to the server, see https://github.com/apache/superset/tree/master/superset-websocket
# Note that the configuration can also read from environment variables (which will have priority), see https://github.com/apache/superset/blob/master/superset-websocket/src/config.ts for a list of supported variables
# @default -- see values.yaml
config:
{
"port": 8080,
"logLevel": "debug",
"logToFile": false,
"logFilename": "app.log",
"statsd": { "host": "127.0.0.1", "port": 8125, "globalTags": [] },
"redis":
{
"port": 6379,
"host": "127.0.0.1",
"password": "",
"db": 0,
"ssl": false,
},
"redisStreamPrefix": "async-events-",
"jwtSecret": "CHANGE-ME",
"jwtCookieName": "async-token",
}
service:
type: ClusterIP
annotations: {}
loadBalancerIP: ~
port: 8080
nodePort:
# -- (int)
http: nil
command: []
resources: {}
# -- Launch additional containers into supersetWebsockets pods
extraContainers: []
deploymentAnnotations: {}
# -- Affinity to be added to supersetWebsockets deployment
affinity: {}
# -- TopologySpreadConstraints to be added to supersetWebsockets deployments
topologySpreadConstraints: []
podAnnotations: {}
podLabels: {}
strategy: {}
podSecurityContext: {}
containerSecurityContext: {}
startupProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
successThreshold: 1
livenessProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
readinessProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
# -- Set priorityClassName for supersetWebsockets pods
priorityClassName: ~
init:
# Configure resources
# Warning: fab command consumes a lot of ram and can
# cause the process to be killed due to OOM if it exceeds limit
# Make sure you are giving a strong password for the admin user creation( else make sure you are changing after setup)
# Also change the admin email to your own custom email.
resources: {}
# limits:
# cpu:
# memory:
# requests:
# cpu:
# memory:
# -- Command
# @default -- a superset_init.sh command
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; . {{ .Values.configMountPath }}/superset_init.sh"
enabled: true
jobAnnotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-delete-policy": "before-hook-creation"
loadExamples: false
createAdmin: true
adminUser:
username: admin
firstname: Superset
lastname: Admin
email: admin@superset.com
password: admin
# -- List of initContainers
# @default -- a container waiting for postgres
initContainers:
- name: wait-for-postgres
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://10.168.8.60:6432" -timeout 120s
# -- A Superset init script
# @default -- a script to create admin user and initialize roles
initscript: |-
#!/bin/sh
set -eu
echo "Upgrading DB schema..."
superset db upgrade
echo "Initializing roles..."
superset init
{{ if .Values.init.createAdmin }}
echo "Creating admin user..."
superset fab create-admin \
--username {{ .Values.init.adminUser.username }} \
--firstname {{ .Values.init.adminUser.firstname }} \
--lastname {{ .Values.init.adminUser.lastname }} \
--email {{ .Values.init.adminUser.email }} \
--password {{ .Values.init.adminUser.password }} \
|| true
{{- end }}
{{ if .Values.init.loadExamples }}
echo "Loading examples..."
superset load_examples
{{- end }}
if [ -f "{{ .Values.extraConfigMountPath }}/import_datasources.yaml" ]; then
echo "Importing database connections.... "
superset import_datasources -p {{ .Values.extraConfigMountPath }}/import_datasources.yaml
fi
# -- Launch additional containers into init job pod
extraContainers: []
## Annotations to be added to init job pods
podAnnotations: {}
# Labels to be added to init job pods
podLabels: {}
podSecurityContext: {}
containerSecurityContext: {}
## Tolerations to be added to init job pods
tolerations: []
## Affinity to be added to init job pods
affinity: {}
# -- TopologySpreadConstraints to be added to init job
topologySpreadConstraints: []
# -- Set priorityClassName for init job pods
priorityClassName: ~
# -- Configuration values for the postgresql dependency.
# ref: https://github.com/bitnami/charts/tree/main/bitnami/postgresql
# @default -- see values.yaml
postgresql:
  ## Use the PostgreSQL chart dependency.
  ## Set to false if bringing your own PostgreSQL.
  enabled: false
  ## Authentication parameters
  auth:
    ## The name of an existing secret that contains the postgres password.
    existingSecret:
    ## PostgreSQL name for a custom user to create
    username: superset
    ## PostgreSQL password for the custom user to create. Ignored if auth.existingSecret with key password is provided
    password: superset
    ## PostgreSQL name for a custom database to create
    database: superset
  image:
    tag: "14.17.0-debian-12-r3"
  ## PostgreSQL Primary parameters
  primary:
    ## Persistent Volume Storage configuration.
    ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
    persistence:
      ## Enable PostgreSQL persistence using Persistent Volume Claims.
      enabled: true
      ## Persistent class
      # storageClass: classname
      ## Access modes:
      accessModes:
        - ReadWriteOnce
    ## PostgreSQL port
    service:
      ports:
        postgresql: "5432"
# -- Configuration values for the Redis dependency.
# ref: https://github.com/bitnami/charts/blob/master/bitnami/redis
# More documentation can be found here: https://artifacthub.io/packages/helm/bitnami/redis
# @default -- see values.yaml
redis:
  ## Use the redis chart dependency.
  ## If you are bringing your own redis, you can set the host in supersetNode.connections.redis_host
  ## Set to false if bringing your own redis.
  enabled: true
  ## Set architecture to standalone/replication
  architecture: standalone
  ## Auth configuration:
  auth:
    ## Enable password authentication
    enabled: false
    ## The name of an existing secret that contains the redis password.
    existingSecret: ""
    ## Name of the key containing the secret.
    existingSecretKey: ""
    ## Redis password
    password: superset
  ## Master configuration
  master:
    ## Image configuration
    # image:
    ## docker registry secret names (list)
    # pullSecrets: nil
    ## Configure persistence
    persistence:
      ## Use a PVC to persist data.
      enabled: false
      ## Persistent class
      # storageClass: classname
      ## Access mode:
      accessModes:
        - ReadWriteOnce
nodeSelector: {}
tolerations: []
affinity: {}
# -- TopologySpreadConstraints to be added to all deployments
topologySpreadConstraints: []
# -- Set priorityClassName for superset pods
priorityClassName: ~
Comment From: dosubot[bot]
Your values.yaml is correctly configured for external PostgreSQL: postgresql.enabled is false, and all DB connection parameters are set under supersetNode.connections. The Helm chart templates these into Kubernetes Secrets, which inject the environment variables into the Superset pod, and the init container waits for the DB port to be reachable before startup. Superset builds the connection string from these variables internally; there is no hidden override or fallback logic.
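As a rough sketch of that flow, the injected environment variables can be assembled into a SQLAlchemy URI like this (the DB_* variable names mirror the chart's default secret template, but treat them as assumptions, not the chart's exact contract):

```python
import os

# Hypothetical env values, standing in for what the chart's secret injects.
os.environ.update({
    "DB_DIALECT": "postgresql",
    "DB_USER": "superset",
    "DB_PASS": "superset",
    "DB_HOST": "10.168.8.60",
    "DB_PORT": "6432",
    "DB_NAME": "superset",
})

def build_db_uri() -> str:
    """Assemble a SQLAlchemy URI from the injected environment variables."""
    return (
        f"{os.environ['DB_DIALECT']}://{os.environ['DB_USER']}:"
        f"{os.environ['DB_PASS']}@{os.environ['DB_HOST']}:"
        f"{os.environ['DB_PORT']}/{os.environ['DB_NAME']}"
    )

print(build_db_uri())  # postgresql://superset:superset@10.168.8.60:6432/superset
```

The point is that the URI is purely a function of the injected variables, so a wrong host or port can only come from the values/secret, not from hidden chart logic.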
Given that other pods can connect but Superset cannot, the issue is almost certainly environmental: check for differences in namespace, subnet, NetworkPolicies, or firewall rules that might isolate the Superset pod. If you can't exec into the pod, use a debug/ephemeral pod in the same namespace to test connectivity (nc -vz 10.168.8.60 6432). Also, review Kubernetes events and logs for resource or permission issues.
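The connectivity test above can be run from a throwaway pod in the same namespace; a minimal sketch (pod name and image are arbitrary choices, any image that ships nc works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: net-debug        # arbitrary name
  namespace: superset
spec:
  restartPolicy: Never
  containers:
    - name: debug
      image: busybox:1.36
      command: ["nc", "-vz", "10.168.8.60", "6432"]
```

Apply it with kubectl apply -f, then check kubectl logs net-debug. If this pod also reports "no route to host", the problem is cluster networking rather than the chart's values.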
If your DB requires SSL, append ?sslmode=require to your connection string. For production, consider referencing DB passwords via Kubernetes Secrets rather than hardcoding them.
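If you need to force the SSL parameter, one place to do it is the chart's configOverrides hook; the snippet below is illustrative only (the override key name and the hardcoded URI are made up for the example):

```yaml
configOverrides:
  db_ssl: |
    # Illustrative only: pin the metadata DB URI with sslmode=require
    SQLALCHEMY_DATABASE_URI = (
        "postgresql://superset:superset@10.168.8.60:6432/superset?sslmode=require"
    )
```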
The values.yaml itself is not the source of the connectivity problem—focus on cluster networking and pod-level isolation.
Comment From: sadpandajoe
@worldofchu if this is a bug, do you mind giving some reproduction steps? Otherwise it might be better as a discussion, to get you help from others who have experience with this.
Comment From: worldofchu
I can supply the Helm chart zip file (use namespace superset), but I can't supply the external DB. The file is supersetbitnami015.zip.
Comment From: worldofchu
Oh, what happened?
Comment From: worldofchu
Excuse me, has this problem been resolved?
Comment From: sadpandajoe
@worldofchu no, the problem has not been resolved, as there are no reproducible steps to debug.