Bug description

Hello!

I'm still having some trouble with my Superset install. I opened a thread a few weeks ago about issues configuring the root application (https://github.com/apache/superset/issues/34773), but I realized my problems have no connection with that, so I'm opening a new thread.

I'm trying to use Docker Compose to deploy Superset in a production environment, but I get several errors when the app starts.

(screenshots of the startup errors were attached here)

First, can you tell me more about WebSockets, and the benefit of using them?

My docker/superset_config_docker.py:

HTML_SANITIZATION = True
HTML_SANITIZATION_SCHEMA_EXTENSIONS = {
  "attributes": {
    "*": ["style", "className", "class"],
  },
  "tagNames": ["style"],
}

GLOBAL_ASYNC_QUERIES = True
GLOBAL_ASYNC_QUERIES_TRANSPORT = "ws"
GLOBAL_ASYNC_QUERIES_WEBSOCKET_URL = "ws://127.0.0.1:8080/"
GLOBAL_ASYNC_QUERIES_JWT_SECRET = "MY_CUSTOM_JWT_KEY"
GLOBAL_ASYNC_QUERIES_JWT_COOKIE_NAME = "async-token"
# Secret key
SUPERSET_SECRET_KEY = "MY_CUSTOM_SUPERSET_KEY"

# Set log level
SUPERSET_LOG_LEVEL = "debug"

# Disable examples
#SUPERSET_LOAD_EXAMPLES = no

# Disable frontend container
BUILD_SUPERSET_FRONTEND_IN_DOCKER = False

# Configuration for websocket connection
JWT_SECRET = "MY_CUSTOM_JWT_KEY"
JWT_COOKIE_NAME = "async-token"

My nginx configuration:

upstream superset_app {
    server localhost:8088;
    keepalive 100;
}

upstream superset_websocket {
    server localhost:8080;
    keepalive 100;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    server_name my.domain.fr;

    include conf.d/ssl.conf;

    location /ws {
        proxy_pass http://superset_websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }

    location / { 
        proxy_pass http://superset_app;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        port_in_redirect off;
        proxy_connect_timeout 300;
    }

    access_log /var/log/nginx/superset/access.log;
    error_log /var/log/nginx/superset/error.log;
}

server {
    listen 80;
    listen [::]:80;

    server_name my.domain.fr;

    return 301 https://$host$request_uri;

    access_log /var/log/nginx/superset/access.log;
    error_log /var/log/nginx/superset/error.log;
}

My docker compose file:

x-superset-user: &superset-user root
x-superset-volumes: &superset-volumes
  # /app/pythonpath_docker will be appended to the PYTHONPATH in the final container
  - ./docker:/app/docker
  - ./superset:/app/superset
  - ./superset-frontend:/app/superset-frontend
  - superset_home:/app/superset_home
  - ./tests:/app/tests
x-common-build: &common-build
  context: .
  target: ${SUPERSET_BUILD_TARGET:-dev} # can use `dev` (default) or `lean`
  cache_from:
    - apache/superset-cache:3.10-slim-trixie
  args:
    DEV_MODE: "true"
    INCLUDE_CHROMIUM: ${INCLUDE_CHROMIUM:-false}
    INCLUDE_FIREFOX: ${INCLUDE_FIREFOX:-false}
    BUILD_TRANSLATIONS: ${BUILD_TRANSLATIONS:-true}
    LOAD_EXAMPLES_DUCKDB: ${LOAD_EXAMPLES_DUCKDB:-true}

services:
  redis:
    image: redis:7
    container_name: superset_cache
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data

  db:
    env_file:
      - path: docker/.env # default
        required: true
      - path: docker/.env-local # optional override
        required: false
    image: postgres:16
    container_name: superset_db
    restart: unless-stopped
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - db_home:/var/lib/postgresql/data
      - ./docker/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d

  superset:
    env_file:
      - path: docker/.env # default
        required: true
      - path: docker/.env-local # optional override
        required: false
    build:
      <<: *common-build
    container_name: superset_app
    command: ["/app/docker/docker-bootstrap.sh", "app"]
    restart: unless-stopped
    ports:
      - 8088:8088
      # When in cypress-mode ->
      - 8081:8081
    extra_hosts:
      - "host.docker.internal:host-gateway"
    user: *superset-user
    depends_on:
      superset-init:
        condition: service_completed_successfully
    volumes: *superset-volumes
    environment:
      SUPERSET__SQLALCHEMY_EXAMPLES_URI: "duckdb:////app/data/examples.duckdb"

  superset-websocket:
    container_name: superset_websocket
    build: ./superset-websocket
    ports:
      - 8080:8080
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - redis
    # Mount everything in superset-websocket into container and
    # then exclude node_modules and dist with bogus volume mount.
    # This is necessary because host and container need to have
    # their own, separate versions of these files. .dockerignore
    # does not seem to work when starting the service through
    # docker compose.
    #
    # For example, node_modules may contain libs with native bindings.
    # Those bindings need to be compiled for each OS and the container
    # OS is not necessarily the same as host OS.
    volumes:
      - ./superset-websocket:/home/superset-websocket
      - /home/superset-websocket/node_modules
      - /home/superset-websocket/dist

      # Mounting a config file that contains a dummy secret required to boot up.
      # do not use this docker compose in production
      - ./docker/superset-websocket/config.json:/home/superset-websocket/config.json
    environment:
      - PORT=8080
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_SSL=false

  superset-init:
    build:
      <<: *common-build
    container_name: superset_init
    command: ["/app/docker/docker-init.sh"]
    env_file:
      - path: docker/.env # default
        required: true
      - path: docker/.env-local # optional override
        required: false
    depends_on:
      db:
        condition: service_started
      redis:
        condition: service_started
    user: *superset-user
    volumes: *superset-volumes
    environment:
      SUPERSET__SQLALCHEMY_EXAMPLES_URI: "duckdb:////app/data/examples.duckdb"
    healthcheck:
      disable: true

  superset-worker:
    build:
      <<: *common-build
    container_name: superset_worker
    command: ["/app/docker/docker-bootstrap.sh", "worker"]
    env_file:
      - path: docker/.env # default
        required: true
      - path: docker/.env-local # optional override
        required: false
    environment:
      CELERYD_CONCURRENCY: 2
    restart: unless-stopped
    depends_on:
      superset-init:
        condition: service_completed_successfully
    user: *superset-user
    volumes: *superset-volumes
    extra_hosts:
      - "host.docker.internal:host-gateway"
    healthcheck:
      test: ["CMD-SHELL", "celery -A superset.tasks.celery_app:app inspect ping -d celery@$$HOSTNAME"]
    # Bump memory limit if processing selenium / thumbnails on superset-worker
    # mem_limit: 2038m
    # mem_reservation: 128M

  superset-worker-beat:
    build:
      <<: *common-build
    container_name: superset_worker_beat
    command: ["/app/docker/docker-bootstrap.sh", "beat"]
    env_file:
      - path: docker/.env # default
        required: true
      - path: docker/.env-local # optional override
        required: false
    restart: unless-stopped
    depends_on:
      - superset-worker
    user: *superset-user
    volumes: *superset-volumes
    healthcheck:
      disable: true

  superset-tests-worker:
    build:
      <<: *common-build
    container_name: superset_tests_worker
    command: ["/app/docker/docker-bootstrap.sh", "worker"]
    env_file:
      - path: docker/.env # default
        required: true
      - path: docker/.env-local # optional override
        required: false
    profiles:
      - optional
    environment:
      DATABASE_HOST: localhost
      DATABASE_DB: test
      REDIS_CELERY_DB: 2
      REDIS_RESULTS_DB: 3
      REDIS_HOST: localhost
      CELERYD_CONCURRENCY: 8
    network_mode: host
    depends_on:
      superset-init:
        condition: service_completed_successfully
    user: *superset-user
    volumes: *superset-volumes
    healthcheck:
      test: ["CMD-SHELL", "celery inspect ping -A superset.tasks.celery_app:app -d celery@$$HOSTNAME"]

volumes:
  superset_home:
    external: false
  db_home:
    external: false
  redis:
    external: false

Do you have any idea how to resolve these errors? Thanks!

Screenshots/recordings

No response

Superset version

master / latest-dev

Python version

3.9

Node version

16

Browser

Chrome

Additional context

No response

Checklist

  • [ ] I have searched Superset docs and Slack and didn't find a solution to my problem.
  • [ ] I have searched the GitHub issue tracker and didn't find a similar bug report.
  • [ ] I have checked Superset's logs for errors and if I found a relevant Python stacktrace, I included it here as text in the "additional context" section.

Comment From: dosubot[bot]

WebSockets let Superset push async query results to your browser in real time, so you don’t have to poll or refresh to see updates. This is especially useful for dashboards and long-running queries.

Your setup has a few common pitfalls:

  1. WebSocket URL: The GLOBAL_ASYNC_QUERIES_WEBSOCKET_URL must be reachable from the browser, not just from inside the container. If you use ws://127.0.0.1:8080/, browsers outside Docker can’t connect. Set it to your public hostname and proxied path, e.g. wss://my.domain.fr/ws if you’re using HTTPS and Nginx is proxying /ws to the websocket service. source

  2. JWT Secret: The JWT secret for async queries must be at least 32 bytes and must match exactly in both Superset and the websocket server config. If it’s too short or mismatched, you’ll get connection/auth errors. source

  3. Nginx Proxy: Your Nginx config for /ws looks correct for basic setups, but make sure it’s actually forwarding requests to the right container and port, and that no other service is using port 8080. Superset and superset-websocket must use different ports. source

  4. CSP Errors: The Content Security Policy (CSP) blocks inline scripts and WebSocket connections by default. You need to add your websocket URL to the connect-src directive in TALISMAN_CONFIG in your superset_config.py:

     TALISMAN_CONFIG = {
         "content_security_policy": {
             "connect-src": [
                 "'self'",
                 "wss://my.domain.fr/ws",  # Add your websocket endpoint here
             ],
             # ... other directives ...
         },
         # ... other config ...
     }

     source

  5. Frontend/Runtime Errors: Errors like "messages is undefined" or JSON parse errors often mean the frontend is getting a bad or empty response from the backend or websocket. These usually clear up once the WebSocket and CSP issues are fixed, but they can also be caused by incomplete frontend builds. If you see missing assets or a broken UI, make sure the frontend assets are built and available. source

  6. Production Note: The default Docker Compose setup is for development only. For production, use strong secrets, review all configs, and consider a more secure orchestration setup. source
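To make point 2 concrete: the websocket server only accepts a connection if it can verify the JWT that Superset signed, and that only works when both sides hold the exact same secret. Here is a minimal stdlib-only sketch of HS256 signing/verification (illustrative only — Superset uses a real JWT library, and the payload and secrets below are made-up placeholders) showing why a mismatched secret produces an auth error:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature with our secret and compare."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

superset_secret = b"0123456789abcdef0123456789abcdef"  # placeholder, 32 bytes as required
token = sign_jwt({"channel": "some-channel-id"}, superset_secret)

print(verify_jwt(token, superset_secret))                       # True: secrets match
print(verify_jwt(token, b"a-different-secret-key-32-bytes!"))   # False: mismatch -> auth failure
```

Even a one-character difference between the secret in superset_config.py and the one in the websocket server's config produces the second case, which surfaces as connection/auth errors in the browser.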

In summary: update your WebSocket URL to a public address, ensure your JWT secret is long and matches everywhere, add your WebSocket endpoint to CSP, and check that Nginx is proxying correctly. If you still see errors, check browser network logs for failed requests and the websocket server logs for JWT or connection errors.
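Putting the summary into config form, a corrected superset_config_docker.py fragment might look like the sketch below. Note that per the Superset docs, GLOBAL_ASYNC_QUERIES is a feature flag and belongs inside the FEATURE_FLAGS dict, not at the top level as in your current file. The hostname is taken from your nginx config; the secret is a placeholder you must replace:

```python
# superset_config_docker.py -- sketch with placeholder values, adapt before use
FEATURE_FLAGS = {
    "GLOBAL_ASYNC_QUERIES": True,  # the flag goes in FEATURE_FLAGS, not top level
}

GLOBAL_ASYNC_QUERIES_TRANSPORT = "ws"
# Must be reachable from the *browser*, through the nginx /ws proxy:
GLOBAL_ASYNC_QUERIES_WEBSOCKET_URL = "wss://my.domain.fr/ws"
# Placeholder -- replace with your own random value of at least 32 bytes:
GLOBAL_ASYNC_QUERIES_JWT_SECRET = "CHANGE_ME_TO_AT_LEAST_32_BYTES_OF_RANDOM"
GLOBAL_ASYNC_QUERIES_JWT_COOKIE_NAME = "async-token"
```

The superset-websocket service reads its own config.json; its jwtSecret and jwtCookieName entries (per the superset-websocket README) must match these values exactly, since you are mounting ./docker/superset-websocket/config.json into the container.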

To reply, just mention @dosu.



Comment From: rusackas

Hmm... I see that "GLOBAL_ASYNC_QUERIES" is indeed in the "Testing" section of FEATURE_FLAGS.md, but it should probably be in the "In Development" section, as the architecture/details have not quite been finalized.

CC @villebro for good measure, who was looking at rearchitecting much of that, but we just haven't gotten to it yet.

I think this is largely in the "no warranties" status for now, but the bug sounds legit, so we'll leave it here and open for anyone willing to investigate/fix.