Bug description

Steps to replicate:

  1. Define a chart whose metric uses Jinja-templated SQL, referencing columns dynamically based on a dashboard-level filter
  2. Keep async queries enabled: GLOBAL_ASYNC_QUERIES = True and ENABLE_ASYNC_QUERIES = True (see the config sketch after this list)
  3. Load the dashboard
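
For context, here is a minimal sketch of how these flags typically sit in superset_config.py. GLOBAL_ASYNC_QUERIES is Superset's documented feature flag for this behaviour; the Redis and JWT lines are placeholders that the async setup also needs, and they are assumptions here rather than the reporter's actual configuration:

    # superset_config.py -- a sketch, not the reporter's exact configuration
    FEATURE_FLAGS = {
        "GLOBAL_ASYNC_QUERIES": True,
        # ENABLE_ASYNC_QUERIES is listed in the report; GLOBAL_ASYNC_QUERIES is
        # the flag documented by Superset for async chart queries.
        "ENABLE_ASYNC_QUERIES": True,
    }

    # Placeholder values: global async queries also need a Redis backend and a
    # JWT secret for the async events channel.
    GLOBAL_ASYNC_QUERIES_REDIS_CONFIG = {
        "host": "127.0.0.1",
        "port": 6379,
        "db": 0,
    }
    GLOBAL_ASYNC_QUERIES_JWT_SECRET = "change-me-to-a-long-random-secret"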

Scenario:

I have a chart in which we have defined a custom metric using Jinja-templated SQL.

It looks like the following:

    MAX(
      {% if filter_values('perfmetrics')|length %}
        {% set metric_value = filter_values('perfmetrics')[0] %}
        {% if metric_value == 'Total IOPs' %} vlun_iops_total
        {% elif metric_value == 'Read IOPs' %} vlun_iops_read
        {% elif metric_value == 'Write IOPs' %} vlun_iops_write
        {% elif metric_value == 'Total Latency (ms)' %} vlun_svt_total
        {% elif metric_value == 'Read Latency (ms)' %} vlun_svt_read
        {% elif metric_value == 'Write Latency (ms)' %} vlun_svt_write
        {% elif metric_value == 'Total Throughput (MiB/s)' %} vlun_kbps_total/1024
        {% elif metric_value == 'Read Throughput (MiB/s)' %} vlun_kbps_read/1024
        {% elif metric_value == 'Write Throughput (MiB/s)' %} vlun_kbps_write/1024
        {% elif metric_value == 'Total IO Size (KiB)' %} vlun_iosz_total
        {% elif metric_value == 'Read IO Size (KiB)' %} vlun_iosz_read
        {% elif metric_value == 'Write IO Size (KiB)' %} vlun_iosz_write
        {% endif %}
      {% endif %}
    )

perfmetrics => a dashboard-level list filter where we can pick values such as 'Total IOPs', 'Read IOPs', 'Write IOPs', etc.

Based on the user selection, we select the right database column and aggregate it. All of the columns are of type "double".
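
For example, with 'Total IOPs' selected in the perfmetrics filter, the metric above renders to roughly:

    MAX( vlun_iops_total )

and to MAX( vlun_kbps_total/1024 ) when 'Total Throughput (MiB/s)' is selected.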

With async disabled, charts are loading fine.

If async is enabled, we get a "422 UNPROCESSABLE ENTITY" response; in the UI it shows up as an unexpected error with no specific error message string.

[Image: error screenshot]

Superset version: 3.0.0

Error screenshots are similar to the ones seen in the thread "https://github.com/apache/superset/issues/34543"

Screenshots/recordings

No response

Superset version

master / latest-dev

Python version

3.9

Node version

16

Browser

Chrome

Additional context

No response

Checklist

  • [x] I have searched Superset docs and Slack and didn't find a solution to my problem.
  • [x] I have searched the GitHub issue tracker and didn't find a similar bug report.
  • [x] I have checked Superset's logs for errors and if I found a relevant Python stacktrace, I included it here as text in the "additional context" section.

Comment From: dosubot[bot]

Hi @rahul-plavelimana! I'm Dosu and I'm helping the Superset team.

This is a known bug in Superset when using async queries with Jinja SQLs and dashboard-level filters: enabling GLOBAL_ASYNC_QUERIES causes HTTP 422 errors due to inconsistent cache key serialization between the Celery worker (which writes the cache) and the web UI (which reads it). This mismatch happens especially when multiple URL parameters or Jinja filters are involved, so the cache key generated during async execution doesn't match the one used for retrieval, even though the result exists in Redis. See issue #34543 for details.
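
To make that concrete, here is a small, self-contained Python illustration of the failure mode. This is not Superset's actual code; the helper name and parameters are invented for the example. It just shows how two components that serialize the same logical parameters differently derive different cache keys, so the read side misses even though the worker wrote a result:

    # Illustration only -- not Superset code. Shows how non-canonical
    # serialization of the same parameters yields different cache keys.
    import hashlib
    import json

    def cache_key(params: dict, canonical: bool) -> str:
        # canonical=True sorts keys before hashing; False keeps insertion order.
        payload = json.dumps(params, sort_keys=canonical)
        return hashlib.md5(payload.encode("utf-8")).hexdigest()

    # Same logical parameters, assembled in different orders by two components.
    worker_params = {"perfmetrics": ["Total IOPs"], "time_range": "Last week"}
    web_params = {"time_range": "Last week", "perfmetrics": ["Total IOPs"]}

    # Non-canonical: the web layer looks up a key the worker never wrote.
    print(cache_key(worker_params, canonical=False) == cache_key(web_params, canonical=False))  # False
    # Canonical: both sides derive the same key.
    print(cache_key(worker_params, canonical=True) == cache_key(web_params, canonical=True))    # True

Any fix along these lines would need to canonicalize on both the write path (Celery worker) and the read path (web request).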

Workarounds are limited: disabling async (GLOBAL_ASYNC_QUERIES = False) avoids the bug but sacrifices scalability; using url_param('param', add_to_cache_keys=False) can bypass the error but breaks filtering, since the cache key no longer reflects filter values. There is no official fix as of now, and the bug persists in master/latest-dev. If you need async and dynamic Jinja filters, you may need to limit yourself to single-parameter filters or watch for upstream changes. Advanced users can explore a custom patch to serialize URL parameters in a canonical way for cache keys; see the code paths in superset/jinja_context.py, superset/common/query_object.py, and superset/connectors/sqla/models.py for where this logic lives.

For dynamic SQL logic, consider moving the conditional logic into the main SQL query using CASE WHEN or virtual datasets, as Jinja templating in chart filters is no longer supported in recent versions.

To reply, just mention @dosu.



Comment From: msyavuz

As dosu pointed out, this is the same issue as #34543. I haven't gotten around to looking into it yet, but if you have any ideas on how to fix it, we would appreciate them. I'm closing this one as a duplicate, but thank you for the detailed report @rahul-plavelimana!