Bug description

We’ve identified an issue when using FileSystemCache: setting values for TABLE CACHE TIMEOUT and/or SCHEMA CACHE TIMEOUT in the database properties modal causes fetching the schema list and table list in SQL Lab to fail. Specifically, requests to /api/v1/database/{dbId}/tables/?q=(force:!f,schema_name:{schemaName}) return a 422 UNPROCESSABLE ENTITY status with the response: {"message":"Unexpected error occurred, please check your logs for details"}

The issue appears to be related to how the table_cache_timeout and schema_cache_timeout values are sent in the payload and how the backend cache (cachelib) handles them: judging by the stacktrace below, the timeouts reach cachelib as strings rather than integers.
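
As a minimal standalone sketch of the failure mode (using cachelib's FileSystemCache directly, outside Superset), passing the timeout as a string reproduces the same TypeError seen in the stacktrace below:

```python
from cachelib.file import FileSystemCache

cache = FileSystemCache("/tmp/superset_cache_demo")  # example cache directory

# An integer timeout works as expected.
cache.set("schemas", ["public", "information_schema"], timeout=3600)

# A string timeout blows up inside _normalize_timeout (int(time()) + timeout),
# which is exactly the TypeError shown in the stacktrace below.
cache.set("schemas", ["public", "information_schema"], timeout="3600")
# TypeError: unsupported operand type(s) for +: 'int' and 'str'
```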

Proposed fix: #28737

How to reproduce the bug

  1. Set CACHE_CONFIG to use FileSystemCache in config.py/superset_config.py (see the example config after these steps)
  2. Go to Settings → Database Connections
  3. Edit a DB connection
  4. Go to Advanced → Performance
  5. Set values for SCHEMA CACHE TIMEOUT and TABLE CACHE TIMEOUT
  6. Go to SQL Lab and select the DB you edited
  7. Try to refresh the schema list and table list
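
For step 1, a minimal superset_config.py sketch is shown below; the CACHE_DIR path is just an example and should match your deployment:

```python
# superset_config.py -- minimal FileSystemCache setup for Superset's metadata cache
CACHE_CONFIG = {
    "CACHE_TYPE": "FileSystemCache",
    "CACHE_DIR": "/app/superset_home/cache",  # example path, adjust for your deployment
    "CACHE_DEFAULT_TIMEOUT": 300,  # seconds
}
```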

Screenshots/recordings

https://github.com/apache/superset/assets/8809047/16eda66e-f28e-455f-8bd2-6ad9a7a05d1b

Superset version

4.0.1

Python version

3.9

Node version

16

Browser

Chrome

Additional context

Stacktrace

2024-05-28 03:48:58,426:ERROR:flask_appbuilder.api:unsupported operand type(s) for +: 'int' and 'str'
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/flask_appbuilder/api/__init__.py", line 110, in wraps
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/flask_appbuilder/api/__init__.py", line 182, in wraps
    return f(self, *args, **kwargs)
  File "/app/superset/views/base_api.py", line 127, in wraps
    raise ex
  File "/app/superset/views/base_api.py", line 121, in wraps
    duration, response = time_function(f, self, *args, **kwargs)
  File "/app/superset/utils/core.py", line 1463, in time_function
    response = func(*args, **kwargs)
  File "/app/superset/utils/log.py", line 255, in wrapper
    value = f(*args, **kwargs)
  File "/app/superset/databases/api.py", line 608, in schemas
    schemas = database.get_all_schema_names(
  File "/app/superset/utils/cache.py", line 135, in wrapped_f
    cache.set(cache_key, obj, timeout=kwargs.get("cache_timeout", 0))
  File "/usr/local/lib/python3.9/site-packages/flask_caching/__init__.py", line 193, in set
    return self.cache.set(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/cachelib/file.py", line 241, in set
    timeout = self._normalize_timeout(timeout)
  File "/usr/local/lib/python3.9/site-packages/cachelib/file.py", line 100, in _normalize_timeout
    timeout = int(time()) + timeout
TypeError: unsupported operand type(s) for +: 'int' and 'str'
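
The failing call is cache.set(cache_key, obj, timeout=kwargs.get("cache_timeout", 0)) in superset/utils/cache.py. Purely as a hedged sketch (not necessarily what #28737 implements), one way to guard it would be to coerce the timeout to an int before handing it to cachelib:

```python
from typing import Optional, Union

# Sketch only -- not the actual patch in #28737.
def coerce_cache_timeout(raw: Optional[Union[str, int]]) -> int:
    """Coerce a timeout that may arrive as a string (e.g. "3600") into an int."""
    return int(raw) if raw else 0

# The call at superset/utils/cache.py (line 135 in the stacktrace) could then
# always pass an integer to cachelib, e.g.:
# cache.set(cache_key, obj, timeout=coerce_cache_timeout(kwargs.get("cache_timeout", 0)))
```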

Checklist

  • [X] I have searched Superset docs and Slack and didn't find a solution to my problem.
  • [ ] I have searched the GitHub issue tracker and didn't find a similar bug report.
  • [X] I have checked Superset's logs for errors and if I found a relevant Python stacktrace, I included it here as text in the "additional context" section.

Comment From: dosubot[bot]

Hey @kidusmakonnen! I'm here to help you with any bugs, questions, or contributions you have for the repository. Let me know how I can assist you!

The error TypeError: unsupported operand type(s) for +: 'int' and 'str' indicates that the values for table_cache_timeout and schema_cache_timeout are being treated as strings instead of integers. This can happen if these values are not correctly set as integers in your configuration.

To resolve this issue, ensure that the values for table_cache_timeout and schema_cache_timeout are set as integers. Here is how you can set these values correctly:

  1. Directly in the configuration file:

     ```python
     # Ensure these values are set as integers
     table_cache_timeout = 300  # Example value in seconds
     schema_cache_timeout = 300  # Example value in seconds
     ```

  2. Reading from environment variables:

     ```python
     import os

     # Example of reading from environment variables and converting to integers
     table_cache_timeout = int(os.getenv('TABLE_CACHE_TIMEOUT', 300))
     schema_cache_timeout = int(os.getenv('SCHEMA_CACHE_TIMEOUT', 300))
     ```

Additionally, ensure that any cache-related functions handle the cache timeout values correctly. For example, in the set_and_log_cache function from superset/utils/cache.py, the cache_timeout parameter should be an integer:

def set_and_log_cache(
    cache_instance: Cache,
    cache_key: str,
    cache_value: dict[str, Any],
    cache_timeout: int | None = None,
    datasource_uid: str | None = None,
) -> None:
    if isinstance(cache_instance.cache, NullCache):
        return

    timeout = (
        cache_timeout
        if cache_timeout is not None
        else app.config["CACHE_DEFAULT_TIMEOUT"]
    )
    try:
        dttm = datetime.utcnow().isoformat().split(".")[0]
        value = {**cache_value, "dttm": dttm}
        cache_instance.set(cache_key, value, timeout=timeout)
        stats_logger.incr("set_cache_key")

        if datasource_uid and config["STORE_CACHE_KEYS_IN_METADATA_DB"]:
            ck = CacheKey(
                cache_key=cache_key,
                cache_timeout=cache_timeout,
                datasource_uid=datasource_uid,
            )
            db.session.add(ck)
    except Exception as ex:  # pylint: disable=broad-except
        logger.warning("Could not cache key %s", cache_key)
        logger.exception(ex)

By ensuring these values are integers and correctly handled in your cache-related functions, you should be able to avoid the TypeError and successfully fetch the schema list and table list in SQLLab [1][2].


Comment From: rusackas

Let's try to get the linked PR across the finish line! Any updates/context or reviews/input welcomed!

Comment From: rusackas

Still hoping for a PR touch-up 🤞 Anyone is welcome to adopt it, if the author doesn't patch it.