Pandas version checks

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • [x] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

>>> import uuid
>>> import pandas as pd
>>> df = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]})
>>> df
                                     id
0  6f6303cd-516d-4a27-9165-bb703f9e2240
1  c250ba7f-31db-47de-b02b-54296ac6a4df
2  c523257a-51ab-4160-957b-619ce55c78f9
>>> df.to_parquet('sample_pandas_pa.parquet', engine='pyarrow')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".venv/lib/python3.12/site-packages/pandas/util/_decorators.py", line 333, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/pandas/core/frame.py", line 3113, in to_parquet
    return to_parquet(
           ^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/pandas/io/parquet.py", line 480, in to_parquet
    impl.write(
  File ".venv/lib/python3.12/site-packages/pandas/io/parquet.py", line 190, in write
    table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/table.pxi", line 4793, in pyarrow.lib.Table.from_pandas
  File ".venv/lib/python3.12/site-packages/pyarrow/pandas_compat.py", line 639, in dataframe_to_arrays
    arrays = [convert_column(c, f)
              ^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.12/site-packages/pyarrow/pandas_compat.py", line 626, in convert_column
    raise e
  File ".venv/lib/python3.12/site-packages/pyarrow/pandas_compat.py", line 620, in convert_column
    result = pa.array(col, type=type_, from_pandas=True, safe=safe)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/array.pxi", line 365, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 90, in pyarrow.lib._ndarray_to_array
  File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: ("Could not convert UUID('6f6303cd-516d-4a27-9165-bb703f9e2240') with type UUID: did not recognize Python value type when inferring an Arrow data type", 'Conversion failed for column id with type object')

Issue Description

Writing a DataFrame containing `uuid.UUID` values to Parquet fails, even though PyArrow itself supports writing UUIDs.

Expected Behavior

Writing UUIDs should succeed.

Installed Versions

INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Linux
OS-release : 6.8.0-57-generic
Version : #59~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Mar 19 17:07:41 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None

Comment From: DevastatingRPG

After review, this seems to be an issue with PyArrow, not pandas.

Specifically, the `convert_column` step inside PyArrow's `Table.from_pandas` fails because it cannot infer an Arrow type for the standard library's `uuid.UUID` objects.

import uuid
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]})

new_df = pa.Table.from_pandas(df)

The above code, which bypasses pandas' `to_parquet` entirely, raises the same error.

A temporary workaround is to convert the UUIDs to `bytes` before saving to Parquet:

import uuid
import pandas as pd

df = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]})

# Convert UUIDs to bytes for Arrow compatibility
df['id'] = df['id'].apply(lambda x: x.bytes)

df.to_parquet('sample_pandas_pa.parquet', engine='pyarrow')

The above code produces no error and saves the file successfully

Of course, we could implement the conversion on the pandas side in `to_parquet`, but it seems a fix is needed on the PyArrow side. Could a maintainer comment on how to proceed?
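If pandas were to handle this before handing the frame to PyArrow, the pre-write pass could look roughly like the sketch below (`coerce_uuid_columns` is a hypothetical helper, not pandas API):

```python
import uuid

import pandas as pd


def coerce_uuid_columns(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical pre-write pass: replace uuid.UUID objects with their
    # 16-byte form so Arrow can infer a binary type for the column.
    out = df.copy()
    for col in out.columns:
        values = out[col]
        if values.map(lambda v: isinstance(v, uuid.UUID)).all():
            out[col] = values.map(lambda v: v.bytes)
    return out


df = pd.DataFrame({"id": [uuid.uuid4(), uuid.uuid4()], "n": [1, 2]})
clean = coerce_uuid_columns(df)
```

A real fix would presumably live in PyArrow's type inference (or map the column to its UUID extension type) rather than silently downgrading to bytes, which loses the logical type.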

Comment From: chilin0525

Thanks @DevastatingRPG , found related issue in PyArrow: https://github.com/apache/arrow/issues/44224