Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the latest version of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
Over in https://github.com/dask/dask/issues/12022#issuecomment-3104950072, I'm debugging a test failure with dask and pandas 3.x that comes down to the behavior of DataFrame.copy(deep=True) with an arrow-backed extension array.
In https://github.com/pandas-dev/pandas/blob/628c7fb1ec33bc02f1f0010381b03cb3d10f87df/pandas/core/arrays/arrow/array.py#L1092, we deliberately return a shallow copy (a new object with a view on the original buffers) of the backing array. For correctness this is fine, since pyarrow arrays are immutable and copying should be unnecessary. However, it does mean that after a DataFrame.copy(deep=True) you'll still have a reference back to the original buffers. If the output of the .copy(deep=True) is the only object holding a reference to the original buffer, that buffer won't be garbage collected. Consider:
import pandas as pd
import pyarrow as pa
pool = pa.default_memory_pool()
print("before", pool.bytes_allocated()) # 0
df = pd.DataFrame({"a": ["a", "b", "c"] * 1000})
print("df", pool.bytes_allocated()) # 27200
del df
print("df", pool.bytes_allocated()) # 0
df2 = pd.DataFrame({"a": ["a", "b", "c"] * 1000})
clone = df2.iloc[:0].copy(deep=True)
print("df2", pool.bytes_allocated()) # 27200
del df2
print("after - clone", pool.bytes_allocated()) # 27200
Maybe this is fine. We can probably figure out some workaround in dask (in this case we're making an empty dataframe object as a kind of schema object, so we can probably do something other than df.iloc[:0].copy(deep=True)). But perhaps pandas could consider changing the behavior here. The downside is that df.copy(deep=True) will become more expensive and use more memory.
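For the schema use case, one rough sketch of such an alternative (assuming all that's needed is an empty frame with matching dtypes, not a true copy of any data) is to rebuild the empty frame from df.dtypes instead of slicing, so the result never holds a view on the original Arrow buffers. empty_like below is a hypothetical helper, not an existing pandas or dask API:
import pandas as pd

def empty_like(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical helper: a length-0 frame with the same columns and dtypes as df,
    # built without referencing any of df's Arrow buffers (ignores df's index dtype).
    return pd.DataFrame({col: pd.Series(dtype=dtype) for col, dtype in df.dtypes.items()})

df = pd.DataFrame({"a": ["a", "b", "c"] * 1000})
schema = empty_like(df)
del df  # with no view held by `schema`, the Arrow-backed buffers can be released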
Installed Versions
In [4]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 962168f06d15d1aced28b414eb82909d3c930916
python : 3.12.8
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2254.g962168f06d
numpy : 2.4.0.dev0+git20250717.d02611a
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : 6.136.1
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyiceberg : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
qtpy : None
pyqt5 : None
Prior Performance
No response
Comment From: TomAugspurger
cc @mroeschke if you have any thoughts here. I'll start looking for a workaround in dask though.
Comment From: TomAugspurger
Using pyarrow.compute.take with an empty array seems to work:
import pyarrow.compute  # also makes the top-level pyarrow namespace available
import pandas as pd

out = x.iloc[:0].copy(deep=True)  # x: the original DataFrame with arrow-backed columns
for k, v in out.items():
    if isinstance(v.array, pd.arrays.ArrowExtensionArray):
        # take() builds a new (empty) array, so the result no longer references x's buffers
        values = pyarrow.compute.take(pyarrow.array(v.array), pyarrow.array([], type="int32"))
        out[k] = v._constructor(pd.array(values, dtype=v.array.dtype), index=v.index, name=v.name)
So I'd probably be fine with leaving the behavior as is (not having to copy the data is nice, in many cases).
Comment From: mroeschke
I would currently lean towards maintaining the current behavior of copy being a shallow copy, because the underlying array is immutable, but I'm open to other thoughts cc @jbrockmendel @jorisvandenbossche
Another solution, if you don't care about preserving the original chunking layout (as the take method does), is to just call combine_chunks() on the underlying array in the pd.arrays.ArrowExtensionArray, i.e. values = pa.array(v.array).combine_chunks()
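Concretely, that would slot into the loop from the earlier comment in place of the take call, roughly like this (a sketch, assuming pa.array() on an ArrowExtensionArray hands back the underlying chunked data, as in the one-liner above):
import pyarrow as pa
import pandas as pd

out = x.iloc[:0].copy(deep=True)  # x: the original arrow-backed DataFrame
for k, v in out.items():
    if isinstance(v.array, pd.arrays.ArrowExtensionArray):
        # combine_chunks() concatenates the chunks into freshly allocated memory,
        # at the cost of not preserving the original chunk layout
        values = pa.array(v.array).combine_chunks()
        out[k] = v._constructor(pd.array(values, dtype=v.array.dtype), index=v.index, name=v.name)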
Comment From: jbrockmendel
Do we expect end-users to have issues with this or just libraries like dask (who we can trust to handle this on their own)?
Comment From: TomAugspurger
> Another solution, if you don't care about preserving the original chunking layout like in take method, is to just combine_chunks() of the underlying array
Thanks.
> Do we expect end-users to have issues with this
I'm not sure, but my guess is that the vast majority of users are better off with the current behavior (deep copy not actually copying): the only place this is really (negatively) observable is when a copy of a slice of a DataFrame outlives the original DataFrame. Dask does this a lot (for its schemas); I'm guessing it's not too common elsewhere.
I'll plan to close this later today if no one pushes strongly for changing the behavior.
Comment From: jorisvandenbossche
I think we should maybe reconsider this. Certainly now that we make far fewer copies within pandas (with CoW), I do think there is value in an expected / guaranteed "deep copy" behaviour.
Comment From: TomAugspurger
Either works for me. The workaround in dask isn't free, but it ends up being relatively cheap since we're dealing with size-0 arrays.
This is pretty subtle. I'm not sure how many people are using .copy() to break references vs. .copy() to get a mutable copy of the data (and if we're being honest, the majority of the .copy() calls in the wild are probably attempts to avoid a SettingWithCopy warning, and those users won't be so performance / memory sensitive).
Comment From: jbrockmendel
Another use case that comes to mind for wanting an actual copy is to de-chunk a highly fragmented ChunkedArray.
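For illustration, a small hypothetical sketch of that fragmentation scenario, where the consolidation currently has to be done by hand (none of this is existing pandas behavior, just an illustration of the use case):
import pandas as pd
import pyarrow as pa

# A column backed by a ChunkedArray with many tiny chunks.
chunked = pa.chunked_array([pa.array(["x", "y"])] * 500)
s = pd.Series(pd.arrays.ArrowExtensionArray(chunked))
print(pa.array(s.array).num_chunks)  # 500

# Manual de-chunking today; a guaranteed deep copy could conceivably do this for you.
dechunked = pd.array(pa.array(s.array).combine_chunks(), dtype=s.dtype)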