Pandas version checks

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd
s = pd.Series(['23', '³', '⅕', ''], dtype=pd.StringDtype(storage="pyarrow"))
s.str.isdigit()
0 True
1 False
2 False
3 False
dtype: boolean
Issue Description
Series.str.isdigit() with the pyarrow string dtype does not honor Unicode superscript/subscript digits, which diverges from the public documentation: https://pandas.pydata.org/docs/reference/api/pandas.Series.str.isdigit.html#pandas.Series.str.isdigit
The bug only occurs with the pyarrow string dtype; the python string dtype behaves correctly.
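For context on why these particular characters differ, here is a minimal stdlib check (no pandas required) of Python's own str.isdigit() semantics: it is True for characters that carry a Unicode digit value (including superscripts), but not for characters that only have a numeric value, such as vulgar fractions.

```python
import unicodedata

# Python's str.isdigit() follows the Unicode "digit" property:
# superscripts like '³' have a digit value, while fractions like
# '⅕' only have a numeric value, so isdigit() is False for them.
for ch in ['2', '³', '⅕']:
    print(ch,
          unicodedata.category(ch),       # 'Nd' for '2', 'No' for '³'/'⅕'
          unicodedata.digit(ch, None),    # digit value, or None
          unicodedata.numeric(ch, None),  # numeric value, or None
          ch.isdigit())
```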
Expected Behavior
import pandas as pd
s = pd.Series(['23', '³', '⅕', ''], dtype=pd.StringDtype(storage="pyarrow"))
s.str.isdigit()
0 True
1 True
2 False
3 False
dtype: boolean
Installed Versions
Comment From: rhshadrach
Thanks for the report, confirmed on main. Further investigations and PRs to fix are welcome!
Comment From: iabhi4
@rhshadrach The issue stems from pyarrow.compute.utf8_is_digit not recognizing non-ASCII Unicode digits (e.g., '³'). To align with str.isdigit()'s behavior and the pandas docs, I propose replacing the Arrow compute call in _str_isdigit() with:
def _str_isdigit(self):
    import numpy as np

    from pandas.core.arrays.boolean import BooleanArray

    values = self.to_numpy(na_value=None)
    data = []
    mask = []
    for val in values:
        if val is None:
            # missing value: masked out of the result
            data.append(False)
            mask.append(True)
        else:
            data.append(val.isdigit())
            mask.append(False)
    return BooleanArray(np.array(data, dtype=bool), np.array(mask, dtype=bool))
While this isn’t vectorized, it correctly honors all Unicode digit categories, which aligns with user expectations. Let me know if this workaround is acceptable for now, or if you’d prefer keeping the current Arrow-based behavior and instead clarifying the limitation in the documentation.
Related upstream issue: I’ve confirmed that this is a pyarrow limitation and have raised an enhancement request in the Arrow repo to bring utf8_is_digit in line with str.isdigit().
Optionally, we could also explore reimplementing this in Cython using PyUnicode_READ and Py_UNICODE_ISDIGIT for performance while maintaining Unicode correctness.
Let me know what direction you'd prefer; I'm happy to work on a patch either way.
Comment From: rhshadrach
Looks like this is getting fixed upstream (thanks!). Assuming that to be the case, my preference would be to leave pandas as-is.
cc @WillAyd @jorisvandenbossche for any thoughts.
Comment From: WillAyd
Yes I agree - let's keep it as an upstream fix. Thanks for the thorough investigation and solution @iabhi4
Comment From: GarrettWu
Thanks @iabhi4 for the upstream fix https://github.com/apache/arrow/issues/46589. It solves the superscripts issue, but introduces another discrepancy:
'¾' (vulgar fraction, Unicode category 'No') is treated as a digit by utf8proc.
Any chance we can fix it too? Otherwise str.isdigit still differs between the python and pyarrow string types.
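To illustrate this remaining discrepancy from the Python side: '¾' has a numeric value but no digit value, so str.isdigit() returns False (a quick stdlib check; the utf8proc behaviour is as reported above):

```python
import unicodedata

# '¾' is Unicode category 'No' with a numeric value (0.75) but no
# digit value, so Python's str.isdigit() returns False; per the
# report above, utf8proc still classifies it as a digit.
print(unicodedata.category('¾'))     # 'No'
print(unicodedata.numeric('¾'))      # 0.75
print(unicodedata.digit('¾', None))  # None
print('¾'.isdigit())                 # False
```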
Comment From: jorisvandenbossche
> Looks like this is getting fixed upstream (thanks!). Assuming that to be the case, my preference would be to leave pandas as-is.
If this comes in the next pyarrow version, I think we could still add a fallback based on the version, i.e. use pyarrow for recent versions and otherwise fall back to the python implementation. Potentially, assuming the cases that behave differently all involve non-ASCII characters, we could also first check whether all elements are ASCII, and if so always use the pyarrow version (I think the overhead is worth it).
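The dispatch logic described above could be sketched roughly like this. This is a standalone illustration, not actual pandas code: `choose_isdigit_impl` is a hypothetical helper, and 21 is assumed to be the first pyarrow version carrying the upstream fix.

```python
def choose_isdigit_impl(pa_version: str, values) -> str:
    """Pick a str.isdigit backend: pyarrow when safe, else python."""
    major = int(pa_version.split(".")[0])
    if major >= 21:
        # recent pyarrow includes the upstream utf8_is_digit fix
        return "pyarrow"
    # ASCII fast path: both backends agree on pure-ASCII data,
    # so the faster pyarrow kernel can still be used
    if all(v is None or v.isascii() for v in values):
        return "pyarrow"
    # non-ASCII data on older pyarrow: element-wise python fallback
    return "python"
```

The ASCII pre-check costs one extra pass over the data on older pyarrow, but it keeps the fast kernel for the common all-ASCII case and only falls back to the slower element-wise Python path when non-ASCII values are actually present.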
Comment From: jorisvandenbossche
I am doing a PR for what I mentioned above (still falling back to python for pyarrow < 21) at https://github.com/pandas-dev/pandas/pull/61962, but that also shows this now introduces a new inconsistency.
The superscript is now seen as a digit, but so is '⅕', while that is not the case in Python:
>>> '⅕'.isdigit()
False
vs
>>> import pyarrow as pa
>>> import pyarrow.compute as pc
>>> pa.__version__
'21.0.0'
>>> pc.utf8_is_digit(['⅕'])
<pyarrow.lib.BooleanArray object at 0x7f28ca52da80>
[
  true
]
Comment From: jorisvandenbossche
What do we want to do here? Stay as close as possible to the Python behaviour? (In that case we could fast-check whether all values are ASCII and, if so, still use the faster pyarrow algorithm, but otherwise fall back to Python.) Or accept that this is one of the differences between the two engines and document it as such?
(It's also not clear to me whether one of the two behaviours is "more correct" than the other.)
Comment From: WillAyd
I think there are endless possibilities for differences and ambiguities in the compute interpretation of Unicode sequences, so I'd rather just document that we are using Arrow/utf8proc. I think the only real alternatives are playing "whack-a-mole" as people report issues, or always converting everything to a PyObject and continuing to use Python's interpretation. That would definitely be the "pure" backwards-compatible approach, but I'm not sure it is all that practical.
Comment From: jorisvandenbossche
Yes, I think I agree for differences that are Unicode corner cases like this one. I updated the PR to test the differing behaviour, and then we should also document this as one of the known differences (in addition to the upper-casing of ß).