Pandas version checks

- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example
```python
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'a': [1, 2], 'b': pd.Categorical([3, np.nan])})
df2.dtypes
df2.iloc[0, :]  # The series has dtype int
df2.iloc[1, :]  # ValueError: cannot convert float NaN to integer

df2 = pd.DataFrame({'a': [1., 2.], 'b': pd.Categorical([3, np.nan])})
df2.dtypes
df2.iloc[0, :]  # The series has dtype float
df2.iloc[1, :]  # OK, because the first column is float

df2 = pd.DataFrame({'a': [1, 2], 'b': pd.Series([3, np.nan], dtype=object)})
df2.dtypes
df2.iloc[0, :]  # The series has dtype object
df2.iloc[1, :]  # OK, because a Series of dtype object can hold mixed element types
```
Issue Description
When columns are created as `pd.Categorical`, extracting a row sometimes raises a confusing error, because a row is a `pd.Series`, which must have a single dtype for all of its elements. If the row contains `np.nan`, extraction can fail when an earlier column has dtype `int`. Would it make sense to make the extracted row ALWAYS take dtype `object`, since mixed types are very common in a row, which always spans different columns?
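As a sketch of a workaround (not a fix), casting the whole frame to object before extraction sidesteps the common-dtype inference, so the row can hold the int and the NaN side by side:

```python
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'a': [1, 2], 'b': pd.Categorical([3, np.nan])})

# Casting to object first means the extracted row does not need to find
# a single numeric dtype for all elements, so no ValueError is raised.
row = df2.astype(object).iloc[1, :]
```

This of course loses the categorical dtype for the cast frame, so it is only useful at the point of extraction.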
Expected Behavior
Extracting a row from a DataFrame that has a `pd.Categorical` column should not raise an error that depends on which other columns are present.
Installed Versions
Comment From: rhshadrach
Thanks for the report - confirmed on main. Further investigations and PRs to fix are welcome!
Comment From: ai-naymul
Hey,
Could you please try the following code on your machine:

```python
import numpy as np
import pandas as pd

# Explicitly set dtype to object for columns that might contain mixed types
df2 = pd.DataFrame({'a': np.array([1, 2], dtype=object),
                    'b': pd.Categorical([3, np.nan], categories=[3], ordered=False)})
print(df2.dtypes)
print(df2.iloc[0, :])
print(df2.iloc[1, :])
```
Comment From: cinntamani
Here is the output on my machine:

```
a      object
b    category
dtype: object
a    1
b    3
Name: 0, dtype: object
a      2
b    NaN
Name: 1, dtype: object
```
Thank you!
Comment From: samyukta-31
Hi! I would like to attempt this issue if that's alright. :)
Comment From: samyukta-31
Initial thoughts -

- `np.nan` cannot be coerced to an integer. If either the column it belongs to or the row it belongs to contains a float, no error is thrown; the extracted row's dtype simply becomes float as well:

```python
pd.DataFrame({'a': [1, 2], 'b': pd.Categorical([3.0, np.nan])}).iloc[1, :]  # No error
pd.DataFrame({'a': [1.0, 2.0], 'b': pd.Categorical([3, np.nan])}).iloc[1, :]  # No error
pd.DataFrame({'a': [1, 2], 'b': pd.Categorical([3, np.nan])}).iloc[1, :]  # Error
```

- The issue repeats for `None`, `np.datetime64('NaT')`, and `pd.NA` as well.
- The issue also occurs for `pd.DataFrame({'a': [1, 2], 'b': pd.Series([3, np.nan], dtype="category")}).iloc[1, :]`, as well as for `dtype=int` (which is expected, since `np.nan` is not an integer), but does not occur for `dtype='string'`, since the string dtype is inherently compatible with `pd.NA` and therefore converts all NaN values to `pd.NA`.
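The string-dtype behavior described above can be checked with a short sketch (the `'x'`/`np.nan` values are illustrative, not taken from the report):

```python
import numpy as np
import pandas as pd

# The nullable string dtype stores the missing value as pd.NA, so the
# extracted mixed row falls back to object dtype instead of raising.
df = pd.DataFrame({'a': [1, 2], 'b': pd.Series(['x', np.nan], dtype='string')})
row = df.iloc[1, :]
```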
Therefore the easiest fix seems to be to give any column containing a NaN the `object` dtype (which can accommodate heterogeneous types) within pandas/core/arrays/categorical.py. This may have performance implications, however, so I would love to hear some thoughts on what the trade-offs might be.
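One trade-off can be illustrated directly: casting a categorical column to object gives up the memory savings that motivate `Categorical` in the first place. A minimal comparison (sizes will vary by machine and pandas version):

```python
import pandas as pd

# A low-cardinality categorical column vs. the same data as object dtype.
cat = pd.Series(pd.Categorical(['a', 'b'] * 10_000))
obj = cat.astype(object)

# deep=True includes the memory of the string payloads themselves.
mem_cat = cat.memory_usage(deep=True)
mem_obj = obj.memory_usage(deep=True)
```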
Comment From: liammaguire1
take