Pandas version checks

  • [X] I have checked that this issue has not already been reported.

  • [X] I have confirmed this bug exists on the latest version of pandas.

  • [ ] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import datetime
import numpy as np

delta = datetime.timedelta(days=1)

# this is true
assert pd.Series(delta).dtype == np.dtype('timedelta64[ns]')

# this fails
actual_dtype = (delta * pd.Series(1)).dtype
assert actual_dtype == np.dtype('timedelta64[ns]'), actual_dtype

Issue Description

Constructing a Series from a datetime.timedelta gives a timedelta64 with nanosecond resolution, but multiplying (or dividing) a datetime.timedelta by a numeric Series gives microsecond resolution.
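For concreteness, printing the dtypes from the two paths (a small illustration on pandas 2.2.2; the microsecond dtype is the one described above):

import pandas as pd
import datetime

delta = datetime.timedelta(days=1)

print(pd.Series(delta).dtype)        # timedelta64[ns]
print((delta * pd.Series(1)).dtype)  # timedelta64[us]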

Expected Behavior

The resolution should be the same in both cases.

Installed Versions

INSTALLED VERSIONS
------------------
commit                : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python                : 3.9.18.final.0
python-bits           : 64
OS                    : Darwin
OS-release            : 23.6.0
Version               : Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
machine               : arm64
processor             : arm
byteorder             : little
LC_ALL                : None
LANG                  : en_US.UTF-8
LOCALE                : en_US.UTF-8

pandas                : 2.2.2
numpy                 : 1.26.3
pytz                  : 2023.3.post1
dateutil              : 2.8.2
setuptools            : 68.2.2
pip                   : 23.3.1
Cython                : None
pytest                : None
hypothesis            : None
sphinx                : None
blosc                 : None
feather               : None
xlsxwriter            : None
lxml.etree            : None
html5lib              : None
pymysql               : None
psycopg2              : None
jinja2                : None
IPython               : 8.18.1
pandas_datareader     : None
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : None
bottleneck            : None
dataframe-api-compat  : None
fastparquet           : None
fsspec                : None
gcsfs                 : None
matplotlib            : None
numba                 : None
numexpr               : None
odfpy                 : None
openpyxl              : None
pandas_gbq            : None
pyarrow               : None
pyreadstat            : None
python-calamine       : None
pyxlsb                : None
s3fs                  : None
scipy                 : None
sqlalchemy            : None
tables                : None
tabulate              : None
xarray                : None
xlrd                  : None
zstandard             : None
tzdata                : 2023.4
qtpy                  : None
pyqt5                 : None

Comment From: rhshadrach

Thanks for the report, confirmed on main. Further investigations and PRs to fix are welcome!

Comment From: yuanx749

Hmm, pd.Timedelta(days=1) has microsecond resolution.

Comment From: rhshadrach

@yuanx749 - I'm seeing nanosecond.

x = pd.Timedelta(days=1)
print(x.unit)
# ns

Comment From: eabjab

take

Comment From: eabjab

Alright, I did a little digging and found what I believe to be the source of the issue: the pandas Timedelta constructor special-cases the resolution when it is constructed from a datetime.timedelta object.

To illustrate:

import pandas as pd
import datetime
import numpy as np

print(pd.Timedelta(days=1).unit) 
# ns

print(pd.Timedelta(datetime.timedelta(days=1)).unit) 
# us

When a datetime.timedelta object is prepared for array ops, it is cast up to a Timedelta, which explains the microsecond resolution observed in the issue description:

# array_ops.py
def maybe_prepare_scalar_for_op(obj, shape: Shape):
    if type(obj) is datetime.timedelta:
        # GH#22390  cast up to Timedelta to rely on Timedelta
        # implementation; otherwise operation against numeric-dtype
        # raises TypeError
        return Timedelta(obj)
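This suggests the cast-up itself is where nanosecond resolution is lost; substituting an explicit pd.Timedelta for the stdlib timedelta in the same operation appears to keep nanoseconds (a quick user-level check, not pandas source):

import datetime
import pandas as pd

delta = datetime.timedelta(days=1)

print((delta * pd.Series([1])).dtype)                  # timedelta64[us]
print((pd.Timedelta(days=1) * pd.Series([1])).dtype)   # timedelta64[ns]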

However, when a Series is constructed from a datetime.timedelta object, that object is NOT cast up to a pandas Timedelta. Rather, it is converted via the maybe_infer_to_datetimelike function (and subsequently the convert_to_timedelta64 function), which results in a numpy timedelta64 with nanosecond resolution.

The interesting part is that when handling datetime.timedelta objects, the Timedelta constructor explicitly sets the resolution to microseconds, whereas the convert_to_timedelta64 function explicitly sets the resolution to nanoseconds.

# in Timedelta constructor
elif PyDelta_Check(value):
    # pytimedelta object -> microsecond resolution
    new_value = delta_to_nanoseconds(
        value, reso=NPY_DATETIMEUNIT.NPY_FR_us
    )
    return cls._from_value_and_reso(
        new_value, reso=NPY_DATETIMEUNIT.NPY_FR_us
    )

# in convert_to_timedelta64 function
if PyDelta_Check(ts):
    ts = np.timedelta64(delta_to_nanoseconds(ts), "ns")
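The two paths can be seen side by side from user code (observed behavior, matching the snippets above):

import datetime
import pandas as pd

td = datetime.timedelta(days=1)

# Scalar path (Timedelta constructor): microsecond resolution.
print(pd.Timedelta(td).unit)    # us

# Series construction path (via convert_to_timedelta64): nanosecond resolution.
print(pd.Series([td]).dtype)    # timedelta64[ns]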

In light of this inconsistency, could I get some clarification on which resolution is the correct/expected behavior for datetime.timedelta objects? My intuition is that the Timedelta constructor is handling it incorrectly: the delta_to_nanoseconds call looks redundant, since the target resolution passed in is the same resolution the function already assigns to datetime.timedelta input. But I'm not 100% sure.
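To illustrate why that conversion looks like a no-op rescaling: datetime.timedelta already stores days/seconds/microseconds, so a microsecond target resolution is just its own components summed (pytimedelta_to_us below is a hypothetical helper for illustration, not pandas code):

import datetime

def pytimedelta_to_us(td: datetime.timedelta) -> int:
    # datetime.timedelta natively stores days, seconds, and microseconds,
    # so converting to a microsecond count only rescales its own fields.
    return (td.days * 86_400 + td.seconds) * 1_000_000 + td.microseconds

td = datetime.timedelta(days=1)
print(pytimedelta_to_us(td))                     # 86400000000
print(td // datetime.timedelta(microseconds=1))  # 86400000000 (stdlib check)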

Comment From: jbrockmendel

The inconsistency is that we have implemented unit-inference for the Timedelta scalar but not in array_to_timedelta. Implementing it in array_to_timedelta is The Correct Solution, but that's a non-trivial effort.
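In the meantime, a possible user-side stopgap (a sketch, assuming pandas 2.x where unit-aware astype is available) is to cast the result back to nanosecond resolution explicitly:

import datetime
import pandas as pd

delta = datetime.timedelta(days=1)

# Cast the microsecond-resolution result back up to nanoseconds explicitly.
result = (delta * pd.Series([1])).astype("timedelta64[ns]")
print(result.dtype)  # timedelta64[ns]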