Pandas version checks

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • [x] I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import numpy as np
import pandas as pd


# Create DataFrame with datetime index and multiple columns
# This reproduces the bug with ~6 years of hourly data (2033-2038)
np.random.seed(42)
date_range = pd.date_range('2033-01-01', '2038-12-31 23:00:00', freq='h')  # lowercase 'h': 'H' is deprecated in pandas 2.2
n_cols = 40
data = np.random.rand(len(date_range), n_cols) * 0.1  # Values between 0 and 0.1

df = pd.DataFrame(data, index=date_range, columns=range(n_cols))

# Create a Series of ones with the same index
ones_series = pd.Series(1.0, index=df.index)

print(f"DataFrame shape: {df.shape}")
print(f"Memory usage (MB): {df.memory_usage(deep=True).sum() / 1024**2:.2f}")
print(f"Original data sample (should be > 0):")
print(df.iloc[32:37, 23])  # Show some sample values

# Perform the multiplication that causes corruption
print("\nPerforming multiplication...")
result = df.mul(ones_series, axis=0)

# Check for corruption
print(f"After multiplication (should be identical):")
print(result.iloc[32:37, 23])

# Verify corruption
are_equal = df.equals(result)
print(f"\nDataFrames equal: {are_equal}")

if not are_equal:
    # Count corrupted values
    diff_mask = df.values != result.values
    n_corrupted = diff_mask.sum()
    print(f"CORRUPTION DETECTED: {n_corrupted} values corrupted!")

    # Show corruption details
    corrupted_rows, corrupted_cols = np.where(diff_mask)
    if len(corrupted_rows) > 0:
        print(f"\nCorruption sample:")
        for i in range(min(5, len(corrupted_rows))):
            row, col = corrupted_rows[i], corrupted_cols[i]
            original = df.iloc[row, col]
            corrupted = result.iloc[row, col]
            date = df.index[row]
            print(f"  {date}, Col {col}: {original:.4f} -> {corrupted:.4f}")

    # Verify that corrupted values are zeros
    corrupted_values = result.values[diff_mask]
    all_zeros = np.all(corrupted_values == 0.0)
    print(f"\nAre all corrupted values zero? {all_zeros}")

    # Show which columns are affected
    unique_affected_cols = np.unique(corrupted_cols)
    print(f"Number of affected columns: {len(unique_affected_cols)}")
    print(f"Affected columns: {unique_affected_cols}")

# Demonstrate that numpy approach works correctly
print(f"\nTesting numpy workaround...")
numpy_result = pd.DataFrame(
    df.to_numpy() * ones_series.to_numpy()[:, None],
    index=df.index,
    columns=df.columns
)

numpy_works = df.equals(numpy_result)
print(f"Numpy approach works correctly: {numpy_works}")

Issue Description

The DataFrame.mul() method corrupts data by setting non-zero values to zero when a DataFrame with a datetime index is multiplied by a Series of ones. The corruption appears only once the DataFrame exceeds a certain size, and it silently compromises data integrity.
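
One way to narrow the size condition down is to disable pandas' optional numexpr fast path and re-run the operation. A minimal sketch, continuing from the repro above (compute.use_numexpr is a documented pandas option; the numexpr connection is suggested by the comments below):

pd.set_option("compute.use_numexpr", False)  # force the plain numpy code path
result_no_ne = df.mul(ones_series, axis=0)
print(df.equals(result_no_ne))  # True here but False above would implicate the numexpr path
pd.set_option("compute.use_numexpr", True)   # restore the default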

Expected Behavior

When multiplying a DataFrame by a Series of ones using df.mul(ones_series, axis=0), all original values should be preserved (multiplied by 1.0).
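
Concretely, an equality check along these lines should pass; a minimal sketch using pandas' own testing helper with the objects from the repro above:

pd.testing.assert_frame_equal(df.mul(ones_series, axis=0), df)  # should not raise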

Installed Versions

INSTALLED VERSIONS
------------------
commit                 : 0691c5cf90477d3503834d983f69350f250a6ff7
python                 : 3.11.11
python-bits            : 64
OS                     : Windows
OS-release             : 10
Version                : 10.0.14393
machine                : AMD64
processor              : Intel64 Family 6 Model 85 Stepping 4, GenuineIntel
byteorder              : little
LC_ALL                 : None
LANG                   : None
LOCALE                 : English_United States.1252

pandas                 : 2.2.3
numpy                  : 2.3.0
pytz                   : 2025.2
dateutil               : 2.9.0.post0
pip                    : 24.3.1
Cython                 : None
sphinx                 : 8.1.3
IPython                : 8.31.0
adbc-driver-postgresql : None
adbc-driver-sqlite     : None
bs4                    : 4.12.3
blosc                  : None
bottleneck             : None
dataframe-api-compat   : None
fastparquet            : None
fsspec                 : None
html5lib               : None
hypothesis             : None
gcsfs                  : None
jinja2                 : 3.1.5
lxml.etree             : 5.3.0
matplotlib             : 3.10.0
numba                  : None
numexpr                : 2.10.2
odfpy                  : None
openpyxl               : 3.1.5
pandas_gbq             : None
psycopg2               : 2.9.9
pymysql                : None
pyarrow                : 18.1.0
pyreadstat             : None
pytest                 : 8.3.4
python-calamine        : None
pyxlsb                 : 1.0.10
s3fs                   : None
scipy                  : 1.15.2
sqlalchemy             : 2.0.37
tables                 : 3.10.2
tabulate               : None
xarray                 : 2025.1.1
xlrd                   : 2.0.1
xlsxwriter             : 3.2.0
zstandard              : 0.23.0
tzdata                 : 2025.2
qtpy                   : 2.4.3
pyqt5                  : None

Comment From: MarceloVelludo

This bug doesn't occur on recent versions of pandas. Our test runs perfectly fine on the latest version; the issue was related to numexpr, which pandas activates as an optimization when a DataFrame has more than 1M elements.
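
For reference, that element-count threshold can be inspected directly. A minimal sketch relying on the internal _MIN_ELEMENTS constant in pandas.core.computation.expressions (not public API, so subject to change):

import pandas.core.computation.expressions as expressions

# The repro frame has ~2.1M elements, which is above the numexpr cutoff
print(df.size, expressions._MIN_ELEMENTS, df.size > expressions._MIN_ELEMENTS)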

Comment From: jbrockmendel

I also cannot reproduce this on main (though on macOS).