Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd
import numpy as np
import sys
# Create DataFrame with datetime index and multiple columns
# This reproduces the bug with ~6 years of hourly data (2033-2038)
np.random.seed(42)
date_range = pd.date_range('2033-01-01', '2038-12-31 23:00:00', freq='H')
n_cols = 40
data = np.random.rand(len(date_range), n_cols) * 0.1 # Values between 0 and 0.1
df = pd.DataFrame(data, index=date_range, columns=range(n_cols))
# Create a Series of ones with the same index
ones_series = pd.Series(1.0, index=df.index)
print(f"DataFrame shape: {df.shape}")
print(f"Memory usage (MB): {df.memory_usage(deep=True).sum() / 1024**2:.2f}")
print(f"Original data sample (should be > 0):")
print(df.iloc[32:37, 23]) # Show some sample values
# Perform the multiplication that causes corruption
print("\nPerforming multiplication...")
result = df.mul(ones_series, axis=0)
# Check for corruption
print(f"After multiplication (should be identical):")
print(result.iloc[32:37, 23])
# Verify corruption
are_equal = df.equals(result)
print(f"\nDataFrames equal: {are_equal}")
if not are_equal:
    # Count corrupted values
    diff_mask = df.values != result.values
    n_corrupted = diff_mask.sum()
    print(f"CORRUPTION DETECTED: {n_corrupted} values corrupted!")
    # Show corruption details
    corrupted_rows, corrupted_cols = np.where(diff_mask)
    if len(corrupted_rows) > 0:
        print(f"\nCorruption sample:")
        for i in range(min(5, len(corrupted_rows))):
            row, col = corrupted_rows[i], corrupted_cols[i]
            original = df.iloc[row, col]
            corrupted = result.iloc[row, col]
            date = df.index[row]
            print(f"  {date}, Col {col}: {original:.4f} -> {corrupted:.4f}")
    # Verify that corrupted values are zeros
    corrupted_values = result.values[diff_mask]
    all_zeros = np.all(corrupted_values == 0.0)
    print(f"\nAre all corrupted values zero? {all_zeros}")
    # Show which columns are affected
    unique_affected_cols = np.unique(corrupted_cols)
    print(f"Number of affected columns: {len(unique_affected_cols)}")
    print(f"Affected columns: {unique_affected_cols}")
# Demonstrate that numpy approach works correctly
print(f"\nTesting numpy workaround...")
numpy_result = pd.DataFrame(
df.to_numpy() * ones_series.to_numpy()[:, None],
index=df.index,
columns=df.columns
)
numpy_works = df.equals(numpy_result)
print(f"Numpy approach works correctly: {numpy_works}")
Issue Description
The DataFrame.mul() method corrupts data by setting non-zero values to zero when a DataFrame with a datetime index is multiplied by a Series of ones. The corruption occurs only under specific conditions related to DataFrame size, and silently compromises data integrity.
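For context on the size condition: a quick count (assuming the same date range and 40 columns as the repro above) shows the frame comfortably exceeds one million elements, which is the rough scale at which pandas switches elementwise arithmetic to an optimized code path.

```python
import pandas as pd

# Element count of the repro: ~6 years of hourly timestamps x 40 columns.
idx = pd.date_range("2033-01-01", "2038-12-31 23:00:00", freq="h")
n_elements = len(idx) * 40
print(len(idx), n_elements)  # 52584 rows, 2103360 elements
```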
Expected Behavior
When multiplying a DataFrame by a Series of ones with df.mul(ones_series, axis=0), every original value should be preserved, since each is simply multiplied by 1.0.
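As a sanity check of the invariant, here is a minimal sketch on a small frame (well below any size threshold): multiplying a float by 1.0 is exact in IEEE-754 arithmetic, so the result must round-trip bit-identically.

```python
import numpy as np
import pandas as pd

# Small frame: multiplying by a Series of ones must round-trip exactly,
# since x * 1.0 is exact in IEEE-754 arithmetic.
idx = pd.date_range("2033-01-01", periods=100, freq="h")
small = pd.DataFrame(np.random.default_rng(0).random((100, 4)), index=idx)
assert small.mul(pd.Series(1.0, index=idx), axis=0).equals(small)
print("ok")
```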
Installed Versions
Comment From: MarceloVelludo
This bug doesn't occur on recent versions of pandas. Our test runs perfectly fine on the latest version; the issue was related to numexpr, which pandas activates as an optimization when the operands have more than 1M elements.
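For anyone still on an affected release, a hedged workaround sketch: pandas exposes the compute.use_numexpr option, which disables the numexpr fast path so the multiplication runs through plain NumPy (the frame below is a smaller stand-in for the original repro, just large enough to cross the ~1M-element mark).

```python
import numpy as np
import pandas as pd

# Build a frame large enough (> 1M elements) that pandas would normally
# dispatch the elementwise multiply to numexpr.
idx = pd.date_range("2033-01-01", periods=30_000, freq="h")
df = pd.DataFrame(np.random.default_rng(42).random((len(idx), 40)), index=idx)
ones = pd.Series(1.0, index=idx)

# Workaround on affected versions: turn off the numexpr code path.
pd.set_option("compute.use_numexpr", False)
result = df.mul(ones, axis=0)
pd.set_option("compute.use_numexpr", True)  # restore the default

print(df.equals(result))  # True: the plain-NumPy path preserves all values
```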
Comment From: jbrockmendel
I also cannot reproduce this on main (though on macOS).