```python
>>> False & pd.NA
False
>>> np.bool_(False) & pd.NA
<NA>
```

This surprised me; is it intentional, @jorisvandenbossche?

Comment From: jorisvandenbossche

Yes, ideally those should be consistent; this looks like a bug. Similarly, using a numpy array (instead of a numpy scalar) also gives a different result:

```python
>>> pd.NA & False
False
>>> pd.NA & np.bool_(False)
<NA>
>>> pd.NA & np.array([True, False])
array([<NA>, <NA>], dtype=object)
```

In the implementation of `__and__`, we only consider the cases of `other` being `True`, `False`, or `NA`, and so we defer to numpy for numpy arrays or scalars. (I am not entirely sure how we then end up with the result above, though, because explicitly putting the NA in a numpy container, as in `np.array([pd.NA], dtype=object) & np.array([True, False])`, actually gives the correct result.)

Looking at the other dunders, the arithmetic/comparison ops seem to be using helpers like `_create_binary_propagating_op`, which also handle numpy arrays for `other`. We should probably do something similar for the logical operators?
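A rough idea of what such a helper could do, as a self-contained sketch: handle sequence-likes elementwise instead of deferring to numpy. The `NAStub` class, the `kleene_and` name, and the list-based elementwise branch are all assumptions for illustration, not the pandas implementation (which would check `np.ndarray` and `np.bool_` explicitly).

```python
class NAStub:
    """Stand-in for pd.NA so the sketch is self-contained (assumption)."""
    def __repr__(self):
        return "<NA>"

NA = NAStub()

def kleene_and(other):
    """Kleene-logic `NA & other`, extended to plain sequences."""
    if isinstance(other, (list, tuple)):
        # propagate elementwise instead of deferring to numpy
        return [kleene_and(item) for item in other]
    if other is False:
        return False        # False & NA is False regardless of what NA hides
    if other is True or other is NA:
        return NA           # the result could be either value, so it stays NA
    return NotImplemented   # everything else still defers to the other operand

print(kleene_and([True, False]))  # → [<NA>, False]
```

This way the array case would never reach numpy's object-dtype loop, which is where the surprising `<NA>`-for-`False` results come from.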

Comment From: jorisvandenbossche

So what I don't understand is how we get this difference in behaviour:

```python
>>> np.array(pd.NA, dtype=object) & np.array([True, False])
array([<NA>, False], dtype=object)

>>> pd.NA & np.array([True, False])
array([<NA>, <NA>], dtype=object)
```

Comment From: rajiv-dsc-ml

Can I take up this task? I want to contribute. Any suggestions are welcome.

> So what I don't understand is how we get this difference in behaviour:

```python
>>> np.array(pd.NA, dtype=object) & np.array([True, False])
array([<NA>, False], dtype=object)

>>> pd.NA & np.array([True, False])
array([<NA>, <NA>], dtype=object)
```

Comment From: rajiv-dsc-ml

I think the logic of `a & b` is to return `False` directly if either `a` or `b` is the Python built-in `False`. However, if the `False` is inside a numpy array or is a numpy scalar, then pandas' missing-value logic supersedes it. Hence:

```python
>>> np.array([False]) & pd.NA
array([<NA>], dtype=object)
```

By the same logic:

```python
>>> np.bool_(False) & pd.NA
<NA>
```
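The fall-through mechanism behind this can be sketched in plain Python: `pd.NA.__and__` returns `NotImplemented` for operands it doesn't recognize, so Python hands control to the other operand's reflected method, and a numpy-like scalar whose `__and__` accepts anything then decides the result. `FakeBool` here is a hypothetical stand-in for `np.bool_`, not numpy's actual code path.

```python
class FakeBool:
    """Hypothetical numpy-like scalar: its __and__ accepts any operand,
    so it wins whenever the other side returns NotImplemented."""
    def __init__(self, value):
        self.value = value
    def __and__(self, other):
        return ("FakeBool-handled", self.value)
    __rand__ = __and__

class NASentinel:
    """Minimal NA: only recognizes the built-in False, defers otherwise."""
    def __and__(self, other):
        if other is False:
            return False
        return NotImplemented
    __rand__ = __and__

na = NASentinel()

# Built-in False: bool.__and__ fails, na.__rand__ handles it → False
assert (False & na) is False
# FakeBool stands in for np.bool_: na defers, FakeBool.__rand__ takes over
assert (na & FakeBool(False)) == ("FakeBool-handled", False)
```

This is consistent with the observation above: the answer depends on which operand ends up handling the operation, not on the boolean value itself.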