Hello
I am encountering a problem with keras.layers.Reshape. I am currently running Keras 3.9, but since the relevant code did not change in 3.10, the problem will be the same there.
In contrast to my expectation, and to the behavior I was accustomed to from Keras 2, the Keras 3 Reshape layer statically fixes all dimensions except the batch dimension. The current version of the code is at https://github.com/keras-team/keras/blob/3bedb9a970394879360fcb1c0264f3ffdc634a77/keras/src/layers/reshaping/reshape.py#L56 and reads:
def build(self, input_shape):
    sample_output_shape = operation_utils.compute_reshape_output_shape(
        input_shape[1:], self.target_shape, "target_shape"
    )
    self._resolved_target_shape = tuple(
        -1 if d is None else d for d in sample_output_shape
    )

def call(self, inputs):
    return ops.reshape(
        inputs, (ops.shape(inputs)[0],) + self._resolved_target_shape
    )
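To make the difference concrete, here is a minimal sketch of what I mean. The shapes are invented, and reading the private _resolved_target_shape attribute is only for illustration; the point is that the -1 in target_shape is frozen to a fixed value as soon as the non-batch input dimensions are known at build time, while ops.reshape leaves the -1 for the backend to resolve on every call:

```python
import numpy as np
from keras import layers, ops

# The Reshape layer resolves the -1 at build time: with non-batch input
# dims (6, 4), the target (-1, 2) is frozen to (12, 2) once and for all.
layer = layers.Reshape((-1, 2))
layer.build((None, 6, 4))
print(layer._resolved_target_shape)  # (12, 2) -- the -1 is gone

# ops.reshape passes the -1 through to the backend, so the same call
# pattern adapts to whatever non-batch size it is given.
x = np.arange(48, dtype="float32").reshape((2, 6, 4))
print(ops.reshape(x, (ops.shape(x)[0], -1, 2)).shape)   # (2, 12, 2)

x2 = np.arange(80, dtype="float32").reshape((2, 10, 4))
print(ops.reshape(x2, (ops.shape(x2)[0], -1, 2)).shape)  # (2, 20, 2)
```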
Clearly, this reduces the possible uses of the Reshape layer compared to the ops.reshape operator, which handles a -1 dimension without problems. It seems straightforward to extend the Reshape layer so that it preserves the flexibility of ops.reshape. Is there a reason why the shape is resolved statically? If not, I can propose a fix.
Thanks for your comments.