Hi,

are there any plans to add cuDNN-accelerated versions of LSTM and GRU to the PyTorch backend? Without cuDNN acceleration, LSTM and GRU are considerably (several times) slower, even when running on a GPU; yet we still use RNNs heavily (for example, adding them after a Transformer encoder still helps in some cases).

torch.nn.LSTM/torch.nn.GRU offer cuDNN acceleration, and wrapping them in a keras.layers.Layer works, but the resulting model is not backend-agnostic (so it cannot be used across frameworks).
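For context, the wrapping I have in mind looks roughly like this (a minimal sketch assuming Keras 3's keras.layers.TorchModuleWrapper and the torch backend; the class and its details are illustrative, not tested):

```python
import keras
import torch

class TorchLSTM(keras.layers.Layer):
    """Torch-backend-only LSTM wrapper; this is what breaks backend agnosticism."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # TorchModuleWrapper lets Keras track the torch module's parameters.
        self.lstm = keras.layers.TorchModuleWrapper(
            torch.nn.LSTM(
                input_size=input_shape[-1],
                hidden_size=self.units,
                batch_first=True,  # Keras uses (batch, time, features)
            )
        )

    def call(self, inputs):
        outputs, _ = self.lstm(inputs)  # discard the (h_n, c_n) final states
        return outputs
```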

Thanks for considering this :pray: and cheers!

PS: Relatedly, torch.nn.LSTM/GRU offer bidirectional computation in a single call (by passing bidirectional=True) -- I am not sure how much faster it is than two asynchronous unidirectional computations, but if it is faster, keras.layers.Bidirectional would probably have to special-case keras.layers.LSTM and keras.layers.GRU to support it.
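For illustration, the single-call interface concatenates the two directions along the feature axis:

```python
import torch

lstm = torch.nn.LSTM(input_size=128, hidden_size=64,
                     batch_first=True, bidirectional=True)
x = torch.randn(8, 50, 128)   # (batch, time, features)
outputs, (h_n, c_n) = lstm(x)
print(outputs.shape)          # (8, 50, 128): forward and backward concatenated
print(h_n.shape)              # (2, 8, 64): final hidden state per direction
```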

Comment From: haifeng-jin

@foxik Thanks for the issue! Would you like to contribute this by modifying the following file? https://github.com/keras-team/keras/blob/master/keras/backend/torch/rnn.py#L377C1-L382C30

Comment From: foxik

@haifeng-jin I am not sure I can do it correctly. I assume that:

- cudnn_ok will probably also need to consider the current device (whether it is CUDA or not), in addition to verifying that the arguments are supported by the cuDNN implementation.
  - That may require changes to the surrounding code, because cudnn_ok is currently called only for the TensorFlow backend. On the other hand, it is used only to set supports_jit to False, which is probably not needed for PyTorch, because the sources indicate that TorchScript can compile torch.nn.LSTM/GRU.
- torch.nn.LSTM/GRU is a whole layer including its own parameters, but we need to use the given parameters. We should therefore probably call torch._VF.lstm/gru instead (see the sketch after this list), though I am not sure whether that would be considered OK.
- Nontrivial care must be taken to ensure that the results of the cuDNN branch are the same as those of the usual branch.
- go_backwards has no direct analogue in the Torch API, so some manual reversing will be needed.
- On the other hand, a bidirectional run is supported by the Torch API in a single call, so:
  - similarly to backend.lstm/gru, backend.lstm_bidirectional/gru_bidirectional should be introduced;
  - the Bidirectional wrapper should try calling this lstm/gru_bidirectional to use the cuDNN-accelerated bidirectional call (only PyTorch would implement this method);
  - with this support in place, go_backwards would rarely be used on PyTorch, so it would not matter much if its implementation were not great.
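To make the torch._VF point concrete, a rough sketch of how the device check and the cuDNN branch might look (the function names and the single-layer, single-direction flat weight list are my assumptions, following torch.nn.LSTM conventions; untested):

```python
import torch

def cudnn_ok_torch(inputs):
    # Beyond the existing argument checks, the torch backend would also need
    # to verify that the tensors actually live on a CUDA device.
    return inputs.is_cuda and torch.backends.cudnn.is_available()

def lstm_with_cudnn(inputs, h0, c0, w_ih, w_hh, b_ih, b_hh, go_backwards=False):
    # go_backwards has no torch analogue, so reverse the time axis by hand.
    # (Whether the result then matches Keras's go_backwards semantics exactly
    # is one of the equivalence checks mentioned above.)
    if go_backwards:
        inputs = torch.flip(inputs, dims=[1])
    # torch._VF.lstm is what torch.nn.LSTM.forward calls internally; it takes
    # a flat list of weights, so the Keras-managed parameters can be passed in.
    outputs, h_n, c_n = torch._VF.lstm(
        inputs,
        (h0, c0),
        [w_ih, w_hh, b_ih, b_hh],  # one layer, one direction, with biases
        True,   # has_biases
        1,      # num_layers
        0.0,    # dropout
        False,  # train
        False,  # bidirectional
        True,   # batch_first
    )
    return outputs, h_n, c_n
```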

In any case, for the time being I unfortunately do not have time to work on this.

Comment From: park

This feature would be great indeed. Hopefully someone highly capable will attend to it sometime soon.

Comment From: github-actions[bot]

This issue is stale because it has been open for 180 days with no activity. It will be closed if no further activity occurs. Thank you.

Comment From: Jeffrharr

I'd be interested in tackling this. I believe there's enough information in this ticket to adapt one of the other implementations without too much trouble. I'll make sure it's well tested and that its results are comparable to the other implementations.

@haifeng-jin since this would be my first contribution -- how would I take this ticket? Additionally, do any of the test servers triggered from GitHub have NVIDIA cards?