Moving this feature request from the TensorFlow repository: https://github.com/tensorflow/tensorflow/issues/38803

Are you willing to contribute it (Yes/No): No

Describe the feature and the current behavior/state.

Currently there is no way to use multiple validation sets with independent tracking of metrics.

Will this change the current API? How?

The simplest way I can think of is to accept a list of datasets in the validation_data parameter of Model.fit. Ideally there would also be a way to name each set, so that the logs indicate which set each validation metric corresponds to (see the sketch below).
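A minimal sketch of what the request could look like. This is purely hypothetical: validation_data does not accept a dict of named datasets today, and train_ds, clean_ds, and noisy_ds are placeholders.

```python
# Hypothetical API, shown only to illustrate the request.
model.fit(
    train_ds,
    epochs=10,
    validation_data={"val_clean": clean_ds, "val_noisy": noisy_ds},
)
# Desired log keys: val_clean_loss, val_clean_accuracy,
# val_noisy_loss, val_noisy_accuracy, ...
```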

Who will benefit from this feature?

Anyone training with multiple validation sets.

Comment From: rchao

There are a few solutions for this:

1) Use a callback that runs evaluation over an extra eval dataset; you can use multiple callbacks for different eval datasets (see the sketch after this list).

2) Use as many SidecarEvaluator instances as you like, either in separate threads or in separate processes, optionally on different machines.

3) Call Model.fit without validation data, followed by Model.evaluate calls, each with a different dataset.
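For option 1, here is a minimal sketch of such a callback, assuming TF 2.x and tf.data datasets. MultiValidationCallback is an illustrative name, not a Keras API.

```python
import tensorflow as tf

class MultiValidationCallback(tf.keras.callbacks.Callback):
    """Evaluates the model on several named datasets after each epoch.

    MultiValidationCallback is an illustrative name, not part of Keras.
    """

    def __init__(self, datasets):
        # datasets: dict mapping a log prefix (e.g. "val_easy") to a
        # tf.data.Dataset yielding (inputs, targets) batches.
        super().__init__()
        self.datasets = datasets

    def on_epoch_end(self, epoch, logs=None):
        logs = logs if logs is not None else {}
        for name, dataset in self.datasets.items():
            # return_dict=True yields {"loss": ..., "accuracy": ..., ...}
            results = self.model.evaluate(dataset, verbose=0, return_dict=True)
            for metric, value in results.items():
                # Prefix each metric so every set is tracked independently,
                # e.g. "val_easy_loss", "val_hard_accuracy".
                logs[f"{name}_{metric}"] = value
```

Passed as callbacks=[MultiValidationCallback({"val_easy": ds_a, "val_hard": ds_b})] to Model.fit, this adds one prefixed metric group per dataset to the epoch logs, so the extra metrics typically show up in the History object as well.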

Hope this helps!

Comment From: Gelesh

This would also be helpful for passing a sample of the training data as one of the validation sets. The loss on training data during a batch is subject to data augmentation, dropout, and other ways of injecting noise to prevent overfitting, so the training loss may not give a clear picture. The model may in fact have fitted the training data well, with the apparent gap coming only from the difference in how validation predictions and training predictions are computed (see the sketch below).
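As a hedged illustration of that idea, one could reuse the callback sketched above to evaluate on a clean (unaugmented) sample of the training data alongside the real validation set. raw_train_ds, augmented_train_ds, val_ds, and the sizes below are placeholders.

```python
# Placeholder datasets: raw_train_ds has no augmentation applied.
train_clean_ds = raw_train_ds.take(1000).batch(32)

model.fit(
    augmented_train_ds,
    epochs=10,
    callbacks=[MultiValidationCallback({
        "val": val_ds,                  # the usual validation set
        "train_clean": train_clean_ds,  # unaugmented training sample
    })],
)
# Comparing val_loss with train_clean_loss separates the effect of
# augmentation/dropout noise from a genuine generalization gap.
```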

Comment From: tilakrayal

Hello, thank you for reporting an issue.

We're currently in the process of migrating the new Keras 3 code base from keras-team/keras-core to keras-team/keras. Consequently, this issue may not be relevant to the Keras 3 code base. After the migration is successfully completed, feel free to reopen this issue at keras-team/keras if you believe it remains relevant to the Keras 3 code base. If instead this issue is a bug or security issue in legacy tf.keras, you can report a new issue at keras-team/tf-keras, which hosts the TensorFlow-only, legacy version of Keras.

To learn more about Keras 3, please take a look at https://keras.io/keras_core/announcement/. Thank you!

Comment From: ShamrockLee

@tilakrayal This issue is currently inside the keras-team/keras repository, and it still seems relevant. Would there be a chance to reopen it?