Overfitting when validation loss is close to train loss

I am training an LSTM network with Keras, using a train-validation split of 0.4. Here is the figure for the loss; you can see that the training and validation losses are quite close.

[Figure: training and validation loss curves over epochs]
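For reference, a minimal sketch of the kind of setup described above. The architecture, data shapes, and hyperparameters are assumptions (they are not given in the question); only the Keras LSTM and the 0.4 validation split come from the post.

```python
# Minimal sketch (hypothetical shapes and layer sizes) of an LSTM regression
# model trained in Keras with a 0.4 validation split, as described above.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed data layout: (samples, timesteps, features) with a scalar target.
X = np.random.rand(1000, 20, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(20, 8)),
    layers.LSTM(64),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# validation_split=0.4 mirrors the train-val split mentioned in the question.
history = model.fit(X, y, epochs=100, batch_size=32,
                    validation_split=0.4, verbose=0)
```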

The r value (correlation between predictions and targets) is around 0.8 for the train set but only 0.6 for the test set. Here are the scatter plots.

[Scatter plot: predictions vs. targets, train set]

[Scatter plot: predictions vs. targets, test set]
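The r values could be computed along these lines; the variable names (model, X_train, y_train, X_test, y_test) are assumed placeholders, not code from the question.

```python
# Sketch: Pearson r between targets and predictions on the train and test sets,
# assuming a fitted model and held-out arrays as named below.
from scipy.stats import pearsonr

y_pred_train = model.predict(X_train).ravel()
y_pred_test = model.predict(X_test).ravel()

r_train, _ = pearsonr(y_train.ravel(), y_pred_train)  # ~0.8 reported above
r_test, _ = pearsonr(y_test.ravel(), y_pred_test)     # ~0.6 reported above
print(f"train r = {r_train:.2f}, test r = {r_test:.2f}")
```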

Any ideas on how to improve performance?

I have tried stopping training at epoch 60, but the model still overfits, even though the train and validation losses remain similar.
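Instead of picking a stopping epoch by hand, one option is Keras's EarlyStopping callback, which halts when the validation loss stops improving and restores the best weights. The patience value below is an assumption, not part of the original setup.

```python
# Sketch: automatic early stopping on validation loss as an alternative to
# manually stopping at epoch 60 (callback settings are assumed, not given).
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=10,
                           restore_best_weights=True)
history = model.fit(X, y,
                    epochs=200,
                    batch_size=32,
                    validation_split=0.4,
                    callbacks=[early_stop],
                    verbose=0)
```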

Comments