Regularization in Deep Learning is essential for overcoming overfitting. When your training accuracy is very high but your test accuracy is very low, the model is overfitting the training dataset and struggles to make good predictions on the test dataset.
Overfitting in Deep Learning can result from a very deep neural network or a large number of neurons. Regularization is the technique of reducing the number of effective neurons, or nullifying the effect of certain neurons.
With Regularization in Deep Learning, we nullify the effect of certain neurons and thus create a simpler network that generates a decision boundary fitting well on both the training and test datasets.
If our model is not overfitting, we do not need Regularization; we apply it only when the model is overfitting. A small sketch of what this looks like in code is shown below.
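To make the idea concrete, here is a minimal sketch in Keras (not from the video itself; it assumes TensorFlow is installed, and the input size of 20 features, dropout rate of 0.5, and L2 strength of 1e-4 are illustrative placeholders). Dropout randomly nullifies the effect of a subset of neurons during training, and an L2 penalty discourages large weights, both of which push the network toward a simpler decision boundary:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# A small binary classifier with two common regularization techniques:
# - Dropout randomly zeroes a fraction of neuron outputs during training,
#   which "nullifies the effect of certain neurons" as described above.
# - An L2 penalty (weight decay) discourages large weights, keeping the
#   learned decision boundary simpler.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),            # assumed 20 input features
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # assumed L2 strength
    layers.Dropout(0.5),                    # drop 50% of activations each step
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary classification head
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Dropout is active only during training; at test time all neurons are used, so the regularized network makes full-capacity predictions while having learned simpler, more robust features.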
Overfitting and Underfitting : https://www.youtube.com/watch?v=SOI39DEHGSk&t=58s
Complete Neural Network playlist : https://www.youtube.com/watch?v=U1omz0B9FTw&list=PLuhqtP7jdD8Chy7QIo5U0zzKP8-emLdny&t=0s
Complete Logistic Regression Playlist : https://www.youtube.com/watch?v=xJjr_LPfBCQ&list=PLuhqtP7jdD8BpW2kOdIbjLI3HpuqeoMb-&t=0s
Complete Linear Regression Playlist : https://www.youtube.com/watch?v=xJjr_LPfBCQ&list=PLuhqtP7jdD8BpW2kOdIbjLI3HpuqeoMb-&t=0s
Timestamps:
0:00 The Problem
0:56 Overfitting in Deep Learning
2:35 Overfitting in Linear Regression
3:39 Regularization Definition
This is Your Lane to Machine Learning
Subscribe to my channel, because I upload a new Machine Learning video every week : https://www.youtube.com/channel/UCJFAF6IsaMkzHBDdfriY-yQ?sub_confirmation=1