|
|
|
|
|
|
|
## Bayesian Linear Regression
|
|
|
|
|
|
|
|
_Note: Visit the [statistics](DDE-1/Statistics) page if you are not familiar with the concepts of probability or likelihood._
|
|
|
|
|
|
|
|
|
|
|
|
In the “regression problem” (see the lecture notes on “ode to learning” and “regression”), we discussed that the model complexity must be chosen to be compatible with the data (its dimensionality and volume) in order to minimize over-fitting. We have also seen that we can apply regularization to the loss function, adding a penalty that discourages over-fitting. Nonetheless, its impact is limited, because the nature of the basis function (i.e. our scientific hypothesis) is still there, affecting the overall behavior of the deployed ML model.
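As a reminder of that point, the sketch below contrasts an ordinary least-squares polynomial fit with an L2-regularized (ridge) fit. The target function, sample size, polynomial degree, and penalty weight are all illustrative assumptions, not taken from the lecture; the regularizer shrinks the coefficient vector, but the polynomial basis itself, and hence the model's qualitative behavior, is unchanged.

```python
import numpy as np

# Illustrative setup (assumed, not from the notes): noisy samples of a sine wave
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=12)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

degree = 9
Phi = np.vander(x, degree + 1)  # polynomial basis functions (design matrix)

# Ordinary least squares: w = argmin ||Phi w - y||^2
w_ols, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Ridge regression: w = argmin ||Phi w - y||^2 + lam * ||w||^2
lam = 1e-3  # illustrative penalty weight
A = Phi.T @ Phi + lam * np.eye(degree + 1)
w_ridge = np.linalg.solve(A, Phi.T @ y)

# The L2 penalty shrinks the coefficients relative to the unregularized fit,
# damping over-fitting -- but the degree-9 polynomial hypothesis remains.
print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
```

Comparing the two coefficient norms shows the shrinkage effect of the penalty; tuning `lam` trades bias against variance, while changing the basis (the hypothesis) would change the model class itself.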
|
|
|
|
|