The likelihood function gives us the joint probability of the observed data as a function of the model parameters. In other words, it describes how likely it is to make that particular observation with those parameters. As usual, this is best seen with an illustration:

<div align="center">
<img src="uploads/7a6f01eb27597c075e4d8fcee7eee1b8/bs1.png" width="400">
</div>

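To make the idea concrete before the two-parameter example, here is a minimal sketch. It assumes a toy model in which the observations are Gaussian with unknown mean `mu` and a fixed, known standard deviation; the data values are made up for illustration and are not taken from the figures:

```python
import numpy as np
from scipy.stats import norm

# Made-up observations from a process we model as Gaussian with unknown mean
# mu and a fixed, known standard deviation (both are illustrative assumptions).
data = np.array([0.9, 1.2, 0.8, 1.1])

def likelihood(mu, sigma=0.2):
    """Joint probability of the observed data as a function of the parameter mu."""
    return np.prod(norm.pdf(data, loc=mu, scale=sigma))

# The same observations are far more likely under mu = 1.0 than under mu = 0.0:
print(likelihood(1.0))  # relatively large
print(likelihood(0.0))  # vanishingly small
```
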
Here we have two model parameters, $`w_1, w_2`$. What we want to know is the likelihood of different combinations of these parameters given the observed data:

<div align="center">
<img src="uploads/619342fc426ef02e69f0f85a45629f92/bs2.png" width="400">
</div>

We just assumed them to be Gaussians with means of zero. After making an observation $`P = (x_1,y_1)`$, we can calculate the likelihood for varying combinations of $`w_1, w_2`$:

<div align="center">
<img src="uploads/53a2da4fb2cb37511704cbcd1fc82f77/bs3.png" width="400">
</div>
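
A figure like this can be produced by evaluating the likelihood on a grid. The sketch below assumes a linear model $`y = w_1 + w_2 x`$ with Gaussian observation noise; the observed point and the noise level are illustrative values, not necessarily the ones behind the figure:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical observation P = (x1, y1) and an assumed noise level.
x1, y1 = 0.5, 0.5
sigma = 0.2

# Evaluate the likelihood on a grid of (w1, w2) combinations.
w1, w2 = np.meshgrid(np.linspace(-1, 2, 200), np.linspace(-1, 2, 200))

# Model: y = w1 + w2 * x with Gaussian noise, so the likelihood of the single
# point P is p(y1 | x1, w) = N(y1; w1 + w2 * x1, sigma^2).
likelihood = norm.pdf(y1, loc=w1 + w2 * x1, scale=sigma)

# High-likelihood values form a straight ridge in parameter space: every
# (w1, w2) with w1 + w2 * x1 = y1 explains the observation equally well.
```
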
Here we can see that the most likely distributions are spread around the $`y=1-x`$ line. The true values are also shown as + on the figure. In Bayesian regression, for instance, this likelihood is combined with the prior distribution $`p(w)`$ to update the weight probabilities $`p(w|y,X)`$. We will discuss the procedure in more detail [later on](DDE-1/Regression#bayesian-linear-regression).
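
As a small preview of that procedure, here is a sketch that reuses the same assumed model and made-up observation as the snippet above. Multiplying the likelihood surface by zero-mean Gaussian priors on the weights (the prior scale is also an assumed value) gives the unnormalized posterior over $`w_1, w_2`$:

```python
import numpy as np
from scipy.stats import norm

x1, y1, sigma = 0.5, 0.5, 0.2   # hypothetical observation and assumed noise level
w1, w2 = np.meshgrid(np.linspace(-1, 2, 200), np.linspace(-1, 2, 200))

likelihood = norm.pdf(y1, loc=w1 + w2 * x1, scale=sigma)   # p(y | X, w)
prior = norm.pdf(w1, scale=1.0) * norm.pdf(w2, scale=1.0)  # p(w): zero-mean Gaussians
posterior = likelihood * prior                             # Bayes: proportional to p(w | y, X)
posterior /= posterior.sum()                               # normalize over the grid

# The posterior concentrates along the likelihood ridge, pulled toward the
# origin by the zero-mean prior.
```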