In Bayesian inference, the likelihood indicates the compatibility of the evidence with the given hypothesis. Consider the following linear model:
|
|
```math
y_p(x, w) = w_0 + w_1 x_1
```
|
|
|
|
|
|
|
|
Here we have two model parameters, $`w_0, w_1`$. What we want to know is the likelihood, $`p(y|w,x)`$, of observing $`y_1`$ given $`x_1`$ under the linear model above. Let's assume that the model weights have the following prior distributions, taken to be Gaussians with zero mean:
|
|
|
|
|
|
|
|
<div align="center">
|
|
<div align="center">
|
|
|
<img src="uploads/619342fc426ef02e69f0f85a45629f92/bs2.png" width="400">
|
|
<img src="uploads/619342fc426ef02e69f0f85a45629f92/bs2.png" width="400">
|
|
|
</div>
|
|
</div>
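As a minimal sketch of these priors, we can evaluate them on a grid of weight values. The grid range and the prior standard deviation `PRIOR_STD` are assumptions here, since the text does not state them:

```python
import numpy as np
from scipy.stats import norm

# Assumed hyperparameter: the prior standard deviation is not given in the text.
PRIOR_STD = 1.0

# Grid over the two weights w_0 (intercept) and w_1 (slope).
w0, w1 = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))

# Independent zero-mean Gaussian priors on the weights:
# p(w) = N(w_0 | 0, PRIOR_STD^2) * N(w_1 | 0, PRIOR_STD^2)
prior = norm.pdf(w0, loc=0.0, scale=PRIOR_STD) * norm.pdf(w1, loc=0.0, scale=PRIOR_STD)
```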
|
|
|
|
|
|
|
|
After making an observation $`P = (x_1, y_1)`$, we can calculate the likelihood for varying combinations of $`w_0, w_1`$:
|
|
|
|
|
|
|
|
<div align="center">
|
|
<div align="center">
|
|
|
<img src="uploads/53a2da4fb2cb37511704cbcd1fc82f77/bs3.png" width="400">
|
|
<img src="uploads/53a2da4fb2cb37511704cbcd1fc82f77/bs3.png" width="400">
|
|
|
</div>
|
|
</div>
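Continuing the sketch above: assuming Gaussian observation noise with standard deviation `NOISE_STD`, and using a hypothetical observation `(x1, y1)` (neither value is given in the text), the likelihood of $`y_1`$ under every weight combination on the grid is:

```python
# Hypothetical observation and assumed noise level (not given in the text).
x1, y1 = 0.5, 0.5
NOISE_STD = 0.2

# Model prediction y_p(x, w) = w_0 + w_1 * x_1 at every grid point.
y_pred = w0 + w1 * x1

# Gaussian likelihood p(y_1 | w, x_1) = N(y_1 | w_0 + w_1 * x_1, NOISE_STD^2).
# High values pick out the weight combinations compatible with the observation.
likelihood = norm.pdf(y1, loc=y_pred, scale=NOISE_STD)
```

The high-likelihood region forms a band in weight space, because a single observation only constrains the combination $`w_0 + w_1 x_1`$, not each weight individually.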
|
|
|
|
|
|
|
|
We can see that the most likely values for $`w_0, w_1`$ are spread around the $`y = 1 - x`$ line. The true values are also shown as + on the figure. In Bayesian regression, this likelihood will be combined with the prior distribution $`p(w)`$ to update the weight probabilities, giving the posterior $`p(w|y,X)`$. We will discuss the procedure in more detail [later on](DDE-1/Regression#bayesian-linear-regression).
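As a small preview of that procedure, continuing the sketch above, the (unnormalized) posterior on the grid is just the pointwise product of the prior and the likelihood:

```python
# Bayes' rule on the grid: p(w | y_1, x_1) is proportional to p(y_1 | w, x_1) * p(w).
posterior = likelihood * prior
posterior /= posterior.sum()  # normalize so the grid cells sum to 1

# The most probable weight combination on the grid (MAP estimate):
i, j = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"MAP estimate: w_0 = {w0[i, j]:.2f}, w_1 = {w1[i, j]:.2f}")
```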
|
|
|
|
|
|