This is what the custom model looks like:
<div align="center">
<img src="uploads/90c5d0580c35b1076a042d4d533af472/rnn_1.png" width="400">
The RNN model as a graph. The features are denoted a, b, c, d for simplicity. Computational nodes are shown in gray-blue, and time information is highlighted in purple. Note that the recurrent neurons (RNs) are unrolled in time: for three time steps we pass in the inputs labeled (1), (2), and (3).
</div>
...
Let's go over it step by step. One training instance contains 12 values: 4 features at 3 consecutive time steps, denoted with numbers in the figure. When we pass an instance to the model, we first feed the data with time label (1) to the 6 recurrent neurons (RNs). Note that we have increased the dimensionality from 4 to 6. This is a critical design decision: in essence we are doing feature engineering, letting the network learn combinations of our 4 base features. In this very first step, the RNs update their hidden states and give us 6 memory signals, h (denoted as purple arrows).
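To make the first step concrete, here is a minimal NumPy sketch of one hidden-state update. The weight matrices `W_x` and `W_h`, the bias `b`, and the `tanh` nonlinearity are my assumptions for a plain recurrent cell; they are not taken from the figure, and the values are randomly initialized rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 4, 6               # features a,b,c,d -> 6 RNs

# Hypothetical parameters of the recurrent layer (randomly initialized)
W_x = rng.normal(size=(n_hidden, n_features))  # input-to-hidden weights
W_h = rng.normal(size=(n_hidden, n_hidden))    # hidden-to-hidden weights
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                    # hidden state starts at zero
x1 = rng.normal(size=n_features)          # a,b,c,d at time step (1)

# First step: the 6 RNs update their hidden states from x1
# (W_h @ h is zero here because the initial state is zero)
h = np.tanh(W_x @ x1 + W_h @ h + b)
print(h.shape)  # (6,) -- the 6 memory signals h
```

The dimensionality jump from 4 to 6 is visible in the shape of `W_x`: each of the 6 rows is one learned combination of the 4 base features.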
In the next time iteration, the same RNs use the current hidden states together with the second time step's data for the base features a, b, c, d (denoted as (2)) to update the hidden states once more. We then do the same for the input data of the third time step (denoted as (3) in the figure).
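The full unrolling over the three time steps can be sketched as a simple loop that reuses the same weights at (1), (2), and (3). As above, the weight matrices and the `tanh` cell are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_features, n_hidden = 3, 4, 6

# Hypothetical recurrent-layer parameters (shared across all time steps)
W_x = rng.normal(size=(n_hidden, n_features))
W_h = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

# One training instance: 12 values = 4 features at 3 time steps
x = rng.normal(size=(n_steps, n_features))

h = np.zeros(n_hidden)
for t in range(n_steps):   # unrolled steps (1), (2), (3)
    # each update mixes the new input with the memory of earlier steps
    h = np.tanh(W_x @ x[t] + W_h @ h + b)

print(h.shape)  # (6,) -- final hidden state after step (3)
```

The key point the figure makes is exactly this weight sharing: "unrolled in time" means three applications of the same cell, not three different layers.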