_Encoder block with the bridging RepeatVector layer_
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
|
|
In order to connect the encoded vector of shape (1, 32) to the first RNN layer of the decoder block, we use `RepeatVector` as a bridge. It simply creates n duplicates of that vector, where n is the length of the decoded sequence; let's take n = 3 here as well (attention: in the notebook it was 24). Since we want to output a sequence, we need to keep track of it, so the decoder RNN layers return their hidden state at every unrolled time step instead of only the last one. If we want to increase the non-linearity of the model, we can add a dense layer applied at each unrolled time step, keeping the shape (1, 3, 32). We are not done yet, as we still need to compress the 32 features into a single feature, the load, giving an output of shape (1, 3, 1). In TensorFlow, the `TimeDistributed` wrapper is the easiest way to do this, as it allows the same output layer to be reused for each element of the output sequence.
|
|
|
|
|
|
|
|
<div align="center">
|
|
|
|
<img src="uploads/d899b830055693a9ed8f1715f802d1be/ed2.PNG" width="600">
|
|
|
|
|
|
|
|
_Decoder block with the bridging RepeatVector layer_
|
|
|
|
|
|
|
|
</div>
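
The decoder block described above translates into only a few lines of Keras. The following is a minimal sketch, assuming an input window of 3 time steps with a single feature, 32 latent units, and an output sequence of 3 time steps; the layer sizes are illustrative, not the exact notebook configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed, illustrative sizes: input window of 3 time steps with 1 feature,
# 32 latent units, and an output sequence of 3 time steps.
n_in, n_out, n_features, n_units = 3, 3, 1, 32

model = models.Sequential([
    # Encoder: compresses the input sequence into a single 32-dimensional vector.
    layers.LSTM(n_units, input_shape=(n_in, n_features)),
    # Bridge: repeat the encoded vector once per decoded time step -> (None, 3, 32).
    layers.RepeatVector(n_out),
    # Decoder: return the hidden state at every unrolled time step.
    layers.LSTM(n_units, return_sequences=True),
    # Optional dense layer applied per time step to add non-linearity -> (None, 3, 32).
    layers.TimeDistributed(layers.Dense(n_units, activation="relu")),
    # Compress 32 features to the single target feature (load) -> (None, 3, 1).
    layers.TimeDistributed(layers.Dense(1)),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Calling `model.summary()` shows the shape (None, 3, 32) after the `RepeatVector` and decoder LSTM layers, and (None, 3, 1) after the final `TimeDistributed` output layer.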
|
|
|
|
|
|
|
|
Once trained, such an approach can make predictions over long horizons with much higher accuracy, and this is also what we observed in the load problem ($`R^2`$ of 0.98, with an average relative percent error below 1%).
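
For reference, these two scores could be computed along the following lines; the arrays here are made-up placeholders, not the actual load data.

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical held-out window: actual vs. forecast load values (placeholders).
y_true = np.array([412.0, 405.5, 398.2, 401.7])
y_pred = np.array([409.8, 406.1, 400.0, 399.5])

r2 = r2_score(y_true, y_pred)
# Average relative percent error: mean of |error| / |actual|, expressed in percent.
rel_err = np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100

print(f"R^2 = {r2:.3f}, average relative percent error = {rel_err:.2f}%")
```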
|
|
|
|
|
|
|
|
## Backpropagation Through Time
|
|
|
|
|
|
|
|
...
|
|
|
|
|
|
|
|
## Special Recurrent Cells
|
|
|
|
|