<div align="center">

<img src="uploads/1687b3f3d06a07c5694cd92ad0d18f50/rnn_n4.png" width="600">

_RNN graph representation. Here, both the first and second RNN layers connect only the final time-step results. In order to use the second RNN layer, we duplicate the output (blue nodes) and pass it to the second layer. This is nothing but a simple trick that lets us reuse the same RNN interface in TF._

</div>

### Building an Autoencoder with RNN layers

...
If we are interested in forecasting for a horizon > 1, we have two basic options: (i) we can train a model to make 1-step-ahead predictions and apply it repeatedly to the updated history, or (ii) we can make multi-step forecasts by training a model to predict sequences from sequences.
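To make option (i) concrete, here is a minimal sketch of recursive one-step forecasting. The window length, feature count, horizon and the untrained toy network are illustrative assumptions; in practice the trained one-step model from the notebook would take its place.

```python
import numpy as np
import tensorflow as tf

window, n_features, horizon = 24, 4, 6   # illustrative values only

# Toy stand-in for a trained one-step model: it maps the last `window`
# observations to a prediction for the next time step. It predicts all
# features so that the prediction can be appended back onto the history.
inp = tf.keras.Input(shape=(window, n_features))
out = tf.keras.layers.Dense(n_features)(tf.keras.layers.LSTM(32)(inp))
one_step_model = tf.keras.Model(inp, out)

history = np.random.rand(100, n_features).astype("float32")  # observed series

# Option (i): predict one step, append the prediction to the history,
# slide the window forward, and repeat until the horizon is covered.
buffer, preds = history.copy(), []
for _ in range(horizon):
    x = buffer[-window:][np.newaxis, ...]          # shape (1, window, n_features)
    y_hat = one_step_model.predict(x, verbose=0)   # shape (1, n_features)
    preds.append(y_hat[0])
    buffer = np.vstack([buffer, y_hat])            # feed the prediction back in

forecast = np.stack(preds)                         # shape (horizon, n_features)
```

Note that in this recursive scheme the model consumes its own predictions, so errors can accumulate over the horizon; option (ii) trades that for a harder training target.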
One custom model alternative here is the encoder-decoder structure. Here the model graph consists of 3 zones: (i) an encoder, (ii) a bridge layer and (iii) a decoder. The encoder block reads the input sequence (in the notebook example, the training set has a shape of (18880, 24, 4)) and yields a latent representation (in the notebook, (1,32)) for a given instance (in the notebook, (1,24,32)).
In order to simplify the graph, let's say we have a sequence of length 3 and each RNN layer has 32 nodes (drawn as a block below). The input data is passed to the first RNN layer, which keeps track of the sequential information (return_sequences=True). The first RNN layer is connected to the second one, which compresses the time information into a vector of shape (1,32). In other words, with these two layers we have compressed the (1,24,32) representation into a (1,32) latent vector.
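A minimal Keras sketch of this encoder could look as follows. The LSTM cell type and the feature count of 4 are assumptions taken from the shapes quoted above; the notebook's actual cell type and hyperparameters may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

seq_len, n_features, latent_dim = 3, 4, 32   # length-3 toy sequence, 32 units per layer

encoder_inputs = tf.keras.Input(shape=(seq_len, n_features))
# First RNN layer keeps the whole sequence: output shape (batch, seq_len, 32)
x = layers.LSTM(latent_dim, return_sequences=True)(encoder_inputs)
# Second RNN layer compresses the time axis into a single vector: (batch, 32)
encoded = layers.LSTM(latent_dim)(x)

encoder = tf.keras.Model(encoder_inputs, encoded, name="encoder")
encoder.summary()
```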
<div align="center">

<img src="uploads/78159b2701d0945abaa77998adb6c886/ed1.PNG" width="600">

_Encoder block with the bridging RepeatVector layer_

</div>
In order to connect the encoded vector (1,32) to the first RNN layer of the decoder block, we use RepeatVector as a bridge. It simply creates n duplicates of the vector, where n is the length of the decoded sequence. Let's take n = 3 here as well (in the notebook it was 24).
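Putting the three zones together, a minimal sketch of the full encoder-bridge-decoder graph is shown below. Only the RepeatVector bridge and the encoder shapes come from the description above; the decoder side (one RNN layer plus a TimeDistributed Dense head) and the LSTM cell type are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

seq_len, n_features, latent_dim = 3, 4, 32   # 3 steps here; 24 in the notebook

inputs = tf.keras.Input(shape=(seq_len, n_features))

# Encoder: compress the (seq_len, n_features) window into a (latent_dim,) vector
x = layers.LSTM(latent_dim, return_sequences=True)(inputs)
encoded = layers.LSTM(latent_dim)(x)

# Bridge: duplicate the encoded vector once per decoded time step,
# producing shape (batch, seq_len, latent_dim)
bridge = layers.RepeatVector(seq_len)(encoded)

# Decoder: unroll the repeated vector back into a sequence and map each
# time step onto the feature space
x = layers.LSTM(latent_dim, return_sequences=True)(bridge)
outputs = layers.TimeDistributed(layers.Dense(n_features))(x)

model = tf.keras.Model(inputs, outputs, name="rnn_encoder_decoder")
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Whether this graph acts as an autoencoder (reconstructing the input window) or as a sequence-to-sequence forecaster (predicting the next window) depends only on which target sequences are passed to `fit()`.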
## Special Recurrent Cells
...
- [Spurious correlations](https://www.tylervigen.com/spurious-correlations)
- [Machine Learning Strategies for Time Series Forecasting](https://link.springer.com/chapter/10.1007/978-3-642-36318-4_3)
- [Understanding LSTM Networks](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
- [Illustrated Guide to LSTM’s and GRU’s](https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21)
- [Exploring LSTMs](http://blog.echen.me/2017/05/30/exploring-lstms/)
- [tf-seq2seq](https://google.github.io/seq2seq/)
- [Attention and Augmented Recurrent Neural Networks](https://distill.pub/2016/augmented-rnns/)

**Example case studies**

- [A sequential model-based approach for gas turbine performance diagnostics](https://www.sciencedirect.com/science/article/abs/pii/S036054422032764X?dgcid=raven_sd_aip_email)
- [Time Series Split with Scikit-learn](https://medium.com/keita-starts-data-science/time-series-split-with-scikit-learn-74f5be38489e)
- [Time Series Data in Python](https://www.pluralsight.com/guides/machine-learning-for-time-series-data-in-python)
...