_C. Ates_
* [Using TensorFlow 2.0 to Compose Music](https://www.datacamp.com/community/tutorials/using-tensorflow-to-compose-music)

_Colab notebooks:_
* [Making music with Magenta](https://colab.research.google.com/notebooks/magenta/hello_magenta/hello_magenta.ipynb)
## Research papers

**Methods and techniques**
* [Wasserstein GAN](https://arxiv.org/abs/1701.07875)
* [A method to stabilize GANs](https://arxiv.org/pdf/1611.02163.pdf)
* [Progressive Growing of GANs for Improved Quality, Stability, and Variation](https://arxiv.org/abs/1710.10196)
* [The Earth Mover's Distance](http://infolab.stanford.edu/pub/cstr/reports/cs/tr/99/1620/CS-TR-99-1620.ch4.pdf)
* [Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer](https://openreview.net/forum?id=S1fQSiCcYm)
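The WGAN and Earth Mover's Distance entries above are connected: the WGAN critic estimates the Wasserstein-1 (Earth Mover's) distance between the real and generated distributions. For two equal-size 1-D samples with uniform weights, that distance reduces to the mean absolute difference of the sorted values. A minimal sketch (the function name `emd_1d` is ours, not from any of the papers):

```python
def emd_1d(xs, ys):
    # Wasserstein-1 / Earth Mover's Distance between two equal-size
    # 1-D samples with uniform weights: pair up the sorted values and
    # average the absolute differences.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(emd_1d([0, 1, 2], [1, 2, 3]))  # 1.0
```

Shifting every sample by one unit costs exactly one unit of "earth moved" per point, hence the distance of 1.0.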

**Science applications**

* [Failure Prediction](https://arxiv.org/abs/1910.02034)
* [Predict single-cell perturbation response across cell types, studies and species](https://www.biorxiv.org/content/10.1101/478503v2.full.pdf)

**Art applications**
* [Jukebox: A Generative Model for Music](https://cdn.openai.com/papers/jukebox.pdf)
* [Generating Art using GANs](https://blog.jovian.ai/generating-art-with-gans-352ceef3d51f)
* [Neural networks for music: a journey through its history](http://www.jordipons.me/neural-networks-for-music-history/)
* [Deep Learning Techniques for Music Generation](http://www-desir.lip6.fr/~briot/dlt4mg/Related_Resources/)
* [Composing Music With Recurrent Neural Networks](https://www.danieldjohnson.com/2015/08/03/composing-music-with-recurrent-neural-networks/)
* [Music generation with variational recurrent autoencoder supported by history](https://link.springer.com/article/10.1007/s42452-020-03715-w)
* [Music generation controlled by tonal tension](https://arxiv.org/abs/2010.06230)
* [An intelligent music generation based on Variational Autoencoder](https://ieeexplore.ieee.org/document/9262797)
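The RNN and VAE generators in the papers above all learn a next-note distribution conditioned on what came before. The idea can be illustrated with the simplest possible ancestor, a first-order Markov chain over MIDI pitches (the training melody and seed pitch here are made up for illustration):

```python
import random
from collections import defaultdict

# Toy "training data": a short melody as MIDI pitch numbers (60 = middle C).
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]

# Count observed transitions: for each pitch, the pitches that followed it.
transitions = defaultdict(list)
for a, b in zip(melody, melody[1:]):
    transitions[a].append(b)

# Generate by repeatedly sampling a successor of the current pitch.
random.seed(0)
note, generated = 60, [60]
for _ in range(8):
    note = random.choice(transitions[note])
    generated.append(note)

print(generated)
```

Neural sequence models replace the lookup table with a learned conditional distribution over a much longer context, but the sampling loop is the same.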
## Music datasets

* [Mutopia](https://www.mutopiaproject.org/)
* [GTZAN Dataset](https://www.kaggle.com/andradaolteanu/gtzan-dataset-music-genre-classification?select=Data)
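Audio datasets such as GTZAN ship as PCM audio files. Loading them needs nothing beyond the standard library's `wave` module; this sketch writes a one-second 440 Hz test tone and reads it back (the file name is illustrative):

```python
import math
import struct
import wave

rate = 22050  # samples per second

# Synthesize one second of a 440 Hz sine wave as 16-bit mono PCM.
frames = b"".join(
    struct.pack("<h", int(20000 * math.sin(2 * math.pi * 440 * t / rate)))
    for t in range(rate)
)
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 2 bytes = 16-bit samples
    w.setframerate(rate)
    w.writeframes(frames)

# Read it back as a tuple of signed 16-bit integers.
with wave.open("tone.wav", "rb") as w:
    n_frames = w.getnframes()
    samples = struct.unpack("<%dh" % n_frames, w.readframes(n_frames))

print(n_frames)  # 22050
```

Real pipelines typically convert the integer samples to floats and compute spectrogram features before feeding a model.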
## Videos
* [Ian Goodfellow: Adversarial Machine Learning (ICLR 2019 invited talk)](https://www.youtube.com/watch?v=sucqskXRkss)