* [Human face: StyleGAN2 -- Official TensorFlow Implementation](https://github.com/NVlabs/stylegan2)

* [DALL·E: Creating Images from Text](https://openai.com/blog/dall-e/)

* [GAN Zoo: compiling all named GANs](https://github.com/hindupuravinash/the-gan-zoo)

**Music related**

* [Keras LSTM Music Generator](https://github.com/jordan-bird/Keras-LSTM-Music-Generator) (a minimal next-note LSTM sketch follows this list)

* ["Generating Music with GANs: An Overview and Case Studies" at ISMIR 2019](https://salu133445.github.io/ismir2019tutorial/)

* [MuseGAN: a project on music generation](https://github.com/salu133445/musegan)

* [Music generation notebooks](https://magenta.tensorflow.org/demos/colab/)

* [Using TensorFlow 2.0 to Compose Music](https://www.datacamp.com/community/tutorials/using-tensorflow-to-compose-music)

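The LSTM-based resources above (the Keras LSTM Music Generator and the TensorFlow composition tutorial) treat music generation as next-token prediction over a vocabulary of note/chord symbols. Below is a minimal, hedged Keras sketch of that idea; the vocabulary size, sequence length, layer sizes, and the `build_model`/`generate` helpers are illustrative assumptions, not code from the linked repositories.

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative assumptions: 128 distinct note/chord tokens and
# windows of 32 tokens used to predict the next token.
VOCAB_SIZE = 128
SEQ_LEN = 32


def build_model():
    """Embed note tokens, run them through an LSTM, and predict the next token."""
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 64),
        layers.LSTM(256),
        layers.Dense(VOCAB_SIZE, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model


def generate(model, seed, n_steps=64, temperature=1.0):
    """Autoregressively sample new tokens from the model's output distribution."""
    tokens = list(seed)
    for _ in range(n_steps):
        window = np.asarray(tokens[-SEQ_LEN:])[None, :]   # shape (1, SEQ_LEN)
        probs = model.predict(window, verbose=0)[0]       # shape (VOCAB_SIZE,)
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.exp(logits).sum()
        tokens.append(int(np.random.choice(VOCAB_SIZE, p=probs)))
    return tokens


if __name__ == "__main__":
    model = build_model()
    # Untrained here; in practice you would fit() on (window, next-token) pairs
    # extracted from a MIDI corpus before sampling.
    print(generate(model, seed=[60] * SEQ_LEN, n_steps=16))
```

In practice the token vocabulary and training windows would typically come from parsing a MIDI corpus into note/chord symbols before fitting the model.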
Colab notebooks:

* [Making music with Magenta](https://colab.research.google.com/notebooks/magenta/hello_magenta/hello_magenta.ipynb) (a tiny NoteSequence sketch follows this list)

* [Music generation with LSTM in Keras]()
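The "Making music with Magenta" notebook linked above builds melodies as NoteSequence objects that can be rendered to MIDI. Here is a tiny sketch of that workflow, assuming the note_seq package that the Magenta notebooks use; the pitches and timings are arbitrary illustrative values.

```python
# A minimal sketch assuming the note_seq package used by the Magenta notebooks
# (pip install note-seq). Pitches and timings are arbitrary illustrative values.
import note_seq
from note_seq.protobuf import music_pb2

melody = music_pb2.NoteSequence()

# Add four quarter notes of a C-major arpeggio (MIDI pitches 60, 64, 67, 72).
for i, pitch in enumerate([60, 64, 67, 72]):
    melody.notes.add(
        pitch=pitch,
        start_time=0.5 * i,
        end_time=0.5 * (i + 1),
        velocity=80,
    )
melody.total_time = 2.0
melody.tempos.add(qpm=120)

# Write the sequence out as a standard MIDI file.
note_seq.sequence_proto_to_midi_file(melody, "arpeggio.mid")
```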

## Additional resources

## Research papers

**Science**

* [Predict single-cell perturbation response across cell types, studies and species](https://www.biorxiv.org/content/10.1101/478503v2.full.pdf)

**Art**

* [Jukebox: A Generative Model for Music](https://cdn.openai.com/papers/jukebox.pdf)

* [Generating Art using GANs](https://blog.jovian.ai/generating-art-with-gans-352ceef3d51f)

* [Neural networks for music: a journey through its history](http://www.jordipons.me/neural-networks-for-music-history/)

## Music datasets

* [Mutopia](https://www.mutopiaproject.org/)

* [GTZAN Dataset](https://www.kaggle.com/andradaolteanu/gtzan-dataset-music-genre-classification?select=Data)

* [Deep Learning Techniques for Music Generation](http://www-desir.lip6.fr/~briot/dlt4mg/Related_Resources/)

* [Composing Music With Recurrent Neural Networks](https://www.danieldjohnson.com/2015/08/03/composing-music-with-recurrent-neural-networks/)