Lab Materials for MIT 6.S191: Introduction to Deep Learning
Updated Sep 29, 2020 - Jupyter Notebook
An AI for Music Generation
Resources on Music Generation with Deep Learning
Train an LSTM to generate piano or violin/piano music.
Experiment with diverse deep learning models for music generation in TensorFlow
"Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions", ACM Multimedia 2020
A list of demo websites for automatic music generation research
Projects from the Deep Learning Specialization by deeplearning.ai, offered through Coursera
Generates music (MIDI files) using a TensorFlow RNN
Music generation with Keras and LSTM
Event-based music generation with RNN using PyTorch
A toolkit for symbolic music generation
Code repository and info resource for the book "Hands-On Music Generation with Magenta"
Code for “Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation”
Dataset repository for the paper "POP909: A Pop-song Dataset for Music Arrangement Generation"
Code accompanying the ISMIR 2019 paper "Learning to Traverse Latent Spaces for Musical Score Inpainting"
Melody of Life is a step sequencer using cellular automata
Code repo for ICME 2020 paper "Style-Conditioned Music Generation". VAE model that allows style-conditioned music generation.
ISMIR 2020 Paper repo: Music SketchNet: Controllable Music Generation via Factorized Representations of Pitch and Rhythm
Generating music and lyrics using deep learning via Long Short-Term Memory (LSTM) networks. Implements a char-RNN in Python using TensorFlow.
Hum2Song: Multi-track Polyphonic Music Generation from Voice Melody Transcription with Neural Networks
cRNN-GAN that generates music by training on instrumental music (MIDI)
PyTorch implementation of "Synthesizing Audio with Generative Adversarial Networks"
RaveForce - an OpenAI Gym-style toolkit for music generation experiments.
A Talk on Ragalur Expressions
Music generation with a classifying variational autoencoder (VAE) and LSTM
Code for the paper "Learning to Fuse Music Genres with Generative Adversarial Dual Learning" (ICDM 2017)
TensorFlow implementation of the paper "Polyphonic Music Generation with Sequence Generative Adversarial Networks"
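
Many of the sequence-model generators above (char-RNN, LSTM, and Transformer variants) share the same decoding step: the trained network emits a probability distribution over the next note or event, and a temperature parameter controls how adventurous sampling is. A minimal NumPy sketch of that step, assuming a generic logits vector; the function name and toy values are illustrative, not taken from any listed repository:

```python
import numpy as np

def sample_next_event(logits, temperature=1.0, rng=None):
    """Sample the index of the next musical event from raw model logits.

    temperature < 1.0 sharpens the distribution (safer, more repetitive output);
    temperature > 1.0 flattens it (more surprising, less coherent output).
    """
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                         # stabilize the softmax
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary of 3 events; compare conservative vs. exploratory sampling.
rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.1])                 # pretend model output
cold = [sample_next_event(logits, temperature=0.2, rng=rng) for _ in range(100)]
hot = [sample_next_event(logits, temperature=5.0, rng=rng) for _ in range(100)]
```

At low temperature the sampler almost always picks the highest-logit event, while at high temperature the three events are drawn with nearly equal frequency; tuning this trade-off is a common knob in the listed LSTM and Transformer generators.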