CS 898: Deep Learning and Its Applications
Instructor: Ming Li, DC 3355, x84659
Course time and location: Mondays 2:00-4:50pm, DC 2568
Office hours: Mondays 5-6pm
Reference materials: Papers listed below.
Deep learning has led to significant progress in
image analysis, speech recognition, natural language processing,
game playing, biomedical informatics, self-driving cars, and
many other fields in the last few years. It is changing
industry, the way we do research, and our everyday life.
In this course we will study the principles and various applications
of deep learning. The course material
will be mainly chosen from the quickly growing volume
of recent research papers.
I will give some lectures at the beginning to teach the basics.
These will include: basic structures such as the fully connected layer,
recurrent structures (LSTM and GRU),
convolutional and pooling layers, and more specialized
structures such as highway networks and Grid LSTM, recursive structures,
external memory, the sequence-to-sequence structure, and generative
adversarial nets (GANs). We will also discuss
backpropagation, gradient descent, and computational graphs.
Finally, we will do some mathematical analysis showing why deep learning works
better than "shallower" learning.
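To make the last three topics concrete, here is a minimal sketch (my own illustration, not course code): a one-hidden-layer network, written out as an explicit computational graph, trained on XOR by gradient descent with hand-derived backpropagation. The layer sizes, learning rate, and iteration count are arbitrary choices, and only numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

# Parameters of a 2 -> 8 -> 1 network (sizes are illustrative).
W1 = rng.normal(0., 1., (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0., 1., (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each line is one node of the computational graph.
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted probability of class 1
    # Backward pass: apply the chain rule node by node (backpropagation).
    dlogit = (p - y) / len(X)           # grad of mean cross-entropy wrt output logit
    dW2 = h.T @ dlogit; db2 = dlogit.sum(0)
    dh = dlogit @ W2.T * (1.0 - h**2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient descent: step against the gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(preds)
```

The same forward/backward bookkeeping is what TensorFlow's computational graph automates for you.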
Then during the second part of the course,
the students will present research papers on various topics of
deep learning applications or new models.
Homework. Read the TensorFlow tutorial at
https://www.tensorflow.org/ and get familiar with TensorFlow.
Install it (the CPU-only version is sufficient) and read and run the
tutorial's MNIST experiment. This homework will not be marked.
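If you would like a framework-free preview of what the MNIST beginner example does, the following sketch trains the same kind of softmax-regression classifier by gradient descent on synthetic 2-D data, using only numpy. All names, sizes, and rates here are my own illustrative choices, not part of the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_class, n_classes = 100, 3
centers = np.array([[0., 0.], [4., 0.], [0., 4.]])
# Three well-separated Gaussian "blobs" stand in for the MNIST images.
X = np.vstack([rng.normal(c, 0.7, (n_per_class, 2)) for c in centers])
labels = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[labels]                 # one-hot targets

W = np.zeros((2, n_classes)); b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(500):
    P = softmax(X @ W + b)                    # predicted class probabilities
    grad = (P - Y) / len(X)                   # grad of mean cross-entropy wrt logits
    W -= lr * (X.T @ grad)
    b -= lr * grad.sum(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

In the tutorial the model is the same, only with 784-dimensional pixel inputs, 10 classes, and the gradients computed automatically.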
GPU: Students can sign up at https://www.awseducate.com/application;
Amazon will take a couple of days to review the application.
More information can be
found at: https://aws.amazon.com/cn/education/awseducate/
SHARCNET may be another resource for GPUs.
Each student is expected to do one deep learning (or
reinforcement learning) project, hand in a final
paper (55% of the mark),
and present it in class (40% of the mark; 1/2 hour: a 20-minute presentation and
10 minutes of discussion -- presentation length will be adjusted
depending on the number of registered students). Class participation counts
for the remaining 5%.
I expect these projects to be related to your own research, and to be original
or to improve some existing work.
I will be very happy
to discuss projects with you. Presentations should also contain in-depth
surveys of the relevant literature. Presentations should be educational
(1/2 on the background and 1/2 on your own work).
Presentations and relevant papers will be posted on this website
(the presenters should provide these materials
to me) several days before class.
GPUs: I am purchasing GPUs; if they arrive in time, I will make
some available for your experiments.
However, please do not depend on this.
For the course projects, in most cases, please try to use CPUs with smaller
datasets. You can also try SHARCNET.
Course announcements and lecture notes will appear on this page.
Please look at this page regularly.
Potential course projects:
In deep learning, the key is data. It is a
good idea to work on a problem from your own research area where
you can find labeled data. Otherwise, here is a collection
of potential problems for you to explore:
Image analysis (for example, specialize in a small class of
food, plants, or objects such as phones or cars (hierarchically),
or a relatively less common type of cancer; investigate
how to handle scaling, translation, and rotation, and how to focus attention on
a small part of an image);
Speech recognition (for example, fast and small-sample
personalized text-to-speech generation, possibly using generative models);
Game playing (reinforcement learning on a small game);
Bioinformatics (protein/DNA binding sites, deep motif,
deep families by recursive NN, mass spectrometry feature detection);
Natural language processing (use variations of
sequence-to-sequence models with attention, or GANs, to generate poems
or short writings in a language for which you can get data;
generate music with some theme; develop ways to tell whether a
chatbot replied properly, and feed that signal back using reinforcement learning);
Theory (prove that deep is better than shallow -- this is only
for very theoretically oriented students; or demonstrate that neural
networks with small Kolmogorov complexity generalize better than those
with large Kolmogorov complexity, for example using the MNIST dataset).
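Several of these project ideas involve GANs. As a toy illustration of the adversarial training loop (a hand-rolled sketch under my own assumptions, not code from any of the papers below), here a small generator and discriminator, each a one-hidden-layer numpy network with hand-derived gradients, compete over a 1-D Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
H, lr, B = 16, 0.05, 32   # hidden width, learning rate, batch size (illustrative)

# Generator G: noise z -> fake sample.  Discriminator D: sample x -> P(real).
Gw1 = rng.normal(0, .5, (1, H)); Gb1 = np.zeros(H)
Gw2 = rng.normal(0, .5, (H, 1)); Gb2 = np.zeros(1)
Dw1 = rng.normal(0, .5, (1, H)); Db1 = np.zeros(H)
Dw2 = rng.normal(0, .5, (H, 1)); Db2 = np.zeros(1)

def D(x):
    h = np.tanh(x @ Dw1 + Db1)
    return h, 1.0 / (1.0 + np.exp(-(h @ Dw2 + Db2)))

for _ in range(3000):
    real = rng.normal(3.0, 0.5, (B, 1))      # target distribution N(3, 0.5)
    z = rng.normal(0.0, 1.0, (B, 1))
    gh = np.tanh(z @ Gw1 + Gb1)
    fake = gh @ Gw2 + Gb2

    # Discriminator step: binary cross-entropy, real -> 1, fake -> 0.
    hr, pr = D(real); hf, pf = D(fake)
    dr, df = (pr - 1.0) / B, pf / B          # grads wrt D's output logits
    gDw2 = hr.T @ dr + hf.T @ df; gDb2 = (dr + df).sum(0)
    dhr = dr @ Dw2.T * (1 - hr**2); dhf = df @ Dw2.T * (1 - hf**2)
    gDw1 = real.T @ dhr + fake.T @ dhf; gDb1 = (dhr + dhf).sum(0)
    Dw1 -= lr * gDw1; Db1 -= lr * gDb1; Dw2 -= lr * gDw2; Db2 -= lr * gDb2

    # Generator step: non-saturating loss, push D(fake) toward 1.
    hf, pf = D(fake)
    dg = (pf - 1.0) / B                      # grad wrt D's logit, target 1
    dfake = (dg @ Dw2.T * (1 - hf**2)) @ Dw1.T
    gGw2 = gh.T @ dfake; gGb2 = dfake.sum(0)
    dgh = dfake @ Gw2.T * (1 - gh**2)
    gGw1 = z.T @ dgh; gGb1 = dgh.sum(0)
    Gw1 -= lr * gGw1; Gb1 -= lr * gGb1; Gw2 -= lr * gGw2; Gb2 -= lr * gGb2

samples = np.tanh(rng.normal(0, 1, (1000, 1)) @ Gw1 + Gb1) @ Gw2 + Gb2
print(f"mean of generated samples: {samples.mean():.2f}  (real mean is 3.0)")
```

A real project would replace the toy networks with deeper models in TensorFlow and the 1-D Gaussian with text, images, or music, but the alternating two-player update is the same.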
Note: do not be too ambitious; limit your problem
and data size so that you can finish in 1-2 months. Training may be
very slow when you have a lot of data, so please start early. There will
be no extensions at the end of the term, and the student presentations
start in June.
Reading Materials (this list is far from finished and will expand gradually):
Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature 521, 436-444(2015).
D. Silver et al: Mastering the game of Go with deep neural networks
and tree search, Nature 529, pp 484-489(28 Jan. 2016)
R. Socher, Y. Bengio, C. Manning, Deep learning for NLP, ACL 2012
R. Sutton and A. Barto: Reinforcement Learning: an introduction.
MIT Press (1998).
B. Alipanahi et al, Predicting the sequence specificities of
DNA- and RNA-binding proteins by deep learning. Nature Biotechnology,
33, 831-838, 2015.
J. Deng et al, ImageNet: a large-scale hierarchical image database.
IEEE Conf. on CVPR 2009.
S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural
Computation, 9(8) 1735-1780, 1997.
K. Cho, B. van Merrienboer, D. Bahdanau, Y. Bengio,
On the properties of neural machine translation: encoder-decoder approaches. 2014.
K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk,
Learning phrase representations using RNN encoder-decoder for statistical
machine translation. Jun. 2014
M. Telgarsky, Benefits of depth in neural networks, arXiv preprint, 2016.
I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, Y. Bengio, Generative adversarial networks. 2014.
D. Ulyanov, V. Lebedev, A. Vedaldi, V. Lempitsky,
Texture Networks: feed-forward synthesis of textures and stylized images. 2016.
Alex Graves, Generating sequences with recurrent neural networks.
2013-2014 (this paper generates handwriting with an LSTM).
V. Mnih et al,
Playing Atari with deep reinforcement learning. 2013.
V. Mnih et al.
Human-level control through deep reinforcement learning, 2015.
M. Arjovsky and L. Bottou, Towards principled methods for training generative
adversarial networks, 2017.
W. Lotter, G. Kreiman, D Cox, Deep predictive coding networks for video
prediction and unsupervised learning. 2016.
C. Ledig, L. Theis, F. Huszar, J. Caballero, et al,
Photo-realistic single image super-resolution using a generative adversarial
network. 2016.
P. Isola, J.Y. Zhu, T. Zhou, A.A. Efros, Image-to-image translation with
conditional adversarial networks. 2016.
A. Radford, L. Metz, S. Chintala,
Unsupervised representation learning with deep convolutional
generative adversarial networks. 2015.
A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, J. Yosinski,
Plug and play generative networks: conditional iterative generation
of images in latent space. 2016.
H. Mao et al, Resource management with deep reinforcement learning. 2016
S. Reed et al.
Generative adversarial text-to-image synthesis. ICML 2016.
H. Zhang et al, StackGAN: text to photo-realistic image synthesis
with stacked generative adversarial networks. 2016.
S. Reed et al.
Learning what and where to draw. NIPS 2016.
S. Nowozin, B. Cseke, R. Tomioka, f-GAN: training generative neural
samplers using variational divergence minimization. NIPS 2016.
M. Arjovsky, S. Chintala, L. Bottou,
Wasserstein GAN, 2017
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. Courville,
Improved training of Wasserstein GANs. 2017.
F. Ture and O. Jojic,
Simple and effective question answering with recurrent neural networks. 2016.
J. Schmidhuber, Discovering neural nets with low Kolmogorov complexity and
high generalization capability. Neural Networks, 10(5), 857-873, 1997.
L. Deng, A tutorial survey of architectures, algorithms, and applications
for deep learning. SIP (2014).
L. Yu, W. Zhang, J. Wang, Y. Yu, SeqGAN: sequence generative adversarial
nets with policy gradient. AAAI, 2017.
J. Li, W. Monroe, T. Shi, S. Jean, A. Ritter, D. Jurafsky,
Adversarial learning for neural dialogue generation.
arXiv: 1701.06547v4, 2017.
History, Overview, Perceptrons
Fully connected neural network. Hello world
Lecture 4: Why small?
Lecture 5: Convolutional
Lecture 6: Recurrent
Neural Networks, LSTM, GRU, highway network, residual network.
Generative Adversarial Networks
Backpropagation, gradient descent, computational graph
Sequence GAN, chatbot, and reinforcement learning.
We will not have a lecture on Tuesday, May 23. We will make up the
time lost on Monday, May 22 at the end of the term, so that
more student presentations can be moved to the end of the term.
Fiodar Kazhamiaka, Mikhail Kazhamiaka,
June 12. Rene Bidart,
Salman Mohammed, and
my lecture on "deep is better than shallow", and on SeqGAN and reinforcement learning.
June 19. Adam Schunk,
and my lecture on backpropagation.
June 26. Move to August 1.
July 3. Move to August 2.
July 10. Wasif Khan,
plus my lecture on deep learning in bioinformatics.
July 17. Murray Dunne,
July 24. Sara Ross-Howe,
Vera Lin & Simon Suo,
Peiyuan Liu & Nikita Volodin,
August 1 (2:00pm same classroom):
August 2 (2:00pm same classroom):
Pan Pan Cheng,
Ling Yun Li,
Final Project Paper:
Approximately 10 pages, due on August 15. Please send the PDF file
to me via email. The paper should cover: the background and significance of your problem,
relevant literature, data source, source code link, your contribution, what you have learned from this project,
potential future work, and references. If you used other people's work, data, or programs,
please give them proper credit. You should write the paper independently even if you
worked in a team.
If possible, please also submit the source code and data (or provide a link to them).
Maintained by Ming Li