CS 898: Deep Learning and Its Applications
Spring 2017


   
Instructor: Ming Li, DC 3355, x84659, mli@uwaterloo.ca
Course time and location: Mondays 2:00-4:50pm, DC 2568
Office hours: Mondays 5-6pm, or by appointment
Reference Materials: Papers listed below.

Deep learning has led to significant progress in image analysis, speech recognition, natural language processing, game playing, biomedical informatics, self-driving cars, and many other fields in the last few years. It is changing industry, the way we do research, and our everyday life.

In this course we will study the principles and various applications of deep learning. The course material will be drawn mainly from the rapidly growing body of recent research papers.

I will give some lectures at the beginning covering the basics. These will include basic structures such as the fully connected layer, recurrent structures (LSTM and GRU), convolutional and pooling layers, and more specialized structures such as highway networks, Grid LSTM, recursive structures, external memory, sequence-to-sequence models, and generative adversarial nets (GANs). We will also discuss backpropagation, gradient descent, and computational graphs. Finally, we will do some mathematical analysis of why deep learning works better than "shallower" learning. During the second part of the course, students will present research papers on various deep learning applications or new models.
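To make the last three ideas concrete, here is a minimal sketch (not part of the marked work) of a computational graph in TensorFlow 1.x: a single linear unit fit to a made-up toy dataset by gradient descent, with the gradients obtained by backpropagation through the graph.

    import numpy as np
    import tensorflow as tf

    # Toy data (made up for this sketch): y is roughly 3*x + 2.
    x_data = np.random.rand(100).astype(np.float32)
    y_data = 3.0 * x_data + 2.0 + np.random.normal(0.0, 0.05, 100).astype(np.float32)

    # Build the computational graph: a single linear unit w*x + b and a squared loss.
    x = tf.placeholder(tf.float32, [None])
    y = tf.placeholder(tf.float32, [None])
    w = tf.Variable(0.0)
    b = tf.Variable(0.0)
    loss = tf.reduce_mean(tf.square(w * x + b - y))

    # Gradient descent; TensorFlow obtains the gradients by backpropagation through the graph.
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(500):
            sess.run(train_step, feed_dict={x: x_data, y: y_data})
        print(sess.run([w, b]))  # should be close to [3.0, 2.0]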

Homework: Read the TensorFlow tutorial at https://www.tensorflow.org/ and get familiar with TensorFlow. Install it (the CPU-only version) and read and run the MNIST experiment at https://www.tensorflow.org/get_started/mnist/beginners. This homework will not be marked. (A CPU should be sufficient here.)
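For reference, a condensed version of the softmax-regression model from that tutorial looks roughly as follows (this assumes TensorFlow 1.x; the tutorial page has the authoritative code and explanation).

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data

    # Download and load MNIST, as in the tutorial.
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

    # Softmax regression: one fully connected layer from 784 pixels to 10 classes.
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.matmul(x, W) + b
    y_ = tf.placeholder(tf.float32, [None, 10])

    # Cross-entropy loss and a gradient-descent training step.
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    # Accuracy on the test set (around 92% for this simple model).
    correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))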

GPU: Students can sign up at https://www.awseducate.com/application. Amazon will take a couple of days to review the application. More information can be found at https://aws.amazon.com/cn/education/awseducate/. Sharcnet may be another resource for GPUs.

Marking Scheme: Each student is expected to complete one deep learning (or reinforcement learning) project (final paper, 55% of the marks) and present it in class (40% of the marks). Presentations are half an hour: 20 minutes of presentation and 10 minutes of discussion, with the length adjusted depending on the number of registered students. Class participation counts for the remaining 5%. I expect these projects to be related to your own research and to be original or to improve on existing work. I will be very happy to discuss projects with you. Presentations should include an in-depth survey of the relevant literature and should be educational (half on the background and half on your own work). Presentations and relevant papers will be posted on this website several days before class; presenters should provide these materials to me.

GPUs: I am purchasing GPUs; if they arrive in time, I will make some available for your experiments. However, please do not depend on this. For the course projects, in most cases, please try to use CPUs with smaller datasets. You can also try Sharcnet.

Course announcements and lecture notes will appear on this page. Please look at this page regularly.

Potential course projects: In deep learning, the key is data. It is a good idea to work on a problem from your own research area where you can find labeled data. Otherwise, the following is a random collection of potential problems for you to explore:

Reading Materials (the list is far from finished and will expand gradually):

Lecture Notes:

Announcements:

We will not have a lecture on Tuesday, May 23. The time lost from Monday, May 22 will instead be made up at the end of the term, so that more student presentations can be scheduled there.

Presentation Schedule:

Final Project Paper: