Department of Computer Science
University of Waterloo
Instructor: Richard Mann, DC2510, x3006, firstname.lastname@example.org
Meetings: Fridays 13:00-16:00, MC2306.
First class: Friday 8 January
Important Note: Please send all course email from a
University of Waterloo account (i.e., SSH into a teaching or
research machine and send from there). If you don't have a
Waterloo account yet, please contact the MFCF help
center (MC3011). In the meantime you can use the account you gave
in the first class. Also, please don't send any
messages with "zip" or "gif" attachments (send "tif" or "jpg" instead,
or refer me to a URL); those are common filetypes in Microsoft
viruses and are removed by my SPAM filter.
1. (Jan 8)
Organizational meeting. Overview.
Overview paper: Object Perception as Bayesian Inference. Kersten et al. Ann. Rev. Psychol. 55:271-304, 2004.
Read paper to prepare for the course.
2. (Jan 15)
Image alignment by maximization of mutual information.
Reference: Registering a Multi-sensor Ensemble of Images. Orchard and Mann. To appear, IEEE Transactions on Image Processing, 2010.
Supplementary material: Alignment by Maximization of Mutual Information. P. Viola and W. M. Wells III. IJCV 24(2):137-154, 1997.
Read paper and write a one-page summary/commentary (due …).
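This week's papers score candidate alignments by mutual information; a minimal sketch of the score itself (the binning scheme and names below are illustrative, not the authors' implementation):

```python
# Mutual information I(A;B) from a joint grey-level histogram: the
# quantity the Viola-Wells and Orchard-Mann methods maximize over
# candidate alignments. Binning and names are illustrative only.
from math import log

def mutual_information(img_a, img_b, bins=8, max_val=256):
    """Estimate I(A;B) for two equal-size images (nested lists of ints)."""
    joint, n = {}, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            key = (a * bins // max_val, b * bins // max_val)
            joint[key] = joint.get(key, 0) + 1
            n += 1
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c / n
        pb[j] = pb.get(j, 0) + c / n
    # I(A;B) = sum over (a,b) of p(a,b) log( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * log((c / n) / (pa[i] * pb[j]))
               for (i, j), c in joint.items())
```

Registration would then search over transformations (shifts, rotations, sensor mappings) and keep the one with the highest score; an identical pair attains the maximum (the entropy of the image), while a constant image scores zero against anything.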
3. (Jan 22)
Markov random field models for images.
Reference: Modeling Image Analysis Problems Using Markov Random Fields. S. Z. Li. Handbook of Statistics, Vol. 20, pages 1-43. Elsevier Science, 2000.
Additional references:
Bayesian Modeling of Uncertainty in Low-Level Vision. R. Szeliski. International Journal of Computer Vision 5(3):271-301, 1990.
Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. S. Geman and D. Geman. IEEE Trans. PAMI 6(6), Nov. 1984.
Markov Random Fields with Efficient Approximations. Y. Boykov, O. Veksler, and R. Zabih. CVPR 1998.
Read Li and Szeliski. Write commentary on Li. Geman and Geman is difficult, but worth reading.
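The Geman and Geman reading samples MRFs with the Gibbs sampler; a toy sweep over a binary (Ising-style) image prior with a noisy observation term might look like the following (the coupling parameters beta and eta and the free-boundary choice are illustrative assumptions, not the papers' settings):

```python
# One Gibbs sweep over a binary MRF: each site x[i][j] in {-1,+1} is
# resampled from its local conditional given its 4-neighbours and a
# noisy observation y[i][j]. A toy sketch, not the papers' algorithms.
import random
from math import exp

def gibbs_sweep(x, y, beta=1.0, eta=1.0, rng=None):
    rng = rng or random.Random(0)
    h, w = len(x), len(x[0])
    for i in range(h):
        for j in range(w):
            # sum over the 4-neighbourhood (free boundary conditions)
            s = sum(x[a][b]
                    for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < h and 0 <= b < w)
            # P(x_ij = +1 | rest) = sigmoid(2 (beta s + eta y_ij))
            e = beta * s + eta * y[i][j]
            p_plus = 1.0 / (1.0 + exp(-2.0 * e))
            x[i][j] = 1 if rng.random() < p_plus else -1
    return x
```

Repeating sweeps (optionally with an annealing schedule, as in Geman and Geman) draws samples from the posterior; with a very strong data term the sampler simply reproduces the observation.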
4. (Jan 29)
Bayesian model comparison; the generic viewpoint assumption.
Reference #1: Bayesian Interpolation. David MacKay. Neural Computation 4:415-447, 1992.
Reference #2: Exploiting the Generic Viewpoint Assumption. W. T. Freeman.
Read MacKay first, then Freeman. Comment on the one of your choice.
5. (Feb 5)
What makes a good feature?
Reference #1: What Makes a Good Feature? Allan Jepson and Whitman Richards.
Reference #2: Qualitative Probabilities for Image Interpretation. Allan Jepson and Richard Mann. ICCV 1999.
Read both papers, Jepson/Richards first. Comment on Jepson/Mann.
6. (Feb 12)
Hidden Markov Models (presenter: …).
Reference: A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Lawrence Rabiner. Proc. IEEE 77(2), 1989.
Read and comment on this paper.
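The core of the Rabiner tutorial is the forward-backward machinery; the forward recursion alone, in the tutorial's (A, B, pi) notation, can be sketched as:

```python
# Forward recursion from Rabiner's tutorial: alpha_t(j) is the joint
# probability of the first t observations and being in state j at time t.
# A is the transition matrix, B the (discrete) emission probabilities,
# pi the initial state distribution.
def forward(obs, A, B, pi):
    """Return P(obs | model) for a discrete-observation HMM (no scaling)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)
```

On long sequences the alphas underflow; the tutorial's scaling trick (normalizing alpha at each step and accumulating the log of the scale factors) fixes that in practice.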
Reading week (no class).
7. (Feb 26)
Condensation tracking; the infinite mixture model (Ricardo).
Reference #1: CONDENSATION: Conditional Density Propagation for Visual Tracking. M. Isard and A. Blake. IJCV 29(1):5-28, 1998.
Reference #2: The Infinite Gaussian Mixture Model. Carl Edward Rasmussen. NIPS 12, 2000.
Read and comment on both papers.
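The CONDENSATION loop is short enough to sketch; this is a hedged per-iteration outline, where the `dynamics` and `likelihood` callables stand in for the paper's learned motion model and observation model:

```python
# One iteration of the factored-sampling loop in CONDENSATION:
# resample by weight, propagate through the dynamics, reweight by the
# observation likelihood. All names here are illustrative.
import random

def condensation_step(particles, weights, dynamics, likelihood, rng=None):
    rng = rng or random.Random(0)
    # 1. select: draw N particles with probability proportional to weight
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    # 2. predict: apply the (possibly stochastic) dynamical model
    predicted = [dynamics(p, rng) for p in resampled]
    # 3. measure: weight each prediction by the observation likelihood
    w = [likelihood(p) for p in predicted]
    total = sum(w)
    return predicted, [wi / total for wi in w]
```

The weighted particle set approximates the posterior density over the tracked state at each frame, which is what lets Condensation represent the multi-modal distributions that a Kalman filter cannot.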
8. (Mar 5)
CAT scan (Ahmed); stereo vision.
No reference for #1.
Reference #2: Stereo Matching Using Belief Propagation. Jian Sun, Nan-Ning Zheng, and Heung-Yeung Shum. IEEE Trans. PAMI 25(7):787-800, July 2003.
No reading for #1. Read the stereo paper and comment.
9. (Mar 12)
Optical snow (application of the …).
Reference: Optical Snow. M. S. Langer and R. Mann. IJCV 55(1):55-71, 2003.
Read and comment on paper.
10. (Mar 19)
Statistics of natural images.
Reference #1: Origins of Scaling in Natural Images. D. L. Ruderman. Vision Research 37(23):3385-3398, 1997.
Reference #2: Natural Image Statistics and Efficient Coding. B. A. Olshausen and D. J. Field. Network: Computation in Neural Systems 7(2):333-339, 1996.
Read and comment on Olshausen.
11. (Mar 26)
Information theory for learning.
Reference: An Information-Maximization Approach to Blind Separation and Blind Deconvolution. A. Bell and T. Sejnowski. Neural Computation 7:1129-1159, 1995.
Read and comment on paper.
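The Bell-Sejnowski paper derives a per-sample learning rule; the sketch below uses the natural-gradient form of that rule (Amari's variant, dW = lr (I + (1 - 2 g(u)) u^T) W with logistic g) rather than the paper's original matrix-inverse form, and the step size is an illustrative assumption:

```python
# One natural-gradient infomax update in the style of Bell & Sejnowski:
# dW = lr * (I + (1 - 2 g(u)) u^T) W, with u = W x and logistic g.
# Pure-Python matrices (lists of lists); names are illustrative.
from math import exp

def logistic(v):
    return 1.0 / (1.0 + exp(-v))

def infomax_step(W, x, lr=0.01):
    n = len(W)
    u = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    phi = [1.0 - 2.0 * logistic(ui) for ui in u]           # score term
    G = [[(1.0 if i == j else 0.0) + phi[i] * u[j]         # I + phi u^T
          for j in range(n)] for i in range(n)]
    return [[W[i][j] + lr * sum(G[i][k] * W[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
```

Iterating this step over samples of a linearly mixed signal drives W toward an unmixing matrix; Bell and Sejnowski obtain the rule by maximizing the output entropy of g(Wx).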
12. (Apr 1)
Note: different time: Thursday 2:30pm, DC2306 (AI lab).