Schedule of Topics

This is the schedule of topics for the Seminar on Computational Learning, Fall 2008. The readings listed are pointers into the course reading list.

The schedule is subject to change: some topic areas may take longer than expected, so keep an eye on the class mailing list or e-mail me for the "official" dates.

Sep 8
  Topic: Administrivia, overview, and core computational learning concepts
  Readings: Mitchell, Chapter 1; Sagan (1979); Pinker (1979), pp. 217-220; Gordon and desJardins (1995)

Sep 15
  Topic: Maximum likelihood; smoothing; expectation-maximization (EM) algorithms
  Readings: S. Purcell, Maximum Likelihood Estimation, Sections 1-2; Gale and Sampson (1995); Pereira (2000), esp. Section 4
  Other: MathWorld entries on the Bernoulli Distribution and Maximum Likelihood; Church and Gale (1990); Mitchell, Sec. 6.12

Sep 22
  Topic: EM for HMMs (the forward-backward algorithm); feature-based representations and dimensionality reduction (LSA)
  Readings: Philip Resnik, A Simple Recipe for EM Update Equations; Landauer (1998) or Landauer and Dumais (1997); all discussion in the class blog
  Other: HMM background: Jurafsky and Martin (2nd ed.), pp. 139-151 and 173-192 (in the PDF locker as hmm_part[1234567].pdf)

Sep 29
  Topic: [Bill Idsardi] Perceptron learning, neural network classification. Brief introductory discussion, followed by watching Geoff Hinton, The Next Generation of Neural Networks (60 minutes); discussion
  Other: Mitchell, Chapter 4

Oct 6
  Topic: [Bill Idsardi] Support vector machines and the kernel trick; VC dimension

Oct 13
  Topic: Bayesian inference and graphical models
  Readings: Charniak, E. (1991), Bayesian Networks Without Tears. AI Magazine, 12, 50-63
  Other: Goldwater on lexical acquisition; Google Talk by Justin Domke

Oct 20
  Topic: [Chris Dyer] The minimum description length principle (MDL)
  Readings: Chapter 1 of Peter Grunwald's tutorial on MDL; Brent et al. (1995)

Oct 27
  Topic: The maximum entropy principle (maxent)
  Readings: Adwait Ratnaparkhi, A Maximum Entropy Model for Part-of-Speech Tagging (EMNLP 1996)
  Other: Other useful readings include Adwait Ratnaparkhi's A Simple Introduction to Maximum Entropy Models for Natural Language Processing (1997), Adam Berger's maxent tutorial, and Noah Smith's notes on loglinear models

Nov 3
  Topic: Stochastic optimization: genetic algorithms, simulated annealing
  Readings: Darrell Whitley, A Genetic Algorithm Tutorial
  Other: Gendreau, An Introduction to Tabu Search; a bit of ancient history

Nov 10
  Topic: [Bill Idsardi] PAC learnability
  Readings: Optional reading

Nov 17
  Topic: Language identification in the limit (Gold's paradigm)
  Readings: Chapters 1 and 2 of Wexler and Culicover (1980) (zipfile)
  Other: Gold (1967), Language Identification in the Limit; Osherson, Stob, and Weinstein (1986), Systems That Learn (MIT Press)

Nov 24
  Topic: Grammatical inference (syntactic pattern recognition, grammar induction)
  Readings: Pereira and Wright, Finite-State Approximation of Phrase Structure Grammars; Stolcke and Omohundro, Inducing Probabilistic Grammars by Bayesian Model Merging
  Other: Angluin and Smith (1983), Inductive Inference: Theory and Methods; Parekh and Honavar, Grammar Inference, Automata Induction, and Language Acquisition (1998); Phil Blunsom, Trevor Cohn, and Miles Osborne, Bayesian Synchronous Grammar Induction

Dec 2
  Topic: Current machine learning approaches to grammar learning
  Readings: Goldwater and Johnson (ref?); Taskar et al., Max-Margin Parsing; Klein and Manning, A Generative Constituent-Context Model for Improved Grammar Induction; Klein and Manning, Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency; Zettlemoyer and Collins, Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars; Zettlemoyer and Collins, Online Learning of Relaxed CCG Grammars for Parsing to Logical Form

Dec 8
  Topic: Carry-over from last class; project status reports

Dec 15
  Topic: Come to Chris Dyer's 895 defense!
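
For students who want a concrete anchor for the Sep 15 session before reading Purcell, here is a minimal sketch of EM on the classic two-coin mixture problem: each run of flips comes from one of two biased coins, we never observe which, and EM recovers both biases. This is illustrative only; the function and variable names are my own, not from the assigned readings.

```python
import random

def em_two_coins(flips, iters=50):
    """EM for a mixture of two biased coins. Each row of `flips` is a
    run of 0/1 outcomes from one (unobserved) coin; returns estimated
    head-probabilities for the two coins."""
    theta_a, theta_b = 0.6, 0.5   # initial guesses; they must differ
    for _ in range(iters):
        num_a = den_a = num_b = den_b = 0.0
        for row in flips:
            h, n = sum(row), len(row)
            # E-step: posterior probability that this row came from coin A
            like_a = theta_a ** h * (1 - theta_a) ** (n - h)
            like_b = theta_b ** h * (1 - theta_b) ** (n - h)
            w_a = like_a / (like_a + like_b)
            # Accumulate expected heads and total flips credited to each coin
            num_a += w_a * h
            den_a += w_a * n
            num_b += (1 - w_a) * h
            den_b += (1 - w_a) * n
        # M-step: re-estimate each bias from its expected counts
        theta_a, theta_b = num_a / den_a, num_b / den_b
    return theta_a, theta_b

# Demo on simulated data: two coins with hidden biases 0.8 and 0.3
# (the biases and sample sizes here are arbitrary illustrative values).
random.seed(0)
def draw(p, n):
    return [1 if random.random() < p else 0 for _ in range(n)]
data = [draw(0.8, 100) for _ in range(20)] + [draw(0.3, 100) for _ in range(20)]
theta1, theta2 = em_two_coins(data)
```

The same alternation of expected counts (E-step) and count-based re-estimation (M-step) is what the Sep 22 session generalizes to HMMs, where the forward-backward algorithm supplies the expected counts.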

Philip Resnik, Associate Professor
Department of Linguistics and Institute for Advanced Computer Studies

Department of Linguistics
1401 Marie Mount Hall
University of Maryland
College Park, MD 20742 USA

UMIACS phone: (301) 405-6760
Linguistics phone: (301) 405-8903
Fax: (301) 314-2644 / (301) 405-7104
E-mail: resnik AT umd _DOT.GOES.HERE_ edu