Videos of the workshop are now available: http://techtalks.tv/icml/2012/inferning2012/

The talks will take place in Lecture Theatre 5 (LT5), while the posters will be in the Appleton Tower (AT) atrium. The workshop will be hosted at the International Conference on Machine Learning (ICML) 2012 in Edinburgh, UK.

## Organizers

- Michael Wick (University of Massachusetts, Amherst)
- Sameer Singh (University of Massachusetts, Amherst)
- David Weiss (University of Pennsylvania)
- Andrew McCallum (University of Massachusetts, Amherst)
## Description

This workshop studies the interactions between algorithms that learn a model and algorithms that use the resulting model parameters for inference. These interactions are studied from two perspectives.

The first perspective studies how the choice of an inference algorithm influences the parameters the model ultimately learns. For example, many parameter estimation algorithms require inference as a subroutine. Consequently, when we are faced with models for which exact inference is expensive, we must use an approximation instead: MCMC sampling, belief propagation, beam search, etc. On some problems these approximations yield superior models, yet on others they fail catastrophically. We invite studies that analyze (both empirically and theoretically) the impact of approximate inference on model learning. How does approximate inference alter the learning objective? Affect generalization? Influence convergence properties? Further, does the behavior of inference change as learning continues to improve the quality of the model?

The second perspective from which we study these interactions considers how the learning objective and model parameters can impact both the quality and performance of inference at test time. These unconventional approaches to learning combine generalization to unseen data with other desiderata, such as fast inference. For example, work on structured cascades learns models for which greedy, efficient inference can be performed at test time while still maintaining accuracy guarantees. Similarly, there has been work that learns operators for efficient search-based inference, and work that incorporates resource constraints on running time and memory into the learning objective.

This workshop brings together practitioners from different fields (information extraction, machine vision, natural language processing, computational biology, etc.) in order to study a unified framework for understanding and formalizing the interactions between learning and inference. The following is a partial list of relevant keywords for the workshop:

- learning with approximate inference
- cost-aware learning
- learning sparse structures
- pseudo-likelihood training
- contrastive divergence
- piecewise training
- coarse to fine learning and inference
- score matching
- stochastic approximation
- incremental gradient methods
- and more ...
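To make one of these keywords concrete, the following is a minimal, hypothetical sketch of pseudo-likelihood training for a small binary pairwise MRF (an Ising-style model). It is not drawn from any workshop submission; the model size, sampler, and learning rate are illustrative assumptions. Exact maximum likelihood would require the partition function, which is exponential in the number of variables, so pseudo-likelihood replaces it with a product of tractable per-variable conditionals:

```python
import numpy as np

# Toy Ising-style MRF over n binary spins x_i in {-1, +1} with symmetric
# pairwise weights W (zero diagonal).  All sizes/constants are illustrative.
rng = np.random.default_rng(0)
n = 5
W_true = rng.normal(scale=0.5, size=(n, n))
W_true = (W_true + W_true.T) / 2.0
np.fill_diagonal(W_true, 0.0)

def gibbs_sample(W, n_samples=500, burn=200):
    """Draw samples via Gibbs sampling (itself an approximate-inference
    subroutine): P(x_i = +1 | x_-i) = sigmoid(2 * W_i . x)."""
    x = rng.choice([-1.0, 1.0], size=n)
    out = []
    for t in range(burn + n_samples):
        for i in range(n):
            p = 1.0 / (1.0 + np.exp(-2.0 * W[i] @ x))
            x[i] = 1.0 if rng.random() < p else -1.0
        if t >= burn:
            out.append(x.copy())
    return np.array(out)

X = gibbs_sample(W_true)  # "training data" drawn from the true model

def pl_gradient(W, X):
    """Gradient of the average log-pseudo-likelihood
    sum_i log P(x_i | x_-i) with respect to W."""
    grad = np.zeros_like(W)
    for x in X:
        for i in range(n):
            mu = np.tanh(W[i] @ x)        # E[x_i | x_-i] under current W
            grad[i] += (x[i] - mu) * x
    grad = (grad + grad.T) / 2.0          # keep W symmetric
    np.fill_diagonal(grad, 0.0)
    return grad / len(X)

# Gradient ascent on the (concave) pseudo-likelihood objective.
W = np.zeros((n, n))
for step in range(200):
    W += 0.1 * pl_gradient(W, X)
```

Note that no partition function is ever computed during learning: each conditional `P(x_i | x_-i)` normalizes over a single binary variable. This is precisely the kind of learning/inference interaction the workshop targets, since the pseudo-likelihood surrogate changes the objective being optimized and hence the parameters ultimately learned.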