Videos of the workshop are now available.

Date: June 20, 2013
Time: 8:30am to 5:30pm (schedule)
Room: L401, 402, 403 (floorplan)
Venue: Atlanta Marriott Marquis
Posters: 10th Floor, J & K (poster location)
There are strong interactions between learning algorithms, which estimate the parameters of a model from data, and inference algorithms, which use a model to make predictions about data. Understanding the intricacies of these interactions is crucial for advancing the state of the art on real-world tasks in natural language processing, computer vision, computational biology, etc. Yet many facets of these interactions remain unknown. In this workshop, we study the interactions between inference and learning from two complementary perspectives.


Perspective one: how does inference affect learning?
The first perspective studies how the choice of inference technique used during learning influences the resulting model. When faced with models for which exact inference is intractable, efficient approximate inference techniques may be used, such as MCMC sampling, stochastic approximation, belief propagation, beam search, dual decomposition, etc. The workshop will focus on work that evaluates the impact of these approximations on the resulting parameters, in terms of the generalization of the model, the effect on the objective function, and the convergence properties. We will also study approaches that attempt to correct for the approximations in inference by modifying the objective and/or the learning algorithm (for example, contrastive divergence for deep architectures), and approaches that minimize the dependence on the inference algorithm by exploring inference-free methods (e.g., piece-wise training, pseudo-max, and decomposed learning).
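To make this first perspective concrete, here is a minimal sketch (our own illustration, not taken from any submission) of contrastive divergence (CD-1) for a tiny binary restricted Boltzmann machine: a single Gibbs step substitutes for exact sampling from the model distribution, and that approximation enters the parameter update directly. The model sizes, learning rate, and toy data below are illustrative assumptions.

# CD-1 for a tiny binary RBM (illustrative sketch; sizes and data are assumptions).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 4, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

# Toy binary data.
data = rng.integers(0, 2, size=(32, n_visible)).astype(float)

for epoch in range(100):
    for v0 in data:
        # Positive phase: exact conditional p(h | v0).
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)

        # Negative phase: one Gibbs step, the approximation that replaces
        # sampling from the model distribution.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)

        # CD-1 update: data-driven statistics minus reconstruction-driven statistics.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

The point of the sketch is only that the learning rule is defined in terms of the approximate (one-step) negative phase rather than the exact model expectation, so the approximation made at inference time is baked into the learned parameters.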

Perspective two: how does learning affect inference?
Traditionally, the goal of learning has been to find a model for which prediction (i.e., inference) accuracy is as high as possible. However, an increasing emphasis on modeling complexity has shifted this goal: find models for which prediction (i.e., inference) is also as efficient as possible. Thus, there has been recent interest in less conventional approaches to learning that combine generalization accuracy with other desiderata such as faster inference. Examples include: learning classifiers for greedy inference (e.g., Searn, DAgger); structured cascade models that learn a cost function to perform multiple runs of inference from coarse to fine levels of abstraction, trading off accuracy and efficiency at each level; learning cost functions for search in the space of complete outputs (e.g., SampleRank, search in the Limited Discrepancy Search space); and learning structures that admit efficient exact inference. Similarly, there has been work on learning operators for efficient search-based inference, and on approaches that trade off speed and accuracy by incorporating resource constraints such as run-time and memory into the learning objective.
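To make the second perspective concrete, here is a similarly minimal sketch (again an illustration with assumed details, not the published algorithm) of DAgger-style training for greedy sequence labeling: states are collected by rolling out the current greedy policy and labeled by an oracle, so the learned classifier is shaped by the inference procedure it will be used with. The toy tagging task, feature map, and perceptron learner are all assumptions made for the example.

# DAgger-style imitation for greedy sequence labeling (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def oracle_tags(seq):
    # Expert labels for the toy task: tag is 1 once a 1 has appeared in the prefix.
    return np.maximum.accumulate(seq)

def features(x_t, prev_tag):
    # Features for one greedy step: current input bit, previous *predicted* tag, bias.
    return np.array([x_t, prev_tag, 1.0])

def predict(w, x_t, prev_tag):
    return int(features(x_t, prev_tag) @ w > 0)

# Toy data: random binary input sequences.
train_seqs = [rng.integers(0, 2, size=12) for _ in range(40)]

w = np.zeros(3)
dataset = []   # aggregated (state features, expert action) pairs

for iteration in range(8):
    # Roll out the *current* greedy policy; at every state it visits, record
    # the expert's action. Learning thus sees the states inference will produce.
    for seq in train_seqs:
        gold = oracle_tags(seq)
        prev = 0
        for t, x_t in enumerate(seq):
            dataset.append((features(x_t, prev), gold[t]))
            prev = predict(w, x_t, prev)   # follow the learned policy, not the expert

    # Re-train a simple perceptron on all aggregated state-action pairs.
    for _ in range(5):
        for phi, y in dataset:
            y_hat = int(phi @ w > 0)
            if y_hat != y:
                w += (y - y_hat) * phi

Unlike standard supervised training on gold state sequences, the aggregated dataset here contains exactly the states the greedy inference procedure visits, including those reached after its own mistakes, which is the sense in which learning is driven by the choice of inference.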

The workshop aims to bring together practitioners of these approaches in order to develop a unified framework under which these interactions can be studied, understood, and formalized. The following is a partial list of relevant topics for the workshop:
  • learning with approximate inference
  • cost-aware learning
  • learning sparse structures
  • pseudo-likelihood, composite likelihood training
  • contrastive divergence
  • piece-wise and decomposed training
  • decomposed learning
  • coarse to fine learning and inference
  • score matching
  • stochastic approximation
  • incremental gradient methods
  • adaptive proposal distributions
  • learning for anytime inference
  • learning approaches that trade-off speed and accuracy
  • learning to speed up inference
  • learning structures that exhibit efficient exact inference
  • lifted inference for first-order models
  • more ...

