Workshop on Visual and Contextual Learning from Annotated Images and Videos

June 25, 2009, Fontainebleau Resort, Miami Beach


Important Dates:

Paper Submission Deadline:  March 29, 2009 (5PM Pacific Time)
Notification of Acceptance: April 10, 2009
Camera-Ready Copies:        April 14, 2009
Workshop Date:              June 25, 2009 (jointly held with ViSU'09)

Technical Program 

Overview:

There has been significant interest in the computer vision community in using visual and contextual models for high-level semantic reasoning. Many weakly annotated images and videos are available on the internet, along with other rich sources of information such as dictionaries, which can be used to learn visual and contextual models for recognition. The goal of this workshop is to investigate how linguistic information, available in the form of captions and other sources, can be used to aid visual and contextual learning. For example, captions can help train object detectors; adjectives in captions can be used to train material recognizers; and written descriptions of objects can be used to train object recognizers.

This workshop aims to bring together researchers in the fields of contextual modeling in computer vision, machine learning, and natural language processing to explore a variety of perspectives on how such datasets can be employed to learn visual appearance and contextual models for recognition. Recent progress in machine learning on scalable learning and on modeling uncertainty in large-scale annotated data has also encouraged the use of data from websites, where annotations are obtained in less controlled settings.

Scope:

The workshop program will consist of spotlights, posters, invited talks and discussion panels. The list of possible topics includes (but is not limited to) the following:

  • Contextual Relationships for Recognition

    • Scene, object, action, and event recognition using contextual models

    • Learning parameters of contextual models

    • Learning structure of contextual models

    • Inference using contextual models

  • Annotations to Assist Learning and Incremental Labeling

    • Using annotations to learn visual models

    • Using richer language models of annotations

    • Learning to recognize by reading

    • Using text corpora on the web, including dictionaries

    • Modeling annotations with errors

  • Others

    • Visual learning for classification and recognition

    • Biologically motivated visual and contextual models

    • Scalable learning from large datasets

Webmaster: Abhinav Gupta