RI: Small: Collaborative Research: A Hierarchy of Describable and Localizable Attributes for Identification, Search, and Image Exploration

The automatic identification of people, places, objects, and especially object categories in images is a central and ongoing challenge in computer vision. This project addresses the problem by using low-level image features to learn intermediate representations in which objects are labeled with an extensive list of highly descriptive visual attributes. The work demonstrates this approach in three domains: faces, plant species, and architecture. In each domain, it develops techniques for deriving visual attribute vocabularies, training attribute detectors, and building compositional models that automatically label attributes in images.
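To make the notion of an attribute detector concrete, the following is a minimal sketch, not the project's actual pipeline: a binary classifier that maps low-level image features to the presence or absence of a single describable attribute. The synthetic features, the example attribute, and the choice of a linear SVM are all illustrative assumptions.

```python
# Hedged sketch: one linear detector per describable attribute (e.g. "smiling"),
# trained on low-level feature vectors. Data here is synthetic; a real system
# would use features such as pooled color or gradient histograms.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: each row is a low-level feature vector for one image;
# labels mark whether the attribute is present.
n_images, n_features = 500, 128
X = rng.normal(size=(n_images, n_features))
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # synthetic "attribute" signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

detector = LinearSVC(C=1.0)                  # one detector per attribute
detector.fit(X_train, y_train)
print("held-out accuracy:", detector.score(X_test, y_test))
```

In practice one such detector would be trained for every attribute in the domain vocabulary, and its confidence scores, rather than hard labels, would feed the downstream models described below.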

The project is making four fundamental contributions to the use of visual attributes. 1) It is developing new methods by which automatic systems and humans can interact to select domain-appropriate attribute vocabularies and label large image collections. 2) It is developing compositional models that capture dependencies between attributes, providing more accurate attribute detection and enabling inference of global properties of objects. 3) Using these compositional models, the project is developing new, localizable attributes that capture the geometric relations between object parts and landmarks. 4) The project is designing algorithms that combine attributes to identify objects, search through vast image collections, and automatically annotate image databases, as sketched below.
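The sketch below illustrates the last idea under stated assumptions: each image is summarized by a vector of attribute detector scores, and identification or search reduces to ranking gallery images by distance in that attribute space. The attribute names, scores, and Euclidean ranking are hypothetical stand-ins, not the project's algorithms.

```python
# Hedged sketch of attribute-based search: images are compared by their
# attribute score vectors rather than by raw pixels.
import numpy as np

attributes = ["smiling", "eyeglasses", "round_face", "dark_hair"]  # hypothetical

def attribute_vector(scores):
    """Stack per-attribute detector scores into one descriptor."""
    return np.array([scores[a] for a in attributes], dtype=float)

# Stand-in gallery: precomputed attribute scores for catalogued images.
gallery = {
    "img_001": attribute_vector({"smiling": 0.9, "eyeglasses": 0.1,
                                 "round_face": 0.7, "dark_hair": 0.8}),
    "img_002": attribute_vector({"smiling": 0.2, "eyeglasses": 0.9,
                                 "round_face": 0.3, "dark_hair": 0.1}),
}

def search(query_scores, top_k=1):
    """Rank gallery images by Euclidean distance in attribute space."""
    q = attribute_vector(query_scores)
    ranked = sorted(gallery.items(), key=lambda kv: np.linalg.norm(kv[1] - q))
    return ranked[:top_k]

print(search({"smiling": 1.0, "eyeglasses": 0.0,
              "round_face": 0.8, "dark_hair": 0.9}))
```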

Not only is this research generating large datasets of labeled images that should help catalyze new research, it is also demonstrating the feasibility of new systems for analyzing images in specialized domains such as faces, plants, and architecture. For example, the project is developing new software applications for analyzing and searching images of faces, as well as free mobile apps for plant species identification.

Principal Investigators