“Large Scale, Highly Parallel Methods for Machine Learning and Sparse Signal Recovery”

Thu, Aug 13, 2015, 2:00 PM

Location: LTS Auditorium, 8080 Greenmead Drive

Speaker:
Thomas Goldstein
Department of Computer Science and UMIACS

Abstract:
The abundance of large, distributed web-based data sets and the recent popularity of cloud computing platforms have opened many doors in machine learning and statistical modeling. However, these resources present a host of new algorithmic challenges.

Practical algorithms for large-scale data analysis must scale well across many machines, require minimal communication, and run with low (nearly linear) time complexity to handle extremely large problems.

In this talk, we will discuss alternating direction methods as a practical and general tool for solving a wide range of model-fitting problems in a distributed framework. We will then focus on new transpose reduction strategies that allow extremely large regression problems to be solved quickly on a single node. We will also study the performance of these algorithms for fitting linear classifiers and sparse regression models on tera-scale datasets using thousands of cores.

Bio:
Thomas Goldstein is an assistant professor in the Department of Computer Science and a member of the University of Maryland Institute for Advanced Computer Studies (UMIACS).

Before coming to UMD, Goldstein held research positions at Stanford University and Rice University.

His research focuses on efficient, low-complexity methods for model fitting and data analysis. Goldstein’s work ranges from large-scale computing on distributed architectures to inexpensive, power-aware algorithms for small-scale embedded systems.

His research is supported by grants from the National Science Foundation and the Office of Naval Research, in addition to resources from Google and the U.S. Naval Academy. Goldstein also conducts research with the Department of Defense Supercomputing Resource Center.