“Guaranteed Non-convex Learning
Algorithms through Tensor Factorization”
Modern machine learning involves massive
datasets of text, images, videos, biological data, and so on. While supervised
learning has seen impressive results over the last few years, unsupervised
learning remains a largely unsolved challenge. Can we have machines make automated
discoveries using unlabeled samples? I will demonstrate how tensor methods are
able to significantly improve upon previous unsupervised learning approaches
such as variational inference, in terms of both training time and model fit.
Tensor methods are applicable in a variety of domains such as document
categorization, social network analysis, and learning text embeddings. In
addition, tensor methods have strong theoretical guarantees and can learn the
correct model under mild conditions. The analysis of tensor methods also provides
guidelines on how other non-convex learning problems can be solved efficiently,
both in theory and in practice.
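A core primitive behind the tensor methods mentioned above is decomposing a symmetric third-order moment tensor into rank-one components. As a purely illustrative sketch (not code from the talk), the snippet below runs the tensor power method with deflation on a synthetic orthogonally decomposable tensor; all names and parameters here are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: T = sum_i lam_i * a_i (x) a_i (x) a_i,
# with orthonormal components a_i (columns of A).
d = 5
lams = np.array([3.0, 2.0, 1.0])
A = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :3]
T = np.einsum('i,ji,ki,li->jkl', lams, A, A, A)

def power_iteration(T, iters=100):
    """Recover one (eigenvalue, eigenvector) pair of a symmetric 3-tensor."""
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        # x <- T(I, x, x), then renormalize.
        x = np.einsum('jkl,k,l->j', T, x, x)
        x /= np.linalg.norm(x)
    lam = np.einsum('jkl,j,k,l->', T, x, x, x)
    return lam, x

# Extract all components by repeated power iteration + deflation.
est = []
for _ in range(3):
    lam, x = power_iteration(T)
    est.append((lam, x))
    T = T - lam * np.einsum('j,k,l->jkl', x, x, x)  # deflate

est.sort(key=lambda p: -p[0])
```

In exact arithmetic each power iteration converges to one of the true components `a_i`, so the recovered eigenvalues match `lams` and each recovered vector aligns with a column of `A` up to sign; in applications, `T` would be an empirical moment tensor estimated from data rather than constructed from known factors.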
Speaker: Anima Anandkumar has been a
faculty member in the EECS Department at U.C. Irvine since August
2010. Her research interests are in the areas of large-scale machine learning,
non-convex optimization and high-dimensional statistics. In particular, she has
been spearheading the development and analysis of tensor algorithms for a
variety of learning problems. She is the recipient of several awards such
as the Alfred P. Sloan Fellowship, Microsoft Faculty Fellowship, Google
Research Award, ARO and AFOSR Young Investigator Awards, NSF CAREER Award,
Early Career Excellence in Research Award at UCI, Best Thesis Award from the
ACM SIGMETRICS society, IBM Fran Allen PhD Fellowship, and Best Paper Awards
from the ACM SIGMETRICS and IEEE Signal Processing societies. She received her
B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from
Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009
to 2010, and a visiting faculty at Microsoft Research New England in 2012 and
2014.
Host: Boaz Barak