Electrical Engineering Seminar: Yue Lu, September 17, 2015
From Margaret Hastings on September 18th, 2015
Dynamics of Randomized Iterative Methods for Large Systems
We consider efficient methods for solving large systems widely encountered in signal processing, machine learning, and big data analytics. Examples of such methods include randomized iterative projections (a.k.a. the Kaczmarz method), iterative sketching, and randomized coordinate descent. Although early variants have been known to and used by practitioners for a long time, theoretical performance guarantees have been obtained only recently, and only in specific cases. A flurry of recent work has focused on establishing theoretical performance bounds for these methods. This intense interest is spurred by their remarkably impressive empirical performance, despite their often disarming simplicity. Furthermore, their linear complexity makes them well adapted to the computational challenges of high-dimensional settings, in contrast to classical convex-relaxation-based schemes such as SDP.
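The randomized Kaczmarz iteration mentioned above can be sketched in a few lines: at each step, the current iterate is projected onto the solution hyperplane of one randomly chosen row of Ax = b. The sketch below uses the standard Strohmer–Vershynin sampling rule (row i chosen with probability proportional to its squared norm); the specific sampling scheme and problem sizes are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent linear system Ax = b.

    Each iteration picks one row a_i (with probability proportional
    to ||a_i||^2) and projects x onto the hyperplane a_i @ x = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        # Orthogonal projection onto {x : a @ x = b[i]}
        x += (b[i] - a @ x) / row_norms_sq[i] * a
    return x

# Small consistent Gaussian system as a demonstration
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true))  # small residual error
```

Each iteration touches only one row of A, which is the source of the linear per-step complexity noted above.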
In this talk, I will present an exact analysis of the dynamics of such methods for solving large linear and quadratic systems. For the linear case, I will show how to compute the exact mean square error (MSE) using a simple “lifting trick”: the empirical MSE is the trace of the empirical error covariance, whose evolution can be described by a random linear dynamical system in a higher-dimensional “lifted” space. The typical convergence rate of the algorithm is much faster than what the MSE may suggest; I will define a “quenched” error exponent to characterize the typical convergence behavior and apply statistical-physics-based bounds to approximate it. For the quadratic case, with applications to phase retrieval and blind deconvolution, I will consider large random instances of such problems and focus on the randomized Kaczmarz algorithm (RKA). I will show that, in the large-systems limit, the dynamics of RKA converge to trajectories governed by a set of deterministic, coupled ODEs. Combined with suitable spectral initialization, our analysis establishes a theoretical performance guarantee for the RKA when exactly solving these nonconvex quadratic systems. Our analysis agrees with numerical results, which also indicate that previous upper bounds in the literature can often be several orders of magnitude too high. Finally, I will outline a few more results and topics of current research in my group.
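The gap between the MSE and the typical convergence behavior can be illustrated numerically. The MSE averages the squared error over runs (an “annealed” average), while the typical behavior is closer to the geometric mean of the squared error over runs (a “quenched” average); by Jensen's inequality the geometric mean never exceeds the arithmetic mean. A minimal sketch, assuming a Gaussian system and norm-proportional row sampling (both illustrative choices, not from the abstract):

```python
import numpy as np

def kaczmarz_sq_error(A, x_true, n_iters, rng):
    """Run randomized Kaczmarz on the consistent system A x = A x_true,
    sampling rows with probability proportional to their squared norms,
    and return the final squared error ||x_k - x_true||^2."""
    m, n = A.shape
    b = A @ x_true
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x += (b[i] - a @ x) / row_norms_sq[i] * a
    return np.sum((x - x_true) ** 2)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)

# Repeat the algorithm many times on the same system; only the
# row-sampling randomness changes between trials.
errs = np.array([kaczmarz_sq_error(A, x_true, 500, rng) for _ in range(200)])

annealed = errs.mean()                  # MSE: arithmetic mean over trials
quenched = np.exp(np.log(errs).mean())  # typical error: geometric mean

print(quenched <= annealed)  # always True, by Jensen's inequality
```

When the error distribution over trials is heavy-tailed, the arithmetic mean is dominated by rare slow runs, so the MSE decays more slowly than the error observed in a typical run, which is the phenomenon the quenched error exponent is meant to capture.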
