Modern datasets often take the form of large matrices. When the full dataset is not directly observable, the scientist can obtain only measurements, from which she hopes to recover the dataset. Let X be our M-by-N data matrix, and consider n linear measurements of X, y_i = Tr(A_i^⊤ X) (i = 1, …, n), where A_1, …, A_n are measurement matrices. Basic linear algebra tells us that, to reconstruct X from these measurements using a linear algorithm, successful recovery is possible only if n ≥ MN.

It is well known that there is a recovery tradeoff between the information content of the object X_0 to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. In recent years, intense research in applied mathematics, optimization, and information theory has shown that, when the rank r = rank(X) is low, nonlinear algorithms based on convex optimization allow exact recovery of X from only O(rM + rN) measurements, up to logarithmic factors, thereby solving a severely underdetermined system of linear equations. Research in matrix recovery has flourished, spanning both theory (1–6) and efficient algorithms (7–12). The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the (r, n) plane.

Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). Its phase transition curve is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound on the minimal number of measurements needed for matrix recovery, making our algorithm not only state of the art in terms of convergence rate but also near optimal in terms of the matrices it successfully recovers.
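Iterative NNM solvers repeatedly apply singular-value soft thresholding, the proximal operator of the nuclear norm. The optimal shrinker used by our algorithm is a different, nonconvex nonlinearity, but the simpler soft-thresholding step illustrates the general idea of shrinking singular values to denoise a low-rank matrix. A minimal NumPy sketch (the matrix sizes, noise level, and threshold below are illustrative choices, not taken from the paper):

```python
import numpy as np

def svt(Y, tau):
    """Soft-threshold the singular values of Y by tau: the proximal
    operator of the nuclear norm, used inside iterative NNM solvers."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

rng = np.random.default_rng(1)
M, N, r = 30, 40, 2                           # illustrative dimensions and rank
X = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # low-rank truth
Y = X + 0.1 * rng.standard_normal((M, N))     # noisy observation of X

Xhat = svt(Y, tau=1.0)                        # shrink small singular values away
print(np.linalg.norm(Xhat - X), np.linalg.norm(Y - X))
```

Because the noise spreads its energy across all singular values while the signal concentrates in the top r, thresholding typically leaves Xhat closer to X than the raw observation Y.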
We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix X_0 from n < MN measurements y_i = Tr(A_i^⊤ X_0), where each A_i is an M-by-N measurement matrix with i.i.d. entries.
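The measurement model above is easy to simulate. A minimal NumPy sketch (the dimensions, rank, oversampling factor, and Gaussian choice for the A_i are illustrative assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, r = 30, 40, 2          # matrix shape and (low) rank -- illustrative
n = 4 * r * (M + N)          # number of measurements, well below M*N

# Low-rank ground truth X0 = U V^T.
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))

# Stack the vectorized measurement matrices A_i as the rows of an
# n x (M*N) matrix, so that y_i = Tr(A_i^T X0) = <vec(A_i), vec(X0)>.
A = rng.standard_normal((n, M * N))
y = A @ X0.ravel()

print(n, M * N)              # far fewer measurements than unknowns
```

Each y_i is a single scalar, so the n-vector y compresses the M*N unknowns in X0; recovery algorithms must exploit the low rank of X0 to invert this underdetermined map.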