We build a scalable optimization and data-acquisition framework that achieves near-optimal computational efficiency and dimensionality reduction while remaining practically implementable for a broad set of processing and learning problems. We introduce new theoretical tools to develop optimization methods that minimize convex objectives over combinatorial low-dimensional models, and to adaptively design extremal combinatorial objects, such as extractors, expanders, and polar codes, as compressive sensing architectures that fully leverage learning.
Learning theory and methods for low-dimensional signal models
We create theoretical and algorithmic foundations for provable learning (in both generalization and complexity) of structured low-dimensional models. We investigate how to exploit geometric structure and diminishing returns (i.e., submodularity) in our learning objectives to move from compressive sensing of signals toward compressive processing of information for scalable parameter estimation.
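To make the diminishing-returns property concrete, the following is a minimal sketch (not the project's actual objective): a set-cover function is submodular, meaning that for any sets A ⊆ B, the marginal gain of adding a new element to the smaller set A is at least the gain of adding it to the larger set B. The sets and elements below are hypothetical toy data.

```python
def coverage(chosen, universe_sets):
    """f(S) = number of universe elements covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= universe_sets[i]
    return len(covered)

# Toy collection of sets (assumed example data).
universe_sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}}

# Diminishing returns: for A subset of B, adding set 2 helps A at least
# as much as it helps B, because B already covers more of the universe.
A, B, x = {0}, {0, 1}, 2
gain_A = coverage(A | {x}, universe_sets) - coverage(A, universe_sets)
gain_B = coverage(B | {x}, universe_sets) - coverage(B, universe_sets)
# gain_A = 2 (elements 3 and 4 are new to A); gain_B = 1 (only 4 is new to B)
assert gain_A >= gain_B
```

This property is what makes greedy-style algorithms effective for such objectives, and it is the structure the learning methods above are designed to exploit.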
We develop new compressive sensing architectures to reshape the fields of data streaming and analog-to-digital converter design in the presence of the increasing memory and energy restrictions of future systems. We focus on a new paradigm, called analog-to-information conversion, as a replacement for conventional ADC technologies.
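The core idea behind compressive acquisition can be sketched in a few lines: a k-sparse signal is observed through far fewer random measurements than its ambient dimension and then recovered greedily. This is a hedged toy illustration with an unstructured Gaussian measurement matrix standing in for the structured architectures (extractors, expanders, polar codes) discussed above; the dimensions and the use of orthogonal matching pursuit are assumptions for the example, not the project's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 20, 3          # ambient dimension, measurements, sparsity (toy sizes)

# A k-sparse signal in R^n.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix: m << n compressive measurements.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
residual, S = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(Phi.T @ residual)))
    S.append(j)
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    residual = y - Phi[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
print("recovery error:", np.linalg.norm(x_hat - x))
```

An analog-to-information converter pushes this trade in hardware: rather than sampling at the Nyquist rate and discarding redundancy digitally, the device acquires only the informative projections in the first place.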