My research sits at the intersection of statistical theory and computer science methodology, and it is part of the modern movement to mine "big data" for fundamentally new science from complex datasets. Specifically, I seek to illuminate how the nature and quantity of regularization serve as tools for improved scientific understanding.

Through this lens, my research divides into four intersecting areas: (1) computational approximation methodology, (2) model selection, (3) high-dimensional and nonparametric theory, and (4) applications that draw on the other three.

My work explores and exploits the connections between these areas rather than treating them separately. My contributions have grown out of the pressing need to justify methodology as it is implemented in applications, rather than in a vacuum devoid of empirical motivation.

Statistics, Computer Science
PhD, Carnegie Mellon University, Statistics, 2012
MS, Carnegie Mellon University, Statistics, 2008
BS, Indiana University Bloomington, Music, 2006
BA, Indiana University Bloomington, Economics and Mathematics, 2006