Peter Radchenko

PhD Yale
Associate Professor

Rm 4158
H70 - Abercrombie Building
The University of Sydney
NSW 2006 Australia

Telephone +61 2 8627 5196
peter.radchenko@sydney.edu.au

Bio

Peter Radchenko is an Associate Professor of Business Analytics at the University of Sydney Business School. Prior to joining the University of Sydney in 2017, he held academic positions at the University of Chicago and in the Marshall School of Business at the University of Southern California. Peter has a PhD in Statistics from Yale University and an undergraduate degree in Mathematics and Applied Mathematics from Lomonosov Moscow State University.

Peter Radchenko's primary research focus is on developing new methodology for dealing with massive and complex modern data. Such large-scale problems fall under the general framework of high-dimensional statistics and statistical machine learning, which are the main areas of Peter's research. In particular, Peter has done extensive work in the area of high-dimensional regression, where the number of predictors is large relative to the number of observations. He has also worked on problems in large-scale cluster analysis, including estimating the number of clusters and feature screening. Another area of Peter's research is functional data analysis, in which the measurements of a function or curve are treated as a single observation of the function as a whole. Peter's research papers have been published in the Journal of the Royal Statistical Society, the Annals of Statistics, the Journal of the American Statistical Association, Biometrika, and the Annals of Applied Statistics.

Research Interests

Peter Radchenko's research focusses on developing and analysing novel methodology for dealing with massive and complex modern data. Fields ranging from finance, marketing and economics to image analysis, signal processing, data compression and computational biology nowadays share the common feature of trying to extract information from vast noisy data sets. The age of Big Data has created an abundance of interesting problems, posing new challenges not present in conventional data analysis. Such large-scale problems fall under the general framework of high-dimensional statistics and statistical machine learning, which are the primary areas of Peter Radchenko's research. His main focus has been on problems in high-dimensional regression, convex clustering and functional data analysis.

Peter Radchenko's work on high-dimensional regression problems involves fitting models and performing variable selection in settings where the number of predictors is large relative to the number of observations. His corresponding methodological work covers a wide range of topics, including linear and nonlinear additive models, nonlinear interaction models, generalized linear models, and single index models.
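
As a generic illustration of this setting (a minimal sketch using the standard lasso from scikit-learn, with made-up data and penalty level; it is not Peter Radchenko's own methodology), an l1-penalised fit selects a small subset of predictors even when there are far more predictors than observations:

import numpy as np
from sklearn.linear_model import Lasso

# Simulated data with many more predictors than observations (p > n).
rng = np.random.default_rng(0)
n, p = 50, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                          # only the first five predictors matter
y = X @ beta + rng.standard_normal(n)

# The l1 penalty shrinks most coefficients exactly to zero, performing selection.
model = Lasso(alpha=0.5).fit(X, y)
print("selected predictors:", np.flatnonzero(model.coef_))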

One serious limitation of the traditional clustering methods, such as k-means, is the non-convexity of the corresponding optimization problems. Peter Radchenko has worked on developing and analysing highly scalable convex clustering approaches that can handle massive amounts of data. His recent papers focus on estimating the number of clusters and on feature screening in large scale cluster analysis.
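
For a rough sense of what a convex formulation looks like (a toy one-dimensional sketch handed to the generic solver cvxpy, with made-up data and penalty level; the scalable algorithms developed in Peter Radchenko's papers are considerably more sophisticated), an l1 fusion penalty pulls centroids together, and fused centroids define the clusters:

import numpy as np
import cvxpy as cp
from itertools import combinations

x = np.array([0.1, 0.3, 0.2, 5.0, 5.2, 4.9])     # two obvious groups
n, lam = len(x), 0.4

# D maps the vector of centroids u to all pairwise differences u_i - u_j.
pairs = list(combinations(range(n), 2))
D = np.zeros((len(pairs), n))
for row, (i, j) in enumerate(pairs):
    D[row, i], D[row, j] = 1.0, -1.0

# Convex objective: squared-error fit plus an l1 fusion penalty on the centroids.
u = cp.Variable(n)
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x - u) + lam * cp.norm1(D @ u))).solve()

# Centroids that fuse to (nearly) the same value belong to the same cluster.
print(np.round(u.value, 2))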

The key principle of the Functional Data Analysis field is to treat the measurements of a function or curve not as multiple data points, but as a single observation of the function as a whole. This approach allows one to more fully exploit the structure of the data. The infinite dimensional nature of functional data makes it critical to reduce the dimension of the predictor data before fitting a regression model. Most existing methods utilize an unsupervised approach, such as functional principal component analysis. The novel methodology developed by Peter Radchenko and his collaborators performs the dimension reduction in a supervised fashion, taking the response information into account.
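
To make the contrast concrete, the conventional unsupervised two-step baseline mentioned above can be sketched as follows (simulated curves, principal components from scikit-learn, and ordinary least squares on the scores; this illustrates the baseline only, not the supervised methodology developed by Peter Radchenko and his collaborators):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Simulate 100 curves observed on a common grid of 50 points.
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 50)
scores = rng.standard_normal((100, 2))
curves = (scores[:, [0]] * np.sin(2 * np.pi * grid)
          + scores[:, [1]] * np.cos(2 * np.pi * grid)
          + 0.1 * rng.standard_normal((100, grid.size)))
y = 2.0 * scores[:, 0] - scores[:, 1] + 0.1 * rng.standard_normal(100)

# Unsupervised dimension reduction, then regression on the leading scores.
pc_scores = PCA(n_components=2).fit_transform(curves)
fit = LinearRegression().fit(pc_scores, y)
print("training R^2:", round(fit.score(pc_scores, y), 3))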

A recent direction of Peter Radchenko's research takes advantage of impressive advances in mixed integer optimization and other modern optimization techniques to solve certain classes of discrete problems arising in statistics. Together with his collaborators, he has developed novel mixed integer optimization based approaches for fitting sparse high-dimensional linear and nonlinear additive models. He has also worked on the problem of best subset selection in low-signal high-dimensional regimes. His areas of interest for current and future research include high-dimensional nonlinear regression models with shape constraints and sparse generalized linear models.
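
As a conceptual illustration of the best subset problem itself (brute-force enumeration over a handful of made-up predictors; mixed integer optimization is what makes the same problem tractable at realistic scale, and this sketch is not the approach developed in the papers):

import numpy as np
from itertools import combinations

# Simulated data in which only predictors 0 and 3 drive the response.
rng = np.random.default_rng(2)
n, p, k = 40, 8, 2
X = rng.standard_normal((n, p))
y = X[:, 0] - 2.0 * X[:, 3] + 0.5 * rng.standard_normal(n)

# Best subset of size k: minimise the residual sum of squares over all subsets.
best_rss, best_subset = np.inf, None
for subset in combinations(range(p), k):
    cols = list(subset)
    coef, rss, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    rss = rss[0] if rss.size else np.sum((y - X[:, cols] @ coef) ** 2)
    if rss < best_rss:
        best_rss, best_subset = rss, subset

print("best size-%d subset:" % k, best_subset)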

Selected publications

2017

Journal Articles

Banerjee T, Mukherjee G, and Radchenko P (2017) Feature Screening in Large Scale Cluster Analysis. Journal of Multivariate Analysis, 161, 191-212.

Mazumder R, and Radchenko P (2017) The Discrete Dantzig Selector: Estimating Sparse Linear Models via Mixed Integer Linear Optimization. IEEE Transactions on Information Theory, 63 (5), 3053-3075.

Radchenko P, and Mukherjee G (2017) Convex clustering via l1 fusion penalization. Journal of the Royal Statistical Society Series B, 79 (5), 1527-1546.

2015

Journal Articles

Fan Y, James G, and Radchenko P (2015) Functional additive regression. Annals of Statistics, 43 (5), 2296-2325.

Radchenko P (2015) High dimensional single index models. Journal of Multivariate Analysis, 139, 266-282.

Radchenko P, Qiao X, and James G (2015) Index Models for Sparsely Sampled Functional Data. Journal of the American Statistical Association, 110 (510), 824-836.

2011

Journal Article

Radchenko P, and James G (2011) Improved variable selection with Forward-Lasso adaptive shrinkage. Annals of Applied Statistics, 5 (1), 427-448.

Recent Units Taught

  • QBUS3820 Data Mining and Data Analysis

  • QBUS6810 Statistical Learning and Data Mining