Statistical Computing
The last twenty years have witnessed nothing short of a digital data deluge. Modern data have become both voluminous and high-dimensional, creating an urgent need to revise classical inferential techniques to handle big data while reducing computation times. In addition to the need for new theory, the development of computationally efficient and scalable algorithms has become an exciting new frontier for innovating statistical methodology. Furthermore, the advent of high performance computing has opened avenues for massively parallel computation, with supercomputers at our fingertips (https://hpc.ncsu.edu/main.php). Faculty in the computational statistics group are developing numerically sound optimization and sampling algorithms to fit models that exploit inherently low-dimensional structure in high-dimensional big data.
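As a toy illustration of what "exploiting low-dimensional structure in high-dimensional data" can mean in practice, the sketch below runs cyclic coordinate descent for the lasso, a canonical optimization algorithm that recovers a sparse (low-dimensional) signal when the number of predictors far exceeds the sample size. This is a generic, hedged example, not a description of any particular faculty member's methods; the problem sizes, regularization level, and function names are assumptions made for illustration.

```python
# Minimal illustrative sketch: coordinate descent for the lasso.
# All dimensions and tuning choices below are hypothetical.
import numpy as np


def soft_threshold(z, gamma):
    """Soft-thresholding operator: the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)


def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n       # per-coordinate curvature X_j'X_j / n
    residual = y - X @ beta                 # running residual y - X beta
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of coordinate j with the partial residual (excluding j)
            rho = X[:, j] @ residual / n + col_sq[j] * beta[j]
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            residual += X[:, j] * (beta[j] - new_bj)   # keep residual in sync
            beta[j] = new_bj
    return beta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, k = 200, 1000, 5                  # p >> n, but only k true signals
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:k] = 3.0
    y = X @ beta_true + 0.5 * rng.standard_normal(n)
    beta_hat = lasso_coordinate_descent(X, y, lam=0.2)
    print("nonzero estimates:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```

The point of the sketch is the scaling: each coordinate update touches a single column of X, so the cost per sweep is linear in the data size, and the l1 penalty drives most coefficients exactly to zero, which is how sparsity-inducing optimization makes high-dimensional problems computationally tractable.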