HPC Simulation Cluster

The department has configured a Linux cluster designed to meet the needs of most Statistics Department faculty and students. We have created tools intended to make submitting your program straightforward. We are happy to work with you personally on your initial job submission and to answer questions or help with problems you encounter.

Files stored on the cluster are NOT backed up. The disk space is intended for running simulations only, so please keep your results and important code backed up elsewhere. Because this space is shared by everyone using the cluster, we enforce a 300 GB per-user quota to prevent any one user from filling the disk. If you need more disk space than this, please let us know.
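Since the space is shared and quota-limited, it helps to check your own footprint before starting a long run. A minimal sketch using standard Linux tools (the cluster may also provide its own quota-reporting command; ask us if unsure):

```shell
# Total size of everything under your home directory (standard coreutils).
du -sh "$HOME"

# Free space remaining on the filesystem that holds your home directory.
df -h "$HOME"
```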

You can submit as many jobs as you wish, but you may use at most 32 processor cores at a time (16 per job); any additional jobs wait in the queue. The scheduling software places your submission in line with others waiting to run. As cores become available, the next job in line starts, provided its owner is not already using 32 cores. A total of 512 cores are available for processing. Do not submit jobs under other users' names to bypass the 32-core limit; both accounts will be heavily restricted or removed.
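As a concrete illustration of staying within the 16-core-per-job limit, here is what a batch script might look like if the scheduler were Slurm. The page above does not name the cluster's scheduler, so every directive and the submission command here are assumptions, not the cluster's actual interface; ask us for the cluster-specific submission template.

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- the cluster's real scheduler may differ.
#SBATCH --job-name=my-simulation
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16   # keep within the 16-core-per-job limit
#SBATCH --output=sim-%j.log  # %j expands to the job ID

# Run an R simulation script (R is installed on the cluster).
Rscript my_simulation.R
```

Under Slurm this would be submitted with `sbatch my_job.sh`; your queued jobs would then appear in `squeue` until cores free up.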

Cluster specifications:

  • Seven Dell R7425 machines, each with dual 32-core AMD EPYC processors at 2.2 GHz and 512 GB RAM, running 64-bit Ubuntu Linux 18.04
  • 10 Gb fiber-optic private cluster network
  • 18 TB of storage space that can be expanded as needed
  • Highly tuned BLAS libraries integrated with R
  • Software includes SAS 9.4 and recent versions of R and MATLAB
  • The department server room maintains proper temperature and humidity and provides approximately 25 minutes of battery-backup run time for the cluster.