
HPC Server Management & Automation

Building and maintaining the computational infrastructure that powers materials research — from cluster administration to automated DFT workflows.

HPC · Cluster Administration · SLURM / PBS · Linux · Workflow Automation · Python · Bash
Fig. 1: Computational cluster architecture for DFT and materials simulation workflows.

Overview

High-performance computing is essential infrastructure for computational materials science. Since 2019, I have served as the Chief Manager for the computational cluster at SUNY-Buffalo State, responsible for system administration, user management, software stack maintenance, and DFT training. Through this role I have developed expertise in Linux systems administration, job scheduling, parallel computing, and workflow automation that complements my materials science research.

Cluster Administration

The computational cluster supports 20+ active researchers running density functional theory (DFT) calculations, molecular dynamics simulations, and data analysis workloads. My responsibilities include hardware monitoring and maintenance, operating system updates (CentOS/Rocky Linux), network configuration, storage management (NFS, parallel filesystems), and security administration. The system has maintained 99% uptime under my management.
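Routine monitoring of the kind described above can be scripted. The sketch below is a minimal, hypothetical example of a storage check a cluster administrator might run from cron; the mount points listed are placeholders, not the actual cluster's NFS exports.

```python
import shutil

def check_disk_usage(path: str, warn_pct: float = 90.0) -> bool:
    """Return True if usage on `path` is below the warning threshold."""
    usage = shutil.disk_usage(path)
    pct_used = 100.0 * usage.used / usage.total
    return pct_used < warn_pct

# Mount points here are illustrative; a real deployment would list
# the cluster's actual NFS and parallel-filesystem mounts.
for mount in ["/"]:
    if not check_disk_usage(mount):
        print(f"WARNING: {mount} is over 90% full")
```

A script like this, scheduled via cron and paired with email alerts, is one simple way proactive administration keeps uptime high.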

Job Scheduling and Resource Management

I administer SLURM and PBS job scheduling systems, optimizing queue configurations, fair-share scheduling policies, and resource limits to maximize throughput while ensuring equitable access for all research groups. Custom submission scripts and wrapper tools I have developed simplify the workflow for users running VASP, Quantum ESPRESSO, and other computational codes.

DFT Workflow Automation

A significant contribution has been developing automated workflows for common DFT calculation sequences. These Python and Bash scripts handle input file generation (INCAR, POSCAR, KPOINTS, POTCAR for VASP), convergence testing (k-point meshes, energy cutoffs), job chain submission (relaxation → static → DOS → band structure), and output parsing/visualization. These tools reduce setup time from hours to minutes and help prevent common errors.
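As one concrete piece of such a workflow, k-point convergence testing amounts to generating a series of KPOINTS files over increasingly dense meshes. The sketch below is a simplified stand-in for the actual scripts, assuming Gamma-centered meshes and a cubic cell:

```python
def kpoints_file(mesh):
    """Return the text of a Gamma-centered VASP KPOINTS file for `mesh`."""
    nx, ny, nz = mesh
    return (
        "Automatic mesh\n"
        "0\n"
        "Gamma\n"
        f"{nx} {ny} {nz}\n"
        "0 0 0\n"
    )

def convergence_meshes(start=4, stop=12, step=2):
    """Uniform k-point meshes for a cubic-cell convergence series."""
    return [(n, n, n) for n in range(start, stop + 1, step)]

# In the real workflow each mesh would get its own run directory and
# a chained job submission; here we only generate the file contents.
files = {mesh: kpoints_file(mesh) for mesh in convergence_meshes()}
```

The same pattern (template out, submit, parse total energy back in) extends to energy-cutoff scans and to the relaxation → static → DOS → band structure job chain.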

20+ Active Researchers: Supporting a diverse group of computational researchers across multiple projects.

99% Uptime: Consistent high availability through proactive system administration.

15+ Students Trained: Undergraduate researchers mentored in DFT methods and HPC usage.

Training and Mentorship

A key aspect of this role is training new users — primarily undergraduate students — in DFT methods and HPC usage. I have developed a structured training curriculum covering Unix/Linux basics, DFT theory fundamentals, VASP input preparation, job submission and monitoring, output analysis, and common troubleshooting strategies. Multiple students trained through this program have co-authored peer-reviewed publications.

Industry Relevance

The systems administration and automation skills developed through this role translate directly to semiconductor industry needs. Modern process engineering relies on statistical data analysis, automated experimental workflows, and computational modeling — all areas where strong computing skills are essential. The ability to build, maintain, and optimize computational infrastructure is increasingly valued in both research and manufacturing environments.