Evaluating Large Language Models for HPC Education

This project engages undergraduate students in learning high-performance computing (HPC) by using large language models (LLMs) to generate and evaluate instructional content. 

Project Summary

This project introduces undergraduate students to high-performance computing (HPC) by exploring how large language models (LLMs) can assist in teaching fundamental HPC concepts. Students will use LLMs to generate explanations, tutorial content, and answers to basic HPC-related questions. They will then evaluate the accuracy, clarity, and pedagogical effectiveness of these responses, comparing them against official documentation and expert sources. Through this process, students will gain foundational HPC knowledge while critically assessing AI-generated content.
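As an illustration of the generation step described above, the sketch below asks an LLM for a short explanation of an HPC topic that students would then review. It is a minimal sketch, not part of the project's required tooling: it assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable, the model name is a placeholder, and any provider with a chat-style API could be substituted.

    # Hedged sketch: asking an LLM to explain an HPC concept for later review.
    # Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
    # environment variable; the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    topic = "how a batch scheduler such as Slurm decides when a job runs"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a tutor for undergraduate HPC students."},
            {"role": "user", "content": f"Explain, in about 150 words, {topic}."},
        ],
    )

    # The returned text is what students would compare against official
    # documentation and expert sources for accuracy, clarity, and usefulness.
    print(response.choices[0].message.content)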

Learning Objectives 

  1. Understand and Apply Core HPC Concepts such as parallel computing, job scheduling, and supercomputing architecture (a parallel computing sketch follows this list)
  2. Evaluate AI-Generated Educational Content by assessing the accuracy, clarity, and instructional value of LLM-generated responses to HPC-related questions
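The sketch below is one concrete instance of the parallel computing ideas named in objective 1: each process computes a partial sum and the results are combined with a reduction. It is only an illustrative example, assuming mpi4py and an MPI runtime are installed; it is not prescribed course material.

    # Hedged sketch of a core parallel computing concept: each MPI rank computes
    # a partial sum and rank 0 combines the results with a reduction.
    # Assumes mpi4py and an MPI runtime are available; run with, e.g.:
    #   mpiexec -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID
    size = comm.Get_size()   # total number of processes

    # Each rank sums a strided slice of 0..999, so the work is split evenly.
    local_sum = sum(range(rank, 1000, size))

    # Combine the partial sums on rank 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Sum of 0..999 computed by {size} ranks: {total}")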