Seminar: Unboxing the Black Box: Integrating LLMs into Learning and Usability Evaluation

Sehrish Basir Nizamani

Collegiate Assistant Professor
Department of Computer Science
Faculty Affiliate at the Center for Human-Computer Interaction
and the Center for Future Work Places and Practices
Virginia Tech


Friday, October 24, 2025
2:30 - 3:45 p.m.
Classroom Building, Room 260


Abstract

In an era where Large Language Models (LLMs) can produce polished answers faster than students can think through them, the value of education must shift from output to understanding. As educators, our challenge is not to prohibit the use of LLMs but to guide students toward discerning and responsible use of them. In teaching computer science, we’ve seen students swing between over-trusting and dismissing LLMs, often lacking the nuanced judgment to know when to rely on an output and when to double-check it. To bridge this gap, we are redesigning our curriculum to help students uncover the logic and limits of these systems, cultivating habits of verification, reflection, and informed reliance rather than blind trust. Our early findings suggest that students who understand how LLMs work tend to view them more as assistants than as experts. I will begin by sharing insights from this curricular redesign and how it has shaped students’ understanding of LLMs as learning partners rather than shortcuts.

The influence of LLMs extends far beyond the classroom. They are transforming how we design, test, and create across every stage of the computing process. Tasks that once demanded manual expertise are now being reimagined through the lens of automation and intelligent assistance. In usability engineering, for instance, processes that traditionally required detailed human observation, such as identifying design flaws or evaluating interfaces, are increasingly supported by LLMs. What was once a time-intensive, human-centered cycle is now evolving into a hybrid workflow in which LLMs perform preliminary analyses, freeing experts to focus on deeper interpretation and refinement. In this seminar, I will also reflect on preliminary findings from several ongoing studies, including how LLMs perform when tasked with identifying usability issues directly from code during development, how they can be used to analyze classroom dynamics, and how persona diversity influences LLM-driven usability evaluations.

Biography

Dr. Sehrish Basir Nizamani is a Collegiate Assistant Professor of Computer Science and a Faculty Affiliate at the Center for Human-Computer Interaction and the Center for Future Work Places and Practices at Virginia Tech. Her research lies at the intersection of Human-Computer Interaction (HCI) and Digital Education. She is particularly interested in how Large Language Models (LLMs) are reshaping both education and interaction design. Her current work explores the integration of LLMs into undergraduate computing courses to foster critical thinking, ethical reflection, and collaborative problem-solving. In parallel, she investigates how LLMs can be applied across various HCI phases to enhance interface design and user experiences. Her ongoing projects also examine the use of LLMs to evaluate classroom dynamics, focusing on posture, spatial layout, and engagement in smart learning environments. Dr. Nizamani has authored and reviewed numerous research papers published in national and international journals and conferences. Additionally, she has played key roles in organizing conferences, including her current position as Chair of ACM CAPWIC 2026.