Description
Game development courses often involve multiple iterations of play-testing, giving peer feedback, and writing reflections. Effective play-testing feedback and thoughtful reflective writing are essential for deepening students' learning, yet providing structured feedback is difficult when projects are highly diverse. This project therefore explores the use of large language models (LLMs) to enhance the play-testing process and to support students in critically reflecting on their design and development practices through automated, constructive feedback.