Abstract:
This article examines the process of revising asynchronous online discussion (AOD) grading rubrics in a teacher preparation program at a Southern California university. Drawing on a reflective practitioner case study design combined with a literature review, the study explores the intersection of theory and practice in online assessment. Guided by Bandura’s social learning theory and the Community of Inquiry (CoI) framework, the research documents a collaborative revision process involving faculty colleagues, consultation with the original creator of the RISE feedback model, and the exploratory integration of artificial intelligence tools for rubric drafting. The analysis highlights deficiencies in the original rubric, including ambiguous criteria, misalignment with course objectives, and inflexible scoring, and details the development of a simplified, transparent 3×3 rubric structure. Findings emphasize the importance of alignment with instructional objectives, clarity in performance expectations, and equitable assessment practices to enhance student engagement and learning outcomes. The study offers a replicable framework for other educators seeking to refine online assessment tools, contributing to the ongoing discourse on effective AOD implementation in higher education. Limitations and recommendations for future research on rubric validation, cross-institutional comparison, and AI-assisted instructional design are discussed.