Grading with Artificial Intelligence

This framework provides BYU-Idaho faculty with clear guidance for the thoughtful evaluation and implementation of Artificial Intelligence in student assessment, ensuring alignment with our mission and educational values.

Foundational Principles for AI in Assessment

Faculty-Led Assessment
AI tools should support, not supplant, faculty-designed assessments that align with course objectives and ensure the ethical, accurate measurement of student learning outcomes.
Transparency and Trust
Faculty must clearly communicate to students when and how AI is used in grading, uphold university standards for student privacy and intellectual property, and foster confidence in the assessment process.
Strengthening Relationships
AI applications should enable faculty to dedicate more time to mentoring and fostering deeper connections with students, preserving the personal, discipleship-focused interactions central to the BYU-Idaho community and the Spirit of Ricks.

Operational Guidelines for AI Grading Applications

Rubric Alignment and Validation

AI grading applications must be calibrated to clear, well-defined rubrics, and the consistency and quality of their feedback should be rigorously evaluated by comparing AI-generated scores with expert scores on sample artifacts.
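As one illustration of what such a validation check might look like, the minimal Python sketch below compares AI-generated rubric scores against expert scores on the same sample artifacts. The function name, the 0.5-point agreement threshold, and the sample data are hypothetical assumptions for demonstration, not a prescribed university standard.

```python
from statistics import mean, correlation

def validate_against_experts(ai_scores, expert_scores, max_mean_abs_diff=0.5):
    """Compare AI rubric scores with expert scores on the same sample artifacts.

    Returns a small agreement report. The 0.5-point threshold is an
    illustrative assumption, not a university standard.
    """
    if len(ai_scores) != len(expert_scores):
        raise ValueError("Each artifact needs both an AI score and an expert score.")

    abs_diffs = [abs(a - e) for a, e in zip(ai_scores, expert_scores)]
    return {
        "mean_abs_diff": mean(abs_diffs),  # average gap in rubric points
        "exact_agreement": sum(d == 0 for d in abs_diffs) / len(abs_diffs),
        "correlation": correlation(ai_scores, expert_scores),  # Pearson r (Python 3.10+)
        "calibrated": mean(abs_diffs) <= max_mean_abs_diff,
    }

# Example: expert and AI scores on ten sample essays, rubric scale 0-10
expert = [8, 6, 9, 7, 5, 10, 6, 8, 7, 9]
ai = [8, 7, 9, 6, 5, 9, 6, 8, 8, 9]
print(validate_against_experts(ai, expert))
```

A report like this gives faculty a concrete basis for deciding whether a tool is calibrated well enough to use, and for documenting that decision.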

Continuous Monitoring and Adjustment

Faculty are responsible for actively monitoring AI outputs to ensure consistent, ethical, and fair assessment practices. This includes tracking outcomes, documenting anomalies, and retraining the AI as needed to prevent "drift" in assessment patterns.
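As a hedged sketch of how "drift" might be tracked in practice, the Python example below compares each new grading batch's mean score against a baseline established at calibration time and records batches that deviate beyond a tolerance. The class name, the two-standard-deviation tolerance, and the sample data are assumptions for illustration, not a prescribed BYU-Idaho procedure.

```python
from statistics import mean, stdev

class DriftMonitor:
    """Flag grading batches whose mean score strays from the calibration baseline.

    The two-standard-deviation tolerance is an illustrative choice; a real
    deployment would tune it and route anomalies to faculty for review.
    """

    def __init__(self, baseline_scores):
        self.baseline_mean = mean(baseline_scores)
        self.baseline_sd = stdev(baseline_scores)
        self.anomalies = []  # documented anomalies, per the guideline above

    def check_batch(self, label, batch_scores, tolerance_sds=2.0):
        batch_mean = mean(batch_scores)
        shift = abs(batch_mean - self.baseline_mean)
        drifted = shift > tolerance_sds * self.baseline_sd
        if drifted:
            self.anomalies.append({"batch": label, "mean": batch_mean, "shift": shift})
        return drifted

# Example: baseline from the calibration set, then weekly grading batches
monitor = DriftMonitor([8, 6, 9, 7, 5, 10, 6, 8, 7, 9])
print(monitor.check_batch("week 1", [7, 8, 6, 9, 7]))  # False: within tolerance
print(monitor.check_batch("week 2", [3, 2, 4, 3, 2]))  # True: likely drift
print(monitor.anomalies)
```

Logging flagged batches in this way supports the documentation and recalibration responsibilities described above.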

Human Review Petition Process

A clear and accessible process must be provided for students to petition for a faculty review of any assignment graded using AI.

Adherence to Data Privacy and Approved Tools

To safeguard student intellectual property (IP) and personally identifiable information (PII), faculty should use only AI grading tools that have been approved by the BYU-Idaho AI Executive Committee. See approved tools.

Critical Risks of Non-Compliance

Failure to adhere to these guidelines carries significant risks:
Inconsistent or Unethical Grading: AI-generated grades may be inconsistent, unfair, or misaligned with established learning outcomes, potentially compromising academic integrity.
Erroneous or Unhelpful Feedback: Unmonitored or poorly configured AI may provide false, inaccurate, or unhelpful feedback, hindering student learning and development.
Erosion of Student Trust: A lack of transparency, or grading practices perceived as erroneous or unjust, can significantly erode student confidence in the assessment process and the institution.
Misuse of Student Intellectual Property: Unapproved AI tools carry the risk that student work may be inadvertently used to train future AI models, violating students' intellectual property rights.
Compromised Student Privacy: Non-approved AI tools can result in the unauthorized collection or exposure of students' Personally Identifiable Information (PII), breaching university policy and legal regulations such as FERPA.
Faculty are strongly encouraged to review the full Framework for Vetting and Adopting AI Grading Applications for an in-depth understanding of university policies and best practices.