The purpose of the end-of-course survey is to provide faculty with actionable feedback on teaching and learning practices that support student success and the university’s mission. When considered alongside other sources of feedback—such as learning outcome achievement, peer observations, curriculum designer consultations, or Student Consultants on Teaching (SCOT)—the survey offers meaningful insight to guide ongoing, reflective improvement.
The survey was updated to take advantage of advances in survey design and student success research. While the previous survey served the university well for many years, the revised version offers more detailed, actionable feedback through focused items and an improved response scale. For more information on the development and performance of the new survey, see the measurement development and validation document.
In focus groups, we found that some faculty had a hard time navigating the report and understanding the results. The new report places all major content on one page and links to resources that help faculty understand the data and use it for improvement.
The survey was developed through an iterative process: consulting the literature (student success research, BYU-Idaho founding documents, course and instructor quality frameworks, etc.), conducting cognitive interviews with students, gathering feedback from faculty and stakeholders, and testing items in multiple pilots to assess item quality.
Yes, the survey questions are largely the same for all courses, with minor adjustments for online courses. Currently we cannot tailor items to specific course types (e.g., labs, internships, seminars), and we acknowledge that not every question will apply to every type of course. As system capabilities expand, we hope to tailor questions to certain course types. Additionally, the previous end-of-course survey was not administered in labs, internships, and similar courses. Many faculty responsible for these courses, or who teach low-enrollment courses, asked that the survey be included so they could gather feedback. To support this, the survey is now administered in all courses.
Yes. Faculty may add up to two open-ended questions for each course section they teach. Two weeks before the survey opens, faculty receive an email with a link to submit their custom questions for that semester. To ensure you receive this message, please add noreply@qualtrics-research.com to your safe sender list in your university email account.
Research shows that students are more likely to become fatigued and provide lower-quality responses when surveys exceed 10 minutes, especially when they are completing multiple course evaluations in a short period of time. Keeping the survey to around fifteen questions helps maintain student engagement and data quality. We also chose to avoid using a small set of broad questions, which tend to be more susceptible to bias and provide less actionable insight. Instead, the survey includes a focused set of specific items that offer clearer, more useful feedback while still respecting students’ time.
The development of the survey questions was informed by BYU–Idaho’s mission and Learning Model, research-based teaching frameworks (e.g., Quality Matters, Tripod 7Cs, Danielson), student success literature, and examples from other universities. Draft items were tested with students and reviewed by faculty and academic leaders, with revisions made based on their feedback. More information is available in our measurement development and validation document.
We use a “How well…” response scale because it mirrors how people naturally answer experience-based questions (such as “How was your experience at this restaurant?” or “How do you like this product?”) and provides clearer, more direct feedback than agreement scales. The options, from “Not well at all” to “Extremely well,” capture meaningful differences and help reduce some of the high-end skew common in course evaluations. This makes the data more reliable and more useful for continuous improvement. Guidance for interpreting results is provided in the faculty reports.
During the development process, we received feedback from the Faculty Association Board, the Female Faculty Task Force, and faculty who participated in the survey pilots during the Spring 2024 and Spring 2025 semesters. We will continue to review the survey instrument and make refinements where needed. If you have feedback about the survey, you are welcome to submit it to measurement@byui.edu.
Faculty input helped ensure that survey questions reflected key elements of quality learning, were clearly understood, and addressed areas within instructors’ control. Based on faculty feedback, we revised wording, broadened applicability across teaching methods, emphasized items directly tied to learning, and included items on course rigor and real-world relevance.
Administration and Timing
For full semester and second block classes, evaluations open two Saturdays before the last day of the semester and close at midnight on the last testing day.
For first block and Summer Session courses, evaluations open the Saturday before the last day of class and are open for one week.
Students receive an email invitation when the survey opens and one reminder if they haven’t completed it. Additionally, online courses include the survey as a course assignment. Campus faculty may add the survey as an assignment in Canvas to encourage participation. Instructions are available in this video.
One effective approach is to include the survey as an assignment in your Canvas course. This assignment can be worth points, extra credit points, or no points. For more information on how to add the survey to your course, see this video.
Yes. The best way to track who has completed the survey is to add the survey as an assignment to your course in Canvas. This assignment can be worth points, extra credit points, or no points. When students complete the survey from your course, it will automatically pass back points to the assignment, indicating the student has completed the survey. To protect student anonymity, results will not display in the report for a section until at least 6 student responses have been collected. For more information on how to add the survey to your course, see this video.
Results are available the day after grades are due for the semester. Faculty will receive an email with a link to access their reports.
Guidance for interpreting your scores is provided directly in the faculty Power BI report and in resources linked from the report. The report explains how to read the scale and highlights which areas may represent strengths and which may indicate opportunities for improvement. We encourage faculty to interpret results in context and alongside other data on student learning and teaching.
Survey results highlight potential strengths and areas to explore further, but they do not diagnose specific solutions. If an item signals an area for improvement, faculty are encouraged to gather additional perspectives—such as peer or SCOT observations, conversations with current students, or input from a curriculum designer. Student comments can also add helpful context. Together, these sources can guide meaningful instructional improvements.
At this time, written comments are not moderated due to the large volume received each semester. Faculty see comments exactly as students entered them. While most comments are constructive, some may be unhelpful or inappropriate; if a comment appears to violate university conduct standards, faculty may consult with their department chair or HR. We also recognize that reading comments can be emotionally challenging. It is common for a few negative remarks to feel more prominent than the positive ones. Faculty are encouraged to review comments with a reflective mindset and, if helpful, discuss them with a trusted colleague or department leader. Faculty can also access an AI-generated summary of their comments. Copilot is integrated into BYU–Idaho’s Power BI reports. When viewing the comments page, faculty can select the Copilot option and ask for a summary of the open-ended responses displayed in the report.
Access to the survey data is not changing. As in the past, the following individuals and groups have access to the survey data:
Faculty can see their individual survey results.
Department chairs and deans have access to help support faculty under their area of stewardship.
Academic and university leaders have access to help them support department and college leadership and to guide institutional improvement. If academic and university leaders have a concern about a specific course or faculty member, they work with the appropriate dean and department chair.
Survey results are primarily intended to support faculty in their own continuous improvement efforts. Department chairs and deans may also use the results to identify areas where faculty might benefit from additional support or resources. In such cases, the data should serve as a starting point for conversation—not a final judgment—and should always be interpreted in context and triangulated with other data.
We conducted rigorous analyses—including confirmatory factor analysis, item response analysis, reliability testing, and hierarchical linear modeling—to ensure the survey measures teaching practices accurately and consistently. The results showed no meaningful effects from external factors such as faculty gender or age, student demographics, or course characteristics. The only significant relationship was a small, expected link between student grades and scores. We chose not to adjust scores based on grades. For more information, see our validation report: Measurement Development and Validation Report—Spring 2025 Pilot.
Yes. Below are the faculty who reviewed the Spring report and their responses.
Survey response data is always challenging to leverage for productive decision-making. To be used appropriately, the intersection of human cognition, effect-size estimation, non-response bias, and other biases inherent in measuring human thought must be evaluated and mitigated. Even with all this work, the shortcomings can be apparent.
I continue to worry that our primary measures of quality focus on students who assess quality based on their previous 12 years of education outside BYU-I. I wish we could figure out how to survey our employers and alumni about the impact of our courses in more meaningful ways. As with all surveys, I continue to worry about heavy non-response bias and would like to see continued evaluation of biases associated with grade, gender, and other factors that affect evaluation scores, but shouldn't be part of an evaluation of teachers.
We will never reach an ideal measure that accurately reflects students' preferences and teachers' effectiveness in meeting university mandates. However, this process impressed me with how much they moved the needle away from the poor quality measures of our history. I believe this team documented their work comprehensively and worked comprehensively to derive high-quality measures that measure high-quality objectives. I left the review believing that many of the new questions could offer insightful differentiation to help me progress as a teacher.
Matt Zachreson:
Student evaluations will always be flawed because students are not expert teachers. For example, we will always see things like teachers getting higher ratings in classes that students like (or are easy) than in ones that students don't like (something I'm well aware of as someone who teaches Gen-Ed science). However, flawed does not mean useless.
I think that this new course evaluation gives us the best chance to get actually meaningful data out of our students. I've spent many years studying survey design principles and using them to create and administer surveys. I was also given the opportunity to review the new survey questions and the data analysis that Institutional Research did as they piloted these questions. In my opinion, their work represents the gold standard in good survey design.
Putting too much faith in any single measurement is bad practice, but paired with performance on assessments, reviews with other faculty, etc., the new course evaluation should give us good insight into how our classes are going.
Overall, I'm excited for the new questions and can't wait to see the results in my own courses.
Survey results should always be interpreted in context and, when possible, triangulated with other sources of data. Several limitations are important to keep in mind:
Response rates and sample size
Results are less stable when few students respond, because averages become more sensitive to outliers as the number of responses decreases. To protect anonymity and reduce volatility, results are not reported for sections with fewer than six responses. Faculty and leaders should consider both the number of responses and the score distributions available in the report to understand the experience of the majority of students. Low counts, common in small-enrollment courses, do not make the data unusable, but they do require caution. The new report allows faculty to aggregate data across multiple sections and semesters of a course; increasing the number of responses behind the summary results yields more stable interpretations.
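To make this concrete, here is a minimal sketch, using hypothetical scores rather than real survey data, of how pooling responses across sections and semesters shrinks the standard error of an item mean:

```python
# Illustrative only: how pooling responses stabilizes an item mean.
# All scores below are hypothetical, not real survey data.
import statistics

section_a = [4, 5, 3, 5, 4]                      # 5 responses from one section
pooled = section_a + [5, 4, 4, 5, 3, 4, 5, 5,    # same course, other sections
                      4, 4, 3, 5, 4, 5, 4]       # and semesters (20 total)

def mean_and_se(scores):
    """Return the mean and its standard error, SE = s / sqrt(n)."""
    n = len(scores)
    return statistics.mean(scores), statistics.stdev(scores) / n ** 0.5

for label, scores in [("single section", section_a), ("pooled", pooled)]:
    mean, se = mean_and_se(scores)
    print(f"{label}: n={len(scores)}, mean={mean:.2f}, SE={se:.2f}")
```

In this example the single-section mean carries a standard error of about 0.4 points, while the pooled mean’s is roughly 0.16, which is why aggregated views support more stable interpretation.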
Influence of external factors
We conducted extensive analyses to examine whether student scores were influenced by factors unrelated to teaching. Because the survey uses specific, observable questions, the results showed no statistically significant effects based on faculty gender or age, student demographics, or most course characteristics. Grades were significantly related to scores, but the effect was small and expected, given that the questions reflect practices that contribute to student success. The impact was not large enough to justify adjusting scores. Details are available in our measurement development and validation report.
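As an illustration only, and not the actual analysis code, a hierarchical check of this kind can be sketched with statsmodels: fit a mixed-effects model with a random intercept for each section and examine whether a candidate factor predicts scores. The data below are synthetic and the variable names are hypothetical.

```python
# Sketch of a mixed-effects (hierarchical) bias check, assuming synthetic data.
# The real validation models are described in the measurement development
# and validation report; nothing here reflects actual survey results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "section_id": rng.integers(0, 40, n),        # 40 course sections
    "expected_grade": rng.normal(3.2, 0.5, n),   # hypothetical GPA-style grade
    "faculty_gender": rng.choice(["F", "M"], n), # hypothetical factor to test
})
# Synthetic scores: a small grade effect plus noise, no gender effect.
df["score"] = 4.0 + 0.1 * df["expected_grade"] + rng.normal(0, 0.6, n)

# A random intercept per section accounts for students clustering in sections.
model = smf.mixedlm("score ~ expected_grade + faculty_gender",
                    data=df, groups=df["section_id"])
result = model.fit()
print(result.summary())
```

A coefficient that is small and statistically unremarkable, as the pilot analyses found for demographic factors, is evidence against systematic bias on that dimension.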
Unmeasured sources of bias
Although we have carefully evaluated many potential sources of bias, it is not possible to test for all of them. This is why it is important for faculty and leaders to discuss results together and consider additional perspectives when interpreting the data.
No adjustments are made to survey results to correct for bias. Our analyses did not identify any systematic biases large enough to justify statistical corrections. More detail is available in our measurement development and validation report.

Faculty occasionally ask whether scores could be adjusted based on student attendance or participation. While such information might help filter out some noise, adding survey questions to capture it would increase survey length and contribute to student fatigue. We prioritize keeping the survey brief and focused on feedback that students can reliably provide. Feedback and metrics collected from students or employees for other offices and services (including the Employee Engagement Survey used by President’s Executive Group) are not adjusted based on activity or other characteristics of respondents. We feel there is good reason to follow a consistent approach for all offices and employees.
Effective and ineffective uses of the survey data differ somewhat for faculty and for academic leaders, but the guiding principle is the same: survey results should be used as a starting point for inquiry, not as a definitive judgment.
Effective Use by Faculty
Faculty use the data effectively when they:
Review results to identify areas that may warrant further attention.
Focus on broad themes rather than isolated scores or comments.
Consider the limitations of the data, including response rates and course context.
Triangulate survey findings with other evidence.
Approach results with curiosity, including a willingness to challenge self-perceptions or assumptions.
Set goals and make instructional adjustments where appropriate.
Revisit future survey results to examine patterns and potential impact over time.
Ineffective Use by Faculty
Faculty may misuse results if they:
Dismiss results solely because of low response counts. While small samples require caution, they can still offer meaningful insight when interpreted thoughtfully.
React strongly to small changes in scores (e.g., 4.2 to 4.0), which may reflect normal variation rather than meaningful change; see the worked example after this list.
Focus on a small number of negative comments or a single low score while ignoring the broader pattern of data.
Rely exclusively on student survey data without considering other sources of evidence. Student feedback is valuable, but it does not capture all aspects of teaching and learning.
Dismiss results that conflict with self-perception without further reflection.
Assume that every issue raised by students requires immediate change. While student feedback is valuable, not all suggestions align with effective teaching or support student learning.
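As a back-of-the-envelope illustration of the point about small score changes, consider the following calculation; the response count and spread are assumed for the example, not drawn from real data.

```python
# Hypothetical numbers: is a drop from 4.2 to 4.0 meaningful with
# 20 respondents and a typical standard deviation of about 0.9?
n, sd = 20, 0.9
se = sd / n ** 0.5                  # standard error of one semester's mean
se_diff = (se**2 + se**2) ** 0.5    # SE of the difference between two means
print(f"SE of a semester mean: {se:.2f}")       # ~0.20
print(f"SE of the change:      {se_diff:.2f}")  # ~0.28
```

Under these assumptions the change itself has a standard error near 0.3, so a 0.2-point drop sits well within ordinary semester-to-semester noise.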
Effective Use by Leaders (Chairs, Deans)
Leaders use the data effectively when they:
Review results to identify potential patterns that may indicate a need for support.
Interpret scores using suggested benchmarks and attention to data quality.
Follow up with faculty to understand the context behind results before drawing conclusions.
Triangulate survey data with other sources, such as peer observations, student performance, SCOTs, or curriculum designer consultations.
Support faculty in setting developmental goals and connecting them with appropriate resources.
Revisit results over time to look for patterns, growth, or improvement.
Ineffective Use by Leaders
Leaders may misuse results if they:
Make high-stakes decisions without understanding the context of the scores or triangulating with other sources of information.
Rank faculty based on scores without considering whether scores warrant concern.
Do not seek to understand the context of the scores.
Place too much emphasis on student survey scores.
Do not triangulate the findings with other information.
React to small changes (e.g., 4.2 to 4.0) that fall within the normal variation in the data.
Ignore response counts or response distributions.
Overweight individual comments or isolated low scores.
Expect improvement without providing appropriate support, resources, or time.
Troubleshooting
For help with the survey or report, contact Institutional Research at measurement@byui.edu.
End of Course Survey Resources
Use the links below to find information specific to faculty, students, and academic leaders.