REXBURG—Artificial intelligence is becoming a regular part of life at BYU–Idaho. Students use it to brainstorm, revise and research. Professors use it to design courses and explore new teaching tools. And across campus, one question keeps coming up: how do you use AI in a way that strengthens learning instead of replacing it?
One of the first places AI made an impact was academic integrity. When generative AI tools became widely available, universities everywhere had to respond. BYU–Idaho was no exception.
BYU–Idaho AI officer Sidney Palmer says the concern showed up almost immediately.
“Every, every school faces that,” Palmer said. “That's the first crisis point for generative AI, and we were no different.”
But Palmer says that crisis was only the beginning. As AI tools became more powerful, the real challenge became making sure students still learn the skills their classes are designed to teach.
Writing is one of the clearest examples. Writing assignments aren’t just about producing words—they’re about processing ideas, making connections and developing original thought. When AI does that work for a student, the assignment may look polished, but the learning is missing.
English professor Samuel Head has been studying AI detection software and how it affects students. He says many people misunderstand what these detectors actually do.
“It's not like AI leaves some sort of signature in the code embedded in the text in some way that says it's AI,” Head said. “That doesn't happen.”
Instead, detectors rely on patterns—patterns that can appear in strong writing, in Grammarly edits or even in completely original work. And that can lead to false positives.
I experienced that myself earlier this semester when one of my assignments was flagged as 100% AI-generated, even though I hadn’t used AI at all. As a communication major, I write constantly—for school, for work and for BYU–Idaho Radio. So, when I was flagged, I wasn’t sure what to think.
And I wasn’t the only one. Several students I spoke with said they’ve had similar experiences—and some admitted they were afraid to write well because they didn’t want to be accused of cheating.
But faculty member Kathy Schmid, who teaches film and art analysis, says the goal isn’t to punish students. It’s to understand what’s happening.
“We were counseled not to accuse, but to reach out and say, ‘Hey, this was flagged. Can you get back to me as soon as possible to discuss it?’” Schmid said.
Schmid says she only takes action when multiple detectors agree at a very high level. And even then, she starts with a conversation—not a penalty. She also says students shouldn’t feel pressure to lower the quality of their writing.
“We don't want you to dumb anything down,” Schmid said. “We're proud of you for trying to make everything you write better than the thing you wrote before.”
AI can help students brainstorm ideas, organize their thoughts, or clarify their writing. It can explain difficult concepts or help them see their work from a different angle. But there are also moments when AI works against learning—especially when assignments rely on personal interpretation, judgment or voice.
Across campus, professors and administrators agree on one thing: AI can be a powerful tool, but only when it supports a student’s thinking instead of replacing it.
And to help students understand when AI is appropriate, Palmer says the university adopted a simple, consistent framework—one that shows up in many course syllabi.
“We adopted the stoplight metaphor so that a faculty member could say, ‘Look, this is a green assignment which means go, go, go,’” Palmer said. “A yellow would be kind of like, ‘Be cautious about how you use it.’ And red would be absolutely no AI.”
The stoplight method gives students a clear way to navigate expectations:
- Green means AI is allowed—usually for brainstorming or feedback.
- Yellow means use caution—check the syllabus or ask your professor.
- Red means AI should not be used at all—especially when the assignment is meant to reflect your own thinking.
Professors say communication is key. If students are unsure about whether AI is allowed, they should ask. And if they’re worried about being falsely flagged, they should talk to their instructor early.
“My advice for students is to talk to your professors about why they have the AI policies that they do,” Head said. “Two years ago, perhaps you might have run into professors who wouldn’t have thought very much about this. I don’t think that’s the case anymore. I think across campus, we’ve been thinking about this quite a lot. And so, to students, ask your professors about why they have as much AI or as little AI in their policies as they do, because they will have good, thoughtful responses that tell you something.”
For many students, the challenge isn’t whether to use AI—it’s how to use it responsibly. And for faculty, the goal isn’t to eliminate AI, but to help students learn how to work with it without losing their own voice.
Because while AI can imitate writing, it can’t imitate experience. It can’t imitate perspective. And it can’t imitate growth.
At BYU–Idaho, the future of AI in education isn’t just about technology. It’s about helping students become better thinkers, better communicators and better learners—with AI as a tool, not a replacement.