GenAI and Privacy

Your Role in Data Privacy and AI at BYU-Idaho

For every BYU-Idaho faculty member, employee, and student, understanding data privacy isn't just about following guidelines; it's about protecting yourself and our community. Your responsible handling of information directly safeguards personal details, sensitive research, and the university's operational integrity.

As AI tools become integrated into our work and studies, responsible and ethical use of these tools is critical to protecting data and upholding trust. Your active role in applying these principles is fundamental to promoting a secure and trustworthy digital environment for everyone at BYU-Idaho.

BYU-Idaho AI Usage & Data Guide

When interacting with AI tools, it's crucial to understand how your data is handled. BYU-Idaho is committed to protecting your privacy and intellectual property. The following guidelines will help you determine the appropriate use of AI tools based on data sensitivity. (See also the CES Privacy Principles)

Always remember:

  1. Don't Feed Private University Data to Unapproved AI Tools: Never input sensitive university information like student IDs, grades, financial data, meeting transcripts, or confidential data into unapproved AI tools (e.g., personal accounts in ChatGPT, Claude, Perplexity, Grok, etc.). Many of these tools may use what you type to train their models, and your private data could become public.
  2. Only Use University-Approved AI Tools: When using AI for work, only use the specific tools and platforms that have been approved by the University. The University has contracts and licenses in place that provide security and privacy to our users. Personally Identifiable Information (PII) should not be used with any AI tools without approval. If in doubt, ask your supervisor or the AI Strategy and Leadership team before using an AI service for university work.
  3. Verify AI Output, Especially for Facts: AI tools can sometimes "make things up" or provide incorrect information. Always double-check any facts, figures, or critical information generated by AI, especially if it's for official university communications, reports, or academic work. Don't blindly trust AI results.
  4. Be Transparent: Disclose when AI has meaningfully contributed to your work (follow course or unit guidance on how to disclose).
  5. Understand AI's Limitations and Biases: Recognize that AI reflects the data it was trained on and can sometimes show biases or lack a full understanding of context. Avoid using AI to make critical decisions about individuals (like hiring or grading) without human oversight and judgment.

If you want to use an AI tool or data in ways that are not described on this page, you may submit a formal request to the BYU-Idaho AI Committee.


Data Classification for AI Use

Understanding how to classify data is essential for responsible AI use. (See also the BYU-I Data Classification Policy)

AI Data Use Cards

Public Data

Definition: Any data that can be found publicly on the web or can reasonably be accessed without credentials.


Internal or Confidential Data without PII

Definition: Any data that you have authorized access to, and a legitimate reason to use, as part of your role within the University.


Personally Identifiable Information (PII)

Definition: Any data that could potentially identify a specific individual, either on its own or when combined with other data.


Restricted Data

Definition: Information of the highest sensitivity where inappropriate loss, changes, or disclosure could have grave consequences to the University or its students.
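The PII definition above can be made concrete with an automated guardrail: screening text for obvious identifiers before it is ever pasted into an AI tool. The sketch below is purely illustrative and is not an official BYU-Idaho tool; the patterns (an email address, an SSN-like number, a nine-digit ID) are assumptions for the example, and no pattern list can ever guarantee that text is free of PII.

```python
import re

# Illustrative patterns only. These are assumptions for this example,
# not official BYU-Idaho identifier formats, and a regex check can
# never prove that text contains no PII.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "long ID number": re.compile(r"\b\d{9}\b"),
}

def flag_possible_pii(text):
    """Return the labels of any PII-like patterns found in the text."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

# A prompt like this should be reviewed (or rewritten) before being
# sent to any AI tool.
hits = flag_possible_pii("Summarize feedback for jdoe@byui.edu, ID 123456789.")
```

A check like this catches only the most obvious identifiers; names, context clues, and combinations of quasi-identifiers (the re-identification risk described below) still require human judgment.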

Recording Meetings & Privacy

Zoom, MS Teams, and ChatGPT (CES) are approved for recording meetings. Zoom and MS Teams automatically notify participants when recordings and transcripts are being created; ChatGPT does not. Therefore, when using any tool to record a meeting, you are legally and ethically responsible for notifying all participants.

Understanding the Risks: AI Scenarios & Consequences

Your everyday choices with AI tools can have significant impacts. Here are a few examples:

Scenario: Pasting a student's essay (with their name/ID) into a free online AI grammar checker.
Consequence: Data Exposure & Privacy Breach. The student's private information becomes part of the AI's data, potentially exposed to others or used for future AI training, violating FERPA.

Scenario: Using an unapproved AI tool to create study flashcards from your professor's lecture notes.
Consequence: Intellectual Property Loss & Academic Compromise. The AI tool could use your professor's unreleased lecture notes, which are intellectual property, to train its model or make the content public. This compromises academic integrity by making proprietary course materials accessible to others.

Scenario: Using an unapproved AI tool to summarize confidential meeting notes containing employee salaries or strategic plans.
Consequence: Confidentiality Breach & Security Risk. Sensitive university data is transmitted to an unsecured third party, potentially leading to unauthorized access, competitive disadvantage, or disciplinary action.

Scenario: Using a public AI service to analyze a large dataset of survey responses from a class project, even though you removed names.
Consequence: Re-identification & Privacy Breach. Even with names removed, the AI might inadvertently link seemingly anonymous data points or store the raw data, potentially making individual students identifiable and violating privacy regulations. This is a risk because the data is not public and can't be found on the web.

Scenario: Relying solely on AI to write a faculty recommendation letter or grade an assignment without human review.
Consequence: Bias & Unfair Outcomes. AI may introduce biases from its training data, leading to inaccurate or discriminatory evaluations, potentially impacting a student's future or compromising academic integrity.

Scenario: Uploading unreleased course materials (e.g., lecture notes, exam questions, proprietary curriculum) to a public AI service for content generation or review.
Consequence: Intellectual Property Loss & Academic Compromise. Your valuable teaching materials or secure exam content could be used by the AI provider, become public, or be accessible to others, compromising academic integrity and the university's intellectual property.

Scenario: Uploading a large spreadsheet of student survey responses (even if you think it's anonymized) to a public AI tool for quick analysis.
Consequence: Re-identification & Privacy Breach. The AI might inadvertently link seemingly anonymous data points, or store the raw data, making individual students identifiable and violating privacy regulations.