
GenAI and Privacy

BYU-Idaho AI Usage & Data Guide

Step 1: Is the AI tool you plan to use approved by the AI Committee? (See www.byui.edu/genai/products for a current list of approved AI tools.)

✅ Yes: Continue to Step 2.

❌ No: You may use unapproved tools only if private data is NOT involved. 

Caution: Unknown tools can pose security risks, especially when linked to your BYU-Idaho account. Contact the BYU-Idaho AI Committee at genai@byui.edu to request an AI tool review.

Step 2: Are you using the BYUI-authenticated version of this tool that prevents AI models from training on the data? (See www.byui.edu/genai/training to learn how to check.)

✅ Yes: Continue to Step 3.

❌ No: Always log in with your BYU-Idaho account to protect private data.

Step 3: What type of data will you put into the AI tool? (See the CES Data Classification Policy for more information.)

Public Data (e.g., website info, event calendars, course catalogs) → ✅ Go ahead!

Internal or Confidential Data (e.g., policies, schedules, de-identified surveys) → ⚠️ Proceed with care.

Personally Identifiable Information (PII) (e.g., names, IDs, contact details, photos) → 🔎 Submit an AI solution request for approval.

Restricted Data (e.g., SSNs, passwords, bank info, HIPAA-protected data) → ❌ Not allowed in AI tools. Contact the AI Committee for alternatives.

The following flowchart outlines how to use artificial intelligence (AI) tools at BYU-Idaho in compliance with AI Acceptable Use Policies and data usage guidelines; an illustrative code sketch of the same decision flow follows the steps.

  1. Is the AI tool you plan to use approved by the AI Committee? See BYUI GenAI Products for a current list of approved AI tools.
  2. Are you using the BYUI-authenticated version of this tool that prevents AI models from training on the data? See BYUI GenAI Training to learn how to check.
  3. What type of data will you put into the AI tool? See the CES Data Classification Policy for more information.
  4. If the data includes Personally Identifiable Information (PII), submit an AI Solution Request for approval; Restricted Data is not allowed in AI tools.
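
For readers who find it easier to follow logic than prose, the same decision flow can be written as a short code sketch. This is purely illustrative and not official BYU-Idaho tooling: the function name, category labels, and messages below are assumptions chosen for readability, and the linked policies remain the authoritative source.

```python
# Illustrative sketch (not official BYU-Idaho tooling) of the three-step decision flow above.
# Tool names, category labels, and messages are assumptions chosen for readability.

APPROVED_TOOLS = {"Microsoft Copilot Chat", "Google Gemini", "ChatGPT Edu"}  # see www.byui.edu/genai/products

def check_ai_use(tool: str, byui_authenticated: bool, data_type: str) -> str:
    """Walk the flowchart: tool approval -> authenticated login -> data classification."""
    # Step 1: Is the tool approved by the AI Committee?
    if tool not in APPROVED_TOOLS:
        if data_type != "public":
            return "Stop: unapproved tools may only be used with public data. Email genai@byui.edu for a review."
        return "Proceed with caution: unapproved tool, public data only."
    # Step 2: Are you logged in with your @byui.edu account so the model does not train on your data?
    if not byui_authenticated:
        return "Log in with your BYU-Idaho account before entering any private data."
    # Step 3: What type of data will you put into the tool?
    outcomes = {
        "public": "Go ahead.",
        "internal_or_confidential": "Proceed with care.",
        "pii": "Submit an AI Solution Request for approval first.",
        "restricted": "Not allowed in AI tools. Contact the AI Committee for alternatives.",
    }
    return outcomes.get(data_type, "Unknown classification: check the CES Data Classification Policy.")

# Example: an approved, authenticated tool with a de-identified survey (internal/confidential data).
print(check_ai_use("Microsoft Copilot Chat", byui_authenticated=True, data_type="internal_or_confidential"))
```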

Always remember:

  • When working with private data, always follow the CES Privacy Principles.
  • Make sure to comply with intellectual property (IP) guidelines and copyright laws when using AI tools. 
  • AI output can be wrong—verify all results before making important decisions. 

Which AI tools are provided by BYU-Idaho?

  • Microsoft Copilot Chat (the free, authenticated version included with our Microsoft 365 apps) and Google Gemini (the free, authenticated version included with our Google Workspace) are available to all users. Be sure to log in with your @byui.edu account. 
  • A limited number of ChatGPT Edu licenses are also available for use with your @byui.edu login. The first wave of licenses will be available by invitation only, with more being made available later in the year. Other free or paid ChatGPT versions not managed by CES are not approved AI tools and should be used with the same restrictions as other unapproved tools. 
  • Other approved tools with additional costs, such as Microsoft Copilot 365 (paid version) and Google Gemini Premium (paid version), must be requested and funded separately by departments. 

Privacy Principles

Within the world of artificial intelligence, data privacy is a major concern. Because AI relies on abundant, accurate data, it creates the potential for misuse of personal data. Each AI model or service is different, but six privacy principles drive how data should be managed, whether you are simply using a GenAI service or training a new GenAI model: purpose limitation, data minimization, lawfulness, transparency, protection, and duration. If your data use, or that of a service you use, violates these principles, you must find an alternative that adheres to them.

Purpose Limitation
From the start, we are clear about the purposes for processing personal data. These purposes are explicit and legitimate. We do not further process the data in a way incompatible with those purposes.

Data Minimization
We confirm that the personal data we are processing is sufficient to properly fulfill the stated purpose, has a rational link to that purpose, and is limited to what is necessary. We do not collect or maintain more than we need for that purpose.

Lawfulness
We verify that the collection and use of personal data is justified and legal: it is necessary for the performance of a contract, pursues a legitimate interest, is necessary for compliance with a legal obligation, or is based on consent.

Transparency
We are open and honest with people from the start about who we are and how and why we use their personal data. We provide them with clear and intelligible information, either through concise privacy notices or just-in-time statements.

Protection
We verify that we have appropriate security measures in place to protect the personal data we hold against unauthorized or unlawful processing and against accidental loss, destruction, or damage.

Duration
We do not keep personal data for longer than needed. After the original, defined purposes for which the data was collected are achieved, we securely destroy or de-identify the data in accordance with defined standards and policies.

Risk Factors for Violating Privacy Laws

If you are considering a GenAI service for your work, ask yourself the following questions as indicators of whether the service handles data in a way that protects privacy (a simple checklist sketch follows the list):

  • Do they have a current privacy policy? If a service doesn't have a privacy policy, it almost certainly doesn't handle data safely.
  • Is there an option not to use your data to train models? Opt out of model training whenever possible. Some services reset this setting each time you open the app, so check it regularly.
  • How will your data be stored? Make sure the service stores data securely; insufficient security can lead to data leaks.
  • Does the service disclose data sharing policies? If the service does not disclose data sharing policies at all, treat it as if it will share your data.
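
As an optional aid, and not part of the official guidance, these four questions can be tracked as a simple checklist when reviewing a service. The class and field names below are illustrative assumptions.

```python
# Illustrative checklist sketch; field names are assumptions, the questions come from the list above.
from dataclasses import dataclass

@dataclass
class ServiceRiskReview:
    has_privacy_policy: bool        # Do they have a current privacy policy?
    can_opt_out_of_training: bool   # Is there an option not to use your data to train models?
    storage_is_secure: bool         # How will your data be stored?
    discloses_data_sharing: bool    # Does the service disclose data sharing policies?

    def concerns(self) -> list[str]:
        """Return the risk factors that were not satisfied."""
        issues = []
        if not self.has_privacy_policy:
            issues.append("No current privacy policy: assume data is not handled safely.")
        if not self.can_opt_out_of_training:
            issues.append("No opt-out from model training.")
        if not self.storage_is_secure:
            issues.append("Data storage is not clearly secure.")
        if not self.discloses_data_sharing:
            issues.append("Data sharing policies are not disclosed: assume your data will be shared.")
        return issues

# Example: a service with a privacy policy but no way to opt out of model training.
review = ServiceRiskReview(True, False, True, True)
print(review.concerns())
```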