BYU-Idaho AI Usage & Data Guide

The following flowchart outlines how to use artificial intelligence (AI) tools at BYU-Idaho in compliance with AI Acceptable Use Policies and data usage guidelines.
- Is the AI tool you plan to use approved by the AI Committee? See BYUI GenAI Products for a current list of approved AI tools.
- Are you using the BYUI-authenticated version of this tool that prevents AI models from training on the data? See BYUI GenAI Training to learn how to check.
- What type of data will you put into the AI tool? See the CES Data Classification Policy for more information.
- Submit an AI Solution Request for Personally Identifiable Information (PII) or Restricted Data.
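The decision flow above can be sketched as a small illustrative function. This is only a sketch; the function name, parameters, and return messages are hypothetical and not official policy language:

```python
def ai_tool_guidance(approved: bool, byui_authenticated: bool, data_class: str) -> str:
    """Illustrative walk-through of the flowchart steps above.

    approved: is the tool on the AI Committee's approved list (BYUI GenAI Products)?
    byui_authenticated: are you signed in to the BYUI-authenticated version?
    data_class: data classification per the CES Data Classification Policy,
                e.g. "Public", "PII", or "Restricted" (labels are examples).
    """
    if not approved:
        return "Not approved: do not use this tool with university data."
    if not byui_authenticated:
        return "Sign in with your @byui.edu account so the model cannot train on your data."
    if data_class in ("PII", "Restricted"):
        return "Submit an AI Solution Request before putting this data into the tool."
    return "OK to proceed; follow the CES Privacy Principles."
```

For example, an approved, authenticated tool used with restricted data still requires an AI Solution Request before any data is entered.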
Always remember:
- When working with private data, always follow the CES Privacy Principles.
- Make sure to comply with intellectual property (IP) guidelines and copyright laws when using AI tools.
- AI output can be wrong—verify all results before making important decisions.
Which AI tools are provided by BYU-Idaho?
- Microsoft Copilot Chat (the free, authenticated version included with our Microsoft 365 apps) and Google Gemini (the free, authenticated version included with our Google Workspace) are available to all users. Be sure to log in with your @byui.edu account.
- A limited number of ChatGPT Edu licenses are also available for use with your @byui.edu login. The first wave of licenses will be available by invitation only, with more being made available later in the year. Other free or paid ChatGPT versions not managed by CES are not approved AI tools and should be used with the same restrictions as other unapproved tools.
- Other approved tools with additional costs, such as Microsoft Copilot 365 (paid version) and Google Gemini Premium (paid version), must be requested and funded separately by departments.
Privacy Principles
Within the world of artificial intelligence, data privacy is a major concern. The nature of artificial intelligence, including its reliance on abundant and accurate data, lends itself to the potential abuse of personal data. Each AI model or service is different, but six privacy principles drive how data should be managed, whether you are simply using a GenAI service or training a new GenAI model: purpose limitation, data minimization, lawfulness, transparency, protection, and duration. If your data use, or that of a service you use, violates any of these principles, you must find an alternative that adheres to them.
Risk Factors for Violating Privacy Laws
If you’re considering using a GenAI service for your work, ask yourself the following questions as potential indicators of whether the service handles data in a way that protects privacy:
| Risk Factor | Example |
| --- | --- |
| Do they have a current privacy policy? | If a service doesn't have a privacy policy, it almost certainly doesn't handle data safely. |
| Is there an option not to use your data to train models? | Opt out of having your data used to train models whenever possible. Some services reset this setting whenever you open the app, so be careful! |
| How is data going to be stored? | Ensure that the service stores its data securely. Insufficient security can lead to data leaks. |
| Does the service disclose data sharing policies? | If the service does not disclose data sharing policies at all, treat the service as if it will share your data. |