For every BYU-Idaho faculty member, employee, and student, understanding data privacy is more than a guideline; it protects you and our community. Your responsible handling of information directly safeguards personal details, sensitive research, and the university's operational integrity.
As AI tools become integrated into our work and studies, responsible and ethical use of these tools is critical to protecting data and upholding trust. Your active role in applying these principles is fundamental to promoting a secure and trustworthy digital environment for everyone at BYU-Idaho.
When interacting with AI tools, it's crucial to understand how your data is handled. BYU-Idaho is committed to protecting your privacy and intellectual property. The following guidelines will help you determine the appropriate use of AI tools based on data sensitivity. (See also the CES Privacy Principles)
If you want to use an AI tool or university data in ways not covered on this page, you may submit a formal request to the BYU-Idaho AI Committee.
Understanding how to classify data is essential for responsible AI use. (See also the BYU-I Data Classification Policy)
- **Public Data:** Any data that can be found publicly on the web or can reasonably be accessed without credentials.
- **Internal Data:** Any data that you have access to, and a reason for using, related to your role within the University.
- **Personally Identifiable Information (PII):** Any data that could potentially identify a specific individual, either on its own or when combined with other data.
- **Restricted Data:** Information of the highest sensitivity, where inappropriate loss, changes, or disclosure could have grave consequences to the University or its students.
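One practical habit that follows from the PII definition: strip obvious direct identifiers from any text before it leaves your machine. The sketch below is purely illustrative (the `redact` helper and its patterns are invented for this example, not a BYU-Idaho tool), and regex scrubbing alone is never sufficient for real PII removal:

```python
import re

# Illustrative only: removes a few obvious direct identifiers (email
# addresses, phone numbers, 9-digit ID numbers) from free text.
# Real de-identification requires far more than regex substitution.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{9}\b"), "[ID]"),
]

def redact(text: str) -> str:
    """Replace each matched identifier with a placeholder tag."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jane at jane@example.edu or 208-555-0123, ID 123456789."))
# → "Contact Jane at [EMAIL] or [PHONE], ID [ID]."
```

Note that the student's name ("Jane") survives untouched, which is exactly why pattern-based scrubbing cannot substitute for keeping sensitive data out of unapproved tools in the first place.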
Tools like Zoom, MS Teams, and ChatGPT (CES) are approved for recording meetings. Zoom and MS Teams automatically notify participants when recordings and transcripts are being created; ChatGPT does not. Regardless of which tool you use, you are legally and ethically responsible for notifying all participants when a meeting is recorded.
Your everyday choices with AI tools can have significant impacts. Here are a few examples:
| Scenario/Action | Possible Consequence(s) |
|---|---|
| Pasting a student's essay (with their name/ID) into a free online AI grammar checker. | Data Exposure & Privacy Breach: The student's private information becomes part of the AI's data, potentially exposed to others or used for future AI training, violating FERPA. |
| Using an unapproved AI tool to create study flashcards from your professor's lecture notes. | Intellectual Property Loss & Academic Compromise: The AI tool could use your professor's unreleased lecture notes, which are intellectual property, to train its model or make the content public, compromising academic integrity by making proprietary course materials accessible to others. |
| Using an unapproved AI tool to summarize confidential meeting notes containing employee salaries or strategic plans. | Confidentiality Breach & Security Risk: Sensitive university data is transmitted to an unsecured third party, potentially leading to unauthorized access, competitive disadvantage, or disciplinary action. |
| Using a public AI service to analyze a large dataset of survey responses from a class project, even after removing names. | Re-identification & Privacy Breach: The AI might link seemingly anonymous data points or store the raw data, making individual students identifiable and violating privacy regulations. Removing names does not make the data public. |
| Relying solely on AI to write a faculty recommendation letter or grade an assignment without human review. | Bias & Unfair Outcomes: AI may introduce biases from its training data, leading to inaccurate or discriminatory evaluations that could affect a student's future or compromise academic integrity. |
| Uploading unreleased course materials (e.g., lecture notes, exam questions, proprietary curriculum) to a public AI service for content generation or review. | Intellectual Property Loss & Academic Compromise: Your teaching materials or secure exam content could be used by the AI provider, become public, or be accessible to others, compromising academic integrity and the university's intellectual property. |
| Uploading a large spreadsheet of student survey responses (even if you believe it is anonymized) to a public AI tool for quick analysis. | Re-identification & Privacy Breach: The AI might link seemingly anonymous data points, or store the raw data, making individual students identifiable and violating privacy regulations. |
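The re-identification rows above rest on a concrete fact: removing names is not the same as anonymizing. A small illustrative sketch (the dataset, field names, and values are invented for this example) shows how combining ordinary "quasi-identifiers" can still single out individuals:

```python
# Illustrative only: even with names removed, unique combinations of
# quasi-identifiers (birth year, ZIP code, major) can identify a person.
from collections import Counter

# A tiny "anonymized" survey: no names or IDs anywhere.
rows = [
    {"birth_year": 2003, "zip": "83440", "major": "Nursing"},
    {"birth_year": 2003, "zip": "83440", "major": "CS"},
    {"birth_year": 2001, "zip": "83460", "major": "CS"},
    {"birth_year": 2003, "zip": "83440", "major": "Nursing"},
]

# Count how many respondents share each combination of quasi-identifiers.
combos = Counter((r["birth_year"], r["zip"], r["major"]) for r in rows)

# Any combination held by exactly one respondent uniquely identifies them.
unique = [combo for combo, count in combos.items() if count == 1]
print(unique)
```

Here two of the four "anonymous" respondents are uniquely identifiable from their attribute combination alone, which is why stripping names from a class dataset does not make it safe to upload to a public AI tool.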