Protecting AI Chats

When using generative AI chatbots, it is crucial to protect sensitive and personal information. Users should never share any confidential data, including but not limited to:

  • Personally Identifiable Information (PII) such as names, addresses, phone numbers, social security numbers, or financial information.
  • Protected Health Information (PHI) like medical records, diagnoses, or any details covered under HIPAA regulations.
  • Academic and Institutional Data such as unpublished research, exam questions, or proprietary institutional strategies.
  • Login Credentials or any form of access information for systems or networks.
  • Sensitive Internal Communications that are meant to remain within the institution or among specific individuals.

By avoiding the sharing of these types of data, users can help ensure their personal safety and the security of their community and institution.
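As a practical safeguard, you can screen text for obvious PII patterns before pasting it into a chatbot. The sketch below is purely illustrative (the `find_pii` helper and its patterns are our own example, not a BYU-Idaho tool), and simple pattern matching will never catch every form of sensitive data, so it supplements rather than replaces careful judgment:

```python
import re

# Illustrative patterns for a few common PII formats.
# Heuristic only -- many forms of sensitive data will not match.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def find_pii(text: str) -> dict:
    """Return suspected PII found in the text, keyed by type."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits

prompt = "Contact John at 555-123-4567 or john.doe@example.edu"
flagged = find_pii(prompt)
if flagged:
    print("Do not share this text with a chatbot:", flagged)
```

Running the example flags the phone number and email address before the text ever reaches an AI tool; a clean result, however, does not prove the text is safe to share.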

Enhanced Security with Microsoft Copilot and Google Gemini

At BYU-Idaho, we are committed to protecting the privacy and security of our faculty, staff, students, and administrators. To this end, we strongly encourage the use of Microsoft Copilot and Google Gemini through your BYU-Idaho accounts. These tools provide Enterprise-level security features designed to safeguard your data:

  1. Human Review Protection: When you use Microsoft Copilot or Google Gemini with your BYU-Idaho account, your AI interactions are not subject to human review. This means that your data is not exposed to external human evaluators, reducing the risk of sensitive information being leaked or mishandled.
  2. Data Privacy and Security: Both Microsoft and Google apply robust enterprise security protocols to these tools. When accessed through your BYU-Idaho account, your chats and interactions are not used to train their AI models and are not shared outside the enterprise environment, keeping your information confidential and protected against unauthorized access.
  3. Compliance with Institutional Policies: Using these tools via your BYU-Idaho accounts ensures compliance with institutional data policies. This alignment with BYU-Idaho’s standards for data security and privacy helps safeguard institutional and personal data from potential breaches or misuse.

By choosing to use Microsoft Copilot and Google Gemini with your BYU-Idaho credentials, you are leveraging tools that enhance productivity while meeting enterprise standards for data protection. We encourage all members of the BYU-Idaho community to take advantage of these secure platforms to maintain the privacy and integrity of their communications and work.

Managing Data Privacy in Non-Enterprise AI Tools

While tools like Microsoft Copilot and Google Gemini offer enterprise-level security when accessed through BYU-Idaho accounts, other generative AI tools, such as ChatGPT, may provide additional settings that can help you control your data privacy.

For instance, ChatGPT allows users to decide whether their conversations are used to improve the model. Turning off the “Improve the model for everyone” setting prevents your chats from being used for training, though the provider may still retain conversations for a limited time for purposes such as abuse monitoring. This setting is typically found under the “Data Controls” section of the application’s settings menu.

Here’s how you can manage this setting in ChatGPT:

  1. Go to the Settings menu in your ChatGPT application.
  2. Navigate to Data Controls.
  3. Find the option labeled “Improve the model for everyone” and toggle it Off.

By disabling this option, you are taking an extra step to protect your personal information and maintain greater control over your data. Remember, keeping sensitive information private is crucial when using any AI tool outside of the secured BYU-Idaho platforms.