Data Security
Please be aware that you may not enter internal, sensitive, or restricted data into any generative AI tool or service unless approved to do so by the AI Governance Committee. Only publicly available information (data that carries no legal or other requirement for confidentiality, integrity, or availability under the Freedom of Information Act) may be used in generative AI tools and services (see the Data Governance Policy).
Privacy Principles
Data privacy is a major concern in artificial intelligence. Because AI relies on abundant, accurate data, it lends itself to the potential abuse of personal data. Each AI model or service is different, but six privacy principles govern how data should be managed, whether you are simply using a GenAI service or training a new GenAI model: purpose limitation, data minimization, lawfulness, transparency, protection, and duration. If your data use, or that of a service you rely on, violates these principles, you must find an alternative that adheres to them.
Risk Factors for Violating Privacy Laws
If you’re considering using a GenAI service for your work, ask yourself the following questions. The answers are strong indicators of whether the service handles data in a way that protects privacy:
| Question | Risk Factor |
|---|---|
| Does the service have a current privacy policy? | If a service doesn't have a privacy policy, it almost certainly doesn't handle data in a safe way. |
| Is there an option not to use your data to train models? | Opt out of having your data used for training whenever possible. Some services reset this setting each time you open the app, so check it regularly. |
| How will your data be stored? | Ensure the service stores your data securely; insufficient security can lead to data leaks. |
| Does the service disclose its data-sharing policies? | If the service does not disclose data-sharing policies at all, treat it as if it will share your data. |
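The questions above can double as a quick pre-use screen. Below is a minimal, hypothetical sketch of such a screen in Python; the function name, parameters, and flag wording are illustrative only and do not come from the policy or any approved assessment tool.

```python
# Hypothetical pre-use screen based on the four questions in the table above.
# All names are illustrative assumptions, not an official assessment tool.

def screen_genai_service(has_privacy_policy: bool,
                         can_opt_out_of_training: bool,
                         storage_is_secure: bool,
                         discloses_data_sharing: bool) -> list[str]:
    """Return a list of risk flags; an empty list means no flags were raised."""
    flags = []
    if not has_privacy_policy:
        flags.append("No privacy policy: assume unsafe data handling.")
    if not can_opt_out_of_training:
        flags.append("No training opt-out: your inputs may train vendor models.")
    if not storage_is_secure:
        flags.append("Unclear or weak storage security: risk of data leaks.")
    if not discloses_data_sharing:
        flags.append("No data-sharing disclosure: assume your data is shared.")
    return flags

# Example: a service with a policy, opt-out, and secure storage,
# but no disclosed data-sharing terms, raises one flag.
for flag in screen_genai_service(True, True, True, False):
    print("RISK:", flag)
```

Any raised flag suggests looking for an alternative service, and none of this replaces the approval requirement in the Data Security section above.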