Duke’s AI Suite is designed to provide a secure, private, and trustworthy environment for learning, research, and exploration with generative AI. Compliant with Duke’s security policies and accessible only through a NetID login, these AI tools protect your conversations and data while supporting responsible AI use.
Can anyone see my chats in Duke licensed and supported AI tools?
Your instructors and supervisors cannot actively monitor your chats. These AI tools fall under the same acceptable use policy as other Duke tools, such as Outlook.
How does Duke’s AI Suite keep you and your data safe?
- Secure Login
Duke’s AI Suite tools are accessible only through NetID login. This means your personal information is not shared with external companies such as OpenAI. Duke’s AI tools are covered by Duke’s security policies and have been configured to create a trustworthy AI environment for Duke users.
- Private Browsing
No AI models are trained on the prompts or files you submit to Duke’s AI Suite tools. Staff and faculty cannot view each other’s interactions with AI. Your AI interactions are private.
- Protected Data
Under no circumstances should PHI (protected health information) be uploaded to Duke’s AI tools. In some cases, sensitive data may be used in Duke’s AI tools; refer to the Duke Security Office’s guidelines for data storage for more information. If your data is being used for research, be sure to follow Duke’s research protocols for generative AI use.
What data can I share with Duke licensed and supported AI tools?
Be cautious about what kinds of data you upload to any AI model. PHI cannot be entered into any AI tool, but in some cases, sensitive data can be used. Refer to the Duke Security Office’s guidelines for more information about acceptable use of Duke’s AI tools and data.
What are best practices for integrating AI content into your work, studies, and research?
When using AI, be sure to cite AI sources and/or include an AI declaration statement. Failing to cite generative AI, or plagiarizing its output, is considered an academic integrity violation. If generative AI is being used for research, be sure to follow best practices for research and generative AI.