Overview
The integration of AI assistants into meetings and classes offers numerous benefits: they can automate tasks such as note-taking, scheduling, and follow-ups. AI can transcribe discussions, highlight key points, and distribute action items, enhancing productivity and accessibility. Additionally, AI tools can analyze content to provide insights and suggest improvements, making meetings more efficient. For higher education and health care professionals, these technologies can reduce administrative burdens, allowing more focus on scholarship, discovery, and innovation.

However, the use of AI to capture meetings and classes comes with significant risks, particularly concerning privacy and security. Sensitive information discussed in meetings, such as protected health information (PHI) and other research data, could be exposed, and this data might be used to train AI systems, potentially leading to unintended data disclosure. Over-reliance on AI could also weaken critical thinking and human oversight, leading to the acceptance of inaccuracies. Mistakes in transcription or analysis could cause misunderstandings, poor decision-making, or amplified biases, potentially leading to legal issues.

It is essential to ensure the ethical use of AI while maintaining a balance between automation and human input to maximize benefits and minimize risks.

These guidelines aim to help Duke leverage the advantages of AI technology while minimizing associated risks.

Safety and Security Best Practices:

  • Informing Attendees: As a best practice, hosts should notify attendees at the beginning of a meeting if they plan to use an AI assistant, similar to how participants are informed when a meeting is being recorded.
  • Accuracy of AI Tools: While AI tools can be beneficial, they are not flawless. It is crucial to review and edit AI-generated meeting summaries and recordings for accuracy, particularly when handling sensitive information.
  • Handling Sensitive Data: Always exercise caution when dealing with sensitive data, regardless of the communication method.
  • Approved Tools: While the Zoom AI Companion is approved for use, the university advises against using third-party bots in meetings due to privacy and data security concerns.
  • Nature and Purpose of Meetings: Consider the nature of the meeting and the content of discussions before using AI tools like the Zoom AI Companion. AI assistants may not be suitable for meetings where personal stories are shared or open deliberations are anticipated.
  • Attendance: Individuals should not "send" an AI assistant to attend a meeting on their behalf if they are not also present, and no one should be required to do so.

It is strongly recommended to avoid enabling the “Automatically start Meeting Summary for all meetings I host” setting. Instead, enable AI Meeting Summary at the start of each meeting so attendees can confirm their consent before proceeding.

Acknowledgments:
Special thanks to colleagues from the following universities for their contributions and review: UC Davis, UC Office of the President, University of Maryland, University of Michigan, University of New Mexico, and Yale University.