
Privacy by Design: How Duke is building trust into AI

Generative AI didn’t arrive quietly. It showed up in classrooms, offices, and research workflows, and the Duke community did what people everywhere were doing: experimenting, testing, and exploring what these tools could do.

As interest in AI grew across campus, many of those questions surfaced through the Office of Information Technology (OIT), alongside conversations happening in schools, departments, and research groups. Faculty wanted to try new approaches. Staff saw opportunities to streamline work. Students were already incorporating AI into their daily routines.

At the same time, the risks were becoming clearer to those at the forefront of AI at Duke. Many AI platforms are designed to learn from user inputs by default, a model that can clash with the obligations universities have to protect research data, student information, and institutional trust.

“As a researcher, data stewardship is fundamental to our work,” said David MacAlpine, professor of Pharmacology & Cancer Biology at Duke. “Even when data is de-identified, how it’s handled and where it ends up matters.”

Suddenly Duke was facing a familiar higher education dilemma, intensified by AI. How can we support innovation without asking people to trade privacy for convenience?

Where the information goes

Many publicly available AI tools retain the data users input for model training, analytics, or product improvement unless explicitly restricted. For individuals — and especially for researchers working with sensitive data — that reality raises important questions about where information goes and how it may be used beyond its original purpose.

[Image: David MacAlpine]

“Privacy is at the heart of many of our systems and software negotiations — AI is just one example,” said Nick Tripp, Duke’s chief information security officer and one of the leaders who took on the work most people never see.

“It’s not glamorous, and it’s largely invisible,” said Tripp. “But instead of asking every individual to assess privacy risk on their own, Duke builds those protections into the infrastructure people rely on every day.”

Tripp and others chose a different path for Duke, negotiating systems and software agreements that embed privacy protections at the institutional level.

Privacy and AI tools at Duke

The approach to protect privacy at an institutional level is reflected in Duke’s development of DukeGPT, the university’s AI platform designed for institutional use. DukeGPT provides access to generative AI capabilities while operating within Duke’s security, governance, and privacy frameworks — giving the community a trusted environment to experiment and innovate.

Duke has also negotiated an educational license with OpenAI for ChatGPT Edu. Under this agreement, data submitted through the university’s licensed environment is not used to train models or for marketing purposes, aligning with Duke’s expectations around data stewardship and privacy.

“We negotiated strong privacy protections so that Duke retains ownership of its data and AI outputs,” said John Robinson, director of OIT’s Academic & Campus Technology Services. “Duke information is used only to deliver the service. It is not used to train OpenAI’s models and is protected by enterprise-grade security, with provisions governing data handling and deletion when the agreement ends.”

Last spring, Duke initiated a year-long pilot of the ChatGPT Edu license, designed to inform longer-term decisions about AI tools at Duke while maintaining privacy and security standards.

Who sees what

Just as important, Tripp emphasized, is how Duke governs access to data internally.

“By policy, individuals at Duke do not access the content of individual AI interactions, except in the limited and extraordinary circumstances outlined in Duke’s Acceptable Use Policy,” Tripp said.

Instead of accepting standard consumer terms, Tripp and others worked with the university’s legal, security, privacy, and procurement teams to ensure that contractual protections matched the realities of how AI is used in higher education across teaching, research, and administrative functions.

[Photo: Nick Tripp]

That work matters because AI at Duke is not limited to a single experiment or use case. Across the university, teams are exploring AI and automation to support student advising, financial aid communications, research compliance, contract review, hiring workflows, academic policy guidance, and other core functions.

Many of these efforts interact with systems that manage sensitive information — student records, personnel data, research materials, financial transactions, and institutional decision-making processes.

“When AI is embedded in core university operations, privacy can’t be an afterthought,” said Tripp. “It has to be foundational — because these tools interact with data people trust Duke to protect.”

Experimenting responsibly

“Responsible AI isn’t about saying no — it’s about creating the conditions where researchers can say yes with confidence,” MacAlpine said. “That matters as these tools become part of everyday research.” 

From a research perspective, consistency throughout the technology ecosystem is essential.

“When the institution puts these guardrails in place it changes how we can responsibly experiment,” said Tripp. “It allows us to explore new tools while staying aligned with ethical obligations, data-use agreements, and community standards.”

The result is a model of technology adoption that treats privacy not as a constraint, but as core infrastructure. Faculty can explore new teaching tools. Students can experiment with AI to support learning. Researchers can evaluate emerging methods — all with the confidence that privacy considerations have been addressed upstream.

“This isn’t about slowing innovation,” Tripp said. “It’s about making sure innovation aligns with Duke’s values.”