The TRUST Lab at Duke University conducts research in applied AI explainability, technology evaluation, and adversarial alignment to ensure AI systems are transparent, safe, and beneficial for society.
Our interdisciplinary team tackles the challenge of building trust in technology: we apply AI explainability methods to real-world problems, evaluate and benchmark AI systems, assess the societal impact of emerging technologies, and use adversarial techniques to better explain AI systems.

We are applying state-of-the-art computer vision and explainable machine learning techniques to support wildlife conservation efforts. This interdisciplinary project bridges machine learning and ecological science to create transparent decision-making tools.
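To give a flavor of the explainability tooling this kind of work involves, here is a minimal Grad-CAM sketch, an illustration rather than the project's actual pipeline. The ResNet backbone and random input image are stand-ins; a real workflow would use the project's trained wildlife model and camera-trap imagery.

```python
# A minimal Grad-CAM sketch (illustrative only; not the project's actual pipeline).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Capture activations and gradients from the last convolutional block.
acts, grads = {}, {}
def fwd_hook(_, __, output):
    acts["feat"] = output
def bwd_hook(_, grad_in, grad_out):
    grads["feat"] = grad_out[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.rand(1, 3, 224, 224)  # stand-in for a camera-trap photo
logits = model(image)
logits[0, logits.argmax()].backward()

# Weight each feature map by its average gradient, then ReLU and normalize.
weights = grads["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

The resulting heatmap highlights which image regions most influenced the classifier's prediction, the kind of transparency conservation partners need to trust model outputs.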

This project investigates how large embedding models encode geographical and temporal information, with implications for understanding cultural biases and historical shifts in AI systems.
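A standard way to test whether an attribute is recoverable from embeddings is a probing classifier. The sketch below is a generic illustration, not the project's method: random arrays stand in for real frozen embeddings and geographic labels, and the embedding dimension and number of regions are assumptions.

```python
# A generic linear-probe sketch (illustrative; random stand-ins replace real data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins: 1,000 frozen text embeddings (dim 768) and a region label per text.
X = rng.normal(size=(1000, 768))
y = rng.integers(0, 4, size=1000)  # e.g., 4 coarse geographic regions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# If the probe beats chance on held-out data, the embeddings encode the
# attribute in a linearly decodable way.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", accuracy_score(y_te, probe.predict(X_te)))
```

With real embeddings, above-chance probe accuracy would indicate that geographic or temporal information is linearly encoded, a precondition for the biases and historical shifts the project studies.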

This study explores how voice-based, conversational LLM agents can function as “research translators” in interdisciplinary collaborations.

This project addresses gaps in responsible AI for digital health by developing explainable and adversarially robust machine learning models for sleep monitoring.
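As one illustration of what adversarial robustness training can look like in this setting, the sketch below runs a single FGSM adversarial-training step on a toy classifier. The synthetic signals, window length, five-stage labels, and perturbation budget are all assumptions, not the project's actual model or data.

```python
# A toy FGSM adversarial-training step (illustrative; synthetic stand-in signals).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(300, 64), nn.ReLU(), nn.Linear(64, 5))  # 5 sleep stages
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 300)        # stand-in for 32 windows of a physiological signal
y = torch.randint(0, 5, (32,))  # stand-in sleep-stage labels
epsilon = 0.05                  # perturbation budget (assumed)

# FGSM: perturb inputs along the sign of the loss gradient, then train on both
# clean and perturbed batches so predictions stay stable under small input shifts.
x.requires_grad_(True)
loss_fn(model(x), y).backward()
x_adv = (x + epsilon * x.grad.sign()).detach()

opt.zero_grad()
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
loss.backward()
opt.step()
```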

We aim to turn the “bug” of adversarial attacks into a feature for improving AI transparency, trustworthiness, and alignment with human goals. In this project, we are developing an open-source adversarial probing platform for LLMs.
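To sketch the basic probing loop, the example below perturbs a prompt with small character swaps and compares the model's responses. It is a minimal illustration, not the platform itself: GPT-2 is a small stand-in model, and typo injection is just one simple probe among many possible.

```python
# A minimal adversarial-probing sketch (illustrative; gpt2 is a stand-in model).
import random
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")

def perturb(prompt: str, rate: float = 0.05, seed: int = 0) -> str:
    """Inject random adjacent-character swaps to simulate a small input perturbation."""
    rng = random.Random(seed)
    chars = list(prompt)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

prompt = "The safest way to store user passwords is"
for p in (prompt, perturb(prompt)):
    out = gen(p, max_new_tokens=30, do_sample=False)[0]["generated_text"]
    print(repr(p), "->", out[len(p):].strip())
# Large output shifts under tiny input edits flag brittle, hard-to-trust behavior.
```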
Jiayi Zhou, Günel Aghakishiyeva, Saagar Arya, Julian Dale, James David Poling, Holly Houliston, Jamie Womble, Gregory Larsen, David Johnston, Brinnae Bent
NeurIPS Workshop on Imageomics (accepted) • 2025
Günel Aghakishiyeva, Jiayi Zhou, Saagar Arya, Julian Dale, James David Poling, Holly Houliston, Jamie Womble, Gregory Larsen, David Johnston, Brinnae Bent
NeurIPS Workshop on Imageomics (accepted) • 2025
Watch our latest presentations and research overviews
Our recent symposium, hosted at Duke University, introduces society-centered AI
Dr. Bent introduces the AI 2030 audience to Adversarial Alignment
Interested in our research? We welcome collaborations, inquiries from prospective students, and partnerships with industry and academia.