The TRUST Lab at Duke University conducts research in applied AI explainability, technology evaluation, and adversarial alignment to ensure AI systems are transparent, safe, and beneficial for society.
Our interdisciplinary team tackles challenges in developing trustworthy technology.
Applying AI explainability methods to real-world problems.
We focus both on the technical evaluation and benchmarking of AI systems and on assessing the societal impact of emerging technologies.
Using adversarial techniques to better explain AI systems.
We are applying state-of-the-art computer vision and explainable machine learning techniques to support wildlife conservation efforts. This interdisciplinary project bridges machine learning and ecological science to create transparent decision-making tools.
This project investigates how large embedding models encode geographical and temporal information, with implications for understanding cultural biases and historical shifts in AI systems.
This study explores how voice-based, conversational LLM agents can function as “research translators” in interdisciplinary collaborations.
Watch our latest presentations and research overviews
Our recent symposium, hosted at Duke University, introduced society-centered AI
Dr. Bent introduces the AI 2030 audience to Adversarial Alignment
Interested in our research? We welcome collaborations, inquiries from prospective students, and partnerships with industry and academia.