Trustworthy Research for Understandable, Safe Technology

The TRUST Lab at Duke University conducts research in applied AI explainability, technology evaluation, and adversarial alignment to ensure AI systems are transparent, safe, and beneficial for society.

Research Areas

Our interdisciplinary team tackles challenges in developing trustworthy technology.

Applied Explainability

Applying AI explainability methods to real-world problems.

Technology Evaluation

Evaluating and benchmarking AI systems, and assessing the societal impact of emerging technologies.

Adversarial Alignment

Using adversarial techniques to better explain AI systems.

Current Projects

Explainability in Conservation

Applied Explainability
Researchers: Jiayi Zhou, Gunel Aghakishiyeva

We are applying state-of-the-art computer vision approaches and explainable machine learning techniques to support wildlife conservation efforts. This interdisciplinary project bridges machine learning with ecological science to create transparent decision-making tools.

Exploring Geolingual and Temporal Components of AI Embeddings

Applied Explainability
Researchers: Bochu Ding, Junyu Zhang, Vivienne Foley, Alexis Golart, Neha Shukla, James Sohigian

This project investigates how large embedding models encode geographical and temporal information, with implications for understanding cultural biases and historical shifts in AI systems.

Consilience: AI in Interdisciplinary Research Augmentation

Technology Evaluation
In collaboration with: Society-Centered AI Initiative
Researchers: TBD

This study explores how voice-based conversational LLM agents can function as “research translators” in interdisciplinary collaborations.

Aligned Machine

Technology Evaluation
Researchers: Jiechen Li

Aligned Machine aims to build a benchmark of human-aligned similarity by comparing AI model outputs with human judgments of meaning, using an interactive platform designed to support public engagement with AI research.

Explainable and Adversarially Robust Sleep Monitoring

Adversarial Alignment, Applied Explainability
Researchers: TBD

This project addresses gaps in responsible AI for digital health by developing explainable and adversarially robust machine learning models for sleep monitoring.

Featured Videos

Watch our latest presentations and research overviews

Responsible AI Symposium

Our recent symposium, hosted at Duke University, introduces society-centered AI

Adversarial Alignment

Dr. Bent introduces the AI 2030 audience to Adversarial Alignment

Get in Touch

Interested in our research? We welcome collaborations, inquiries from prospective students, and partnerships with industry and academia.

brinnae.bent@duke.edu
Duke University, Durham, NC