Our interdisciplinary team investigates the technical, social, and regulatory dimensions of artificial intelligence. From algorithmic fairness to organizational adoption, each research area addresses a critical facet of building AI systems that are safe, accountable, and worthy of public trust.
We study how algorithmic hiring tools encode, amplify, and perpetuate discrimination across protected categories. Our work combines statistical auditing methods with legal analysis to develop practical bias-detection frameworks that employers and regulators can deploy in real-world screening pipelines.
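As a minimal sketch of the kind of statistical audit this work involves, the snippet below computes per-group selection rates from screening outcomes and flags groups whose impact ratio falls below the common four-fifths heuristic used in adverse-impact testing. The group labels, outcome data, and the 0.8 threshold are illustrative assumptions, not a specific framework from our publications.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group.

    Under the four-fifths heuristic, a ratio below 0.8 flags
    potential adverse impact warranting further review.
    """
    reference = max(rates.values())
    return {g: r / reference for g, r in rates.items()}

# Illustrative screening outcomes: (group label, passed screen?)
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(outcomes)   # A: 0.60, B: 0.30
ratios = impact_ratios(rates)       # B relative to A: 0.50, below 0.8
```

In a real screening pipeline the impact ratio is only a first-pass signal; our auditing work pairs such descriptive statistics with significance testing and legal analysis before drawing conclusions.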
View publications →

Rigorous validation is the backbone of trustworthy AI. We develop measurement frameworks, benchmark suites, and statistical tests that quantify model reliability, robustness, and generalizability — bridging the gap between laboratory performance and deployment-grade confidence.
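One simple example of such a statistical test is quantifying the uncertainty around a benchmark accuracy score rather than reporting a point estimate. The sketch below uses a percentile bootstrap to attach a confidence interval to per-example correctness results; the evaluation data and parameters are illustrative assumptions, not a benchmark from our suites.

```python
import random

def bootstrap_accuracy_ci(correct, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for accuracy.

    `correct` is a list of 0/1 per-example correctness indicators.
    Resamples the evaluation set with replacement and reads the
    interval off the sorted distribution of resampled accuracies.
    """
    rng = random.Random(seed)
    n = len(correct)
    stats = sorted(
        sum(rng.choices(correct, k=n)) / n for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Illustrative evaluation: 85 correct out of 100 test examples
correct = [1] * 85 + [0] * 15
lo, hi = bootstrap_accuracy_ci(correct)
```

An interval this wide on 100 examples is itself a finding: a reported one-point gap between two models on such a benchmark may be statistical noise rather than a real capability difference.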
View publications →

We analyze emerging legislation — from the EU AI Act to NYC Local Law 144 — and translate complex regulatory requirements into actionable compliance strategies. Our research helps organizations navigate overlapping jurisdictional mandates while maintaining innovation velocity.
View publications →

From adversarial attacks to data poisoning and model inversion, AI systems face a growing landscape of threats. We build structured threat models, red-team evaluation protocols, and safety benchmarks that help teams identify and mitigate risks before deployment — not after.
View publications →

Algorithms do not operate in a vacuum — humans interpret, override, and act on their outputs. We investigate automation bias, calibration of trust, explanation interfaces, and decision-support design to ensure that human-AI teaming produces better outcomes than either could achieve alone.
View publications →

Deploying AI responsibly requires more than good algorithms — it demands new governance structures, workforce upskilling, and cultural readiness. We research change-management strategies that help organizations adopt AI in ways that are ethical, sustainable, and aligned with institutional values.
View publications →

Training data is the foundation of every machine-learning system — and one of its most vulnerable attack surfaces. This post examines data-poisoning vectors, supply-chain risks, and the procedural safeguards organizations should implement to protect dataset integrity throughout the ML lifecycle.
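One of the simplest procedural safeguards discussed in this space is a cryptographic manifest: record a digest of every dataset file when it enters the pipeline, then re-verify before each training run. The sketch below shows that pattern with SHA-256; the function names and workflow are illustrative assumptions, not a prescription from the post.

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file, streamed in chunks to handle large datasets."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(paths):
    """Map each dataset file to its digest; store alongside the data."""
    return {p: file_digest(p) for p in paths}

def verify_manifest(manifest):
    """Return files whose current contents no longer match the manifest."""
    return [p for p, digest in manifest.items() if file_digest(p) != digest]
```

A manifest catches silent tampering between ingestion and training, but not poisoning present in the data at ingestion time; detecting that requires the content-level defenses the post goes on to discuss.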
Read article →