Peer-reviewed and working papers

Publications

Research outputs from Nauta Research Labs spanning AI security, fairness and adverse impact analysis, governance frameworks, validation methodology, and MLSecOps. Papers are available as PDFs where possible, with citation export for reference managers.


Protecting AI Training Datasets from Threats: A Defense-in-Depth Framework

Ahmad, S.

Nauta Research Labs Technical Report, 2025

AI Security, MLSecOps
Training data is the foundation of every model. This paper presents a structured framework for dataset integrity, covering threat taxonomy, risk scoring, defense-in-depth controls, and governance checks that auditors can verify. The framework is aligned with NIST AI RMF and addresses data poisoning, exfiltration, and supply-chain risks specific to machine learning pipelines.
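One of the supply-chain controls the framework describes, dataset integrity verification, can be sketched with a hash manifest. This is an illustrative example, not code from the report; the function names and directory layout are assumptions:

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest for every file in a training-data directory."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}

def verify_manifest(data_dir: str, manifest: dict) -> list:
    """Return paths recorded in the manifest whose contents changed or vanished.

    Note: this only checks files already in the manifest; a fuller audit
    would also flag files added to the directory since the manifest was built.
    """
    current = build_manifest(data_dir)
    return [p for p, digest in manifest.items() if current.get(p) != digest]
```

A pipeline would build the manifest at dataset sign-off and re-verify it before each training run, giving auditors a tamper-evidence check that is simple to inspect.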

The Impact of Structured Validation and Audit Frameworks on the Fairness and Efficiency of AI-Driven Hiring Systems

Ahmad, S.

International Journal of Research and Applied Innovations (IJRAI), Vol. 8, Issue 6, Nov–Dec 2025. DOI: 10.15662/IJRAI.2025.0806023

Fairness, Validation
Organizations increasingly use artificial intelligence (AI) to screen and rank job applicants, yet these systems can produce disparate outcomes across protected groups. This study evaluates whether structured validation and audit frameworks are associated with improved fairness and reduced time to hire. Using an applicant-tracking-system dataset from a logistics supply-chain employer (N = 1,906 active applicants; n = 400 hires), we compared a pre-AI manual baseline with two AI screening configurations: a compliance-only audit and an assurance-level audit. Results indicated that assurance-level auditing coincided with substantial improvements in adverse impact ratios (e.g., minority AIR increased from 0.12 to 0.85 under assurance) while also reducing time to hire. Findings suggest that structured, higher-intensity audit frameworks can be associated with simultaneous gains in fairness and process efficiency.
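The adverse impact ratio (AIR) reported above is the standard four-fifths-rule metric: the selection rate of the protected group divided by the selection rate of the reference group. A minimal sketch (the counts below are illustrative, not the study's data):

```python
def adverse_impact_ratio(selected_minority: int, total_minority: int,
                         selected_majority: int, total_majority: int) -> float:
    """Adverse impact ratio: minority selection rate / majority selection rate."""
    minority_rate = selected_minority / total_minority
    majority_rate = selected_majority / total_majority
    return minority_rate / majority_rate

# Four-fifths rule: an AIR below 0.80 is commonly treated as evidence
# of adverse impact warranting further review.
air = adverse_impact_ratio(selected_minority=17, total_minority=200,
                           selected_majority=100, total_majority=500)
print(f"AIR = {air:.3f}")  # 0.085 / 0.200 = 0.425, flagged under the rule
```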

Evaluating an AI-Driven Computerized Adaptive Testing Platform for Psychological Assessment: A Randomized Controlled Trial

Ahmad, S.

International Journal of Engineering & Extended Technologies Research (IJEETR), Vol. 7, Issue 3, May–Jun 2025. DOI: 10.15662/IJEETR.2025.0703005

Validation, AI Security
This randomized controlled trial evaluated the psychometric performance, efficiency, and clinical utility of an artificial intelligence (AI)–driven computerized adaptive testing (CAT) platform for mood and anxiety assessment, compared with traditional fixed-form measures. A total of 300 adults from urban community mental health clinics were randomized to complete either an AI-based adaptive battery or a traditional fixed-form battery. The AI platform demonstrated high internal consistency (Cronbach's α = .88; McDonald's ω = .86) and strong convergent validity with established self-report scores (r = .78–.84, p < .001). Administration time was reduced by 35%. Clinician diagnostic concordance with SCID-5 increased when using explainable AI aids (κ = .82) compared to no AI support (κ = .71). These findings support the reliability, validity, and efficiency of AI-based adaptive assessment.
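The internal-consistency statistic cited above, Cronbach's α, follows the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A pure-Python sketch for reference (not the platform's code):

```python
from statistics import variance

def cronbach_alpha(items: list) -> float:
    """Cronbach's alpha for a list of respondent rows, each with k item scores."""
    k = len(items[0])
    # Sample variance of each item across respondents.
    item_vars = [variance(row[i] for row in items) for i in range(k)]
    # Sample variance of the respondents' total scores.
    total_var = variance(sum(row) for row in items)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When all items move together perfectly, the statistic reaches 1.0; values around .88, as reported for the adaptive battery, indicate high internal consistency.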

Green AI for Sustainable Employee Attrition Prediction: Balancing Energy Efficiency and Predictive Accuracy

Ahmad, S. & Ahmad, H. M.

International Journal of Innovative Research in Computer and Communication Engineering (IJIRCCE), Vol. 13, Issue 5, May 2025. DOI: 10.15680/IJIRCCE.2025.130500

MLSecOps, Governance
This study investigates the application of Green Artificial Intelligence (AI) principles to employee attrition prediction models, aiming to reduce computational energy consumption while maintaining comparable predictive accuracy to conventional approaches. A mixed factorial design evaluated six machine learning algorithms in both conventional and Green AI-optimized versions, across three feature selection methods. Experiments were conducted on the IBM HR Analytics Employee Attrition & Performance dataset (N = 1,470). Results indicate that Green AI models achieved a significant average energy reduction of 44.8% during training and 35.6% during inference compared to conventional models, with only minimal and statistically non-significant differences in predictive performance. This research provides compelling empirical evidence that sustainable AI practices are feasible and effective in HR analytics.

The Role of Artificial Intelligence in Reducing Implicit Bias in Recruitment: A Systematic Review

Ahmad, S.

International Journal of Advanced Research in Education and Technology (IJARETY), Vol. 11, Issue 6, Nov–Dec 2024. DOI: 10.15680/IJARETY.2024.1106001

Fairness, Governance
This systematic review critically examines the role of artificial intelligence (AI) in mitigating implicit bias within recruitment practices. Drawing on empirical and theoretical literature published between 2010 and 2024, it synthesizes findings from diverse sources to evaluate the effectiveness, limitations, and ethical implications of AI-based recruitment tools. The analysis identifies promising advancements (AI gamification, fairness-aware algorithms, and hybrid decision-making models) as well as persistent challenges, including algorithmic opacity, data bias, and inadequate regulatory oversight. The findings suggest that AI can contribute to more equitable hiring outcomes when implemented with transparency, robust data governance, and interdisciplinary oversight.