This chapter introduces explainable artificial intelligence (XAI) as a novel methodological approach for studying person–environment misfit. The authors argue that traditional methods often oversimplify misfit by assuming linear and symmetrical relationships, neglecting its complex and multifaceted nature. XAI techniques, by contrast, can model nonlinear, asymmetrical, and context-dependent effects, offering a richer understanding of how and why misfit occurs. The chapter demonstrates how XAI methods such as logistic regression, decision trees, gradient boosting, SHAP values, and counterfactual explanations can be used to detect patterns of calculated and perceived misfit. Using survey data on personal and organisational values, the authors show how XAI can identify which attributes most strongly predict misfit, when contextual variables such as tenure matter, and what small changes could transform a misfit into a fit. The chapter concludes that XAI offers an exploratory yet theoretically generative means of advancing misfit research by uncovering hidden interactions, boundary conditions, and unique individual experiences. By combining human reasoning with algorithmic insight, XAI enables a more precise and theory-informed understanding of misfit, providing new pathways for scholars to model complexity and for organisations to design interventions that reduce harmful misalignments.
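The counterfactual idea summarised above ("what small changes could transform a misfit into a fit") can be illustrated with a minimal sketch. The model, features, and weights below are entirely hypothetical and are not taken from the chapter: a toy logistic model scores misfit from absolute person–organisation value gaps, and a greedy search shrinks the most influential gap until the predicted misfit flips to fit.

```python
import math

# Hypothetical logistic model of perceived misfit.
# Features: absolute person-organisation gaps on three values
# (autonomy, collaboration, innovation). Weights are illustrative only.
WEIGHTS = {"autonomy": 1.2, "collaboration": 0.8, "innovation": 1.5}
BIAS = -2.0  # negative bias: small gaps predict fit

def misfit_probability(gaps):
    """Predicted probability of misfit under the toy model."""
    z = BIAS + sum(WEIGHTS[k] * gaps[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def counterfactual(gaps, threshold=0.5, step=0.05):
    """Greedily shrink one value gap at a time until the predicted
    misfit probability drops below `threshold` -- a minimal search for
    the small change that would turn a misfit into a fit."""
    gaps = dict(gaps)
    changes = {}
    while misfit_probability(gaps) >= threshold:
        # reduce the nonzero gap with the largest weight first
        k = max((k for k in gaps if gaps[k] > 0),
                key=lambda k: WEIGHTS[k])
        gaps[k] = max(0.0, gaps[k] - step)
        changes[k] = changes.get(k, 0.0) + step
    return gaps, changes

person = {"autonomy": 1.0, "collaboration": 0.5, "innovation": 1.4}
before = misfit_probability(person)        # predicted misfit (> 0.5)
new_gaps, delta = counterfactual(person)
after = misfit_probability(new_gaps)       # predicted fit (< 0.5)
```

In this toy case the search converges on the "innovation" gap because it carries the largest weight, mirroring how counterfactual explanations point to the attribute whose adjustment most cheaply flips the prediction. A real analysis in the chapter's spirit would fit the model to survey data rather than fixing weights by hand.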

Boon, C., Durak, E., & Birbil, I. (2025). Towards a better understanding of misfit through explainable AI techniques. In Employee misfit: Theories, perspectives, and new directions. https://doi.org/10.1007/978-981-96-8208-9_12