The recommendations ITU-R BT.500 and ITU-T P.910 describe several subject screening methods for subjective multimedia quality experiments. Yet their real-world effectiveness remains difficult to verify because no ground truth is available. This paper introduces a comprehensive simulation framework designed to objectively assess subject screening methods by generating synthetic subjective scores with known parameters. Two primary experimental scenarios, modeling typical and super-precise subjects, were evaluated using the simulated data. The results indicate that the correlation-based screening methods of P.910 outperform the kurtosis-based method of BT.500 in detecting irrelevant subjects, thereby improving the precision of subjective experiment outcomes. Additional contributions include a novel score generation model and the definition of robust evaluation metrics. We hope this paper will serve as a basis for future simulation-based analyses of subjective experiments.