The Science Behind Our Six Interview Variables
We score six research-validated variables—Confidence, Specificity, Clarity, Consistency, Engagement, and Criteria Alignment—that are easy to observe and that predict better hiring outcomes.
Bottom line: We focus on observable interview behaviors that the research links to better decision quality. Our composite Verilo Score is built from variables with established research foundations.
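To make the composite concrete, here is a minimal sketch of how per-variable ratings could roll up into a single score. The weights and the 1–5 rubric scale below are illustrative assumptions, not our production parameters.

```python
# Minimal sketch of a composite score assembled from the six variables.
# The weights and the 1-5 rubric scale are illustrative assumptions only.

WEIGHTS = {
    "confidence": 0.15,
    "specificity": 0.20,
    "clarity": 0.15,
    "consistency": 0.15,
    "engagement": 0.15,
    "criteria_alignment": 0.20,
}

def composite_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-variable rubric ratings (each on 1-5)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[v] * ratings[v] for v in WEIGHTS)

print(composite_score({
    "confidence": 4.0, "specificity": 3.5, "clarity": 4.5,
    "consistency": 4.0, "engagement": 3.0, "criteria_alignment": 4.0,
}))  # -> 3.825
```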
Ongoing Research: Currently collaborating with BYU Information Systems professors Dr. James Gaskin and Dr. Mark Keith on machine learning approaches to variable detection—expanding both the rigor and innovation of our methodology.
Confidence — direct link (strong/moderate)
What the research shows.
People's confidence-like beliefs relate to success in interviews and on the job. A longitudinal field study found that interviewing self-efficacy predicts later interview success (1–3). Meta-analyses also show that generalized work self-efficacy and core self-evaluations have reliable, positive relations with job performance (4–5). Survey evidence specific to interviews further indicates that recruiters perceive confident applicants more favorably (6).
Key sources: (1–6)
How we measure it.
We score expressed belief in capability (assured delivery, steady composure, appropriate assertiveness) using anchored rubrics aligned with those constructs.
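As a sketch of what anchored scoring means in practice (the anchor wording and the 1–5 scale below are hypothetical, not our production rubric):

```python
# A minimal sketch of anchored-rubric scoring for Confidence. The anchor
# wording and the 1-5 scale are hypothetical; the point is that each score
# level is tied to an observable behavioral description rather than a gut
# impression.

CONFIDENCE_ANCHORS = {
    1: "Hesitant delivery; frequent self-undercutting ('I'm probably wrong, but...')",
    3: "Generally assured; composure recovers quickly after hard follow-ups",
    5: "Steady, assertive delivery; owns both claims and limitations without hedging",
}

def score_with_anchor(evidence: str, level: int) -> dict:
    """Record a rating together with the anchor that justifies it."""
    if level not in CONFIDENCE_ANCHORS:
        raise ValueError(f"no anchor defined for level {level}")
    return {"score": level, "anchor": CONFIDENCE_ANCHORS[level], "evidence": evidence}

rating = score_with_anchor("Defended estimate calmly when challenged twice", 5)
print(rating["score"], "-", rating["anchor"])
```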
Specificity — direct link (strong)
What the research shows.
Decades of meta-analysis show that interview content matters. Situational and behavioral description formats—questions that elicit concrete, job-focused examples—predict job performance better than vague prompts (7–9). Modern comparisons reach the same conclusion (10). Newer meta-analytic work also parses which constructs (what you ask about) show stronger criterion-related validity (11).
Key sources: (7–11)
How we measure it.
Our "Specificity" score tracks who/what/when/where/how details, actions, and verifiable outcomes (STAR depth) in responses.
Clarity — mixed: predictor + measurement amplifier
What the research shows.
Clear, organized communication influences interviewer evaluations (12). Just as important, clarity in prompts and scoring boosts inter-rater reliability—the foundation for valid, fair decisions (13–14). Interviews also commonly assess communication/organization as a construct (15).
Key sources: (12–15)
How we measure it.
We score organization, directness, and coherence (minimal rambling/ambiguity). On the process side, we enforce standardized prompts and anchored scoring to maximize rater agreement.
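That agreement is directly measurable. A minimal sketch using Cohen's kappa to compare two raters' rubric scores (the data below is made up for illustration):

```python
from collections import Counter

# Minimal sketch of checking rater agreement on rubric scores with
# Cohen's kappa: observed agreement corrected for chance agreement.

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

a = [3, 4, 4, 2, 5, 3, 4, 1]  # rater A's rubric scores per candidate
b = [3, 4, 3, 2, 5, 3, 4, 2]  # rater B's scores for the same candidates
print(round(cohens_kappa(a, b), 2))  # 0.67
```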
Consistency — ratings link (strong) → job link (moderate)
What the research shows.
Candidates who keep their story straight across the interview—i.e., use a stable, correct criterion and reconcile facts/metrics across answers—receive higher interview ratings and, in field data, show better supervisor-rated performance. Research on criteria recognition (ATIC) and situation assessment explains why: candidates who consistently identify what a question is testing tend to answer coherently across scenarios, which partially explains the interview → job link.
Key sources: (16–18, 26)
How we measure it.
We score within-interview alignment (no contradictions; steady rationale; the same decision rule applied in similar scenarios). If an issue affects only one answer's detail or organization, that's Specificity or Clarity, not Consistency. We treat interrater-reliability practices (panel scoring, frame-of-reference training) as measurement quality in our methods, separate from this construct.
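A toy sketch of the contradiction check, assuming an upstream step has already extracted (fact, value) claims from each answer; that extraction step is an illustrative simplification here.

```python
from collections import defaultdict

# Toy within-interview consistency check: compare the numeric claims a
# candidate attaches to the same fact across different answers.

def find_contradictions(claims: list[tuple[str, float]], tol: float = 0.0):
    """Group claims by fact label and flag labels with conflicting values."""
    by_fact = defaultdict(set)
    for fact, value in claims:
        by_fact[fact].add(value)
    return {fact: sorted(vals) for fact, vals in by_fact.items()
            if max(vals) - min(vals) > tol}

claims = [
    ("team_size", 6.0),        # answer 2: "a team of six"
    ("team_size", 12.0),       # answer 5: "my twelve reports"
    ("latency_cut_pct", 40.0),
    ("latency_cut_pct", 40.0),
]
print(find_contradictions(claims))  # {'team_size': [6.0, 12.0]}
```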
Engagement — ratings link (strong) → job link (moderate)
What the research shows.
A comprehensive meta-analysis across 70+ years finds that visible nonverbal/engagement cues (e.g., eye contact, head movement, appearance) relate strongly to interview performance ratings (19). Classic field data show the same pattern in real interviews (20). For the job-outcome link, broader evidence connects socio-emotional skill (e.g., emotional intelligence) to job performance (21). New work shows that perceived authenticity cues in interviews relate to interview performance and, for verbal cues, to job performance (22).
Key sources: (19–22)
How we measure it.
We score attentiveness and social presence (responsive listening, appropriate eye contact/expression, constructive turn-taking), emphasizing authentic rather than performative signals.
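As a sketch, per-turn cue observations could aggregate like this (the cue names, weights, and 0–1 scale are assumptions, not our production configuration):

```python
# Minimal sketch of aggregating observable engagement cues over an
# interview: average each cue across turns, then combine with weights.
# Cue names, weights, and the 0-1 scale are illustrative assumptions.

CUE_WEIGHTS = {"responsive_listening": 0.4, "eye_contact": 0.3, "turn_taking": 0.3}

def engagement_score(per_turn_cues: list[dict[str, float]]) -> float:
    """Average each cue (0-1) across turns, then combine with cue weights."""
    n = len(per_turn_cues)
    means = {cue: sum(turn[cue] for turn in per_turn_cues) / n
             for cue in CUE_WEIGHTS}
    return sum(CUE_WEIGHTS[cue] * means[cue] for cue in CUE_WEIGHTS)

turns = [
    {"responsive_listening": 0.9, "eye_contact": 0.8, "turn_taking": 1.0},
    {"responsive_listening": 0.7, "eye_contact": 0.6, "turn_taking": 0.8},
]
print(round(engagement_score(turns), 2))  # 0.8
```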
Criteria Alignment (ATIC) — direct link (strong)
What the research shows.
Criteria Alignment (ATIC) is a candidate's ability to figure out what the interviewer is really testing and to shape responses around the job-relevant criterion. Evidence shows ATIC relates to structured/situational interview performance and to supervisor-rated job performance. In one field study, controlling for ATIC significantly reduced the interview → job performance relationship—sometimes to non-significance—indicating partial mediation and suggesting that ATIC explains much of the interview's predictive power (17). Additional research shows that assessing situational demands/criteria explains meaningful variance in both selection results and job performance (18), that ATIC is a distinct, job-relevant ability (23), and that knowing how you should behave (criteria knowledge) is the active ingredient behind situational interviews' validity (24–25).
Key sources: (17–18, 23–25)
How we measure it.
We score criteria alignment by looking for evidence that candidates identify the key demand in the question (what "good" looks like for the role) and frame their answer accordingly (rules/rationale that match the criterion).
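A toy version of that signal: does the answer engage the criterion the question is actually probing? The criterion keyword profiles below are illustrative assumptions; production detection would be model-based.

```python
import re

# Toy criteria-alignment signal: keyword overlap between an answer and the
# criterion the question targets. Criterion profiles are illustrative only.

CRITERIA = {
    "stakeholder_management": {"stakeholder", "align", "expectations", "tradeoff"},
    "prioritization": {"deadline", "scope", "impact", "sequence"},
}

def alignment_signal(answer: str, target_criterion: str) -> float:
    """Fraction of the target criterion's keywords the answer touches (0-1)."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    keywords = CRITERIA[target_criterion]
    return len(tokens & keywords) / len(keywords)

print(alignment_signal(
    "I mapped each stakeholder, set expectations early, and named the tradeoff.",
    "stakeholder_management",
))  # 0.75
```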
Why customers still see value beyond the literature
The evidence above establishes what to measure and why it matters. Our value is to deliver human-level reliability automatically and to prove incremental lift over each client's current process—with transparent validation on their outcomes (work samples, early performance, quality-of-hire).
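The incremental-lift check itself is standard hierarchical regression: fit the client's current signal alone, add our composite, and report the gain in explained variance. A minimal sketch on synthetic data (a real validation would use the client's own outcomes):

```python
import numpy as np

# Minimal sketch of an incremental-validity check: does adding the
# interview composite to an existing screening signal improve prediction
# of an outcome? Data here is synthetic for illustration.

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(size=n)            # client's current screening signal
composite = rng.normal(size=n)           # interview composite score
outcome = 0.3 * baseline + 0.4 * composite + rng.normal(scale=0.8, size=n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(baseline.reshape(-1, 1), outcome)
r2_full = r_squared(np.column_stack([baseline, composite]), outcome)
print(f"delta R^2 from adding the composite: {r2_full - r2_base:.3f}")
```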
Ongoing Research: Machine Learning Variable Detection
Advancing the science through academic collaboration
Research Partnership
We are collaborating with two leading Information Systems professors at BYU's Marriott School of Business to validate and enhance our methodology through rigorous academic research.
Dr. James Gaskin, Professor
AI, Human-Computer Interaction, NSF Grant Recipient
Dr. Mark Keith, Associate Professor
Data Security, Team Collaboration, Technology Behavior
Research Focus
Our collaboration focuses on validating machine learning algorithms' ability to accurately detect and predict the six established variables, strengthening the technological foundation of our methodology.
Expected Publication: June 2026
Research Question: How effectively can ML algorithms detect the six observable interview variables compared to human raters?
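Operationally, that comparison means scoring the same interviews both ways and quantifying agreement. A minimal sketch with synthetic data, using Pearson correlation as one possible criterion:

```python
import numpy as np

# Minimal sketch of comparing ML-detected scores against human rubric
# ratings on the same interviews. Data is synthetic for illustration.

rng = np.random.default_rng(1)
human = rng.uniform(1, 5, size=50)             # mean human rubric score
ml = human + rng.normal(scale=0.5, size=50)    # ML estimate with noise

r = np.corrcoef(human, ml)[0, 1]
print(f"human-ML agreement (Pearson r): {r:.2f}")
```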
Why This Matters
While our six variables are validated by existing research, this collaboration adds rigorous academic validation specifically for machine learning detection methods. This ensures our technological approach meets the highest academic standards while advancing the field of interview analytics.
References
Open-access versions and publisher links provided where available.
(1) Tay, C., Ang, S., & Van Dyne, L. (2006). Personality, biographical characteristics, and job interview success. Journal of Applied Psychology.
(2) Tay, C., Ang, S., & Van Dyne, L. (2006). Open-access record of (1). ResearchGate.
(3) Tay, C., Ang, S., & Van Dyne, L. (2006). Abstract and citation record of (1). Semantic Scholar.
(4) Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin.
(5) Judge, T. A., & Bono, J. E. (2001). Relationship of core self-evaluations… with job performance. Journal of Applied Psychology.
(6) Dimopoulos, A. (2020). Applicant's self-confidence influence in employment interview process… International Journal of Human Resource Studies.
(7) McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology.
(8) Huffcutt, A. I., & Arthur, W. Jr. (1994). Interview validity for entry-level jobs. Journal of Applied Psychology.
(9) Motowidlo, S. J., et al. (1992). Studies of the structured behavioral interview. Journal of Applied Psychology.
(10) Hartwell, C. J., Johnson, C. D., & Posthuma, R. A. (2019). Are we asking the right questions? Journal of Business Research.
(11) Wingate, T. G., Bourdage, J. S., & Steel, P. (2025). Evaluating interview criterion-related validity for distinct constructs: A meta-analysis. International Journal of Selection and Assessment.
(12) Riggio, R. E., & Throckmorton, B. (1988). Verbal and nonverbal behavior in hiring interviews. Journal of Applied Social Psychology.
(13) Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology.
(14) Huffcutt, A. I., Culbertson, S. S., & Weyhrauch, W. S. (2013). Employment interview reliability: New meta-analytic estimates by structure and format. International Journal of Selection and Assessment.
(15) Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology.
(16) Taylor, P. J., & Small, B. (2002). Asking applicants what they would do versus what they did do: A meta-analytic comparison of situational and past-behavior employment interview questions. Journal of Occupational and Organizational Psychology.
(17) Ingold, P. V., Kleinmann, M., König, C. J., Melchers, K. G., & Van Iddekinge, C. H. (2015). Why do situational interviews predict job performance? The role of ATIC. Journal of Business and Psychology.
(18) Jansen, A., Melchers, K. G., Lievens, F., Kleinmann, M., Brändli, M., Fraefel, L., & König, C. J. (2013). Situation assessment as an ignored factor… Journal of Applied Psychology.
(19) Martín-Raugh, M. P., Kell, H. J., Randall, J. G., Anguiano-Carrasco, C., & Banfi, J. T. (2023). Speaking without words: A meta-analysis of nonverbal cues in job interviews. Journal of Organizational Behavior.
(20) Gifford, R., Ng, C. F., & Wilkinson, M. (1985). Nonverbal cues in the employment interview. Journal of Applied Psychology.
(21) O'Boyle, E. H., Humphrey, R. H., Pollack, J. M., Hawver, T. H., & Story, P. A. (2011). Emotional intelligence and job performance: A meta-analysis. Journal of Organizational Behavior.
(22) Heimann, A. L., & Schmitz-Wilhelmy, A. (2025). Observing interviewees' inner self: How authenticity cues in job interviews relate to interview and job performance. Journal of Business and Psychology.
(23) Kleinmann, M., Ingold, P. V., Lievens, F., Jansen, A., Melchers, K. G., & König, C. J. (2011). A different look at why selection procedures work: The role of ability to identify criteria (ATIC). Organizational Psychology Review.
(24) Melchers, K. G., Bosser, S., Hartstein, T., & Kleinmann, M. (2012). Assessment of situational demands in a selection interview. International Journal of Selection and Assessment.
(25) Oostrom, J. K., Melchers, K. G., Ingold, P. V., & Kleinmann, M. (2016). Why do situational interviews predict performance? Is it saying how you would behave or knowing how you should behave? Journal of Business and Psychology.
(26) Van Iddekinge, C. H., Raymark, P. H., Eidson, C. E., & Attenweiler, W. J. (2004). What do structured selection interviews really measure? The construct validity of behavior description interviews. Human Performance.