So, you’ve worked hard and got some great applications for your new job. There are a few that have real potential.
But how do you know which one to pick?
You test them. Whether formal or informal, we all have a selection process. After a review of the CVs, maybe you do a phone screening followed by an interview with the hiring manager. Perhaps you add in a code challenge for your developer roles.
Whatever the case, you’re ultimately trying to predict how well someone will perform in the job based on how well they do in the different ways you assess them during the recruitment process.
Luckily, researchers regularly measure how well assessment tools predict job performance. We’ll take a look at how the accuracy of these tools is measured, and how well common assessment tools perform.
Then we’ll look at some of the reasons why you may not get accurate test results, including the candidates’ experience. Because if candidates are under pressure, or don’t think the test is relevant, it will affect their results - and your ability to pick the right person for your job.
P.S. If you’re enjoying the Inside Job, please help spread the word! Recommend it to a friend in an email or on LinkedIn, with this link: https://www.idealrole.com/newsletter.
For over 100 years researchers have been measuring the effectiveness of assessment tools that can be used in recruitment. In every study, they aim to measure how well evaluations of candidates made during recruitment predict how those same people go on to perform in the job after they’re hired.
How well an assessment predicts job performance is known as the tool’s validity. The higher the validity of a tool, the better it is able to predict how well someone will do in the job.
Ideally, the results of an assessment tool should indicate how someone will perform in the job: a high score reflects someone who will perform well; a low score, someone who is not suited to the role.
To test how accurate assessment tools are, researchers compare people’s assessment results with their later job performance. How well the results predict performance is expressed as a correlation coefficient - a statistical measure of how strongly two things are related.
You don’t need to understand the details of how the correlation coefficient is calculated, but you should know that in this context it falls between 0 and 1 (correlations in general can run from -1 to 1).
A perfect tool would have a correlation coefficient of 1, meaning the assessment results perfectly predict job performance. At the other end of the scale, an assessment tool with a correlation coefficient of 0 doesn’t predict job performance at all.
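As a rough illustration, here’s how a validity coefficient could be computed as a plain Pearson correlation. The candidate scores and performance ratings below are entirely made up for the example:

```python
# Pearson correlation between assessment scores and later job-performance
# ratings. All numbers are invented purely for illustration.
from statistics import mean

def pearson(xs, ys):
    """Correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

assessment = [62, 75, 48, 90, 70, 55]          # hypothetical test scores
performance = [3.1, 3.8, 2.5, 4.6, 3.5, 2.9]   # hypothetical manager ratings

r = pearson(assessment, performance)
print(f"validity coefficient: {r:.2f}")
```

A real validity study would of course use far more people, and performance ratings gathered well after hiring - but the calculation is the same.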
In 2016, researchers published a meta-analysis of the effectiveness of different assessment methods. This paper sets out what they found.
A meta-analysis is a statistical technique that allows the findings from many different studies to be combined. The benefit of doing this is that the overall combined result is more accurate as it reduces the impact of errors in individual studies.
One of the key findings from the meta-analysis was that tests of general mental ability (GMA) had the highest validity, with a correlation coefficient of 0.65.
The next best predictor of job performance was the interview (0.58). And, unlike past studies, this one found that there was no difference in the effectiveness of structured and unstructured interviews.
But better results can be achieved by using more than one type of assessment.
For example, using a GMA and integrity test increases the validity coefficient to 0.78. Using a GMA combined with a structured interview results in a validity coefficient of 0.76. And using a GMA and an unstructured interview increases it to 0.74.
Interestingly, personality tests were found to be only moderately predictive - and offered almost nothing extra when a GMA test is also used.
You can find a table with the validity coefficients of some common assessment types in the paper.
It also cautions that there are ethnic group differences in GMA test results, and outlines the limitations of meta-analysis studies.
While it’s important to be aware of the limitations, research shows that when used thoughtfully, assessment tools can lead to significant improvements in recruitment results.
In deciding what selection tools to use, you need to make sure that they can be verified over time by linking the assessments to actual job performance.
You’ll also need to make sure that when assessments are used, their results are actually followed. A common problem is that test results get ignored - especially when the best-liked candidate doesn’t score well.
That situation needs to be avoided: using assessment tools but ignoring their results has been found to lead to worse outcomes than not using the tools at all.
Another thing to consider is how the accuracy of assessment tools can be impacted by the candidate experience.
For example, imagine that one of the assessments is rather stressful. If some candidates respond poorly to the stressful test environment but would perform well on the job, they’ll be incorrectly overlooked.
This scenario is even more problematic if it affects certain types of candidates more than others - in particular, candidates from disadvantaged or minority groups, who may feel added pressure due to negative stereotypes and the sense of being an outsider.
Given that the accuracy of assessment tools can be affected by the candidate experience - what do candidates really think about the different tools?
Perhaps unsurprisingly, when applicants feel that a selection method reflects the day-to-day reality of the job, they show a more positive attitude towards testing - and tend to perform better.
Also, when applicants believe selection tools and processes are fair and job related, they tend to respond more positively - to both the process and the company.
On the other hand, when selection tools and procedures are seen as unfair, the companies using them risk being unable to attract top candidates and face more litigation or negative publicity.
Interviews and work sample tests are among the tools most appreciated by candidates - personal contacts and honesty tests among the least.
Before you pick someone with a score of 83 over someone else with a score of 82, you need to consider how accurate the test is.
While we often take scores at face value, they’re actually more like a range. And depending on the accuracy of the test, that range can be large.
Imagine you took a test and scored 55. If the accuracy of the test is 0.75, your true score is anything from 30 to 80. If the accuracy is 0.85, your true score is between 40 and 70. And if the test accuracy is 0.95, your true score is between 45 and 65.
If Jane scores 65, but the accuracy of the test is 0.75, her score is somewhere between 40 and 90. Due to the overlap between your scores, you can’t be sure Jane actually did better than you did.
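One common way to turn a test’s reliability into a range like these is the standard error of measurement. The sketch below assumes a standard deviation of 15 and a 95% interval - choices that are mine, not the article’s, so the widths here won’t match its rounded figures exactly - but it shows how the range narrows as reliability rises:

```python
# Standard error of measurement (SEM): a standard way to put a range
# around an observed test score. The standard deviation (15) and the
# 95% interval (z = 1.96) are illustrative assumptions.
import math

def score_range(observed, reliability, sd=15, z=1.96):
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    margin = z * sem                        # half-width of the 95% band
    return observed - margin, observed + margin

for reliability in (0.75, 0.85, 0.95):
    low, high = score_range(55, reliability)
    print(f"reliability {reliability}: true score likely between "
          f"{low:.0f} and {high:.0f}")
```

Whatever the exact assumptions, the pattern is the same: the less accurate the test, the wider the range - and the more two candidates’ ranges overlap.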
This is an important concept when working out who actually performed the best - and at what score to cut people from consideration.
Personality tests are common in business. But as we saw in the results of the meta-analysis, they’re not very good at predicting how well someone will perform in the job.
So, what do candidates think about personality tests?
This study asked 138 people to rank the fairness of two selection processes - one with an interview and personality test, the other with just an interview. The process with the personality test was seen as significantly less fair. Additionally, this negative perception was not changed by explaining why the personality test was used.
Overall, the findings suggest that it may be difficult to overcome negative perceptions of personality tests.
Trying to understand and evaluate whether someone is capable, motivated and will perform well in a job, in the limited time available in the recruitment process, is no easy task. But by selecting the best assessment tools you can significantly increase your chances of making a great hire.
In selecting which tools to use, it’s important to understand how each tool works, what its limitations are, and how it will be used and perceived by candidates. Because if candidates are comfortable with a test, and can see it’s fair, they’re more likely to perform well - and your assessments will better predict who’ll be the best fit for the job.
That’s all for now. Let me know if you have any questions - you can get in touch here.
And if you’re ever looking for a past newsletter, you can find them all here.