Ability Tests - To what extent are they still relevant for talent assessment and acquisition in the digital age?
Part II - Latest Research on Ability Tests – What are the findings and how do they impact your selection practices?
For decades the use of, and research into, cognitive ability tests has been widespread, particularly in relation to assessment for selection. But what do we really know about the research into them and their true impact? If you are like me, you may suffer from information fatigue: with so many updates and research articles, it can be hard to find the true meaning or value in them.
From the moment I entered the field of assessment and talent management, I was told of the importance of using ability tests in talent assessment because they are strong predictors of future performance. But with an ever-changing workforce and working environment, are there any new insights we can utilise?
In the following brief blog, I will cover two topical areas of research into ability tests and selection practices.
To use a mobile device or not to use a mobile device: is that the question?

In this modern age we have multiple devices at our fingertips, with ‘on-the-go’ devices such as mobile phones more readily accessible than the traditional desktop or even laptop. We have seen a surge in test providers trying to keep up with this mobile trend by making tests available on such devices. However, with some applicants sitting assessments on mobiles and others on computers, could the variation in assessment device lead to a variation in test performance?
Is it the device that makes the difference, or individual differences that determine which device a candidate uses? Research has shown a relationship between the device used to take an ability test and the result attained, with non-mobile users outperforming mobile users (Traylor et al., 2021). However, another recent study found that when mobile or non-mobile devices were randomly assigned to candidates, removing any selection bias, no substantial difference emerged between test results. This suggests that other factors contribute to the differences in test results, not merely the device used. Research has shown that candidates with higher educational attainment are more likely to select a non-mobile device to complete an ability test, which could account for this variance. Conversely, candidates applying for lower-complexity jobs were found more likely to complete the test on their mobile.
This recent study has shed greater light on why we may see a relationship between the device used to take the test and the test result: namely, the test taker and their choice of device, rather than the limiting or advantageous properties of the devices themselves. Such findings could help ease practitioners' concerns over the type of device used, and prompt us to think about when testing in a controlled environment, without a choice of device, might limit perceived disparity.
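To make the selection-bias point concrete, here is a minimal simulation sketch. It is illustrative only: the function name, sample size, and effect sizes are my own assumptions, not values from Traylor et al. or the randomised study. The device has no causal effect on the score in this toy world, yet a desktop advantage still appears when candidates self-select their device.

```python
import math
import random

random.seed(42)

def simulated_device_gap(n=20000, self_select=True):
    """Return the desktop-minus-mobile mean score gap in a toy world
    where the device itself has no effect on test performance."""
    mobile_scores, desktop_scores = [], []
    for _ in range(n):
        ability = random.gauss(0, 1)  # latent (education-linked) ability
        if self_select:
            # higher-ability candidates are more likely to pick a desktop
            p_mobile = 1 / (1 + math.exp(ability))
        else:
            p_mobile = 0.5            # random assignment: no selection bias
        mobile = random.random() < p_mobile
        score = ability + random.gauss(0, 0.5)  # note: no device term at all
        (mobile_scores if mobile else desktop_scores).append(score)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(desktop_scores) - mean(mobile_scores)

print(f"gap with self-selection:    {simulated_device_gap(self_select=True):.2f}")
print(f"gap with random assignment: {simulated_device_gap(self_select=False):.2f}")
```

Under self-selection a sizeable "device effect" appears; under random assignment it vanishes, mirroring the pattern the research describes.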
Can cognitive ability predict personality faking?
‘Faking’ personality questionnaires, or responding to them in a socially desirable way, is often a key initial concern raised when interpreting personality questionnaire results, especially for selection decisions. It is not surprising, therefore, that ‘why’ and ‘how’ people may look to ‘fake’ personality questionnaires is often a subject of interest. Historically, research has shown higher scores on cognitive ability tests to positively correlate with inflated scores on personality questionnaires, suggesting a link between cognitive ability and the rate of ‘faking’ (Levashina et al., 2014). A more recent meta-analysis, conducted by Schilling et al., 2021/3, sought to holistically investigate the link between cognitive ability and personality ‘faking’. The study found the same initial relationship between ability test results and personality questionnaire results, but also uncovered some other interesting insights.
The study found that this relationship between ability test and personality questionnaire results is significantly stronger in ‘selection’ as opposed to ‘non-selection’ circumstances. This implies that candidates with higher cognitive ability are more likely to ‘fake’ personality questionnaires, and that when the stakes are higher, candidates are more likely to engage in such ‘faked’ responses. The researchers also found the relationship between ability and personality to differ depending on the response style used by the personality questionnaire. Whilst ‘forced choice’ (ipsative) personality questionnaires are thought to be less prone to ‘faking’ than Likert-scale (normative) personality questionnaires, ‘forced choice’ methods were shown to yield a stronger shared variance with ability test results, meaning they may less accurately measure the intended construct of personality alone.
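To illustrate what "stronger shared variance" means in practice: the proportion of variance a personality score shares with ability is the square of their correlation. The correlations below are hypothetical placeholders chosen for illustration, not figures from the meta-analysis.

```python
# Hypothetical ability-personality correlations (illustration only).
r_forced_choice = 0.30  # stronger overlap with ability, per the pattern found
r_likert = 0.15         # weaker overlap with ability

for label, r in [("forced-choice", r_forced_choice), ("Likert", r_likert)]:
    shared = r ** 2  # r-squared: variance overlapping with ability
    print(f"{label}: {shared:.1%} of score variance shared with ability")
```

Doubling the correlation quadruples the shared variance, which is why even a modestly stronger ability link can meaningfully dilute how purely a forced-choice score reflects personality.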
However, just because someone has the ability to ‘fake’ does not mean that they will. We therefore need to consider not just cognitive ability and the capacity to fake, but also the propensity and intent to skew reality for personal or professional gain.
In terms of implications, I’ve often heard the phrase “well, if they can fake it, they can do it!” These findings could suggest that, as cognitive ability is linked with personality results and perhaps faking ability, a candidate who performs well on both the personality questionnaire and the ability tests has the cognitive ability to perform well on the job, regardless of whether their responses were inflated. However, other factors such as motivation will always play a key role. There are two schools of thought in this area, “faking-is-bad” (FIB) and “faking-is-good” (FIG), and the camps are divided. Perhaps the optimal outcome would be identifying participants with the ability to ‘fake’ but the propensity not to. One avenue is to look at ability in conjunction with conscientiousness, although even this trait could itself be faked, so observed conscientiousness may serve better here. Similarly, levels of conscientiousness, competitive worldviews and religiosity have been found to impact faking intentions (Schilling et al., 2021/4).
Regardless of your view on whether faking is good or bad, action should be taken to mitigate faking in selection contexts. Research like that explored above may help in this mitigation by guiding our choice of personality questionnaire. For example, whilst ‘forced choice’ tests were found less prone to faking, they were also found less likely to measure the intended personality constructs when compared with Likert-scale questionnaires. One recommendation could therefore be to opt for hybrid personality questionnaires, those adopting both response-style types.
In conclusion, from the device we use to sit ability tests to the links with other assessment scores, recent research shows the continued buzz and impact associated with ability tests. What cannot be denied is their strong predictive validity and well-earned place within selection practices. But as the working world and the environment in which we operate continue to evolve, we will no doubt see how we use ability tests change and evolve over time.