Webinar: Do you have a business case for pre-hire assessments?

Join us for a FREE webinar co-hosted by Talent Grader and ThriveMap.

24 Feb 2021 at 2pm GMT. Register

Why Talent Grader?

  • Gain confidence in choosing legally defensible talent assessments
  • Find talent assessments easily based on the areas you want to assess and by job function and industry sector
  • Make informed decisions, based on reviews from other test users

From our blog

Why cognitive ability matters

October 28 / Angus McDonald
“If you could use just one type of assessment when evaluating candidates, what would you use?”  This is a question I often ask training course delegates as a way of starting a conversation about the effectiveness of different selection methods.  By far the most common response is ‘an interview’.  My answer: ‘a test of cognitive ability’.  Delegates are often uncomfortable with this idea, even though the very courses they are attending show just how effective psychometric tests of cognitive ability are.

The evidence

Here’s a brief summary of the evidence that supports cognitive ability.  In a recent comparison of 31 methods of predicting job performance, cognitive ability was by far the best predictor of both job performance and job-related training outcomes[1].  It has been known for some time that the more complex a job, the better cognitive ability predicts performance, but even for unskilled jobs it still has substantial predictive validity[2].  The effect of ability on job performance also appears to be linear[3].  This contradicts the view that it’s important, but only up to a point, after which we need to look for other factors – such as emotional intelligence or ‘fit’ – to identify superior performance.  Whilst not discounting the potential of other factors to add to our understanding, on average, the better someone scores on a test of cognitive ability, the better they will do in the job.

How organisations can measure cognitive ability

Most tests of cognitive ability involve problem solving, either through tests of verbal reasoning, numerical reasoning or abstract / non-verbal reasoning.  The complexity of problems varies from the very challenging through to simple tasks that almost anyone can get right, meaning performance depends on speed of working.  Each task assesses something unique, though performance on all tests is substantially influenced by ‘g’, a general ability factor[4].  General ability underpins our capacity to learn.  Research on children demonstrates this association between ability and subsequent learning[5], and it is a key justification for using academic selection tests.  Similarly, in work contexts, cognitive ability predicts both the rate of job knowledge acquisition and the depth of learning[6].

So, where’s the catch?

Understanding a candidate’s cognitive ability tells us a lot about their potential, but a few words of caution are necessary.  Tests based on verbal and numerical content may be seen as work-relevant (‘face valid’) by candidates, though non-verbal or abstract tasks less so.  If such tests are used, their relevance needs to be clearly explained to ensure candidates are engaged.  Completing cognitive tests can be demanding, resulting in higher levels of candidate drop-off compared with other types of assessment.  The user experience is not always great, though some cognitive tests now use elements of gamification to enhance it.  They can also be practised, so it’s important to consider how candidates are prepared for these tests to ensure all have an equal opportunity to show their abilities.  Many test publishers provide practice materials as a way of reducing the effect of prior experience.
Perhaps the biggest concern when using tests of cognitive ability is the risk of adverse impact, where one group of test takers performs, on average, better than another.  Diversity is one of the key targets for many organisations, so any assessment that threatens it can be a barrier to use.  Establishing the validity of an assessment for a role means its use in selection is justified, but to many, recruiting a diverse workforce is of equal if not greater importance.  The causes of group differences on cognitive ability measures are complex and not fully understood, though they are rarely due to obvious test bias.  As there is no simple way of compensating for group score differences, cognitive tests should be applied only after careful consideration of how scores from them will be used in decision-making.
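One widely used yardstick for spotting adverse impact of this kind is the ‘four-fifths’ rule from US EEOC guidance: if any group’s selection rate falls below 80% of the highest group’s rate, the process warrants scrutiny.  A minimal sketch, using entirely hypothetical applicant numbers:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who were selected."""
    return selected / applicants

def impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Focal group's selection rate relative to the highest (reference)
    group's rate; values below 0.8 suggest adverse impact under the
    four-fifths rule."""
    return focal_rate / reference_rate

# Hypothetical figures: 48 of 120 reference-group applicants selected,
# 20 of 80 focal-group applicants selected.
reference = selection_rate(48, 120)   # 0.40
focal = selection_rate(20, 80)        # 0.25
ratio = impact_ratio(focal, reference)
print(round(ratio, 3), ratio < 0.8)   # 0.625 True -> below the 0.8 threshold
```

The four-fifths rule is a screening heuristic rather than a legal test; statistical significance and the practical context still matter when interpreting the ratio.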

Concluding thoughts

The ability of cognitive tests to predict work performance and learning is now well established.  As work changes at an ever-increasing rate, recruiting people with the potential to learn and adapt to these changes is essential.  Though easy to use and an effective screening tool, especially in large-scale recruitment, we need to be mindful of how cognitive tests are used if we want to create a more diverse workforce. Look out for our next blog, where we will talk about what you can do to reduce the adverse impact of cognitive ability tests.
Would you like to see the range of cognitive ability tests we are promoting? Click here.
[1] Schmidt, Oh and Shaffer (2016). The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings.
[2] Salgado and Moscoso (2019). Meta-Analysis of the Validity of General Mental Ability for Five Performance Criteria: Hunter and Hunter (1984) Revisited.
[3] Coward and Sackett (1990). Linearity in ability-performance relationships: A reconfirmation.
[4] Carroll (1993). Human Cognitive Abilities.
[5] Deary, Strand, Smith and Fernandes (2007). Intelligence and Educational Achievement.
[6] Hunter and Schmidt (1996). Intelligence and job performance: Economic and social implications.
Removing Barriers for Recruiting Neurodiverse Talent

October 27 / Grace Johnson
There is a lot of talk at the moment around employment and recruitment. With organisations still grappling with the devastating impact of Covid-19, the media is flooded with talk of hundreds of applications for single roles, large numbers of redundancies and general market uncertainty. And with the government furlough scheme due to end in October and the latest restrictions reinstated by the government, these themes are predicted to dominate the media until the end of 2020.
Competing in today’s employment market is a gargantuan challenge for anyone, but imagine facing it with neurodiversity-related barriers too.
I work for the Work and Health Programme (WHP) at Activate Learning in Oxfordshire. The Work and Health Programme is a pioneering DWP programme with the sole aim of providing specialist and holistic employment support to a variety of individuals, most of whom have a mental or physical disability.
My Work and Health Programme colleagues and I relied heavily on socially responsible and inclusive employers pre-Covid-19, and with the job market becoming even more competitive, we need them now more than ever.
7.7 million working-age people in the UK have a disability, and they are navigating standard recruitment processes alongside non-disabled applicants. So how can employers and recruiters ensure they are accessing ALL talent? The point of difference could be your recruitment process.
But in order to understand how adapting the recruitment process can help, we first need to understand the current situation.
I contacted one of my WHP participants, who has an autism diagnosis, and asked if they could share some of their experiences of standard recruitment processes:
“I applied for a clinical job which claimed to be willing to make adjustments for my disability. On the day of the interview I discovered that this included a test and an hour of completing tasks which were similar to those I would be expected to do. I failed at interview because I explained that it was not possible to complete some tasks appropriately without any understanding of the company policy and procedures as there were many appropriate legal approaches to the problem posed.
I could complete it according to my preferred method but could not guess at what the company would prefer in each case. I was given no support or adjustment and was assessed as requiring too much direction to be able to work independently. The interview process meant that a potentially ideal candidate was lost to the company. While I do not know for certain that my diagnosis was a problem, I suspect that it was.”
“I think it's about 16% of diagnosed autistics who are in full time work (I'm one of the 84% who aren't.) Most of my job interviews were prior to my diagnosis so there was no option to disclose. Certainly, I knew that I needed to be very guarded about the truth about my communication challenges or I would not stand a chance at employment. That left me being judged on half-truths and my interview skills. Attributes which are of little importance in the job itself.”
So how can we change and adapt recruitment processes to ensure that ALL candidates feel they can be their full selves, whilst allowing employers to access their talent?
Some of the easiest ways you or your organisation can ensure you are creating a fair and inclusive process are:
  • Ensuring 'essential' job requirements are absolutely essential
  • Encouraging applicants to disclose their disabilities, by including disclosure opportunities at all stages of the recruitment process
  • Ensuring in-house designed assessments (if used) are valid and provide all the information necessary to complete the task at hand
  • Considering relevant, valid and reliable psychometric or other similar well-designed tests as part of your recruitment process and offering reasonable adjustments where required
  • Communicating well in advance what a selection test will assess and its duration, whilst ensuring it is completely relevant to the job
  • Offering flexible working and reasonable adjustments and stipulating this within all advertising material; ensuring that this is a sincere, embedded facet of your working culture
  • Implementing a Disability Confident Policy (you can find out more about Disability Confident here)
  • Actively seeking out applicants who may self-deselect or rule themselves out for opportunities, but who would be an asset to your organisation
  • Seeking out organisations that are already Disability Confident, such as Activate Learning. Learn more about their projects, such as Removing Barriers Rebuilding Lives and Skills Support for the Unemployed (SSU).
A well-designed talent assessment tool gives candidates all the information they need to complete the task at hand fully and improves their engagement in the selection process. If you are looking for assessments to make reliable hiring decisions and provide your candidates with constructive and objective feedback, we would be delighted to help. Contact us at support@talentgrader.com.
We are launching Talent Grader, a dedicated platform bringing together talent assessment tools and technologies, on 3rd November 2020.
Adverse impact: Addressing risks of bias in job selection

October 09 / Andrew Clements
One of the critical roles played by Human Resources professionals is to ensure that organisations are compliant with employment legislation, such as in recruitment.  Readers of this post will likely have at least a broad understanding of legislation relating to equality at work (e.g. the Equality Act (2010) in the UK), and some may have a high level of expertise.  Beyond this, readers are likely to have values linked to promoting diversity and inclusion.  In other words, I assume readers would not deliberately discriminate against people of different ethnicities, gender identities, religious beliefs, etc.  The emphasis is on the deliberate, but this is not going to be a post about unconscious bias.

Rather, I refer to indirect discrimination, which occurs when a policy or process of some kind is applied equally, but leads to unequal outcomes for people based on protected characteristics, and is not a proportionate means of pursuing a legitimate outcome.  In other words, I want to discuss the unintended consequences of decisions that we take, known as “adverse impact.”  Adverse impact applies when, for example, we use a selection tool that discriminates against people of colour, women, individuals who are neurodiverse, and so on.  This has legal and ethical implications: from a legal perspective it is important to consider what is defensible; from an ethical perspective we should be seeking to remove unfair discrimination (based on personal characteristics rather than ability to do the job) wherever we find it.  Drawing on the research evidence helps us both legally and ethically.

It is important to consider adverse impact before making decisions.  Without a background in individual differences (a core component of psychology courses) or having read about research on recruitment and selection, it can be all too easy to make decisions that will have undesirable outcomes.  For example, intelligence (or cognitive ability as psychologists tend to call it) is one of the best predictors of job performance (Morgeson, Delaney-Klinger, & Hemingway, 2005; Salgado & Moscoso, 2019; Schmidt & Hunter, 1998).  It reflects our ability to take in information, store it, retrieve it, and use it.  People with more of it therefore learn quicker and can do more with what they learn.  It therefore seems an obvious choice for selection.  Readers are probably waiting for the “but”…  The downside is that cognitive ability tests create adverse impact based on ethnicity (Schmidt, 2002) and age (Klein, Dilchert, Ones, & Dages, 2015).  In the latter case, “fluid intelligence”, reflecting our ability to learn new things and solve new problems, declines over the adult lifespan – hence jokes about asking children to teach us how to use gadgets.  In the case of ethnicity it suffices to say that there continue to be debates as to the causes for differences, which are likely to include socioeconomic factors.

When we know about a problem, there are some points to consider.  Firstly, is this legally defensible as a proportionate way of pursuing a legitimate goal?  Continuing with the case of intelligence, it is legitimate to seek to hire the best – and no one in the diversity field is arguing otherwise.  However, we can consider whether high levels of intelligence are really essential.  Research shows that intelligence has a greater impact on performance when jobs are complex (Bertua, Anderson, & Salgado, 2005).  When hiring for a simple job we can therefore simply do without the intelligence test, because it is unnecessary, and at the same stroke do away with a major source of adverse impact.  However, we may be hiring for complex jobs in which our employees must grapple with new problems (maybe even problems we don’t know exist yet).  In this case an intelligence test may be defensible, but we should consider options for mitigating the risks.  There are a variety of strategies such as combining intelligence with other tools that have low risk of adverse impact, adjusting the weighting we give to the use of intelligence tests, or grouping performance on intelligence tests into bands based on the margin of error – within which we arguably cannot differentiate anyway (see Ployhart & Holtz, 2008 for more detail on strategies).  It is important to note that in the case of intelligence tests we are likely to face a trade-off between predicting performance and reducing adverse impact, as strategies that achieve the latter reduce our ability to predict performance.  However, we can at least make an informed decision about these trade-offs.
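The banding strategy mentioned above can be made concrete.  One common approach groups candidates whose scores sit within the margin of error of the top score, using the standard error of measurement, SEM = SD × √(1 − reliability), and the standard error of the difference between two scores (SEM × √2).  This is a sketch under assumed figures (the SD, reliability and scores below are illustrative), not a prescription:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: sd * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def band_indistinguishable(scores, sd, reliability, z=1.96):
    """Top-down banding: return the scores that are statistically
    indistinguishable from the top score, i.e. within one band width.
    Band width = z * SEM * sqrt(2), the standard error of the
    difference between two scores at the chosen confidence level."""
    width = z * sem(sd, reliability) * math.sqrt(2)
    top = max(scores)
    return [s for s in scores if s >= top - width]

# Illustrative test: SD 15, reliability 0.90, five candidate scores.
candidates = [128, 122, 119, 111, 104]
print(band_indistinguishable(candidates, sd=15, reliability=0.90))
# -> [128, 122, 119]: these three cannot reliably be separated,
#    so other low-adverse-impact criteria can decide among them.
```

Banding trades a little predictive precision for the freedom to choose within the band on other grounds, which is exactly the trade-off the paragraph above describes.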

What I am hoping to demonstrate is that choosing a selection strategy is not a simple thing.  We need to educate ourselves about the tools that we use.  I was recently considering the use of a work sample (i.e. observing a person perform a relevant practical task) in a selection process.  Work samples have good predictive validity (Borman & Hallam, 1991; O’Leary, Forsman, & Isaacson, 2017; Roth, Bobko, & McFarland, 2005), but research has also identified adverse impact in their use (Roth, Bobko, & McFarland, 2005).  Intuitively I had not expected this, and had I failed to review the evidence base I might not have been aware of the need to formulate a strategy for mitigating adverse impact.  Roth and colleagues observed that scholars often assumed that a work sample would show similar outcomes to actual job performance, where differences between people of different ethnic backgrounds are small.  Roth et al.’s analysis supports an interpretation that some work sample tests, for example in-tray exercises, tap cognitive ability due to the need to take in and use information.  They found that work sample tests such as role-plays, which tap interpersonal skills, demonstrate much lower levels of adverse impact.  Thus, similar mitigation strategies might be used with work sample tests as with cognitive ability tests, starting with an evaluation of whether the test is needed.

The scenario described above is an example of the practical use of engaging with psychological literature.  However, not everyone has access to the research evidence (which is often published in academic journals) or the expertise.  If you are choosing selection tools you can take some steps to protect yourself by either gaining expertise (if you have the inclination and resources) or contracting with someone who does have the expertise.  For example, Occupational Psychologists (registered with the Health and Care Professions Council) very often have expertise in recruitment and selection techniques.

What does all this mean in practice?  We should be reviewing our practices in terms of adverse impact.  I have discussed recruitment and selection, but we can apply this lens to nearly every aspect of organisational functioning (e.g. organisational practices relating to working hours).  We also need to review decisions when we are implementing something new (e.g. making a selection process for a new job).  This will need to become a habitual part of what we do.  Racism, sexism etc. are not going to disappear quickly, or possibly ever, so we need to be sensitive to how these may occur in our structures.  New challenges may emerge, perhaps as new individual differences become known.  In other words we are going to have to create and recreate solutions, and evaluate how well they work.  We will need to scan the horizon.  For example, the use of artificial intelligence in selection poses challenges, because machine learning functions by identifying patterns without judging patterns ethically.  If our selection process discriminates against black men, and we automate the process, we now have racist automation – not what I dreamed of when reading science fiction as a child.  If we are going to do this right, we can start by recognising that it is challenging and will require effort and thought.

Dr Andrew Clements is a Lecturer in Business and Occupational Psychology at Coventry University, with interests in promoting evidence-based practice.  You can find out more about postgraduate courses teaching the application of psychology to the workplace here (for individuals with a first degree in psychology) or here (for individuals with first degrees in other subjects).  