Our personality assessment is based on six major dimensions of personality, i.e. the HEXACO model. The factors that make up the HEXACO model are:
Honesty-Humility
Emotionality
eXtraversion
Agreeableness
Conscientiousness
Openness to Experience
We provide three different types of report for use by interviewers; please refer to the reports section for further information. We also provide extensive support to help our clients use our tailored assessments, including full debrief sessions before interviews commence to answer any questions arising and to take a deeper dive into the candidates' personality, ability, and derailment reports.
Note: The SOVA cognitive ability tests (verbal, numerical, and logical) are timed, although no timer is displayed on screen. We therefore ask candidates to work as accurately and as quickly as they can – remember, ability tests measure maximum performance.
Mapping to relevant competencies possible?
Yes
Measures put in place to remove or reduce biases
All our bespoke and tailored reports are written by highly experienced practitioners with a sound understanding of fairness in an assessment context. Furthermore, we base our bespoke reports on the SOVA personality questionnaire and, as far as we are aware, SOVA are the only test publishers who do not request biographical data. To further this point, SOVA consistently analyse their data for adverse impact by gender and ethnicity across their personality and ability tests. In face validity studies completed by SOVA, over 90% of candidates (from a sample of 3,231) found the assessment engaging, and similarly 90% of respondents believed the assessment presented a positive impression of the organisation they were applying to.
Key Information
Online - desktop / tablet, Online - mobile
15 mins
40 questions
English (UK), English (US), Arabic, French, German, Italian, Japanese, Polish, Portuguese, Russian, Spanish, Turkish
The SOVA platform complies with the Web Content Accessibility Guidelines (WCAG), and timings on the ability tests can be adjusted for individuals with neurodiverse conditions.
Data Protection
Talentpraxis Group, as the data controller, manages candidate data very carefully, requiring only an individual's first and last name and the email address to which we send their assessment invitation. We retain a secure copy of all reports for 12 months following production, after which they are destroyed. Additionally, as controllers, we regularly change our passwords for the assessment platforms we use.
Reliability, Validity and Norm Group Info
Reliability
The average internal reliability coefficient for the scales of the personality questionnaire we predominantly use (from SOVA Assessments) is 0.8, which exceeds the industry benchmark of 0.7 set by professional bodies. This means the questionnaire is highly robust, with the items in each scale measuring their construct consistently.
Validity
The SOVA personality questionnaire we predominantly use for our bespoke reports has been validated with numerous clients to establish its ability to predict the current and future performance of candidates who have taken the assessment. Validity coefficients from these studies with live clients exceed the industry benchmark of 0.4 for personality assessments.
Comparison groups available for your candidate scores
We assess shortlisted candidates who apply for mid- to senior-level roles, irrespective of industry sector. We therefore use a global professional, graduate and managerial norm group.
Norm groups consist of both males and females who are professionals, graduates, and managers in organisations globally.
Cost exc. tax
FREE TRIAL
What's included
Bespoke reports mapped to values/competencies
Fully managed service
Invites and reminders to candidates
Feedback to interview panel
Feedback to candidates
FREE reports for all shortlisted candidates for a single job role
The contents of the Compatibility Report sections are based on how strong the behavioural preferences are. The order of presentation within the ‘strengths and considerations’ and ‘risks’ sections reflects how strong the behavioural preferences are likely to be. We recommend that the interview panel validate ‘risks’ using the competency-based questions along with the ‘STAR’ interview framework.
The contents of the Insights Report sections are based on how strong the behavioural preferences are. The order of presentation within the ‘strengths and considerations’ and ‘risks’ sections reflects how strong the behavioural preferences are likely to be. We recommend that the interview panel validate ‘risks’ using the competency-based questions along with the ‘STAR’ interview framework. Additionally, potential derailment factors are identified to show how the candidate is likely to respond in high-pressure and novel situations. This report also includes cognitive ability (verbal, numerical and logical) scores.
The contents of the Composite Report sections are based on how strong the behavioural preferences are. The order of presentation within the ‘strengths and considerations’ and ‘risks’ sections reflects how strong the behavioural preferences are likely to be. We recommend that the interview panel validate ‘risks’ using the competency-based questions along with the ‘STAR’ interview framework. Additionally, potential derailment factors are identified to show how the candidate is likely to respond in high-pressure and novel situations.
Customer Service, Finance, General Business, Human Resources, Information Technology, Marketing
Used in Language(s)
English (US)
Most useful
Engaging Talent Praxis gives you an entirely end-to-end service: they handle the administration, assessment, interpretation and feedback very expertly.
The contact with Piers was great; he was readily available and provided comprehensive insight into candidates. All timelines were met, communication was perfect, and the assessments added a great deal of insight to our recruitment process.
This test provides your candidates with an opportunity to demonstrate the style and approach they prefer to take towards challenges at work. The picture they provide will help us understand how they see themselves in relation to working in a leadership role.
They will be presented with a sequence of scenarios describing a situation or challenge, together with a selection of possible approaches they could take in response.
For each fictitious scenario, they will be asked to tick the most effective or least effective approach they would take.
Provides a realistic job preview
Map competencies to your organisational framework, giving candidates a more realistic job preview than interviews alone.
Reduces hiring bias
Measure how well candidates respond to a host of work-related scenarios, allowing you to find the most competent candidates for the role.
Saves time & hiring resources
Allows candidates to self-select out if they realise the job isn't a good fit for them, saving you valuable time and resources.
When used alongside other psychometrics, such as personality questionnaires or cognitive ability tests, employers can build up a holistic picture of how the individual would behave in the role.
Are you looking to develop bespoke situational judgement tests, or to host your own test on our platform?
We are Business Psychologists with over 25 years of experience in developing tests, and our Psycruit platform is ready to host your own test. Please contact us at support@talentgrader.com and we will be happy to help with your enquiry. Talent Grader is one of our authorised partners in the UK.
Bias results when test performance is affected by unintended factors and those factors are not evenly distributed between groups. This results in group differences in test performance that are not related to the constructs the test is intended to measure. For example, a test of numerical reasoning that uses a lot of text may be biased against people who have English as an additional language. Group differences do not result from different levels of numerical reasoning ability, but from questions being more difficult for some due to their use of language.
Test developers may address bias through some or all of the following:
· Providing a clear rationale for what the test is, and is not, intended to measure
· Reviewing content to ensure it is accessible and free from complex language
· Ensuring scoring is automated and objective (i.e. free from user bias)
· Providing evidence of any group differences in test scores (a simple first-pass check is sketched after this list)
· Examining the effect of group membership on individual questions – sometimes referred to as ‘differential item functioning’ or ‘dif’
· Ensuring norm groups used for comparisons are representative of the populations they reflect
· Providing guidance on using the reports and interpreting constructs measured
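To make the group-differences check above concrete, here is a minimal first-pass sketch in Python. It is not any publisher's actual procedure, and the score samples are hypothetical: it computes Cohen's d, the standardised difference between two groups' mean test scores, which analysts often screen before running a fuller differential item functioning study.

```python
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Standardised mean difference between two groups' test scores."""
    na, nb = len(group_a), len(group_b)
    # Pooled standard deviation across both groups
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical scores for two demographic groups; |d| near 0 suggests no
# group difference, while |d| >= 0.8 is conventionally a large difference.
print(round(cohens_d([24, 27, 30, 26, 29], [23, 25, 28, 24, 27]), 2))
```

A result near zero does not rule out bias in individual questions, which is why item-level 'dif' analysis is still needed.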
Reliability is an indicator of the consistency of a psychometric measure (Field, 2013). It is usually expressed as a reliability coefficient (r), a number ranging from 0 to 1, with r = 0 indicating no reliability and r = 1 indicating perfect reliability. In practice, no test achieves perfect reliability.
Reliability may refer to a test’s internal consistency, the equivalence of different versions of the test (parallel form reliability) or stability over time (test-retest reliability). Each measures a different aspect of consistency, so figures can be expected to vary across the different types of reliability.
The EFPA Test Review Criteria states that reliability estimates should be based on a minimum sample size of 100 and ideally 200 or more. Internal consistency and parallel form values should be 0.7 or greater to indicate adequate reliability, and test-retest values should be 0.6 or greater.
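As an illustration of how an internal consistency figure is produced, here is a minimal sketch of Cronbach's alpha, the most widely used internal consistency estimate. The item responses are hypothetical, and this is not any publisher's scoring code.

```python
from statistics import pvariance

def cronbachs_alpha(responses: list[list[float]]) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(responses[0])              # number of items
    items = list(zip(*responses))      # one column of responses per item
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Rows are respondents, columns are items on a 1-5 scale (hypothetical data);
# compare the result against the 0.7 adequacy benchmark quoted above.
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3], [1, 2, 2]]
print(round(cronbachs_alpha(data), 2))
```

A real estimate would of course be based on the sample sizes described above (100 respondents, ideally 200 or more).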
Most test scores are interpreted by comparing them to a relevant reference or norm group. This puts the score into context, showing how the test taker performed or reported relative to others. Norm groups should be sufficiently large (the EFPA Test Review Criteria states a minimum of 200) and collected within the last 20 years. Norm groups may be quite general (e.g. ‘UK graduates’) or more occupationally specific (e.g. ‘applicants to ABC law firm’).
A key consideration is the representativeness of the norm group and how it matches a user’s target group of test takers. It is therefore important to consider the distribution of factors such as age, gender and race in norm groups to ensure they are representative of the populations they claim to reflect. This is particularly important with norms claiming to represent the ‘general population’ or other wide-ranging groups. Occupationally specific norms are unlikely to be fully representative of the wider population, but evidence of their composition should still be available.
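To illustrate how a norm group puts a score into context, the sketch below converts a raw score into a z-score and percentile against a norm group's mean and standard deviation. All figures are hypothetical, and real instruments may report stens or T-scores instead.

```python
from statistics import NormalDist

norm_mean, norm_sd = 25.0, 5.0   # hypothetical norm-group mean and SD
raw_score = 31.0                 # hypothetical candidate score

z = (raw_score - norm_mean) / norm_sd       # distance from the norm mean in SD units
percentile = NormalDist().cdf(z) * 100      # share of the norm group scoring lower
print(f"z = {z:.2f}, percentile = {percentile:.0f}")  # z = 1.20, percentile = 88
```

This is exactly why representativeness matters: the same raw score can yield a very different percentile against a 'UK graduates' norm than against a 'general population' norm.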
Validity shows the extent to which a test measures what it claims to, and so the meaning that users can attach to test scores. There are many different types of validity, though in organisational settings the main ones are content, construct and criterion validity. Reference may also be made to other types of validity such as face validity, which concerns the extent to which a test looks job-relevant to respondents.
Content validity relates to the actual questions in the test or the task that test takers need to perform. The more closely the content matches the type of information or problems that a test taker will face in the workplace, the higher its content validity. For tests such as personality or motivation, content validity relates more to the relevance of the behaviours assessed by the test rather than the actual questions asked.
Construct validity shows how the constructs measured by the test relate to other measures. This is often done by comparing one test against another. Where tests measure multiple scales, as is the case with assessments of personality and motivation, it is also common to look at how the measure's scales relate to each other.
Criterion validity looks at the extent to which scores on the test are statistically related to external criteria, such as job performance. Criterion validity may be described as 'concurrent' when test scores and criterion measures are taken at the same time, or 'predictive' when test scores are taken at one point in time and criterion measures are taken some time later.
Construct and criterion validity are often indicated by correlation coefficients, which range from 0, indicating no association between the test and the comparison measure, to 1, indicating a perfect association. It is difficult to specify precisely what an acceptable level of validity is, as this depends on many factors, including which other measures the test is compared against and which criteria are used to evaluate its effectiveness. However, for criterion validity, tests showing associations with outcome measures below 0.2 are unlikely to provide useful information, and ideally criterion validity coefficients should be 0.35 or higher. Samples used for criterion validity studies should also include at least 100 cases.
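For illustration, the sketch below computes the kind of correlation coefficient a criterion validity study reports, using hypothetical test scores paired with hypothetical performance ratings (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation  # Pearson's r; Python 3.10+

test_scores = [55, 62, 48, 70, 66, 51, 59, 73]          # hypothetical scores
performance = [3.1, 3.8, 2.9, 4.2, 3.9, 3.0, 3.4, 4.5]  # hypothetical ratings

r = correlation(test_scores, performance)
print(f"criterion validity r = {r:.2f}")  # compare with the 0.35 guideline above
```

A real study would need a far larger sample (100+ cases, as noted above) for the coefficient to be meaningful.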
Overall, whilst a publisher should provide validity evidence for their test, validity comes from using the right test for the right purpose. Users therefore need to draw on the available validity evidence to evaluate the relevance of the test for their specific purpose.
Please ensure you add the cost of the product (from the cost section) before adding any reports, additional materials or other costs.
You can add a report even if it is free (£0); this ensures our supplier is fully aware of your requirements. Please contact us if you have any queries.