Everything DiSC is a simple yet effective tool that measures an individual's preferences and tendencies based on the DiSC® model.
It describes four basic behavioural styles: D, i, S, and C:
D (Dominance) style is active and questioning.
This describes people who are direct, forceful, and outspoken with their opinions.
i (Influence) style is active and accepting.
This describes people who are outgoing, enthusiastic, and lively.
S (Steadiness) style is thoughtful and accepting.
This describes people who are gentle, accommodating, and patient with others’ mistakes.
C (Conscientiousness) style is thoughtful and questioning.
This describes people who are analytical, reserved, and precise.
Everyone is a blend of all four DiSC styles, with no one style better or worse than another.
We believe that these differences in style can be extremely valuable. Once you assess these differences and harness their value, better workplace communication AND healthier organisations become possible.
The Everything DiSC® Assessment
Powered and proven by 40+ years of research
Uses computer adaptive testing and sophisticated algorithms for precise results
Provides the foundation for a personalised learning experience
The Catalyst™ Platform
Delivers the results of the Everything DiSC assessment in a guided, narrative-style format
Allows learners to go deeper into their DiSC® style to develop social and emotional skills
Compares their DiSC style with colleagues and gives real-time tips for more effective interactions
Provides access to a range of social and emotional skills development content—personalised to each learner’s unique DiSC style
Mapping to relevant competencies possible?
No
Measures put in place to remove or reduce biases
To determine if a tool is reliable, researchers look at the stability and the internal consistency of the instrument. Stability is easy to understand. In this case, a researcher would simply have a group of people take the same assessment twice and correlate the results. This is called test-retest reliability. Internal consistency is more difficult to understand. Here, we have the assumption that all of the questions (or items) on a given scale are measuring the same trait. As a consequence, all of these items should, in theory, correlate with each other. Internal consistency is represented using a metric called alpha (Cronbach's alpha). We can use similar standards to evaluate both test-retest and alpha. The maximum value is 1.0, and higher values indicate higher levels of reliability.
Most researchers use the following guidelines to interpret values: above .9 is considered excellent, above .8 good, above .7 acceptable, and below .7 questionable. Across the Everything DiSC reliability estimates, all values were well above the .70 cutoff and all but one were above .80. This suggests that the measurement of DiSC is both stable and internally consistent.
Key information related to the last update
Everything DiSC assessments were updated in 2013 to a computerized adaptive testing format.
Adaptive testing allows an assessment to change depending on a respondent's previous answers. This is useful in cases where the results of a standard assessment are inconclusive.
Key Information
Online - desktop / tablet
20 mins · 80 questions
English (UK)
Free trial: No. No training required to use.
Adaptability, Analytical, Assertiveness, Attention to Detail, Boldness, Change Management, Charisma, Communication, Competitiveness, Complex thinking, Consciousness of others, Cooperative, Customer Service, Data Management, Decision Making
all
Coaching, Career guidance, Career management, Development, Interview, Outplacement, Skills assessment
all
all
Mapping Competencies: No
Cost exc. tax
$400.00
What's included
One Everything DiSC Workplace profile & lifetime access to the Catalyst platform.
3 x one-hour, one-to-one coaching calls via Zoom with an experienced DiSC practitioner
Designed to develop awareness of self and others while improving communication, teamwork and leadership. By helping us understand those who are different from us, it supports conflict management and fosters personal growth.
DiSC assessments are aimed at over-18s. The online test is taken at the candidate's own pace, but the ability to read is required.
Data Protection
All data is handled responsibly by Surge Ahead.
Surge Ahead only has access to the candidate's name and email address.
For further information: https://www.surgeahead.co.uk/privacy-policy/
Reliability, Validity and Norm Group Info
Reliability
Most researchers use the following guidelines to interpret values: above .9 is considered excellent, above .8 good, above .7 acceptable, and below .7 questionable. Across the Everything DiSC reliability estimates, all values were well above the .70 cutoff and all but one were above .80. This suggests that the measurement of DiSC is both stable and internally consistent.
Validity
There are many different ways to examine the validity of an assessment. We will provide two examples here; many more are included in the full Everything DiSC Research Report. The DiSC model proposes that adjacent scales (e.g., Di and i) will have moderate correlations; that is, these correlations should be considerably smaller than the alpha reliabilities of the individual scales. For example, the correlation between the Di and i scales (.50) should be substantially lower than the alpha reliability of the Di or i scales (both .90). On the other hand, scales that are theoretically opposite (e.g., i and C) should have strong negative correlations. Correlations among all eight scales, obtained from a sample of 752 respondents who completed the Everything DiSC assessment, show strong support for the model: moderate positive correlations are observed among adjacent scales and strong negative correlations between opposite scales.
Comparison groups available for your candidate scores
Working professionals of any gender. The test is based on individual behavioural preferences with no right or wrong answers, therefore a 'norm group' is not relevant.
<p>The Criterion Personality Questionnaire is unlike anything else on the market. We don&rsquo;t subscribe to a one-size-fits-all approach to personality; the CPQ offers unparalleled flexibility by allowing you to pick and choose the elements you want to measure.</p>
<p>The CPQ is made up of 46 scales split across five key areas of personality at work. These elements are:</p>
<p><strong>Interpersonal Style</strong> &ndash; The candidate&rsquo;s approach to working with others; taps into their style of communication and preferences for working around others</p>
<p><strong>Thinking Style </strong>&ndash; The candidate&rsquo;s approach to tasks, decisions and challenges</p>
<p><strong>Emotional Style </strong>&ndash; The candidate&rsquo;s reaction to the emotional demands of the role</p>
<p><strong>Motivations </strong>&ndash; Understanding what drives the candidate and helps them to feel energised and motivated at work</p>
<p><strong>Culture Fit </strong>&ndash; Understanding the style of environment that is best suited to the candidate</p>
<p>We provide the following 3 options for you to choose from:&nbsp;</p>
<p><strong>1. OFF-THE-SHELF OPTION</strong></p>
<p>Psycruit offers two off-the-shelf personality questionnaires, both of which include a range of scales from across the five elements.</p>
<p>The Criterion Core (21 Scales) &ndash; Comprehensive insight into the typical preferences and tendencies for behaviours, feelings, values and motivations that are important in the workplace. This questionnaire takes about 20 minutes for the candidate to complete. Using the Core questionnaire will give you access to two specialised reports: the Team Strengths Report &amp; the Sales Report.</p>
<p>The Criterion Enhanced (30 Scales) &ndash; Builds on the Criterion Core, offering a deeper insight across a breadth of elements of personality in an occupational setting. This questionnaire will take about 30 minutes for candidates to complete. Using it will give you access to our Leadership Report.</p>
<p><strong>2. BESPOKE OPTION</strong></p>
<p>Psycruit allows you to build your own personality questionnaire so you can tap directly into the traits you are interested in for the role you are recruiting for or developing. You can pick any combination of the 46 scales in the Library and structure the selection according to your own values/competency framework or use our default headings. Telling the platform &lsquo;what good looks like&rsquo; will give you access to the Selection Report.</p>
<p><strong>3. INDUSTRY SPECIFIC</strong></p>
<p>We now have a collection of industry-specific questionnaires that are available on Psycruit. These have been developed through role research and the expert knowledge and experience of our business psychologists. All of our off-the-shelf questionnaires also contain the social desirability scale in addition to those scales listed below.</p>
<ul>
<li>Remote Working</li>
<li>Sales</li>
<li>Call Centre</li>
<li>Customer Service</li>
<li>Graduates</li>
<li>Recruitment Industry</li>
<li>Project Manager</li>
<li>Legal Sector</li>
<li>IT Professionals</li>
<li>Engineering</li>
<li>Workforce</li>
<li>Human Resources</li>
<li>Administrative Role</li>
<li>Marketing</li>
<li>Education Role</li>
<li>Hospitality</li>
</ul>
<p>This test provides your candidates with an opportunity to demonstrate the style and approach they prefer to take towards challenges at work. The picture they provide will help us understand how they see themselves in relation to working in a leadership role.</p>
<p>They will be presented with a sequence of scenarios describing a situation or challenge. They will also see a selection of possible approaches that they could take to respond to the situation or challenge described in the scenario.</p>
<p>For each scenario, they will be expected to tick the most effective or least effective approach they would take.</p>
<h4><strong>Provides a realistic job preview</strong></h4>
<p>Map competencies to your organisational framework, providing candidates with a more realistic job preview compared to relying on interviews alone.</p>
<h4><strong>Reduces hiring bias</strong></h4>
<p>Measure how well candidates respond to a host of work related scenarios, allowing you to find the most competent candidates for the role.</p>
<h4><strong>Saves time &amp; hiring resources</strong></h4>
<p>Allows candidates to self-select out if they realise the job isn&rsquo;t a good fit for them &ndash; saving you valuable time and resources.</p>
<p><strong>When used alongside other psychometrics, such as personality questionnaires or cognitive ability tests, employers are able to build up a holistic picture of how the individual would behave in the role.</strong></p>
<p><em>Are you looking to develop bespoke situational judgement tests or to host your own test on our platform?</em></p>
<p>We are Business Psychologists with over 25 years of experience in developing tests. Our Psycruit platform is ready to host your own test. Please contact us at support@talentgrader.com and we will be happy to help with your enquiry. Talent Grader is one of our authorised partners in the UK.</p>
Bias results when test performance is affected by unintended factors and those factors are not evenly distributed between groups. This results in group differences in test performance that are not related to the constructs the test is intended to measure. For example, a test of numerical reasoning that uses a lot of text may be biased against people who have English as an additional language. Group differences do not result from different levels of numerical reasoning ability, but from questions being more difficult for some due to their use of language.
Test developers may address bias through some or all of the following:
· Providing a clear rationale for what the test is, and is not, intended to measure
· Reviewing content to ensure it is accessible and free from complex language
· Ensuring scoring is automated and objective (i.e. free from user bias)
· Providing evidence of any group difference in test scores
· Examining the effect of group membership on individual questions – sometimes referred to as ‘differential item functioning’ or ‘dif’
· Ensuring norm groups used for comparisons are representative of the populations they reflect
· Providing guidance on using the reports and interpreting constructs measured
Reliability is an indicator of the consistency of a psychometric measure (Field, 2013). It is usually indicated by a reliability coefficient (r), a number ranging between 0 and 1, with r = 0 indicating no reliability and r = 1 indicating perfect reliability. A quick heads-up: don’t expect to see a test with perfect reliability.
Reliability may refer to a test’s internal consistency, the equivalence of different versions of the test (parallel form reliability) or stability over time (test-retest reliability). Each measures a different aspect of consistency, so figures can be expected to vary across the different types of reliability.
The EFPA Test Review Criteria states that reliability estimates should be based on a minimum sample size of 100 and ideally 200 or more. Internal consistency and parallel form values should be 0.7 or greater to indicate adequate reliability, and test-retest values should be 0.6 or greater.
Most test scores are interpreted by comparing them to a relevant reference or norm group. This puts the score into context, showing how the test taker performed or reported relative to others. Norm groups should be sufficiently large (the EFPA Test Review Criteria states a minimum of 200) and collected within the last 20 years. Norm groups may be quite general (e.g. ‘UK graduates’) or more occupationally specific (e.g. ‘applicants to ABC law firm’).
A key consideration is the representativeness of the norm group and how it matches a user’s target group of test takers. It is therefore important to consider the distribution of factors such as age, gender and race in norm groups to ensure they are representative of the populations they claim to reflect. This is particularly important with norms claiming to represent the ‘general population’ or other wide-ranging groups. Occupationally specific norms are unlikely to be fully representative of the wider population, but evidence of their composition should still be available.
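The norm-based comparison described above can be sketched in a few lines of Python. The norm mean, norm SD, and raw score below are invented illustrative values, not taken from any published norm table; the sketch shows how a raw score is typically converted into the standardised scores a report presents.

```python
from statistics import NormalDist

# Illustrative norm-group figures (not real published norms).
norm_mean, norm_sd = 50.0, 10.0   # norm group mean and standard deviation
raw_score = 62.0                  # one candidate's raw score

z = (raw_score - norm_mean) / norm_sd        # z-score relative to the norm group
sten = max(1, min(10, round(z * 2 + 5.5)))   # sten scale (1-10), a common report format
percentile = NormalDist().cdf(z) * 100       # % of the norm group scoring lower

print(f"z = {z:.2f}, sten = {sten}, percentile = {percentile:.0f}")
```

Here the candidate's z-score of 1.2 corresponds to a sten of 8 and roughly the 88th percentile, which is why the choice of norm group matters: the same raw score yields a different percentile against a different reference population.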
Validity shows the extent to which a test measures what it claims to, and so the meaning that users can attach to test scores. There are many different types of validity, though in organisational settings the main ones are content, construct and criterion validity. Reference may also be made to other types of validity such as face validity, which concerns the extent to which a test looks job-relevant to respondents.
Content validity relates to the actual questions in the test or the task that test takers need to perform. The more closely the content matches the type of information or problems that a test taker will face in the workplace, the higher its content validity. For tests such as personality or motivation, content validity relates more to the relevance of the behaviours assessed by the test rather than the actual questions asked.
Construct validity shows how the constructs measured by the test relate to other measures. This is often done by comparing one test against another. Where tests measure multiple scales, as is the case with assessments of personality and motivation, it is also common to look at how the measure's scales relate to each other.
Criterion validity looks at the extent to which scores on the test are statistically related to external criteria, such as job performance. Criterion validity may be described as 'concurrent' when test scores and criterion measures are taken at the same time, or 'predictive' when test scores are taken at one point in time and criterion measures are taken some time later.
Construct and criterion validity are often indicated by correlation coefficients, which range from 0 (no association between the test and criterion measures) to 1 (a perfect association). It is difficult to specify precisely what an acceptable level of validity is, as this will depend on many factors, including what other measures the test is compared against or what criteria are used to evaluate its effectiveness. However, for criterion validity, tests showing associations with outcome measures of less than 0.2 are unlikely to provide useful information, and ideally criterion validity coefficients should be 0.35 or higher. The samples used for criterion validity studies should also include at least 100 people.
Overall, whilst a publisher should provide validity evidence for their test, validity comes from using the right test for the right purpose. Users therefore need to use the available validity evidence to evaluate the relevance of the test for their specific purpose.
Please ensure you add the cost of the product (from the cost section) first before adding any of the reports, additional materials or any other costs.
You can add a report even if it is free or £0. This will ensure our supplier is aware of your requirements fully. Please contact us if you have any queries.