As a researcher, ensuring the validity of your measurements is essential for drawing meaningful conclusions from your data. One important type of validity to consider is discriminant validity, which helps evaluate how distinct or unique your measurement is from other related constructs.
Discriminant validity, also known as divergent validity in some fields, is the extent to which a measure is not unduly related to measures of different constructs. It is assessed together with convergent validity to establish construct validity.
Example: Discriminant validity (divergent validity)
You’re developing a new scale to measure anxiety called the “Anxiety Assessment Scale” (AAS). To assess its discriminant validity, you compare the scores on the AAS with scores on a measure of a different construct, such as the “Extroversion Inventory” (EI). You administer the AAS and the EI to a sample of 200 participants. After collecting the data, you calculate the correlation between the scores on the two scales.
If the correlation between the AAS and EI scores is low (e.g., r = 0.15), this suggests the AAS is not measuring the same construct as the EI. Such a result provides discriminant validity evidence, indicating that the AAS is measuring anxiety rather than extroversion.
What is Discriminant Validity?
Discriminant validity refers to the degree to which a measure is truly distinct from measures of constructs it theoretically should not be related to. In other words, it examines whether your assessment tool taps into something unique or merely overlaps with a similar construct.
The idea behind discriminant validity is that your measure should not be too highly correlated with variables it’s not supposed to be associated with. If it is, the measure may not adequately discriminate the construct you intend to evaluate from related ones.
Discriminant vs. Convergent Validity
Discriminant and convergent validity are essential to establish a measure’s overall construct validity. However, they represent complementary ways of evaluating how well a test captures its intended construct.
Convergent validity focuses on how a measure correlates with or relates to other assessments of the same underlying concept. The idea is that if a test truly measures a particular construct, it should show strong positive relationships with other established measures of that same construct.
Discriminant validity examines how distinct or divergent a measure is from other related but theoretically distinct constructs. This involves demonstrating that the measure in question is only weakly correlated with variables it should theoretically not be associated with.
So convergent validity is about showing similarities – that the test correlates as expected with other measures of the same thing. Discriminant validity, conversely, is about demonstrating differences – that the test does not correlate too highly with measures of different, even if related, concepts.
Together, evidence of both convergent and discriminant validity helps build a strong case for the overall construct validity of a measure. Researchers must show that their test relates to and aligns with similar constructs while being sufficiently distinct from conceptually different variables. This dual approach strengthens confidence that the measure is accurately and uniquely capturing the intended psychological or behavioral phenomenon.
Example of Discriminant Validity
Let’s say you’ve developed a new measure of work-family balance. To demonstrate discriminant validity, you might administer your work-family balance scale alongside other related but theoretically distinct measures, such as:
- Job satisfaction
- Life satisfaction
- Perceived stress
- Neuroticism
If your work-family balance scale shows low to moderate correlations with these other constructs, that would provide evidence of discriminant validity. It would suggest your new measure is tapping into something unique – work-family balance – rather than just overlapping with general job attitudes, well-being, or personality traits.
On the other hand, if your work-family balance scale showed very high correlations (e.g., r > 0.70) with the different measures, that would raise concerns about discriminant validity. It would imply that your new scale is not adequately differentiating work-family balance from these related but distinct concepts.
How to Measure Discriminant Validity
To assess the discriminant validity of a test, you aim to show that it has little to no correlation with measures of unrelated constructs. This is typically demonstrated by a low correlation coefficient, such as Pearson’s r, between the test scores and scores on a measure of a different construct.
The correlation coefficient ranges from -1 to +1, indicating the strength and direction of the relationship between variables:
- r = 1: Perfect positive correlation
- r = 0: No correlation
- r = -1: Perfect negative correlation
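The low-correlation check described above can be sketched in a few lines of Python. The scores below are invented purely for illustration; in a real study they would come from your participants:

```python
import numpy as np

# Hypothetical scores for a new scale and a measure of an unrelated
# construct (invented for illustration, not real data).
new_scale = np.array([12, 18, 25, 30, 14, 22, 27, 9, 16, 21], dtype=float)
other_scale = np.array([40, 35, 50, 38, 44, 31, 47, 42, 36, 39], dtype=float)

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# entry is Pearson's r between the two sets of scores.
r = np.corrcoef(new_scale, other_scale)[0, 1]
print(f"Pearson's r = {r:.2f}")
```

A value of r near 0 would support discriminant validity, while a value near +1 or -1 would suggest the two instruments overlap.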
Discriminant validity is typically assessed using statistical techniques like:
- Correlation analysis: Examining the correlation coefficients between your measure and other theoretically distinct constructs. Lower correlations suggest better discriminant validity.
- Factor analysis: Conducting a factor analysis to see if your measure loads onto a distinct factor, separate from other related factors.
- The Fornell-Larcker criterion: Comparing the average variance extracted (AVE) for each construct to the squared correlations between constructs. The AVE for each construct should be greater than its squared correlations with other constructs.
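The Fornell-Larcker comparison can be sketched with hypothetical standardized factor loadings. All numbers below are invented for illustration; in practice the loadings and the inter-construct correlation would come from your fitted measurement model:

```python
import numpy as np

# Hypothetical standardized factor loadings for two constructs,
# e.g. a work-family balance scale and a job satisfaction scale.
loadings_wfb = np.array([0.78, 0.82, 0.75, 0.80])
loadings_js = np.array([0.71, 0.84, 0.79])

# Average variance extracted (AVE): mean of the squared loadings.
ave_wfb = np.mean(loadings_wfb ** 2)
ave_js = np.mean(loadings_js ** 2)

# Hypothetical correlation between the two constructs.
r_constructs = 0.45
r_squared = r_constructs ** 2

# Fornell-Larcker: each construct's AVE should exceed the squared
# correlation it shares with every other construct.
passes = ave_wfb > r_squared and ave_js > r_squared
print(f"AVE(WFB) = {ave_wfb:.2f}, AVE(JS) = {ave_js:.2f}, r^2 = {r_squared:.2f}")
print("Discriminant validity supported" if passes else "Discriminant validity in question")
```

Here both AVE values comfortably exceed the squared inter-construct correlation, so the criterion is met; if either AVE fell below it, discriminant validity would be in doubt.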
Example: Measuring discriminant validity
You have developed a new questionnaire to measure depression called the “Depression Assessment Questionnaire” (DAQ). To establish discriminant validity, you want to show that the DAQ scores have little to no correlation with a measure of a different construct, such as the “Creativity Scale” (CS).
You administer the DAQ and the CS to a sample of 150 participants. The DAQ scores range from 0 to 100, with higher scores indicating higher levels of depression. The CS scores also range from 0 to 100, with higher scores indicating higher levels of creativity.
After collecting the data, you calculate the Pearson’s correlation coefficient (r) between the DAQ and CS scores. You find that the correlation coefficient is r = 0.08.
Interpreting the results:
- A correlation coefficient of r = 0.08 suggests a weak, almost negligible relationship between the DAQ and CS scores.
- This low correlation indicates that the DAQ is not measuring the same construct as the CS, providing evidence of discriminant validity.
- The DAQ appears to measure depression, a construct different from creativity, as measured by the CS.
In this example, the low correlation coefficient between the DAQ and the CS supports the Depression Assessment Questionnaire’s discriminant validity.
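A run of this analysis can be simulated. The data below are random stand-ins for the hypothetical DAQ and CS scores, generated independently so their population correlation is exactly zero; the sample correlation should therefore land near zero, mirroring the result in the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # same sample size as in the example above

# Independent draws standing in for the two questionnaires' 0-100 scores.
daq_scores = rng.uniform(0, 100, n)
cs_scores = rng.uniform(0, 100, n)

r = np.corrcoef(daq_scores, cs_scores)[0, 1]
print(f"Pearson's r = {r:.2f}")
# With independent data and n = 150, r typically falls close to 0,
# which is the pattern that supports discriminant validity.
```

With real questionnaire data you would also report a significance test alongside r, since even small correlations can be statistically significant in large samples.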