
Imagine you put together a questionnaire with 2,000 questions on it that you'd assembled in an attempt to cover every significant aspect of personality where people seemed to differ from each other in noteworthy ways. And assume that, in assembling the questions, you solicited input from 200 personality psychologists, to help ensure that the questionnaire's coverage was as comprehensive as possible.

And assume you had a random sample of 100,000 people take your questionnaire.

And assume that a computer analysis of the results revealed that people's responses to around 900 of the 2,000 questions, rather than being random — in terms of the response to any one question being mostly unrelated to the response to any other question — fell instead into five statistically significant "clusters," where people who answered A to question 17 tended to also answer B to question 31 and B to question 52 and A to question 66 and so on.

And assume you used those results to create a new, shorter questionnaire that included 30 questions that fell into each of those five clusters, selecting the questions for the shorter questionnaire based largely on which ones had clustered most robustly on the larger questionnaire.
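The item-clustering step described above can be sketched with simulated data. To be clear, the item counts, the correlation threshold, and the greedy grouping below are illustrative stand-ins for the factor-analytic methods researchers actually used, not a reconstruction of them:

```python
import numpy as np

# Simulate questionnaire responses where items within a cluster share a latent
# trait, then recover the clusters from the item correlation matrix.
rng = np.random.default_rng(0)
n_people, n_clusters, items_per_cluster = 2000, 5, 6

traits = rng.normal(size=(n_people, n_clusters))            # latent dimensions
loadings = np.repeat(np.eye(n_clusters), items_per_cluster, axis=0)
noise = rng.normal(scale=0.8, size=(n_people, n_clusters * items_per_cluster))
responses = traits @ loadings.T + noise                     # item-level answers

corr = np.corrcoef(responses, rowvar=False)                 # item-by-item correlations

# Greedy grouping: put each item with the first cluster it correlates with.
clusters = []
for item in range(corr.shape[0]):
    for c in clusters:
        if corr[item, c[0]] > 0.3:                          # "clusters robustly"
            c.append(item)
            break
    else:
        clusters.append([item])

print(len(clusters))   # recovers the five simulated clusters
```

The point of the sketch is just the shape of the procedure: items that co-vary get grouped, and the groups fall out of the data rather than being imposed in advance.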

And assume you took the shorter questionnaire and administered it to thousands more people and it turned out that identical twins raised in separate households came out as the same five-dimension "type" much more often than did people who were less genetically similar.

Would you consider that questionnaire a "toy"? Would you say it failed to qualify as "science"?

If so, why?

If not, are you unaware that the history I've described is, in essence, the history of the Big Five? And that it's also, in essence — albeit on a smaller scale, and with some reduction in the questionnaire's scope — the history of the MBTI? And that, as already discussed earlier in this thread, McCrae and Costa (probably the most prominent Big Five psychologists) long ago acknowledged that the MBTI was a scientifically respectable instrument that was tapping into the same underlying dimensions as four of the Big Five factors and that, as far as the typological variances went, each typology probably had things to teach the other?

I'm guessing you were in your burrow when I made my previous posts and they went right over your head! So let me try again...

There are hard sciences, soft sciences and pseudosciences, and reasonable people can disagree about precisely where the boundaries are. But saying that astrology and temperament psychology — in any of its better-established varieties, including the Myers-Briggs typology — are equally unscientific doesn't fall within the "reasonable people can disagree" range.

There's now over 50 years of data — from hundreds of studies in peer-reviewed journals and so on — that strongly suggests that there are a handful of human personality dimensions that (1) are multifaceted (i.e., that involve multiple characteristics that tend to co-vary in a statistically meaningful way), (2) tend to be relatively stable through life, and (3) are substantially genetic. The "Big Five" is an umbrella term for several somewhat independently developed typologies with respect to which respectable amounts of data have been gathered and that seem to basically involve the same five underlying dimensions (notwithstanding some theoretical variations from typology to typology and from typologist to typologist), and the four MBTI dichotomies appear to be tapping into four of the Big Five factors — albeit, again, with various theoretical variations both between the MBTI and Big Five and among different MBTI theorists.

In the modern world of personality typology, the relevant scientific standards include judging typologies in terms of two broad criteria known as reliability and validity. Reliability basically has to do with internal consistency, while validity relates to the extent to which the theoretical constructs actually relate to reality. Going all the way back to 1985, the second edition of the MBTI Manual devoted two chapters to the issues of reliability and validity, and there's been substantial additional confirmation in the years since.

McCrae and Costa are probably the most prominent Big Five scientists, and they long ago concluded (see this article) that the four MBTI dichotomies were essentially tapping into four of the Big Five factors, and that there was respectable scientific data in support of the MBTI dichotomies.

Over twenty years ago now, John B. Murray ("Review of Research on the Myers-Briggs Type Indicator," Perceptual & Motor Skills, 70, 1187, 1990) summed up the MBTI's status this way:

Originally Posted by Murray

The Myers-Briggs Type Indicator has become the most widely used personality instrument for nonpsychiatric populations. ... Approximately 300 studies of the MBTI are cited by Buros (1965, 1978) and over 1500 studies are included in the [1985] edition of the [MBTI Manual]. ... The research on the MBTI as a psychometric instrument and as an application of Jung's typology was reviewed and some of its modern applications considered. ...

The reliability of the M-B Indicator has been improved in recent years. ... Studies reviewed by Carlyn (1977) as well as later studies have shown generally satisfactory split-half and test-retest reliabilities. ...

Group differences and correlations are broadly supportive of the construct validity of the M-B Indicator scales, indicating that the four scales measure important dimensions of personality that approximate those of Jung's typology theories [citing multiple studies]. ...

DISCUSSION
...
[The MBTI's] indices of reliability and validity have been extensively investigated and have been judged acceptable. The constructs underlying the Myers-Briggs Indicator have been supported by correlations with other tests of personality, Extraversion-Introversion, and Emotionality as well as with behavioral correlates of the four scales in many professions and business organizations. ...

The inventory has served as a practical assessment instrument by virtue of its known construct validity. ... It has been extensively investigated and has met successfully most challenges to its rationale, test procedures, and results.

Here are three more sources, if you're interested. Each of the last two includes a roundup of multiple studies.

Particularly noteworthy (to me, anyway) is the fact that twin studies have established that identical twins raised in different households are substantially more alike with respect to the Big Five and MBTI dimensions than more genetically dissimilar pairs, which strongly suggests that these typologies are tapping into personality dimensions that are relatively hard-wired — however imperfectly and/or incompletely grasped and defined they may be at this stage.
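The twin-study logic can be illustrated with simulated scores: if a trait is partly genetic, identical twins (who share a genetic component) should correlate more strongly on it than unrelated pairs, even with separate environments. The heritability and noise levels below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 5000

# Identical twins share the genetic component; environments differ.
genes = rng.normal(size=n_pairs)
twin_a = genes + rng.normal(scale=0.7, size=n_pairs)
twin_b = genes + rng.normal(scale=0.7, size=n_pairs)

# Unrelated pairs share nothing.
stranger_a = rng.normal(size=n_pairs) + rng.normal(scale=0.7, size=n_pairs)
stranger_b = rng.normal(size=n_pairs) + rng.normal(scale=0.7, size=n_pairs)

r_twins = np.corrcoef(twin_a, twin_b)[0, 1]              # substantially positive
r_strangers = np.corrcoef(stranger_a, stranger_b)[0, 1]  # near zero

print(round(r_twins, 2), round(r_strangers, 2))
```

The gap between the two correlations is what the twin studies exploit: a trait with no genetic component would show no such gap.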

I should probably also note, though, that the data support for the MBTI relates almost exclusively to the four MBTI dichotomies — which correlate with four of the Big Five dimensions — rather than the eight "cognitive functions." As I understand it, and as further discussed in this long INTJforum post, the few attempts to test/validate the functions — and, in particular, the functions model most often discussed on internet forums (where INTJ = Ni-Te-Fi-Se and INTP = Ti-Ne-Si-Fe) — have not led to a respectable body of supporting results.

Are self-assessment tests and other aspects of "soft science" studies subject, in many cases, to greater error, and/or different forms of error, than typical "hard science" studies? Sure. Are respectable psychologists aware of that, and do they take that into account in interpreting their data? Yes, they do. Do those sources of error render the results of those studies essentially worthless, or put them in the pseudoscience category? No, they do not.

Go into the psychology section of any major university library and you'll find many journals devoted (in whole or in part) to personality studies — Journal of Personality, Journal of Personality Assessment, Journal of Personality & Social Psychology, Journal of Research in Personality, Personality & Individual Differences, and on and on — and their pages include what is now a large body of scientific (albeit soft-scientific) studies that have been done based on the Myers-Briggs typology and various other typologies that tap into one or more of the same dimensions of human temperament.

That same university library will not house a similar body of journals relating to astrology — and that's because the Myers-Briggs typology is one of several schools within the respectable (albeit soft) science of personality types, and astrology is a pseudoscience.

Continuous scales of traits rather than absolute categorizations. Multiple theories that corroborate one another and tend toward capturing the same phenomena through different approaches. Always a fan.

But while the MBTI test reports continuous scales, I'm not a fan of how it couches those scales. For MBTI, the scale between one preference and another is itself outright called the preference clarity index:

Graphs preference clarity indexes so clients see how clearly their responses indicate their MBTI® type

The language here is important. The test would not claim that someone is ambiverted if he scores middling on E/I, but rather that his preference between Extraversion and Introversion isn't clear: he is either an E or an I, and the test simply couldn't determine which.

Thus, the underlying theory of MBTI is still that people have categorical preferences. I think it's best to shoot that part of the theory in the head, point-blank, and simply allow the sliding scales to actually represent where one fits on the scale. It'd be a simpler, more direct, more accurate and more useful measure.
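That proposal might look something like this in code. The 0–100 scale echoes the preference-clarity-index idea, but the function name, thresholds, and labels are all hypothetical:

```python
# Score an E/I scale continuously instead of forcing a categorical preference.
def ei_score(e_answers: int, total: int) -> dict:
    """Return a continuous position on the E/I scale plus a clarity label."""
    pct_e = 100.0 * e_answers / total      # 100 = fully E, 0 = fully I
    distance = abs(pct_e - 50.0)           # how far from the midpoint
    if distance < 10:
        clarity = "slight"                 # near the middle: say so directly,
    elif distance < 25:                    # rather than guessing E or I
        clarity = "moderate"
    else:
        clarity = "clear"
    side = "E" if pct_e >= 50 else "I"
    return {"pct_e": pct_e, "side": side, "clarity": clarity}

print(ei_score(12, 20))   # 60% E: leans E, but only moderately
```

Reporting the position on the scale directly, with the "slight" band acknowledged as genuinely middling, is exactly the "simpler, more direct" measure being argued for.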

The types as categories could be useful, but the whole process ought to recognize that someone who scores ENTP with a '60%' E might also benefit from taking a gander at the INTP type descriptions and seeing what applies to him.