Short post today with a "coaching others" slant. Let's say you've just taken a behavioral assessment. Which one? Doesn't matter, because as the video below alludes to, almost all of them are based on the same science.

Anyway, you took the assessment. On some of the dimensions you're a part of the crowd, lumped somewhere in the middle of humanity.

But wait - there are a couple of areas where you really stand out! Examples:

--You're high assertiveness...(you deal with things that need to be dealt with)

--You're high people....(you engage with others easily and are seen as approachable)

--You're low sensitivity...(you take feedback easily - and make quick adjustments based on the feedback with little emotion)

See what I did there? The parentheses tell you why your outlier score in the areas mentioned can be considered a super-strength.

But for every interpretation of an outlier assessment score as a positive, there's also a negative.

Turns out, when it comes to assessments, your best feature is also your worst feature.

High assertiveness can bite you in the a$$ when you don't understand a situation where it will be perceived as highly negative. High people individuals tend to talk more than they listen, which often limits their effectiveness/results. Low sensitivity people are often low empathy and don't automatically understand how others feel.

So celebrate your outlier scores, or those of your direct reports. Then coach on a daily basis on where that super-strength is best deployed, and what situations the super-strength needs to be muted for best results at work.

Your best feature is your worst feature. Video below of me talking assessments at Disrupt HR (email subscribers click through if you can't see the video)...

Comments

This is an interesting and entertaining video that is correct in spirit, but not always in fact. One-page outputs for HR or talent managers to review make a lot of sense. But to suggest that having only the Big 5 personality traits and some cognitive measures 'under the hood' is all anyone can do in actual prediction is both overly simplistic and not aligned with modern science.

There are at least 4-5 big, major cognitive dimensions. If we look at the modern HEXACO framework of psychological mapping of personality, there are at least 6 major traits and 24 sub-dimensions with some element of demonstrated validity. And there are dozens to hundreds of other traits that have been studied in isolation and in specific job instances - traits that may or may not fit well into those 24 sub-dimensions. There are also many relatively well-researched models of organizational preference such as OCAI, extremely well-studied models of vocational interest (e.g., the Strong Interest framework) and general values-based frameworks, not to mention many competing major theories of motivation and human drives.

So, to insist that all of the aspects of a person that might influence job performance can be reduced to the 'Big 5' personality dimensions plus some cognitive measures is not just wrong, but is also fairly reductionist.

There are significantly more features beyond those listed above that can be 'model' features.

Updates by Frank Schmidt and others looking at 100 years of research have found that, for instance, interviews (both structured and unstructured) and reference checks also have statistically significant validity - the size of which varies by job and company.

Even if the 'Big 5' and one cognitive dimension were all we could measure, how we combine them and build our predictive models will greatly influence what we find - and there have been books and thousands of academic articles on the subject of backtesting and building models.
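To make that concrete, here is a minimal sketch (all candidate names, trait scores, and weights are invented for illustration) showing how two different weighting schemes over the same Big 5-plus-cognitive inputs can reverse a candidate ranking:

```python
# Hypothetical illustration: the same assessment scores ranked under two
# different linear weighting schemes. All names and numbers are invented.

candidates = {
    "A": {"conscientiousness": 0.9, "extraversion": 0.3, "cognitive": 0.6},
    "B": {"conscientiousness": 0.5, "extraversion": 0.8, "cognitive": 0.7},
}

def score(traits, weights):
    """Weighted linear combination of trait scores."""
    return sum(weights[t] * v for t, v in traits.items())

# Two plausible-looking but different weighting schemes for two roles.
weights_sales = {"conscientiousness": 0.2, "extraversion": 0.6, "cognitive": 0.2}
weights_analyst = {"conscientiousness": 0.5, "extraversion": 0.1, "cognitive": 0.4}

rank_sales = sorted(candidates, key=lambda c: score(candidates[c], weights_sales), reverse=True)
rank_analyst = sorted(candidates, key=lambda c: score(candidates[c], weights_analyst), reverse=True)

print(rank_sales)    # B ranks first under the sales-style weights
print(rank_analyst)  # A ranks first under the analyst-style weights
```

The same two candidates, the same scores, and a different "model" flips who looks best - which is exactly why the combination step deserves as much scrutiny as the inputs.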

Nearly all 'traditional' academic historical findings were based on linear models in fairly small samples. With modern data processing and machine learning techniques it becomes possible to build company- and position-specific models by comparing top and bottom performers across dozens to hundreds of underlying traits. It is now also possible to do audio and facial emotion scoring, and natural language processing of text using approaches like topic modeling, sentiment analysis, and bag-of-words. All of these are methodologically sound and rigorous approaches - where the effectiveness can be checked and measured.
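As a toy sketch of the bag-of-words idea (the sample texts are invented, and a real pipeline would use tf-idf weighting and held-out validation rather than raw counts):

```python
from collections import Counter

# Hypothetical illustration: which words appear disproportionately in
# manager notes about top performers vs. bottom performers.
top_notes = "closes deals fast listens to clients follows up follows through"
bottom_notes = "misses deadlines talks over clients rarely follows up"

def bag_of_words(text):
    """Lowercase, whitespace-tokenize, and count word frequencies."""
    return Counter(text.lower().split())

top, bottom = bag_of_words(top_notes), bag_of_words(bottom_notes)

# Raw count difference per word; positive means "more typical of top
# performers". A real model would weight and cross-validate these signals.
diff = {w: top[w] - bottom[w] for w in set(top) | set(bottom)}
discriminating = sorted(diff, key=lambda w: abs(diff[w]), reverse=True)
print(discriminating[:3])
```

The point is only that the feature space here is open-ended text, not a fixed handful of trait scores - and its predictive value can be measured against actual performance data.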

So, suggesting that 'output' for users should be very simple and clear is 100% correct, but suggesting it must be simple and clear because all we can ever use as valid features are a small number of inputs combined with simple linear-type models is fairly 'old school' and not likely to be supported by real data over the next decade. Nevertheless, I enjoyed the video.