I wish it weren't true. But it is. How many of your clients describe the outcome they hope for before you have even begun analyzing the data? So I am going to ask you a favor. Listen to the first 10 minutes of this broadcast. It won't take long, but I think it makes my point better than anything I could write in summary.

Once you have an expectation of what to look for in the data, you find it.

I am a big fan of David McRaney. You should be too. He is smart, and he seems to tease out the underlying thread connecting so many ideas and long-held beliefs. If your bottom line depends on the ability to objectively analyze information in all its forms, you need to be aware of your own heuristics and biases.

That doesn't mean you emerge squeaky clean and pure on the other side, but you will at least appreciate the power of probabilities instead of absolutes, and what we don't know about what we don't know...

We are not AIs, which, when finally implemented, will (putatively) be able to modify their own source code. The closest we can come to that is to be aware of what our reasoning is put together from, including various biases that exist for a reason, and to make conscious choices about how we use these components.
Bottom line: understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.--Morendil

If you didn't listen to the podcast, you might have missed the reference to false positives in the title of today's blog. Back when our confirmatory biases held a distinct advantage for survival (is that a cheetah in the bushes?), we weren't troubled by "false positives." Why not? Because it was better to be wrong than to be food...

All of Data & Donuts' content is available to you for free, with no subscription fees or paywalls. If you are a regular reader, feel free to contribute. Thank you!