Future decision patterns and a glimpse of their effects. PS: We won't get it.

Some years ago (1977), Christopher Alexander wrote a famous book on architecture, A Pattern Language: Towns, Buildings, Construction, which argues that everything "new" we create is basically just known patterns, re-arranged. Sometimes we simply no longer see those patterns, or don't understand how they were re-arranged; that's the moment we call it art, or creative.

After reading this masterpiece, I tried to apply the theory to other areas of business and private life. I used to work as a graphic designer back in the day, and I noticed there was no purely creative process: every layout I made and every idea I had was built on patterns. The most creative people were basically the ones who hid those underlying patterns best. I wasn't a good pattern maker, which is why I switched professions and became a technology guy.

Within technology I noticed the same theory applies: in software architecture, in development, in leadership. I was far better at composing those patterns than I had been in graphics, so it became a habit. Then, some days ago, a video was shared with me. A keynote by Cassie Kozyrkov popped up in which she explained that none of us could build a microwave from scratch, yet we still use one. Her talk was sticky, so I decided to dig a little deeper into her publications, and one was even stickier than her talk: "Explainable AI won't deliver. Here's why."

The more living patterns there are in a place – a room, a building, or a town – the more it comes to life as an entirety, the more it glows, the more it has that self-maintaining fire which is the quality without a name.

Christopher Alexander

An essential quote from her article is "in order to trust AI, we need it to be able to explain how it made its decisions" — a claim she disagrees with. The pattern theory popped up in my head again. Patterns put together in a way most people don't understand are called "art", easy… Don't get me wrong, my intention is not to make LSD a trend in data science, but isn't this kind of the same? We trust artists to put patterns together in a way that inspires us later, while we are spectating. In Cassie's spirit: manufacturers build the microwaves we use in everyday life, and we never question their construction plans. But with AI we demand every single step of the build before we trust those suspicious machines? Shouldn't we just trust the artists to do the right thing, and get inspired again?

Visiting galleries or strolling through a nice city is mostly driven by the intention "blow my mind" (OK, sometimes we are just the ones forced to join a group tour…). With something relevant that we thought we understood until now, it is quite different. Having our minds blown in that respect threatens our confidence intensely. Confidence: the stuff we aim for in our private and business lives, the stuff we vote our leaders in for, one of the traits women most often use to describe a man as "sexy", and so on. Basically, our confidence rests on a very fragile foundation.

How to tackle this contrast?

Let's be patient: for decades, philosophical issues like this have solved themselves, and the outcome is inescapable. The old role models will change, habits will change, and in the end our professions will change.

I recently had a decision to swallow: a good full-stack developer working with me was offered a better job elsewhere. Let's assume the offer was 40% above his current salary, but he offered to stay for a raise of 20%, because he loves the team and the environment. His salary was below average for the local developer pool (by the way, have you noticed that "average" contains the word "rage"?), so the raise would be perfectly reasonable. If we had to hire a replacement after his resignation, we would end up around 20–25% above even the raised budget, headhunters would cost us fees, full productivity would return only after a delay of at least 6–10 months, and the revenue-uplift effects usually tied to a developer in this segment would decline.
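The comparison above is a back-of-the-envelope calculation. A minimal sketch, assuming a hypothetical base salary and illustrative figures for the headhunter fee and ramp-up productivity loss (none of these numbers come from the article itself):

```python
# Rough year-one cost comparison: retain with a raise vs. hire a replacement.
# All concrete numbers below are illustrative assumptions.

current_salary = 60_000            # assumed current annual salary

# Option 1: retain the developer with a 20% raise
retain_cost = current_salary * 1.20

# Option 2: replace him
replacement_salary = retain_cost * 1.225       # ~20-25% above the raised budget
headhunter_fee = replacement_salary * 0.25     # assumed one-off recruiting fee
ramp_up_months = 8                             # within the 6-10 month range
# Assume roughly half productivity during ramp-up, valued at the old salary rate
lost_productivity = current_salary / 12 * ramp_up_months * 0.5

replace_cost_year_one = replacement_salary + headhunter_fee + lost_productivity

print(f"Retain:  {retain_cost:,.0f}")           # 72,000
print(f"Replace: {replace_cost_year_one:,.0f}") # 130,250
```

Even with generous assumptions for the replacement, the raise wins on every line of the spreadsheet — which is exactly what makes the outcome below so telling.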

Simple, right?

Guess what happened in the end? It turns out humans don't use data in the most logical way, but in the way that underlines their preferred default action. Sometimes other datasets (sometimes really creative ones, like fiscal years or discussion-based "facts") pop up out of nowhere and are somehow relevant to individuals. Psychology has a nice name for this: confirmation bias. We all know these moments — somebody says "I've read that a while ago", and suddenly everybody is convinced.

When men wish to construct or support a theory, how they torture facts into their service!

Mackay, 1852/1932, p. 552

Separating those elements while making decisions is not easy, and neither is getting all of them on the table while others are deciding something. But we are now aware of the issue, so it should become easier in the future.

There is an obvious difference between impartially evaluating evidence in order to come to an unbiased conclusion and building a case to justify a conclusion already drawn. In the first instance one seeks evidence on all sides of a question, evaluates it as objectively as one can, and draws the conclusion that the evidence, in the aggregate, seems to dictate. In the second, one selectively gathers, or gives undue weight to, evidence that supports one’s position while neglecting to gather, or discounting, evidence that would tell against it. There is a perhaps less obvious, but also important, difference between building a case consciously and deliberately and engaging in case-building without being aware of doing so.

Nickerson, 1998

We can see a lot of that in so-called "fake media" publications these days. Arguments are drawn or framed in a way that supports a political mindset or an economic goal. Kind of sad in this context, but we all do it from time to time.

The default action is the option that you find palatable under ignorance.

Cassie Kozyrkov, Google

As a summary of this journey, I will try to apply some simple rules in the future:

Being impartial while asking questions: I'll try to kill my default actions as well as I can

I won't be the next creator of an AI superstar program, but I will become a heavy user of those superstars

Separating the confirmation bias thoughts and arguments from the relevant ones

The last rule will probably be the hardest, but I will try my very best to apply all three as well as possible. Sometimes I won't get why, but I am pretty sure I will love the results. If not, the data the results are based on is simply not good enough to produce the right output.