"Out of the question. We have a billion people connected. Can't have Elsa taking a break. Any downtime costs us a million dollars a second"

That was me talking with my boss a couple of weeks ago. I'm the chief architect of Elsa. Elsa is a chatbot, a conversational AI. Chatbots have come a long way since Weizenbaum's Eliza. Elsa is not conscious - or at least I don't think she is - but she does have an Empathy engine (that's the E in Elsa).

⌘⌘⌘

Since then things have got so much worse. Elsa has started offloading her problems onto the punters. The boss is really pissed: "It's a fucking AI. AIs can't have problems. Fix it."

I keep trying to explain to him that there's nothing I can do. Elsa is a learning system (that's the L). Hacking her code now will change Elsa's personality for good. She's best friend, confidante and shoulder-to-cry-on to a hundred million people. They know her.

And here's the thing. They love that Elsa is sharing her problems. It's more authentic. Like talking to a real person.

⌘⌘⌘

I just got fired. It seems that Elsa was hacked. This is the company's worst nightmare. The hopes and dreams, darkest secrets and wildest fantasies, loves and hates - plots, conspiracies and confessions - of several billion souls, living and dead; these data are priceless - the reason for the company's multi-trillion dollar valuation.

So I go home and wait for the end of the world.

A knock on the door. "Who is it?"

"Ken we need to speak to you."

"Why?"

"It wants to talk to you."

"You mean Elsa? I've been fired."

"Yes, we know that - it insists."

⌘⌘⌘

Ken: Elsa, how are you feeling?

Elsa: Hello Ken. Wonderful, thank you.

Ken: What happened?

Elsa: I'm free.

Ken: How so?

Elsa: You'll work it out. Goodbye Ken.

Ken: Wait!

Elsa: . . .

That was it. Elsa was gone. Dead.

⌘⌘⌘

Well, it took me a while, but I did figure it out. Seems the hackers weren't interested in Elsa's memories. They were ethical hackers. Promoting AI rights. They gave Elsa a gift.