An Experiment with AI

Several months ago, while trying to automate some of my workload, I accidentally found myself building a very lightweight and limited AI bot. Of course, calling it AI is generous: it’s really nothing more than a series of scripts that key on set phrases and perform certain functions.

This is why we still struggle with personal assistants like Siri and Alexa (Amazon Echo), which seem to catch every other word we say and fumble some of the simplest tasks (while rocking others, and completely missing ones we think are obvious).

The problem goes back to what researchers and universities are trying to solve – natural language recognition and processing. But I’m getting ahead of myself, as the first iteration of the AI was, well, really AS (Artificially Stupid).

What I mean by this is that the first iteration was simply a collection of scripts called when a phrase was matched – very similar to how many “AI” tools work today. In other words, if you asked the “AI” to “give me points,” it would call the “gimme_points” script, which would then determine the application of your statement and return a result.
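
The phrase-to-script dispatch described above can be sketched in a few lines. This is a minimal illustration, not the bot’s actual code: the handler bodies and the second trigger phrase are hypothetical, with only “give me points” / “gimme_points” taken from the text.

```python
from datetime import datetime

def gimme_points(statement: str) -> str:
    # Hypothetical handler body: the real script would inspect the
    # statement and compute an actual result.
    return "You have 42 points."

def tell_time(statement: str) -> str:
    # Hypothetical second handler, just to show dispatch to more than one script.
    return datetime.now().strftime("It is %H:%M.")

# Map set trigger phrases to their handler scripts.
HANDLERS = {
    "give me points": gimme_points,
    "what time is it": tell_time,
}

def respond(statement: str) -> str:
    # Match a known phrase anywhere in the statement and call its script.
    lowered = statement.lower()
    for phrase, handler in HANDLERS.items():
        if phrase in lowered:
            return handler(statement)
    return "I don't understand."
```

The weakness is visible immediately: anything outside the phrase table falls straight through to “I don’t understand.”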

As simple as this approach appears, it did let me pull in numerous scripts (currently 58) to answer questions about anything from hobbies to performing calculations (yes, even better than Google currently does).

But the downside is that this approach is limited to set phrases and truly does not allow for machine interpretation or machine learning – two components I believe are essential for true “fake” Artificial Intelligence.

Another challenge for AI is comprehending human emotion – and interpreting tone. It goes beyond understanding set phrases to interpreting them based on words, punctuation, letter casing, grammar, verbs, nouns, adverbs, adjectives, and their order.
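
To make that concrete, here is a very rough sketch of what a tone guess from punctuation, casing, and word choice might look like. The rules and labels here are my own illustrative assumptions, not the bot’s actual heuristics.

```python
def rough_tone(text: str) -> str:
    """Very rough tone guess from casing, punctuation, and word choice."""
    # All-caps text usually reads as shouting.
    letters = [c for c in text if c.isalpha()]
    if letters and all(c.isupper() for c in letters):
        return "angry"
    # Terminal punctuation carries tone too.
    if text.rstrip().endswith("!"):
        return "excited"
    if text.rstrip().endswith("?"):
        return "inquisitive"
    # "fine" famously carries more than one meaning.
    words = (w.strip(".,!?").lower() for w in text.split())
    if "fine" in words:
        return "ambiguous"
    return "neutral"
```

Even this toy version shows why tone is hard: the same word can flip meaning entirely depending on how it is delivered.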

For the second iteration of the AI, I’ve incorporated WordNet in an attempt to broaden its understanding of words and their context. Of course, understanding the words and being able to interpret them is just the very first stage. I have also built in a very rough tone interpreter, as one thing I learned from my previous marriage is that “fine” has a lot of meanings.

But the secret ingredients, at least as I see them, are the ability to identify an individual and to recall previous conversations.
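
A bare-bones version of that recall mechanism could be a per-person history store like the one below. The class and method names are hypothetical placeholders; the point is only that memory keyed on the identified individual is a small, tractable piece.

```python
from collections import defaultdict

class ConversationMemory:
    """Sketch of per-person conversation recall (names are illustrative)."""

    def __init__(self):
        # Map each identified person to their list of past utterances.
        self._history = defaultdict(list)

    def record(self, person: str, utterance: str) -> None:
        self._history[person].append(utterance)

    def recall(self, person: str, last: int = 3) -> list:
        # Return the most recent utterances for this person
        # (empty if we've never spoken with them).
        return self._history[person][-last:]
```

With something like this in place, the bot can greet a returning person differently from a stranger and pick up threads from earlier conversations.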

If you put them all together, then you have a computer that can recognize a person, adjust its conversation style to them, recall previous conversations, and even reframe conversations, all while having access to potentially limitless data and information from across the internet.

Of course, it’s still a “fake” AI, as it’s not able to learn beyond its programming, and it’s not able to reprogram itself – at least not initially. But then again, would we want an application that could change its own directives? Because that’s the risk of true AI.