One of the main problems with the current generation of chatbots is that they require large amounts of training data. If you want your chatbot to recognize a specific intent, you need to provide it with a large number of sentences that express that intent. Until now, these large training corpora had to be generated manually, with one or more people writing many different sentences for every intent, vertical, and language the chatbot needed to recognize.
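To see why each intent needs many varied example sentences, here is a minimal sketch of intent classification using bag-of-words overlap. All intent names and utterances are hypothetical, and real systems use far richer models; the point is simply that a classifier can only match phrasings it has seen words for.

```python
# Minimal sketch: a bag-of-words intent classifier trained on hand-written
# example sentences. All intents and utterances here are hypothetical.
from collections import Counter

def tokens(sentence):
    """Lowercase whitespace tokenization (deliberately simplistic)."""
    return sentence.lower().replace("?", "").replace("!", "").split()

# Each intent needs many varied example utterances; with only one or two,
# the classifier cannot generalize to new phrasings of the same request.
TRAINING_DATA = {
    "check_balance": [
        "what is my balance",
        "how much money do I have",
        "show my account balance",
    ],
    "transfer_money": [
        "send money to my savings",
        "transfer funds to another account",
        "move cash between accounts",
    ],
}

# Build one word-frequency profile per intent from its example sentences.
PROFILES = {
    intent: Counter(t for s in examples for t in tokens(s))
    for intent, examples in TRAINING_DATA.items()
}

def classify(sentence):
    """Return the intent whose word profile overlaps most with the sentence."""
    words = tokens(sentence)
    return max(PROFILES, key=lambda i: sum(PROFILES[i][w] for w in words))

print(classify("how much is in my account"))  # -> check_balance
```

A new phrasing like "how much is in my account" is only matched because the training set already contains enough overlapping vocabulary ("how much", "my", "account"), which is exactly why writing those corpora by hand is so labour-intensive.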

Contrary to popular opinion, speech recognition tools are not a contemporary development: they date back to 1961 and the development of IBM’s ‘Shoebox’, which could recognise 16 spoken words, including digits. Development was slow over the following decade until regular RE•WORK summit attendees Carnegie Mellon University completed the Harpy programme, which could recognise over 1,000 words.

Do you speak bank? Financial institutions often have their own vocabulary: transaction, authorized user, fraudulent activity, and so on. But humans don’t usually think in those terms, and they certainly don’t text that way. That’s why, when creating Eno, Capital One’s intelligent assistant, we decided to build our own natural language processing (NLP) technology. It was important to build Eno on a platform that deeply understands financial services terminology, so we could deliver the best possible experience to our users.

Bots, bots, bots! We’ve heard about them, we’ve seen them, we’ve likely used them, maybe without even knowing it. In May, Facebook announced that there were over 300,000 bots on Facebook Messenger, and of course Microsoft CEO Satya Nadella famously announced in 2016 that “bots are the new apps.”
In this post, I am not going to talk about chatbots, partly because there’s not a whole lot new to say, but mostly because at CloudMinds, we’re working on what’s next. Our goal is to have a robot (an actual physical one!) in everyone’s home by the year 2025. That’s not that far away! And one of the first things you’ll want to do with your home robot is to have a conversation with it (OK, “robot, do the dishes” isn’t really a conversation per se, but you get the idea; you’ll talk to it)!

A common refrain we hear at NeuroSoph from our friends in the government is that “Time is our biggest challenge. We’re unable to allocate enough of it on our long-term goals because a lot of our pie gets thrown at resource intensive short-term necessities.” This dilemma is hardly new. Since the tax collectors of Ancient Egypt around 3000 B.C., humans have tried to balance spending time on a forward-looking vision of their labor with the reality of cumbersome tasks essential to accomplishing their goals. Fortunately for humanity, there has been an unrelenting and intellectually driven push to solve these issues with technologies that save time, reduce costs, and increase efficiencies. From the plow to the internet, people have invented things that allow us to work faster, smarter, and better. The advent of AI promises to be a milestone in this lineage that should surpass any previous technological advancement and will permanently reorient how labor hours are spent across a swath of government sectors.

At the upcoming AI for Government Summit in Toronto, NeuroSoph will be joining as an exhibitor to showcase their work. NeuroSoph is currently developing an Intelligent Forms Processing System called Specto. Specto augments the processing of paper-based and digital forms. The system extracts and classifies data from incoming documents and applies that output to downstream systems and workflows. Using AI, Machine Learning, and OCR technologies, Specto integrates seamlessly into any workflow. We spoke with Matthew Pallone, CTO at NeuroSoph, to learn more about what they are doing.

When Alan Turing postulated what machine intelligence could do, his question gradually evolved into a more practical and implementable form, from ‘can machines think?’ to ‘can machines do what we can do?’ Fast forward to the present day, and the combination of abundant data and computing power has made machine learning relevant to solving a wide range of business challenges.

In the initial press conference introducing Siri as an integrated iPhone app back in October 2011, an Apple executive asked the assistant ‘Who are you?’, to which Siri replied ‘I am a humble personal assistant’. Since then, the Natural Language Understanding of the assistant has evolved from a rule-based system to the current use of state-of-the-art deep learning techniques. We’re fortunate enough to have Alok Kothari, Machine Learning Engineer at Apple, joining us at the AI Assistant Summit in San Francisco this week to discuss Siri and its Natural Language Understanding.

With the success of Amazon’s Echo and its voice-controlled assistant Alexa, the smart speaker war is heating up in the battle to become the hub of home automation.
Traditionally, these devices had to be operated with buttons, a remote, or other physical controls, limiting their capabilities. As AI becomes more mainstream and customers demand more from their devices, the need for more user-friendly interaction grows. Consumers want instantaneous responses without having to hunt down a remote or get up to approach their device: far-field voice activation is just around the corner.

Whilst Siri was the first mainstream voice assistant many of us became familiar with, there are many personal AIs coming onto the market to make our lives that little bit easier. Google Assistant, initially released in 2016, is ‘your own personal Google’: you can ask it questions, tell it to do things, and it can even identify what song you’re listening to on the radio, much like Shazam. The assistant, which can be found on Google’s Android phones and Google Home, is also one of the few voice assistants that hasn’t been given a female name or an exclusively female voice, breaking the stereotype of digital assistants as female helpers. After all, ‘if we’re ordering our machines around so casually, why do so many of them have to have women’s names?’ As AI assistants become more prevalent in homes as well as on mobile devices, are we as a society bound to become reliant on them?