Inspiration

Providing quality customer service can be a challenge, especially for tech companies. Between talking to twenty different customer representatives & being redirected multiple times only to re-explain the issue each and every time, calling a support line often feels like more effort than it's worth. We decided that the tech service industry needs a tool that builds a sense of continuity between customer representatives.

What it does

Phantom is an application that uses the power of natural language processing to automatically summarize customer-representative conversations into a few key points, which are stored to convey the substance of the customer's issues to representatives in the future. The obvious advantage of using Phantom is that reps don't need to start at square one every time a customer calls, streamlining communication for both parties. This translates directly to lower costs & higher efficiency for the business, and a smoother & less frustrating support experience for the customer.

How I built it

We used the Google Speech API to convert audio files into English text, which we then processed through the IBM Natural Language Understanding API to extract key features like the "problem", "solution", "location", & other context regarding the customer's issue. This data was then stored in a Firestore database (GCP), where it was associated with a BAN (Billing Account Number). Customer reps could later pull this data up when the same line or a family line called for support in the future.
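As a rough illustration of the last step, the sketch below shows how an NLU analysis result might be shaped into the record stored under a customer's BAN. The function name, field names, and relevance threshold are hypothetical, not the exact schema from our code.

```javascript
// Hypothetical helper: map an IBM NLU analysis result into the record
// stored in Firestore under the customer's BAN. Field names ("keyPoints",
// "concepts") are illustrative; the real schema may differ.
function buildCallRecord(ban, transcript, nluResult) {
  // Keep only the most relevant keywords as the call's key points
  // (0.5 is an assumed relevance cutoff).
  const keyPoints = (nluResult.keywords || [])
    .filter((kw) => kw.relevance >= 0.5)
    .map((kw) => kw.text);

  return {
    ban,                          // Billing Account Number (document key)
    transcript,                   // full speech-to-text output
    keyPoints,                    // condensed summary for the next rep
    concepts: (nluResult.concepts || []).map((c) => c.text),
    createdAt: Date.now(),        // when the call was processed
  };
}

// Abbreviated example of an NLU-style response being mapped:
const record = buildCallRecord('123456789', 'My router keeps dropping...', {
  keywords: [
    { text: 'router', relevance: 0.9 },
    { text: 'dropped connection', relevance: 0.8 },
    { text: 'yesterday', relevance: 0.3 },
  ],
  concepts: [{ text: 'Wi-Fi' }],
});
// record.keyPoints → ['router', 'dropped connection']
```

In the actual app, a record like this would be written with the Firestore Node.js client, e.g. `db.collection('calls').doc(ban).set(record)`, so the next rep can look it up by BAN.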

Challenges I ran into

The largest challenges were setting up the feature extraction from the text, developing a palatable front-end for customer representatives to work with, & connecting the front-end & back-end together using POST requests.
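A minimal sketch of that front-end/back-end handshake is below. The route name (`/api/summary`) and payload fields are illustrative assumptions, not the exact ones from our code; the validation helper stands in for what the Node.js endpoint would do before touching the database.

```javascript
// Server side (hypothetical): validate the JSON body the front-end POSTs
// after a call ends, before it is summarized and stored.
function validateSummaryPost(body) {
  if (!body || typeof body.ban !== 'string' || body.ban.length === 0) {
    return { ok: false, error: 'missing BAN' };
  }
  if (typeof body.transcript !== 'string') {
    return { ok: false, error: 'missing transcript' };
  }
  return { ok: true, payload: { ban: body.ban, transcript: body.transcript } };
}

// Client side: the front-end would send the body roughly like this
// (illustrative route name):
//   fetch('/api/summary', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify({ ban, transcript }),
//   });

const good = validateSummaryPost({ ban: '123456789', transcript: 'hello' });
const bad = validateSummaryPost({ transcript: 'hello' });
// good.ok → true, bad.ok → false
```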

Accomplishments that I'm proud of

Getting it done & making a functional product without hard-coding the life out of the project.

What I learned

How not to sleep for 24 hours.
How to live on the verge of caffeine toxicity.
How to constantly stare at a laptop screen without straining my eyes.
The horrors of front-end development.

What's next for Phantom

Making it more bug resistant & enhancing our feature extraction from conversational text, which we believe is the hardest part of the project. This is essentially the step of distilling thousands of words of conversation into 200 or fewer keywords that effectively sum up the important bits.
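One simple way that distillation step could work is to rank the NLU keywords by relevance and keep as many as fit a word budget. The function name and budget handling below are hypothetical, a sketch rather than our actual implementation.

```javascript
// Illustrative sketch of the "thousands of words → 200 or fewer keywords"
// step: sort keywords by NLU relevance (highest first) and keep phrases
// until the word budget is spent.
function condenseKeywords(keywords, wordBudget = 200) {
  const ranked = [...keywords].sort((a, b) => b.relevance - a.relevance);
  const kept = [];
  let used = 0;
  for (const kw of ranked) {
    const words = kw.text.split(/\s+/).length;
    if (used + words > wordBudget) break; // budget exhausted
    kept.push(kw.text);
    used += words;
  }
  return kept;
}

const condensed = condenseKeywords(
  [
    { text: 'billing error', relevance: 0.9 },
    { text: 'router', relevance: 0.95 },
    { text: 'called last week about the same thing', relevance: 0.4 },
  ],
  3 // tiny budget for the example
);
// condensed → ['router', 'billing error']
```

A real version would likely weigh concepts and entities too, not just keyword relevance, but the budget-and-rank idea is the same.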

Created by

Set up the Google Speech API, the IBM NLU API, a Node.js backend for front-end communication, and asynchronous network call handling, and built the entirety of the front-end application using HTML, CSS, & JavaScript.

I helped develop the concept of this project and the best ways to implement it. I used Node.js and IBM Watson to take the entire speech-to-text conversation and find the sentiment values to generate a summary of the conversation. This helps make phone calls with customer service reps more efficient and less time-consuming for both sides. I also recorded conversations simulating a phone call for use in the demos.

I worked on converting the generated output of Google's Speech-to-Text API into a meaningful summary in Node.js, using features such as keywords and concepts from IBM Watson's Natural Language Understanding.