breveo

Inspiration

Nobody likes to watch long videos. Because of this, lots of relevant content gets ignored.

What it does

Our application will watch a video for you and generate a summary in seconds.

How we built it

We decided that breveo should first learn to summarize the audio of a video, so we began looking into machine-learned text summarization. After researching different open source machine learning libraries, we started with Google's TensorFlow, released in 2015. However, after many hours of training on our GPU and CPUs, we still couldn't produce good enough models. So we began looking for pre-trained models that had already been trained on multiple GPUs over many days. We settled on OpenNMT (from Harvard), which offers pre-trained models trained on Gigaword, the de facto standard corpus for text summarization research.
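To make the transcript-in, summary-out step concrete, here is a toy frequency-based extractive summarizer. This is only an illustrative stand-in, not the seq2seq OpenNMT model we actually used; the `summarize` function and its stopword list are hypothetical helpers written for this sketch, not part of any library.

```python
import re
from collections import Counter

# Minimal stopword list for the sketch (a real system would use a fuller one).
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "for", "on", "that", "it"}

def summarize(text, max_sentences=1):
    """Score each sentence by the corpus frequency of its non-stopword terms
    and return the top-scoring sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)
    # Rank sentence indices by total term frequency, highest first.
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:max_sentences])  # restore original order
    return " ".join(sentences[i] for i in keep)
```

A neural abstractive model like the Gigaword-trained OpenNMT release generates new sentences rather than selecting existing ones, but the interface is the same: transcript text in, short summary out.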

Challenges

For a long time, machine learning was something only cutting-edge researchers could work on. Now, thanks to open source libraries like TensorFlow and OpenNMT, it is in the hands of anyone who can run a Python script. However, this public access is still very new, and there is not a lot of community trial and error to learn from. As a result, we spent much of our time just getting these APIs to work on our machines.

Things we learned

Anaconda: Python data science platform

TensorFlow: open source machine learning framework

Keras: high-level neural network API that can run on top of TensorFlow

AWS Elastic Beanstalk: service for deploying and scaling web applications

Tools we used

AWS Elastic Beanstalk: service for deploying and scaling web applications

Github/git: version control system

Python3

Torch

Lua

OpenNMT

Future

Breveo's next step is to train on images to gain more context from the video. After that, breveo will move on to training with multiple frames. The goal is to create the most concise and effective summary of a video possible by combining audio, visuals, and text.