HackerVision

Inspiration

We felt lost in the sea of nameless hackers during registration for HackPrinceton. Who should we team up with? Who writes JavaScript? We decided to solve this problem using AR.

What it does

Hackers can register themselves before an event on a web app, entering information like their name and GitHub username. They then take three photos of themselves, and immediately afterwards a Raspberry Pi can identify the hackers with its camera, overlaying AR labels showing their profile information.

How we built it

We used the Microsoft Cognitive Services Face API to handle all of our facial recognition. We built a vanilla JavaScript web app where users can fill out their profile information and take photos of themselves from within the browser. On submission, the Node server creates a person profile through the Microsoft REST API and adds it to the database of users. The Microsoft server then retrains the recognition model on the newly updated database.
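The registration flow boils down to three Face API calls: create a person, attach their photos, and retrain the person group. Here is a minimal sketch in Python for illustration (our actual server is Node); the endpoint region, key, and person group name are placeholder assumptions.

```python
import requests

FACE_ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0"  # assumed region
PERSON_GROUP = "hackers"  # hypothetical person group id
KEY_HEADER = {"Ocp-Apim-Subscription-Key": "YOUR_FACE_API_KEY"}


def person_url():
    """URL for creating a person inside the person group."""
    return f"{FACE_ENDPOINT}/persongroups/{PERSON_GROUP}/persons"


def create_person(name, github_username):
    """Create a person profile; the Face API responds with its personId."""
    r = requests.post(person_url(), headers=KEY_HEADER,
                      json={"name": name, "userData": github_username})
    r.raise_for_status()
    return r.json()["personId"]


def add_face(person_id, jpeg_bytes):
    """Attach one registration photo to the person."""
    url = f"{FACE_ENDPOINT}/persongroups/{PERSON_GROUP}/persons/{person_id}/persistedFaces"
    headers = dict(KEY_HEADER, **{"Content-Type": "application/octet-stream"})
    requests.post(url, headers=headers, data=jpeg_bytes).raise_for_status()


def train_group():
    """Kick off (asynchronous) retraining on the updated person group."""
    url = f"{FACE_ENDPOINT}/persongroups/{PERSON_GROUP}/train"
    requests.post(url, headers=KEY_HEADER).raise_for_status()
```

Training is asynchronous on Microsoft's side, so identification only reflects a new hacker once the train call has completed.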

On the Raspberry Pi end, we created a video stream using the built-in camera. Using Python, we threaded processes that query the REST API with frames of the preview to detect and identify faces. The server responds with the coordinates of all detected faces as well as user data for identified faces. We use this information to overlay each person's profile text on the video stream.
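The Pi-side pipeline can be sketched as follows: camera frames go into a bounded queue, worker threads query the Face API, and the responses are merged into overlay labels for the drawing loop. Here `detect_faces` and `identify_faces` are hypothetical stand-ins for the real REST calls (`POST /detect` and `POST /identify`).

```python
import threading
import queue

frame_q = queue.Queue(maxsize=2)   # drop frames when workers fall behind
label_lock = threading.Lock()
latest_labels = []                 # read by the overlay drawing loop


def merge_labels(detected, identify_results, profiles):
    """Pair each detected face rectangle with the matched profile text."""
    by_face = {r["faceId"]: r for r in identify_results}
    labels = []
    for face in detected:
        rect = face["faceRectangle"]
        cands = by_face.get(face["faceId"], {}).get("candidates", [])
        text = profiles.get(cands[0]["personId"], "unknown") if cands else "unknown"
        labels.append((rect["left"], rect["top"], text))
    return labels


def worker(profiles):
    """Pull frames off the queue and refresh the shared label list."""
    global latest_labels
    while True:
        frame = frame_q.get()
        detected = detect_faces(frame)                              # POST /detect
        results = identify_faces([f["faceId"] for f in detected])   # POST /identify
        with label_lock:
            latest_labels = merge_labels(detected, results, profiles)
```

Bounding the queue at two frames means stale frames are discarded rather than queued up, which keeps the labels roughly in sync with the live preview even though each API round trip takes a while.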

Challenges we ran into

We continue to be limited by the round-trip query time to the Microsoft server, though threading the requests noticeably improved our program's speed. We were initially limited by the inflexibility of the Raspberry Pi's display stack, but found pi3d to be a workable 2D and 3D drawing library.

Accomplishments that we're proud of

We have a complete, working project! We implemented a fully integrated experience, from user registration on the web app to accurate camera identification.

What we learned

As three first-time hackathon attendees, we learned how to pull together a working project in a short amount of time. Kyle and Charlotte had no prior experience working with a Raspberry Pi or threading processes. We also learned about the effects of sleep deprivation, as well as the nutritious, delicious experience of drinking Soylent simultaneously with Red Bull.

What's next for hacker-vision

We hope to develop a mobile version using React Native and smartphone cameras. We want to add more user profile options and integrate them into a better AR display. Above all, we want to improve the processing time, moving toward a true real-time AR experience.