Lev Feels Fine Demo

From Open Source@Seneca

0.1 Overview

Ask for the person's Twitter handle or a hashtag. Also ask for their location.


A video is created from 3 shots; it must play seamlessly, with no hiccups (this is a big component of the demo)

The first shot will be of a face looking toward the camera; it will be chosen from happy, sad, or angry. The emotion will be determined from the user's Twitter ID, based on how often their tweets mention those emotions

The 2nd will be a still image, chosen from a Flickr feed of happy, sad, or angry, selected as above

The 3rd shot will be identical to the 1st

Brett will send video to use as soon as possible. Use whatever footage is available in the meantime.

0.2 Tasks

Twitter API (Donna)

get people's tweets to search for mood keywords (happy, sad, angry)
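Once the tweets are fetched, picking the emotion is a keyword tally. A minimal sketch, assuming the tweets arrive as an array of strings (fetching and Twitter API auth are out of scope here; the function name is our own):

```typescript
type Emotion = "happy" | "sad" | "angry";

// Tally the mood keywords across a user's tweets and return whichever
// emotion appears most often. Ties keep the earlier key ("happy" first).
function dominantEmotion(tweets: string[]): Emotion {
  const counts: Record<Emotion, number> = { happy: 0, sad: 0, angry: 0 };
  for (const tweet of tweets) {
    const text = tweet.toLowerCase();
    for (const emotion of Object.keys(counts) as Emotion[]) {
      // Count every whole-word occurrence, not just one per tweet.
      counts[emotion] += (text.match(new RegExp(`\\b${emotion}\\b`, "g")) ?? []).length;
    }
  }
  return (Object.keys(counts) as Emotion[]).reduce((best, e) =>
    counts[e] > counts[best] ? e : best
  );
}
```

The result indexes into the shot footage and the Flickr feed for the montage.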

Geolocation (Anna)

get weather for person's location, map to background, audio
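The weather-to-scene step can be a plain lookup table. A sketch, assuming weather arrives as a condition string from whichever service we choose; the condition names, colours, and audio paths below are placeholders:

```typescript
type Scene = { background: string; audio: string };

// Placeholder mapping from weather condition to background colour + audio.
const WEATHER_SCENES: Record<string, Scene> = {
  clear:  { background: "#87ceeb", audio: "audio/bright.ogg" },
  rain:   { background: "#4a5a6a", audio: "audio/rain.ogg" },
  snow:   { background: "#e8eef2", audio: "audio/chimes.ogg" },
  clouds: { background: "#9aa5ad", audio: "audio/soft.ogg" },
};

function sceneForWeather(condition: string): Scene {
  // Fall back to the "clear" scene for conditions we have no assets for.
  return WEATHER_SCENES[condition.toLowerCase()] ?? WEATHER_SCENES["clear"];
}
```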

Flickr API

pull in images from Flickr for "faces"
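Flickr exposes a REST endpoint with a `flickr.photos.search` method that can be queried by tags. A sketch of building the request URL for emotion-tagged faces (the tag choice `face,<emotion>` is our own convention, and the API key is a placeholder):

```typescript
// Build a Flickr REST search URL for face images tagged with the chosen
// emotion. Replace apiKey with a real Flickr API key.
function flickrSearchUrl(apiKey: string, emotion: string): string {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: apiKey,
    tags: `face,${emotion}`, // e.g. "face,happy"
    format: "json",
    nojsoncallback: "1",
  });
  return `https://api.flickr.com/services/rest/?${params.toString()}`;
}
```

The JSON response lists photo records from which image URLs can be assembled for the 2nd shot.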

Canvas/Video montage (Scott)

video will be 480x270 pixels

seamless combination of video + images (e.g., use a canvas to show video and images, hiding the img and video elements)
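One way to keep the cuts seamless is to drive everything from a single clock: a timing function decides which shot the canvas should be drawing at any moment, and the draw loop (`requestAnimationFrame` + `ctx.drawImage` onto the 480x270 canvas, with the `img` and `video` elements hidden) just asks it. A sketch of that timing logic; the durations are placeholders:

```typescript
type Shot = { kind: "video" | "image"; durationMs: number };

// Placeholder montage: face footage, Flickr still, same footage again.
const SHOTS: Shot[] = [
  { kind: "video", durationMs: 3000 }, // shot 1: face looking at camera
  { kind: "image", durationMs: 2000 }, // shot 2: Flickr still
  { kind: "video", durationMs: 3000 }, // shot 3: identical to shot 1
];

// Return the index of the shot to draw at the given elapsed time.
// Clamps to the last shot so the final frame holds instead of going
// blank -- no hiccup at the end of playback.
function shotAt(elapsedMs: number, shots: Shot[] = SHOTS): number {
  let t = elapsedMs;
  for (let i = 0; i < shots.length; i++) {
    if (t < shots[i].durationMs) return i;
    t -= shots[i].durationMs;
  }
  return shots.length - 1;
}
```

Because only the canvas is ever visible, swapping the draw source between the video frame and the still image happens in a single frame, which is what makes the cut invisible.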

0.3 Ideas

Once we get that done, there is some added complexity I would like to add. For instance, the 2nd shot would become an "over the shoulder" view of whoever's face I record, looking at a computer, and we would dynamically place the Flickr image INSIDE the computer's screen. I'd also like to group all the emotions like the 5 elements site, but I'm hoping the above text is enough to get started.