Extracting color from a video's pixel to automate an LFO.

My Psych 101 textbook describes how the human eye takes in light and perceives color. Seeing that process laid out gave me an idea for a new Max/MSP patch.

I want to sample the color of a specific pixel and translate it into a sine wave that I can use as an LFO.

I was thinking I could use Jitter to host the video, track a particular pixel (or group of pixels) in a region of the frame, and then have the hue modulate the frequency of a sine wave and the brightness modulate its amplitude. Are there any objects I should use to make this happen?
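To make the mapping concrete, here is a minimal sketch of the math in Python (not Max patching): convert the pixel's RGB to HSV, map hue onto a frequency range, and use brightness (value) as amplitude. The `0.1–10 Hz` range and the helper names are my own assumptions for illustration, not anything from Max.

```python
import math
import colorsys

def pixel_to_lfo_params(r, g, b, min_hz=0.1, max_hz=10.0):
    """Map a pixel's hue to LFO frequency and its brightness to amplitude.
    RGB inputs are 0-255; hue and value come out 0-1 from the HSV conversion.
    The frequency range here (0.1-10 Hz) is an arbitrary illustrative choice."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    freq = min_hz + h * (max_hz - min_hz)  # hue 0-1 scaled into the Hz range
    amp = v                                # brightness 0-1 used directly as amplitude
    return freq, amp

def lfo_sample(t, freq, amp):
    """Evaluate the sine LFO at time t (seconds)."""
    return amp * math.sin(2.0 * math.pi * freq * t)

# A pure red pixel has hue 0 and full brightness,
# so it lands at the bottom of the frequency range with amplitude 1.0.
freq, amp = pixel_to_lfo_params(255, 0, 0)
```

In a Max patch the same mapping would be a matter of scaling the hue value into the frequency inlet of a `cycle~` and multiplying its output by the brightness value, with the video hosted in `jit.movie`.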