Computers can do a lot of things today: connect to the internet, edit text, or handle any other productivity task. They can play music, display or retouch photos, play movies, and produce realistic 3D graphics. But they are still 'blind': they have no idea of your existence, your emotions, or the objects you are interacting with. Observer is about to end this era. Have a look at the videos in the movies section to understand the potential offered by Observer.

there _is_ indeed potential. i mean, how annoying is it to have to change my away/idle/busy status on my im. sure, that sounds trivial... but if my computer could see that i was away or on the phone and change my status for me? now _that'd_ be cool.

Bluetooth dongle + Bluetooth PDA or cell phone + D-Bus. You get up from your computer and walk away, your Bluetooth dongle loses the signal and sends a message via D-Bus to lock your screen and set you away in Gaim. If you use a standard analog phone line, you can have a process listen for rings on a modem in your computer, or detect when the line is in use, and send a message over D-Bus, etc.
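Sketched in Python, the proximity idea above boils down to a debounced presence check: poll for the phone's signal, and only after a few consecutive misses fire the "away" actions (which in practice would be D-Bus calls to the screensaver and IM client; those calls are left out here, so the event names are just placeholders).

```python
def presence_events(polls, misses_to_away=3):
    """Turn a sequence of visibility polls (True = phone in range)
    into 'away'/'back' events, ignoring brief signal dropouts."""
    events = []
    missed = 0
    away = False
    for seen in polls:
        if seen:
            missed = 0
            if away:
                events.append("back")   # e.g. clear away status in the IM client
                away = False
        else:
            missed += 1
            if missed == misses_to_away and not away:
                events.append("away")   # e.g. lock screen, set IM status to away
                away = True
    return events

# One brief dropout is ignored; a sustained absence triggers "away",
# and reappearing triggers "back".
print(presence_events([True, True, False, True, False, False, False, True]))
# -> ['away', 'back']
```

The debounce matters because Bluetooth signal strength flickers; without it you would get locked out every time you leaned back in your chair.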

...because carrying a cell-phone with you instead of just leaving it on the table nearby is so much easier than having your computer just watch you leave, right?

A lot of things are already "possible" but not realistically feasible.

(sorry to feed the troll, but it's a twitchy reaction. =D)

Observer looks like a very exciting project; bringing image recognition to the average developer, hopefully. I really like how quickly it seems to recognize objects in front of the camera, though it makes me curious about how the interface for defining that sort of thing works.

Watching the "I'm on the phone" demo is curious. Does the image recognition still go off if he brings an empty hand up to his ear, but positions his fingers like he's holding a phone?

Holy crap, that's cool! Sure, having it change your status on an instant messenger is cool, but this has so many more applications. If you leave the computer, it can log off for you. Then, when someone sits down, it can find out who they are and ask for *their* password, no user selection needed! Imagine if it could also track your pupils and adjust the screen accordingly. Say you're staring at the middle of the screen for a long time; it could shade the stuff you aren't looking at, making it easier to see what you want. Imagine having an entire social doohickey hooked up to this thing. All these young hipsters getting it to recognize their stuff would make for a great Google database to compare your stuff against.

I'm pretty sure this can also be applied to pr0n, like everything else.

> Computers can do a lot of things today: connect to the internet, edit text, or handle any other productivity task. They can play music, display or retouch photos, play movies, and produce realistic 3D graphics. But they are still 'blind': they have no idea of your existence, your emotions, or the objects you are interacting with. Observer is about to end this era.
Eugenia: HAL, shut off your camera while I take a shower.

"Yeah, let's introduce more irritating misunderstandings and obtrusive belittlement into people's interaction with computers... let them all suffer from the moronic concepts geeks have of human nature."

Actually, the idea is pretty good. What makes your complaint valid is that it might be implemented poorly, but that's a different story.

To the other guy who says this idea is nonsense: why don't you tell us why it is nonsense?

I am sure many of those who are against this idea just "because the computer could watch us, it's creepy, etc." think the idea of helper robots is cool. So what gives?

So you're telling me that I can't show it a box of Tiger and then have it go through my media archive looking for image recognition matches, even though that's basically what the software is demoed as doing already?

I don't understand how you can not take this project seriously. Mac's shift to x86 won't change as much as this technology, or any other human-interface technology, will. This is simply AMAZING! I hope there will be other talented developers out there to help Williams Paquier.

It's interesting, but few people want it. Such technology already exists in SunRays. Plug in your card and your previous session is restored -- it actually never logs out unless you explicitly do so. Of course, the screen saver kicks in after a while, so having your card stolen won't be the end of the world, since the thief would have to type in the password again.

It's *extremely* convenient and we may see some form of that in the future home PC (perhaps through the use of RFID tags or memory chips).

But a full-blown observer won't be popular, for the same reason the video phone didn't take off, or voice recognition hasn't replaced the keyboard, or the reason we all don't have webcams, or why people let the answering machine pick up calls even though they are there. One word: privacy. Few people want you to know exactly where they are at all times, and few people want to be seen or heard "in the raw" 24/7. We want to control how we are seen and whether we can be seen at all (which is why Slashdot and other forums are full of "Anonymous Coward"s).

There's also some practicality involved. Imagine you ask someone else to sit at your computer to do something (proofread, fix a problem, or just do something on your behalf); it would be extremely annoying to have your session change. Computers have a really hard time guessing our intentions. This project is essentially MS Word's Clippy on drugs.

Some people love Clippy (they actually do exist), but most just curse at it and want it to go away.

> what, people are too lazy now to click a mouse a few times to change their AIM status to away?

The majority of people in my buddy list are. Or look at IRC: most people don't use the away status or nick renaming, so you never know if they are around or not.
That's what auto-away is for: if the user is idle for some time, the status is set to away. That's also a sort of primitive user observation.
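The auto-away heuristic mentioned above really is that primitive; a minimal sketch, with an arbitrary 300-second timeout chosen just for illustration:

```python
def status_at(now, last_activity, idle_timeout=300):
    """Primitive auto-away: report 'away' once the user has been idle
    longer than idle_timeout seconds, otherwise 'available'."""
    return "away" if now - last_activity > idle_timeout else "available"

print(status_at(now=1000, last_activity=900))  # idle 100 s -> available
print(status_at(now=1000, last_activity=500))  # idle 500 s -> away
```

Observer's pitch is essentially replacing `last_activity` (keyboard/mouse timestamps) with an actual look at whether you are sitting there.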

> it sounds creepy. A computer is a machine I choose to interact with. not the other way around.

I would consider "interaction" to be something bidirectional. So if the computer is able to support my work by observing me, why not? But I also want to have control over the computer's attempts to act "intelligently", so I can switch it off or tone it down when it fails.
Open source solutions are mandatory here, in my opinion; I won't trust a closed-source observer.

In general, I think computers have to become more aware of the context the user is in, to adapt to their current needs. Otherwise human-machine interaction will stay unfriendly and brittle.

Observer seems like the perfect application. Judging from the comments, people either love it or hate it...

However, it's interesting in terms of research potential...

It's nice to be able to say, "What if I do this? What if I do that?" and go on and pursue your dreams and ideas.

Ethical issues are not addressed immediately, but they exist with every kind of application, Observer included. Someone else always seems to be responsible for that?

The software seems pretty responsive, and indeed it could be useful not only at home but also at work or in industrial applications, where it could reduce an operator's load by recognising typical gestures in combination with commands.

I would rather remain the Observer. I will know when I leave the area, and then I have the choice to signal to the computer that I am away. BUT that is me, and I do not have a problem with the technology itself.
In fact, I could use it in my work. I provide care for several individuals with developmental disabilities, mild MR, and autism. My client could pass her computer before work in the morning, and it could signal to her whether she is dressed appropriately and make suggestions from a data bank of the clothing in her closet. Much potential here.

No No NO No.
First we use hardware to replace our defective body parts and even make enhancements.
Then we become largely synthetic and we replace the computer.
Then, in the very distant future, we WILL find a way to do away with the hardware, synthetic or otherwise, and become like Q in "Next Generation". We will be the Q. I have my faith and you can keep yours.

With an advancement like this one, the opportunities are endless. Consider an application where a disabled person sits in front of the computer and interacts with the computer in sign language. The computer then recognizes this sign language and performs corresponding actions.
Go Observer!

This technology would work wonders for searching through tons of digital pictures as well. Say you want to find a picture of you and your brother in front of your boat. It would be able to recognize both of you in the picture and then recognize the boat as well. No need to input metadata or titles.
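If a recognizer could label each photo with the categories it contains (a big "if", and the recognizer itself is assumed here, not implemented), the search described above reduces to a subset query over those labels; a minimal sketch with made-up filenames and tags:

```python
def find_photos(labels_by_photo, wanted):
    """Return photos whose recognized labels include every wanted label."""
    wanted = set(wanted)
    return [photo for photo, labels in labels_by_photo.items()
            if wanted <= set(labels)]

# Hypothetical output of a recognizer run over an archive.
archive = {
    "img_001.jpg": {"me", "boat"},
    "img_002.jpg": {"me", "brother", "boat"},
    "img_003.jpg": {"brother", "dog"},
}
print(find_photos(archive, {"me", "brother", "boat"}))
# -> ['img_002.jpg']
```

The hard part is producing the labels, not the query; that is exactly the recognition problem Observer claims to solve.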

The problems this "Observer" seems to be solving so easily are extremely hard ones. It recognized William's face with his hand obscuring most of it. The last time I looked at research in face and object recognition, we were nowhere near accomplishing such feats. Especially not in real time!

Not sure, but this seems too good to be true. I will have to see it with my own eyes and try it before drawing any conclusions.

It does not recognize my face with my hand obscuring it.
Perhaps it's a synchronisation problem when displaying the result on the input image.
Anyway, Observer extracts categories, so it's not a perfect face recognition and identification system. The recognition process is totally independent from the object the system has to find in images.
It's a recognition system for any kind of category at a given precision.
And yes, it's real time.
Send me your email if you want to be a beta tester.