Nice shoes, wanna recognize some input?

Even though giant multitouch display tables have been around for a few years now, we have yet to see them used in the wild. While the barrier to entry for a Microsoft Surface is very high, one of the biggest problems in implementing a touch table is interaction: how exactly should the display interpret multiple commands from multiple users? [Stephan], [Christian], and [Patrick] came up with an interesting solution for sorting out who is touching where by having a computer look at shoes.

The system uses a Kinect mounted on the edge of a table to extract users from depth images. From there, interaction on the display can be pinned to a specific user based on hand and arm orientation. As an added bonus, the computer can also identify users by their shoes: if someone walks up wearing a pair of shoes the system has seen before, the software recognizes them automatically.
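The touch-attribution step can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes hypothetical per-user shoulder and hand positions (as a skeleton tracker might extract from the depth image, projected onto the table plane) and picks the user whose hand is closest to the touch and whose arm points toward it. The weighting is arbitrary.

```python
import math

# Hypothetical tracked users: shoulder and hand positions in table-plane
# coordinates, as a depth-camera skeleton tracker might report them.
users = {
    "alice": {"shoulder": (0.0, 0.0), "hand": (0.4, 0.5)},
    "bob":   {"shoulder": (2.0, 0.0), "hand": (1.6, 0.5)},
}

def attribute_touch(touch, users):
    """Attribute a touch point to the user whose hand is nearest and whose
    shoulder-to-hand direction best agrees with the hand-to-touch direction."""
    best, best_score = None, float("inf")
    for name, u in users.items():
        hx, hy = u["hand"]
        # distance from this user's hand to the touch point
        dist = math.hypot(touch[0] - hx, touch[1] - hy)
        # cosine similarity between the arm vector and the hand-to-touch vector
        ax, ay = hx - u["shoulder"][0], hy - u["shoulder"][1]
        tx, ty = touch[0] - hx, touch[1] - hy
        denom = math.hypot(ax, ay) * math.hypot(tx, ty) or 1.0
        cos_sim = (ax * tx + ay * ty) / denom
        # lower score is better; the 0.5 weight is an arbitrary assumption
        score = dist - 0.5 * cos_sim
        if score < best_score:
            best, best_score = name, score
    return best
```

A touch near Alice's outstretched hand, e.g. `attribute_touch((0.45, 0.6), users)`, is attributed to her rather than to Bob, even though both users stand at the same table.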

My first thought: what if a student doesn't wear the same shoes daily? Women are likely to change shoes to match the rest of their clothing for the day, and both men and women will change their shoes as the weather dictates. The table doesn't appear to be very wheelchair accessible either.

I'm not saying I'll always catch my mistakes, and chances are I won't, but a preview option would be nice, or an edit option, as others have suggested. As the screen changed I noticed I had typed "where" rather than "wear".

That's really sweet. OK, it's probably best not to use shoes for authentication, and I don't think that's what's being suggested here, but it's very cool that you can tell which user is interacting with the table in a multiple-user session. It could combine nicely with the D&D-esque tabletop errr… table shown on here a while ago.

Apart from what N0LKK pointed out, there is one other really huge flaw in this design:
it would be very easy to "log on" as a different user by simply a) stealing their shoes or b) buying a very similar pair.

Why even put security in the chips at all? The table is less likely to be stolen and duplicated than an RFID chip. If it's just for location awareness when you are inches away from a specific device, why add that stuff? Plus, you don't want the table to rely on this: if you lost the chip, you wouldn't be able to use the table even in single-user mode, and if you make the chip optional, any security on it can be ignored.

Shoes don't have to be an auth method between sessions. They could simply be a way to track a specific user for the duration of a single session, such as if they were to move to the other side of the display.

Why not just ask each user to “sign in” with a gesture/signature/etc., then match the user to those shoes for the rest of the session? Users who don’t “sign in” can be given more limited control of the workspace, or even ignored.

A single common gesture could ask for a login prompt, followed by a user-defined gesture (equivalent to a signature, though it doesn't need to be as secure as a password); alternatively, whoever is already working at the table could define what "level" of control the new user should be given.
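The sign-in scheme proposed above can be sketched in a few lines. This is a hypothetical illustration, not anything from the project: shoe "signatures" (imagine a feature hash from the depth camera) are mapped to access levels for the duration of one session, with users who never perform the sign-in gesture treated as guests.

```python
class Session:
    """Tracks access levels for one tabletop session, keyed by a
    hypothetical per-user shoe signature."""

    def __init__(self):
        self.levels = {}  # shoe signature -> access level

    def sign_in(self, shoe_sig, gesture_ok):
        # gesture_ok: whether the user completed the sign-in gesture;
        # a failed or skipped gesture leaves them at guest level
        self.levels[shoe_sig] = "full" if gesture_ok else "guest"

    def level(self, shoe_sig):
        # Shoes that never signed in default to restricted "guest" control
        return self.levels.get(shoe_sig, "guest")
```

Because the mapping lives only in the session object, shoes never act as a credential between sessions: once the session ends, the signatures mean nothing.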