Posted
by
timothy
on Sunday February 13, 2011 @04:01PM
from the tangible-tables-are-the-best-kind dept.

baxpace writes "Here is the first open source prototype of a tangible table using the Microsoft Kinect sensor. The hack is essentially a proof of concept that could serve a multitude of purposes, including real-time analysis of urban models. The program uses the Kinect point cloud, which is mapped onto a flat surface. The upper layer of the point cloud applies a colour to anything that is placed on the table and recognized by the Kinect depth sensor. Every object placed on the table is detected automatically and in turn becomes trackable."

It's not that big of a deal; the technology has been around for a while. The great thing about the Kinect is that the technology is now a household item, is cheap, and is accessible to home tinkerers. The games are also pretty novel.

I think a lot of the hype on /. is that MS put this out for a game system, and the OSS/modding/hacking community has found a billion other uses for it far, FAR beyond the scope of the Xbox, far more than MS ever even considered.

As far as I can tell, he's just projecting the depth (as a few color bands) on top of the table. About five lines of Python with libfreenect and OpenCV. He isn't even tracking anything, just projecting the raw depth quantized to a few layers and roughly calibrated onto the table. Seriously, there are probably hundreds of Kinect hacks more impressive than this one.
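For the curious, the "five lines of Python" the parent describes would look roughly like this. This is a hypothetical sketch, not the article's code: a real capture would come from libfreenect's Python wrapper, but here a depth frame is faked with NumPy so the colour-banding logic stands on its own.

```python
# Sketch of "quantize raw Kinect depth into a few colour bands".
# A real frame would come from libfreenect; here we fake one.
import numpy as np

# Fake an 11-bit depth frame (Kinect v1 depth values are 0..2047).
depth = np.random.randint(0, 2048, size=(480, 640), dtype=np.uint16)

# A small palette: one BGR colour per depth band.
palette = np.array([
    [255, 0, 0],     # nearest band
    [0, 255, 0],
    [0, 0, 255],
    [0, 255, 255],   # farthest band
], dtype=np.uint8)

# Quantize depth into len(palette) equal-width bands, then colour them.
bands = (depth.astype(np.uint32) * len(palette)) // 2048
overlay = palette[bands]   # (480, 640, 3) image to send to the projector
```

A projector pointed at the table would then just display `overlay`, after a rough homography calibration to line the image up with the tabletop. That's the whole trick.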

The only odd part about this Kinect hack is that he's using the ugly proprietary CodeLabs NUI drivers instead of OpenKinect/libfreenect or OpenNI.

I'm sorry to say your comment sounds like "I didn't like this, so others shouldn't either." I know you didn't intend that, but this is STILL newsworthy, perhaps not /.-"perfect" because it doesn't have any... technology... computers... futuristic vision tech... oh... wait. I was still impressed with this demo. Like all research, it is built upon the work of others and incrementally tweaked and improved. Just because this "could" have been done before doesn't mean it has been, and even if it has been…

I didn't really read it like that. The GP makes a coherent argument for why this doesn't seem newsworthy - more than "I didn't really like it, meh".

The summary sounds pretty cool and grabbed my attention straight away. If the article had actually been about a hack that used the depth buffer to cluster points that move together (have consistent depth) into objects and then track them, it would be pretty cool. But really all he is doing is what the GP says - quantising the depth and projecting colours.
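What would the "cluster and track" version look like? A minimal sketch (my own, not from the article): threshold the depth image into "stuff above the table", group connected pixels into blobs, and compute blob centroids, which is what a frame-to-frame tracker would then match. Pure NumPy plus a BFS, so nothing beyond the stdlib is assumed.

```python
# Group above-the-table pixels into connected blobs (4-connectivity).
import numpy as np
from collections import deque

def find_blobs(mask):
    """Connected components of a boolean mask -> list of pixel lists."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                blob, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs

def centroids(blobs):
    """Blob centroids: the input a frame-to-frame tracker would match."""
    return [tuple(np.mean(b, axis=0)) for b in blobs]

# Toy "depth" frame: table at depth 1000, two objects closer to the sensor.
depth = np.full((8, 8), 1000)
depth[1:3, 1:3] = 900    # object A
depth[5:7, 4:7] = 880    # object B
mask = depth < 950       # anything clearly above the table plane

blobs = find_blobs(mask)
```

That's still only a page of code, which rather supports the GP's point: the demo stops well short of even this.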

This does seem pretty basic - project a color based on depth. If he had projected a ball onto the table and used a couple of pencils as paddles to play a Pong game, or Neverball, or something, then fine. But this seems like "Hello World" kind of stuff.

Dude! Are you sure who you're talking to? Marcan was the first person to hack the Kinect and come up with libfreenect. In fact, he was the winner of the competition sponsored by Adafruit for open-sourcing Kinect drivers.

This is just an example of the primary functions of the device. It uses an infrared projector and camera (structured light) to determine depth and establish a common plane; by monitoring this data, differences from that plane are highlighted. It is a simple example. It's like someone who has just bought a chemistry set and run their first chemical reaction: you don't get to publish based on that, but it hopefully encourages you to go forward.
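The "common plane plus differences" idea reduces to background subtraction on the depth image. A toy illustration (names and the 30-unit threshold are invented for the sketch): record a baseline frame of the empty table, then flag any pixel meaningfully closer to the sensor than the baseline.

```python
# Depth background subtraction: baseline of the empty table vs. a
# frame with an object on it. Units are raw Kinect depth values.
import numpy as np

baseline = np.full((480, 640), 1000, dtype=np.int32)  # empty table
frame = baseline.copy()
frame[100:150, 200:260] = 940                         # object placed

# True wherever something sits clearly above the table plane.
changed = (baseline - frame) > 30
```

Everything flagged in `changed` is then "highlighted" by the projector, which is essentially all the demo does.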

Well, desktop metaphor aside, tangible media is about finding new ways to interact with a computer. One of the most interesting tangible media interfaces I've seen is Siftables [mit.edu]. Siftables are computers about the size of a cookie that interact with each other. They sense when other Siftables are set beside them. You can manipulate the computer by shuffling the Siftables around like blocks. They have tilt sensors so that you can "pour" the effect of one Siftable into another.

The SNR on this site has gotten pretty bad. Sometimes comments look like they were digested by a focus group before being pasted here. I've come to realize the barrier to entry to post on this site is a computer and an Internet connection, so shills and non-shills alike are free to post as they please. I just don't lie to myself and try to believe this is a pure and pristine place - not that it ever was. As for blindly believing all the posts about user experiences with brand X, and eloquent posts about tribulations…

Too bad Slashdot is not set up to allow the community to mark shills as a group. Could we do that?

What if you had a clique of others you trust, and allowed them to mark posts as shill, and those marks would reduce a post's score for you because you trust that clique? Sort of like moderation, but per subgroup rather than for the whole site at once. I guess you could use the friend/foe thing, but ACs would not get reduced.
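The scheme above can be sketched in a few lines. Everything here is invented for illustration (the names, the one-point-per-flag weighting): each reader keeps a set of users they trust, and shill flags only lower a comment's displayed score when cast by someone in that set.

```python
# Trust-clique moderation: only flags from users I trust count.
def displayed_score(base_score, shill_flags, my_clique):
    """Subtract one point per shill flag cast by a trusted user."""
    trusted_flags = sum(1 for flagger in shill_flags if flagger in my_clique)
    return base_score - trusted_flags

# A comment at +5, flagged by three users, two of whom I trust.
score = displayed_score(5, ["alice", "bob", "mallory"], {"alice", "bob"})
```

Note the per-reader nature of this: the same comment shows a different score to readers with different cliques, unlike sitewide moderation.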

And this firehose thing probably makes story posting worse and more susceptible to being gamed. But maybe…

As far as ACs and high UIDs go, people have been known to sell off longtime UIDs to the highest bidder, and sometimes posting as AC is beneficial, as when an actual industry insider posts real, verifiable info. As for cliques, it would be pointless: all you would have to do is register a large group of users and have them all "friend" each other, making themselves the de facto Slash-Friends group speaking for the whole site. It is always best to reserve judgment and double-check facts rather than giving trust to…

I don't see why we get 2-3 stories a day about the Kinect, and Slashdot should be able to see by now that stories that only get 30 comments, half of which say "this story shouldn't have been posted," should not be posted.

Because the people posting that are lazy whinging fuckwits who won't moderate the stories. Geez, it's not that hard.

This could be cool if you had a clay model of a landscape and wanted to simulate floods, tsunamis, etc. You could quickly mold new landscape modifications to try out. Or, with a detailed enough sensor, you might be able to simulate a wind tunnel (i.e., this pic of the Tesla Model S - http://bit.ly/Tesla_Model_S [bit.ly]) - of course, with a single sensor the model can only be captured as a height field from one direction (the typical drawback of a topographical map).

Real science simulations require a lot of back-end computational power and a ton of coding and configuration. You probably spend more time setting up parameters and boundary conditions than establishing geometry.

Sure the code may be open but the framework, the hardware and SDK behind the whole Kinect is NOT open source in either spirit or word. There is no way to implement the Kinect without blessing or buying from Microsoft or adhering to their patents or whatever limitations they want to apply to the system and software.

The whole SDK expects you to have Visual Studio in order to develop for it. Let me know if there is a truly open source and cross platform implementation of both drivers, hardware and software that I am free to implement in my own package without getting sued.

Sure the code may be open but the framework, the hardware and SDK behind the whole Kinect is NOT open source in either spirit or word.

What framework? What SDK? So what if the hardware is not open source?

There is no way to implement the Kinect without blessing or buying from Microsoft or adhering to their patents or whatever limitations they want to apply to the system and software.

Yes, you have to buy the hardware from them - who cares? They haven't applied any limits to the system or the software you use with it (the libfreenect driver), and anything you build on top is, or can be, open source.

The whole SDK expects you to have Visual Studio in order to develop for it. Let me know if there is a truly open source and cross platform implementation of both drivers, hardware and software that I am free to implement in my own package without getting sued.

WTF are you on about? You don't need an SDK, and the libfreenect drivers were originally Linux-only, in fact. Seriously, have you got any idea what you're talking about, or are you just an anti-MS crybaby looking for a rant?