It looks like it's:

A) Proprietary, and therefore not easily incorporated into LinuxMCE, due to potential licensing/cost issues;
B) Limited in scope/use in a home setting... a person knows where they are in their own home (absent someone with Alzheimer's), and it's of limited use in tracking others because they'd need their phone at all times; and
C) There are other floor plan drawing apps, including the open-source Sweet Home 3D, that are just as good at creating nice floor plan layouts for use with LinuxMCE's lighting and other home controls.

It's interesting tech, though. It looks like it would be more useful for places like airports, shopping malls, casino/resort complexes, etc.

Logged

See my User page on the LinuxMCE Wiki for a description of my system configuration (click the little globe under my profile pic).

I think the major use of this technology would be to allow a tablet/mobile device to know which room it was in, and automatically show all the controls for that room, etc. With so many possible controls available, UIs that intuitively display the most appropriate controls depending on their context are key. Obviously the use of this tech is not to tell someone where they are in their own house.

I'm likely going to do something like this using Bluetooth and WiFi signal strengths. Almost every room in my house will have a Bluetooth dongle of some sort, and a few of the main rooms will have their own WiFi access points. Room movement gestures would be nice. E.g., I've just put music on in the dining room, which has French doors leading out to the garden; within 30 seconds I go outside into the garden with the device I just used to turn the music on, and the system knows, based on my preferences, that I'd like the music to follow me out there.
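The follow-me idea above boils down to two steps: pick the room whose sensor hears the device loudest, and move the media if the room changes within a short window of starting playback. A minimal sketch in Python (all names and the 30-second window are illustrative assumptions, not LinuxMCE code):

```python
import time

FOLLOW_ME_WINDOW = 30  # seconds after starting media in which a room change follows the user

def nearest_room(rssi_by_room):
    """Pick the room whose sensor hears the device loudest.

    RSSI values are negative dBm, so the value closest to zero wins.
    """
    return max(rssi_by_room, key=rssi_by_room.get)

def should_follow(start_room, start_time, rssi_by_room, now=None):
    """Return the new room if the device moved rooms within the window, else None."""
    now = time.time() if now is None else now
    current = nearest_room(rssi_by_room)
    if current != start_room and now - start_time <= FOLLOW_ME_WINDOW:
        return current
    return None
```

So if music started in the dining room at t=0 and by t=20 the garden dongle hears the phone loudest, `should_follow("dining", 0, {"dining": -60, "garden": -45}, now=20)` returns `"garden"`; at t=40 the window has expired and it returns `None`.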

I just got my Ubuntu 12.04 LTS Xen server up and running last night, so I will hopefully be installing my live/development/test system on that tonight. I recently got an HP TouchPad that now has ICS on it, so I'll be getting qOrbiter on that and trying to implement the above soon enough.

About all I'm able to do is install/configure the apps... I can't actually code (maybe someday I will). I've toyed with bluemon a little, but I'd be lying if I said I know it well... and I haven't touched it in years (I don't think the developer has either, actually). I can do a little shell scripting, though...

One architecture question: How does LinuxMCE go about releasing a device that has been captured for a particular job, once it is done? For instance, with Bluetooth, how would we set it up so that the system could grab the Bluetooth services for a Bluetooth headset, but then release it once done, to later do a brief scan for known signals in proximity with bluemon, for instance? Not in detail, mind you, just the overall function...


Bluetooth_Dongle uses the VIPShared routines (shared with Pluto's other project, PlutoVIP), which run in a constant run loop that scans for Bluetooth devices, followed by inquiries to find RFCOMM channels. This happens constantly, and while the inquiry is in progress, BlueZ will simply return an error.

This is an issue because, if you simply run scanning loops in parallel, your scanning loop will fail at seemingly random times while VIPShared does the inquiry for the mobile orbiters.

So, at the very least, a mechanism needs to be coded in Bluetooth_Dongle so that devices can register to have scanning time, and DCE commands can be sent to do scanning to target devices, while they emit events asynchronously.
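The registration mechanism described above amounts to serializing access to the adapter: the inquiry loop and any registered scanners take turns, so a scan never runs mid-inquiry. A minimal sketch in Python (class and method names are hypothetical, not actual Bluetooth_Dongle code):

```python
import threading

class ScanScheduler:
    """Serialize access to a single Bluetooth adapter so a VIPShared-style
    inquiry loop and registered proximity scans never run concurrently.
    Illustrative sketch only, not LinuxMCE's actual implementation."""

    def __init__(self):
        self._lock = threading.Lock()   # one adapter, one user at a time
        self._scanners = []             # callbacks registered for scan time

    def register(self, callback):
        """A device registers to be given scanning time between inquiries."""
        self._scanners.append(callback)

    def run_inquiry(self, inquiry_fn):
        # The mobile-orbiter inquiry holds the adapter exclusively.
        with self._lock:
            return inquiry_fn()

    def run_scans(self):
        # Registered scanners get the adapter between inquiries; in a real
        # DCE device their results would be emitted as asynchronous events.
        results = []
        with self._lock:
            for scan in self._scanners:
                results.append(scan())
        return results
```

The key point is the shared lock: whichever of `run_inquiry` or `run_scans` grabs it first finishes before the other touches the adapter, which is exactly the collision the post says currently produces random BlueZ errors.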

This is my frustration, this project needs actual people willing and able to code, and all of us who CAN code, are tied up either (1) fixing bugs, or (2) working on our respective corners of the system which demand our full attention. We need people who either can code, or are willing to put in the energy and time to learn to accomplish what they want to do with the system.

This may be a dumb question... but here goes anyway... Could you suspend the run loop by using a kill on it when a menu item is triggered, launch a different Bluetooth service, and then restart the run loop afterwards, by wrapping the call to the desired Bluetooth-enabled program in a script? It's hardly elegant, but is the system able to do this without automatically trying to restart the run loop?

You would not need to do this if some things were refactored in VIPShared, but this will take some investigation, and somebody will just need to sit down and do some serious elbow-grease research. The fundamental problem is not simple, and can't be solved by killing things like that.

I don't believe Bluemon is a better technology than IndoorAtlas for devices that fulfil the requirements; it is, however, more device-agnostic. I actually implemented FollowMe media using Bluemon and MythTV back in 2006.

After I've got qOrbiter onto my Jogglers, location-change gestures will likely be one of the first things I look at, in combination with interfacing my PIRs into LinuxMCE.

Given how coarsely a PIR distinguishes motion, I do not believe a PIR would be the best way to implement motion gestures.

The more effective route would be to use a Kinect (or its OEM counterpart, I forget the name), as it provides enough information to feed to a computer vision library like OpenCV to recognize not only gestures, but faces and other unique shapes.

To do the kind of thing that I think you're talking about would require a number of different sensors, all located in strategic positions. Frankly, I don't see the benefit, at least until the motion sensing and visual/auditory recognition technology gets better than it is now.

Note: Bill Gates has a system whereby a person wears a pin containing an RFID chip, and the house changes the environment to suit that occupant or guest. Now look at the WAF problems associated with that!!!

PS: The Nokia N900 has an experimental app that allows you to control the phone by SMS. Combined with something like bluemon, LinuxMCE could send an SMS message to the phone that would cause it to launch a particular app or command based on Bluetooth-signal-strength-derived location... If you could do that on an Android device, and/or even run an SMS-like service locally (or over D-Bus?!)... you could achieve something like what you want.

« Last Edit: July 16, 2012, 07:48:13 pm by JaseP »


The first usable system I saw was an echolocation mesh in the building at the AT&T (née Olivetti) Research Lab in the UK, where they had placed arrays of echolocation transducers in the raised ceilings of the floors, which created specific harmonic signatures when they bounced off each badge. Its initial cost was in the six figures.

Pluto's approach made sense for its integrated proof of concept: make the media directors clients, constantly pinging for the phones, which were servers advertising an RFCOMM service on a known channel (this was allowed on Symbian; however, as Hari discovered, the J2ME port required that the service advertise itself via SDP). This created a semi-reliable way for the system to figure out whether you were in one room or another.

This worked well in large houses because of the nature of Bluetooth signals and their transmission classes versus signal attenuation, but in smaller living spaces it was hell, because the clouds would overlap, and it took so much logic in the system just to try to figure out where a device actually was. Not to mention, the constant use of Bluetooth was hell on battery life. But it DID work, if you understood how Bluetooth's signals propagate.

However, as time has gone on, this method has become much more difficult with modern phones, as vendors want developers to use WiFi instead for "network-like" things. We can still utilize it, but it needs to be brought to the present reality of smartphones with multiple network interfaces. That's without even mentioning that the Bluetooth interface was horrendously slow for how it was being used (typically 30 seconds between binding phases, and typically 1 second between button presses to show the new screen), and that the target was phones with hard buttons only, no touch screen ability, and insanely low resolution (176x208 was the initial target).
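The overlapping-clouds problem described above is usually damped with hysteresis: don't switch rooms on one reading, switch only after several consecutive readings agree. A minimal sketch in Python (hypothetical, not Pluto's actual logic):

```python
class RoomTracker:
    """Only switch rooms after `threshold` consecutive readings agree,
    to damp the flapping caused by overlapping Bluetooth 'clouds' in
    small living spaces. Illustrative sketch only."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.room = None          # current best guess
        self._candidate = None    # room trying to take over
        self._count = 0           # consecutive readings for the candidate

    def update(self, strongest_room):
        """Feed one reading (the room hearing the device loudest); return the current room."""
        if strongest_room == self.room:
            self._candidate, self._count = None, 0      # reading confirms current room
        elif strongest_room == self._candidate:
            self._count += 1
            if self._count >= self.threshold:
                self.room = strongest_room              # enough agreement: switch
                self._candidate, self._count = None, 0
        else:
            self._candidate, self._count = strongest_room, 1
        return self.room
```

With a threshold of 3, a single stray "hall" reading while you sit in the kitchen is ignored, while three kitchen readings in a row are required before the system first commits to the kitchen at all.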

Fiire sidestepped this problem by making remotes with a fixed MAC (media access control) address, and sending that address each time the remote connected with a target media director via its dongle. The other aspect was that the radio chips in these remotes had a rampable transmit-power adjustment, coupled with a directional antenna, which allowed them to quickly find a machine to bind to. It worked, and it worked well, but Fiire contracted with Gyration to build the custom device, of which only a limited run was made, and no more of these devices are available.

The approach of using a camera with a depth-imaging sensor is the most reliable and cost-effective option at present: it costs $120 or so for a device, and the protocol for it has now been decoded and is well understood. The data emitted by the camera is literally a 1-to-1 mapping of RGB pixel to depth, making this shit _really_ easy to package up and send to recognition engines. The DCE device just needs to be done!

Follow Me is merely an event that is fired to the Orbiter plugin. Anything can fire it, and if you can provide a definite PK_User, the system will do the right thing.

But please, forget the PIR nonsense. It will trigger far too many false positives at best.

I haven't got time for a lengthy reply, but my mentioning my PIRs was an aside I probably shouldn't have brought up, given its closeness to the topic at hand. The PIRs are for my alarm system: dual-mode, internal & external; I've also got the perimeter covered with door/window alarm sensors, topped off with a healthy dose of fire/smoke/CO detectors.

JaseP, I think you're overcomplicating what I said I was going to do. I almost always have my mobile in my pocket, so it would be very easy for it to know which room it was in. If I had just used it to turn on music in one location, and I then move to another location, it is relatively easy to process that logic and cause the media to follow me. That's all I'm saying.

I'm not talking about human presence detection, but device presence detection, two quite different things in my opinion.

I'm going to be sticking a tōd on the dog for dog presence detection ;-)