A smart home vs an automated home

@matt-shepherd Something I hadn't thought of. Nice idea. I do have an Amazon Echo in my living room. My only question on that is: have they figured out a way to separate 2 Amazon devices, like an Echo in one room and a Dot in another room, so it knows which room a sound came from? Last I checked, that was not possible.

I was looking into it a bit for other automation stuff like turning on a light in a room. For example, when I am in the living room and I say "turn the overhead light on", it turns on the living room overhead light. When I go into the office, where now I am being heard by the Dot, and I say the same phrase, it should turn the office overhead light on. From the reading that I have done, you can't use them that way, but it should be possible somehow.

In his latest video, Andreas Spiess talks about presence detection with an ESP32 by sniffing WiFi traffic. There are also a number of commercial products where two or more units can be installed to triangulate signals from smartphones' WiFi and Bluetooth (I had to install 3 of those in a Mercedes dealership; they were from Netgear if I remember correctly). Basically there is a Master unit and they talk to each other on their own separate WiFi network; I think it is a kind of mesh network, because they can relay data from distant nodes to the Master device. Of course, once installed they need to be calibrated, which meant me standing in a known position with my smartphone in my hands, turning 90° each time I was told to.
I would also have liked the home automation system to be aware of people in the house by means of BT devices, but that is still on the to-do list. As said before, the data analysis is going to be tricky, but it is going to be the main subject of the coming years, as more and more AI and machine learning cloud services are popping up (I took a quick peek at the IBM site and got scared by the number of services that are available which I will never be able to use, as my programming skills are not really the best).

@matt-shepherd If they can then get multi tiered location setup down, that would be awesome. By multi tiered location setup I mean being able to define say 2 houses, maybe a main home and a vacation home, and then have devices that have a defined parent, such as main home or vacation home. That would allow for say having two devices named living room light.

My only thought on that for occupancy sensing would be: what if you walked into the room and didn't say anything? Nonetheless, it gets back to what I said about data: the more you have, the more informed your scripting decisions can be. One other thing: if you used multiple Echos or Dots, you would have to make sure that more than one device doesn't hear the command.

The question may turn out to be whether people are willing to accept less than perfect performance in exchange for occasionally more capability (when it works). I think Z-wave and x-10 were good examples (of unreliability) showing that's not what people want. People seem to prefer less capability, but have it work 100% of the time the way it's supposed to. At the very least, WAF is low on unreliable things.

@NeverDie nice! Sparkfun has a breakout that is 20% cheaper than just the Omron sensor. This is getting closer to my price range. The radar modules are cheap and might be fun, but this would likely yield a working solution sooner.

Once a method of sensing people is selected/found, then MySensors can be used as the transport layer. This leads to the question of the actual "smarts". The various MySensors-supported packages seem to track state, allow control, and have scenes, which are good data and tools for the smarts to work on, but they don't seem to be smart themselves. Am I overlooking something?

Commercial products use the 'cloud' to gather a lot of data from local devices and create an AI of sorts that local devices then query for the appropriate response to specific conditions. I'm not interested in sending all my data to the cloud, so I'm interested in completely local solutions.

Again, this doesn't currently exist (that I know of), but many pieces do. Some are just pieces (Hadoop for storing data, e.g.), some are partway there (Mycroft AI, e.g.), some have large backers (the Movidius AI accelerator). Some assembly required.

Are there more complete solutions that I may not know of?
What goals do others have?

The AMG8833 has an 8x8 grid and a 60° field of view, so with an 8' (2.4m) ceiling it will cover a square roughly 9' (2.8m) on a side at the floor. One pixel will be about 1' 2" (35cm) at the floor. That should be plenty of resolution even without interpolation. I suspect interpolation could give an effective grid of 16x16 at least, maybe more.
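For anyone who wants to play with the numbers, here is a back-of-the-envelope sketch of that ceiling-mount geometry (assuming the sensor points straight down and the 60° field of view from the datasheet):

```python
import math

def floor_coverage(ceiling_height_m: float, fov_deg: float, grid: int = 8):
    """Side length of the square covered at the floor, and the size of
    one pixel there, for a downward-pointing square thermal array."""
    side = 2 * ceiling_height_m * math.tan(math.radians(fov_deg / 2))
    return side, side / grid

# 8 ft (2.4 m) ceiling, AMG8833's 60 degree field of view
side, pixel = floor_coverage(2.4, 60)
print(f"covered square: {side:.2f} m per side, pixel: {pixel * 100:.0f} cm")
# covered square: 2.77 m per side, pixel: 35 cm
```

Swapping `grid=16` in for an interpolated frame halves the effective pixel size without changing the covered area.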

Careful planning and mounting in a corner or on a wall would have some trade offs, but might allow for covering a larger area with one sensor.

One trade-off is identification. Is that heat blob a person or @gohan's cat? That might be doable, but telling whether it is Mom, Dad, or a teenager would probably need supplemental information.

Stationary heat sources (lamps, vents, etc.) could be filtered out, probably in several different ways. I have some large windows that may blur the data, but this is where situational awareness would come in. E.g. if (curtains == open && tod == daytime) then apply filter to pixels x through z, maybe time of year, etc.
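To make the idea concrete, here is a minimal sketch of that kind of conditional pixel masking (all names and pixel lists are invented for illustration; a real node would configure or learn them):

```python
# Hypothetical situational filter for an 8x8 thermal frame: mask out
# pixels known to contain static heat sources (vents, lamps) and,
# conditionally, sunny windows, so the tracker ignores them.

def apply_filters(frame, curtains_open, is_daytime, window_pixels, vent_pixels):
    """Return a copy of the frame (list of 64 temps in C) with masked
    pixels replaced by None."""
    masked = list(frame)
    for i in vent_pixels:               # always-on static sources
        masked[i] = None
    if curtains_open and is_daytime:    # sun coming through the windows
        for i in window_pixels:
            masked[i] = None
    return masked

frame = [21.0] * 64
frame[10] = 35.0                        # a heating vent
out = apply_filters(frame, True, True, window_pixels=[0, 1], vent_pixels=[10])
print(out[10], out[0], out[5])          # None None 21.0
```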

Other obstacles would probably look like cold spots and unless they are large wouldn't affect detection of people. They might dim a bit, so maybe a filter would be needed here.

This is quite doable. I've been thinking about it for a while, and seeing usable sensors for effectively 1/2 price has me a bit excited. I apologize if I have monopolized the podium a bit.

You are pretty much facing the same problems as all the engineers working on self-driving cars or anything else using computer vision (which is going to be tricky to handle on an Arduino alone, and that is why many services rely on cloud computing).

@gohan true for the larger goals, but this sensor is 64 pixels (256 w/interpolation) and we need to track a dot. I think an Arduino could gather the data, do a bit of preprocessing, and (the MySensors part) transmit the data to a Raspberry Pi for "whole house" tracking.

This is pretty low res and I think a pi could handle it. If not, Intel has a movidius usb stick meant for computer vision/ai acceleration, I believe opencv has been ported to it. So while this is on the edge, some of the blood has dried.

The other plus is that houses move slower than cars; unless people are running indoors, a 2 to 3 second refresh rate should be accurate enough.

This is a large project and mysensors would only be a portion of it, so for now I'll try to limit myself to talking about how a node based on this sensor would work and if it fits into mysensors properly or not. There is plenty there to discuss.

@dbemowsk again, sorry for hijacking your thread. I'm going to look at the guides for submitting a node to openhardware.io; I don't promise I'll be fast, so don't stop working on your own ideas.

@wallyllama I think an easier way to do tracking of people in the house would be through BT tags; that way you also get identification. Image preprocessing on an Arduino would, I think, be hard to achieve, maybe on a Pi Zero.

@NeverDie the data sheet says 7 meters max, so there is probably enough margin, at least for typical room sizes in the US. I think the 60° FOV will be the bigger issue: getting coverage. Imagine you place the sensor in the center of the ceiling of an 8 ft (2.4 m) high room. Its field of view covers roughly a 9 ft (2.8 m) square at the floor, but it is shaped like a pyramid with the sensor at the peak, so if you stand flat against a wall, only your feet would be in view.

@gohan's suggestion of Bluetooth tags doesn't have that problem; a tag can be seen anywhere the signal gets to. You can have multiple detectors for coverage and triangulation. If you have a smartwatch or phone you always carry, then you don't even need a separate tag. It is relatively cheap and simple, and most of the tech is done already.

(Now here is where I loop around and start spinning in circles.) I don't want to have to carry anything; it should be possible to detect my presence by all the signals bouncing off me already, like light, or IR, or WiFi, or radar. Then the googling happens......

Perhaps an alternative definition of smart home could be whether it connects to the cloud? Or whether it uses Big Data / Machine Learning / aggregation of the habits of many households to find solutions to things?

Yet another, for me, is whether smart means 'ethical'. For example, a cloud connected home that shared my life patterns with third parties (which is most devices these days..) should never be called smart.

@alowhum well... it all goes to the point "do you have enough money/time/skills to invest in a homemade Big Data / Machine learning project"? Do you even have an idea of how complicated that system would be to setup and maintain later on?

Instead of going off on wild tangents about privacy and the like, I suggest we re-focus by asking what good or useful thing we might accomplish if we could make the thermal 8x8 pixel sensor work. After all, this is the first thread to consider it, and it would be a shame to waste the opportunity.

@NeverDie I agree, but it is pretty much related, as I really don't think image processing could be done on a microcontroller without the help of a backend server that would actually collect all the data from the sensors, correlate it, and then give it a meaning that can actually be used.

In that case, I suggest @wallyllama start a new thread devoted just to the sensor and how best to make use of it. I wager something can be accomplished without resorting to full blown data fusion. Plainly if you tie your success to difficult, unsolved research problems that have long resisted solution, you will quickly bog down.

The two obvious things are: direction of movement and, as has already been mentioned, a finer location granularity within a room. Since it's thermal, it could know that you're sitting on the couch even if you're not moving. That's big. Just think of all the occupancy sensors that wrongly conclude the room is empty if nothing is moving. We've all had that experience, I'm sure.

You guys are thinking of a complex solution to this: a single package that does it all. What if you dumb down the scenario a bit? Don't try to make a determination using only one type of sensor. After doing some more research on the guy that did the infrared doorway sensors, he said that it was a pretty reliable way of counting room occupants. Maybe you use the infrared doorway sensors as a way of counting the number of people in an area. Once you have a reliable count of the people in an area, you can start looking at ways of identifying who those occupants are, if needed. Thinking in a broad sense, putting some fuzzy logic behind data from a number of other sensors, whatever those may be, may give you some kind of fingerprint for a person that could be used to identify people. Using that approach may give you a little better accuracy too, depending on the sensors and logic you use.
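The doorway counter can be sketched as two IR break-beams a few centimeters apart: the order in which they break tells you the direction. This is a toy model for illustration, not the actual circuit from the project mentioned above:

```python
# Two-beam doorway occupancy counter: beam A is on the hallway side,
# beam B on the room side. A-then-B means entering, B-then-A leaving.

class DoorwayCounter:
    def __init__(self):
        self.count = 0
        self.first_broken = None        # which beam broke first: 'A' or 'B'

    def beam_broken(self, beam):
        if self.first_broken is None:
            self.first_broken = beam

    def beams_cleared(self):
        if self.first_broken == 'A':    # A then B: someone entered
            self.count += 1
        elif self.first_broken == 'B':  # B then A: someone left
            self.count = max(0, self.count - 1)
        self.first_broken = None

c = DoorwayCounter()
c.beam_broken('A'); c.beam_broken('B'); c.beams_cleared()   # one person in
c.beam_broken('A'); c.beam_broken('B'); c.beams_cleared()   # a second person
c.beam_broken('B'); c.beam_broken('A'); c.beams_cleared()   # one leaves
print(c.count)   # 1
```

A real node would also need debouncing and a timeout for people who block a beam and then back out.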

@gohan I get it, but at this stage, any bits and pieces that you can put together that can even do a fraction of it is better than nothing. I figure if I can start with the counting people part and somehow layer things on from there I'll be a little ahead of the game.

I don't want anyone to worry that you are hijacking my thread. Speak freely, this is how good ideas come to be. Precisely why I started this thread. I figured it would spark some creativity from the community.

So looking at the MySensors end of this, would it be too far off to think of adding a new node type, "person". A person node could have customizable properties that would allow you to define different useful bits of data related to that person. For example, preferred room temperature, or preferred light level. Heck, you could even have a room or area property that would get set when the system sees you move to a different area. So when you do figure out better occupancy sensing, you can automatically set user preferred light levels and room temps based on who is in the room, and dial them down after a person leaves the room. If you have more than one person in a room, it could take an average of the properties of all in the room to determine a setting like room temperature to provide a happy medium.
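The averaging idea might look something like this (the people and their preference values are invented for the example):

```python
# Toy example of blending per-person preferences for a shared room.
# The "person" records and attribute names are purely illustrative.

people = {
    "mom":  {"temp_c": 22.5, "light_pct": 80},
    "dad":  {"temp_c": 20.0, "light_pct": 60},
    "teen": {"temp_c": 21.0, "light_pct": 40},
}

def room_setting(present, attr):
    """Average an attribute over whoever is currently in the room;
    None means nobody is present, so the room can dial down."""
    values = [people[p][attr] for p in present]
    return sum(values) / len(values) if values else None

print(room_setting(["mom", "dad"], "temp_c"))    # 21.25
```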

I think @dbemowsk is hinting at something that fits my understanding of "emergent behavior": individual simple things interact and create more complex results. How many, and whom, are different questions. Counters in doorways, plus a list of whose phones are at home, maybe add in some historical data of who likes to sit in which chair. There are probably better combinations, but that is what I got from his recent comments.

So looking at the MySensors end of this, would it be too far off to think of adding a new node type, "person". A person node could have customizable properties that would allow you to define different useful bits of data related to that person.

I'd like to hear more about how you would use it. Below is my two pennies worth.

My thinking is that MySensors is a transport for relatively simple data, like state, values, counts, etc., things nodes would need to set the environment up or report back to central command.

A complex object like "person" could have all kinds of attributes and preferences, which would modify values sent to nodes. For example, the curtain controller knows to open during the day, close at night, and maybe close for an hour at 10 am in the summer when the sun shines directly in and heats up the house (could also be a light sensor). But if the weather says it is clear, Kent is in the living room, and it is night, open the curtains: that would be an override coming from central. The node controlling the curtain doesn't need to know it is me in the room; it just needs to accept the modifiers.
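That curtain rule could be sketched roughly like this on the controller side (the rule set and input names are illustrative, not a real controller API; the node itself only ever sees the resulting open/close command):

```python
# Central controller's decision for the curtain node. All inputs
# (weather, presence, time of day) come from other sensors/services.

def curtain_command(is_night, sky_clear, kent_in_living_room,
                    summer_sun_hour=False):
    if is_night and sky_clear and kent_in_living_room:
        return "open"        # person-aware override from central
    if is_night:
        return "close"       # default night behaviour
    if summer_sun_hour:
        return "close"       # keep the direct summer sun out
    return "open"            # default daytime behaviour

print(curtain_command(is_night=True, sky_clear=True,
                      kent_in_living_room=True))   # open
```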

I say this mostly because, as @gohan points out, Arduinos aren't terribly powerful, and telling them too much info may just confuse them.

I liken it to the body. E.g. your finger doesn't have to know whether you are walking up as you are pushing a doorbell; it just extends on command and reports that it made contact, moved forward slightly, and hit a stop. Your spine may get involved if the finger reports excessive heat, or something gooey on the switch, and pulls the hand back in reflex.

@wallyllama As to your first comment about "emergent behavior", that's pretty much what I was getting at.

As to the MySensors node, my thoughts when I mentioned the "person" node were possibly some kind of MySensorized identifier or tag for a person much like a bluetooth tag. The more I thought about it though, you are correct that there would be all kinds of attributes, and most of them wouldn't need to be tied to the tag. The "person" though might be on the controller side where the processing power is greater and where most of that data would be dealt with anyway.

@dbemowsk interestingly, as I thought about these ir cameras, they may require a smarter node (maybe something like a nanopi neo2) to preprocess the data, and then a person tag may be useful.

For example, 64 pixels at 2 bytes to encode each temperature value works out to 128 bytes/s at a 1 Hz refresh, and 1280 bytes/s at the sensor's 10 Hz maximum. Which, if I have been reading this right, is pretty high for MySensors. There are some ways to reduce that, but it is unknown if an Arduino could keep up.
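Rough numbers, assuming raw 2-byte pixels and MySensors' 25-byte maximum payload (which means each frame has to be split across several messages):

```python
# Bandwidth estimate for streaming raw AMG8833 frames over a link
# with a small maximum payload, as with MySensors messages.

PIXELS = 64
BYTES_PER_PIXEL = 2
MAX_PAYLOAD = 25                                 # MySensors payload limit

frame_bytes = PIXELS * BYTES_PER_PIXEL           # 128 bytes per frame
msgs_per_frame = -(-frame_bytes // MAX_PAYLOAD)  # ceiling division -> 6

for hz in (1, 10):
    print(f"{hz} Hz: {frame_bytes * hz} B/s in {msgs_per_frame * hz} messages/s")
# 1 Hz: 128 B/s in 6 messages/s
# 10 Hz: 1280 B/s in 60 messages/s
```

Sending only changed pixels, or only blob centroids after preprocessing, would cut this dramatically.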

I've mostly been doing research on sensors and have only built one node and a gateway, so a lot of what I have been saying about MySensors is assumption.

Does it have a defined method of extending the data types? Or a board that decides? A Glorious Leader we need to cajole? Maybe "user defined" types?

I'm kind of in love with these IR array sensors, and I'm probably not objective about what is best for MySensors as a whole, but I have boxes of opinions I'd like to get rid of, so just ask if you want some.

Again, doing some more brain cell searching and reflecting on the subject of a "person" node for MySensors, I am more and more starting to realize that this part of things may not be in the realm of MySensors. Not to say it shouldn't be part of an HA system, just not handled by MySensors. I think it could be a different module/plugin for whatever controller people are using, e.g. Vera, Domoticz, OpenHAB, etc.

As was mentioned, a "person" node would probably have a great number of properties and attributes that define a person. That in itself, I think, is a great argument as to why it should NOT be a MySensors node. Some of those properties and attributes may be defined by one or more different MySensors nodes, but it may also take data from a different kind of node, based on something like what @wallyllama mentions, which might require a more complex processor such as a NanoPi.

The ways in which a person may be identified could differ greatly between systems and could range from simple to complex. Again, I get back to the simple IR doorway occupancy sensor that can count the number of people in a room. I think that could be a great starting point, and something simple enough for MySensors to handle. Going with that, and later finding varying ways to determine who the occupants of a space are, may be a way to get this started.

@dbemowsk that's something that Netatmo did with their smart IP camera, which is able to recognize who entered the room or your garden; with that you can set some rules in an HA system. I bet it is far from simple to do in a DIY project.

@gohan OpenMV has a single-board camera with OpenCV and MicroPython, another option.

I think @dbemowsk's idea of door sensors fits nicely with MySensors, as he has said. Is there a more appropriate forum for the more complex devices that anyone knows of? I'm thinking if I come up with a node, I can add it like any other, but there will be a lot of talk that ends up a bit off topic.

On topic: are there controllers that are more amenable to the kind of combining of different nodes to identify people that we are talking about? I've used MisterHouse for other things, and Domoticz for my one test node, not enough to really have an opinion.

@wallyllama I was actually a MisterHouse user prior to finding MySensors. The death of the Raspberry Pi 2 that I was running MisterHouse on is what got me looking for other options, which is how I found MySensors. I then tried Domoticz for a bit, mainly because I found Perl, which is what MisterHouse is written in, hard to work with. Domoticz had some limitations too. I now have my Vera controller, which I like. All of these have deficiencies in certain areas, but the nice thing with all of them is that they support many different types of HA hardware.

A thought experiment. 5 known people and 1 pet in a room. 1 living being leaves the room. IR door sensor in place. What information do we want about the new situation? And what sensors would we need to gather it?....

I don't know. I guess I'm wondering what you want from this. Do you care about pets vs humans? Adults vs children? General person count only? If the goal is to not shut the lights off on people, then the last one is good enough.

I have received 2 AMG8832 chips for experimentation, after some delays because of import rules. The labeling on the baggie says they need to be mounted within 108 hours of opening the bag. I would recommend getting a breakout board and not raw chips.

I've been thinking of them as low-resolution cameras that can see in the dark. There may be more clever ways to think of it that I haven't come across, but any computer vision algorithm would work. Motion detection for sure.
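The simplest such algorithm is frame differencing: flag any pixel whose temperature changed more than a threshold between frames. On a 64-pixel frame it is only a few lines (a sketch; the 1.5 °C threshold is an arbitrary choice):

```python
# Minimal frame-differencing motion detector for low-res thermal
# frames, each frame being a flat list of 64 temperatures in C.

def motion_pixels(prev, curr, threshold_c=1.5):
    """Indices of pixels whose temperature changed more than threshold_c."""
    return [i for i, (a, b) in enumerate(zip(prev, curr))
            if abs(b - a) > threshold_c]

prev = [21.0] * 64
curr = list(prev)
curr[27] = 29.5          # a warm blob appears
curr[28] = 27.0
print(motion_pixels(prev, curr))   # [27, 28]
```

Clustering adjacent flagged pixels into blobs and tracking blob centroids between frames would be the next step.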

I have a fairly large living room with a high ceiling. If I mount one in the center, I should cover most of the room; I estimate a person would be about 1 pixel at the floor. The coverage is a pyramid, so at the edges the covered height is zero. Corner mounting like the video shows would probably fix that.

I think these would work better than my idea for a giant capacitive touch screen.

@dbemowsk this can be crudely done via the Routines feature on Amazon Alexa. It lets you attach any phrase to an IoT device state (ON or OFF), so, as in your example, for the living room you can just say "Alexa, living room overheads" and it will turn on the living room overhead lights; similarly for the office you can say "Alexa, office overheads" and it will switch on the office overhead lights. Of course you have to have a different phrase for the OFF state, but you get the idea. But yes, using the same phrase for all rooms, letting Alexa sense which room you are in, and acting accordingly, is an actual smart home. I will keep searching to see if I can find and build something like that.

@sam9s what you are describing, while a nice way to control things, has the same basic flaw as a PIR device: you have to tell it you are there. The PIR (or Alexa) knows what room it is in, but you have to signal it somehow. Alexa is signaled by a voice command, PIR by motion, but if you are quietly reading a book, both of them forget you are there. Alternatively, you can have them assume you are there for a set amount of time after the signal, or until they get an off signal.
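That hold-off behaviour is easy to model: any trigger (voice, motion) restarts a timer, and the room counts as occupied until the timer expires. A sketch, with an arbitrary 600-second hold:

```python
# "Assume occupied for a while after the last signal" occupancy model.

class OccupancyHold:
    def __init__(self, hold_seconds=600):
        self.hold = hold_seconds
        self.last_signal = None

    def signal(self, now):
        """Any trigger: voice command, PIR motion, door sensor..."""
        self.last_signal = now

    def occupied(self, now):
        return (self.last_signal is not None
                and now - self.last_signal <= self.hold)

room = OccupancyHold(hold_seconds=600)
room.signal(now=0)                      # someone spoke at t=0
print(room.occupied(now=300))           # True  (still within the hold)
print(room.occupied(now=900))           # False (the quiet reader is forgotten)
```

The trade-off is visible in the last line: a long hold wastes energy on empty rooms, a short one shuts the lights off on quiet readers.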

The trick is to get them to detect you without actively addressing them. If Alexa can detect breathing, or heat or CO2, etc, then it would solve the problem.

You can combine Alexa with door sensors: if Alexa was triggered and no one has left the room, then someone is still there. That is the idea that @dbemowsk pointed out earlier in the thread.

The trick is to get them to detect you without actively addressing them. If Alexa can detect breathing, or heat or CO2, etc, then it would solve the problem.

If you enter a room and do not say anything, then Alexa has no way of identifying WHO you are. Even if the echo could detect breathing, heat, or CO2, you are back to knowing that someone is there, but not who.