5.3 mmWave Sensors: People Counting Demo (English)

I think we're going to get started. Thank you, everyone, for coming today. This is a walk-through of TI's people counting demonstration for the millimeter wave line of devices. About two months ago we released our first iteration of a people counting demo for the AWR1642 and IWR1642 devices, available on the TI Resource Explorer. Today we're going to do a walk-through of this, talk about the theory behind it, and then also provide a demo.
As for presenters, this is the third iteration of this webinar over the last two days. I am Akash Gondalia. I've been with TI for about a year and a half now, and with the millimeter wave team for about a year. It's been a pretty cool experience, and I'm glad to share this with you today.
This is also the third webinar in our webinar series. If you look on the E2E forums, we have a sticky posted that lists the upcoming webinars, and we also have a feedback thread for people who have ideas for new webinars. We'd love to hear what questions you might have and any content you'd like to see us discuss. So with that being said, just a little bit on the agenda.
First, we'll talk a little bit about what the people counting demo is. Then we'll go over the implementation, which is the theory behind it. Then we're going to set it up and run it for you. We've got a webcam to show you a side-by-side of what it looks like; we're set up in a conference room right now, and we'll show you that.
We'll also show you cases where you might run into issues and how to fix them. And lastly, we'll finish with questions. I'm going to ask you to hold questions until the end of each section, so at the end of our theory section and at the end of our demo section, we'll take questions at that point. OK.
So first, people counting is performed on the IWR1642. This is distinct from the IWR1443 device because the 1642 features an onboard DSP, the C674x, along with the Cortex-R4F that's also featured in the 1443. The reason this is a desirable platform for many is that it's a single-chip solution for a wide variety of applications. The device outputs a point cloud, and the addition of the DSP enables object tracking. This demonstration shows how to locate and track moving objects.
There's a variety of benefits to people counting, especially in areas where you wouldn't want a camera. So it's privacy-conscious, but it also works where a camera might fail, such as at night or in fog or smoke. For this reason, the applications really are across the board: security, anywhere you would need to detect motion, anywhere you might be looking to aid or even potentially replace a camera.
At a high level, it's fairly easy to understand. First we have an RF input from the radar sensor. We send that to our ADC, then send it for processing, and afterwards we track our data to try to establish what is a person, what isn't a person, and how that person is moving. Again, this is implemented on our IWR1642 EVM, which is called the IWR1642 Boost; the 42 means four receivers and two transmitters on this EVM. The software comes from the latest version of the millimeter wave industrial toolbox, which can be found on the Resource Explorer.
If you haven't been there, I'd highly recommend checking it out: all kinds of applications, all the source code we've written, and all the examples for these devices are available for anyone to use and evaluate. And we're definitely planning on being more aggressive with the content we're putting up on the Resource Explorer through 2018 and beyond. There is also a full-fledged TI Design coming for this (we have many TI Designs for all kinds of equipment), which will show full schematic diagrams, plus timing and power diagrams, for the application of this device.
All right, now, we've been testing two different configurations, a 5-meter configuration and a 12-meter configuration. The one that's released and available now is the 5-meter configuration, which enables the parameters shown right here. And let's talk about the processing chain. This is a lot to take in, but it's pretty straightforward given my explanation two slides ago, right?
We take our data in on the RF side, and it's fed through the ADC. We then process the data: we look for object detection and provide some Doppler estimation. And then finally, we use the DSP to do tracking, to figure out what is a person and what isn't a person, and we output that to a display in some capacity.
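To keep the order of operations straight, here's that chain condensed into a small C enum. The names are hypothetical labels for this write-up, not identifiers from the actual demo source.

```c
/* Hypothetical stage labels for the people counting processing chain,
 * listed in execution order; not identifiers from the demo code. */
enum ProcessingStage {
    STAGE_ADC_CAPTURE,        /* RF in, digitized by the ADC */
    STAGE_RANGE_FFT,          /* range processing per antenna, per chirp */
    STAGE_ANGLE_ESTIMATION,   /* azimuth estimation across receive antennas */
    STAGE_CLUTTER_REMOVAL,    /* drop returns from stationary objects */
    STAGE_CFAR_DETECTION,     /* object detection in range and angle */
    STAGE_DOPPLER_ESTIMATION, /* keep points with non-zero velocity */
    STAGE_TRACKING,           /* group points into people on the DSP */
    STAGE_OUTPUT              /* stream targets out to the visualizer */
};
```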
So first, the front-end part of it: ADC samples are EDMAed into scratch buffers in a ping-pong fashion. Then range processing performs an FFT per antenna, per chirp. The range processing results land in local scratch buffers and are EDMAed to L3 with a transpose. As for angle processing, since the 1642 does not have an elevation antenna, there's no elevation information for this particular EVM. At this point, static clutter removal is performed to take out stationary objects that aren't people. And after this, we do object detection.
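As a rough illustration of the static clutter removal step: subtracting the per-range-bin mean across chirps removes the zero-Doppler (stationary) component. This is a minimal sketch assuming a simple chirps-by-range-bins buffer layout; the layout and names are mine, not the demo's.

```c
#include <stddef.h>

/* Minimal static clutter removal sketch: for each range bin, subtract the
 * mean across all chirps in the frame from both real and imaginary parts.
 * Stationary reflectors (walls, furniture) contribute the same value to
 * every chirp, so they cancel; moving people survive. */
void static_clutter_removal(float *re, float *im,
                            size_t num_chirps, size_t num_bins)
{
    for (size_t bin = 0; bin < num_bins; bin++) {
        float mean_re = 0.0f, mean_im = 0.0f;
        for (size_t c = 0; c < num_chirps; c++) {
            mean_re += re[c * num_bins + bin];
            mean_im += im[c * num_bins + bin];
        }
        mean_re /= (float)num_chirps;
        mean_im /= (float)num_chirps;
        for (size_t c = 0; c < num_chirps; c++) {
            re[c * num_bins + bin] -= mean_re;
            im[c * num_bins + bin] -= mean_im;
        }
    }
}
```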
So at this point, the sensor and the processor have given us a list of objects. Well, we still have to figure out which of those might be false detections. That's where the CFAR is performed: we perform two CFAR cell-averaging smallest-of (CASO) passes, one in the range domain and one in the angle domain. At that point we have our list of objects, and we then do Doppler estimation by finding objects that have a non-zero velocity.
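For reference, here is a minimal one-dimensional CFAR-CASO sketch of the kind of pass just described; the window sizes and threshold scale are illustrative assumptions, not the demo's tuned values.

```c
#include <stddef.h>

/* 1D CFAR-CASO sketch: estimate local noise from training cells on each
 * side of the cell under test (skipping guard cells), take the smaller of
 * the two averages, and declare a detection if the cell exceeds that noise
 * estimate scaled by a threshold. Returns the number of detections. */
int cfar_caso_1d(const float *power, size_t n, size_t guard, size_t train,
                 float threshold_scale, int *is_detection)
{
    int count = 0;
    for (size_t i = 0; i < n; i++) {
        float left = 0.0f, right = 0.0f;
        size_t nl = 0, nr = 0;
        for (size_t k = guard + 1; k <= guard + train; k++) {
            if (i >= k)    { left  += power[i - k]; nl++; }
            if (i + k < n) { right += power[i + k]; nr++; }
        }
        if (nl == 0 && nr == 0) { is_detection[i] = 0; continue; }
        float noise;
        if (nl == 0)      noise = right / (float)nr;
        else if (nr == 0) noise = left / (float)nl;
        else {
            float la = left / (float)nl, ra = right / (float)nr;
            /* "Smallest-of" keeps targets near clutter edges from masking. */
            noise = (la < ra) ? la : ra;
        }
        is_detection[i] = (power[i] > threshold_scale * noise) ? 1 : 0;
        count += is_detection[i];
    }
    return count;
}
```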
And then, lastly, we do tracking and grouping to figure out what is a person. At this point, we've got just a group of points, and we know specific things about those points. If those points fall within a certain space, they are a person. If they're spread over a much bigger space, it might be more than one person, or it might not be a person at all. So we group them appropriately to illustrate, via our visualizer, what is a person.
All right, so the tracking module: once we've performed our detection, the tracking module implements localization, and it uses our detection-layer data to figure out how to group points. And then lastly, with our tracking algorithm, and we've kind of gone over this, we take our point cloud, we predict, we associate, we allocate, and then we constantly update. The reporting function queries each tracking unit and produces our algorithm output.
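To make the predict/associate/allocate/update cycle concrete, here is a heavily simplified sketch of one tracker step. The actual demo uses a group tracker that gates and scores whole point groups; this nearest-neighbor, constant-velocity version, with a made-up gate and miss limit, only illustrates the control flow.

```c
#include <math.h>

#define MAX_TRACKS 20   /* matches the demo's default track count */
#define MAX_POINTS 250  /* matches the demo's default point count */

typedef struct { float x, y; } Point;
typedef struct { float x, y, vx, vy; int active, misses; } Track;

static Track tracks[MAX_TRACKS];

/* One tracker step: predict, associate, update, allocate. dt is the frame
 * period in seconds (0.05 for the demo's 50 ms frames). */
void tracker_step(const Point *pts, int n, float dt)
{
    int used[MAX_POINTS] = {0};

    for (int t = 0; t < MAX_TRACKS; t++) {
        if (!tracks[t].active) continue;
        /* Predict: where a constant-velocity target should be this frame. */
        float px = tracks[t].x + tracks[t].vx * dt;
        float py = tracks[t].y + tracks[t].vy * dt;
        /* Associate: nearest unclaimed point within a 1 m gate (assumed). */
        int best = -1;
        float best_d = 1.0f;
        for (int i = 0; i < n && i < MAX_POINTS; i++) {
            if (used[i]) continue;
            float dx = pts[i].x - px, dy = pts[i].y - py;
            float d = sqrtf(dx * dx + dy * dy);
            if (d < best_d) { best_d = d; best = i; }
        }
        if (best >= 0) {
            /* Update: refresh velocity and position from the measurement. */
            used[best] = 1;
            tracks[t].vx = (pts[best].x - tracks[t].x) / dt;
            tracks[t].vy = (pts[best].y - tracks[t].y) / dt;
            tracks[t].x = pts[best].x;
            tracks[t].y = pts[best].y;
            tracks[t].misses = 0;
        } else {
            /* Coast on the prediction; drop after too many missed frames. */
            tracks[t].x = px;
            tracks[t].y = py;
            if (++tracks[t].misses > 10) tracks[t].active = 0;
        }
    }
    /* Allocate: unclaimed points may seed new tracks in free slots. */
    for (int i = 0; i < n && i < MAX_POINTS; i++) {
        if (used[i]) continue;
        for (int t = 0; t < MAX_TRACKS; t++) {
            if (tracks[t].active) continue;
            tracks[t] = (Track){ pts[i].x, pts[i].y, 0.0f, 0.0f, 1, 0 };
            break;
        }
    }
}
```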
So this is the default implementation that's in the prebuilt binaries for the current people counting demo, and we'll show how we can alter a few parameters to change a number of things in a couple of slides. Our max number of points is 250. The max number of tracks, that is, the max number of people we can track, is currently 20. Initial velocities are shown here, max acceleration is 5 meters per second squared, and the frame rate is 50 milliseconds.
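Collected for reference, the defaults from this slide as a hypothetical C struct; in practice the demo reads these values from its configuration file, and the field names here are mine, not the demo's.

```c
/* Default tracker settings from the slide, as an illustrative struct. */
typedef struct {
    int   max_num_points;   /* 250: point-cloud points kept per frame */
    int   max_num_tracks;   /* 20: people tracked simultaneously */
    float max_accel_mps2;   /* 5.0: maximum assumed acceleration, m/s^2 */
    float frame_period_ms;  /* 50.0: tracker runs once per 50 ms frame */
} TrackerConfig;

static const TrackerConfig kDefaultConfig = { 250, 20, 5.0f, 50.0f };
```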
OK, so that's it for the theory section. Are there any questions right now? Any questions at all on the theory? Otherwise, we'll move forward to the demo. A big part of the demo is customization, and we'll talk about that in a minute.
So there are three factors here. First is the chirp configuration, which is the way we configure the RF that's going out of and coming back into the sensor. This is a pretty basic part of all the mmWave sensors and all of our demos for them. Next is EVM installation. For any people counting setup, you need to install your EVM in a spot where it can look at the room, scan it, and find people in all parts of it. We'll talk about that next.
And then next is parameter customization, or as we sometimes call it, tuning, to tune for a specific scene. A smaller scene like a conference room might be handled differently than a hallway, a parking lot, or any other given area, so you have to tune for it appropriately. We'll talk a little bit about that.
So first, EVM setup, [INAUDIBLE] so, you know, four steps to our desired demo. First is just setting up the EVM. Right now, this is our setup: we've got our EVM in the corner of a conference room at a slight tilt. A tilt anywhere from 10 to 45 degrees works; we've got it at about 10 degrees because this is a bit of a smaller space, and 10 degrees is able to capture everyone adequately.
And the next step is to just run the demo and see what's going on with it. So, all right, we'll go ahead and run that. When you do run the demo, as shown here in the quick start guide, you can run it right here from an executable. I've actually got mine loaded through Matlab, so I'm going to run it through there. And let's get a camera going. Sorry, one second. [INAUDIBLE] the camera?
It should be audio.
Is it? It's not audio. I had it going just a minute ago. Let me see something. Oh, here we go. OK, can you see our room?
No.
Oh, it's not sharing. Oops, thanks. It wasn't sharing the [INAUDIBLE] so let's take a look here.
Now I can see it.
Cool, so this is the prebuilt binary, what's included already; no tuning has been performed on it. So this is our conference room. I'll go ahead and get up and walk around. There might be instances where you find ghosts. Where--
Tell them what a ghost is.
A ghost is a false detection that's interpreted as a person.
[INAUDIBLE]
And you can see the point cloud data on the left. There are actually a lot of points going on. It's just a matter of grouping them and figuring out which of them are a person.
[INAUDIBLE] plugged in?
Whose?
Yours.
Mine? Not quite. Are there any ghosts? You usually see ghosts when you get close to a wall.
Make certain that we can see four people here.
Right.
Four people [INAUDIBLE] the images.
[INAUDIBLE]
Traces, yeah, the traces on the left. Those lines actually show each person's path over time.
[INAUDIBLE]
Oh, yeah, so there's a ghost. Right there. OK, so this is a version just straight out of the box of what the people counting demo looks like. It hasn't been tuned for this scene. But let's talk about tuning for just a second.
You mentioned that the more people in the room, the more images we'd see.
Right, so we got four people in the room, or now we've got three people.
So Akash, the maximum of 20 people that it recognizes, is that a limitation of the DSP processing power?
So when you instantiate the tracker, you can preallocate the memory, so the limitation is really how much you allocate. In this configuration, we preallocate 250 points and enough space for 20 trackers. You can actually increase that, depending on how you tune for the [INAUDIBLE].
It's not a device limitation; it's the way we have implemented it. You can definitely choose to vary your parameters and accommodate a few more people. It's also based on the separation between two people next to each other, and the settings we have chosen to distinguish one person from the one next to them. So there are a few parameters that are [INAUDIBLE] can be customized if one chooses to dig deeper into it.
OK, so you also presented a short range and a long range. Is the count different for short range versus long range?
So short range versus long range has more to do with the RF chirp we use. With the short-range one, there are just no detection points beyond 6 meters, while the long-range one can enable detection points out to 15 meters. So that effectively changes the radar's field of view, or maximum sensing area. Whatever detections you have within that area could be memory constrained or people-density constrained, depending on which one kicks in first. So you can imagine that in a larger area, you could pack in a greater total number of people for the same density.
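As background for why the chirp sets the range limit: in an FMCW radar the beat frequency grows linearly with range, so the maximum range is capped by the highest intermediate frequency the ADC chain can digitize. A quick sketch with illustrative numbers (these are not the demo's actual chirp parameters):

```c
#include <stdio.h>

/* Maximum FMCW range: R_max = (f_if_max * c) / (2 * S), where S is the
 * chirp slope in Hz/s and f_if_max the usable IF bandwidth. The values
 * below are illustrative; a slower slope stretches the same IF budget
 * over a longer range, which is the short- vs long-range trade-off. */
int main(void)
{
    const double c        = 3.0e8;    /* speed of light, m/s */
    const double slope    = 70.0e12;  /* 70 MHz/us expressed in Hz/s */
    const double f_if_max = 2.5e6;    /* usable IF bandwidth, Hz */

    double r_max = (f_if_max * c) / (2.0 * slope);
    printf("max range: %.1f m\n", r_max);  /* ~5.4 m for these numbers */
    return 0;
}
```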
All right, so back to the slides.
[INAUDIBLE]
So back to the slides. We ran the demo and observed the scene. So now we get into the tuning, right? We want to change the parameters of how we track. And we've got a full-fledged tuning guide; this is just a small part of it. These are the allocation parameters.
So the defaults right here: our SNR Threshold is 250, and our Points Threshold is 5, that is, the minimum number of points needed to detect a person. We actually have another set where we raise the SNR Threshold to 350 and the Points Threshold to 20, so you need 20 points in order to track a person. That's much better for a smaller space like the one we're in. So I'm going to go ahead and get that set up and running here real quick, if you could just give me a second. All right, so yeah, move around a little bit.
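For clarity, here are the two allocation settings just described, side by side as a hypothetical struct; in the demo these values come from the config file, and the names below are mine.

```c
/* Allocation parameters: a candidate point group must exceed both
 * thresholds before a new track (person) is allocated. Raising them
 * demands stronger, denser groups, which suppresses ghosts in small,
 * reflective rooms at the cost of some sensitivity. */
typedef struct {
    float snr_threshold;    /* combined SNR the group must exceed */
    int   points_threshold; /* minimum points needed to start a track */
} AllocationParams;

static const AllocationParams kDefaults  = { 250.0f, 5 };  /* prebuilt binary */
static const AllocationParams kSmallRoom = { 350.0f, 20 }; /* tuned for this room */
```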
This is [INAUDIBLE]
Right, and so this is with a higher SNR threshold and a higher points threshold. [INAUDIBLE] Did it brick? No, we're good. We're good.
[INAUDIBLE]
Oh, right, so those: you see in the point cloud there are a lot of reflections going on, a lot of extra points that the sensor thinks are outside the room. These are reflection points. But because we set our SNR threshold higher, it's not taking these points in; it's filtering them out, so you won't get false detections showing up as ghosts. I think you'll see this one's actually a bit cleaner, definitely considerably cleaner, and has better detection.
So there's a variety of ways we can tune this. I'm not going to go through all of them. Yeah, we can take a seat now. We won't be able to go through all of these here, since this is just part of the guide. But the guide has been posted: right now it's on E2E. If you go to the thread you probably signed up for this webinar on, the latest post has that guide attached.
And in a couple of weeks, in about 2 and 1/2 weeks, we'll be posting another release on the Resource Explorer that includes this full-fledged tuning guide. But we went ahead and released it now on E2E to get it out there. From there, you tune and repeat until you have your desired demo behaving well for your particular scene. So I guess we finished up a bit early. Right here, I'll go ahead and open it up for questions. Are there any questions?
Can you talk about specific application use cases where you guys are seeing interest for this product, like people counting?
So I probably can't get too specific. But we use the blanket term of building automation.
So we see this being used in applications where you would typically want to know of the presence of a person and how many people are present, and based on that, adjust ventilation, maybe, or keep track of people entering or taking specific paths, figuring out the heat map of paths traversed by people in, let's say, places like malls and walkways, to figure out where people pay more attention and where they stick around a lot more.
So there are numerous applications for the features that our people counting demo offers. It's not just counting the number of people; it's tracking them, and it gives you a history of where they have traveled. So there's a lot of statistical information available. It's much more reliable than a standard PIR sensor, which is typically used in an indoor setting to identify occupancy. Compared to that, our sensor is a lot more robust.
It works across different kinds of environmental conditions, whether it is day, night, darkness, fog, or even [INAUDIBLE] and things like that, like security applications, where outdoor security scenarios would be much more easily handled by this implementation of people counting. So at least that gives you an idea of where you could see a specific application [INAUDIBLE] this kind of demonstration. Does that help?
Yeah, that helps. Thank you.
I have a question. I actually have two questions. I understand it's not necessary to have Matlab to run this?
Right.
OK, question number two is, you showed four EVM modules; could you explain the difference between those?
OK. So this slide that you're looking at is our regular offering of millimeter wave EVMs, [INAUDIBLE] two automotive and two industrial [INAUDIBLE] EVMs. The one that we are using here is the IWR1642 Boost, which is the evaluation module for the IWR1642 device, second in each of the lists that you're seeing. It has a DSP built into the device along with the [INAUDIBLE] core, and the [INAUDIBLE] with two transmitters and four receivers integrated onto the same device. That's the one we are using.
OK, thank you.
And to answer your other question: yes, you don't need Matlab to run this. There is an executable available that you can download through the Resource Explorer. All it does is [INAUDIBLE] display the information sent out by the EVM [INAUDIBLE] the [? UI port. ?]
And the parameter configuration, or tuning, whatever you call that, could also be done without Matlab, right?
That is true. There is a config file that serves as an input to the demo. In the future, [INAUDIBLE] we might have a GUI version that allows some kind of customization on top of the existing demo that's out there. But yes, it is configurable, to answer your question.
So right now this is available on the [INAUDIBLE]. The EVM is also available to purchase, [INAUDIBLE]. You should be able to replicate the setup very easily; the images are available to load onto the device. And within, I guess, a few minutes, or maybe half an hour or so, you should be able to run the demonstration yourself in a setting just like what we have here.
So the angular setting that you had, I think you said something like 10 degrees, do you have something that talks about spacing and room dimensions?
Yeah, let's go back to the slide I think you're referring to. Is this the one you meant?
Yeah. Yeah.
Yeah, so we hold the EVM at a little bit of a tilt rather than facing it straight toward the opposite wall.
OK. So the idea is, if you set it up and look at the visualizer, you can see the point cloud as you stand in front of it. Depending on your environment, if your ceilings are low, then to reduce noise you would increase your tilt further down, away from the ceiling. That's why we recommend a range. So you could stand there, set it at one angle, tilt it down a little bit, and see if it improves. You want to make sure you get a reasonably rich point cloud while standing there before you start worrying about other tuning parameters. Setting it up correctly, based on the geometry of your space and the area you're trying to cover, gives you the most bang for your buck in terms of improving performance.
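A rough way to reason about the tilt is to compute where the edges of the elevation beam meet the floor for a given mounting height. The mount height and beamwidth below are assumed round numbers for illustration, not the EVM's datasheet figures.

```c
#include <math.h>
#include <stdio.h>

/* Floor coverage vs. downtilt: the beam's lower edge (tilt + half beamwidth)
 * hits the floor near the sensor; the upper edge (tilt - half beamwidth)
 * hits it far away, or never if it points at/above horizontal. More tilt
 * pulls coverage closer in and away from the ceiling. */
int main(void)
{
    const double PI      = 3.14159265358979323846;
    const double deg     = PI / 180.0;
    const double height  = 2.5;        /* assumed mount height, m */
    const double half_bw = 20.0 * deg; /* half of an assumed 40 deg beam */

    for (double tilt_deg = 10.0; tilt_deg <= 45.0; tilt_deg += 5.0) {
        double tilt = tilt_deg * deg;
        double near_edge = height / tan(tilt + half_bw);
        if (tilt > half_bw)
            printf("tilt %4.1f deg: floor lit from %.1f m to %.1f m\n",
                   tilt_deg, near_edge, height / tan(tilt - half_bw));
        else
            printf("tilt %4.1f deg: floor lit from %.1f m outward "
                   "(upper edge at/above horizontal)\n", tilt_deg, near_edge);
    }
    return 0;
}
```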
Right. So do you have something that talks about the room dimensions, or the mounting height, and the parameters you'd set, like the tilt?
[INAUDIBLE]
Yes, if you download the lab, in the user's guide section we illustrate the setup considerations with respect to the room and ceiling heights.
OK, thank you.
Is it possible to make this presentation available?
I don't know if it was announced earlier on, but the webinar is being recorded, audio and video. Along with the slide deck, it will be available on training.ti.com under the millimeter wave webinar series.
OK, when?
We are hoping to have it up by the end of the month or in the early April time frame.
OK, thank you.
And that tuning guide, which includes the bulk of these second-half slides, is available now on E2E, and it will be released with the next people counting demo release, which should be the first week of April.
So we have plans to announce a lot more upcoming webinars; we are hoping to have at least one per month. The E2E forum post through which you signed up for this particular webinar is the one we will keep updating, and that's where you will find the announcements for future ones, so please do keep checking that particular post. We also have a link there to a survey where you can request something of specific interest to you or your customers, which will help us choose the right topics for our future webinars.
OK, so I think we are doing very well on time. If there are no more questions, we might close out early. We always have the E2E forum; the link is at the end of the presentation, which we will upload. That's the easiest way to get concerns and questions answered related to millimeter wave devices. Please do check the TI Resource Explorer; we keep updating it with the latest and greatest software and improvements to the existing software as well. That is the right place to follow the latest happenings on our devices in terms of software and our newest algorithm implementations.
Yeah, thank you.
Thank you.
Thanks for joining. Bye.
[INAUDIBLE]
Bye, bye.
Bye.
Thank you.