OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018.

**Tracking of LED on UAV**
http://answers.opencv.org/question/190507/tracking-of-led-on-uav/

Hi,
**Update**: Since the LEDs appear really tiny once the drone starts moving away, is there some way to get around this? For example, is it possible to use a Region of Interest (ROI) around the original LED (assume the ROI is given)? If I extract the LED in the ROI, can I get the pixel coordinates of the extracted LED's centre in the **original image**?
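The coordinate mapping being asked about is just a translation by the ROI's top-left corner. A minimal sketch (the function and variable names here are made up for illustration, not an OpenCV API):

```python
# Map a point detected inside a cropped ROI back to full-image coordinates.
# Assumes the ROI's top-left corner (roi_x, roi_y) in the original image is
# known, e.g. from the crop: roi = img[roi_y:roi_y+h, roi_x:roi_x+w].

def roi_to_image_coords(cx_roi, cy_roi, roi_x, roi_y):
    """Translate an (x, y) point from ROI coordinates to original-image coordinates."""
    return cx_roi + roi_x, cy_roi + roi_y

# Example: LED centre found at (12, 7) inside an ROI whose top-left is (300, 220)
cx, cy = roi_to_image_coords(12, 7, 300, 220)
print(cx, cy)  # 312 227
```

So yes: as long as you keep the ROI's offset, any pixel coordinate found in the crop maps back to the original image with a simple addition.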
======
**Original Question:**
I wish to track (using external cameras) some LED strips (see [this](https://www.ebay.co.uk/p/WOW-4-X-10cm-White-5050-LED-Strip-Lights-12v-DC-Caravan-Boat-SWB-Van-Car-Ip65/1360317143?iid=262453353235&var=561256772536) for the exact LED type) on my UAV.
**Are there any existing methods in OpenCV for tracking/detecting LEDs specifically?**
Thanks!
Asked by malharjajoo on Sun, 29 Apr 2018.

**Tracking Keypoints in SimpleBlobDetector**
http://answers.opencv.org/question/188334/tracking-keypoints-in-simpleblobdetector/

Is there a recommended class or method to use to track each identified blob in the MatOfKeyPoints that results from [SimpleBlobDetector](https://docs.opencv.org/3.3.1/d0/d7a/classcv_1_1SimpleBlobDetector.html)?
Asked by inac on Mon, 02 Apr 2018.

**Can I use OpenCV to detect weeds in a paddock?**
http://answers.opencv.org/question/145292/can-i-use-opencv-to-detect-weeds-in-a-paddock/

I am reasonably new to OpenCV and am looking to start a new project, and just wanted to get some expert guidance and to know if I am barking up the right tree.
I want to build a system that can detect the colour of actively growing green weeds against either a red road, or a paddock with a white/silver stubble background. So I just need to check for green on a video input, and trigger a solenoid to spray the weed with herbicide when green is detected. Currently on the farm I work on (about 20,000 acres) we just blanket-spray the whole paddock for weeds, so spraying only the actual weeds could yield significant chemical savings (cost and environmental).
Can anyone tell me if this is possible with current OpenCV algorithms? Is it possible at a fast enough rate to make it worthwhile (around 20 km/h at 50-100 cm above the ground)? Is it possible on cheapish hardware, i.e. a Raspberry Pi/Odroid XU4/Nvidia TX2 and camera?
I was thinking of having a standalone computer, camera and solenoid for each spray nozzle, or maybe 3 nozzles/solenoids per module, detecting which third of the image the weed is in and triggering the corresponding solenoid + spray nozzle.
Currently there is a system for sale (http://www.weed-it.com/) which I believe uses infrared/NDVI to detect the chlorophyll in the weeds. This system would cost us about $320,000 for a 36-metre-wide setup, so it ain't cheap. I was thinking each spray module could be built for a few hundred dollars at most. With modules retrofitted to a 36-metre spray boom, that would still end up a bit cheaper.
I plan on making this completely open source, and modular. I just wanted to check with the experts to see if this is a feasible project before I dive in too deep. I planned on doing simple colour detection to begin with, and build on the system or add features from there.
I hope that makes sense, and I hope I'm asking in the right place. I really just want to know if it's possible for now, and if it would be worthwhile pursuing. Any input/questions, yes/no, criticism/encouragement is welcome.
Thanks
Adam
Asked by agriadam on Tue, 02 May 2017.

**How is a touch event defined for blobs/fiducials?**
http://answers.opencv.org/question/124017/how-is-a-touch-event-defined-for-blobsfiducials/

Much code is available for multiple-blob tracking, as well as for multitouch events, but how can we define a touch event to begin with? Do we need to detect a minimum blob size, or a blob that stays stationary for some amount of time, or what?
Asked by jamesson on Thu, 26 Jan 2017.

**Robust human detection and tracking in a crowded area**
http://answers.opencv.org/question/120426/roboust-human-detection-and-tracking-in-a-crowded-area/

Hello!
I am working on an application where I need to detect and track people in a crowded indoor area (like a mall). Right now I am using the OpenCV background subtraction class (MOG2) to detect blobs, and a Kalman filter plus the Hungarian algorithm for tracking (based on this video: https://www.youtube.com/watch?v=2fW5TmAtAXM).
The issues I'm having are:
i) blobs merging together when two people come close to each other;
ii) parts of a person not getting detected, which leads to false and multiple detections on one person;
iii) the background subtraction itself leading to too many false detections.
I would like to hear your suggestions for improving this, and any solutions to fix these problems.
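One common mitigation for issues (i) and (ii) is to gate the track-to-detection association by distance, so a merged or fragmented blob cannot capture a far-away track. Below is a hedged pure-Python sketch using greedy nearest-neighbour matching; it is a simplified stand-in for the Hungarian algorithm the question already uses (which computes the optimal assignment):

```python
import math

def associate(tracks, detections, max_dist=50.0):
    """tracks/detections: lists of (x, y) positions.
    Returns {track_index: detection_index} for gated, greedily matched pairs."""
    # All candidate pairs, sorted by distance (closest first).
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    matched, used_t, used_d = {}, set(), set()
    for dist, ti, di in pairs:
        if dist > max_dist:
            break  # remaining pairs are even farther; leave them unmatched
        if ti not in used_t and di not in used_d:
            matched[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return matched

tracks = [(10, 10), (100, 100)]        # Kalman-predicted positions
detections = [(102, 98), (12, 11)]     # blob centroids this frame
print(associate(tracks, detections))   # {0: 1, 1: 0}
```

Unmatched tracks (e.g. a person momentarily lost by background subtraction) can then coast on the Kalman prediction for a few frames instead of being deleted.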
Thanks in advance!
BTW, I'm using OpenCV 3.1, C++.
Asked by abhijith on Wed, 28 Dec 2016.

**Best method to track multiple objects?**
http://answers.opencv.org/question/235/best-method-to-track-multiple-objects/

Hi everyone,
I realize this question is a pretty broad one, but I was wondering: what, in your opinion, is the best method to track multiple objects simultaneously? My goal is to track 12 objects independently of each other and translate their positions into X/Y coordinates.
I've been trying to do HSV matching with CamShift, but I realize that with a limited number of colours to choose from I might not be able to reach my 12-object goal...
Any guidance would be greatly appreciated!!
Cheers,
Chad
Asked by ChadReitsma on Tue, 10 Jul 2012.

**Faster blur**
http://answers.opencv.org/question/63344/faster-blur/

Hi,
I am writing software for real time tracking of cells flowing in a microfluidic channel.
My tracking software (apparently) works fine at about 30fps but I am trying to accelerate it so I can flow the cells faster.
I ran the visual studio profiler and found that these are my main bottlenecks:
```cpp
Frame[jj] = Frame[jj] - ImageBG;  // background subtraction: 7% of execution time
blur(Frame[jj], Blurred, Size(3, 3));  // remove speckle noise from image: 35% of execution time
compare(Blurred, Scalar(THRESHOLD), ImageBin, CMP_GT);  // 8% of execution time
findContours(ImageBin, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);  // 8.5% of execution time
```
By comparison image acquisition is only ~2% of the execution time.
The image coming from the camera is 500x1120, CV_16UC1 (which is why I use compare rather than threshold).
I would be interested in any ideas on how to make these routines (in particular blur) run faster. I do not have a GPU available on this system.
I am using Visual C++ 2013 + OpenCV 2.4.10.
guy
Asked by Guyygarty on Fri, 05 Jun 2015.

**Meanshift/Camshift just on segmented foreground?**
http://answers.opencv.org/question/59024/meanshiftcamshift-just-on-segmented-foreground/

Hi,
At the moment I'm detecting movement by segmenting foreground and background. This gives me blobs which, after I've tidied them up a bit, I'm trying to run CamShift on. My probably very silly question is: should I run CamShift on the back-projected foreground image (with the initial window around the blob), or on the original unsegmented back-projected image (again with the initial window around the blob location)?
Theoretically, which one is better? The foreground image seems best to me, as the target will most likely be surrounded by low-probability pixels, and hence there is much encouragement for the gradient ascent toward the true target in the new image. However, what do people normally do, and why?
Thanks
Asked by ricor29 on Thu, 02 Apr 2015.

**Finding the centroid of a blob in Python**
http://answers.opencv.org/question/35006/finding-the-centroid-of-a-blob-in-python/

I am a newbie to Python and OpenCV and I am trying to find the centroid of a blob. I have successfully been able to find the blob using this code:
```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    # Take each frame
    _, frame = cap.read()
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of blue color in HSV
    lower_blue = np.array([110, 50, 50])
    upper_blue = np.array([130, 255, 255])
    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    # Bitwise-AND mask and original image
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
```
but I can't figure out how to find the centroid of the blob. I have looked at the moments() function but I don't know how or where to implement it in my program. I am using Python 2.7 with the latest OpenCV package and Ubuntu 14.04 LTS.
All help is appreciated and I will gladly provide details if needed.
Thanks,
L
Thanks for the help so far. But could someone provide me with some code that implements the moments() function in Python? All the code I have tried so far will not run because of various errors...
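For reference, the centroid of a binary mask comes from the zeroth- and first-order moments: with OpenCV that is `M = cv2.moments(mask, binaryImage=True)` then `cx, cy = M['m10']/M['m00'], M['m01']/M['m00']`. The same computation in plain NumPy (a sketch, so it runs without a camera; in the loop above you would pass `mask` and draw the result with `cv2.circle`):

```python
import numpy as np

def centroid(mask):
    """Return (cx, cy) of the nonzero pixels of a 2-D mask, or None if empty."""
    ys, xs = np.nonzero(mask)       # row (y) and column (x) indices of blob pixels
    if xs.size == 0:
        return None                  # no blob in this frame
    return xs.mean(), ys.mean()      # equivalent to m10/m00 and m01/m00

mask = np.zeros((10, 10), dtype=np.uint8)
mask[4:7, 2:5] = 255                 # a 3x3 blob centred at (x=3, y=5)
print(centroid(mask))                # (3.0, 5.0)
```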
Thanks,
L
Asked by luketheduke on Fri, 13 Jun 2014.

**Where is the body/code block of these functions?**
http://answers.opencv.org/question/34260/where-is-bodycode-block-of-functions/

Hi,
I am using these functions,
```cpp
m_pBlobTracker = cvCreateBlobTrackerMSFG();
m_pFGDetector = cvCreateFGDetectorBase(CV_BG_MODEL_FGD_SIMPLE, NULL);
m_pBlobDetector = cvCreateBlobDetectorSimple();
m_pBlobTrackAnalysis = cvCreateModuleBlobTrackAnalysisHistP();
m_pBlobProcessing = cvCreateModuleBlobTrackPostProcKalman();
```
in one of my files, and the project is working.
I found the declarations of these in ~\build\include\opencv2\legacy\blobtrack.hpp, but I wonder: where are the bodies of these functions?
Can anyone help me with that?
Asked by amitanvir on Thu, 29 May 2014.

**Track pedestrians**
http://answers.opencv.org/question/15850/track-pedestrians/

I am using the OpenCV sample code peopledetect.cpp to detect and track pedestrians. The code uses HOG for feature extraction and an SVM for classification. Please find the reference paper used here.
The camera is mounted on a wall at a height of 10 feet, angled 45 degrees down. There is no restriction on pedestrian movement within the frame.
I want to track the detected pedestrians' movement within the frame. The issue I am facing is that pedestrians are detected only in the middle region of the frame, as most of the features are not visible as soon as a pedestrian enters the frame region.
I want to track each person's movement in the entire frame region. How can I do it? Is tracking required?
Can anyone give any reference to blogs/code?
Asked by UserOpenCV on Thu, 27 Jun 2013.

**Body tracking algorithms without RGB?**
http://answers.opencv.org/question/25895/body-tracking-algorithms-without-rgb/

I have an Xtion Pro, which provides a depth sensor but not an RGB sensor. I would like to perform body tracking using OpenCV, and I want to detect only the upper body (torso and up). I have one moving camera at a fixed height (chest level to the person). I have researched several approaches: HOG, LBP, Haar, latent SVM, and Kalman tracking. I would like to find out whether these methods can be implemented without an RGB sensor. If so, which methods would you suggest, and why? If not, are there other methods using only depth sensing?
Thank you for your time!
Asked by aakudaku on Thu, 26 Dec 2013.

**How to track a blob per pixel? (blob using findContours)**
http://answers.opencv.org/question/20317/how-to-tracking-blob-per-pixel-blob-using-findcontours/

Hi all,
Here is my code below; there's a place where I need to track a blob per pixel.
Can you tell me how to do that?
```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main(int argc, char *argv[])
{
    cv::Mat frame;
    cv::Mat fg;
    cv::Mat blurred;
    cv::Mat thresholded;
    cv::Mat thresholded2;
    cv::Mat result;
    cv::Mat bgmodel;

    cv::namedWindow("Frame");
    cv::namedWindow("Background Model");
    cv::namedWindow("Blob");

    cv::VideoCapture cap("campus3.avi");

    cv::BackgroundSubtractorMOG2 bgs;
    bgs.nmixtures = 3;
    bgs.history = 1000;
    bgs.varThresholdGen = 15;
    bgs.bShadowDetection = true;
    bgs.nShadowDetection = 0;
    bgs.fTau = 0.5;

    std::vector<std::vector<cv::Point> > contours;

    for (;;)
    {
        cap >> frame;

        cv::GaussianBlur(frame, blurred, cv::Size(3, 3), 0, 0, cv::BORDER_DEFAULT);
        bgs.operator()(blurred, fg);
        bgs.getBackgroundImage(bgmodel);

        cv::threshold(fg, thresholded, 70.0f, 255, CV_THRESH_BINARY);
        cv::threshold(fg, thresholded2, 70.0f, 255, CV_THRESH_BINARY);

        cv::Mat elementCLOSE(5, 5, CV_8U, cv::Scalar(1));
        cv::morphologyEx(thresholded, thresholded, cv::MORPH_CLOSE, elementCLOSE);
        cv::morphologyEx(thresholded2, thresholded2, cv::MORPH_CLOSE, elementCLOSE);

        cv::findContours(thresholded, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
        cv::cvtColor(thresholded2, result, CV_GRAY2RGB);

        int cmin = 50;
        int cmax = 1000;
        std::vector<std::vector<cv::Point> >::iterator itc = contours.begin();
        while (itc != contours.end()) {
            if (itc->size() > cmin && itc->size() < cmax) {
                // tracking blob here!
            }
            ++itc;  // advance the iterator (missing in the original, which loops forever)
        }

        cv::imshow("Frame", frame);
        cv::imshow("Background Model", bgmodel);
        cv::imshow("Blob", result);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
```
I'll appreciate any help here... Thanks :)
Asked by Shaban on Sat, 07 Sep 2013.

**Why is the blob tracking code 'legacy'? What's replaced it?**
http://answers.opencv.org/question/22418/why-is-the-blob-tracking-code-legacy-whats-replaced-it/

I was struggling to find where in the OpenCV API the blob tracking code that EmguCV uses is. I think I have found it, relegated to legacy code: https://github.com/Itseez/opencv/blob/master/modules/legacy/src/blobtrackingauto.cpp
Why is it legacy code, and what blob tracking APIs have replaced it?
Asked by dumbledad on Mon, 14 Oct 2013.

**saturate_cast.hpp undeclared variables**
http://answers.opencv.org/question/20192/saturate_casthpp-undeclared-variables/

I'm trying to compile squares.cpp (from [github](https://github.com/Itseez/opencv/blob/master/samples/ocl/squares.cpp); I'm working in Ubuntu using `sudo g++ -o squares squares.cpp -lpthread -lX11`) and I'm getting error messages claiming there are a number of undeclared variables in saturate_cast.hpp, beginning thus:
```
In file included from /usr/include/opencv2/core/utility.hpp:46:0,
                 from squares.cpp:6:
/usr/include/saturate_cast.hpp: In function '_Tp cv::gpu::device::saturate_cast(schar) [with _Tp = unsigned char, schar = signed char]':
/usr/include/saturate_cast.hpp:61:24: error: '::max' has not been declared
```
I'm guessing I'm doing something wrong but I've no idea what or even what to look for. I'd greatly appreciate any help.
What I'm actually trying to do by compiling squares.cpp is the following: I have some sets of B&W images showing, in 2D, a bunch of light rectangles of similar size and shape (aspect ratio ~6) on a dark, fairly uniform background. I need to find the centre positions and, especially, the angles of the rectangles relative to any edge of the image. Someone recommended squares.cpp; alternatively, if anyone knows of another way of doing it I'd appreciate it. (I'm not a CS, much less a computer vision expert; I have done tracking of circular shapes using the Mosaic plugin for ImageJ.)
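An alternative to squares.cpp for light rectangles on a dark background: threshold, then take each blob's centre and angle from its second-order central moments (this is essentially what `cv2.minAreaRect`/`cv2.fitEllipse` give you). A hedged NumPy sketch for a single-blob mask:

```python
import numpy as np

def blob_centre_and_angle(mask):
    """Centre (cx, cy) and major-axis angle (degrees vs the x-axis)
    of the nonzero pixels in a 2-D mask, via second-order central moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation of major axis
    return (cx, cy), np.degrees(angle)

# Horizontal 2x8 bar: major axis should lie along the x-axis (angle ~ 0).
mask = np.zeros((20, 20), dtype=np.uint8)
mask[9:11, 5:13] = 1
centre, angle = blob_centre_and_angle(mask)
print(centre, round(angle, 1))  # (8.5, 9.5) 0.0
```

For elongated blobs (aspect ratio ~6, as here) the moment-based angle is well conditioned; for near-square blobs it becomes unstable.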
Thanks!
Asked by rsg on Wed, 04 Sep 2013.

**Move-stop-move blob tracking**
http://answers.opencv.org/question/18035/move-stop-move-blob-tracking/

Hi OpenCV-ers, I need to build a blob tracker that works with **move-stop-move** events. Say a human walks into the camera FOV, stops and stands still for a while, and moves again... rinse/repeat. What algorithms exist to collapse these multiple "tracks" (including the stasis) into a single track?
AFAIK, current OpenCV tracking algorithms track continuously moving objects, and any intermittent stop generates a new track. Any thoughts on how I should approach this?
Thanks for your time.
P.S. I know it might be easy to compare blobs and combine tracks, but I need to maintain a bead on the human even when s/he is standing still.
Asked by mikos on Thu, 01 Aug 2013.

**Track rain cells**
http://answers.opencv.org/question/17329/track-rain-cells/

Hello,
I am trying to use OpenCV to track rain-cell displacement. The cells move slowly, have lots of holes, and can disappear.
I found blobtrack_sample.cpp and the result is not bad (see the figure), but I would like to improve it a little bit.
I identified two problems, which I presume can be solved by changing some parameters.
First problem: sometimes the ellipse jumps far from the rain cell (especially when a rain cell moves out of the frame) and tries to catch another rain cell.
Second problem: the background obviously does not change, but blob detection is still not very good. What can I do to detect a blob as soon as a pixel is not white?
I presume that I have to modify some parameters in blobtrack.hpp and blobtrackingauto.cpp...
Thanks a lot for any help.
![image description](/upfiles/13745729572121215.png)
Asked by Teepe on Tue, 23 Jul 2013.

**Why is only the Hue of the image used for CAMSHIFT tracking?**
http://answers.opencv.org/question/16286/why-only-hue-of-the-image-is-used-for-camshift-tracking/
How is the Hue channel more advantageous than the others for object tracking?
Asked by sachin_rt on Sat, 06 Jul 2013.

**Find position and size of blob (OpenCV, C++, MS Visual Studio, Win7, 64-bit)**
http://answers.opencv.org/question/12923/find-position-and-size-of-blob-opencv-c-ms-visual-studio-win7-64bit/

I am building a system that detects/tracks a face (in real time). From that I want to measure some parameters of the eyes of the person in view (blinks, for example).
The tracking part is working well and I am able to get this output from the data:
![image description](/upfiles/13677922967241319.jpg)
![image description](/upfiles/13677923055975961.png)
You might have seen my previous post here where I ask for help with the findContours method. It seems like this method doesn't want to work on my computer (I don't know why), so I have to find another way.
I've tried HoughCircles, but the resulting circles become really strange, with positions at -203942034 pixels or so.
So I wonder.
What alternatives do I have to find the size and position of the (biggest, due to noise) blob in the matrix visible above?
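One findContours-free alternative is connected-component labelling: label each blob with a flood fill, then keep the largest. OpenCV provides this as `cv2.connectedComponentsWithStats`; below is a hedged pure-Python BFS sketch of the same idea, which also handles "biggest blob wins" noise rejection:

```python
from collections import deque

def biggest_blob(mask):
    """mask: 2-D list of 0/1. Returns (size, (cx, cy)) of the largest
    4-connected component, found by BFS flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = (0, (0.0, 0.0))
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                q, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while q:                      # flood-fill one component
                    py, px = q.popleft()
                    pixels.append((py, px))
                    for ny, nx in ((py-1, px), (py+1, px), (py, px-1), (py, px+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                size = len(pixels)
                if size > best[0]:            # keep only the largest blob
                    centre = (sum(p[1] for p in pixels) / size,
                              sum(p[0] for p in pixels) / size)
                    best = (size, centre)
    return best

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 1],   # 2x2 blob (size 4) plus a lone noise pixel (size 1)
        [0, 0, 0, 0, 0]]
print(biggest_blob(mask))  # (4, (1.5, 1.5))
```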
Asked by MattiasR on Sun, 05 May 2013.

**cvBlobsLib library quite heavy on the processor**
http://answers.opencv.org/question/9249/cvbloblib-library-quite-heavy-on-the-processor/

In my lab they developed an algorithm which provides real-world robot positions by tracking colour-coded markers with a camera.
This is achieved using the cvBlobsLib library available [here](http://opencv.willowgarage.com/wiki/cvBlobsLib).
The software is really heavy, though: it can't do more than 10 fps on a dual-core Core 2 with 2 GB of RAM.
The camera resolution is 1280x768, the average number of markers is 8, and the maximum marker speed is about 10 cm/s.
These don't seem like taxing parameters, yet the software is running at 100% CPU.
Can OpenCL or CUDA (or whatever) be used in conjunction with OpenCV to speed up the calculations?
Since there is not much it can waste resources on, I am wondering if there are less computationally intensive ways of tracking markers via OpenCV that I could exploit.
And why is cvBlobsLib a separate library? Why are there no similar functions in the main OpenCV library?
Asked by Claudio Carbone on Thu, 14 Mar 2013.

**Random vehicle detection**
http://answers.opencv.org/question/8212/random-vehicle-detection/

Hi all. I want to calculate the average speed of a vehicle that is crossing the camera. For that, I want to detect the time taken by a random vehicle to cross the camera's field of view. Using that, I can calculate the speed (speed = distance / time taken). Which method should I use? Can I detect a single object randomly?
Asked by Ashwin on Thu, 28 Feb 2013.

**Any ideas for tracking a person who turns around and walks away?**
http://answers.opencv.org/question/7174/any-ideas-for-tracking-a-person-who-turns-around-and-walks-away/

Hi,
I am developing a vision system for a mobile robot that interacts with people. One of the use cases is going to be following a person walking ahead of the robot. Due to the robot's constraints (movie-accurate R2-D2), it is not possible to use stereo vision or a Kinect-like sensor, I only have monocular vision.
What I have so far is a face-detecting cascade classifier and a median-flow blob tracker. Together they form a crude but efficient face tracker and distance estimator. So as long as the person has their face towards the robot and is walking backwards, following works quite well.
Now I would like to take things one step further and allow the person to turn around and walk in front of the robot with their back to the robot. That means that the face is no longer seen, all I have is the relatively sparse-textured back of the head, and maybe some patterned clothing. I also need to cover the part where the person is actually turning around; a median tracker will not work very well there. I don't really know how to go about this.
Does anyone have creative ideas on how to solve this problem? I'm happy for any and all input.
Regards,
Björn
Asked by bjoerngiesler on Fri, 08 Feb 2013.

**Color blob detection and distinguishing**
http://answers.opencv.org/question/4983/color-blob-detection-and-distinguishing/
I use OpenCV to track a tennis ball with an Android phone.
I used the template that comes with OpenCV for Android called Color Blob Detection.
By selecting the colour of the ball, I track the blob.
My question is: how can I distinguish two or more blobs and track just one (when there are more tennis balls in the line of sight)?
How can I detect and track just tennis balls, and not all the other objects that are in the same colour range?
Tnx, upfront!
Asked by van wilder on Fri, 07 Dec 2012.