In this post I would like to show a simple yet robust solution for the detection of a single vanishing point in road scenes.

Original grayscale image

The vanishing point in this scenario can be very useful to retrieve the camera calibration, to perform a planar homography transformation, to determine a ROI inside the image, etc. Although there are several vanishing points defined by the elements of this scenario (the vertical and horizontal directions of the panels), we want to focus on the vanishing point defined by the lane markings. Note that for curvy roads the vanishing point does not exist, although you can think of it as the direction of the tangent to the curve at the car's position.

For that reason, we first need to extract the lane markings, which can be done in many different ways (thresholding the intensity, connected components, edges, etc.). In this post I share one of the fastest I have used in my work. It is published in my Springer MVAP paper “Road environment modeling using robust perspective analysis and recursive Bayesian segmentation”, and I share the C++/OpenCV code here as an image (sorry, the <code> </code> HTML tags do not seem to work well in WordPress):

Applying this filter we get images like the following (note that I have set the upper half of the image to black), where tau is the expected width (in pixels) of the lane markings. For better performance, this value can be adapted to the perspective of the road, although in this particular case the perspective is exactly what we do not have yet!
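While the actual published code lives in the image above, here is a minimal self-contained sketch of a row-wise high-contrast filter of this kind (the response formula and function name are my own reading of the approach, not the published code): each pixel is compared with its neighbors tau columns to the left and right, so the filter responds strongly to bright bands of width about tau on darker pavement.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Row-wise lane-marking filter (illustrative sketch). 'row' holds one
// grayscale image row, 'tau' is the expected lane-marking width in pixels.
// The response is high where a pixel is brighter than both neighbors at
// +/- tau and those two neighbors have similar intensity (a bright band).
std::vector<int> laneMarkingsRow(const std::vector<int>& row, int tau)
{
    std::vector<int> out(row.size(), 0);
    for (int j = tau; j + tau < (int)row.size(); ++j)
    {
        int aux = 2 * row[j] - (row[j - tau] + row[j + tau])
                  - std::abs(row[j - tau] - row[j + tau]);
        out[j] = std::max(0, std::min(255, aux)); // clamp to [0, 255]
    }
    return out;
}
```

Applied to every row of the lower half of the image and followed by a threshold, a filter of this shape produces binary lane-marking candidates like the ones shown below.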

Detected lane markings

After a proper thresholding we can get something like this:

Binarized lane markings

Although we do get a lot of false positive pixels (from the vehicle or lateral elements of the scene), the robust stages that follow will still find the correct vanishing point.

Using OpenCV, I have found that a quite reliable solution is based on (i) the use of the Hough transform, and (ii) the computation of the intersection of the lines we get.

For the first part, OpenCV has two main options: the Standard Hough Transform (SHT) and the Progressive Probabilistic Hough Transform (PPHT). I use the first because it returns lines rather than pairs of points or line segments; although it is a little slower, it requires the user to set fewer parameters and works fine in most cases. The Hough transform can be applied as:

Where __lmGRAY is the image we obtain from laneMarkingsDetector, and __houghMinLength is the minimum length we require (it should be set according to the image dimensions; something like 30 should work for small images of 320 x 240).

The result is a set of lines that visually converge on a small region of the image:

Detected Hough lines

In this simple case there are no strong outliers, i.e. lines that clearly do not intersect at the vanishing point, although we do have a non-negligible intersection error. For cases like this, or with more outliers, we can use a RANSAC-like method to find the most likely vanishing point.

(UPDATE: The MSAC class is no longer available as it was; instead, you can download the new MSAC class with a full sample that captures images or video and computes as many vanishing points as desired, both finite and infinite. Please refer to the specific post for more details.)

For that purpose I use a variation of RANSAC called MSAC which simply weights inliers according to their cost function (instead of just counting 1 for inliers and 0 for outliers as RANSAC does). I have programmed a very simple version of it, in a C++ class, which only needs two steps:
// Initialization
__msac.init(IMG_WIDTH, IMG_HEIGHT, 1);
// Execution (passing as argument the lines obtained with the Hough transform)
__msac.singleVPEstimation(lines, &number_of_inliers, vanishing_point);

Where __msac is an object of class MSAC, and number_of_inliers is an output int containing the number of inliers MSAC used to compute vanishing_point (if you want to play with this, you can go to the Code page on my website, although the code is neither optimized nor commented).
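For intuition, the intersection step at the core of this estimation can be written as a small least-squares problem: given lines in (rho, theta) form, find the point minimizing the sum of squared point-to-line distances. This sketch is my own illustration, not the internals of the MSAC class:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Least-squares intersection of lines given in (rho, theta) form: each line
// satisfies x*cos(theta) + y*sin(theta) = rho. The returned point minimizes
// the sum of squared point-to-line distances (2x2 normal equations).
std::pair<double, double> vanishingPoint(
    const std::vector<std::pair<double, double> >& rhoTheta)
{
    double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
    for (size_t i = 0; i < rhoTheta.size(); ++i)
    {
        double rho = rhoTheta[i].first, theta = rhoTheta[i].second;
        double c = std::cos(theta), s = std::sin(theta);
        a11 += c * c;  a12 += c * s;  a22 += s * s;
        b1  += c * rho;  b2 += s * rho;
    }
    double det = a11 * a22 - a12 * a12; // ~0 when all lines are near-parallel
    return std::make_pair((a22 * b1 - a12 * b2) / det,
                          (a11 * b2 - a12 * b1) / det);
}
```

In an MSAC loop, minimal sets of two lines are sampled, each hypothesis is scored with a truncated squared distance (inliers contribute their actual cost instead of a flat count), and the fit above is re-run on the winning inlier set.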

The result is normally a good vanishing point (I have tested it on many types of road sequences, and it works fine as long as there are some painted lane markings).

Detected vanishing point

Additionally, I usually compute the vanishing point over a set of time instants and check whether it is coherent and steady in time. If not, I restart the procedure until I find something reliable.
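A possible sketch of such a temporal coherence check (the class name, window length, and tolerance are my own illustrative choices, not taken from the paper): keep a short history of estimates and accept the vanishing point only while recent estimates stay within a few pixels of their mean.

```cpp
#include <cmath>
#include <deque>
#include <utility>

// Temporal stability check for the vanishing point (illustrative sketch).
// Keeps the last 'maxLen' estimates and reports the VP as steady when every
// stored estimate lies within 'tolPx' pixels of the running mean.
class VPStabilityChecker
{
public:
    VPStabilityChecker(size_t maxLen, double tolPx)
        : maxLen_(maxLen), tolPx_(tolPx) {}

    // Add a new estimate; returns true once the history is full and steady.
    bool addAndCheck(double x, double y)
    {
        history_.push_back(std::make_pair(x, y));
        if (history_.size() > maxLen_) history_.pop_front();
        if (history_.size() < maxLen_) return false;
        double mx = 0, my = 0;
        for (size_t i = 0; i < history_.size(); ++i)
        {
            mx += history_[i].first;
            my += history_[i].second;
        }
        mx /= history_.size();
        my /= history_.size();
        for (size_t i = 0; i < history_.size(); ++i)
        {
            double dx = history_[i].first - mx, dy = history_[i].second - my;
            if (std::sqrt(dx * dx + dy * dy) > tolPx_) return false;
        }
        return true;
    }

    void reset() { history_.clear(); } // restart the procedure on failure

private:
    std::deque<std::pair<double, double> > history_;
    size_t maxLen_;
    double tolPx_;
};
```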

Hi!
You can use the Hough transform to get the lines, for instance after applying the Canny edge detector as you are doing now.
The piece of code is in the image in this post, where it reads // USE STANDARD HOUGH TRANSFORM. You only have to substitute __lmGRAY, which is the image I used, with the one you are using, framec, and you will get a set of lines in the variable vector<Vec2f> lines_.
After that you should convert vector<Vec2f> lines_ into vector< vector<Point> > lines before passing it to the MSAC object, for instance like this:
vector< vector<Point> > lines;
vector<Point> aux;
for (size_t i = 0; i < lines_.size(); ++i)
{
    aux.clear();
    // Get the two end-points of the current line segment
    float rho = lines_[i][0];
    float theta = lines_[i][1];
    double a = cos(theta), b = sin(theta);
    double x0 = a * rho, y0 = b * rho;
    // Pick two points far apart along the line
    aux.push_back(Point(cvRound(x0 + 1000 * (-b)), cvRound(y0 + 1000 * a)));
    aux.push_back(Point(cvRound(x0 - 1000 * (-b)), cvRound(y0 - 1000 * a)));
    lines.push_back(aux);
}

Hey, just wanted to say thanks for posting this. It's hard to find examples of this kind of stuff on the net, so thanks for sharing. I'm working on a small-scale autonomous car and was trying to figure out what the next step after the Hough transform is. This answers my question, thanks!

It was my mistake. I fixed it ;)
Recently I read your publication “Road environment modeling using robust perspective analysis and recursive Bayesian segmentation”, and following it I was trying to write my own lane-marking detection algorithm.

I would like to ask you, as a specialist, if you could give me some tips to help me find the best way to write an algorithm for lane tracking like the one you presented on YouTube:

I am a beginner in working with OpenCV (I downloaded it 2 weeks ago), so I would be very thankful for any help. This is very important for me because it is part of my work at university to build an autonomous car.

Good job! It looks like you're on the right track.
Nevertheless, if you are planning to jump into an autonomous vehicle, you should consider the extra difficulties a real scenario poses. The first one (and the most significant for the inverse perspective mapping) is the vibration of the camera and the motion of the vehicle. This will make your IPM very unsteady. Typically, you can only trust straight vertical lines at very close distances.
Other aspects to consider: absence of lane markings for a while, rain, shadows, occlusions due to other vehicles…
Welcome to the road environment!

Hi my friends,
I am honored to contact you once again on this famous forum.
Hello Marcos,
I need a small correction to this code in OpenCV:
CvPoint meas_x1, meas_y1, cord_x1, cord_y1;
cord_x1.x = 230;
cord_x1.y = 100;
cord_y1.x = 550;
cord_y1.y = 500;
for (int l = 0; l < 10; l++)
{
    std::string varimg;
    char format[] = "franck_000%d.jpg";
    char filename[sizeof format + 100];
    sprintf(filename, format, l);
    varimg = filename;
    // Load the l-th image of the sequence
    IplImage* imgw = cvLoadImage(varimg.c_str());
    cvNamedWindow("Example1", CV_WINDOW_AUTOSIZE);
    meas_x1.x = cord_x1.x;
    meas_x1.y = cord_x1.y;
    meas_y1.x = cord_y1.x;
    meas_y1.y = cord_y1.y;
This is part of the main program, but most importantly I hope you can help me find a solution that lets me obtain the next position of the object to follow, i.e. the measurement of each point as it moves within a rectangle. This code is used to display a series of images (forming a sequence), and in these images there is a moving object (e.g. a person's face). I just need to measure the new position of this object so that I can track it. First I frame the object with a rectangle, and then I have to (hopefully) receive the measurement of the object's position each time in order to correct it.

Thank you for this earlier answer, but the goal of my project is to use OpenCV with only simple functions to track an object with a Kalman filter. I haven't used the predefined Kalman filter function in OpenCV because I have a set of images to arrange into a sequence for object tracking, so I must work with the exact location of the subject.

Hello, I have working code for lane and vehicle detection in OpenCV (version 2.3, C-based). Everything is fine, except that in the lane detection output window the detected lanes get overlaid on top of the previous ones, gradually filling the window with lines. I do not know how to delete the previously drawn lane lines from the window. The normal cvLine() function is used for drawing the lane lines. Your help would be highly appreciated.

Hi,
Unfortunately, I don't have an entire sample for lane detection available to share. Moreover, I don't think it would be useful in your case, since aircraft have more degrees of freedom than cars, so the assumptions I use for lane detection will not hold (basically, a constant roll angle, preferably equal to zero).
Regards,
Marcos

Hi
this is brilliant work, although I am working on a different platform as a beginner, and I was a bit confused going through the way you applied your Hough transform. I am presently working on lane detection using Matlab as my image processing platform. So far I have captured and processed the image by converting it to grayscale and applying edge detection operators, and I am currently stuck on applying the Hough lines to get my lane boundaries. I would really appreciate your guidance on this, please. Below are the image processing steps I used:
a = imread('roadlane.jpg');
a = rgb2gray(a);
imshow(a)
% averaging filter
% h = [1 1 1; 1 1 1; 1 1 1]/9;
% c = imfilter(a,h);
% imshow(c);
% Sobel edge detector
% h = [1 0 -1; 2 0 -2; 1 0 -1];
% c = imfilter(a,h);
% imshow(c);
c = edge(a);
imshow(c);
% Canny edge detector
c = edge(a,'canny');
imshow(c);

Apologies for answering this late; I was terribly busy with other duties.
So, in my opinion your idea is just fine. You can use any type of lane-marking detector, such as Canny or the other edge detectors in Matlab, because in the end what you need is a set of points in the image that belong to the lane markings. The Hough transform takes these points and finds the dominant lines (as clusters of points). Matlab comes with a nice set of Hough implementations; you have probably already found a solution for this.

Hi
I really appreciate your reply. As you mentioned, I found a way forward: after thresholding the image I applied the Hough transform and obtained an excellent result. I am presently trying to analyze which of the edge detectors is more suitable.
Thanks and best regards.

Hello,
I saw your video on lane tracking and was simply amazed by it. I have just started of with my project which involves lane detection and tracking. Just to begin with it, what I have done is the following:

– For each incoming frame from a video or a camera placed on a moving vehicle, I detect edges and apply the Hough transform (OpenCV implementation) to get a bunch of lines, then filter out lines based on a slope criterion to keep only those that correspond to the lanes.
– Now, in many of the frames I am not getting the Hough lines, so I applied a Kalman filter, but it is not working.
– I take the slope and intercept of the line and model them as my state vector, and I use a 2×2 identity matrix as the state transition matrix (F). I take it as an identity matrix because I don't see this as a constant-velocity model. But I know that the Kalman filter will only be able to predict the new state of the system if we assume a constant-velocity or constant-acceleration model.

So, all together, I am not able to figure out how I should model my state vector and transition matrix when I have to track lanes based on Hough lines. The only properties of these Hough lines I can think of are the slope and intercept, which change somewhat with each frame, but that change is not constant.
Could you please suggest the steps to go about lane tracking?

Your approach is correct, although you may need to fine-tune your parameters better. The road scenario is highly dynamic, which makes fixed thresholds and assumptions hold only in certain situations or during short periods of time.
For instance, you are using edge detection and the Hough transform. Fine, but you probably need to adjust the parameters dynamically as the scene evolves (what happens if the road is suddenly not well painted, or if you enter a tunnel?).
Once you have your lane markings detected (as Hough lines or another type of detection), you probably want to fit a lane model. In my case I use multiple lanes and parabolic fitting in the bird's-eye view. Of course, this is up to you. The simplest approach is to model a single lane without curvature, which can basically be defined by a fixed vanishing point (that you can compute at the beginning and keep fixed, or update online with the observations) and two points at the bottom of the image.

Then you can apply a Kalman filter to provide smoothness to your tracking. Using a constant-velocity model is probably a good option.
All I can say is that if you plan to use this in a real environment, you probably also want to avoid costly operations such as the Hough transform (so that your software can run on embedded platforms), and to avoid the OpenCV implementations, which are great but general.

Hi,
Thanks for your comment, it’s all right!
The algorithm is designed to detect high contrast at row level. It works on grayscale images. Therefore, if the yellow line is well contrasted against the pavement, it will probably look bright in the grayscale image, and the system will work.
But, if the lines are not well painted, no matter what color they have, the system won’t detect them!
Regards,
Marcos

I am also working on lane detection currently and am successfully detecting the lanes. The last thing I want to do is fill the space between the lanes with a color. I am using the probabilistic Hough transform to detect the lanes. I tried the OpenCV fillPoly function but had no success with it. Can you please guide me on this?
Thanks

Hi,
Once you have your lane markings detected with Hough, you need one extra step to create (and track, if you wish) a polygon from them. Normally you should be able to locate the vanishing point as the point where the lines meet (you probably need to stabilize it with a Kalman filter). Then you can select some row below the vanishing point and cut your lines there, and make another cut at the bottom row. That gives you 4 points you can fill in with color.
Regards,
Marcos