Code

NOTE: This section is under construction. I will probably update it frequently with samples and more documentation. The source code can be found via the links to the SourceForge pages:

UPDATE (2014/08/20): I have recently discovered that the required dependency for the vanishingPoint project is no longer available as it was, but a newer version can be found here. I have added a flag that enables or disables the use of lmfit when configuring the project. If disabled, the vanishing points are computed using the "Calibrated Point-Line" distance.

Vanishing point detection for images, videos or live cameras. The method computes as many vanishing points as requested by the user (via argument). It uses MSAC (M-estimator SAmple Consensus) and an angular metric between vanishing points and line segments.

Line segment detection for images, videos or live cameras. The method computes as many line segments as possible, or as many as requested by the user (via argument). It uses a variant of the SSWMS method, which I call LSWMS. It has been written in C++ using OpenCV 2.x, for easier understanding by those interested in reading the code.

Class for creating inverse perspective mapping (IPM) views. Also called bird’s-eye view in the field of Advanced Driver Assistance Systems, or simply plane-to-plane homographies in projective geometry.

Is there any progress on this? I am currently doing my bachelor thesis and experiencing some problems, lane detection being one of them. I would greatly appreciate such an example project.
I’ve also written you a PM on your YouTube channel. Please contact me via mail if you have the time and motivation to help. Thank you very much in advance.

Hi!
Not really. Alas, my free time is close to zero. Most of the stuff I publish now is related to new papers I write or projects I lead in my work at Vicomtech-IK4.
Good luck with your project!
Regards,
Marcos

Hi!
You are right, I said I was releasing some code, but this is getting more difficult for me nowadays: I work for my employer on similar matters, so anything I can share is what I do at home in my free time, which is very limited now.
In any case, you can try it yourself. You can start by computing the vanishing point (e.g. only at the beginning, or continuously); with that, compose a bird’s-eye view of the road (a homography between the image plane and the road plane) and operate on that image. Detect lane markings by applying a row-wise bump filter or any other pixel-wise detector you can find. Then you need to fit a lane model to the filtered image, using techniques such as least squares, with RANSAC for removing outliers. Finally, provide some temporal coherence to smooth the results and be robust against the absence of detections, using a Kalman filter or similar.
These are the basics. The difficult work is to connect every node so that the entire chain works.
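To give you an idea, the bump-filter step could be sketched like this in pure Python, on a single grayscale row of the bird’s-eye view. The function name, the marking width and the sample values are all illustrative, not taken from my code:

```python
# Hypothetical sketch of a row-wise "bump" filter: a lane marking shows up
# in a bird's-eye-view row as a bright stripe between darker asphalt, so
# each pixel is compared against pixels half a marking-width to each side.

def bump_filter(row, marking_width=5, threshold=20):
    """Return a per-pixel response, high where a bright bump of ~marking_width sits."""
    half = marking_width
    response = [0] * len(row)
    for x in range(half, len(row) - half):
        left, right = row[x - half], row[x + half]
        # positive only when the centre is brighter than BOTH neighbours
        r = 2 * row[x] - left - right - abs(left - right)
        response[x] = r if r > threshold else 0
    return response

# A synthetic row: dark asphalt (30) with a bright marking (200) at x = 8..12
row = [30] * 8 + [200] * 5 + [30] * 8
resp = bump_filter(row)
print(max(range(len(resp)), key=resp.__getitem__))  # peak lies inside the marking
```

The filtered rows would then feed the model fitting (least squares + RANSAC) and the Kalman smoothing mentioned above.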
Hope you find a way.
Regards,

Hello Mr. Marcos Nieto, I apologize for bothering you. I am a mechanical engineer, and I am working on a project that uses OpenCV to detect a specific colour of an object and to estimate the distance from the camera to the detected object using only one camera. At this moment I have managed to detect the object, but I don’t know how to estimate the distance. Can you please help me with this problem? Thank you, and I’m waiting for your answer.
P.S. We can talk more on skype if you have a little free time.

Hi!
My free time is very limited, I am sorry.
Regarding your question, you can estimate the distance to an object in a single image if you have contextual information about the scene, such as the calibration of the camera and the assumption of an existing plane.
Try googling “homography”, which can give you an idea of how to move from pixels to metres in a given scene.
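As a toy illustration of what such a homography does, here is how a pixel maps to road-plane metres once you have a 3x3 matrix H. All the matrix values below are invented; a real H would come from calibrating your camera against the plane:

```python
# Toy example of going from pixels to metres with a plane-to-plane
# homography H (road plane assumed flat). The matrix values are invented
# for illustration; in practice H comes from calibration.

def apply_homography(H, x, y):
    """Map pixel (x, y) to world coordinates via a 3x3 homography H."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w   # dehomogenise

# A made-up homography: roughly centimetres per pixel plus a projective term.
H = [[0.01, 0.0, -3.2],
     [0.0, 0.02, -2.4],
     [0.0, 0.001, 1.0]]

print(apply_homography(H, 320, 240))  # world position in metres
```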
Kind regards,

Hi Marco
I watched the video on YouTube: https://www.youtube.com/watch?v=JmxDIuCIIcg. I was fascinated by its effects; it’s great. Can you make this project open source on github.com or sourceforge.net? I’d like to take part in it. If you don’t want to make it public, could you share the source code with me via email? Thanks very much.

Hi!
Thanks for writing! Unfortunately, I am not able to share that code because it belonged to my employer at the time I wrote it (this is some years ago now).
From time to time I share pieces of code, renewed and improved with respect to that version, which I can make public.
I have an ongoing project to publish a lane tracking system, but really, my free time is extremely short and it may still take some time to see the light.
That applies to academic or hobby queries. However, I work for a private research centre creating professional computer vision applications; if there is a commercial interest, we can still talk about ideas or opportunities.
Kind regards,

Hi,
I am sorry, this is an ongoing project within Vicomtech-IK4 and not a personal project I can share with you all.
If you are interested in a potential collaboration or have a commercial interest, please write me a private e-mail and we can discuss it.
Kind regards,
Marcos

Hi Marcos,
I am doing a project on vision-based aircraft runway detection.
I watched the video on YouTube, and it is good research work. In my project I also detect Hough lines, but they converge at the vanishing point; otherwise I only get two lines up to the horizon.
The problem is that I want them as a rectangular shape, because I detect its corner points. Please share the code, or send me the code showing how to get a rectangular shape from the Hough lines.
Thanks in advance.

Hello Marcos,
I am trying to implement visual odometry using the absolute orientation method. I am using the KITTI dataset. I have obtained rotation (3×3) and translation (3×1) matrices for consecutive images. I just want to know how to convert these matrices into ‘poses’ and plot ‘map-like’ data. I am aware that the camera centre is -R’.t.

Hi,
I believe you nearly have it. Visual odometry gives you frame-to-frame relative rotation and translation, so you can concatenate transforms to obtain the rotation and translation of every frame with respect to the first one. This way you get a trajectory. If the initial frame has a given offset with respect to a desired world coordinate frame, you can add this additional transform to the entire trajectory, and then you should be able to plot it without problems. I am no expert in visualization or computer graphics, but some alternatives may include native OpenGL code, Viz (http://docs.opencv.org/doc/tutorials/viz/table_of_content_viz/table_of_content_viz.html), Blender, or whatever suits you well.
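A minimal pure-Python sketch of that concatenation, assuming the world-to-camera convention so the centre is -R’.t as you said (numpy or OpenCV would do the same in fewer lines; the identity-rotation motion is just a made-up example):

```python
# Chain frame-to-frame (R, t) into absolute poses: pack each relative
# motion into a 4x4 homogeneous matrix and accumulate; the camera centre
# of a pose T = [R | t] is then -R^T * t.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def to_homogeneous(R, t):
    """Pack a 3x3 R and 3x1 t into a 4x4 transform."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

def camera_centre(T):
    """Centre c = -R^T t for pose T = [R | t]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    return [-sum(R[k][i] * t[k] for k in range(3)) for i in range(3)]

# Made-up motion: identity rotation, one unit of translation per frame.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
step = to_homogeneous(I3, [0, 0, 1])

pose = to_homogeneous(I3, [0, 0, 0])    # world frame = first camera frame
trajectory = []
for _ in range(3):
    pose = matmul(pose, step)           # accumulate the relative motions
    trajectory.append(camera_centre(pose))

print(trajectory)  # [[0, 0, -1], [0, 0, -2], [0, 0, -3]]
```

Those centres are exactly the points you would plot as the ‘map-like’ data.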
Kind regards,
Marcos

I also have problems running the code on my Linux machine, because I am new to Linux and I want to explore programming on it. Do you have some steps on how to make your program work on Linux?

Hi Brilly,
Apologies for my late answer.
I can suggest you take a look at any paper related to vanishing point estimation, to get some background on the matter. If you are already doing so, you can take a look at my PhD thesis to check some details of the methods I am sharing on SourceForge: https://marcosnietoblog.wordpress.com/about/my-phd/
The code is ready to be compiled on Linux using CMake; please take a look at the readme files inside.
Regards,

I am trying to find the vanishing point in a video. However, the dependencies are no longer available, as you mentioned regarding your posted code, so I am trying to use lmfit as an alternative for the same purpose. I found it complicated to use. Could you please elaborate on how to use it for finding the vanishing point? That would help me out a lot. Thanks in advance :)

Hi! Thanks for making this code available. I am trying to implement a stereo visual odometry project, but it is driving me insane. I don’t suppose you have any open-source solutions to this that I could take a look at? The Vicomtech one is very impressive! Many thanks!

Hi!
Thanks. I am afraid all I have done on odometry is not open source. Vicomtech does have single-camera and stereo visual odometry in the Viulib libraries, but access to them is limited and subject to agreements about exploitation and commercial applications.
Odometry is not easy, but you can keep reading the latest developments, and take a look at ROS (http://www.ros.org/), because they have open-source SLAM applications that might be of interest to you.
Regards,

Hi!
Sorry, I don’t have any tutorial, but in the past I’ve pointed other researchers to the PhD thesis of a colleague of mine who dealt with vehicle detection.
His PhD is available as a PDF at the following link: http://oa.upm.es/11657/1/JON_ARROSPIDE_LABORDA.pdf
Regards,
Marcos

Hi:
Yes, but this is not a simple task! There are several ways to do it.
One is to assume that the object lies on a planar surface. In that case, you can compute the homography between the image plane and the world plane, as long as you have a reference to solve the scale (for instance, 4 known points in the world plane for which you can find their projections in the image).
Other approaches imply calibrating the intrinsic and extrinsic parameters of the camera. Even in that case, any point in the image back-projects as a ray through the optical centre, so you will need more information about the object.
This is a well-known topic in the computer vision literature. I recommend you take a look at books on projective geometry (google “projective geometry Hartley Zisserman” and you will find the PDF).
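As a sketch of the 4-point case, here is the standard DLT formulation with a tiny linear solver; the correspondences below are invented, and in practice cv::getPerspectiveTransform or cv::findHomography does this for you:

```python
# Estimate the image-to-world homography from 4 known points on the
# plane via DLT: each correspondence gives 2 linear equations in the
# 8 unknowns of H (normalising h22 = 1).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_4pts(src, dst):
    """DLT: u = (h00 x + h01 y + h02) / (h20 x + h21 y + 1), likewise for v."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

# Four image corners of a lane patch and their metric positions (made up):
src = [(100, 300), (540, 300), (620, 470), (20, 470)]
dst = [(0.0, 20.0), (3.5, 20.0), (3.5, 5.0), (0.0, 5.0)]
H = homography_from_4pts(src, dst)
```

With H you can map any pixel on the plane to metres, which solves the scale exactly as described above.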
Regards,
Marcos

Hi, Marcos. I appreciate you uploading this code for free.
I’m implementing forward vehicle distance estimation, which includes vanishing point detection.
So I need this code, and I want to ask your permission:
could I use this code for commercial purposes?
Regards,
Chanmi You.

Hi,
Sure, you can use the code as you want. Honestly, the code I am sharing was conceived as an illustrative example, not application-ready code, so it comes without optimizations or support.
Good luck!
Regards,
Marcos

Hi,
Unfortunately, I haven’t published code for entire ADAS applications. Basically, I don’t have enough free time to do this, because I would need to prepare the code to be readable and put together a kind of tutorial. I work for Vicomtech-IK4, who retain all the IPR.
Nevertheless, you can keep an eye on OpenCV and the contrib modules, which are starting to include ADAS examples.
Regards,
Marcos

Hi Marcos,
I am working on a similar project, made by myself after working hours, and I have met one problem. It is not exactly a very big issue, but I can see it.

Let me describe it. When I build the IPM (in my case it’s not exactly the same, but the idea is the same :), I noticed that on a straight and ideal road the road lines are parallel, which is good! But when the road has some potholes and bumps, some vertical vibration components occur, and then the lines after the IPM are not parallel; they have some angle between them depending on the force caused by the road defects. Have you tried to deal with this?

My idea is to have a g-sensor on the camera and add some corrections on the vertical axis to the warp transformation. But I also have an idea to do it in code, adding a warp transformation correction coefficient, knowing that the road always has a constant width (which can be calculated using the nearest white road lines).

Hi,
Yes, I see your point. Actually, this is something I tried to tackle some time ago, in what we called “stabilization of the IPM”. Back in the 2000s there was some movement around this idea because, yes, IPM images are distorted by non-zero pitch angles of the car, which occur in bumps, slopes, curves, etc.
The solution could be to find the lines that are no longer parallel in your IPM, and find a second homography that maps them back to parallel. Equivalently, you can try to estimate the pitch angle for every frame and create an IPM which is adapted to each frame.
However, this could be bad for two reasons: (i) you need extra computational load to find this additional transformation, and (ii) creating a new IPM for each image removes the advantage of using a LUT (or remap, if you’re using OpenCV).
My advice is to not even build your IPM image if possible: operate on the original image, get the points, lines or other shapes there, then transform them into the IPM domain and work with them there. Make your algorithm robust against pitch errors. This way your algorithm will be faster, at the cost of moving the intelligence to the analysis step rather than the image processing.
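For illustration, the “transform only your detections” idea looks like this in pure Python (the homography and segment coordinates below are made up):

```python
# Instead of remapping every pixel into the bird's-eye view, transform
# only the endpoints of the detected segments with the IPM homography:
# 4 points per frame instead of ~10^5 pixels, so even a per-frame
# (pitch-adapted) H costs almost nothing.

def warp_points(H, pts):
    """Apply a 3x3 homography to a list of (x, y) points."""
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

# IPM homography for a nominal pitch (made-up values).
H = [[1.0, 0.5, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.005, 1.0]]

# Two detected lane segments in the original image (made-up endpoints).
segments = [[(300, 200), (250, 400)], [(340, 200), (390, 400)]]
ipm_segments = [warp_points(H, seg) for seg in segments]
print(ipm_segments)
```

Lane fitting then runs on these transformed segments in the IPM domain, with no remap at all.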
Hope this can help, good luck!
Regards,
Marcos

Hi Marcos, great work out there!
I tried your line segmentation code on some videos; however, it performs very poorly even on a 360p (let alone 720p) video, which is not suitable for real-time applications. Do you suggest any method to apply it in real-time applications? I will also remove lines which are not of interest to me.