Android eye detection and tracking with OpenCV

This tutorial covers the older samples; if you are starting with the new ones (2.4.6), please look at this updated tutorial. The method and principles of matching are the same for both.

I finally found some time to write the promised tutorial on eye detection and template matching on Android. Since OpenCV for Android keeps getting better and better, some code snippets may be outdated and no longer the best way to solve the problem. If you find any improvements, please comment or contact me; I will edit this post and share them with others.

We take the standard OpenCV face detection sample and extend it a little.
The Android OpenCV SDK can be found here.
If you aren't familiar with Eclipse and OpenCV yet, please read the basic setup of opencv4android first.
Import the face detection sample into Eclipse and clean/build the project to make sure it is imported correctly and working.

As you can see in the video, there are some differences in the GUI compared to the plain sample: there is a slider to easily change the matching method and a button to recreate the eye template.
So first we add those elements to the GUI.

Open FdActivity.java and replace the content of the LoaderCallbackInterface method with the snippet below. The code is pretty straightforward – instead of setting an FdView instance as the content view, we programmatically create a new layout, add a button and a VerticalSeekBar (this class is included in the downloadable project at the end of the page) to the layout, and pass the whole layout as the content view.

Now we need to load the cascade classifier files for the left and right eye – haarcascade_lefteye_2splits.xml, distributed with the OpenCV package (data folder). I used the same classifier for both eyes, because for the right eye haarcascade_lefteye_2splits.xml gives me better results than haarcascade_righteye_2splits.xml. But you can try it with both – simply change the filename.
Don't forget to copy haarcascade_lefteye_2splits.xml to the res/raw directory in your Android project (if it is not present, simply create it).

Now we detect the face – right, nothing new, it's the face detection sample :) What about the eyes? Once a face is found, we reduce our ROI (region of interest – where we will be looking for the eyes) to the face rectangle only. From face anatomy we can exclude the bottom part of the face with the mouth and some of the top part with the forehead and hair. This can be computed relative to the face size. See the picture below – original image -> detected face -> eye area -> area split for the right and left eye. It saves computing power.
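The ROI arithmetic described above can be sketched in plain Java. The fractions used here (skip the top quarter of the face, keep roughly 30% of the face height for the eye band) are illustrative assumptions, not the exact constants from the sample, and the class and method names are hypothetical:

```java
// Illustrative sketch of the ROI reduction: face rect -> eye band -> per-eye halves.
// Rectangles are represented as plain {x, y, w, h} arrays to avoid any
// Android/OpenCV dependency in this sketch.
public class EyeRoi {

    /** Eye search band inside a detected face rect, as {x, y, w, h}. */
    public static int[] eyeArea(int faceX, int faceY, int faceW, int faceH) {
        int x = faceX;
        int y = faceY + faceH / 4;   // drop the forehead/hair part
        int w = faceW;
        int h = (int) (faceH * 0.3); // keep a band tall enough for the eyes
        return new int[] { x, y, w, h };
    }

    /** Left half of the band (one eye). */
    public static int[] rightEyeArea(int[] band) {
        return new int[] { band[0], band[1], band[2] / 2, band[3] };
    }

    /** Right half of the band (the other eye). */
    public static int[] leftEyeArea(int[] band) {
        return new int[] { band[0] + band[2] / 2, band[1], band[2] / 2, band[3] };
    }
}
```

Running the Haar eye classifiers only inside these two small rectangles, instead of the whole frame, is where the computing power is saved.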

In the following picture you can see all the ROI areas and the matching in progress – yellow rectangles.

Getting the template – find the eye in the desired ROI with the Haar classifier; if an eye is found, reduce the ROI to the eye only and search for the darkest point – the pupil. Then create a rectangle of the desired size, centered on the pupil – our new eye template.
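In OpenCV the darkest-point search is typically done with Core.minMaxLoc; as a dependency-free illustration, the same idea can be sketched over a plain grayscale array (the helper names below are hypothetical, not the sample's actual code):

```java
// Sketch of the template-creation step: find the darkest pixel (pupil),
// then build a clamped rectangle of the desired size centered on it.
public class PupilFinder {

    /** Returns {x, y} of the darkest pixel in a grayscale image (rows of pixel values). */
    public static int[] darkestPoint(int[][] gray) {
        int bestX = 0, bestY = 0, bestVal = Integer.MAX_VALUE;
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[y].length; x++) {
                if (gray[y][x] < bestVal) {
                    bestVal = gray[y][x];
                    bestX = x;
                    bestY = y;
                }
            }
        }
        return new int[] { bestX, bestY };
    }

    /** Template rectangle {x, y, w, h} of size 2*r centered on the pupil, clamped to the image. */
    public static int[] templateRect(int cx, int cy, int r, int imgW, int imgH) {
        int x = Math.max(0, cx - r);
        int y = Math.max(0, cy - r);
        int w = Math.min(2 * r, imgW - x);
        int h = Math.min(2 * r, imgH - y);
        return new int[] { x, y, w, h };
    }
}
```

The clamping matters in practice: when the detected eye sits at the edge of the ROI, a naive centered rectangle would reach outside the image and crash the crop.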

194 comments

When will you post these tutorials? I really need an Android eye tracking tutorial for my final year project. I searched everywhere but couldn't find any. Thank you very much for supporting us by providing these tutorials 🙂

Hello, can you please give me your email? I want to ask you some questions – maybe I can learn from your experience. I am trying to do this project:

Media, sensors and actuators: Wake-up alert for sleepy drivers

This project intends to design a program that can be put on a mobile device, that will check the driver's facial expression and alert the driver by audio and a friend by a text message – two ways to alert the sleepy driver that will wake him up or make him pull over.

I am trying to do it in Eclipse with OpenCV working, and I noticed that you are working on Android and eye detection.

I downloaded the "Eye detect" app but it shows the error message "Fatal error: can't open camera!". How can this be corrected? Also, I want to ask whether the eye detection can check if the eye is open or closed?

Thanks for the tutorial. I tried the above example, but it has some serious performance issues. When I tried to run it on my mobile device (Samsung Galaxy Nexus – Android 4.2 Jelly Bean), it got stuck at some point (at creating the template) and gives this error: "Unfortunately, application stopped working". What might be the problem?

What kind of error? While generating the template, there is no check for the case when only one eye is detected; I planned to add it later, as I'm currently busy at work. If you have a logcat, please send it to romanhosek.cz@gmail.com, thanks.

Thanks for this example. Can you explain to me how you have improved the performance of the detection, since you still use cascade classifier files and you add the template matching technique on top? It seems to improve the quality of the detection but not the detection time, doesn't it?

The cascade classifier used for face detection is LBP, so it's faster than the classic Haar one, and it's used all the time (I didn't try to replace it by matching, because I think it would be inaccurate due to hair and eyebrows). The classifiers for the eyes are classic Haar ones and are used only for creating the template in the first five frames; after that only the LBP classifier is used. So reducing the ROI with LBP + cropping this area to the eye region only + splitting the region for each eye + using template matching for the eyes saves some computing time.
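For reference, the per-frame matching step this reply describes corresponds to OpenCV's Imgproc.matchTemplate with TM_SQDIFF; a minimal plain-Java sketch of that sum-of-squared-differences search (illustrative only – far slower than OpenCV's optimized implementation, with hypothetical names):

```java
// Brute-force TM_SQDIFF: slide the template over the image and keep the
// position with the lowest sum of squared differences.
public class SqDiffMatcher {

    /** Returns {x, y} of the top-left corner where the template fits best. */
    public static int[] match(int[][] image, int[][] tmpl) {
        int ih = image.length, iw = image[0].length;
        int th = tmpl.length, tw = tmpl[0].length;
        long best = Long.MAX_VALUE;
        int bestX = 0, bestY = 0;
        for (int y = 0; y <= ih - th; y++) {
            for (int x = 0; x <= iw - tw; x++) {
                long ssd = 0;
                for (int ty = 0; ty < th; ty++) {
                    for (int tx = 0; tx < tw; tx++) {
                        long d = image[y + ty][x + tx] - tmpl[ty][tx];
                        ssd += d * d;
                    }
                }
                if (ssd < best) { // TM_SQDIFF: lower is better
                    best = ssd;
                    bestX = x;
                    bestY = y;
                }
            }
        }
        return new int[] { bestX, bestY };
    }
}
```

Because the search area is only the small per-eye ROI rather than the full frame, even this quadratic search stays cheap, which is the point of the ROI reduction described above.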

Hello,
while testing this tutorial and other OpenCV 2.4.2 samples, I noticed that the apps using video always crash, even this tutorial. Does it happen for you too? Is it related to the device I'm using?
Thanks in advance.

What kind of video do you mean? Importing video into the app? I didn't try it, because when I wrote the original code, video import was not yet implemented in the wrapper. Please be more specific about importing video.

Hi, I have problems using the front camera with this application. I want to make it portrait, but the image is mirrored. How can I fix this? Also, the iris location is found using a for loop, one eye at a time – is it possible to locate the midpoint of the eye? I want to track the eyeball movement relative to the screen. Thanks a lot! xo

Thanks for your code!! It is very useful to me, but how can I increase the accuracy of detecting the eyes? If we need to detect eye blinks, the first thing is to detect the eyes stably. Can you give me some ideas?

Hi Roman,
really, thanks for your application. I recently developed an eye tracking system for disabled people using gaze detection. I'm really looking forward to trying out your sample. I currently have the Jelly Bean version on my mobile – will this example work with it?

Hi Roman,
how can I check the eye open state and closed state with this sample? For example, if the eye is open, a message should display "Eye is open", and if it is closed, a message should display "Eye closed". Can you tell me the code to do this? Thanks 🙂

Hi Roman, thanks for this great sample!! Could you please show how it can be used to detect an eye 'blink'?
I'm a novice and interested in developing eye blink detection using OpenCV in a project to prevent the driver of a four-wheeler from falling asleep/getting drowsy while driving.
Thanks

Hi Roman,
Thanks a lot for this awesome work. I tried it in Eclipse after downloading "Tegra Developer Pack 2", but I got too many errors saying "cannot be resolved to a type". Can you please help me fix this by giving me some hints?
Thanks.

Hi Roman,
I've been playing around with your code for a while now and it works perfectly as you describe. However, I was wondering if there is a way to detect if the eyes are closed. If I understand correctly, in the case of TM_SQDIFF the template is compared to the camera image, and the template matching looks for the best matching value and moves the yellow rectangles over to that area. How I would like the program to work is: if it detects the eyes where they should be, the rectangles stay yellow; otherwise the rectangles turn a different color. If you could help me out in any way, I would greatly appreciate it.

I tried to test your application on my Sony Xperia Z, but immediately after the app started it crashed, and I get these error messages in logcat: Choreographer – Skipped 48 frames! The application may be doing too much work on its main thread
Aborting: heap memory corruption in tmalloc_large addr=0x6832ca
Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 18372 – could you give me a hint what I would have to change? Or is it possible that the app only runs in the emulator or only on certain phones?

Hi, a really very helpful tutorial; it will be very useful for my next project. But I want to ask: in my case, if I am focusing on one icon on the screen, after 3 seconds the icon should be clicked, and similarly for a second user – if the user focuses on a button, it should be clicked after 5 seconds. Is it possible to do this with this Android eye detector, or can you suggest how I can build something like this with it? Thanks in advance, waiting for a reply.

Yes, it's possible – I saw this kind of tracking on iOS, but not with this example. You have to track the angle and do some additional computations. Maybe if you really simplify the checking, you can show e.g. one eye on the whole screen and track the pupil against the position of the button on the screen.

Thank you for sharing. I have a question:
I don't want to open the camera, I just want to open an image – how can I do that?
I use: Mat image = Highgui.imread("/mnt/sdcard/ban.jpg"); but it cannot be opened.
Thanks,
Chi

Yes, I imported the OpenCV library (2.4.2) project into my workspace, but it shows some errors in the gen folder: OpenCVEngineInterface, getLibraryList(java.lang.String version) throws android.os.RemoteException. I followed the steps you mentioned for the library project, but it still shows errors in the org.opencv.engine package in the gen folder.

Tried the stuff, however I notice that the camera preview is not coming up; I am digging into the code and checking if there is anything amiss with respect to the OpenCV library version etc. On a side note, I noticed that the face detection sample uses Camera mCamera while your code uses VideoCapture mCamera – can this be done with Camera mCamera too?

Hi Roman, first off thanks for this great tutorial.
I have searched the web in every direction, but I couldn't find the solution to the problem I have with your project.
I get an error on application startup:
08-11 08:18:55.406: E/OpenCV::camera(6779): CameraWrapperConnector::connectToLib ERROR: cannot dlopen camera wrapper library
The funny thing is that the original OpenCV face detection sample works. I tried to copy the native camera libs into the project and modify the makefile to install the camera modules. No luck.
Maybe you have any hint?
Thanks,
Andrea

Hi Roman,
I have some questions… First, do you recommend the OpenCV 2.4.2 version or the 2.4.6 version? I have both, and I just read your updated post for the 2.4.6 version. I'm a newbie, so I'm not sure whether the older version is better (or easier).
Second, if I understand correctly, you detect the eye based on the darkest point. I'm trying to detect the eye with the Hough transform (finding a circle). Do you think it's possible? Can you help me with this?
And finally, thank you… really, THANK YOU for posting this. It helps me a lot
(it is the only example I've understood).

Hi, definitely try 2.4.6 – it's only one class, and if you download my sample, it's NDK-free – better for beginners.
I detect the eye with the Haar detector, then reduce the area and find the darkest point – this point is the centroid of a new rectangle, which becomes the eye template… then it's simple template matching. Yes, I tried to use HoughCircles while trying to detect open/closed eyes, but maybe with my hardware and light conditions it was too inaccurate for me.

Hi, this looks quite similar to the eye detection and matching I am doing with OpenCV on the iPad! I am interested in the performance of your application. Do you calculate your fps? What is in general possible with the front camera of your phone in a video stream, and what is the actual fps during the template matching? Thanks, Jules

Roman… why is this sample application not working on Sony mobile phones? I tried your application on a Sony Xperia P, but after opening, the application suddenly closes after a few seconds. When I tried it on a Samsung Nexus, it works perfectly.

Hi Roman, it is a nice tutorial!
I was amazed by this example.
I am trying to scroll a list view when the eye reaches the end of the list – I've been at it for the past 3 months. Please can you help me with this?
Thanks in advance.

Hi Roman,
I downloaded your app "Eye Detect Sample" from the Download Eclipse Project link.
I tried to run it from Eclipse and I had many errors, but I fixed them. Finally I am stuck with the following errors or warnings:

"I sent you an email with a snapshot of the errors."
Please try to help me ASAP
Best regards,
Adil

There is a link to download the full Eclipse project at the end of the article.
There are some errors with connecting the camera on some devices, which I'm looking into, but I'm too busy at work at the moment.

Hi,
I have just started with OpenCV for Android; my project is eye detection on an Android phone. When I import the OpenCV 2.4.6 library, I can run some sample projects (camera-control, color-blob-detection, 15-puzzle), but I cannot run the face detection and camera preview samples. Please help me.

Hi Roman,
thank you for the last advice – I can run the face detection sample now. But I have a problem when I try to import your project: the import works, but it can't show the picture on my phone, there is only a black screen. The error log tells me: Fatal signal 11 (SIGSEGV) at 0x00000000 (code=1)

Thanks for your sharing and for responding quickly to others in need. I have a question: how can I get the eye's position as the viewer's location? That is, how could I get the eye's position (x, y, z – maybe the distance between the eye and the screen)?

Hey, I already ran your app and it works fine.
I have something to add which I could not get working: I need to create an alarm if one eye stays in the closed state for more than 1 second. Can you give me some pointers? I already tried everything but could not fix it. Can you help me? cruser_pan90@yahoo.com

Hi Roman
Thank you for this great tutorial.
I'm working on my graduation project, which helps the doctor take the students' attendance using face recognition, so I downloaded your tutorial, but there is an error saying "FdView cannot be resolved" and I don't know how to fix it. Also, can you send me any advice on how I can build my application with face recognition?
Best regards,
Asem Battah asembattah@gmail.com

Great job, man. I am developing an application for physically disabled people, based on face detection, and for that I used your tutorial. It really helped a lot. But I want to ask you something: when I detect a face using detectMultiScale, the position of the detected face changes frequently.

Any solution for this?
I want to implement eye gazing, and a precondition for that is stopping this jitter.

Hi Roman, it's a nice step-by-step tutorial.
I just had to change it a little bit and it works like a charm.
I think I will develop it for my final project in college.
I am doing facial expression recognition with the Haar algorithm; I found your tutorial and tried it – it's a good beginning, man.
It would be great if you could point out how to get the mouth area, because I need it for feature extraction for the later calculation to get the expression.
I'd like to hear your opinion.

Hi Roman!
Thanks in advance for the tutorial. A few weeks ago this was my starting point. Now I want to develop my own classifier so I can tell whether the eye is open or closed. I know there are a bunch of ways to do that, but my intent is to compare a couple. So, would you know how to create a wholly new XML in res/raw from my training images?
Thanks again.

This is a really amazing tutorial. Now I want to find the corners of the eyes and then find the screen coordinates, i.e. where the user is looking at the screen. Can you please guide me?
I am working in Android Studio with OpenCV.
Thanks in advance.

My company just created a two-camera system for cars. The front cam and cabin cam can record two video streams, 1080p at 30 fps and 720p at 30 fps.
The cabin camera can be adjusted to aim at the driver's face. An SDK is ready, based on a lite Android OS.

Is anyone interested in developing your drowsiness detection system on this two-camera system? This would be a first consumer product, which is very meaningful to the car safety industry.

The above code for pupil detection points exactly at the pupil area for some images, but the problem is that for a few images the pupil detection (white circle) is placed near or above the eyelashes.
Can you please help me?
Waiting for your reply.
Thank you.
email: deepaandroid99@gmail.com

Hi, thank you for the tutorial. I need some help: I am new to OpenCV and I want to know how we can detect an eye with OpenCV, store the detected eye in a database, and then detect the eye again – if it matches the eye already stored in the database, show one message on screen, else show another message. Thank you, will be waiting.
Zeeshan Riaz
Knowledge Platform

Roman Hošek!! I am commenting here now. I configured OpenCV in Android Studio; it is configured and I can import the OpenCV stuff, but how do I do this code there? Please make a video – I want to change the eye color in real time.

Hi Roman, is it possible to crop and save these images into an external directory? Can you please kindly send me the code for that? I also get an error while running: "Build of configuration Default for project OpenCV Sample – face-detection ****

\ndk-build

Cannot run program "\ndk-build": Launching failed
Error: Program "/ndk-build" is not found in PATH

Hi, I am new to OpenCV. I want to match a camera image against an image stored in Android SQLite for searching purposes, but I did not find any solution, and I am tired of trying again and again. Can you please give me any idea?

Hello! Mr. Roman, I am a Chinese university student. I have been researching face recognition recently, but I got a lot of errors after importing your code into Eclipse, and I do not know how to solve them. I do not know how to send the problem to you; I took a screenshot. Can your code be used with OpenCV 2.4.9 on the Windows platform (Eclipse compiler)? The console always shows the "\ndk-build.cmd"
Cannot run program "\ndk-build.cmd": Launching failed problem, but I have imported OpenCV 2.4.9 into Eclipse. I hope to get your help! Thank you!

Hi, this code is supposed to be used in Android Studio – afaik Eclipse is no longer supported as the main Android IDE.
On what OS are you working? On Linux you need to use just ndk-build without any extension. Please check that your path to the NDK folder is right – for Android Studio you can find it in local.properties as ndk.dir

Hello there! Mr. Roman, please forgive me for disturbing you again. My NDK environment configuration is correct, but when I import the project, I get the following problem in Properties:
Library reference ..\..\sdk\java could not be found
I downloaded the project directly from your tutorial and imported it in Android Studio 2.2.2. I have this kind of problem and do not know how to solve it. I am not on a Linux system, I am operating under the Windows operating system – I do not know if this could be the problem? Please help me with advice, thank you again!

Hello sir,
your tutorial is very nice, but I need the code for Android Studio; it's not working there. The error message on all Highgui and VideoCapture usages is "cannot resolve type". I found a suggestion to replace Highgui with Imgcodecs, but that does not solve the error.
Error
mCamera.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, mFrameWidth);
mCamera.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, mFrameHeight);
I replaced Highgui with

Hello there!
Just want to say thank you for sharing your knowledge. I've successfully installed the project and even added some features for a class project. Again, thank you from a beginner Android developer.

Hello from France!
We are managing a fablab dedicated to open source and disabilities, and Nicolas always has his phone behind his electric wheelchair. It would be great to implement eye tracking as an Android app able to send Bluetooth data emulating a mouse. Please provide any useful link that could help us help him.
Congrats on the work and the sharing!!!

Hi, Roman. Thanks for this great tutorial; it works very well for me. In addition, is it possible to get the position of the target eye without displaying the camera view on the screen? In my case, I want to get the position of the user's eye while playing a video with MediaPlayer for the user. I found that when the MediaPlayer runs, the eye tracking no longer works. Why is that? Is it because the view is taken away by the MediaPlayer? If so, how can I fix this problem? In addition, I found that there is a class CameraGLSurfaceView in OpenCV 3+. Is it a good choice for an eye-tracking-only application? Do you know how to use this class? Thank you

Hi Roman,
did you develop an open-eye recognition JavaScript application too?
If yes, I would be interested, and I would like to share a great opportunity about this with you.
Massimiliano Mazzarotto – Italy