
Detection Algorithms
In recent years, several algorithms for eye pupil/iris detection have
been developed. From the point of view of the light source there are two approaches:
based on ambient light or on infrared light. All of them search for characteristics of the
eye. Some algorithms search for features such as the blackest pixels in
the image, i.e. the pixels that correspond to the pupil or iris, and are known as feature-based
algorithms. Other algorithms try to best fit a model (an ellipse) to the
pupil/iris contour and are known as model-based algorithms.
The feature-based algorithms need to isolate the searched feature in the
whole image or in a region of interest through optimal image segmentation and the centre of
mass of the resulting image. The detection is affected by the corneal reflection
and/or by eyelashes or eyelids, but it has an important advantage: low computing
resource requirements. The model-based algorithms search for the best candidate pixels of the
pupil/iris contour in the whole image or in a region of interest and then apply an
algorithm that best fits a model to some of the found pixels. The centre of the model is
considered to be the centre of the pupil/iris. The detection of candidate pixels is
affected by image noise and requires high computational resources, but it
has an important advantage: it can approximate the pupil even if the corneal
reflection, the eyelid or the eyelashes partially cover the pupil.
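As an illustration, the sketch below contrasts the two approaches on a grayscale eye image using OpenCV. It is not taken from any of the algorithms discussed here; the threshold value and the file name are placeholders.

import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

# Feature-based: segment the darkest pixels (pupil) and take the centre of mass.
_, pupil_mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
m = cv2.moments(pupil_mask, binaryImage=True)
if m["m00"] > 0:
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print("feature-based pupil centre:", (cx, cy))

# Model-based: collect candidate contour pixels and fit an ellipse to them;
# the ellipse centre is taken as the pupil centre.
contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    if len(largest) >= 5:  # cv2.fitEllipse needs at least 5 points
        (ex, ey), axes, angle = cv2.fitEllipse(largest)
        print("model-based pupil centre:", (ex, ey))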
The Starburst algorithm (Parkhurst, 2005) relies on dark or bright pupil
detection but can also be used for iris detection if the eye receives enough ambient
light. It is a hybrid algorithm that searches for eye features but in the end tries to best
fit an ellipse to the iris/pupil contour. The images are taken from a video
camera placed right underneath the eye, at a distance of six centimetres and an
angle of 30º. The algorithm starts by removing the corneal reflection. It
continues by finding points on the pupil contour, applies the RANSAC (Fischler,
1981) algorithm to the found points and best fits an ellipse to those
points. Because of noisy images, for every processed frame different
ellipses with different centres are fitted to the pupil contour. This causes
oscillations of the determined gaze direction in HCI systems. Improvements can be
made by preprocessing the acquired images and by filtering the pupil detection
output (the coordinates of the pupil centre over time).
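The sketch below shows a simplified RANSAC ellipse fit in the spirit of Starburst; it is not Parkhurst's implementation. The iteration count, the tolerance and the algebraic (rather than geometric) inlier test are illustrative assumptions.

import random
import numpy as np
import cv2

def ransac_ellipse(points, iterations=200, tol=0.1):
    """points: Nx2 float array of candidate pupil-contour pixels."""
    if len(points) < 5:
        return None
    best_ellipse, best_support = None, 0
    for _ in range(iterations):
        sample = np.array(random.sample(list(points), 5), dtype=np.float32)
        try:
            (cx, cy), (w, h), angle = cv2.fitEllipse(sample)
        except cv2.error:
            continue  # degenerate sample (e.g. collinear points)
        a, b = w / 2.0, h / 2.0
        if a < 1 or b < 1:
            continue
        # Approximate inlier test: normalised algebraic distance to the ellipse.
        theta = np.deg2rad(angle)
        dx, dy = points[:, 0] - cx, points[:, 1] - cy
        xr = dx * np.cos(theta) + dy * np.sin(theta)
        yr = -dx * np.sin(theta) + dy * np.cos(theta)
        residual = np.abs((xr / a) ** 2 + (yr / b) ** 2 - 1.0)
        support = int(np.sum(residual < tol))
        if support > best_support:
            best_support, best_ellipse = support, ((cx, cy), (w, h), angle)
    return best_ellipse  # the centre of this ellipse approximates the pupil centre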
The preprocessing of acquired images consists of applying filters such as the
Scale Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF)
(Luo, 2009), which have a high degree of stability when the Gaussian blur radius is
smaller than 1.5 (Carata & Manta, 2010). Yet, this preprocessing does not
eliminate all the noise from the image. Filtering the output coordinates
improves the stability of the gaze direction by denoising the eye movement signals
(Spakov, 2012).
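As an illustration of output filtering, the sketch below smooths the stream of detected pupil-centre coordinates with a moving median. This is only one of the filter types compared by Spakov (2012), and the window size is an arbitrary choice.

from collections import deque
import numpy as np

class PupilCentreFilter:
    """Moving-median filter over the last `window` pupil-centre samples."""
    def __init__(self, window=5):
        self.xs = deque(maxlen=window)
        self.ys = deque(maxlen=window)

    def update(self, x, y):
        self.xs.append(x)
        self.ys.append(y)
        return float(np.median(self.xs)), float(np.median(self.ys))

# Usage: feed every detected centre through the filter before gaze estimation.
flt = PupilCentreFilter(window=5)
smoothed_centre = flt.update(312.4, 198.7)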
The ETAR algorithm follows a feature-based approach (Lupu, 2013). It
starts by searching for the region where the eye is located, using a Haar cascade classifier (HaarCascadeFilter).
This region is set as the region of interest (ROI) and a mask image is constructed in
order to eliminate the unwanted noise from the four corners of the ROI. The
algorithm continues with the determination of an optimal binary segmentation
threshold. The pupil centre is determined by applying the centre of mass to the
group of pixels that correspond to the pupil in the segmented ROI image.
The analysis of the determined gaze direction reveals that the algorithm is not
sensitive to noise in the image.
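The sketch below outlines a pipeline of this kind using OpenCV; it is not the ETAR implementation. Otsu's method stands in for the optimal-threshold step, the elliptical corner mask and the stock haarcascade_eye.xml classifier are assumptions, and the file names are placeholders.

import cv2
import numpy as np

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

eyes = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
if len(eyes) > 0:
    x, y, w, h = eyes[0]
    roi = frame[y:y + h, x:x + w].copy()

    # Elliptical mask suppressing the four corners of the ROI;
    # masked-out pixels are painted white so they cannot be segmented as pupil.
    mask = np.zeros_like(roi)
    cv2.ellipse(mask, (w // 2, h // 2), (w // 2, h // 2), 0, 0, 360, 255, -1)
    roi[mask == 0] = 255

    # Binary segmentation of the dark pupil pixels, then centre of mass.
    _, pupil = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(pupil, binaryImage=True)
    if m["m00"] > 0:
        cx, cy = x + m["m10"] / m["m00"], y + m["m01"] / m["m00"]
        print("pupil centre in frame coordinates:", (cx, cy))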
