Vision Assistant: A Human–Computer Interface Based on Adaptive Eye-Tracking
Price
Free (open access)
Transaction
Volume
87
Pages
8
Published
2006
Size
632 KB
Paper DOI
10.2495/DN060171
Copyright
WIT Press
Author(s)
V. Hardzeyeu, F. Klefenz & P. Schikowski
Abstract
The Vision Assistant is designed as an intelligent tool to assist people with different disabilities. The goal of the project is to replace the mouse and keyboard with an adaptive eye-tracker system (a so-called mouseless cursor) and thereby establish a universal, easy-to-use human–computer interface. Using a camera, the system processes the streaming image sequences with a pattern recognition algorithm built around a Hough transform core, a technique known for its performance in locating given shapes. Here it is used to extract the shapes associated with the human eye and analyze them in real time, determining the position of the eye in each incoming image and interpreting it as the reference position of the mouse cursor on the user's monitor. The paper also discusses parallelizing the Hough transform and executing it on a Hubel–Wiesel neural network for ultra-fast eye-tracking. Several experiments show that the system performs well for subjects with different eye colours and under different lighting conditions. The conclusion addresses further improvements to the functional and algorithmic parts of the Vision Assistant.
Keywords
HCI, eye-tracking, gaze estimation, Hough transform, Hubel–Wiesel neural network.
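
As an illustration of the eye-locating step described in the abstract, the sketch below applies a circular Hough transform to a camera frame to find the pupil/iris and maps its centre to a cursor position. This is not the authors' implementation: it uses OpenCV's HoughCircles rather than the paper's Hubel–Wiesel network, and the parameter values, assumed screen resolution, and naive linear image-to-screen mapping are illustrative assumptions only (a real gaze estimator would use a calibrated model).

# Minimal sketch (assumptions noted above): locate a circular eye shape
# with a Hough transform and map its centre to a cursor position.
import cv2

SCREEN_W, SCREEN_H = 1920, 1080  # assumed monitor resolution

def pupil_to_cursor(frame_bgr):
    """Estimate an (x, y) cursor position from the strongest circular
    shape in the frame, or return None if no circle is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before the transform

    # Circular Hough transform: accumulates votes in (centre, radius)
    # space and returns the strongest circle candidates.
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0] // 4,
        param1=100, param2=30, minRadius=5, maxRadius=60)
    if circles is None:
        return None

    cx, cy, _r = circles[0][0]  # strongest candidate: pupil/iris centre
    h, w = gray.shape
    # Naive linear mapping from image coordinates to screen coordinates.
    return int(cx / w * SCREEN_W), int(cy / h * SCREEN_H)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # stream frames from the camera
    ok, frame = cap.read()
    if ok:
        print(pupil_to_cursor(frame))
    cap.release()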