Artificial Vision Support System (AVS2) for Visual Prostheses
The human retina is not a mere receptor array for photonic information. It performs significant image processing within its layered neural network structure. Current state-of-the-art and near-future artificial vision implants provide tens of electrodes, allowing for limited-resolution visual perception (pixelation). Real-time image processing and enhancement improve the limited vision afforded by camera-driven implants, such as the Artificial Retina, ultimately benefiting the subject. The preservation and enhancement of contrast differences and transitions, such as edges, is especially important compared to picture details such as object texture. The Artificial Retinal Implant Vision Simulator (ARIVS) software system (Fink and Tarbell, 2005), devised and implemented under the direction of Dr. Wolfgang Fink at the Visual and Autonomous Exploration Systems Research Laboratory at Caltech, performs real-time image processing and enhancement of the miniature camera image stream before it is fed into the Artificial Retina. This research is part of the collaborative U.S. Department of Energy-funded Artificial Retina Project designed to restore sight to the blind.
Since it is difficult to predict exactly what blind subjects with camera-driven visual prostheses, such as the Artificial Retina, may be able to perceive, ARIVS provides the unique capability and unprecedented flexibility of repeated application of image manipulation and processing modules in a user-defined order. As a consequence, current and future retinal implant carriers may choose from a wide variety of image processing filters to customize and optimize the visual perception provided by their visual prostheses, by actively manipulating the parameters of individual image processing filters or by altering the sequence of these filters. To attain true portability, ARIVS can be implemented on a commercial off-the-shelf, battery-powered, general-purpose microprocessor platform without sacrificing functionality, flexibility, or real-time image processing capability.
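The user-defined filter ordering described above can be pictured as a simple composable pipeline: each filter is an independent image-to-image transform, and the implant carrier (or clinician) chooses both the filters and their sequence. The following is a minimal sketch of that idea; the function names, parameter values, and pipeline structure are illustrative assumptions, not the actual ARIVS implementation.

```python
import numpy as np

# Illustrative sketch of a user-orderable filter chain, in the spirit of
# ARIVS; all names and parameters here are hypothetical examples.

def brightness(img, offset=30.0):
    """Shift overall brightness; img is a float array with values in [0, 255]."""
    return np.clip(img + offset, 0.0, 255.0)

def contrast(img, gain=1.5):
    """Stretch contrast around the mid-gray level (127.5)."""
    return np.clip((img - 127.5) * gain + 127.5, 0.0, 255.0)

def apply_pipeline(img, filters):
    """Apply each filter in the user-defined order and return the result."""
    for f in filters:
        img = f(img)
    return img

# A uniform mid-gray test frame; reordering the filter list changes the
# outcome, which is exactly the flexibility the user is given.
frame = np.full((8, 8), 100.0)
enhanced = apply_pipeline(frame, [brightness, contrast])
```

Because the filters are plain functions, swapping `[brightness, contrast]` for `[contrast, brightness]` produces a different result, letting the subject explore which ordering yields the most useful percept.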
ARIVS interfaces with a wide variety of digital cameras and is thus directly and immediately applicable to artificial vision prostheses that are based on an external or internal video-camera system as the first step in the vision stimulation/processing cascade. ARIVS presents the captured camera video stream in a user-defined pixelation, which matches, e.g., the dimensions of the implanted electrode array of the Artificial Retina. It subsequently processes the video data through user-selected image filters and then transmits the processed data to the Artificial Retina. Numerous efficient image manipulation and processing modules have been developed, such as contrast and brightness enhancement, grayscale equalization for luminance control under severe lighting conditions, user-defined grayscale levels for reduction of the data volume transmitted to the visual prosthesis, blur algorithms, and edge detection (see Figure below).
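Two of the steps described above, pixelating the camera frame down to the electrode-array dimensions and reducing it to a user-defined number of grayscale levels, can be sketched as follows. The array size, block-averaging approach, and quantization scheme below are plausible assumptions for illustration only; they are not taken from the ARIVS source.

```python
import numpy as np

def pixelate(img, rows, cols):
    """Block-average a grayscale frame down to rows x cols values,
    e.g. one value per implanted electrode (sizes are hypothetical)."""
    h, w = img.shape
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = img[i * h // rows:(i + 1) * h // rows,
                        j * w // cols:(j + 1) * w // cols]
            out[i, j] = block.mean()
    return out

def quantize(img, levels):
    """Reduce pixel values in [0, 255] to a user-defined number of
    grayscale levels, shrinking the data volume sent to the prosthesis."""
    step = 256.0 / levels
    return np.clip((img // step) * step + step / 2.0, 0.0, 255.0)

# Example: a 16x16 camera frame reduced to a 4x4 grid at 4 gray levels.
frame = np.full((16, 16), 200.0)
coarse = quantize(pixelate(frame, 4, 4), 4)
```

Block averaging is only one possible downsampling choice; peak-picking or edge-weighted pooling would trade off differently between preserving luminance and preserving the contrast transitions the text identifies as most important.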
Wolfgang Fink, Ph.D.
Senior Researcher at Jet Propulsion Laboratory (JPL)
Visiting Research Associate Professor of Ophthalmology at University of Southern California (USC)
Visiting Research Associate Professor of Neurosurgery at University of Southern California (USC)
Visiting Associate in Physics at California Institute of Technology (Caltech)
California Institute of Technology
Visual and Autonomous Exploration Systems Research Laboratory
15 Keith Spalding (corner of E. California Blvd & S. Wilson Ave)
Mail Code 103-33
Pasadena, CA 91125