Access to textual information is important in daily life: text appears everywhere and in many forms, such as newspapers, mail, product instructions, restaurant menus, business cards, and road signs. There are about 170 million blind and visually impaired people around the world. Because of their vision loss, they must rely on others to function in a society driven by textual information. Text recognition in natural scenes has received growing attention in recent years owing to the potential of camera-based image acquisition and the wide availability of digital cameras, and more and more innovative applications have been developed to give visually impaired people access to textual information. The objective of this project is to develop a reading assistant system that provides blind or visually impaired users with access to printed English text in natural scenes. The device consists of a USB camera and a laptop computer. Images of natural scenes are captured by the USB camera and converted into digital form. The image processing and text recognition software, developed with VC++ 6.0 and the Matlab platform, runs on the laptop. After image analysis and text recognition, the computer drives a speech synthesis engine to read the text aloud in real time. Image analysis and text recognition are carried out by a set of algorithms that can be divided into text region extraction, image pre-processing, skew and slant correction, character segmentation, and character recognition. Locating the text region is the first priority; a Gabor filter is used to detect and locate text regions, which improves the accuracy and reliability of the text extraction algorithm.
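The abstract names a Gabor filter as the text detector but gives no parameters. As a minimal sketch of the idea in Python/NumPy (the dissertation's own implementation is in VC++ and Matlab), one can filter the image with a small bank of oriented Gabor kernels and sum the squared responses; text regions tend to stand out because of their dense, oriented stroke texture. All kernel sizes, orientations, and parameter values below are illustrative assumptions, not the system's actual settings.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel. All parameter values here are
    illustrative assumptions, not the dissertation's actual settings."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate the coordinate frame to the filter orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gaussian = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    sinusoid = np.cos(2 * np.pi * xr / lambd + psi)
    return gaussian * sinusoid

def gabor_energy(gray, ksize=21):
    """Sum squared filter responses over four orientations; high energy
    suggests the stroke-like texture of a text region."""
    windows = np.lib.stride_tricks.sliding_window_view(gray, (ksize, ksize))
    energy = np.zeros(windows.shape[:2])
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        k = gabor_kernel(ksize=ksize, theta=theta)
        response = np.einsum('ijkl,kl->ij', windows, k)
        energy += response ** 2
    return energy
```

Thresholding the energy map and grouping the resulting connected components would then yield candidate text regions for the later pipeline stages.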
Because the original images captured in natural scenes are of low quality, several essential image processing algorithms are optimized, including binarization, noise reduction, and skew correction: the system employs adaptive thresholding, a median filter, the Hough transform, and a projection-profile algorithm. Character recognition is based on an artificial neural network trained with the back-propagation algorithm. This dissertation discusses the system structure and the processing flow chart. To evaluate the proposed algorithms, test images were selected from the public database of the International Conference on Document Analysis and Recognition (ICDAR 2003). Experimental results show that the proposed methods provide effective and reliable performance. The system makes textual information that was previously inaccessible available to blind and visually impaired users, and our efforts contribute to improving the quality of life of the disabled through emerging assistive technologies.
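The skew correction step is only named above (Hough transform and a projection algorithm). As one hedged illustration of the projection-profile variant in Python/NumPy: try a grid of candidate angles, shear the foreground pixel coordinates back by each angle, and keep the angle whose horizontal projection is sharpest (highest variance), since correctly deskewed text lines collapse into narrow peaks. The angle grid and scoring function are assumptions for illustration, not the dissertation's exact procedure.

```python
import numpy as np

def estimate_skew(binary, angles=np.arange(-10.0, 10.5, 0.5)):
    """Projection-profile skew estimation (illustrative sketch).
    binary: 2-D array, nonzero where a pixel belongs to text.
    Returns the candidate angle (degrees) whose deskewed horizontal
    projection has the highest variance, i.e. the sharpest text lines."""
    ys, xs = np.nonzero(binary)
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        # Shear each foreground pixel back by the candidate angle.
        rows = np.round(ys - xs * np.tan(np.radians(a))).astype(int)
        profile = np.bincount(rows - rows.min())
        score = profile.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```

The image would then be rotated to compensate for the estimated angle before character segmentation; the Hough-transform variant instead scores line candidates in (rho, theta) space to find the dominant text-line orientation.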