Signs are ubiquitous indoors and outdoors, and people rely on them to find public places and other locations. However, the information on signs is inaccessible to many visually impaired people unless it is rendered non-visually, for example as Braille, tactile graphics, or speech. Automatically reading text from signs in natural scene images is therefore a valuable capability for assistive technology. Detecting text in scene images is challenging, however, because the acquired image cannot be assumed to contain only characters: natural scenes typically include text of varying sizes, styles, fonts, and colors set against complex backgrounds. Motivated by these challenges, we work toward a portable camera-based assistive system that helps visually impaired people read text in natural scenes. In this paper, we present a new method for extracting character strings from scene images. The algorithm is implemented and evaluated on a set of natural scene images; the accuracy, precision, and recall of the proposed method are computed and analyzed to characterize its successes and limitations, and recommendations for improvement are given based on the results.
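As a point of reference for the evaluation mentioned above, the three metrics can be computed from confusion-matrix counts as sketched below. The counts in the example are hypothetical and for illustration only; they are not results from this paper.

```python
def evaluation_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall) from confusion-matrix counts.

    tp: text regions correctly detected
    fp: non-text regions wrongly reported as text
    fn: text regions missed
    tn: non-text regions correctly rejected
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Hypothetical counts: 80 correct detections, 10 false detections,
# 20 missed strings, 90 correctly rejected background regions.
acc, prec, rec = evaluation_metrics(tp=80, fp=10, fn=20, tn=90)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
```

High precision with low recall would indicate the extractor rarely reports false text but misses many strings; the reverse indicates over-detection against complex backgrounds.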