mladen, of course it's the computer :) It's my Python script (nothing Python-specific; the code can easily be rewritten in C++). Reading an image file -> converting it to greyscale -> populating a 2D array of bytes. Then analyzing the matrix, pixel by pixel, and so on. All of it is based on (or rather inspired by) the idea of the so-called Stroke Filter Response, but I do it very much in my own way. Note: the code d-o-e-s n-o-t r-e-c-o-g-n-i-z-e text. I don't even know what the Chinese words on my pics mean (it seems the very first one means "http"). As an intro to the subject, a quote from an article by engineers at the Samsung Advanced Institute of Technology:
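The read-image -> greyscale -> byte-matrix -> pixel-by-pixel pipeline above can be sketched like this in Python (a minimal sketch only, not my actual script; it assumes Pillow and NumPy are available, and the function names are made up for illustration):

```python
# Sketch of the pipeline: read an image file, convert it to 8-bit greyscale,
# populate a 2D array of bytes, then scan the matrix pixel by pixel.
# Hypothetical helper names; assumes Pillow and NumPy.
from PIL import Image
import numpy as np

def load_grey_matrix(path):
    """Read an image file and return it as a 2D uint8 array (one byte per pixel)."""
    img = Image.open(path).convert("L")   # "L" = 8-bit greyscale mode
    return np.asarray(img, dtype=np.uint8)

def count_dark_pixels(matrix, threshold=128):
    """Toy pixel-by-pixel analysis: count pixels darker than a threshold.

    A real stroke-filter-style analysis would look at local neighbourhoods
    instead of single pixels; this just shows the scanning skeleton.
    """
    count = 0
    height, width = matrix.shape
    for y in range(height):
        for x in range(width):
            if matrix[y, x] < threshold:
                count += 1
    return count
```

In C++ the same structure maps directly onto a nested loop over a `std::vector<uint8_t>` buffer, which is why nothing here is Python-specific.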

quote: ... For example, from the Canny edge image shown in Fig. 1(a), it is so difficult to find the text even for human eyes. Besides the high complexity of text localization, we think that the fundamental reason of these problems is that no researcher finds the distinctive features of text. That is to say, no one answers the basic question: what on earth is text? Intensity-similarity CCA, edge, corner or texture features are only the necessary conditions of text, as shown in Fig. 1(b). Although recently someone uses Adaboost to ...

What I need is a good test set of images. They say there is, e.g., a Microsoft test set of 46 images. Mladen, if you were (are) into the subject, where can I see your results? I can't find anyone else's after-detection pics anywhere, to look at and compare. PS: I don't know what a Canny filter is :)

No! I've never seen this article (I'm downloading it now). I have 3 or 4 other PDFs which I just browsed through, catching the main/basic idea/KEYWORDS. Then I applied my own grey matter :) Forgot to mention: at first I tried to implement a Chinese algorithm based on the stroke filter, without much success (due to my poor understanding of its steps).