This paper describes a simple, “pre-cortical” visual attention model that does not take image orientations into account. We compute rarity-based saliency maps and then describe the relation between texture and visual attention. Finally, we decompose the image into several textures of differing regularity. Our purpose is to compress textures within images using small repeating patterns.
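As an illustration of the rarity idea mentioned above, here is a minimal sketch, not taken from the paper, in which each pixel is scored by the self-information of its intensity: rare intensities receive high saliency, common ones low. The helper name `rarity_saliency` and the histogram-based probability estimate are assumptions for this sketch.

```python
import numpy as np

def rarity_saliency(image, bins=256):
    """Score each pixel by the self-information (rarity) of its intensity.

    A hypothetical sketch of rarity-based saliency: estimate the intensity
    distribution with a histogram, then map each pixel to -log2(p(intensity)).
    """
    img = np.asarray(image, dtype=np.float64)
    hist, edges = np.histogram(img, bins=bins,
                               range=(img.min(), img.max() + 1e-9))
    prob = hist / hist.sum()
    # Bin index of each pixel intensity
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, bins - 1)
    with np.errstate(divide="ignore"):
        info = -np.log2(prob)
    info[np.isinf(info)] = 0.0  # empty bins are never indexed; zero them out
    sal = info[idx]
    # Normalize to [0, 1] so the rarest intensity has saliency 1
    return sal / sal.max() if sal.max() > 0 else sal
```

On a nearly uniform image with a single outlier pixel, the outlier receives the maximum saliency, which matches the intuition that attention is drawn to what is rare.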