Introduction

Image Processing Lab is a simple tool for image processing that includes a number of filters and image-analysis tools available in the AForge.NET framework. It is easy to develop your own filters, integrate them with the code, or use the tools in your own application. A wide range of filters implemented in the AForge.NET framework is demonstrated in the application.

You can create (save and load) your own convolution filters or filters based on standard mathematical morphology operators. A colorized grid makes it very convenient to work with custom convolution filters.

A preview window allows you to view the results of changing filter parameters on the fly. You can scroll an image using the mouse in the preview area. All filters are applied only to the portion of the image currently viewed to speed up preview.

A Photoshop-like histogram allows you to get information about the mean, standard deviation, median, minimum, and maximum values.

The program also allows you to copy images to and paste them from the clipboard, and to save and print images.

Using the Code

Most filters are designed to work with 24bpp RGB images or with grayscale images. In the case of grayscale images, we use PixelFormat.Format8bppIndexed with a color palette of 256 entries. To guarantee that your image is in one of these formats, you can use the following code:
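The original snippet is not reproduced here; the following is a minimal sketch of one way to force an image into 24bpp RGB using plain GDI+ (the helper name is illustrative, not part of IPLab):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

class FormatHelper
{
    // Redraw an arbitrary bitmap into a new 24bpp RGB bitmap.
    // (Illustrative helper; the article's original code is not shown.)
    public static Bitmap EnsureFormat( Bitmap source )
    {
        if ( source.PixelFormat == PixelFormat.Format24bppRgb )
            return source;

        Bitmap clone = new Bitmap( source.Width, source.Height,
            PixelFormat.Format24bppRgb );

        using ( Graphics g = Graphics.FromImage( clone ) )
        {
            g.DrawImage( source, 0, 0, source.Width, source.Height );
        }
        return clone;
    }
}
```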

Suppose you want to apply a series of filters to an image. The straightforward way is to apply the filters one after another, but this becomes inconvenient with three or more filters. All filters implement the IFilter interface, which allows us to create a collection of filters and apply it to an image at once (besides, the collection also saves us from disposing of intermediate images):
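As a sketch (the particular filters chosen here are illustrative; the AForge.Imaging.Filters namespace is assumed):

```csharp
using System.Drawing;
using AForge.Imaging.Filters;

// build a sequence of filters and apply it in one call
FiltersSequence processing = new FiltersSequence( );
processing.Add( new Sepia( ) );                   // convert to sepia tones
processing.Add( new RotateBilinear( 45 ) );       // rotate by 45 degrees
processing.Add( new ResizeBilinear( 320, 240 ) ); // resize to 320x240

// 'image' is assumed to be a 24bpp Bitmap loaded elsewhere
Bitmap result = processing.Apply( image );
```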

HSL Filters

Using the HSL color space is more natural for some kinds of filters. For example, it is not obvious how to adjust the saturation level of an image in the RGB color space, but it can be done easily in the HSL color space:
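A minimal sketch with the SaturationCorrection filter (the adjustment value is illustrative):

```csharp
using System.Drawing;
using AForge.Imaging.Filters;

// increase saturation of all pixels by a fixed amount
SaturationCorrection filter = new SaturationCorrection( 0.15f );

// 'image' is assumed to be a 24bpp Bitmap
Bitmap saturated = filter.Apply( image );
```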

It is possible to get much more interesting results with HSL filtering. For example, we can preserve only a specified range of hue values and desaturate all values outside of that range, producing a black-and-white image with only some regions colored.
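A sketch of such a "color keep" effect with the HSLFiltering filter (the hue range is illustrative; property names are assumed to match the AForge API):

```csharp
using System.Drawing;
using AForge;
using AForge.Imaging.Filters;

// keep hues in the red range (340..20, wrapping around 0)
// and desaturate everything outside of it
HSLFiltering filter = new HSLFiltering( );
filter.Hue = new IntRange( 340, 20 );
filter.UpdateHue = false;        // do not change hue of kept pixels
filter.UpdateLuminance = false;  // keep brightness everywhere

// 'image' is assumed to be a 24bpp Bitmap
Bitmap result = filter.Apply( image );
```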

Mathematical Morphology Filters

Many tasks can be accomplished with mathematical morphology filters. For example, we can reduce noise in binary images using erosion, or separate touching objects with the same filter. Using dilatation, we can grow regions of interest in the image. One of the most interesting morphological operators is known as Hit & Miss; all other morphological operators can be expressed in terms of it. For example, we can use it to search for particular structures in the image:
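A sketch of a Hit & Miss search (the structuring element here, which matches isolated white pixels, is illustrative):

```csharp
using System.Drawing;
using AForge.Imaging.Filters;

// structuring element: 1 = must be white, 0 = must be black, -1 = don't care
short[,] se = new short[,] {
    { 0, 0, 0 },
    { 0, 1, 0 },
    { 0, 0, 0 }
};

HitAndMiss filter = new HitAndMiss( se );

// 'binaryImage' is an assumed 8bpp grayscale (binary) Bitmap
Bitmap matches = filter.Apply( binaryImage );
```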

Blob Counter

The blob counter is a very useful feature that can be applied in many different applications. What does it do? It counts objects in a binary image and can extract them. The idea comes from "connected components labeling," a filter that colors each separate object with a different color. Let's look at a small sample:
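A minimal sketch of the BlobCounter class (from AForge.Imaging), counting objects and retrieving their bounding rectangles:

```csharp
using System;
using System.Drawing;
using AForge.Imaging;

// count objects in a binarized image
BlobCounter blobCounter = new BlobCounter( );
blobCounter.ProcessImage( binaryImage ); // 'binaryImage' is an assumed binary Bitmap

Console.WriteLine( "Objects found: " + blobCounter.ObjectsCount );

// bounding rectangle of each detected object
foreach ( Rectangle rect in blobCounter.GetObjectRectangles( ) )
{
    Console.WriteLine( rect );
}
```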

YCbCr Filtering

YCbCr filters provide functionality similar to the RGB and HSL filters. The YCbCr linear correction filter performs like its analogues from the other color spaces, but operates on the Y, Cb, and Cr components, giving us additional convenient ways of color correction. The next small sample demonstrates the YCbCr linear filter together with in-place filtering, a feature that filters the source image directly instead of creating a new result image:
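A sketch along these lines (the range values are illustrative; the range type is assumed to match recent AForge versions):

```csharp
using AForge;
using AForge.Imaging.Filters;

// stretch the Cb channel a little to shift the image's colors
YCbCrLinear filter = new YCbCrLinear( );
filter.InCb = new Range( -0.276f, 0.163f );

// in-place filtering: the source Bitmap itself is modified,
// no new result image is allocated
filter.ApplyInPlace( image ); // 'image' is an assumed 24bpp Bitmap
```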

Perlin Noise Filters

Perlin noise has many applications, and one of the most interesting is the creation of different effects, like marble, wood, clouds, etc. Applying such effects to images is done in two steps: first generate the effect texture, then apply the texture to the particular image. Texture generators are placed in the Textures namespace of the library, which contains generators for effects such as clouds, wood, marble, labyrinth, and textile. All these texture generators implement the ITextureGenerator interface. For applying textures to images, there are three filters. The first one, Texturer, textures images. The second, TexturedFilter, applies any other filter to an image using a texture as a mask. The third, TexturedMerge, merges two images using a texture as a mask.
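A minimal sketch of the two steps with the Texturer filter and the clouds generator:

```csharp
using AForge.Imaging.Filters;
using AForge.Imaging.Textures;

// step 1: a texture generator for the clouds effect
ITextureGenerator generator = new CloudsTexture( );

// step 2: blend the generated texture into the image
Texturer texturer = new Texturer( generator );
texturer.ApplyInPlace( image ); // 'image' is an assumed 24bpp Bitmap
```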

AForge.NET Framework

The Image Processing Lab application is based on the AForge.NET framework, which provides all the filters and image processing routines available in the application. To get more information about the framework, you may read the dedicated article on The Code Project or visit the project's home page, where you can get all the latest information about it, participate in a discussion group or submit issues or requests for enhancements.

Conclusion

I suppose the code may be interesting to anyone who would like to start studying image processing, or to filter/effect developers. As for me, I'll use the tool for my further research in computer vision. Besides, the library helped me very much in successfully finishing my bachelor's thesis.

History

[08.03.2007] - Version 2.4.0

Application converted to .NET 2.0;

Integrated with AForge.NET framework.

[13.06.2006] - Version 2.3.0

In-place filter interface introduced, which allows applying a filter directly to the source image;


About the Author

Started software development at about 15 years old, and it seems it has now lasted most of my life. Fortunately did not spend too much time with Z80 and BK0010 and switched to 8086 and beyond. Similar with programming languages: luckily managed to get away from BASIC and Pascal to things like Assembler, C, C++ and then C#. Apart from daily programming for food, I do it also as a hobby, where I mostly enjoy areas like Computer Vision, Robotics and AI. This led to some open source stuff like AForge.NET, Computer Vision Sandbox, cam2web, ANNT, etc.

Away from computers I am just a man who loves his family, enjoys traveling, some sports, a bit of books, a bit of movies, and a mixture of everything else. I have always wanted to learn to play guitar, but it seems six strings are much harder than a few dozen keyboard keys. Will keep progressing...

Hi Andrew... Great job...
I read your article and found that you convert images into 24-bit-per-pixel format. You used a simple way to draw images in this format. I need an algorithm to convert image formats to each other: 32-, 24-, 16-, and 8-bit formats. I have found some of them by searching the net. Maybe you can help me; I will be thankful for any help from your side.
Thanks in advance

I don't have a lot of experience in this particular area of image processing. There are many different techniques for color reduction/quantization, with and without dithering. I would suggest searching for them on the Internet.

When using your gamma correction filter, which is applicable to RGB and gray images, it assumes that the image has an implicit gamma of 1. This is not what other pieces of software do.

In particular, a gray image without any extra color information (/DeviceGray) and a gray image with a gamma curve of 2.2 will look identical when displayed by, say, Adobe Reader. Displayed images are implicitly adapted to common sRGB monitors.

In your filter, assigning a gamma of 2.2 to a gray image completely changes how it looks on the monitor. Maybe the issue is linguistic, and your filter is not "assigning" a gamma but "correcting" a gamma. In any case, the result is not what other software would expect.

(No need to repeat all those nice words about IPLab, take it for granted.)
Regards,
Ignacio

PS:
By the way, I have expanded your filter, creating a parametric gamma curve filter, as specified by the ICC. Just waiting to hear from you, and the rest, on how to proceed with the (implicit) gamma. After that, I'll give it to you, in case you want to include it in IPLab.

I must say that AForge.Imaging has a rather simple and straightforward implementation of gamma correction. All the routines directly update pixel values, so monitor properties are not taken into account.
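For reference, a direct pixel-value gamma correction of this kind is typically implemented with a 256-entry lookup table, along the lines of the following sketch (this illustrates the general technique, not necessarily AForge's exact convention):

```csharp
using System;

// build a lookup table mapping input intensity to gamma-corrected intensity
double gamma = 2.2;
byte[] table = new byte[256];

for ( int i = 0; i < 256; i++ )
{
    // normalize to [0, 1], apply the power curve, scale back to [0, 255]
    double corrected = 255.0 * Math.Pow( i / 255.0, 1.0 / gamma );
    table[i] = (byte) Math.Min( 255, (int) ( corrected + 0.5 ) );
}

// each pixel byte is then simply replaced: pixel = table[pixel];
```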

If you have any updates to the AForge.NET classes and would like to contribute to this project, please feel free to send them to me. I will review them and include them in the development trunk as soon as possible. And of course you will get credit on the project's home page.

The first parameter is the mouse point. The two other parameters are output parameters: the image coordinate and the screen coordinate. Image coordinates are real image coordinates (regardless of zoom factor), which are used for cropping. Screen coordinates are used to draw the selection rectangle.

I need to calculate the area of the white blob, and also the vertical size of the blob and horizontal distances from the center to the outside of the blob (which I could do by calculating the centroid if I can get a set of the outer points).

I also have a more detailed image process that I'd like to run through this, which outputs images like http://www.techautos.com/downloads/DetectionScopeProf.jpg, with the same goal: get info on the white mass seen (though here it's a bit less defined, as the edges are a bit gray, etc.).

In case anyone else is doing something similar, the solution to this was actually simpler than I imagined.

I used a threshold filter to binarize the image, then the blob extractor, which gave me several parts. My code then sorted through the blobs, identifying the correct one by size/shape/location, and then just looked at the outer pixels in each line, which gave me all the info I needed. Since I'm going through each pixel in the blob anyway, I kept track of white/black pixels along the way, so at the end that gives me the area too. It's working quite nicely.
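For anyone following the same route, the per-line scan described above might look roughly like this (hypothetical helper code, using the slow GetPixel for clarity; 'blob' is an assumed binary Bitmap of the extracted object):

```csharp
using System.Drawing;

int area = 0;

for ( int y = 0; y < blob.Height; y++ )
{
    int left = -1, right = -1;

    for ( int x = 0; x < blob.Width; x++ )
    {
        // treat any non-black pixel as part of the object
        if ( blob.GetPixel( x, y ).R > 0 )
        {
            if ( left == -1 ) left = x;
            right = x;
            area++;   // running count of white pixels = blob area
        }
    }
    // left/right give the horizontal extent of the object on row y
}
```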

Hi again, Andrew. The project I told you about before is almost ready, but I'm facing a problem that I can't figure out.

I have created a filter based on your EuclideanColorFiltering, which binarizes the image and also allows choosing two more RGB spheres (one to add colors to the main sphere, and one to subtract). I called the filter ColorSlice.cs

I have a ColorSliceForm to provide an interface for that, and in that form I have filterPreview.

My problem is, when I choose the extra colors, my filter preview shows the correct result, but when I apply the filter, it ignores the extra colors.

And I call it from ColorSliceForm.cs (I have to call it from ColorSliceForm, since the form must lose focus so I can pick colors from the image below; after that, the OK button doesn't work, since ImageDoc.cs stops at the line where it calls the form and waits for the result (if ( form.ShowDialog( ) == DialogResult.OK )) before going to the mouse event methods). The call is like this:

The image I am using is always 24bpp, and I debugged it with and without FormatImageMethod. It gets to the Opening filter as a 24bpp image either way, but I get the same error. Could it be that the Opening filter is expecting a binary image?

Hi
I was trying to figure out how to change which ImageDoc tab is in focus (through a message box that appears after a filter creates a new image), but haven't been able to do it so far. Any idea how this can be accomplished?

Sorry if this is a lame question, I'm still learning to program in C# and VS.

Could someone please post/write simple instructions on how to use IPLab with MS Visual Studio 2005?

I am new to both VS and IPLab but very interested in using both. There should be a way to just incorporate the IPLab classes into VS so I can drag and drop a new object (e.g. a histogram form) into a new application.

I apologize if this is too basic a question. IPLab seems like a very elaborate image processing library that I would like to utilize but can't yet (I have already spent a few hours on it).

Hi Andrew.
I have a little question about the AForge library.
I'm using your great component to discover shapes in a scanned picture. The picture is a test form which is filled in by students, and I want to find out which answers they gave.
The answer field is a simple square box (looks like a checkbox).
I'm using the blob extractor, but this method gives me an array of rectangle objects of different dimensions.
I was expecting the method to find the shapes sorted along the X or Y dimension.
Why are the shapes not discovered in regular, increasing coordinate order, so that the first discovered shape is the first shape on the scanned sheet of paper?