
{{Note|This is an entry in the [[Nokia Imaging Wiki Competition 2013Q3]]}}


{{FeaturedArticle|timestamp=20131006}}

{{Abstract|This article explains how to use the Nokia Imaging SDK to create powerful image editing applications. In this particular application, the user can select part of the picture by 'painting' over it and apply a filter only to the painted region. The filtered region can then be blended back onto the image to create interesting results.}}


Note: when testing the application in the emulator, keep in mind that there is a bug when choosing photos from the library: until you manually open the "photos" application once, calling the {{Icode|PhotoChooserTask}} will make the library appear empty. It is not empty, but you need to start the "photos" application once prior to choosing an image from the library.



{{Note|Feel free to use the source code attached to this article, for the most recent version refer to the open source repository over at [https://github.com/tpetrina/ImagingApps GitHub].}}












Latest revision as of 01:33, 14 October 2013



The Nokia Imaging SDK brings a set of powerful filters which, either alone or combined, can create wonderful images. Applications that rely on filtering technology (e.g. Instagram) apply one or several filters to entire images in order to make them more appealing. This application takes a different approach: the user selects an arbitrary region and a filter is applied to just that region. Afterwards, that part is blended back into the image, so the result can feature one or more differently filtered regions. In this article we will show the technical details behind building such an application, along with several examples of interesting effects.

Since we will talk a lot about blending in this article, a definition is needed. Blending is the process of merging two images or, in the case of advanced photo manipulation applications, merging two layers. Since both images (or layers) are made of pixels, the function that takes the two pixels occupying the same position and returns a resulting pixel is called a "blending function". There are several well-known blending modes; you can read up on them in the Wikipedia article on blend modes. There is also something called "alpha blending", which is used when dealing with transparent images; it is not covered in this article.
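To make the definition concrete, here is a minimal sketch of a blending function and of applying it pixel by pixel. Python and the flat image representation are used purely for illustration; the Darken function shown is one of the blending modes mentioned later in the article:

```python
def darken(a, b):
    """'Darken' blending function: keep the darker of the two channel values."""
    return min(a, b)

def blend(image1, image2, fn):
    """Blend two images (lists of (r, g, b) tuples with channels in 0..1)
    by applying the blending function channel by channel."""
    return [
        tuple(fn(c1, c2) for c1, c2 in zip(p1, p2))
        for p1, p2 in zip(image1, image2)
    ]

# Blending with pure white (1, 1, 1) leaves a pixel unchanged under Darken.
img1 = [(0.5, 0.5, 0.5), (1.0, 0.0, 0.25)]
img2 = [(1.0, 1.0, 1.0), (0.5, 0.5, 0.5)]
result = blend(img1, img2, darken)  # [(0.5, 0.5, 0.5), (0.5, 0.0, 0.25)]
```

Any function with this shape can be plugged in; the classic modes discussed below differ only in the per-channel formula.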



For this particular application, the default Windows Phone 8 template was used. After adding the necessary NuGet packages and adjusting the build configuration (specific to using the Nokia Imaging SDK), we are ready to start using the SDK. The idea behind this application is to leverage the filters that come with the Nokia Imaging SDK to create stunning and surreal images similar to other applications in the marketplace, but with a twist: let the user pick which parts of the image are filtered and with what filter.

Here is one example:

* User picks an image from the library or takes a picture
* User picks the region for filter application
* The final result using MagicPenFilter and Darken blending mode

Unlike traditional filtering applications, the user can choose arbitrary regions for filter application. By applying filters to specific regions, those regions can be highlighted. Here are some more examples:

The application consists of just one main editing page (named MainPage.xaml) which contains the entire application's functionality. The logic is implemented in the backing view model (named MainViewModel). The main functionalities are:

* Pick an image from the library or take a photo.

* Choose one of the selection tools (the default is the brush selection tool).

* Discard all changes.

* Choose a filter to apply to the selected region.

* Choose a blending mode.

* Change the overall shade of gray for the replacement pixels (more on that further down).

* Save the resulting photo to the phone or share it via the built-in sharing capabilities.

Picking an image is done through the standard PhotoChooserTask, and the resulting stream is cached in the application's isolated storage. The next time the user opens the application, the last image from the previous session is used as the starting image. The choosing part can be extended to allow picking files from a URL, SkyDrive or any other cloud storage provider.

Once the image is loaded, the user can start defining the region using the built-in selection tools. Three are currently implemented:

* Brush selection tool - the user can touch the screen to select anything.

* Magic wand selection tool - by touching a particular area, pixels that are similar to the chosen pixel are selected.

* Rectangular selection tool - the user draws a rectangle over the image which is then used as a brush mask.

Here are the differences between the selection tools:

* Brush selection tool

* Magic wand selection tool

* Rectangular selection tool

By default, the brush selection tool is used. The user can select another tool by pulling down a "selection tools" overlay. This is done by swiping down from the top edge of the phone (the edge above the image), which brings down the "selection tools" overlay seen in the image to the right. Once a tool is selected, the user can start applying it to the selected image (if any). The application is designed with extensibility in mind and allows for easy creation of additional selection tools. Zooming in or inverting the selection can also be implemented using the same interfaces. All tools inherit from the ITool interface, while selection tools implement an additional interface on top of it, described below.

Once the user picks an image, a region is defined by applying selection tools. Even though the undo feature is missing in this early iteration, it is relatively easy to add by separating all selections into "layers". Once the user is satisfied with the selection, the currently selected filter is applied to the defined region and blended back onto the image by pressing the "apply" icon (the one with the check). As mentioned before, blending is the process of merging two images to produce a third one. In this case, one image is the original image with part of the image "carved out" and replaced with white pixels. The second image contains those pixels that have been carved out, with the rest of the pixels turned to pure white. The replacement pixels can be adjusted using the two sliders on the main page, which will be described in more detail below.

Even after the filter and blending are applied, the user can change both the filter used for the filtered region and the blending parameters. This allows for "fine tuning" and experimentation. Once the user is satisfied with the results, the image can either be shared using the built-in sharing capabilities or saved to the media library.

The first two properties (ImageWidth and ImageHeight) are used for the transformation from screen space to image space. In most cases the underlying image will be significantly larger than the UI control used to represent it. The last property is necessary since a tool might want to inspect the image for its own purposes. The three methods in the interface are used for analyzing user manipulation.

Using this interface you might implement a crop tool, a color picker tool and read-only tools in general. To implement a selection tool that generates a mask, you need several additional parameters. To accommodate that, another interface is added on top of the ITool interface.

Tools that inherit from this interface have an input pixels array, which should be left unmodified, and an output pixels array which they use for building the result. MaskBuffer is used to remember which pixels have been manipulated. An undo stack could easily be created by chaining tools and putting them on a stack, but due to lack of time it has not been implemented at this point. We can now see how the individual tools have been implemented.

"Solidness" of the brush - right now it is a filled circle, but it could be sparsely filled, hollow or something different (think of the graffiti brush from MS Paint).

When the user touches the image, the center of the brush is positioned to match the center of the touch surface. The user can drag around and "paint" the area. Unfortunately, due to the missing zoom capability and the inability to customize the brush size, it may be hard to precisely paint the desired region. This brush gives a natural feel of picking the region.

The brush is defined as a mask: a byte[50*50] array filled with values from 0 to 255. The value 0 means that the pixel is to be left alone; other values signify the "replacement strength". The brush is generated via the following algorithm:
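The actual generator ships with the attached source; a minimal sketch, assuming a 50x50 circular brush whose strength fades linearly from full at the center to zero at the edge, could look like this:

```python
import math

def generate_brush(size=50):
    """Build a size*size mask of values 0..255: full strength at the
    center of the circle, fading linearly to 0 at the radius.
    Pixels outside the circle stay 0 (left alone)."""
    mask = bytearray(size * size)
    radius = size / 2.0
    for y in range(size):
        for x in range(size):
            # Distance from this pixel's center to the brush center.
            d = math.hypot(x + 0.5 - radius, y + 0.5 - radius)
            if d < radius:
                mask[y * size + x] = int(255 * (1.0 - d / radius))
    return mask

brush = generate_brush()
```

The linear falloff produces exactly the kind of soft fade-out the article mentions next.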

Note that the algorithm generates a soft fade-out which could be used for alpha blending, but this is unused in the current implementation. The brush is applied on a separate thread. Since the code is rather large and is provided as an attachment to this article, we will just take a look at the underlying algorithm. When the user interacts using the tool, all points and movements are added to a queue which is processed in another thread. This keeps the application responsive when the user swipes across the screen, since the brush applying algorithm is fast compared with generating the final image and refreshing the UI.

In the separate thread, the _touchQueue is inspected and values are pulled out. Once there are no values in the queue (which means that the user is no longer moving the finger across the screen), the brush is applied to the user-selected points. The brush is clipped to the image and every pixel of the brush is processed (all 2500 of them in this case). If the brush has a nonzero value at the given pixel, the pixel is moved from the Source to the Target and TargetImage is updated to reflect this process. Here is a sketch of the algorithm:

<pre>
for each point to apply
    clip brush to the image
    for each pixel that is inside the image
        if brushmask != 0
            copy pixel from source to target
            make display image pixel white
</pre>

After the algorithm completes, Target will contain only those pixels where the brush has a nonzero value, and TargetImage will have white pixels instead of the old ones. The remaining pixels (those where the brush has zero value) are left intact in TargetImage and are equal to 0 (black) in Target.

There are several other optimizations that could be used to speed up the algorithm, but then again, this is not a professional application for image manipulation and the performance issues should be negligible on high-end phones.

This selection tool is designed to select an area of "similar looking" pixels. For example, you might want to select a part of the image which has the same color but different lighting, e.g. a door, table, field, sky, water, etc. Determining how similar two pixels are is done by calculating the distance between their UV components: if the distance is below some threshold value, they are considered "similar", otherwise they are not. The UV components are part of the YUV color space, so the RGB values have to be converted using the transformation formulas. The distance is the standard two-dimensional Euclidean distance.
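The article does not reproduce the conversion code; a sketch using the common BT.601 RGB-to-YUV weights (the exact constants used by the app are my assumption) might be:

```python
import math

def uv(r, g, b):
    """Return the U and V (chrominance) components of an RGB pixel
    using BT.601 weights; inputs are 0..255."""
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return u, v

def uv_distance(p1, p2):
    """Euclidean distance between the UV components of two (r, g, b) pixels."""
    u1, v1 = uv(*p1)
    u2, v2 = uv(*p2)
    return math.hypot(u1 - u2, v1 - v2)
```

Note that two shades of gray have nearly identical UV components, which is exactly why this metric treats "same color, different lighting" as similar.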

The algorithm is a queue-based flood fill with a fixed threshold value. Ideally you would want to be able to adjust the threshold, but that is not possible in the current implementation.
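The full implementation is attached to the article; a minimal sketch of such a queue-based flood fill (names and the pluggable `similar` predicate are illustrative) could look like:

```python
from collections import deque

def flood_fill(pixels, width, height, start, similar):
    """Queue-based flood fill: starting from `start` (x, y), select every
    4-connected pixel judged `similar` to the start pixel. `pixels` is a
    flat row-major list; returns a byte mask (255 = selected)."""
    sx, sy = start
    seed = pixels[sy * width + sx]
    mask = bytearray(width * height)
    mask[sy * width + sx] = 255
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height:
                i = ny * width + nx
                if mask[i] == 0 and similar(seed, pixels[i]):
                    mask[i] = 255
                    queue.append((nx, ny))
    return mask
```

In the application, `similar` would be the UV-distance test compared against the fixed threshold, and the resulting mask plays the role of MaskBuffer.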

This is probably the classic selection tool, familiar to anyone who has ever used image processing applications. The idea is simple: the user touches the screen at one point and drags the finger around to draw the desired rectangle. Once the user is satisfied, the finger is lifted and the area under the rectangle is used for the selection. To enable a region preview during the dragging process, another interface is used.

The idea behind this interface is that the actual selection is done after the user finishes the current manipulation. During the dragging period a UI element is used to convey information about the resulting selection. Event handlers are used to signal the parent UI component that the underlying visual part has changed and needs to be updated. The tool spawns the UI element that represents the action for this tool. Since positioning in the UI is not something that can be done from the tool itself, the PositionChanged event is used to signal that the position has changed. Positioning can be done using Margin (if the parent element is a Grid) or, in case the parent is a Canvas, using Canvas.SetLeft and Canvas.SetTop.

All the logic is implemented in the MainViewModel class. It contains an EditingSession instance (field _editingSession) which is responsible for blending layers. Applying the filter to a secondary layer (the one carved out using one or more selection tools) is done using a secondary EditingSession and the result is then both shown in the preview image and blended back into the main session.

Not all filters are added in this implementation, and for those that require parameters, the parameters are hard-coded at the application's launch. A future version of this application will include all basic filters with a special UI for adjusting parameters, and also some complex effects. On the other hand, all blend functions are supported and can be tested easily.

The filter and blending are applied when the user clicks the apply button, or when the user changes the filter or any blending parameter. To match the parameters necessary for the selection tools to work, the following fields and properties are used:

* WriteableBitmap MainImage - used for the UI; binds to SelectedTool.TargetImage.

* int[] _oldPixels - keeps the original pixels from the selected image; maps to SelectedTool.Source.

* int[] _buffer - holds the selected region; maps to SelectedTool.Target.

* byte[] _maskBuffer - marks the selection; maps to SelectedTool.MaskBuffer. In some other implementations this would never be part of the main application logic: every time you apply a tool you would get another mask specific to that selection. This would allow applying more than one filter to the same image, but in different regions.

First the selected filter is applied to the selected region. Then the original image is adjusted to keep the original pixels everywhere except the selected region, and the result of the first editing session is applied to the adjusted original image. Two properties used in the code above have not yet been explained: byte Grayscale and byte Grayscale2. Their default value is 255, which yields white in both adjustments; they can be set via the sliders on the main page and will be explained in the section below.

Blending itself is a filter that can be created using the FilterFactory.CreateBlendFilter method. This is great since blending can become a part of the filter stack and you can undo its effect.

As mentioned before, blending is the process of creating a new image from two input images. A blending function can be described as a function that takes two pixels as input and returns a third one:

<pre>result = f(pixel1, pixel2)</pre>

So any function that takes two pixels and returns a third is a blending function. However, there are several classic blending functions present in practically all modern image processing applications, and they also come with the Nokia Imaging SDK.
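For reference, a few of these classic blending functions expressed on channel values normalized to 0..1 (standard textbook definitions, not the SDK's internal code):

```python
def multiply(a, b):
    """Darkens: white (1) is neutral, black (0) forces black."""
    return a * b

def screen(a, b):
    """Lightens: black (0) is neutral, white (1) forces white."""
    return 1 - (1 - a) * (1 - b)

def overlay(a, b):
    """Multiply where the bottom layer is dark, screen where it is light."""
    return 2 * a * b if a < 0.5 else 1 - 2 * (1 - a) * (1 - b)

def hard_light(a, b):
    """Overlay with the layer order reversed."""
    return overlay(b, a)
```

These four alone already show why the order of the two images matters for some modes (Overlay, Hard Light) and not for others (Multiply, Screen).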

Before explaining each individual blending function, let's see how this application utilizes blending. As we have seen, the user selection is copied to another image. Here is how that looks:

* Region selection in the application

* Adjusted original image - the user selection is painted white

* Selected region with filter applied - the unused pixels are white

The rightmost image above is blended on top of the middle image. The order of the images matters most of the time, and swapping their order can yield different results under the same blend function. As mentioned before, the application allows setting the shade of gray for the filling pixels in the two pictures on the right. This has a strong impact on the filters, since those pixels take part in the blending (the user selection splits the image in two parts). By selecting a color, a gradient image or another image as the filler instead of adjusting the shade of gray, different results can be achieved.

Please note that in the formulas below each pixel channel is represented by values from 0 to 1 rather than from 0 to 255. Let's now take a look at each individual blending function.

=== Color burn blend function ===

This blend function "divides the inverted bottom layer by the top layer, and then inverts the result" (from Wikipedia).

[[File:blending sepia colorburn.jpg|none|thumb|400px|]]

The order of images is important. Smaller values of the upper slider darken the image, while its maximum value has no effect on the unselected region. The lower slider increases the contrast.

=== Color dodge blend function ===

This blend function "divides the bottom layer by the inverted top layer" (from Wikipedia).

[[File:blending sepia colordodge.jpg|none|thumb|400px|]]

The order of images is important. An upper slider value of 0 has no effect on the unselected part, while increasing the value lights up the unselected part. The lower slider darkens the selected region when the value is closer to 0 and lightens it as the value increases.

=== Overlay blend function ===

This blend function combines the Multiply and Screen functions: it darkens the resulting image where the first image is darker and lightens it where the first image is lighter. With this filter it is easy to control dark/light levels in the selected and unselected regions.

[[File:blending sepia overlay.jpg|none|thumb|400px|]]

The order of images is important. Values of 0.5 for both sliders have a negligible effect. Smaller values of the upper slider darken the unselected region while higher values lighten it. The lower slider behaves the same way, but affects the selected region.

=== Soft light blend function ===

This blending function is also a combination of the Multiply and Screen blend functions and can be used to control how light or dark each respective part of the image is. The order of images is important and different implementations give different results. The top slider will never turn the unselected region black or white even at maximum values; however, the lower slider can turn the selected region black or white.

[[File:blending sepia softlight.jpg|none|thumb|400px|]]

=== Screen blend function ===

With Screen, the channel values of the two layers are inverted, multiplied, and inverted again:

<pre>f(a, b) = 1 - (1 - a)*(1 - b)</pre>

[[File:blending sepia screen.jpg|none|thumb|400px|]]

The order of images is not important, and black in either image has no effect. A white pixel in either image gives a white pixel as the result.

=== Hard light blend function ===

This blend function is equivalent to the Overlay blend function, but with the layer order reversed.

[[File:blending sepia hardlight.jpg|none|thumb|400px|]]

As with Overlay, the order is important and values of 0.5 for both sliders appear to have no effect. The top slider can turn the unselected region black if the value is 0 and white if the value is 255. The bottom slider can never turn the selected region completely black or white.

=== Difference and Exclusion blend functions ===

These two blending functions are very similar. Difference subtracts the second pixel from the first (or vice versa, so the result is positive). Exclusion is similar, but has lower contrast. The order of images is not important. A slider value of 0 means no change, while a slider value of 255 inverts the pixels (the upper slider controls the unselected part, the lower slider the selected part).
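On normalized channel values the two functions can be written as (standard definitions, shown for reference):

```python
def difference(a, b):
    """Absolute difference; identical layers give black."""
    return abs(a - b)

def exclusion(a, b):
    """Like Difference but with lower contrast in the midtones."""
    return a + b - 2 * a * b
```

Note that blending a channel with 0 leaves it unchanged and blending with 1 inverts it, which matches the slider behaviour described above.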

The results can be scary, creepy or plain psychedelic depending on which two images are blended.

As a bonus, in the attached source code you will find another application named Blender. Blender allows you to take two photos and blend them in various ways. You can test the blending modes on real photographs using this application, and both the Color and Hue filters will make much more sense. I will not dwell on the implementation details since the application is quite simple: all the logic happens in a few lines.

Difference can be used to determine whether two pictures are the same, or for alignment. So let's compare the results using Lighten and Pluslighten. The darker the pixels are, the more similar the two pictures are at that spot.
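A comparison along these lines can be sketched as computing a difference image and measuring how dark it is (an illustrative helper, not taken from the attached code):

```python
def difference_image(img1, img2):
    """Per-pixel absolute difference of two grayscale images (values 0..255)."""
    return [abs(a - b) for a, b in zip(img1, img2)]

def mean_difference(img1, img2):
    """Average difference; 0 means the images are identical."""
    diff = difference_image(img1, img2)
    return sum(diff) / len(diff)
```

Identical images give a completely black difference image and a score of 0; the higher the score, the more the pictures diverge.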

We have seen all the different ways of blending images using the Nokia Imaging SDK. At the beginning of this article there was a mention of one more technique: alpha blending. While all blending functions described here operate on either RGB values or some other color space, alpha blending takes the alpha channel into account. The alpha channel describes transparency. The photos you take every day don't have an alpha channel, since every pixel is represented just by its color.

So where do we get this alpha channel, you might ask? The answer is in computer-generated images and games. For example, when applying a watermark or adding text or a logo to a picture, you use the alpha channel to describe which parts of the image are transparent and which are opaque. In games, transparent textures are used to represent transparent materials like glass, bottles, etc. Blending functions that take alpha into account differ from those described above and are a topic of their own.

The Nokia Imaging SDK is a powerful tool that can be harnessed in different ways. I hope that this approach to building complex compositions yields interesting applications that go beyond the simple "filter and share" applications. Since the application described here applies filters to just a certain part of the image (or, conversely, everywhere ''but'' some special part), the resulting images can have a surreal or hyper-real feel.

A further optimization for these types of applications is delegating all the work on the actual images to another thread and working only in preview mode. This would speed up image processing and reduce the strain on the phone, which in turn won't drain the battery as much.
=== Source Code ===

Download [[File:BlendingApplications.zip|thumb|Source Code]]