iOS image processing with the Accelerate framework

Some time ago, my friend John Fox asked me how to reproduce a blurring effect on an image with the iOS SDK. Core Image on iOS does not provide that effect. You can find on the Internet a couple of solutions for iOS that perform the convolution as a matrix multiplication. That’s an OK approach, but it does not take advantage of hardware acceleration.

Here, I briefly show how to apply a blurring filter to any image using the Accelerate framework and vImage.

Before iOS 5, image processing on iOS was hard. You were required to build your own set of basic tools to perform convolution, FFT, DCT, scaling, rotation, histogram equalization and so on. That was amazingly difficult and time consuming, especially because you had to keep in mind the hardware limitations of your device. Many developers opted to send an image to a remote server, process it there, and send the result back to the iPhone. Obviously, that solution was only suitable for special and limited cases. If you wanted real-time processing, you really had to fight hard against the clock cycles.

Nowadays, iOS 5 allows you to do image processing operations directly on the device. You can achieve this using either Core Image or the Accelerate framework. Both frameworks were already available on the Mac, but they have only recently shipped with iOS.

Core Image offers a set of predefined image processing filters. Unfortunately, unlike the Mac version, the iOS version does not offer the possibility to build custom filters. Additionally, it does not provide a blurring filter (and that’s exactly what my friend John was looking for). So, the alternative is to use the Accelerate framework. Let’s see how to do it.

vImage

The basic data structure used by the Accelerate framework to process images is the vImage buffer (vImage_Buffer). It’s essentially a C structure containing four elements: a pointer to the image data (either the pixel intensities or the values of the red, green, blue and alpha channels), the image height and width, and the number of bytes in each image row.
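As a reference, its declaration in the Accelerate headers looks roughly like this:

```objc
typedef struct vImage_Buffer {
    void             *data;       // pointer to the top-left pixel of the image
    vImagePixelCount  height;     // number of rows in the image
    vImagePixelCount  width;      // number of pixels in each row
    size_t            rowBytes;   // number of bytes in each row (the stride)
} vImage_Buffer;
```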

Every image processing function of the Accelerate framework takes a vImage buffer as input and/or output. So, before applying any processing to an image, you have to create a vImage buffer. Core Graphics can help with that. The quickest way to get started is to grab a CGImage from a UIImage:
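A minimal sketch (the image name is just a placeholder for whatever image you want to blur):

```objc
#import <UIKit/UIKit.h>
#import <Accelerate/Accelerate.h>

// Load the source image and grab its CGImage representation.
UIImage *sourceImage = [UIImage imageNamed:@"myImage.png"];
CGImageRef cgImage = sourceImage.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
```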

Now, you need to extract from this CGImage the pixels that will constitute the buffer of our vImage. You can do that in different ways. Here, I am showing you the simplest approach, i.e. extracting the pixel intensities (the gray levels) from the image (you can apply a similar approach to color images).

Here is how to do it:
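The idea, sketched below, is to draw the CGImage into an 8-bit grayscale bitmap context and wrap the resulting pixels in a vImage_Buffer:

```objc
// Draw the CGImage into an 8-bit, single-channel (grayscale) bitmap context.
// The backing memory of the context becomes the data of our vImage buffer.
void *srcData = malloc(width * height);
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayContext = CGBitmapContextCreate(srcData, width, height,
                                                 8,        // bits per component
                                                 width,    // bytes per row
                                                 graySpace,
                                                 (CGBitmapInfo)kCGImageAlphaNone);
CGContextDrawImage(grayContext, CGRectMake(0.0, 0.0, width, height), cgImage);
CGContextRelease(grayContext);
CGColorSpaceRelease(graySpace);

// Wrap the pixel intensities in a vImage_Buffer.
vImage_Buffer srcBuffer;
srcBuffer.data     = srcData;
srcBuffer.width    = width;
srcBuffer.height   = height;
srcBuffer.rowBytes = width;   // 1 byte per pixel, no row padding
```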

Now, the srcBuffer is our vImage and it’s ready to be processed.

The convolution can be performed using one of the convolution functions offered by the Accelerate framework. Since we are using a gray-level image, I will use the vImageConvolve_Planar8 function. Here, Planar8 means that the image is treated as a single plane, a simple matrix in which each element represents a pixel intensity.

Before the convolution, you need to prepare (allocate) some memory space for the final result:
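For example, with the same geometry as the source buffer:

```objc
// The destination buffer has the same width, height and stride as the source.
vImage_Buffer dstBuffer;
dstBuffer.data     = malloc(width * height);
dstBuffer.width    = width;
dstBuffer.height   = height;
dstBuffer.rowBytes = width;
```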

Then, you need to create a blurring filter:
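The simplest blurring kernel is a box (averaging) filter: all the coefficients are 1 and the divisor is the number of coefficients, so every output pixel becomes the mean of its neighborhood. The kernel size used here is just an example; it must be an odd number:

```objc
// Build a kernelSize x kernelSize box-blur kernel filled with 1s.
uint32_t kernelSize = 7;
int16_t *kernel = (int16_t *)malloc(kernelSize * kernelSize * sizeof(int16_t));
for (uint32_t i = 0; i < kernelSize * kernelSize; i++) {
    kernel[i] = 1;
}
// The divisor normalizes the sum of the neighborhood back to the 0-255 range.
int32_t divisor = (int32_t)(kernelSize * kernelSize);
```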

And finally, convolve the image with the filter:
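A sketch of the call (I use the kvImageEdgeExtend flag here to handle the image borders; other edge-handling strategies are possible):

```objc
// Convolve srcBuffer with the kernel and write the result into dstBuffer.
// kvImageEdgeExtend replicates the border pixels so the kernel never reads
// outside the image.
vImage_Error error = vImageConvolve_Planar8(&srcBuffer, &dstBuffer,
                                            NULL,      // let vImage manage the temporary buffer
                                            0, 0,      // offset of the region of interest
                                            kernel, kernelSize, kernelSize,
                                            divisor,
                                            0,         // background color (unused with kvImageEdgeExtend)
                                            kvImageEdgeExtend);
if (error != kvImageNoError) {
    NSLog(@"vImageConvolve_Planar8 failed with error %ld", (long)error);
}
```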

I suppose you want to display the result somewhere. So, you need to convert the resulting dstBuffer back to a UIImage.
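One way to do that is to wrap the destination pixels in another grayscale bitmap context and build the UIImage from it:

```objc
// Wrap the processed pixels in a grayscale bitmap context, create a CGImage
// from it, and finally a UIImage you can assign to a UIImageView.
CGColorSpaceRef outSpace = CGColorSpaceCreateDeviceGray();
CGContextRef outContext = CGBitmapContextCreate(dstBuffer.data, width, height,
                                                8, dstBuffer.rowBytes, outSpace,
                                                (CGBitmapInfo)kCGImageAlphaNone);
CGImageRef blurredCGImage = CGBitmapContextCreateImage(outContext);
UIImage *blurredImage = [UIImage imageWithCGImage:blurredCGImage];

// Clean up.
CGImageRelease(blurredCGImage);
CGContextRelease(outContext);
CGColorSpaceRelease(outSpace);
free(srcBuffer.data);
free(dstBuffer.data);
free(kernel);
```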

I am attaching a simple project here, in which I load an image and apply the blurring filter with different kernel sizes.

I hope my friend John (and you) enjoyed this post.

Geppy

iOS Consulting | INVASIVECODE

iOS Training | INVASIVECODE
