The Parallel Computing Toolbox is the same toolbox that provides both parfor and gpuArray support, but the two work in very different ways.

GPU performance is much reduced by indexing, and the GPU only really wins when you have operations that can be vectorized over an entire array. For your code, that would mean creating a new gpuArray for each subimage, and then what the GPU would accelerate would be the processing of that subimage. But that would still involve a lot of memory transfer.

You might be thinking that you could send the entire large image to the GPU and create the subimages there, but the computation engines work a bit oddly.

Computation on an NVIDIA machine is divided up into compute controllers. Each compute controller can be executing a different set of instructions than the other compute controllers are executing. Each compute controller is responsible for a number of computation cores: the controller decodes an instruction and sends that same instruction to every computation core under its control, and each computation core then executes that same instruction on its own data. Conditional execution does not work by having different compute cores execute different instructions. Instead, a mask is created, one entry per compute core, and any compute core for which the mask is not true idles instead of executing the instruction.
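A toy sketch of that predicated execution model, written in NumPy just for illustration (the function name and data here are made up; this is not gpuArray code, only a model of what the hardware does):

```python
import numpy as np

def predicated_step(data, mask, instruction):
    """Simulate one step of masked execution: the controller broadcasts
    a single instruction; every core evaluates it, but cores whose mask
    entry is False discard the result -- they effectively idle."""
    result = instruction(data)           # every core computes, no branching
    return np.where(mask, result, data)  # masked-off cores keep their old data

lanes = np.array([1.0, 2.0, 3.0, 4.0])   # one value per compute core
mask = np.array([True, False, True, False])
out = predicated_step(lanes, mask, lambda x: x * 10)
# lanes 0 and 2 are updated; lanes 1 and 3 pass through unchanged
```

The key point is that `instruction` runs over every lane regardless of the mask; the mask only decides whose result is kept.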

So for example,

subimage = image(1:15, 1:20)

would implicitly involve creating a mask the size of image that is true for positions in the 15 x 20 upper corner. A transfer instruction would then be executed: the compute cores whose mask entry is true would carry out the transfer, and the other compute cores would idle for that instruction. Compute cores covering the entire array are involved, with most of them idling.
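In NumPy terms (purely as an illustration of the masked whole-array transfer; sizes here are hypothetical), that subimage copy behaves like this:

```python
import numpy as np

# Stand-in for the full image living on the GPU.
image = np.arange(100 * 100, dtype=float).reshape(100, 100)

# The mask is the size of the *entire* image, true only in the
# 15 x 20 upper corner. Every lane receives the transfer instruction,
# but only lanes under the mask actually move data.
mask = np.zeros(image.shape, dtype=bool)
mask[:15, :20] = True

dest = np.zeros_like(image)
np.copyto(dest, image, where=mask)   # whole-array operation; most lanes idle
```

Even though only 300 of the 10,000 positions are copied, the operation is expressed over all 10,000.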

This is very different from CPU programming, where it is almost always more efficient to restrict your computation to only the locations that need to be processed; on GPU you would rather process entire arrays unconditionally so that you do not waste compute cores.
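The two styles, side by side in NumPy for illustration (the data is made up; both produce the same answer, but the second expresses the work the way a GPU wants it):

```python
import numpy as np

x = np.linspace(-3, 3, 7)
mask = x > 0

# CPU style: index in, touch only the elements that need work.
cpu = x.copy()
cpu[mask] = np.sqrt(cpu[mask])

# GPU style: compute over the whole array unconditionally, then
# select with the mask so no lane ever branches.
gpu = np.where(mask, np.sqrt(np.abs(x)), x)   # abs() avoids sqrt of negatives
```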

Direct link to this comment

I would point out, by the way, that if you were to replicate your template several times in each direction, you could process a correspondingly sized chunk of the image. You would still need to shift the window around, but you would be doing more work in each chunk -- better vectorization.
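A sketch of the replicated-template idea, in NumPy for illustration (sizes and data are invented). Tiling the template K x K times lets one vectorized multiply score K*K windows at stride (h, w) in a single pass; you would still slide the chunk around to cover the other offsets:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, K = 4, 5, 3                       # template size, replication factor
template = rng.random((h, w))
image = rng.random((K * h + 10, K * w + 10))

big_template = np.tile(template, (K, K))   # K x K copies of the template

# One chunk covers K*h x K*w pixels: a single elementwise multiply
# evaluates K*K windows at once.
chunk = image[:K * h, :K * w]
prod = chunk * big_template

# Sum each h x w block separately -> one correlation score per copy.
scores = prod.reshape(K, h, K, w).sum(axis=(1, 3))

# Reference: scoring each of those windows individually.
ref = np.array([[(image[i*h:(i+1)*h, j*w:(j+1)*w] * template).sum()
                 for j in range(K)] for i in range(K)])
```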

In order for it to be possible to slice the variable, one of the dimensions of the variable would have to depend only on the parfor index (possibly plus a constant). You can get slicing by forming separate variables:

image_slices = cell(N,1);
for row = 1 : N
    image_slices{row} = image(row:row+H-1, 1:M+W-1);
end
parfor row = 1 : N
    image_slice = image_slices{row};
    for col = 1 : M
        sub_image = image_slice(:, col:col+W-1);
        ...
    end
end

This would of course end up using a heck of a lot of memory.

I would suggest that you might be better off rewriting everything in terms of a 2D filter operation or possibly nonlinear filtering.
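For a sense of what the filtering formulation looks like, here is a NumPy sketch of sliding-window cross-correlation (an illustrative stand-in for the 2D filter approach; function names and data here are invented, not your code):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def match_template(image, template):
    """Cross-correlation score of the template at every valid offset,
    computed as one vectorized whole-array operation instead of an
    explicit loop over window positions."""
    # windows has shape (rows, cols, h, w): one h x w view per offset
    windows = sliding_window_view(image, template.shape)
    return np.einsum('ijhw,hw->ij', windows, template)

rng = np.random.default_rng(1)
img = rng.random((30, 40))
tpl = img[10:14, 20:25].copy()      # plant a known patch to correlate against
scores = match_template(img, tpl)
```

Expressed this way, the whole sweep is one array operation -- exactly the shape of work that vectorizes well, on CPU or GPU.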