Hi,
I'm using PIL for image processing, but lately I've also been trying NumPy for
the flexibility and superior speed it offers. The first thing I noticed is that
for an RGB image with height=1600 and width=1900:

>>> from PIL import Image
>>> import numpy as np
>>> img = Image.open('something.tif')
>>> img.size
(1900, 1600)
>>> arr = np.asarray(img)
>>> arr.shape
(1600, 1900, 3)
Taken at face value, this seems to say the array-image has 1600 color channels,
1900 image pixel rows and 3 image pixel columns, which makes no sense for an
RGB image. Why is the shape ordered (1600, 1900, 3) when img.size reports
(1900, 1600)?
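My best guess is that the first axis is actually the row (y) index rather than
a channel count. Here's the small sanity check I tried (the tiny 2x3 test
image and its pixel colors are just made up for the experiment):

```python
import numpy as np
from PIL import Image

# Made-up test image: width=2, height=3 (PIL size is (width, height)).
img = Image.new('RGB', (2, 3))
img.putpixel((0, 0), (255, 0, 0))   # red at x=0, y=0
img.putpixel((1, 2), (0, 0, 255))   # blue at x=1, y=2

arr = np.asarray(img)
print(arr.shape)     # (3, 2, 3): rows (height), columns (width), channels
print(arr[0, 0])     # [255 0 0] -- same pixel as img.getpixel((0, 0))
print(arr[2, 1])     # [0 0 255] -- same pixel as img.getpixel((1, 2))
```

So it looks like arr[y, x] corresponds to img.getpixel((x, y)), i.e. the axes
are (row, column, channel) rather than (channel, row, column).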
If I reshape with

>>> arr = arr.reshape(3, 1900, 1600)

will there be a mix-up in pixel values and coordinates compared to the
initial PIL image?
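To show the kind of mix-up I mean, here's a toy NumPy-only sketch (the shapes
are made up): reshape just re-chops the flat element buffer into new shapes,
while transpose permutes the axes so each value stays at corresponding
coordinates:

```python
import numpy as np

# Toy "image": 2 rows, 3 columns, 2 channels.
a = np.arange(12).reshape(2, 3, 2)

# reshape keeps the flat (C-order) element sequence and only re-chops it:
r = a.reshape(2, 2, 3)
# transpose actually swaps axes, so t[c, row, col] == a[row, col, c]:
t = a.transpose(2, 0, 1)

print(a[1, 2, 0])   # 10
print(t[0, 1, 2])   # 10 -- same element, axes permuted
print((r.ravel() == a.ravel()).all())   # True: reshape preserved flat order
```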