If I have an RGB image of size 520 x 380, that gives me 197600 pixels to play with. Since we are talking about an RGB image, every pixel must consist of three components (Red, Green and Blue), so that makes 197600 x 3 = 592800 units of color data (in my case).

Nevertheless, the QByteArray I build from QImage::bits() reports a size of only 81276 bytes. Given that 592800 > 81276, I wonder how the pixel data can be stored in so few bytes. (Or maybe my math is just wrong?)

I'd appreciate any hint about how the image data is laid out by QImage::bits().

Qt Developer

I'd guess that the first time that
@
QByteArray byteMe((char *) image.bits());
@
hits a 0x00 byte in the "string" pointed to by image.bits(), it sees it as a terminator and stops filling the QByteArray, so you don't get the full contents of the image.

Now, if I do 790400 / 4 = 197600, everything starts to make sense. The format used by bits() is this: every pixel of the image is stored as 4 bytes, which for my image gives 520 x 380 x 4 = 790400 bytes in total. To verify this, I tried this code:
@
for (int i = 0; i < 8; i++) {
    // cast to unsigned char so negative char values don't
    // sign-extend and print as FFFFFFxx
    fprintf(stderr, "Color[%d]: %02X\n", i, (unsigned char) byteMe.at(i));
}
@