The code works by first loading the texture as a CGImage, then drawing that CGImage into a bitmap context, from which the pixel data is uploaded to OpenGL. Unfortunately, it appears that the color values get multiplied by the alpha component during this process, rather than just being copied directly.

This would be fine, except that when using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the source colors are multiplied by the alpha component again -- so the alpha is applied twice. I have worked around this temporarily by using glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) as my blending function, but I'd prefer to be able to use the former.

Any idea how to transfer the CGImage to the bitmap context without multiplying the color components by the image alpha?

Thanks,

- Holmes

(p.s. I have tried setting the CGContextBlendMode to kCGBlendModeCopy, which did not appear to work. I've also tried setting the bitmapInfo parameter of CGBitmapContextCreate to kCGImageAlphaLast instead of kCGImageAlphaPremultipliedLast, but this did not work either.)

Core Graphics premultiplies the alpha. The only way around that (at least to my knowledge) is to use your own loader, such as libpng. Otherwise, you'll have to use GL_ONE, GL_ONE_MINUS_SRC_ALPHA, as you're already doing. It seemed funky to me too, but it's been working fine for me to use the premultiplied alpha as it comes out of CG. I'm used to it now.

AnotherJake Wrote:Core Graphics premultiplies the alpha. The only way around that (at least to my knowledge) is to use your own loader, such as libpng. Otherwise, you'll have to use GL_ONE, GL_ONE_MINUS_SRC_ALPHA, as you're already doing. It seemed funky to me too, but it's been working fine for me to use the premultiplied alpha as it comes out of CG. I'm used to it now.

Thanks for the response and for confirming that CG is doing what it appeared to be doing to me (ie, I'm not crazy).

So there's really no way around this other than to use another library?

You could consider filing a bug with Apple to help get them to offer non-premultiplication from Core Graphics.

I suppose you could load in an RGB image and then a separate alpha-only image and recombine them. Seems awkward, but if you really have to have non-premultiplied color, maybe that'd do the trick without having to use another library.
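Another option along those lines would be to undo the premultiplication after drawing into the bitmap context. A minimal sketch, assuming 8-bit RGBA with alpha last (the helper names are mine); note that it's lossy where alpha is small, since the low bits of the original color are already gone:

```c
#include <stdint.h>
#include <stddef.h>

/* Reverse c_premul = c * a / 255, rounding to nearest and clamping. */
static uint8_t unpremul_channel(uint8_t c, uint8_t a) {
    unsigned v = (c * 255u + a / 2u) / a;
    return (uint8_t)(v > 255u ? 255u : v);
}

/* Undo premultiplication in place on RGBA8888 pixels (alpha last).
   Where alpha is small this is only an approximation: the divide
   can't recover precision the earlier multiply threw away. */
static void unpremultiply_rgba(uint8_t *pixels, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++) {
        uint8_t *p = pixels + i * 4;
        uint8_t a = p[3];
        if (a == 0 || a == 255)
            continue; /* fully transparent or opaque: nothing to undo */
        p[0] = unpremul_channel(p[0], a);
        p[1] = unpremul_channel(p[1], a);
        p[2] = unpremul_channel(p[2], a);
    }
}
```

For example, a premultiplied pixel {64, 0, 128, 128} comes back as {128, 0, 255, 128}. Whether the rounding loss at low alpha is acceptable depends on the art; for many sprites it's invisible.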

Like I said, living with premultiplied alpha has turned out not to be any problem for me on iPhone after all.

I much prefer premultiplying on my own terms -- it's the artist's job, not the texture loader's! I gave up using CG for most things and started using stb_image instead. It loads a bunch of formats but it's dead simple and relatively small.

Do note that Xcode will mangle PNG files when packing iPhone apps, so 3rd party PNG loaders won't work as-is unless you set some compiler flags or use a different extension. I just rename them ".xng".

longjumper Wrote:So you can use libpng on the mac, convert all of your .PNG files to just raw RGBA data, and use those as textures.

I don't understand. Whatever you load will then be RGBA and used as textures, regardless (assuming you want RGBA). ... well, unless they're compressed like pvrtc or something. Can't you use libpng on iPhone too?

longjumper Wrote:So you can use libpng on the mac, convert all of your .PNG files to just raw RGBA data, and use those as textures.

AnotherJake Wrote:I don't understand. Whatever you load will then be RGBA and used as textures, regardless (assuming you want RGBA). ... well, unless they're compressed like pvrtc or something. Can't you use libpng on iPhone too?

I'm not positive, but is what longjumper is suggesting simply to preprocess all of the texture data (as opposed to doing it at run-time)?

AnotherJake Wrote:I suppose maybe that's what he was getting at, but that wouldn't really make much sense, especially for iPhone. Uncompressed images on disk would take up massive space, and it would be unnecessary.

Using compressed textures (such as PVRTC ones) can significantly reduce the bandwidth used by the GPU and, depending on where your game's bottleneck is, substantially contribute to a better framerate.

It's possible it wouldn't make a difference in this specific case, but in general I wouldn't say it was unnecessary -- in fact, it's one of Apple's (and POWERVR's) major performance gotchas. With PVRTC you also avoid almost all of the processing overhead required in initializing your textures.

It depends on the size of your image. I whipped up some fairly non-complicated images in Photoshop and saved them as PNGs.

The 32x32 image was 54794 bytes, 64x64 was 65922 bytes, 128x128 was 113555 bytes and 256x256 was 197815 bytes.

Compare that to raw bytes: 4096, 16384, 65536 and 262144 bytes. (Of course, I might add, if you have a difficult-to-compress 256x256 image, the PNG can certainly come out higher than that 262144 bytes, but just barely -- which means that at 512x512, PNG is most certainly going to save you disk space.)

So up until you hit 256x256, you are saving disk space. You are also saving run time on decompressing the texture, drawing it to a Core Graphics context, and then handing that memory off to OpenGL. You are also getting rid of the premultiplied alpha problem.

Photoshop leaves its pus-laden header in your PNG images (and TIFFs too!), so be sure to re-export those using Preview before you compare byte sizes. BTW, Xcode will chop that off automatically for iPhone if you don't explicitly tell it not to (which is at least one good aspect to their PNG mangling process).

I just did a test to check my sanity, and a 128 x 128 image with alpha saved as TIFF is 65,902 bytes, whereas its corresponding PNG is 7,741 bytes (over 8 times smaller than your raw bytes version!). So using raw bytes is an interesting idea, but no, it isn't automatically smaller than PNG compression -- not by a long shot!

Yes, I am fully aware of pvrtc. I use it myself wherever possible. Yes, it offers tremendous memory and performance advantages. It also has a tendency to leave compression artifacts which makes it less than ideal for all situations. We're not talking about that though. We're talking about how Core Graphics premultiplies alpha for other types of compressed images, such as png, [edit, whoops, jpeg doesn't have alpha, silly me!], etc.

Your numbers seem to be off, presumably due to metadata that Photoshop is infamous for inserting when it saved PNGs. I just did a quick test with a reasonably complex 32x32 image, and it came out to 616 bytes. Except for extremely small images (like, 4x4 or so), you're going to be extremely hard-pressed to find one that's smaller as raw image data than as PNG without metadata.

My idea was that worrying about compressed images on disk in the form of PNGs seems to ignore that, when the images actually get used, they're in uncompressed format which is far from optimal for the GPU _and_ they also suffer from the alpha blending problem.

If you deliver your sprites in a preprocessed form (I've been using ARGB4444, or ARGB1555 if PVRTC has been giving me trouble) you help the GPU, avoid the alpha problem, and reduce the file storage size as well. I imagine it wouldn't be too difficult to compress these preprocessed files on top of this (zip or similar?), but with the GB of space on the iPhone I hadn't considered it necessary yet in my own projects.

16-bit 32x32 textures would be 2048 bytes (if you could get PVRTC to work for you it would be 512 or 256 bytes depending on the flavour...).
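For what it's worth, the 16-bit conversion itself is only a few lines. Here's a hedged sketch of an offline RGBA8888-to-4444 packer (function names are mine; this produces the RGBA ordering that OpenGL ES accepts as GL_RGBA with GL_UNSIGNED_SHORT_4_4_4_4, which differs from Photoshop-style ARGB ordering):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack one RGBA8888 pixel into RGBA4444 by keeping the top 4 bits of
   each channel. Halves the texture size versus 32-bit RGBA. */
static uint16_t pack_rgba4444(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return (uint16_t)(((r >> 4) << 12) | ((g >> 4) << 8) |
                      ((b >> 4) << 4)  |  (a >> 4));
}

/* Convert a whole buffer; src is 4 bytes per pixel, dst is 2. */
static void convert_rgba8888_to_4444(const uint8_t *src, uint16_t *dst,
                                     size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++) {
        const uint8_t *p = src + i * 4;
        dst[i] = pack_rgba4444(p[0], p[1], p[2], p[3]);
    }
}
```

A 32x32 buffer of these uint16_t values is exactly 32 * 32 * 2 = 2048 bytes, matching the figure above. The quantization is visible on smooth gradients, which is why 4444 tends to suit UI sprites better than photographic textures.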