I still don't see how using premultiplied alpha is the "correct" way to do things. If the result of the blending multiply winds up being exactly the same, and you lose no performance because either way the multiply is being done in hardware regardless, how can it possibly be "correct" to do it one way or the other? I already tried to read some of Forsyth's stuff on the premultiplied thing but I can't make heads or tails of it right now. Perhaps you have an easier explanation?

It's not really a "problem", it's just the way they do it. The result works out the same in OpenGL either way in general use. The only time you'll miss your alpha is if you're using shaders and have something special going on there. The only other time you'll miss it is if you're trying to work out a particular blend mode, but that doesn't mean it can't be used; it just means the blend func needs to be different to account for the premultiplied alpha. It's kind of a pain in the butt because most GL example code out there assumes non-premultiplied alpha, so it can throw you for a loop at times. I suppose that could be considered a "problem", depending on your point of view.

Yes, if you use Core Graphics to load your imagery on OS X, it is premultiplied there as well. Just use libpng if you don't want premultiplied.

The toolchain thing you're talking about is more than likely in reference to the png mangling, not the alpha premultiplication. The alpha premultiplication is done by Core Graphics when loading the image. [edit] my bad, apparently the png mangling process involves premultiplication as well... lame! [/edit]

The whole point behind premultiplication is that it's a faster way to draw graphics in software (which is what Quartz originally did exclusively, although I don't know how much of it is hardware now). In hardware it totally doesn't matter, because the alpha multiplication happens regardless. In software it's faster because you can skip a multiply when drawing. Apple really should allow the option of loading without premultiplied alpha. File a bug!

AnotherJake Wrote:I still don't see how using premultiplied alpha is the "correct" way to do things.

Agreed. I have plenty of textures/atlases where certain bits are pre-multiplied and others aren't (and it's totally on purpose and "correct" for that texture). If there's ever a problem with an alpha channel, then it's an art problem, not something that needs to be solved with code.

See also this description of how premultiplied alpha avoids the "fringing" problem that people here seem to run into on a weekly basis, writing My First Sprite Engine.

The key points here are:
1) for 2D compositing, premultiplication is perfectly reasonable, and Quartz is built entirely around it. All images drawn by CoreGraphics will be forced to be premultiplied.
2) for 3D graphics, there are many cases where premultiplication does not make sense. For example, GL's fog, or point/line/polygon smoothing. Those features directly manipulate the RGB or A components of a pixel, and it is up to you to perform alpha multiplication, i.e. in the blend stage. Another common example is putting a normal map in RGB and a gloss map in A in a single texture. It's up to you to apply the channels correctly in the fragment stage, and clearly you don't want the RGB to have been multiplied by A by the image loader.

If the image loader you're using forces premultiplication, it's broken. Use a different one.

Xcode's premultiplication "optimization" of .pngs is a travesty that the spec specifically forbids. Yes, you should file a bug about this, because Apple is corrupting the PNG spec. They should be called ".optpng"s or ".iphonepng"s or something, not ".png"s.

arekkusu Wrote:Xcode's premultiplication "optimization" of .pngs is a travesty that the spec specifically forbids. Yes, you should file a bug about this, because Apple is corrupting the PNG spec. They should be called ".optpng"s or ".iphonepng"s or something, not ".png"s.

Holy crap, that's actually part of the toolchain and not just Core Graphics? Wow, that does suck!

arekkusu Wrote:Xcode's premultiplication "optimization" of .pngs is a travesty that the spec specifically forbids. Yes, you should file a bug about this, because Apple is corrupting the PNG spec. They should be called ".optpng"s or ".iphonepng"s or something, not ".png"s.

AnotherJake Wrote:Holy crap, that's actually part of the toolchain and not just Core Graphics? Wow, that does suck!

So, does this confirm that it is indeed the Xcode iPhone bundle creation step that is performing the alpha pre-multiplication? If I take the image loading code from my iPhone app and move it to another app built against OS X (where I can just read a PNG file from disk), should I see the correct, unaltered RGBA values?

I actually did read the documentation a loooong time ago (and I do read the build log from time to time, and I damn-well use Google but I never wondered about the script because I was using Core Graphics anyway), and forgot about the mangler (aka "optimization script") actually doing the premultiplication there. In fact, for quite a while I even forgot that Core Graphics did the premultiplication because I'm so used to using libpng on the Mac and I was using Texture2D blindly on iPhone in the beginning.

Quote:So, there's already an option to not optimize PNGs.

Yeah, but not with Core Graphics.

@ kalimba: So yes, this confirms that the PNG mangler does premultiplication too, but it can be disabled. However, you're still stuck with Core Graphics premultiplication unless you use a different loader.