So either I'm going crazy, my bad coding is catching up to me, or I need more rest because I've been up too long. My current issue is that I'm getting a segfault on an al_clone_bitmap() call. I've never had this issue until this last week.

The scoop: I have always used the bitmap example method of loading a bitmap (ex_bitmap.c):

I don't get it - why don't you just load the bitmap as a video bitmap to begin with, and fall back to a memory bitmap if it fails? Is there some tutorial somewhere that says you have to load as a memory bitmap first before making a video bitmap? If so, it should probably be rewritten.
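A minimal sketch of the video-first approach being suggested (the function name, file path, and error handling here are placeholders, not the poster's actual code):

```c
#include <stdio.h>
#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>

/* Try to load the image as a video bitmap first; only fall back to a
 * memory bitmap if that fails (e.g. the texture is too large for the
 * card).  Returns NULL if both attempts fail. */
ALLEGRO_BITMAP *load_video_first(const char *path)
{
   ALLEGRO_BITMAP *bmp;

   al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
   bmp = al_load_bitmap(path);
   if (bmp)
      return bmp;

   /* Video bitmap creation failed; retry as a plain memory bitmap. */
   al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
   bmp = al_load_bitmap(path);
   if (!bmp)
      fprintf(stderr, "Failed to load %s entirely.\n", path);
   return bmp;
}
```

This avoids the load-then-clone round trip altogether, and the single NULL check at the end covers both failure modes.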

Can you build a debugging version, link against the debugging version of Allegro, and get a backtrace of where it is in al_clone_bitmap when it crashes?

To my knowledge, the code as described above is still the ex_bitmap example that comes with the source. I honestly couldn't tell you the difference between one way of setting the flags and the other, but I assume your method is valid.

I have tried to determine where it fails, but even when I use the debugger, it just states that al_clone_bitmap() segfaults inside the a5 monolith-mt.dll file.

The only reason I can see that it would segfault is if the bitmap passed into it is not a valid ALLEGRO_BITMAP, but according to your code, you check for a NULL bitmap, so I'm not sure why it would segfault...

In any case, if you link to the debugging version of Allegro, you will also get a text file called allegro.log. Attach that to your post, it may have a clue.

The important thing, though, is to get the line number in al_clone_bitmap where it fails. If you have to, run it through gdb manually:
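Something along these lines (a hypothetical session; the executable name is a placeholder):

```shell
$ gdb ./yourgame.exe
(gdb) run
# ... reproduce the crash ...
(gdb) bt
# With the debug build of Allegro linked in, the backtrace should show
# the exact file and line inside al_clone_bitmap where it faulted.
```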

Wow, I am slow tonight. I'll run it against that quick. I did with just code blocks: [screenshot of the code attached]

Well, al_clone_bitmap changed sometime between 5.0.2 and the current 5.1 SVN, but both versions return NULL if their call to al_create_bitmap returned NULL. It does look like the bitmap is too large to be made into a video bitmap, because both of your allegro.log files say 'd3d_create_textures: Unable to create video texture.'. However, that still doesn't explain why al_clone_bitmap would segfault...

An interesting tidbit: When I place the d3d9.dll from system32 into the debug folder the errors change slightly

That's just because d3d9.dll was found in a different place. Don't worry about that bit.

What does the function stack look like when you link to the debugging version of Allegro? It should give you a line number inside of al_clone_bitmap.

Also, the code you showed in that image doesn't change the new bitmap flags between al_load_bitmap and al_clone_bitmap, so you're doing the same thing twice for nothing.
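If the intent was to convert the loaded memory bitmap into a video bitmap, the flags have to change between the two calls, roughly like this (a sketch of the pattern, not the poster's actual code; the path comes from the log quoted later in the thread):

```c
/* Load as a memory bitmap, then clone it into a video bitmap.
 * Without the second al_set_new_bitmap_flags() call, the clone is
 * created with the same memory-bitmap flags and nothing is gained. */
al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
ALLEGRO_BITMAP *membitmap = al_load_bitmap("images/PPlogo.tga");
if (!membitmap)
   abort();  /* never pass a NULL bitmap to al_clone_bitmap */

al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
ALLEGRO_BITMAP *vidbitmap = al_clone_bitmap(membitmap);
if (!vidbitmap) {
   /* Clone failed (e.g. texture too large); keep using membitmap. */
}
```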

allegro.log said:

Failed loading images/PPlogo.bmp with .bmp handler.

The image you posted shows you loading PPlogo.tga, not .bmp, and it also doesn't abort if membitmap was NULL, so you could be passing a NULL pointer to al_clone_bitmap, which would cause a segfault. al_clone_bitmap should have an ASSERT(bitmap) in there, like Allegro 4 did everywhere.

I think you missed part of what I was saying - the code you showed in that last image does not check if any of the bitmaps are NULL upon loading. If your program is being run from a different directory then they could easily fail to load if you don't account for it by changing the current directory. So if they fail to load, then you are passing NULL bitmaps to al_clone_bitmap, and that would cause it to segfault.

I don't quite understand that. Is there just a set limit of bytes available per texture? If so, why couldn't you vary the dimensions as long as the area was less than the max? Or do cards prefer square textures? That seems silly. I guess I just don't understand the inner workings of graphics cards...

The texture size constraints come mainly from precision constraints, due to chip area constraints. It's much more expensive to support larger texture sizes because of the adders and multipliers that are needed for addressing (and addressing textures is very complex!). You also incur a cost in terms of cache tag size.


So basically it has to do with how shaders can sample a u/v position within a texture. And since you could always set the texture transform to rotate 90 degrees, it wouldn't make much sense to have a different limit for u than for v. What I guess could happen (purely speculating now) is that some hardware could handle a 2048 x 8192 texture as well as an 8192 x 2048 texture, but not an 8192 x 8192 texture. However, what would you return for MAX_W and MAX_H in that case?

So for simplicity we just return the size where you're guaranteed that a square texture (and anything smaller) will work.