Messages - AeroShark333

Yes, it's safe and it's cheap. However, compiling the object fixes its texture stage count, which is why you are getting that exception with the TextureInfo. You could assign a TextureInfo with the same number of stages but using some dummy textures, compile, and then assign the actual textures. That should actually work.
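A rough sketch of that idea, assuming the dummy and the real textures have already been added to the TextureManager (all names here are hypothetical, not from the original post):

```java
// Sketch of the dummy-texture workaround (jPCT-AE).
// Assumption: "dummy0"/"dummy1" and "real0"/"real1" were already
// added to the TextureManager; names are placeholders.
import com.threed.jpct.Object3D;
import com.threed.jpct.TextureInfo;
import com.threed.jpct.TextureManager;

public class MultiStageSetup {
    public static void prepare(Object3D obj) {
        TextureManager tm = TextureManager.getInstance();

        // Compile with a TextureInfo that already has the final stage count,
        // but uses cheap dummy textures for now.
        TextureInfo dummy = new TextureInfo(tm.getTextureID("dummy0"));
        dummy.add(tm.getTextureID("dummy1"), TextureInfo.MODE_MODULATE);
        obj.setTexture(dummy);
        obj.build();
        obj.compile();

        // Later: swap in the real textures; the stage count stays the same.
        TextureInfo real = new TextureInfo(tm.getTextureID("real0"));
        real.add(tm.getTextureID("real1"), TextureInfo.MODE_MODULATE);
        obj.setTexture(real);
        obj.touch(); // flag compiled data as changed, just in case
    }
}
```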

16k and 32k... it's absurd IMHO to use those. A 16k texture with 32 bit, no compression but mipmaps would require ~2GB of GPU memory and almost the same again in VM memory. Still not very feasible IMHO.
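As a rough back-of-envelope check (plain Java; the exact figure depends on the texture format and on whether a VM-side copy is kept, so treat it as order-of-magnitude only):

```java
// Back-of-envelope GPU memory for an uncompressed 32-bit texture with mipmaps.
// A full mipmap chain adds roughly 1/3 on top of the base level.
public class TextureMemory {
    static long bytesWithMipmaps(long width, long height) {
        long base = width * height * 4L; // 4 bytes per texel (RGBA8888)
        return base + base / 3L;         // mipmap chain adds ~33%
    }

    public static void main(String[] args) {
        // ~1365 MB for a 16k x 16k texture
        System.out.println(TextureMemory.bytesWithMipmaps(16384, 16384) / (1024 * 1024));
        // ~5461 MB for a 32k x 32k texture
        System.out.println(TextureMemory.bytesWithMipmaps(32768, 32768) / (1024 * 1024));
    }
}
```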

Hmmm yeah, I understand... Though that would be true for square textures, but for me just one large dimension would be okay. I can already kind of use 16384x4096 textures on my phone using NPOTTexture (which I guess is a somewhat hack-ish way to get past the 8192 limit enforced by jPCT). So well, yeah...

However, is it safe to apply textures after the Object3D has been built and compiled? It seemed to work just fine? Though I'm not sure about performance... However, a multi-textured TextureInfo gives an ArrayIndexOutOfBoundsException when I apply the TextureInfo after building and compiling.

I also wondered whether 16k and 32k textures could be enabled; devices these days can handle quite a lot, hehe.

A lot of apps using jPCT-AE are doing it all the time (including mine) without any problems. However, I'm well aware that it might cause trouble, which is why there are these config settings for it (which are just some shots in the dark as well). The actual problem is that I've no idea why this happens and when. I've checked the code at least a dozen times because of this and it's just fine.

Well yeah, I'm unable to reproduce this SIGSEGV error on most of my devices either; it works just fine with the way I render everything now. And I'm not sure if this blitting before uploading the Object3D data is actually the cause of all the SIGSEGV reports I got; maybe it would only solve the problem on the emulator.

Mixed, however I can run the renderer class in a regular Android Activity too, but that wouldn't really make much of a difference..? The app I'm currently working on is the wallpaper app, and the other app is a 'standard app', I'd say.

My best workaround/solution for now would be to make sure the Object3D's I'm using are uploaded to the GPU before blitting anything. I'll try to implement that soon so I can see whether people still get SIGSEGVs after this. Though, there's one problem... How can I set textures on my Object3D's after building and compiling them? If I remember correctly, there'd be a delay or something before the textures visibly get applied if you do it this way.

And because fiddling around with the blitting config seems to change things for you: are you actually blitting stuff? What happens if you don't?

Yes, I do blit things before the first world.draw() call:
=> Texture blits (some of these textures are used for the Object3D's, so the textures get uploaded to the GPU and removed from VM heap memory)
=> Loading screen (probably 50+ blits of a 2x2 texture with a variable greyscale color) per frame

I tried commenting out the texture blits (so they'd stay in VM heap memory) => it would still crash. But when I removed all blits (texture + loading blits) before the first draw call, it worked fine.

Also, having only texture blits (without loading screen blits) could still crash, but not as likely.

Interesting results I guess...

I once again tested the crash likeliness (with higher-polygon models and with blitting before the first world.draw() call):
=> Config.blittingMode = 8
Runs: 25
Crashes: 17
Result: more than the 50% of last time...
=> Default Config.blittingMode
Runs: 25
Crashes: 25
Result: about the same result as last time (100% vs. 96%)

So could it be that blitting anything before all Object3D data is loaded causes this SIGSEGV issue?
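A minimal sketch of the "upload first, blit later" idea described above (assuming a standard jPCT-AE onDrawFrame; the field names are hypothetical):

```java
// Sketch: make sure compiled Object3D data is uploaded by a full
// world draw before any blitting happens.
// Assumption: 'world', 'fb' and 'loadingTexture' exist elsewhere.
private boolean firstFrameDone = false;

public void onDrawFrame(javax.microedition.khronos.opengles.GL10 gl) {
    fb.clear();
    world.renderScene(fb);
    world.draw(fb); // uploads compiled Object3D data on the first call
    if (firstFrameDone) {
        // Only blit once the geometry is safely on the GPU.
        fb.blit(loadingTexture, 0, 0, 0, 0, 2, 2, 64, 64, -1, false, null);
    }
    fb.display();
    firstFrameDone = true;
}
```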

PS: I actually do blit (a 2D background behind the 3D world) before calling the first world.draw() in the other app too...

I found this while Googling around: https://stackoverflow.com/questions/30825386/android-opengl-fatal-signal-11-sigsegv-code-2
I tried the same code on the Nexus 5 emulator and got similar results:
=> size = 10000; would crash
=> size = 3000; would work
=> size = 5000; would crash
=> size = 3500; would crash sometimes?
When it works, it shows nothing but it keeps 'drawing' without crashing.
Adding floatBuffer.rewind(); after filling it with values fixes the issue for any size (and it actually shows something when drawing... lol).
I'm not sure if this is helpful, but I sure found it interesting.
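That rewind() fix is consistent with how NIO buffers work: put() advances the buffer's position, so a consumer reading "from the current position" starts past the data you just wrote. A minimal plain-Java illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class RewindDemo {
    public static void main(String[] args) {
        FloatBuffer fb = ByteBuffer.allocateDirect(3 * 4)   // room for 3 floats
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(new float[] {1f, 2f, 3f});

        // put() moved the position to the end of the written data,
        // so GL would start reading past it.
        System.out.println(fb.position()); // 3

        fb.rewind();
        // Now reads start from the beginning, as intended.
        System.out.println(fb.position()); // 0
    }
}
```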

No compat mode:
Runs: 10
Successes: 0
Crashes: 10
As you can see it drops the chance of crashing from ~96% to ~53%. Whether this is just a coincidence, I can't tell, but I tested this multiple times (swapping between the two config values after every 10 runs).

I'm currently using a Genymotion emulator (with a Nexus 5, Android 5.0.1 build) to reproduce these crashes. (I can also reproduce the crash on older physical devices, just to make sure... Plus, many people reported this SIGSEGV crash through the Google Play Store, and I think I can assume they are not using an emulator...)

Okay, never mind, it crashes with the default shaders too :| It just seems more likely with my custom shaders... But increasing the buffer size to 1800 did help a little; is it possible to increase it even more? Why is the default 600 anyway? And what unit is this 1800 in? Bytes, I assume?

Another thing that helped a lot was reducing the polygon count per mesh, but yeah... that reduces quality...

For now I'll just assume my device is the only device with this unloading issue... Oh well, it's a custom ROM...

Anyway, about the SIGSEGV error that keeps happening at the first world draw call: I think it's because of my custom shaders... which I don't really understand, though... On most high-end devices there are no issues at all with my custom shaders. And I don't really understand why the default shaders always work just fine, while they look more complicated than some of the custom shaders I'm using. Although, on devices where my custom shaders crash, it does NOT always crash, which completely blows my mind... It seems to happen basically at random.

Unlike the default shader, my custom shaders use:
-> pre-processor directives with #
-> uniform variables set in onDraw
-> functions defined within the shader
Though I don't think these are really the issue...

While Googling this issue I did find some weird OpenGL ES shader crash reports by other people that could be solved by workarounds such as changing the order of operands... Anyway, what would be the dos and don'ts for writing a GLES shader? Or maybe: how did you manage to create the 'perfect?' default shader, which never seems to crash? Does jPCT-AE perhaps treat custom shaders differently from default shaders?

Another thing that seems to reduce the probability of the random crash at the first draw call was using lower-polygon models...

Also... what could explain the randomness of the crashes, since they don't always happen? Well, the positions of the Object3D's are always different, but would that make a huge difference...? I'd think not, but apart from that nothing else is really different, and yet it somehow manages to either render or crash. And once the first draw call has completed successfully (assuming all Object3D's are visible), it won't crash in future world draw calls.

Another possible solution: increase the buffer size of the framebuffer even more? I thought Config.blittingMode = 8; impacted the probability of crashing in a positive way (I think it had to do with the vertex upload buffer maybe, I don't really know...). Is there any way for me to determine what is actually causing the crash, as in which call in the jPCT-AE jar is triggering the SIGSEGV error? It might help in solving the issue...

I don't think you want to use the rotation pivot like this:
speed_neddle.setRotationPivot(speed_pivot.getRotationPivot());

A rotation pivot is a point relative to the Object3D that you want to pivot around. So I'd think you'll need a SimpleVector that's the difference between the two Object3D's:
SimpleVector diff = speed_pivot.getTransformedCenter();
diff.sub(speed_neddle.getTransformedCenter());
speed_neddle.setRotationPivot(diff);

Emulator #3 (Android 7.1.0):
-> Low VM RAM
-> No memory leak with rendertarget textures in VM
-> I don't see the VM+video+native memory usage as high as on my own device here in developer options... unloading seems to work just fine here
-> No memory leak with rendertarget textures in native+video

For the sake of clarity also a test without rendertarget textures on my main device:

-> Low VM RAM
-> No memory leak when rotating the screen
-> Memory leak with renderers when restarted, in native+video RAM (the renderer restarts when a setting is changed)
How does it restart:
-> Change a setting
-> (I believe it still uses the same GL context as before)
-> Textures are removed and unloaded from the TextureManager (but it won't go through another draw call to actually unload them, I suppose...)
-> The FrameBuffer is disposed
-> The reference to the renderer is gone now
-> A new FrameBuffer is created
Now that I think of it... I think I could re-use the FrameBuffer... :|
10 minutes later... -> Re-using the same FrameBuffer, but the memory leak remains..?

In the end, it seems that textures aren't unloaded until the whole wallpaper engine is killed. (Restarting doesn't kill the wallpaper engine...)

However, I've no idea how it's supposed to work that way, because rotating the device should destroy the context (simply because width and height are changing) and so you should need a new instance of the buffer.

But anyway... as long as the buffer doesn't change, there's actually no need to unload the render target texture at all. Have you tried what happens if you don't do that at all?

I tried that... but the render target won't have the right dimensions then, and it gives weird results on screen.
Let's say I start the wallpaper in portrait mode. I would create a 1440 x 2560 FrameBuffer (width x height), and at the same time a 1440 x 2560 NPOTTexture as the render texture. All fine here.
Now if I change the orientation to landscape, I resize the FrameBuffer to 2560 x 1440 (width x height). (Notice that the values are now swapped.) But the render texture is still 1440 x 2560, so I remove this texture from the TextureManager and add a new 2560 x 1440 NPOTTexture.

Removing without unloading did not change anything, still filling memory...

I actually have the feeling that unloading textures doesn't completely work... Whenever I open a second live wallpaper instance (using preview), it seems that the textures of that instance aren't unloaded when the preview instance is 'killed'. It's like the primary live wallpaper instance is preventing the other textures from being unloaded or something. (Unless all instances are killed; then it will all be gone.)

Uhm, I'm not sure if the context is lost... I never re-initialize the FrameBuffer (I just use #resize() whenever the device is rotated).

I already had Config.unloadImmediately set to true, which did not work... Setting it to false did not work either, unfortunately. Whenever I don't use a render texture but just the FrameBuffer, RAM usage does not increase on screen rotations.

Is there perhaps a rotate function/method possible for render textures/NPOTTextures, since screen rotations usually just swap the width and the height? (These render textures would hold the same amount of data for the different orientations.) Probably my solution for now would be to create the two render textures at start (one for the default orientation and one for the rotated orientation) and just switch between them.
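That two-texture idea could look roughly like this (a hedged sketch; it assumes NPOTTexture's (width, height, color) constructor, and all names are hypothetical):

```java
// Sketch: allocate both orientations once and switch on rotation,
// instead of unloading/recreating a render texture every time.
import com.threed.jpct.NPOTTexture;

public class RenderTargets {
    private final NPOTTexture portrait;
    private final NPOTTexture landscape;

    public RenderTargets(int w, int h) {
        portrait  = new NPOTTexture(w, h, null); // e.g. 1440 x 2560
        landscape = new NPOTTexture(h, w, null); // e.g. 2560 x 1440
    }

    // Called from onSurfaceChanged(): pick the matching target,
    // no TextureManager removal/re-adding needed.
    public NPOTTexture forOrientation(boolean isPortrait) {
        return isPortrait ? portrait : landscape;
    }
}
```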

---

Another thing about my application: I don't keep the texture data for render textures, nor for any other texture, in the VM. TextureManager.getMemoryUsage() indeed shows that just 1024 bytes are stored by the VM (basically nothing). However, when I use "Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()", it shows much more RAM usage by the VM, up to 300 megabytes (by jPCT, I assume). What could be using all that RAM?
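For reference, the measurement above boils down to this (plain Java, no jPCT needed):

```java
// Measure how much of the VM heap is currently in use.
public class HeapUsage {
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is the heap the VM has claimed;
        // freeMemory() is the unused part of that claim.
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("VM heap in use: " + (usedBytes() / (1024 * 1024)) + " MB");
    }
}
```

Note this only covers the VM heap; native and GPU allocations (where uploaded textures usually live) don't show up here.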

So it seems that glDrawElements(...) is causing the crash here, although it doesn't always happen, which I still don't really understand.

I tried to make it use glDrawArrays(...) by invoking build(false) and compile(false,false) but no luck there, same story: random crashes but sometimes works.

All textures do load, but when the first World draw call is made, it might crash.

EDIT:
Another unrelated question I've always wondered about: why is TextureManager static, and why isn't it designed in such a way that you can have multiple TextureManagers (for multiple renderers)?

EDIT2:
I seem to be having issues with render target textures. Every time I change the device orientation I create a new render texture so it fits the rotated screen dimensions. However, I seem to be getting a memory leak here... How I do this:
-> start
-> onSurfaceChanged(): create render target texture and add it to the TextureManager with a specific name
-> onDraw(): use the render texture
-> change screen orientation
-> onSurfaceChanged(): set the render target texture reference to null + unload & remove the render target texture using the specific name + add a new render target texture with the new dimensions
-> onDraw(): use the new render texture
While this all works fine, it seems to fill my device's memory if I keep changing the screen orientation. Without using a render target texture, RAM usage might also increase a little when changing screen orientation, but it gets GC'ed and RAM usage drops again.

My problem is the following. Let's say I have two Object3D's and one camera. Object1 is drawn first using FOV setting 1. Object2 is drawn after it using FOV setting 2. Both objects are spheres (if that's helpful information). A part of Object1 is 'blocked' by Object2 (which is fine, since Object2 is closer to the camera anyway).

How can I calculate whether Object1 is blocked or visible? And if Object1 is a complex Object3D, how can I calculate what percentage of Object1 is visible?

In the past I was able to calculate whether Object1 was visible using the sine, but in that case FOV1 = FOV2 (see sketch). Now that the FOVs are different, it doesn't work.