Topic: INioBufferTextureEffect

can I suggest adding a new version of ITextureEffect which accepts java.nio.Buffer and channel ordering?

I'm experimenting with the idea of mixing OpenCV with jPCT and making them render to the same GLSurfaceView, as I mentioned in this thread. the pipeline is something like:

* take Android's camera preview data (byte[]) and put it into an OpenCV Mat [1] object
* modify and/or process the Mat via OpenCV
* copy the final contents of the Mat into a jPCT texture via an ITextureEffect
* blit that texture as the background
* render 3D content over that

all is fine for now except the fps. some profiling suggests most of the time is spent in the ITextureEffect.apply(..) method.

there are two issues:
* jPCT wants texture data in ARGB channel order, but OpenCV supports RGBA and BGRA, not ARGB
* jPCT wants all channels packed into an int, but OpenCV gives the channels as separate bytes (or float/short/int, whatever depth is used)

so I take the byte array from OpenCV, iterate over it, re-arrange the bytes and create an int out of each pixel. jPCT then takes those ints and puts them into a nio.Buffer, which is quite a waste.
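the repacking step looks roughly like this (a minimal sketch; the names are illustrative, not from my actual code):

```java
public class PixelRepack {
    // Repack RGBA bytes (as OpenCV delivers them) into ARGB ints (as jPCT wants).
    // One full pass over every pixel per frame, which is where the time goes.
    static void rgbaToArgb(byte[] rgba, int[] argb) {
        for (int i = 0, p = 0; p < argb.length; i += 4, p++) {
            int r = rgba[i] & 0xFF;
            int g = rgba[i + 1] & 0xFF;
            int b = rgba[i + 2] & 0xFF;
            int a = rgba[i + 3] & 0xFF;
            argb[p] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }
}
```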

long story short, it may be a good idea to add a new version of ITextureEffect which accepts java.nio.Buffer and channel ordering. unless of course there is a better way?

as an implementation detail, this new one can extend the current ITextureEffect to keep things simple:

interface INioBufferTextureEffect extends ITextureEffect {
    // channel order may be a return type or a setting in Config.
    // if filling in source is costly, it can even be skipped based on a setting in Config
    void apply(java.nio.Buffer dest, java.nio.Buffer source);
}

[1]: Mat is an OpenCV matrix-like structure which can hold, and convert between, matrices of arbitrary depth: the pixels of an image or a perspective transform, for example.

Sounds reasonable, although it will render some other settings in Texture (like 4bpp) useless. Another way to implement that would be to add a method to Texture itself that simply takes the NIO buffer and its format (RGBA or BGRA), overrides the actual texel data and uploads the buffer instead. This would only work if no texture compression, no 4bpp and no mipmaps are being used; any other case would raise an error. Would that be sufficient?

It adds a method overrideTexelData(ByteBuffer) to Texture. As said, no mipmaps, texture compression or 4bpp are allowed, and the buffer has to have a size that matches the texture's (i.e. width*height*4). The format is limited to RGBA, because BGRA isn't well supported, at least in OpenGL ES 1.x (it's only available as part of an extension, and I got a black texture when trying it...). It worked for me and was pretty fast.
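Usage would be along these lines (a sketch; the jPCT calls are commented out here, and the texture dimensions are just an example):

```java
import java.nio.ByteBuffer;

public class TexelOverrideUsage {
    // Prepare a buffer whose size matches the texture: width*height*4 bytes, RGBA order.
    static ByteBuffer prepare(byte[] rgbaFrame, int width, int height) {
        if (rgbaFrame.length != width * height * 4) {
            throw new IllegalArgumentException("buffer size must be width*height*4");
        }
        return ByteBuffer.wrap(rgbaFrame);
    }

    public static void main(String[] args) {
        int width = 512, height = 512;
        byte[] frame = new byte[width * height * 4]; // e.g. filled from OpenCV each frame
        ByteBuffer buffer = prepare(frame, width, height);

        // With jPCT-AE (not compiled here):
        // Texture tex = new Texture(width, height); // no mipmaps, compression or 4bpp
        // tex.overrideTexelData(buffer);            // replaces the texel data
        System.out.println(buffer.capacity());
    }
}
```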

quite fast indeed. there were two bottlenecks according to the profiling results: the major one was this, and the other one was scaling the camera image to a 2^n sized image for use as a texture. both are eliminated and it's quite fast at the moment.
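for the record, the 2^n texture size the scaling was needed for can be computed like this (a small sketch, not the actual code):

```java
public class Pot {
    // Smallest power of two that is >= n, for n > 0 (e.g. 720 -> 1024, 512 -> 512).
    static int nextPowerOfTwo(int n) {
        int highest = Integer.highestOneBit(n);
        return highest == n ? n : highest << 1;
    }
}
```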

for some numbers: previously a 320x240 camera preview ran at ~20 fps on a Samsung S3; now a 1280x720 preview works at ~20 fps. using a wrapped ByteBuffer even takes that to 25+ fps.
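the wrapped ByteBuffer part just means reusing the camera's byte[] as the buffer's backing array instead of copying it into a fresh buffer every frame, roughly (a sketch):

```java
import java.nio.ByteBuffer;

public class WrapVsCopy {
    // Wrapping reuses the existing byte[] as the buffer's backing array: no copy.
    static ByteBuffer wrapped(byte[] frame) {
        return ByteBuffer.wrap(frame);
    }

    // A direct buffer needs the whole frame copied into it every time.
    static ByteBuffer copied(byte[] frame) {
        ByteBuffer b = ByteBuffer.allocateDirect(frame.length);
        b.put(frame);
        b.rewind();
        return b;
    }
}
```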