It builds just fine, but plays very SLOW. I've tried disabling OpenGL, etc., and tried various renderers. The game only begins to become "playable" at 640x400 resolution.

I have an old svn build I got off the net (file named eduke32_2.0.0rpi+svn2789_armhf.deb) that uses a software renderer that is VERY playable, even at 1600x1200. It runs from either the CLI or a GUI, whereas my from-scratch build from source will only run under X.

I could use some help getting the COMPILE/BUILD switches right to build a similar (CLI-runnable) binary from current sources (if that's even possible).

So you notice a significant performance delta in the software renderer between 2789 and head? I'm asking because in PolymerNG I'm using the classic renderer for occlusion culling, and many of the performance hits I'm seeing come from the software rasterizer.

Does building that old revision from source do everything you need? Or, does the build you found have patches applied that are not upstream? If so, please bisect the SVN history so we can address the problem.

EDIT: I recall bisecting a similar performance regression on Wii that I worked around by creating and defining #define SDL_DISABLE_8BIT_BUFFER in sdlayer.c. You may also need to change "sdl_renderer = SDL_CreateRenderer(sdl_window, -1, 0);" to "sdl_renderer = NULL;". Do either or both of those suggestions help? Neither should have any effect if you aren't displaying the output to the screen, sorry Ice.
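For anyone wanting to try this, here is a rough sketch of the two tweaks, demonstrated on a stand-in file (the real sdlayer.c lives in the source tree, and the exact line being replaced may differ by revision):

```shell
# Stand-in for the relevant line of sdlayer.c (the real file is in the eduke32 tree).
cat > sdlayer.c <<'EOF'
sdl_renderer = SDL_CreateRenderer(sdl_window, -1, 0);
EOF

# 1) force the define at the top of the file
sed -i '1i #define SDL_DISABLE_8BIT_BUFFER' sdlayer.c

# 2) never create an SDL_Renderer
sed -i 's/sdl_renderer = SDL_CreateRenderer(sdl_window, -1, 0);/sdl_renderer = NULL;/' sdlayer.c

cat sdlayer.c
```

Editing the file by hand accomplishes the same thing; the sed commands just make the change repeatable across rebuilds.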

I'll grab svn 2789 sources and do a build. I guess the first step will be to ensure the binaries are the same size, etc. I can also do a diff on the source trees - so long as the diffs aren't too "dif". ;-)

The svn binary plays and works, so in that sense it does all I need. But I am wanting to build and "play" with latest sources - so... here we are.

Regarding the performance delta in the software renderer between 2789 and head: it seems that way - but I'm wondering whether the binary I downloaded is plain vanilla. It's also possible I need to change a few flags.

OK. I grabbed svn 2789 and built it from source with the flags USE_OPENGL=0, NOASM=1, LINKED_GTK=0 and USE_LIBPNG=0. I get a binary that is slightly smaller than the one installed via "sudo apt-get install eduke32", but it behaves virtually identically as far as I can tell. It only uses 20%-25% of resources and runs quite nicely, even at higher resolutions. It can also launch and run from a CLI.
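For reference, the build boils down to something like the following; the make flags are the ones listed above, while the checkout command and paths are my assumptions (commented out, and printed rather than executed here - drop the echo to actually build):

```shell
# Checkout step is an assumption about the repository layout:
# svn checkout -r 2789 <eduke32 svn URL> eduke32-2789 && cd eduke32-2789

# Flags from the post: classic (software) renderer only, no asm, no GTK, no libpng.
CLASSIC_FLAGS="USE_OPENGL=0 NOASM=1 LINKED_GTK=0 USE_LIBPNG=0"
echo make -j4 $CLASSIC_FLAGS
```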

HOWEVER

With identical flags, building svn 5700 gives the same old SLOW result. I'll grab 5718 and try the source-level #defines you suggest above.

Sorry for my "pokiness" - I'm juggling several other projects, as well as my "Day Job" (I was also up in the wee hours, watching SpaceX *NAIL* another 1st stage landing ;-) )

I've had similar reports of performance regressions with SDL2 on other platforms. I'll have to try performance profiling the builds. Any suggestions for doing this on the Pi?

Hmmm, first of all, sorry for the delay in my reply.

For the time being, having the makefile detect that it is running on a Pi and automatically set SDL_TARGET = 1 produces a playable source build.

A bit of digging found no Pi-specific flag/define. However, an effective hack might be to check cpuinfo for the presence of "BCM2708" or "BCM2709".

I found the following script (note: "BCM2708" identifies the Pi 1 SoC, while "BCM2709" covers the Pi 2/3):

if grep -q BCM2708 /proc/cpuinfo; then
    echo "Pi 1"
fi

if grep -q BCM2709 /proc/cpuinfo; then
    echo "Pi 2/3"
fi

Perhaps doing this along with OS=Linux would suffice?

(It goes without saying this will only work while building on the target system)
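The cpuinfo check above could be wrapped into something the build could consume, along these lines (the function and variable names are mine, not from the eduke32 makefile, and the BCM270x match is only as reliable as the kernel's cpuinfo output):

```shell
# Emit the SDL_TARGET value for the current machine: 1 (SDL 1.2) on a
# BCM2708/BCM2709 Pi, 2 (SDL2) everywhere else. Takes an optional file
# argument so it can be tried against a fake cpuinfo.
detect_sdl_target() {
    cpuinfo="${1:-/proc/cpuinfo}"
    if grep -Eq 'BCM270[89]' "$cpuinfo" 2>/dev/null; then
        echo 1
    else
        echo 2
    fi
}

# usage on a real Pi would be: make SDL_TARGET=$(detect_sdl_target) ...
# demo against a fake cpuinfo:
printf 'Hardware\t: BCM2709\n' > fake_cpuinfo
detect_sdl_target fake_cpuinfo
```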

--------------

I don't know enough about SDL 1.2 vs 2 to be much help on what is happening there. If I had to guess (a dangerous, error-prone approach LOL), I would speculate that SDL2 offers extended graphics capabilities that tax the capabilities of the Pi. VideoCore IV doesn't appear to support a number of texture compression algorithms internally (like S3TC). If those calculations are being done on the ARM side, that may be slowing things down considerably.

edit: On 2nd thought, never mind - after further study, it appears that S3TC is a function of the GL API, so perhaps the SDL version has no impact on this...

By now I have greatly exceeded my limited knowledge on eduke32, SDL, Mesa, etc, etc. I would be most interested in your insights on the same. :-)

The problem is caused by SDL2 attempting to use hardware accelerated backends. On top of that, the SDL2 that you get from apt is built as if running on a conventional Linux desktop using X11 and OpenGL.

If you install Mesa packages for GL, EGL, and GL ES, you get software wrapper libs with very poor performance. The Pi comes with closed-source vendor-provided GL ES and EGL libs in /opt/vc/lib/, but I could not seem to get those to take precedence over the Mesa ones in /usr/lib/arm-linux-gnueabihf/ (whether at link time or via dlopen) no matter what I did to /etc/ld.so.conf.d/.
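One more thing worth trying before touching /etc/ld.so.conf.d/ is the loader's environment override (a sketch - whether the loader then actually prefers the vc libs also depends on how SDL2 resolves them at run time):

```shell
# Put the closed-source VideoCore libs first on the runtime search path.
export LD_LIBRARY_PATH="/opt/vc/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"

# Then check what a binary would actually resolve, e.g.:
#   ldd ./eduke32 | grep -Ei 'gles|egl'
```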

The apt SDL2 doesn't even try EGL and GL ES. It goes straight for the OpenGL lib, which only ever sucks.

I built SDL2 from source since it has special code paths for the Pi, including explicitly trying the vc libs. Unfortunately, performance was only mildly better, and on top of that the game only runs in fullscreen.