SL 2.0 Beta still running too slowly on your Mac? No worries!

If, like me, you’re not the kind of person who subscribes to ATI’s or nVidia’s RSS feeds, jumps to the nearest computer shop as soon as the latest graphics card is released, and spends thousands of dollars every year purchasing the latest and greatest to squeeze more performance out of your computer, read on. A few simple tricks might give you that extra bit of performance that makes the difference on your current computer, and finally convince you that SL 2.0 is not as slow as you thought — it just has the settings shuffled around and defaults to settings that are far too low.

Texture memory

In the olden days, SL would detect how much video RAM your card had, and use as much as it could, and that was all. Then the clever LL programmers found out that OpenGL (the rendering framework used by all versions of the SL viewer) supports the notion of “virtual RAM” for textures. What would that possibly be good for, since virtual RAM, as we all know, is always slower than real RAM?

When you upload a texture to SL, no matter what the original format was, it gets tightly compressed to JPEG2000, which is a reasonably good standard, especially for highly-compressed images. Having textures take up as little space as possible is crucial for virtual worlds like Second Life®, where a user is constantly streaming hundreds or thousands of images throughout the whole session. So it’s not really a question of storage — disk space is cheap these days — but of downloading time. A huge 1024×1024 texture can eat up to 5 MBytes uncompressed, but be compressed to a dozen KBytes (with luck!), which transfers rather quickly.

Of course, there is a trade-off. The more you compress an image, the more time it takes to decompress it. And your graphics card requires fully decompressed textures to display them. So this means that the lovely compressed 1024×1024 texture you’ve just downloaded in the blink of an eye will now eat up 5 MBytes of your card’s RAM. Even if you’re blessed with a 512 MB card or higher, this means that you can only store 100 of those textures (actually, a bit less, since SL also uses the graphics card’s video RAM for other purposes). The remaining ones have to be stored on disk, in the cache. This cache stores the compressed textures, so, even though it’s “only” up to 1 GByte large (LL has capped it at 1 GByte for mysterious reasons), it can, in fact, keep quite a lot of textures. And I really mean a lot.
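To put some rough numbers on this, here is a back-of-the-envelope sketch. The byte counts are my own assumptions — 4 bytes per pixel (RGBA) plus a full mipmap chain — not figures taken from the viewer code:

```python
# Back-of-the-envelope texture memory math; the numbers are
# illustrative assumptions, not measurements from the viewer.

def uncompressed_size_mb(side_px, bytes_per_pixel=4, mipmaps=True):
    """Size of a square texture once decompressed for the GPU.

    A full mipmap chain adds roughly 1/3 on top of the base image,
    which is why a 1024x1024 RGBA texture lands near 5 MB rather
    than a flat 4 MB.
    """
    size = side_px * side_px * bytes_per_pixel
    if mipmaps:
        size = size * 4 // 3
    return size / (1024 * 1024)

tex_mb = uncompressed_size_mb(1024)   # ~5.3 MB uncompressed
vram_mb = 512
print(f"One 1024x1024 texture: {tex_mb:.1f} MB uncompressed")
print(f"Textures fitting in {vram_mb} MB of VRAM: {int(vram_mb // tex_mb)}")
```

The mipmap chain is also why “a bit less than 100” of those textures fill a 512 MB card even before the viewer reserves VRAM for anything else.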

The trouble is that it also takes “a lot” of time to load those images from the disk cache (disks are around 100-1000 times slower than regular RAM; video RAM is quite a bit faster than regular RAM) and decompress them again to feed the graphics card. So if you have a very old graphics card (which is my case!!) with, say, just 128 MB of RAM, not many textures can be held there. Every time you move the camera and need more textures, the VRAM has to be flushed, and the new textures, hopefully all downloaded by now, have to be decompressed and put into the VRAM again. Every time you move around, this happens again and again.

And yes, of course, you have guessed: this causes lag. A lot of lag. It’s just client-side lag, but it’s nevertheless lag 🙂

Virtual graphics memory

OpenGL, which was obviously designed by people who knew what they were doing, allows applications to define “additional” virtual memory for textures. What this means is that instead of pushing textures out of the card’s RAM, and requiring them to be loaded again from SL’s cache (after the painfully slow decompression!), they can simply be swapped into regular RAM instead, still uncompressed. Now, as said, regular RAM is slower than video RAM — but not much slower. It’s still insanely faster than a disk access! And, more importantly, it avoids the decompression slowness altogether. Swapping fully uncompressed textures from RAM to video RAM is not dramatically slow. Although you’re obviously far better off with 512 MB of RAM on your graphics card, having just 128 MB on the card but 512 MB of virtual graphics memory would still provide most excellent results! In fact, the difference between those two approaches might not even be perceptible: in theory, a modern card with 512 MB or more of VRAM will also have a lot of other improvements over an old card with just 64 or 128 MB of RAM, and those improvements will make far more difference than nitpicking about the slight difference between storing uncompressed textures in regular RAM or in video RAM.
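To see why this matters so much, here is a toy model comparing the two paths a texture can take into VRAM. All the throughput figures are loose assumptions of mine, picked only to illustrate the orders of magnitude involved:

```python
# Toy model of the two ways an evicted texture gets back into VRAM.
# Every throughput number below is an assumption for illustration.

TEX_MB = 5.0            # one uncompressed 1024x1024 texture
COMPRESSED_MB = 0.05    # the same texture as ~50 KB of JPEG2000 on disk
DISK_MB_S = 50.0        # spinning-disk read throughput
J2K_DECODE_MB_S = 20.0  # JPEG2000 decode rate (it's expensive!)
RAM_COPY_MB_S = 3000.0  # host RAM -> VRAM upload throughput

# Path A: texture was evicted all the way to the compressed disk cache
path_a = (COMPRESSED_MB / DISK_MB_S
          + TEX_MB / J2K_DECODE_MB_S
          + TEX_MB / RAM_COPY_MB_S)

# Path B: texture was parked, still uncompressed, in "virtual" texture RAM
path_b = TEX_MB / RAM_COPY_MB_S

print(f"From disk cache:  {path_a * 1000:.0f} ms per texture")
print(f"From system RAM:  {path_b * 1000:.1f} ms per texture")
```

With these (made-up but plausible) numbers, the decompression path is two orders of magnitude slower — the disk read itself is almost irrelevant next to the JPEG2000 decode.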

So for over a year now, LL has placed a slider under Preferences > Graphics > Hardware, called “Texture Memory”. This controls the virtual graphics memory — or, more precisely, the amount of memory that SL tells OpenGL to reserve for texture caching. Many might have been shocked, in the early days, that this slider allowed them to set their memory far above what their card supports, and feared unexpected results (and oh yes, when this slider was first introduced, there were nasty bugs!). Currently, short of tinkering with the code, LL caps the limit at 512 MBytes, although the only official reason given is that some applications running at the same time as SL, and also using OpenGL, might go crazy if SL “steals” more than 512 MB from them (I’m hoping that some SL coders and hackers are reading this and remove that limitation soon; Imprudence folks, please go wild 🙂 ).

Now the Mac version of all viewers has always had a very nasty bug with that slider (I think Kirstens Viewer S17 once fixed it, but it popped back again in subsequent versions). While it most definitely worked, it never saved the setting properly between sessions: every time I logged back into SL, the first thing I did was change it, moving it to the extreme right. My silly graphics card might only have 128 MB of VRAM, but the iMac I use for SL has 2.5 GBytes of regular RAM, and I can definitely “spare” some of the RAM I don’t use anyway (SL will rarely use up more than 1 GByte). It was a nuisance, and multiple JIRAs exist asking LL to fix it, but definitely not a show-stopper. The Windows version never had any problem saving the setting (and I haven’t tried the Linux version for quite a while now).

SL 2.0 changes everything!

In this case, for the worse! Gah! As soon as I downloaded SL 2.0 for the first time, I went to search for that slider. It’s easy to find: it’s still under Me > Preferences > Graphics > Hardware:

But here came the surprise: although my card has 128 MBytes of VRAM, the slider didn’t save anything… above 64 MBytes! I was stuck with that setting forever! Gah, gah, gah. I’ve reported it as a bug, but to no avail; perhaps LL thinks that we poor users with old hardware are not worthy of logging in to SL any longer?

The immediate effect of having only 64 MBytes of VRAM is that you will always see textures constantly fading in and out, blurring and de-blurring, every second or so. This, according to LL, is “expected behaviour”. With just 64 MBytes of video RAM, SL can’t even keep all textures for your avatar (and its attachments!) uncompressed in video memory. So it is constantly swapping them out, and in again (which needs decompression!). You can imagine the loss of performance… and now you know why so many people are frustrated because SL 2.0 seems “so much slower” with some extreme cases reporting “half the performance”.

If LL is cheating, we can cheat too!

Ron Overdrive, on this JIRA, found a workaround. Apparently, when this feature was implemented and Linden Lab found out that some applications have problems with more than 512 MBytes, graphics cards had no more than 1 GByte of VRAM (this has indeed changed recently). So they simply added a setting that cuts the amount of RAM set on this slider in half. Hah!

Thankfully, this setting, called RenderTextureMemoryMultiple, is available from the Debug Settings, and you can change it back to normal 🙂

So now it’s time to fully tweak SL 2.0 to your heart’s content! We’ll use a few more nifty features which are always turned off by default. Some of them have been in “beta” for years now, probably because they break on some graphics cards and LL hasn’t figured out why yet. But for most of us they are stable enough, and since they will definitely enhance performance, you should activate them and see if they work better for you. If yes, be glad, and enjoy the additional performance increase! 🙂

The first thing is to get access to SL’s “hidden” menus. If you have never seen them before, they come up when you press Ctrl-Alt-D. Under SL 2.0, since this menu — called Advanced, which appears as the last item on the menu bar, to the right — was starting to grow too big, you’ll have to enable an additional menu, called Develop. It’s just a checkbox at the bottom of the Advanced menu which will make it appear. You can read what all those options do on the SL Wiki, although bear in mind that it still shows the layout for SL 1.X and not 2.0, which has a few more tricks.

At this stage, you should also check Run Multiple Threads. This will allow textures to be downloaded in the background as you move around SL. It should always be on (unless you’re on a dial-up connection!), so make sure you have it checked.

Now, some options and settings are not on any menu, because they’re “experimental” or “dangerous” or “not supposed to be used too often anyway”, so they don’t show up there. RenderTextureMemoryMultiple is one typical example. Instead, to get access to every possible setting for the SL viewer, you’ll have to open the small toolbox for Debug Settings, which is the second-to-last option on the Advanced menu:

You get an image like the one below. Just type RenderTextureMemoryMultiple and let the viewer search it among the thousands of possible settings 🙂

Now the default is 0.5, which, as said, will show just half the RAM you’re allowed to set for the texture memory cache. Although the viewer says that you ought not to go above 1.0, I totally disregarded that advice 🙂

My own experiments with a 128 MB RAM card:

0.5 — maximum memory selectable is just a pitiful 64 MB, half of what my old card has

1.0 — this gives me 128 MB, which is pretty much the amount of VRAM I have

4.0 or more (more won’t make a difference) — this allows me to go up to 256 MB, which is definitely far better!

I never managed to trick my iMac into going above 256 MB, but your mileage may vary. Some report that the 512 MB limit is really hard-wired. It’ll take some more serious hacking at the code to get more than that 🙂
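Putting those three data points together, the slider’s ceiling looks like it could be computed as follows — this is my guess pieced together from the experiments above, not something I’ve confirmed in the viewer source:

```python
# Guessed formula for the Texture Memory slider's maximum, inferred
# from the 0.5 / 1.0 / 4.0 experiments on a 128 MB card. It appears
# to be VRAM times RenderTextureMemoryMultiple, further clamped to
# twice the physical VRAM and to the 512 MB hard limit.

def max_texture_memory_mb(vram_mb, multiple):
    return min(vram_mb * multiple, 2 * vram_mb, 512)

for multiple in (0.5, 1.0, 4.0):
    print(f"multiple={multiple}: slider max = "
          f"{max_texture_memory_mb(128, multiple):.0f} MB")
```

The 2× VRAM clamp is what this model uses to explain why 4.0 (or anything higher) stops at 256 MB on a 128 MB card; on a 512 MB card, the 512 MB hard limit would kick in first.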

The best thing about this is that it will now persist across sessions, i.e. when you log out and log in again, it won’t forget the last setting! Hooray!

Now, how do you know that Second Life is really using that amount of memory? From the Develop menu go to the Consoles submenu. You should have an option to enable the Texture Console that way, and the SL Wiki has an explanation of all the complex things that are supposed to be happening.

What matters is the first line, which says GL Tot: XXX/YYY. XXX shows how much RAM SL is currently using for textures; YYY tells how much RAM is being passed on to OpenGL for texture memory. This apparently includes everything: video RAM and extra RAM. In my case, when I set the Texture Memory slider to 256 MBytes, YYY shows 384 MB (128 MB of VRAM + 256 MB of normal RAM). Not bad! You can imagine the difference it makes compared to the first day I installed SL 2.0, when I had just 64 MBytes for everything 🙁 With 384 MB, all lag disappears!… well, almost 🙂
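In other words, the YYY figure seems to be a simple sum — my reading of the numbers on my own machine, not a documented formula:

```python
# My reading of the Texture Console's "GL Tot: XXX/YYY" denominator:
# physical VRAM plus the virtual texture memory set on the slider.
# (An inference from the numbers I see, not a documented formula.)

def gl_total_mb(vram_mb, slider_mb):
    return vram_mb + slider_mb

print(gl_total_mb(128, 256))  # the 384 MB I see on my old iMac
```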

More tweaking and easy shadows

Ok, so on to more cool settings. Some of you might never have used Snowglobe or a viewer based on Snowglobe’s code (like Emerald), so you might be surprised that there is now a setting saying HTTP Textures. Yes, Linden Lab has realised, after several years, that Philip Rosedale’s patented streaming technology might have been a blessing for audio and video streaming, but really not that great for texture downloading. So Snowglobe allows you to choose HTTP as an alternative protocol. Currently, work is under way to allow all textures to be fetched via HTTP. Just think what an improvement this will bring to corporate and academic campuses, which almost always have network-wide proxy servers: once one employee or one student downloads a texture, you won’t ever need to burden your network by downloading it again! And since you can configure proxy servers to have as much disk cache as you wish — far above the SL viewer’s 1 GByte limit! — it’s theoretically possible that even a small corporate network might have all SL textures in cache! Well — almost… we’re talking about Petabytes of data 🙂 But even a few Terabytes of cached textures would work wonders…

But under the Rendering submenu, more goodies are hidden. One that should always be checked (it is by default) is Object-Object Occlusion. This stops the SL Viewer from downloading asset data (e.g. prims and textures) for objects that are not visible from your camera’s view. The increase in performance in closed spaces is awesome… unless those objects have windows (which means alpha textures). In that case, the benefits of occlusion are lost.

But there is a new rendering algorithm for alpha textures, called Fast Alpha, which you can check from this menu as well. It uses “lossy” alpha textures to speed up the process, and I’ve found that in places with several alpha layers in front of each other, this algorithm produces better results (i.e. fewer images wrongly “showing through” each other). I suppose that when zooming in on those, however, the quality might be worse (I personally can’t visually detect any differences), so this setting might not be appropriate for taking high-quality pictures or machinima. You’ll just have to experiment on your own!

The last bit of good news is that activating shadows on SL 2.0 has become easier (not as easy yet as on, say, Kirstens Viewer… but much better than before!). All you need to check is Framebuffer Objects (you can keep it checked if you wish; it uses some extra hardware acceleration features on your graphics card, which might improve performance drastically or not at all, but it will never hurt). Then the options below will un-grey; Deferred Rendering is what Linden Lab calls “turn shadows on” 😉 (I haven’t tested Global Illumination yet, but it’s allegedly a new model for lighting the environment that will completely sweep us off our feet; sadly, my own hardware is too old to get more than 0.2 FPS with it enabled, so I can’t really tell what it does. Experiment with it, though, and enjoy the fun!).

While you’re at it, Angela Beerbaum also suggests that you go into those Debug Settings, search for RenderAppleUseMultGL, and set it to TRUE. On the Mac, this will allow the Second Life viewer to use the multi-threaded OpenGL library instead of the single-threaded one, and this usually results in better performance (instead of waiting for each OpenGL call to complete, some might be sent in parallel). Thanks for the tip, Angela, I keep forgetting this one exists!

I hope this will help some of you out who have felt terribly frustrated with the move to SL 2.0. Like the menus and options, your old performance did not disappear… it just moved to different places 🙂


Diablo Balazic

Wowww is’t amazing how this settings helped me!!! I got 256MB(64 before) texture memory now and avatar is always fully loaded and i don’t have to rebake all the time. Also the other avatars rezzing for me now (strange to see how some avaters look lol) Thank you so much for this great tip Gwyneth, now i can enjoy my SL again as before.

Update: only the graphics cards with support for the latest version of OpenGL will get shadows enabled.


Diablo Balazic

I still got some problems rezzing lately. When i’m with 2 people in one room i constantly have to rebake to keep my avi from going fuzzy and blur! I tested it with a second puter and on that i can see both avi’s very good but on my Apple only the other avi and not mine. I rebake and it stays good for about 5 sec and pppfff… When i logged out the second puter the problems are gone and my avi stays ok! Do you know a way to change that in setting or make it better?