I recently got hold of a 300 MHz R5200 O2 (for free, written off at the company I work for), which I upgraded with some more memory and a second drive. I have installed 6.5.22 and Firefox 1.0.4 (from this site - works great). However, web browsing seems kinda slow. It does not appear to be download speed - that is the highest of all computers on my home network - but rather the speed at which the O2 connects to other computers while browsing. The number of connections the O2 can make per minute to a server on the net lies at around 100, while a PC connected to the same network can make 600-800 connections per minute. I tested this at http://www.speedtest.nl.
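For what it's worth, the "connections per minute" figure can be reproduced outside the browser, which rules the rendering engine out of that particular measurement. A minimal sketch (host, port, and duration are placeholders - the original numbers came from http://www.speedtest.nl, not from this code):

```python
# Rough sketch of the connections-per-minute measurement described above:
# open sequential TCP connections for a short window and extrapolate.
import socket
import time

def connections_per_minute(host, port, duration=5.0):
    """Open and close TCP connections to host:port for `duration`
    seconds, then extrapolate the count to a per-minute rate."""
    count = 0
    deadline = time.time() + duration
    while time.time() < deadline:
        s = socket.create_connection((host, port), timeout=2.0)
        s.close()
        count += 1
    return count * (60.0 / duration)
```

Running this from both the O2 and a PC against the same server would show whether the gap is in the TCP stack or in the browser.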

My network at home consists of a Sitecom WL-025 all-in-one device that is attached to a cable modem. All computers are connected by ethernet and cable to the Sitecom. Everything is pretty much standard.

We also have some O2s at work that are attached to a high-speed network (20 Mbit/s), and I compared the performance of a recent PC to the O2. This gave pretty much the same picture as at home: the O2 can keep up where download speed is concerned (1700-2000 kBytes/sec), but the number of connections the O2 can make is quite low. The PC can make about 3800 connections per minute, while the O2 manages only about 170. Again, web browsing appears to be slow.

The question is: is there some hardware limitation that caps O2 web browsing speed, or do I have a machine that is not properly configured? I checked most of the standard settings and cannot find anything out of the ordinary. What is realistic to expect from an O2?

Shoving the data around (networking falls under this category) should be plenty fast, even on older SGI systems. Your Sitecom router also has a CPU in there, and I bet it is MUCH slower than the one in your O2, but still it does just fine.

A valid networking benchmark would be an O2 serving webpages to a fast web-browsing machine (so the bottleneck isn't the browser running on the O2 itself), or concurrent FTP/SMTP/... connections - you get the idea.
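One quick way to stage that role-reversal with nothing but the Python standard library (the port number is an arbitrary choice, and whatever Python version runs on the O2 may of course differ):

```python
# Bare-bones HTTP server: run this on the O2 and drive it from a faster
# client machine, so the O2's own browser is out of the loop entirely.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def run_server(port=8080):
    # Serves files from the current directory until interrupted.
    httpd = HTTPServer(("", port), SimpleHTTPRequestHandler)
    httpd.serve_forever()
```

On the client side, anything that hammers the server with requests (ApacheBench, or repeated wget runs) would give a number the PC and the O2 can be compared on.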

IMHO, the biggest culprit (for newer software running like a snail on older systems) are the inefficient coding techniques of modern day 'programmers' that learned to program on gigahertz machines with hundreds of megabytes of RAM. Who needs optimization? Program too slow, no prob, the next generation of CPU/GFX-card/Chipset is around the corner... (I didn't have that luxury when I learned to program on my 1KB RAM Sinclair ZX-81 heheh).

Quake3 runs (sort of) on an O2, so why should a non-realtime render engine of some low-res text with pictures take half a day?

See, I'm rambling again. I'm sorry, but there's not much you can do. If you want a snappy workstation, and it HAS to be an SGI, try an Octane with dual R12K or better and a GFX card with texture RAM. Way faster than an O2 per MHz! The O2 is a fine machine for video I/O (if you have the A/V module) and simpler online tasks (e-mail/chat/ftp/news). Sad but true...

Glock wrote:IMHO, the biggest culprit (for newer software running like a snail on older systems) are the inefficient coding techniques of modern day 'programmers' that learned to program on gigahertz machines with hundreds of megabytes of RAM. Who needs optimization? Program too slow, no prob, the next generation of CPU/GFX-card/Chipset is around the corner... (I didn't have that luxury when I learned to program on my 1KB RAM Sinclair ZX-81 heheh).

Quake3 runs (sort of) on an O2, so why should a non-realtime render engine of some low-res text with pictures take half a day?

I'll jump in and say you're right. I'm doing a degree in 'Computer Games Technology' (which is NOT just about playing games!) and the closest my uni gets to program optimisation is 'Design of Algorithms' and 'x86 ASM'. This is for games, where performance is paramount. It seems to be the same for many of the unis I looked at in the UK. Today's solution for slow programs seems to be to buy a better computer. Shame.

So, if I want to test the rendering speed, I should download some complex web pages and see how long it takes to render them on the O2. Once the pages are in the cache, loading and rendering no longer depends on the network speed and settings.
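The "load is no longer a factor" half of that can be sanity-checked by timing the disk read on its own. A tiny sketch (the file path is a placeholder for a locally saved page):

```python
# Time reading a saved page from local disk. If this number is tiny
# compared to the browser's total, the remainder is rendering, not I/O.
import time

def time_local_load(path):
    """Read a file and return (size_in_bytes, elapsed_milliseconds)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return len(data), (time.perf_counter() - start) * 1000.0
```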

I ran the CSS test on an AMD Athlon 2400 with IE 6.0 and Firefox 1.0.3. Interestingly enough, IE is a lot faster at loading and rendering the local data. If I run the test at http://www.speedtest.nl, Firefox comes out the winner.

It's quite interesting to note that one program may have a faster rendering engine but a less efficient download strategy, and vice versa.
I remember that in some browsers you can change the strategy: start drawing the page early, completing it sooner at the risk of having to redraw, versus waiting until everything has arrived.
For local disk access the second approach is likely the most efficient, while on the net the first one may be best...

Ok, I ran the tests and it convincingly shows that the rendering does indeed determine overall speed. The CSS test page at http://www.howtocreate.co.uk/browserSpeed.html loads and renders in about 500 msecs in IE6 with my AMD Athlon 2400, either from the Net or from a locally stored copy of the page. Firefox on the O2 takes 19800 msecs, and Mozilla 1.4 a whopping 32500 msecs. On the O2 too, the time needed to load the file is not really a factor.

So, what I need to do is find the fastest browser for IRIX, then change the settings for the fastest render speed (if at all possible).

What is the current memory footprint on the O2?
Just check that you're not swapping a lot: the Mozilla family of browsers seems to be quite memory hungry.
Another interesting chapter is kernel tunable parameters. I've never worked with them on IRIX, but on HPUX you can do a lot of things with them.
Also, ndd on HPUX may help a lot in optimizing network performance.

If, after all this, you decide you hate your O2, I'll take it off your hands for free

I would think that 384 MB is enough. I tried some of the Firefox tuning parameters but they are all pretty much useless. I do not think kernel tuning will help much, because the rendering speed is the problem.

Having to work in a Unix environment, I really suggest paying attention to kernel parameters: rendering speed is not only about sheer CPU performance but, like all computational activities, is a function of how efficiently the computing resources are used.
As the resources are mediated by the OS and its kernel, it's often worth the time to investigate whether kernel parameters are relevant to the specific case.
For most non-single-user activities, HPUX machines need to be tuned to be usable, and the tuning has a major impact on performance: the size of the system tables can dramatically affect the responsiveness of the system. If they are too big you may have a kernel footprint that leaves little space for applications; if they are too small you end up spending a lot of CPU time swapping contents in and out of them...

twix wrote:Ok, I ran the tests and it convincingly shows that the rendering does indeed determine overall speed. The CSS test page at http://www.howtocreate.co.uk/browserSpeed.html loads and renders in about 500 msecs in IE6 with my AMD Athlon 2400, either from the Net or from a locally stored copy of the page. Firefox on the O2 takes 19800 msecs, and Mozilla 1.4 a whopping 32500 msecs. On the O2 too, the time needed to load the file is not really a factor.

CPU is the biggest factor here.
On my Octane 400 MHz R12000, Firefox 1.0.4 (foetz build) gives 8721 ms.
Interestingly, with the old Firefox 0.8 I was using (too lazy to upgrade until recently), it was 9476 ms.