Is there a way to revert to the "old" Seti screensaver instead of the floaty 3d version?

I would have thought that this screensaver requires more GPU work than the old one, so if we could swap to the old version, wouldn't that free up more processor time to work on the actual data?

Not to mention the heat generated by my GPU doing 3D screensaver processing all the time.
____________

No, you can't return to the 2D screen saver, as it wasn't really saving screens, what with all the static parts burning in on the monitor. You can either set up your preferences to use the emulation of the old screen saver, or just turn it off. It isn't really needed for the calculations that Seti is doing.
____________Jord

Unfortunately, it doesn't look like one can truly emulate the original Classic SETI screensaver. See this thread http://setiathome.berkeley.edu/forum_thread.php?id=3317#19512.

Yes, you can sort of make the newer 3D version look like the old one by following the wiki instructions, but the graphics are less legible, and many in the SETI community agree that the newer version doesn't look as polished as the original.

I first saw the screen saver in '99 or '00, running on a farm of computers in a university lab, and was instantly drawn to it. It was a conversation piece. I needed to know what it was and started talking to the tech lab assistant. I thought it was really "techie" and cool, so I immediately went and installed it on my machine and joined the hunt for extraterrestrial lifeforms! :-) I'm sure others joined the project based on similar experiences. I believe this is core to the ambitions of the project: that many people from many different backgrounds with many different interests can join together for a variety of reasons and contribute to a common goal.

One can engage in discussions regarding what SETI is for, what computers should be running it and why, etc. But what is this really all about? I find this situation to be a fascinating example of how decision makers continually "miss the boat" and disregard how modifications to existing systems will negatively impact the end user. I think one can draw parallels to the development of the newest Microsoft Office suite, where MS engineers now admit very little R&D went into usability testing of the newer, "better" product. Why so little regard for the people who actually purchase and use your product?

I think it is perfectly acceptable for creativity to drive innovation, and I understand the argument that laggards may impede development. But in this case look at what we're talking about-- a screen saver. A small piece of graphical programming that brought a little joy during the day to those who marveled at its simplicity and form.

Why discount this? Why not leave an option for those users who liked the classic 2D version to see it while they are crunching numbers? Would the inclusion of the 2D version have resulted in a nightmarish project boondoggle that wasted hundreds of precious hours of programming detail? Would the inclusion of the 2D version have significantly DEGRADED the performance and integrity of data gathered using the newer BOINC application? Were there licensing considerations which drove this decision, and is that why the 2D version was NOT included? I think these are fair questions to ask. Unfortunately, my suspicion is that the 2D version wasn't included because a small group of people somewhere made an executive decision based on their own feelings about this aspect of the project (which they deemed of little importance) without considering how the community at large might feel about it. Maybe there WAS a survey issued and feedback showed that the inclusion of the classic 2D screen saver was only important to a very small and select percentage of users-- but I have my doubts. And sure, this whole issue may seem trivial to those who couldn't care less.

I think this situation perfectly illustrates how software engineering often pays little attention to human factors engineering. Whether or not any of this matters is wholly dependent on your personal perspective.

1) The old 2D screen saver actually burned some screens because it moved very little.

2) The old 2D screen saver had to be hand-written for every OS that SETI@Home supported to ensure it would work with that platform's graphics subsystem. It also had to refrain from being too intensive, or it would take more power to run the graphics than to actually do the processing. The new 3D version is written entirely in OpenGL, which is supported by every major OS (Mac, Windows & Linux/Unix), so it does not have to be rewritten each time.

The last part of point 2 is particularly important. When SETI@Home first opened its doors, it had funding from the NSA and could afford to pay salaries for the talented people working in the lab. Now that the funds have run out (about seven or eight years ago now), the baseline code has to be easy to implement so that the few staff who actually remain in the lab can focus on keeping the servers going.

As much as I'd like to see a 2D option, leaving the 2D screen saver out was not a malicious or short-sighted attempt at neglecting the user base. The decision was not based upon "feelings", but on cost and ease of implementation.
____________

Well, I can understand if these were indeed the reasons. Screen burn isn't as prevalent today because most people have replaced their CRTs, but the decision was made a number of years ago.

I've seen other posts that make the point that the newer version won't run on older, "less capable" machines. I guess this is due to the OpenGL support in relatively newer video graphics cards. It seems counterintuitive to me that continuing the older 2D version would drain performance, given that chipset processing power has increased tenfold over the years. I would think that the additional processing power would more than make up for it.

As far as the open architecture goes, I was not aware that the classic screen saver was not originally developed to be platform independent. I can understand why resources would not be allocated to rewrite the code to do this, especially seeing as there are so few resources to work with! I appreciate your input.
____________
Simian Guerrilla Task Force :: Forbidden Zone Geocaches

It depends on where the processing gets done. When the S@H Classic screen saver was written, there was no real data processing on the graphics card; all of the processing for the screen saver was done on the same CPU that the crunching was done on. OpenGL makes different calls into the video driver, which determines what has to be done on the CPU and what can be done on the GPU. Since the newer screen saver moves much of the work to the video card where the Classic screen saver did not, the Classic screen saver would get in the way more than the newer version. The exception, of course, is older machines that do not have graphics cards with onboard processing; those machines slow WAAAAY down during the screen saver show.
____________BOINC WIKI

A couple of things. The graphics are OpenGL 1.1, one of the earliest versions available. Most video cards out there, even those with only 8MB of memory and upwards, are capable of doing OpenGL, as long as the user has installed the correct video card drivers. In most cases where the default Windows driver is used, there is no OpenGL support, because Microsoft didn't/doesn't like the Open part (don't know if this has changed?)

There are only a handful of video cards, mostly video chips embedded on the motherboard, that don't do OpenGL natively. Nothing one can do about that, except add a card that can.

But mostly, don't you think that the people using those older, less capable machines will have kept their CRT monitors attached? Why spend a fortune on an LCD or plasma monitor and attach it to a very old machine? That is counterintuitive. ;-)
____________Jord

Modern CRTs have few image-burn problems. Plasma screens, on the other hand, are almost as sensitive to burn-in as old-style CRTs. I've even heard of image burn on LCD displays, although it's very faint and fairly rare.