I think you'll find it's the QuickLook daemon (pun intended), coupled with the incontrovertibly difficult maths needed to make a visual, at work in the example composition, and in any composition employing the stereo environment patch.

[Randy turned in on himself, no mean feat for a 40 stone man]

The solution, if found to be applicable on your machine, is so easy, you'll kick yourself :-).

Just place a Clear patch at the first level down inside the Stereo Environment patch.

Well, I've been playing with your example and have achieved loads of VRAM junk - see this video on YouTube, just uploaded.

Although there should be a rendered object, this glitches with or without an enabled object patch - buffer clutter results. I love it for that, and when appropriately trained the patch does do some interesting things, but I'm finding it to be slightly unreliable.

I had included/used the new 1024 patch - which I've also done a visualizer with - but even with a plain sphere instead, as per the example I'm posting here, it will glitch; that's because I'm jigging way too many settings on the Stereo patch.

There seems to be some odd interaction with 10.6's new deep aversion toward rendering every frame. If, for example, you attach an LFO to the Cube inside the Stereo Renderer, it may improve reliability (since it's forced to continuously redraw the cube instead of just drawing it once and going to sleep). I'll try to find a real solution for this.

I don't have a GeForce 9600M available at the moment to test on, but one should be arriving within the next week or so, and I'll add it to the test suite.

And definitely let me know if you're able to reproduce the Finder crash issue and/or isolate the crash log from it.

Screen Z defines the confocal plane --- where both the left eye and right eye will see the same image. As geometry diverges from the Screen Z value, the image locations in the left and right eyes will diverge.

Geometry closer than Near Z Clip and further than Far Z Clip is culled.

If you set Screen Z and Object Z to the same magnitude, opposite sign (e.g., Screen Z = 10, Object Z = -10), an object placed at the origin within the environment (e.g., a Sphere with default values) will be visible, rendered with center at 10 units from the camera. The image will diverge slightly as it approaches the camera.
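A back-of-the-envelope sketch of that zero-parallax idea, assuming an idealised parallel-camera model (the function name and the exact projection here are my own illustration, not the patch's actual maths):

```python
def parallax(point_z, screen_z, iod):
    """Horizontal left/right separation, measured on the plane at screen_z,
    of a point centred between two parallel cameras spaced iod apart.
    point_z is the point's distance in front of the cameras."""
    # The eye at x = +iod/2 sees the point projected onto the screen plane
    # at x = (iod/2) * (1 - screen_z / point_z); the left eye mirrors it,
    # so the left/right separation is twice that.
    return iod * abs(1.0 - screen_z / point_z)

# A point sitting exactly on the Screen Z plane has zero separation...
print(parallax(10.0, 10.0, 0.5))   # 0.0
# ...separation grows as geometry leaves that plane, and blows up as the
# point approaches the cameras (which is why fusion fails up close):
print(parallax(20.0, 10.0, 0.5))   # 0.25
print(parallax(1.0, 10.0, 0.5))    # 4.5
```

This also matches the later observation in the thread that separation becomes too extreme for the brain to fuse when objects come very close.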

Thanks smokris, for the excellent explanation and for the bug info. The lazy-rendering thing makes sense, in fact. In my very simple setup, I had a cube inside a trackball. When I rotated the trackball, both images sometimes appeared, but then the right one would disappear when I stopped dragging the trackball. I think there's something else going on too though, because of the pixel-junk we've both been getting.

Incidentally, I'll give it a go with the GeForce 9400 too, see if that's any different.

OK, tested the same basic QTZ with the 9400, and it's the same. The Right image actually appears when the composition is first started, but disappears when the trackball is dragged (the opposite of what I thought before). Even when it's there though, it flickers on and off rapidly. The left image seems 100% stable, and I don't get VRAM junk with this simple setup.

To be clear, I've seen this plugin in a variety of versions... and haven't tried the very very latest yet (still keeping a pretty much working build I have around, and in 10.5 where it is solid).

Issues that I know of, besides what has been mentioned, are:

-In some systems, you have to have a 1:1 aspect ratio, or otherwise, make sure that width is always greater than height. I find that to be different on various computers.

-If it actually gets running and outputting an image on both sides (try a sphere or teapot... a standard QC object for your tests), you may still have problems with K3D, with Particle Tools, and possibly with Mesh stuff (I have not been able to test mesh stuff, because I haven't had it working in SL well enough yet). Things with K3D or Particle Tools may not draw at exactly the right depths or respect the z values correctly. This can be overcome, but I don't think that current K3D or Particle Tools builds handle this.

-Image ports/tooltips can have a weird side effect where you see the image upside down on the tooltip window, yet it will still render out correctly when put to billboard. I'm not sure if that is a tell of a larger problem or not.

-I have seen the freeze problem as well, in one of the builds while using SL, and also noted that as long as an LFO/Interpolate, etc. was attached, things evaluate. Part of me wants to say that aspect ratio plays a part in this as well, but I could be wrong.

gtoledo3 wrote:In some systems, you have to have a 1:1 aspect ratio, or otherwise, make sure that width is always greater than height. I find that to be different on various computers.

The issue with aspect ratios is fixed with the final released version, I believe.

gtoledo3 wrote:If it actually gets running and outputting an image on both sides (try a sphere or teapot... a standard QC object for your tests), you may still have problems with K3D, with Particle Tools, and possibly with Mesh stuff (I have not been able to test mesh stuff, because I haven't had it working in SL well enough yet).

Kineme3D 1.2 or later should work. I haven't tested it with ParticleTools recently.

gtoledo3 wrote:Image ports/tooltips can have a weird side effect where you see the image upside down on the tooltip window, yet it will still render out correctly when put to billboard. I'm not sure if that is a tell of a larger problem or not.

Good news about K3D 1.2 and on, indeed. Stereoscopic explosion will be a nice one... I've been really hoping for that one.

I can't remember what the Particle Tools issue was... I remember that it worked out better to keep all scene info "outside" of the stereoscopic patch (and other things like 3D loaders, warps, etc., as well). Certain Particle Tools renderers don't work correctly either, but that carries over to the standard Render In Image environment, so no complaint there.

Cybero, I'm talking about when you actually view them stereoscopically and whether or not depth info is correct when you view them on that kind of playback equipment, not simply whether they render or not.

Well, the IOD is, as I'm given to understand, what gives stereo separation control. When set high I get two separate images: one flickering (the right image) and one constant (the left image).

Unfortunately I don't happen to have the specific equipment to completely test out the validity of the examples posted, but I was given to understand that they were stereoscopically capable due to their rendering in the viewer window.

I don't even have a cheap and cheerful pair of red / green 3D glasses at hand - daft question perhaps, but would those give a better sense of what does and doesn't work on the examples I posted?

You definitely need 3D specs to know if it's working.
On the other hand, if you don't get a constant image out of both Left and Right image ports, we can fairly definitively say it's not working.

;)

You also need to set the two output billboards to Add blend mode, and eliminate the Green and Blue channels from the Left one, and the Red channel from the right (as in my example QTZ). This can easily be done with the Color inputs for each Billboard.

Genius! Thanks smokris, this one works on my system. The original one from the v.1.5 Compositions folder still gives GPU junk and flickers though. I guess you put in more Clears. Is there some rule about where you have to insert extra Clear patches when using this environment?

Great work, and thanks for responding so quickly to my moans (I'm a bit of a spoilt kid sometimes, for which I apologise).

Nice, glad about the update. I was just noticing I was getting the same glitches as TB when testing today (we have the same config laptop...maybe not OS version, but hardware wise). It was total glitchout.

I've just posted you two URLs for movie captures of the actual step-by-step procedure that results in 3D Object input-splitter problems when providing published ports. It also shows a nifty disappearing-trackball-related trick in the Stereo patch :-).
BTW, it has moved from being a selector error to an index-matching class error.

Actually, scrub that smokris. I was thinking my OpenCL-driven composition wasn't working inside the Stereo environment, but it was user error on my part with Additive blend mode and a light-coloured background.

Incidentally, is there a reason why objects rendered with the Stereo patch appear much smaller than without it? Your explanation of the various parameters was very clear, but I'm still a bit unsure about how to zoom in on things. Or would it be easier just to move the objects a bit nearer using a 3D Transform?

Incidentally, here's a very simple CIFilter kernel to composite the Left and Right images and output a single image. It probably only works properly if both images have a 100% opaque foreground, a 100% transparent background, and identical dimensions (which they should have, I'd have thought).
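The kernel itself isn't reproduced in the thread; as a rough Python illustration of the same channel arithmetic (red from the Left image, green and blue from the Right, as per the earlier Add-blend anaglyph setup - the function names and image representation are my own assumptions, not the actual kernel):

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Red/Cyan anaglyph compositing for one pixel:
    red from the left-eye image, green and blue from the right-eye image."""
    lr, _, _ = left_rgb
    _, rg, rb = right_rgb
    return (lr, rg, rb)

def anaglyph_composite(left_img, right_img):
    """Composite two same-sized images (nested lists of (r, g, b) tuples)."""
    return [[anaglyph_pixel(l, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]

# Left eye pure red plus right eye pure cyan composites to white:
print(anaglyph_pixel((1.0, 0.0, 0.0), (0.0, 1.0, 1.0)))  # (1.0, 1.0, 1.0)
```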

Just installed 1.6 on my MBP/9600, and for some reason, whenever I replace the default spheres in the sample comp with another object, such as my particle plugin, the Right image stops working.
Its output is plain black, or occasionally flickers.

Funnily enough, that was what I was getting with v.1.5. Are you using the example composition from the older v.1.5 examples folder, by any chance?
Apparently, the Clear inside the Stereo patch was disabled in the older QTZ.

Would it be possible to have the Stereo patch support multisampling? I've got used to seeing nice smooth(ish) edges to geometry in the Editor, but the only way to smooth the output of this patch (and the standard Render In Image patch, for that matter) is to do brute-force supersampling or apply some kind of edge-blurring post-process (unless I'm missing something here).

Come to think of it, a drop-in replacement for Render In Image that did multisampling would be a REALLY good thing. Should I make that a Feature-Request?

You get that sort of result; I don't. I get an image on both outputs, including your 1024 plugin. See the attached example; it works AOK for me :-). It's based upon tb's simple example. Putting your plugin or similar exotica [3rd party] into the Kineme example's treble iterator really slows down the rendered results, FPS-wise; that's my experience to date with trying that. I am finding it pretty difficult to get objects more exotic than Apple's to really iterate to that extent, although I'm having some success with iterations.
BTW, got a snazzy 1024 visualizer sorted :-) - windy

BTW, the example posted here is at a real rough-draft stage - problems with getting signals to set and reset the repel, shake and such. Work in progress.

Some work better than others. Also, when the object comes too close, the separation becomes too extreme for your brain to fuse the two images.

Re. the glasses, I got a 5-pack of them for not-very-much, from some Dutch website, years ago, when I bought a Loreo 35mm 3D camera. They're cheapo cardboard Red/Cyan ones with no arms. Also got two different but superior pairs free with Hexstatic albums :D

Although I've gotten pretty good results with iterating mesh with kineme 3d and apple opencl in this stereo patch, I hadn't found a way to make .md2 animate, until I realised that I'd set the scaling way, way too low.

It uses a different projection matrix than the standard QC GL renderer.

You can effectively zoom in by reducing the FOV input value, by moving the Object Z input value closer to zero (making sure to adjust the Near Z Clip input so close objects aren't clipped), or by using a 3D Transform inside the patch.
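For the FOV route, the standard perspective relationship is that on-screen size scales with 1/tan(fov/2), so narrowing the FOV zooms in. A sketch, assuming the patch behaves like an ordinary perspective camera (the function name and reference FOV are my own illustration):

```python
import math

def apparent_scale(fov_deg, ref_fov_deg=60.0):
    """Relative on-screen size of an object at a fixed depth, compared to a
    reference FOV, under a standard perspective projection."""
    return (math.tan(math.radians(ref_fov_deg) / 2)
            / math.tan(math.radians(fov_deg) / 2))

print(apparent_scale(60.0))  # 1.0   (same FOV, same size)
print(apparent_scale(30.0))  # ~2.15 (narrower FOV zooms in)
print(apparent_scale(90.0))  # ~0.58 (wider FOV zooms out)
```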

Those stills look great Alex, found a nice tip for anaglyph recently on vimeo: double up the specs to get better filtering. The gels are just too cheap and thin in the cardboard specs!

Now we just need to have the camera angles narrow as an object gets closer, to avoid the separation issues. I spotted a physical camera system which did this at a trade show a year ago, with a winding handle (it would be great to have it automated); I'm trying to think of a way to do it DIY for my EyeToy USB cameras.

The very last build isn't putting out both image channels in SL on my system - latest SL seed... (haven't tested in Leopard). Tried this with stock and 3rd party patches.

It also reveals an interesting weirdness with DAE/mesh and the stereo environment. I can actually make DAE's stop rendering by "not" having a lighting macro inside of the stereo environment, whether or not the DAE's actually have anything to do with the macro... the DAE's are actually outside of the lighting macro, but turning lighting off makes them crap out.

Scratchpole wrote:Those stills look great Alex, found a nice tip for anaglyph recently on vimeo: double up the specs to get better filtering. The gels are just too cheap and thin in the cardboard specs!

I'll give that a go (I have several pairs here)!

Quote:Now we just need to have the camera angles narrow as an object gets closer to avoid the separation issues.

Hmm... but that would make it look further away again, surely...
I'm still a bit confused about how the various controls of this patch interact, but I guess you should be able to automate how the IOD, for example, relates to Object Depth.

Quote:I spotted a physical camera system which did this at a trade show a year ago with a winding handle (be great to have it automated), trying to think of a way to do it DIY for my eyetoy usb cameras.

I think the Screen Z parameter effectively does that, by setting the depth at which there is 0 separation between the left and right channels. Or perhaps I'm completely wrong on that....

On 10.6.2 I screen-grabbed some similar weirdness regarding Trackball: Mesh Renderers outside the Trackball get knocked out if the Trackball is switched off. I was trying to capture how asking for input splitters on 3D Renderers causes exceptions. This was mainly using tb's simplified example.

Using "Insert Input/Output Splitter" with Kineme3D objects has always caused those exceptions (it's not specific to the Stereo Environment). You have to manually create the Input Splitter and set its type to Virtual.

When you create a Mesh Renderer deep within nested iterators, you get a crash in C3DEngineContextCreateSharedContext(). I haven't been able to recreate this --- Mesh Renderer works just fine for me (see attached screenshot). If you take the contents of the GL Stereo Environment and paste them into the top level of another composition (taking the Stereo Environment totally out of the equation), do you still get a crash if you insert a Mesh Renderer? And what if you do this inside a normal Render In Image patch? If so, please file a bug with Apple.

Well, I went at things by a reverse-engineering route, using toneburst's simple exemplar and adding features step by step to replicate the precise iterative combination of environment patches, Polygon & Stereo, used in the source-code example.

It works.

:-)

See attached screen grab.

and attached reverse engineered example.

These constructs can still just crash QC when first opened and asked to render. For the most part they are reliable, and I wonder if there isn't something about the QC cache that trips such constructs up?

What I mean is, if and when something goes amiss, especially due to .dae/OpenCL-related assets, even restarting doesn't seem to clear the kludge. Constructs, such as the one attached below, that are usually to be relied upon can cause QC 4 to crash.

I quite definitely get a crash whenever I try to place a Mesh Renderer into the pre-existing construct in the sample compositions.

The standard Gradient background patch won't work inside GL Stereo Environment --- it draws its geometry (apparently) at z=0, which is culled by GL Stereo Environment's projection. (This also applies to the projection used by the GL Frustum patch.)