Tough Challenge

I've got to design a system that uses 4 projectors (under $14,000 each) in sync with a workstation. The Problem: Each projector will project onto a quarter of a cylinder, together creating a seamless 360-degree view.

I've screwed around with this for about a month and concluded that I need new software. The workstation must break the image up into four parts vertically. After that, the signal must be distorted to compensate for falloff at the edges of the cylinder, and then the aspect ratio must be changed to fill the cylinder (10' diameter, 8' tall)... How in the hell can I do this?

(PS - budget is really negligible @ this point. I'm part of a team drafting a grant... thanks...)

The workstation is an SGI Octane2, and I don't know the specs on the video card. Anyhow... the software is still in question. It would obviously be a great help if it were pre-ported or written for a UNIX-based shell. So basically I need software that processes the output. Is there a way to use something similar to Adobe Premiere to convert it with an anamorphic filter and then just split that to a 4-way output?

Well, I know the NVIDIA Quadro4 cards are meant for up to 4 monitors. Is what you're doing now hooking the projectors to the video card? If so, any slicing and dicing of the images would have to be done at a near-OS level (or, in the case of *nix, at the window manager level).

Are you sure you need to use software? Can you not just adjust the projectors/monitors so that the images are faded on the edges? I'm still a little bit unclear on what you're trying to accomplish. I get the idea of the entire project, but I don't understand what you want to do to the images to get them how you want them.

well I know the nvidia quadro4 cards are meant for up to 4 monitors, is that what you're doing now is hooking the projectors to the video card? If so, any slicing and dicing of the images would have to be done at a near OS level (or in the case of *nix, the window manager level).

Not true.

If you are using something like Direct3D or OpenGL to render, simply get a pointer to the render buffer. In D3D you would get a pointer to the back buffer (an attached surface; the secondary buffer) and a pointer to the main or primary buffer.

You will need a very large source image; otherwise the pixels will break down into blocks in each of the 4 images.
This is NOT optimized. Using temporaries from C in assembly code results in a significant performance loss - as was pointed out to me by Fordy.

This has NOT been tested and I coded it from scratch here on the board. However, the principle will work. I'm simply iterating through the source buffer non-linearly. The reason I have to add is to avoid doing a mul in every loop iteration - I'm simply using a trick to get ESI to be correct. I'm not positive that all of this works as-is... but it's close.

You could do this in C with loops, but it would be slow. You could port it to use memcpy() instead of all the asm. The reason for the labels is that MSVC, BC45, TC, and DJGPP will not jump to labels that are inside asm blocks - they are not interpreted as labels. So you must end your asm block to use labels, which looks ugly... but it gets the job done.
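The original asm listing didn't survive the thread, but the memcpy() approach mentioned above can be sketched in plain C. Everything here is an assumption for illustration: the function name, the 32-bit pixel format, and a row-major layout with no row padding.

```c
#include <stdint.h>
#include <string.h>

/* Split one wide frame into `parts` vertical strips, one per projector.
 * src is srcw x h 32-bit pixels, row-major, no row padding (assumed).
 * Each dst[p] must hold (srcw / parts) * h pixels. */
void split_frame(const uint32_t *src, int srcw, int h,
                 uint32_t **dst, int parts)
{
    int stripw = srcw / parts;                  /* columns per projector */
    for (int p = 0; p < parts; ++p)
        for (int y = 0; y < h; ++y)
            memcpy(dst[p] + (size_t)y * stripw,                  /* strip row */
                   src + (size_t)y * srcw + (size_t)p * stripw,  /* slice of the big row */
                   (size_t)stripw * sizeof(uint32_t));
}
```

This does one memcpy per row per strip; a modern compiler's memcpy is typically as fast as a hand-rolled rep movsd loop, so the asm may not buy much.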

This is a task suited to assembly since it involves many copies to and from memory. If you can get the above code to work, all you need to do on each video card is render the buffers in the struct. The temp pointers are there only because compilers do not allow you to access structure or class members inside asm blocks - which is stupid. They say it will work, but I've never gotten it to.

This is not optimized and does not make good use of the stack, but it is clearer that way. This will create 4 screens from one huge one, which is what you wanted... I assume.

And this is for x86 platforms only of course...but you could port it to anything.

Thanks bubba. I had been screwing around with a blitting function, though, to no avail. I understand the principle in your code above even though I don't fully understand the assembly registers and instructions (i.e., eax, xor, etc.).

I guess since some of you partially understand what I'm getting at, I'll share a few more precious tidbits, though I do expect you to respect the intellectual property I have so far. I do share the proof of concept with other members of my team. Besides, it took us a couple of months to figure out the pneumatics, servos, and various other mysteries...

Anyhow... 'member back in the good ol' days before digital anything? I don't, but I've heard stories. The movie theatres projected their movies onto the silver screen from the rear, i.e., the audience would stare straight at the projector with the screen midway in between... right? Well, today we are going to take this "silver screen," wrap it up into a cylinder, and put the audience inside... starting to get the "picture" (pun intended)?

This is where the distortion comes in. Anamorphic drawings use a half-cylinder mirror to reflect back an undistorted view of what was originally drawn to look distorted to the naked eye. Look at the attached picture. This is the roughest layout, from a top view, and the areas at the ends of the paths to the cylinder are the problem. The actual ones are lined up and create a seamless 360-degree panorama. But the material does have a minute texture that creates dim edges, and the curvature of the cylinder creates falloff distortion at the seams... clear 'nough? Then 'nough said.

I'm not sure how to fix the falloff at the edges but the distortion can be fixed by accounting for the curvature of the surface in rendering. I assume you are rendering here and not projecting tv images or anything.

The distortion prevalent in a normal raycast is sin(90 - angle), where angle runs from -halffov to +halffov (-30 to +30 for a 60-degree FOV, or -22.5 to +22.5 for a 45-degree FOV). Since your projectors are basically enormous raycasters, and you are projecting onto a surface that is actually closer in the middle and farther at the edges - the opposite of 2D Wolfenstein and traditional raycasters - you should be able to derive the opposite effect.

sin(90) = 1, so there is zero distortion at the center, because the distance is multiplied by this factor and distance * 1 is simply distance. But your distortion is at its maximum at the center, so the inverse factor should work: simply alter the projection by that much. Altering the projection will require a matrix that projects a concave frustum instead of a convex one. For ray casting, simply divide the distance by cos(angle). Since cos(0) is 1, the center distance is not altered, and the distances get larger as you move away from center, which fits your projection scheme.

As for the falloff, I have an idea that might work. You could 'share' the left and right extents of the images between video projectors or video memory, then alpha-blend or color-blend those together - additive alpha/color blending. If you do it right, the falloff will be negated by adding the two colors together. But this would have to be very precise or you will definitely see it. The factor would be determined by how fast the light falls off at the edges and might not be constant. Your center projection medium must be manufactured with precise tolerances on the curvature or this will not work.

Combine the shared edges with an alpha factor consistent with the falloff at the edges and it should blend perfectly. Combine this with the distance solution and it might just work.

The falloff will be consistent with the rule that a surface's illumination is determined by the cosine of the angle between the incident ray and the surface normal. As you move around the circle, the surface is lit less because it does not face the light. You can account for this by slightly overlapping the images and additively color-blending the two images to again arrive at the original color values.

For illustration I've attached an image with massive falloff at the edges. Then I added it to itself and you see what happens. If I added an image with the exact reverse falloff amounts of the original -> you would get the original image because the falloff in the first would be made up by the color in the second.
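A minimal sketch of the cross-blend over a shared overlap region. The linear ramp, the 8-bit channel, and the function name are my assumptions; as the post notes, the real factor "might not be constant" and would have to match the measured falloff.

```c
/* Blend one channel of the shared overlap between two adjacent strips.
 * col runs from 0 to overlap-1 across the overlap region. The weights
 * are complementary linear ramps, so they always sum to 1 and a pixel
 * that is identical in both strips comes back unchanged. */
unsigned char blend_overlap(unsigned char left, unsigned char right,
                            int col, int overlap)
{
    double a = (double)col / (double)(overlap - 1); /* 0 at left edge, 1 at right */
    return (unsigned char)(left * (1.0 - a) + right * a + 0.5);
}
```

Because the weights sum to 1, two perfectly aligned projections reproduce the original color across the seam; any misalignment or mismatched falloff curve shows up immediately, which is the "very precise" caveat above.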

Also, what I described and what you drew are exactly opposite: rather than putting the audience inside of the movie, you are putting the movie on a cylinder inside the theatre. If you are projecting from the middle out to the edge, then the distortion is sin(90 - angleoflightray). Even though you are using a projector, you can still render this on a computer so that it will be projected correctly onto a curved surface. As for the seams in your picture, that is a manufacturing flaw and I know of no way to compensate for it. Any fix would be a mere hack since it's impossible to tell how much distortion will occur. As for the texture of the screen - this does not affect modern movie screens, so I cannot see why it would affect yours.

You also might need more than 4 projectors. Perhaps 8 would work - they would overlap in certain areas - you would need to account for this by taking the difference of the overlapped areas and rendering that difference on each edge.

Just ideas.

By the way, the anamorphic filter also uses some form of cos(90 - angle) principle - hence the middles bulge upward, and the farther from the middle of one view you get, the more normal the image looks to the naked eye.

I assume you are talking about the QuickTime technology that renders 3D views that do not distort, by distorting anamorphic or bulging renders.

EDIT: I think I understand now that you are projecting onto a curved mirror and back out to the room. Being employed by a glass factory, I know a significant amount about, well... glass. You need mirror-quality glass, and very high quality at that; there are several grades of it. Also, any defects in the glass will be magnified onto the screen, so it is imperative that there is no distortion. The amount of distortion will be determined by the index of refraction of that glass.

I would have to see some type of example as to the final projection in order to assist you more.

I am not reflecting the image but pushing it through the screen. I appreciate the detail here. Parts I haven't mentioned before complicate this further, though. Using the render in the video with the filters and falloff won't fully work... The cylinder shouldn't be used just for projecting a movie into. The audience should be able to put on a motion-tracking device that is used to orient the view to whichever direction the user is facing. The "software" problem must, in effect, do all that the renderer would do, but do it at a sub-OS level. I.e., the user could use the SGI platform with a wireless mouse and keyboard / other inputs (secret ones) and use the system interactively. I will end up having to rewrite the video drivers, or build a go-between app to manipulate what the images above look like, in a way that the screen output is permanently altered. Am I being clear? I'm sorry if I'm not, but I'm going on day three with six hours of sleep total... sigh.

The camera can already be oriented via software. This is a simple projection transform. If you are trying to allow the person to interact with this and display the results...this has been done in every video game around.

What you need then is to display the parts of the world that a single monitor cannot draw - namely because 1 monitor equals 1 frustum. You must then compute the new frusta. If one frustum is 45 degrees wide, then you will need a total of 8 frusta and 8 projections. The angle at which to render is extremely simple: just subtract the size of the frustum in degrees from the current view angle and you've got it.

I've got an entire book (or several) on 3D math and I'm really not sure where you are going with this, but some of what you said can already be done - it seems to be the projection scheme that has not been done.

By the way... nothing will be done at a sub-OS level, and you do not need to write a new video driver for this.

Create 8 attached surfaces and use the card to do a render-to-surface, then blit each surface. The "blit each surface" part will either require a card that can handle multiple monitors, or a small LAN over which the buffers can be transferred from one system to the next. The render is the easy part... getting the render to the screen might be the task.