Summary: This invention sets the transparency of a virtual object based on the distance between the viewpoint and the virtual object, in a virtual environment that generates both left-eye and right-eye images. Note: transparency in this context could be interpreted to cover several related concepts, such as fading and alpha-level adjustment.

Claim 1 (as filed) is:

A computer-readable storage medium having stored therein a display
control program which is executed by a computer of a display control
apparatus that displays a predetermined virtual space on a display
apparatus capable of stereoscopic display, the display control program
causing the computer to function as:

object placement means for placing a predetermined object in the virtual space;

transparency degree setting means for, in accordance with a distance specified between (1) the predetermined object placed in the virtual
space, and (2) a viewpoint position based on the position of a virtual
camera used for virtually shooting the virtual space,
setting the degree of transparency of a part or the entirety of the predetermined
object such that the longer the distance is, the higher the degree of
transparency is; (emphasis added)

image generation means for generating an image for a right eye and an image for a left eye by shooting the virtual space with a right
virtual camera and a left virtual camera, respectively, so that the
predetermined object, which is included in each of the image for a
right eye and the image for a left eye, has the degree of transparency
set by the transparency degree setting means; and

display control means for displaying the image for a right eye and the image for a left eye generated by the image generation means, on
the display apparatus.

Sample Figure

WHY IT MATTERS: Object transparency based on distance between a viewpoint and an object appears to have been done before, such as in the area of ray tracing.

QUESTION: Have you seen anything (published before December 09, 2010) that describes the transparency degree setting means in the manner recited in claim 1?

If so, please submit evidence of that prior art as an answer below. Please submit only one piece of prior art per answer below. We welcome multiple prior art proposals from the same individual; please create separate answers for each one. This is so the community can vet each individual piece of prior art independently.

For details about what makes good prior art, please see our FAQ. Once you have submitted prior art, check back soon to see if the Ask Patents community has chosen your prior art to be submitted to the United States Patent & Trademark Office.

If you'd like to contribute in another way, please vote or comment on submissions made below. And we welcome you to post your own request for prior art if you know of another questionable patent or patent application.

18 Answers

Addressing the "transparency degree setting means" issue: I believe prior art is very clearly shown by Blender's Mist feature, which uses this technique and is documented here.
That page is marked as modified in 2011, but its history goes back earlier; adjusting transparency based on distance is first mentioned in this version of the page from the history (published 4 September 2007):

Mist can greatly enhance the illusion of depth in your rendering. To
create mist, Blender makes objects farther away more transparent
(decreasing their Alpha value) so that they mix more of the
background color with the object color. With Mist enabled, the further
the object is away from the camera the less its alpha value will be.

This directly equates to "the longer the distance is, the higher the degree of transparency is".

Addressing the issue of generating "an image for a right eye and an image for the left eye": The same software (Blender) also has a plugin that does exactly this, by shooting the virtual space with a right and a left virtual camera. The plugin is described on this page. Relevant illustration from the page:

The last bullet point ("display control means for displaying the image...") is also addressed on the same page:

You can use Blenders Node Editor to control the left and right camera
render results and combine them to a Side-by-Side, Interlaced or
Anaglyph image

I believe that the first point in the claim ("object placement means for placing a predetermined object in the virtual space") is obvious, but for completeness this is described in Blender's documentation here:

Each object has a center or origin point. The location of this point
determines where the object is located in 3D space.

Prior art for this would appear to be OpenGL. Any version of OpenGL. Yes, even OpenGL 1.0, published in 1994. Let's take these in order:

object placement means for placing a predetermined object in the virtual space;

Yes, OpenGL can be used to do that.

transparency degree setting means for, in accordance with a distance specified between (1) the predetermined object placed in the virtual
space, and (2) a viewpoint position based on the position of a virtual
camera used for virtually shooting the virtual space,
setting the degree of transparency of a part or the entirety of the predetermined
object such that the longer the distance is, the higher the degree of
transparency is; (emphasis added)

Even ignoring shader-based techniques, OpenGL's fixed-function fog is perfectly capable of this. It's simply a matter of setting the fog color alpha to a value other than 1 and turning on blending.
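
For concreteness, here is a minimal sketch of that setup, using only calls that exist in the OpenGL 1.0 API (the start and end distances are placeholder values, and the comments are mine):

    GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };  /* a fog color whose alpha is not 1 */
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_LINEAR);   /* fog factor falls off linearly with eye distance */
    glFogf(GL_FOG_START, 5.0f);       /* placeholder: distance where fading begins */
    glFogf(GL_FOG_END, 50.0f);        /* placeholder: distance at which fading is complete */
    glFogfv(GL_FOG_COLOR, fogColor);
    /* Blending is enabled separately, e.g. glEnable(GL_BLEND) with
       glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), as noted above. */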

This will fade the object based on the distance from the virtual camera in the virtual space. The degree of transparency of part of the object will indeed be such that "the longer the distance is, the higher the degree of transparency".

This code would work on OpenGL 1.0 (PDF). Obviously, shaders could do the same thing, but having it as an explicit feature of the API makes this more relevant as prior art. These six lines, written in 1994, do what Nintendo is claiming to patent.

image generation means for generating an image for a right eye and an image for a left eye by shooting the virtual space with a right
virtual camera and a left virtual camera, respectively, so that the
predetermined object, which is included in each of the image for a
right eye and the image for a left eye, has the degree of transparency
set by the transparency degree setting means; and

display control means for displaying the image for a right eye and the image for a left eye generated by the image generation means, on
the display apparatus.

OpenGL has supported stereoscopic rendering from day one. Every version of OpenGL has had separate back and front buffers for left and right eyes (GL_BACK_LEFT, GL_FRONT_LEFT, GL_BACK_RIGHT, GL_FRONT_RIGHT).

Generally speaking, only professional hardware exposes stereoscopic rendering through OpenGL. But the API can do it.
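
As a rough sketch of how that looks on a quad-buffered stereo context (the camera and scene helpers below are hypothetical placeholders, not OpenGL calls):

    /* Render the same fogged scene twice, once per eye, into the left and
       right back buffers of a quad-buffered (stereo) framebuffer. */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    set_camera_for_left_eye();    /* hypothetical helper: left-eye view transform  */
    draw_scene();                 /* hypothetical helper: draws the fogged objects */

    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    set_camera_for_right_eye();   /* hypothetical helper: right-eye view transform */
    draw_scene();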

"Could be done" isn't the same as "has been done". If those 6 lines were written in 1994 and there is a published document, that is prior art. If, now that we read the application, we can figure out out to do it with an old system that is not prior art.
– George WhiteApr 13 '13 at 2:40

@GeorgeWhite: I'm not sure what the rules are for prior art, so I'll leave that to others to decide. But I would think that, considering how simple this was to implement, the "obviousness" issue would matter.
– Nicol Bolas, Apr 13 '13 at 5:11

So, we don't know that those lines were written in 1994? Or that someone wrote an article about how interesting that effect would look? Difficulty of implementation is not related to obviousness. What one can do after reading the application is hindsight, not prior art. Many inventions involve a small change that makes a big difference. Heating a substance to a particular temperature rather than a different temperature can produce a dramatically different result, but it may just take a different setting of a knob. This is not my field, and I have no idea if this is novel or non-obvious.
– George White, Apr 14 '13 at 1:33

Yeah, this won't be enough to kill the patent. This is a modern piece of code, not something that existed in 1994... Simply saying that a thing could have existed does not qualify as prior art.
– johnwbyrd, Jul 22 '13 at 17:08


Since it wasn't written before 1994, forget the 6 lines of code. But Section 3.9 of the linked OpenGL 1.0 spec specifically addresses achieving a fog effect by fading objects more the farther they are. The GL even gives several options of the equation of fading, and they are all parameterized. So even though Bolas's code wasn't written in 1994, the technique is clearly described in the OpenGL spec.
– Nicu Stiurca, Jul 23 '13 at 5:25

Valve's Source engine has publicly supported stereoscopic rendering since earlier this year, via the Oculus Rift dev kit. The free-to-play game "Team Fortress 2" is a concrete example of both of these features being used together.

I think this part of that link says it all: "Fade Scale <float> If you specify so in worldspawn, or if the engine is running below DirectX 8 (DX7 in Ep1), props will fade out even if the fade distances above aren't specified. This value gives you some control over when this happens: numbers smaller than 1 cause the prop to fade out at further distances, and greater than 1 cause it to fade out at closer distances. Using 0 turns off the forced fade altogether. See also the QC command $noforcedfade."
– user4389, Jul 22 '13 at 18:27

This YouTube video of Minecraft, dated 2009-12-24, clearly demonstrates both the fog that fades objects to different degrees as their distance from the camera changes and the anaglyph 3D feature.

Isn't this how a lot of games do LOD swapping, to cross-fade between a complex object and a simpler mesh (or texture billboard) as it gets farther from the camera? The earliest case I can clearly remember is the foliage in DICE's "Battlefield 1942" (published in 2002), but I'm sure there must be others.
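
For illustration, a hypothetical sketch of such a distance-based cross-fade (the function name and thresholds are made up, not taken from any particular engine):

    /* Alpha of the detailed mesh falls from 1 to 0 between fade_start and
       fade_end; the simpler billboard would use (1 - detail_alpha). */
    float detail_alpha(float dist, float fade_start, float fade_end)
    {
        float t = (dist - fade_start) / (fade_end - fade_start);
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return 1.0f - t;   /* farther from the camera => more transparent */
    }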

"the longer the distance is, the higher the degree of transparency"
Isn't this just saying "We want to take coding for fog and patent making the numbers go down instead of up with distance"?
Would that fail for obviousness?
Might also investigate prior art for the treatment of mirrors.

Re "a display apparatus capable of stereoscopic display": my first guess at where to find prior art is the Virtual Boy game console, but I do not know whether people could tune the brightness of the LEDs to give a sense of "depth" to the background. If somebody could recall a game that did this, that would be a stellar example that the effect has been in use since the mid-1990s.

Wikipedia also kindly offers a list of stereoscopic video games to examine for the effect. Is anyone familiar with any of these? Did any of them tweak luminance or transparency to make things appear more distant?

The OpenGL Programming Guide, 5th edition (published in 2006), describes use of "fog" in great detail between pages 261 and 271. It begins:

Computer images sometimes seem unrealistically sharp and well defined. Antialiasing makes an object appear more realistic by smoothing its edges. Additionally, you can make an entire image appear more natural by adding fog, which makes objects fade into the distance...

When fog is enabled, objects that are farther from the viewpoint begin to fade into the fog color.
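
For reference, the three fog factors the Guide parameterizes can be sketched as small C functions (z is the eye-space distance of the fragment; the factor is clamped to [0, 1] and then blended with the fog color, so a smaller factor at a larger distance means more fading):

    #include <math.h>

    /* The three fog factor modes of OpenGL's fixed-function fog
       (GL_LINEAR, GL_EXP, GL_EXP2); z is the eye-space distance. */
    float fog_linear(float z, float start, float end) { return (end - z) / (end - start); }
    float fog_exp(float z, float density)             { return expf(-density * z); }
    float fog_exp2(float z, float density)            { return expf(-(density * z) * (density * z)); }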

Silent Hill: Downpour (2012) is an implementation that supports both stereoscopic rendering and fog, which constitutes prior art: http://en.wikipedia.org/wiki/Silent_Hill:_Downpour. This game was built using Unreal Engine, which uses OpenGL and DirectX for rendering.

This video is taken from the game. Fog (distance-based object transparency changes) is clearly visible at the beginning: http://youtu.be/nbEI4B_rmiU

Well, I used this technique in a project described on my website. Since I haven't published any papers about this project, I'm not sure it counts (and frankly the "fake fog transparency" trick never struck me as novel since I was, myself, copying it from numerous earlier examples even then in 2004), but the web does represent a historical record of sorts. At least, it establishes that this technique is quite old.