
Sphere Mapping

This post on sphere mapping is part of an ongoing series on implementing the legacy NeHe lessons (originally done in C++ with OpenGL 1.x) using the three.js library. This post covers Lesson 23.

In computer graphics, sphere mapping (or spherical environment mapping) is a type of reflection mapping that approximates reflective surfaces by considering the environment to be an infinitely far-away spherical wall. This environment is stored as a texture (image) depicting what a mirrored sphere would look like if it were placed in that environment.

So how does this magic happen? Once again, three.js does almost all the heavy lifting. However, there are a couple of interesting wrinkles, notably how the texture gets mapped onto the sphere and how the background is rendered.

Please see the article linked above on the Geo-F/X website for all the details.

Mapping onto the Sphere

In earlier lessons (like Lesson 9 and Lesson 17) we covered UV coordinates and how they are used to map a texture onto a shape. But in those cases, both the texture and the target shape were flat. In this case, we use a texture that has a “fake” curvature to it. See the lesson for details on how this was done. Here is the spherized image:

You can most clearly see the effect of the spherical filter in the “halo” in the sky above El Capitan. The details on the math used to map the spherized image are in the article.
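To give a flavor of that math, the classic sphere-map lookup takes a reflection vector and returns the (u, v) coordinates into the mirrored-sphere image. Here is a minimal sketch in plain JavaScript (the function name is mine, not from the lesson; three.js performs the equivalent computation in its shaders):

```javascript
// Classic sphere-map lookup: map a unit reflection vector (rx, ry, rz)
// to (u, v) texture coordinates in a mirrored-sphere image.
function sphereMapUV(rx, ry, rz) {
  // m is twice the length of the vector (rx, ry, rz + 1)
  const m = 2.0 * Math.sqrt(rx * rx + ry * ry + (rz + 1) * (rz + 1));
  return [rx / m + 0.5, ry / m + 0.5];
}

// A reflection straight back at the viewer samples the center of the map:
console.log(sphereMapUV(0, 0, 1)); // [ 0.5, 0.5 ]
```

Note that the formula is undefined for the single direction (0, 0, -1), pointing directly away from the viewer, which is why a sphere map is only really valid from (roughly) the viewpoint it was captured from.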

Rendering the Background

We map the texture onto a sphere which appears to be in the foreground and can be spun and moved independently of the background. So how is this done? The answer is that the demo contains two scenes: a foreground scene containing the sphere, and a background scene consisting of a single plane geometry that covers the entire view. Finally, the properties of the scenes are set such that the background is always rendered behind the foreground. Details of all this are in the lesson itself.
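A render loop for this two-scene approach can be sketched as follows (a minimal configuration sketch; the variable names bgScene, bgCam, fgScene and camera are mine, and the full setup is in the lesson):

```javascript
// Two scenes: the background plane and the foreground sphere.
// Turn off auto-clearing so the second render doesn't erase the first.
renderer.autoClear = false;

function render() {
  renderer.clear();                  // clear color and depth once per frame
  renderer.render(bgScene, bgCam);   // draw the full-view background plane
  renderer.clearDepth();             // discard the background's depth values
  renderer.render(fgScene, camera);  // so the sphere always draws on top
  requestAnimationFrame(render);
}
render();
```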

The Lessons and Source

More information and a live demo of this lesson can be found at Geo-F/X here. As always, the sources are on github here. Feel free to contact me at rkwright@geofx.com or comment on this article directly below.

Orthographic Projection

This post is part of an ongoing series on implementing the legacy NeHe lessons (originally done in C++ with OpenGL 1.x) using the three.js library. This post covers Lesson 21 – Orthographic Projection. This is a relatively complex topic, though three.js does most of the heavy lifting. Still, we’ll only hit the high points here. Please see the article linked above on the Geo-F/X website for all the details.

Projections

Orthographic projection is a means of representing a three-dimensional object in two dimensions. It is a form of parallel projection, where all the projection lines are orthogonal to the projection plane. This results in every plane of the scene undergoing an affine transformation onto the viewing surface.
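Concretely, a parallel projection maps a box-shaped view volume straight to the screen with no division by depth, so on-screen x and y do not depend on z. A simplified plain-JavaScript sketch (the function name is mine; it follows the usual OpenGL-style mapping to normalized device coordinates, with depth measured positive into the screen):

```javascript
// Orthographic projection: map a point inside the view volume
// [left,right] x [bottom,top] x [near,far] to normalized device
// coordinates in [-1, 1]. There is no perspective divide, so
// parallel lines in the scene remain parallel on screen.
function orthoProject(p, left, right, bottom, top, near, far) {
  return {
    x: 2 * (p.x - left) / (right - left) - 1,
    y: 2 * (p.y - bottom) / (top - bottom) - 1,
    z: 2 * (p.z - near) / (far - near) - 1, // kept only for depth-buffering
  };
}

// The center of the view volume lands at the center of the screen:
console.log(orthoProject({ x: 0, y: 0, z: 5 }, -2, 2, -1, 1, 0, 10));
// { x: 0, y: 0, z: 0 }
```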

There are many types of projections; two of the most common are perspective and orthographic. You are probably also familiar with some of the geographic projections, which map the surface of a sphere (such as the earth) onto a flat surface. Mercator is the most common, but many others exist. A projections section on the Geo-F/X site covers many of these.

Orthographic Projection and GfxScene

As the lesson outlines, the GfxScene object has been expanded for this lesson to support orthographic projections. The lesson shows how to use the new functionality. It allows one to switch back and forth between orthographic and perspective projections for the same scene, like this:

An orthographic projection

A perspective projection
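In three.js terms, switching projections amounts to swapping which camera renders the scene. A sketch of the idea (my own variable names and sizes; the GfxScene internals in the lesson are more elaborate):

```javascript
// Two cameras viewing the same scene: swap at render time to
// compare projections. The sizes here are illustrative.
const aspect = window.innerWidth / window.innerHeight;
const perspCam = new THREE.PerspectiveCamera(45, aspect, 0.1, 1000);

const frustum = 10; // half-height of the orthographic view volume
const orthoCam = new THREE.OrthographicCamera(
  -frustum * aspect, frustum * aspect,  // left, right
  frustum, -frustum,                    // top, bottom
  0.1, 1000                             // near, far
);
perspCam.position.z = orthoCam.position.z = 30;

let useOrtho = false; // toggle this from the UI
function render() {
  renderer.render(scene, useOrtho ? orthoCam : perspCam);
  requestAnimationFrame(render);
}
```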

In a later lesson (Lesson 42) we’ll demonstrate how to use both projections in a single scene. This isn’t very commonly used, but it’s a cool technique.

The Lessons and Source

More information and a live demo of this lesson can be found at Geo-F/X here. As always, the sources are on github here. Feel free to contact me at rkwright@geofx.com or comment on this article directly below.

Both of these lessons use images with WebGL to create some interesting effects. They’re pretty simple though, so we’ll cover them fairly quickly.

Alpha Masks

The key to alpha masks is to create a textured object using TWO images. One is the full-color image, where the part you want rendered is non-black and the rest of the image is black (i.e. pixel values of 0). The second image is black and white, with the white portions corresponding to the non-black parts of the first image. When the two images are combined to form a masked texture in three.js, the black portions of the resulting texture are transparent and only the non-black parts of the first image are shown.
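The combination can be illustrated directly on raw RGBA pixel data: the mask’s brightness becomes the alpha of the color image. This is a plain-JavaScript sketch of the idea (the function is mine; the lesson does this with three.js materials rather than by hand):

```javascript
// Combine a color image with a black-and-white mask: where the mask
// is white the color shows through, where it is black the result is
// transparent. Both buffers are flat RGBA arrays of the same size.
function applyAlphaMask(color, mask) {
  const out = new Uint8ClampedArray(color.length);
  for (let i = 0; i < color.length; i += 4) {
    out[i]     = color[i];     // R
    out[i + 1] = color[i + 1]; // G
    out[i + 2] = color[i + 2]; // B
    out[i + 3] = mask[i];      // alpha taken from the mask's red channel
  }
  return out;
}
```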

Bump Maps

Bump maps also use images to produce the effect, but in a very different way. Bump mapping was invented by Jim Blinn in 1978 as part of his work on visualization for the Voyager space-probe project. It is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normals during lighting calculations. The result is an apparently bumpy rather than smooth surface, although the underlying geometry is not changed; only the apparent reflection of light off the surface changes.
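The core of Blinn’s idea can be shown with a height (bump) map sampled on a grid: the height differences between neighboring samples tilt the surface normal. A simplified plain-JavaScript sketch (the names are mine; three.js does this per pixel in its shaders when you assign a material’s bumpMap, with bumpScale controlling the strength):

```javascript
// Perturb a flat surface normal (0, 0, 1) using a height map h
// (a 2D array) at an interior sample (x, y). 'scale' controls the
// strength of the effect, like bumpScale in three.js.
function bumpNormal(h, x, y, scale) {
  // Central differences give the bump's slope in x and y;
  // the normal tilts away from the upslope direction.
  const nx = scale * (h[y][x - 1] - h[y][x + 1]) / 2;
  const ny = scale * (h[y - 1][x] - h[y + 1][x]) / 2;
  const len = Math.hypot(nx, ny, 1);
  return [nx / len, ny / len, 1 / len]; // unit-length perturbed normal
}

// On a flat region of the map the normal is unperturbed:
const flat = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
console.log(bumpNormal(flat, 1, 1, 1)); // [ 0, 0, 1 ]
```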

More information and live demos of both these lessons can be found at Geo-F/X here.

The Lessons and Source

You can find Lesson 20 at Geo-F/X here and Lesson 22 here. As always, the sources are on github here. Feel free to contact me at rkwright@geofx.com or comment on this article directly below.

Introduction

This post is a bit of a hybrid as it makes use of Canvas2D to render the text and labels, then composites them into the scene as three.js sprites. The original NeHe lesson 13 was about bitmapped fonts and how to use them, but that’s so last century. Canvas2D provides support for true vector fonts, so why would one use bitmap fonts?

On the other hand, being able to place labels where you want in a 3D scene is a handy feature. Moreover, three.js sprites have another useful aspect: they are implemented such that they are effectively in 2D space in the plane of the screen, so they always face the user, no matter how the scene is oriented.

There are four main parts to the demo:

Setting up the Canvas2D that is the basis for the sprite

Rendering the text and label on the canvas

Loading the contents of the canvas as a texture

Positioning the resulting sprite in the scene
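Stripped to their essentials, those four steps look something like this (a sketch with my own names and sizes; the lesson’s makeTextSprite wraps this up with many more options, and older three.js code uses a plain THREE.Texture with needsUpdate = true instead of CanvasTexture):

```javascript
// 1. Set up the canvas (power-of-two size; see the tips below).
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 2048;

// 2. Render the text and label on the canvas.
const ctx = canvas.getContext('2d');
ctx.font = '128px sans-serif';
ctx.fillStyle = 'white';
ctx.fillText('My Label', 1024, 1024);

// 3. Load the contents of the canvas as a texture.
const texture = new THREE.CanvasTexture(canvas);

// 4. Position the resulting sprite in the scene.
const material = new THREE.SpriteMaterial({ map: texture, transparent: true });
const sprite = new THREE.Sprite(material);
sprite.position.set(1, 2, 0); // the 3D point the label is attached to
scene.add(sprite);
```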

Setting up the Canvas

This is standard HTML5 – nothing tricky here. Two tips:

Make the canvas big; only the label and the text will actually be rendered as the texture, since the rest of the canvas is transparent as far as three.js is concerned

Ensure that the canvas size is a power of two, e.g. 2048×2048. If you don’t, three.js will resize it for you and send a warning to the console
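A quick way to check that requirement (a tiny helper of my own, not from the lesson):

```javascript
// A positive integer is a power of two iff exactly one bit is set,
// in which case n and n-1 share no bits.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

console.log(isPowerOfTwo(2048)); // true
console.log(isPowerOfTwo(1000)); // false
```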

Rendering the Text Sprites

I won’t go into the details of HOW this is done. Take a look at the lesson on the Geo-F/X website here for all the details (and of course the sources are on github). The key is the call to makeTextSprite.

Yeah, the function’s signature is unwieldy. I have a to-do item to convert it to use some form of class/object, but it works as-is. Notice that you can set the vertical and horizontal alignment of the text with respect to the 3D point where the sprite will be rendered. You can set the color of both the text and the label, rounded corners or not, opacity, etc.

And that’s pretty much it! Fairly simple but rather handy. At some point I will do the refactoring mentioned above and add it to the GfxScene class.

You can find the lesson at Geo-F/X here. As always, the sources are on github here. Feel free to contact me at rkwright@geofx.com or comment on this article directly.

Intro

This is the first blog from my new site (host). The intent is to provide some outlines of the wacky graphics explorations (WebGL, three.js) I have been trying out. During the day I manage a large open-source project (readium.org) and do some consulting in digital publishing. Not much scope for playing with graphics. At the same time, in the distant past I was a university professor, specializing in hydrology, permafrost and GIS. Don’t do much of that anymore either… 🙂 So I spend my spare time playing with scientific visualization of some of my experiments (when I am not in my woodshop). My website is here, my github account here, and my LinkedIn profile here.

Intent

The intent of the blog is to provide some color and explanation of the details, motivations and experience in some of the graphics and digital publishing explorations I have been doing. I’ll post the results of my explorations on my website, Geo-F/X. Here in the blog I’ll look at and discuss how and why they came to be.

Three.js and NeHe

I’ll start out the blogs with a recent project of mine, implementing the legacy NeHe demos with three.js and WebGL. I did a lot of work in OpenGL at one point, back in the early days of OpenGL (when little worked correctly), and the NeHe demos were nice intros. I was busy writing PostScript interpreters (for Eicon and QMS), so OpenGL was just a hobby. More recently, after a number of excursions (GIS, SVG, working at Adobe, then eBooks and digital publishing), I decided to look into WebGL as the browser support was getting pretty good.

I came across three.js and it looked cool, so I decided to amuse myself by implementing the NeHe demos in three.js. Turned out to be more work than I expected, but the code is all done and most of the “tutorials” are as well. Hope to complete them very soon. In the meantime, I am going to start blogging about the first couple of dozen which are all done. You can see the results here and the sources are on Github here.