In the previous post in this series, I selected a Bing Maps tile and converted its quadkey to a POLYGON representing the geographic extent of that tile. I then used that POLYGON as the basis for a query of the GTOPO data in SQL Server to retrieve the corresponding elevation data for the terrain on the tile. In this post I’ll combine these two into a 3d model showing the terrain of that tile.

Triangulating the Elevation Data

The GTOPO data is a set of distinct elevation recordings, recorded on a grid spaced at regular intervals, like this:

In order to construct a 3d terrain model from this elevation data, we need to construct a continuous smooth surface from those distinct points. For this, we will use triangulation.

Triangulation, as its name suggests, is the process of creating triangles from a set of data points. These triangles tessellate together to cover the entire extent of the data, with no gaps and no overlaps. There are many different triangulations of the same set of points – but I’ll use the Delaunay Triangulation. I could probably write a whole new blog post on the Delaunay triangulation, but for now all I’ll say is that there are many examples of code on the internet that describe how to create a set of triangle polygons from an input set of points. For example, you could try looking here, here, or here.
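I won’t reproduce a full implementation here, but the defining property of the Delaunay triangulation is easy to state in code: no point of the dataset may lie strictly inside the circumcircle of any triangle. A minimal sketch of that "in-circle" test, using the standard determinant predicate on plain doubles (this is an illustration of the property, not part of my actual pipeline):

```csharp
using System;

static class Delaunay
{
    // Returns true if point (px, py) lies strictly inside the circumcircle
    // of triangle (a, b, c). The triangle must be wound anticlockwise;
    // this is the standard determinant "in-circle" predicate used by most
    // Delaunay triangulation algorithms.
    public static bool InCircumcircle(
        double ax, double ay, double bx, double by,
        double cx, double cy, double px, double py)
    {
        double adx = ax - px, ady = ay - py;
        double bdx = bx - px, bdy = by - py;
        double cdx = cx - px, cdy = cy - py;

        // Squared distances from each triangle vertex to the query point
        double ad = adx * adx + ady * ady;
        double bd = bdx * bdx + bdy * bdy;
        double cd = cdx * cdx + cdy * cdy;

        // 3x3 determinant; positive means "inside the circumcircle"
        double det = adx * (bdy * cd - bd * cdy)
                   - ady * (bdx * cd - bd * cdx)
                   + ad  * (bdx * cdy - bdy * cdx);

        return det > 0;
    }
}
```

A triangulation in which this returns false for every triangle/point pair is Delaunay; algorithms such as Bowyer–Watson work by repeatedly removing triangles that fail this test.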

Triangulating the set of elevation points leads to a set of triangular polygons, like this:

At first glance it may not be obvious why having a set of triangles in which each vertex is a point in the dataset is any better than the set of points we started with. What you’ve got to remember (and what the spatial results tab in SQL Server Management Studio can’t show you) is that each of the vertices of these triangles has an associated Z value – they are all at different heights. Therefore the triangles in our dataset now form a set of connected, angled faces.

Creating the WPF 3D Mesh

To display a 3D mesh representing the surface constructed from these triangular faces, I’ll use a WPF MeshGeometry3D object. First, connect to the database and loop through the triangulated SqlGeometry polygons, adding the X, Y, and Z values to the Mesh Positions array:

MeshGeometry3D mesh = new MeshGeometry3D();
while (dataReader.Read())
{
    SqlGeometry v = (SqlGeometry)dataReader.GetValue(0);
    // Loop backwards from point 3 to point 1 so that vertices are
    // added in anticlockwise order (explained below)
    for (int n = 3; n >= 1; n--)
    {
        mesh.Positions.Add(new Point3D(
            (double)v.STPointN(n).STX,
            (double)v.STPointN(n).Z / 10000,
            -(double)v.STPointN(n).STY
        ));
    }
}

There are a couple of points to note here:

Firstly, every value returned by my DataReader is a triangular SqlGeometry polygon. Polygons must be closed, so each actually contains four points – the last point being the same as the first. However, we only want the three distinct vertex locations.
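To make that concrete without needing SqlGeometry, here is a sketch of the same idea on plain value tuples (the coordinates are hypothetical): a closed triangular ring stores four coordinate pairs, and only the first three are distinct vertices.

```csharp
using System;

static class RingDemo
{
    // A closed triangular ring, as stored in a POLYGON: four coordinate
    // pairs, with the last vertex repeating the first to close the ring.
    // (Plain value tuples stand in here for SqlGeometry points.)
    public static readonly (double X, double Y)[] ClosedRing = new[]
    {
        (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, 0.0)
    };

    // Only the first three points of the ring are distinct vertices.
    public static (double X, double Y)[] DistinctVertices()
    {
        var result = new (double X, double Y)[3];
        Array.Copy(ClosedRing, result, 3);
        return result;
    }
}
```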

Secondly, when defining 3D objects in WPF (as in most 3D applications), the order in which you define the coordinates is important, as this is used to determine the “direction” in which each face points. Unless you explicitly specify vertex normals, you should define vertices in anticlockwise order as you look at them, to ensure that the associated face points towards you. Faces are single-sided, so if you look at them from behind they will be invisible. I loop through the points of each triangle from n = 3 down to n = 1 to ensure they are added to the mesh in anticlockwise order.
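If you want to verify the winding of a triangle yourself, a signed-area test does the job. Positive signed area means the vertices run anticlockwise for a viewer looking down the axis towards the plane; reversing any two vertices flips the sign, which is exactly what looping from n = 3 down to n = 1 achieves. A small sketch (illustrative only, not from the original code):

```csharp
static class Winding
{
    // Signed area of triangle (a, b, c) in the plane, via the 2D cross
    // product of edges (b - a) and (c - a). Positive => anticlockwise
    // winding; negative => clockwise.
    public static double SignedArea(
        double ax, double ay, double bx, double by, double cx, double cy)
    {
        return 0.5 * ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax));
    }
}
```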

I’ve scaled the Z value by an arbitrary factor (dividing by 10,000) just to make the mountains and valleys look in sensible proportion to the horizontal dimensions. I wouldn’t have needed to do this if I had first projected my latitude and longitude coordinates into metres, so that they were measured in a unit consistent with the GTOPO elevation data.
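For reference, a very rough degrees-to-metres conversion is easy to sketch: one degree of latitude is approximately 111,320 m on a spherical earth, and a degree of longitude shrinks by the cosine of the latitude. This is the equirectangular approximation, fine for a single small tile but not a substitute for a proper projection:

```csharp
using System;

static class Approx
{
    // Rough spherical-earth figure: earth circumference / 360 degrees
    const double MetresPerDegreeLat = 111_320;

    // Equirectangular approximation: scale longitude by cos(latitude) so
    // both horizontal axes are in metres, consistent with the elevation unit.
    public static (double X, double Y) ToMetres(double lonDeg, double latDeg)
    {
        double x = lonDeg * MetresPerDegreeLat * Math.Cos(latDeg * Math.PI / 180);
        double y = latDeg * MetresPerDegreeLat;
        return (X: x, Y: y);
    }
}
```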

Notice that, when considering a flat two-dimensional object, we tend to think of the x axis as extending across towards the right of the screen and the y axis as extending up the screen, with the z axis coming out of the screen towards us. The WPF 3D coordinate system keeps the x coordinate extending to the right, but treats the y coordinate as extending upwards and the z coordinate as extending towards the viewer. So, to map the SqlGeometry coordinate values to the WPF Point3D object, I use X = X, Y = Z, and Z = –Y.
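That axis swap can be captured in one small helper; here’s a sketch using a plain tuple in place of Point3D (the method name is mine, not from the original code):

```csharp
static class AxisMap
{
    // Map geographic (X = longitude, Y = latitude, Z = elevation) into the
    // WPF 3D frame (x right, y up, z towards the viewer):
    //   WPF x =  geographic X
    //   WPF y =  geographic Z  (elevation becomes "up")
    //   WPF z = -geographic Y  (north recedes away from the viewer)
    public static (double X, double Y, double Z) ToWpf(
        double geoX, double geoY, double geoZ)
    {
        return (X: geoX, Y: geoZ, Z: -geoY);
    }
}
```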

The resulting mesh, illustrating the mountains and valleys of Scotland contained on this tile, looks like this:

Adding Material to the Mesh

Now we’ve got our mesh, we can specify a material to paint it with. But, before we do, we need to specify texture coordinates for each point of the mesh. Texture coordinates describe how a 2D image should be stretched over a 3D shape. Fortunately, because we’re dealing with a square tile, the texture coordinates are simple – we want to stretch the entire image over the entire mesh, equally stretched at every point.

Thus, for each point added to the mesh, the associated texture coordinate is as follows:

mesh.Positions.Add(new Point3D(x, z, -y));

mesh.TextureCoordinates.Add(new Point(x, -y));

Once we’ve specified the appropriate texture coordinates for each point in the mesh, we can create a material based on an ImageBrush to superimpose the original Bing Maps tile image over the top.
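If you prefer to be explicit rather than relying on the raw (x, –y) values, you can normalize each point into the conventional 0–1 texture space using the tile’s bounding box. A sketch (the min/max parameter names are mine; note that v is inverted because image rows grow downwards while latitude grows upwards):

```csharp
static class TexCoords
{
    // Normalize a point into 0..1 texture space, given the tile's
    // geographic bounds. u runs left-to-right with longitude; v runs
    // top-to-bottom, so the y axis is inverted.
    public static (double U, double V) Normalize(
        double x, double y,
        double minX, double maxX, double minY, double maxY)
    {
        double u = (x - minX) / (maxX - minX);
        double v = (maxY - y) / (maxY - minY);
        return (U: u, V: v);
    }
}
```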

Here’s what the terrain mesh looks like with an aerial tile image brush:

Or, with a road tile:

Smoothing the Mesh

Currently, my mesh lists each point of each triangle separately – it contains 2,099 faces and 6,297 vertices (three separately listed vertices for each triangle). However, many of the vertices in the mesh are actually duplicates – any vertex at which two or more triangles meet is currently listed multiple times. The effect of listing points in this manner is to cause each triangle to be rendered individually, causing the “hard edges” between faces shown in the preceding image.
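A common way to weld those duplicates together is a dictionary keyed on position: each distinct position is stored once, and every triangle corner becomes an index into that shared list. A hedged sketch, with plain tuples standing in for mesh points (the names here are mine):

```csharp
using System;
using System.Collections.Generic;

static class MeshWeld
{
    // Collapse duplicated vertices: each distinct position is stored once,
    // and every triangle corner becomes an index into the shared list.
    // Corners should arrive three at a time, one triangle after another.
    public static (List<(double X, double Y, double Z)> Positions, List<int> Indices)
        Weld(IEnumerable<(double X, double Y, double Z)> corners)
    {
        var positions = new List<(double X, double Y, double Z)>();
        var indices = new List<int>();
        var seen = new Dictionary<(double X, double Y, double Z), int>();

        foreach (var p in corners)
        {
            if (!seen.TryGetValue(p, out int i))
            {
                // First time we've met this position: give it a new index
                i = positions.Count;
                positions.Add(p);
                seen[p] = i;
            }
            indices.Add(i);
        }
        return (positions, indices);
    }
}
```

For example, two triangles sharing an edge contribute six corners but only four distinct positions, so the welded mesh has four Positions and six Indices.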

As an alternative, we could list only unique points in the Positions list of the mesh, and then use the TriangleIndices property to list the index positions of each vertex that forms a triangle. For example, when looping through each point, x: