
One basic approach is the same as computing the shortest distance between 2 lines, with one exception.

If you look at most algorithms for finding the shortest distance between two lines, you'll find that they first locate the point on each line that is closest to the other line, then compute the distance between those two points.

The trick to extend this to segments (or rays) is to check whether that closest point lies beyond one of the endpoints of the segment, and if so, use the endpoint instead of the actual closest point on the infinite line.
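As a sketch of that approach in Python with numpy (an assumption; the thread itself uses MATLAB): solve the 2x2 system for the closest points on the two infinite lines, then clamp each parameter back into [0, 1], re-solving for the other where needed. All names here are mine, and both segments are assumed to have nonzero length.

```python
import numpy as np

def closest_segment_points(P, Q, R, S, eps=1e-10):
    # Closest points between segments P->Q and R->S (both assumed to
    # have nonzero length). First solve for the closest points on the
    # infinite lines, then clamp the parameters back into the segments.
    d1, d2 = Q - P, S - R
    w = P - R
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ w, d2 @ w
    denom = a * e - b * b
    # Guard against (almost) parallel lines, where the 2x2 system is
    # ill-conditioned; any t works there, so pick t = 0.
    t = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
    u = (b * t + f) / e
    if u < 0.0 or u > 1.0:
        # The closest point on line RS fell outside the segment:
        # clamp u, then recompute the best t for that clamped u.
        u = np.clip(u, 0.0, 1.0)
        t = np.clip((b * u - c) / a, 0.0, 1.0)
    return P + t * d1, R + u * d2

# Two skew segments that "cross" with a vertical gap of 1:
A, B = closest_segment_points(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, -1.0, 1.0]), np.array([0.0, 1.0, 1.0]))
print(np.linalg.norm(A - B))   # -> 1.0
```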

The SoftSurfer code is fairly good. It has trouble with almost parallel lines. I ended up writing a check for "almost parallel" lines. Once I did this, it worked well. I'm not sure why the check for "almost parallel" lines built into SoftSurfer didn't work out. Just thought the next user would like to know....
– Tim Perry Jul 10 '10 at 0:37

Write each line parametrically, so that PQ(u) = P + u*(Q-P) and RS(v) = R + v*(S-R). See that for each line, when the parameter is at 0 or 1, we get back one of the original endpoints. Thus, we know that PQ(0) == P, PQ(1) == Q, RS(0) == R, and RS(1) == S.

This way of defining a line parametrically is very useful in many contexts.
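As a tiny illustration of the parametric form PQ(u) = P + u*(Q-P), here's a Python/numpy sketch (the endpoints are hypothetical, chosen only for the demonstration):

```python
import numpy as np

# Hypothetical endpoints, purely to illustrate the parametrization.
P = np.array([0.0, 0.0, 0.0])
Q = np.array([2.0, 4.0, 6.0])

def PQ(u):
    # Point on the line through P and Q at parameter u.
    return P + u * (Q - P)

print(PQ(0.0))   # the endpoint P
print(PQ(1.0))   # the endpoint Q
print(PQ(0.5))   # the midpoint of the segment
```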

Next, imagine we were looking down along line PQ. Can we find the point of smallest distance from the line segment RS to the infinite line PQ? This is most easily done by a projection into the null space of line PQ.

Thus, N = null(P-Q) is a 3x2 matrix whose two columns are basis vectors spanning the two-dimensional subspace orthogonal to the line PQ.

> r = (R-P)*N
r =
0.83265 -1.4306
> s = (S-P)*N
s =
1.0016 -0.37923

Essentially what we have done is to project the vector RS into the 2 dimensional subspace (plane) orthogonal to the line PQ. By subtracting off P (a point on line PQ) to get r and s, we ensure that the infinite line passes through the origin in this projection plane.

So really, we have reduced this to finding the minimum distance from the line rs(v) to the origin (0,0) in the projection plane. Recall that the line rs(v) is defined by the parameter v as:

rs(v) = r + v*(s-r)

The normal vector to the line rs(v) will give us what we need. Because the original space was 3-d, the projection plane is 2-d, so we can get a perpendicular vector simply by a 90-degree rotation; in a higher-dimensional space I'd just have used null again. This little trick works in 2-d:

> n = (s - r)*[0 -1;1 0];
> n = n/norm(n);

n is now a vector with unit length. The distance from the infinite line rs(v) to the origin is simple.

> d = dot(n,r)
d =
1.0491

See that I could also have used s to get the same distance. The actual distance is abs(d), but as it turns out, d was positive here anyway.

> d = dot(n,s)
d =
1.0491
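The same two steps in Python with numpy (an assumption; the r and s here are hypothetical stand-ins, since the actual projected values depend on points not shown above), illustrating both the rotation trick and that either endpoint gives the same signed distance:

```python
import numpy as np

# Hypothetical projected endpoints in the 2-d plane.
r = np.array([-1.0, 1.0])
s = np.array([2.0, 1.0])

# 90-degree rotation, the 2-d equivalent of MATLAB's (s-r)*[0 -1;1 0].
rot = np.array([[0.0, -1.0], [1.0, 0.0]])
n = (s - r) @ rot
n = n / np.linalg.norm(n)      # unit normal to the segment rs

d = np.dot(n, r)
# Both endpoints give the same signed distance; the distance is abs(d).
print(abs(d), np.dot(n, s))
```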

Can we determine v from this? Yes. Recall that the origin is a distance of d units from the line that connects points r and s. Therefore we can write d*n = r + v*(s-r), for some value of the scalar v. Form the dot product of each side of this equation with the vector (s-r), and solve for v.

> v = dot(s-r,d*n-r)/dot(s-r,s-r)
v =
1.2024

This tells us (since v > 1) that the closest approach of the line segment rs to the origin happened outside the endpoints of the segment. So really the closest point on segment rs to the origin was the point rs(1) = s.

Backing out from the projection, this tells us that the closest point on line segment RS to the infinite line PQ was the point S.

There is one more step in the analysis to take. What is the closest point on the line segment PQ? Does this point fall inside the line segment, or does it too fall outside the endpoints?

We project the point S onto the line PQ. (This expression for u is derived from logic similar to that above. Note that I've used MATLAB's backslash operator \ to do the work.)

> u = (Q-P)'\((S - (S*N)*N') - P)'
u =
0.95903

See that u lies in the interval [0,1]. We have solved the problem. The point on line PQ is

> P + u*(Q-P)
ans =
0.25817 -1.1677 1.1473

And, the distance between closest points on the two line segments was

> norm(P + u*(Q-P) - S)
ans =
1.071

Of course, all of this can be compressed into just a few short lines of code. But it helps to expand it all out to gain understanding of how it works.
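As one sketch of that compression, here's the whole procedure in Python with numpy (an assumption, since the answer uses MATLAB), with hypothetical endpoints because the original P, Q, R, S aren't shown. It mirrors the steps above: null space via SVD, projection of R and S, the clamped parameter v, and the back-projection onto PQ, clamping u as well (which the answer checks by hand). Note that when both parameters end up clamped, some configurations can need one more refinement pass.

```python
import numpy as np

# Hypothetical endpoints; the answer's actual P, Q, R, S are not shown.
P, Q = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
R, S = np.array([0.0, 2.0, 1.0]), np.array([2.0, 0.0, 1.0])

# N spans the 2-d subspace orthogonal to line PQ (MATLAB's null(Q-P)),
# built here from the right singular vectors of the direction vector.
_, _, Vt = np.linalg.svd((Q - P).reshape(1, 3))
N = Vt[1:].T                              # 3x2, orthonormal columns

# Project R and S into that plane; line PQ maps to the origin.
r, s = (R - P) @ N, (S - P) @ N

# Closest approach of the 2-d segment rs to the origin, clamped to [0, 1].
v = np.clip(np.dot(s - r, -r) / np.dot(s - r, s - r), 0.0, 1.0)
C = R + v * (S - R)                       # closest point on RS to line PQ

# Project C back onto line PQ, clamping u into [0, 1] as well.
u = np.clip(np.dot(C - P, Q - P) / np.dot(Q - P, Q - P), 0.0, 1.0)
dist = np.linalg.norm(P + u * (Q - P) - C)
print(u, v, dist)
```

For these particular points both parameters land inside [0, 1] and the distance comes out to 1 (up to rounding).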

I would parameterize both line segments with one parameter each, bounded between 0 and 1 inclusive. Then take the difference between the two line functions and use its norm as the objective in a bounded minimization problem with the parameters as variables (the objective is quadratic in the parameters, so this is a small constrained least-squares problem).

So say you have a line from (0,0,0) to (1,0,0) and another from (0,1,0) to (0,0,0) (Yeah, I'm using easy ones). The lines can be parameterized like (1*t,0*t,0*t) where t lies in [0,1] and (0*s,1*s,0*s) where s lies in [0,1], independent of t.

Then you need to minimize ||(1*t,1*s,0)|| where t, s lie in [0,1]. That's a pretty simple problem to solve.
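A crude numerical sketch of that bounded minimization in Python with numpy (an assumption), using a plain grid search over t and s rather than a real solver such as scipy.optimize.minimize. For the easy lines above the difference is L1(t) - L2(s) = (t, -s, 0), whose norm equals the answer's ||(t, s, 0)||:

```python
import numpy as np

# Grid over both parameters in [0, 1].
ts = np.linspace(0.0, 1.0, 101)
grid_t, grid_s = np.meshgrid(ts, ts)

# Objective: ||L1(t) - L2(s)|| = ||(t, -s, 0)|| for the two easy lines.
dist = np.sqrt(grid_t**2 + grid_s**2)

# Locate the grid point of minimum distance.
i = np.unravel_index(np.argmin(dist), dist.shape)
print(dist[i], grid_t[i], grid_s[i])   # -> 0.0 at t = 0, s = 0
```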

Given line segments from p1 to p2 and from q1 to q2 you need to compute all of the following distances and take the minimum: (line1, line2), (p1, line2), (p1, q1), (p1, q2), (p2, line2), (p2, q1), (p2, q2), (line1, q1), (line1, q2). (Or maybe you can show mathematically that some can be eliminated.)
– j_random_hacker Mar 10 '09 at 15:50
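Some of those candidates can indeed be eliminated: the endpoint-vs-endpoint pairs are already covered by the endpoint-vs-opposite-segment distances (the projection clamps to an endpoint when needed), and the (line1, line2) distance is only a valid candidate when both closest points land inside their segments. A sketch of that trimmed enumeration in Python with numpy (an assumption; names are mine):

```python
import numpy as np

def point_segment_dist(A, B, X):
    # Distance from point X to segment A->B (A != B assumed).
    d = B - A
    t = np.clip(np.dot(X - A, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(A + t * d - X)

def min_dist_candidates(p1, p2, q1, q2, eps=1e-10):
    # Endpoint-vs-opposite-segment candidates; these also cover every
    # endpoint-vs-endpoint pair, since the projection clamps to endpoints.
    cands = [point_segment_dist(q1, q2, p1),
             point_segment_dist(q1, q2, p2),
             point_segment_dist(p1, p2, q1),
             point_segment_dist(p1, p2, q2)]
    # (line1, line2) candidate: only valid when both closest points on
    # the infinite lines fall inside their segments.
    d1, d2, w = p2 - p1, q2 - q1, p1 - q1
    a, e, b = d1 @ d1, d2 @ d2, d1 @ d2
    c, f = d1 @ w, d2 @ w
    denom = a * e - b * b
    if denom > eps:   # skip when the lines are (almost) parallel
        t = (b * f - c * e) / denom
        u = (b * t + f) / e
        if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
            cands.append(np.linalg.norm(p1 + t * d1 - (q1 + u * d2)))
    return min(cands)
```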

How about extending the line segments into infinite lines and finding the shortest distance between the two lines? Then find the points on each line that are the endpoints of that shortest connecting segment.

If the point for each line is on the original line segment, then you have the answer. If a point falls outside its original segment, then replace it with the nearest endpoint of that segment (and recompute the closest point on the other segment accordingly).