When is a Point Inside a Polygon?

In many applications, it is important to be able to determine whether a certain point is located inside a given polygon. A well-known algorithm that answers this question for general polygons (possibly with holes or self-intersections) is the half-line algorithm (see also Figure 1):

From the point P in question, shoot a ray (half-line) in an arbitrary direction, chosen such that the ray does not directly hit any vertex of the polygon Q.

Initialize a counter intersectionCount to zero.

For each edge e of Q, determine whether the ray intersects that edge. If it does, increase intersectionCount.

After all edges of Q have been checked, P is inside Q if and only if intersectionCount is odd.

Figure 1: Determining whether a point is inside a polygon Q using the half-line algorithm and the even-odd rule. Point P1 is inside the polygon, because its ray intersects Q three times; point P2 is not inside the polygon, because its ray intersects Q four times.

This algorithm, simple as it is, has one major problem: its stability, or more precisely, its lack thereof. The first step of the algorithm says: "... such that the ray does not directly hit any vertices..." What happens if the ray does hit a vertex?

If the ray intersects a line segment exactly at one of its endpoints, should the algorithm consider this a "valid" intersection or not? The problem is that each vertex of a polygon is shared by two adjacent polygon edges. This means that if one edge is intersected at an endpoint, the adjacent edge is intersected at that same endpoint. If the algorithm doesn't count vertex intersections, a ray can exit the polygon without intersecting any edge; on the other hand, if vertex intersections are counted, each one is counted twice, which has the same effect. Either way, points inside the polygon can be classified as "outside," and points far outside the polygon can be counted as inside.
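The double-counting failure is easy to reproduce. The following is a minimal sketch (the function name, polygon, and point are made up for illustration, not from the original text): a naive crossing test that treats an endpoint hit as a hit on both adjacent edges, applied to a diamond whose rightmost vertex lies exactly on the ray.

```python
def naive_crossings(point, polygon):
    """Count ray/edge intersections for a rightward ray from `point`,
    naively counting an endpoint hit on BOTH edges sharing the vertex
    (the flawed behavior described above)."""
    px, py = point
    count = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Inclusive range test: an endpoint lying exactly on y = py counts.
        if min(y1, y2) <= py <= max(y1, y2) and y1 != y2:
            # x-coordinate where the edge meets the horizontal line y = py
            sx = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if sx >= px:
                count += 1
    return count

# A diamond whose rightmost vertex (4, 2) lies exactly on the ray from P.
diamond = [(2, 0), (4, 2), (2, 4), (0, 2)]
P = (2, 2)  # clearly inside the diamond
print(naive_crossings(P, diamond))  # 2 -> even -> wrongly classified "outside"
```

Both edges meeting at (4, 2) are counted, so the total is even and the interior point is misclassified.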

The obvious fix doesn't really work: we could work around the problem by shooting another ray whenever the first one hits a vertex. But this leads to another problem: how do we determine whether a ray hit a vertex in the first place? Due to the limited precision of computer arithmetic, this is not always possible.

To solve this problem once and for all, we have to change the definition of a valid edge intersection. To do this, we agree to always shoot rays in the positive x-direction; then each ray shot from a point (Px, Py) separates the (x, y) plane into two regions: the region R1 containing all points (px, py) with py < Py, and the region R2 containing all points (px, py) with py >= Py. We increment the intersection counter only if the two endpoints of an edge lie in different regions, and if the intersection point of the edge with the line y = Py lies to the right of P (see Figure 2).

Figure 2: Region changes of edges ei with respect to a ray r. e1 does not intersect, because the intersection point lies to the left of P; e2 does intersect, because its endpoints lie in different regions and the intersection point lies to the right of P; e3 does not intersect, because both endpoints are in region R2; e4 does intersect, because its endpoints lie in different regions; e5 does not intersect, because both endpoints are in region R2.
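The region test can be sketched as a small per-edge predicate (a sketch with hypothetical names, not the author's original code):

```python
def edge_crosses_ray(p, p1, p2):
    """Region-change test for one edge (p1, p2) against the rightward
    ray from p: True iff the endpoints lie in different regions and the
    edge meets the line y = py at or to the right of p."""
    px, py = p
    x1, y1 = p1
    x2, y2 = p2
    if (y1 < py) == (y2 < py):   # both in R1, or both in R2: no crossing
        return False
    # Regions differ, so y1 != y2 and the edge crosses y = py exactly once.
    sx = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
    return sx >= px
```

Note that an endpoint lying exactly on the ray (py == Py) falls into region R2, so no special vertex handling is needed.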

This is the modified version of the algorithm to determine whether a point P is inside a polygon Q:

Initialize intersectionCount to zero.

For each edge e of Q, defined by p1 and p2, check the following:

If p1y < Py and p2y < Py, do nothing (both points are in region R1).

If p1y >= Py and p2y >= Py, do nothing (both points are in region R2).

Otherwise, calculate the intersection point S of edge e and the line y = Py. If Sx >= Px, increment intersectionCount.

After all edges have been checked, P is inside Q if and only if intersectionCount is odd.
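The steps above can be sketched as a compact function (a sketch, not the author's original implementation; names and test polygons are made up for illustration):

```python
def point_in_polygon(point, polygon):
    """Even-odd test with the region-change rule: an edge counts only if
    its endpoints lie in different regions (y < Py vs. y >= Py) and the
    crossing point lies at or to the right of P."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Both endpoints in R1, or both in R2: no region change, skip.
        if (y1 < py) == (y2 < py):
            continue
        # Regions differ, so y1 != y2 and the edge crosses y = py once.
        sx = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
        if sx >= px:
            inside = not inside  # toggling tracks the parity of the count
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))   # True
print(point_in_polygon((5, 2), square))   # False
# A ray through the diamond's vertex (4, 2) now yields exactly one crossing:
diamond = [(2, 0), (4, 2), (2, 4), (0, 2)]
print(point_in_polygon((2, 2), diamond))  # True
```

The cheap region comparison in the loop is also what gives the algorithm its performance advantage: most edges are rejected before any intersection arithmetic is done.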

The major difference between this algorithm and the previous one is how it handles a ray that hits a vertex: since a vertex lying exactly on the ray belongs to region R2, a vertex where the polygon crosses the ray causes exactly one of its two edges to change the region and be counted, while a vertex where the polygon merely grazes the ray causes either both or neither of its edges to be counted, leaving the parity unchanged. This solves the stability problem of the original algorithm. Now the only way a point can be wrongly classified is if it lies very close to an edge; in that case, numerical imprecision makes a correct classification impossible. But this is acceptable, since only points very close to edges are affected, and for those a wrong classification hardly matters. Figure 3 shows the "degenerate" case of an axis-aligned polygon. The standard algorithm typically fails for polygons with horizontal edges; the improved algorithm always classifies correctly.

Figure 3: Even in a polygon with horizontal edges, the improved algorithm always classifies correctly: Point P2's ray has exactly one region change. Edge e3 counts as a region change, since its endpoints are in different regions and it intersects P2's ray to the right; edge e4 does not count as a region change, since both endpoints are in the same region; edge e5 does not count as a region change for the same reason.

Another benefit of the improved algorithm is its higher performance. Consider a polygon with many edges, say 1000: in the original algorithm, each edge has to be intersected with the ray, which involves several expensive arithmetic operations per edge. But only a few of these edges ever change the region (typically two to six), and the improved algorithm rejects all others with simple floating-point comparisons, which involve no costly operations. It performs the actual intersection calculation only when there is a high chance (about 50%) that the edge intersects the ray.

As far as I know, this algorithm was invented by Prof. Alfred Schmitt (Universität Karlsruhe (TH), Karlsruhe, Germany) in the mid-1970s; I have no idea why hardly any graphics book seems to know it. I was lucky: Prof. Schmitt happened to teach the "Introduction to Computer Graphics" course at Karlsruhe the year I took it.