JPS is awesome, but it's missing two features I want in a pathing algorithm:

Weighted grids. I think Rectangular Symmetry Reduction can handle weighted grids, and it's quite similar to JPS in runtime efficiency.

Multigoal search. This is more of a library consideration than a property of the algorithm itself. JPS can handle multiple starting points, so it should be possible to expose that as multiple goals by performing the search backward, from the goals to the starting point.
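
The seeding trick is simple in any Dijkstra/A*-style loop: put every goal in the open set at distance 0 and search backward. A minimal sketch (the `neighbors` callback and node representation are made up for illustration):

```python
import heapq

def multi_goal_distance(start, goals, neighbors):
    """Distance from `start` to the nearest of several goals, found by
    searching backward from all goals at once. Hypothetical sketch:
    `neighbors(node)` yields (next_node, step_cost) pairs."""
    open_set = [(0, g) for g in goals]   # seed every goal at distance 0
    heapq.heapify(open_set)
    best = {g: 0 for g in goals}
    while open_set:
        dist, node = heapq.heappop(open_set)
        if node == start:                # nearest goal has reached the start
            return dist
        if dist > best.get(node, float('inf')):
            continue                     # stale queue entry
        for nxt, cost in neighbors(node):
            nd = dist + cost
            if nd < best.get(nxt, float('inf')):
                best[nxt] = nd
                heapq.heappush(open_set, (nd, nxt))
    return None                          # no goal reachable
```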

Rectangular Symmetry Reduction is related to JPS. They both exploit symmetry, and get similar performance gains over A*. As far as I can tell, RSR can handle weighted grids, but it requires preprocessing and additional data structures.

Was about to say this: JPS assumes A -> B has the same uniform weight as B -> C. So if you ever have a situation like the traveling salesman problem (shortest route through a set of points), then I don't think (from what I gather) JPS will be able to compute different weights.

So I think a grid where the cost to go from X -> Y is different in each case would break JPS.

TSP is also a good example of where programmers need to learn the difference between having the right answer and a good enough answer. While it is NP-Complete, we can use heuristics to calculate an answer that is very close to optimal in no time flat.
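
For the curious, here's the simplest such heuristic, nearest neighbour; it's typically within a few dozen percent of optimal on random inputs, and 2-opt style improvements close most of the remaining gap. A quick sketch:

```python
import math

def nearest_neighbour_tour(points):
    """Classic greedy TSP heuristic: always visit the closest unvisited
    city next. Not optimal, but O(n^2) and usually 'good enough'.
    `points` is a list of (x, y) tuples; returns a tour as indices."""
    unvisited = set(range(1, len(points)))
    tour = [0]                           # arbitrarily start at city 0
    while unvisited:
        last = points[tour[-1]]
        # Pick the closest remaining city and append it to the tour.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```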

Great things come out of "good enough". The fast inverse square root is a good example of this.
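
For anyone who hasn't seen it, here's the trick transcribed into Python (the original is C with pointer casts; `struct` stands in for the bit reinterpretation):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) via the famous Quake III bit hack."""
    # Reinterpret the float's bits as a 32-bit unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The 'magic' constant gives a surprisingly good first guess.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson iteration refines the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```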

Or, as the Devil's DP Dictionary says, "The traveling salesman problem is a problem of the type that computer scientists consider of extreme difficulty, but which traveling salesmen have been solving for centuries."

The book is rather old (predating the sophisticated use of computers by delivery companies, I'm sure), but it's extremely funny if you actually work with computers. My favorite was the three-page entry on "Kludge". And yes, I was just being funny.

The fast inverse square root (sample code) would never be tolerated in good modern code (i.e. code which is both modern and good; I'm not trying to say that new code is better than old code or anything of the sort). It's too hard to read.

This is why I specifically mentioned "modern good code" and further clarified it to "code which is both modern and good." Optimizing compilers, Moore's law, etc. have gotten us to the point where developer time is much more expensive than running time.

Uniform cost has its place, though. Say, for example, you're just trying to find whether two volumes are connected to each other (has a sealed volume been "breached" and is now connected to the "vacuum" of space?). I was on a forum the other day where someone was trying to do exactly that; their grid was basically "air permeable or not air permeable".
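
For that you don't even need costs; a plain flood fill / BFS answers "are these two cells connected". Something like this sketch (assuming a boolean air-permeability grid):

```python
from collections import deque

def connected(grid, a, b):
    """True if cells a and b are joined through air-permeable cells.
    grid[y][x] is True where air can pass; a and b are (x, y) tuples."""
    if not grid[a[1]][a[0]] or not grid[b[1]][b[0]]:
        return False
    seen, queue = {a}, deque([a])
    while queue:
        x, y = queue.popleft()
        if (x, y) == b:
            return True
        # Expand the four orthogonal neighbours that are in bounds,
        # permeable, and not yet visited.
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                    and grid[ny][nx] and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False
```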

Most world representations don't use grids these days, so probably not, no. But when you have the luxury of being able to use a search space that follows these rules, it is always likely to be a useful optimisation.

As I recall, for a particular type of heuristic function (admissible ones, which are easy to make: basically best-case estimates that never overestimate), A* is provably optimal, in the sense that no algorithm using the same heuristic can expand fewer nodes. So you have to find some better heuristic function, or use more information, to improve your results.

This algorithm appears to use more information, namely the information that the graph is a square grid with uniform costs.

That's my effort. The two U-shaped obstacles around the goals effectively push the start of the pathfinding to the back wall. The staggered single pegs across the middle of the board each give two equally good paths, one on either side, forcing the pathfinding algorithm to consider both.

Keep in mind that a JPS 'operation' is significantly more expensive than an A* 'operation'.

As far as I can tell, JPS gets its real speed boost by minimizing the number of accesses to the open set (the log n priority queue data structure); in cases where the number of open nodes is comparable, the extra work per step to find the jump points will make it slower overall.
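
To make that concrete, here's where the time goes in a textbook A* loop; the two heap calls are the O(log n) open-set accesses JPS mostly avoids (a sketch; `neighbors` and `h` are placeholders):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Textbook A*. `neighbors(node)` yields (next_node, step_cost);
    `h(node)` is the heuristic. The heappush/heappop calls are the
    expensive open-set operations; everything else is cheap."""
    open_set = [(h(start), 0, start)]    # (f = g + h, g, node)
    g = {start: 0}
    while open_set:
        f, dist, node = heapq.heappop(open_set)        # O(log n)
        if node == goal:
            return dist
        for nxt, cost in neighbors(node):
            nd = dist + cost
            if nd < g.get(nxt, float('inf')):
                g[nxt] = nd
                heapq.heappush(open_set, (nd + h(nxt), nd, nxt))  # O(log n)
    return None
```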

I was suspicious of that. In my examples, the 'far flung' green squares would have required some sort of blockage checks in between, which I don't think are accounted for in the 'operation' tally.

Then perhaps these are examples of the sorts of problems where JPS will be slower than A*. The methodology I used to create the examples was to create a high number of 'decision' squares on the periphery. This tool seems to indicate that seeking the jump points is 'free'; the article suggests (slightly) otherwise.

For what it's worth, I downloaded the code for that site from GitHub intending to color squares based on their inspection (something JPS must do to find a 'clear' path to a jump point), but I found you could tweak A* with a 'corner' weight. I did that with my latest map and got this: http://imgur.com/a/MSdXy . A* took fewer operations* than JPS (42 vs 57).

* Given that JPS operations are not exactly equal to A* operations, and might be more expensive.

What I'm personally curious about with A* is how you make a map where the solutions to get from A to B are generally the same, but which has a couple of movable pieces (zerglings) in it that act as walls and change the solution in real time. To save on processing time, you don't want to recalculate the A* maps over and over again.

In general, we've found that JPS is the worst with "forests", i.e. lots of decision points. Corridors and open spaces are its strong suit, but crossings make it slower. Even with the worst forest, it's roughly as fast as A*, so pretty impressive. (Edit: by "we", I mean a study mate and me.)

Nonsense. If you make it random there are almost certainly going to be paths that are trivially more optimal than other paths. With a worst-case scenario you want all paths to be more or less equal, forcing the algorithm to go deeper into the search to find the actual optimal path. This means symmetry.

Without preprocessing the grid the algorithm couldn't possibly know to go around this object in the first step because it doesn't know anything about the obstacle. It's really exploring a similar amount of the map as A* does but the way they're diagramming it makes it look like it's doing a much smaller fraction of the work than it actually is.

24 is the number of squares it goes through to get to the end. It is the shortest path to the goal. Without 'extra knowledge' about the playing field there is no way to 'skip' those squares. They had to have been accessed, even to determine that it is safe to skip them.

A perfect pathfinder would make a perfect decision every time a decision is required; that is, it somehow knows the correct direction to travel in to reach the target in the shortest distance, without testing any of the alternatives first. Since there are 24 steps to reach the goal, a minimum of 24 decisions must be made.

I understand your point, but the set operations plus cost evaluation are the heavy work in A*. JPS eliminates most of those operations in exchange for something faster (the jump-point scan itself). That's the point of the demo.

You're right, but I do think it's worth pointing out that reading a tile doesn't even come close to the cost of pushing a tile onto the open set, like you would in regular A* with every tile you encounter. If you've ever implemented regular A*, you know the biggest cost lies in sorting the open set, whether you insert intelligently or have to search the set for the cheapest node later on. I think the graphical representation of the arrows reflects this.

This algorithm simply looks ahead to see whether it's worth adding a tile. The actual looking-ahead part is quite clever as well. By sticking to its direction and ignoring everything else until it absolutely has to, it ensures that the total number of tiles read is no higher than with regular A*, and maybe even lower in practice. It also ensures that the number of tiles actually pushed onto the open set is kept to a minimum. I find it more ingenious the more I think about it.
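
If it helps, here's roughly what that look-ahead does for a straight (non-diagonal) move. This is just a sketch of my reading of the paper; diagonal jumps add a couple more cases:

```python
def jump(grid, x, y, dx, dy, goal):
    """Slide from (x, y) one step at a time in direction (dx, dy).
    Return a coordinate worth pushing onto the open set, or None if we
    hit a wall first. Straight moves only; grid[y][x] is True when
    walkable."""
    def free(cx, cy):
        # Bounds-checked walkability lookup.
        return 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx]

    nx, ny = x + dx, y + dy
    if not free(nx, ny):
        return None                     # dead end: record nothing at all
    if (nx, ny) == goal:
        return (nx, ny)
    # A blocked cell beside us that opens up past (nx, ny) means a turn
    # could be optimal here: a 'forced neighbour', so stop and report it.
    if dx != 0:                         # moving horizontally
        if (not free(nx, ny + 1) and free(nx + dx, ny + 1)) or \
           (not free(nx, ny - 1) and free(nx + dx, ny - 1)):
            return (nx, ny)
    else:                               # moving vertically
        if (not free(nx + 1, ny) and free(nx + 1, ny + dy)) or \
           (not free(nx - 1, ny) and free(nx - 1, ny + dy)):
            return (nx, ny)
    return jump(grid, nx, ny, dx, dy, goal)   # nothing interesting: keep sliding
```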

Edit: I just implemented this and it works like a charm. It's also important to note that the algorithm is recursive, which makes it quite easy to implement. However, that makes the demo even more deceptive than I originally thought, because it actually has to remove arrows where it knows they lead to dead ends. I just made this test in the demo to prove it. Because the 'stairs' go up by (2, 1), at every corner it should get one of those forced nodes. Obviously going diagonally here is cheaper than going straight down, so these corners simply had to be visited and 'pushed' onto the open set before considering the final route. Clearly, when it recursively found a dead end, it simply removed all evidence of its search there.

Still, that said, I can't imagine this algorithm being slower than regular A*. It's really fast, considering it makes so few pushes onto the open set compared to regular A*.

I'm not saying the algorithm shouldn't work, I'm just saying the demo is misleading. I know it doesn't do any preprocessing. The diagram makes it look like the algorithm goes around the obstacle immediately, when in reality it does something very similar to A*: It looks toward the obstacle until it runs into it and keeps scanning until it finds a corner to go around. The demo just doesn't show all those intermediate steps even though it shows every step of the A* pathfinding. I just wanted to point out that the algorithm isn't quite as superior to A* as the demo makes it look.

Does anyone have a real benchmark of the performance between the different algorithms? Both algorithms still have to fetch a large chunk of the board from memory. How much difference does the reduced number of checks actually make?

Does anyone know if the authors followed up on the intriguing final paragraph of the original reference:

One interesting direction for further work is to extend jump points to other types of grids, such as hexagons or texes (Yap 2002). We propose to achieve this by developing a series of pruning rules analogous to those given for square grids. As the branching factor on these domains is lower than square grids, we posit that jump points could be even more effective than observed in the current paper. Another interesting direction is combining jump points with other speedup techniques: e.g. Swamps or HPA*.

The key insight of JPS is that the ideal path always goes through a point next to the corner of an obstacle. What JPS effectively ends up doing is reducing the uniform-cost grid to a graph of just corner nodes and A*-ing through that instead of through every single node.

I think you could pre-calculate JPS by constructing a graph of just corner nodes and calculating the straight-path cost between adjacent corners. For a path, you make a node at the starting location, make it adjacent to the closest corner node CS and to all nodes CS is adjacent to. If more than one node ties for the closest, you have to do that for each one that ties (all nodes adjacent to the closest one will be considered, and "ties" should be adjacent anyway). Do the same for the destination. A* it out for your final path.
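
A sketch of the corner-detection half of that idea (my own formulation, not from the paper): a walkable cell is a corner node when it sits diagonally off a blocked cell whose two orthogonal neighbours are both open.

```python
def corner_nodes(grid):
    """Collect walkable cells that hug a convex corner of an obstacle.
    grid[y][x] is True where walkable. On a uniform-cost grid these are
    the only places an optimal path ever needs to turn."""
    h, w = len(grid), len(grid[0])
    def free(x, y):
        return 0 <= y < h and 0 <= x < w and grid[y][x]
    corners = set()
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            for dx in (-1, 1):
                for dy in (-1, 1):
                    # Diagonal neighbour blocked, but both orthogonal
                    # neighbours open: (x, y) wraps around a corner.
                    if not free(x + dx, y + dy) and free(x + dx, y) \
                            and free(x, y + dy):
                        corners.add((x, y))
    return corners
```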

I don't see how changing the graph to hexes or triangles would really affect the underlying algorithm at all. However, it has to remain uniform cost to maintain that the shortest distance between two points is a straight line. Otherwise, the heuristic that the shortest path goes through a jump point breaks down.

I'm fairly certain that all of this work and these cases just collapse down to: "In A*, resolve ties in the value of f(x) = g(x) + h(x) by choosing the node with the smallest h(x) as the next one to expand."
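
In heap terms that tie-break is a one-liner: push h as a secondary sort key so equal-f nodes pop smallest-h first. A sketch:

```python
import heapq

def push(open_set, node, g, h):
    """Order the open set by f = g + h, breaking ties toward the node
    with the smallest h, i.e. the one that looks closest to the goal."""
    heapq.heappush(open_set, (g + h, h, node))

# Example: two nodes with equal f = 10; the h = 2 node pops first.
open_set = []
push(open_set, 'a', 8, 2)
push(open_set, 'b', 5, 5)
assert heapq.heappop(open_set)[2] == 'a'
```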

Am I missing something? A good A* implementation shouldn't be expanding as many nodes as their demo appears to expand.

When coding A* I have added a cost to changes in direction, so I assume something similar could be done here. Combine that with some post-processed curves through the path nodes and you'd end up with something pretty natural looking.
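
The wrinkle is that the search state then has to include the incoming direction, not just the cell, or the penalty breaks A*'s bookkeeping. A sketch of how I'd do it (names and the `turn_cost` value are arbitrary):

```python
import heapq

def a_star_turn_penalty(start, goal, neighbors, h, turn_cost=0.5):
    """A* over (cell, incoming-direction) states so a change of
    direction can be charged extra; (0, 0) marks 'no direction yet'.
    `neighbors(cell)` yields (next_cell, step_cost, direction)."""
    open_set = [(h(start), 0, start, (0, 0))]   # (f, g, cell, direction)
    best = {(start, (0, 0)): 0}
    while open_set:
        f, dist, cell, came = heapq.heappop(open_set)
        if cell == goal:
            return dist
        for nxt, step, d in neighbors(cell):
            # Charge turn_cost whenever the direction actually changes.
            nd = dist + step + (turn_cost if came not in ((0, 0), d) else 0)
            if nd < best.get((nxt, d), float('inf')):
                best[(nxt, d)] = nd
                heapq.heappush(open_set, (nd + h(nxt), nd, nxt, d))
    return None
```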

There are plenty of nodes in the open set that are heuristically far closer to the goal (current distance + expected remaining length) than the selected node. Maybe he's using a bad heuristic, like straight-line ("as the crow flies") distance?

I think the author of this page misinterpreted JPS a bit; when I read the paper it seemed like the goal was still to find the optimal 4-connected path (only allowing NSEW moves, and not treating diagonal moves as 'better'), but you need the diagonal jumps to make the technique work. You can still make a diagonal step cost 2x a 'straight' step. Even if you want to consider diagonal steps as cheaper, your heuristic should be something like sqrt(2) * min(dx, dy) + abs(dx - dy), where dx and dy are the Manhattan distances to the goal in the x and y directions. That's the minimum cost to walk straight to the goal from a location, ignoring walls.
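
That formula in code, for reference (the standard "octile" distance, where dx and dy are absolute coordinate differences):

```python
import math

def octile(ax, ay, bx, by):
    """Minimum cost to reach (bx, by) from (ax, ay) on an 8-connected
    grid with straight steps costing 1 and diagonals sqrt(2), ignoring
    walls: go diagonally while you can, then straight the rest."""
    dx, dy = abs(ax - bx), abs(ay - by)
    return math.sqrt(2) * min(dx, dy) + abs(dx - dy)
```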

"Technically, the A* algorithm should be called simply A if the heuristic is an underestimate of the actual cost. However, I will continue to call it A* because the implementation is the same and the game programming community does not distinguish A from A*."