Wanted to post my new pathfinding applet and let people play with it. It uses A*, and it's almost a port of the code I did in Blitz3D a year or so ago. The Java version came out much cleaner and more efficient, though of course the Java version is 2D where the Blitz version was 3D. Try it out and let me know what you think.

It would be great if someone could compare the timing to their own pathfinding algorithms. The timing is purely the time it takes from starting the pathfinding to finishing the search; no drawing or rendering is included in the timing.

Let me know what you think!
- Make sure you click inside the applet to capture the focus.
- Press the 6 button to throw random blocks throughout the map.
- Left click a cell to make it the start cell; it turns blue.
- Left click again to make the selected cell the target cell; it turns red.
- Right click anywhere in the applet to find the path.

You can use the middle mouse button to create a new block or erase a block.

Pretty cool, I like pathfinding a lot too. Have you tried many optimisations? I found that the binary heap was a massive performance boost.

Why would anyone implement A* and not use a heap at least as sophisticated as a binary heap?
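For anyone following along who hasn't used one: Java already ships a binary heap as java.util.PriorityQueue, so a heap-backed open list costs very little to adopt. A minimal sketch; the Node class and its f-values here are made up for illustration, not taken from anyone's applet in this thread:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class OpenListDemo {
    // Hypothetical A* node: f = g + h, as in the standard formulation.
    static class Node {
        final String name;
        final int f;
        Node(String name, int f) { this.name = name; this.f = f; }
    }

    public static void main(String[] args) {
        // PriorityQueue is a binary min-heap: add and poll are O(log n).
        PriorityQueue<Node> open = new PriorityQueue<>(Comparator.comparingInt(n -> n.f));
        open.add(new Node("a", 14));
        open.add(new Node("b", 10));
        open.add(new Node("c", 24));
        // poll() always returns the node with the lowest f-value.
        System.out.println(open.poll().name); // b
        System.out.println(open.poll().name); // a
    }
}
```

Compared with scanning an unsorted list for the lowest f on every iteration (O(n) per pop), the heap makes the pop logarithmic, which is where the "massive performance boost" tends to come from.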

Xyle, if you want to get useful comparisons then you're going to have to post your source code. You're also going to have to look at larger test cases - a 16ms test is far too small to measure real differences unless the one you're measuring against is at least an order of magnitude slower. As a general rule of thumb I would say that a performance test which doesn't run for at least a minute won't give you any useful information.

A few things you can do in that application:
- Resize the grid using the arrow keys (WARNING: don't exceed 256x256, it starts to lag).
- Toggle the gridlines on/off using G.
- Move the grid using W, A, S, D (buggy though).
- Zoom in/out using + and - on the numeric keypad.
- Change the heuristic by pressing 1-5 on the qwerty keypad (only in A*).
- Press SPACEBAR to show how the nodes were traversed (pink shows paths tried; fading to black means not tried as much).
- Reset the view of the grid by pressing R.
- Press ENTER to relocate the start and goal nodes (sorry, you can't manually position them yet!).
- Press F5 to refresh the pathfinding operation... nice to do a few times to see the timer.

Appel, I really appreciate the sharing of your code. I changed my timing to match yours and ran a few tests that matched the map up with yours. I was excited to see that the paths found were identical and the search paths were relatively close for the most part. Although I was really bummed to see that your code is almost 3 times faster than mine, hahahaha!!

Now, who said what about binary heaps? lol, I'm going to have to learn them, I reckon!

There are hundreds of heaps in the literature. AIUI Fibonacci heaps were designed specifically for use with Dijkstra's algorithm, and are asymptotically better; relaxed heaps are also asymptotically better, and I'm sure many others are too. Then there are things like the C algorithm, which can be seen as an algorithmic variant of A* or as A* with a variant heap. I use it with a non-admissible heuristic where I care more about finding a route quickly than about finding the best route.


Here's the screenie...

Hm, you definitely need some optimization here, yeah. Actually, you might even have done something wrong. I've got a maze app I wrote that solves random mazes more or less instantly, even if they're fairly enormous. I've also got an algorithm on the iPhone that can solve situations comparable to the one you posted in less than a second... that's with a mobile processor. It's weird that such a small area is taking you 3 seconds to solve.

The maze app I wrote doesn't really use any optimizations like binary heaps, but the one on the phone uses a HashMap equivalent for faster lookup. Actually, you can have a look at my maze app, it's open source: http://www.otcsw.com/maze.php

There is a high chance you can find most targets without encountering obstructions. If there are obstructions, just let A* create a very rough path, and 'feel' your way from target to target. This looks very realistic for crowds and group movement.


> There is a high chance you can find most targets without encountering obstructions. If there are obstructions, just let A* create a very rough path, and 'feel' your way from target to target. This looks very realistic for crowds and group movement.

That's very cool. I would actually use that approach if I hadn't already written A*. But I find myself spending lots and lots of time trying to tweak A* to look more natural, especially when diagonal movements are allowed in a grid-based environment.

> There are hundreds of heaps in the literature. AIUI Fibonacci heaps were designed specifically for use with Dijkstra's algorithm, and are asymptotically better; relaxed heaps are also asymptotically better, and I'm sure many others are too. Then there are things like the C algorithm, which can be seen as an algorithmic variant of A* or as A* with a variant heap. I use it with a non-admissible heuristic where I care more about finding a route quickly than about finding the best route.

Just adding to this. In the maze examples there's no variation in move cost, so your heuristic will be "too good" and you won't be testing every aspect of the priority queues. One big advantage of Fibonacci heaps, pairing heaps etc. is improvement in decrease_key running time. If the heuristic can perfectly judge the distance to the goal when no obstacles exist, then there should never be duplicates in the open list that need updating, and decrease_key won't have to be used.

I guess it all comes back to making sure you are testing for exactly the situation you'll be using them in... reviled micro-benchmarks and so on.

> I've also got an algorithm on the iPhone that can solve situations comparable to the one you posted in less than a second... that's with a mobile processor. It's weird that such a small area is taking you 3 seconds to solve.

Thanks for the looky! Those are milliseconds, not seconds. I was thinking 3 milliseconds was pretty fast, considering that it could do that same routine 300 more times before getting even close to 1 second. Hahahaha. Until I saw Appel's, that is.

I am thinking the main slowdown is when I gather and set all the neighbor cells. It takes 6 checks to make sure each neighbor cell is valid, then it sets each cell's f, g, h and parent values. It must do this 8 times for each cell that gets checked on the open list, to determine whether the cell is valid and should be put on the open list. This was the main problem in my code that took me a day to figure out. I don't think it can be broken; I tested it many, many times on blocked targets and inaccessible targets, plus different size maps, etc. I can't think of any other way of checking and setting the neighbor cells. I've looked through tons and tons of examples and source but can't really get a grip on how other people do it.

Here's the code that checks and sets the neighbor cells: the top-left, top, top-right, left, right, bottom-left, bottom and bottom-right cells of the current cell being checked...
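The original listing didn't survive in this archive. As a rough illustration of the approach described above (the grid layout, field names and move costs are assumptions for the sketch, not the original code), an eight-neighbour expansion typically looks something like this:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical grid cell; field names are assumptions, not the original code.
class Cell {
    int x, y, g, h, f;
    Cell parent;
    boolean blocked, closed;
}

public class NeighborExpansion {
    static final int STRAIGHT = 10, DIAGONAL = 14; // common integer stand-ins for 1 and sqrt(2)

    // Examine the eight neighbours of 'current', scoring the valid ones.
    static void expand(Cell[][] grid, Cell current, Cell goal, List<Cell> open) {
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;          // skip the current cell itself
                int nx = current.x + dx, ny = current.y + dy;
                if (nx < 0 || ny < 0 || nx >= grid.length || ny >= grid[0].length)
                    continue;                              // off the map
                Cell n = grid[nx][ny];
                if (n.blocked || n.closed) continue;       // wall, or already expanded
                int g = current.g + ((dx != 0 && dy != 0) ? DIAGONAL : STRAIGHT);
                boolean onOpen = open.contains(n);
                if (!onOpen || g < n.g) {                  // new cell, or a cheaper route to it
                    n.g = g;
                    int adx = Math.abs(goal.x - nx), ady = Math.abs(goal.y - ny);
                    // Octile distance: admissible for 8-way movement with these costs.
                    n.h = DIAGONAL * Math.min(adx, ady) + STRAIGHT * Math.abs(adx - ady);
                    n.f = n.g + n.h;
                    n.parent = current;
                    if (!onOpen) open.add(n);
                }
            }
        }
    }

    public static void main(String[] args) {
        Cell[][] grid = new Cell[3][3];
        for (int x = 0; x < 3; x++)
            for (int y = 0; y < 3; y++) {
                grid[x][y] = new Cell();
                grid[x][y].x = x;
                grid[x][y].y = y;
            }
        List<Cell> open = new ArrayList<>();
        expand(grid, grid[0][0], grid[2][2], open);
        System.out.println(open.size()); // a corner cell has three valid neighbours
    }
}
```

Note that the linear `open.contains(n)` scan in this sketch is exactly the kind of cost that a binary heap plus a hash lookup, as suggested earlier in the thread, would remove.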

> Just adding to this. In the maze examples there's no variation in move cost, so your heuristic will be "too good" and you won't be testing every aspect of the priority queues. One big advantage of Fibonacci heaps, pairing heaps etc. is the improvement in decrease_key running time. If the heuristic can perfectly judge the distance to the goal when no obstacles exist, then there should never be duplicates in the open list that need updating, and decrease_key won't have to be used.
>
> I guess it all comes back to making sure you are testing for exactly the situation you'll be using them in... reviled micro-benchmarks and so on.

Hi Jono, what's an example of this? I don't understand what you're saying. Do you mean that in some simulations the heuristic changes as you find out more info, so the heap needs re-ordering?

> There is a high chance you can find most targets without encountering obstructions. If there are obstructions, just let A* create a very rough path, and 'feel' your way from target to target. This looks very realistic for crowds and group movement.

That's cool, great applet examples. Looks like the best way to achieve unit avoidance.

Assuming that gVal is initialised to Integer.MAX_VALUE / 2. Since you're assigning a greater cost to diagonal movement than to horizontal movement, your screenshot in the first post is quite clearly showing a non-optimal route, and I'm pretty sure that the reason is that you're never reducing the key of a cell in the open list.

> Since you're assigning a greater cost to diagonal movement than to horizontal movement, your screenshot in the first post is quite clearly showing a non-optimal route, and I'm pretty sure that the reason is that you're never reducing the key of a cell in the open list.

I think that's ok - the diagonal moves actually are slightly cheaper: 14 cost instead of 10*sqrt(2) cost. Edit: I see what you mean now, down the bottom right of the map. Hmm... even without reducing the key, shouldn't this still work? Something wrong with the priority queue? Edit 2: I can't access the files anymore. Is it using the Manhattan distance heuristic (overestimating cost) instead of Euclidean distance? That could explain it.
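The overestimation is easy to show numerically. With 10/14 move costs, moving n cells diagonally truly costs 14n, but Manhattan distance charges 10(n+n) = 20n, so it's inadmissible and can produce non-optimal paths; the octile distance is the admissible alternative. A small check (these helper names are mine, not from either applet):

```java
public class HeuristicCheck {
    // Manhattan distance scaled by the straight-move cost.
    static int manhattan(int dx, int dy) {
        return 10 * (Math.abs(dx) + Math.abs(dy));
    }

    // Octile distance: take diagonal steps where possible, straight steps for the rest.
    static int octile(int dx, int dy) {
        int ax = Math.abs(dx), ay = Math.abs(dy);
        return 14 * Math.min(ax, ay) + 10 * Math.abs(ax - ay);
    }

    public static void main(String[] args) {
        // 5 cells diagonally: the true cost is 5 diagonal moves = 70.
        System.out.println(manhattan(5, 5)); // 100 -- overestimates, inadmissible
        System.out.println(octile(5, 5));    // 70  -- matches the true cost, admissible
    }
}
```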

> Hi Jono, what's an example of this? I don't understand what you're saying. Do you mean that in some simulations the heuristic changes as you find out more info, so the heap needs re-ordering?

Yeah, I wasn't very clear. When you add a neighbour to the open list, it may already be in there but with different f-value. This will happen pretty frequently in grid-based search. There are two ways of handling it:

1) Just add neighbours to the list and don't worry about it. The priority queue will sort them so the best one is used first. Downside: if there are lots of duplicate neighbours, the open list will be much larger, slowing each operation.

2) Find the duplicate in the open list (usually via an extra hashtable), and compare the two f-values. If the f-value of the new neighbour is lower, you want to have it in the open list instead. Removing the old one and adding the new one is a duplication of effort. Instead you can update the existing node in the open list (including changing its f-value) and have the priority queue resort itself. This is the decrease_key operation (decreases the f-value).

Binary heaps have reasonably inefficient decrease_key operations (still log(n) though). Fancier heaps like Fibonacci heaps are more efficient. The problem is that when the amount the heuristic decreases after one step is equal to the cost for the move added to the g-value, a neighbour that is already on the open list will never have a lower f-value, and decrease_key won't need to be used.

In the grid example here it's not quite the case, but it is close enough that I think there will still very rarely be duplicates that need updating. If the cost to move between grid squares is different in different locations (like in any game with variable terrain), then this should turn up a lot more.
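Java's built-in PriorityQueue doesn't expose a decrease_key operation, so a common workaround for option 2 is to pair the heap with a HashMap of best-known f-values and simply re-insert the cheaper entry, discarding stale duplicates when they surface at the top. A sketch of that idea (not taken from any of the applets in this thread):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class DecreaseKeyDemo {
    // Hypothetical open-list entry: a node id plus its f-value at insertion time.
    record Entry(String node, int f) {}

    public static void main(String[] args) {
        PriorityQueue<Entry> open = new PriorityQueue<>(Comparator.comparingInt(Entry::f));
        Map<String, Integer> bestF = new HashMap<>(); // lowest f seen so far per node

        // First sighting of node "a" with f = 30.
        open.add(new Entry("a", 30));
        bestF.put("a", 30);

        // Later we find a cheaper route: "decrease_key" by re-inserting.
        if (20 < bestF.get("a")) {
            open.add(new Entry("a", 20));
            bestF.put("a", 20);
        }

        // When polling, skip stale entries whose f no longer matches the best known.
        Entry e;
        while ((e = open.poll()) != null) {
            if (e.f() != bestF.get(e.node())) continue; // stale duplicate, ignore
            System.out.println(e.node() + " expanded with f=" + e.f());
            break;
        }
    }
}
```

This is essentially option 1 and option 2 combined: duplicates do enter the heap, but the map ensures only the cheapest copy is ever expanded, and no in-place heap update is needed.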

> I see what you mean now, down the bottom right of the map. Hmm... even without reducing the key, shouldn't this still work?

Hmm. You have a point there. It should be incorrect, but the specific problem observed in the screenshot shouldn't happen if the priority queue is working. The actual result looks like pure breadth-first search.

Raft, I really, really appreciate you sharing that. I made a 50x50 map with random blocks spread throughout, used the same starting points in opposing corners, and this is what I got...

Your JavaScript page showed the path found in 74 ms on a comparable map with 30 rows x 50 cols.

Of course I am pretty happy with that. I really appreciate it.

PJT33, most of the stuff you're saying is way above my head. Rotation matrices? I've never seen or heard of them. I still haven't grasped binary heaps. I will have to reread what you posted a few times to try to understand it. I really appreciate the tips; it just may take a while to sink in and put into practice.


I did show 12 ms the first time, then 0 ms every time afterwards running the application. I'm sure it depends on the browser and PC running the applets. Mine is a bit slow and outdated, which is why I love attempting to develop things with it. Other people shouldn't have any problems running them.

I do know that the way I find the adjacent cells and set their values is horribly unoptimized and causes major slowdowns, but until I understand what pjt33 alluded to, it will have to do. Of course I will keep poking and prodding at it till then, hehehe.

> Thanks for the looky! Those are milliseconds, not seconds. I was thinking 3 milliseconds was pretty fast, considering that it could do that same routine 300 more times before getting even close to 1 second. Hahahaha. Until I saw Appel's, that is.

Hahahaha, oh. Well that makes a lot more sense! I was wondering how you would manage to solve it so slowly. XD

java-gaming.org is not responsible for the content posted by its members, including references to external websites and other references that may or may not have a relation with our primarily gaming and game production oriented community. Inquiries and complaints can be sent via email to the info account of the company managing the website of java-gaming.org.