Forgive me, but I don't think this is very good code. For one thing, it is O(n^2) in the number of triangles. Using a hash table to store edges and their indices, you could achieve (expected) O(n) time with a single pass through the triangles. Second, you allocate about twice as much space as you need for the edges; in a 2-manifold mesh, each edge is shared by two triangles. Of course, triangleCount*3 edges would still be needed for a soup of disconnected triangles, but you could control your memory usage much better by using a std::vector or similar that only expands when it needs to. Which brings me to the third point: your code is horrible C++. Passing double pointers as parameters, what gives? At the least that should be a reference to a pointer; better still, it should be a reference to a container class, like the aforementioned std::vector. Also, the triplication of code (bad for readability and maintenance) could be solved by judicious use of the % operator. All in all, the algorithm works, but it could be written much better.
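For what it's worth, a sketch of that single-pass approach, with std::unordered_map standing in for the hash table (the Edge struct and all names here are illustrative, not the original code):

```cpp
#include <algorithm>
#include <cstddef>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative edge record: the two endpoints and up to two adjacent triangles.
struct Edge {
    unsigned v0, v1;
    unsigned tri0, tri1;  // tri1 == ~0u means a boundary edge
};

// Hash a canonical (min, max) vertex pair.
struct EdgeHash {
    std::size_t operator()(const std::pair<unsigned, unsigned>& e) const {
        return static_cast<std::size_t>(e.first) * 2654435761u ^ e.second;
    }
};

// Single pass over the index buffer: expected O(n) in the triangle count.
std::vector<Edge> BuildEdgeList(const std::vector<unsigned>& tris) {
    std::unordered_map<std::pair<unsigned, unsigned>, std::size_t, EdgeHash> seen;
    std::vector<Edge> edges;
    edges.reserve(tris.size() / 2);  // lower bound for a closed 2-manifold
    for (std::size_t t = 0; t * 3 + 2 < tris.size(); ++t) {
        for (unsigned j = 2, k = 0; k < 3; j = k++) {
            unsigned a = tris[t * 3 + j], b = tris[t * 3 + k];
            auto key = std::make_pair(std::min(a, b), std::max(a, b));
            auto it = seen.find(key);
            if (it == seen.end()) {  // first triangle touching this edge
                seen.emplace(key, edges.size());
                edges.push_back({a, b, static_cast<unsigned>(t), ~0u});
            } else {                 // second triangle: record the adjacency
                edges[it->second].tri1 = static_cast<unsigned>(t);
            }
        }
    }
    return edges;
}
```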

bramz
—
2005-11-16T20:35:34Z —
#3

I second that!

PS, reedbeta: you bastard!

bramz
—
2005-11-16T20:43:40Z —
#4

You don't really need the % either ... you can write a loop over the vertex indices as follows:
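A reconstruction of the loop being described (the original snippet didn't survive; the array name v is assumed):

```cpp
#include <utility>
#include <vector>

// Walk the three edges (v[j], v[k]) of a triangle: j trails k by one,
// wrapping around; j = k++ copies k's old value to j, then increments k.
std::vector<std::pair<unsigned, unsigned>> TriangleEdges(const unsigned v[3]) {
    std::vector<std::pair<unsigned, unsigned>> edges;
    for (unsigned j = 2, k = 0; k < 3; j = k++) {
        edges.push_back(std::make_pair(v[j], v[k]));
    }
    return edges;
}
```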

For once, there's actually a practical reason to use a postfix increment in a for loop statement! And the cute thing is, replace 2 and 3 by n-1 and n, and you can use it for n-sided polygons as well!

but I would agree that's a bit less readable than using the %

BTW, I'm sure mixing unsigned shorts and longs causes some warnings?

cm_rollo
—
2005-11-17T02:44:48Z —
#5

I agree that a hash-table implementation would have been better for large models, but this is a short snippet. It wouldn't have made sense to paste in a large chunk of hashing code, since AFAIK there is no standard hash container in the STL yet. I agree on the other points, though.

Second, you allocate about twice as much space as you need for the edges; in a 2-manifold mesh, each edge is shared by two triangles. Of course, triangleCount*3 edges would still be needed for a soup of disconnected triangles, but you could control your memory usage much better by using a std::vector or similar that only expands when it needs to. Which brings me to the third point: your code is horrible C++. Passing double pointers as parameters, what gives? At the least that should be a reference to a pointer;

1) Memory vs. speed, and in this case speed seems to be more of a concern (OK, let's not look at the algorithm). Letting a std::vector grow by itself would mean many more allocations and possibly (gasp!) an even larger array than triangleCount*3 at the end! (Though I agree that resizing the array to fit "edgeCount" edges before returning would have been better.)

2) Horrible C++ code? I don't think so. It may not be perfect, but looking at the function, it looks clear enough to me. I admit that I hate references to pointers; when a parameter can be modified or allocated, I prefer using a pointer, as it is much clearer when you see the call to the function afterwards, like:
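Something like the following, perhaps (the function name and stub body are made up for illustration):

```cpp
#include <vector>

// Hypothetical output parameter passed by pointer; the body is a stub.
void CollectEdges(unsigned triangleCount, std::vector<unsigned>* edges) {
    edges->assign(triangleCount * 3, 0u);
}

void Example() {
    std::vector<unsigned> edges;
    CollectEdges(8, &edges);  // the &edges at the call site flags the out-param
}
```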

Anyway, that's just personal taste, maybe; I use references as well, it just depends on the particular case. The most important thing for me when writing code is to write code that can be easily read afterwards and understood by a little kid with no programming experience whatsoever. Being a C++ purist to the bone doesn't make you a better programmer to work with.

Letting a std::vector grow by itself would mean many more allocations and possibly (gasp!) an even larger array than triangleCount*3 at the end!

That's true, but the effect can be mitigated by calling the vector's reserve function. Since we can be pretty sure that there will be at least triangleCount*3/2 edges, reserving that much space at the beginning would limit it to a single reallocation, at most (since std::vector grows by doubling, in most implementations).
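For instance (a sketch; the function name is illustrative):

```cpp
#include <cstddef>
#include <vector>

// Reserve the guaranteed minimum up front: a closed 2-manifold has exactly
// triangleCount*3/2 edges, so meshes with boundaries or disconnected
// triangles trigger at most one further reallocation (up to triangleCount*3)
// under the usual doubling growth policy.
std::vector<unsigned> MakeEdgeStorage(std::size_t triangleCount) {
    std::vector<unsigned> edges;
    edges.reserve(triangleCount * 3 / 2);
    return edges;
}
```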

I admit that I hate references to pointers; when a parameter can be modified or allocated, I prefer using a pointer, as it is much clearer when you see the call to the function afterwards...

This is a matter of taste, but I have to disagree: a variable name like uiResult, or a simple one-line comment, can make this clear. Also, almost all modern IDEs let you mouse over a function call and see its signature, and a non-const reference parameter should be a sure sign that it is being used for output. Moreover, pointers allow NULL values to be passed in; in some cases this makes sense, but when you don't want to allow it, you have to remember to include an assert() or something. References don't allow NULL values, and so are instantly recognizable as an output parameter that can't be set to NULL.
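A sketch of the reference-style out-parameter (the function name and stub body are made up): since the reference can never be NULL, the callee needs no assert() or null check.

```cpp
#include <vector>

// Hypothetical output parameter passed by non-const reference; it can
// never be NULL, so no null check is needed inside the function.
void FillEdges(unsigned triangleCount, std::vector<unsigned>& edges) {
    edges.assign(triangleCount * 3, 0u);  // stub body
}
```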

bramz
—
2005-11-17T10:36:10Z —
#8

@cm_rollo

I agree that a hash-table implementation would have been better for large models, but this is a short snippet. It wouldn't have made sense to paste in a large chunk of hashing code, since AFAIK there is no standard hash container in the STL yet. I agree on the other points, though.

In the absence of a hash_map, you can always use a std::map. It's not as good, but still better than O(N^2). The pretty thing is, if you have a hash_map that looks like an STL container (like STLport's), it's a simple substitution to replace the map with the hash_map.
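The substitution point can be kept to a single typedef, for example (a sketch; names are illustrative):

```cpp
#include <cstddef>
#include <map>
#include <utility>

// Key edges on a canonical (min, max) vertex pair; swapping std::map for an
// STL-style hash_map is then a one-line change to the second typedef.
typedef std::pair<unsigned, unsigned> EdgeKey;
typedef std::map<EdgeKey, std::size_t> EdgeIndex;     // O(log N) lookups
// typedef hash_map<EdgeKey, std::size_t> EdgeIndex;  // expected O(1) lookups
```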

Hi -- I just saw that this code had been posted (I didn't post it myself). I agree that this isn't very good code. It was written a long time ago to illustrate a simple way to build the edge list, without getting too fancy trying to get running times down to O(n log n), and it was never meant to be used with large models. The code I actually use in my engine nowadays stores the edges in a tree structure. I should probably update the code on the website...