Mesh simplification & decimation algorithms

Hello, I recently started tackling the various mesh decimation and simplification algorithms, mostly from Hoppe and Melax. The algorithms in question are quadric error metrics (QEM) mesh simplification and progressive mesh simplification. I found a few good GitHub starting points (VTK, Hoppe's and Melax's repositories, etc.) to test the various implementations, but I ran into some problems:

1. I tried to simplify a few of my own meshes and a few of the meshes provided inside the code bases, and I noticed that not all meshes survive the same amount of simplification. For example, a 400-vertex robot mesh can be simplified down to 10-20 vertices (about 3-7% of the source vertices) without any problem: no missing faces, no huge topology distortion. Yet an eagle mesh with 3000 vertices can only be simplified to 1900-2000 vertices (about 60-70% of the source vertices) before the mesh starts to lose faces, which leads to a quite noticeable number of holes. And I noticed that with quite a few models, both mine and the provided ones. I would like to know what actually stops or prohibits some meshes from being simplified as much as others. I am probably asking a rather obvious question.
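For anyone following along, here is a minimal sketch of the quadric error metric the Garland-Heckbert / QEM simplifiers are built on. The function names are mine, not from VTK or any of the repositories mentioned; a real simplifier would also maintain an edge heap and extra boundary-penalty quadrics.

```python
# Minimal sketch of the quadric error metric (QEM). A vertex's quadric is
# the sum of the plane quadrics K_p = p p^T of its incident faces; the cost
# of an edge collapse is v^T (Q1 + Q2) v at the merged position.

def plane(v0, v1, v2):
    """Plane [a, b, c, d] (ax + by + cz + d = 0, unit normal) of a triangle."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    nx, ny, nz = nx / length, ny / length, nz / length
    return (nx, ny, nz, -(nx * v0[0] + ny * v0[1] + nz * v0[2]))

def quadric(p):
    """Fundamental quadric K_p = p p^T, as a 4x4 row-major nested list."""
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add(q1, q2):
    """Quadrics are additive, so per-face quadrics can be accumulated."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def error(q, v):
    """Quadric error v^T Q v for homogeneous v = (x, y, z, 1)."""
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))
```

This also hints at an answer to the question: a vertex whose incident faces are nearly coplanar has near-zero collapse cost, while vertices on creases, boundaries, or fine detail accumulate quadrics from very different planes and become expensive. A mostly flat, over-tessellated robot collapses almost freely; a mesh whose triangles all carry real curvature (or open boundaries) runs out of cheap collapses much sooner.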

2. Even if the simplification were perfect, I still have difficulty figuring out how I would preserve the vertex appearance attributes (UV coordinates, tangents, normals, etc.). I read a few papers, one in particular from Hoppe: "New Quadric Metric for Simplifying Meshes with Appearance Attributes". But I did not find any real-world examples, or at least code snippets, to give me a better grasp of the technique.
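As I understand that paper, the core idea can be sketched quite compactly: each vertex becomes a point in R^n (e.g. n = 5 for position + UV, more with normals), and the quadric measures squared distance to the affine plane of each triangle in that extended space. This is my own illustrative code, not the paper's; the paper stores the quadric as (A, b, c) with A = I - e1 e1^T - e2 e2^T so quadrics from different faces can be summed, whereas the closure below just shows the geometry.

```python
# Hedged sketch of the generalized quadric from Hoppe's "New Quadric Metric
# for Simplifying Meshes with Appearance Attributes". Extended vertices are
# plain lists in R^n: position first, then scaled attributes (uv, normal...).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def scale(a, s):
    return [x * s for x in a]

def normalize(a):
    return scale(a, 1.0 / dot(a, a) ** 0.5)

def triangle_quadric(p, q, r):
    """Squared distance from an extended vertex v to the affine plane
    through extended vertices p, q, r (Gram-Schmidt orthonormal basis)."""
    e1 = normalize(sub(q, p))
    d2 = sub(r, p)
    e2 = normalize(sub(d2, scale(e1, dot(d2, e1))))
    def err(v):
        d = sub(v, p)
        return dot(d, d) - dot(d, e1) ** 2 - dot(d, e2) ** 2
    return err
```

One practical caveat from the paper: attributes live in different units than positions, so they need to be weighted relative to the geometric coordinates before being mixed into one error, otherwise UV error can dominate (or be ignored entirely).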

PS: Just as a side note, VTK is a beast of a library. I had never used it before, and I find it a bit heavy, maybe even bloated, for most people. But I really liked it!

I would like to bump this thread; I have been busy lately but still have not found a solution.

Furthermore, I was wondering what some state-of-the-art rendering techniques are for reducing polygon count (i.e. LOD) when walking over a very large terrain mesh that is NOT built up from a height field or map, but is rather an irregular mesh modeled by hand.

Yeah, I have tried MeshLab, and it seems to manage the simplification very well. Definitely a more precise and accurate simplification, most of the time.

What has been bugging me lately is more about techniques for dynamically rendering different LOD levels when walking over a large terrain mesh (an irregular grid, NOT generated from a height map, noise, 2D data, etc.). I will give you a CS:GO analogy here. I am not sure whether they do exactly this, but I presume that their maps are mostly hand-crafted: all trees, boxes, walls and buildings that are not interactive or destructible are probably embedded and modeled along with the rest of the map. This can potentially allow for fewer draw calls and a much richer environment.
Or is it just a flat plane, with everything else rendered as a separate entity (instanced where possible)?
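To make the question concrete, the simplest scheme I have seen described for hand-modeled terrain is: cut the mesh into chunks offline, precompute a few simplified versions of each chunk (e.g. with QEM), and pick a level per chunk from camera distance every frame. Hoppe's progressive meshes are the continuous version of the same idea. This is a sketch under those assumptions; the thresholds and chunk structure are made up for illustration.

```python
# Hypothetical discrete chunked-LOD selection: level 0 is the full-detail
# chunk, higher levels are progressively simplified copies built offline.

import math

def pick_lod(cam, chunk_center, thresholds=(50.0, 150.0, 400.0)):
    """Return 0 (full detail) .. len(thresholds) (coarsest) by distance."""
    d = math.dist(cam, chunk_center)
    for level, limit in enumerate(thresholds):
        if d < limit:
            return level
    return len(thresholds)
```

The usual refinements on top of this are hysteresis on the thresholds (so a chunk doesn't flicker between levels at a boundary) and geomorphing, i.e. blending vertex positions between two levels over a few frames to hide popping.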


This will depend a lot on how their engine is designed: whether it is built around instancing, around batching, or both. Hand-crafting everything will certainly reduce draw calls, but it will make the level require far more memory, which in turn limits how rich or how big the environment can be. Batching also often requires updating the geometry indices every frame, which might become a bottleneck. Instancing, on the other hand, allows far more detail in a scene, but it forces heavier calculations (e.g. for culling and for rendering).
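A back-of-the-envelope comparison makes the memory side of this tradeoff visible. The byte sizes below are illustrative assumptions (a common 32-byte interleaved vertex, a 4x4 float matrix per instance), not measurements from any engine.

```python
# Toy comparison: baking every prop into the level geometry vs. instancing
# one copy of the mesh with a per-instance transform buffer.

VERTEX_BYTES = 32       # assumed: position + normal + uv, interleaved
TRANSFORM_BYTES = 64    # assumed: one 4x4 float matrix per instance

def baked_bytes(verts_per_prop, copies):
    # every copy stores full, pre-transformed geometry
    return verts_per_prop * VERTEX_BYTES * copies

def instanced_bytes(verts_per_prop, copies):
    # one mesh plus a per-instance transform buffer
    return verts_per_prop * VERTEX_BYTES + TRANSFORM_BYTES * copies
```

With these numbers, a 10,000-vertex tree placed 500 times costs 160 MB baked versus roughly 0.35 MB instanced, which is why instancing is usually worth the extra per-frame culling work for repeated props.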

As for your original issue, you might find some people who know the subject on this site.