I was not impressed by the explanation of the problem, the solution, or the conclusion. I am even unclear whether the real problem is the algorithm used for creating and then parsing the XML, or the actual creation of the XML file. Having worked with XML, I can say this approach should work fine for exchanging data and data definition structures. However, partway through the article, the developer hints that "speed" is the problem. The totality of this article reads like a sidebar note to another developer so that they can commiserate over the reality that there is no perfect development tool.

I would have found this article of some value had there been a real description of the problem (number of records, goals of the project, number of directories, number of databases, reason for the project in the first place, etc.). It would have been even nicer had there been at least a brief discussion of the algorithmic approach. It mentions recursion, and having written recursive functions in AI design for years, I have seen even experienced programmers create some of the most inefficient recursion code possible. And finally, the article provides no solid solution approach showing 1) what went wrong, 2) how the "wrong" was identified, and 3) how the solution was so much better. Did the author just use the default XML output from SQL? If so, why not complain that VS2005 does not do a better job of writing the code for you? Maybe because more experienced developers understand that 99% of the time, it's a person problem and not a tech-tool problem.
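Since the article shows none of its recursion, here is a generic, textbook illustration (classic Fibonacci, nothing to do with the article's actual directory code) of the kind of inefficient recursion I mean: the naive version recomputes the same subproblems exponentially many times, while caching the results makes the identical recursion linear.

```python
from functools import lru_cache

calls = 0

def fib_counted(n):
    """Naive recursion: recomputes the same subproblems over and over."""
    global calls
    calls += 1
    return n if n < 2 else fib_counted(n - 1) + fib_counted(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """The same recursion, but each subproblem is computed only once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_counted(20)
print(calls)  # 21,891 calls for n = 20; the memoized version needs only 21
```

Nothing about the recursion itself changed between the two versions; the only difference is whether repeated subproblems are recomputed or cached.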

Other than these basic criticisms, I would say that the SQL Server Central editors were hard pressed to find something of value to link to, other than a brief commentary from a programmer who had a bad week and then gave us a fragmented review of his experience.

I cannot see how even 1,000 sibling directories should create a bottleneck. To me, it looks like the author made another typical design mistake, namely transmitting all the data at once, i.e. all directories and subdirectories with files. Only the first level should be queried and displayed initially; when the user expands a node, the application should query again for the next level, and so on.
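The level-at-a-time approach I'm describing can be sketched roughly like this (SQLite and the `directories` table schema here are stand-ins for whatever the article actually used):

```python
import sqlite3

# Hypothetical schema: directories(id, parent_id, name); parent_id is NULL at the root.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE directories (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO directories VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "etc"), (3, 1, "var"), (4, 3, "log")],
)

def children(parent_id):
    """Fetch only the immediate children of one node -- one level per query."""
    cur = conn.execute(
        "SELECT id, name FROM directories WHERE parent_id IS ?", (parent_id,)
    )
    return cur.fetchall()

# Initial display: query only the top level...
top = children(None)
# ...and query the next level only when the user expands a node.
expanded = children(top[0][0])
```

The point is that the query cost is proportional to the one level the user is actually looking at, not to the size of the whole tree.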

Anyway, what's the use of storing the contents of an ever-changing file system in a database?

"Because of the small size of files, the data transfer speed has also increased considerably, especially for web applications"

blew it for me. There's no way anyone who knows much on the subject can consider XML files "small" compared to most of the alternatives out there. XML is somewhat self-documenting and usually human readable, but size and efficiency are not its strong points.
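To make the size point concrete, here is a quick sketch serializing the same (made-up) records as XML, JSON, and CSV; the exact numbers depend on the data, but the ordering is typical:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# The same three records serialized three ways (sample data is made up).
rows = [{"id": 1, "name": "etc"}, {"id": 2, "name": "var"}, {"id": 3, "name": "log"}]

# XML: every value is wrapped in an opening tag and a closing tag.
root = ET.Element("dirs")
for r in rows:
    e = ET.SubElement(root, "dir")
    for k, v in r.items():
        ET.SubElement(e, k).text = str(v)
xml_size = len(ET.tostring(root))

# JSON repeats the keys per record but carries no closing tags.
json_size = len(json.dumps(rows))

# CSV states the keys exactly once, in the header row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(rows)
csv_size = len(buf.getvalue())

print(xml_size, json_size, csv_size)  # XML is the largest of the three
```

XML buys self-description and schema tooling at the cost of per-value tag overhead, which is exactly why "small size" is the wrong thing to praise it for.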