I have a large library (39,000 entries across 12,000 authors) that is pretty much unusable due to how long it takes to do anything. Are there any hooks to run cProfile against it and see where it's spending its time?

One thought right off: with over 12,000 author directories, even doing an fstat can take a very long time. When I used to own an ISP we wrote some custom Linux kernel hacks just to avoid the inode lookup on the news server. Is there any way to have a module specify how to convert a book entry into a path? I.e., rather than library root/Aadam Johnson/His First Book (ID), it may help on large libraries to use library root/Aa/Aadam Johnson/His First Book (ID), so that the initial lookups are against a directory with at most ~676 entries instead of the 12,000 I have now.
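To make the idea concrete, here is a minimal sketch of the two-level layout I mean. This is not how calibre actually builds paths; the function name and fallback prefix are my own invention, purely for illustration:

```python
import re

def sharded_path(library_root, author, title, book_id):
    """Hypothetical two-level layout: insert a prefix directory
    taken from the first two letters of the author's name, so the
    top level holds at most ~676 (26*26) entries instead of one
    directory per author."""
    # Strip non-letters so punctuation/diacritics don't create odd shards.
    letters = re.sub(r"[^A-Za-z]", "", author)
    prefix = letters[:2].capitalize() or "Misc"  # "Misc" fallback is arbitrary
    return f"{library_root}/{prefix}/{author}/{title} ({book_id})"

print(sharded_path("/books", "Aadam Johnson", "His First Book", 42))
# -> /books/Aa/Aadam Johnson/His First Book (42)
```

The win, if any, would come from the kernel's dentry/inode lookups scanning a two-letter bucket of a few dozen authors instead of a single 12,000-entry directory.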

With the Tag browser hidden and 39,270 books, going from a filter of 2,720 books to a filter of 2,792 books took 70 seconds. The filter was a yes/no field plus a user category made up of authors: (@PaperAuthors:True OR #gr_read:True). If I apply the same search as the filter itself (i.e., every book in the filter should match), it took 50 seconds once the filter was applied.

Any way to profile what those 70 seconds were spent doing? I thought maybe creating/destroying (or adding/removing) the books in the grid view was what took so long, so I tested the filtered view instead and had the same issue.

The improvements will be in the next release, or you can run from source to get them now.

That's an intriguing possibility; I do have an ISBN column displayed.

Which I guess just goes back to my original question: is there an easy way to run cProfile on the main code to find this type of thing? I saw that some of the support tools have cProfile already set up, but I didn't see it in the main gui2.
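In the absence of a built-in hook, one approach is to wrap the suspect call yourself with the standard-library profiler. This is a generic sketch, not calibre code: the idea is to temporarily wrap whatever slot handles the search/filter (you'd have to find it in gui2 yourself) with a helper like this and read the top of the cumulative-time report:

```python
import cProfile
import io
import pstats

def profiled(fn, *args, **kwargs):
    """Run fn under cProfile and print the 20 most expensive
    entries sorted by cumulative time, then return fn's result."""
    prof = cProfile.Profile()
    result = prof.runcall(fn, *args, **kwargs)
    buf = io.StringIO()
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(20)
    print(buf.getvalue())
    return result

# Stand-in workload just to show the report format:
profiled(sorted, range(100_000), key=str)
```

Alternatively, if you can run from source, `python -m cProfile -o calibre.prof <entry script>` dumps a profile you can inspect afterwards with `pstats` or a viewer like snakeviz.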