I'm having an efficiency debate with myself that I thought I might ask the gurus about. I have a program that requires records to be sorted (as so many do) - as few as 10 or as many as several thousand or more. Now, those records will be split into pages of, say, 20 records, once they're sorted, and each page will be displayed on a person's browser. But only one page needs to be created at a time.

Just to simplify the situation, say @records holds all the records. At the moment, I sort the records:

@records = sort @records;

And then take a slice of the 20 records that are to be displayed on the particular page:

@records = @records[$first..$last];

(where $first and $last specify the numbers of the first and last records to be displayed).
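For example, for a given page number and page size, $first and $last might be worked out something like this (the variable names are just illustrative, not from my actual code):

```perl
use strict;
use warnings;

my $per_page = 20;
my $page     = 2;       # zero-based, so this is the third page
my $total    = 1000;    # would be scalar(@records) in practice

my $first = $page * $per_page;
my $last  = $first + $per_page - 1;
$last = $total - 1 if $last > $total - 1;   # clamp for a short final page
```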

Now that's fine, and it works, but I can think of another way to do it that may be better, and this is where it gets a little tricky. From what I've read, Perl's sort function uses QuickSort (although I've also read that newer Perls use MergeSort). Now, what I'm thinking is that when I use the above code, Perl sorts the entire array for me - but it doesn't need to. It only really needs to sort the part between $first and $last. Instead, I was thinking of writing a modified QuickSort that throws away the parts of the array it doesn't need to sort on each iteration. An example mightn't go astray:

Say there were 1000 records, and records 20 to 39 need to be returned. QuickSort picks its first pivot element and partitions around it, and say the pivot ends up at position 300. Now, the order of records 301 to 1000 isn't important, because we know that in the final order, records 20 to 39 lie somewhere in the first 300 elements of the array - so everything above position 300 can just be thrown away, and records 1 to 299 sorted further. Then the pivot might end up at position 89 after the second partition, so records 90 and above can be thrown out too. Basically, whenever QuickSort is about to recurse into a particular sub-array, it first decides whether it really needs to.
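To make the idea concrete, here's a rough sketch of what I mean (this is untested thinking fleshed out, not my actual code - the Lomuto-style partition and the plain string comparison are just for illustration):

```perl
use strict;
use warnings;

# Partial quicksort: only recurse into partitions that overlap the
# slice [$first .. $last] we actually need. Everything outside that
# slice is left only roughly sorted.
sub chop_sort {
    my ($aref, $lo, $hi, $first, $last) = @_;
    return if $lo >= $hi;                     # zero or one element
    return if $hi < $first || $lo > $last;    # can't affect our slice

    # simple Lomuto partition around the last element as pivot
    my $pivot = $aref->[$hi];
    my $i = $lo - 1;
    for my $j ($lo .. $hi - 1) {
        if ($aref->[$j] lt $pivot) {
            $i++;
            @$aref[$i, $j] = @$aref[$j, $i];  # swap via array slice
        }
    }
    $i++;
    @$aref[$i, $hi] = @$aref[$hi, $i];        # pivot to its final spot

    chop_sort($aref, $lo,    $i - 1, $first, $last);
    chop_sort($aref, $i + 1, $hi,    $first, $last);
}

my @records = map { sprintf "rec%04d", int rand 10_000 } 1 .. 1000;
chop_sort(\@records, 0, $#records, 20, 39);
my @page = @records[20 .. 39];   # these 20 are in their final sorted order
```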

The question is this - if that will actually work (I haven't programmed it yet), will it be faster than the standard Perl sort method above? The reason I ask is that I've also read that because the built-in sort is implemented in C, it executes faster than a custom-made sort written in Perl - but how much faster?

I'm thinking that the standard Perl sort would be better for small array sizes, but the modified QuickSort better for large array sizes. Any ideas? Am I making sense?

Fair enough suggestions, but the whole thing has to be dynamic. Each record has several fields, and the records will be sorted in different orders, and on different fields depending on the circumstances. The sort has to happen, there's no way around that. The real question is whether it's worth writing a custom sorting algorithm, or whether it's better to stick with Perl's own "sort" function because it is pre-compiled and apparently very fast.

I had a similar problem some time ago. After trying different approaches, it turned out that using the standard sort together with some efficient way of calculating the sort key only once (like the Schwartzian Transform, a packed sort or whatever) was still faster than writing your own ranking algorithm. Unless you want to build your own XSUB, maybe - but for a few thousand records, it really isn't worth it.
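For the record, the Schwartzian Transform looks something like this (the 'surname' field is made up purely for illustration):

```perl
use strict;
use warnings;

# Hypothetical records with a 'surname' field
my @records = (
    { surname => 'Smith' },
    { surname => 'adams' },
    { surname => 'Brown' },
);

my @sorted =
    map  { $_->[1] }                     # 3. discard the key, keep the record
    sort { $a->[0] cmp $b->[0] }         # 2. compare the precomputed keys
    map  { [ lc $_->{surname}, $_ ] }    # 1. compute each sort key exactly once
    @records;
```

The point is that the key (here, the lowercased surname) is computed once per record rather than once per comparison, which matters when the key is expensive to build.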

Note that there are a couple of Sort modules on CPAN that might be of use too.

The number in brackets is the number of elements in the test array. Each ChopSort and QuickSort pair had the same array to sort (so each algorithm sorted the same three randomly generated arrays for each array length). That was with ChopSort extracting a sorted subarray of 40 elements. Here's some more:

This time the second number in brackets after ChopSort is the length of the sorted array ChopSort is trying to extract. Notice that as the length of the subarray approaches the length of the total array, ChopSort starts slowing down (I'm now trying to work out the percentage length where the inbuilt sort becomes better than ChopSort).

I should probably explain what ChopSort does - I can post the algorithm (in a rough form that can probably be improved) if anyone's interested. Say you have a 10000-element array that you want to sort - for example, a search engine CGI script has returned 10000 results, but you only want to display 20 results per page. The built-in sort function will sort all 10000 results, and you can then take a slice of the 20 elements to display. But really, it does a lot of work that isn't necessary (the order of the last 9980 elements doesn't matter, after all). What ChopSort does is only strictly sort part of the array. So if you want the first 20 elements, it will give you the first 20, in order, but the rest of the array will only be roughly sorted. If you only want elements 21 to 40, you'll get them, but elements 1 to 20 and 41 onwards won't be in perfect order.

I hope that makes sense. And I hope I haven't just reinvented the wheel...

I'd really like to see this algorithm! Also, I've got another suggestion for speed improvement. I assume the algorithm you wrote could easily be ported to C, so you should consider a C rewrite and merge it into your Perl environment with an XS wrapper. I think this could speed up the whole thing again.

Marcus - I've attached the algorithm, let me know if you have any suggestions (and anyone else too - if anyone needs some explanation of how it works, fire away with the questions).

The actual implementation that I've put into a program I'm working on is slightly different, being able to sort on multiple fields, each having its own order and whatnot, but because it tends to complicate things, I've just attached the core algorithm behind it all.

The basic algorithm should port to C fairly easily. The more advanced algorithm that I mentioned mightn't port so well, because it concatenates strings and then uses them as symbolic references to subroutines that make comparisons based on user input (for example, the user wants an alphabetical sort in descending order on a certain field, so the algorithm tacks a few strings together based on those options to generate the name of the subroutine it will use for making comparisons). I don't think C can do that, but I'm sure there'd be another way to do it.
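To give an idea of the kind of dispatch I mean, here's a sketch - though written with a hash of code references rather than symbolic references, since that plays nicer with 'use strict' (all the field and option names are made up, and this isn't my attached code):

```perl
use strict;
use warnings;

my @unsorted = (
    { name => 'carol', score => 7 },
    { name => 'alice', score => 9 },
    { name => 'bob',   score => 3 },
);

# One comparator per field/order combination
my %compare = (
    name_asc   => sub { $_[0]{name}  cmp $_[1]{name}  },
    name_desc  => sub { $_[1]{name}  cmp $_[0]{name}  },
    score_asc  => sub { $_[0]{score} <=> $_[1]{score} },
    score_desc => sub { $_[1]{score} <=> $_[0]{score} },
);

my ($field, $order) = ('score', 'desc');    # e.g. taken from user input
my $cmp = $compare{"${field}_${order}"}
    or die "no comparator for ${field}_${order}";

my @sorted = sort { $cmp->($a, $b) } @unsorted;
```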

And I've never used an XS wrapper, does it increase performance considerably?

I haven't looked at the algorithm yet, but I'll do so ASAP. From what you said about the symbolic reference to a subroutine that does comparisons, that sounds to me like a perfect application for function pointers in C. For each sorting option you can define your own function, just as you would define your subroutine in Perl. Each of these 'compare' functions takes two pointers to the data to compare, and returns -1, 0 or +1 to indicate that arg 1 is less than, equal to or greater than arg 2. You just pass a pointer to this function as an argument to the chopsort function, along with pointers to the arrays that hold the data, of course. I'll try to work out whether there's an easy way to port this to C once I've had a look at your code. I really think speed could be improved that way. Writing an XS wrapper is no big deal - a nice introduction is given in 'perldoc perlxstut'.
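In Perl terms, that's just passing a code reference around as an argument - something like this sketch ('sort_with' is only an illustration, not part of the posted algorithm):

```perl
use strict;
use warnings;

# A comparator passed in as a code reference -- the Perl counterpart
# of handing a C function pointer to the sorting routine.
sub sort_with {
    my ($cmp, @data) = @_;
    return sort { $cmp->($a, $b) } @data;
}

my $by_length = sub { length($_[0]) <=> length($_[1]) };
my @out = sort_with($by_length, qw(banana fig pear));
# @out is now ('fig', 'pear', 'banana')
```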

As you might have worked out, I'm no expert on C. I know enough to write simple programs, but not much more. I didn't realise that you could have function pointers, but from your description, yes, they'd be perfect for what I was doing.