> > According to my calculations, even if we assume that there are "weirdos"
> > with 40GB hard drives, they will still almost never have even 10000 songs.
> > So a static memory area to support that many songs would need merely
> > 40000 bytes. I think that since we would still need to design the memory
> > sizes for the worst case, we can use the worst case always. imho.
>
> Don't underestimate what users will do. :)

Well, actually I don't. Or at least I try not to. :-)

> There may be a case where someone has thousands of tiny files, or something
> else that would result in a large number of individual files. Know well that if
> there are limits somewhere in software, users will find them and attempt to
> violate them. The more robust and capable you make code, the less you have
> to kludge it later when someone needs to go beyond what you've written.

The thing here is to consider the case where the user actually would use a
lot more than 40KB for the playlist. We have a limited memory amount for
malloc(). We need to keep it big enough for the "worst case". I can't see how
making this area dynamically allocated helps us in this respect.

If the user makes a list with 40000 files, then yes, we need 160000 bytes
(4 bytes per entry) to deal with the list in memory.

In my mind, this is a perfect case for the rockbox configure-your-build tool.
Set the maximum number of files in a playlist. Default is N.

We can't possibly come up with a single system that works equally well for
1000 files and for 100000 files. There will be too much unused memory in the
first case, and the memory won't be enough in the latter, be it static or
dynamic, be it one single array or a several-linked-arrays system.

> And I know that there will be people that get 40G or larger drives. I
> bought mine for many reasons, and one was its potential extended lifetime
> through the use of larger drives.

Björn has a 40GB'er and he has some 5500 mp3 files on it, which I used as
input for my calculations. Of course, soon people will have 80GB disks and if
they have very small files on it we will soon find people with 20000-30000
mp3s.

> Considering the limits on memory and cpu the device imposes upon us, the
> benefits of an O(n/2) alg to shuffle seem to outweigh the ideal of a
> "perfect" random. The ADT I have described allows for fast acceptable
> randomization with several other good side effects, like its scalability.

Using an array, I can come up with another O(n/2) algorithm that I in fact
like more:

for (i = 0; i < array_size; i += 2)
    swap_index(i, random_index);

It would be more likely than your solution to swap entries across the entire
range. On the other hand, it risks leaving every second song at the index
where it was first loaded.

However, using my imaginary 10000 files, I don't think an O(n) algorithm is
too slow. It just boils down to the random function which doesn't have to be
very advanced and shouldn't take very many cycles. Given 1000 cycles per
entry (which I believe is a rather high estimate), we get 10 million cycles
which with our 12MHz CPU would equal less than a second.

Of course, given 40000 files that would mean more than 3 full seconds.

> Of course, what I propose is a somewhat ideal case. The code is not
> terribly difficult, though it is slightly more complex than a simple array.
> My assertion (from years of experience) is that it's better to take the
> effort to implement a more robust alg now than it is to rewrite it later.

Well, given that we have a fairly good/simple API, it won't be hard to change
the implementation. Heck, we could even implement both versions and run tests
with both to see which one people prefer.