Re: are arrays contiguous in memory?

One thing that is missing from this discussion is the importance of cache effects when iterating. If your accesses are scattered all over memory, as they can be with a linked list whose nodes were allocated at different times (for example, one that has since been sorted), your cache miss rate will be far higher than when you iterate over a contiguous block of memory. The effect is dramatic: an L1 cache hit can be well over 20 times faster than a trip to main RAM.

Another point is that modern compilers optimize loops over contiguous arrays extremely well (unrolling, auto-vectorization), precisely because the compiler knows it is working with contiguous memory.

That said, depending on what you are doing, it is not always worth sacrificing a complexity class on an algorithm just to get these benefits. The two effects I describe above are only constant-factor improvements, while node-based containers such as linked lists can sometimes give you factor-of-N or greater improvements over contiguous arrays.

We kind of got off topic from the original question, but this is a good discussion!

Peter Oliphant wrote:
> Wow, you ask a simple question..... : )
>
> As I understand it, the 'old' STL containers make a distinction in
> this regard. List's are linked lists, and so are not necessarily
> contiguous in memory. Vector's are the STL equivalent of contiguous
> lists. The reason for both is that vector's are better at lookup
> performance (randomly accessible and therefore a fixed length of
> lookup time); but suffer when being deleted from or resized, since
> this is often done by moving the entire end of the list and
> allocating a bigger vector somewhere else and copying over,
> respectively. List's are worse at lookup performance since they
> require serial lookup (linking through the list from the header,
> which takes a variable length of time based on 'distance' from
> header), but benefit when it comes to deletion and resizing since an
> element can be removed without moving anything else, and resizing
> just means finding a large enough memory space anywhere for the added
> element.
And there is also an intermediate "mix" between the two: the deque.

>
> My opinion is this: with a faster computer and more memory it really
> doesn't matter which method is used! : )
This is of course basically wrong: processor speed has never changed algorithmic complexity. When manipulating large numbers of elements
(several tens of thousands), the difference between logarithmic and linear
complexity matters just as much today as it did 10 years ago.
