Multiple threads can grow the page table and push_back new elements concurrently. A ValueBuffer provides accelerated and thread-safe push_back at the cost of potentially reordering elements (when multiple instances are used).

This data structure employs contiguous pages of elements (like a std::deque), which avoids moving data when the capacity is outgrown and new pages are allocated. The size of the pages can be controlled with the Log2PageSize template parameter: each page holds 2^Log2PageSize elements, and the default corresponds to 1024 elements of type ValueT. The TableT template parameter defines the data structure for the page table. The default, std::vector, offers fast random access in exchange for slower push_back, whereas std::deque offers faster push_back but slower random access.

There are three fundamentally different ways to insert elements into this container, each with different advantages and disadvantages.

PagedArray::push_back has the advantage that it is thread-safe and preserves the ordering of the inserted elements. In fact, it returns the linear offset of the added element, which can then be used for fast O(1) random access. The disadvantage is that it is the slowest of the three ways of inserting elements.

The second technique, inserting through a PagedArray::ValueBuffer, generally outperforms PagedArray::push_back, std::vector::push_back, std::deque::push_back and even tbb::concurrent_vector::push_back. Additionally, it is thread-safe as long as each thread has its own instance of a PagedArray::ValueBuffer. The only disadvantage is that the ordering of the elements is undefined if multiple instances of a PagedArray::ValueBuffer are employed. This is typically the case in the context of multi-threading, where the ordering of inserts is undefined anyway. Note that a local scope can be used to guarantee that the ValueBuffer has inserted all its elements by the time the scope ends. Alternatively, the ValueBuffer can be explicitly flushed by calling ValueBuffer::flush.

The third way to insert elements is to resize the container and use random access, e.g.

PagedArray<int> array;
array.resize(100000);
for (int i=0; i<100000; ++i) array[i] = i;

or, in terms of the random-access iterator:

PagedArray<int> array;
array.resize(100000);
for (auto i=array.begin(); i!=array.end(); ++i) *i = i.pos();

While this approach is both fast and thread-safe, it suffers from the major disadvantage that the problem size, i.e. the number of elements, needs to be known in advance. If that is the case, you might as well consider using std::vector or a raw C-style array. In other words, PagedArray is most useful in the context of applications that involve multi-threading of dynamically growing linear arrays that require fast random access.

Will grow or shrink the page table to contain the specified number of elements. It affects size(), iteration will go over all those elements, push_back will insert after them, and operator[] can be used to directly access them.

Note

No reserve method is implemented due to efficiency concerns (especially for the ValueBuffer) from having to deal with empty pages.

Resize this array to the specified size and initialize all values to v.

Parameters

size

number of elements that this PagedArray will contain.

v

value assigned to all size elements.

Will grow or shrink the page table to contain the specified number of elements. It affects size(), iteration will go over all those elements, push_back will insert after them, and operator[] can be used to directly access them.

Note

No reserve method is implemented due to efficiency concerns (especially for the ValueBuffer) from having to deal with empty pages.

Warning

Not thread-safe!

void shrink_to_fit()

Reduce the page table to fit the current size.

Warning

Not thread-safe!

size_t size() const [inline]

Return the number of elements in this array.

void sort() [inline]

Parallel sort of all the elements in ascending order.

void sort(Functor func) [inline]

Parallel sort of all the elements based on a custom functor with the API: