> I suggest for a fix that your code not preallocate arrays; it's redundant
> with what the runtime is already doing.
Is that for Linux, too? Because my code is much faster in doing that... ok, perhaps he's just much more aggressive.
Ciao
uwe

In article <op.sq2quseu6yjbe6@sandmann.maerchenwald.net>, Uwe Salomon says...
>> eek. I know that's a common C trick to extend an array at the end of a
>> struct but does D recommend that practice? I assume it could wreak havoc
>> on garbage collectors that are "type aware" since the pointers stored in
>> the extended section have unknown types to the GC. For example if the GC
>> were changed to allocate pointer-free data from a special non-scanned
>> heap then the new char[..] would allocate from that heap and then the
>> cast(Node*) would wind up meaning that pointers are stored hidden inside
>> what the GC thinks is a char[].
> What do you recommend, then? I thought D should be a programming language
> that lets you access the lower levels if you need to. Now this is all not
> true anymore, because we have a super-duper overclever garbage collector?
> Perhaps the D garbage collector needs to be robust, and simply cannot be
> too funky.
> I mean, what would the benefit of all that cleanness in the programs be?
> Look at some code from an average open source programmer... your hair will
> stand on end. That is only logical, because high-quality code needs a lot
> of time. You won't find a lot of programs where the programmers took this
> time. So why optimize the GC to the bare bone, if its total execution time
> is under 5% in the average program? It is much better to use some advanced
> memory techniques (as outlined in the D docs somewhere) when it really
> matters, I think.
> Ciao
> uwe
Maybe instead of /byte[n]/ or /char[n]/, use /(void*)[(n+3)/4]/. This way the GC will think of the data as pointers, but will know that it cannot safely modify void* objects because the type info is unknown.
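A minimal sketch of that suggestion, assuming a skip-list node with a trailing next-pointer array (the Node/allocNode names here are mine, for illustration, not from the original code):

```d
struct Node
{
    int key;
    int level;
    // the level next-pointers live immediately after the struct
}

Node* allocNode(int key, int level)
{
    // total bytes: struct plus trailing pointer array
    size_t bytes = Node.sizeof + level * (Node*).sizeof;
    // round up to whole pointers -- the "(n + 3) / 4" above, written
    // for any pointer size
    auto raw = new void*[(bytes + (void*).sizeof - 1) / (void*).sizeof];
    auto n = cast(Node*) raw.ptr;
    n.key = key;
    n.level = level;
    return n;
}
```

Since the backing memory is typed void*[], a type-aware collector would still scan the whole block for pointers, which is the point of the trick.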
I would think a skip list could be implemented as a template taking the node level as a parameter.
There is also a data structure that has the form of a binary tree but works just like a skip list; i.e., it makes the same decisions based on the same comparisons in the same order.
The key observation was that in any particular skip list, many of the pointers are never really used. For example, if a level 4 node is followed by a level 5 node, the pointers on the level 4 node are never followed. If you can legally move to the level 5 node, you would have done so when you were traversing at level 5. Once you descend to traversal at level 4, all higher level, subsequent nodes, have been eliminated.
Thus, for any node X followed by a higher level node Y, X is essentially a leaf of Y. This is not exactly how it works. Ultimately, every decision is a binary decision, so you can build a binary tree that represents that decision space. Each node in the binary tree contains a random integer that corresponds (roughly) to the "skip list node level".
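My reading of the shape being described, as a sketch (the names are illustrative; keeping nodes with higher random levels nearer the root is what makes searches follow the same comparisons as a skip list traversal):

```d
import std.random;

// A binary search tree node carrying a random integer that plays the
// role of the skip-list node level.
struct TreeNode
{
    int key;
    int level;             // random "skip list node level"
    TreeNode* left, right;
}

// geometric level distribution, like the coin flips used to pick a
// skip-list node's height
int randomLevel()
{
    int lvl = 0;
    while (uniform(0, 2) == 1)
        ++lvl;
    return lvl;
}
```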
I worked out the details of how to do this once and wrote it up in C++, but I discovered that someone else had already invented what appeared to be the same data structure. I can never find the link when I want it though... regardless, I think my implementation was better than his -- his required node pivoting.
Kevin

> Each node in the binary tree contains a random integer that corresponds
> (roughly) to the "skip list node level".
Hmm, I read something similar back then, about randomized binary trees... But that is some other issue, I think. Your approach sounds interesting. Does it only save some space, or does it also speed up the map?
Ciao
uwe

"Uwe Salomon" <post@uwesalomon.de> wrote in message news:op.sq3c0da86yjbe6@sandmann.maerchenwald.net...
>> I suggest for a fix that your code not preallocate arrays; it's redundant
>> with what the runtime is already doing.
> Is that for Linux, too? Because my code is much faster in doing that...
> ok, perhaps he's just much more aggressive.
The same code is used for both.

"Walter" <newshound@digitalmars.com> wrote in message news:d6lh2e$1keg$1@digitaldaemon.com...
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d6i1ps$1osr$1@digitaldaemon.com...
>> I was doing some experiments comparing the MinTL container performance
>> against the STL performance and MinTL was getting stomped because of the
>> GC. It seemed to be allocating twice as much memory as the STL runs. The
>> allocations I was making were for arrays with lengths a power of two. I
>> looked at gc.d and noticed all the allocations were being made with +1
>> added to the lengths, which means my nice power of two was being
>> incremented to the next larger bucket and hence using twice the space of
>> the C++ code. Can we get rid of those +1? I recompiled phobos without the
>> +1 and it all seemed to work ok. I think it will be common to use
>> power-of-two sizes for objects assuming those will fit nicely together
>> when in fact those fit the worst together.
> The +1 was to support the following usage:
>     T[] foo;
>     T[] bar;
>     ...
>     foo = foo[length .. length];
>     ...
>     foo ~= bar;
> Without the +1, if length is a power of 2, and the power of 2 is the
> bucket size, then foo[length..length] happens to cross the bucket boundary
> and point at the start of the *next* bucket. The array append code just
> sees that foo[] is pointing to the start of a bucket, and so happily
> inserts bar, overwriting whatever was already using that bucket. This
> would cause erratic, random crashes.
I suggest foo ~= bar for arrays be roughly equivalent to
    size_t oldlen = foo.length;
    foo.length = foo.length + bar.length;
    foo[oldlen .. oldlen+bar.length] = bar[];
That way when foo.length is 0 a new pointer is allocated (as you probably
already know _d_arraysetlength checks for 0 length). When foo.length is
non-zero and the array can be resized in place then it behaves the same as
what it does today. So basically to make the smallest change to gc.d I would
change _d_arrayappend to check if length == 0 just like _d_arraysetlength
and malloc a new block if that is the case.
In fact it would be very odd if ~= had different semantics than changing the
length and copying. I can imagine subtle bugs creeping into code if they are
different. Of course ~= can be more efficient than the two step code but it
shouldn't end up with a different result.
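Spelled out as a helper, the proposed equivalence would look something like this (a sketch; appendEquiv is a name I'm making up for illustration):

```d
// foo ~= bar expressed as the two-step length-then-copy sequence; a
// conforming ~= should produce the same array contents as this.
T[] appendEquiv(T)(T[] foo, T[] bar)
{
    size_t oldlen = foo.length;
    foo.length = foo.length + bar.length; // may allocate, e.g. when oldlen == 0
    foo[oldlen .. oldlen + bar.length] = bar[];
    return foo;
}
```

The interesting case is oldlen == 0: setting the length allocates a fresh block, so the append can never scribble over a neighboring bucket.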
> I suggest for a fix that your code not preallocate arrays; it's redundant with what the runtime is already doing.
For my case I wasn't preallocating - I was allocating blocks for a deque. Each block has a fixed size that will never be sliced or resized. I would like to know how to efficiently allocate such a block of memory and initialize it to hold a parametrizing type T. I'm starting to suspect a general-purpose deque in D will be very hard to get competitive with C++.

"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d6lrve$1rpt$1@digitaldaemon.com...
> I suggest foo ~= bar for arrays be roughly equivalent to
>     size_t oldlen = foo.length;
>     foo.length = foo.length + bar.length;
>     foo[oldlen .. oldlen+bar.length] = bar[];
> That way when foo.length is 0 a new pointer is allocated (as you probably
> already know _d_arraysetlength checks for 0 length). When foo.length is
> non-zero and the array can be resized in place then it behaves the same as
> what it does today. So basically to make the smallest change to gc.d I
> would change _d_arrayappend to check if length == 0 just like
> _d_arraysetlength and malloc a new block if that is the case.
> In fact it would be very odd if ~= had different semantics than changing
> the length and copying. I can imagine subtle bugs creeping into code if
> they are different. Of course ~= can be more efficient than the two step
> code but it shouldn't end up with a different result.
The idea was to support 'reserving' a length for an array by setting the .length property to the reserved size, then setting .length to 0. This means that while filling the array, reallocation wouldn't be happening and it'd be more efficient.
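The idiom being described, written out (a sketch; note that in present-day D the runtime additionally requires assumeSafeAppend before appends will reuse a block shrunk this way):

```d
// 'reserve' by growing .length once, then shrink to 0 and fill by
// appending; the +1 exists to make the later ~= safe in this pattern
int[] reserveAndFill(size_t cap)
{
    int[] buf;
    buf.length = cap;   // one allocation up front
    buf.length = 0;     // logically empty; the memory is kept
    foreach (i; 0 .. cap)
        buf ~= cast(int) i;   // intended to fill in place
    return buf;
}
```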
> For my case I wasn't preallocating - I was allocating blocks for a deque. Each block has a fixed size that will never be sliced or resized.
Why were the sizes picked to be powers of 2?
> I would like to know how to efficiently allocate such a block of memory
> and initialize it to hold a parametrizing type T. I'm starting to suspect
> a general-purpose deque in D will be very hard to get competitive with
> C++.

In article <d6lh2e$1keg$1@digitaldaemon.com>, Walter says...
> The +1 was to support the following usage:
>     T[] foo;
>     T[] bar;
>     ...
>     foo = foo[length .. length];
>     ...
>     foo ~= bar;
Okay, I must be missing something, because the above code looks to me like it should result in undefined behavior. Isn't the result of foo[length..length] a single nonexistent array element just beyond the end of the array? Assuming it is, why should the GC be modified to support such usage?
>This would cause erratic, random crashes.
Perhaps this should just be done in debug builds, with the appropriate bounds checking in place. Then the bugs could be found and corrected without any impact on release performance.
Sean

In article <d6m2k5$1vv6$1@digitaldaemon.com>, Sean Kelly says...
> Isn't the result of foo[length..length] a single nonexistent array element just beyond the end of the array?
I meant a zero length array at an invalid location.
Sean

"Walter" <newshound@digitalmars.com> wrote in message news:d6m0i1$1uoh$1@digitaldaemon.com...
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d6lrve$1rpt$1@digitaldaemon.com...
>> I suggest foo ~= bar for arrays be roughly equivalent to
>>     size_t oldlen = foo.length;
>>     foo.length = foo.length + bar.length;
>>     foo[oldlen .. oldlen+bar.length] = bar[];
>> That way when foo.length is 0 a new pointer is allocated (as you probably
>> already know _d_arraysetlength checks for 0 length). When foo.length is
>> non-zero and the array can be resized in place then it behaves the same
>> as what it does today. So basically to make the smallest change to gc.d I
>> would change _d_arrayappend to check if length == 0 just like
>> _d_arraysetlength and malloc a new block if that is the case.
>> In fact it would be very odd if ~= had different semantics than changing
>> the length and copying. I can imagine subtle bugs creeping into code if
>> they are different. Of course ~= can be more efficient than the two step
>> code but it shouldn't end up with a different result.
> The idea was to support 'reserving' a length for an array by setting the
> .length property to the reserved size, then setting .length to 0. This
> means that while filling the array, reallocation wouldn't be happening and
> it'd be more efficient.
Then why, when foo.length==0, does "foo.length = 1" reallocate but "foo ~= x" doesn't? Long ago I argued that setting length shouldn't reallocate when resizing from 0. Now that I know the append behavior and the impact it has on the GC, I'd rather have resizing from 0 reallocate, both for consistency with setlength and to save the GC performance. Either that, or enforce the rule that the first index of a slice must be a valid index within array bounds. After all, the section on array slicing says "slicing an array means to specify a subarray of it", which to me implies the pointer refers to a valid location. I never knew one could slice off the end, but I suppose that could come up if you are searching for an item and it isn't there, so you wind up slicing off the end.
>> For my case I wasn't preallocating - I was allocating blocks for a deque.
>> Each block has a fixed size that will never be sliced or resized.
> Why were the sizes picked to be powers of 2?
I'm open to suggestions. I'm trying to write a deque that competes with C++ implementations. Efficient memory usage is an important part of a deque and powers of 2 (or I suppose power-of-2*element.sizeof) are traditionally the most efficient packing I believe.
>> I would like to know how to efficiently allocate such a block of memory
>> and initialize it to hold a parametrizing type T. I'm starting to suspect
>> a general-purpose deque in D will be very hard to get competitive with
>> C++.
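For what it's worth, the fixed-size block being discussed might be sketched like this, sizing each block to a power-of-2 element count so index arithmetic reduces to masks and shifts (Block and N are my names, not MinTL's):

```d
// one fixed-size deque block: N elements, never sliced or resized,
// chained to its neighbors
struct Block(T, size_t N = 256)
{
    static assert((N & (N - 1)) == 0, "N must be a power of 2");
    T[N] data;
    Block* next;
    Block* prev;
}
```

Note that the block's total byte size is N * T.sizeof plus the two link pointers, so a power-of-2 element count does not by itself land the allocation on a power-of-2 bucket; sizing by total bytes may pack the GC buckets better.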