BGB / cr88192 wrote:
> "Jon Harrop" <jon@ffconsultancy.com> wrote in message
>> Kaz Kylheku wrote:
>>> So now you have references that can't fit into a machine register.
>>
>> If you don't have 128- or 256-bit registers on your 32- or 64-bit
>> machine, respectively.
>
> one needs AVX to have 256 bit registers at present (on x86 / x86-64 and
> friends).
>
> given AVX is not (yet) available in general "run of the mill" CPUs,
> this seems to be a bit aggressive. similarly, 256 bit pointers are
> hardly cheap...

On the contrary, HLVM's benchmark results have proven that fat references
are cheap.

>> I see. That is not what I'm doing but, besides atomicity, what is the
>> motivation for trying to squeeze a reference into a pointer?
>
> the main thing I am thinking here is:
> 1, saving memory:
> given that, typically, pointers are one of the most common value types
> (especially in dynamic languages), large pointers translates to HUGE
> increases in memory use.

No. Moving the header data from the heap-allocated target to the local
source of the reference only increases memory consumption by 96 bits
for every *duplicate* reference stored in the heap, i.e. when two heap
objects A and B both reference C then the metadata is duplicated in A
and B. I don't believe duplicate references in the heap are common at
all.

Moreover, HLVM is specifically designed to reduce the number of
references in the heap. Most notably by unboxing multiple values
(tuples) and all primitive types. In contrast, languages like OCaml
box multiple values (e.g. pairs) and box many basic types such as
floats.

No. HLVM has certainly proven that wrong. Indeed, disproving such
accepted wisdom is an important part of why I think these results are
of wider interest.

For example, I just benchmarked a simple hash table with arrays for
buckets. Note that the spine contains the *only* reference in the
heap to each bucket: there are no duplicate references so memory
consumption is no worse than with 1-word references. OCaml is probably
the world's fastest functional language implementation for this kind
of task but HLVM is 2.2x faster than OCaml on this benchmark (!).

> as I sometimes say, having lots of a resource is not a good reason
> to waste it. just because a modern computer has > 1GB of RAM (my
> desktop has 6GB and my laptop 4GB), does not mean it is a good idea
> to increase memory use by a huge factor without good reason.

I don't believe HLVM's design will bloat the heap by a huge
factor. Indeed, can you think of a practical solution to a problem
that entails large numbers of duplicate references stored in the heap?

> it is typically better to use more of a resource to have more stuff,
> than to squander it and have the same, or less, overall.

I don't believe so.

> much like how now, a person can easily have multiple TB of hard-drive
> space, yet data compression has by no means become meaningless.

Sure, but there are clear trade-offs, and compressing data
representations in RAM until they fit in an ever smaller fraction of
a cache line is a clear case of diminishing returns.

> to make little savings at the bottom can often translate to big savings
> overall...

That is certainly wrong. Language implementations like OCaml and
Haskell go to ridiculous lengths to shrink data representations
(e.g. placing a tag bit in each int) but HLVM and F# wipe the floor
with them in benchmarks.

> hence my idea is maybe to consider a scheme where, most of the time,
> pointers remain at their natural size, and can occasionally route through
> another structure for added functionality (this structure need not be
> necessarily heap-based). alternatively, most info can be placed in the
> object (as is often done).
>
> in the structure based strategy, also possible could be to have an "object
> indirection table", which would basically be used for any item addressed
> via a fat pointer. in this case, the narrow pointer encodes an index or
> pointer into this table, and the table in turn refers to the object (with
> any added info).
>
> this would allow greatly reducing overall costs to multiple references to
> the same object.

But does that improve memory consumption and performance for the uncommon
case of duplicate references at the cost of performance for all other
references?

> the main weak side of tagged references is that they are a hassle to work
> with from C, but at the same time, the same could be said of fat pointers
> (without explicit compiler support). actually, it may well be worse,
> because at least a tagged reference will usually fit nicely into an
> unsigned integer or similar...

You just need primitive operations to extract the underlying pointer
from a fat reference (the AddressOf operator in HLVM) and to construct
a fat reference from a pointer and a type (not yet implemented in
HLVM).

That is exactly the kind of low-level bit twiddling that old school
language implementations like OCaml, Haskell and even the JVM do. I
don't think it is such a great idea. For example, it makes OCaml's int
arithmetic 3x slower than C and boxing makes Java's hash tables 32x
slower than .NET's.

> even on a 64-bit system, a person may still use 32-bit tagged values,
> meaning LESS space use than if they used pointers (although, there is a
> finite limit on practical heap size, for example, 4GB worth of cons cells,
> ...).

You mean 2^32 cons cells. That will become a serious limitation quite
quickly. OCaml suffers from a similar legacy problem due to similar bit
twiddling: on 32-bit systems, strings and arrays are limited to only
16MB.

> the reason for giving so much space to fixnums and flonums is that these
> are usually very common types, and it is desirable to minimize the need
> for heap-allocated integers or floats.

If you're interested in performance and memory consumption then it is
absurd to ever heap allocate ints and floats.