Just remember one of the reasons the VM guarantees the memory is zeroed is to give you a known state for a new object - one of the classic reasons C programs go awry unexpectedly is having cruddy data in a struct. With pooling you lose that safety net and you're on your own again.

Cas

But C guarantees the memory will be uncertain: whatever you don't initialize may not be 0. And 0 isn't a safety net if you don't want 0 either.

Hehe. There is a subtle but big difference in having guaranteed default values, and that's repeatable crashes.

Cas

lol yes, funny but true. Like when beginners test their games in VBA and find they don't work in No$ or on real hardware - although that's almost the opposite.

Well, the whole reason I created the FixedInt class was so that I could have integer precision and predictability without having to worry about the correctness of the arithmetic (i.e. shifting and performing operations on fixed-point values with different precisions). Having said that, I will probably change it to floating point anyway, but it did make a fine example of building something that works over something that is fast (if you get what I mean)!
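
For anyone curious what that looks like, here is a hypothetical minimal sketch of a 16.16 fixed-point type in the spirit of the FixedInt class described above - the class name, shift amount, and methods are illustrative, not the original implementation:

```java
// Minimal 16.16 fixed-point value: 16 integer bits, 16 fraction bits.
// All arithmetic stays in plain ints, so results are exactly repeatable
// across machines - the predictability the post above is after.
public final class Fixed {
    static final int SHIFT = 16;
    final int raw;                         // the underlying scaled integer

    Fixed(int raw) { this.raw = raw; }

    static Fixed fromInt(int v)       { return new Fixed(v << SHIFT); }
    static Fixed fromDouble(double v) { return new Fixed((int) (v * (1 << SHIFT))); }

    Fixed add(Fixed o) { return new Fixed(raw + o.raw); }
    Fixed sub(Fixed o) { return new Fixed(raw - o.raw); }
    // Multiplication needs a 64-bit intermediate, then a shift back down -
    // exactly the kind of easy-to-get-wrong detail the class hides.
    Fixed mul(Fixed o) { return new Fixed((int) (((long) raw * o.raw) >> SHIFT)); }
    Fixed div(Fixed o) { return new Fixed((int) (((long) raw << SHIFT) / o.raw)); }

    double toDouble() { return raw / (double) (1 << SHIFT); }
}
```

For example, `Fixed.fromDouble(1.5).mul(Fixed.fromInt(2))` works out to exactly 3.0 with no floating-point drift.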

Basically, it's because getting an existing object is nearly costless, while allocating memory for a new object (and releasing it later when the GC runs) is CPU expensive, even for one byte. When creating a new object you ask the Java memory manager (and ultimately the target OS RAM manager) for heap space, which is slow compared to getting an already-allocated memory slot. Using a pool is exactly that: you fetch a reference to an already-allocated memory area. That is why, in any language, it will always be faster to use pooled objects.

But in this special case there will be no surprise: pooling will be faster, especially for a lot of objects. Memory management works much like hard-disk management, including fragmentation and such. So even if the Java GC runs at its best, the Java memory manager is excellent, and the underlying OS memory manager is excellent, allocating an object can't be faster than reusing an already existing one. Reusing an object is close to CPU-free - it only implies reading an object reference, which is about as cheap as reading a pointer, maybe only 2-4 CPU cycles.

If you have time, make a simple test case (a simple loop allocating objects and doing some computation on them, then the same with preallocated objects) and print out both benchmark results. I guess you will find a huge difference.
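
A rough sketch of that suggested test might look like the following - all names here are illustrative, and note that a naive loop like this is easily distorted by JIT warmup and escape analysis, so treat the printed numbers as an indication only, not proof either way:

```java
import java.util.ArrayDeque;

// Compare allocate-per-iteration against reuse from a pre-filled pool.
public class PoolBench {
    static class Particle { double x, y, dx, dy; }

    static long benchAlloc(int n) {
        long t0 = System.nanoTime();
        double sum = 0;
        for (int i = 0; i < n; i++) {
            Particle p = new Particle();      // fresh allocation each time
            p.x = i; p.dx = 0.5;
            p.x += p.dx;
            sum += p.x;
        }
        if (sum == -1) System.out.println();  // keep the work observable
        return System.nanoTime() - t0;
    }

    static long benchPooled(int n, ArrayDeque<Particle> pool) {
        long t0 = System.nanoTime();
        double sum = 0;
        for (int i = 0; i < n; i++) {
            Particle p = pool.poll();         // reuse an existing object
            p.x = i; p.dx = 0.5;              // caller must reset state itself!
            p.x += p.dx;
            sum += p.x;
            pool.offer(p);                    // hand it back to the pool
        }
        if (sum == -1) System.out.println();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        ArrayDeque<Particle> pool = new ArrayDeque<>();
        pool.offer(new Particle());
        int n = 10_000_000;
        System.out.println("alloc:  " + benchAlloc(n) + " ns");
        System.out.println("pooled: " + benchPooled(n, pool) + " ns");
    }
}
```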

Think of the general 80/20 rule of CPU use: roughly 80% of your source code gains little from being optimised, because most of the CPU time is spent in 20% of your code. You don't have to care about pooling if it is outside the 20% of code that uses most of your CPU.

You can use the -Xprof option to identify the code that uses most of your CPU and see if you really need to optimise that part with pooling. Low-level optimisation should be used carefully and must be done as late as possible in your project.

In particular, how do I find out if an object isn't needed anymore? (reference count == 0?)

Don't reimplement the GC! If you want fast pooling, keep it as simple as possible: put objects into the pool only if you are sure that they are not in use any more. Anything else will run slower than regular new/GC cycles.
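
An "as simple as possible" pool along those lines might look like this sketch - class and method names are made up for illustration. Note the pool itself does no tracking at all: the caller is responsible for only releasing objects it knows are no longer in use, and for resetting state on reuse:

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Minimal object pool: a deque of free instances plus a factory for
// when the pool runs dry. No reference counting, no liveness checks.
public final class SimplePool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) { this.factory = factory; }

    /** Hand out a pooled instance, or allocate a new one if the pool is empty. */
    public T obtain() {
        T t = free.poll();
        return (t != null) ? t : factory.get();
    }

    /** Only call this when you are certain nothing else references obj. */
    public void release(T obj) {
        free.offer(obj);
    }
}
```

Usage is just `SimplePool<StringBuilder> p = new SimplePool<>(StringBuilder::new);` followed by paired `obtain()`/`release()` calls - and remember that an obtained object still carries whatever state it had last time.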

I use pools for the storage of Triangle objects for terrain triangulation. Without the pool I get regular full (stop-the-world) GCs every 6 seconds. With the pool enabled the GC runs concurrently without stops and enables smooth rendering (60+ fps). And don't use pools for a small number of objects (my initial pool size is around 1 million triangles).

OK, according to my profiler I got garbage collection down a lot! The minor GCs now happen only every 3 seconds, as opposed to 3 times a second before. Whether that has an impact on overall performance I don't know - on slower machines, probably yes.

Tbh, I wouldn't introduce object pooling at the source level at all. You are compromising the design integrity of your source code to accommodate a performance limitation of the current breed of VMs. What do you do when the next VM comes along, and your object pooling turns out to now be the performance bottleneck?

A bytecode engineering solution to complement the capabilities of the VM's compiler would be a much cleaner, more reusable and more scalable solution.

While it isn't a trivial problem to solve, it isn't beyond the realms of imagination (no doubt it would borrow many aspects from the myriad of optimising compilers that already exist).

I want performance now... it makes my code 2-3x faster, for a few minutes spent refactoring my 'ideal' source code.

I don't have the time to build that bytecode transformer. Keep in mind that such a transformer would be almost impossible to get right: the developer knows when an object is ready to reuse, yet the transformer _cannot_ analyse that. Or you'd be building yet another GC...

There is object pooling and object pooling. Not every case is just about saving GC; sometimes it is about saving memory. Pooling immutable objects has a nice side effect: you won't end up with millions of instances of the same (same as in 'equals returning true') object in the JVM. After all, java.lang.Integer.valueOf(int) implements a small pool itself, so it cannot be THAT bad, can it?
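
That Integer.valueOf pool is easy to see in action - the language spec guarantees a cache for at least the values -128 to 127, so repeated calls in that range hand back the same instance instead of allocating a new one:

```java
// Small values come from Integer's built-in cache: same instance every time.
Integer a = Integer.valueOf(100);
Integer b = Integer.valueOf(100);
System.out.println(a == b);            // true: identical cached object

// Outside the guaranteed cache range you typically get fresh objects,
// so compare boxed values with equals(), never with ==.
Integer c = Integer.valueOf(1000);
Integer d = Integer.valueOf(1000);
System.out.println(c.equals(d));       // true: always equal by value
```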

As for claiming that GC will solve all your problems - that is not exactly true. If you generate a LOT of immediate garbage, you will trigger GC pauses more often. In every GC, some portion of the live objects gets copied here and there (at least until they mature enough to hit the old generation), which is a costly operation. So don't sacrifice your app logic for the GC, but also don't allocate things just because they are 'free'.

I'm doing a lot of performance-sensitive code these days, and when you hit 8+GB heaps, cannot afford more than 50ms pauses, and cannot use NewParallelGC (because it crashes 100% with our app within 4 hours), one becomes a bit more careful about garbage allocation.

100% agreed. [offtopic] Immutable objects have yet another cool side effect: you need zero synchronisation if you work with multiple threads. Scala even uses immutable HashMaps... [/offtopic]
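
A tiny made-up illustration of that point: an immutable value class can be shared between threads with no locks at all, because no thread can ever observe it changing - "updates" just produce new instances:

```java
// Immutable 2D vector: final class, final fields, no setters.
// Java's memory model guarantees final fields are safely published,
// so any number of threads may read a Vec2 without synchronisation.
public final class Vec2 {
    public final double x, y;

    public Vec2(double x, double y) { this.x = x; this.y = y; }

    // Returns a new value instead of mutating this one.
    public Vec2 plus(Vec2 o) { return new Vec2(x + o.x, y + o.y); }
}
```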

8GB heap and 50ms GC pauses? That's awesome! I never thought the GC would scale so well. My engine currently uses around 1 gig of RAM full of small objects and is at a point where even parallel young GCs take >100ms (and full GCs would take >2 seconds without pooling). Now I decided to move from dynamically resizing pools to static pre-allocated pools, which fixed that problem (0 allocations or deallocations, yeha!).
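
A sketch of that static pre-allocation idea, with made-up names and a made-up Triangle payload: every object is created once up front, and at runtime obtain/release just move an index, so the steady state performs zero allocations and zero deallocations:

```java
// Fixed-capacity pool: all objects allocated in the constructor,
// handed out and taken back via a simple free-list index.
public final class PreallocatedPool {
    static final class Triangle { float ax, ay, bx, by, cx, cy; }

    private final Triangle[] slots;
    private int free;                     // number of free objects remaining

    PreallocatedPool(int capacity) {
        slots = new Triangle[capacity];
        for (int i = 0; i < capacity; i++) slots[i] = new Triangle(); // allocate everything now
        free = capacity;
    }

    Triangle obtain() {
        if (free == 0) throw new IllegalStateException("pool exhausted - size it generously");
        return slots[--free];
    }

    void release(Triangle t) {
        slots[free++] = t;                // caller guarantees t is no longer in use
    }
}
```

The trade-off is that the pool's full capacity is resident for the lifetime of the app, and exhausting it is a hard failure - which is why a generous initial size (like the million-triangle pool mentioned earlier in the thread) matters.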

But good to know that there is still room left

(Am I the only one who noticed that every VM/GC performance white paper tries to advise against pooling?)

This is the new Concurrent Mark and Sweep collector introduced with Java SE 6. It uses all available cores to clean the young generation and tries to do most of the work in the tenured generation concurrently (i.e. while your app is running).

Its primary aim is to prevent the evil full stops (also known as full GCs).
