What happens in the for loop is pretty straightforward. However, I'm not sure I get the last bit, with the swapping of pointers and whatnot (at least, that's what I assume it does: switching the elements from stillActive over into entities).

Today, however, I ran into the retainAll() function, that's defined in the Set interface, which does exactly the same.

So instead of the above code, I could write:


    for (Entity e : this.entities) {
        if (e.isActive()) {
            e.update();
            if (e.isActive()) {
                this.stillActive.add(e);
            }
        }
    }
    this.entities.retainAll(this.stillActive);
    this.stillActive.clear();

which, IMO, is a lot more readable and understandable. My question is: which method is better?

Actually, it just hit me that retainAll must be a lot slower, since it compares all elements in A to all elements in B, and B can contain elements that aren't in A to begin with.
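For what it's worth, how slow this is depends heavily on what B actually is: a List's retainAll(b) calls b.contains(...) once per element of A, so if B is a HashSet the whole operation is roughly linear, while with a plain List it really is all-pairs. A small toy sketch of my own (the numbers are made up for illustration):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

class RetainDemo {
    public static void main(String[] args) {
        List<Integer> a = new ArrayList<>(List.of(1, 2, 3, 4));
        // retainAll calls b.contains(e) once for every element of a, so
        // the total cost is |a| * (cost of contains). With a HashSet,
        // contains is O(1) on average; with a List it is O(|b|).
        a.retainAll(new HashSet<>(List.of(2, 4, 99))); // 99 isn't in a; that's fine
        System.out.println(a); // [2, 4]
    }
}
```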

However, I'd still like someone to explain how the whole pointer swapping actually works in the first snippet, because I'm not quite sure about that. (I do know what pointers are and what they do; I've meddled with them in C and C++.)

Here's how I guess it works:
- First, the pointer for entities is copied over into tmp.
- Next, the pointer for entities is overwritten with the pointer for stillActive.
- Then the pointer for stillActive is overwritten with the one from tmp (i.e. the original pointer to entities).
- Then stillActive is cleared (along with tmp, I suppose).

.... And writing this out, I think I just understood how exactly it works, assuming it works as I just wrote.
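The steps guessed at above can be sketched directly; note that in Java these are references rather than C-style pointers, and the field names below just mirror the thread's example:

```java
import java.util.ArrayList;
import java.util.List;

class SwapDemo {
    static List<String> entities = new ArrayList<>();
    static List<String> stillActive = new ArrayList<>();

    static void swap() {
        // 1. Copy the 'entities' reference into a temporary.
        List<String> tmp = entities;
        // 2. Overwrite 'entities' with the 'stillActive' reference.
        entities = stillActive;
        // 3. Overwrite 'stillActive' with the original 'entities' reference.
        stillActive = tmp;
        // 4. Clear the old list so it can be refilled next frame.
        //    'tmp' simply goes out of scope; nothing else to clean up.
        stillActive.clear();
    }

    public static void main(String[] args) {
        entities.add("dead");
        stillActive.add("alive");
        swap();
        System.out.println(entities);    // [alive]
        System.out.println(stillActive); // []
    }
}
```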

It would make things easier if you just dropped the global stillActive collection:
- create a new collection
- put all active entities in the new collection
- set the entity collection reference to the new collection
- finish
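Those steps might look like this; a minimal sketch, where the Mob class and its active flag are my own stand-ins for whatever entity type the original code uses:

```java
import java.util.ArrayList;
import java.util.List;

class FreshListDemo {
    static class Mob {
        boolean active = true;
        boolean isActive() { return active; }
        void update() { /* game logic; may flip 'active' to false */ }
    }

    static List<Mob> entities = new ArrayList<>();

    // One frame: build a brand-new list of survivors and point the
    // 'entities' reference at it. No reused stillActive list, no swap;
    // the old list simply becomes garbage.
    static void frame() {
        List<Mob> alive = new ArrayList<>(entities.size());
        for (Mob m : entities) {
            if (m.isActive()) {
                m.update();
                if (m.isActive()) alive.add(m);
            }
        }
        entities = alive;
    }
}
```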

Some people always try to reuse objects, but I think this premature optimization is just evil: reusing objects is so error-prone, especially if there is a lot of state that needs to be reset.

The garbage collector on Android is not that fast, so object reuse can be a valid optimization on that platform. But remember, we are talking about one single object here, so I would argue that this optimization is useless even on Android.

Firstly, the absolute fastest thing to do is to not use an iterator at all: use an ArrayList and scan through it with a plain for loop. It's about 10% faster even on the server VM on the desktop. So why not?

Secondly, the way CPUs and memory buses work means that memory accessed sequentially takes advantage of the CPU prefetching data it thinks it will need next. By scanning linearly through memory, as you will be doing here, you are using memory incredibly efficiently, and you only look at each element once. Likewise for writes.

So what you're doing is:
- Scan through your list of things sequentially. As absolutely efficient as can possibly be on any architecture.
- Perform whatever operation you normally do on the thing.
- Check to see if it's dead. If it's not dead, copy it sequentially into a second list. Again, as efficient as it is possible to be.
- When you're done, clear the original list, and then swap the references to the lists around, so next time you're running against the list you just built. Rinse, repeat. The swap is virtually instantaneous, consisting of swapping 4 bytes with another 4 bytes.
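Put together, the loop described above could look roughly like this; the Thing class and field names are my own sketch of the technique, not code from the thread:

```java
import java.util.ArrayList;
import java.util.List;

class SwapLoopDemo {
    static class Thing {
        boolean dead;
        void update() { /* normal per-frame work */ }
    }

    static List<Thing> entities = new ArrayList<>();
    static List<Thing> stillActive = new ArrayList<>();

    static void frame() {
        // 1. Scan sequentially with a plain indexed for loop.
        for (int i = 0; i < entities.size(); i++) {
            Thing t = entities.get(i);
            t.update();
            // 2. Survivors are appended sequentially to the second list.
            if (!t.dead) stillActive.add(t);
        }
        // 3. Clear the original list and swap the two references;
        //    next frame runs against the list we just built.
        entities.clear();
        List<Thing> tmp = entities;
        entities = stillActive;
        stillActive = tmp;
    }
}
```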

This algorithm a) makes the best possible use of the cache on any architecture, b) is as fast at removing every single element in the list in one go as it is at removing only one element, c) does everything in one pass, and d) retains the order of the things in the list (often important). Its only disadvantage is using 2x the memory for the list, which for a list containing 1 million elements is a pathetic extra 4 MB.

So this is faster and more efficient. The iterator loop could be replaced by this loop for maximum performance?

If I understood it well, the ArrayList uses an array to store its data. I understand removing can use loads of resources, because the array needs to be rebuilt on every delete. But when adding values, does the array need to be resized on every addition, or will this be cached too?

You need to check entities.size() every iteration in case a new entity is added by one of the entities already in the list, e.g. a player firing a bullet. Going backwards is likely to be wrong for just that sort of thing too. Furthermore, using an Iterator will actually crash with a ConcurrentModificationException when you try to spawn the bullet. Bad! Slow! Avoid!
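A tiny demonstration of the point: checking entities.size() on every iteration means entities appended mid-loop are still visited, where a for-each loop (which uses an Iterator under the hood) would throw. The "player"/"bullet" strings are just placeholders of mine:

```java
import java.util.ArrayList;
import java.util.List;

class SpawnDemo {
    static List<String> entities = new ArrayList<>();

    public static void main(String[] args) {
        entities.add("player");
        // entities.size() is re-evaluated on every iteration, so an
        // entity appended during the loop (a freshly fired bullet) is
        // still picked up this frame. Doing entities.add(...) inside a
        // for-each loop would throw ConcurrentModificationException.
        for (int i = 0; i < entities.size(); i++) {
            if (entities.get(i).equals("player")) {
                entities.add("bullet"); // spawn during iteration: fine
            }
        }
        System.out.println(entities); // [player, bullet]
    }
}
```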

Removal from an ArrayList can get very slow for large lists, especially if quite a few of the removals are near the start of the list. Conversely, increasing the size of an ArrayList happens very infrequently and costs only about as much as a single removal might. After just a few iterations of your game loop, your ArrayLists are likely to be the biggest they will ever need to be, and will remain that size. Or just size them properly ahead of time in the constructor.
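To answer the resizing question above concretely: an ArrayList does not resize on every addition; it grows by roughly 1.5x only when the backing array is full, so appends are amortized O(1). And if you know an upper bound up front, passing it to the constructor avoids even those rare grow-and-copy steps (1024 below is an arbitrary example capacity):

```java
import java.util.ArrayList;
import java.util.List;

class PreSizeDemo {
    public static void main(String[] args) {
        // Pre-size with the expected maximum so the backing array is
        // allocated once and never needs to grow while the list fills.
        List<Integer> entities = new ArrayList<>(1024);
        for (int i = 0; i < 1024; i++) {
            entities.add(i); // never triggers a resize
        }
        System.out.println(entities.size()); // 1024
    }
}
```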

You could do that, yes, though it sort of pollutes the purpose of your update() method, and you'll still need isAlive() or isDead() so that entities can check that references they may hold are still alive during their update.

That's pretty nifty. I suspect most people, including myself, would get away with doing something like that if mutation of the entities array wasn't a problem during iteration (I suspect it isn't, but I've always played it safe).

The only problem, as I see it, is that you need to know exactly how many entities you'll have at most at any given time. Otherwise you'll need to add code for changing the length of the array. But that code is a lot easier to comprehend than the other code is.

And I think I might just implement something like that in my particle emitter, as it has a max number of particles defined. ^^

Well it wouldn't hurt to replace the array with an ArrayList but use the same technique.

Probably not (given the extra method calls will probably get inlined), but it's also easy enough to expand the array on demand, the way ArrayList does, in only a couple of lines!
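Those couple of lines might look something like this; a sketch of the grow-on-demand idea (the ~1.5x growth factor mirrors what ArrayList does internally, but the names here are mine):

```java
import java.util.Arrays;

class GrowDemo {
    static Object[] entities = new Object[4]; // deliberately small start
    static int size = 0;

    // When the backing array is full, copy everything into an array
    // about 1.5x the old length, then append as usual.
    static void add(Object e) {
        if (size == entities.length) {
            entities = Arrays.copyOf(entities,
                    entities.length + (entities.length >> 1));
        }
        entities[size++] = e;
    }
}
```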

@ClickerMonkey - nice example. I was initially trying to understand the size field, then realized I think you're missing setting the size to the number of live objects prior to nulling the dead ones. Won't this cause an NPE on the next run otherwise?

I would bet that the update method of the entities is much heavier than the loop plus the deleting. Besides, how many entities will you have? One thousand? Two thousand?

Not premature really, this is just a trivial way of doing things that is no more complicated than any other way but just happens to be really easy to understand and use and is the fastest possible way to do it. And let's not forget when you're on a 700MHz ARM chip with a 16 bit bus running a crappy Dalvik VM that every cycle is important.

Edit: It should probably be noted that if you are dealing with particles in this way, you might well have a few thousand of them as well.

Quote from Cas: "this is just a trivial way of doing things that is no more complicated than any other way"

I agree with you; when you're handling particles it's quite a different story. What I think is a big problem with premature optimization is that so much time gets wasted thinking about better ways of doing a simple thing. And this thread is still going on, even though a good enough solution was found ^^
