[Sean Jensen-Grey]
> Yes, that is the behaviour I was looking for. Thank You. I
> guess the bug can be resolved RTFM. But it should still
> work the way I want it! B^)
Well, I'm not Guido, but this won't change -- it would break far too much
code; e.g., I routinely do breadth-first traversal of directed graphs via
    # return list of nodes reachable from root, in breadth-first
    # order
    def bfirst(root):
        all = [root]
        seen = {root: 1}
        for x in all:
            for kid in x.children():
                if not seen.has_key(kid):
                    seen[kid] = 1
                    all.append(kid)
        return all
That is, "all" grows *during* the loop, and x picks up each one added in
queue order. Sick, but pretty <wink>.
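A minimal runnable sketch of the same trick, in modern Python; the Node class and its children() method are made up here just to have a graph to walk:

```python
# Breadth-first traversal where the work list grows *during* the loop.
# "Node" is a hypothetical stand-in for whatever graph type you have.
class Node:
    def __init__(self, name, kids=()):
        self.name = name
        self.kids = list(kids)

    def children(self):
        return self.kids

def bfirst(root):
    all = [root]
    seen = {root: 1}
    for x in all:                  # "all" grows while we iterate over it
        for kid in x.children():
            if kid not in seen:    # modern spelling of has_key()
                seen[kid] = 1
                all.append(kid)
    return all

# a small diamond-shaped graph: a -> b, c; b -> d; c -> d
d = Node("d")
b = Node("b", [d])
c = Node("c", [d])
a = Node("a", [b, c])
order = [n.name for n in bfirst(a)]
print(order)  # ['a', 'b', 'c', 'd'] -- queue order, each node once
```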
> I achieved the same result with the code below, after
> accounting for the array compaction.
Sick, but strained <wink>. If you don't want to copy, how about iterating
over l_sync backwards?
    for i in range(len(l_sync)-1, -1, -1):
        if l_sync[i].isevil():
            del l_sync[i]
This works because only elements whose indices have *already* been examined
can get deleted; it's also enormously more efficient if most of the elements
get deleted (deleting from the end of a list is much cheaper than deleting
from its front). Sick, but effective <wink>.
> Is the temporary copy l_sync[:] as expensive as a deepcopy?
It's a shallow copy; under the covers it (just) copies len(l_sync) pointers;
    l_sync[:][i] is l_sync[i]
is true for all i in range(len(l_sync)).
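A quick sketch showing what "shallow" buys and costs: the copy is a new list object, but it shares the same element objects, so mutating a shared element shows through both:

```python
# l_sync[:] copies only the pointers, not the objects they point to.
l_sync = [[1], [2], [3]]
copy = l_sync[:]

assert copy is not l_sync                # a distinct list object...
assert all(copy[i] is l_sync[i]          # ...holding the very same elements
           for i in range(len(l_sync)))

copy[0].append(99)                       # mutate a shared element
print(l_sync[0])  # [1, 99] -- the change is visible via the original too
```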