During update 6001 the function _filefield_update_6001_move_operation is executed repeatedly across multiple client-server roundtrips. It is supposed to update rows in batches of 50 while keeping the progress meter running. This process can get stuck if the query condition "vid > %d" for the while loop (filefield.install:310) returns zero rows. In that case the line $context['sandbox']['progress']++; at filefield.install:327 is never executed, and the progress meter never reaches completion. The user sees something like "Remaining 34 of 34...", the web server is hit with a stream of requests, CPU usage spikes, and the same database query is executed over and over again. Because each iteration is a fresh roundtrip, the script is never aborted by the max execution time limit, so the server is pounded for as long as the browser stays open.
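To make the stuck condition concrete, here is a minimal model of the batch step (an illustrative Python sketch, not the actual PHP from filefield.install; the function and key names are hypothetical stand-ins for the Batch API sandbox):

```python
# Minimal model of the stuck batch step (illustrative Python, not the
# actual PHP). 'progress' / 'max' plays the role of the Batch API's
# '#finished' fraction: the client keeps re-requesting until it hits 1.

def run_batch_step(sandbox, fetch_rows):
    """One client-server roundtrip of the update."""
    # Rows with vid greater than the last one processed, capped at 50
    # (the analogue of the "vid > %d" query at filefield.install:310).
    rows = fetch_rows(sandbox['current_vid'], limit=50)
    for row in rows:
        sandbox['current_vid'] = row['vid']
        sandbox['progress'] += 1  # analogue of filefield.install:327
    return sandbox['progress'] / sandbox['max']

# Stuck state reported above: the query returns no rows, so progress is
# never incremented and the finished fraction stays below 1 forever.
sandbox = {'current_vid': 100, 'progress': 34, 'max': 68}
finished = run_batch_step(sandbox, lambda vid, limit: [])
print(finished)  # 0.5 -> the browser re-requests the same step endlessly
```

Because the loop body is the only place progress is incremented, an empty result set with progress still below max is a guaranteed infinite roundtrip cycle.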

Comments

At first I couldn't quite figure out from the logic how this would be possible. If the query returned zero rows, you should have finished processing all the nodes, and 'progress' would equal 'max', letting the loop exit. However, I found that the update might accidentally skip some content due to an error in our query. This patch changes "nid" to "vid" in the query, ensuring that no nodes are skipped, and hopefully that means the loop will always be able to exit.

If you could try this patch out on your upgrade and run it again, that would be tremendously helpful. It's rather difficult to reproduce problems like this.

The patch doesn't help. I captured the contents of $context['sandbox'] before and after the while loop in the stuck state. As you can see, even though the query returns no rows, progress does not equal max and is never incremented, because the loop body is not executed at all.

I suppose your method of counting is broken because vid alone is non-unique in the table that is being processed:

VID doesn't have to be unique, since you can upload multiple files to a single revision of a node. The loop is for every individual file, not each revision.

I think another possibility here is a revision with multiple files, some of which get lost because the 50-item cap is reached before the node is entirely processed.

For example, say you have a single multi-value field with 10 images uploaded into it. If 48 files from other nodes have already been processed during the update, it will convert only 2 of the 10 files before hitting the 50-item limit. Then on the next request the update starts with the next node, because it thinks the previous node was fully processed.
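That skip scenario can be simulated with toy data (an illustrative Python sketch, not the real PHP or schema; the rows are hypothetical (vid, delta) pairs, one per file):

```python
# Illustrative sketch of how a flat 50-item cap skips files in a
# multi-value field. Hypothetical data: 48 single-file nodes (vids 1-48)
# plus one node (vid 49) with 10 files (deltas 0-9).
rows = [(v, 0) for v in range(1, 49)] + [(49, d) for d in range(10)]

def batch(rows, last_vid, limit=50):
    """Process up to `limit` rows with vid > last_vid, as the update does."""
    picked = [r for r in rows if r[0] > last_vid][:limit]
    return picked, (picked[-1][0] if picked else last_vid)

first, last_vid = batch(rows, last_vid=0)
# The 48 single-file nodes plus only 2 of vid 49's 10 files fit the cap.
print(len([r for r in first if r[0] == 49]))  # 2

# The next request resumes with vid > 49, silently dropping the other 8.
second, _ = batch(rows, last_vid=last_vid)
print(len(second))  # 0
```

The second batch finds nothing left to do even though 8 files were never converted, which also explains how the row count and the sandbox counters can drift apart.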

This patch makes the update start every time with the last node that was already processed and checks that all of that node's items are pulled in before starting on the next node. Note that I haven't been able to test this upgrade, so again it would be a huge help if you could give it a try.
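The strategy described above, resuming at the last processed node and pulling in all of its items before moving on, might look like this (a hedged Python sketch with made-up data, not the actual patch):

```python
# Hedged sketch of the fixed strategy: resume at the last vid inclusively
# and never stop mid-node, so no file in a multi-value field is skipped.
# Hypothetical data: 48 single-file nodes plus vid 49 with 10 files.
rows = [(v, 0) for v in range(1, 49)] + [(49, d) for d in range(10)]

def batch_fixed(rows, last_vid, limit=50):
    """Take rows with vid >= last_vid; once the cap is hit, keep pulling
    rows until the current node's vid changes, so it is fully processed."""
    pending = [r for r in rows if r[0] >= last_vid]
    picked = pending[:limit]
    if picked:
        # Pull in the rest of the final node's rows beyond the cap.
        tail_vid = picked[-1][0]
        picked += [r for r in pending[limit:] if r[0] == tail_vid]
    return picked, (picked[-1][0] + 1 if picked else last_vid)

first, next_vid = batch_fixed(rows, last_vid=1)
print(len([r for r in first if r[0] == 49]))  # 10: all the node's files
second, _ = batch_fixed(rows, next_vid)
print(len(second))  # 0: nothing left behind
```

A batch may now slightly exceed 50 items, but every node is converted completely, so the progress counter can actually reach max and the loop can exit.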