As a starting point, I wanted to test the “array of objects” design, and profile it using xdebug_time_index, xdebug_memory_usage, and xdebug_peak_memory_usage. The Xdebug documentation doesn’t really explain what those measure (seconds and bytes, I assume), but I figured it would at least give me a way to judge relative performance and memory load.
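A minimal sketch of the harness I mean, assuming the Xdebug extension is loaded (the variable names and printf formatting here are mine, not from any particular test):

```php
<?php
// xdebug_time_index() returns seconds elapsed since the script started;
// xdebug_memory_usage() and xdebug_peak_memory_usage() return bytes.
$start = xdebug_time_index();

// ... build and loop through the 100,000 objects here ...

printf("time: %s\n", xdebug_time_index() - $start);
printf("pmem: %s\n", number_format(xdebug_peak_memory_usage()));
printf("mem: %s\n", number_format(xdebug_memory_usage()));
```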

The result was pretty scary. It took a long time to build and loop through all 100,000 objects:

time: 50.3534600735 (50-70 sec)
pmem: 39,267,200
mem: 3,205,784

Just for grins and giggles, I decided to modify the constructor and bySQL functions to pass along the complete array representing the database row, rather than going back and querying the database again:
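The change amounts to something like the following sketch. The class name Record and the exact signatures are my assumptions; only bySQL and the constructor are named in the text, and this uses the legacy mysql_* extension the post is built on:

```php
<?php
class Record
{
    public $row;

    // Before: the constructor took an id and ran its own
    // "SELECT * ... WHERE id = ..." — one extra query per object.
    // After: the caller hands over the complete row array, so no
    // second trip to the database is needed.
    public function __construct(array $row)
    {
        $this->row = $row;
    }

    public static function bySQL($sql)
    {
        $objects = array();
        $result = mysql_query($sql);
        while ($row = mysql_fetch_assoc($result)) {
            $objects[] = new self($row); // one query total, not N+1
        }
        return $objects;
    }
}
```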

Peak memory usage dropped only slightly, but the script execution time dropped by almost an order of magnitude. Clearly most of the time was being spent executing those 100,000 additional selects (duh!), and there was no memory cost associated with doing an initial select *.

Next I tried the Collection class detailed in I heart foreach, minus the mysql_connect and mysql_select_db function calls. The test looked like this:
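(A rough reconstruction — the actual Collection listing lives in the earlier post; the table name and query here are placeholders:)

```php
<?php
$start = xdebug_time_index();

// Collection lazily fetches rows as you iterate, rather than
// materializing all 100,000 objects up front.
$collection = new Collection('SELECT id FROM test_table');
foreach ($collection as $object) {
    // touch each object so the row is actually fetched
}

printf("time: %s\n", xdebug_time_index() - $start);
printf("pmem: %s\n", number_format(xdebug_peak_memory_usage()));
printf("mem: %s\n", number_format(xdebug_memory_usage()));
```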

Once again, performance was surprising. I was expecting an all-around improvement in both performance and memory usage, but it turned out the Collection class actually ran slower:

time: 62.3732869625
pmem: 77,040
mem: 76,944

Of course memory-wise, the Collection object kills. From 39MB down to 77KB! Granted, this was still using the select-id design, so I modified Collection::cacheNext() to pass along the whole array (not just the id value):
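The modification looks roughly like this sketch. Only the cacheNext() method is named in the text; the surrounding fields and the shape of the cache are my assumptions about the Collection internals:

```php
<?php
class Collection implements Iterator
{
    private $result;          // mysql result resource from the query
    private $cache = array(); // rows fetched so far

    // ... constructor and the Iterator methods go here ...

    private function cacheNext()
    {
        $row = mysql_fetch_assoc($this->result);
        if ($row !== false) {
            // Before: $this->cache[] = $row['id'];  (id only, forcing a
            // second SELECT when the object was built)
            // After: keep the complete row array so the object can be
            // constructed without another query.
            $this->cache[] = $row;
        }
        return $row !== false;
    }
}
```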