ImageMagick

Questions and postings pertaining to the usage of ImageMagick regardless of the interface. This includes the command-line utilities, as well as the C and C++ APIs. Usage questions are like "How do I use ImageMagick to create drop shadows?".

Problem: As a simple user, I'm wondering if it's "worth it" to keep two IM versions around - one Q8 for speed, one Q16 for 48bpp images. I'm on a 2 GHz dual-core laptop with 4 GB RAM, so I'm not *that* worried about memory usage the way a webserver maybe is. I'm using IM to watermark images and create thumbnail (polaroid) pages.

Question 1: Are there any informal guidelines or personal impressions of how large the speed impact of Q16 is, and for which operations?

Question 2: The other important configure option seems to be HDRI - what is its impact?

Disclaimer: Yes, I'm aware that I could just benchmark it myself and contribute the results to the community. It's just that in this case I'm taking the shortcut and asking, as there are bound to be experiences I was unable to find by searching the forum.

snibgo wrote:I have never done speed trials of Q8 vs Q16 vs Q32, or integer vs HDRI. I'm not aware of anyone else having done this. I would be interested in any results.

Right, that would be the reason why the description of these versions on the IM download page is so fuzzy: "A Q16 version permits you to read or write 16-bit images without losing precision but requires twice as much resources as the Q8 version" - what is that supposed to mean? Memory (probably)? Disk space? CPU? And what about HDRI?

If someone gets around to doing some benchmarking on the "usual suspect" operations, a clarification on the release page would be nice, because I'm probably not the only one who wonders about this and keeps two versions around even when it isn't really necessary.

Q16 needs twice as much memory for each image as Q8, and Q32 needs twice as much again. This is regardless of bits-per-pixel in files.

IM was born in the days when CPUs could handle only one bit at a time, and were powered by steam. Well, that's a slight exaggeration, but CPU architecture has changed over the decades. Once upon a time, 16-bit operations took more than twice as long as 8-bit operations. Then 16-bit processors arrived, and some 8-bit operations took longer than 16-bit equivalents because CPUs were optimised for 16-bit.

I haven't picked up a CPU instruction manual for many years, but I notice that CPUs are now commonly 64-bit. They still need to shuffle text so 8-bit movements should be fairly efficient but 8-bit arithmetic may now be much slower than 16-bit or even 32-bit. But I don't know.

A test suite could be devised, and IM compiled in 6 versions (Q8, Q16 and Q32, with or without HDRI).

As you have two versions, you could do informal testing on them using your usual operations. That might be more useful for you, rather than generic test cases.

Larger pixel quantums can cause ImageMagick to run more slowly and to require more memory. For example, using sixteen-bit pixel quantums can cause ImageMagick to run 15% to 50% slower (and take twice as much memory) than when it is built to support eight-bit pixel quantums. The amount of virtual memory consumed by an image can be computed from the equation (5 * QuantumDepth * Rows * Columns) / 8. This is an important consideration when resources are limited, particularly since processing an image may require several images to be in memory at one time. Applying that equation to a 1024x768 image gives: Q8: 3.75 MB, Q16: 7.5 MB, Q32: 15 MB.
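The quoted formula is easy to check with a short sketch (the factor of 5 samples per pixel is taken straight from the quote, not verified against the source):

```python
def image_memory_bytes(quantum_depth, rows, columns):
    """Virtual memory for one in-core image, per the formula quoted above:
    (5 * QuantumDepth * Rows * Columns) / 8 bytes."""
    return 5 * quantum_depth * rows * columns // 8

# A 1024x768 image at each common quantum depth:
for q in (8, 16, 32):
    print(f"Q{q}: {image_memory_bytes(q, 768, 1024) / 2**20:.2f} MB")
# Q8: 3.75 MB, Q16: 7.50 MB, Q32: 15.00 MB
```

Note the doubling at each step: Q16 costs exactly twice the memory of Q8 per in-core image, regardless of the bit depth of the file on disk.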

I guess sooner or later I'll do a personal benchmark with my main watermarking/thumbnail operation, but it won't be worth that much because it involves a lot of disk activity with temp files, and I can only measure execution time, not the memory footprint. On the other hand, at least it'll be a real "real world" benchmark.
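A crude wall-clock comparison of two builds can be scripted; a minimal sketch, in which the install paths and the test command are hypothetical, and which (as noted above) measures only execution time, not memory:

```python
import subprocess
import time

def time_command(argv, runs=3):
    """Return the best wall-clock time in seconds over several runs of argv.
    Taking the best of several runs reduces noise from disk caching etc."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(argv, check=True, capture_output=True)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage, assuming separate Q8 and Q16 installs at these paths:
#   q8  = time_command(["/opt/im-q8/bin/convert",  "in.jpg", "-thumbnail", "200x200", "out.png"])
#   q16 = time_command(["/opt/im-q16/bin/convert", "in.jpg", "-thumbnail", "200x200", "out.png"])
#   print(f"Q16/Q8 slowdown: {q16 / q8:.2f}x")
```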

snibgo wrote:Conclusion: bigger Q takes longer, but not by much. HDRI has hardly any effect.

Great that you're doing a systematic test, thanks! Do you have any means of measuring the memory footprint when, for example, doing polaroid thumbnails of, say, 4x4 medium-sized 2048x1536 source images?

The Q difference seems to be negligible by today's standards, but running out of physical memory certainly is a bad thing. In any case, in light of these results the Q16 version should probably be marked more clearly as "the one to get" on the download page, unless IM is running in a very constrained environment.

My results, comparing int to HDRI, suggest that most of the time is spent calculating with the internal format. All HDRI builds use (I think) 32-bit float, hence there is very little difference between the HDRI versions. I forget how the bits in floating point are allocated, but I think a 32-bit float has more than 16 bits in the mantissa. (If it had fewer, the precision would be less than for Q16 int.)
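That recollection is right: IEEE 754 single precision has a 24-bit effective significand (23 stored bits plus an implicit leading 1), so every 16-bit quantum value is represented exactly. A quick check:

```python
import struct

def roundtrip_f32(x):
    """Round-trip a number through IEEE 754 single precision."""
    return struct.unpack('<f', struct.pack('<f', float(x)))[0]

# float32's 24-bit significand holds every Q16 value exactly:
assert all(roundtrip_f32(v) == v for v in (0, 1, 255, 65535))
# beyond 2**24, integers start to fall between representable floats:
assert roundtrip_f32(2**24 + 1) != 2**24 + 1
```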

I'm pleased that HDRI adds practically no overhead to Q16. Once upon a time, floating point computation was very time-consuming, but no longer. I need more trials with my own workload, but I might migrate towards generally using HDRI.

Marsu42 wrote:Do you have any means of measuring the memory footprint when for example doing polaroid thumbnails of say 4x4 medium-sized 2048x1536 source images?

"-debug resource" seems to show acquisition of memory, so run your command with this option.

I agree that Q16 is generally the "one to get". But folks who run servers that take millions of jpeg snaps from mobile phones, for example, may be well served by Q8.

This is consistent with my previous results: higher Q is slower, but this is less evident when using HDRI. Some operations (such as extreme gamma which tends to clip) are more correct in Q32 HDRI than in Q16 HDRI. If HDRI is to be used, it might as well be Q32.
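A toy model (not ImageMagick's actual pipeline) can illustrate why extreme gamma is more correct with floats: an integer build must quantize the intermediate result, clipping small values to 0, while a float build keeps them and can invert the operation:

```python
def gamma_roundtrip(value, q_max, g, quantize):
    """Apply gamma g, then its inverse. If quantize is True, the intermediate
    result is rounded to an integer in [0, q_max], as an integer build must."""
    mid = (value / q_max) ** g * q_max
    if quantize:
        mid = round(mid)
    return round((mid / q_max) ** (1.0 / g) * q_max)

Q16_MAX = 65535
samples = range(1, 2000)
# Integer (Q16) pipeline: an extreme gamma pushes small values below 0.5,
# so they quantize to 0 and the inverse gamma cannot recover them.
int_losses = sum(gamma_roundtrip(v, Q16_MAX, 10.0, True) != v for v in samples)
# Float ("HDRI-like") pipeline: the tiny intermediates survive, so the
# inverse gamma round-trips every sampled value exactly.
float_losses = sum(gamma_roundtrip(v, Q16_MAX, 10.0, False) != v for v in samples)
print(int_losses, float_losses)   # 1999, 0
```

This is only a sketch of the quantization effect; the real builds differ in where and how often they quantize.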

Similarly, if Q32 is to be used, it might as well be HDRI, because it seems faster than Q32 integer.

Of course, not many file formats can record Q32 or HDRI.

For fastest speed, and least memory usage, Q8 integer is the clear winner.

The Q level in HDRI is typically irrelevant. When HDRI is on, Q becomes 'fixed': it is not actually used except as a 'working multiplier' for the floating point numbers, and as the default depth for output files.

The thing to look at is the file "magick-type.h" in the "magick" or "MagickCore" directory (depending on whether it is v6 or v7 source).

Hmmm a Quantum depth of 64 with HDRI does seem to increase the floating point size to 'long double'.

Looks like that last run was on a machine with a floating point math co-processor, making floating point calculations very fast.

Of course, whether the image processing involves a lot of floating point calculations (distorts, some resizes, sigmoidal or power-of calculations) or not (flips, flops, crops, extents) can make a big difference in the results. Even the image file format (JPEG versus GIF) can make a big difference.

It is much like resizing: the results are very dependent on exactly what it is you are attempting to do.

Just one point: ImageMagick has always had a stronger focus on results over speed. It does not always do things in the most efficient manner. That is not to say the developers are not adding optimizations (for example, caching resampling filters during a specific resize or distortion operation); they are just more focused on trying to get it right (though not always succeeding).

A case in point is IMv6 command line parsing, which is a very slow two- or even three-pass system (find all major problems before processing images), versus IMv7, which is a single-pass, do-it-as-you-see parser of the command line (or even of options in a file stream, such as a script or pipeline).