Change in processing of -limit size options regression

Post any defects you find in the released or beta versions of the ImageMagick software here. Include the ImageMagick version, OS, and any command-line required to reproduce the problem. Got a patch for a bug? Post it here.
apinstein
Posts: 2
Joined: 2012-03-02T15:14:32-07:00

Change in processing of -limit size options regression

Post by apinstein »

We recently upgraded from 6.7.2-2 to 6.7.4-2 and experienced a painful regression. We use -limit to control memory usage like so:
convert -monitor -limit memory 200mb -limit map 200mb foo.jpg -flatten -strip \( +clone -resize 1500x1000 -quality 75 -write foo-large.jpg +delete \) -thumbnail 100x75 -unsharp 0x0.8 -quality 65 foo-thumb.jpg
On 6.7.2 this ran in ~3s. On 6.7.4 it took ~40s.

We used -monitor and noticed it was the resize that was going slow, which led us down the rabbit hole of suspecting OpenMP. We recompiled the same version without OpenMP and saw the same behavior, and again with the latest version of ImageMagick.

We were trying to figure out how much memory the "broken" resize in the new version was actually using, so we copied the default map limit from the output of:
$ convert -list resource
File Area Memory Map Disk Thread Time
-------------------------------------------------------------------------------
768 3.7585GB 1.7502GiB 3.5004GiB unlimited 2 unlimited
We started by changing our 200mb to 500mb and then 1500mb. It was still slow.

That made me suspect the handling of the -limit map option, since removing just "-limit map 200mb" made it fast again.

So we copy/pasted the 3.5004GiB value and started reducing the number, waiting for it to slow down again. Once we got below 1GB and it was still fast (whereas a limit of 1000mb was slow), I tried this:
convert -monitor -limit memory 200MB -limit map 200MB foo.jpg -flatten -strip \( +clone -resize 1500x1000 -quality 75 -write foo-large.jpg +delete \) -thumbnail 100x75 -unsharp 0x0.8 -quality 65 foo-thumb.jpg
It was fast.

5 hours of work with 2 guys to figure this out. Ouch.

So anyway, it looks like there is a regression (or undocumented change) in how the -limit size values are parsed: lowercase suffixes like "200mb" no longer behave the same as "200MB".
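
To make the suspicion concrete, here is a minimal, self-contained C sketch (not ImageMagick's actual code; parse_size() is a made-up function just for illustration) of how a size parser that only matches uppercase SI prefixes would quietly turn "200mb" into 200 bytes while "200MB" becomes 200,000,000. A limit that small would push the pixel cache to disk, which would explain going from ~3s to ~40s:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical, simplified size parser: the number is read with strtod()
   and the multiplier is chosen from the suffix. With a case-sensitive
   lookup, "200mb" matches no prefix and falls through to plain bytes. */
static double parse_size(const char *value, int case_sensitive)
{
  char *suffix;
  double number = strtod(value, &suffix);
  char prefix = case_sensitive ? *suffix : (char) toupper((unsigned char) *suffix);

  switch (prefix)
  {
    case 'K': return number * 1.0e3;
    case 'M': return number * 1.0e6;
    case 'G': return number * 1.0e9;
    default:  return number;  /* no recognized prefix: plain bytes */
  }
}

int main(void)
{
  printf("case-sensitive   200mb -> %.0f bytes\n", parse_size("200mb", 1));
  printf("case-sensitive   200MB -> %.0f bytes\n", parse_size("200MB", 1));
  printf("case-insensitive 200mb -> %.0f bytes\n", parse_size("200mb", 0));
  return 0;
}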

Enjoy,
Alan
apinstein
Posts: 2
Joined: 2012-03-02T15:14:32-07:00

Re: Change in processing of -limit size options regression

Post by apinstein »

I looked at the source a bit and I think this function is where the regression was introduced:

InterpretSiPrefixValue() in MagickCore/resource.c

There have been a lot of changes in that file between 6.7.2-2 and 6.7.4-2, including, specifically, the handling of the size prefixes.

Seems highly likely that the regression was introduced in there by cristy.

I see lots of mentions of using -limit on the forums, and many people experiencing speed/performance regressions after updating; I wonder if this is a common source of that complaint.

Anyway, hopefully this additional info will lead to a fix. I took a quick look at the code, but I couldn't fix it myself in less than several hours, so cristy can probably have a quicker impact.
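
For what it's worth, the kind of change I would expect to restore the old behaviour is simply folding the prefix to one case before the lookup. Here is a hedged sketch (again, not the actual MagickCore code, and parse_limit() is an invented name) that accepts both "200mb" and "200MB" and honours an optional "i" for the binary forms such as "GiB":

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustration only: fold the prefix to upper case, then pick a decimal
   (KB/MB/GB) or binary (KiB/MiB/GiB) multiplier depending on whether an
   'i' follows the prefix letter. */
static double parse_limit(const char *value)
{
  char *rest;
  double number = strtod(value, &rest);
  char prefix = (char) toupper((unsigned char) rest[0]);
  int binary = (prefix != '\0') && (tolower((unsigned char) rest[1]) == 'i');
  double kilo = binary ? 1024.0 : 1000.0;

  switch (prefix)
  {
    case 'K': return number * kilo;
    case 'M': return number * kilo * kilo;
    case 'G': return number * kilo * kilo * kilo;
    default:  return number;
  }
}

int main(void)
{
  printf("200mb     -> %.0f bytes\n", parse_limit("200mb"));
  printf("200MB     -> %.0f bytes\n", parse_limit("200MB"));
  printf("3.5004GiB -> %.0f bytes\n", parse_limit("3.5004GiB"));
  return 0;
}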

FWIW, this bug is easy to work around by updating the -limit values to use the uppercase "SI" unit suffixes (e.g. "200MB" instead of "200mb").

Regards,
Alan