
ImageMagick convert problems with PSB files

Posted: 2011-02-04T14:11:20-07:00
by paulheckbert
I am running ImageMagick version 6.6.7.

BUGS

If you give convert a corrupt or truncated PSB file, in many cases it does not raise an error, but silently outputs a garbage result.
For example, I have a 2MB PSB file which, when truncated to sizes as small as 64KB, still runs through convert without error (as in 'convert -flatten x_trunc64KB.psb ppm:-').
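Here is roughly how I tested this (a sketch; the exact file name, byte count, and use of head are just illustrative):

  head -c 65536 x.psb > x_trunc64KB.psb                             # truncate the 2MB file to 64KB
  convert -flatten x_trunc64KB.psb ppm:- > garbage.ppm && echo OK   # prints OK; no error is raised

I would expect a nonzero exit status, or at least a warning, for such a badly truncated file.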

Convert doesn't do -flatten correctly on 16-bit PSB channels.
If I run 'convert -flatten x_16bit.psb z.ppm' on a PSB file that was saved with 16-bit channels and with 'maximize compatibility' off, convert apparently doesn't read and composite the layers as it does with 8-bit channels. Instead it reads the file's image data section, which contains a picture of the words "This layered Photoshop file was not saved with a composite image". That image data was put there by Photoshop, not ImageMagick. Compositing fails like this even if ImageMagick was built with quantum depth 16. It would be nice if ImageMagick did proper compositing itself in this case, or at least gave an error message when its composite is bogus.
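A quick way to see what the reader actually produced (file names as in the example above; the 8-bit file is just for comparison; this relies on identify printing one line per sub-image):

  identify x_16bit.psb
  identify x_8bit.psb

That should make it obvious whether the 16-bit reader skipped the layers entirely, or read them and then failed only at the compositing step.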

Some PSB files cause convert to crop incorrectly.
I have a PSB file (generated by AutoPano Giga version 2.0.9) whose header says "version=2 nchannels=4 depth=8 color_mode=3" (a more typical Photoshop-generated PSB has "version=2 nchannels=3 depth=8 color_mode=3"), and whose layers extend outside the [0..width)x[0..height) rectangle defined by the header. When I run "convert -flatten" on this PSB file, the size of the output is not width*height but rather max_layer_width*max_layer_height. This is inconsistent with Photoshop (version CS5), which flattens the same PSB file to the header size. Using the convert args "-crop widthxheight -flatten" changes nothing, but "-flatten -crop widthxheight" works around the bug, although I don't think I should have to use -crop; see the commands below.
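To spell out the orderings I tried (the file name here is made up; widthxheight stands for the canvas size from the header):

  convert -flatten x_autopano.psb z.ppm                       # output is max_layer_width x max_layer_height (wrong)
  convert -crop widthxheight -flatten x_autopano.psb z.ppm    # same wrong size
  convert -flatten -crop widthxheight x_autopano.psb z.ppm    # header size, but -crop shouldn't be needed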

INCONVENIENCES

Convert is inefficient when reading PSB files.
If the file has n layers, the memory or disk space required to flatten them is (total_pixels_in_all_layers + 2*width*height) * 4 * quantum_depth/8 bytes.
For large images this forces ImageMagick to use its disk cache, which is very slow and can fill up your disk.
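To put rough numbers on that (a made-up but plausible example): a 50,000 x 25,000 pixel panorama is 1.25 gigapixels. If its layers together hold another 1.25 gigapixels, then at quantum depth 16 the formula gives (1.25e9 + 2*1.25e9) * 4 * 2 bytes, roughly 30 GB, far more than fits in memory on most machines, so the pixel cache spills to disk.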

A different algorithm could be used when the input is a file: either open the PSB file three times to read R, G, and B in parallel, or save decompression contexts during a first-pass read, in a manner that allows the code to do:
for sequence of horizontal strips {
    for layers {
        for color channels {
            seek to beginning of the relevant data
            read & decompress this channel of this layer's strip
            composite this into accumulator strip
        }
    }
    output strip
}
Then the conversion could be done with much less memory/disk (memory of 2*width*strip_height*4*quantum_depth/8 bytes), and in the case where that makes the difference between a disk cache and a memory cache, the speedup could be dramatic (100x or more?).
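Continuing the made-up example above: with a 50,000-pixel-wide canvas, quantum depth 16, and a strip height of, say, 256 scanlines, the working set would be 2 * 50000 * 256 * 4 * 2 bytes, roughly 200 MB, versus roughly 30 GB for the current approach, so the whole job could stay in memory.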