IM not detecting alpha channel in certain .tga files

Greg

IM not detecting alpha channel in certain .tga files

Post by Greg »

Seems recent versions of IM have trouble recognizing the alpha channel in .tga files created by LightWave.

This test image is a .tga from LightWave that contains 8-bit R/G/B/A, and displays normally in e.g. XnView (a free image viewer which can show alpha channel information) and eog(1) ('Eye of GNOME'):
http://seriss.com/people/erco/tmp/BATCH ... 2_0001.tga

'identify' from old versions of IM (e.g. Linux / IM 6.0.1) recognizes the alpha channel, but more recent versions can't discern it, thinking it's an RGB image without alpha.

I first noticed this when trying to composite the images, and found the alpha channel was being dropped in the comps.

Here's output from various versions of IM run on the same image, see 'GOOD' vs. 'BAD' notes:

Code: Select all

******* IM 6.0.1 (linux) *******
Image: BATCHTEST_SIMPLE_R01_022_0001.tga
  Format: TGA (Truevision Targa image)
  Geometry: 1920x1080
  Class: DirectClass
  Colorspace: RGB
  Type: TrueColorMatte
  Depth: 8 bits
  Endianess: Undefined
  Channel depth:
    Red: 8-bits
    Green: 8-bits
    Blue: 8-bits
    Opacity: 8-bits     <-- GOOD: detects alpha
  Channel statistics:
    Red:
      ..min/max/mean snipped..
    Green:
      ..min/max/mean snipped..
    Blue:
      ..min/max/mean snipped..
    Opacity:              <-- GOOD: detects alpha
      ..min/max/mean snipped..
  Opacity: (19275,14906,13364,65535)      #4B4B3A3A3434FFFF
      ..

******* IM 6.2.4 (linux) *******
BATCHTEST_SIMPLE_R01_022_0001.tga TGA 1920x1080 DirectClass 113kb
Image: BATCHTEST_SIMPLE_R01_022_0001.tga
  Format: TGA (Truevision Targa image)
  Geometry: 1920x1080
  Class: DirectClass
  Type: TrueColor
  Endianess: Undefined
  Colorspace: RGB
  Channel depth:
    Red: 8-bits
    Green: 8-bits
    Blue: 8-bits               <-- BAD: No alpha
  Channel statistics:
    Red:
      ..min/max/mean snipped..
    Green:
      ..min/max/mean snipped..
    Blue:
      ..min/max/mean snipped..
  Colors: 1267                <-- BAD: No alpha
      ..

******* IM 6.3.5 (windows) *******
Image: BATCHTEST_SIMPLE_R01_022_0001.tga
  Format: TGA (Truevision Targa image)
  Class: DirectClass
  Geometry: 1920x1080+0+0
  Type: TrueColor
  Endianess: Undefined
  Colorspace: RGB
  Depth: 8-bit
  Channel depth:
    Red: 8-bit
    Green: 8-bit
    Blue: 8-bit                           <-- BAD: no alpha
  Channel statistics:
    Red:
      ..min/max/mean snipped..
    Green:
      ..min/max/mean snipped..
    Blue:
      ..min/max/mean snipped..
  Rendering intent: Undefined     <-- BAD: no alpha
      ..
****************************************************************************
Because of this problem we can't properly comp LightWave tga files; the alpha gets dropped.
Last edited by Greg on 2009-01-15T02:45:31-07:00, edited 1 time in total.
Greg

Re: IM not detecting alpha channel in certain .tga files

Post by Greg »

I think the problem is in coders/tga.c (IM 6.4.8, current svn). This bit of logic uses the low 4 bits of tga_info.attributes to decide whether there's a matte channel or not:

Code: Select all

image->matte=(tga_info.attributes & 0x0FU) != 0 ? MagickTrue : MagickFalse;
In the case of the example LightWave image, tga_info.attributes is 0x20, and tga_info.bits_per_pixel is 32. This causes the above logic to think there's no matte channel when in fact there is one present.
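
For reference, here's a little sketch of my own (not code from tga.c; the values are just the ones from the sample file) showing what that descriptor byte encodes according to the TGA spec:

Code: Select all

/* Sketch only (not from tga.c): what the TGA image-descriptor byte
   (header byte 17) encodes, per the Truevision TGA spec:
     bits 0-3  number of attribute (alpha) bits per pixel
     bit  4    right-to-left pixel ordering
     bit  5    top-to-bottom row ordering (origin at top-left)   */
#include <stdio.h>

int main(void)
{
  unsigned char attributes=0x20;    /* value seen in the LightWave sample */
  unsigned char bits_per_pixel=32;  /* value seen in the LightWave sample */

  printf("declared alpha bits : %u\n",attributes & 0x0FU);             /* 0 */
  printf("top-left origin bit : %u\n",(attributes >> 5) & 1U);         /* 1 */
  printf("32 bits per pixel   : %u\n",bits_per_pixel == 32 ? 1U : 0U); /* 1 */
  return 0;
}
So LightWave sets the top-left-origin bit but leaves the declared alpha depth at 0, even though the 32-bit pixels clearly include an alpha byte; that's exactly what trips up the check above.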

It would seem that tga_info.attributes alone might not be a reliable way to determine whether a matte channel is present. Maybe the check should also take tga_info.bits_per_pixel into account, possibly like:

Code: Select all

    /* existing check: low 4 bits of the descriptor declare the alpha depth */
    image->matte=(tga_info.attributes & 0x0FU) != 0 ? MagickTrue : MagickFalse;
    /* fallback: treat any 32-bit TGA as carrying an alpha channel */
    if ((image->matte == MagickFalse) && (tga_info.bits_per_pixel == 32))
      image->matte=MagickTrue;
..but I'm not sure.
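
To sanity-check the idea, here's a small standalone dumper I put together (my own test code, not part of IM; you'd pass it the sample .tga) that reads the raw 18-byte header and prints what each rule would decide:

Code: Select all

/* Standalone sketch (not IM code): dump the TGA header fields that matter
   here and show what the current and proposed matte checks would decide.
   Usage: ./tgacheck BATCHTEST_SIMPLE_R01_022_0001.tga                    */
#include <stdio.h>

int main(int argc,char **argv)
{
  unsigned char header[18];
  FILE *fp;

  if (argc != 2)
    { fprintf(stderr,"usage: %s file.tga\n",argv[0]); return 1; }
  fp=fopen(argv[1],"rb");
  if (fp == NULL)
    { perror(argv[1]); return 1; }
  if (fread(header,1,sizeof(header),fp) != sizeof(header))
    { fprintf(stderr,"short read on TGA header\n"); fclose(fp); return 1; }
  fclose(fp);

  {
    unsigned char bits_per_pixel=header[16];  /* pixel depth           */
    unsigned char attributes=header[17];      /* image descriptor byte */
    int old_matte=((attributes & 0x0FU) != 0);
    int new_matte=(old_matte || (bits_per_pixel == 32));

    printf("bits_per_pixel : %u\n",(unsigned) bits_per_pixel);
    printf("attributes     : 0x%02X (declared alpha bits: %u)\n",
      (unsigned) attributes,attributes & 0x0FU);
    printf("matte, current rule  : %s\n",old_matte ? "yes" : "no");
    printf("matte, proposed rule : %s\n",new_matte ? "yes" : "no");
  }
  return 0;
}
For the LightWave sample this should report attributes 0x20 with 32 bits per pixel, i.e. no matte under the current rule and matte under the proposed one.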

Update: I decided to check the tga source code from a different image library, SDL_image ( http://www.libsdl.org/projects/SDL_image/index.html ). In that library's "IMG_tga.c" file, it seems the presence of an alpha channel is determined entirely by the pixel depth (hdr.pixel_bits) being == 32. Excerpt follows (comments in caps are my own):

Code: Select all

int alpha = 0;   // ASSUME NO ALPHA
..
    switch(hdr.pixel_bits) {
        ..
    case 32:
        alpha = 1;     // ONLY PLACE ALPHA SET TO '1'
        /* fallthrough */
    case 24:
        if(SDL_BYTEORDER == SDL_BIG_ENDIAN) {
            int s = alpha ? 0 : 8;
            ..
        } else {
            amask = alpha ? 0xff000000 : 0;
            ..
..so it does seem like IM's current tga.c might have it wrong.

Interestingly, the older releases of IM (6.0.1) don't seem to have this problem. Unfortunately I can't find the source for these older releases to see if there was a regression of some kind.

Anyway, hope the above helps.
magick
Site Admin

Re: IM not detecting alpha channel in certain .tga files

Post by magick »

We will get a patch into the Subversion trunk for this problem by sometime tomorrow. Thanks.