8 Replies Latest reply on Nov 25, 2010 2:06 AM by verpies

Chris,

What is the formula that you use to calculate the size of Photoshop bitmap files and the BITMAPFILEHEADER.bfSize member in these files?

I am asking because I am using the formula below and my BITMAPFILEHEADER.bfSize comes out 2 bytes shorter than the bitmap files produced by Photoshop.

BITMAPFILEHEADER.bfSize = sizeof(BITMAPFILEHEADER) + BitmapInfoHdr.biSize + cbExtraBitFields + cbColorTable + cbGap + BitmapInfoHdr.biSizeImage;

where:

cbExtraBitFields = (BitmapInfoHdr.biSize == sizeof(BITMAP*INFOHEADER)) ? ((BitmapInfoHdr.biCompression == BI_BITFIELDS) ? 3*sizeof(RGBQUAD):0) : 0;

BitmapInfoHdr.biSizeImage = ( (BitmapInfoHdr.biCompression == BI_RGB) || (BitmapInfoHdr.biCompression == BI_BITFIELDS) )
    ? (ROUNDUP32(BitmapInfoHdr.biWidth * BitmapInfoHdr.biBitCount) >> 3) * ((DWORD)(abs(BitmapInfoHdr.biHeight)))
    : BitmapInfoHdr.biSizeImage;
// For BI_RGB or BI_BITFIELDS, round the row width up to the nearest DWORD boundary. Shift by 3 because there are 8 bits in a byte (2^3 = 8).

NOTES:
// The variable name prefix cb... denotes "Count of Bytes".
BITMAP*INFOHEADER BitmapInfoHdr;  //The Declaration of BitmapInfoHdr. Will not compile because of the wildcard asterisk
#define ROUNDUP32(x) ( ((x) + 0x1F)  & (~(0x1F)) )  //The macro to round up to the next multiple of 32.

FOR EXAMPLE:

The following file is a 32bpp, 32x32px image without an Alpha channel generated by Photoshop:

http://1dollarsoftware/32bpp.bmp

According to my formula:

BITMAPFILEHEADER.bfSize = 0x0e + 0x28 + 0x00 + 0x00 + 0x00 + 0x1000;  // == 0x1036 bytes

Yet in this file the BitmapInfoHdr.biSizeImage == 0x1002 and consequently BITMAPFILEHEADER.bfSize  == 0x1038 bytes.  Why ?

At 32bpp, the 32x32px image should take 32*32*4=4096 bytes (0x1000).

I don't understand the origin of the number 0x1002 appearing in that file.  It is not even a multiple of a DWORD (4 bytes) - not that it needs to be a multiple of 4 for file-storage purposes.

Regards,

George

P.S.

I noticed that in Photoshop bitmap files the cbGap == 0  (which is OK).

Hmm, looks sort of like this:

save position of the size field

save position of dataStart field

write image contents

if compression is RLE4 or RLE8, write the end-of-bitmap marker (the two escape bytes 0x00, 0x01)

pad the current position to a multiple of 4 (writing bytes of zero), save that position as EOF

seek back to beginning of file+2 and write the EOF position DWORD as the total file size

seek to the size field position and write (EOF - dataStart) as the image size DWORD

There are also a few notes in the code about observations that most BMP implementations just ignore the size fields.

I think it should be:

"...pad the size of data to a multiple of 4..."

instead of:

"...pad the current position to a multiple of 4..."

NOTE:

By "data" I mean the Pixel Array (a.k.a. "image contents", including the RLE end-of-bitmap marker 0x00, 0x01).

Padding the size value and failing to write bytes for that adjusted size could cause problems.

That's why we pad the position by writing zeros.

I never meant to suggest the omission of padding the data with zeros.

Also, I did not suggest rounding up only the value of the BITMAP*INFOHEADER.biSizeImage member to a multiple of 4 bytes.


I suggested rounding up the size of the "data" to a multiple of 4 bytes.

This is accomplished by padding the "data" with zeros as needed to reach this desired size.

The distinction is between "rounding up the size of data" and "rounding up the position of data's end".

Both are accomplished by padding the "data" with zeros as needed.

Regards,

George

P.S.

For the purpose of calculating BITMAPFILEHEADER.bfSize, the BITMAP*INFOHEADER.biSizeImage member should reflect the exact size of the "data" (including the padding zeros).

Again, we extend the data to a multiple of 4, and account for that in the size.

You suggested what I already said we did...

If that was the case then the 0x1000 bytes of "data" in my example would not have been extended to 0x1002, because 0x1000 was already a multiple of 4 bytes.

It seems as if you are rounding up the offset of the data's end - not the data size.  A significant difference.

Worst case, you can play with different-size images or image content to see exactly what we're doing.

And what we're doing has been working well with other apps for about 18 years now.