I've found that I can access the raw input image through params->u.ld and the output image that I can write to via the output parameter in the Render function. I get most of the information I need via the data, width, height and rowbytes members of these structs, but I can't figure out how to determine the pixel format or bit depth of the images. I would like to process the data from params->u.ld.data and write it directly to output->data but I need to know the pixel format to correctly interpret the image data.
How can I go about finding the bit depth of the input/output images? Is there a way to make my plugin only support 128 bits per pixel floating point data?
the easiest way of telling the depth of the incoming world is to check extra->input->bitdepth.
it gives any of the following values: 8, 16, 32.
the "extra" is one of the params passed to your plug-in in the smart_render call, and the depth it reports applies to both the input and output worlds.
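a minimal sketch of that check, with stand-in structs in place of the real SDK types (the actual PF_SmartRenderExtra / PF_SmartRenderInput definitions live in the AE SDK headers — verify the field names against your SDK version):

```cpp
// Stand-ins for the AE SDK types; in a real plug-in these come from the
// SDK headers and you would not define them yourself.
struct PF_SmartRenderInput { int bitdepth; };              // 8, 16, or 32
struct PF_SmartRenderExtra { const PF_SmartRenderInput* input; };

// Read the depth AE hands you in the smart_render call.
// The same depth holds for both the input and output worlds.
int DepthFromExtra(const PF_SmartRenderExtra* extra)
{
    return extra->input->bitdepth;
}
```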
the second way is a bit more cumbersome, but you can check any world with it:
PF_WorldSuite2 *wsP = NULL;
PF_PixelFormat format = PF_PixelFormat_INVALID;
ERR(suites.Pica()->AcquireSuite( kPFWorldSuite, kPFWorldSuiteVersion2, (const void**)&wsP));
ERR(wsP->PF_GetPixelFormat( world, &format));
ERR(suites.Pica()->ReleaseSuite( kPFWorldSuite, kPFWorldSuiteVersion2));
the pixel type AE uses is always ARGB.
oops, pressed send in the middle of writing...
so where were we?
the second method returns one of the following values: PF_PixelFormat_ARGB32 (8bpc), PF_PixelFormat_ARGB64 (16bpc), or PF_PixelFormat_ARGB128 (32bpc float).
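a small sketch of mapping the PF_PixelFormat values (ARGB32 / ARGB64 / ARGB128) back to bits per channel — the enum here is a stand-in with placeholder values; use the real constants from the SDK's pixel-format header, not these:

```cpp
// Stand-in for the PF_PixelFormat enum. The constant NAMES match the SDK;
// the numeric values here are placeholders, so always use the SDK header.
enum PF_PixelFormat {
    PF_PixelFormat_ARGB32,     //  8 bpc fixed
    PF_PixelFormat_ARGB64,     // 16 bpc fixed
    PF_PixelFormat_ARGB128     // 32 bpc float
};

// Translate the format PF_GetPixelFormat reported into bits per channel.
int BitsPerChannel(PF_PixelFormat fmt)
{
    switch (fmt) {
        case PF_PixelFormat_ARGB32:  return 8;
        case PF_PixelFormat_ARGB64:  return 16;
        case PF_PixelFormat_ARGB128: return 32;
    }
    return 0;   // unknown format
}
```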
as for making your plug-in support only 128bpp, you can't tell AE that, but you can tell the user.
check the bitdepth, and if it's one you don't handle, just copy the input to the output and send the user a message, or render a message on the image, or... just find a way to let the user know you're not processing.
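the copy-through fallback can be as simple as a row-by-row copy (rowbytes can differ between the two worlds, so you can't do it in one memcpy). this is a sketch with a stand-in World struct; in a real plug-in you'd use the SDK's copy facilities (e.g. the PF_COPY convenience macro) rather than raw memcpy:

```cpp
#include <cstring>

// Stand-in for PF_EffectWorld: just the fields the copy needs.
struct World { unsigned char* data; int rowbytes; int width; int height; };

// Pass the input through untouched when the depth isn't one we process.
// Assumes 8 bpc ARGB here (4 bytes per pixel) purely for illustration.
void CopyThrough(const World& in, World& out)
{
    const int bytes_per_row = in.width * 4;
    for (int y = 0; y < in.height; ++y)
        std::memcpy(out.data + y * out.rowbytes,
                    in.data  + y * in.rowbytes,
                    bytes_per_row);
}
```

in the real plug-in you'd pair this with a message to the user, e.g. via out_data's error/message string, so they know nothing was processed.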
Thanks for your answer. It seems to do exactly what I want. I've managed to convert the input data to 32 bit float, do some processing, and then convert back to the native bit depth. Before I can verify that it's working correctly, though, I need to figure out how to get my plugin to support floating point data.
Currently it is set to only support 8 bits per channel and After Effects is telling me "This effect only supports 8 bit per channel color. Using this effect in a 32 bit per channel project may reduce color precision and may remove any high dynamic range values" (this led me to think that there might be a way to tell AE that my plugin only supports 32 bits per channel). How can I make my plugin support other bit depths?
to support 16bpc you must set the PF_OutFlag_DEEP_COLOR_AWARE flag during global setup.
for 32bpc, you must set PF_OutFlag2_FLOAT_COLOR_AWARE, which also necessitates PF_OutFlag2_SUPPORTS_SMART_RENDER, which in turn means you have to use the SmartFX API.
it just means that instead of a single render call you get pre_render and smart_render calls.
shouldn't be difficult to switch to that logic.
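the flag setup described above, as a sketch — the flag names match the SDK but the bit values here are placeholders (use the constants from the SDK headers), and OutData stands in for PF_OutData. note that AE also expects the out_flags declared in your PiPL resource to match what you set at runtime:

```cpp
// Placeholder bit values; the real constants come from the SDK headers.
enum : unsigned {
    PF_OutFlag_DEEP_COLOR_AWARE       = 1u << 0,
    PF_OutFlag2_FLOAT_COLOR_AWARE     = 1u << 1,
    PF_OutFlag2_SUPPORTS_SMART_RENDER = 1u << 2
};

// Stand-in for PF_OutData: just the two flag fields.
struct OutData { unsigned out_flags = 0; unsigned out_flags2 = 0; };

// GlobalSetup additions: 16 bpc needs DEEP_COLOR_AWARE; 32 bpc float
// additionally needs FLOAT_COLOR_AWARE + SUPPORTS_SMART_RENDER.
void GlobalSetupFlags(OutData& out_data)
{
    out_data.out_flags  |= PF_OutFlag_DEEP_COLOR_AWARE;
    out_data.out_flags2 |= PF_OutFlag2_FLOAT_COLOR_AWARE |
                           PF_OutFlag2_SUPPORTS_SMART_RENDER;
}
```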