01. I am attempting to blend two ImageSnapshots.
02. If I write those out as .PNG files, I get the blend I want from the custom PBK kernel that I have written and tested in the Pixel Bender Toolkit 2.5.
03. The snapshots are produced in an AIR application, and the goal is to perform the blend in a background ShaderJob.
04. I have no trouble creating the Shader from the embedded PBJ, but I am quite confused about how to configure the ShaderJob. The first error I received was #2166, which sent me back to the documentation on ShaderInput, which, admittedly, I had not been able to understand:
"If the shader is being executed using a ShaderJob instance to process a ByteArray containing a linear array of data, set the ShaderInput instance's height to 1 and width to the number of 32-bit floating point values in the ByteArray. In that case, the input in the shader must be defined with the image1 data type."
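For context, here is a simplified sketch of how I am constructing the job. The identifiers are placeholders: `BlendPBJ` stands for my embedded PBJ class, and `src1`/`src2` stand in for my kernel's actual input names.

```actionscript
import flash.display.Shader;
import flash.display.ShaderJob;
import flash.utils.ByteArray;

// Simplified sketch of my current setup (identifiers are placeholders).
[Embed(source="blend.pbj", mimeType="application/octet-stream")]
private static const BlendPBJ:Class;

private function makeJob(bytesA:ByteArray, bytesB:ByteArray,
                         w:int, h:int):ShaderJob
{
    var shader:Shader = new Shader(new BlendPBJ() as ByteArray);

    // The kernel declares two image4 inputs named src1 and src2.
    shader.data.src1.input = bytesA;
    shader.data.src2.input = bytesB;

    var result:ByteArray = new ByteArray();
    return new ShaderJob(shader, result, w, h);
}
```

It was this version, with no width or height set on the inputs, that raised #2166.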
05. Since my kernel works with input sources typed as image4 [at least when run against the file versions of the bitmaps], I did not want to change the kernel; instead, I took the error message to mean that I had failed to provide width and height values for the two ShaderInputs. Supplying them produced error #2165, and that is why I am posting. Clearly I need to understand how to configure the job to run against the in-memory image data, which evidently differs from how it runs in the Toolkit IDE.
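Concretely, the change that produced #2165 was along these lines (again, `src1`/`src2` are placeholders; `snapA`/`snapB` are my two ImageSnapshots, and the dimension values are exactly what I am unsure about):

```actionscript
// My attempt at supplying explicit dimensions for the two inputs.
// snapA and snapB are ImageSnapshots; their data property is a ByteArray.
var inA:ShaderInput = shader.data.src1;
inA.input  = snapA.data;
inA.width  = snapA.width;   // pixel width? or something derived from length?
inA.height = snapA.height;

var inB:ShaderInput = shader.data.src2;
inB.input  = snapB.data;
inB.width  = snapB.width;
inB.height = snapB.height;
```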
A. Is the phrase "a ByteArray containing a linear array of data" meant to indicate that there are some ByteArrays which do contain a linear array of data and other ByteArrays which do not, or does it mean that any use of a ByteArray results in a linear array of data as seen by the kernel's input handler? If it is the former, can you provide some guidance on how to tell when a ByteArray does or does not contain a linear array of data? More specifically, for my use case, is the ByteArray available as a property of an ImageSnapshot linear or non-linear?
B. Assuming that my ByteArray must be seen by the kernel's input handler as a linear string of bytes, so that I must rewrite my kernel to handle image1 rather than image4 input, does that require any changes to my use of the intrinsic outCoord function or of sampleNearest? [Searching this forum for ShaderJob indicates that there was a bug concerning image1 versus image4 in an earlier version. Is that fixed, or are there still 'disconnects' that must be mastered to handle ByteArrays in shader jobs successfully?] Or, as it seems, does sending in a linear stream only change the behind-the-scenes parallelization strategy the Flash Player uses when running the ShaderJob? Since I plan to run the job asynchronously, I am not too concerned about elapsed time, but are there other configuration options worth considering that might yield greater throughput on the multi-core hosts where my application will usually run?
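For reference, the sampling in my kernel currently looks like this (simplified; the real blend math is more involved, and the kernel/input names here are placeholders):

```pixelbender
<languageVersion : 1.0;>
kernel SnapshotBlend
<   namespace : "example";
    vendor    : "me";
    version   : 1;
>
{
    input image4 src1;
    input image4 src2;
    output pixel4 dst;

    void evaluatePixel()
    {
        // Sample both sources at the current output coordinate and blend.
        float2 pos = outCoord();
        dst = mix(sampleNearest(src1, pos),
                  sampleNearest(src2, pos),
                  0.5);
    }
}
```

My question B, restated against this code: if the inputs become image1, do outCoord() and sampleNearest still behave as they do here?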
C. Alternately, if a ByteArray may, under some circumstances, be seen by the kernel's input handler as non-linear, what values should input.width and input.height be given? Again, moving from the general to the specific: if there is any guidance on specifying the dimensions of a ByteArray created by ImageSnapshot, I would appreciate that advice.
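To make question C concrete, these are the two interpretations I am weighing for an image that is w×h pixels (the length/4 in the second option only assumes 4 bytes per 32-bit float, per the documentation quoted above):

```actionscript
// Option 1: treat the ByteArray as a 2-D image4 source.
input.width  = w;
input.height = h;

// Option 2: treat it as linear image1 data, per the quoted docs.
input.width  = byteArray.length / 4;  // number of 32-bit float values
input.height = 1;
```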