While the links provided are helpful with regard to rendering speed with bitmap caching, my main questions are actually about memory usage (on mobile phones, GPU memory is limited), so let me rephrase as simply as possible:
1. Adobe documentation states that if multiple Bitmaps are created using the same BitmapData and cached on the graphics card, then only one instance of the image is cached and shared by all of the Bitmaps. Is this affected by the initial scale of each Bitmap, though? If I have 5 Bitmaps set to identity matrices and 1 BitmapData, is only 1 instance stored? If 2 Bitmaps are at identity, 2 are scaled to twice the size, and 1 is scaled to 3 times the size, how many instances are stored in GPU memory?
2. What is the difference between setting cacheAsBitmapMatrix to new Matrix() versus to transform.concatenatedMatrix? I'm confused, because some websites use the concatenated matrix, but http://help.adobe.com/en_US/as3/mobile/WS901d38e593cd1bac-11f566412b2b29517b-8000.html uses a new Matrix() instead.
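To make the two variants concrete, here is a minimal sketch of both (mySprite is a hypothetical display object, not from the linked pages):

```actionscript
import flash.geom.Matrix;

mySprite.cacheAsBitmap = true;

// Variant A (the Adobe doc): cache the object rasterized at
// identity scale, i.e. at its natural, untransformed size.
mySprite.cacheAsBitmapMatrix = new Matrix();

// Variant B (seen on other sites): cache the object rasterized
// at its current on-stage transform, so the cached surface
// matches how the object is actually being drawn right now.
mySprite.cacheAsBitmapMatrix = mySprite.transform.concatenatedMatrix;
```

The practical difference is which resolution the cached bitmap is rasterized at: with new Matrix() the cache is taken at 1:1 scale, while with concatenatedMatrix it is taken at whatever scale the object currently has on stage.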
I think the bitmap apple link solves the first one, but I'm struggling to understand it. I found the Adobe example of the bitmap apple at http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c36c11f3d612431904db9-7ffb.html. It shows that reusing the same BitmapData never updates the data's memory, in spite of scaling or rotating. I'm struggling with figuring out how it relates to the GPU, however. I don't see the line cacheAsBitmap = true anywhere. Is the BitmapData automatically stored on the GPU? If so, does the cacheAsBitmapMatrix property need to be used to get GPU scaling and rotation, or will that slow things down? I'm currently using the same technique as described by the bitmap apple, but I'm trying to figure out whether setting cacheAsBitmap or cacheAsBitmapMatrix will help, hurt, or take up more memory.
The example above works without the cacheAsBitmap property. Instead it relies on other optimization techniques: using the Vector object instead of Array, switching stage quality between low and high for good rasterizing, and rendering the image only once into a buffer (create your image as a movie clip in the library with the class name AppleSource), which is then reused by their BitmapApple class.
Have you tried it as a sample?
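The stage-quality part of that technique can be sketched roughly like this (illustrative only; buffer and sourceClip are hypothetical names, and the exact sequence in the Adobe sample may differ):

```actionscript
import flash.display.BitmapData;
import flash.display.StageQuality;

// Rasterize the vector source once at high quality...
stage.quality = StageQuality.HIGH;
var buffer:BitmapData = new BitmapData(sourceClip.width, sourceClip.height, true, 0);
buffer.draw(sourceClip);

// ...then run the game at low quality; the Bitmaps drawn from
// "buffer" keep their high-quality pixels regardless.
stage.quality = StageQuality.LOW;
```

The idea is that stage quality only affects vector rasterization, so once the pixels are captured into the buffer, lowering the quality costs nothing visually for the cached bitmaps.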
The game I am building currently works in a similar way. Basically, I have a .png file inside my .fla, embedded as a BitmapData object. When the program launches, I create 1 instance of this BitmapData object, and then several Bitmaps referencing it. What I'm trying to figure out is:
A. Are these objects GPU optimized, or does cacheAsBitmap / cacheAsBitmapMatrix need to be set on them?
B. If these objects are not GPU optimized, will setting the above 2 properties cause multiple instances of the BitmapData to be loaded onto the GPU (one for each Bitmap), or will they still reference the one BitmapData object in GPU memory regardless of the initial cacheAsBitmapMatrix used?
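For reference, this is the setup I'm describing, as a minimal sketch (AppleSource is assumed to be the embedded BitmapData class name from the library, as in the Adobe example):

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;

// One shared BitmapData instance for the whole game.
var source:BitmapData = new AppleSource(0, 0);

// Several Bitmaps, all referencing the same underlying pixels.
for (var i:int = 0; i < 5; i++)
{
    var apple:Bitmap = new Bitmap(source);
    apple.x = i * 50;
    apple.scaleX = apple.scaleY = 1 + i * 0.5; // varying scales
    addChild(apple);
}
```

The question is whether the varying scaleX/scaleY values cause separate GPU copies once caching is enabled, or whether all five still share the single texture.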
As for basic coding techniques, like using Vectors over Arrays and other faster data structures, I'm already familiar with those. It's just the rendering that I'm trying to solve.