1 Reply Latest reply on Oct 24, 2012 12:18 PM by Gaius Coffey

    A counter-intuitive optimisation result... can anybody shed some light?

    Gaius Coffey Level 2


  I've been trying to make an asynchronous PNG encoder, as I am encoding some big and complex images that tie up even a good machine for several seconds. So, I thought, it would be easy to split the processing of the ByteArray into chunks and process a new chunk on each Event.ENTER_FRAME.
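  Roughly what I mean (a simplified sketch, not my actual code — the names and the pixel-writing loop are illustrative, the real encoder also writes filter bytes and CRCs):

```
import flash.display.BitmapData;
import flash.events.Event;
import flash.utils.ByteArray;

private var png:ByteArray;       // class member, appended to across frames
private var img:BitmapData;
private var row:uint = 0;
private const CHUNK:uint = 100;  // rows processed per frame

private function onEnterFrame(e:Event):void {
    var last:uint = Math.min(row + CHUNK, img.height);
    for (; row < last; row++) {
        for (var x:uint = 0; x < img.width; x++) {
            // simplified: real code writes a filter byte per row etc.
            png.writeUnsignedInt(img.getPixel32(x, row));
        }
    }
    if (row >= img.height) {
        removeEventListener(Event.ENTER_FRAME, onEnterFrame);
        png.compress(); // <-- the call that is blowing up
    }
}
```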




      It was easy to implement (thanks Adobe for giving access to your source code!) but the results were counter-intuitive:


      1. Encoding a PNG image in its entirety: Approximately 1 second.

      2. Encoding a PNG image in chunks of 100 lines: Approximately 1.5 seconds PER CHUNK and then FOUR or FIVE seconds to compress at the end!


      When I tried this on AIR for Android, the results were even worse with the final ByteArray.compress() call taking more than FORTY seconds!


      This is obviously barmy: the final .compress() call on its own is taking a multiple of the time the whole job _INCLUDING_ the .compress() call takes when run in one go.


      The ByteArray in Adobe's encoder is a temp variable encased in a single function, whereas the ByteArray in my version is a class member that is populated at the start of the encode and then appended to on each successive Event.ENTER_FRAME. That is the ONLY difference I can see at the moment (trust me, I didn't go messing with that code too much... it is strongly referencing the original).

      Is there any other reason people can see for the massive difference in performance between what are essentially the exact same lines of code?
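  For clarity, here is the structural difference I'm describing, side by side (an illustrative sketch, not the real source):

```
// Adobe's PNGEncoder: the ByteArray is local to one function,
// written and compressed in a single synchronous pass.
public static function encode(img:BitmapData):ByteArray {
    var png:ByteArray = new ByteArray();
    // ... write signature, IHDR, pixel data ...
    png.compress();
    return png;
}

// My version: the same write calls, but the ByteArray is an
// instance member that survives (and grows) across many frames,
// and compress() runs long after the last write.
private var png:ByteArray;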