As far as I understand it, the solution would be to split your function into smaller functions and let the playhead advance between executing them. That might not be easy in your case, but in the end it gives you the opportunity to provide visual feedback to the audience, which is also very useful.
Splitting the loop can always be done. You only need to place your nested loops in a function (which they already are), have each of your five loops take a parameter as its start value, and put a timing check in your innermost loop. That timing check triggers a call to a function (which lets Flash catch its breath) and breaks out of your loops. The nested loops are then re-entered where they left off and continue until the next timing violation, and so on.
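To make the pattern concrete, here is a minimal sketch in JavaScript (ActionScript is ECMAScript-based, so the structure carries over). All names (`processChunk`, `BUDGET_MS`, the grid sizes) are hypothetical; in Flash you would re-enter via `onEnterFrame` or `setInterval` rather than `setTimeout`:

```javascript
// Hypothetical sketch of the chunked-loop pattern described above.
// Two nested loops resume from saved start indices (i0, j0) and yield
// control whenever a time budget is exceeded.

const WIDTH = 200, HEIGHT = 200;   // assumed problem size
const BUDGET_MS = 50;              // let the player breathe every ~50 ms

let unitsDone = 0;                 // stands in for the real work

function processChunk(i0, j0, onDone) {
  const start = Date.now();
  for (let i = i0; i < HEIGHT; i++) {
    // Resume the inner loop at j0 only on the row we were interrupted in.
    for (let j = (i === i0 ? j0 : 0); j < WIDTH; j++) {
      unitsDone++; // ... do one unit of real work here ...
      if (Date.now() - start > BUDGET_MS) {
        // Time's up: schedule re-entry where we left off and bail out.
        // In Flash this would be onEnterFrame or setInterval instead.
        setTimeout(() => processChunk(i, j + 1, onDone), 0);
        return;
      }
    }
  }
  onDone(); // every iteration has been processed
}
```

Note that the timing check sits in the innermost loop, exactly as described: the deeper loops never run long enough between checks to freeze the player.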
kglad's solution seems to be a very fine one (as always!), but I would still like to emphasize my point: in most cases where very long loops are required, a solution that gives the user visual feedback is very important.
Of course you can do that with kglad's way too.
As Flash is very CPU-intensive, it tends to soak up all the processing power of the machine when running long loops, so the user's computer appears to be "stuck". If that lasts longer than a second or two it can be very annoying, so if you (have to) make the user wait, it is a question of good manners to inform him about what's going on and give him a chance to, e.g., check his mail in the meantime. I personally think it is preferable to wait 30 seconds while being informed and able to read mail, rather than 15 seconds being stuck and not knowing why.
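The feedback itself can be as simple as updating a text message each time the chunked loop yields. A tiny sketch (the function name is hypothetical; in Flash the string would go into a `TextField` or drive a progress-bar MovieClip):

```javascript
// Hypothetical helper: format a progress message to show the user
// between chunks, so the screen visibly changes instead of freezing.
function progressMessage(done, total) {
  const pct = Math.round((done / total) * 100);
  return "Processing: " + pct + "%";
}
```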
One other thing:
In your case, it might be that the whole approach of using such monstrous XML files is flawed from the very start.
Even if you can make it work, sometimes it is much better to go back a step or two and fix the problem before it occurs.
XML has its advantages: for one thing, it is easy to use and human-readable. But its disadvantages are that it carries a lot of overhead and slows down performance. So for really big chunks of data, a database is (IMHO) in most cases the better choice.
This is just a general hunch I have here.