Thanks for the compliment, really appreciated.
Your earlier remarks about synthetic testing, and that it is better to use actual footage, set me on this path. I have been lucky to have various people give me all kinds of material to include in the test. Because of practical limits on download size and test times, I had to leave out the DPX and Canon 7D material, but I think it is a nice improvement, and once we get more data, an MPE performance chart may be a nice addition.
Is the new suite posted somewhere? No luck finding it 'round here.
We have footage from a lot of people as well, in numerous formats.
It beats renting cams!
Scott, let me explain my thinking in setting up this test.
Our purpose for the CS5 benchmark was to show, with actual footage from various cameras and in various formats (since many people use multiple formats and cameras in their projects):
1. the effect of MPE, the game changer in Adobe's terms;
2. the real CPU load when editing difficult and less difficult footage and when exporting (I mean, everybody does that) to two very common formats, DVD and BRD.
That left us in a bind, because we really liked the old disk-intensive test for showing whether one's I/O setup was up to the task and not causing common problems like jerky playback. So we had to come up with some sort of test that would use minimal CPU overhead but would still tax the I/O system.
So we used clips with the easiest codec available in PR, MS DV AVI Type 2, and left out all effects and transitions to reduce the CPU overhead. We used nearly 550 instances of a single clip, so there would be a lot of disk reading of small files plus the sequential writing of one very large (13 GB) file. That was simply the best disk-intensive test we could come up with inside PR. It says nothing about DV being nearly extinct or not; it was simply a choice to remove as much as possible of the overhead of a more complex codec like MPEG.
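For anyone curious what that I/O pattern looks like in the abstract, here is a toy Python sketch of the same workload shape: many small clip reads followed by one large sequential export write. The file names, sizes, and chunk size are placeholder assumptions for illustration only; the real test uses ~550 clip instances and a ~13 GB export, and runs entirely inside PR, not in a script.

```python
import os
import tempfile

def disk_pattern_demo(num_clips=10, clip_size=1024, export_size=16 * 1024):
    """Toy analogue of the disk test: many small reads, one big sequential write.

    All sizes are tiny placeholders, NOT the real benchmark figures.
    Returns (total bytes read, total bytes written).
    """
    with tempfile.TemporaryDirectory() as workdir:
        # Create the small "clip" files the timeline would reference.
        for i in range(num_clips):
            with open(os.path.join(workdir, f"clip_{i}.avi"), "wb") as f:
                f.write(os.urandom(clip_size))

        # Read every clip, as the renderer would while playing the timeline.
        bytes_read = 0
        for i in range(num_clips):
            with open(os.path.join(workdir, f"clip_{i}.avi"), "rb") as f:
                bytes_read += len(f.read())

        # Sequentially write one large "export" file in fixed-size chunks.
        bytes_written = 0
        chunk = os.urandom(4096)
        with open(os.path.join(workdir, "export.bin"), "wb") as f:
            while bytes_written < export_size:
                f.write(chunk)
                bytes_written += len(chunk)

    return bytes_read, bytes_written

print(disk_pattern_demo())  # → (10240, 16384)
```

The point of the pattern is that neither phase does any real computation, so a slow result points at the disk subsystem rather than the CPU.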
Add to that dilemma that the download size had to be manageable and the testing times not too long, and we figured this was the best and most realistic approach. There are other considerations, in addition to what Bill mentioned, that forced us to use such a long (1-hour) timeline; Adobe is aware of them and working on it.
For the rest of the explanations, I ask you to have some patience, as Bill and I are currently writing the texts for the new site.
Before you run off and try it: this is still a beta version, and there are later builds that are not yet available on the site. Please be patient. We would rather have a stable and reliable test than one that shows different results because some runs used version 2 and others version 3, with different scripts.
It is in everyone's interest that the test produces consistent figures you can rely on.
We would like to publish the test as well, but circumstances beyond our control keep us from publishing just yet. The major reason is that improvements may be coming that will heavily impact test results. It would lead to confusion if earlier results dropped to the bottom of the chart, and it would require renewed testing, and more work on our side as well.
If you are obstinate and still want to run the test, your results will not be included at this time, so they will not show up.
My apologies for posting the link.