I currently have this configuration for my CS5: Intel Core i7 Extreme, 12GB memory, GTX480:
1 - 1 x 300 GB OS Drive with the Adobe Master Collection installed.
2 - 4 x 1TB = 4TB Raid3 (Areca ARC-1680D-IX 12-ports SAS RAID Controller)
I have 2 open ports on the Areca raid controller.
If I get four more 1TB drives, should I set up two Raid3's -- one raid for "read" and the other for "write"?
I assume that would be the approach. Would that improve my export time as well as my rendering time?
Rowby, it is a lot easier to create a second raid3 and then stripe the two into a single raid30. You don't need to worry about separating reads and writes, and organizationally it is easier too. You have the same redundancy, and because of the striping you get the same performance as your initial idea.
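The capacity and redundancy arithmetic behind that suggestion can be sketched quickly. This is just back-of-the-envelope math with the drive counts from the thread; the helper names are my own:

```python
DRIVE_TB = 1  # 1 TB drives, as in the setup above

def raid3_usable(n_drives, size_tb=DRIVE_TB):
    """RAID 3: one drive's worth of capacity goes to dedicated parity."""
    return (n_drives - 1) * size_tb

def raid30_usable(n_sets, drives_per_set, size_tb=DRIVE_TB):
    """RAID 30: a RAID 0 stripe across several RAID 3 sets."""
    return n_sets * raid3_usable(drives_per_set, size_tb)

# Current 4-drive RAID 3:
print(raid3_usable(4))      # 3 TB usable, survives 1 drive failure

# Proposed 8 drives as RAID 30 (2 x 4-drive RAID 3, striped):
print(raid30_usable(2, 4))  # 6 TB usable, survives 1 failure per set
```

Either way you spend one drive per RAID 3 set on parity; the raid30 just pools the remaining capacity into a single volume instead of two separately managed arrays.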
You will have better performance, but the law of diminishing returns applies.
reading and writing to 2 separate drives or arrays is better than 1
how you raid them is up to you.
for each raid 5 you lose 1 disk to parity
personally i prefer a raid 0 and a daily backup over raid 5
however there is a nice comfort to being double redundant (my servers are)
as cheap as drives are now you might as well go for the raid 5!
(and a daily backup) :-)
I perfectly understand why you are an advocate of raid5, because it does not require a costly controller like the Areca, but if one has an Areca, does it not make more sense to use the raid3 capabilities that are available instead of raid5? You have seen that my raid30 performs reasonably well, and in the past you remarked something like: those are fabulous figures...
Raid5 is somewhat slower than raid3 for video editing and faster for I/O applications with lots of transactions, but the advantage of raid3 (if you have the controller) is much faster rebuilds, in case disaster strikes.
i prefer the Intel card, better management
software and less cost vs Areca/Atto.
again someone who needs a 3-4 drive array (x 2) absolutely needs a high end raid card.
as far as raid 5/3 the whole point to that is redundancy.
with 5 parity is spread across all drives
with 3 parity is on 1 drive. if that drive dies OUCH...
3 vs 5: when is the last time you benchmarked this? we didn't see much difference and like the better protection of 5
"with 3 parity is on 1 drive. if that drive dies OUCH..."
It just rebuilds the parity data from the original data. No OUCH...
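The reason there is no OUCH is that parity in RAID 3 and RAID 5 is the XOR of the data blocks, so any single missing block, including the parity block itself, can be recomputed from the survivors. A minimal sketch with made-up byte values:

```python
from functools import reduce

data_blocks = [0b10110010, 0b01101100, 0b11110000]  # blocks on 3 data drives
parity = reduce(lambda a, b: a ^ b, data_blocks)    # stored on the parity drive

# Simulate losing data drive 1 and rebuilding it from the survivors + parity:
survivors = [data_blocks[0], data_blocks[2], parity]
rebuilt = reduce(lambda a, b: a ^ b, survivors)
assert rebuilt == data_blocks[1]

# Losing the dedicated RAID 3 parity drive is no worse: the parity is
# simply recomputed from the intact data drives.
assert reduce(lambda a, b: a ^ b, data_blocks) == parity
```

The difference between the levels is only where that parity lives (one dedicated drive for raid3, rotated across all drives for raid5), not whether a single failure is survivable.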
here is a bit more on drive array speeds, what to expect, and what not to waste money on.
pretty sure i had posted all my benchmarks before:
980X at 3.33GHz
12GB Redline at 1600 CL 6
Intel SAS 600 Controller
4x WD 1TB SATA 600 64MB cache drives in 2 Raid 0 arrays, reading from 1 set, writing to the other
Video material - AVCHD 1080p 24 frame, each cut to 30 minutes of material
Export Codec - H264 HDTV 1080P 24 Preset Default
4 Effects per Layer - Fast Color Corrector, Brightness & Contrast, Video Limiter, Sharpen
Each Layer Scaled to 50% for 4 frame PinP view.
3 layer 42:39
4 layer 47:34
4-drive raid 0 (2 sets), reading from 1, writing to the other
3 layer 41:47
4 layer 55:02
2 Raid 0 arrays reading from 1 set writing to the other
3 layer 42:50
4 Layer 47:30
8x WD 600GB 10K RPM SATA 600 32MB cache drives in 1 Raid 5 array
(8 Drive Raid 5 - 745MB/s read 735MB/s Write)
3 layer 41:44
4 layer 53:42
as you can see, with AVCHD going past 2 sets of raid 0 gains nothing, and for 4 layers it is actually slower.
even with an absurd 700MB/s+ 8-drive raid array with Raptor drives.
to add even more:
during our tests we used an SSD and a raptor as the OS drive and then redid the same tests vs a standard drive
same tests with SSD/Raptor as OS and as media cache/page files: NO difference
same tests with SSD/Raptor as a separate media cache/page files drive: NO difference.
increasing CPU speed (overclocking to 4GHz) made a far bigger improvement to these numbers than drive arrays did. (GHz is still king)
dual 6 core Xeons (2.8GHz) can't outdo the 980x.
we will have numbers on a pair of 3.33GHz Xeons (12 cores) in about 10 days (finally sold one). Intel usually sends us the highest CPUs they have for testing, but not this go round (they sent the 2.8GHz), thus the hold-up until now.
Again, for RED 4K bigger drive arrays are needed, particularly as you increase layers or want to go past 1/4 preview
Scott, what you are doing is comparing a CPU/GPU intensive benchmark under various disk conditions. So it makes sense that you do not see much difference. Do you have any disk intensive benchmarks?
people, this is completely incorrect for 95% of adobe users.
I've often thought so myself. Harm's very knowledgeable, but his version of "bare minimum" would make him a good choice to sell hardware to NASA.
more the question would be: what workflow in Adobe is disk intensive, other than the mentioned 4K or higher res?
again most people using Adobe are NOT doing anything disk intensive.
i will gladly benchmark a workflow that is disk intensive, i even have some uncompressed 10bit 4-4-4 footage
is the work flow common?
is it more than a few users?
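For a sense of why uncompressed 10-bit 4:4:4 footage is genuinely disk intensive, here is a back-of-the-envelope data rate calculation, assuming 1080p at 24 fps (the resolution and frame rate are my assumption, not stated in the thread):

```python
# Uncompressed 10-bit 4:4:4: every pixel carries 3 full-resolution
# samples (no chroma subsampling) at 10 bits each.
width, height, fps = 1920, 1080, 24
bits_per_pixel = 3 * 10

bytes_per_frame = width * height * bits_per_pixel // 8
mb_per_second = bytes_per_frame * fps / 1e6
print(round(mb_per_second, 1))  # ~186.6 MB/s per stream
```

At roughly 187 MB/s per stream, a few simultaneous layers saturate a small array, which is where the bigger arrays discussed above start to pay off; AVCHD, by contrast, is a few MB/s.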
you know us Bill we are all about benchmarks!
Speaking of did you see Erics numbers for your new test?
LOL indeed! he and I are not too different
my personal systems tend to be over the top as well
my idea of a car/truck is a mighty Mopar Hemi (and yes, i own a Hemi Dodge truck). once i am done with my house remodeling
(1 more large addition) my next project is a street car :-)
Like Harm all my systems are overclocked!
geeking is how i got into this crazy biz.
it was supposed to be a hobby not a job.
Scott, for our new disk-intensive benchmark we are using a DV string of frames and rendering it to the Microsoft AVI format. This is much like the PPBM4 disk-intensive test, but in this case we made it longer to give roughly the same PPBM5 score as a typical MPEG and H.264 score, to "balance" the three results for somewhat equal weighting in the total. No, I have not yet seen Eric's results on our beta PPBM5 v2, as the beta results are right now being sent to Harm and not here. I have to assume that this is a common usage of Premiere, as many are still editing SD footage; as a matter of fact, we just had a thread where this was precisely the problem.
yes Bill, i am aware of the DV part of the test.
again i would say there are very few users out there using DV who can afford a large (pro level) disk array
vs hobbyist level, and even those guys are getting AVCHD cams now.
DV is dead, man. (other than a broadcast environment like a church where HD is pointless or out of budget reach)
(not everyone has a Joel Osteen budget)
even with DV i would maintain that read from 1 (drive or array), write to another is still better than 1 large array
(after all, DV is where i derived this from years ago)
i get very, very few clients calling who are still using DV, other than hobbyists.
Scott, you will have to remember that my goals are very different from yours. I developed this initial test 5 years ago to help people tune/design their systems (and debunk a few myths), and as such I need quantitative numbers from very disk-intensive testing (along with a very CPU-intensive test, and now a GPU test added in) so I can feed helpful information back to the users. I have responded to hundreds of people and replied to all (at least I hope I got a response to all) with suggestions on how to improve their systems if necessary, or to compliment them on a well set up and tuned system. You, on the other hand, typically get very high-end users with all the newest and latest formats, where I am hoping to (at least sort of) supplement Adobe's customer support with a different aspect of information. Unfortunately one of the results has been bragging rights for the fastest and best scores and I also am one of those hardware geeks myself and with close to 60 years of electronic (later computer) hardware experience. I am delighted that Harm has helped me so much with the presentation of PPBM4 data and has now taken a major lead in developing our next-generation PPBM5.
"Unfortunately one of the results has been bragging rights for the fastest and best scores and I also am one of those hardware geeks myself and with close to 60 years of electronic (later computer) hardware experience."
All this time I thought you were in your 30s or 40s. You don't literally mean 60 years, do you?
That is most kind of you, Chuck, and you did catch me in a slight exaggeration: it is only 57 years in engineering. This month I will celebrate my 79th birthday, but I have a long way to go; my mother lived to 103.
AWESOME! i guess i should call you Sir, as i am a young'n, only 49