Let's benchmark our hard drives with this test...

rharder

I found a page about gigabit Ethernet (wish I had it) at http://www.helios.de/news/news99/N_14a_99.html

They suggest an easy way to benchmark your hard drive (or network performance, for that matter). I'm going to try it when I get home. See what you guys get. The test writes a 1 GB file and then reads it back.

Write test (writes to file 'tstfile'):
Code:
% time dd if=/dev/zero bs=1024k of=tstfile count=1024
Read test (reads file 'tstfile'):
Code:
% time dd if=tstfile bs=1024k of=/dev/null
Don't forget to delete 'tstfile' when you're done.
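If you want it spelled out, the cleanup is just:
Code:
% rm tstfile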

Each of these will give you output that looks something like this:
Code:
1024+0 records in
1024+0 records out

real      0m9.179s
user      0m0.000s
sys       0m1.700s
...except that your times won't look anything like these; they're from a different timed command.

Divide 1024 by the "real" time in seconds. For example, if your "real" time is 0m50.23s, then 1024/50 ≈ 20 MB/sec.
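If you'd rather not do the division in your head, bc will do it; for example, for a 50.23-second run (the numbers here are just to show the arithmetic):
Code:
% echo "scale=2; 1024/50.23" | bc
20.38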

Try it out and report back! Don't forget to include relevant computer, hard drive, and possibly network stats!

I got 6.8 MB/sec on SourceForge's shell computers.

-Rob
 
Looks like OS X gives some better information, as you don't have to do any math (this is on my G4/500 DP, 512MB, internal IDE):

nabu:/tmp $ time dd if=/dev/zero bs=1024k of=tstfile count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 63 secs (17043521 bytes/sec)
dd if=/dev/zero bs=1024k of=tstfile count=1024 0.02s user 10.39s system 16% cpu 1:03.36 total
nabu:/tmp $ time dd if=tstfile bs=1024k of=/dev/null
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 46 secs (23342213 bytes/sec)
dd if=tstfile bs=1024k of=/dev/null 0.00s user 7.18s system 15% cpu 46.391 total


Note for those with more than a gig of RAM, you may want to increase the number after 'count=' to 2048, otherwise the whole file might be cached in RAM...
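For example, on a machine with 1.5 or 2 GB of RAM, something like this should keep the file bigger than RAM (then divide 2048, not 1024, by the time):
Code:
% time dd if=/dev/zero bs=1024k of=tstfile count=2048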
 
Wow, does my new hard drive just plain KICK ASS!!! I did the test on my main drive (a 20 gig my dad got me a while ago that's 7200rpm, but I don't know the buffer size) and it took 54 seconds to complete, coming out to a little over 18 megs a sec. But I tested on my new 60 gig, 2 meg buffer IBM drive, and I got a blazing 32 seconds! That's 32 megs a second! Whew, now that's fast (compared to what I've always used). Thanks for the test info! Glad to know about this! It shows the HUGE difference between my two drives.

I have one question for everyone out there though. Since the operating system is loaded off of my 20 gig and everything else is also running off that drive (like this web browser), would that slow down my drive a lot in tests like this one? :confused:


btw: I have an old Sawtooth (G4 400 [single] AGP) with 704MB RAM, a Radeon, 100Mb Ethernet, and the 60 and 20 gig drives.
 
blb,

Does OS X cache reads/writes such that you would need to worry about a multi-megabyte file not being written directly to disk?

-Rob
 
Hey wait a minute...I'm confused here. Or maybe I'm just doing the math wrong (was never my strong point).

The longer it takes to write the file, the lower it's going to tell you your read/write speed is...

Say I did that and came out with 55 seconds... 1024/55 = 18.6

Say it came out to 1 minute, 5 seconds... 1024/65 = 15.7

And if it came out to be 2 minutes, 5 seconds... 1024/125 = 8.2


...so am I figuring this wrong, or what? Shouldn't a longer read/write time mean that you're reading/writing less MB/s?
 
I think that's exactly what your math shows you. Your fastest example at 55 seconds yields 18 MB/sec while the slowest time at 125 seconds yields only 8 MB/sec.

Maybe you've been up too long. =)

-Rob
 
Doh...I do believe you're right. See, told you I wasn't good at math :p

Err...you're probably right on the sleep part...been up since around 4:30 yesterday afternoon...had a nice...long...slow night at work...came home and still haven't gone to sleep yet. Heh
 
Originally posted by rharder
blb,

Does OS X cache reads/writes such that you would need to worry about a multi-megabyte file not being written directly to disk?

-Rob

Well, from what I've seen so far, X definitely follows the typical Unix model for caching files. This is why, if you watch top for a while, you'll see free memory slowly drop even if you aren't doing much, as it will use free RAM to cache files. But it should still be writing to disk, just not at the exact moment a program calls write()...all of which means the whole file from dd can end up both on disk and in RAM, so the second dd (the read) could be really fast.

Just did only a 100M file with the test, and it read back in just one second (100MB/s)...

If you do a ps, you should see a process called update, which should handle the writing...also see 'man sync'.
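Something like this should show it, and sync will flush anything still pending (the grep is just a guess, check your own ps output):
Code:
% ps ax | grep update
% sync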
 
You think we can force a sync/flush/whatever in the 'time' command to account for this?

-Rob
 
A simple script should do the trick. Put

Code:
#!/bin/sh
#
dd if=/dev/zero bs=1024k of=tstfile count=1024
sync

into some script and time that...but on my system, it didn't seem to change the time, so it may be fully written to disk when it's larger than total system RAM.
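If you'd rather skip the script file, I think timing the pair through sh should give the same result:
Code:
% time sh -c 'dd if=/dev/zero bs=1024k of=tstfile count=1024 && sync'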
 
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 87 secs (12341860 bytes/sec)
0.050u 11.980s 1:26.67 13.8% 0+0k 9+19io 0pf+0w

12.3 MB/sec...not bad...I think
 
1073741824 bytes transferred in 45 secs (23860929 bytes/sec)

Unfortunately I couldn't get it to read the file back. Anyway, this is the largest partition of my new 80 GB Western Digital IDE drive, connected to my ATA100 card. The specs of the HD are 7200 rpm and 8.9 ms access time. I am running a 300 MHz Blue & White G3 with 576 MB of RAM. I would have tested the original IDE drive that came with the Mac, but unfortunately it is busy. :)
 
Anyone have any idea why dd can't seem to read a file from a network share? I can 'dd' to a Windows share at 1-2 MB/sec, but when I try to read the testfile back, I get a "this file is too large" error (even if the file is like 1MB).

-Rob
 
On a standard PowerBook 400 (FireWire/Pismo) using the 6 gig internal:

3.883614 MB/sec write
8.738133 MB/sec read
 
Originally posted by rharder
Anyone have any idea why dd can't seem to read a file from a network share? I can 'dd' to a Windows share at 1-2 MB/sec, but when I try to read the testfile back, I get a "this file is too large" error (even if the file is like 1MB).

-Rob

I wonder if there's some weird incompatibility between dd and SMB that causes this; I just tried doing a dd write/read on an NFS share, and had no problems with it at all on a 25M file. And for those interested, 3744914 bytes/sec on write, and 8738133 bytes/sec on read (to a Sun Ultra 1).
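For anyone who wants to try the NFS version, it's the same commands pointed at the mount; the path below is just an example, substitute your own mount point:
Code:
% time dd if=/dev/zero bs=1024k of=/mnt/nfsshare/tstfile count=25
% time dd if=/mnt/nfsshare/tstfile bs=1024k of=/dev/null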
 