Slow transfer when copying hundreds of files

starship

Transferring 1500 files (22 MB total) from a G4 (10.3.5) to any G4 (10.3.5) client takes approx. 1'30" or more!! (smb://, nfs://, afp://)
Transferring the same files from a W2003 Server to a G4 client takes about the same (smb://).
Transferring the same files from a W2003 Server to a P4 XP client takes approx. 35".
(Transferring the same 22 MB as a single file takes 10-15" in every combination.)

Does anybody know where to tune the systems? I tried 100 Mbit full and half duplex, and MTU from 1500 down to 300.
Any details about the exact process (prepare to copy, copy, ...) would help me find the bottleneck.

Thanks.
 
This is very normal. Each new file requires allocating a new block on the disk and closing it once the file is finished, which slows things down a lot.
One option is to zip the files, copy the single archive to the other machine, and finally unzip it over there. This will speed up the transfer, but you will need to zip on your local machine and unzip on the other one.
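For illustration, a rough Python sketch of that round trip, assuming both machines see the share as a mounted volume (every path here is made up):

import shutil
import zipfile
from pathlib import Path

SRC = Path("/Volumes/CAD/libraries")     # hypothetical source folder
DEST = Path("/Volumes/Server/incoming")  # hypothetical mounted share
ARCHIVE = Path("/tmp/libraries.zip")

# 1. Zip locally: one archive instead of 1500 small files.
with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as zf:
    for f in SRC.rglob("*"):
        if f.is_file():
            zf.write(f, f.relative_to(SRC))

# 2. One network copy instead of 1500.
shutil.copy2(ARCHIVE, DEST / ARCHIVE.name)

# 3. Unpack on the far side (run there, or here if the share allows it).
with zipfile.ZipFile(DEST / ARCHIVE.name) as zf:
    zf.extractall(DEST / "libraries")

The point is that the server opens and closes one file instead of 1500.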
And welcome to the forum. :)
 
Zammy-Sam said:
This is very normal. Each new file requires allocating a new block on the disk and closing it once the file is finished, which slows things down a lot.
One option is to zip the files, copy the single archive to the other machine, and finally unzip it over there. This will speed up the transfer, but you will need to zip on your local machine and unzip on the other one.
And welcome to the forum. :)

A ZIP file can't be used, since these are libraries for CAD. I was thinking of writing a script to sync the files as a workaround (rough sketch below).
BUT: I'm wondering about the communication when transferring files. How does it work exactly (handshake, ...)? Do you have any hints for further info?
(btw, thanks for your quick response)
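Something like this is what I mean, assuming the share is mounted and comparing only size and mtime (paths are just examples):

import shutil
from pathlib import Path

SRC = Path("/Volumes/CAD/libraries")     # example local library folder
DST = Path("/Volumes/Server/libraries")  # example mounted share

def needs_copy(src: Path, dst: Path) -> bool:
    """Copy when the file is missing or its size/mtime differ."""
    if not dst.exists():
        return True
    s, d = src.stat(), dst.stat()
    return s.st_size != d.st_size or s.st_mtime > d.st_mtime

for f in SRC.rglob("*"):
    if not f.is_file():
        continue
    target = DST / f.relative_to(SRC)
    if needs_copy(f, target):
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copy2 keeps the modification time

A tool like rsync does the same job more robustly; this is just the idea.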
 
Lemme try and remember the computer networks class I did during my undergraduate years. Optimizing MTUs (Maximum Transmission Units) should really be the last resort. The MTU is basically the size of the packets of data that are transmitted through the physical medium. The packets are also called frames (actually, frames is the correct term at that layer, but packets is more commonly understood). They vary from 300 to 1500 bytes but are normally 1500 bytes. Changing the MTU doesn't really affect transfer speed, but it may jeopardize the integrity of the transmitted data due to packet collisions (i.e. both hosts transmitting at the same time).
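Back-of-the-envelope, assuming roughly 40 bytes of IP + TCP headers per packet, you can see what the MTU does and does not buy you:

# Rough packet-count math for the 22 MB transfer in this thread.
SIZE = 22 * 1024 * 1024   # bytes to move
HEADERS = 40              # approx. IP (20) + TCP (20) bytes per packet

for mtu in (1500, 300):
    payload = mtu - HEADERS            # user data per packet
    packets = -(-SIZE // payload)      # ceiling division
    print(f"MTU {mtu}: ~{packets} packets, "
          f"~{HEADERS / mtu:.1%} header overhead")

# MTU 1500: ~15801 packets, ~2.7% header overhead
# MTU 300:  ~88726 packets, ~13.3% header overhead

So a small MTU multiplies the packet count, but neither setting explains a per-file slowdown; that points at per-file protocol overhead instead.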

Files tend to be sent one at a time. Most of these protocols work via TCP. Each time a file is to be sent, a connection between the computers must be established. This is done via a 3-way handshake: Host A sends a connection request (SYN) to Host B. Host B acknowledges the request (SYN-ACK). Host A acknowledges back (ACK) and begins sending data. Of course, some error checking is done to prevent duplicate connection requests, etc. Nothing can really be done about this unless you go and rewrite the TCP standard.
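As a toy illustration in Python (plain TCP sockets, not what afp:// or smb:// literally do), here is the difference between a handshake per file and one shared connection; HOST and PORT are made up and assume something is listening there:

import socket

HOST, PORT = "192.168.0.10", 9000   # hypothetical receiver
files = [b"x" * 15000] * 100        # 100 fake ~15 KB files

# Variant 1: one connection, and thus one 3-way handshake, per file.
for data in files:
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(data)

# Variant 2: one handshake, then every file over the same connection.
with socket.create_connection((HOST, PORT)) as s:
    for data in files:
        s.sendall(data)

With 1500 files, even ~60 ms of setup cost per file already adds up to 1'30", which is about what you measured.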

The best thing to do is what Zammy said. Send them as one file (compressed or not is up to you), since this bypasses the overhead of establishing a (comparatively) lengthy TCP connection for each file to be transferred.

Hope that answers your questions.
 
The solution, even if I haven't understood every detail, was setting up a regular NFS share and specifying some of its parameters!! I found it here in this forum (search for fstab) together with the shareware NFS Manager (http://www.bresink.com/osx/NFSManager.html).
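For anyone searching later, the kind of /etc/fstab entry meant here looks roughly like this (server name, paths and option values are examples, not my exact setup):

fileserver:/Volumes/Data  /private/cad  nfs  rw,rsize=32768,wsize=32768  0  0

Options like rsize and wsize control the NFS read/write block sizes and are the sort of parameters meant above.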
I agree, the MTU has its influence, and both together can speed up the network.
Thanks to everybody for helping and giving ideas. I learned a lot about shares (nfs, afp, smb), their speed, and their setup.
