RacerX said:
Ahem... yes it does.
Defrag utilities are known to break up large files to get all the blocks to line up nicely while getting rid of volume fragmentation.
Sorry.
Not really, although it might seem that way in some cases (more on that below). Defragmentation tools do two things: defragment all files, and move them so that all free space is contiguous. The very process of moving files to "get all the blocks to line up nicely" is precisely what produces defragmented files.
And, more to the point here: what we are interested in isn't really "physical defragmentation" so much as "logical defragmentation", or rather "perceived defragmentation". What we really want is maximum transfer rates: the ability to read and write data as fast as the drive will allow.
In the old days, some tools would let you "optimize" file locations to make reading and writing faster; maybe some still do. This takes into account the speed of the hard drive, its physical geometry and the size of the allocation blocks. The layout is chosen to minimize the lateral movement the head must make to read files, and to minimize switching between surfaces. What happens is simply that files are broken into chunks large enough to fill the read buffer. The chunks are then spaced a certain distance apart, just enough to let the head be perfectly positioned to start reading the next chunk as soon as the buffer has been drained. Note that this needs to be done differently on different computers, as you also have to take the external transfer rate into account, i.e. how fast the buffer can be read by the host.
This was especially common practice on drives with low sustained transfer rates, say <10 MB/s, and was often called "optimize for video production" or something similar in the settings of the defrag tools.
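The spacing described above is just arithmetic on the buffer size and the two transfer rates. Here is a deliberately simplified back-of-the-envelope sketch of that calculation; the function name and the example numbers are illustrative assumptions, not real drive specifications, and the model ignores rotational and seek details:

```python
# Simplified model of "chunk spacing" for an old drive whose host interface
# (external rate) is slower than its media (internal rate). All names and
# numbers here are illustrative assumptions, not real drive specs.

def chunk_gap_bytes(buffer_size, internal_rate, external_rate):
    """Bytes of track to leave between chunks so the head arrives at the
    next chunk just as the host finishes draining the read buffer.

    While the host drains the buffer (buffer_size / external_rate seconds),
    the platter keeps moving under the head at internal_rate bytes/second.
    """
    if external_rate >= internal_rate:
        return 0  # the host keeps up; a contiguous layout is already optimal
    drain_time = buffer_size / external_rate   # seconds to empty the buffer
    return int(internal_rate * drain_time)     # track bytes passing by meanwhile

# Example: 64 KiB buffer, 8 MB/s off the platter, 2 MB/s over the interface
gap = chunk_gap_bytes(64 * 1024, 8_000_000, 2_000_000)
print(gap)  # 262144 -> each 64 KiB chunk is followed by a 256 KiB gap
```

The point of the sketch is only to show why the right gap differs from machine to machine: change the external rate (a faster interface) and the gap shrinks toward zero, which is why the same drive needed a different layout on different computers.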
Although a physical inspection of the platters in the drive would indeed show that the files had become fragmented, they would actually be read faster than if they weren't. In fact, just about every hard drive today does this automatically; the logic needed is built into the controller.
So, while "fragmentation" indeed could be introduced on the physical level, it led to perceived "defragmentation" on the logical level, and increased transfer rates. A bit like denormalizing a database from fourth normal form down to third, second or even first normal form to increase performance.
This is probably where the notion that "defrag tools actually fragment some files" came from.
As for using these tools, I just don't. When editing media files (sound, video), I simply use a fast, large external drive that I never fill up and always reformat between jobs. It's faster and more reliable than defragging. Other than that, I make it a point to always leave about 20% free on the startup drive, which takes care of most of the risk of (perceived) fragmentation.
You can read more about hard drives and performance issues here:
http://www.storagereview.com/map/lm.cgi/str