Regarding long names under HFS+, my understanding was that there is a 32-character limit; thanks for the correction if this is only an implementation limit in the OS rather than a limit of the filesystem format.
In the text that follows, when I talk about UFS I mean 4.4BSD FFS, known as UFS in Darwin/Mac OS X. I am explicitly not referring to the ext2 filesystem Linux users may be familiar with.
Regarding UFS fragmentation, there is a lot of misunderstanding concerning the fragmentation percentage reported by fsck. A UFS fragment is a partial block (1 KB vs. 8 KB for the newfs defaults) used to store short files, thus reducing wasted space; the percentage fsck reports is about these partial blocks, not about files being scattered across the disk.
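To make the space argument concrete, here is a small back-of-the-envelope calculation (plain arithmetic, not actual UFS code; the 8 KB / 1 KB figures are just the newfs defaults mentioned above):

    BLOCK = 8 * 1024   # newfs default block size
    FRAG = 1 * 1024    # newfs default fragment size

    def wasted(file_size, alloc_unit):
        """Bytes lost to internal fragmentation in the last allocation unit."""
        return (-file_size) % alloc_unit

    for size in (200, 1500, 9000):  # a few small-file sizes in bytes
        print("%5d B file: %4d B wasted with whole blocks, %4d B with 1 KB fragments"
              % (size, wasted(size, BLOCK), wasted(size, FRAG)))

A 200-byte file, for instance, wastes almost a full 8 KB block without fragments, but less than 1 KB with them.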
As far as file fragmentation is concerned, it does exist, but it is not as critical as on FAT, say. In fact, all UFS files are pre-fragmented into maxcontig (64 KB) segments, which happens to coincide with the read-ahead size when the sequential-access heuristics trigger. Furthermore, the maxcontig segments are placed in rotationally optimal positions (or rather were, since this optimization is meaningless on zoned disks), and deleting and rewriting files will reuse holes if they are big enough. Since all files are pre-fragmented anyway, there is always a reasonable supply of 64 KB segments, so the fragmentation of a long-used filesystem does not degrade performance significantly (I have observed a degradation of less than 30% after 2+ years of use on our /home partitions, which contain a high percentage of small files).
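As a hedged illustration of that point (assumed sizes, not the real allocator code): a file ends up as a chain of contiguous extents of at most maxcontig bytes, so even on a badly fragmented disk it needs at most one seek per 64 KB extent, each of which matches one read-ahead request.

    MAXCONTIG = 64 * 1024  # contiguous extent size; coincides with the read-ahead size

    def extents(file_size):
        """Number of maxcontig-sized extents a file is split into."""
        return (file_size + MAXCONTIG - 1) // MAXCONTIG

    for size_mb in (1, 100, 2048):
        size = size_mb * 1024 * 1024
        print("%5d MB file -> %6d contiguous 64 KB extents (worst-case seek count)"
              % (size_mb, extents(size)))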
There is one thing one can do to fix the fragmentation: dump, newfs, restore. This was not deemed necessary on our installations, because the degradation was not that bad. The performance degradation was practically nil on work filesystems where most files are created and deleted daily and average 2+ GB in size.
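For completeness, here is a minimal sketch of that dump/newfs/restore cycle, with hypothetical device and path names and Python used only as a thin wrapper around the real dump(8), newfs(8), and restore(8) commands; these must be run as root on a quiescent filesystem, so the sketch only prints the commands unless you change DRY_RUN.

    import subprocess

    DRY_RUN = True                 # set to False only after checking every step
    DEVICE = "/dev/disk1s3"        # hypothetical UFS partition
    MOUNT_POINT = "/home"          # hypothetical mount point
    DUMP_IMAGE = "/backup/home.0"  # hypothetical level-0 dump destination (on another disk)

    def run(cmd, cwd=None):
        print("+", " ".join(cmd))
        if not DRY_RUN:
            subprocess.run(cmd, check=True, cwd=cwd)

    # 1. Take a level-0 dump of the filesystem to a file.
    run(["dump", "-0", "-f", DUMP_IMAGE, MOUNT_POINT])
    # 2. Unmount and recreate the filesystem (this erases its contents).
    run(["umount", MOUNT_POINT])
    run(["newfs", DEVICE])
    # 3. Remount and restore the dump into the fresh, unfragmented filesystem.
    run(["mount", DEVICE, MOUNT_POINT])
    run(["restore", "-r", "-f", DUMP_IMAGE], cwd=MOUNT_POINT)  # restore -r extracts into cwd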
Regards,
Marino Ladavac.