HFS+ vs. UFS

boz

Registered
Hi *

As the day of the final release comes closer, I am wondering which filesystem to choose:
HFS+ or UFS? Is 9.1 able to read UFS volumes?
Are there performance issues? Which one are you going to choose?

boz
 

gronos

Registered
OS 9.1 definitely CAN'T read UFS, so if you want to store files and use them between 9.1 and X, HFS+ is what you want to use. I've also seen postings (though I have no direct evidence) that OS X runs "better" on HFS+ than on UFS partitions.

rich G.
 

strobe

Puny Member
UFS is useless unless you need to have filenames in the same directory that differ only in case, like:

filename
FILENAME
fileNAME
FILEname
FiLeNaMe
fIlEnAmE

|-p
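A quick sketch (purely illustrative, not Apple's code) of how those names count as directory entries under each model:

```python
# Hypothetical sketch: count distinct directory entries for names
# that differ only in case, under case-sensitive (UFS) vs.
# case-insensitive (HFS+) lookup rules.
names = ["filename", "FILENAME", "fileNAME",
         "FILEname", "FiLeNaMe", "fIlEnAmE"]

case_sensitive = len(set(names))                    # UFS: every spelling is distinct
case_insensitive = len({n.lower() for n in names})  # HFS+: all collapse to one entry

print(case_sensitive, case_insensitive)  # 6 1
```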
 

ladavacm

Unperson Spotter
UFS is useless unless:

1) disk fragmentation matters
2) access privilege granularity matters
3) security levels matter
4) long file names matter
5) soft updates matter
6) read-ahead optimizations matter
7) guaranteed filesystem consistency matters (yes, even in case of catastrophic failure, short of actual loss of physical media)

... etc, etc, etc.

However, UFS does not:

1) have read or write compatibility with older MacOS
2) support complex (i.e. forked) files

... etc, etc, etc.

In other words, it is a very good and quite safe general mass-storage filesystem, given that the workload mix is mostly reads. With the introduction of soft updates, fsck is no longer necessary, even after a system crash: in fact, the soft-updates-aware fsck operates on a read-write mounted filesystem (some kernel support required).
 

Sven

Registered
How do OS X UFS/HFS+ installations compare in speed? My first installation of PB on a UFS volume was a lot slower than the following HFS+ installation.

Did others see this as well? What about later builds?

Sven
 

drewe2000

Registered
I will be the first to admit that UFS has a lot of benefits. However, in its current form OS X will suffer under UFS. Classic requires HFS+. Even if that's not an issue for you, OS X is optimized for HFS+, not UFS, and boots and runs substantially faster in HFS+. Additionally, AirPort does not work under UFS, but does under HFS+. This is because some of Apple's calls refer to AirPort as 'Airport' and others refer to it as 'AirPort'. In HFS+ this does not matter, but UFS is case-sensitive. There are probably other things that are broken under UFS. Until Apple fixes these issues, stick with HFS+, except (maybe) for external or extra storage drives.
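A toy model of the 'AirPort' vs. 'Airport' problem described above (the names and the lookup function here are illustrative, not Apple's actual paths or APIs): the bundle exists under one spelling, but some code asks for the other.

```python
# Hypothetical directory listing: the file exists only as 'AirPort'.
directory = {"AirPort": "driver bundle"}

def exists(name, case_sensitive):
    if case_sensitive:
        # UFS-style lookup: exact byte-for-byte match required
        return name in directory
    # HFS+-style lookup: compare names case-insensitively
    return any(name.lower() == entry.lower() for entry in directory)

print(exists("Airport", case_sensitive=True))   # False: broken on UFS
print(exists("Airport", case_sensitive=False))  # True: works on HFS+
```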
 

majortom

Registered
I haven't checked yet, but is it possible to partition a single drive and have both a UFS and an HFS+ partition?

HFS+ isn't POSIX compliant and has much weaker security, so that for UN*X apps, it is inferior. However, since MacOS 9.x can't read UFS, you need to have both if you want to support 9.x apps.

/carmi
 

majortom

Registered
Originally posted by drewe2000
I will be the first to admit that UFS has a lot of benefits. However, in its current form OS X will suffer under UFS. Classic requires HFS+. Even if that's not an issue for you, OS X is optimized for HFS+, not UFS, and boots and runs substantially faster in HFS+. Additionally, AirPort does not work under UFS, but does under HFS+. This is because some of Apple's calls refer to AirPort as 'Airport' and others refer to it as 'AirPort'. In HFS+ this does not matter, but UFS is case-sensitive. There are probably other things that are broken under UFS. Until Apple fixes these issues, stick with HFS+, except (maybe) for external or extra storage drives.
This is why many within Apple argued that HFS+ should have been case-sensitive, with the Finder and AppKit providing a case-insensitive user experience on top. As it is, if something is stored on a case-sensitive filesystem (an NFS server, for example), the system will behave differently than if it is stored on a case-insensitive filesystem.

Oh well.

/carmi
 

marmoset

Official Volunteer
Originally posted by majortom
I haven't checked yet, but is it possible to partition a single drive and have both a UFS and an HFS+ partition?

HFS+ isn't POSIX compliant and has much weaker security, so that for UN*X apps, it is inferior. However, since MacOS 9.x can't read UFS, you need to have both if you want to support 9.x apps.

/carmi
I decided to wipe my PB install and do a clean install for OSX final.

I partitioned a 20 GB drive into 3 partitions: a 4 GB HFS+ partition for MacOS 9.1, a 10 GB HFS+ partition for my OSX install, apps, and home directories, and a 5 GB UFS partition to house any development I might do, and my news spool.

I ran the cool little NNTP package leafnode (http://www.leafnode.org) under the PB, but one thing I hatedhatedhated was that fsck after an unclean shutdown (loss of power, whatever) literally took hours when my news spool was on an HFS+ volume. I promised myself that I'd move my news spool to a UFS volume when I got a chance, for this very reason.

Luckily :p I haven't had to test fsck speed since I installed the final.
 

Solaris

Official something...
UFS is optimised for speed, but will fragment over time (and there ain't much you can do about it).

Well, so said the Solaris 8 Admin 1 tutor...
 

ladavacm

Unperson Spotter
Regarding long names under HFS+, my understanding was that there is a 32-character limit; thanks for the correction if this is only an OS implementation limit rather than an FS limit.

In the text that follows, when I talk about UFS, I am talking about 4.4BSD FFS, known as UFS in Darwin/Mac OS X. I am explicitly not referring to the ext2 FS Linux users may be familiar with.

Regarding UFS fragmentation, there is a lot of misunderstanding concerning the fragmentation percentage reported by fsck. A UFS fragment is a partial block (1KB vs. 8KB for newfs defaults), which is used to store short files, thus reducing the space wasted.
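The space saving from fragments can be sketched with rough arithmetic, assuming the newfs defaults mentioned above (8 KB blocks, 1 KB fragments):

```python
# Assumed newfs defaults: 8 KB blocks, 1 KB fragments.
BLOCK = 8 * 1024
FRAG = 1 * 1024

def allocated(size, unit):
    # Space actually consumed: file size rounded up to the allocation unit.
    return -(-size // unit) * unit

small_file = 500  # a 500-byte file
print(allocated(small_file, BLOCK))  # 8192: a whole block would mostly be wasted
print(allocated(small_file, FRAG))   # 1024: a fragment wastes far less space
```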

As far as file fragmentation is concerned, it does exist, but it is not as critical as on, say, FAT. In fact, all UFS files are pre-fragmented into maxcontig (64 KB) segments, which happens to coincide with the read-ahead size if the sequential-access heuristic triggers. Furthermore, the maxcontig segments are placed in rotationally optimal positions (or rather, were placed, because this optimization is meaningless on zoned disks), and deleting and rewriting files will reuse holes if they are big enough. Since all files are pre-fragmented, there is always a reasonable supply of 64 KB segments, so the fragmentation of a long-used filesystem does not degrade performance significantly (I have observed a degradation of less than 30% after 2+ years of use on our /home partitions, which contain a high percentage of small files).

There is one thing one can do to fix the fragmentation: dump, newfs, restore. This was not deemed necessary on our installations, because the degradation was not that bad. The performance degradation was practically nil on work filesystems where most of the files are created and deleted daily, and are of 2+ GB average size.

Regards,
Marino Ladavac.
 