Defrag in OS X?

UFS doesn't support HFS+ attributes and instead stores them in a separate file. This means UFS ends up being painfully slow unless you're only using UNIX apps.

You're saying it will run normally only if you're using native UNIX apps? So a Carbon or Cocoa app will run slowly, or just a Classic app?
 
I started out using UFS, only because I am a UNIX admin. However it took an age to load OS X, and subsequent upgrades took an unacceptable amount of time, so I have moved to HFS. The Mac seems happier with HFS, even though it's limited in some ways. As mentioned months ago, the standard UNIX "dump" and "restore" don't work properly (they should!). If "dump" and "restore" worked properly there would be no need for all the chatter about backing up; backup would have been solved, for free, from day one.

The applications seemed to work OK under UFS, but this was in the days of 10.0, and things like update_prebinding could take ages; there also seemed to be a general slowdown in performance over a few months or so.

In the end Apple will have to fix HFS so it's more in line with other UNIXes, otherwise there will be a day when the lack of case sensitivity comes home to roost. In a heterogeneous environment of UNIX machines the Mac will run into silly but intractable problems.
 
HFS+ is faster for a number of reasons. Searching for files on HFS+ volumes is probably faster due to a) files being indexed by ID and b) the catalog file being organized in a b-tree.

As for UNIX backup methods, the problem here is that UNIX lacks an API for handling an arbitrary number of file attributes and forks. This isn't Apple's fault; it's just the sordid 30-year history of UNIX being one giant hack. I mean, next you'll complain that UNIX doesn't follow aliases or can't use FileIDs or FSRefs. UNIX lacks these; it always has and always will. For example, UNIX will never support creation dates; they weren't in the original spec some 30 years ago.

PS: I hate UNIX

Not that the toolbox is perfect. However Apple revised large portions of their API to improve the user's filesystem experience. For example functions which used file paths as arguments were deprecated to encourage programmers to treat file paths as variable and not constant. Standards like POSIX make this impossible in the UNIX world. In fact UNIX lacks API abstraction for a lot of things and instead relies on formats which as a result cannot be changed.

Anyway I hope Apple creates a new filesystem which incorporates all of the advantages of UFS and HFS+. It's clear that Apple is developing a journaled version of HFS+. That isn't the same, but it's a good stop-gap.
 
I have to say I do like the performance of HFS, but it achieves this by making sacrifices. To suggest that HFS is better because of things like (a) files being indexed by ID and (b) the catalog file being organized as a B-tree is pooh.

The Sun, SGI, IBM, HP, DEC and Linux filesystems etc. are all capable (much more capable than HFS), not forgetting things like Veritas VxFS and its ability to do online database (Oracle etc.) backups without taking the application down.

All of the above can back themselves up properly without reference to any front end which sits on top of them; UNIX manufacturers take the view that basic backup is provided by the base operating system by default. Except Apple.

Journalled filesystems have been around for donkey's years; they're nothing new, and they are slow. AIX has had journalling by default for a decade, and wherever we were allowed, we disabled it using SMIT.

Getting rid of the case-insensitive stuff is paramount for UNIX interoperability, or for that matter any form of file sharing with other OSes. Getting out from under the mantle of having the front end dictate how the filesystem is written will make the Mac OS X filesystem much more compatible with other UNIX vendors'.

POSIX is/was there for a reason; good or bad, you make an educated choice. It does, however, make a lot of things work between platforms from different vendors, and that's the nub. That is where Apple should have started, rather than take the Microsoft approach of going its own sweet way and forcing everybody down their trap. Apple isn't big enough to do that....yet.

Do we mean "business" with this or what? We can't pussyfoot around pretending we're the best when we can't take care of "step 1" properly. If you want Apple to move into the mainstream UNIX server environment, HFS will not cut it. OK, it's not far off, and I think Mac OS X is fab, I love it, but we are preaching to the converted here. Convincing the 2nd- and 3rd-rate IT managers (Microsoft managers, really) is an uphill struggle that requires more than this one pitiful person (me) in Aberdeen, Scotland to convince IT plonkers to move to something much more innovative. It's more than just hard; it's a 10- to 15-year struggle.
 
The editpost did not seem to work.

Getting back to defragmentation: why doesn't "fsck" work properly? "fsck" is bog standard on every other UNIX. It's a start, and it will give you an idea of how fragmented your disk is; the result is expressed as a %fragmentation figure.
 
I hope Alsoft's OS X product (when it comes out) replaces Apple's fsck with their own. They have the best HFS+ tools.

UNIX interoperability is over-rated. I would rather have a filesystem which was more suitable as a human interface than a filesystem designed to be compatible with 30-year-old crap. In fact, the more Apple forges ahead instead of looking behind, the better. The UNIX everything-is-a-file-at-a-static-path methodology really sucks and contrasts sharply with Mac OS since System 7.

In fact I would rather get rid of the filesystem altogether. That may break everything using POSIX file paths but that only makes the idea more appealing.
 
I hear you and agree in many respects (I really appreciate innovative behaviour); however, we are going to be stuck with this stuff for some time.

Just one final point though: UNIX's filesystem is no longer what it was 30 years ago. Every vendor has its own: Sun UFS (although I think Sun is moving to VxFS), SGI XFS, Veritas VxFS; IBM, DEC, HP and Linux have their own specific versions, mediated between platforms by NFS. These are advanced filesystems; they look UNIXy, as that's the user interface, but it's what you don't see that's critical, i.e. high availability, speed, disaster resistance etc. (I would suggest that readers go to the vendors for the specifics, as they target different issues). One cannot take a disk from an SGI box, bung it into a Sun box and hope it will be read; the filesystems are very different.
 
All those filesystems share the same limitations which were created some 30yrs ago. For example the only way to refer to a closed file is by path.

HFS+ has limitations too. For example it can't support an arbitrary number of file attributes like I believe BFS can. However the major bottleneck in computing remains the human<->computer one and I'm far more efficient using HFS+ and periodically defragmenting using Alsoft tools than using UFS. Even the extra time required to port UNIX tools (which I have done on occasion) is small compared to the frustration of breaking aliases.

Unfortunately OS X is far easier to break than MacOS because a lot of it uses the crappy UNIX methodology of everything-is-a-file-at-a-static-path. In MacOS I could move the System Folder anywhere I wanted. To make a bootable backup I just had to drag all my files from one volume to another: one simple select and drag. Application updaters, including Apple's, would look up an application by its creator code, not its path. Now if you move an app you break the update mechanism. A lot of OS X apps also break if you move a file while it's being used! If you move a disk image while it's being verified it won't mount. If you move a TextEdit document while it's open, TextEdit will lose track of it. Everything breaks so easily it's ridiculous!

In MacOS (since System 7) things were a lot easier. You didn't have to keep track of what files were open by which apps because the path of a file was considered variable. In many (or most) OS X apps the opposite is true so you have to keep track of all open files. If I move or rename a file or folder or non-root volume I have to check if there are any crappy apps running. Thus the advantage MacOS had is quickly lost when some OS X apps behave in an inferior manner.
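For what it's worth, the POSIX layer itself doesn't lose an already-open file on a rename; an open file descriptor keeps working, and it's apps that re-open by path which break. A minimal sketch of that distinction (the file names are invented):

```shell
# An open file descriptor survives a rename; the old path does not.
dir=$(mktemp -d)
echo "draft text" > "$dir/report.txt"

exec 3< "$dir/report.txt"            # open the file and keep fd 3
mv "$dir/report.txt" "$dir/report-final.txt"

cat <&3                              # still reads "draft text" via the fd
cat "$dir/report.txt" 2>/dev/null || echo "old path is gone"

exec 3<&-                            # close the descriptor
rm -rf "$dir"
```

So an app that holds the file open (or tracks it by FSRef) keeps working after a move; an app that stores the path and re-opens it later is the one that loses track.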
 
No, what you are talking about is emulation, not what is really happening in a real UNIX filesystem. See further comments below.

Also take a look at these pages, just to give you a flavour:

www.cabernet.demon.co.uk/XFS.html

www.cabernet.demon.co.uk/xfs2.html

That's just a few snippets from SGI.

UNIX filesystems DON'T share the same limitations, as each implements its own FS; it's the UNIX tools provided that cause the perception problem you're running into. These problems are compounded by Apple not taking care of the UNIX it relies on. I bet you Apple has real working versions of UNIX backup, but if they were implemented, vendors would have no place to sell anything.

UNIX vendors provide extra command sets to make the extended benefits of their filesystems available to the user, e.g. logical volumes and growing filesystems.

For small or semi-single-user systems, arbitrarily moving system and data files around the system and still having it work is fine, but in the end you have to go looking for them using find. For terabytes of data and hundreds of users, however, stuff would be unmanageable.

As I said, the REAL filesystem that the vendors put out underneath is wholly different and can be made to do almost anything; those attributes are used by companies like Oracle and Sybase, and by high-availability systems, to maximize product performance. VxFS is a good example, in that Veritas sell a filesystem to UNIX vendors. I've been saying for well over a year on this bulletin board that Apple only attends to a small portion of UNIX before it takes you no further. Hence there was a "restore" but no "dump" in 10.0 (crackers!), and there were a host of other missing bits and pieces.

The moving of files around the system while being kept track of is software-based. In MAE (Macintosh Application Environment) this was all emulated quite happily while running on a SparcStation under its UFS filesystem; it was MacOS writing what it needed using UFS. Aliases worked fine, and moving the Applications folder worked fine while using aliases under Solaris 2.x. What didn't work was moving /dev into /tmp via MAE: Solaris died instantly.


Apple could quite easily make versions of "dump" and "restore" that worked properly; all of the other UNIX vendors do, for their extended filesystems.

"dd" should work, as it copies the platter, e.g. dd if=/dev/disk1 of=/dev/disk2, including the formatting and partitions. It's slower, as it copies all the blank space as well.
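The same block-for-block idea can be tried safely on ordinary files first; a small sketch (the device names above are only examples, so double-check if= and of= before pointing dd at real disks):

```shell
# Safe dd demo: clone a small file block for block and verify it,
# rather than risking a whole disk.
src=$(mktemp)
dst=$(mktemp)
head -c 65536 /dev/urandom > "$src"   # 64 KB of test data

dd if="$src" of="$dst" bs=4096 2>/dev/null

cmp -s "$src" "$dst" && echo "identical copies"
rm -f "$src" "$dst"
```

The same invocation against raw device nodes copies partition tables and free space too, which is why it is both complete and slow.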

cpio is POSIX compliant and Apple implement it (it's supposed to work!). I wonder if it does?
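For plain files at least, a round trip is easy to try; here's a minimal sketch (file names invented; whether resource forks and Mac attributes survive is exactly the open question):

```shell
# Minimal cpio round trip: archive a directory tree, restore it
# elsewhere, and confirm the contents survive.
work=$(mktemp -d)
mkdir -p "$work/src/notes"
echo "hello cpio" > "$work/src/notes/a.txt"

# copy-out: cpio reads pathnames on stdin, writes the archive to stdout
(cd "$work/src" && find . -depth -print | cpio -o > "$work/backup.cpio" 2>/dev/null)

# copy-in: -i extracts, -d creates directories as needed
mkdir "$work/restore"
(cd "$work/restore" && cpio -id < "$work/backup.cpio" 2>/dev/null)

cat "$work/restore/notes/a.txt"   # hello cpio
rm -rf "$work"
```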

What you are looking at under UNIX, i.e. the command-line executables, is UNIX information that has been interpreted for you in the accepted manner, as outlined in "man"; the filesystem underneath is completely different and can do many other things.

Remember, UNIX is not in reality CODE but an emulation of services provided. I could go and write a UNIX, over the next twelve hundred years, whose code bears no resemblance to anybody else's but which would be accepted as a UNIX. OK, there is BSD, which you can download, and System V, which you would have to license, but the code is nowhere near the same.

Windows NT could quite happily be considered a UNIX if it were to change the interface and access to the services which a UNIX is supposed to serve. This is public-domain info. But you wouldn't know, at the end of the day, that you were actually using a Microsoft product. Do you see what I am getting at? Lots of people make the same assumption, but it's only a fraction of the story.

Same with UNIX filesystems: they are in reality completely capable of being used in any mode you wish; it's just that the standard set of UNIX commands always produces the standard interpretation of the filesystem, which is what they are meant to do. That's why Sun have ufsdump and SGI have xfsdump, which dump entire filesystems properly, including all the extended functionality underneath which the user doesn't normally see (the admin can detect the difference). Why didn't an hfsdump ever get made (I thought there was one once)? For a standalone system, buying a product is nonsense; for truly big systems a purchased standalone solution won't cut it either, and you then have to move to NetBackup or Legato for corporate solutions.

Case insensitivity is just plain "BAD"; fix the basics first. "System.txt" is the same as "system.txt". Having multiple copies around a system.....well, I'll let you work out the simplest example (in this case an alias would have to keep an absolute path). Connect this to other servers, NT, Sun, SGI, HP, DEC, Linux, and it is going to be a heap of problems that you can't fix.

Connecting to an NT domain (which will happen at work): NTFS is case-preserving, and Windows treats names case-insensitively, so having "System.Txt" and/or "system.TXT" on that server as well will give you the wrong file! I said all this from the start with 10.0; they've fixed some of it, but Apple has a way to go with its filesystem.
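The clash is easy to demonstrate: on a case-sensitive filesystem the two names really are distinct files, so sharing them with a case-insensitive volume has to lose one of them. A quick sketch, run on a case-sensitive UNIX filesystem (file names invented):

```shell
# On a case-sensitive filesystem these are two distinct files;
# on a case-insensitive volume the second write would hit the first.
dir=$(mktemp -d)
echo "upper" > "$dir/System.txt"
echo "lower" > "$dir/system.txt"

ls "$dir" | wc -l        # 2 here; a case-insensitive FS would show 1
cat "$dir/System.txt"    # "upper" here; "lower" if the names collided
rm -rf "$dir"
```

Copy that directory to an HFS+ or Windows share and one of the two files silently wins, which is exactly the interoperability trap.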

I'm going on way too long. I may have explained or pointed out some things, but solved nothing; Apple have to do it.
 
Originally posted by plastic
For me, I use the good old copy-and-erase method. I have had bad things happen to my drives with defragmentation software. Nothing beats cleaning up the platters the good old formatting way. This is why I love Mac. Tell a PC dude to go format his hard disk and most likely he will give you the finger. :D
It's just as easy to backup your files to another hard drive and then format your primary hard drive on a PC as it is on a Mac. Am I missing something?

Scottish
 
He's referring to copying your System Folder. On OS 9 you can copy your System Folder to another hard drive (via drag and drop) and have absolutely everything preserved.

I don't think you can drag your winnt folder to another drive and have everything work flawlessly.
 
Originally posted by Javintosh
He's referring to copying your System Folder. On OS 9 you can copy your System Folder to another hard drive (via drag and drop) and have absolutely everything preserved.

I don't think you can drag your winnt folder to another drive and have everything work flawlessly.
I have two issues with this: 1) I don't use OS9, and neither will any new Mac user after the first of the year. 2) I'd prefer to just backup my files that don't affect the system (music, documents, etc.) and give my hard drive an entirely clean slate. Half the time the problems are located in the System folder.

Scottish
 
I've lost 2 (3rd party) hard drives. I had my OS on both of those drives. I can tell you from experience that dragging the system back is a great feature.

That said, since the /Library and ~/Library folders contain all the 3rd-party stuff (except for kernel extensions), I think things are not as dire as they seem. My nightly backup (HD to HD) includes my Users folder and the /Library folder. I figure this would get me 90% of the way back to my current system if there were a problem.
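A nightly HD-to-HD copy of that sort can be sketched with rsync (the folder layout here is a made-up miniature; real use would point it at /Users, /Library and a backup volume):

```shell
# Mirror one folder to another the way a nightly HD-to-HD backup would.
# These temp dirs stand in for the real source and backup volumes.
src=$(mktemp -d)
backup=$(mktemp -d)
mkdir -p "$src/Library/Preferences"
echo "pref data" > "$src/Library/Preferences/com.example.plist"

# -a preserves permissions, timestamps and symlinks; --delete mirrors removals.
rsync -a --delete "$src/" "$backup/"

cat "$backup/Library/Preferences/com.example.plist"   # pref data
rm -rf "$src" "$backup"
```

Note that plain rsync of this era didn't copy HFS+ resource forks, which is why the thread keeps circling back to fork-aware backup tools.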

I also miss the ability to drag my System Folder to another HD and boot from it. I plan on setting up an ATAPI RAID (in addition to my current system drive), but once I get this drive formatted, I won't be able to drag the System Folder to it like I would under OS 9.

now that I think about it, that *is* going to be a pain!
 
UNIX does have inherent limitations because it is designed to support legacy crap. For example, it must have a filesystem. The claim that everything can just be emulated is wrong. You bring up the example of MAE, but the fact remains that there is no way to emulate FileIDs on UFS without breaking their intended behavior. UFS simply lacks something equivalent. The closest you can get are file descriptors, but file descriptors can only be had by opening a file, and you cannot convert a file descriptor back to a file path. The only way to emulate FileIDs would be to break the correlation between MacOS file paths and UNIX file paths.
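For the curious, the nearest UNIX stand-in for a FileID is the inode number, and the limitation described above is easy to see: you can only get from an inode back to a path by scanning, never by a direct lookup. A sketch (file names invented):

```shell
# The nearest UNIX analogue of a FileID is the inode number.
# You can *search* for a moved file by inode, but not open it by inode.
dir=$(mktemp -d)
echo "payload" > "$dir/original.txt"

ino=$(ls -i "$dir/original.txt" | awk '{print $1}')
mv "$dir/original.txt" "$dir/renamed.txt"

# A full directory scan is the only way from inode back to path.
found=$(find "$dir" -inum "$ino")
cat "$found"    # payload
rm -rf "$dir"
```

That scan is the difference: HFS+ resolves a FileID directly through the catalog, while UNIX has to walk the tree, which is why aliases survive moves on one and not the other.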

You can bore me to death about the differences between UNIX and the crappy programs which run on UNIX. I'm running Carbon apps on UNIX so clearly this distinction isn't relevant. UNIX follows the everything-is-a-file-at-a-static-path methodology and that is not only where all the problems begin, but also a problem in itself.

Case insensitivity is not bad. System.txt is only the same as system.txt if you are attempting to open a file by its name. If you are searching for a file you can do a case sensitive or insensitive search. HFS+ is case preserving. The whole problem to begin with is you're trying to open the file by name or path! Again, this all boils down to the UNIX methodology which sucks ass.

Finally Apple doesn't have to kiss your UNIX ass. How many losers are talking about some arcane UNIX tool anyway? Did you know that HFS and HFS+ actually had a backup date file attribute? Why support some arcane UNIX tool to backup files when you could implement a backup function which could be applied to any file or folder?

Frankly, backing up was a lot easier when I could make a bootable backup by drag and drop. Now, thanks to UNIX, that's impossible. Way to go, UNIX: take away the easy method and instead insist on some kind of arcane "dump" command. Ugh.
 
Originally posted by Scottish
I have two issues with this: 1) I don't use OS9, and neither will any new Mac user after the first of the year. 2) I'd prefer to just backup my files that don't affect the system (music, documents, etc.) and give my hard drive an entirely clean slate. Half the time the problems are located in the System folder.

Scottish

Well allow me to tell you something from my experience:

Nothing beats a bootable backup!

The reason is that I don't have to reinstall the friggin' OS, which takes loads of time I don't have, because in the real world people have DEADLINES and they want to get back to WORK. I don't want to know what broke; I just want to get back to work before the sun sets!

Perhaps half the time the problem exists in the System Folder, like a corrupt preference file (probably the most common), but that just proves my point. You make your backup of a WORKING system, and then when things go wrong you just boot your backup and continue to work. A bootable backup trumps everything else because it's self-sufficient. I usually had several bootable backups in case of:

a) Hard drive failure

b) filesystem corruption

c) file corruption (like preference file)

d) Meteor strikes the building.

In the case of d) I always have bootable backups in other buildings, which I can put in any Mac and get back to work.

Mac OS X however makes this process a friggin pain in the ASS! I have to use some kind of synchronization app like Synk X. Furthermore Mac OS X is more prone to breaking just by moving or renaming a file/folder/volume. Arg!

I mean in MacOS you could even have multiple System Folders in case one died or one was less compatible. Like it or not some things broke when Apple released 10.2. Now to have both 10.1 and 10.2 installed you have to use two partitions. Talk about retrogression.
 
Originally posted by Javintosh
I've lost 2 (3rd party) hard drives. I had my OS on both of those drives. I can tell you from experience that dragging the system back is a great feature.

That said, since the /Library and ~/Library folders contain all the 3rd-party stuff (except for kernel extensions), I think things are not as dire as they seem. My nightly backup (HD to HD) includes my Users folder and the /Library folder. I figure this would get me 90% of the way back to my current system if there were a problem.

I also miss the ability to drag my System Folder to another HD and boot from it. I plan on setting up an ATAPI RAID (in addition to my current system drive), but once I get this drive formatted, I won't be able to drag the System Folder to it like I would under OS 9.

now that I think about it, that *is* going to be a pain!

Exactly, it is a pain. Nothing beats a bootable backup.

90% doesn't get you to 100% any time soon. The time you spend reinstalling Mac OS X (booting from a CD is a pain, damn that takes a long time) and then updating it (downloading, installing, optimizing, rebooting) is time I would rather avoid, especially since shít happens at the worst times. Even with MacOS I would often just mount the restore disk image and drag that System Folder to my drive just to get it booting and working.

It comes down to how easy it is to copy one working system from one disk to another. What beats drag and drop? It certainly isn't some arcane UNIX tool like 'dump'.
 
Actually, it's more than that. Before OS X, if I was going to do a major OS upgrade and was worried about compatibility (hardware and software), I would drag and drop a copy of the System Folder to the desktop, separate the System and Finder files, and do the install. If there was a problem, I would drop the updated System Folder into the garbage and put the old System Folder back.

I was always confident that I could easily back out of any upgrade.
 
Strobe, I think you've lost the plot again and need to read some more. I will say you are nicer than the strobe that lost this argument a year or so ago.

You were initially talking twaddle about the filesystem; you got it wrong, and now you're blaming UNIX. Go in a straight line.

"dump" keeps backup dates, NetBackup and Legato keep backup dates, and you can "dump" a file or folder without any problem at all. You need to read more in detail and rant less. "A little knowledge is a dangerous thing"; disinformation makes for poor-quality conversation, so keep your comments current and correct.

I suggest you go and have a look at what MAE is doing before bleating on about it; it also ran faster on a Sparc 20 than on the Mac at the time.

The people out here who know about UNIX know that what you're saying is just plain ranting and not of much use. Apple obviously think your argument is incorrect, i.e. it's a UNIX, you know.

Yes, HFS+ works, but Apple want to be compatible with other OSes, especially, dare I say it, Microsoft, UNIX and Linux (i.e. this bulletin board) etc. We also want/need a backup that works from UNIX, as it should, by default.

It's so simple it's probably difficult for some people. Don't get lost in poor second-rate terminology, and get your nose in a book.

This bulletin board runs on Red Hat Linux; I bet they don't suffer from the tripe we're in. Well, I know they don't, as I use it as well.

"fsck" is what the original argument was about, and it should be made to work; it would tell you how fragmented your disk is, like it does on EVERY other UNIX. It doesn't.

Anyway, I am here to help and to ask questions of my own, not to teach one clot what UNIX is and what UNIX is not. I have better things to do.
 
1) I don't give a poop about arcane UNIX tools like "dump".

2) The fact remains that UFS has no equivalent to FileIDs and there is no way to emulate FileIDs on UFS unless you break the correlation between Mac and UNIX file paths. It's just that simple. If you reference a file by its ID then change that file's path on the UNIX-side, how the hell can the Mac path change accordingly?! Answer: it can't, it's impossible.

3) The UNIX methodology is everything-is-a-file-at-a-static-path. You don't seem to want to deal with this, instead pretending that UNIX can be anything and the only real limitations are the sucky apps which run on top. UNIX has limitations, legacy IMPOSES limitations. So you can stick that in a book and slam it.
 