5g24: what's RAID

RAID lets you "combine" two or more hard drives into one. There are various RAID levels (0, 1, 3, 5, ...). Some of them help you gain throughput; others mirror a drive by creating an exact copy of one drive on a second drive in real time, so if one drive fails, the other one can kick in after just a little bit of downtime. RAID 5 combines three or more drives into one logical drive; part of the drives' capacity is used for redundancy information. If one drive in a RAID 5 setup fails, you usually don't lose any data and you can continue working. The whole thing just slows down quite a bit until you replace the broken drive.

Hope that helps some
Alex
 
Originally posted by schnupie
... others mirror a drive by creating an exact copy of one drive on a second drive in real time, so if one drive fails, the other one can kick in after just a little bit of downtime.

Sorry, but that's not true. With RAID 1 (mirroring), if one drive fails the system should take that drive offline and report an error, but it will keep on running.

Go take a look at this site: http://www.acnc.com/04_01_00.html it's a very good introductory site describing what all the different RAID levels are.

- nd
 
I think one of the most used RAID modes is RAID-5. Imagine you have, let's say, five 20 gig drives in RAID-5. When data gets written to this RAID array
(RAID stands for Redundant Array of Independent Disks), the data is striped over four of the drives, so it can - theoretically - be written four times faster than to one disk. So, let's say the data to be written is 00111011 in binary.
The first disk gets the first "0", the second disk the second "0", the third disk gets the "1", and the fourth gets another "1". Now, the fifth disk (the parity disk) writes a "0" if the sum of the four bits is even (as in this case: 0+0+1+1 = 2, which is 0 modulo 2, so it is even) and a "1" if it is odd. The rest is written the same way. (Strictly speaking, a dedicated parity disk is RAID 4; RAID 5 rotates the parity across all the drives, but the principle is the same.)

Now, you may ask, why would I waste 20 gig in this case on a parity drive? Easy. Imagine one of the four data drives fails... the data is lost, because the bits stored on that drive are lost... right? No. Thanks to the parity information it is easy to restore the disk (most RAIDs do this automatically as long as the failed drive is not physically damaged... they automatically unmount it, format it, and rebuild it). This rebuild process uses the parity bit. So let's say drive three fails in our example. We have the bits 00?1 with an even parity, so the missing bit HAS to be a 1. This system is good for storing very important data which has to be accessible all the time (company web servers, huge file servers, etc.). That's basically the advantage of RAID-5 (as long as only one drive fails... good ;).
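The even-parity trick above is easy to sketch in a few lines of Python (a toy illustration of the principle, not any real RAID implementation; the variable names are mine):

```python
# Four data "drives" holding one bit each, as in the 0011 example above.
data_bits = [0, 0, 1, 1]

# The parity drive stores 0 if the count of 1s is even, 1 if odd (an XOR).
parity = sum(data_bits) % 2           # 0+0+1+1 = 2 -> even -> parity is 0

# Drive three (index 2) fails; rebuild its bit from the survivors + parity.
surviving = [b for i, b in enumerate(data_bits) if i != 2]
recovered = (sum(surviving) + parity) % 2
assert recovered == data_bits[2]      # the missing bit had to be a 1
```

The same recovery works whichever single drive is lost, which is why one parity drive is enough to survive one failure.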
RAID 1 is called mirroring; in this case, data is written to two drives. Not striped, but "copied", so each drive holds an exact copy - a mirror - of the other drive. Advantage? You can read the data much faster if you read the stream from both disks: one block from disk A, the next block from disk B. RAID 2 to 4 are variations on striping with parity (at the bit, byte, and block level respectively), but the common levels are RAID 1 and RAID 5.
 
But my favorite is RAID 0, which I guess isn't really RAID at all since it's not a Redundant array. Oh, but that's why it's a 0!

I'm cheap, so RAID 0 lets me use only two hard drives to increase performance. You just gotta cross your fingers and hope you don't have a hard drive failure 'cause every other byte or bit or whatever is stored on the "other" hard drive!

-Rob
 
Actually, RAID 0 is the same as RAID 5 but without the parity disk, so the data is striped over the two disks (in your case), increasing read speed. But you are right, if one drive fails... but well, how many PC or Mac users DO actually make backups of non-RAID disk systems? Most people tend to say "Nah, this won't happen to me." The people I know who make regular backups (like me) experienced a dramatic hard drive crash once (like me) and had to suffer for it, especially when they lost business data (like me) ;)
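The round-robin placement RAID 0 does can be sketched like this (a toy model of the idea, not any real driver; the function name is made up):

```python
# Logical block n of a two-disk stripe set lands on disk (n % num_disks),
# at position (n // num_disks) on that disk.
def place(block: int, num_disks: int = 2):
    return block % num_disks, block // num_disks

# Consecutive blocks alternate between the disks, so a sequential read
# keeps both spindles busy at once.
layout = [place(b) for b in range(6)]
# -> [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```

It also makes the failure mode obvious: lose either disk and you lose every other block of every file.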

(FreeBackup placed in the Shut down items folder and all is well ;) )
 
Well, if you mean a hardware RAID controller? Much. First, you need at least three HDDs... SCSI or IDE (I don't know if Mac ATA RAID cards exist). Most hardware RAIDs come with a very small amount of RAM, so you should think about buying one or two more 32 MB EDO chips (I don't know if modern cards use SDRAM). A RAID controller costs around 200 to 600 bucks... depends on the quality you need... there are RAID controllers for huge file servers which easily handle a few dozen disks, but they are expensive as hell... I also can't name manufacturers off the top of my head, but I think Adaptec makes RAID solutions, Silicon Graphics and - IIRC - even IBM.

Software RAID, however, as it is included in Win2K Server, Win2K Advanced Server and Mac OS X 10.1 (hopefully, if this is no joke... I still think it might be an OS X Server-only thing), only costs you the money for the drives... so with three ATA drives you could already establish a RAID 5 system. If you have a SCSI system, it is even easier (and more expensive ;) ) since you don't "waste" three of your four ATA ports.

Damn, I still wish Apple would adopt modern PC IDE controllers... they can control up to six devices, rather than the two per bus on the Mac.

BTW: It is completely useless to create a RAID from different partitions on one physical drive, since it won't speed up anything! Some people insist on telling you that this is the way to go. They just don't get the idea behind RAID (well... the ID stands for Independent Disks ;) )
 
Six drives on an IDE controller? Man, my new-in-November Athlon-based board can't do that.

Partition a hard drive to make a RAID? LOL. That's a good one. Let me guess, a tech support guy probably tried to pass on that piece of wisdom!

-Rob
 
No kidding! I remember ATA-100 boards which could do this! Never used one, but I have seen them in action... IIRC both Asus and Gigabyte made such boards. Don't know what's current, I am not very well informed about the PC market... I have my "old" P II 450 which runs Visual Studio, and that's all I need for coding... ;)
 
You are right of course... the computer should continue to operate if you run RAID 1. It usually does, but on some machines - in particular with Microcrap's product line - the system and especially databases start running very sluggishly or even crash. This happens more often with software RAIDs than with hardware RAIDs.
 
Originally posted by rharder
..'cause every other byte or bit or whatever is stored on the "other" hard drive!
In most RAID 0 implementations I've seen/done, the 'whatever' is a value you select when you initialize the RAID set for the first time. It's called the 'chunksize' and determines how many bytes are written to one drive before moving on to the next. The optimum value for chunksize very much depends on what it's being used for: e.g. a filesystem, or a raw data partition (like Oracle). The size of the application/filesystem 'blocksize' being used (which can also be tuned to suit the type of data being processed) should determine the value of 'chunksize'.

For example, when writing out data you get good performance if the write operation occurs as a single contiguous write, which filesystems will generally try to do. But you get even better performance if that write is split over multiple spindles, because they can all write a chunk each at the same time.
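That chunksize mapping can be sketched in a few lines (the 64 KB chunksize and four-disk count are made-up example values, and the function is mine, not from any real volume manager):

```python
CHUNK = 64 * 1024   # example chunksize, chosen to match a filesystem blocksize
DISKS = 4           # number of members in the stripe set

def locate(offset: int):
    """Map a logical byte offset to (disk, byte offset on that disk)."""
    chunk_no = offset // CHUNK          # which chunk of the logical volume
    disk = chunk_no % DISKS             # chunks rotate round-robin over disks
    chunk_on_disk = chunk_no // DISKS   # how far down that member disk it sits
    return disk, chunk_on_disk * CHUNK + offset % CHUNK

# A contiguous 256 KB write covers one chunk on each of the four members,
# so all four spindles can be writing at the same time.
```

Pick a chunksize that lines up with the application's blocksize and each logical write turns into whole-chunk writes on the members, which is where the parallelism comes from.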

That's the general idea anyway.

- nd
 
Originally posted by PowerBookDude
Do you need a RAID controller for a RAID or can you do with out it?

Both :). If the OS supports software RAID natively, then you can use that (whatever its implementation happens to be), or you may be able to use/buy a layered product that implements it (and works on your OS). For software RAID you don't need any special hardware, though it often makes more sense to have a couple of controllers with RAID members spread across them, so that you build protection from controller failure into your design and also get the benefit of increased controller bandwidth.

The positive attribute of software RAID is that it can be very cheap to implement. Its negative attribute is that it can be very CPU intensive, as the system has to do all the legwork in organising the I/O. On a box running a CPU-bound application this can be a big problem. Also, it can be a b*stard to restore in a disaster recovery situation because it adds extra levels of complexity. For that reason alone, I have used it very, very sparingly.

Hardware RAID is different. You need dedicated hardware to pull it off. It can be either an HBA (controller card) that supports it (the cheap option) or an intelligent device that sits remotely out on the bus somewhere, stringing together disks and presenting them to the OS at different LUNs (SCSI Logical Unit Numbers). Good examples of this kind of hardware are Compaq's StorageWorks kit - the HSZ family of controllers.

The positive attribute of hardware RAID is that it offloads all the work from the host system. Depending on the hardware, it can be very easy to manage (software RAID is notoriously over-complex) and can simplify any disaster recovery procedure you may have. Hardware RAID often has extra bells and whistles, like holding disks in a 'spareset' then automatically ejecting duff disks and recovering using a spare when faults occur (this usually only works with mirrors). You may also get dedicated, mirrored cache and cache battery backup. All depending on how deep your pockets are, of course :).

The negative side to SCSI RAID hardware is that for most (if not all) implementations, you can't RAID over multiple buses because the RAID intelligence is sitting out there on the bus on the wrong side of the controller :). Therefore you have a single point of failure in the HBA.

A growing number of new platforms now get round this by providing support for something called Multi-Pathing. This is where the RAID controller is capable of being physically connected to the system via multiple buses, and the OS knows how to map the devices on a bus such that the device's filename is independent of the bus it's located on.

Hmm, I think I'll stop here, before I completely disappear down a rat hole and start talking about World Wide ID's and Fibrechannel as well.

- nb
 
1. I find a spider.
2. I get the can of RAID and spray the spider.
3. Spider is dead.


Never had it fail me yet! LOL


DJ XTC
 
Ok, so now for another question.


1. What RAID should I use? I will have two 80GB drives in a tower running Mac OS X Server. And I want the second drive to mirror the first.


2. So is it better to have hardware RAID? Are there any RAID cards that work with Mac OS X Server? If so, could you tell me?


Thank you!

PBDude
 
Originally posted by PowerBookDude
1. What RAID should I use? I will have two 80GB drives in a tower running Mac OS X Server. And I want the second drive to mirror the first

That would be RAID level 1.

2. So is it better to have hardware RAID? Are there any RAID cards that work with Mac OS X Server? If so, could you tell me?
Hardware RAIDs do not use the CPU(s) to perform their tasks. Therefore software RAIDs were out of the question for a long time. The situation has changed a bit due to much-increased processor capabilities. Still, on high-performance servers under heavy load, a hardware RAID is the better choice.
A RAID controller for Mac OS X? Difficult. I read somewhere that Adaptec and ATTO have released some drivers for Mac OS X.

HTH

cu:Stray
 