Trash80toHP_Mini

SoftRAID Level 5 for OS9, suggestions, possibilities?


I figure this is a peripherals topic.

 

I'm looking into setting up a RAID Level 5 array for backing up my data in the future and looking for a little help. So far I've only found DiskArray 1.2a from Optima Technology, the readme was last updated at the very end of 1995, so I'm wondering if it will work under OS9 or not.

 

If anyone knows the answer, or has a suggestion for another SoftRAID package that will do RAID 5, that would be wonderful.

 

I'm looking for RAID implemented in software because controller based RAID systems are vulnerable to controller failure, which is more of a concern to me than speed. If I understand it correctly, my data should be able to be rebuilt/repaired if any one of the three drives in a minimum RAID Level 5 setup should fail. If it's software based, I should be able to use any compatible controller card and these can be regular SCSI Cards instead of dedicated SCSI RAID controllers?
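That rebuild property is exactly the point of RAID 5's parity. A minimal Python sketch of the idea (toy data, not any real implementation): each stripe stores data blocks plus a parity block equal to their XOR, so if any single drive dies, its block is recoverable by XOR-ing the survivors.

```python
# Toy illustration of RAID 5 parity: parity = XOR of the data blocks,
# so any one lost block can be rebuilt from the others.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks for one stripe (minimum 3-drive setup), plus parity.
d0 = b"BACKUPSET-BLOCK0"
d1 = b"BACKUPSET-BLOCK1"
parity = xor_blocks(d0, d1)

# Simulate losing the drive holding d1: rebuild it from d0 and parity.
rebuilt = xor_blocks(d0, parity)
assert rebuilt == d1  # the lost block, recovered
```

The same XOR trick generalizes to any number of data drives per stripe, which is why RAID 5 tolerates exactly one drive failure.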

 

I've got a stack of these cute little 2.5" U160 Savvio Server Drives and I feel the need to get my hands dirty in software for a change. Getting something like that set up ought to be a fun way to do some temporary backup with redundancy in between massive DVD burning sessions, I HATE those! This might be fun, but if it doesn't work out, I can keep dooin' what I'm doing now, backing up all the data onto pairs of these little server drives for redundancy.

 

Honest question, does it sound like I have absorbed the least of clues about this stuff or should I just stick to piling stacks of paired backup drives and then do the marathon DVD-RW burning session?

 

It's late, I'm tired and I just wanna know which way to go. :I


I can't speak to what's available third party, but I would advise that "fault tolerance" (RAID, in most cases) is not backup. In addition, software-based RAID solutions have long been considered to be insufficient in performance. Until a few years ago, it was thought of as being so bad that it wasn't even worth making the software to do it easily available.

 

This has changed more recently and Windows 8/2012 and Solaris/FreeBSD have slightly more flexible versions of this idea built in, complete with the ability to randomly add or remove disks or change redundancy levels, something many hardware RAID solutions make not-easy. (Or make you leave the OS to do, if you don't have the management tools or you're booted from a RAID volume.)

 

Part of this is that server CPU horsepower has gone up a lot in the past few years, and part of it is that a lot of groups now have dedicated storage servers, which serve volumes up via iSCSI to the actual application and infrastructure servers that are using that capacity. (file/mail/database/web etc.)

 

In terms of the physical arrangement: I wouldn't bother with pulling backplanes out of other servers. I'd either cable the disks up manually inside a standalone scsi tower, or I'd buy a specific disk shelf.

 

Unfortunately, even if the software does exist, and your disks are "fast," you may not get very good performance out of it. RAID controllers you can buy today often have PowerPC or similar RISC chips on them which can perform the necessary calculations for RAID faster than any Power Macintosh G4 could ever hope to do. And they have the benefit of not also needing to run Final Cut, Photoshop, or even Word at the same time.

 

The following is the part of my post which is "not fun."

 

But also, if this is an OS 9 system with USB, then I honestly recommend the "keep it simple" method. I literally own five of these and they've all been perfect, and it was much more cost-effective than the tape drive I wanted to get.

 

If you want firewire: http://www.newegg.com/Product/Product.aspx?Item=N82E16822236542

Firewire, 2TB: http://www.newegg.com/Product/Product.aspx?Item=N82E16822236543

 

Okay, fun again:

Do you have a copy of Retrospect for Mac OS 9? Have you considered looking into the cost of a used DAT72 or DAT160 mechanism? They can connect via USB as well as SCSI and SAS/SATA, and they natively fit 36 and 80 gigs on a cartridge, respectively. And although those gigs will take a painfully long time to write out, you won't need to be present at the computer. It's potentially a very huge win if you have less than a hundred or so gigs of stuff, are willing to take a chance on some used equipment (because new pricing on DAT equipment, which does still exist, is insane), and you'll get to be that guy who does tape backups.


THX, but five of those suckers would be my retro-computing budget for a whole freakin' year. :-/

Retrospect I've got, but from what I recall, tape backup is prone to loss at failure of the tape drive upon which it was written? :?:

 

BTW, which is faster, U160 or FW400? IDK offhand, and could the faster of the two make any appreciable difference for my needs anyway? I'd rather be able to swap out the Trio and U160 cards as needed anyway. A USB card, an ATA card and my 1080p/1600x1200 capable VidCard with DVD decoder daughtercard are all I really need to keep in this box for anything other than backup time.

 

Luckily though, I don't have nearly the amount of data to back up that you have. Used server Drivelets I can afford to buy in quantity and I've got three or four PCI SCSI Cards, including that great two channel U160 Card and a handful of those HQ UltraSCSI cables on hand already.

 

Other than the fact that performance is not in any way necessary for my backup needs, I think I've already decided that two pairs of Savvios for a redundant backup pair beats fooling around with RAID 5 by a long shot. Thanks for confirming that for me, such was my sneaking suspicion.

 

I've got a drive to drive system up and running already and I just fabbed a packaging solution that rescues both drives from their precarious perch atop the prostrate MetalMiniTower's PSU. I've settled on a plan for building a cold swap four bay 2.5" SCA drive setup that will sit down atop the MetalMiniTower-G3 with its own PSU/Cooling equipment on board. The backup drives I can store within my standard storage stack mailing boxen, one set to keep here and the redundant set offsite at my sweetie's place.

 

With the backup drives out of the way, I can play striped UltraSCSI RAID Array performance games on the PCI toys and Fast Wide SCSI II on the JackHammered NuBus toys.

 

I think this will work out well for me, I can do the DVD/RW Burnathon thing before the next backup session to the same drive sets.

 

That approach is way more my Knuckle Draggin' Hardware Hacker's speed anyway. :-/


haha, well you'd only need one, at least to start, and I've found that Target has good deals on external storage from time to time. Plus, if performance isn't too important, then the speed of USB should be fine, especially if it's a system you're willing to let do its backup thing overnight.

 

but from what I recall, tape backup is prone to loss at failure of the tape drive upon which it was written?

 

In general with both DAT and LTO, you shouldn't need to worry about having a specific mechanism. (like, your specific serial number) -- but you do need to have a type of mechanism that is compatible with reading your tapes. LTO is write-compatible one generation backward and read-compatible two generations backward. DAT has a similar system but it's slightly more erratic.
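The LTO rule above is regular enough to express as a tiny check. A sketch in Python, encoding the rule exactly as stated (note this is illustrative only; some later generations, e.g. LTO-8 onward, relaxed the two-back read guarantee):

```python
# Encodes the stated LTO compatibility rule: a drive writes its own
# generation and one back, and reads up to two generations back.

def can_write(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 1 <= tape_gen <= drive_gen

def can_read(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 2 <= tape_gen <= drive_gen

# e.g. an LTO-5 drive reads LTO-3 tapes but not LTO-2,
# and writes LTO-4 tapes but not LTO-3.
assert can_read(5, 3) and not can_read(5, 2)
assert can_write(5, 4) and not can_write(5, 3)
```

The practical upshot is the one in the post: you need a compatible *type* of mechanism for restores, not the original unit.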

 

In particular, this is actually one of the things that makes tapes more desirable than hard disks. (Although I'd still consider having rotating single disks better than using RAID at all in a backup system, at least in a desktop-level backup system. It's different of course if you're building a larger d2d2t system.)

 

I think I've already decided that two pairs of Savvios for a redundant backup pair

 

In RAID, or as independent disks but with the data duplicated, say using Retrospect or just by dragging and dropping the files? (or, even better, using drive "A" one week and "B" the next, for a little bit of version history?)


No RAID for me, thanks, you've cured me of that particular twinge of madness! :lol:

 

I've already assembled a rudimentary, but far less precarious setup for backing up the borked QS'02's drives to my stack of cute little Savvios over in the MMTG3 I/O Hack thread.

 

[attached image: the backup drive setup]

 

I'm more the KISS approach Neanderthaler, drag it and drop it into folders so's they hold appropriately huge chunks of data and then drag-n-drop those onto appropriately huge partitions on another HDD and let 'er run for as long as . . .

 

Retrospect is for IT backup gurus such as yourself, tried it that way a few times . . .

. . . went right straight back to draggin' the knuckles again! Heh! ;D

I can't speak to what's available third party, but I would advise that "fault tolerance" (RAID, in most cases) is not backup. In addition, software-based RAID solutions have long been considered to be insufficient in performance. Until a few years ago, it was thought of as being so bad that it wasn't even worth making the software to do it easily available.

 

This has changed more recently and Windows 8/2012 and Solaris/FreeBSD have slightly more flexible versions of this idea built in, complete with the ability to randomly add or remove disks or change redundancy levels...

 

Uhm, the standard Linux tool for handling software RAID devices (mdadm) has been around since 2001. And the LVM framework that lets you do NetApp-esque tricks like resizing volumes when you add disks to storage pools has been around even longer. (Granted, some of the filesystem-level support for the really fancy stuff, like growing volumes without reformatting, taking live snapshots, etc, is "only" about 8-10 years old, depending on which filesystem we're talking about.)

 

All this stuff has been in RedHat and friends (IE, the "industrial strength" Linux distributions) as out-of-the-box features for over a decade, so I'm not clear what the heck you're talking about. Practically every multi-disk NAS device sold for home use based on Linux, which is most of them, uses software RAID and LVM; only very expensive models have any hardware acceleration. And I've been using roll-your-own RAID 5s in home-built servers for that "over a decade" span. (I wouldn't recommend it for a desktop, but it works fine on a *file server* that doesn't need its CPU for much else.) I was going to assume you just meant "for desktop operating systems" but then you mentioned Solaris and FreeBSD, so... ?

 

It's also probably worth noting that most cheap "RAID cards," or the "RAID" function built into many motherboards, are actually software RAIDs too. *Occasionally* they'll have some rudimentary acceleration that helps distribute reads/writes when run in the simplest RAID0/1 modes, but usually they're just a plain IDE/SATA controller with a little BIOS ROM that provides an INT13H driver sufficient for recognizing a "this is part of a RAID set" header on drives associated with a RAID container, "assembling" them into a single virtual device, and making it look like a single drive in Real Mode so you can use it as a boot device. The Windows (or whatever) driver for the card needs to take over the software RAID-iness once it's booted. In that form Windows has supported "Software RAID" for positively forever. (Back to the 9x versions. The old Highpoint HPT370 chipset was one of the original poster children for this sort of device.)
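The header-and-assemble trick described above is simple enough to mock up. Here's a toy Python sketch with an entirely made-up metadata format (real fakeRAID headers are binary structures, not JSON): each member disk carries a small trailer naming its set and position, and "assembly" is just matching and ordering those trailers before presenting one virtual device.

```python
# Hypothetical illustration of fakeRAID-style member headers: a tiny
# metadata record at the end of each member disk, used to find and
# order the members of a set.
import json

HDR_LEN = 64  # pretend the last 64 bytes of each member hold metadata

def make_member(set_id: str, idx: int, data: bytes) -> bytes:
    hdr = json.dumps({"set": set_id, "idx": idx}).encode().ljust(HDR_LEN, b"\0")
    return data + hdr

def assemble(members: list, set_id: str) -> bytes:
    found = []
    for m in members:
        hdr = json.loads(m[-HDR_LEN:].rstrip(b"\0"))
        if hdr["set"] == set_id:
            found.append((hdr["idx"], m[:-HDR_LEN]))
    # Concatenate members in index order -> one "virtual drive".
    return b"".join(data for _, data in sorted(found))

# Members can be discovered in any order; the headers sort it out.
disks = [make_member("vol0", 1, b"WORLD"), make_member("vol0", 0, b"HELLO ")]
print(assemble(disks, "vol0"))  # b'HELLO WORLD'
```

The BIOS shim and the OS driver both do essentially this scan; the CPU still does all the actual RAID work.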

 

Anyway, re this:

 

I would advise that "fault tolerance" (RAID, in most cases) is not backup.

 

That is exactly correct. Strictly speaking a "backup" should consist of disks/tapes/whatever that are only accessed to write the copy of the data and to read it back in case of a disaster. (Ideally of course you have at least two sets, so if disaster strikes in the middle of you *taking* a backup, destroying both the original and the backup you're overwriting, you have the last set to work from.) If your "backup" is online and writable when your computer gets infected with the Gawdknowswhat Worm and it decides to overwrite all your data with a billion copies of a bad photoshop of Mr. Ed with Britney Spears's face, it's just as gone as if you didn't have it "backed up" or on "redundant storage".
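The "at least two sets" rule is easy to automate even at the drag-and-drop level. A minimal Python sketch (paths are made up; adjust to your own volumes) that alternates between two backup disks by week, so the set being overwritten is never your only copy:

```python
# Sketch of a two-set rotating backup: even ISO weeks write to set A,
# odd weeks to set B, so a failure mid-backup leaves the other set intact.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path("/Volumes/WorkDisk/Data")                      # hypothetical source
SETS = [Path("/Volumes/BackupA"), Path("/Volumes/BackupB")]  # hypothetical sets

def pick_set(today):
    # Even ISO week number -> first set, odd -> second set.
    return SETS[today.isocalendar()[1] % 2]

def run_backup(today=None):
    target = pick_set(today or date.today()) / "backup"
    if target.exists():
        shutil.rmtree(target)        # only THIS set's old copy is overwritten;
    shutil.copytree(SOURCE, target)  # the other set stays safely untouched
    return target
```

Keeping one set offsite (or at least unplugged) between runs is what turns this from "redundant storage" into an actual backup.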


Maybe time for a sanity check?

 

In addition, software-based RAID solutions have long been considered to be insufficient in performance. Until a few years ago, it was thought of as being so bad that it wasn't even worth making the software to do it easily available.

 

Ok, I've got a lot to say about these statements, some of which was covered by Gorgonops, but mostly I can't reconcile it with the later statement:

 

But also, if this is an OS 9 system with USB, then I honestly recommend the "keep it simple" method.

 

The initial argument is don't do software RAID because of performance, but then you recommend USB 1.1?

 

Trash's initial concern about controller failure is extremely valid and, IMO, one of the best reasons to use software RAID, and he acknowledges the performance tradeoffs. I see nothing wrong with the initial premise, although if you've got a data set small enough to fit on a single disk (or easily split between disks), avoiding RAID entirely would be preferable; then doing as Cory suggested and manually mirroring data between disks with Retrospect, file copies, or whatever you find easiest is going to be a better solution. This is OS 9 we're talking about, with HFS, which does have a tendency to eat the filesystem and lose data during power failures, OS hangs/crashes, or just whenever. Old HFS isn't journaled or anything, so putting all your eggs in one volume isn't that hot, even if the underlying storage is fault tolerant. And for that reason, probably the single best thing you could do for data safety is put it on a more modern system.


Interesting, if not entirely ironic. You're saying that I'd be better off using one of my comatose G4s to do the backup of my OS9 G4's data onto media formatted and written to a redundant pair of Savvios under OSX . . . :O

 

p.s. thanks for confirming my original assumptions and concerns.


Or another option would be, if you could scrape up the money, to lay down the cash for something like a low-end Netgear ReadyNAS and push your files to that for "warm storage". Last I checked they were still using a version of Netatalk that worked with OS 9.

 

(For archival backups the ReadyNAS boxes have a USB port and a "Backup" button on them that can snapshot the online storage to a portable USB disk for safekeeping, thus helping make "multi-level" backup strategies easier for even knuckledraggers to handle. I'm sure plenty of other brands have similar features, I just happen to be vaguely familiar with the ReadyNAS line.)

 

You also have that ATOM board you keep talking about. That's the perfect sort of hardware to roll your own NAS with one of the many dedicated USB key/LiveCD-based NAS OS distributions floating around. All you'll need is a couple of big disks (two 2TB drives in a software mirror enough?) and a box. Granted this may get you in deeper than you want to be with the technical details, which is why the prefab boxes like the ReadyNAS exist.

 

In any case, talking to network storage should be considerably faster than using USB 1.1 as long as your home network supports at least 100Mbit. I guess if you're talking to a Beige G3 then... well, heck, 10Mbit Ethernet should at least make it about a tie.

Or another option would be, if you could scrape up the money, to lay down the cash for something . . .

Negatory on that outlay of bread, toothsome comrade. :-/

 

I've already got eight extra $2 Savvios on hand, ATA/whatever on the Tempo Trio, Fast SCSI II and 1080p/DVD decoding on the cards already in the MetalMiniTower G3/DT edition running under OS9.

 

Once all the data from the HDD from the QS'02 with the beige-borked drive blocks is backed up, it's DiskWarrior time! If all goes well, the repaired, backed up drive goes into the FireWire enclosure and onto the Pismo500, my temporary main workstation. It's booting into OS9 on the tiny drive from the PDQ for now, the QS'02 (partition size limitations researched this time around) as the main boot drive for the ClamshellCutie makeover of Beater3.

 

I'll worry about long term backup solutions after I get the QS'02 a new PSU and a whole bunch of other things on the to do/to pay for list accomplished.

 

Chalk the RAID 5 notion up to the Silly Savvio Hacking Session that made my Thanksgiving staycation so much fun.

 

The best thing to come out of that bit of idle time madness is the backup config in the piccie above that lets me get the MMT/G3 off of the rubber feets on its "side" and back upright on the rubber feets on its "Bottom" so it stops hogging so much desktop real estate!

 

I've come to terms with the fact that I've always been a maker/modifier of real things IRL. I'm a hardware hacker from back in the day when I did it for utility and now ISL (In Surreal Life) for fun and the occasional feedback I get in my readonkulous hacks threads here at the barracks.

 

I've always wanted to learn electronics and to get under the hood of Linux, but I'm prioritizing the dreams for the next thirty years or so. That's if I'm blessed to live up to the longevity standards set by my forebears.

 

The ATOM board is my training wheeled vehicle for learning the GIMP for when I can snag an inexpensive board/proc to actually let me RUN the GIMP.

 

Meanwhile, I'm trying to downsize the IRL dream projects to smaller shop tools and facilities. While I'm at that, I'm trying to optimize that setup for hacking playtime here ISL. :)


I'd probably handle OS 9 backups with an external FireWire hard drive and maybe some simple entry-level backup software like Retrospect if there was enough data or frequent enough backups to merit automation (the province of "IT backup gurus" would be something more like TSM).

