
Cheap IDE on scsi bus solution?

Unknown_K

Well-known member
I'm not using my SCSI to IDE adapters at the moment since I still have working native drives and cards for PCI-equipped machines. To be honest, for a while I was scooping up Mac-compatible SCSI cards.

Anybody notice the prices of NuBus SCSI cards these days? Those cards are getting crazy expensive (Jackhammer and SEIV).

Somebody is always buying up the last surplus stock of stuff like the Acard 7720U and then blowing them out cheap. Few people bother to stock up since they assume the supply is unlimited, and then they're all gone and prices shoot up.

 

ChunkyPanda03

New member
All right, so I guess SCSI-to-SD is probably the way to go. But I had another idea: would it be possible to swap controller boards between a Quantum Fireball IDE drive and a SCSI hard drive?

 

johnklos

Well-known member
But I had another idea: would it be possible to swap controller boards between a Quantum Fireball IDE drive and a SCSI hard drive?
It's possible to swap boards, but chances are it won't work. The factory-made defect list is usually stored in flash memory on the controller board, and that defect list is much more important than later remapped sectors.

If you try, let us know of your results.

 

Gorgonops

Moderator
Staff member
The factory-made defect list is usually stored in flash memory on the controller board, and that defect list is much more important than later remapped sectors.
In addition to that, it would also depend on the two respective drives having exactly the same (or at least close enough) low-level formats and compatible embedded servo information. I know places like DriveSavers sometimes do board swaps for low-level recoveries, but so far as I know they do them between exactly the same models...

I mean, sure, I guess if you have, say, a Quantum Fireball 540T and its SCSI twin just lying around and you don't mind losing both of them, I'd love to hear what happens. But ultimately this isn't a very scalable solution. (While the IDE versions of various drives that have SCSI twins may well have sold in larger numbers back in the day, they're not going to be much easier to find today, and there's no reason to think their mechanisms were any more reliable.)

 

Franklinstein

Well-known member
Actually, you have that backwards. The SCSI2SD will likely be faster than most old SCSI drives, but any modern-ish IDE or SATA drive will be worlds faster than a SCSI2SD.

I have several ACARD SCSI-IDE adapters with IDE-SATA adapters in all sorts of various machines. They'll do an honest 80 to 90 MB/sec with an SSD on a 160 MB/sec U2W SCSI bus (tested in an Alpha-based API CS20). The 50 pin SCSI models like the AEC-7720U will saturate a 10 MB/sec SCSI bus easily with any decent IDE disk.

A SCSI2SD is fine, in my opinion, for systems that don't require tons of speed and which can benefit from external access (swapping SD cards, connecting SCSI2SD to USB to another computer), but if you can find a less expensive ACARD on eBay, that'd be preferable for speed.
Oh yeah? I figure a good SD controller with one of those high-speed SD cards would surely outperform an average HD, especially if you're not spending a ton of money on a fast HD. Most of these machines have problems with drives and/or partitions exceeding 128GB anyway; I'd stick with a 32GB flash card and call it good.

In addition to that, it would also depend on the two respective drives having exactly the same (or at least close enough) low-level formats and compatible embedded servo information. I know places like DriveSavers sometimes do board swaps for low-level recoveries, but so far as I know they do them between exactly the same models...

I mean, sure, I guess if you have, say, a Quantum Fireball 540T and its SCSI twin just lying around and you don't mind losing both of them, I'd love to hear what happens. But ultimately this isn't a very scalable solution. (While the IDE versions of various drives that have SCSI twins may well have sold in larger numbers back in the day, they're not going to be much easier to find today, and there's no reason to think their mechanisms were any more reliable.)
These drives only have so much controller memory available and are not reprogrammable without specific commands being invoked; none of them could autonomously reprogram their local memory. On a lot of drives, especially in the late '80s/early '90s, the program data was stored in an EPROM and couldn't be changed anyway.

Anything modern with SMART typically keeps everything related to defect management (among other parameters) in a special reserved area on the drive's media. I would imagine older drives do the same, except perhaps on ancient MFM-era drives that have the defect list printed on the top of the drive. Even then the disk driver/file system often keeps a record of bad sectors (at least SilverLining would map and reallocate bad sectors at format time or on-demand as errors arose).

Typically, with trial and error, it's possible to switch controller boards among any drives within the same family (so a Quantum Fireball TM with another TM, or a CX with another CX), whether SCSI or ATA, higher or lower capacity. It doesn't always work, especially if there were large revisions somewhere in the product's lifetime, but it does more often than not. Apparently the Fireball TM had a very poor reliability record in ATA guise but was fine with the SCSI controller (though honestly it was a lackluster drive regardless of interface); I have a couple of the 3.2GB ATA variant that aren't recognized by any host computer, which I'm keeping in case I get a bad SCSI version to swap boards with.

Generally only consumer-class drives (Quantum Fireball, Seagate Medalist, some IBM DeskStars) were sold with the same HDA on either ATA or SCSI; the high-end HDAs were only ever sold as SCSI ("real" Seagate Barracuda or Cheetah, IBM UltraStar, Quantum Atlas), though you could swap boards between narrow, wide 68, and wide SCA versions.

 

Gorgonops

Moderator
Staff member
I figure a good SD controller with one of those high-speed SD cards would surely outperform an average HD
I don't know the exact situation with the guys who make the SCSI2SD, but here is a thing to remember: most (if not all) open-source projects that use a generic microcontroller and home-grown firmware to communicate with SD cards can only use them in single-pin SPI mode, not the "native" mode that utilizes all four data pins. The reason, last I checked, is that native mode requires a paid license and a signed NDA. There are ways around this, for instance using a controller with embedded firmware that your open-source code can wrap around, but lacking that, you should figure the speed you'll get from an SD card in an open-source project will at best be less than a quarter of what it's rated for.

That said, I'd still think even in SPI mode an SD card should blow away any HD when it comes to access time. For casual use vs. streaming applications that's what you're more likely to notice.

Edit: just checked; the SCSI2SD website confirms they use SPI, which on the v5 hardware is limited to a 25 MHz clock, for approximately 3 MB/s throughput. The v6 supposedly supports up to 20 MB/s, though, which should theoretically compare well with any real SCSI solution relevant to a before-the-late-1990s machine.
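The SPI-versus-native arithmetic works out roughly like this. A back-of-the-envelope sketch: the 25 MHz figure is from the SCSI2SD site as quoted above, and the idealized one-bit-per-clock assumption ignores command and framing overhead, so real numbers land a bit lower.

```python
# Back-of-the-envelope SD card throughput estimate (idealized, no overhead).
SPI_CLOCK_HZ = 25_000_000  # v5 hardware SPI clock, per the SCSI2SD site

# SPI uses a single data line, so the raw ceiling is one bit per clock.
raw_bytes_per_sec = SPI_CLOCK_HZ / 8  # 3.125 MB/s ideal, ~3 MB/s in practice
print(f"SPI raw ceiling: {raw_bytes_per_sec / 1e6:.3f} MB/s")

# Native 4-bit SD mode moves four bits per clock, so even at the same
# clock it is 4x SPI, which is why SPI-only firmware caps out so low.
native_bytes_per_sec = raw_bytes_per_sec * 4
print(f"4-bit mode at the same clock: {native_bytes_per_sec / 1e6:.1f} MB/s")
```

Which lines up with the "less than a quarter of rated speed" rule of thumb above.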

 

Cory5412

Daring Pioneer of the Future
Staff member
Most of these machines have problems with drives and/or partitions exceeding 128GB anyway;
That's an IDE limitation. A SCSI machine with the right chain of adapters and a new enough OS (say, 7.6.1 or 8.1 so you can get HFS+) would have no problems with something bigger.

That said, I'm not really sure there's a compelling reason to put over a hundred gigs of storage in something that age, unless perhaps you're using it as a bridge server.

Oh yeah? I figure a good SD controller with one of those high-speed SD cards would surely outperform an average HD, especially if you're not spending a ton of money on a fast HD.
Gorgonops addresses this in better detail. The answer is kind of "no, not really" but the other part of the answer is "but, it doesn't actually matter that much anyway."

Like I said above, I use a SCSI2SD v6 in my Power Macintosh 8600/300 with OS 9.1 on it and it's "fine". It's probably more responsive overall than if I had a stock disk from the era in it, but there are faster/better AV-focused hard disks meant for capture destinations that would likely outdo my SCSI2SD in raw transfer rate.

On newer machines, CF is an option, and on anything with PCI slots, PCI SCSI, IDE, and SATA cards introduce a lot of potential. I'm not sure that potential matters an awful lot for anything pre-G3, but, again, my experience on a "high end" but still close-to-stock 604ev system is that the SCSI2SD is fine.

 

joethezombie

Well-known member
Just a quick update... After seeing Cory's results, I couldn't stop thinking there must be something wrong on my end. So I bought a new SD card and dd'd the old card to the new one. I also saw there was a firmware update on the codesrc website that specifically addresses write performance. So I updated the firmware, and along with the new SD card... poof! No drop-out at 60K.

(Attachment: IMG_2920.jpg)

Writes now stay speedy! This is in the SE/30, so the SCSI bus is saturated. I'll try it in the IIfx and with the Jackhammer tomorrow and see if the changes fixed everything up, but I'm pretty happy now.

 

joethezombie

Well-known member
Just to confirm: the new firmware and SD card combination has completely fixed the strange write performance I reported earlier. On the Jackhammer, I have sustained writes above 2500 KB/s at all sizes. Very happy now with the v6 card.

 

Gorgonops

Moderator
Staff member
Like I said above, I use a SCSI2SD v6 in my Power Macintosh 8600/300 with OS 9.1 on it and it's "fine". It's probably more responsive overall than if I had a stock disk from the era in it, but there are faster/better AV-focused hard disks meant for capture destinations that would likely outdo my SCSI2SD in raw transfer rate.
Honestly I'd love to see this conjecture tested in a side-by-side shootout to see if there is *any* drive, either native SCSI or, why not, a modern SATA SSD with the correct stack of adapters, that really can meaningfully outrun the v6 when connected to the internal bus. (Taking into account the continuous improvements in the v6's firmware since the last time there was a thread on this subject, and with the v6 equipped with a known-good high-end card.) I mean, sure, I have no illusions here: you could outrun it with either an accessory SCSI card and sufficiently exotic 68-pin drives or a native SATA card. But I'm pretty deeply skeptical you're going to get an improvement that actually matters on the built-in bus.

(I.e., the fastest benchmarks I've ever seen of the internal bus seem to show it topping out at around 80% of its theoretical 10 MB/s capacity, and it looks like with the right combination of firmware and SD card the v6 is pretty much there.)
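To put rough numbers on that estimate (this is just my arithmetic restating the figures above, not a new measurement):

```python
# Rough headroom math for the built-in SCSI bus. All figures are the
# estimates quoted in this thread, not benchmarks of any specific machine.
BUS_THEORETICAL_MB_S = 10.0    # internal bus ceiling on these Macs
BEST_OBSERVED_FRACTION = 0.8   # fastest benchmarks: ~80% of theoretical
V6_CLAIMED_MB_S = 20.0         # v6's quoted maximum, per the earlier post

realistic_ceiling = BUS_THEORETICAL_MB_S * BEST_OBSERVED_FRACTION  # ~8 MB/s

# The v6's claimed rate already exceeds what the bus can deliver, so on
# the internal bus the bottleneck is the bus itself, not the emulated drive.
print(realistic_ceiling, V6_CLAIMED_MB_S > realistic_ceiling)
```

In other words, once any device can sustain roughly 8 MB/s, a faster device can't buy you anything more on that bus.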

Obviously, for a PCI Mac, if you really *need* speed the brain-dead choice today would be a SATA card; forget bothering with per-drive adapters on either the internal bus or a SCSI card when direct SATA is an option that actually exists. It seems to me the only place where the stack-o'-adapters approach to putting a SATA SSD on a SCSI bus makes sense would be the extremely niche case of actually needing that sort of speed in a computer that already has a faster SCSI bus in it that *can't* be swapped out (i.e., a late NuBus machine or some proprietary UNIX workstation, etc.). Even then, though, I suspect there's some overlap between that domain and the capabilities of the SCSI2SD v6; if you're talking about, say, a Quadra-based video capture setup that used a disk array, the v6 *probably* about matches or betters the drives originally in the harness, and if those were good enough in 1993 then there may not be a reason to go nuclear on the problem.

 

Cory5412

Daring Pioneer of the Future
Staff member
but I'm pretty deeply skeptical you're going to get an improvement that actually matters with the built-in bus.
This is where I am too.

I would love an AV drive for my 8600, for the laughs, but in all practicality, the SCSI2SD v6 has been perfect for it, and my money on future disk purchases is almost certainly better allocated to more SCSI2SD v6es.

Mine would be better if I upgraded to the next card up in Samsung's line, or to the Sandisk card my friend is using.

A lot of this comes back to my deep personal skepticism that Mac OS 9 is meaningfully faster on anything faster than a G3/300. Even with that skepticism, it's not hard to believe that the experience of using a beige Power Mac of any kind is improved by having 0 ms seek times.

 

johnklos

Well-known member
Yeah, it all depends on what you want it to do. If you want something simple and fast enough for 98% of m68k Mac things, then a SCSI2SD is perfect. I just wish we could buy some more v6 ones now. AmigaKit says they'll be back in stock on 19 March 2019 (which is ten days before Brexit; I hope that doesn't complicate things).

Most of my machines sit around for weeks/months/years compiling. Over the years I've worn out many, many SD cards, plus a decent number of SSDs. Experience has pushed me more and more toward preferring Samsung, although I still don't trust either as much as a good old-fashioned spinning-rust disk. Of course spinning-rust disks fail, but they don't fail because you've written too much data, plus they usually give some indication via SMART before they fail completely. So my uses usually involve doing things that would eventually kill SD cards and sometimes even SSDs.

I do have some SCSI card readers, like the SCM Microsystems PCD-50B, but like the pre-v6 SCSI2SD they don't do synchronous mode, so they're fine for m68k Macs and VAXstations. That they can handle multiple cards at the same time helps: I put swap on a dedicated card and move the location of the swap partition from one part of the card to another every once in a while so it doesn't wear out completely.

But if I want a bootable drive which I can set up with things that either are directly obtainable right now or require lasting through tons and tons of disk writes, I'd still be doing SATA to IDE to SCSI.

 

Gorgonops

Moderator
Staff member
So my uses usually involve doing things that would eventually kill SD cards and sometimes even SSDs.
I am curious how the bar has been moving on that front, per my recent post about testing out an M.2-to-PATA sled in a G4 PowerBook. OS X Tiger doesn't know anything about TRIM, of course, and with only 512MB (or even 1GB, sigh) I know it must be pounding the snot out of swap. I remember horror stories about how quickly you could murder the early SSDs back when they started really being a thing about a decade ago; research it today and you get a lot of conflicting arguments about whether it's realistically a problem or not.

They sell SD cards that are specifically rated for constant/repeated overwrites in applications like continuous video surveillance; I actually picked one up the other day (it cost $3 more than the regular card) intending to use it in the next Raspberry Pi I set up. Of course, I did this having done no research into the basis for the durability claims, or into whether they would realistically matter for a "using it in a computer" profile...

 

johnklos

Well-known member
Some cards are meant for continuous, sequential writes, like cards meant for video capture. I wish there were more information about what's specifically different about them, since long sequential writes aren't quite the same as simply lots of writes, but I suppose it has to be at least a little better.

These days, SSDs, in my opinion, are robust enough to be used anywhere, regardless of whether the underlying software/OS supports TRIM. I've worn out plenty of Patriot, SanDisk, and other drives, and the two Samsung SSDs I've killed were a test of sorts, with workloads which are definitely not typical. So would I put a regular SSD in a non-TRIM Power Mac? Certainly, and I have: I have an mSATA SSD in an adapter in a 12" PowerBook G4.

SD cards, I imagine, would be pretty hard to kill on an m68k system and perhaps on an older, slower PowerPC system, too. 

 

Franklinstein

Well-known member
These days, SSDs, in my opinion, are robust enough to be used anywhere, regardless of whether the underlying software/OS supports TRIM. I've worn out plenty of Patriot, SanDisk, and other drives, and the two Samsung SSDs I've killed were a test of sorts, with workloads which are definitely not typical. So would I put a regular SSD in a non-TRIM Power Mac? Certainly, and I have: I have an mSATA SSD in an adapter in a 12" PowerBook G4.

SD cards, I imagine, would be pretty hard to kill on an m68k system and perhaps on an older, slower PowerPC system, too. 
I thought the whole point of modern SSDs was that the controller managed all of the TRIM-related stuff internally so that the host didn't have to be concerned with it. High-end CF cards do this too, don't they? That was always a big selling point for CF over SD: the internal controller was usually faster and more intelligent than the one in the average SD card, and thus you'd get better performance and longevity from CF vs. SD.

Anyway, SSDs aren't the first things to be killed by either poor internal management or oblivious hosts: laptop hard drives that use ramp-loading heads had problems with premature burn-out in the early days. The drive would unload the heads when idle after 30 seconds or so, and then the OS would perform some mundane management function every few minutes, forcing the hard drive to reload the heads for that process and then unload them when finished. So even if the machine was just sitting idle with no user processes running, the hard drive was still loading the heads at least once every few minutes. Rinse and repeat for long enough and the load-cycle rating was exceeded and the drives failed. Linux was apparently the biggest killer back in the mid-2000s.

If you're using a classic Mac OS-based system, turning off VM should result in a flash device, even a generic SD card, serving a long useful life. Unfortunately I don't know any good disk utilities (for flash or spinning media) that can reliably mark sectors bad as they arise; usually they try to remap bad sectors, but this doesn't help much if the entire cylinder is starting to have problems. It also greatly increases seek time while the drive tries to find the disparate sectors that used to be sequential; I'd rather the system just say 'hey, don't use this sector anymore' and skip it rather than try to remap it, but apparently that's too much to ask of "modern" computers.

 

Cory5412

Daring Pioneer of the Future
Staff member
Most of my machines are sitting around for weeks / months / years compiling.
There's "niche" and then there's "you might actually be the only one on the forum or in the scene doing that." This is closer to the latter, I think.

Anecdotally, I have really good experience with regular SSDs in my modern computers. The two SSDs I've had longest are still kicking, being relocated from machine to machine whenever I find a use case for something bigger and put the money into it.

Once I'm done with The Big Torrent I'll probably put my 180GB Intel 520 in the Mac mini, replacing the 2.5" 2TB disk I bought and installed in it.

My ThinkPad T400 still has a 2010-era 128GB Toshiba SSD that came out of a MacBook Pro in it.

Around ten years ago there was some math pertaining to the then-current SSDs that basically suggested most regular-duty SSDs should be capable of endurance along the lines of a couple of decades of full-effort use (like, continuously rewriting the full capacity of the SSD).
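That kind of endurance math looks roughly like this. Every number here is an illustrative assumption for a generic consumer drive, not a spec for any particular model:

```python
# Illustrative SSD endurance estimate. All figures are assumptions.
capacity_gb = 256       # drive capacity
pe_cycles = 3000        # program/erase cycles, a common consumer-NAND figure

# Total data the drive can absorb before wear-out, ignoring write
# amplification (which would reduce this in practice).
total_writes_tb = capacity_gb * pe_cycles / 1000  # 768 TB written

daily_writes_gb = 50    # a fairly heavy desktop workload
years = total_writes_tb * 1000 / daily_writes_gb / 365
print(f"~{years:.0f} years at {daily_writes_gb} GB/day")
```

Even with write amplification eating into that, the horizon for a normal workload is measured in decades, which matches the conclusion above.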

The one SSD I've personally used that has died was a SanDisk in my Mac mini. It was kind of tough because I applied some patches and the machine never came back. It's absolutely my own fault I wasn't running more frequent updates, and the one most critical piece of data there that seemed lost was eventually found in an image backup I had made the previous year.

Otherwise, I largely trust my SSDs to keep truckin' on, but I'm moving away from storing data directly on client computers, with the exception of anything stationary enough to keep a daily backup running on.

I thought the whole point of modern SSDs was that the controller managed all of the TRIM-related stuff internally so that the host didn't have to be concerned with it.
There's a lot more going on behind the scenes these days, but I've never explicitly heard that having the OS do TRIM is obsolete as a concept.

High-end CF cards do this too, don't they?
Not that I'm aware of. CFast might, but that's an entirely new (and, more modern) standard.

That was always a big selling point for CF over SD: the internal controller was usually faster and more intelligent than that used for the average SD card and thus you'd get better performance and longevity from CF vs SD. 
I can't actually say that I've ever heard this. Ten years ago, when cameras still shipped with CF, it was about performance and capacity. Most of the use cases for both kinds of cards involved a lot less random reading and writing than exists today. The cycle was almost always "write until full," then swap cards, read, delete, repeat.

'hey don't use this sector anymore' and skip it rather than try to remap it but apparently that's too much to ask of "modern" computers.
Can't speak to "flash" media such as USB pen/key flash drives or CF/SD/CFast, but on SATA SSDs there's a percentage of NAND that has been over-provisioned for exactly this purpose.

Given that SSD media generally features zero seek time, this shouldn't have any performance impact.

 