
17 hours ago, Franklinstein said:

Most of these machines have problems with drives and/or partitions exceeding 128GB anyway;

That's an IDE limitation. A SCSI machine with the right chain of adapters and a new enough OS (say, 7.6.1 or 8.1 so you can get HFS+) would have no problems with something bigger.

 

That said, I'm not really sure there's a compelling reason to put over a hundred gigs of storage in something of that age, unless perhaps you're using it as a bridge server.

 

17 hours ago, Franklinstein said:

Oh yeah? I figure a good SD controller with one of those high-speed SD cards would surely outperform an average HD, especially if you're not spending a ton of money on a fast HD.

Gorgonops addresses this in better detail. The answer is kind of "no, not really" but the other part of the answer is "but, it doesn't actually matter that much anyway."

 

Like I said above, I use an SCSI2SD v6 in my Power Macintosh 8600/300 with OS 9.1 on it and it's "fine". Probably more responsive overall than if I had a stock disk from the era in it, but there are faster/better AV-focused hard disks meant for capture destinations that would likely outdo my SCSI2SD in raw transfer rate.

 

On newer machines, CF is an option, and on anything with PCI slots, PCI SCSI, IDE, and SATA cards introduce a lot of potential. I'm not sure that potential matters an awful lot for anything pre-G3, but, again, my experience on a "high end" but still close-to-stock 604ev system is that the SCSI2SD is fine.


Just a quick update... After seeing Cory's results, I couldn't stop thinking there must be something wrong on my end.  So I bought a new SD card and dd'd the old card over to it.  I also saw there was a firmware update on the codesrc website that specifically addresses write performance, so I updated the firmware as well.  Poof!  No drop-out at 60k.
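For anyone wanting to repeat the card-to-card copy, here's a minimal sketch of the dd-and-verify step. The file names below are stand-ins for the two cards (with real hardware you'd substitute device paths like /dev/sdX, after double-checking with lsblk or diskutil list, since dd will happily overwrite the wrong disk):

```shell
# Stand-in for the old card; in practice this would be a device path.
printf 'stand-in for the old card' > old.img

# Clone block for block. A larger block size than the 512-byte default
# speeds real device copies up considerably.
dd if=old.img of=new.img bs=1M status=none

# Verify the clone before trusting it.
cmp old.img new.img && echo "clone verified"
```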

 

[Attached image: IMG_2920.jpg]

 

Writes now stay speedy!  This is in the SE/30, so the SCSI bus is saturated.  I'll try it in the IIfx and with the Jackhammer tomorrow and see if the changes fixed everything up, but I'm pretty happy now.


Just to confirm, the new firmware and SD card combination has completely fixed my strange write performance reported earlier.  On the Jackhammer, I have sustained writes above 2500 KB/s at all sizes.  Very happy now with the v6 card.

16 hours ago, Cory5412 said:

Like I said above, I use an SCSI2SD v6 in my Power Macintosh 8600/300 with OS 9.1 on it and it's "fine". Probably more responsive overall than if I had a stock disk from the era in it, but there are faster/better AV-focused hard disks meant for capture destinations that would likely outdo my SCSI2SD in raw transfer rate.

Honestly I'd love to see this conjecture tested in a side-by-side shootout to see if there is *any* drive, either native SCSI or, why not, a modern SATA SSD with the correct stack of adapters, that really can meaningfully outrun the v6 when connected to the internal bus. (Taking into account the continuous improvements in the v6's firmware since the last time there was a thread on this subject, and with the v6 equipped with a known-good high-end card.) I mean, sure, I have no illusions that you could no doubt outrun it with either an accessory SCSI card and sufficiently exotic 68 pin drives or native SATA card, but I'm pretty deeply skeptical you're going to get an improvement that actually matters with the built-in bus.

(IE, the fastest benchmarks I've ever seen of the internal bus seem to show it topping out at around 80% of its theoretical 10MB/s capacity, and it looks like with the correct combination of firmware and SD card the v6 is pretty much there.)

Obviously for a PCI Mac if you really *need* speed the brain-dead choice today would be a SATA card, forget bothering with per-drive adapters on either the internal bus or a SCSI card since direct SATA is an option that actually exists. It seems to me the only place where the stack-o-dapters approach to put a SATA SSD on a SCSI bus makes sense would be the extremely niche case of actually needing that sort of speed in a computer that already has a faster SCSI bus in it that *can't* be swapped out. (IE, a late Nubus machine or some proprietary UNIX workstation, etc.) Even then, though, I suspect there's some overlap between that domain and the capabilities of the SCSI2SDv6; if you're talking about, say, a Quadra-based video capture setup that used a disk array the v6 *probably* about matches or betters the drives originally in the harness; if those were good enough in 1993 then there may not be a reason to go nuclear on the problem.

31 minutes ago, Gorgonops said:

but I'm pretty deeply skeptical you're going to get an improvement that actually matters with the built-in bus.

This is where I am too.

 

I would love an AV drive for my 8600, for the laughs, but in all practicality, the SCSI2SD v6 has been perfect for it, and my money on future disk purchases is almost certainly better allocated to more SCSI2SD v6es.

 

Mine would be better if I upgraded to the next card up in Samsung's line, or to the Sandisk card my friend is using.

 

A lot of this comes back to my deep personal skepticism that Mac OS 9 is meaningfully faster on anything faster than a G3/300. Even with that skepticism, it's not hard to believe that the experience of using a beige Power Mac of any kind is improved by having 0 ms seek times.


Yeah, it all depends on what you want it to do. If you want something simple and fast enough for 98% of m68k Mac things, then a SCSI2SD is perfect. I just wish we could buy more v6 boards right now. AmigaKit says they'll be back in stock on 19 March 2019 (which is ten days before Brexit; I hope that doesn't complicate things).

 

Most of my machines are sitting around for weeks / months / years compiling. Over the years I've worn out many, many SD cards, plus a decent number of SSDs. Experience has pushed me more and more towards preferring Samsung, although I still don't trust either as much as a good, old fashioned spinning rust disk. Of course spinning rust disks fail, but they don't fail because you've written too much data, plus they usually give some indication via SMART before they fail completely. So my uses usually involve doing things that would eventually kill SD cards and sometimes even SSDs.

 

I do have some SCSI card readers like the SCM Microsystems PCD-50B, but like the pre-v6 SCSI2SD, they don't do synchronous mode, so they're fine for m68k Macs and VAXstations. That they can handle multiple cards at the same time helps - I put swap on a dedicated card and move the location of the swap partition from one part of the card to another every once in a while so they don't wear out completely.
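Roughly, the swap-shuffling looks like the sketch below: map a date to a fresh offset on the dedicated card so no one region takes all the write wear. All the numbers here are illustrative, not my actual setup:

```shell
SWAP_MB=256        # size of the swap area, in MB (assumed)
SLOTS=8            # number of positions to rotate through (assumed)

# Map an ISO week number to a megabyte offset on the card, so the swap
# area lands on a fresh region roughly once a week.
slot_offset() {
  week=${1#0}                      # strip a leading zero so "09" parses as 9
  echo $(( (week % SLOTS) * SWAP_MB ))
}

echo "this week's swap offset: $(slot_offset "$(date +%V)") MB"

# One would then re-create the swap area at that offset (destructive,
# so left as comments):
#   dd if=/dev/zero of=/dev/sdX bs=1M seek=$(slot_offset "$(date +%V)") count=$SWAP_MB
#   ...then mkswap/swapon against a partition or loop device at that offset
```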

 

But if I want a bootable drive I can set up with things that are either directly obtainable right now or need to survive tons and tons of disk writes, I'd still be doing SATA-to-IDE-to-SCSI.

31 minutes ago, johnklos said:

So my uses usually involve doing things that would eventually kill SD cards and sometimes even SSDs.

I am curious how the bar has been moving on that front, per my recent post about testing out an M.2->PATA sled in a G4 PowerBook. OS X Tiger doesn't know anything about TRIM, of course, and with only 512MB of RAM (or even 1GB, sigh) I know it must be pounding the snot out of swap. I remember horror stories about how quickly you could murder the early SSDs back when they started really being a thing about a decade ago; research it today and you get a lot of conflicting arguments about whether it's realistically a problem or not.

They sell SD cards that are specifically rated for constant/repeated overwrites for applications like continuous video surveillance; I actually picked one up the other day (it cost $3 more than the regular card) intending to use it in the next Raspberry Pi I set up. Of course, I did this having done no research about the basis for the durability claims and whether it would realistically affect a "using it in a computer" profile...


Some cards are meant for continuous, sequential writes, like cards meant for video capture. I wish there were more information about what's specifically different about them, since long, sequential writes aren't quite the same as simply lots of writes, but I suppose it has to be at least a little better.

 

These days, SSDs, in my opinion, are robust enough to be used anywhere, regardless of whether the underlying software / OS supports TRIM. I've worn out plenty of Patriot, Sandisk and others, and the two Samsung SSDs I've killed died in a test of sorts, under workloads which are definitely not typical. So would I put a regular SSD in a non-TRIM Power Mac? Certainly, and I have - there's an mSATA SSD in an adapter in my 12" PowerBook G4.

 

SD cards, I imagine, would be pretty hard to kill on an m68k system and perhaps on an older, slower PowerPC system, too. 

35 minutes ago, johnklos said:

These days, SSDs, in my opinion, are robust enough to be used anywhere, regardless of whether the underlying software / OS supports TRIM. I've worn out plenty of Patriot, Sandisk and others, and the two Samsung SSDs I've killed died in a test of sorts, under workloads which are definitely not typical. So would I put a regular SSD in a non-TRIM Power Mac? Certainly, and I have - there's an mSATA SSD in an adapter in my 12" PowerBook G4.

 

SD cards, I imagine, would be pretty hard to kill on an m68k system and perhaps on an older, slower PowerPC system, too. 

I thought the whole point of modern SSDs was that the controller managed all of the TRIM-related stuff internally so that the host didn't have to be concerned with it. High-end CF cards do this too, don't they? That was always a big selling point for CF over SD: the internal controller was usually faster and more intelligent than that used for the average SD card and thus you'd get better performance and longevity from CF vs SD. 

 

Anyway, SSDs aren't the first things to be killed by either poor internal management or oblivious hosts: laptop hard drives that use ramp-loading heads had problems with premature burn-out in the early days. The drive would unload the heads after 30 seconds or so of idle, and then the OS would perform some mundane management function every few minutes, forcing the drive to reload the heads for that process and unload them again when finished. So even if the machine was just sitting idle with no user processes running, the hard drive was still loading the heads at least once every few minutes. Rinse and repeat for long enough, the rated load cycles were exceeded, and the drives failed. Linux was apparently the biggest killer back in the mid-2000s.

 

If you're using a classic Mac OS-based system, turning off VM should result in a flash device, even a generic SD card, serving a long useful life. Unfortunately I don't know of any good disk utilities (for flash or spinning media) that can reliably mark any bad sectors that arise; usually they try to remap bad sectors, but that doesn't help much if the entire cylinder is starting to have problems. It also greatly increases seek time while the drive hunts for the disparate sectors that used to be sequential; I'd rather the system just say, 'hey, don't use this sector anymore' and skip it rather than try to remap it, but apparently that's too much to ask of "modern" computers.

3 hours ago, johnklos said:

Most of my machines are sitting around for weeks / months / years compiling.

There's "niche" and then there's "you might actually be the only one on the forum or in the scene doing that." This is closer to the latter, I think.

 

Anecdotally, I have really good experience with regular SSDs in my modern computers. The two SSDs I've had longest are still kicking, relocated from machine to machine whenever I find a use case for something bigger and put the money into it.

 

Once I'm done with The Big Torrent I'll probably put my 180GB Intel 520 in the Mac mini, replacing the 2.5" 2TB disk I bought and installed in it.

 

My ThinkPad T400 still has a 2010-era 128GB Toshiba SSD that came out of a MacBook Pro in it.

 

Around ten years ago there was some math pertaining to some of the then-current SSDs that basically suggested most regular-duty SSDs should be capable of a couple of decades of full-effort use (like continuously writing the full capacity of the SSD).
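As a rough sketch of that sort of math (the cycle count and workload below are assumptions for illustration, not specs for any particular drive):

```shell
# Back-of-the-envelope SSD endurance: a drive whose NAND is rated for
# ~3000 program/erase cycles, under a worst-case workload that rewrites
# the entire drive once per day. Both numbers are assumed.
PE_CYCLES=3000
FULL_WRITES_PER_DAY=1

YEARS=$(awk -v c=$PE_CYCLES -v w=$FULL_WRITES_PER_DAY \
  'BEGIN { printf "%.1f", c / w / 365 }')
echo "~${YEARS} years of one full-drive write per day"
# prints: ~8.2 years of one full-drive write per day
```

Lighter, more realistic workloads (a few percent of capacity per day) stretch that same cycle budget out to decades.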

 

The one SSD I've personally used that has died was a Sandisk in my Mac mini. It was kind of tough because I applied some patches and the machine never came back. It's absolutely my own fault I wasn't running more frequent updates, and the one most critical piece of data that seemed lost was eventually found in an image backup I had made the previous year.

 

Otherwise, I largely trust my SSDs to keep truckin' on, but I'm moving away from storing data directly on client computers, with the exception of anything stationary enough to keep a daily backup running on.

 

5 minutes ago, Franklinstein said:

I thought the whole point of modern SSDs was that the controller managed all of the TRIM-related stuff internally so that the host didn't have to be concerned with it.

There's a lot more going on behind the scenes these days, but I've never explicitly heard that having the OS do TRIM is obsolete as a concept.

6 minutes ago, Franklinstein said:

High-end CF cards do this too, don't they?

Not that I'm aware of. CFast might, but that's an entirely new (and more modern) standard.

 

6 minutes ago, Franklinstein said:

That was always a big selling point for CF over SD: the internal controller was usually faster and more intelligent than that used for the average SD card and thus you'd get better performance and longevity from CF vs SD. 

I can't actually say that I've ever heard this. Ten years ago, when cameras still shipped with CF, the selling points were performance and capacity.  Most of the use cases for both kinds of cards involved a lot less random reading and writing than exists today. The cycle was almost always "write until full", then swap cards, read, delete, repeat.

 

13 minutes ago, Franklinstein said:

'hey don't use this sector anymore' and skip it rather than try to remap it but apparently that's too much to ask of "modern" computers.

Can't speak to "flash" media such as USB pen/key flash drives or CF/SD/CFast, but on SATA SSDs there's a percentage of NAND that has been over-provisioned for exactly this purpose.

 

Given that SSD media generally features 0 seek time, this shouldn't have any performance impact.

41 minutes ago, Franklinstein said:

Linux was apparently the biggest killer back in the mid-2000s.

This is why most desktop Linux distributions started using the 'noatime' flag by default.
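For reference, the relevant fstab entry looks something like this (device, mountpoint, and filesystem type here are just examples):

```
# /etc/fstab entry with noatime: reads no longer update the access-time
# field, so an otherwise-idle system stops issuing small metadata writes.
# <device>   <mountpoint>  <type>  <options>          <dump> <pass>
/dev/sda1    /             ext4    defaults,noatime   0      1
```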

