About Cory5412

  • Rank
    Daring Pioneer of the Future


Profile Information

  • Location
    Arizona, USA


  1. Cory5412

    SSD for Powermac G3 beige minitower

    OK, found it (re-asked the person who knows for sure): if you disable virtual memory, the system's MMU is disabled. When that happens, every program you launch needs to be 100% loaded from disk at launch, which can make launches take longer, particularly for large applications. The impact of this will vary based on the speed of the disk or network volume the application file resides on.

    The other thing that happens is that all applications instantly take up their maximum possible RAM allocation, which, even with 768 MB, could have a big impact if you're running 9-era stuff, especially anything creative. In newer versions of 8 and 9, you can use Get Info on any given application to see what it will take when launched. I believe (but can't confirm at the moment) that it shows you what the allocation will be.

    These penalties are why I personally recommend against ever disabling VM in Classic Mac OS on PPC. (These limitations do not apply to 68k.) In a system with, as you say, enough RAM, you'll never actually hit the disk when paging anyway.

    Now, again, if you're largely single-tasking or running older 7-era software, have a lot of RAM, and you choose a faster disk option (like a SATA SSD connected to a SATA card), then the impact might not be that bad. How fast applications launch will depend on the type of connection, the quality of the media, and the speed of the bus you connect it to. In other words, if you plan on disabling VM, the built-in IDE or SCSI bus is the worst-case scenario.

    So, that's what that is. The other-other thing to consider is that with "real" SSD media (mSATA/M.2 SSDs, SATA SSDs, certain very high-end SD cards), wear leveling and related technology is good enough that the risk of damaging the drive from even fairly heavy swapping activity (for example: OS X with low RAM) is very, very minimal.
But the good news, because nothing is entirely infallible, is that just because you got an SSD doesn't mean you stopped running backups, amirite?
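The "maximum possible RAM allocation" point above can be sketched with some back-of-the-envelope arithmetic. This is illustrative only: the app names and partition sizes are made up, standing in for whatever Get Info's "Preferred Size" would actually report on a given machine.

```python
# Illustrative sketch only: app names and partition sizes below are
# hypothetical, standing in for what Get Info's "Preferred Size" shows.
# With virtual memory off, each running app claims its full preferred
# partition from physical RAM at launch instead of being backed lazily.

def ram_left_kb(total_kb, preferred_kb):
    """Physical RAM left once every open app's full partition is claimed."""
    return total_kb - sum(preferred_kb.values())

apps_kb = {
    "Finder": 4_096,                 # hypothetical size
    "Photoshop 5": 98_304,           # creative apps often asked for 96 MB+
    "Internet Explorer 4": 12_288,   # hypothetical size
}

# With 768 MB installed, these three apps alone pin 112 MB at launch.
print(ram_left_kb(768 * 1024, apps_kb))  # → 671744 KB (656 MB) free
```

With plenty of RAM the machine still runs, of course; the point is that the headroom disappears up front rather than as pages are actually touched.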
  2. Cory5412

    SSD for Powermac G3 beige minitower

    There was a different issue with virtual memory on PowerPC Macs -- "don't disable virtual memory, in general" applies even on systems with much slower disks. (Or, perhaps, chiefly there; I'd have to look.) There's a handful of tasks where disabling it is recommended, but that's true of everything on Classic Mac OS, all the way back to the System 6 and 7 days: advice to disable some such thing, reboot, do a particular task, then re-enable it and reboot. All manner of things fall under that particular advice. So, obviously nothing outright bad will happen if you turn off VM, but it won't save any writes if you have enough RAM anyway, and (I'll have to look it up, it's late, etc.) it could cause some other disadvantage.
  3. Cory5412

    SSD for Powermac G3 beige minitower

    To add: for performance reasons, I tend to recommend against turning off virtual memory on any PowerPC-based Mac. Helpfully, I've never actually sat down and tested it, so if the general consensus is that the SSD's speed and random r/w performance negates the problems with turning off virtual memory (I've forgotten the technical reason), then it's a fine option. The other option is to just have enough memory never to hit virtual memory, since, if I remember correctly, 7/8/9 won't hit virtual memory until you actually fill main memory.

    Another option, of course, is to just buy a new and good enough card to avoid this problem. On anything old enough for CF to be a very reasonable choice, you won't need a 256 GB card. The sweet spot is probably 32 GB cards, which you can use fully in at least 7.6.1 or newer on supported machines. If you've got anything older than that, you'll either have to deal with many partitions or you'll want a smaller card. Given how cheap SD cards are, that's less of an issue (sizes like 8 GB are still readily available, at least in the US, often for under $10 for good brands at retail, meaning you can walk over to your nearest gas station or grocery store and buy another SD card for your SCSI2SD) -- and there's of course no moral problem with just not using some capacity. I'm currently using just 256 megs of a 30-gig card in my Apple IIgs.

    I have been meaning to, but have yet to actually, test SCSI2SD v5, v6, and a SATA card against the stock hard disk in my Power Macintosh G3/300. Unfortunately, due to upcoming travel and the fact that I've had this intention for the better part of a year, it'll be "A While" before this actually happens.
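One reason "many partitions" comes up with big cards on old systems: classic HFS (Mac OS Standard) addresses at most 65,535 allocation blocks per volume, so the allocation block size grows with volume size and small files waste ever more space. The sketch below is a rough illustration of that scaling only; a real formatter also reserves space for boot blocks and catalog structures, so treat the numbers as approximations, not exact formatter output.

```python
# Rough sketch of classic HFS allocation-block scaling (approximation:
# ignores boot blocks, catalog, and other formatter overhead).
MAX_BLOCKS = 65_535  # HFS uses 16-bit allocation block numbers

def hfs_min_block_size(volume_bytes):
    """Smallest allocation block size (a multiple of 512 bytes) that can
    cover a volume of this size within the 65,535-block limit."""
    raw = -(-volume_bytes // MAX_BLOCKS)   # ceiling division
    return -(-raw // 512) * 512            # round up to a multiple of 512

# A ~2 GB partition needs roughly 32 KB blocks; a 32 GB HFS volume would
# need roughly 512 KB blocks, so even a tiny file eats half a megabyte.
print(hfs_min_block_size(2 * 1024**3))   # → 33280 (about 32 KB)
print(hfs_min_block_size(32 * 1024**3))  # → 524800 (about 512 KB)
```

This is also part of why HFS+ (Mac OS 8.1 and later), with its 32-bit block counts, mattered so much for large media.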
  4. Cory5412

    Performa 5320: what clock speed?

    My guess is, given that the architecture scaled up to 120MHz and most people with those machines don't talk about having a lot of trouble, those 543D latches are probably "not that bad" regardless of whether or not they introduce latency or anything of that nature. (Especially since: you're buying a performa on which to run math blaster or clarisworks, not a 9500 on which to run photoshop or avid or protools.)
  5. Cory5412

    Performa 5320: what clock speed?

    one more thought: back in the '90s, schools loved to do upgrades, and they loved to "employ" students for "technology class" credit to do, among other things, relatively easy machine upgrade and maintenance tasks, and opening up an entire fleet of machines to swap out cards or upgrade RAM wasn't necessarily out of the question. It wouldn't at all surprise me if school districts ordered fleets of 6200s and swapped existing LC PDS Ethernet cards from older LC I/II/III systems into the new 6200s. It wouldn't even surprise me if some of those places just slotted the new 6200 in place and kept using the 12" monitors and older keyboards and mice. (Though in the '90s a popular move was to order a new computer lab and then relocate your old computer lab into a cheap manufactured building out back, or distribute the machines directly into classrooms to increase overall computer access.)
  6. Cory5412

    Performa 5320: what clock speed?

    More generally, and I don't know if there is a good answer to this at all, it sounds like the question you're trying to ask is whether or not it would have been possible to build a better system. The philosophical answer here is that the 603 in the 6200 needed to have the issues it did in order for the better cache design for the 603e to be evident.
  7. Cory5412

    Performa 5320: what clock speed?

    Did you have a chance to take a look at the developer notes? The notes for the 6200 are linked from and hosted on the Taylor Design web site: http://taylordesign.net/downloads/references/PowerMac5200-6200.pdf A mirror of the 630 developer notes is here: http://mirror.informatimago.com/next/developer.apple.com/documentation/Hardware/Developer_Notes/Macintosh_CPUs-68K_Desktop/Mac_LC_630_Quadra_630.pdf

    The main difference is that there is a "Bus Translation Unit" slotted between the CPU and the rest of the system bus, for what appears to be addressing purposes. Another bridge connects the data bus. Given that performance improves dramatically with the improved cache structure of the 5300/6300 and friends, with their 603e CPU and its bigger L1 caches, I think it's at best misguided to simply blame Capella (the aforementioned "Bus Translation Unit") for all of the 6200's problems.

    I can't speak to what demand there was to actually use the same expansion as the previous LC/Performa-series systems. Perhaps the initial thought was that these systems would work with the IIe Card, or that some of the (admittedly, relatively few) other non-Ethernet upgrades were worth preserving. The TV/video system was probably worth implementing on the new series, but there's also the argument for leaving that kind of thing to higher-end machines with more room for expansion. Though, especially given that Apple's gimme to the education market as late as 1999 was to have ATI build a Rage GPU with analog video input, I'm going to guess that education was doing something with those video and TV/video systems. Whether that was laserdisc playback in HyperCard stacks, K-12-level video production as part of lessons, or something else, I couldn't tell you for sure. So the more important part of the 630 to that market was probably the A/V system, and Apple probably legitimately saved money by putting a 603 and a Capella on the 630 board instead of building an entirely new architecture.
Apple could probably have saved itself some space on these systems by going the route of the x100 Power Macs: building Ethernet onboard and then using either a dedicated PDS slot or omitting card-based internal expansion (and, implicitly, A/V functionality) entirely. It would likely have resulted in a "better" system that simultaneously failed to meet core needs of one of its largest audiences. Of course, this gets into a different kind of issue, which is that we could probably have years' worth of discussions about different possibilities for re-engineering these systems. For example, why wouldn't Apple have just re-shaped the 6100 platform for these systems? (That would be the more natural progression, I think, than introducing PCI on the low-end education/consumer system first.)
  8. Cory5412

    Performa 5320: what clock speed?

    Aha, forgot that detail, thank you. Notably, these issues would also be present on the Performa 630 series, since the 6200 was built pretty much by adding a 603 to the existing architecture. It's easy to find the developer notes Apple published for each machine and verify this in the block diagram. A good bit of information to accompany this part of your article-or-articles (and a good exercise for you, if you haven't done it) would be to look at the block diagrams for the machines. As Taylor Design noted, the setup in the 6200 is pretty much what a modern northbridge/southbridge layout looks like, so it wouldn't surprise me if a newer machine mirrored that, despite switching out almost all of the relevant components. It's been a while since I looked, however; I'll see if I can make time to do that later today.
  9. SGI built their own GPUs for the O2, Octane, and the VW320/540, which were the machines they paired with the 1600SW. On the Mac, a Number Nine Revolution was often used, and a few different cards (but mainly that one) were used on commodity PCs. SGI (and also Formac, if I remember correctly) had what was called the MultiLink Adapter, which adapted regular DVI to DFP. Unless I'm reading the spec wrong -- it's possible I am, and that DFP was on the MLA and not on the 1600SW itself.

    Oh yeah -- Compaq was honestly one of the PC OEMs that gave IBM the best run for their money; their laptops tended to be on point and set up well, plus they'd acquired DIGITAL and were doing a more or less reasonable job with Alpha. It's interesting, though, because I hadn't seen any DFP references when looking at that stuff, but Compaq (and later HP) favored ATI hardware at the end of the Alpha platform, so if DFP would've been anywhere, it stands to reason it would have been there.

    Mediocre is the key word, but as an 8MB card, that card probably wasn't considered mediocre when it was new; it's just not what I would've paired my SGI 1600SW with, is all. (My presumption is that when it was new, an 8MB ATI Rage-class card would've been good for 2D work or then-modern 3D gaming -- really a "just got a new 20-inch CRT I'm running at over 1600x1200 for big spreadsheets" card. But it was around the late '90s that very specialty GPUs started falling out of style, as most software started to be faster on CPUs -- only to fall back into favor at the end of the 2000s, but that's a different story.)
  10. Looking again, I don't think it is a D1 connector. Just did a bit of looking around for the above-mentioned DFP, and it looks like that's probably a winner. Though, you likely would want a bit of a better card to run an SGI 1600SW off of. I don't know off hand any other displays that used this standard.
  11. one thought, this is the card I was thinking of initially: http://www.welovemacs.com/1094310000r.html Note the older DB15/Mac SVGA video connector. Helpfully, it also uses the SGRAM module to upgrade, although I believe I've heard you can upgrade them to 8MB? Maybe not.
  12. I seem to recall seeing a bunch of the 4MB upgrade modules for Beige G3s online, but this card is as good a way to get to a higher color depth or resolution. This card, or one like it, was also an option on the Power Macintosh G3 (Blue and White) -- I believe that card had an Apple-style "improved" S-Video port (with something like 7 pins, some of which provided power) as the input. What I wonder is whether this might be a version of that card, but with this as the input: https://en.wikipedia.org/wiki/D-Terminal If so, it's likely a card that was originally sold into the Japanese market, or there's a version of that purple video-input box running around with this cabling. Ultimately, you're not super likely to find anything here in the US that works with it unless the auction included the ATI TV/video box that connects to that port. Presuming, of course, that's what it is. That's all just a guess based on the only other ATI cards of that type I've ever seen, which had composite and S-Video in, primarily for use in education, where 5x00/6x00 and Beige G3 machines might have had their A/V kits used.
  13. This is not an eBay seller feedback forum.
  14. Cory5412

    Performa 5320: what clock speed?

    Just casually, given that you already have the machines and it sounds like you may have set them up and started to use them -- what are your initial impressions? The heart of what LEM wrote in 1997, when they first started publishing, basically boils down to this: if it's 1997, these machines still cost several hundred dollars, and you have a choice between, say, a 6200 and a 7200, you should get the 7200.

    I'd be interested in a more modern take on it, and my angle (and what I'll likely write on my own blog when I've had a chance to poke at the 6220 a bit) is whether or not they "can be used" -- the descriptions you see of the 6200 describe a computer that, more or less, should literally fail to boot.

    My take on the entire thing has long been that the 5200 and 6200 and their immediate family (up through the PCI change) were the cheapest PowerPC-based Macs you could get, and were often among the cheapest Macs you could get, period (though some 5xx and 6xx machines stayed on sale for far too long). Considering their performance has to be done in the context of a world where you could get an entire computer, some start-up software, and usually a printer and a monitor for about $1900, or you could pay 3x that for a computer (the 9500) that should be just around 2x as fast on paper and includes the following amenities:
      • power cord
      • mouse
      • unformatted text editor

    Of course, those are the extremes, but even stepping up to the next machine after the 6200 close to doubled your cost once you actually assembled a working computer and put some software on it. That comparison stops making sense once the iMac and the Power Mac G3 crossed over in price and your $1299 could buy you either an expandable system with fewer included amenities or a most-in bundle system, trading some legacy compatibility for most of what you'd need to get started up front, without trading off any performance.
One more thought: from a modern perspective, I don't think that "reviews" of vintage hardware make sense, especially in a supply-limited community. We shouldn't, at this point, be telling anybody to shy away from any Mac they can get their hands on. I think it's important to be aware of what you're getting (something LEM does insanely badly in the modern context, for continuing to host extremely factually incorrect articles with little or no revision reflecting up-to-date research or the needs of people "shopping" for these machines in modern times), but I don't think there's a good reason to classify any given vintage Mac as an "avoid this one". Again, it's not like we're shopping for three-year-old PowerBook G3s to run OS X on, and cache-having /233s cost the same as cacheless /233s on the used market.

To address this specifically: an important thing to note is that, to my knowledge, Mac OS did not completely shed 68k code until literally Mac OS X. Every single release up to that point was frequently lauded as "even more PowerPC-native!", but as far as I know Apple either never really finished the job, or only mostly finished it in the very newest versions of 9.2, which won't run on anything this old. (9.2 requires either PCI, by which point "the pain points" were gone, or a G3; I forget which.) So the fairest way to evaluate this specifically is to run 9.1 with as much RAM as you can fit into either of these machines, and the newest software you reasonably can, such as the PPC versions of IE4/OE4 or IE5/OE5 and Office 98 or 2001. Those applications have stiff-ish system requirements, though, and so you might run into the other problem: a machine from 1995 with limited upgrade potential just isn't well equipped for things that were new several years later, at a time when everything in computing was moving very fast.
(That said: anecdotally, IE4/OE4 run "fine" on my 840av under 8.0 or 8.1 with 24MB of RAM, but really only one at a time. I haven't had a lot of reason to try IE4/OE4 on my next-closest system, the 6100/66, but it'll be something I make time to do under 7.6.1 and 8.1 on that system as well.)

Given that the author has close to the exact machine you do, and says that in their experience it's "fine", I think it's okay to accept the hand-wavey explanation as being sourced both in the technical fact that reducing the amount of 68k emulation removes one of the machine's biggest pain points, and in their experience using it. That said, if your point is to prove something about the original group of 603 machines, the 5200/6200, doing it on a 5320/100 with the 603e (which addressed one of the bigger pain points, 68k emulation, specifically) and a 256k cache as a cherry on top is... not very valid.

One more thing to note: initial shipments of 5200s had 8MB of memory installed from the factory, and there's a reasonably good chance that's how those of us around the right age experienced these things when they were new in our schools. 7.5 on 8MB of RAM is an extremely bad look and probably exacerbated everything else about the machine. (Incidentally, there were a handful of 7200 configurations with 8MB of RAM as well. I don't know why Apple thought that was a good idea, except that Apple has, for pretty much its entire existence, included too little RAM in its computers.)
  15. Cory5412

    Performa 5320: what clock speed?

    My money is on either a documentation error or a last-minute production change, similar to the 366MHz model of the Power Macintosh G3 that was announced, listed for sale by almost every catalog reseller in very early 1999, and then never materialized.

    In terms of 5320 vs. 5320CD, my guess is they are the same machine. Usually instances of "same number, different configuration" were Performa vs. Power Macintosh vs. LC situations, which added to the confusion for the 630 and 6100/6200+ families. I believe a (small) handful of 6100 configurations came as "6100" vs. "6100/xxCD", but as far as I know no 5200+ or 6200+ systems shipped without optical drives. As also mentioned, it wouldn't be the first incorrect bit of documentation, and since as far as I can tell every single mac-dex is based on Apple's USA spec database, it wouldn't surprise me if someone at Apple in the USA flubbed typing in that machine's information, or was given incorrect information at some point and never saw anything else about it to be able to correct it.

    In regards to the performance: even if it had been 120MHz, I think you'll find the 5420 is faster, and I don't think anyone will be very surprised by that result. I suspect you'll also find that the 5320 is "fine" -- being one of the fixed/revised machines with the 603e CPU, which had the better-designed L1 cache. 256k of L2 will help the 5320 along as well. Another interesting comparison would be a 6200/75 and a 7200/75. I have a 6200 variant coming at some point and I intend to compare it with what I've got hanging around, although I don't happen to have a 7200/75 at the moment.
[general '90s Mac industry commentary below this point]

In general, Performa was the brand sold to homes, Power Macintosh was the brand sold "professionally", and LC was the brand sold into education, until the 5400/6400, when that was narrowed to Power Macintosh for both education and "professional" use and Performa for home, and then later simplified to all models presenting as "Power Macintosh" -- although the 6400 and 6500 both had "for $MARKET" as a configuration title in some media. (For example, there's a "6500/250 for Small Business" configuration listed in the service manual. These aren't often reflected on mac-dexes such as EveryMac or LEM, however.)

Anyway, the glut of Performas was part of Apple's image problem at the time, but when you get right down to it, Apple spent most of the mid '90s floundering -- money it might have used to build better machines was wasted on all manner of side projects. Part of Jobs' simplifying the product line and introducing the iMac was to address a problem Apple knew it had in the mid '90s, which was that nobody was excited by the next incremental Performa upgrade. Even dumping the Performa name in favor of "Power Macintosh 6500/xxx for Multimedia" didn't excite anyone. I'm not even sure a 6500/400 would have.

In the clone era, people chose clone vendors because they were less expensive and, in many cases, because buying one was easier and simpler than getting your hands on an Apple Mac, even one that was "easy" to choose, like a member of the 7/8/9 series. Of course, those had almost exactly the same problem the Performas did: Apple frequently added models to the lineup, with spec differences that were often invisible or buried a couple of numbers deep.
For example, there were cacheless 7200 and 6400 configurations, and additional CPU speeds got piled onto the stack, often without Apple doing a good (or any) job of re-aligning the lineup (say, the 6400/180 is the release model, the 6400/200 and 6400/225 are released, and the 6400/180 never gets discontinued or discounted). At the high end, Apple had trouble with, of all things, pro Mac configurations that took forever to actually ship, because Iomega had frequent production delays and backlogs with Zip drives, and for some reason ("dumping") Apple felt compelled to include Zip drives with the 8600 and 9600 instead of just letting users of those big, professionally-oriented machines choose on their own what to add.

It wasn't until 1998 that Apple would partner with some more reasonable retail resellers (CompUSA) and start offering direct sales online, so until then it made more sense to go buy a UMAX or a Power Computing machine from one of their web sites. And ultimately that speaks to something I think is as true today as it was in 1998: in general, people don't want to spend a lot of time and energy balancing a bunch of different factors of computer performance against their budget. If you go spend $3000 on a machine, you want to trust that you got around $3000 worth of machine. There were a few gems in the Performa lineup, but (especially internationally, where low-cache and no-cache models were most popular) a few machines were duds, and often there was no good reason given as to why this 6400/180 is way slower than that other 6400/180. (There's also the somewhat related issue that, in essence, Apple did a slightly better job of listing all the specs in 1997, but organizing and comparing them wasn't quite as easy as it became later, when the web was that much more ubiquitous.)
Add to that, different catalog resellers would have different deals, and often major-looking catalog and back-of-magazine resellers would be selling old-stock and used machines well past when those machines stopped being "available". (A stuffed channel that was rejecting some new shipments was another problem Apple acknowledged in its annual reporting in the mid '90s leading up to the iMac.)

(I realize nobody said this here, but these issues are why people were saying Apple was on its deathbed, which, just to be clear, was never really completely true.) I don't think Apple was on course to die in 1997 if they hadn't changed anything bigger, but I will argue that Jobs saved them from a death in the early 2000s that would have come if they hadn't made a few major changes by then.