
And Daystar 68060 Accelerators...

trag

Well-known member
http://www.accessmylibrary.com/article-1G1-15070641/daystar-68060-accelerators-near.html

Galen Gruman
DayStar Digital has announced that it will introduce a series of Macintosh PDS accelerators based on the Motorola 68060 CPU, the successor to the 040 that Apple decided to forgo when it adopted the PowerPC chip as the future Mac CPU. DayStar will still sell its Turbo 040 line of 68040-based accelerators but plans to gradually supplant it with the new Turbo 060 family. The main reason for introducing products based on a chip that Apple may never use is that DayStar expects it will be a while before a lot of 680X0-based applications are available in native PowerPC versions. DayStar says its 060 accelerators will run any software that's compatible with the 040, because the 060 uses exactly the same instruction set.
 

Bunsen

Admin-Witchfinder-General
Except the 060 doesn't use exactly the same instruction set, from what I've read, but rather a subset.

 

Gorgonops

Moderator
Staff member
The 68040 didn't include all the functionality of a 68030+68882, either. (In particular, the 68040's FPU lacks hardware support for the 68881/68882's transcendental instructions.) The differences are papered over by trapping the missing instructions and emulating them in software. The same thing can be done to make a 68060 compatible with the 68040, but doing so would have been "tricky" for a third-party accelerator vendor because Apple wouldn't have written the software for them, as it did for the 68040 when the Quadra was introduced.
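Roughly, the trap-and-emulate scheme described above works like this. (A toy Python sketch: the opcode names and table layout are made up for illustration, not actual 68k encodings or Apple's implementation.)

```python
# Toy model of trap-and-emulate: instructions the FPU hardware implements run
# directly; instructions it lacks raise an exception whose handler computes
# the result in software and lets the program continue, unaware.
import math

# Operations our imaginary CPU's FPU implements in silicon.
HARDWARE_FPU = {
    "FADD": lambda a, b: a + b,
    "FMUL": lambda a, b: a * b,
}

# Transcendental operations handled by the software trap handler
# (as OS-supplied emulation did for the 68040's missing 6888x instructions).
EMULATED_FPU = {
    "FSIN": lambda a, _b: math.sin(a),
    "FCOS": lambda a, _b: math.cos(a),
}

def execute(opcode, a, b=0.0):
    """Dispatch one FPU instruction, trapping to software when needed."""
    if opcode in HARDWARE_FPU:
        return HARDWARE_FPU[opcode](a, b)   # fast path: real silicon
    if opcode in EMULATED_FPU:
        return EMULATED_FPU[opcode](a, b)   # slow path: emulation routine
    raise RuntimeError(f"illegal instruction: {opcode}")
```

The point of the scheme is that applications never notice which path ran — only that the emulated path is far slower, and that *someone* (Apple, for the 68040) has to write and ship the handler.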

This sort of thing is of course in vivid contrast to x86 CPUs, which are almost always complete supersets of previous models. (Occasionally *very model-specific* instructions may change between generations. I vaguely remember reading that the first x86 versions of NeXTSTEP targeted the 80486 so specifically that they made use of several cache-control instructions which disappeared/changed when the Pentium came out, thus requiring a patch. But that's an *extremely* rare example.)

 

Unknown_K

Well-known member
The Amiga used software patches to get the 68060 working; Apple could have done something similar in ROM to get around it. An old thread here mentioned that Daystar didn't see a big enough speed increase over the 68040 accelerators (or maybe over the PPC PDS cards) to make 68060 versions worth the price, or something like that. Besides, by the time the 68060 was out, PPC would have been in use, and those chips are faster anyway.

 

Bunsen

Admin-Witchfinder-General
[edit - oops question posted to wrong thread]

To save going over the same ground again, here are previous '060 and Coldfire discussions:

viewtopic.php?p=106530#p106530

About the '060s: they won't work in Macs. The '060s differ from the rest of the 680x0 line in that their privilege handling (supervisor mode?) for certain things was altered. Without a complete rewrite of the OS, I don't think anyone will ever be able to get any kind of 68060 Frankenstein to actually work.
viewtopic.php?p=5063#p5063

viewtopic.php?p=19756#19756

viewtopic.php?f=16&t=527

 

Quadraman

Well-known member
The 68060 didn't scale well, either. The fastest versions lack the MMU, FPU, or both. Too many transistors for a die that size back then. The fastest '060s required separate chips to carry out those functions. Linux/m68k doesn't even support the EC versions of the '060 because the MMU is required to run Linux. The fastest full versions of the '060 only ran at around 50 MHz before heat started becoming an issue and components had to be removed from the die. I can't imagine a 50 MHz '060 being all that much faster than a 33 or 40 MHz '040, especially since the full '040 instruction set wasn't there and would have had to be emulated anyway. A 60 or 66 MHz 601 emulating an '040 probably would have been about the same speed in most cases.

 

johnklos

Well-known member
The 68060 didn't scale well, either. The fastest versions lack the MMU, FPU, or both. Too many transistors for a die that size back then. The fastest '060s required separate chips to carry out those functions. Linux/m68k doesn't even support the EC versions of the '060 because the MMU is required to run Linux. The fastest full versions of the '060 only ran at around 50 MHz before heat started becoming an issue and components had to be removed from the die. I can't imagine a 50 MHz '060 being all that much faster than a 33 or 40 MHz '040, especially since the full '040 instruction set wasn't there and would have had to be emulated anyway. A 60 or 66 MHz 601 emulating an '040 probably would have been about the same speed in most cases.
A few mistakes here - there weren't separate chips for the FPU and MMU, for starters. Also, although Motorola (and now Freescale) sell m68060 chips whose mask is half the size of the first masks, they simply never qualified them at speeds higher than 60 MHz. In reality, though, the latest mask (I have one) runs comfortably at 80 MHz in my machine and doesn't even get hot - it barely gets warm. Other people run them at 100 MHz and more:

http://www.powerphenix.com/CT60/english/overview63.htm

Also, components aren't removed from the die; they are simply disabled. The die is otherwise the same.

Regarding speed, you couldn't be further from the truth. The few instructions which need to be emulated on the '060 are not implemented because they are not frequently used. A 50 MHz m68060 runs circles around a 40 MHz m68040 - it's at least twice as fast in almost all benchmarks. Back in the day when the PowerPC 601 machines first came out, my Amiga 1200 ran Mac OS significantly faster than a Quadra 840av and definitely faster than a PowerMac 8100. Even comparing PowerPC native applications on an 8100 to m68k versions on the m68060 had the m68060 coming out ahead in many instances. It's not a slow processor in the least.

If you like, I'd be happy to post some m68040 versus m68060 benchmarks.

 

Quadraman

Well-known member
We're not talking about the latest versions that are still being made today, though. We're talking about the chips that were actually produced back when the design was new, and those chips DID get extremely hot with all components on board at higher speeds. 50 MHz was about as good as it got for a fully functional chip. Also, once the other components were disabled, how do you think they replaced the functionality of those components? They had to go off-chip to get that functionality back, or else who would have any use for a crippled EC or LC version?

 

Gorgonops

Moderator
Staff member
... how do you think they replaced the functionality of those components? They had to go off chip to get that functionality back or else who would have any use for a crippled EC or LC version?
Quite a few embedded applications need neither an MMU nor an FPU, so the "lacking" versions of the 68060 didn't *need* to support them via external chips. The full-up 68060 was never really a serious contender in the workstation market; among other things, its FPU was seriously underpowered compared to just about any comparable chip, including the Pentium. And if you're not running a virtual-memory OS you have no need for an MMU. (Witness the fact that several Amiga models used the stripped-down embedded versions of Motorola CPUs, such as the early Amiga 4000s with their MMU-less 68EC030, simply because AmigaOS lacked both virtual memory and memory protection. You can't run UNIX-oid OSes on such machines without upgrading the CPU, but you *can* also upgrade such systems with 68EC060-based upgrades if you don't care about such things. The classic Mac OS makes so little use of the MMU that you could probably fairly trivially do without it, for that matter. Worst case you'd lose virtual memory.)

The point still stands: it was fair to say the 68060 was "insufficiently better" than the 68040 to merit Apple spending any time on it when it would have mattered. The integer performance was good, but then as now benchmarks sold computers, and appearing to offer half the speed of your competitor is not a good place to be.

 

johnklos

Well-known member
We're not talking about the latest versions that are still being made today, though. We're talking about the chips that were actually produced back when the design was new, and those chips DID get extremely hot with all components on board at higher speeds. 50 MHz was about as good as it got for a fully functional chip. Also, once the other components were disabled, how do you think they replaced the functionality of those components? They had to go off-chip to get that functionality back, or else who would have any use for a crippled EC or LC version?
The 50 and 60 MHz chips all came from the same masks. Everyone I know who has overclocked their m68060s has been able to go to at least 60 MHz, and in most instances 66 MHz. Accelerator boards such as the phase5 CyberStorms and Blizzards ran the rest of the accelerator at CPU speed, so the board is more often the limiting factor than the CPU. For instance, my m68060 does run at 100 MHz, but the memory bus does not: the CPU reads the ROM with tons of wait states, starts initializing memory, has issues, and flashes the front LED to indicate the failure. At 80 MHz the CPU barely gets warm, but the custom logic chips on the CyberStorm get hot enough to burn skin.

I decided to run my CyberStorm at 66.666 MHz because I care more about stability and longevity than about speed (especially because it's colocated and needs to be up 100% of the time), but obviously the CPU can go much faster. My other CyberStorm has the revision 1 original mask m68060, and it's run at 66 MHz with nothing more than the same kind of heat sink you'd put on an m68040 for at least ten years now.

If you still believe that the m68060s were too hot and not overclockable, you should read about Amiga 1200 accelerators which were available with either an m68040 or an m68060. In all instances the m68060 takes less power and requires less cooling than an m68040.

Lots of people use CPU cores without MMUs or FPUs, by the way. m68k processors were used in many embedded systems, in industrial controllers, routers, et cetera, and many of those uses didn't require an MMU or FPU. AmigaDOS, as well as MacOS, can run just fine without either.

With regards to Apple's move from the m68040 to PowerPC skipping the m68060, I believe it has much more to do with the fact that the performance of the m68060 was too good. Why would people be encouraged to buy a PowerPC 601 if they could run new applications at the same speed as the PowerPC and run their older applications faster than on a PowerPC? A PowerPC 601 emulating an m68k was comparable to a modest m68040, and since the important parts of the rest of the OS ran at native speed, it was generally faster than the m68040 Macs. m68060 Macs, on the other hand, would have significantly muddied the waters, with many people deciding to buy the m68k over the PowerPC and making the transition last that much longer.

Remember, the m68060 was able to execute more instructions per clock on average than the Pentium, in spite of having a 32-bit bus compared with the Pentium's 64-bit bus, and in general performed better on mixed code. It was also easier to optimize code for the '060. They had near feature parity - both were superscalar, had branch prediction, and had 8k data and instruction caches. The Pentium's FPU was definitely more heavily improved than the m68060's (because it was pipelined, its fastest FPU dispatch rate was twice that of the m68060). On the other hand, the m68060's FPU was significantly faster than the m68040's.
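The throughput advantage of a pipelined FPU mentioned above is easy to see with a toy cycle-count model. (The latency numbers below are illustrative placeholders, not actual 68060 or Pentium datasheet figures.)

```python
# Toy cycle-count model: a pipelined FPU overlaps independent operations,
# so throughput approaches one op per cycle regardless of op latency; a
# non-pipelined FPU must finish each op before dispatching the next.

def fpu_cycles(n_ops: int, latency: int, pipelined: bool) -> int:
    """Cycles to complete n independent floating-point operations."""
    if pipelined:
        # First result appears after `latency` cycles; one more completes
        # every cycle after that, because execution stages overlap.
        return latency + (n_ops - 1)
    # Each operation occupies the whole FPU for its full latency.
    return latency * n_ops
```

With, say, a 3-cycle operation, 100 independent ops take 102 cycles pipelined versus 300 cycles non-pipelined — which is why a pipelined FPU's peak dispatch rate can be a multiple of a non-pipelined one's even at the same clock.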

If you're interested in learning more facts about the m68060, check out Wikipedia's entry:

http://en.wikipedia.org/wiki/Motorola_68060

 

Gorgonops

Moderator
Staff member
Remember, the m68060 was able to execute more instructions per clock on average than the Pentium, in spite of having a 32-bit bus compared with the Pentium's 64-bit bus, and in general performed better on mixed code. It was also easier to optimize code for the '060. They had near feature parity - both were superscalar, had branch prediction, and had 8k data and instruction caches. The Pentium's FPU was definitely more heavily improved than the m68060's (because it was pipelined, its fastest FPU dispatch rate was twice that of the m68060). On the other hand, the m68060's FPU was significantly faster than the m68040's.
Of course, the 68060's minor advantage in integer IPC over the Pentium was pretty much a moot point thanks to the Pentium scaling upward in clock speed *much* faster. The original Pentium shipped at 60/66 MHz in May 1993, while its much-improved 75-100 MHz replacement shipped only ten months later. (Which itself was bumped up to 120/133 MHz in fairly short order.) Somewhat better IPC doesn't help you *that* much if the competition can clock three times higher. Not to say Apple couldn't have sold quite a few 68060-based machines if they'd wanted to. Low-end PCs using 486-socket CPUs were still selling up to around the 1997-ish timeframe, while Apple discontinued its last 68040s in 1996, and 68060-based models probably would have been "better" budget computers than the really awful early PPC-based Performas. However, they *would* have seriously muddied the waters (even worse) for the PowerPC transition. If they'd gone with the 68060, then they *really* would have needed a 68080 to come after it.

(I'm also somewhat suspicious of the Wikipedia citation claiming "more average instructions per clock than the Pentium" in the first place. The MIPS/Dhrystone benchmarks I'm finding actually put it lower per clock. Yes, it's a contrived synthetic benchmark, but the Pentium seems to win it. If someone has a good reference to back up that "better than Pentium" claim it'd be great to see it.)

Looking back at how Apple constantly topped the "corporate deathwatch" lists during that period, all it really goes to show is that it's bad for business to confuse your customers. With Motorola's 68k designs so abruptly hitting the wall, Apple probably should have either bitten the bullet and switched to x86 then, or bought out a chunk of Motorola's CPU design IP and become a fabless manufacturer of super-high-speed 68xxx variants. Sticking with a partner with a proven track record of spotty backwards compatibility and a heavy focus on embedded products for their next-generation CPU architecture probably wasn't the best decision ever. Honestly, I think what happened was the result of too many otherwise smart people drinking way too much RISC Kool-Aid.

 

ClassicHasClass

Well-known member
I can't comment on the '060 (though I do like ColdFires), but remember that there were *three* people in the AIM alliance, not just Apple and Motorola. That big blue gorilla was compelling, and while it's easy in this age of Core to say that Apple should have gone x86 from the beginning, how quickly we forget Pentium's early growing pains and the mess that Netburst turned into. So I don't think Apple going x86 at that point was as obvious a move at the time as hindsight would tell us now.

But, hey, my loyalties lie with POWER, so what do I know. :cool:

Now, if your argument is against the 88K rather than PowerPC, in that case you have my full agreement (but 88K was clearly a compromised alternative even then).

 

johnklos

Well-known member
The point about the comparison between the m68060 and the Pentium is that the m68060 would've scaled quite well had Motorola had reason to continue improving it (that is, if Apple had used it).

I agree that the POWER and PowerPC chips were nice, but had IBM spent a little time making a Cell-type PowerPC processor with a new memory bus that could work in a laptop, in place of the G4, and a POWER6-type desktop chip to supersede the G5, things might have been different. Personally, I liked the idea behind the PowerPC 615 - put a microcode-level x86 emulator in the CPU and you can run whatever you want... Oh, well!

 

Gorgonops

Moderator
Staff member
I can't comment on the '060 (though I do like ColdFires), but remember that there were *three* people in the AIM alliance, not just Apple and Motorola. That big blue gorilla was compelling, and while it's easy in this age of Core to say that Apple should have gone x86 from the beginning, how quickly we forget Pentium's early growing pains and the mess that Netburst turned into. So I don't think Apple going x86 at that point was as obvious a move at the time as hindsight would tell us now.
But, hey, my loyalties lie with POWER, so what do I know. :cool:
You know that old saw about a camel being a horse designed by a committee, of course... ;^)

I suppose in the end it's a bit difficult to *exactly* pin down the blame for the failure of PowerPC, but in hindsight the whole thing really sort of comes off as a "Three Stooges" episode. (Use your imagination as to which company maps to each stooge.) These days it's popular to blame IBM for not "coming up with the goods" in regards to a G4 successor, but arguably the writing was on the wall from almost the beginning. Apple shot the AIM Alliance squarely in the foot and actively sabotaged the whole effort by failing to license the Mac OS (in total, or piecemeal as part of the Taligent effort) on reasonable terms. (The "Mac Clones" don't count. The original goal of AIM was an open platform that could run software from multiple OSes seamlessly on off-the-shelf machines from any AIM-licensed manufacturer. The Mac Clone program had very specific hardware requirements dictated solely by Apple, and further required the presence of an Apple ROM.) Without Mac OS' software base and pretty consumer-friendly face there was no particularly good reason for the bulk of the buying public to switch to PowerPC, and without a critical mass of customers PowerPC completely faceplanted on its stated goal of providing more "bang for the buck" than those nasty old-fashioned CISC-based PCs could provide. You can't blame IBM for losing interest in a niche business which didn't make them any money. And of course Motorola, with its laser focus on embedded platforms, didn't care one bit about PowerPC's performance stagnating as time went on... after all, the chips were still plenty fast for everything Motorola's customers wanted them for. I doubt Motorola's heart was *ever* really in it.

Total fail all around, really.

I'd still posit that Apple was suffering from some sort of Fatal Attraction to RISC, a kind of corporate inferiority complex. It's interesting how almost every major UNIX workstation vendor (SUN, SGI, HP/Apollo...) founded their business on the back of the Motorola 68k line, but by the late 80's had jumped ship to an in-house RISC design. (SUN to SPARC; SGI to MIPS; HP to PA-RISC...) Clearly Apple had this idea stuck in their heads that they were a "Workstation" vendor (despite all the evidence to the contrary) and felt they needed RISC as well. There's one thing that all these manufacturers did that Apple didn't have the option to do, however: the UNIX workstation vendors almost completely ignored backwards binary compatibility. SUN sold 68030, SPARC, and even *i386*-based machines all at the same time at one point, all running "SunOS 4", but there was no expectation that you could take a compiled program from one of those machines and run it on another one. Being able to do that on an Apple machine was *vital*, unless they were willing to call the new machines something other than "Macintoshes". (Which was actually the plan when they first looked at RISC, via the "Jaguar" project.)

Apple's devious plan for incorporating binary emulation seamlessly into the existing Mac OS was very clever, but it also meant they wasted loads of effort making a new CPU act like an older one, time that might have been better spent simply making the old CPU faster. (Hey, all those workstation vendors Apple was aping took CPU design in-house; why not Apple? They couldn't have done any worse than Motorola.) Furthermore, they also completely forgot about the original plan of creating a newer/better OS to take advantage of more modern and scalable software concepts. So in the end they got the worst of both worlds: an archaic OS running via emulation on unproven and more expensive hardware. Way to go?

A point that Apple *really* missed in all this is that RISC as a design philosophy falls down when binary compatibility is a paramount design concern. RISC is heavily compiler-dependent, by its very definition needing code tuned and optimized to take advantage of the internal architecture of a CPU. Once you have to focus on running already-compiled code *faster*, a "RISC" design starts converging on a comparable-performance CISC design in complexity, because you start requiring the same sort of hardware tricks to optimize performance "on the fly". The details of the instruction set almost become meaningless. With that being the case, there's absolutely no reason why the 68k ISA couldn't have easily kept up with Intel's x86 designs, which in turn were usually at least in the same ballpark as the best RISC had to offer. The key, of course, is whether Apple could have sold enough Macintoshes to properly fund the research to produce such designs. Given Apple's focus on huge profit margins at the expense of market share? I dunno. You could argue it either way, but it's worth noting that even measly little design foundries like Cyrix were able to produce "competitive" x86-compatible products during the 90's on relatively shoestring budgets. If Apple had managed to expand their market base even a *little*, they probably could have been producing AMD Athlon-speed 68k variants by the year 2000... notably the year that Apple failed to bring anything faster than 500 MHz to market, the same speed they ended up stuck at for 18 months while both Intel and AMD cracked the 1 GHz barrier.

Anyway. The PowerPC debacle is a great object lesson as to what happens when you let engineers make marketing decisions. :^b (It really pains me to say that, as I rarely have anything nice to say about the crazy stuff marketing asks engineering to deliver on ridiculous schedules. But, hey, they do know what Joe Sixpack actually wants. And in the end, "elegant" CPUs aren't it.)

 

ClassicHasClass

Well-known member
I don't have any major disagreement with any of those points (except one point below); they're certainly well-taken, and PReP/CHRP certainly had much squandered potential. My main point was that Apple picking PPC over x86 (or, for that matter, any other architecture) was not an unreasonable decision at the time given what was known at the time, and while Motorola may not really have had their heart in it, so to speak, Freescale has done well with the PPC in the modern embedded space and it wouldn't have gotten there without the parent company doing *something* with the architecture. We know now that there was a lot of bloody hash to be made by all sides (even, to some extent, IBM), but when Apple had to pick something past the 68K, picking the PowerPC had a lot going for it when the decision was being made.

One thing I will take issue with is (emphasis mine)

Apple's devious plan for incorporating binary emulation seamlessly into the existing Mac OS was very clever, but it also meant they wasted loads of effort making a new CPU act like an older one, time that might have been better spent simply making the old CPU faster.
I'd like a citation for this. I know that the PowerPC was tweaked to fit the 88K bus, because Apple was already using the 88K in their prototypes, but that's like saying the 6501 was wasted making it "like" a 6800 just because it used the same bus (and of course the 6502 *does* use a 6800-compatible bus, albeit with switched pins). That's a point of convenience, not (necessarily) of technical compromise. The 601 certainly didn't cause any problems for IBM, and the 601 core is still very much like RS/6000 albeit with some POWER1 instructions in software (and this problem disappeared with the 603, so it wasn't like it was never rectified).

Furthermore, PowerPC had a lot going for it not just from Apple and IBM (AIX, OS/2); even Microsoft wanted a piece of it with Windows NT. I still have an NT 4 "universal" CD, for crying out loud. If I ever find a compatible box, I'll probably slap it on there for gits and shiggles.

It's all water under the bridge, of course, and we know that the rot set in faster than it ought, but when Apple made the choice for PPC I still maintain it was not the wrong one (just one that didn't work out, and one of a larger selection of choices that at the time could also have been considered "right").

 

johnklos

Well-known member
One thing I will take issue with is (emphasis mine)
Apple's devious plan for incorporating binary emulation seamlessly into the existing Mac OS was very clever, but it also meant they wasted loads of effort making a new CPU act like an older one, time that might have been better spent simply making the old CPU faster.
Wasn't that referring to m68k emulation on PowerPC?

Personally, I was disappointed that Apple didn't create a new environment as part of their CPU transition. They could easily have run some form of Unix (or any other OS, for that matter) on the PowerPC, thereby finally adding memory protection, preemptive multitasking, et cetera, and run the m68k Mac OS in a box, as they had done with A/UX and later did with Mac OS X. Native apps would have to be aware of the new OS, but older apps could have run as-is.

Oh, well!

 

Gorgonops

Moderator
Staff member
Apple's devious plan for incorporating binary emulation seamlessly into the existing Mac OS was very clever, but it also meant they wasted loads of effort making a new CPU act like an older one, time that might have been better spent simply making the old CPU faster.
I'd like a citation for this. I know that the PowerPC was tweaked to fit the 88K bus, because Apple was already using the 88K in their prototypes, but that's like saying the 6501 was wasted making it "like" a 6800 just because it used the same bus (and of course the 6502 *does* use a 6800-compatible bus, albeit with switched pins). That's a point of convenience, not (necessarily) of technical compromise. The 601 certainly didn't cause any problems for IBM, and the 601 core is still very much like RS/6000 albeit with some POWER1 instructions in software (and this problem disappeared with the 603, so it wasn't like it was never rectified).
Indeed, the "act like an older one" element I'm referring to is the 68K software emulator, not the fact that the PowerPC was built to ape the Motorola 88K's bus. Apple certainly did wonders with their emulator by the time the PowerPC machines debuted, but the fact is they spent years engineering machines that, when introduced, ran legacy software slower than top-of-the-line computers sold *almost three years earlier*. (Comparing the emulated performance of a 1994 7100 to a 1991 Quadra 900.) All on top of the same obsolete old OS as before, just even more unstable thanks to all the hackish engineering under the hood. (An OS that still ran vital parts of itself on the emulator well into the OS 8 era.)

In other words, Apple spent a good four years running in a circle creating the PowerMac, and in the end had nothing to show for it other than the ability to sometimes win benchmark tests *if* the user bought a new version of the software they'd already paid for. (And assuming the machine didn't crash in the middle of it.) Good show.

Furthermore, PowerPC had a lot going for it not just from Apple and IBM (AIX, OS/2); even Microsoft wanted a piece of it with Windows NT...
Yeah, but all those other OSes ran fine on x86, and none of them were even slightly interesting to the vast majority of the computer-using public. There really was no compelling reason to switch unless you needed the *absolutely best* performance on some specific task that was covered by software which actually came in non-x86 versions. (Which outside of the stratospherically-priced UNIX workstation arena was a rare thing.)

Anyway. In the end, well... I'll grant there was this perception in the late 80's-90's that CISC might hit a wall with the 80486, and software companies were taking an active interest in keeping their options open by making their products work with multiple architectures. Maybe we can't blame Apple for getting sucked along the tide, and the fact that their processor vendor had obviously thrown in the towel made life even more interesting. Still, it's somewhat amazing in retrospect how little the Mac evolved while going through the PowerPC transition. As much as we all dislike Microsoft's software quality it has to be granted that they were actively evolving the *fundamentals* of Windows while the PPC was being born. Over roughly the same period Apple was designing the PowerMac (counting from when NT branched off from OS/2 around 1990) Microsoft grew the basic Windows API from its cooperatively-tasking DOS shell roots into the early versions of the same fully-fledged preemptively-multitasking multi-user virtual-memory OS which 95+ percent of home/office computers use to this very day. (And at the same time cleverly kept churning out incremental improvements to the DOS shell versions to keep the upgrade money coming in.) At the same time Intel broke through the CISC glass ceiling, successfully managing to take the x86 ISA superscalar with the Pentium; by the time the PowerMac shipped they were well on their way to the Pentium Pro, the direct ancestor of the "Core" series Apple uses today. Meanwhile, Apple churned out a faster, partially compatible version of the Mac II and a lot of broken promises. Clearly some cards didn't get played right.

Hindsight is 20/20, and I'm sure no one could have accurately predicted all this at the time. Really, though... given all that *did* happen, could a different decision on Apple's part actually have come out worse? Perhaps somewhere in the universe there's a Bizarro-world version of Earth where everything is identical except Apple decided to go with a "Star Trek"-type project instead. (And of course we'll also need to find the "Apple as a CPU designer", "Apple licenses SPARC/MIPS/whatever", and "Amiga is a raging success so Motorola takes the CPU market seriously" planets as well.) Short of finding those worlds and exchanging emails via the Arecibo telescope, I guess we'll never know which was the best answer.

 

Bunsen

Admin-Witchfinder-General
In some senses, it could be said that the '90s were Microsoft's decade, and the '00s were Apple's. In the '90s, Apple struggled to even tread water in market share and profitability, in fact going backwards, whilst simultaneously promising and failing to deliver the OS to end all OSes, sinking millions of dollars and worker-hours into pie-in-the-sky OS research that led in the end to significantly less powerful releases than promised. Microsoft's performance in the '00s was similar.

Of course, this is hardly an original observation.

 