
Amiga: the fastest 68k Mac

I love both Mac and Amiga but for completely different reasons.

From the user standpoint, my completely unscientific opinion about why Mac squeezed out Amiga is that the Mac UI was much easier to learn and teach than the Amiga UI. I'm not saying one is better than the other, just that the Mac UI was easier. That's something I personally noticed because in the early 90s I had the opportunity to teach both platforms in classes.

In a superficial way this difference is also similar to the current situation with touch devices vs. computers. An enormous number of people use a phone or tablet instead of a computer to do the same tasks. Obviously there are many reasons for this, but it's undeniable that a touch device is easier to learn as proven by small children, pets, and retail workers.
 
Finally, because x86 was so dominant, far more effort from software developers and compiler writers was applied to code on that architecture, giving it a performance advantage (binary compatibility also worked against being able to port code away).

Motorola knew this too, but Motorola also knew that because their general-purpose CPU division was far smaller than Intel's, they would have to switch to architectures where they stood a chance. But they also knew that legacy CPUs provided their current income, so there was an inherent business conflict. I think this explains the 68040 and 68060: 68K CPUs that found it harder and harder to compete, because their R&D drew on only a fraction of an already smaller CPU division.

Meanwhile, Motorola's first commercially produced RISC, the 88K, did show promise, enough to get them the contract to develop PowerPC with IBM (especially as Gary Davidian had already written an impressive 68K emulator for the 88K).
This point is more important than many people give it credit for; not only was insane pipelining introduced at the hardware level, but looking at the precompiler and compiler work done to optimize for later x86 vs RISC like the Acorn chips... a lot of people had sunk a lot of costs into the x86 instruction set, and it was simpler to throw a few thousand engineers at the hardware AND a few hundred thousand at the software development side than to re-train everyone to switch to a fundamentally different mindset around what gets handled where. An early ARM running a microkernel pushed a lot more onto the individual software developer (also giving them a lot more freedom) compared to x86.

People are, in general, much happier to let someone else make the complex decisions and use a more strictly designed instruction set.

That's really why I enjoyed using the MC68k series so much, after all, even if it was self-limiting :)
 
I think people overthink why x86 won. IBM made an open system where nobody had to pay money to make hardware upgrades. Large software companies made programming languages for x86 to support office and industry and tons of applications came about because of that. Finally, sales were so great that Intel and AMD (second source CPU from the start) had the money to make much better and faster CPUs killing everyone else.

The 8086/8088 and 68000 came out around the same time (1978-1979), and the only reason Intel won with a 16-bit CPU vs. Motorola's 32-bit 68000 was the volume sold to IBM.

Before the IBM PC, people had to relearn the OS and apps every time they upgraded machines, so they were used to it. Even in the early days of the x86 you had a few versions of DOS, Windows 2.x/3.x, OS/2, CP/M-86, GEM, hell, even GEOS came out for the PC, plus assorted UNIX. There were just so many choices and so many apps.
 
This point is more important than many people give it credit for;
Thanks.
<snip> sunk a lot of costs into the x86 instruction set, and it was simpler to throw a few thousand engineers at the hardware AND a few hundred thousand at the software
Yep (but here you're agreeing with me, and now I'm agreeing with you agreeing with me ;-) )!
<snip> An early ARM running a microkernel pushed a lot more onto the individual software developer (also giving them a lot more freedom) compared to x86.
Right, and this explains how ARM started out and expanded its market too. As a recap for those who don't know (which probably doesn't include @adespoton, but might include some reader of this comment in the future), ARM began at Acorn Computers in Cambridge, UK.

Acorn had achieved some early success with their 2MHz (i.e. 2x faster than an Apple ][, or a VIC-20/C64) 6502-based 8-bit computer: the BBC Micro. The BBC Micro was indeed commissioned by the British Broadcasting Corporation in 1980-1981, because the UK government had been shocked by the sudden implications of ICs and how far behind we were (perhaps). The BBC figured that if they commissioned and endorsed a computer, then they could educate the British public with it. And they did: they built an educational TV series around it, and something like 80% of UK schools bought BBC Micros. The spec was good, which made the computer relatively expensive for the day (£400), but it was a fast 6502 running a very fast version of a structured BASIC.

This was a similar situation to IBM: "Nobody ever got fired for buying a BBC Micro for a school". If you knew nothing about computers (which went for most teachers and head teachers), then you would feel safe buying it. But by the mid-1980s it was obvious that 8-bit computers had had their day and Acorn needed to find a proper successor.

I was at the 30th anniversary of the BBC Micro in 2012, at ARM in Cambridge, where Steve Furber explained. Furber and Sophie Wilson (who designed the BASIC) went to Intel in the US to see if they would licence the 80286 to Acorn. They liked the CPU and figured they could design a more efficient bus interface, but Intel wouldn't let them create a custom 80286. Before they left for the UK, they dropped in on the Western Design Center, because they knew WDC was designing the 16-bit successor to the 6502: the 65816 (the 65C816 CMOS version came a bit later). They thought it might plug the gap.

What they didn't expect was that WDC was basically a two-person outfit with some students dropping in to help out. And this made them think: "If these guys could develop a CPU, then Acorn could too". They'd heard about the early RISC research, so they set about designing the ARM (Acorn RISC Machine) CPU, modelling the design in MODULA-2 on a BBC Micro itself. Furber did the architecture, while Wilson wrote BBC BASIC for it. And both of these went together: the language influenced the design of the CPU. The key thing is that ARM was small and low-power, because the limited engineering resources forced the design to be minimal.

The first CPU taped out in 1985 at 25,000 transistors. The next iteration, the ARM2, was used in the earliest Acorn Archimedes computers. These computers ultimately attracted the attention of Apple after they'd given up on the Hobbit CPU for the Newton. Apple then persuaded Acorn to spin off the CPU division into a separate company, going under a modified name for the CPU: Advanced RISC Machines.

And this is how ARM made it work. All the earliest embedded applications for ARM were single customers with high hardware sales volumes and low power requirements. Here, the cost of the programmer is small compared to the cost of the device's manufacturing, and this makes the application viable. It didn't matter that drivers and OSs had to be ported or rewritten; ARM had the basic advantage that embedded applications needed. With this, the portfolio of ARM chips grew over time.

In both cases, ARM progressed from pretty much entrenched positions: schools, to embedded systems, to general-purpose computers. Of course, the first ARM application was a general-purpose computer; it's just that those computers weren't in an entrenched position, which is why they were out-competed by the PC in the end.

That's really why I enjoyed using the MC68k series so much, after all, even if it was self-limiting :)
I like 68K coding, because it's relatively easy.

I think people overthink why x86 won. IBM made an open system where nobody had to pay money to make hardware upgrades. Large software companies made programming languages for x86 to support office and industry and tons of applications came about because of that. Finally, sales were so great that Intel and AMD (second source CPU from the start) had the money to make much better and faster CPUs killing everyone else.
Some of this is true, but some of it isn't really what happened at the time. The IBM PC 'won' because execs knew "Nobody ever got fired for buying IBM." So technical rationalisations based on "it's an open architecture" are themselves overthinking the situation, because the IBM PC had already won before those factors were major drivers of its success.

Nevertheless, it's still very true that Digital Research and Microsoft produced high-quality tools for x86 very early on, and this massively leveraged the popularity of the architecture from a development viewpoint. You're correct that AMD was a second source from the start (an IBM requirement), but neither company had the money to outcompete other CPU manufacturers until after the PC became overwhelmingly dominant. Intel, for example, weren't the all-powerful company they later became even up to the beginning of the 80386 era: the 8086 was designed because they were struggling, and the 80286 was already challenging their business model by the time the 80386 was launched (this is one of the reasons Intel sent AMD buggy microcode).

The 8086/8088 and 68000 came out around the same time (1978-1979), and the only reason Intel won with a 16-bit CPU vs. Motorola's 32-bit 68000 was the volume sold to IBM.

Before the IBM PC, people had to relearn the OS and apps every time they upgraded machines, so they were used to it. Even in the early days of the x86 you had a few versions of DOS, Windows 2.x/3.x, OS/2, CP/M-86, GEM, hell, even GEOS came out for the PC, plus assorted UNIX. There were just so many choices and so many apps.
 
You seem to forget IBM made the first small portable computer in 1975, the 5100, and it flopped hard.
I'm aware of it. It's possible it failed because (a) IBM didn't care much about small computers, (b) the starting price of $9K was more expensive than many minicomputers of the day, and (c) it wasn't the right moment.

I'm really just going on how the IBM PC was portrayed in its earliest years, up to around 1984 when Compaq started to make real inroads. The articles, as I recall reading them at the time, and as I find when I re-read them today (I have nearly every back issue of Personal Computer World from Dec 1980, before the PC, to early 1991), didn't focus on the 'openness' of the system, but on the authority that IBM brought to the microcomputer market.

For example, both Apple ][ computers and S-100 computers were open in that the circuits were published and they were expandable. The ACT Sirius/Victor 9000 was a 16-bit, 8088-based computer that ran MS-DOS and CP/M-86, and was technically superior.

It shouldn't be controversial to claim that it was the authority of IBM that made its PC a success.
 
You seem to forget IBM made the first small portable computer in 1975, the 5100, and it flopped hard.
I'm really just going on how the IBM PC was portrayed in its earliest years, up to around 1984 when Compaq started to make real inroads. The articles, as I recall reading them at the time, and as I find when I re-read them today (I have nearly every back issue of Personal Computer World from Dec 1980, before the PC, to early 1991), didn't focus on the 'openness' of the system, but on the authority that IBM brought to the microcomputer market.

It shouldn't be controversial to claim that it was the authority of IBM that made its PC a success.
I've just been reading the review of the IBM PC here:


It focusses almost exclusively on the actual features of the PC rather than potential expansion. I haven't read anything which claims it's exceptionally expandable or open. However, page 60 is quite revealing:

[attached: scans of page 60 of the review]

Basically, the reviewer is saying that a typical PC setup will already have filled most of its slots: a +64kB RAM card + MDA adapter + 5.25" FD adapter + serial card takes up 4 out of the 5 slots. That's for a 128kB machine. A 192kB machine would take up 5/5 slots. A CGA-based PC would have to sacrifice two of: {64kB, serial card, parallel card}.

Having said that, I think the most common setup for early users would have been a 64kB machine + 5.25" FD adapter + MDA/Printer. That's just 2 slots.

So, the reason they don't talk about expandability in this review is that almost all the slots will have been used up in a fairly standard configuration.
 
The IBM 5150 64K I have came with a MonteCarlo RAM/Serial card to deal with having only 5 slots. You can tell it is an early card made for the 5150 because it uses the same wide black ISA brackets common on that system.
 
The BBC Micro was indeed commissioned by the British Broadcasting Corporation in 1980-1981, because the UK government had been shocked by the sudden implications of ICs and how far behind we were (perhaps). The BBC figured that if they commissioned and endorsed a computer, then they could educate the British public with it. And they did: they built an educational TV series around it, and something like 80% of UK schools bought BBC Micros. The spec was good, which made the computer relatively expensive for the day (£400), but it was a fast 6502 running a very fast version of a structured BASIC.
Well, what really got the UK government going on the BBC Micro project was France rolling out Minitel. Which, I have to say, was an awesome, if self-limiting enterprise (They identified and implemented most of the on-line features people commonly use today except computer-intensive stuff like multiplayer realtime gaming, all over a 1200 baud full duplex connection). The UK solution was less connected, but provided a framework that allowed for more growth at the individual user level, and was able to be used as a proving ground for later ARM development as the field progressed, both online and offline. Minitel lasted right up until 2012, although eventually the network was opened to regular personal computing devices and not just the original Alcatel terminals.
 
Well, what really got the UK government going on the BBC Micro project was France rolling out Minitel.
I didn't know that. I knew there were debates in the Houses of Parliament after the TV show "The Mighty Micro" was aired on ITV (at the time, there were only 3 TV channels in the UK).


Although the UK never had Minitel, there are probably a few more cross-connections between IT in the UK and France during that period. Minitel used block-graphic video, an idea which came from UK research on Teletext in the late 60s and early 70s, spawning Ceefax and Prestel (the Post Office/British Telecom). Prestel was much closer to Minitel, with the same 1200/75 baud rate, but privatised and expensive. I think I only ever knew one family who had it.

There's a 1985 Minitel teardown on Hackaday; the terminal contains an 8051 and seemingly 2kB of RAM.
Which, I have to say, was an awesome, if self-limiting enterprise (They identified and implemented most of the on-line features people commonly use today except computer-intensive stuff like multiplayer realtime gaming, all over a 1200 baud full duplex connection).
I think we in the UK were impressed by Minitel. It seemed like a very hi-tech phone directory (I now know it was far more than that, more like Prestel). The BBC's computer literacy project had a different goal, to accelerate computing in the UK, which I think has been validated. Computer adoption in the UK in the early 1980s was really phenomenal. Take a look at this sales chart from July 1983 to December 1983.

[attached: UK home computer sales chart, July-December 1983]
There are 20 computers on the list, nearly all of them mutually incompatible. People argue that it was standardisation that drove innovation, but this is why I argue the opposite: extreme heterogeneity fostered rapid growth in development, as companies tried to out-compete each other in whatever niche was possible.

In 1985, 13% of UK households had a home computer, compared with 8.9% in the US in 1984. That's despite the UK having a lower median and average income, AFAIK.

The UK solution was less connected, but provided a framework that allowed for more growth at the individual user level, and was able to be used as a proving ground for later ARM development as the field progressed, both online and offline. Minitel lasted right up until 2012, although eventually the network was opened to regular personal computing devices and not just the original Alcatel terminals.
Sure. I guess even in the early days it was possible to use a Thomson home computer with a Minitel modem.
 
It doesn't get that far. Early sad chime, before the memory test, on a Q650. Nothing on a 605 ROM. I disabled the FPU and superscalar execution for bring-up purposes.

I haven't dug into why; it may be hitting an unimplemented integer instruction and need the integer support package loaded earlier than I thought. I need to capture the address bus and ROM /OE off one of my interposer SIMMs (and pull CDIS low...) so I get an execution trace.

This is actually the reason I built both the ISP SIMM and the ROM interposer, but I've been lazy about actually getting down to it.
As I understand it, the '060 drops some hardware instructions to save transistors (or more accurately, to use them on other features). These missing instructions are then written into a software-based emulation library that's loaded at some point in the boot process. When a deleted instruction is called, it's trapped, and the CPU looks it up in the emulation library, executes the equivalent functions of the deleted instruction, and then continues on with normal code. If the deleted instruction is called before the emulation library is loaded, the trapped instruction results in a hang because the '060 can't continue.

So, the Mac ROM needs to be recompiled to be '060 native, removing any code that calls deleted instructions, and also somehow making the system aware of the emulation library so it can reference it when the Mac begins loading the OS and running programs. I don't really know how you'd go about that. I figure it'd be easy if you could get to a certain point in the Mac OS boot process where it's loading extensions (like the LibMoto math library for G3 chips on older versions of Mac OS) but if the OS calls deleted instructions before it can load the emulation library extension, it'll hang. Maybe a patched System file? A System Enabler?
 
As I understand it, the '060 drops some hardware instructions to save transistors (or more accurately, to use them on other features). These missing instructions are then written into a software-based emulation library that's loaded at some point in the boot process. When a deleted instruction is called, it's trapped, and the CPU looks it up in the emulation library, executes the equivalent functions of the deleted instruction, and then continues on with normal code. If the deleted instruction is called before the emulation library is loaded, the trapped instruction results in a hang because the '060 can't continue.

So, the Mac ROM needs to be recompiled to be '060 native, removing any code that calls deleted instructions, and also somehow making the system aware of the emulation library so it can reference it when the Mac begins loading the OS and running programs. I don't really know how you'd go about that. I figure it'd be easy if you could get to a certain point in the Mac OS boot process where it's loading extensions (like the LibMoto math library for G3 chips on older versions of Mac OS) but if the OS calls deleted instructions before it can load the emulation library extension, it'll hang. Maybe a patched System file? A System Enabler?
Yes. The integer and floating-point support packages I referred to are called via the unimplemented instruction trap in order to make up that missing functionality in a way that is transparent to software. No need to recompile anything. It is likely that it's hitting an unimplemented instruction and taking the catch-all trap, leading to a sad chime; I just haven't gotten around to confirming that.

The integer support package will need to be added to the trap table after we have DRAM and the vector table is set up in RAM. It might get interesting if the ISP is required before we have RAM; we'd likely need to modify the ROM directly to avoid problematic instructions until we can set up the vectors. Other bits: there is already a floating-point support package loaded for 040 CPUs - the 060 package needs to be loaded instead, and at very early boot there's additional setup for superscalar execution required. All of this will require a modified ROM; no way around that.
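To make the mechanics concrete, here's a minimal C sketch of what "adding the ISP to the trap table" amounts to once the vector table lives in RAM. It's not how the Mac ROM actually does it: vector 61 (offset 0xF4 from VBR) is the unimplemented integer instruction exception per the MC68060 manual, but isp_unimp_entry is just an illustrative name for the support package's entry point, not Motorola's real symbol.
Code:
/* Sketch: hook the 68060 unimplemented integer instruction vector
 * so the integer support package (ISP) handles the dropped opcodes.
 * Must run in supervisor mode, after the vector table is in RAM.
 */
#include <stdint.h>

#define VEC_UNIMP_INTEGER 61          /* vector 61 = offset 0xF4 from VBR */

extern void isp_unimp_entry(void);    /* illustrative ISP entry point */

/* Read the Vector Base Register (movec is privileged). */
static inline uint32_t *get_vbr(void)
{
    uint32_t vbr;
    __asm__ volatile ("movec %%vbr,%0" : "=d" (vbr));
    return (uint32_t *)vbr;
}

/* Point the unimplemented-integer vector at the support package. */
void install_isp(void)
{
    uint32_t *vectors = get_vbr();
    vectors[VEC_UNIMP_INTEGER] = (uint32_t)isp_unimp_entry;
}
The ordering problem described above is exactly this: anything in ROM that trips one of the missing instructions before something like install_isp() has run takes the exception with whatever the ROM vector table happens to contain.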

It all comes back to getting that execution trace I mentioned. This is actually the project I designed the ISP-SIMM and blinkenSIMM to support. Technically, I could possibly hack on QEMU to the same end, but that's less fun than messing with real hardware :)
 
The integer support package will need to be added to the trap table after we have DRAM and the vector table is set up in RAM. It might get interesting if the ISP is required before we have RAM; we'd likely need to modify the ROM directly to avoid problematic instructions until we can set up the vectors. Other bits: there is already a floating-point support package loaded for 040 CPUs - the 060 package needs to be loaded instead, and at very early boot there's additional setup for superscalar execution required. All of this will require a modified ROM; no way around that.
I'm no expert at either, but if I remember correctly, the Amiga 060 support packages are loaded as libraries, thus already at a moderately late stage in ROM initialization. Obviously that's dependent on the actual instructions used by ROM before RAM is set up, and it's possible Macs get themselves into trouble where Amigas don't. But the Amiga ROM wasn't designed with the 060 in mind either, and it happened to work out, so I'll still count that as a win for plausibility on the easier path here.
 
It's possible the Amiga '060s use the host CPU for initial boot, then there's a handoff after a certain point where the '060 is initialized, its IPLs are loaded to RAM, and the host CPU is halted.

In the same way, you could get an '060 in a Mac if you treat it like Sonnet did the 601 PDS G3 upgrades: host CPU boots and runs until the Sonnet extension loads, then it hands off control to the G3. This would only work in Macs with a PDS or a socketed CPU, but that's actually a fairly wide range of machines to target (though you'd need multiple PCB and FPGA configurations).
 
It's possible the Amiga '060s use the host CPU for initial boot, then there's a handoff after a certain point where the '060 is initialized, its IPLs are loaded to RAM, and the host CPU is halted.

In the same way, you could get an '060 in a Mac if you treat it like Sonnet did the 601 PDS G3 upgrades: host CPU boots and runs until the Sonnet extension loads, then it hands off control to the G3. This would only work in Macs with a PDS or a socketed CPU, but that's actually a fairly wide range of machines to target (though you'd need multiple PCB and FPGA configurations).
I believe they work with pure 060 setups, no host CPU needed.

Ignoring the FPU, which shouldn't be needed for boot, the instructions requiring the emulation package really aren't bad:
Code:
The unimplemented integer instructions are:
DIVU.L <ea>,Dr:Dq 64/32 ⇒ 32r,32q
DIVS.L <ea>,Dr:Dq 64/32 ⇒ 32r,32q
MULU.L <ea>,Dr:Dq 32*32 ⇒ 64
MULS.L <ea>,Dr:Dq 32*32 ⇒ 64
MOVEP Dx,(d16,Ay) size = W or L
MOVEP (d16,Ay),Dx size = W or L
CHK2 <ea>,Rn size = B, W, or L
CMP2 <ea>,Rn size = B, W, or L
CAS2 Dc1:Dc2,Du1:Du2,(Rn1):(Rn2) size = W or L
CAS Dc,Du,<ea> size = W or L, misaligned <ea>
- MC68060 User Manual, Section C.2

Obviously they get used since a 060 doesn't work out of the box, but it's not a huge list at least.
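For a sense of what the support package has to do when one of these traps, here's a hedged C sketch of the arithmetic behind the first pair, the 64-bit-result MULU.L: the handler computes the full 32x32 -> 64 product in software and writes the halves back as Dh:Dl. Names and structure are illustrative only; a real ISP handler also has to decode the opcode from the exception stack frame, evaluate the effective address, and RTE past the instruction.
Code:
/* Illustrative core of emulating MULU.L <ea>,Dh:Dl (32*32 -> 64)
 * after the unimplemented integer instruction trap fires.
 */
#include <stdint.h>

typedef struct {
    uint32_t hi;   /* written back to Dh */
    uint32_t lo;   /* written back to Dl */
} dh_dl_pair;

static dh_dl_pair emulate_mulu_l(uint32_t ea_operand, uint32_t dl_value)
{
    uint64_t product = (uint64_t)ea_operand * (uint64_t)dl_value;
    dh_dl_pair r = { (uint32_t)(product >> 32), (uint32_t)product };
    return r;   /* handler then sets N/Z, clears V/C, and RTEs */
}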
 