New Project: DoubleVision SE/30 Card

I can easily add 1.75GB of memory in a completely system friendly manner on a 1990 Amiga 3000. ;-)
Most Macs are technically limited to 2GB, 1.5GB in practice - we're talking about stock here with no new hardware or mods.

The OS will happily let you set virtual memory (disk based) to 1GB 😆 - why you would in 1993, I do not know!
 
How does it create this if it doesn't have any means of knowing where the physical memory actually is?
It does a machine specific test of each bank to see if it can write and read back, and tests how far into each bank it can go by testing for standard module sizes. Then it sort of defrags the memory space 😆
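For intuition, here is a minimal sketch of that write-and-read-back size probe (all names are invented; this is not the actual ROM code, and a `limit` parameter stands in for how much RAM is physically present, since on real hardware an unpopulated address simply fails the read-back):

```c
#include <stdint.h>

/* Hypothetical sketch of a bank size probe: write a pattern at the
 * top of each standard module size and see if it reads back. On real
 * hardware the read-back fails for unpopulated addresses; here the
 * 'limit' parameter simulates the physically present RAM. */

#define NUM_SIZES 4
static const uint32_t std_sizes[NUM_SIZES] = {
    1u << 20, 4u << 20, 16u << 20, 64u << 20   /* 1, 4, 16, 64 MB */
};

/* Returns the largest standard module size that reads back correctly. */
uint32_t probe_bank(uint8_t *bank, uint32_t limit)
{
    uint32_t found = 0;
    for (int i = 0; i < NUM_SIZES; i++) {
        uint32_t top = std_sizes[i] - 1;
        if (top >= limit)          /* simulation stand-in for a failed read */
            break;
        bank[top] = 0xA5;          /* write test pattern at top of size... */
        if (bank[top] != 0xA5)     /* ...and read it back                  */
            break;
        found = std_sizes[i];
    }
    return found;
}

/* Demo: a simulated bank with 4 MB actually present. */
uint32_t demo_probe(void)
{
    static uint8_t bank[4u << 20];
    return probe_bank(bank, sizeof bank);
}
```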

All Macs post about 1998 (other than a couple of 68000 machines based on older designs) have an MMU. Nothing shipped with an EC. Plus Apple were using their own chipsets, which generally had some level of memory management, even back to the 68000 days, where they switched things around depending on how much RAM you installed in which banks - for example on the Plus (electrically, with jumpers).
 
Sure, a stock Amiga 3000 supports 16MB Fast RAM + 2MB Chip RAM.

But each of the expansion cards has, very similar to PCI, a configuration space, I/O space and memory space, so the OS can decide where the cards are physically located, and whether they have memory that needs to be added to the memory pool, or a firmware ROM with a driver (usually a device or library) which needs to be hooked into the system.
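As a rough illustration of the idea (this is not the real AutoConfig protocol; `card_t`, `configure`, and all addresses are invented for the sketch), the OS walks the cards, reads how much address space each one requests, and assigns the bases itself:

```c
#include <stdint.h>

/* Hypothetical sketch of the idea behind autoconfiguration (not the
 * real AutoConfig handshake): each card's config space reports how
 * much address space it needs; the OS decides where it lives, so
 * nothing is hardwired to a slot. */

typedef struct {
    uint32_t size;       /* address space the card requests (power of 2) */
    int      has_ram;    /* memory to add to the free pool?              */
    uint32_t base;       /* assigned by the OS                           */
} card_t;

/* Assign bases from 'start' upward, naturally aligned to each card's
 * size, and return the total RAM contributed by memory boards. */
uint32_t configure(card_t *cards, int n, uint32_t start)
{
    uint32_t addr = start, ram = 0;
    for (int i = 0; i < n; i++) {
        uint32_t a = cards[i].size;
        addr = (addr + a - 1) & ~(a - 1);   /* align to card's size */
        cards[i].base = addr;
        addr += cards[i].size;
        if (cards[i].has_ram)
            ram += cards[i].size;           /* link into memory pool */
    }
    return ram;
}

/* Demo: an 8 MB RAM board plus a 64 KB I/O card, illustrative base. */
uint32_t demo_config(void)
{
    card_t cards[2] = { { 8u << 20, 1, 0 }, { 64u << 10, 0, 0 } };
    return configure(cards, 2, 0x200000u);
}
```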

Yeah, I guess I am a spoiled, thankless brat. ;)
 
But each of the expansion cards has, very similar to PCI, a configuration space, I/O space and memory space, so the OS can decide where the cards are physically located, and whether they have memory that needs to be added to the memory pool, or a firmware ROM with a driver (usually a device or library) which needs to be hooked into the system.
Yeah, for us the Nubus slots have pre-defined space. Avoids conflicts if two cards wanted to do the same thing.

Note we did have Nubus based RAM cards BITD. I think I've even seen it mentioned in Apple docs, so it isn't "against the rules" to map in some extra memory.

I have a couple of period accelerator cards for my SE that map in 16MB of RAM to a machine that shipped with a 4MB ceiling, by relocating the ROM and some other stuff to give more room in the virtual map.
 
I can easily add 1.75GB of memory in a completely system friendly manner on a 1990 Amiga 3000. ;-)

And look where it got the Amiga platform...

You could also hack memory expansion in some (most?) older systems (8-bits era). Turns out, the future was to not do that due to closer and closer integration of the memory controllers with the CPU. They eventually became built-in to the CPU in the early 00s, and no-one was looking back until CXL promised memory pooling - and a few years later that's far from widely adopted.

That the SE/30 does support 128MB of memory, on the other hand, is amazing.

IIRC, the early PCI PowerMacs ('95) supported 1 GiB, same as my Sun Ultra 1 Creator (also '95), which was a high-end workstation when it was introduced. Apple did over-engineer the memory capacity of the mid- and high-end systems.

How does it create this if it doesn't have any means of knowing where the physical memory actually is?

The ROM knows the type of machine it is running on (either a dedicated ROM, or later more generic ROMs using machine IDs), so it knows the base physical addresses the memory banks answer to (later controllers can be configured to change those base addresses). It then just probes the memory to see where there is some memory, and where that memory is aliased (for controllers that don't check higher address bits), to detect how much memory there really is.
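A toy illustration of that aliasing probe (all names invented; the masked accessor simulates a controller that ignores the high address bits, so writes past the real size wrap around):

```c
#include <stdint.h>

/* Hypothetical sketch: detect how much RAM really sits behind a bank
 * whose controller ignores high address bits, by looking for aliasing.
 * 'mem' is the real backing store, 'mask' models the ignored lines. */

typedef struct {
    uint8_t *mem;
    uint32_t mask;      /* controller masks addresses with this */
} bank_t;

static uint8_t rd(bank_t *b, uint32_t a)            { return b->mem[a & b->mask]; }
static void    wr(bank_t *b, uint32_t a, uint8_t v) { b->mem[a & b->mask] = v;    }

/* Walk power-of-two offsets; the first offset whose write lands back
 * on offset 0 is an alias, i.e. the true bank size. */
uint32_t real_size(bank_t *b, uint32_t max)
{
    wr(b, 0, 0x5A);                  /* marker at the bottom of the bank */
    for (uint32_t off = 256; off < max; off <<= 1) {
        uint8_t old = rd(b, off);
        wr(b, off, 0xA5);
        if (rd(b, 0) == 0xA5)        /* write at 'off' hit offset 0: alias */
            return off;
        wr(b, off, old);             /* restore and keep probing */
    }
    return max;
}

/* Demo: a 1 KB bank behind a controller masking to 10 address bits. */
uint32_t demo_alias(void)
{
    static uint8_t store[1024];
    bank_t b = { store, 1023 };
    return real_size(&b, 1u << 16);
}
```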

And why does this mechanism of creating "contiguous" memory from a fragmented memory map exist in the first place, if nothing more than the on-board memory controller is assumed to be part of the system?

Most post-MC68000 systems support multiple banks. Hardware-wise it's a lot easier to hardwire the base address of every bank, with enough space in between for the largest supported SIMMs (later DIMMs). That way any amount of memory can be put in any bank and it will work - easier for the user (the rules and jumper settings for filling out the banks on e.g. a Sun 3/60, which requires contiguous physical memory, are really annoying!). The MMU takes care of creating a contiguous address space for older versions of the System (as in System 6, System 7, etc., later renamed MacOS). The ROM also creates a structure describing the available physical ranges for the System, so those versions supporting "virtual memory" know the truth and can use the MMU for proper paging.
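Conceptually, the remapping step might be sketched like this (bank count, spacing and names are made up, not Apple's actual layout): populated chunks at fixed, widely spaced physical bases are collected into one contiguous logical range, which is what the MMU tables then implement.

```c
#include <stdint.h>

/* Hypothetical sketch: banks are hardwired at fixed physical bases
 * spaced far enough apart for the largest SIMM, and the MMU is then
 * programmed so the populated chunks appear as one contiguous logical
 * range. Constants are illustrative only. */

#define BANKS      4
#define BANK_SPAN  (64u << 20)      /* space reserved per bank: 64 MB */

typedef struct { uint32_t log, phys, len; } range_t;

/* sizes[i] = RAM actually found in bank i (0 if empty).
 * Fills 'map' with logical->physical ranges, returns range count. */
int build_map(const uint32_t sizes[BANKS], range_t map[BANKS])
{
    uint32_t log = 0;
    int n = 0;
    for (int i = 0; i < BANKS; i++) {
        if (sizes[i] == 0)
            continue;               /* empty bank: skip the hole */
        map[n].log  = log;
        map[n].phys = (uint32_t)i * BANK_SPAN;   /* hardwired base */
        map[n].len  = sizes[i];
        log += sizes[i];
        n++;
    }
    return n;
}

/* Demo: 16 MB in bank 0, bank 1 empty, 4 MB in bank 2.
 * Returns total contiguous logical memory if the map checks out. */
uint32_t demo_total(void)
{
    uint32_t sizes[BANKS] = { 16u << 20, 0, 4u << 20, 0 };
    range_t map[BANKS];
    int n = build_map(sizes, map);
    return (n == 2 && map[1].log == (16u << 20)
                   && map[1].phys == 2 * BANK_SPAN)
           ? map[1].log + map[1].len : 0;
}
```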

Yes, I know that the MMU can do this, but I'm trying to understand how much of a "hack" this is. Because the next question would be, who is building the MMU tables, and responsible for the memory allocation.

Explained above - summary: the ROM creates the "installed physical memory" table and sets up the MMU for basic usage. The System then takes over and can use the memory as-is, or can take over the MMU based on the "installed physical memory" table.

I think I will drop the "memory extension" feature for the purpose of keeping the hardware, as best I can given the limited amount of OS support, "user friendly". Thanks for helping me come to this conclusion. ;-)

In the IIsiFPGA it works but honestly, it's a proof-of-concept. It's waaaaay easier to just buy a larger SIMM for the machine... But it was more fun to wrestle the FPGA and ROM into becoming a memory extender than just buying another SIMM :-)
 
It does a machine specific test of each bank to see if it can write and read back, and tests how far into each bank it can go by testing for standard module sizes. Then it sort of defrags the memory space 😆

Ahhhh, the MMU is being used to compensate for the memory controller's inability to align the memory banks by itself.

All Macs post about 1998 (other than a couple of 68000 machines based on older designs) have an MMU.

Well, they better should have! ;)

I completely realize now that a 68EC030 is basically useless in a Mac.

Nothing shipped with an EC. Plus Apple were using their own chipsets, which generally had some level of memory management, even back to the 68000 days, where they switched things around depending on how much RAM you installed in which banks, for example on the Plus.

Sure, where it is necessary, but on the SE/30, you can completely skip on this feature if you make sure that only CPUs with MMUs are used.
 
Well, they better should have!
Sorry, obvious typo :) should have been '88!

I prefer my post '98 Machines to be a little higher spec'd, although in 1998 the fastest machine I had was still a hand me down 16MHz 68000 with 4MB RAM 😆 Beggars can't be choosers!
 
Sure, where it is necessary, but on the SE/30, you can completely skip on this feature if you make sure that only CPUs with MMUs are used.
I'm struggling to separate the two slightly, because lately I've been reading stuff for 68020 and 68000 hardware, where there isn't an MMU. I do feel like the hardware didn't completely depend on the MMU though... at least in some cases. It might just be edge cases, like how the 68030-based LC II has a chipset that works with the 68020, so the 24-bit/32-bit stuff is handled in the basic memory controller - which isn't really a memory controller at all, just an address space decoder with multiple modes.
 
Ahhhh, the MMU is being used to compensate for the lack of capability that the memory controller can align the memory banks by itself.
(...)
I completely realize now that an 68EC030 is basically useless in a MAC.

Apple was surprisingly prescient in some of their design choices. MMUs were not an obvious choice for personal computers when they introduced the II (which had a slot for a MC68851, and an even poorer substitute by default) and later the IIx and SE/30. At the time it was mostly an expensive feature of Unix workstations (quite a lot of fast SRAMs in a Sun-3 MMU!) and even higher-end systems. The MC68851 was pretty terrible, but the built-in MMU of the MC68030 wasn't anywhere near as bad (the latency of the '851 was cut down by a lot once a subset was integrated).

Turned out it wouldn't be long before all personal computers that still mattered had an MMU, once the MC68030 and later, the 80386 and later, and pretty much all the RISC CPUs were taking over. It helped decouple the operating system from the memory map, and made it easier to support features like virtual memory and memory protection.

My opinion is that platforms like the Amiga eventually failed because they traded short-term benefits for long-term software backward compatibility nightmares, while the PC and Macs mostly didn't. Not a stupid decision at the time, as it was a continuation of how things had been done before, but it doomed those platforms.
 
And look where it got the Amiga platform...

Well, it was a close call for the Mac too, wasn't it. ;)

If you go by that argument, 640K is enough, and everything else just needs more jumpers and kludges, because THAT has been the world dominating "architecture" in the past.

Design and quality have only in very rare cases defined the mass market.

I like simple designs which scale well. Most autoconfig cards have 2-3 PALs, an address latch, a comparator, and an 8-bit ROM -> bang, almost the same featureset as PCI 6-7 years later.

You could also hack memory expansion in some (most?) older systems (8-bits era). Turns out, the future was to not do that due to closer and closer integration of the memory controllers with the CPU. They eventually became built-in to the CPU in the early 00s, and no-one was looking back until CXL promised memory pooling - and a few years later that's far from widely adopted.

Most of today's compute relies on memory pooling, and it isn't sitting on desktops anymore.
Little of what we talk about here has much practical relevance anymore - including the concept of the desktop computer itself, which has somehow still survived, like television.

But I do admit that I am an oldschool guy, hence I am crazy enough to make hardware for obsolete platforms, and explore what can be done with past technologies. Same with you, obviously. ;)

IIRC, the early PCI PowerMacs ('95) supported 1 GiB, same as my Sun Ultra 1 Creator (also '95), which was a high-end workstation when it was introduced. Apple did over-engineer the memory capacity of the mid- and high-end systems.

That's the problem if you rely on your on-board memory controller to define the maximum memory capacity of the system.

Nowadays, it's cool that you can explore one or the other "modern use case" on such designs, which exactly do not abide by the laws of the mass market.

In the IIsiFPGA it works but honestly, it's a proof-of-concept. It's waaaaay easier to just buy a larger SIMM for the machine... But it was more fun to wrestle the FPGA and ROM into becoming a memory extender than just buying another SIMM :-)

And that's what it should be about nowadays - to find fun in our hobby. :)
 
Apple was surprisingly prescient in some of their design choices. MMUs were not an obvious choice for personal computers when they introduced the II (which had a slot for a MC68851, and an even poorer substitute by default) and later the IIx and SE/30

That is correct, and something I really do find cool. The only reason MMUs were used on the Amiga was that part of the engineering team was still obsessed with shipping a Unix machine - so the first 68020 and 68030 CPU cards, and the Amiga 3000, had full 020/030 CPUs.

For Amiga OS itself, the MMU had no relevance until maybe the last 5 years (the driver of the Amiga variant of my graphics card uses it to bank the 32MB VRAM in the 4MB PCMCIA window by trapping page faults in the virtual framebuffer memory area).
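The banking trick described here can be sketched roughly as follows (names and the fault interface are invented; the real driver reprograms the card's bank register and the MMU mapping on each fault):

```c
#include <stdint.h>

/* Hypothetical sketch of VRAM banking through a small window: a 32 MB
 * framebuffer is exposed through a 4 MB aperture, and the page-fault
 * handler selects which 4 MB bank the aperture currently shows. */

#define WINDOW  (4u << 20)          /* aperture size: 4 MB   */
#define VRAM    (32u << 20)         /* total VRAM:    32 MB  */

static uint32_t current_bank  = 0;  /* bank latched into the window */
static uint32_t bank_switches = 0;  /* how often we had to switch   */

/* Called on a fault at 'offset' into the virtual framebuffer:
 * select the bank containing it, return the offset in the window. */
uint32_t fault_handler(uint32_t offset)
{
    uint32_t bank = offset / WINDOW;
    if (bank != current_bank) {
        current_bank = bank;        /* real driver: write the card's  */
        bank_switches++;            /* bank register, remap the MMU   */
    }
    return offset - bank * WINDOW;
}
```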

My opinion is that platforms like the Amiga eventually failed because they traded short-term benefits for long-term software backward compatibility nightmares, while the PC and Macs mostly didn't. Not a stupid decision at the time, as it was a continuation of how things had been done before, but it doomed those platforms.

Huh? Autoconfig has always been part of the Amiga specification, and has always been 100% backwards compatible. You can even put 16- and 32-bit cards in the same slot, and even CPU board local I/O and memory devices benefit from the same mechanism.

There is no "24-bit mode" in Amiga OS, and it has never been necessary, because the OS was always designed to handle proper 32-bit addressing and resource allocation (well, almost, the MSB is used by the memory allocator to signify that memory couldn't be allocated).

There are lots of arguments you can bring up against the Amiga, but long-term system design scalability? Nope, sorry but here, your assumption is completely mistaken. :)

If there is one system which absolutely did NOT fail due to missing features or the shortcomings of its system architecture, for sure it's the Amiga.

What has haunted Commodore, and the Amiga itself, has been the image of a "game company", due to the worldwide market domination of the Commodore 64 in the 80s. Much as Apple had a hard time divorcing itself from its actual core business, the Apple II, well into the early '90s.

No system concept holds up if your mass product is basically a game machine (Amiga 500) and the first thing most disk-based games do is completely turn off the operating system and bang directly on the metal.

I can run AmigaOS completely without using any of the built-in machine capabilities. The OS graphics system has always been designed in a modular way, in separate libraries (graphics.library, intuition.library), with the effect that I CAN actually use my blitter design on the Amiga.

Therefore, here, I'm seeking new challenges. ;)
 
No need to pollute the thread any more than I've already done; I've ranted about my opinion time and time again :-)

No system concept holds up if your mass product is basically a game machine (Amiga 500) and the first thing most disk-based games do is completely turn off the operating system and bang directly on the metal.

My point exactly. Then backward compatibility is nigh impossible, and the future of the platform is bleak (darn, I'm doing it again :-/ )
 
But at the end of the day, we're all 68k :)

The main benefit of a 30 year old machine is how it makes us feel happy, usually because it's what we have fond memories of from the time, so it is different for different people.
 
But at the end of the day, we're all 68k :)

OK, I'll confess: I've recently bought IDT79RV4700-150GH - 64-bits, 3.3V, 150MHz MIPS R4000-class CPU in PGA179 package (same as the MC68040). They were dirt cheap, and they would fit my FPGA-based homebrew project... I've betrayed both 68k and SPARC...

... but my last received CPUs were MC68010P12 - DIP-64, 5V, 12.5 MHz '010, meant to become a Sun-2 replica eventually. Still missing FPGA emulation of the critical peripherals for now (i82586 Ethernet, and perhaps some form of mass storage).
 
OK, I'll confess: I've recently bought IDT79RV4700-150GH - 64-bits, 3.3V, 150MHz MIPS R4000-class CPU in PGA179 package (same as the MC68040). They were dirt cheap, and they would fit my FPGA-based homebrew project... I've betrayed both 68k and SPARC...

... but my last received CPUs were MC68010P12 - DIP-64, 5V, 12.5 MHz '010, meant to become a Sun-2 replica eventually. Still missing FPGA emulation of the critical peripherals for now (i82586 Ethernet, and perhaps some form of mass storage).
I intentionally bought a MIPS-based router just to have one. My old SPARC NAS exploded a few years ago, though; I was sad about that. I learnt Linux using that.

Always happy to spend a day playing on my ARM250 based computer though ;)
 
No need to pollute the thread any more than I've already done; I've ranted about my opinion time and time again :-)

I always welcome, let's say, exchange of experiences, although in your case, it shows that most of your opinion is based on the Amiga 500 alone. ;)
Try to learn some new things, and you'll widen your experience (as I try to do here).
Maybe your FPGA memory & graphics card would actually make for a great Amiga product. :)

My point exactly. Then backward compatibility is nigh impossible, and the future of the platform is bleak (darn, I'm doing it again :-/ )

Not really, even though I am using a graphics card, I can open screens and use the hardware acceleration in a system friendly way, thanks to the modular concept of AmigaOS.

On the contrary, as with most systems, I can actually stick a floppy disc into a Mac, and boot directly from it with my own custom loader, right? Just a few pages earlier, we were discussing that some games, even on Mac, assume exact hardware properties. ;)

That's hardly an issue of the Amiga as a system. :)

Only we hackers are responsible for Commodore being doomed. The hardware itself was actually very well abstracted and supported in the OS itself.

And now, back to the Double Vision. ;)
 