
MacSE-RAM, a crazy Mac SE PDS RAM concept!

Snial

OK, so I've had a glass of wine, but here's the dumb idea. The Mac SE has a PDS slot (pages 445-446 of the Guide to the Macintosh Family Hardware), and the Guide explicitly says it can be used for memory expansion. So, we decode another 3MB from $600000 to $8FFFFF and another 1MB from $C00000 to $CFFFFF; then write an INIT which modifies the System Heap so that the top of RAM is $CFFFFF, with $400000 to $5FFFFF and $900000 to $BFFFFF pre-allocated as non-purgeable. In that case, if each memory block begins with a header, the headers would sit just below $400000 and just below $900000; or, if it's a matter of setting the system's Master Pointers, the appropriate adjustments would be made.
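Very roughly, the INIT I'm imagining is something like this (completely untested sketch; only the MemTop/BufPtr low-memory addresses are the documented ones from Inside Macintosh, everything else is invented for illustration):

Code:
/* Untested sketch of the INIT idea above. MemTop ($0108) and BufPtr ($010C)
   are the documented low-memory globals; the function name, constants and
   the helper at the end are made up to illustrate the idea. */

#include <MacTypes.h>

#define MemTopLM      (*(Ptr *)0x0108)   /* physical top of RAM */
#define BufPtrLM      (*(Ptr *)0x010C)   /* top of usable application memory */

#define kNewTopOfRAM  0x00D00000UL       /* $CFFFFF + 1, including the PDS RAM */
#define kHole1Start   0x00400000UL       /* $400000-$5FFFFF: ROM/SCSI region   */
#define kHole1Size    0x00200000UL
#define kHole2Start   0x00900000UL       /* $900000-$BFFFFF: SCC/IWM/VIA region */
#define kHole2Size    0x00300000UL

static void FakeExpandedRAM(void)        /* hypothetical INIT entry point */
{
    /* 1. Tell the system that RAM now ends at $CFFFFF. */
    MemTopLM = (Ptr)kNewTopOfRAM;
    BufPtrLM = (Ptr)kNewTopOfRAM;

    /* 2. Pre-allocate the two I/O holes as non-relocatable, non-purgeable
          blocks, i.e. hand-build block headers just below $400000 and just
          below $900000 so the Memory Manager never gives them out. */
    /* MarkRangeAllocated(kHole1Start, kHole1Size);   -- hypothetical helper */
    /* MarkRangeAllocated(kHole2Start, kHole2Size);   -- hypothetical helper */
}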

It would mean that a Mac SE could use up to 8MB of RAM even though no more than about 3MB (in System 7) could be allocated to any particular application. But since a 4MB Mac SE under System 7 only has about 3MB free anyway, it'd still be about twice as useful :-) .

I'm sort-of aware this has probably been proposed a gazillion times ;-) !

SE Memory map:

[attached image: SE memory map]
 
I think application RAM needs to be contiguous for the System to use it, which can be done with an MMU, except the SE doesn't have one stock.

You could use it for custom applications though?
 
This has sort of been done on some 512k/Plus accelerators. You could use the upper range as a ramdisk.

Not sure if MacOS would allow the System Heap to be moved while it's running
 
I think application RAM needs to be contiguous for the System to use it, which can be done with an MMU, except the SE doesn't have one stock.
Yes, so what I mean is that the heap is set up to look like a contiguous block with a couple of pre-allocated blocks (as though there were 2 applications there, but there aren't really).

You could use it for custom applications though?
And that's a possibility.

Not sure if MacOS would allow the System Heap to be moved while it's running
A little while back I was reading a Mac GUI article about the boot-up process on an early classic Mac. One of the first things that happens is that the System suitcase loads an internal INIT resource which patches buggy or updated $Axxx trap entries, and it's possible to add resources that are called at the beginning of the boot process, before the main OS is loaded. I wouldn't want to move the System Heap, just patch some entries to fake pre-allocated blocks and change the top address (the OS uses the bottom of memory AFAIK rather than the top).
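Just to show what I mean by patching entries - the trap-patch mechanics themselves are the well-trodden bit. A sketch only; the choice of _BlockMove and the replacement routine are placeholders:

Code:
/* Sketch of a classic trap patch, the same mechanism the System's boot-time
   INIT uses for its $Axxx fixes. Untested; MyPatchedTrap and the choice of
   _BlockMove are placeholders. Headers as in the Universal Interfaces
   (older interfaces keep these routines in OSUtils.h instead of Patches.h). */

#include <Traps.h>      /* trap numbers such as _BlockMove */
#include <Patches.h>    /* GetOSTrapAddress / SetOSTrapAddress */

static UniversalProcPtr gOldTrap;   /* saved original vector, for chaining */

extern void MyPatchedTrap(void);    /* hypothetical replacement routine */

static void InstallPatch(void)
{
    /* Save the current dispatch-table entry, then point it at our routine.
       A real patch would normally jump through gOldTrap when it's done. */
    gOldTrap = GetOSTrapAddress(_BlockMove);
    SetOSTrapAddress((UniversalProcPtr)MyPatchedTrap, _BlockMove);
}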

 
The Mac SE has a PDS slot, which the Guide to the Macintosh Family Hardware explicitly says can be used for memory expansion
Hardware-wise, there's no major problem. Just need some memory chips and an adequate memory controller mapped in the proper area, and some way to discriminate between the two maps so the new controller doesn't conflict with the onboard BBU, which is likely the most complex part.
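Modelled in C rather than logic, purely to make the discrimination concrete (the real thing is a handful of PLD terms watching the address lines; the ranges are the ones proposed above):

Code:
/* C model of the decode the card needs: claim only the two new windows and
   stay off the bus everywhere the onboard BBU already responds. */
#include <stdbool.h>
#include <stdint.h>

static bool CardSelects(uint32_t addr)
{
    addr &= 0x00FFFFFFUL;                              /* 68000: 24 address bits */
    return (addr >= 0x600000UL && addr <= 0x8FFFFFUL)  /* new 3 MB window */
        || (addr >= 0xC00000UL && addr <= 0xCFFFFFUL); /* new 1 MB window */
}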

Software is the hard part. If you plan on custom software handling $60_0000 ... $8F_FFFF itself, then it's easy - but only that custom software will make use of it. System 6/System 7/MacOS 8 won't be able to use it.
A lot of ROM and system code assumes the main memory area is contiguously addressable.
On non-68000 systems, it's achieved by the ROM setting up the MMU to remap the different physical memory banks into a contiguous address space and adding pass-through for I/O. That's patchable to add extra controllers (at least on the IIsi :-) ); you just need to work around assumptions in the code (number of banks, ...).
On 68000 systems, it's achieved by the memory actually being contiguous at the controller level. If you add a secondary controller at $60_0000, then you would need to patch a lot of the memory handling code to become aware of that additional memory. You can't remap it because, as mentioned by others, you don't have an MMU. Newer 68000 systems ended up with a different memory map to make more room for the memory area; look at the PowerBook 100 memory map, for instance.

I suspect a lot of the I/O stuff in the SE is hardwired in terms of addresses, so it's probably not possible to rework the memory map/remap the devices to get more space for main memory with a heavily patched ROM in a similar way to newer systems.
 
Rather than using the address space as a general RAM expansion, with the software difficulties that would involve, how about using it for a RAM disk or ROM disk?
 
A RAM disk is the easy solution to make use of the additional RAM - that'd be an example of the custom software @Melkhior mentioned that could use the extra RAM without needing to hack on Mac OS. This is what a lot of SE accelerators do, as the 68030-based accelerators often have onboard RAM slots for a 32-bit RAM bus (rather than the 16-bit bus on the logic board).

On such an accelerator, logic-board RAM is usually made available for use as a RAM disk: I'd assume the accelerator recognizes accesses to the normal RAM address space and activates its own onboard DRAM controller while inhibiting the control signals to the logic board, and then accesses to the upper window are directed to the logic-board RAM instead when the ramdisk code touches it. Something like that.
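The data path of such a ramdisk is basically just block copies into that window; something like this sketch (the base address and the driver packaging around it are assumptions here):

Code:
/* Sketch of the copy part of a ramdisk living in the extra window. The real
   work is the 'DRVR' packaging (responding to Prime/Control/Status calls);
   the data path is just this. kRamDiskBase is an assumed card address. */

#include <MacMemory.h>   /* BlockMove */
#include <MacTypes.h>

#define kRamDiskBase  ((Ptr)0x00600000)   /* assumed start of the card's window */
#define kBlockSize    512L

static void RamDiskRead(unsigned long blockNum, Ptr dest)
{
    BlockMove(kRamDiskBase + blockNum * kBlockSize, dest, kBlockSize);
}

static void RamDiskWrite(unsigned long blockNum, Ptr src)
{
    BlockMove(src, kRamDiskBase + blockNum * kBlockSize, kBlockSize);
}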

@Phipli knows much about SE accelerators so he might know if any break this pattern and allow a greater total amount of RAM for direct use by Mac OS. I'd assume if it were reasonably practical on the vintage designs it'd have been done.
 
Connectix made a clever program called Compact Virtual that worked with many of the Plus & SE accelerators with extra RAM. It'd create a RAM disk, then use the RAM disk for virtual memory, effectively increasing the RAM beyond the maximum without any performance hit.

If you're going to make a new RAM card, maybe you could somehow make it compatible with Compact Virtual? Then no new software would be needed. Dunno if that'd be possible.

Also, is there a limitation to the maximum size of the RAM disk on an SE? Could it be 128 MBs or larger? There were a handful of NuBus RAM cards that could have 256 MBs of RAM for the RAM disk. Would be cool to see an SE with 128 MBs of RAM. :)

Also, could a small portion of it be used for GWORLD to speed up the video a tiny bit?
 
If you're going to make a new RAM card, maybe you could somehow make it compatible with Compact Virtual? Then no new software would be needed. Dunno if that'd be possible.
Virtual only works if you have an MMU, which is the fundamental issue: a stock SE doesn't have one.
 
Also, is there a limitation to the maximum size of the RAM disk on an SE? Could it be 128 MBs or larger?
Yes, there is: the processor can only address a total of 16MB, which includes RAM, ROM and all the hardware devices.

You could technically set up a paging system, but that would be slow and... well... not very compatible.
Also, could a small portion of it be used for GWORLD to speed up the video a tiny bit?
All of the SE's RAM is already sort of a GWorld while not really being one at all, because the SE uses main memory as its video RAM. There is no way to speed up the RAM-access side of video on an SE without seriously reworking the design of the computer; your best bet is an accelerator card to speed up the QuickDraw maths part.
 
Virtual only works if you have an MMU, which is the fundamental issue: a stock SE doesn't have one.
Bearing in mind, this is basically a thought-experiment & an exercise in creativity, rather than a serious proposition, I suspect it might be possible to patch application switching to make it possible to switch between a number of bank-switched applications.

This is how I imagine it could work. Let's imagine for the moment that my earlier assumption holds: that System 7.x can be fooled into thinking the application space goes from $000000 to $CFFFFF, but with pre-allocated holes (ROM, SCSI, SCC..) that can't be allocated to applications, and that $C00000 to $CFFFFF is a bank-switched area. At any one time, only one application (with a maximum of 1MB of application space) can use it. That doesn't mean the bank switching hardware has to be that crude; perhaps it can map 1MB at a time on any 64kB boundary (maybe there are 16x 64kB pages, each controlled by a 16-bit register => 16+16 = 32-bit addressing = a maximum of 4GB 🤪 ).

Whenever an application is launched, the patched application launcher allocates a bit of system heap for the new application's mapping, finds enough 64kB pages for the application's requirements, and launches it into the banked region (so we need something like 16x2 bytes + SP for context switching). When the user switches to a different application, the launcher treats it as a normal application switch if the other application isn't in banked space; otherwise it pushes any remaining context, remaps the bank registers for the other application and transfers to it.
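To make that a bit more concrete, the per-application state and the switch might look like this (fantasy hardware: every register address and name below is invented):

Code:
/* Sketch of the bank-switch context I have in mind. The 16 mapping
   registers, their base address, and SwapInBankedApp() are all invented --
   this is what the patched launcher/switcher would maintain per application. */
#include <MacTypes.h>

#define kNumPages     16                                  /* 16 x 64 kB = 1 MB window  */
#define kBankRegBase  ((volatile UInt16 *)0x00CF0000)     /* invented register block   */

typedef struct {
    UInt16  pageMap[kNumPages];   /* which physical 64 kB page backs each slot */
    UInt32  savedSP;              /* stack pointer saved when switching away   */
} BankedAppContext;               /* 16 x 2 bytes + SP, as above */

static void SwapInBankedApp(const BankedAppContext *ctx)
{
    short i;
    for (i = 0; i < kNumPages; i++)
        kBankRegBase[i] = ctx->pageMap[i];   /* remap the $C00000 window */
    /* ...then restore SP/registers and resume the application. */
}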

There are issues with this concept. At least:
  1. VBL and timed tasks in banked memory would force bank-switching too (slow, possibly too slow for reliable operation).
  2. The Finder code for listing applications probably needs modifying, because if it's actually trawling through the full memory space, then obviously it wouldn't see any of the mapped out applications. But my guess is that the Finder uses a System API to iterate through applications, so if that was patched, it would see the banked applications as intended.
Admittedly, it's a fairly pointless concept; it's undoubtedly much easier to add an '030, which gives you a proper MMU and true 32-bit addressing :) . Still, Expanded Memory was a super-stupid concept for PCs, invented after the 80386 was released but forced by the industry's 15-year failure to use protected mode properly, and they still did that!
 
Bearing in mind, this is basically a thought-experiment & an exercise in creativity, rather than a serious proposition, I suspect it might be possible to patch application switching to make it possible to switch between a number of bank-switched applications.
I was referring to Virtual specifically :) I have no idea about the feasibility of what you're proposing as it is far beyond my understanding 😆
 
Bearing in mind, this is basically a thought-experiment & an exercise in creativity, rather than a serious proposition, I suspect it might be possible to patch application switching to make it possible to switch between a number of bank-switched applications.
I think one of the prototype macs that never made it to production banked a load of "small" pages... I can't remember which, YACC, Turbo Mac, or Big Mac or whatever.

@GRudolf94 - your memory is better than mine, do you remember which prototype heavily used basic bank switching for a kind of application context switching? Or was it proposed for Pink?
 
I think one of the prototype macs that never made it to production banked a load of "small" pages... I can't remember which, YACC, Turbo Mac, or Big Mac or whatever.

@GRudolf94 - your memory is better than mine, do you remember which prototype heavily used basic bank switching for a kind of application context switching? Or was it proposed for Pink?
Uhm, I don't recall. But I think you're right in that it was supposed to be one of the "what's the Mac II gonna look like?" ones. Might be in one of those spec proposal papers?
 
Uhm, I don't recall. But I think you're right in that it was supposed to be one of the "what's the Mac II gonna look like?" ones. Might be in one of those spec proposal papers?
I skimmed a few of them but didn't catch it sadly.

Just made myself want a backplane Mac again.
 
Virtual only works if you have an MMU, which is the fundamental issue: a stock SE doesn't have one.
Also the 68000 has a limitation where it can't cleanly restart an instruction after a bus error, so it can't support demand-paged virtual memory. (A paged MMU where all of the memory is always physically present is fine.)

The 68010 fixed it, but Apollo workstations hacked around it by having two 68000s. When a page needed to be loaded into memory, it held /DTACK on the main 68000 to freeze it while the second 68000 loaded the page. The main CPU then continued not knowing anything had happened.
 
Also the 68000 has a limitation where it can't cleanly restart an instruction after a bus error, so it can't support demand-paged virtual memory. (A paged MMU where all of the memory is always physically present is fine.)

The 68010 fixed it, but Apollo workstations hacked around it by having two 68000s. When a page needed to be loaded into memory, it held /DTACK on the main 68000 to freeze it while the second 68000 loaded the page. The main CPU then continued not knowing anything had happened.
OK, so that's how it was done. I knew they used two 68Ks, but I'd always read that the mechanism was to have the second 68K one instruction behind the other so that it could take over after a page fault. That didn't make sense to me, but the /DTACK solution does.
 
You could theoretically rewrite the ROM to move all hardware addresses to the top of memory and then either:
  • Replace/recreate the glue logic or
  • Move/add a 68000 to the PDS card and rewrite memory accesses for hardware
 
I have a prototype open-source 68HC000-based accelerator made in partnership with Garrett's Workshop that can do what's described here. The accelerator has 4 MB of onboard fast RAM for the main 4 MB of RAM at $000000-$3FFFFF and does not use motherboard RAM except for the video/sound buffers. Since the motherboard RAM is mostly unused, the accelerator can be configured to remap the addresses in the $600000-$8FFFFF range to motherboard RAM. That way you get almost 3 MB of additional RAM, but due to limitations of the existing hardware there's a 64 kB hole in the middle where the screen and sound buffers are. So you get 1.9375 MB contiguous, then 64k of screen/sound, and then another contiguous 1 MB.

It won't be that fast, but the accelerator (unlike many legacy designs) is able to do back-to-back reads from the motherboard and can also do a posted write to motherboard RAM. So even when running an app out of motherboard RAM, it will be significantly faster than on an unaccelerated SE. When I brought up the accelerator initially, I disabled onboard RAM and used motherboard RAM exclusively to ensure the bus bridge to the motherboard had good performance. In this case Speedometer 3 was giving something like a 1.25 CPU score compared to a Mac SE baseline score of 1.0. Running from fast RAM exclusively on the accelerator gets you something like a 4.15x CPU score. Since the main 4 MB of RAM on the accelerator contains the OS, low-memory globals, and framebuffer, running a typical app from motherboard RAM will be even faster than 1.25x but almost certainly less than 2x except in contrived cases.

I have a thought about patching the OS to use this extra RAM. It’s totally uninformed except from a theoretical perspective. The Mac’s memory manager probably maintains a free chain, right? This is a linked list representing allocated and free regions of memory. I guess at the system level it would be representing the heap spaces for different apps and the system. When more RAM needs to be allocated, the memory manager looks through the free chain and finds a free region of memory large enough to make the allocation and then updates the free chain with a new linked list item saying that the memory in that region has been allocated. At least, this is how it usually works. Can’t we just find the free chain and make some more entries to represent the permanent allocation that is ROM and SCSI at $400000-$5FFFFF and the free RAM at $600000-$8EFFFF? I’d try that first but there are a number of reasons that it might not work.
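To be clear about what I mean by the free chain, here's the picture I have in mind (a generic illustration only, not the actual Memory Manager structures - the real zones keep a header in front of each block inside the zone rather than a separate descriptor list):

Code:
/* Generic illustration of the "free chain" idea above -- NOT the actual Mac
   Memory Manager layout, which stores a header in front of every block
   inside the zone rather than keeping a separate list of descriptors. */
#include <stdint.h>

typedef struct MemRegion {
    uint32_t          start;       /* first address of the region */
    uint32_t          length;      /* size in bytes */
    int               allocated;   /* nonzero = never hand this out */
    struct MemRegion *next;
} MemRegion;

/* What we'd want the chain to describe after the patch:
     $000000-$3FFFFF   existing RAM (system + application heaps, as today)
     $400000-$5FFFFF   permanently "allocated" (ROM / SCSI)
     $600000-$8EFFFF   free RAM on the card, available for allocation      */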

I just built some new prototypes but have yet to try them out. I just updated the code to map the memory at $600000-$8FFFFF as I described. If anyone formulates a good plan to patch the memory manager I’ll send you a prototype WarpSE to develop it on. If it can be made to work well, I will upgrade future versions of the WarpSE with more fast RAM so that high memory is just as fast as regular RAM and also fix the memory mapping so there isn't the hole for screen/sound RAM in the middle.
 
Off the top of my head, most things are going to rely on system globals such as MemTop, ApplLimit, BufPtr, etc. I believe those are some of the values that the system will use when determining how far it can go with application heaps and stacks.
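For what it's worth, those all live at fixed low-memory addresses on a 68000 Mac, so inspecting them is easy. The offsets below are the documented ones, but worth double-checking against Inside Macintosh before relying on them:

Code:
/* Peek at the globals that bound application memory. Offsets are the
   documented low-memory locations; verify against Inside Macintosh for the
   system version in question. Assumes a console build (e.g. SIOW) for printf. */
#include <MacTypes.h>
#include <stdio.h>

#define MemTopLM     (*(Ptr *)0x0108)   /* physical top of RAM        */
#define BufPtrLM     (*(Ptr *)0x010C)   /* top of usable app memory   */
#define ApplLimitLM  (*(Ptr *)0x0130)   /* upper limit of the app heap */

static void DumpMemoryBounds(void)
{
    printf("MemTop    = $%08lX\n", (unsigned long)MemTopLM);
    printf("BufPtr    = $%08lX\n", (unsigned long)BufPtrLM);
    printf("ApplLimit = $%08lX\n", (unsigned long)ApplLimitLM);
}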

It may just be easier to have a patched ROM on the card and move the ROM and SCSI addresses way up; moving those two alone would give you 9MB of contiguous space to use for RAM.
 