
They still make a 68040 <-> PCI chip :-)

Melkhior

Well-known member
I don't know why I feel happy about it, but for some reason Renesas still manufactures some variant of their "QSpan II" Processor-to-PCI Bus Bridge, which has native support for the 68040 bus. They are a bit pricey, though.

Theoretically, one could build a PDS'040 <-> PCI bridge board to be able to use a PCI device in a Quadra. And there are still PCI <-> PCI Express bridges being manufactured, so even PCIe is theoretically possible. That, or use the GTP transceivers of an Artix-7 FPGA to get PCIe directly.

Of course the lack of software support is the show-stopper. But it's a nice thought.

Happy new year!
 

chelseayr

Well-known member
Very interesting thoughts indeed. I'm guessing the Quadra would have to run System 7.5 or newer, which corresponds with the actual early PCI Power Macintoshes as far as native card support went, leaving only the question of whether the OS would actually see the bridge chip itself.
 

Phipli

Well-known member
Radius apparently had PCI cards running in 68k Macs using a PDS card on a Rocket, but only ever internally.

I don't know the details or the why. I believe they were some kind of storage cards??? My memory is terrible.
 

joevt

Well-known member
Very interesting thoughts indeed. I'm guessing the Quadra would have to run System 7.5 or newer, which corresponds with the actual early PCI Power Macintoshes as far as native card support went, leaving only the question of whether the OS would actually see the bridge chip itself.
Is the Quadra supposed to be upgraded with a PowerPC? All existing classic Mac OS PCI drivers probably require PowerPC, so you'd have to start from scratch anyway. In that case, you might as well allow your new PCI driver to work in System 6 (though System 7 is required for the 68040?).
 

Arbee

Well-known member
Yeah, there is no chance of getting any Power Mac compatible PCI stuff to work on a 68K machine. The Slot Manager in NuBus machines has no idea what PCI is, and the stuff on Power Macs that does is PPC native. So you'd need to make the whole thing look like NuBus: patch the system ROM to program the PCI card's BARs to appear in the $FsXXXXXX range, and replace the card's ROM with a NuBus-style declaration ROM. Then it should work, except DMA would likely be problematic.
 

joevt

Well-known member
You don't need Slot Manager to use a hardware device. You just need to poke some bytes and the device will do something. For that you need to know where those bytes are. For PCI, you need code to reserve ranges of memory for PCI config space and the BARs, and code to set up the BARs.

For DMA, the bridge chip just needs to be able to do memory requests from a bus mastering PCI device. That's all handled by the hardware. Code in the PCI device driver just sets up the DMA and starts it running.

The last bit of code you need is for handling interrupts. The system interrupt handler needs to be patched so that it detects an interrupt from the PCI bridge chip and passes control to the interrupt handler of the PCI device driver.

There are probably many threads discussing this. Here's one:
#19
 
33 MHz PCI signaling is fairly forgiving. I worked on the original Intel EISA and ISA chipsets at Dell way back when, and I also designed a PCI bridge for a graphics card using an Altera FPGA with fewer pins than a cheap Spartan-6 you can buy today. I had to hack the PCI config cycles in due to limited space on the FPGA, but it was enough to make it work. And no, the FPGA didn't meet the PCI signaling specs...

One thing I patented was determining the optimal burst pattern from the host: with the appropriate number of waits encoded in a 4-bit pattern, we could max out the PCI bus by never letting the host write buffer go empty (which would cause a new frame cycle). We used this on real graphics GPUs later, when PCI was in full swing.

I have seen your code for your video card and I honestly don't think it's beyond your abilities to roll your own '040 <-> PCI bridge. You would probably have to multiplex the '040 signals into the FPGA to save pin space, but that's just a few multiplexers on the board.
 