In that light I've been trying (and failing) to express my thoughts in terms of a block diagram, with the "NuBus chipset" as one block of logic within the FPGA, or as part of the logic that would need to be added when the time comes to interface with the SRAM.
But why do you think the block that's labeled "Video" has a "PDS-like" interface on it? It doesn't, or at least it doesn't unless we revive a lot of design decisions that were floated but later discarded. To wit:
Way back in the thread I *did* propose the idea of breaking this across two FPGAs because of the worries about whether it was possible to get enough I/O pins to do everything (the bus interface, the RAM interface, and the video port) with one reasonably-inexpensive and accessible FPGA module. My idea for doing that was essentially having the bus live in one place (probably a different piece of programmable logic), the video in the FPGA, and have them both interface to a chunk of RAM in between them. (Then in one case you only need enough pins for bus glue + RAM interface in one device and RAM interface + HDMI in the other.) In *that* case then, sure, I guess there's some argument to be made that the RAM interface you have hanging off the video component is... kind of PDS like, kind of, remotely, but not really? But I've been looking at how that Amiga card that started this whole thing off is put together, and it looks to me like there are *plenty* of pins to implement Nubus; their implementation of "68000 PDS" requires 64 pins, which is almost 16 more than we need for Nubus.
Full-width 68030 is going to be another 24 pins if we assume all the other signals besides address and data are also present and needed. So that's why I estimated 90. That is flatly a dealbreaker for the FPGA module used in the design referenced by the OP: it has 64 digital I/O pins. 64.
The FPGA itself has more than 64 pins, but the rest are already soldered to the SDRAM/HDMI port/etc. built into that card. And any FPGA that has enough pins on its own is a BGA, which makes it pretty flatly a no-go for homebrew. That whole section about breaking the design up was a grasp at "well, how could we do this with FPGAs that are available in solderable packages" instead of modules.
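Just to make the pin arithmetic above concrete, here's a back-of-the-envelope tally in Python. The exact signal counts are my rough assumptions for illustration, not numbers pulled from a datasheet:

```python
# Rough pin-budget arithmetic for the bus options discussed above.
# Signal counts are approximations/assumptions, not datasheet figures.

nubus_pins = 32 + 16          # AD0-AD31 multiplexed + control/arbitration/clock (approx.)
pds_68000_pins = 64           # per the Amiga card's "68000 PDS" implementation
pds_68030_extra = 24          # additional lines for a full-width 68030 bus (approx.)

module_io = 64                # digital I/O available on the OP's FPGA module

print(nubus_pins)                         # ~48: fits in 64 I/O with ~16 pins to spare
print(pds_68000_pins + pds_68030_extra)   # ~88: close to the ~90 estimate, over budget
print(module_io - nubus_pins)             # the "almost 16 more than we need" margin
```

This is only meant to show why NuBus squeaks under the module's 64-pin budget while full-width 68030 PDS blows past it.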
Frankly I think I've pretty much convinced myself there's a real possibility of using the Amiga card's hardware design nearly unmodified (*1), beyond putting a Nubus connector on it instead of Zorro II and rewriting the bus module to speak Nubus instead of 68000. The design
might *also* work nearly unmodified for a 16-bit LC PDS, that is also true, and *that* may well be less work because you could probably reuse a lot of the 68000 bus code. But I'm not entirely sure, because there are details of bus decoding with PDS slots I haven't pinned down, i.e., whether you actually need to decode all 32 address lines when you're using the "pseudoslot" assignment or not.
(*2) But a "direct refit" won't scale to 32-bit PDS. Period. *Maybe* you could make it fit by adding a little external logic to handle the bus-sizing signals and only using a 68030 PDS as a 16-bit slot, but that is not an option for 68040 PDS.
If we throw out the physical design of the Amiga video card this all started with, *maybe* a 32-bit PDS version will fit on that Spartan-6 dev board mentioned a few pages ago, since it claims 108 I/O pins? (But it also doesn't have the HDMI port already implemented.) Assuming all 108 quoted pins are actually *usable* with the DRAM enabled (sometimes on boards like this the I/Os are only free if you're not using some of the hardware built onto the board), then... maybe. It'll be tight.
(*1: with the proviso that the video module will need to be redesigned so it uses Mac-compatible pixel packing.)
(*2: Here is what "Designing Cards and Drivers, 3rd edition" says about PDS slots:
"To ensure compatibility with future hardware and software, you should decode all the address bits to minimize the chance of address conflicts. To ensure that the Slot Manager recognizes your card, make sure that the declaration ROM resides at the upper address limit of the 16 MB address space."
This sounds to me like you need to care about at *least* 24 address lines... no, actually. Maybe I'm wrong, but consider this:
"Notice that the /NUBUS signal (Table 15-7) is an address decode of the memory range $60000000 through $FFFF FFFF. The /AS (address strobe) signal qualifies the assertion of the /NUBUS signal. The /NUBUS signal is asserted a maximum of 26 ns after the /AS signal is asserted, and is removed a maximum of 22 ns after the /AS signal is removed."

"Remember that /NUBUS is valid when the processor is accessing the on-board video logic; therefore, to avoid possible data bus conflicts, you must decode one of the pseudoslot address ranges when using the /NUBUS signal as a qualifier."
This makes it sound like you need to worry about the whole enchilada. Now, it certainly would be possible to hang some additional logic outside the FPGA on these address lines to do the necessary decoding, since they don't *have* to terminate on the FPGA, but that means at the very least the PDS version will need more parts unless you find an even bigger FPGA module.)
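To illustrate why the high-order lines matter, here's a minimal Python sketch of the kind of pseudoslot decode the book is describing. The $Fs00 0000 standard-slot convention is the documented NuBus slot-space layout; treating slot $E as the example card is my assumption for illustration:

```python
# Sketch of the pseudoslot address decoding the quoted passage calls for.
# Slot $E as the example card is an illustrative assumption.

def in_pseudoslot(addr: int, slot: int) -> bool:
    """True if addr falls in the 16 MB standard slot space for `slot`
    ($Fs00 0000 - $FsFF FFFF). Note this compares the TOP 8 address
    bits, i.e. you really do have to look at the high address lines."""
    return (addr >> 24) == (0xF0 | slot)

# A card decoding slot $E responds here...
print(in_pseudoslot(0xFE00_1234, 0xE))   # True
# ...but must NOT respond when the same low address bits appear in a
# range belonging to something else (e.g. main-board video):
print(in_pseudoslot(0x5000_1234, 0xE))   # False
```

The point is just that the compare against the upper byte is trivial inside an FPGA but costs real parts if those lines never reach it.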
I'm very familiar with NuBus and PDS in terms of Card and Driver Development
Are you? You didn't seem aware of things like the fact that there actually isn't such a thing as "16 bit mode" for Nubus, which is something that comes to light pretty quickly when you read the manuals. (Yes, there's the "byte lane" thing, but that doesn't apply to any "RAM-like" device like a video card *except* for the declaration ROM. They're *very* clear about that.) I get that you have it really drilled into your head that the two kinds of cards "look alike" if you use the "pseudoslot" methodology, and, sure, that's great when you're talking about what you're sticking in the declaration ROM, but that cart is so far in front of this horse it's not even funny.
(Well, okay, there's another place where byte lanes come into play, the whole "endian-ness" thing, which is kind of a pain in the neck, or at least it is when you don't have umpteen gazillion gates inside an FPGA at your disposal and have to build the card with generic 7400 logic and PALs.)
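For what it's worth, the endian fix-up amounts to mirroring the byte lanes within each 32-bit word, which is why it's nearly free inside an FPGA (it's just wiring) and miserable in discrete logic. A hedged Python sketch of the operation, not of any particular card's implementation:

```python
# The "endian-ness" fix-up: reverse the four byte lanes of a 32-bit word.
# Inside an FPGA this is pure routing; in 7400 logic it costs real chips.

def swap_lanes(word: int) -> int:
    """Reverse the byte order of a 32-bit word."""
    return int.from_bytes(word.to_bytes(4, "big"), "little")

print(hex(swap_lanes(0x11223344)))   # 0x44332211
```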
Understood, actually I never suggested it was. I've only been talking about the implementation of that logic, it's been problematic for engineers to do in the past and may be so at present.
No, you've been excessively harping on how hard it is, and the bulk of your evidence seems to be a Byte article that came out the same year as the first Mac using the bus, as if nothing has been learned since then. To be clear, I'm not pretending that writing an FPGA Nubus implementation is going to be a walk in the park, but, seriously, we do have the manual and years worth of tech notes and addendums to work off of.
NuBus saves but a bit less than 32 lines over a PDS implementation and when introduced in the SE/30 at 16MHz could be likened to running on the wet sand at the shoreline as compared to NuBus at 10MHz running in the dry sand. PDS transitioned...
So what? Seriously, here you are again making a case that essentially boils down to
"anything less than better than the best Apple ever made, ever, is totally a waste of time, and therefore we have to make sure we fully exploit the fastest, highest-frequency, most electrically difficult expansion port available to us for the first revision!" Again, gotta say it, that doesn't seem like a helpful attitude. And more than half your last post consisted of this sort of thing. Approximately 64% of it, if my broad-strokes pipe through "wc" is correct.
This idea that this has to be the best, fastest video card ever does not seem to have been a requirement of the OP, nor several other people who chimed in and said it'd be nice just to have an alternative to increasingly hard to find decent NuBus cards that aren't very friendly with modern monitors. Why are you so intent on making WARP SPEED a non-negotiable item on the feature list? You do realize how slow *all* the machines you might target with this are, right? That stupid little SPI video card on a CPLD I tossed out earlier can handle about 25 640x480x256 full frames per second on a
62 MHz single-bit SPI bus. That's about seven and a half megabytes a second sustained. I don't think there is any 680x0 Mac that can meaningfully do that: full-screen 480p video at 25 Hz. I'm including the 840av in this. Performance is *really* a ridiculous thing to harp on this hard.
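The arithmetic behind those throughput numbers, as a quick sanity check:

```python
# Sanity check on the SPI video card throughput figures above.

width, height, bytes_per_px, fps = 640, 480, 1, 25   # 256 colors = 1 byte/pixel

bytes_per_sec = width * height * bytes_per_px * fps
bits_per_sec = bytes_per_sec * 8

print(bytes_per_sec / 1e6)   # 7.68 -> "about seven and a half megabytes a second"
print(bits_per_sec / 1e6)    # 61.44 -> hence the roughly 62 MHz single-bit SPI clock
```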
You said earlier that I might need to draw a picture of what I've been trying to get across to you, and I think it's time to do exactly that for the next chapter. Words so often, if not always, fail me. Bunsen suggested much the same years back and said doing it helped immensely.
I'm very much looking forward to this illustration.