
Development of Nubus graphics card outputting to HDMI?


Recommended Posts

The challenge with SDRAM is that typical SDRAM controller logic presents a transaction interface, and doesn't have any fixed or guaranteed address-to-data timing. It's a black box with variable timing depending on what other memory transactions are in progress. That's no good for a streaming framebuffer where you need to be constantly reading out pixel data at some fixed rate, while also interleaving reads and writes from the CPU. To accomplish that, you may need to jettison the canned or wizard-created SDRAM controller and write your own from scratch, which is a decent-sized project all by itself. This was my downfall in my attempts to build a DIY graphics card, years ago. In contrast, doing it with SRAM is trivial. Some FPGA dev boards do have SRAM, such as the Altera DE1 that I used to build Plus Too. But SDRAM is certainly fine too if you can solve the interface challenges easily enough.

Link to post
Share on other sites

6 hours ago, bigmessowires said:

At least in my experience, it's much easier to use a microcontroller than an FPGA. The development experience for an FPGA is more challenging, the tools are complex and confusing, it's difficult to wrap your head around what the Verilog or VHDL is doing, and the simplest operations seem to require 10x more effort and have 100x more bugs.

Yeah. Well, a big part of what I want to get out of this is to learn about FPGAs and FPGA logic design.

 

6 hours ago, bigmessowires said:

An FPGA with integrated ARM would be neat, if you have a plan for the ARM. There are also simple open source soft-core CPUs that you can implement in the FPGA logic, if your FPGA has enough resources.

Any plans with ARM would be way down the line. Perhaps soft cores might be worth looking into for QuickDraw acceleration later?

 

3 hours ago, Gorgonops said:

(Accelerated quickdraw introduced a ton of new concepts like off-framebuffer rendering, etc.)

Where might one read more about that?

 

3 hours ago, bigmessowires said:

you could build a CLUT-based card using SRAM instead of SDRAM, making the memory interface much simpler.

Yes. It might be worth exploring if SDRAM becomes intractable.

 

3 hours ago, bigmessowires said:

Here's an updated version of my earlier code, modified to assume the framebuffer is in external SRAM instead of internal FPGA memory. The CLUT is small and can stay in internal FPGA memory. Hopefully it's clear:

It is, thanks. That's roughly what I expected for a CLUT. I don't think B&W support would be hard either? I imagine (naively) you'd probably want to read a byte (for example) every eight pixel clock cycles, and on every pixel clock cycle perform some bit masking to determine the state of the current 1-bit pixel, then output the appropriate 24-bit pixel.

 

2 hours ago, trag said:

You can work up to these things in small steps.   I would start with:

That is more or less the same progression I see, except HDMI and SDRAM instead of VGA and DDR2, respectively.

 

2 hours ago, Gorgonops said:

One question about that? Does the TMDS encoder operate on "a word at a time", or is there some kind of streaming function with it? IE, are there primitives hard-coded into the FPGA board's hardware design that accelerate grabbing bytes straight off the DRAM memory? Unless something like that is in play then I don't see the problem with inserting a CLUT; as BMoW's pseudocode shows, you can basically think of the CLUT as if it were a 256 pixel long framebuffer, with the pixel value the output circuitry reads from the "clutbuffer" determined by using the data value fetched from the actual framebuffer as the address.

I used "streaming" kind of loosely, sorry. There is no primitive (some sort of DMA?) for grabbing bytes from memory and pumping them to the TMDS encoder that I am aware of. I agree CLUT is not a big deal, at least in theory. I was simply lacking imagination at 4am last night for how it might be done in an FPGA but BMoW's pseudocode totally makes sense.

 

2 hours ago, trag said:

Ultimately, for VGA output, I was looking at something in this family for a, geeze, I'm having vocabulary failure today, the thing a majigger that acts as the D to A converter and changes a stream of pixel data into video out data.

Yeah, I briefly looked at the equivalent ADV7123 DAC, since that's what's used on the optional daughter board (which I did not purchase) of the QMTECH FPGA dev board I have. QMTECH ships an example design to drive that IC. The link I shared earlier about hacking 1080p @ 60Hz output from a Spartan 6 appeared to imply an HDMI TX IC would be able to work around the serialization bandwidth limitations of that FPGA, so I also looked at the TFP410 and ADV7513.

 

---

 

BTW thanks again for all the input and discussion. I'm definitely going to need all the help I can get to have any hope of realizing this project.

Edited by dlv
Link to post
Share on other sites
29 minutes ago, dlv said:

It is, thanks. That's roughly what I expected for a CLUT. I don't think B&W support would be hard either. I imagine you'd probably want to read a byte (for example) every eight pixel clock cycles, and on every pixel clock cycle perform some bit masking to determine the state of the current 1-bit pixel, then output a 24-bit pixel.

The way the real hardware did it back in the day was generally to feed your byte (or word) size memory read into a parallel-load shift register and simply clock out a bit on every pixel clock. With your HDMI output I imagine you could just effectively do the same thing, IE, you read 8 or 16 bits (depending on the width of your video buffer), stuff it into a shift register, and as you clock it out you effectively just multiply the 0 or 1 you get into the appropriate bit mask that'll generate an all white or all black pixel.

(Or, if you want to be stupid, design it so you can choose what color is used for lit pixels so you can emulate a green or amber monitor. That'd be a totally useful feature.)
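As a rough illustration, here's that shift-register scheme modeled in C (function and constant names are invented; the real thing would be a few lines of Verilog doing the same job in hardware):

```c
#include <stdint.h>

#define PIXEL_WHITE 0xFFFFFFu   /* 24-bit all-ones */
#define PIXEL_BLACK 0x000000u

/* One framebuffer byte is parallel-loaded into a shift register; on each of
   the next eight pixel clocks the top bit falls out and selects an all-white
   or all-black 24-bit pixel. Classic Mac 1-bit video uses 1 = black and
   0 = white; if that guess about polarity is wrong it's a one-line flip. */
void emit_eight_pixels(uint8_t fb_byte, uint32_t out[8])
{
    uint8_t shift_reg = fb_byte;             /* parallel load */
    for (int clock = 0; clock < 8; clock++) {
        int bit = (shift_reg >> 7) & 1;      /* bit clocked out this cycle */
        out[clock] = bit ? PIXEL_BLACK : PIXEL_WHITE;
        shift_reg <<= 1;
    }
}
```

Swapping PIXEL_WHITE for an amber or green value would give the joke monitor-emulation feature for free.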

For 2 and 4 bit indexed modes you'll sort of deal with a similar problem, IE, each byte read will have four or two pixels in it, respectively. My suggestion might be to load the pixel data read from RAM into a register that implements a "sliding window" that moves in appropriate size chunks down the register and only supplies the bits in the window as the address for the CLUT. IE, in an 8-bit mode if you have a pixel that looks like this:

 

10010011

 

Then you simply read the 147th position in the CLUT for the output, but if you have the same value representing two 4 bit pixels on the first read you mask it like this:

 

0011

 

and the color is item #3 in the CLUT, and the second read:

 

1001

 

directs you to color #9. Strictly speaking B&W could be a special case of this; the 1 bit selects from a 1-bit CLUT load where 0 is black and 1 is white.
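To make the masking concrete, here's the 4-bit case of that window as a small C sketch, using the 10010011 byte from the example above (the names are mine, and I'm deliberately ignoring which nibble ends up leftmost on screen; in an FPGA this is basically a mux on the CLUT address lines):

```c
#include <stdint.h>

static uint32_t clut[256];   /* 24-bit entries; only the low 16 matter in 4-bit mode */

/* Two 4-bit pixels per framebuffer byte. 'second_read' selects the pixel:
   0 = first read (low nibble in the example), 1 = second read (high nibble). */
uint32_t lookup_4bpp(uint8_t fb_byte, int second_read)
{
    uint8_t index = second_read ? (fb_byte >> 4) : (fb_byte & 0x0F);
    return clut[index];
}

/* With fb_byte = 0x93 (binary 10010011):
   lookup_4bpp(0x93, 0) reads CLUT entry 3 (0011),
   lookup_4bpp(0x93, 1) reads CLUT entry 9 (1001). */
```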

 

I don't know if this would strictly be kosher, we'd have to see if Quickdraw ever *reads* the CLUT off a card instead of just loading it, but here's an optimization: For every color depth less than 8-bit have the driver load the 256 item CLUT with 16, 64 or 128 identical copies of the palette. Then when the hardware is processing each pixel it simply masks all the bits belonging to all the other pixels in the read buffer with zeros, thereby automatically applying only the data that matters to its own positional copy of the palette. Then you don't even have to move/shift/"right justify" the pixels you're using, the same CLUT hardware is used unchanged for all indexed pixel depths.

Hopefully what I just said there makes some rational sense...

Link to post
Share on other sites
49 minutes ago, dlv said:

Where might one read more about that?

I was going to point you to a pared down summary that's floating around the web but I think I found the thing it's based on. Which still isn't anywhere near enough to write your own Quickdraw implementation, but it at least mentions some of the conceptual bugaboos.

 

https://vintageapple.org/develop/pdf/develop-03_9007_July_1990.pdf

 

16 minutes ago, dlv said:

Yes. And a shift register is a much better/cleaner idea.

For just black and white the shift register is fine. But if you need to suck up doing indexed colors anyway I think the sliding window thing is the way some real video cards do it.

 

Note another way this "sliding window" could be accomplished is essentially making the buffer you load the data you've read into *itself* kind of a shift register. IE, if we go back to that 16 color example, on the first pixel clock you read the first 4 bits and mask the rest of the bits with zeros. Before the next read you simply rotate the data 4 bits over, thereby assigning the data you've already rendered to the ash heap of history, so the next read gets the next four bits, etc. (And how far you shift depends on your color depth.) Or to put it simply, treat your read buffer as a stack and POP each pixel's worth of data off, however many you need, in parallel.

 

Okay, this is actually better, because now you don't need to load multiple copies of your palette in the CLUT, you can always just use the lowest slots. So...

 

[attached image illustrating the scheme]
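In C terms the shift/pop version might look something like this (names and details are my guesses, and I'm assuming the leftmost pixel lives in the high-order bits of each byte, which is how classic Mac framebuffers are packed):

```c
#include <stdint.h>

static uint32_t clut[256];

/* "Pop pixels off the read buffer" model. 'depth' is bits per pixel
   (1, 2, 4, or 8). Each iteration the top 'depth' bits of the buffer are
   used as the CLUT address, then the buffer is shifted up so the next
   pixel is exposed, so only the lowest CLUT slots are ever addressed. */
void emit_byte(uint8_t fb_byte, int depth, uint32_t *out)
{
    uint8_t buffer = fb_byte;
    for (int i = 0; i < 8 / depth; i++) {
        uint8_t index = buffer >> (8 - depth);  /* top 'depth' bits */
        *out++ = clut[index];                   /* always the low slots */
        buffer <<= depth;                       /* pop this pixel off the top */
    }
}
```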

Offhand observation: Technically speaking you still might want the CLUT in the path even in a 1-bit mode. I do *not* know if Macs support this, but I think both EGA and VGA (and presumably their descendants for a while) and many other systems with similar hardware technically let you change the palette for 1-bit mode. Granted that's a totally weird edge case.

Link to post
Share on other sites
4 minutes ago, Gorgonops said:

I don't know if this would strictly be kosher, we'd have to see if Quickdraw ever *reads* the CLUT off a card instead of just loading it

The only remotely possible case I can think of for that being true might be the Macintosh Display Card 8•24 GC (which you mentioned as well documented) doing block transfers of frame buffer/CLUT data from, and then back to, a second NuBus card for which it is providing QuickDraw acceleration? I can't imagine it's not hijacking the frame buffer input and CLUT at the CPU QuickDraw output level for that card to itself, then distributing the accelerated buffer data as block transfers to the unaccelerated card's frame buffer/CLUT it would be supporting. Thought it might be worth mentioning if someone digs up that documentation. I'll go quiet again now. :mellow:

Link to post
Share on other sites
2 hours ago, Gorgonops said:

Which still isn't anywhere near enough to write your own Quickdraw implementation, but it at least mentions some of the conceptual bugaboos.

 

https://vintageapple.org/develop/pdf/develop-03_9007_July_1990.pdf

 

Doh. Obviously I meant to include that the part I'm referring to starts on page 332.

There are references to other pieces of documentation in Inside Macintosh, technotes, etc. I still imagine it'd be a very tall order to implement a QuickDraw accelerator without some source code to reference. I am curious how other vendors that made accelerated cards pulled it off; the fact that they did does imply there's some kind of reference implementation? The question might be whether it required an NDA to see it.

Link to post
Share on other sites

On the subject of CLUTs, while reading the LC III technote for other reasons I happened to notice this paragraph and thought it might be relevant:

 

Quote

Color modes up to 8 bits per pixel use a 256 x 24-bit CLUT which is provided by an enhanced version of the custom chip used in the LC and LC II. Monochrome modes also use the CLUT but drive the red, green, and blue inputs with the same signal.

So that's one mystery solved, that at least on Mac video cards that support both color and mono monitors grayscale modes use the CLUT to map shades of gray, not some alternate "direct DAC" path.

Link to post
Share on other sites
On 4/19/2019 at 4:40 PM, Gorgonops said:

Doh. Obviously I meant to include that the part I'm referring to starts on page 332.

Thanks. That read more like marketing for the 8*24 GC but was still interesting. What I was looking for is Imaging With QuickDraw (PDF), which I highly recommend since it has answered a lot of questions already. 

 

Quote

I still imagine it'd be a very tall order to implement a QuickDraw accelerator without some source code to reference. I am curious how other vendors that made accelerated cards pulled it off; the fact that they did does imply there's some kind of reference implementation?

A quick search revealed Apple donated the source code of the 68000 QuickDraw implementation to the Computer History Museum. It's a fine software-only reference implementation (although it may not implement Color QuickDraw) - and might even be usable largely as-is if we had a 68k processor or FPGA core available to us - but that's only part of the challenge. What's missing w.r.t. accelerated graphics cards is the interface between an accelerated QuickDraw implementation and the hardware, and how an accelerated QuickDraw is enabled and used. Also, how QuickDraw operations are queued for execution. That's likely to be highly implementation-specific, so I don't expect to find documentation. Reverse-engineering the 8*24 GC may be necessary (but would require in-depth knowledge of several things, including the Am29000 processor).

 

Edited by dlv
Link to post
Share on other sites
1 hour ago, dlv said:

Thanks. That read more like marketing for the 8*24 GC but was still interesting.

Yeah, I know it's mostly fluff, but it's the closest thing to a description of how the 8*24 actually *did* things like redirect GWorld buffers into local memory on the card, etc. It certainly isn't anywhere close to enough to actually write an implementation.

 

1 hour ago, dlv said:

What I was looking for is Imaging With QuickDraw (PDF), which I highly recommend since it has answered a lot of questions already. 

I hadn't seen that before, I've saved a copy for later.

 

The one discouraging thing I'd note about it so far is that searching for variations on the word "acceler" produces a tiny number of hits, which mostly refer to a flag you can set that force-prevents your offscreen GWorld from having the option of being loaded into local memory on an accelerated card.
 

1 hour ago, dlv said:

A quick search revealed Apple donated the source code of the 68000 QuickDraw implementation to the Computer History Museum. It's a fine software-only reference implementation (although may not implement Color QuickDraw) - and might even be able to be used largely as-is if we had a 68k processor or FPGA core available to us

Alas, I really can't imagine it being much use, at least without a lot of help. The documentation really seems to go to great lengths to stress that Color Quickdraw incorporates a whole raft of concepts not in the original "Basic" Quickdraw, which is the 68000 version. Maybe that's pessimistic; if the API documentation is good enough maybe it's doable...

 

1 hour ago, dlv said:

What's missing w.r.t. accelerated graphics cards is the interface between an accelerated QuickDraw implementation and the hardware, and how an accelerated QuickDraw is enabled and used. Also, how QuickDraw operations are queued for execution.

Yes, that's the massive mystery, and I have no idea where the answers to that are. As noted, there were a number of third parties selling cards with what their advertising literature would call "Standard Quickdraw Acceleration", and I don't think that all of them used AMD 29000s, which would mean they're not just running licensed clones of the 8*24 code. Therefore it seems it *must* follow that there has to be some documentation out there for how to grab the necessary hooks.

 

One thing I vaguely wonder about is whether there might be another form of "QuickDraw Acceleration" implemented as an extension that leverages dumber fixed-feature graphics card acceleration features like block transfers and basic line drawing/fill primitives, but *doesn't* depend on having a full QuickDraw implementation that understands GWorlds? No idea.

Link to post
Share on other sites
On 4/20/2019 at 6:19 PM, Gorgonops said:

So that's one mystery solved, that at least on Mac video cards that support both color and mono monitors grayscale modes use the CLUT to map shades of gray, not some alternate "direct DAC" path.

On the flip side, that "Imaging with QuickDraw" PDF does mention at least a few cases (it specifically mentioned grayscale PowerBooks) in which Grayscale *is* implemented in the form of a dumb directly-mapped DAC, no CLUT register, so technically if you wanted to do just a Grayscale card it's a valid config.

 

I was doing some more leafing through the "Designing..." book the other night, just reading the video driver chapter in a little more depth, and it mentioned that technically hardware gamma control registers are another thing you *could* put into a card. If I understood the gamma discussion correctly, however, it looks like most Apple cards do gamma correction for indexed modes by modifying the color values they write to the CLUT. (IE, if your palette says X-Y-Z and your gamma correction is Q, what actually gets written to the hardware CLUT is Xq-Yq-Zq, where lowercase "q" is the value of the gamma curve Q for the unadjusted brightness of XYZ. Or something vaguely not like that at all.)
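If that reading is right, the driver side might look roughly like this (every name here is hypothetical; the lookup table just stands in for the gamma curve Q described above):

```c
#include <stdint.h>

/* Hypothetical 8-bit-in / 8-bit-out gamma table, built from the monitor's
   gamma data at startup; an identity table means no correction. */
static uint8_t gamma_table[256];

/* Hypothetical register write into the card's hardware CLUT. */
extern void write_clut_entry(int index, uint8_t r, uint8_t g, uint8_t b);

/* The palette the OS asks for is never handed to the hardware directly;
   each component is pushed through the gamma curve first, so indexed
   modes get gamma correction with no extra hardware in the pixel path. */
void load_palette(const uint8_t (*palette)[3], int entries)
{
    for (int i = 0; i < entries; i++) {
        write_clut_entry(i,
                         gamma_table[palette[i][0]],
                         gamma_table[palette[i][1]],
                         gamma_table[palette[i][2]]);
    }
}
```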

Link to post
Share on other sites

Perhaps this is already obvious to all those involved, and if so, I apologize for being the nerd jumping in unneeded...

 

Re Quickdraw Acceleration:

 

Many (most?) ROM/Toolbox routines are called as unimplemented instructions. The 68K architecture has a feature where there's a whole slew of unused op codes. If your software makes a call to one of those opcodes, it triggers a routine in the CPU much like an interrupt handler, where the program counter jumps to a location set in a vector table.

 

So a bunch (all?) of the (Color) Quickdraw routines are called using these unimplemented instruction codes.  The CPU handles those and goes to grab a program counter vector off of the address/vector table, and the program counter vector/address has been set up to point to the corresponding Quickdraw Routine in the ROM.

 

In order to implement Quickdraw acceleration, or acceleration of any other routine contained in the ROM and called with this method, one simply loads a driver/extension at boot time that modifies the address/vector table. Specifically, go to that table and substitute the address of your replacement routine for the address in ROM of the stock routine.

 

In the case of Quickdraw acceleration, this would probably take the form of a routine that sends a code/instruction and any necessary data to the video card, and the video card has logic that, for example, handles rotating the entire image 90 degrees in the memory of the video card, rather than having to read it all out into CPU memory, operate on it, and write it all back to the video card.
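As a heavily simplified illustration of that patching mechanism, here's roughly what a 68k-only patch of the _CopyBits trap installed by an INIT could look like. CardTryCopyBits and the constant name are made up for the sketch, and a real patch would also have to worry about register-based traps, instruction-cache flushing, and PowerPC UPP glue, none of which appears here:

```c
#include <Patches.h>
#include <QuickDraw.h>

#define kCopyBitsTrap 0xA8EC          /* A-line opcode for the _CopyBits ToolTrap */

static UniversalProcPtr gOldCopyBits; /* saved so we can fall through to the ROM */

/* Hypothetical card interface: returns true if the card accepted the blit. */
extern Boolean CardTryCopyBits(const BitMap *src, const BitMap *dst,
                               const Rect *srcR, const Rect *dstR,
                               short mode, RgnHandle mask);

static pascal void PatchedCopyBits(const BitMap *src, const BitMap *dst,
                                   const Rect *srcR, const Rect *dstR,
                                   short mode, RgnHandle mask)
{
    /* Hand the operation to the card when it can do it; otherwise call the
       ROM routine we displaced so everything else behaves as before. */
    if (!CardTryCopyBits(src, dst, srcR, dstR, mode, mask)) {
        typedef pascal void (*CopyBitsProc)(const BitMap *, const BitMap *,
                                            const Rect *, const Rect *,
                                            short, RgnHandle);
        ((CopyBitsProc)gOldCopyBits)(src, dst, srcR, dstR, mode, mask);
    }
}

/* Called once from the INIT/extension at boot. */
void InstallCopyBitsPatch(void)
{
    gOldCopyBits = GetToolTrapAddress(kCopyBitsTrap);
    SetToolTrapAddress((UniversalProcPtr)PatchedCopyBits, kCopyBitsTrap);
}
```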

 

So you shouldn't really need special guidance, although it would be nice. It should be enough to identify the (Color) Quickdraw routines that are available. Decide which lend themselves to hardware acceleration. Learn enough about how they are normally called (any associated data structures/arguments, etc.) and then write your own substitute routine that sends the requisite data to the video card hardware and add logic on the video card to do the processing.

 

I'm pretty sure the Quickdraw routines are documented well enough to provide the necessary information. An example to copy or reverse engineer would contain what? Maybe a clear list of which routines are worth accelerating?

 

Link to post
Share on other sites
16 hours ago, trag said:

I'm pretty sure the Quickdraw routines are documented well enough to provide the necessary information. An example to copy or reverse engineer would contain what? Maybe a clear list of which routines are worth accelerating?

There's a section in that Quickdraw manual starting at page 3-129 titled "Customizing QuickDraw Operations" that, based on a reference earlier in the chapter, might be a clue as to what operations Apple considered off-loadable? (I get the feeling that certain parts of QuickDraw aren't atomic enough to override successfully, but, yeah, I have no idea.) It's actually not that long of a list, so... maybe it is hypothetically doable?

I hate to drag up this terrible canard because I think the idea of slapping a Raspberry Pi-like single board computer on everything is totally overused, but... just hypothetically speaking, considering that realistically you're probably not going to be able to push more than, I dunno, 10MB/s through NuBus, I wonder if it might actually be realistic to consider an architecture for an accelerated card that consists of a CPLD that implements the NuBus logic, a few 32 bit data and address buffers, and a very fast 8 or 16 bit multiplexed bus that goes to a "Pi-like board" (perhaps something like the BeagleBone, which contains some realtime I/O co-processors called PRUs) that handles video output with its dedicated GPU hardware and makes available a ton of CPU cycles to do... whatever. I imagine there would be substantial latency for bus transactions like individual byte/word reads from the framebuffer, but "substantial" might not be that significant in the grand scheme of things.

Anyway, that's a dumb idea, forget I said it.

Link to post
Share on other sites
5 hours ago, Gorgonops said:

Anyway, that's a dumb idea, forget I said it.

 

Actually, it's kind of an interesting idea. I don't like it, because I too abhor the fad of wanting to glue a Raspberry Pi on everything. Also, I just think programming things in a hardware description language is more elegant. But then, I'd rather program in assembly than C, and I do everything in my power to avoid going higher level than C. When one programs in assembly, one controls the result; programming in anything higher level than assembly is just making suggestions. Building hardware logic on an FPGA is even better....

 

But, my emotional shortcomings aside, the Pi already has all that logic and stuff for driving a display on board, and it's cheap. Finding a way to feed it a frame buffer from the Mac and make it come out of its already-built video port is a very tempting morsel if the main desire is to build a working video card with the least effort.

 

Re: latency.  I don't know what speed the Pis typically run at, but with the host machines running at 16 - 40 MHz, and more relevantly, the NuBus running at 10 MHz, one can fit an awful lot of Pi cycles into every host cycle.

 

On the other hand, didn't dlv write earlier that one of his goals for this project is to learn to use a hardware description language?

Edited by trag
Link to post
Share on other sites
45 minutes ago, trag said:

On the other hand, didn't dlv write earlier that one of his goals for this project is to learn to use a hardware description language?

Yeah, and really, I think it's a good approach for making a basic card. It's a great way to learn the nitty gritty of how framebuffers actually work, I think a "hardware" implementation of NuBus handshaking is likely to be more successful than trying to entirely bit-bang it, there's a lot of other potentially cool projects that having a working programmable logic implementation of NuBus could enable, etc...

 

The "fast SBC grafted to the bus" idea was just an idea if the project really did move to trying to do acceleration, QuickDraw or otherwise since, yeah, the speed disparity is so huge that *particularly* if much of the bus handling were offloaded you might be able to make it almost as fast as a full hardware framebuffer. But, well, the fact remains that if you've built the "dumb" framebuffer in an FPGA first then the bus logic at least becomes an already solved problem. Which would be great.

Link to post
Share on other sites

I kinda figured that any card that'd come close to modern resolutions would need acceleration. I was hoping that this discussion would decide on 030 PDS because, selfishly, I've got an SE/30 myself, and also, the increased bandwidth might make higher resolutions more possible. But honestly, the more time the CPU spends sending data to the PDS, the less time it has for other calculations, so it'd just slow down the rest of the computer even more than a fully saturated NuBus. Gotta keep your resources in mind, I guess. No wonder Apple seems to heavily imply that even PDS cards should be talking pseudo-NuBus.

 

I really hope I can keep up with you all enough to contribute in some way. I'm not an experienced programmer by any means, but I'm definitely going to be keeping a lookout for opportunities to help out as this progresses. Bare minimum I'm gonna be sanity checking any code for more obvious bugs when that starts coming along, if I can't do anything else.

Link to post
Share on other sites
On 4/23/2019 at 7:23 PM, trag said:

I don't like it, because I too abhor the fad of wanting to glue a Raspberry Pi on everything.  Also, I just think programming things in hardware language is more elegant.

If there weren't the clear precedent in the Macintosh IIfx design employing a pair of 6502s to offload I/O processing tasks I might be more in your camp as regards the Pi ploy. For that reason alone I don't feel like it's cheating to suggest the inverted Pi a la mode approach to making ice cream. Its onboard I/O hardware can deposit a double scoop on top of the pie on top of the ice cream between it and the 6502 sprinkles on the IIfx logic board.

 

As early as 1989, the thing to do seemed to be prototyping code for such hardware in C and later retrofitting assembly wherever speed required it. Dunno if that approach might help spread the software development load to a greater number of participants for this project?

 

 

edit: never liked programming, but I can see how a greater number of members reading each other's research finds and approaches to QuickDraw acceleration, and sharing well documented routines at a common higher-language level, might be fruitful? I'd suggest doing it in parallel in a dedicated thread to avoid muddling both projects.

Edited by Trash80toHP_Mini
Link to post
Share on other sites

Heh. Your point is still valid, but IIRC, the I/O coprocessors on the IIfx don't actually do anything in the Mac OS. They only get used in Apple Unix (forgot the name). Again, IIRC, Apple never got around to putting support into the Mac OS to make use of the coprocessors, and they just sit there and operate in pass-through mode.

 

I would love to find that that memory is wrong.

Edited by trag
Link to post
Share on other sites
37 minutes ago, trag said:

I would love to find that that memory is wrong.

So far as I'm aware your memory is basically correct.

 

Technically the high-end original Quadras (900/950) have the same I/O coprocessors (condensed into a higher-integration part?) as the IIfx but, likewise, I don't think they actually do anything with them.

Link to post
Share on other sites

Functional limitation to operation under A/UX makes sense to me. Apple never did multiprocessing until they bootstrapped it off a clonemaker's development work, no? No hooks in the OS to make any use of coprocessors. IIfx and Q900/950 were targeted A/UX platforms, IIfx for general use and the latter for ANS application. So embedding 6502s in their VLSI goodies makes sense.

 

Doubled tangent there aside, I think it's rather delicious that a slice of Pi would be used to offload some processes running under MacOS onto Linux. Maybe not directly, but the fickle finger of fate does provide a bit of amusement now and then.

Link to post
Share on other sites

In theory at least if you cooked up NuBus transceiver logic that could be interfaced to a "Pi-esque" SBC that could push packets at some reasonable fraction of the practical, real-world speed of the bus (which until someone can show me evidence to the contrary I'm going to peg at maxing out at around 10-15MB/s despite the *theoretical* capacity of a Nubus burst transfer being around 40MB/s) then so far as I know there's no reason you couldn't have one Nubus card pretend to be, I don't know, say a video card, a network card, and a storage device all in one slot. The only limitation I can think of is if there's some restriction on how much I/O space you can have on a card that's also a framebuffer, or if the Slot Manager software framework has some limitation relating to drivers for multifunctional devices or... whatever.

As to "offloading processes" how possible that might be depends on whether you're talking about writing your own software or making something that magically accelerates existing software. In principle I could see, I dunno, writing replacements for things like the SANE numerical libraries that can offload some FPU functions to the much more powerful FPU you have hanging off the card, but how practical that is and how much gain you could possibly get out of it would probably depend a lot on how much massaging it would take for the native Mac data formats to take advantage of the alien hardware. You could certainly present new device functionality like, say, SSL/web accelerator APIs or a web media format processor you can throw JPEGS and MPEGS at to convert into bitstreams you can handle more easily on a feeble CPU, but obviously this requires all new software.

In any case, this is totally outside the "make a video card" scope defined in this thread.

Link to post
Share on other sites

Yep, it is that. But keep the Futura II SX VidCard/NIC Daughtercard design example in mind as the main thread of development progresses. I wish bbraun would come back in from out in the world; he explained some of how that worked under a single SlotID to me long ago. You've also got the example of the DuoDock II logic board, which is a multifunction Slot E PDS card with an independently implemented NuBus sidecar along for the ride.

Link to post
Share on other sites
1 minute ago, Trash80toHP_Mini said:

he explained some of how that worked under a single SlotID to me long ago

From what I remember from scanning the "Designing..." document I didn't really think there was any explicit barrier to piling as many functions as you want onto one card (within reason), so I'm not really surprised that a video+Ethernet card was a thing that already existed. Since I didn't know for sure, however, I threw the "RTFM" warning in as a precaution.

 

As an aside, I've noticed that the phrase "standard QuickDraw acceleration" is something that appears nowhere outside of LEM video card profiles. A jaded part of me is starting to question how many of the cards so described are actually "accelerated" in the same sense the 8*24GC is.

Link to post
Share on other sites
