
Development of Nubus graphics card outputting to HDMI?

Trash80toHP_Mini

NIGHT STALKER
... seriously, I'm having a really hard time understanding why you think it would be relevant here.
Thought it might be worth mentioning. It's not; got it, and I'll zip it. I remembered it wasn't real color, BTW, and not real grayscale either; dithering in B&W, maybe? Just thought rudimentary QuickDraw primitives might be useful on a simple card for something a bit more than B&W.

Before I learn to crawl I gotta learn to roll around on the floor for a while. :mellow:

 

dlv

Active member
I really think people are worrying too hard about NuBus.
Yeah, I find myself going back to a NuBus design:

  • Easier than I expected to understand (so far). E.g., after a few hours tonight, I understand, at least at a high level, how single transaction reads and writes work. 
  • Maybe avoid having to re-read the Motorola MC68030 manual for now. It's not clear to me whether a read of the NuBus/NuBus90 specification is necessary yet.
  • Electrically slightly less challenging (10MHz, fewer I/O pins, fewer things to solder when the time comes)
  • Will work in my Quadra 950 :)  No need to source a Mac with a 68030 PDS (although I think I want an IIfx some day) 
I've also been musing about the possibilities an FPGA with an ARM SoC opens, just for fun. It might be possible to do some really neat stuff like running WebKit on the ARM cores and rendering to a framebuffer that's overlaid onto space within the window of a custom browser interface. A hardware web accelerator.

Some thoughts on current discussion:

  • I wouldn't rule out the use of a fast microcontroller either, but I believe an FPGA really is the right tool here.
  • I'm not worried about color depth or CLUTs appreciably affecting complexity. I'm oversimplifying, but I suspect that B&W output will be just as challenging as true color, and once we figure out any sort of output, other modes will follow.
 

dlv

Active member
I'm not worried about color depth or CLUTs appreciably affecting complexity. I'm oversimplifying, but I suspect that B&W output will be just as challenging as true color, and once we figure out any sort of output, other modes will follow.
I wanted to elaborate on my thinking: I think true color is actually a natural choice to target at the beginning because it's what the HDMI TMDS encoder takes as input. Supporting B&W would require extra FPGA logic to stream 1-bit pixels and convert them into 24-bit pixels to feed into the encoder. That is probably not hard, but I wouldn't know where to begin with a design. Similarly, supporting a CLUT would need extra FPGA logic to perform the lookup and stream 24-bit pixels. I'm certain these are solved problems, but let's follow the path of least resistance.

The unknown for me is how MacOS interfaces with CLUTs but that mystery can wait.

 

bigmessowires

Well-known member
Dlv, I like this idea, and let me know if I can do anything to help. 

I'm not really worried about NuBus; I only mentioned using an MCU because I think it could be much simpler than using an FPGA, assuming it's fast enough to work at all. At least in my experience, it's much easier to use a microcontroller than an FPGA. The development experience for an FPGA is more challenging, the tools are complex and confusing, it's difficult to wrap your head around what the Verilog or VHDL is doing, and the simplest operations seem to require 10x more effort and have 100x more bugs. But an FPGA would certainly give you the most flexibility and speed, and, as Gorgonops mentioned, the MCU might not be fast enough to meet tight timing requirements anyway. You'd have to look at where the critical paths in a bus transaction are, the ones that can't be covered with wait states or the like, and determine whether the MCU could keep up.

An FPGA with integrated ARM would be neat, if you have a plan for the ARM. There are also simple open source soft-core CPUs that you can implement in the FPGA logic, if your FPGA has enough resources.

Supporting only true color sounds reasonable if you want to keep things as simple as possible. I think adding a CLUT is only a very small increase in complexity, though. Verilog pseudocode for a 640x480 frame buffer with 307200 pixels:

True color:

reg [23:0] frameBuffer [0:307199];
reg [18:0] pixelAddr;
reg [23:0] pixelOut;   // "output" is a reserved word in Verilog
always @(posedge clk) begin
    pixelOut <= frameBuffer[pixelAddr];
    if (pixelAddr == 307199)
        pixelAddr <= 0;
    else
        pixelAddr <= pixelAddr + 1;
end




Indexed color:

reg [7:0] frameBuffer [0:307199];
reg [23:0] clut [0:255];
reg [18:0] pixelAddr;
reg [23:0] pixelOut;
always @(posedge clk) begin
    pixelOut <= clut[frameBuffer[pixelAddr]];
    if (pixelAddr == 307199)
        pixelAddr <= 0;
    else
        pixelAddr <= pixelAddr + 1;
end
 

bigmessowires

Well-known member
If you haven't seen it, here's a great discussion of how a basic NuBus video card works: http://dec8.info/Apple/How_the_Macintosh_II_Nubus_Works_Byte88.pdf If you can get your hands on the "NuBus Monitor" software they describe, it would be a very helpful debugging tool.

I was thinking that another advantage of using a CLUT is that it requires less RAM. Up to 800 x 600 x 8-bit will fit into 512KB, while anything in true color requires more than 1 MB RAM. That may not seem important, but I think it could be. The SRAM chips that I've seen top out at 512KB, so you could build a CLUT-based card using SRAM instead of SDRAM, making the memory interface much simpler. I know SDRAM interfaces are a "solved problem" and there are wizards and things to build them for you, but it always hurts my head just thinking about it. In contrast, an SRAM interface would be dirt simple to build and understand and verify that it's correct.

Here's an updated version of my earlier code, modified to assume the framebuffer is in external SRAM instead of internal FPGA memory. The CLUT is small and can stay in internal FPGA memory. Hopefully it's clear:

// SRAM interface
reg [18:0] address;
wire [7:0] data;
// internal state
reg [23:0] clut [0:255];
reg [18:0] pixelCount;
// output stream to video converter
reg [23:0] pixelOut;
always @(posedge clk) begin
    // get a byte from the framebuffer in SRAM;
    // the pixel count is the SRAM address
    address <= pixelCount;
    // do a CLUT lookup for the framebuffer byte that was
    // fetched on the previous clock cycle
    pixelOut <= clut[data];
    if (pixelCount == 307199)
        pixelCount <= 0;
    else
        pixelCount <= pixelCount + 1;
end


Additional minor advantages of the smaller memory requirement are fewer address bits to decode (so fewer I/O pins needed), and the ability to fit the whole address space of the card into the 1 MB slot space of the Mac's 24-bit memory map.
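As a rough sketch of what that decode might look like (a sketch only; the signal names and the register/framebuffer split are illustrative assumptions, not from the thread):

```
// Sketch: minimal NuBus standard-slot decode (illustrative names).
// A card in slot s responds to 32-bit addresses $Fs00 0000-$FsFF FFFF;
// the Mac's 24-bit mode maps 1 MB of that range to $s0 0000-$sF FFFF.
wire [31:0] ad;        // NuBus address, captured at transaction start
wire [3:0]  slot_id;   // geographic slot ID from the /IDx pins
wire slot_hit = (ad[31:28] == 4'hF) && (ad[27:24] == slot_id);
// Within the 1 MB window, something must split framebuffer space from
// control registers; reserving the top 64 KB for registers is purely
// an assumption for illustration.
wire reg_space = slot_hit && (ad[23:16] == 8'hFF);
wire fb_space  = slot_hit && (ad[23:16] != 8'hFF);
```

With 512KB of framebuffer, only ad[18:0] need reach the SRAM, which is where the pin savings come from.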

 

Gorgonops

Moderator
Staff member
I've also been musing about the possibilities an FPGA with an ARM SoC opens, just for fun. It might be possible to do some really neat stuff like running WebKit on the ARM cores and rendering to a framebuffer that's overlaid onto space within the window of a custom browser interface. A hardware web accelerator.
Sadly that's probably easier to pull off than the "obvious" use for an embedded ARM SoC, which would be to do Quickdraw acceleration with it, IE, essentially clone something like the 8*24GC. There's an article floating around explaining in detail how that card works, and suddenly you're in a whole new world compared to the "here's an array of memory, point the CPU at it" approach.

(Accelerated quickdraw introduced a ton of new concepts like off-framebuffer rendering, etc.)

 

Trash80toHP_Mini

NIGHT STALKER
I think true color is actually a natural choice to target at the beginning because it's what the HDMI TMDS encoder takes as input. Supporting B&W would require extra FPGA logic to stream 1-bit pixels and convert them into 24-bit pixels to feed into the encoder. That is probably not hard but I wouldn't know where to begin with a design.
+1 for true color. Such would be the heart of the matter for any high end NuBus VidCard spec. and that's exactly what we're talking about here when you get right down to it. NuBus cards with CLUT for low color, low resolution (relatively) output would only have been used for gaming in Macs of the II-IIcx era. NuBus gaming abruptly ended when internal video was introduced with the IIci.

Only the IIcx lacks a spare slot for a Toby or High Performance card from Apple, which remain inexpensive on eBay. The added cost of a used MultiSync LCD is reasonable and far more attractive a proposition for any Mac collection, remaining useful in a primary workstation right up through the spanned-screen notebook setup from which I'm making this post and its eventual replacement. Said display is a better solution for gaming across the full spectrum of internal video Macs, NuBus AND PDS.

The IIfx would be the only flyer in the NuBus group and should wait until such time as a 68030 PDS card might be targeted and achieved in a quest for 040 PDS. IIfx remains a non-standard, outer ring flyer even in that group and need never be targeted.

 

Trash80toHP_Mini

NIGHT STALKER
If you haven't seen it, here's a great discussion of how a basic NuBus video card works: http://dec8.info/Apple/How_the_Macintosh_II_Nubus_Works_Byte88.pdf If you can get your hands on the "NuBus Monitor" software they describe, it would be a very helpful debugging tool.
It's several pages back, so I'll re-post my list of reference materials:

How the Macintosh II Nubus Works - Second 1988 Mac Special Edition, BYTE - software developed in their NuBus card build might be available?

Developing For The Macintosh NuBus-CERN-CM-P00062891 - its list of reference materials is invaluable. I've been searching documents on the list. Those easily found online crossed off:

Inside Macintosh

Volume I

Volume II

Volume III

Volume IV - listed as having relevant information

Volume V - listed as indispensable and I've not found it yet

X-Ref

IEEE 1196-1987 - Standard for a Simple 32-Bit Backplane Bus: NuBus - withdrawn in 2000 so this may be difficult to come by - https://standards.ieee.org/standard/1196-1987.html

Nubus Designer's Workbook for the Mac II from Eclipse Technology, Inc - commercial publication; https://isbnsearch.org/ was no help, it probably doesn't have one. Need a used-book source?

That's what I've put together so far. The gang at Mac OS9 Lives has posted an excellent listing of the Developer CD Series: ADC Developer CD series (1991-2002) ISTR the first one being important to this project for whatever reason, but I lost the reference for why. Possibly error corrections, hopefully with additional NuBus development information:

Developer Helper Volume 1: Phil and Dave’s excellent CD

_______________________________________________________________________________________________________

I have the wood pulp based media version of Technical Introduction to the Macintosh Family, gone over it and not found any reason to add it to the list.

 

trag

Well-known member
Here is a summary of my thinking, back when I was looking at this type of project.

The FPGA development board I was looking at (something in the Spartan family) had a DDR2 chip and a rudimentary VGA out port on board. The VGA port was actually something like a ridiculously crude (resistor ladder?) D-to-A converter, but serviceable.

Code for displaying images through the VGA output for the development system was available.   Code for using the DDR2 chip was available.

My thought was that the starting point should be, as others (Gorgonops) have noted above, to choose a segment of the DDR2 to use as a frame buffer. Point the VGA display code for the development board at that region of memory.

Then the two tricky parts remaining, are:

1)   Provide a declaration ROM that will tell the host system this is a video card.

2)   Provide an interface so that reads and writes on the NuBus (or PDS) interface come from and go to the segment of DDR2 being used as a frame buffer.

You can work up to these things in small steps.   I would start with:

1)  Get the VGA display code working.   Display a solid color or other test images.

2)  Learn to write to and read from the DDR2 using potted code.

3)  Learn how to point the source for the VGA display at the DDR2 chip.

4)  Change the DDR2 contents and watch to see the VGA display change.

5)  Interface the FPGA development system with the Macintosh electrically.
6)  Get the FPGA interface to the Macintosh bus working.   Try some simple write to/read from experiments from Macintosh to FPGA.  Perhaps something as simple as a program on the Macintosh that pokes an address in the FPGA/expansion card address space and lights up an LED on development board.

7)   Get the declaration ROM recognized/working.

8)  Set up the declaration ROM to point at the segment of DDR2 chosen to be the frame buffer.
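The smoke test in step 6 could be as small as this sketch (names are illustrative; the write strobe and slot decode are assumed to exist elsewhere in the design):

```
// Sketch: latch an LED on when the Mac writes anywhere in the card's
// slot space. nubus_write and slot_hit come from the bus interface
// logic (assumed, not shown here).
reg led = 1'b0;
always @(posedge clk) begin
    if (nubus_write && slot_hit)
        led <= 1'b1;   // any poke from the Mac lights the LED
end
```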

 

trag

Well-known member
Ultimately, for VGA output, I was looking at something in this family for a, geeze, I'm having vocabulary failure today, the thingamajigger that acts as the D-to-A converter and changes a stream of pixel data into video out data.

View attachment ths8135.pdf

View attachment ths8200.pdf

View attachment THS8133.pdf

View attachment ths8134b.pdf

The THS8135 is less than $10 at Digikey.

But to start with, using the VGA output provided on the/a development system simplifies matters tremendously.  Leave all the CLUT stuff for later, unless the Macintosh requires it and won't work without it.

 

Gorgonops

Moderator
Staff member
+1 for true color. Such would be the heart of the matter for any high end NuBus VidCard spec. and that's exactly what we're talking about here when you get right down to it...
A couple comments:

#1: If you look at the advertising flyers for Apple's *own products* (I looked up a few when I was checking to see what grayscale depth things like the Portrait card support) Apple actually lists as a feature in some of them that the card can let you use fewer colors for greater performance. I *really* think you might need to take a step back and think about how fast these machines actually are. The Quadra 950 went to 24 bit color at 832x624 with PDS-speed VRAM and even at the time I don't think anyone would have described that mode as "fast".

The counter argument is, of course, that cards like SuperMac's wares that went up to 1600x1200 did exist, and in fact existed over Nubus, but they *were* accelerated.

#2: Handwaving games is great and all, but the fact is you *will* be sacrificing software compatibility if your main display is locked only into True Color mode, and while it may be all right for some people to say "oh, well for that stuff I keep around this old Multisync I connect to this other card/motherboard video" that's not a super helpful suggestion for a lot of other use cases.

#3: Seriously, a CLUT isn't hard and I wish I hadn't mentioned it. The sample driver code in the manual *does* lay out the boilerplate for what functions you need to support for handling the Quickdraw transactions that write values to the CLUT, the part that was missing was the code for actually writing updates onto the Toby hardware, which strictly speaking isn't important unless the goal is to make a register-level compatible hardware clone of the Toby. The reason I bemoaned that is at least when I skimmed it the first time it was unclear to me exactly where the hardware registers (including those to set the CLUT table, but a lot of the others too) are mapped in slot space and, possibly importantly how the division between the "RAM space" and "Control Space" is handled when both need to be crammed into a 1MB slice while running in 24 bit mode. This is a thing you're going to have to at least minimally figure out even if you have a card lacking a CLUT. And I'm sure the information is there somewhere, I just didn't really grok it the first time.

But, *shrug*, whatever.

Here's an updated version of my earlier code, modified to assume the framebuffer is in external SRAM instead of internal FPGA memory. The CLUT is small and can stay in internal FPGA memory.
The dev board that's been bandied around for this has 32MB of (DDR?) SDRAM with a 16 bit bus width on it, the assumption so far has been that the framebuffer will live in that. (Which of course is going to necessitate a read/write buffer for the Mac to be able to reach it, but that shouldn't be a huge deal.) But, yeah, the CLUT should definitely live internally. It only needs to hold 256 24 bit words of memory, IE, less than 1K, so it shouldn't be a big deal.

I wanted to elaborate my thinking: I think true color is actually a natural choice to target at the beginning because it's what the HDMI TMDS encoder takes as input.
One question about that: does the TMDS encoder operate on "a word at a time", or is there some kind of streaming function with it? IE, are there primitives hard-coded into the FPGA board's hardware design that accelerate grabbing bytes straight off the DRAM memory? Unless something like that is in play then I don't see the problem with inserting a CLUT; as BMoW's pseudocode shows, you can basically think of the CLUT as if it were a 256 pixel long framebuffer, with the pixel value the output circuitry reads from the "clutbuffer" determined by using the data value fetched from the actual framebuffer as the address.

Even if there is some kind of "streaming" where "streaming" is a FIFO of some size on the FPGA that still shouldn't be a problem, you can load that FIFO with the results of the indirection described above, right?

 

bigmessowires

Well-known member
The challenge with SDRAM is that typical SDRAM controller logic presents a transaction interface, and doesn't have any fixed or guaranteed address-to-data timing. It's a black box with variable timing depending on what other memory transactions are in progress. That's no good for a streaming framebuffer where you need to be constantly reading out pixel data at some fixed rate, while also interleaving reads and writes from the CPU. To accomplish that, you may need to jettison the canned or wizard-created SDRAM controller and write your own from scratch, which is a decent-sized project all by itself. This was my downfall in my attempts to build a DIY graphics card, years ago. In contrast, doing it with SRAM is trivial. Some FPGA dev boards do have SRAM, such as the Altera DE1 that I used to build Plus Too. But SDRAM is certainly fine too if you can solve the interface challenges easily enough.

 

dlv

Active member
At least in my experience, it's much easier to use a microcontroller than an FPGA. The development experience for an FPGA is more challenging, the tools are complex and confusing, it's difficult to wrap your head around what the Verilog or VHDL is doing, and the simplest operations seem to require 10x more effort and have 100x more bugs.
Yeah. Well, a big part of what I want to get out of this is to learn about FPGAs and FPGA logic design.

An FPGA with integrated ARM would be neat, if you have a plan for the ARM. There are also simple open source soft-core CPUs that you can implement in the FPGA logic, if your FPGA has enough resources.
Any plans with ARM would be way down the line. Perhaps soft cores might be worth looking into for QuickDraw acceleration later?

(Accelerated quickdraw introduced a ton of new concepts like off-framebuffer rendering, etc.)
Where might one read more about that?

you could build a CLUT-based card using SRAM instead of SDRAM, making the memory interface much simpler.
Yes. It might be worth exploring if SDRAM becomes intractable.

Here's an updated version of my earlier code, modified to assume the framebuffer is in external SRAM instead of internal FPGA memory. The CLUT is small and can stay in internal FPGA memory. Hopefully it's clear:
It is, thanks. That's roughly what I expected for a CLUT. I don't think B&W support would be hard either? I imagine (naively) you'd probably want to read a byte (for example) every eight pixel clock cycles, and on every pixel clock cycle perform some bit masking to determine the state of the current 1-bit pixel, then output the appropriate 24-bit pixel.
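A minimal sketch of that masking idea (names like nextFbByte are illustrative assumptions; classic Mac 1-bit video treats 1 as black):

```
// Sketch: fetch one framebuffer byte every 8 pixel clocks, select one
// bit per clock, and expand it to a 24-bit pixel for the encoder.
reg [7:0]  fbByte;     // byte fetched from the framebuffer
reg [2:0]  bitIndex;   // which of the 8 pixels in the byte
reg [23:0] pixelOut;
always @(posedge pixClk) begin
    // MSB is the leftmost pixel; 1 = black, 0 = white on a Mac
    pixelOut <= fbByte[7 - bitIndex] ? 24'h000000 : 24'hFFFFFF;
    bitIndex <= bitIndex + 1;
    if (bitIndex == 7)
        fbByte <= nextFbByte;   // next byte, fetched during prior clocks
end
```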

You can work up to these things in small steps.   I would start with:
That is more or less the same progression I see, except HDMI and SDRAM instead of VGA and DDR2, respectively.

One question about that: does the TMDS encoder operate on "a word at a time", or is there some kind of streaming function with it? IE, are there primitives hard-coded into the FPGA board's hardware design that accelerate grabbing bytes straight off the DRAM memory? Unless something like that is in play then I don't see the problem with inserting a CLUT; as BMoW's pseudocode shows, you can basically think of the CLUT as if it were a 256 pixel long framebuffer, with the pixel value the output circuitry reads from the "clutbuffer" determined by using the data value fetched from the actual framebuffer as the address.
I used "streaming" kind of loosely, sorry. There is no primitive (some sort of DMA?) for grabbing bytes from memory and pumping them to the TMDS encoder that I am aware of. I agree CLUT is not a big deal, at least in theory. I was simply lacking imagination at 4am last night for how it might be done in an FPGA but BMoW's pseudocode totally makes sense.

Ultimately, for VGA output, I was looking at something in this family for a, geeze, I'm having vocabulary failure today, the thing a majigger that acts as the D to A converter and changes a stream of pixel data into video out data.
Yeah, I briefly looked at the equivalent ADV7123 DAC, since that's what's used on the optional daughter board (which I did not purchase) of the QMTECH FPGA dev board I have. QMTECH ships an example design to drive that IC. The link I shared earlier about hacking 1080p @ 60Hz output from a Spartan 6 appeared to imply an HDMI TX IC would be able to work around the serialization bandwidth limitations of that FPGA, so I also looked at the TFP410 and ADV7513.

---

BTW thanks again for all the input and discussion. I'm definitely going to need all the help I can get to have any hope of realizing this project.

 

Gorgonops

Moderator
Staff member
It is, thanks. That's roughly what I expected for a CLUT. I don't think B&W support would be hard either. I imagine you'd probably want to read a byte (for example) every eight pixel clock cycles, and on every pixel clock cycle perform some bit masking to determine the state of the current 1-bit pixel, then output a 24-bit pixel.
The way the real hardware did it back in the day was generally to feed your byte (or word) size memory read into a parallel-load shift register and simply clock out a bit on every pixel clock. With your HDMI output I imagine you could just effectively do the same thing, IE, you read 8 or 16 bits (depending on the width of your video buffer), stuff it into a shift register, and as you clock it out you effectively just multiply the 0 or 1 you get into the appropriate bit mask that'll generate an all white or all black pixel.

(Or, if you want to be stupid, design it so you can choose what color is used for lit pixels so you can emulate a green or amber monitor. That'd be a totally useful feature.)
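A sketch of that parallel-load shift register, with the tinted-mono option thrown in (signal names are illustrative assumptions):

```
// Sketch: load a framebuffer byte, shift one bit out per pixel clock,
// and map it through programmable foreground/background colors.
reg [7:0]  shifter;
reg [2:0]  count;
reg [23:0] fgColor = 24'h000000;   // lit pixel: black on a Mac, but
reg [23:0] bgColor = 24'hFFFFFF;   // could be green or amber instead
reg [23:0] pixelOut;
always @(posedge pixClk) begin
    pixelOut <= shifter[7] ? fgColor : bgColor;   // MSB = leftmost pixel
    if (count == 7) begin
        shifter <= nextFbByte;              // parallel load the next byte
        count   <= 0;
    end else begin
        shifter <= {shifter[6:0], 1'b0};    // shift left one bit
        count   <= count + 1;
    end
end
```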

For 2 and 4 bit indexed modes you'll sort of deal with a similar problem, IE, each byte read will have two or four pixels in it. My suggestion might be to load the pixel data read from  RAM into a register that implements a "sliding window" that moves in appropriate size chunks down the register and only supplies the bits in the window as the address for the CLUT. IE, in an 8-bit mode if you have a pixel that looks like this:

10010011

Then you simply read the 147th position in the CLUT for the output, but if you have the same value representing two 4 bit pixels on the first read you mask it like this:

0011

and the color is item #3 in the CLUT, and the second read:

1001

directs you to color #9. Strictly speaking B&W could be a special case of this; the 1 bit selects from a 1-bit CLUT load where 0 is black and 1 is white.
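That nibble windowing might look something like this (a sketch with illustrative names; low nibble first, matching the example above, and 2-bit mode is analogous with a 2-bit window):

```
// Sketch: "sliding window" CLUT indexing for 4-bit indexed color.
reg [7:0]  fbByte;    // two 4-bit pixels per framebuffer byte
reg        phase;     // which nibble is current (0 = low nibble)
reg [23:0] clut [0:255];
reg [23:0] pixelOut;
wire [3:0] nibble = phase ? fbByte[7:4] : fbByte[3:0];
always @(posedge pixClk) begin
    pixelOut <= clut[{4'b0000, nibble}];   // zero-extend to index the CLUT
    phase    <= ~phase;
    if (phase)
        fbByte <= nextFbByte;  // both nibbles consumed; load the next byte
end
```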

I don't know if this would strictly be kosher, we'd have to see if Quickdraw ever *reads* the CLUT off a card instead of just loading it, but here's an optimization: For every color depth less than 8-bit have the driver load the 256 item CLUT with 16, 64 or 128 identical copies of the palette. Then when the hardware is processing each pixel it simply masks all the bits belonging to all the other pixels in the read buffer with zeros, thereby automatically applying only the data that matters to its own positional copy of the palette. Then you don't even have to move/shift/"right justify" the pixels you're using, the same CLUT hardware is used unchanged for all indexed pixel depths.

Hopefully what I just said there makes some rational sense...
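One way to make that mask-only indexing land on the right color from either pixel position is to have the driver load the CLUT so that an index with the other pixel's bits zeroed still resolves correctly; for 4-bit pixels, a load of clut[i] = palette[(i >> 4) | (i & 15)] gives clut[{p,0000}] == clut[{0000,p}] == palette[p]. A sketch under that assumption (names illustrative):

```
// Sketch: mask-only CLUT indexing for 4-bit pixels; no shifting or
// right-justifying of the pixel data is needed. Assumes the driver
// loaded clut[i] = palette[(i >> 4) | (i & 15)].
reg [7:0]  fbByte;             // two 4-bit pixels
reg        phase;              // 0 = low nibble, 1 = high nibble
reg [23:0] clut [0:255];
reg [23:0] pixelOut;
wire [7:0] index = phase ? (fbByte & 8'hF0)   // other pixel's bits zeroed
                         : (fbByte & 8'h0F);
always @(posedge pixClk) begin
    pixelOut <= clut[index];
    phase    <= ~phase;
end
```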

 

Gorgonops

Moderator
Staff member
Where might one read more about that?
I was going to point you to a pared down summary that's floating around the web but I think I found the thing it's based on. Which still isn't anywhere near enough to write your own Quickdraw implementation, but it at least mentions some of the conceptual bugaboos.

https://vintageapple.org/develop/pdf/develop-03_9007_July_1990.pdf

Yes. And a shift register is a much better/cleaner idea.
For just black and white the shift register is fine. But if you need to suck up doing indexed colors anyway I think the sliding window thing is the way some real video cards do it.

Note another way this "sliding window" could be accomplished is essentially making the buffer you load the data you've read into *itself* kind of a shift register. IE, if we go back to that 16 color example, on the first pixel clock you read the first 4 bits/slash/mask the rest of the bits with zeros. Before the next read you simply rotate the data 4 bits over, thereby assigning the data you've already rendered to the ash heap of history, so the next read gets the next four bits, etc. (And how far you shift depends on your color depth.) Or to put it simply, treat your read buffer as a stack and POP each pixel's worth of data off, however many you need, in parallel.

Okay, this is actually better, because now you don't need to load multiple copies of your palette in the CLUT, you can always just use the lowest slots. So...

[attached image]
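A sketch of that pop-off-the-stack read buffer (illustrative names; the periodic reload from the framebuffer is omitted):

```
// Sketch: treat the read buffer as a stack and pop one pixel's worth
// of bits per clock; only the lowest CLUT entries are ever used, so
// no palette copies are needed.
parameter DEPTH = 4;            // bits per pixel: 1, 2, 4 or 8
reg [7:0]  shifter;             // byte read from the framebuffer
reg [23:0] clut [0:255];
reg [23:0] pixelOut;
wire [7:0] index = shifter & ((1 << DEPTH) - 1);  // mask current pixel
always @(posedge pixClk) begin
    pixelOut <= clut[index];
    shifter  <= shifter >> DEPTH;   // discard the rendered pixel
    // (reload shifter from the next framebuffer byte every 8/DEPTH
    // clocks; that housekeeping is omitted here)
end
```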


Offhand observation: Technically speaking you still might want the CLUT in the path even in a 1-bit mode. I do *not* know if Macs support this, but I think both EGA and VGA (and presumably their descendants for a while) and many other systems with similar hardware technically let you change the palette for 1-bit mode. Granted that's a totally weird edge case.

 

Trash80toHP_Mini

NIGHT STALKER
I don't know if this would strictly be kosher, we'd have to see if Quickdraw ever *reads* the CLUT off a card instead of just loading it
The only remotely possible case I can think of for that being true might be the Macintosh Display Card 8•24 GC (which you mentioned as well documented) doing block transfers of frame buffer/CLUT data from, and then back to, a second NuBus card for which it is providing QuickDraw acceleration? I can't imagine it's not hijacking the frame buffer input and CLUT at the CPU QuickDraw output level for that card, distributing the accelerated buffer data as block transfers to the unaccelerated card's frame buffer/CLUT it would be supporting. Thought it might be worth mentioning if someone digs up that documentation. I'll go quiet again now. :mellow:

 

Gorgonops

Moderator
Staff member
Which still isn't anywhere near enough to write your own Quickdraw implementation, but it at least mentions some of the conceptual bugaboos.

https://vintageapple.org/develop/pdf/develop-03_9007_July_1990.pdf


Doh. Obviously I meant to include that the part I'm referring to starts on page 332.

There are references to other pieces of documentation in Inside Macintosh, technotes, etc. I still imagine it'd be a very tall order to implement a QuickDraw accelerator without some source code to reference. I am curious how other vendors that made accelerated cards pulled it off; the fact that they did does imply there's some kind of reference implementation? Question might be whether it required an NDA to see it.

 

Gorgonops

Moderator
Staff member
On the subjects of CLUTs, while reading the LC III technote for other reasons I happened to notice this paragraph and thought it might be relevant:

Color modes up to 8 bits per pixel use a 256 x 24-bit CLUT which is provided by an enhanced version of the custom chip used in the LC and LC II. Monochrome modes also use the CLUT but drive the red, green, and blue inputs with the same signal.
So that's one mystery solved, that at least on Mac video cards that support both color and mono monitors grayscale modes use the CLUT to map shades of gray, not some alternate "direct DAC" path.

 

Trash80toHP_Mini

NIGHT STALKER
Interesting. Likely irrelevant entirely, but some of the mono VidCards for TPD and the like need to be set up to drive RG&B at the same time to display correctly. Otherwise you get a dim black on blue display IIRC. Your reference seems to jibe with that?

 