30Video: New video board for SE/30 / IIsi / more?

Reasons.

Well-known member
Are you thinking about something like this ?

Nah, this would be even simpler--just taking the basically-AUI internal interface between the card and the I/O board and exposing it externally as actual AUI (or AAUI, in the silly version). I'm not sure how practical it'd be to implement, but it would have the advantage of making the I/O board simpler and cheaper to produce, instead of bundling an entire transceiver for the portion of users who might not need one. ZZJ's idea of using the same RJ-45 connector for both old and new cards is a much better way to save space, though.
 

joevt

Well-known member
I would be curious if anyone has thoughts on build systems, though. Right now I use a makefile to govern the main process, but I'm getting to the point where I need to run multiple sub-builds for card variants over time. Also, I hate touching the makefile; it drives me nuts.
Can't the makefile do sub-builds? Everything that can be built is an item in the makefile, and each item has dependencies on source files plus commands that use those source files to build the item. The items and source files can be any file type, and the commands can be anything.
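As a minimal sketch of that shape (every file name and flag here is invented, not from the real project):

```make
# One "item" per buildable thing: the ROM depends on objects,
# the objects depend on sources, and each recipe says how to
# build its item from its dependencies.
CC = m68k-elf-gcc
CFLAGS = -O2

rom.bin: start.o video.o
	$(CC) $(CFLAGS) -Wl,--oformat=binary -o $@ $^   # stand-in for the real link step

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

%.o: %.S
	$(CC) $(CFLAGS) -c -o $@ $<
```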
 

uyjulian

Well-known member
Wrote this in the for-sale thread but realized it made more sense here. A succession of silly Wednesday afternoon thoughts on constructing a universal VGA and Ethernet I/O board for the SE/30:
  1. The existing I/O board is just a passive connector, which I'm pretty sure is also the case with Bolle's Ethernet riser.
  2. Might make sense to just add AUI to the mix; you could have a backplate with VGA, RJ-45, and AUI connectors.
    • Connect things appropriately on the inside, add a transceiver on the outside. It's not neat but it would get the job done without active circuitry.
  3. Hmmmm, but the AUI connector is pretty big. Probably couldn't fit that on the I/O plate with the rest. If only there was a smaller connector, sorta a mini DisplayPort equivalent.
This is how I found myself reinventing AAUI, an objectively cursed idea. Those connectors are still available, though, and I do know enough KiCAD to be dangerous...
You could try looking at SFP connectors in place of AUI if you want something smaller.
 

zigzagjoe

Well-known member
Can't the makefile do sub-builds? Everything that can be built is an item in the makefile, and each item has dependencies on source files plus commands that use those source files to build the item. The items and source files can be any file type, and the commands can be anything.

The issue is more one of structure. I'm not an expert in Makefiles, so I could just be missing something. Currently, all three variants of the card use the same source with different flags to change identity and behavior, and building one complete ROM may require building two intermediate ROMs (again with different flags) to get a "complete" ROM.

I haven't found a way to change flags based on the currently active build, so each major variant needs its own rules to build objects into that variant's output directory with the appropriate flags applied... which doesn't seem to be something make really wants to do.

It's just not structured for long-term maintainability: new cards using the same controller will have closely related software, but each will need its own slight changes... I just can't wrap my head around a way to get make to do that.

1724334394450.png
 

Reasons.

Well-known member
Even less of an expert than you, but the QMK keyboard firmware sounds like it has a somewhat similar structure to what you want, albeit on a very large scale.
 

zigzagjoe

Well-known member
Even less of an expert than you, but the QMK keyboard firmware sounds like it has a somewhat similar structure to what you want, albeit on a very large scale.

Yeah. PlatformIO does something similar too... I just don't know the most effective way to go about it. My experience with Makefiles is like wrestling with a pig: you both get muddy, and the pig enjoys it.

Came up with the IO plate for the IIsi. It's a bit clever: it snaps into place with the VGA connector removed, but screwing in the VGA connector stiffens it up so it stays put even when yanking a VGA cable out.
1724341299534.jpeg1724341308872.jpeg
 

joevt

Well-known member
The issue is more one of structure. I'm not an expert in Makefiles, so I could just be missing something. Currently, all three variants of the card use the same source with different flags to change identity and behavior, and building one complete ROM may require building two intermediate ROMs (again with different flags) to get a "complete" ROM.

I haven't found a way to change flags based on the currently active build, so each major variant needs its own rules to build objects into that variant's output directory with the appropriate flags applied... which doesn't seem to be something make really wants to do.

It's just not structured for long-term maintainability: new cards using the same controller will have closely related software, but each will need its own slight changes... I just can't wrap my head around a way to get make to do that.
I think you need a separate item for each variant of the card. Anything that is different between variants needs a separate path so they don't interfere with each other (like how you made a separate objects folder for gs and vga .o files which are built from common/shared .c and .S files).

I don't see a link between FULLROM and the vga or gs .o files.

I'm not sure how FULLROM is built. It's VGAHALFROM except the second 32K is replaced with the first 32K of GSHALFROM when CARDTYPE is "GS" or the first 32K of VGAHALFROM when CARDTYPE is anything else?
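To make the separate-paths idea concrete: GNU make's target-specific variables let everything in a half ROM's dependency tree pick up its own flags, as long as each variant's objects live in their own directory. A sketch reusing the names from this thread, with the directory layout, flags, and FULLROM rule all being guesses:

```make
OBJS := start.o video.o

# Same shared sources, separate object directory per variant.
obj/vga/%.o: src/%.c
	@mkdir -p $(@D)
	$(CC) $(CFLAGS) -c -o $@ $<

obj/gs/%.o: src/%.c
	@mkdir -p $(@D)
	$(CC) $(CFLAGS) -c -o $@ $<

# Target-specific variables: everything VGAHALFROM depends on is
# compiled with VGA flags, and likewise for GS.
VGAHALFROM.bin: CFLAGS += -DCARDTYPE_VGA -Iinclude/vga
GSHALFROM.bin:  CFLAGS += -DCARDTYPE_GS  -Iinclude/gs

VGAHALFROM.bin: $(addprefix obj/vga/,$(OBJS))
	$(LINK_ROM)   # stand-in for the real link/post-process recipe
GSHALFROM.bin: $(addprefix obj/gs/,$(OBJS))
	$(LINK_ROM)

# Guess at FULLROM: two 32K halves concatenated, per the description above.
FULLROM.bin: VGAHALFROM.bin GSHALFROM.bin
	cat $^ > $@
```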

There's a makefile tutorial at:
 

zigzagjoe

Well-known member
I think you need a separate item for each variant of the card. Anything that is different between variants needs a separate path so they don't interfere with each other (like how you made a separate objects folder for gs and vga .o files which are built from common/shared .c and .S files).

I don't see a link between FULLROM and the vga or gs .o files.

I'm not sure how FULLROM is built. It's VGAHALFROM except the second 32K is replaced with the first 32K of GSHALFROM when CARDTYPE is "GS" or the first 32K of VGAHALFROM when CARDTYPE is anything else?

There's a makefile tutorial at:

Sorry, that was only a fraction of the makefile to show the duplicated rules. This is the complete one. https://pastebin.com/gkcTZ0Ez
As it's written, it builds a ROM for an Si card or a GS card depending on the variables set.

The GS card is something of a special case, as it's dual-personality, switching the active ROM image depending on whether the grayscale mode is active. All ROMs are based on the same source with different flags, sometimes with a different (extra) include directory for the major variants with different resources/headers.

So: each 'half' ROM build entails compiling a mix of assembler and C, then linking and some post-processing to create a binary file. Simplified, it works out to something like this.
  1. Grayscale
    1. build VGA 'half' ROM with grayscale-card type defines and VGA include directory
    2. build GS 'half' ROM with grayscale-card type defines and GS include directory
    3. post-build concat the binaries then build an updater
  2. VGA/SI
    1. build VGA 'half' ROM with SI-variant defines and VGA include directory
    2. post-build concat the VGA 'half' twice then build an updater
    3. (this one currently builds a redundant GS ROM which is discarded)
Future cards would follow the VGA model with a single build per card, at least. In an ideal world, from a build tooling perspective, a new card variant would just need to supply any extra flags for the build (ie. the extra include directory) and little else.
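Those two sequences could fall out as one top-level make target per card; a sketch with invented names, where make_updater stands in for whatever the real post-processing tooling is:

```make
# Grayscale card: two half ROMs built with different defines and
# include directories, concatenated, then wrapped in an updater.
gs_card.rom: vga_half_gs.bin gs_half_gs.bin
	cat $^ > $@
	./make_updater $@

# VGA/SI card: the single VGA half ROM doubled up, no GS build needed.
si_card.rom: vga_half_si.bin
	cat $< $< > $@
	./make_updater $@
```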

The common theme is one or more ROM builds with distinct flags, then post-processing of those builds. I suppose something like cmake's build directories might work better: each (sub) ROM build could be its own cmake build for the compile, with the post-processing done in the makefile. No great desire to learn cmake, but that may be the path of least resistance, as it would separate the nuts and bolts of building the 'half' ROM from the post-processing that makes a full ROM.
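Short of cmake, the same separation can be had with recursive make: one sub-makefile for "build a half ROM", invoked once per flavor with its build directory and extra flags, leaving only the post-processing in the top-level makefile (all names invented):

```make
# Top-level makefile: delegate each half-ROM build to the same
# sub-makefile, parameterized by output dir and extra flags.
halfroms:
	$(MAKE) -C rom BUILDDIR=build/vga EXTRA_CFLAGS='-DCARDTYPE_VGA -Iinclude/vga'
	$(MAKE) -C rom BUILDDIR=build/gs  EXTRA_CFLAGS='-DCARDTYPE_GS -Iinclude/gs'
```

A new card variant would then only need a new invocation line with its own flags.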
 

zigzagjoe

Well-known member
960x540 over analog VGA looks like a bust. I tried CVT, CVT-RB, and CVT-RB2 calculated timings with appropriate frequencies, and I can still only get one test LCD to sync it pixel-perfectly. The rest of my test monitors don't seem prepared to deal with 16:9 over analog and instead try to treat it as 1024x768 or 640x480.

However, I did find that a CVT-RB XGA signal at 54 Hz does seem to work just fine across the 4 monitors I have to hand to test with. I've posted up a beta that enables 1024x768@54Hz; if people find this works reliably, I may add it permanently. I don't expect this mode to work on CRT monitors - RB timings were intended for LCDs only.

Regarding the 30Video SI, it was found to function correctly on top of a Bolle stack as expected (in an SE/30). I don't think that I'll be able to come up with a resolution requiring the secondary crystal slot to be populated, so release is probably imminent. I'll be bringing some of these to VCF as well.
 

Reasons.

Well-known member
1024x768 is really cool! Glad you were able to get it working. How does 54Hz look?

Out of curiosity, did you test 960x540 with a scaler? I'd be interested to see if that can force it to work on a generic monitor. At the very least I've got a couple of GBS Control boards I could use to test it so long as populating the other crystal slot wouldn't blow something up. Understand if you'd rather not release something that doesn't work reliably without a second piece of hardware, though.
 

zigzagjoe

Well-known member
1024x768 is really cool! Glad you were able to get it working. How does 54Hz look?

Out of curiosity, did you test 960x540 with a scaler? I'd be interested to see if that can force it to work on a generic monitor. At the very least I've got a couple of GBS Control boards I could use to test it so long as populating the other crystal slot wouldn't blow something up. Understand if you'd rather not release something that doesn't work reliably without a second piece of hardware, though.

You can't notice the 54 Hz, since LCD pixels stay lit between refreshes, unlike a CRT's. So no flicker, or perceptible motion issues that I've seen. As long as the monitor detects that it's 1024x768, it ought to be fine. A secondary crystal would allow bringing that refresh rate up to where it ought to be (56 MHz required).
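To spell out the arithmetic: standard CVT-RB timing for 1024x768 uses a total raster of 1184x790 (assumed here), and the pixel clock is the total raster times the refresh rate:

```latex
f_{54} = 1184 \times 790 \times 54\,\mathrm{Hz} \approx 50.5\,\mathrm{MHz},
\qquad
f_{60} = 1184 \times 790 \times 60\,\mathrm{Hz} \approx 56.1\,\mathrm{MHz}
```

which is where the roughly 56 MHz figure for a proper 60 Hz refresh comes from.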

I don't have a scaler, so I hadn't tried that, but I had the same thought: they could likely be convinced to sync it. No idea how common a use case that would be. I could see having nonstandard resolutions hidden by default, with an extension to unlock them. Depending on how flexible scalers are, a second crystal might not even be needed.

With how things are structured currently, there has to be a resolution that explicitly codes for the switch to the second crystal slot, so just having something populated there wouldn't harm anything. However, the ROM has to have specific parameters for whatever is populated there, since there's no way to detect what it is. If I find enough interesting resolutions within the capability of the controller, I would instead put down an Si5351A frequency generator and bit-bang the I2C, so I'd not be at the mercy of crystals.
 

eharmon

Well-known member
You can't notice the 54 Hz, since LCD pixels stay lit between refreshes, unlike a CRT's. So no flicker, or perceptible motion issues that I've seen. As long as the monitor detects that it's 1024x768, it ought to be fine. A secondary crystal would allow bringing that refresh rate up to where it ought to be (56 MHz required).

I don't have a scaler, so I hadn't tried that, but I had the same thought: they could likely be convinced to sync it. No idea how common a use case that would be. I could see having nonstandard resolutions hidden by default, with an extension to unlock them. Depending on how flexible scalers are, a second crystal might not even be needed.

With how things are structured currently, there has to be a resolution that explicitly codes for the switch to the second crystal slot, so just having something populated there wouldn't harm anything. However, the ROM has to have specific parameters for whatever is populated there, since there's no way to detect what it is. If I find enough interesting resolutions within the capability of the controller, I would instead put down an Si5351A frequency generator and bit-bang the I2C, so I'd not be at the mercy of crystals.
I wonder whether modern monitors might actually be more tolerant of low refresh rates, thanks to 30 Hz HDMI.

My Extron is no miracle worker. It can pull in quite a few signals that are slightly out of spec, but that just increases the margins; with things way out of spec, it isn't too different from the monitors themselves.

It's from Epson's S1D1 line.
S1D13505? If so, that maximum pixel clock of 40 MHz is pretty limiting, yeah.
 

zigzagjoe

Well-known member
I wonder whether modern monitors might actually be more tolerant of low refresh rates, thanks to 30 Hz HDMI.

My Extron is no miracle worker. It can pull in quite a few signals that are slightly out of spec, but that just increases the margins; with things way out of spec, it isn't too different from the monitors themselves.


S1D13505? If so, that maximum pixel clock of 40 MHz is pretty limiting, yeah.

I think it's less about timing issues than about recognizing the aspect ratio. The monitors that didn't like the 16:9 960x540 seemed to be sampling at fewer than 960 pixels (yet were correctly stretching to the full width). Possibly if that same signal arrived over HDMI they'd be more receptive.

You might be right though. My little NCR monitors will drop into a "failsafe timing" mode where framerate is halved on signals they particularly don't like, suggesting that perhaps it's doing a trick to allow it to display something it otherwise couldn't.

Epson: Yep, that's the one. It can run a bit faster than that maximum, though - it seems to be very conservatively rated. In the case of XGA, asking for 66 MHz might be too much, and I don't have enough resolution for full blanking intervals anyway. Given that the CVT-R timings date to 2003 and were intended for LCDs, which don't require the long blanking intervals of CRTs, I'd assume most LCDs would be OK with them. Ironically, the only screens I've found that complain are my little NCR displays, which otherwise will sync just about anything, including compact Mac internal video.
 

zigzagjoe

Well-known member
Sneak peek of what's in the pipe :)

1729918479590.png1729918485628.png

This new board is intended to support all of the current functionality of the SI cards while adding hardware 16-bit color support and provision for connecting an LCD. 16-bit color works well at 640x480 but is definitely slow at 800x600 due to bandwidth limitations. Still fine for desktop use. (16-bit can't be done at 1024x768.)

I've also designed a test board to evaluate the possibility of an LC version, but I am not sure that there will be sufficient interest to justify finishing it.

1729918463527.jpeg1729918701809.jpeg

Software 16-bit support is still something I'm playing with for the existing 30Video cards, but it's a difficult problem to solve without using excessive CPU or updating so slowly as to be unusable.
 

Reasons.

Well-known member
Really cool! What's the difference here that allows for 16-bit color? Are there any other potential implications of the hardware changes?
 

zigzagjoe

Well-known member
The primary focus is exposing the LCD interface, with the intent of driving the 8.4" 640x480 LCD shown above; this is to support those who want to swap the CRT for an LCD. Only my first prototype cards had the interface present, but the connector/pinout wasn't optimized for noise (or anything, really), so it has some issues when on an extended cable, as seen above.

The secondary focus is 16-bit color: this design has a CPLD to perform byte-swapping of the data lines, which is required in 16-bit mode, as the Epson chip always wants little-endian data in memory despite supporting and respecting endianness on the bus interface. As a side effect, now that the CPLD has the data bus going through it, there's a possibility I could find a purpose in making it addressable (acting as a peripheral interface?). If nothing else, it certainly could be used to drive a programmable clock generator.

The existing cards require "fixing" the 16-bit mode in software. The approach I've been using requires shadowing VRAM and copying pixels to the actual display buffer while transforming them, which isn't a fast process to begin with, and it's further complicated by there being no way to be informed that a region of the screen requires updates. The best "reliable" approach I've found is something like what MiniVNC does, which is to hash each row of the screen buffer to determine whether it's changed. Except at that point it's about as expensive in CPU time to just swap the bytes anyway...
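(To be concrete about the transform: for each 16-bit pixel with high byte h and low byte l, the shadow copy just swaps the two bytes to produce the little-endian value the chip expects.)

```latex
p = 256\,h + l \quad\longmapsto\quad p' = 256\,l + h
```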

This is roughly what the software approach looks like.


The cursor trails can be eliminated, at least, but the slow refresh remains, as there's a direct trade-off between refresh speed and the amount of CPU time left for the OS and applications. Pretty awful, and a terrible user experience if someone clicks "Thousands" and expects it to work like the lower depth modes.
 

Reasons.

Well-known member
Oh yeah, that seems much better than the software approach. Great to have a good internal option for driving an LCD, too!

Not to hop on my hobby horse here, but does having the CPLD in the mix make it easier to implement Swivelview? I know the blocker with the original cards was the requirement to come up in 8bpp mode instead of 1bpp mode.
 

zigzagjoe

Well-known member
Oh yeah, that seems much better than the software approach. Great to have a good internal option for driving an LCD, too!

Not to hop on my hobby horse here, but does having the CPLD in the mix make it easier to implement Swivelview? I know the blocker with the original cards was the requirement to come up in 8bpp mode instead of 1bpp mode.

Indirectly, in that it could enable a programmable pixel clock generator if I can find a resolution that would benefit from it. Currently I can get by with the single 50.350 MHz crystal for the VGA, SVGA, and nonstandard XGA signals.

The 8-bit requirement, while not ideal, can be hacked past, as @GeekDot did for @Bolle's Greycake card. The issue with Swivelview is more that there's no widescreen resolution that I've been able to generate over VGA and get more than one monitor to sync well. 960x540 would be ideal, but I've only been able to get one of my monitors to sync it pixel-perfectly. The others sync it as another resolution and lose horizontal resolution or have other visual issues.

With the LCD interface exposed, there is a path to hooking up a TMDS encoder (DVI/HDMI), where monitors may be more inclined to accept oddball resolutions, but if I'm still restricted to integer fractions of 1080p for pixel-perfect scaling, that basically means 960x540 is still the only option, or possibly 640x360 (not a good option...).
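(The two candidates are just the 1080p raster divided by a whole number, which is what pixel-perfect integer scaling requires:)

```latex
\frac{1920}{2} \times \frac{1080}{2} = 960 \times 540,
\qquad
\frac{1920}{3} \times \frac{1080}{3} = 640 \times 360
```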

I suppose there's a possibility I could do something along the lines of what this mad bastard did and integrate a Realtek scaler... I just had to plug that project, as it's nuts. Ultimately, I think finding (or developing) a more capable controller would be needed to do this sensibly, and I definitely don't have the chops for FPGA design at this time.
 

zigzagjoe

Well-known member
Wrote this in the for-sale thread but realized it made more sense here. A succession of silly Wednesday afternoon thoughts on constructing a universal VGA and Ethernet I/O board for the SE/30:
  1. The existing I/O board is just a passive connector, which I'm pretty sure is also the case with Bolle's Ethernet riser.
  2. Might make sense to just add AUI to the mix; you could have a backplate with VGA, RJ-45, and AUI connectors.
    • Connect things appropriately on the inside, add a transceiver on the outside. It's not neat but it would get the job done without active circuitry.
  3. Hmmmm, but the AUI connector is pretty big. Probably couldn't fit that on the I/O plate with the rest. If only there was a smaller connector, sorta a mini DisplayPort equivalent.
This is how I found myself reinventing AAUI, an objectively cursed idea. Those connectors are still available, though, and I do know enough KiCAD to be dangerous...

I ended up designing something like this for selfish reasons. I wanted to be able to run a 30Video GS, an SEthernet/30 derivative design, and my Interware GrandVimage card all in a single machine, which requires DA-15 (Apple video/AUI), DE-15 VGA, and a passive Ethernet port.

And so the Combo Breaker was born. Once I have it tested by someone with Bolle's riser, I'll post the files up. Unfortunately, this won't work with vintage Ethernet cards, as they have AUI-to-Ethernet electronics on their IO boards. Might be possible to nudge the VGA and DA-15 connectors over to leave some room for a keystone coupler, though.

1730084785674.jpeg1730084653062.jpeg1730084872156.jpeg
 