
SE/30 GS: 640x480 RGB interlaced/downscaled to 512x342 GS frame sets?

Trash80toHP_Mini

Crazy? That's a given. I bounced some WAGs about this off apm in PM, but thought I'd risk public humiliation yet again. :p

fuzzy caffeine deprivation alleviation musings: fuzzy synopsis mode:

512 is exactly 80% of 640 pixels: that'd be the crux of the problem: horizontal frequency limitations of the A/B

342 is only about 71% of 480 pixels (384 would be the exact 80%): that'd be a significantly more tractable problem, since we've seen "scrunched/offset" HiRes images regularly.
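A quick sanity check on those ratios, nothing hardware-specific assumed, just the arithmetic:

```python
# Axis ratios between the Toby's 640x480 output and the SE/30's 512x342 raster.
src_w, src_h = 640, 480
dst_w, dst_h = 512, 342

print(dst_w / src_w)     # 0.8    -> horizontal: drop exactly 1 column in every 5
print(dst_h / src_h)     # 0.7125 -> vertical: ~71%, not a clean 4:5 fit
print(int(src_h * 0.8))  # 384    -> the line count a true 4:5 vertical fit would need
```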

Possible vehicle for image manipulation, from: SE/30 Grayscale Project: 640x480 @4 bit or 8bit

Rebuilding a Macintosh II "Toby" Display Card donor:
Why the Toby?

- it's as simple a frame buffer card as it gets

- 99% discrete or DIP components: easily harvested if Apple-proprietary, or replaced with SMT versions of stock CMOS/LS DIP ICs

- only remaining component is a socketed PGA ASIC!

- DeclROM on board

- drivers by Apple

- crystal cans and RAMDAC easily modified/replaced to better match GS converter input requirements

- multilayer PCBs and design tools now within reach of mere mortals

Procedures:

- schematic development of Toby to enable:

--- removal of NuBus MUX

--- replacement with 030 PDS buffer/line driver setup

--- replacement of space-hogging, low-density DIP memory with an easily sourced, higher-density SMT memory configuration

--- addition of SlotID header setup for SE/30's 030 PDS requirements

The PCB would be an enlarged DiiMO form factor with PDS passthru for a horizontal NIC and joethezombie's DayStar adapter GAL clone.

Possible implementation of a second passthru slot for remaining interrupt?

DIP components could be aligned vertically with the layout "backwards," placing the PGA ASIC's thruholes out of the way on the rear of the card and enabling easier routing of the passthru traces.

Dunno, what do you analog/CRT boffins think of this latest crazy notion? :D
The object here would be replacement of the Toby's VRAM with a high-speed MUX circuit, probably based on something from the RPi universe (faster/simpler: a dedicated CPU, or something that could be implemented in an FPGA) to collect the data feed to the frame buffer card, manipulate it on the fly, and cough it up in chunks on demand from the RAMDAC. Those chunks would be tweaked for the vertical and horizontal line offset requirements of the GS converter card and fed to the CRT within the tolerances of the A/B.
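A rough software model of that MUX idea, assuming a plain 8-bit grayscale frame buffer; the names here are placeholders of my own, not anything from the Toby, the RAMDAC, or any Pi/FPGA toolchain:

```python
import numpy as np

def mux_scanline(vram_row):
    """Stand-in for the MUX between VRAM and the RAMDAC: take a 640-pixel
    scanline from the frame buffer and hand back the 512-pixel chunk the
    RAMDAC would be fed, by passing 4 pixels and dropping every 5th."""
    keep = np.arange(vram_row.size) % 5 != 4   # keep columns 0-3 of each group of 5
    return vram_row[keep]

# Fake 640x480 8-bit GS frame standing in for the Toby's VRAM contents.
vram = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

line_out = mux_scanline(vram[0])
print(line_out.shape)   # (512,) -- one scanline's worth of data for a 512-column raster
```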

The Toby's output is at 67 Hz, so successive 512x384 interlaced frames that add up to 640x480 would need to be handled at 22.333 Hz by the A/B.
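The rate arithmetic behind that figure, for what it's worth:

```python
toby_refresh_hz = 67             # Toby's nominal vertical refresh
passes_per_image = 3             # three interlaced fields per complete picture
full_image_hz = toby_refresh_hz / passes_per_image
print(round(full_image_hz, 3))   # 22.333 -> rate at which complete images arrive
```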

Today's fuzzy notions:

The GS converter would need to:

- scale the fields to spread the pixels out across the interlaced format

- jitter them up/down/left/right to flip/flop/tween the successive fields

The hope is that interlacing 80% of the image data, combined with phosphor persistence, will sneak that interlacing past the visual cortex as an acceptable 640x480 8-bit GS image.
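A minimal sketch of the jitter step, assuming the converter works on one decimated field at a time; the field size is from the thread, but the offsets and cycle are made up purely for illustration:

```python
import numpy as np

# Per-field pixel offsets cycled across successive fields: the up/down/left/right
# "jitter" that tweens one field into the next (values are arbitrary examples).
JITTER_CYCLE = [(0, 0), (1, 0), (0, 1)]   # (rows, cols) shift per field

def jitter_field(field, field_index):
    """Shift a field by the offset assigned to its slot in the cycle, so
    successive fields land on slightly different spots of the raster."""
    dy, dx = JITTER_CYCLE[field_index % len(JITTER_CYCLE)]
    return np.roll(field, shift=(dy, dx), axis=(0, 1))

# Three fake 512x342 grayscale fields standing in for the decimated passes.
fields = [np.random.randint(0, 256, size=(342, 512), dtype=np.uint8) for _ in range(3)]
jittered = [jitter_field(f, i) for i, f in enumerate(fields)]
print(jittered[0].shape)   # (342, 512)
```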

<shrugs>

Dunno, caffeine level is getting too close to unfuzzing what I use for logic. Here's the original caffeine deprivation stream of incoherence:

TLDR:

I picked up a 10" color VGA CRT that's supposed to be 1024x768 capable. Never ran it, just started experimenting with hacking it into a compact case. A buddy reported it looked good at 800x600, but not higher; reading the docs, I figured that was probably due to the signal being interlaced.
Point is, would setting the SE/30's signal as interlaced 640x480 be of any help in lowering the flyback tweak? The persistence of B&W phosphors ought to make the interlaced image plenty good enough compared to a dot-pitched color CRT.

Just wondering/spitballing. Might my proposed Toby revision help by feeding an interlaced signal directly to the analog portion of the system? It's firing three guns for RGB; if we split the RGB gun signals out with delay circuits, the effective horizontal resolution requirement might be reduced by 1/3 for each pass, with the GS conversion board spreading the interlaced signals across what would be a digitally(?) decompressed version of the compressed horizontal image you're getting at high frequencies, at a more easily achievable 21.666Hz. Tweaking the discrete crystal cans for horizontal and vertical timings at the source ought to be a lot easier than torquing the A/B all out of spec?
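Purely to illustrate the 1/3-per-pass arithmetic: treat a single grayscale scanline as three interleaved sub-streams, one per gun, so any one pass only has to carry a third of the horizontal samples. Nothing here models the actual delay circuits; it's just the bookkeeping.

```python
import numpy as np

def split_line_across_guns(gs_line):
    """Deal a grayscale scanline out to three 'gun' streams, every third
    pixel to each, so a single pass carries ~1/3 of the horizontal data."""
    return [gs_line[g::3] for g in range(3)]   # streams for gun 0, 1, 2

line = np.random.randint(0, 256, size=640, dtype=np.uint8)  # stand-in 640-pixel GS scanline
guns = split_line_across_guns(line)
print([g.size for g in guns])                  # [214, 213, 213] -- roughly 1/3 each
```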

RGB is tied together to get a B&W white image onto a color monitor; I'm hazily "looking" at this as the inverse process. Can't help it, images of this crap just pop into my head while I'm drinking coffee and trying to wake up, be they practical, idiotic, or somewhere in between.

I hope this makes some kind of sense.

_____________________________________________________

I wasn't at all clear before about how the interlacing would work. The horizontal scan rate would remain the same; you'd just be chucking 20% of the data out of each pass to match the 512 pixel columns the A/B is capable of. You'd be splaying the sample out across the "full" visible area of your 640-pixel-column assemblage with width adjustment/timing tweaks/whatever, so that it would appear as a set of vertical bands on each pass.

It would take three cycles to throw up the "full" image: 80% each of R, G & B on each pass. Might want to use 80% of each successive frame's RGB data to keep it from flickering, "segueing" into each successive data field? You'd be tossing out 100% of both the red and green data to display 80% of the blue on any given pass, and vice versa, but building up a visual image from sparse data sets is exactly what the visual cortex does for a living. A lot depends on the phosphor's latent image retention.
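A toy model of that three-pass build-up, assuming an 8-bit RGB source, 80% of one channel's columns per pass, and a simple exponential decay standing in for phosphor persistence (the decay constant is invented, not measured from any tube):

```python
import numpy as np

KEEP = np.arange(640) % 5 != 4      # 512 of 640 columns survive each pass (80%)
PERSISTENCE = 0.6                   # invented decay factor for the latent image

def run_pass(screen, frame_rgb, pass_index):
    """One pass: decay the latent image, then paint 80% of one colour
    channel's columns (R on pass 0, G on pass 1, B on pass 2)."""
    channel = frame_rgb[:, :, pass_index % 3].astype(np.float32)
    painted = np.zeros_like(channel)
    painted[:, KEEP] = channel[:, KEEP]          # 80% of this channel, the other two dropped
    return screen * PERSISTENCE + painted * (1 - PERSISTENCE)

screen = np.zeros((480, 640), dtype=np.float32)  # what the phosphor "remembers"
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

for i in range(3):                               # three cycles -> one "full" image
    screen = run_pass(screen, frame, i)
print(screen.shape, screen.dtype)                # (480, 640) float32
```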

It might wind up looking pretty good on a small screen. You'd need to massage your GS board's input sampling on the RAMDAC side of things so the hacked VidCard delivers what the GS converter needs in order to display as much as it possibly can at the neck of the tube, within the set limits of the overall system.

Is that any clearer or more muddied?

______________________________________________________

I was hoping to divert the data stream on the RAM side heading INTO the DAC: maybe read the contents of the VRAM buffer as they're loaded, sample and select the digital data, and feed interlaced "misinformation" to the DAC, which would then be "distorted" by the GS converter into a virtual 640x480 image.

Some blurbs I've seen about panels with lower native resolutions "supporting" higher resolutions gave me that crazy notion. Strikes me as being like the downsampling they do in movie production, but done a third of a frame at a time on basically stock hardware.

Dunno, it's idle curiosity at this point, but sounds like something that could be easily done with the very fast, very cheap I/O oriented Pi generation of hardware.

It amounts to wedging high-speed digital signalling junk into one side, the other, or both sides of modern memory installed in the middle of an unsuspecting 29-year-old VidCard chipset that's found itself relocated to a Chinese PCB. }:)
 