
What's the difference between the various Radius 24 cards?

Cory5412

Daring Pioneer of the Future
Staff member
Hmmmm, interesting. I wonder why they did that with the AV card, unless it was a bus bifurcation thing like you can do with PCI Express today? I didn't think any of Apple's buses at the time had any real concept of "lanes" per se.

Worth doing a set of MacBenches, regardless. My 6100 is the DOS version and I don't have either the HPV or the AV card. The AV card would probably match my use cases a bit better, although it seems like people tend to prefer the HPV for the performance and for the better color depths at higher resolutions.

I'll have to see if I can open my own MacBench files, probably tonight when I get back to my own system. I think the reference 6100 uses the AV card, so in theory it'll do better at graphics than mine does.


EDIT: Looks like I was incorrect about the HPV having any specific acceleration: http://kan.org/6100/graphics.html
 

Unknown_K

Well-known member
I wonder if the older video benchmarks give weird results if you have the 1MB cache module in place (if most of the benchmark graphics fit in cache). Both use the same speed VRAM but have different RAMDACs and different video controllers.
 

Nathan_A

Well-known member
I wonder if the older video benchmarks give weird results if you have the 1MB cache module in place (if most of the benchmark graphics fit in cache). Both use the same speed VRAM but have different RAMDACs and different video controllers.
I've wondered this too, but it would really surprise me if the cache controller allowed the RAM dedicated to the framebuffer to be cached.
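For a rough sense of scale (my own back-of-the-envelope numbers, not anything from MacBench): only the smaller, shallower frame buffers would even fit in 1 MB, leaving aside that framebuffer VRAM is normally uncacheable anyway.

```c
/* Frame buffer size = width * height * bits-per-pixel / 8.
 * Quick check of whether common test settings could even fit in a 1 MB L2 cache
 * (leaving aside that framebuffer VRAM is normally marked non-cacheable). */
#include <stdio.h>

static long fb_kbytes(long w, long h, long bpp)
{
    return w * h * bpp / 8 / 1024;
}

int main(void)
{
    printf("640x480  @  8-bit: %4ld KB\n", fb_kbytes(640, 480, 8));    /*  300 KB */
    printf("832x624  @ 16-bit: %4ld KB\n", fb_kbytes(832, 624, 16));   /* ~1014 KB */
    printf("1152x870 @  8-bit: %4ld KB\n", fb_kbytes(1152, 870, 8));   /*  ~978 KB */
    printf("1152x870 @ 32-bit: %4ld KB\n", fb_kbytes(1152, 870, 32));  /* ~3915 KB */
    return 0;
}
```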
 

olePigeon

Well-known member
I'm disappointed the E-Machines LX did so relatively poorly. The optional ethernet card makes it a really good card for IIsi or similarly NuBus-constrained machines.
 

Trash80toHP_Mini

NIGHT STALKER
None of those benchmarks really add up to much hard information. Running at the legacy 13" resolution defeats the purpose of testing any high-resolution card, which was optimized for the job it was bought to do. Nobody bought one of those monster CRT/VidCard combos to run at 13" resolution, so there was no point in doing much, if any, optimization for tiny displays.

Even testing at 1024x768 probably won't do much good for high-end cards optimized for TPD thru 1600x1200.

It would be interesting to test cards and onboard video systems at native 16", 19", 21" and MonsterSized displays for which the cards and Mac systems are optimized.
 

Nathan_A

Well-known member
I'll be testing on a CRT that will do 1920x1200 or something like that.

The problem is that legitimately high-res testing is challenging because most of the things being tested can't really do it; they all fall apart at those high resolutions. The overall low power of the whole system, plus numerous bottlenecks, made just getting that many pixels written anywhere, from anything, a challenge. I'll just do whatever MacBench 4.0 thought of as "high-resolution publishing" at the time. From what I can tell, the effective pixel-rate limitations of any of these machines, devices, interfaces, etc. wouldn't have made optimizing for 1600x1200@24-bit any more worthwhile than optimizing for 1152x870@24-bit.
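As a quick sanity check on that (back-of-the-envelope arithmetic of my own, not anything MacBench reports, and assuming 32 bits of storage per "24-bit" pixel, as direct-color QuickDraw framebuffers typically used):

```c
/* Rough per-frame data volume at the two 24-bit settings discussed above.
 * Assumes 32 bits of storage per "24-bit" pixel (xRGB). */
#include <stdio.h>

int main(void)
{
    long px_1152 = 1152L * 870L;   /* 1,002,240 pixels */
    long px_1600 = 1600L * 1200L;  /* 1,920,000 pixels */

    printf("1152x870  @ 32bpp: %.1f MB/frame\n", px_1152 * 4.0 / (1024.0 * 1024.0)); /* ~3.8 MB */
    printf("1600x1200 @ 32bpp: %.1f MB/frame\n", px_1600 * 4.0 / (1024.0 * 1024.0)); /* ~7.3 MB */
    printf("ratio: %.2fx\n", (double)px_1600 / (double)px_1152);                     /* ~1.92x */
    return 0;
}
```

Roughly a 2x difference in raw data per frame, so on hardware that's already pixel-rate bound, the higher resolution mostly just means proportionally slower, not a qualitatively different optimization target.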

My prior pile of tests using my 8100/100 seemed to indicate that the specific operational optimizations of different cards were mostly about bit depth and less about resolution (at least once the performance cliff in pixel-pushing capacity has been reached). Which kind of makes sense to me as a matter of data representation being mapped to hardware functions and resources, when word alignment, transistor/gate count, etc. were likely the dominating factors.
 

trag

Well-known member
I'm disappointed the E-Machines LX did so relatively poorly. The optional ethernet card makes it a really good card for IIsi or similarly NuBus-constrained machines.

Remember there was an E-Machines Futura LX card and an E-Machines Futura II LX card. I think the ones with the ethernet port are the II series.

According to E-Machines, the II series is considerably faster than the non-II series. I've never had both lines on hand to test, though.

This causes confusion.

E-Machines:
Futura vs. Futura II

Radius:
PrecisionColor vs. PrecisionColor Pro
 

olePigeon

Well-known member
@trag Aaaaaah. I have the Futura II LX. I could give it a whirl and benchmark it. Well, as soon as my IIci is up and running again. I hope to get it going this weekend.
 

Cory5412

Daring Pioneer of the Future
Staff member
None of those benchmarks really add up to much hard information. Running at the legacy 13" resolution defeats the purpose of testing any high-resolution card, which was optimized for the job it was bought to do. Nobody bought one of those monster CRT/VidCard combos to run at 13" resolution, so there was no point in doing much, if any, optimization for tiny displays.

Even testing at 1024x768 probably won't do much good for high-end cards optimized for TPD thru 1600x1200.

So, what the MacBench publishing graphics tests are measuring is how fast a machine and its graphics card can execute QuickDraw calls. You can test that speed factor at any resolution.

Measuring that at "low" resolutions like 640x480 or 1152x870 is still useful. The other MacBench tests will produce numbers at any resolution, but they're more about specific tasks/functions like fill rates, tessellation, etc.
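For anyone curious what "executing QuickDraw calls" looks like concretely, here's a minimal, hypothetical timing loop in classic Toolbox C. This is not MacBench's actual code, just the general shape of that kind of test; it assumes a pre-Carbon build environment (e.g. CodeWarrior with SIOUX supplying printf), and the window-setup details vary by compiler.

```c
/* Hypothetical sketch: time a batch of QuickDraw fill/invert calls.
 * Not MacBench code; just illustrates "how fast can QuickDraw calls execute."
 * Assumes classic (pre-Carbon) Universal Interfaces and a console for printf. */
#include <QuickDraw.h>
#include <Fonts.h>
#include <Windows.h>
#include <Events.h>
#include <stdio.h>

int main(void)
{
    Rect bounds, box;
    WindowPtr win;
    long i, start, elapsed;

    /* Standard Toolbox initialization. */
    InitGraf(&qd.thePort);
    InitFonts();
    InitWindows();

    SetRect(&bounds, 0, 40, 640, 480);
    win = NewCWindow(NULL, &bounds, "\pQD timing", true, plainDBox,
                     (WindowPtr)-1L, false, 0);
    SetPort((GrafPtr)win);

    SetRect(&box, 10, 10, 200, 200);
    start = TickCount();                 /* ticks are 1/60 of a second */
    for (i = 0; i < 10000; i++) {
        PaintRect(&box);                 /* an accelerated card can intercept   */
        InvertRect(&box);                /* these; otherwise the CPU does them  */
    }
    elapsed = TickCount() - start;

    printf("20000 QuickDraw ops in %ld ticks (~%.1f ops/sec)\n",
           elapsed, elapsed ? 20000.0 * 60.0 / elapsed : 0.0);
    return 0;
}
```

Whether the card or the CPU ends up doing that work is exactly why the same card can score very differently in different machines.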

FWIW here, 640x480 was common for most day to day users into the mid-late '90s, and it was common for second-hand computer buyers not to have access to particularly large displays. (Although that got better into the early 2000s and mostly it was a matter of finding adapters, especially for people a bit older than I was who had slightly more discretionary income.) (For my part, there were high end Quadras for days at the local used computer shop, but the only Apple monitors you could get were 13/14-inchers.)

To the greater point: resolutions higher than 1152x870 didn't become "normal" until way later. The PCI PowerMacs are the first Apple graphics solution I'm aware of to support over 1152x870, and the Beige G3 in 1997 is the first Mac to support over 1280x1024 (up to 1600x1200 or 1920x1080 on the G3's Rage, if you put in the max VRAM).

I'll have to look at some 1993-1994 magazines for this but it would be interesting to see how important performance is at all in the context of "getting to 1600x1200". This sort of hearkens back to a discussion we had before where we found out 24-bit color in 1991 pretty much only existed at 640x480 and any higher resolution/larger displays were typically 256 colors or monochrome. I think jessenator mentioned in another thread that publishing workflows tended to have "the layout machine" and "the color work machine" and so you'd just have different computers for those different tasks.

The other thing w/re graphics benchmarks is that, to the extent possible, all solutions being compared should be run in the same machines with the same configurations and software loadouts. Especially for unaccelerated cards, the speed of the computer can have a huge impact, because QuickDraw will be rendered on the CPU. An 8100 with a G3/400 upgrade will probably win every graphics test at every resolution, even if the card involved is, like, a Toby. (I mean, on that card the resolution will probably be 640x480, but.)
 

trag

Well-known member
@trag Aaaaaah. I have the Futura II LX. I could give it a whirl and benchmark it. Well, as soon as my IIci is up and running again. I hope to get it going this weekend.

It's also possible that I'm misremembering and the Futura II performance is about the same, but on a 7" card instead of a 12" card. I.e. they got smaller.
 

Trash80toHP_Mini

NIGHT STALKER
And they got upgraded with that expansion slot for a DSP or NIC interface card. I have both flavors of NIC, one of each for my two cards, but they're limited in terms of what protocols they support?
 

Trash80toHP_Mini

NIGHT STALKER
This sort of hearkens back to a discussion we had before where we found out 24-bit color in 1991 pretty much only existed at 640x480 and any higher resolution/larger displays were typically 256 colors or monochrome. I think jessenator mentioned in another thread that publishing workflows tended to have "the layout machine" and "the color work machine" and so you'd just have different computers for those different tasks
The silver bullet for DTP in that time frame, for a multitasking setup in a small shop or for a single user, was the 13" beauty combined with a Grayscale TPD.

By '94 or so the 17" CRT was coming into its own as the standard in the business world.
 

olePigeon

Well-known member
@Trash80toHP_Mini I have the 10Base-T card and the DSP card. It supports MacTCP, anyway, and that's what I use. So I'm fine with that. :)

I'm wondering if it'd be possible to use both the network card and the DSP card with a riser. If placed in the right-most NuBus slot, it should be possible to physically fit the DSP over the network card. However, I somehow doubt the card would work in that configuration; otherwise they probably would've made the cards L-shaped to accommodate both.
 

Trash80toHP_Mini

NIGHT STALKER
Likely not gonna happen, but there's a chance it might. The NIC is implemented with the infamous "by further decoding" method which allows the VidCard and NIC Daughtercard to occupy a single NuBus Slot ID/interrupt.

If DSP operations were limited to processing on the card itself, without interaction with the CPU over NuBus, it might be possible. However, that's almost certainly not the case. The DSP supports CPU operations offloaded to it, in the manner of floating-point unit support, if I'm understanding things correctly?

The only chance of such a thing working would be if the CPU were tied up in an either/or situation, doing DSP or NIC functions exclusively at any given moment. That way there might be no interrupt or bus arbitration complications?

Then again you might go Neanderthal on the Kluge as I'm wont to do. 😬 Hack an A/B switching setup on your riser if both drivers can be active at any given time. 🤪

@trag does that architectural dilemma make any sense at all to you?
 

trag

Well-known member
And they got upgraded with that expansion slot for a DSP or NIC interface card. I have both flavors of NIC, one of each for my two cards, but they're limited in terms of what protocols they support?

Well, in my experience, if you have Open Transport loaded, the computer will freeze during boot, when the OT extensions load. It works fine with MacTCP, but not with OT.
 

Trash80toHP_Mini

NIGHT STALKER
THX for the clarification; figured as much from previous discussions.

What do you think about my theory that the NIC and DSP daughtercards might, improbably, manage to function alongside the video function at the same time? I've been thinking about it some more and realized that while the DSP is doing its work, the CPU might try to network. Then again, might the NuBus access be limited to one function or the other without I/O contention? Unless a card is bus mastering across the slots, as in card-to-card QuickDraw acceleration, even then the CPU can only connect to a single card at a time to fill its frame buffer?

Dunno, still in coffee deprivation mode. This might be considered off topic, but then again oP is asking this question and it's his thread. Then again, Radius acquired E-Machines along with SuperMac and others at some point, no?
 

Trash80toHP_Mini

NIGHT STALKER
@olePigeon Found an old thread that explored the differences between 24xp and 24xa:

 