
68K/early PPC with onboard AAUI 100mbps ethernet?

Byrd

Well-known member
Hi all,

I'm rebuilding my 840AV, which has a NuBus 10/100 Mbps Ethernet card in it - I'm considering removing this for a graphics card and using onboard Ethernet. Does anyone know if the 840AV or similar Macs of the era have onboard 100 Mbps Ethernet via their AAUI-15 ports? I've Googled around but can't find anything definitive that says whether any onboard AAUI-15 ports were 100 Mbps, or all were 10 Mbps.

Thanks

JB

 

omidimo

Well-known member
AAUI will be 10Mbps.

I made a similar call on my Quadra 700 as the internal drive/NuBus interface will never be fast enough to get the most out of the AsanteFAST card, so no real loss.

 

Byrd

Well-known member
Thanks omidimo :)

Removed the 10/100 card for something else.  I'm going to try a FWB Jackhammer in one slot, I think I've some suitable UW SCSI hard drives - big capacity (9, 18GB), but old - will benchmark and see if this improves disk performance.

 

Trash80toHP_Mini

NIGHT STALKER
That benchmark data will be interesting to see at long last.

Love my Quadra 700, but the reductions in expansion options almost outweigh its 68040 goodness. The IIci's third NuBus slot was traded away for AAUI instead of AAUI being added to the spec as an additional I/O port, and the PDS was moved over inline with one of the two remaining NuBus slots, with interference rendering PDS/NuBus an either/or choice.

It's too cute and fast to really call it a RoadApple, but the Quadra 700 design was severely compromised. If you install the PPC upgrade card, have you reduced the Quadra 700 to the lowly level of the other "one slot wonders" from Apple?

I'd love to see a Quadra700/PPC set up to go head to head with a IIci/PPC. Same baseline clock on the machines, but the JackHammer, AsanteFast 10/100 NIC and Thunder IV GX 1600 in the IIci. I've no love for PPC downgrade cards whatsoever, but the rest of the parts to that puzzle are on hand.

If I were to move any cards around I'd probably go with the IIci/50MHz PowerCache and those three cards over any possible iteration of a Quadra 700 setup. Go figure. ::)

 

ArmorAlley

Well-known member
Found the installation guide for the AsanteFast 10/100 NIC: https://www.prismnet.com/~trag/Asante/afnubusig.pdf

Someone asked me long ago if it would have worked in my IIfx; no system compatibility is listed beyond requiring a NuBus interface. It's (c)1995, so might it work in a Mac II?

edit: Anybody got a driver link handy?

< /lazy >
I've got a 10/100 Ethernet card in my IIfx and I'm fairly sure it's an Asanté card. I have a disk-image of the driver and I can put it up on the Macintosh Garden, if it isn't already there.

I'll have to do it tomorrow evening though 'cos I'm wrecked now.

 

Unknown_K

Well-known member
You can't get full speed because of the NuBus bus speed limits, but they are faster than plain old 10 Mb cards.

 

Trash80toHP_Mini

NIGHT STALKER
Methinks NuBus throughput is plenty fast enough in its original spec; NuBus 90 would far outstrip 100 megabits per second.

Yep: Wikipedia

The NuBus became a standard in 1987 as IEEE 1196. This version used a standard 96-pin three-row connector, running the system on a 10 MHz clock for a maximum burst throughput of 40 MB/s and average speeds of 10 to 20 MB/s.

What's 100 Mbps? Max throughput, without any overhead at all, can't exceed 12.5 MB/s.
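A quick back-of-the-envelope check of those figures (just a sketch using the specs quoted above, not measurements):

```python
# Rough comparison of NuBus bandwidth vs. 100 Mbps Ethernet.
# Figures come from the specs quoted above, not measurements.

nubus_burst_mb_s = 10e6 * 32 / 8 / 1e6   # 10 MHz clock x 32-bit bus = 40 MB/s burst
nubus_typical_mb_s = (10, 20)            # Wikipedia's quoted average range

fast_ethernet_mb_s = 100e6 / 8 / 1e6     # 100 Mbps with zero overhead = 12.5 MB/s

print(f"NuBus burst:       {nubus_burst_mb_s:.1f} MB/s")
print(f"NuBus typical:     {nubus_typical_mb_s[0]}-{nubus_typical_mb_s[1]} MB/s")
print(f"100 Mbps Ethernet: {fast_ethernet_mb_s:.1f} MB/s (upper bound, no overhead)")
```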

 

Unknown_K

Well-known member
Yeah, but if you're running a SCSI card and anything else, you share all that bandwidth. I haven't seen any SCSI card beat 15 MB/s by itself, and built-in SCSI does a fraction of that, so how are you going to feed a 100 Mb Ethernet card?

 

Cory5412

Daring Pioneer of the Future
Staff member
I've always wondered exactly how beneficial a 100-megabit NuBus Ethernet card would be. I have a blue-and-white G3, which is one of Apple's first systems with built-in 10/100 ethernet, and it can never download files from my web server at more than maybe a few hundred kilobits per second anyway. This is with the system literally on the same Ethernet switch as the server, which itself has gigabit networking and can throw a few hundred megabits per second to multiple other computers any day of the week.

Networking in Classic Mac OS was never particularly fast, even when it did get faster (going from MacTCP and classic AppleTalk to Open Transport, for example, and then the improvements Mac OS 8 made to networking).

In fact, I struggle to think that it would even be worthwhile on a particularly fast 8100 or 9150, like at 100-120MHz.

Does anyone using a card find that it's a meaningful improvement over the built-in Ethernet?

It's too cute and fast to really call it a RoadApple, but the Quadra 700 design was severely compromised.
Just idly, but if you install five cards in a six-slot computer, does that make it a road apple?

I also think that just as a style guideline, you can't, in the same breath, call the 700 a road apple(1) because it only has one slot left when you install an upgrade that uses the second slot, and also say you don't even like that upgrade anyway.

Other thing to consider: in what year was the graphics card you mentioned introduced? Hell, in what year were the SEIV and Jackhammer introduced? My bet is that you're buying the 700 thinking about wanting to maybe install some upgrade in the future, but not necessarily about brain-slugging the entire computer, the way people tend to do with older ultra-high end Macs.(2)

Ultimately though, any computer you buy is a compromise. Buying at the top tier of Apple's product line has always been extremely expensive. Going by Wikipedia's listings, a 700 was $5700 and a 900 was $7200; that $1500 difference was enough money to buy a 16" monitor, or upgrade one monitor to two, or install some more RAM or buy a second hard disk, or buy one or more programs you might want. Or perhaps you're making that particular choice because the cost of a 900 over a 700 is $1500 you simply don't have.

(1) Terminology I still hate and I still consider this phrase to be one of the worst things Low End Mac has ever contributed to the Macintosh fandom.

(2) A practice I have always kind of wondered about, it really only seems to have ever happened in the used market where, for example, there was a slim period of time where you could upgrade a 9500/9600 to midrange G4 standards for just a little less than a new G5 might have cost.

 

Unknown_K

Well-known member
Classic Mac OS sucks for Ethernet, to be honest. The fastest speeds on a 68k with 10 Mb Ethernet I have ever seen were on an AWS 95 running A/UX 3.1, and that machine pretty much maxed out the card.

A few hundred KBs on a B&W G3 is way too low, something was broken.

 

Trash80toHP_Mini

NIGHT STALKER
Yea, but if you running a SCSI card and anything else you share all that bandwidth.
Only one card's on the bus at any given time unless you're doing block transfers, which is where that 40 MB/s burst rate figure comes in. What's real-world throughput over Ethernet at 100 Mb/s? NuBus wins hands down.

 

Cory5412

Daring Pioneer of the Future
Staff member
something was broken
The only thing I can think of is perhaps the disk, but it was a fresh-ish installation (even if it was "old" it was basically bone stock) of 9.2.1, if I remember correctly. The transfer was using plain HTTP and IE5.

The disk is certainly old and if I start using that machine more, it'll be up for replacement.

I'll eventually pull that system out again and try once more.

unless you're doing block transfers which is where that 40MB/s burst rate figure comes in
Do NuBus 10/100 Ethernet cards usually do this? Which systems can take advantage? Is it just the 660/840 and x100 PowerMacs or is this available on, say, the Mac IIci and Quadra 650 as well? Can any system 6/7/8 software take advantage of that mode?

If an Ethernet card could do it, would every other card in the system need to be able to do it? "burst mode" strikes me as the kind of thing that, say, a '90s-era graphics card would probably never do.

It's not an advantage if no cards, very few systems, and no software can do it.

 

trag

Well-known member
NuBus only transfers 32 bits at a time and it does not have separate address and data busses.   So, e.g., to set up a transaction, first those combined address/data pins must be used to send an address, then after that they can be used to send or receive data.   So any transaction over NuBus is going to take several clock cycles, using the same pins for commands, address and data.   PCI has the same issue, but it runs at 33MHz, instead of 10MHz.

So you can't really say 10 MHz x 32 bits = 320 Mbps (40 MB/s) throughput for NuBus. The overhead of each transaction is substantial before you get to the point where data is moving along. jt's mention of 10-20 MB/s in the real world sounds about right. Actually, I would never expect to get 20 MB/s, as that would be only a 2:1 penalty, and there are probably very few situations where it only takes one cycle to set up the exchange of a word of data, on average.

Still, 10 MB/s ought to be able to do a pretty good job of supporting 100 Mbps. On the other hand, how much transaction processing for the Ethernet connection does the CPU have to do? A lot of the NuBus bandwidth might be taken up with the CPU not just taking data from the Ethernet card but passing packet information and responses back and forth.
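To make that overhead argument concrete, here's a rough sketch; the setup-cycle counts are made-up illustrations, not NuBus spec values:

```python
# Illustrative only: how per-word setup cycles eat into the theoretical
# 40 MB/s of NuBus. The cycle counts below are assumptions, not spec values.

CLOCK_HZ = 10e6     # NuBus clock
WORD_BYTES = 4      # 32-bit transfers

def effective_mb_s(setup_cycles_per_word):
    """Throughput if every 32-bit word costs some address/arbitration
    cycles plus one data cycle."""
    cycles_per_word = setup_cycles_per_word + 1
    words_per_sec = CLOCK_HZ / cycles_per_word
    return words_per_sec * WORD_BYTES / 1e6

for setup in (0, 1, 3, 7):
    print(f"{setup} setup cycles/word -> {effective_mb_s(setup):5.1f} MB/s")
# 0 -> 40.0 (the burst figure), 1 -> 20.0, 3 -> 10.0, 7 -> 5.0
```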

jt, from your description of the Q700, I think the Q650 is for you.    It solves most of the issues the Q700 has, except, IIRC, the built-in video on the Q700 is actually better.   It'd be nice if they had built the Q800/650 with the Q700's video system.

 

Trash80toHP_Mini

NIGHT STALKER
Do NuBus 10/100 Ethernet cards usually do this? Which systems can take advantage? Is it just the 660/840 and x100 PowerMacs or is this available on, say, the Mac IIci and Quadra 650 as well? Can any system 6/7/8 software take advantage of that mode?
What I said was that burst mode would be between bus master and slave cards without processor intervention. Apple's 1990 8/24 GC(?) did QuickDraw acceleration for unaccelerated Apple video cards on that back channel without CPU involvement as I understand it.

An ethernet card wouldn't be doing that AFAIK, but it doesn't need to, it's communicating with the CPU and with the network.

I got into it back in the day with a know-it-all "consultant" who insisted that Ethernet was faster than SCSI. He couldn't fathom that even at just 2 MB/s, SCSI had 60% higher throughput than 10 Mb/s over Ethernet. He wouldn't have known what a byte was if it bit him on the ass. That's why SCSI NICs are just fine for 10bT: not the fastest solution, but workable enough at those speeds.
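That arithmetic holds up; a one-line sanity check using just the figures from the anecdote:

```python
# SCSI at 2 MB/s vs. 10 Mbps Ethernet, using the figures from the story above.
scsi_mb_s = 2.0
ethernet_mb_s = 10e6 / 8 / 1e6                 # 1.25 MB/s, ignoring overhead
print(f"SCSI advantage: {scsi_mb_s / ethernet_mb_s - 1:.0%}")   # prints 60%
```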

@ trag: What percentage of Ethernet's 100 Mb/s does overhead (handshaking, error detection and correction) consume?

Only brought the Quadra 700 up to point out the bad design tradeoffs, putting AAUI on the board in lieu of a slot. A NuBus NIC at that Slot ID would have been far more flexible; adding another port for AAUI would have been fine.

Like I said I love my Q700 triplets, but only one is stock. The others are PowerPC upgraded sleepers, Quadra700/7100/G3's been done for ages and I've been slowly working on the Quadra700/7600/G4.

Put that board away for PowerCache testing on the new IIci board in that case. That's what made me think of how much better the IIci would be in the PPC upgrade role than a Quadra 700.

 

Cory5412

Daring Pioneer of the Future
Staff member
I don't know why the 700 ended up with only two slots, but as far as I can tell, there's no specific technical reason why. The 900 is otherwise nearly identical, technically, and it got five slots and Ethernet, and the 650/800 have three slots and Ethernet, so I don't think Apple would have made the decision to cut Ethernet in favor of a third NuBus slot, especially given I'm presuming part of their motivation here was that they figured Quadras would be part of high performance Ethernet networks.

Regarding Ethernet overhead: It's not more than 20%. I'd be surprised if it was much over 10%. Just yesterday I was talking to someone about networking speeds and they did a speedtest.net test on their machine (connected to a 10/100 switch) that got a bit over 90 megabits per second. Certain protocols like PPPoE (used by some DSL/fiber internet providers) add a little bit, and there's also an ATM overhead incurred on many ADSL connections. (VDSL2 and AT&T U-Verse ADSL2 is often EFM, which has a lower overhead.)

Protocols within TCP/IP have different overhead; for example, there may be different overhead or compute power required to transfer a file using, say, SSHFS or SFTP using compression, plain FTP, AppleTalk, SMB/CIFS, and HTTP. Plus within each of those, you get variations in client and server software, and of course there's overall network conditions (i.e. a transfer from your house to mine might go slower than either of our network connections is capable of).
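For a rough idea of the wire-level floor, here's a sketch of the frame overhead for full-size packets on plain Ethernet; it ignores small frames, retransmissions, ACK traffic, and anything protocol-specific, so real-world figures will be worse:

```python
# Wire-level overhead for a full-size TCP segment on Ethernet.
# Ignores small frames, retransmits, ACKs, and higher-level protocol chatter.

preamble_sfd = 8        # preamble + start-of-frame delimiter
mac_header   = 14       # dest MAC + src MAC + ethertype
fcs          = 4        # frame check sequence
ifg          = 12       # inter-frame gap, expressed in byte times
ip_tcp_hdrs  = 20 + 20  # IPv4 + TCP headers, no options

mtu        = 1500
payload    = mtu - ip_tcp_hdrs                            # 1460 bytes of user data
wire_bytes = preamble_sfd + mac_header + mtu + fcs + ifg  # 1538 bytes on the wire

efficiency = payload / wire_bytes
print(f"Efficiency: {efficiency:.1%}")                    # ~94.9%
print(f"Best case on 100 Mbps: ~{100 * efficiency:.0f} Mbps of user data")
```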

The other thing to consider here is that when I say I saw a PC do 90 megabits on Ethernet, it was HTTP using a modern web browser and on a very modern system relative to all of this.

Unless the thought process here is that with system 8.1 and TCP/IP based appleshare (say: served by Windows 2003 or Netatalk2 on Linux) volumes will be significantly faster than using a local SCSI disk (which, I would believe if I had literally ever been impressed by networking in Classic Mac OS, which I haven't been) then I personally struggle to see the value of a 10/100 card. Anything you'll get for a NuBus-having Mac off the Internet is going to be small enough that if you're downloading it to the system using 10-megabit Ethernet, the transfer times aren't likely to be meaningfully improved, and it seems like in general most people are still interested in putting in big-fast SCSI disks instead of setting up file servers, the one situation where it would make perfect sense to splurge on an advanced network card.

Ultimately, there's some loss to be had using SCSI networking, but on a Mac, because Classic Mac OS is generally bad at networking, and because anything without onboard Ethernet or slots is likely relatively slow anyway, you're not losing out on performance that would have been there with a better interconnect. (To the extent, by the by, that I actually think for most purposes one of the serial-based EtherWave products would probably be fine.)

All that said: I would very much love to see timed file transfer tests done on various network connections. Seeing actual numbers would be very helpful for people evaluating whether one of these would be worth one of their slots, especially on systems that have both a limited number of slots and onboard Ethernet that works fine already.
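If anyone does run such tests, timing a download of a known-size file and converting to Mbps is enough; here's a minimal sketch, where the URL is a placeholder and the assumption is that at least one end of the link can run Python 3 (on the vintage Mac itself, a stopwatch and the same arithmetic would do):

```python
# Minimal timed-download sketch; the URL is a placeholder, not a real server.
import time
import urllib.request

URL = "http://fileserver.local/testfile.bin"   # hypothetical test file

start = time.monotonic()
with urllib.request.urlopen(URL) as resp:
    size = len(resp.read())                    # read the whole body into memory
elapsed = time.monotonic() - start

print(f"{size} bytes in {elapsed:.1f} s")
print(f"= {size / elapsed / 1e6:.2f} MB/s "
      f"({size * 8 / elapsed / 1e6:.1f} Mbps effective)")
```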

 

Trash80toHP_Mini

NIGHT STALKER
I don't know why the 700 ended up with only two slots, but as far as I can tell, there's no specific technical reason why. The 900 is otherwise nearly identical, technically, and it got five slots and Ethernet, and the 650/800 have three slots and Ethernet, so I don't think Apple would have made the decision to cut Ethernet in favor of a third NuBus slot, especially given I'm presuming part of their motivation here was that they figured Quadras would be part of high performance Ethernet networks.


You put the answer to the question implied in your first sentence into your second sentence: the Q900 has FIVE NuBus Slots, the sixth port being taken up by the onboard NIC.

The Quadras 650/800/840AV subtracted only two ports from the Mac's total of six supported NuBus Slots; the Quadras 900/950 subtracted only one of the six for the logic board's NIC.

Like I said, adding a fourth port for Q700 Ethernet rather than replacing a NuBus slot with it would have been fine and dandy. [;)]

You raise an interesting point. ISTR something about NuBus support being limited to a pair of blocks of three? If that's the case, the architecture of the Q700 might be a bit more different from that of the Q900 than would be expected.

Can someone with any of these machines up, running and handy run TattleTech or Slot Info to see if Ethernet shows up as a PseudoSlot implementation and the NuBus Slot IDs supported? The hardware support breakdown could well be interesting.

 

NJRoadfan

Well-known member
Even though it's overkill, one should still see faster transfer speeds with a 100 Mbit card on 68k machines. I thought putting an EISA 10/100 card in my 486 was overkill, but transfers were noticeably faster than using an ISA 10 Mbit card. Part of it is likely because EISA has decent DMA bus mastering support, but transfers were above the 10 Mbit limit.

 