HD20 Schematics required

Gorgonops

Moderator
Staff member
If you've been reading the technobabble in this thread you'll see that the working hypothesis is that the IWM in the HD-20 has *nothing to do* with driving the daisy-chained drives. (And the traces of the schematic so far support that.)

That aside, so far it's sort of unclear exactly how drive selection works when there are multiple HD-20s hooked up. I don't recall reading how that's accomplished in any of the tech notes. (And in fact one of the things Dennis Nedry is attempting to work out is how the Mac determines whether an HD-20 is present at all. It's certainly possible that the mechanism for detecting them could scale to an arbitrary number of drives because individual drive addressing is based on something like a serial number. Which would make the theoretical limit either the size of some data structure or hardware electrical strength/noise/signal propagation limits.)

 

napabar

Well-known member
Interesting. Looks like I'll have to round up at least 3 HD20's and give it a whirl. :lol:

Hopefully the external 800K drive still works on the end of the chain, attached to the last HD20 the Mac will accept.

 

Mac128

Well-known member
Napabar, you'll have to do better than 3 and an external drive. I've personally run this configuration on a 512K Mac. The disclaimer here is that that's all I did with it. I didn't actively test it with 3 fully loaded disks and put the Mac through its paces or use file sharing, which would have been one of the chief configurations for this technology. But the volumes all mount. Sadly, while I've had more HD 20s in my possession, I've never had more than three working ones at any one time.

Again, with only 512K RAM, I would think there would be a practical limit to how big the attached volumes could be, based on RAM and the ability to keep track of multiple desktops and hierarchical subdirectories. Remember, when Apple officially limited the HD20 to 2, the biggest Mac that would support it only had 512K RAM, and the INIT ate up more of that just to use it. And while the Mac Plus could support up to 4MB, it was only sold with 1MB, and adding more was a very expensive option that most put off until applications really began to take advantage of more RAM.

Less than 9 months later the first SCSI drive hit the stage, so the main sales of the HD20 were for the 512Ke, for which it was the only real HD option. Not really worth the effort to test and support larger HD20 setups for the Plus, which had the superior SCSI option and a supported limit of up to 7 devices.

 

napabar

Well-known member
Remember, a Mac 128Ke can use the HD20 natively. My plan was to see how many HD20's a Mac 128Ke can mount. I figure no one else has ever stress tested this particular combination. :lol:

 

Gorgonops

Moderator
Staff member
Napabar, you'll have to do better than 3 and an external drive. I've personally run this configuration on a 512K Mac. The disclaimer here is that that's all I did with it. I didn't actively test it with 3 fully loaded disks and put the Mac through its paces or use file sharing, which would have been one of the chief configurations for this technology. But the volumes all mount...
That is interesting to know. It does deepen the mystery of how drive identification and selection is accomplished on the HD-20.

Just to clarify the "RAM usage" info, the way the HD-20 INIT works is it essentially replaces the 64k ROM's floppy driver with one functionally equivalent to the one in the 128k ROM. Thus stacking more drives onto a 512k with the INIT shouldn't, in principle, consume any more RAM per volume than is consumed on a 512ke, Plus, or any other later Mac. It's also worth noting that the "replace the floppy drive driver" method is used by emulators like vMac to mount disk images, so if you want to determine whether adding more volumes (or having larger volumes) significantly impacts free memory, it would be easy enough to test by mapping a half-dozen disk images to vMac and seeing what that does to RAM consumption.

Knowing that drive identification/selection isn't the issue then I'd guess the reason Apple specified the limit for daisy-chaining as "2" is electrical. The IWM probably can't reliably drive a chain longer than a few feet.

 

Bunsen

Admin-Witchfinder-General
IN THEORY
A device could be created that emulates an HD-20 AND an external floppy drive with an HD-20 startup disk image in it, allowing a Mac 512 to boot completely from
... a remotely stored HD-20 disk image on a LAN?

I realise that's pie in the sky, and far, far simpler to have a local disk image on $5 worth of flash.

 

ralphw

Member
These are just guesses, without looking at the drive.

The Rodime drives of my youth were either MFM or ST-506 devices. We used them because they only cost a few hundred dollars for a 40M drive, so we could risk putting them in moving vehicles (a robot truck that could drive itself - as long as you didn't need it to go very fast...)

A company called "Adaptec" (might sound familiar), made an ACB-4000 card, which sat on top of the drive and converted SCSI commands to MFM/RLL ST-506 signals.

Check http://bitsavers.org/pdf/adaptec/400003-00A_ACB4000UM_Oct85.pdf for details.

Not sure if this will help with the reverse engineering project, but the background information will be helpful.

I'm interested in other projects that can leverage the "Disk II" controller technology. It would be fun to build a network interface with one.

 

Mac128

Well-known member
ralphw, yes there were a number of SCSI conversion kits, even one by Personal Computer Peripherals Corp. actually endorsed by Apple, that was quite popular, based on the MacBottom interface (which some say the HD20 was based on). But I'm not sure that helps us. I posted a similar article about John Bass' early SCSI interface, and the consensus seems to be that once you go SCSI, there are so many easier ways to go about it, unlike in 1985. Without documentation, would such an interface actually be of any more assistance than testing the original HD20 interface directly?

More can be found out about the drive and interface here: viewtopic.php?f=7&t=1036&start=50&hilit=HD20+scsi

 

Dennis Nedry

Well-known member
Sorry to drag up an old topic, but I have been poking a bit at the Z8 ROM dump from my HD 20. This is a 2k ROM and only a very small part of it appears to be instructions when you look at the LST file I got when I disassembled it. They may very well have done something tricky that did not tip off the disassembler to some of the code, but I feel pretty optimistic about the job that IDA did. Part of the explanation for the vast amount of non-instruction data is the presence of 1-bit icons, visible when the dump is opened as a raw image in GraphicConverter. It is likely that chunks of Mac code and resources exist in the ROM that are sent over while booting.
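If anyone else wants to eyeball those bitmaps without GraphicConverter, here's a throwaway Python sketch. It assumes the icons are stored as classic Mac 32x32 1-bit images (4 bytes per row, 128 bytes total) -- the filename and offset below are purely hypothetical placeholders.

```python
# Render a 32x32 1-bit bitmap from a ROM dump as ASCII art.
# Classic Mac ICN#-style bitmaps: 32 rows * 4 bytes/row = 128 bytes.
# The offset and filename are guesses -- scan around until an icon appears.

def render_icon(data, offset=0):
    """Print a 32x32 1-bit bitmap starting at `offset` as ASCII art."""
    for row in range(32):
        chunk = data[offset + row * 4 : offset + row * 4 + 4]
        bits = "".join(f"{byte:08b}" for byte in chunk)
        print(bits.replace("1", "#").replace("0", "."))

# Example usage, with the dump loaded from disk:
# rom = open("hd20_z8_rom.bin", "rb").read()   # hypothetical filename
# render_icon(rom, offset=0x400)               # offset purely a guess
```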

This may not be TOO terribly hard to figure out, but I'm not used to this Z8 stuff. I'm stuck almost right away at address 0x0309. There is a return from subroutine here after the code appears to have jumped into the middle of that subroutine instead of calling it - and as far as I can tell, this would cause the stack size to go to -1 after the return, and who knows where it would try to return to. I also am having trouble understanding addressing and initial values when dealing with general purpose registers R0, R1, R2, etc. i.e. Almost immediately R0 and R1 are copied into RAM. I do not know the initial values of R0 or R1 (assumption is that they are pointers to P0 and P1, or both to P0), and also, when dealing with them, I am unsure if I am accessing the pointer itself or the data stored wherever the pointer points to.

Does anyone know of a simple Z8 simulator? I would like to load the ASM or machine code into it, step through the instructions one by one, and watch the effects on various ports and registers. This would be a great way for me to learn. I've seen these simulators for other processors, often written in Java, and they are quite nifty.

My goal in figuring this out is to develop and share an understanding for whatever protocols are being used by this drive so that it can eventually be reproduced. It honestly DOES NOT appear to be very complicated, and what could you expect from a 1980s Z8-based microcontroller? This is possible to figure out.

 

Gorgonops

Moderator
Staff member
It's interesting that you did indeed find those drive icons that were being discussed earlier in the thread, and they were apparently not already GCR encoded. Just a guess, but I'd be willing to bet that at least some of that "junk data" that you're having trouble interpreting is a GCR lookup table. This NetBSD source file:

http://nxr.netbsd.org/source/xref/src/sys/arch/mac68k/obio/iwm.s

Has GCR lookup tables apparently copied straight from the MacOS .Sony driver, perhaps you could see if they pattern match some of those mystery bytes?

No luck looking for a simple Z8 simulator myself; if you were using a Z80 or 6502 you'd have a million choices, of course. It looks like Zilog has some downloadable developer tools, and the ZDS II - Z8Encore! package says in the readme it includes an "instruction set simulator" along with the C compiler, assembler, disassembler, etc, etc. Is it the Zilog disassembler you're using? Granted I have no idea at all if the "Z8 Encore!" is different enough from an old-style Z8 to invalidate using the tools for it for investigation.

 

Dennis Nedry

Well-known member
I had access to IDA Pro for a short time, and that is where this disassembled LST file came from.

From what I understand, all GCR encoding/decoding in the Mac occurs in the IWM chip, and the respective decoding/encoding in the HD20 occurs in its very own IWM chip, so it seems like no GCR stuff is ever handled in software. That makes a lot of sense - why spend a load of overhead in software when it can be done in hardware with a crazy Steve Wozniak state machine chip that works in both directions.

For development purposes, we could use a real IWM chip from a busted Mac or HD20, and once the HD20 controller is running well, we could then move on to emulating the IWM chip too. Other than the IWM, it looks like there is only a flip-flop for managing the daisy chain port and some simple combinational logic between the Mac and the HD20 controller. This stuff can be reused directly for development and then emulated and integrated in at a later time.

From Zilog manual UM001604-0108: "General-Purpose Registers are undefined after the device is powered up. The registers keep their last value after any reset."
Aha! Wait... :?:

edit

R0, R1, R2, etc apparently are "working registers", not "general purpose registers", so the above quote does not apply.

This is weird because it looks like there is a "register pointer" separate from the working registers that can change, and then any operations on the working registers themselves utilize that pointer to read or write to respective parts of memory.

edit

There is an instruction dedicated to setting the register pointer. This is beginning to make sense now. This spec sheet is definitely not written like a Z8 for dummies.

edit

Hard to find, but page 34 states that the reset value of the register pointer (0xFD) is 0x00. Yay!

All registers 0x04 - 0xEF are undefined after reset.
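To keep the addressing straight in my head, here's a toy Python model of how I read the working-register scheme from the manual: the register file is 256 bytes, RP lives at 0xFD, and its upper nibble picks which 16-byte group R0-R15 alias onto. Treat this as a sketch of my understanding, not a reference implementation.

```python
# Toy model of classic Z8 working-register addressing (my reading of
# UM001604 -- a sketch, not a reference implementation).

REG_FILE_SIZE = 256
RP_ADDR = 0xFD

class Z8Registers:
    def __init__(self):
        self.file = [0x00] * REG_FILE_SIZE
        # Per the manual, RP resets to 0x00, so R0-R15 initially
        # alias register-file addresses 0x00-0x0F.
        self.file[RP_ADDR] = 0x00

    def srp(self, value):
        """SRP instruction: set the register pointer (upper nibble)."""
        self.file[RP_ADDR] = value & 0xF0

    def working_addr(self, n):
        """Rn resolves to (RP upper nibble) | n."""
        return (self.file[RP_ADDR] & 0xF0) | (n & 0x0F)

    def read_r(self, n):
        return self.file[self.working_addr(n)]

    def write_r(self, n, value):
        self.file[self.working_addr(n)] = value & 0xFF

regs = Z8Registers()
regs.write_r(0, 0xAB)          # with RP = 0x00, R0 is address 0x00
print(hex(regs.file[0x00]))    # -> 0xab
regs.srp(0x10)                 # point the working set at 0x10-0x1F
regs.write_r(0, 0xCD)          # now R0 is address 0x10
print(hex(regs.file[0x10]))    # -> 0xcd
print(hex(regs.file[0x00]))    # -> 0xab, untouched
```

So an "operation on R0" early in the boot code is really an operation on register-file address 0x00 until an SRP happens, which is why those initial copies into RAM depend on what (if anything) was left in those locations.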

 

Gorgonops

Moderator
Staff member
From what I understand, all GCR encoding/decoding in the Mac occurs in the IWM chip, and the respective decoding/encoding in the HD20 occurs in its very own IWM chip, so it seems like no GCR stuff is ever handled in software. That makes a lot of sense - why spend a load of overhead in software when it can be done in hardware with a crazy Steve Wozniak state machine chip that works in both directions.
No. I think there are already links in this thread, but if not, a quick Google will dissuade you from thinking that. The IWM is a *very simple* device, and it's almost entirely driven by software. GCR encoding/decoding is done by the CPU, completely, and is incredibly timing dependent, which is why the driver code uses a lookup table instead of calculating bytes directly.

Here's Big Mess O' Wires' brief commentary on how it works (via Google), and it includes useful links:

http://www.bigmessowires.com/2011/10/02/crazy-disk-encoding-schemes/

Also, I probably threw this in already, but this page is probably useful too. It's about the Apple IIgs, not the Mac, but it's the same IWM chip.

http://www.mac.linux-m68k.org/devel/iwm.php

The page glosses over one thing, the "Unfortunately, the data must undergo considerable preparation before writing and after reading." part. That's where it references the deep-dive Apple DOS documentation, and in *that* you'll find the description of how a track read from disk "in its raw, encoded form" gets converted back to normal.

(Dang it, another edit.)

Now, the one thing that I suppose *might* be possible is that when being used for this application the driver code on both the Mac and HD-20 side isn't using the standard GCR encoding tables at all but something different. The state machine in the IWM basically works as a crude data separator/"protocol enforcer", which according to its clocking counts bits and signals errors if it encounters invalid patterns of data, which for the most part would involve reading too many zeros in a row. The GCR encoding is a way to ensure that doesn't happen... if a given byte is all zeros, say, it's transformed in such a way that what's written to disk never has more than, I think, two zeros in a row. (I'd need to check the documentation.)

It's probably possible you could write a simpler driver which would, say, instead of using 6-to-8 GCR patterns, intersperse a timing bit every other bit, giving you essentially 4-to-8 GCR, which might be quicker to calculate on the fly instead of with a lookup table. (I wouldn't guarantee it, though.) Maybe the HD-20 does something like that. But at least according to my reading of the data sheet I *don't* think it can operate as just a straight UART. Trying to program that thing is probably above my pay grade, and I don't have the hardware to try to cross-connect two of them, so... the *simpler* thing to hope for is that it's using the documented protocol for data transfer.

 

Dennis Nedry

Well-known member
It has occurred to me that the apparent accessing of un-initialized SRAM registers at the beginning of the program in the HD-20's ROM may be a way of generating random numbers, which might be necessary for floppy bus access arbitration among multiple HD20s. Or some other reason that they needed a random number. The SRAM is external to the Z8, and the datasheet does not indicate that there is any particular power-on value of the RAM bits, which is expected.

Earlier, I was attempting to watch the serial communication on the floppy port when a Mac Plus was talking to the HD20. It makes a LOT more sense to watch on the Z8 side of the IWM chip, for several good reasons:

  • Data is in decoded form (No GCR to deal with)
  • Data from other disk drives and stuff is likely filtered out
  • We don't have to know how the PAL works yet, which shouldn't be too hard to crack, but bugs could definitely arise from it
  • You can easily tell if data is going from the HD to the Mac or Mac to HD
  • You can find data from the Mac and pick around in the HD20's ROM for something that handles that command to help figure out what it is.


So, with these things in mind, we can bite off a smaller chunk and focus on figuring out the protocol for how this thing communicates. When/if that comes along, we can detach the PAL/flip flop/IWM from the rest of the controller board and connect a microcontroller that emulates the protocol in place of the removed part of the board.

Once that gets pretty solid, we could begin to emulate the rest of the hardware as well, which is much simpler. The flip-flop is figured out unless I made a mistake. The PAL will take a little testing, it is D-latched and has 2 feedback outputs, but it can certainly be done. And then the IWM - it seems that the IWM has some experience being emulated, so we can save that for the end.

I really want to make this thing work, not only for old times' sake and usefulness, but for the hacking experience. I haven't done a project quite like this before and I think there's a lot to be learned from it. I want to learn about mass storage in Flash memory to/from a microcontroller. I've never had a very good reason to do that.

 

Gorgonops

Moderator
Staff member
Data is in decoded form (No GCR to deal with)
(BROKEN RECORD MODE)

Again, you're going to be very disappointed if you're expecting the bytes read off the IWM's data bus to be in "plain text". The IWM *reads and writes GCR encoded data*. Apple's "data sheet" for the IWM only alludes to this fact because it expects the user to be familiar with the Disk][ controller, but every place in that document where they use the phrase "8 bit nybble" they do *not* mean an unencoded 8-bit byte. They mean an 8 bit nybble, and they are not the same unless the IWM secretly supports a mode where it acts like a straight-up UART.

(/END BROKEN RECORD MODE)

Edits:

#1. For the record, in the book "Beneath Apple DOS", it's noted that the address marks on an Apple II disk are not GCR encoded; instead they use an "even-bits/odd-bits" 4-to-8 encoding, in which every other bit is a one. (Which would make it a valid read so far as the IWM is concerned.) It's certainly possible that communication with the HD-20 does something like that... although doing so would cut the effective data rate by about 40%, so unless it were vitally important that communication with the HD-20 trade performance for computational simplicity it seems like a poor deal. (And the bytes coming off the IWM *will still be encoded*.)
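For reference, that even-bits/odd-bits 4-to-8 scheme from Beneath Apple DOS is easy to sketch: each data byte becomes two disk bytes in which every other bit is forced to one, so the stream can never run afoul of the no-long-runs-of-zeros rule. Here's a small Python illustration of that encoding as the book describes it:

```python
# "4-and-4" (even-bits/odd-bits) encoding per Beneath Apple DOS:
# one data byte -> two disk bytes, each with every other bit set to 1,
# so the written stream can never have two adjacent zeros.

def encode_44(d):
    """One data byte -> two disk-legal bytes (odd bits first, then even)."""
    return (d >> 1) | 0xAA, d | 0xAA

def decode_44(x, y):
    """Invert encode_44: re-interleave the two halves."""
    return ((x << 1) | 1) & y & 0xFF

for d in range(256):
    x, y = encode_44(d)
    # Both encoded bytes keep the alternating ones (and thus the high bit).
    assert x & 0xAA == 0xAA and y & 0xAA == 0xAA
    assert decode_44(x, y) == d

print("4-and-4 round-trips for all 256 byte values")
```

You can see why it's computationally cheap (two ORs and a shift, no table) and also why it costs you: two disk bytes per data byte, versus four per three for 6-and-2 GCR.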

#2. Without scrolling back... did you figure out what that big unlabeled chip on the HD-20 board did? For all I know it's possible *that* is intelligent enough to do GCR encoding/decoding, although cooking an IC just for that would probably be overkill considering how bad Apple seemed to be at cooking custom ICs.

 

Dennis Nedry

Well-known member
I was under the strict belief that the very point of the IWM was to encode/decode GCR. What is your impression of the overall purpose of the IWM chip when it is on a Mac logic board? It seemed to me that the Mac could talk in 8-bit words to the IWM, and the IWM would translate to GCR for the floppy drive. It seems unlikely to me that they would have put that overhead onto the processor instead of a dedicated chip.

It's been a while since I read the digibarn IWM article, but if I have the entire concept of its function wrong, it must have gone WAY over my head.

The large square chip exists exclusively between the Z8 and the Rodime drive from what I can tell. I have found no connections in common with the floppy port side of the Z8.

Today I pulled the PAL chip from my HD20 and I'm trying to figure out how to dump it. It has D latches on 6 of the 8 outputs so that makes things interesting. Only 4 outputs are used elsewhere on the board, 3 of which are gated. I have built a microcontroller program that runs through each possible input, clocks, and records the outputs, but this probably isn't enough data to figure it out. I'm still working on it so we'll see.

 

Dennis Nedry

Well-known member
The IWM is a peripheral device that connects to a host data bus. The device generates and receives serial GCR encoded data.
Because you read and write to floppy disks, the IWM chip must be able to encode and decode data, so hooking it up in reverse like what is happening in the HD20 should be totally possible. The Z8 is on the data bus side of the IWM, so I am assuming that all data available to the Z8 is decoded.

Please point out where I'm wrong, I don't want to jump into something that I'm not capable of.

 

Dennis Nedry

Well-known member
The first 3 outputs are synchronous, each registered via a D latch on the positive clock edge. The 4th output is asynchronous. I tested with all 8 outputs, all 8 inputs, and all 8 outputs as additional feedback inputs. After a small amount of experimenting, I didn't see any obvious indication that the extra I/O and feedback had any effect, which leads me to believe that this thing is programmed in a more combinational manner, without any state machines programmed into the PAL. We won't know for sure without some extensive and pretty smart cracking, or else attempting to dump the data from this chip, which may or may not be protected.

IF this chip is just combinational with output registers, then here is the I/O that I read from it:

Input: Pin 2, 3, 4, 5, 6, 7, 8

Output: Pin 18, 17, 13, 12 (12 being async)

Code:
Input     Output
0000000	0011
0000001	0110
0000010	0110
0000011	0011
0000100	0111
0000101	0110
0000110	0110
0000111	0111
0001000	1001
0001001	1100
0001010	1100
0001011	1001
0001100	1101
0001101	1100
0001110	1100
0001111	1101
0010000	1001
0010001	1100
0010010	1100
0010011	1001
0010100	1101
0010101	1100
0010110	1100
0010111	1101
0011000	1001
0011001	1100
0011010	1100
0011011	1001
0011100	1101
0011101	1100
0011110	1100
0011111	1101
0100000	0001
0100001	0100
0100010	0100
0100011	0001
0100100	0101
0100101	0100
0100110	0100
0100111	0101
0101000	1001
0101001	1100
0101010	1100
0101011	1001
0101100	1101
0101101	1100
0101110	1100
0101111	1101
0110000	0001
0110001	0100
0110010	0100
0110011	0001
0110100	0101
0110101	0100
0110110	0100
0110111	0101
0111000	1001
0111001	1100
0111010	1100
0111011	1001
0111100	1101
0111101	1100
0111110	1100
0111111	1101
1000000	0011
1000001	0110
1000010	0110
1000011	0011
1000100	0111
1000101	0110
1000110	0110
1000111	0111
1001000	1001
1001001	1100
1001010	1100
1001011	1001
1001100	1101
1001101	1100
1001110	1100
1001111	1101
1010000	1001
1010001	1100
1010010	1100
1010011	1001
1010100	1101
1010101	1100
1010110	1100
1010111	1101
1011000	1001
1011001	1100
1011010	1100
1011011	1001
1011100	1101
1011101	1100
1011110	1100
1011111	1101
1100000	0001
1100001	0100
1100010	0100
1100011	0001
1100100	0101
1100101	0100
1100110	0100
1100111	0101
1101000	1001
1101001	1100
1101010	1100
1101011	1001
1101100	1101
1101101	1100
1101110	1100
1101111	1101
1110000	0001
1110001	0100
1110010	0100
1110011	0001
1110100	0101
1110101	0100
1110110	0100
1110111	0101
1111000	1001
1111001	1100
1111010	1100
1111011	1001
1111100	1101
1111101	1100
1111110	1100
1111111	1101
The last output bit is just the XNOR of the last 2 input bits.

 

Dennis Nedry

Well-known member
So the results are in. They seem reasonable, though not proven.

Latched:

output bit 1 = input 4 OR (3 AND NOT 2)

output bit 2 = input 5 OR (6 XOR 7)

output bit 3 = NOR of inputs 2, 3, 4

Asynchronous:

output bit 4 = input 6 XNOR 7

So this means that input 1 doesn't matter?? That's not a good sign. Maybe the guys at Apple didn't reduce / optimize the circuit enough to realize that they didn't need that input at the PAL. It's possible.
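The equations above can be sanity-checked mechanically against rows of the logged table. A quick Python cross-check (input bits numbered 1-7 left to right as in the table; the bit-1 expression that matches every row is "input 4 OR (3 AND NOT 2)"):

```python
# Cross-check candidate PAL equations against rows of the dump above.
# Inputs are numbered 1-7 left to right; outputs are the 4 bits as logged.

def pal(bits):
    """bits: 7-char string of '0'/'1'. Returns the 4-char output string."""
    i = [None] + [c == "1" for c in bits]   # 1-indexed for readability
    o1 = i[4] or (i[3] and not i[2])
    o2 = i[5] or (i[6] != i[7])             # != is XOR on booleans
    o3 = not (i[2] or i[3] or i[4])
    o4 = not (i[6] != i[7])                 # XNOR of the last two inputs
    return "".join("1" if b else "0" for b in (o1, o2, o3, o4))

# Rows copied from the logged table
samples = {
    "0000000": "0011", "0000001": "0110", "0001000": "1001",
    "0100100": "0101", "1010101": "1100", "1111111": "1101",
}
for inp, expected in samples.items():
    assert pal(inp) == expected

# Input 1 really is a don't-care: flipping it never changes the output.
assert all(pal("0" + t) == pal("1" + t)
           for t in (format(n, "07b")[1:] for n in range(64, 128)))
print("equations match the sampled rows")
```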

 

Gorgonops

Moderator
Staff member
Please point out where I'm wrong, I don't want to jump into something that I'm not capable of.

I embedded a bunch of links along the way, but perhaps this one, by Big-Mess-O-Wires, which details the hoops that he went through emulating the IWM for his Mac-in-an-FPGA project is... the most straightforward. (I put it in before but perhaps you missed it.) I know "it encodes GCR from a straight data feed" seems like it should be a reasonable explanation of what the IWM does, since most other disk controller ICs *do* accept straight data bytes and completely transparently convert them to FM or MFM encoding on the other side, but... that's just not what it does, according to every in-depth manual on programming the thing I've read. It is *annoying* that basically all the references seem to delight in completely glossing over the "you need to encode the data" part, but it's there, really, I swear.

#1: Read Big-Mess-O-Wires page. He's not talking about encoding grabbed from the "outside" of the IWM, he's talking about puzzling how the encoding is done in software in the Mac ROM...

#2. I know it sucks, but read this assembly source code for the NetBSD driver for the IWM chip. It's very well commented, so you don't need to be able to program in 68000 assembly to read it. You can read the translation table, and you can follow the parts of the code that actually do the disk I/O to see how the bytes are transformed. (I.e., three 8-bit bytes get stripped down to six bits each and are converted to 8-bit bytes with no more than two zeros in a row according to the "toDisk" translation table. Then the three 2-bit leftovers are combined into one 6-bit nybble and translated via the table. Three plaintext bytes get converted to four GCR bytes via a software routine. A reverse conversion table is also needed to convert data coming *from* the IWM back into regular bytes, and it doesn't match, on purpose.)
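The bit-shuffling half of that 6-and-2 step (three bytes in, four 6-bit groups out) can be sketched on its own; the actual 64-entry GCR translation table lives in the .Sony driver and is deliberately left out here, and the ordering of the groups is illustrative rather than what the driver writes on the wire.

```python
# The 6-and-2 bit shuffle only: 3 plain bytes <-> 4 six-bit groups.
# The GCR translation table that maps each group to a disk byte is
# omitted -- this just shows why the table only needs 64 entries.

def split_62(b0, b1, b2):
    """Three 8-bit bytes -> four 6-bit values (2-bit leftovers first)."""
    leftovers = ((b0 & 0x03) << 4) | ((b1 & 0x03) << 2) | (b2 & 0x03)
    return leftovers, b0 >> 2, b1 >> 2, b2 >> 2

def join_62(left, n0, n1, n2):
    """Inverse of split_62: stitch the 2-bit leftovers back on."""
    return ((n0 << 2) | ((left >> 4) & 0x03),
            (n1 << 2) | ((left >> 2) & 0x03),
            (n2 << 2) | (left & 0x03))

triple = (0xD5, 0xAA, 0x96)
groups = split_62(*triple)
assert all(g < 64 for g in groups)   # each group fits a 64-entry table
assert join_62(*groups) == triple    # and the shuffle round-trips
```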

(One helpful thing that's noted in this driver: the GCR translation table for MacOS is different than that used by Apple ][ ProDos. So while reading the Apple ][ DOS manuals will tell you the theory, you need to use translation tables lifted from the .Sony driver for the actual conversions.)

Again, I know, it's completely counter-intuitive. Given all the gushing love there is out there for the super-clever Woz disk controller you'd think it was a hardware miracle that was easy to use and did all sorts of magic for you, but it's actually little more than a bidirectional shift register and a very minimal window-based data separator. Everything that makes it work is software. It does *not* encode GCR.

That *all* said... reading the data sheet *again* (which isn't very good) and looking at a Q&A about how the Disk ][ controller works... I'm slightly less certain you *absolutely* couldn't shove data through the IWM that doesn't obey the "no more than two zeros in a row" rule, but I'm far from comfortable with saying that's *not* true. The input side of the IWM, as noted, is a window-based data separator, in which a "time window" for a valid read is opened by the trigger of the last valid bit read, and it's not completely clear to me what would happen if you went too many periods without a zero-one transition.

I *swear* I read another document that claimed the data in the output register would be invalid if that happened (it seemed implied that the IWM would stop reading until the register was cleared at the moment the rule was broken), but I can't find that document, so I can't absolutely *swear* it wasn't actually saying that the routine that grabs the byte off the IWM and checks it against the nybble translation table is what's responsible for failing the byte for not matching the table. *IF* that's what it meant... then maybe you can just yoke two IWMs together and send plain text through them. It's breaking the rules, but the documentation for the IWM is so bad I can't swear those rules are hardware enforced in it.

The floppy disk routines would have kittens over three zeros in a row, but... someone *really* needs to stare at a disassembly of the Mac Plus version of the .Sony driver and figure this out once and for all. The DV17 Sony Driver tech note says what part of the Mac Plus's version of the driver would be called if it were communicating with an HD-20; if there's *anyone* out there able to stare at that and see whether that part of the driver uses or skips the GCR translation table calls, then the mystery is solved.

But... the IWM does *not* encode GCR. *Please* note I'm not saying this in an attempt to discourage you, by any means. It's just important to understand that *if* GCR-style encoding is a necessity for the IWM to successfully read a data stream on the serial side (which I'm now... not certain about), you're going to get discouraged waiting for "plain text" bytes to appear on its data port. The software for doing the translations isn't that horrible, so if you need to do it, that shouldn't be a deal breaker for something of this magnitude.

 

Dennis Nedry

Well-known member
As is often the case, I did not research all of the details or even pay adequate attention to what people were trying to say right here at 68kmla.

It might be possible to figure out what the Z8 is doing on the IWM side and somehow separate that from what it is doing on the Rodime side. That would include the encoding/decoding that the board does for the Mac. It might also lead us to a Rodime spec somewhat by accident, but unlikely due to the custom square chip involved. (The square chip has access to the Z8 address and data busses, also a few control lines direct to the Z8 8-o )

So the key here looks like figuring out the Z8 code. Assuming that the Z8 is ROMless and runs exclusively from the external ROM chip that I dumped, it might be possible to figure this out with that approach. I'm grateful that you explained this for me because I would have done exactly what you said - I would have started looking for patterns in decoded data coming off of the IWM, which won't work with this strange auto-checksum / encoding stuff if that's in there.

It is also possible, as you seemed to allude to, that the Mac skips its encoding/decoding process specifically when talking to an HD20, thereby not needing anything to decode it in the HD20. In this case there might be unencoded data fresh off of the IWM, but I'm not really counting on it anymore.

 