
Do StuffIt 5.5 archives encode the resource fork safely for transfer to a non-Mac?

I too would like some documentation of why DU doesn't work, as well as what has changed to make dd not work. But I wasn't able to find any at the link, and none was forthcoming here. The script is just magic I guess.
My point is, as long as you do it properly, Disk Utility does work just fine. Never sure why the big to-do with dd.
 
Still not sure why just using Disk Utility isn’t the easiest option.
To quote @Protocol_7:
As for this image, judging by the file size (it isn't a multiple of 2352 which is the sector size in bytes of a raw image) I'd hazard a guess that this was ripped with Disk Utility to a .cdr file and renamed. In the past you could use it to rip a disc to CD/DVD Master format (.cdr) and this would be the same as a .iso.

But somewhere along the way, DU stopped using the necessary elevated access to read the disc to a non-raw file and started making raw images instead. That would explain why whoever ripped it thought they had made a regular iso.
Essentially, like with Toast (only more so), Disk Utility messes with the image headers: more recent versions of Disk Utility appear to lean on the fsplugin architecture to build the header, and if it can't find the correct fsplugin, it just writes a raw data stream with no headers at all. The result is an image file that lacks the extra data required to accurately re-create the original disc. Often this isn't an issue, for example when the disc contains just a single HFS partition. But when it contains multiple tracks (mixed mode, HFS+Joliet, multiple HFS partitions, boot partitions, etc.), the information describing the layout of those tracks (block size, slack space, gap size, etc.) gets lost, resulting in images that, when loaded in an emulator or burned back to a blank CD, can't boot, can't play their audio tracks, only mount the last partition on the image, and so on.

You can think of it as a BIN/CUE pair, but missing the CUE file.
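To make the analogy concrete, a CUE sheet for a mixed-mode disc is only a few lines, and those few lines are exactly the layout information that gets thrown away. This is a made-up illustration, not taken from any real disc:

FILE "disc.bin" BINARY
  TRACK 01 MODE1/2352
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    PREGAP 00:02:00
    INDEX 01 23:41:17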

dd, when used with standardized parameters, can create an image that preserves the raw data and the track layout, on anything from OS X 10.2 through macOS 26, which is useful for archival purposes.
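For the curious, the overall shape of the command looks something like this. It's a hypothetical sketch, not the exact parameters the script standardizes; the device node and block size depend on your drive and disc:

diskutil list                      # find the optical drive, e.g. disk2
diskutil unmountDisk disk2         # unmount (don't eject) so the raw node is readable
sudo dd if=/dev/rdisk2 of=disc.bin bs=2352 conv=noerror,sync
                                   # bs=2352 assumes a full raw read; a plain data
                                   # track may only expose 2048-byte sectors
md5 disc.bin                       # record a checksum alongside the image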

[edit] Also, in the case of official Apple media, we found that there's a bunch of interesting stuff encoded into the disc header data on the original Apple discs, and if you just read off the raw data stream from the disc, this stuff can get missed, meaning we don't get to see interesting details like who submitted the original image for duplication, what date the CD was pressed, etc. And if we're trying to match the image against a known checksum but that original data has been overwritten with "Imaged by Toast 2.0" or is missing entirely, the checksum will change.
 
Fair enough, and I grant that I don’t really make images with modern Macs (they lack a drive, anyway) and mostly use 10.4-10.6 for imaging, so that explains why my images work fine for me. Thank you for the in-depth answer.
 
Regarding my comment from a couple of years ago, as a Windows user, I have found the repos to be very problematic. IMO, all the files on the repos should be in .hqx format (and maybe zipped) and/or auto-converted on upload (if .sit) so that the files work when they have to transition through a Windows environment. However, I haven't been tracking this issue recently. Has anything changed on this front? Thanks.
 
Current strategy is actually to set the metadata, then MacBinary-encode the files, and not depend on compression. A MacBinary file is just a data file, so as long as it's not being treated as a text stream, it shouldn't be garbled by Windows -- and yet you can still open it with a hex editor and see the metadata, data fork, and resource fork contents without decompressing anything.
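For instance, you can eyeball the header of any MacBinary-encoded file with nothing fancier than xxd (the file name is just an example; the offsets are the MacBinary III layout as I remember it, so treat them as a guide rather than gospel):

xxd -l 128 SomeApp.bin             # dump just the 128-byte MacBinary header
# offset 1       = filename length, 2-64 = filename
# offset 65-72   = type and creator codes
# offset 83-86   = data fork length, 87-90 = resource fork length
# offset 102-105 = 'mBIN' signature (MacBinary III only)
# the data fork follows the header, padded out to a 128-byte boundary,
# then the resource fork, padded the same way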

Unfortunately, the "preferred way" to preserve files has changed over time, so most repos have a massive mix of file storage types.

Personally, I like the BinHex (.hqx) format because it's 7-bit ASCII-safe and leaves room in the header for any annotation you want to add, while preserving the contents. Of course, I like BinHex 5.0, which uses MacBinary to flatten the data fork, resource fork, and Finder info into one file before encoding the contents as a 7-bit data stream; most of the stuff you'll find in archives is BinHex 4.0, which is 6-bit friendly but was designed in 1985, so it's unaware of any metadata changes the Mac file format has picked up over the past 40 years, and that newer metadata likely won't be encoded and will be lost.

So, since it's been roughly 30 years since data transfer methods used less than 8 bits in their encoding, just using MacBinary III is probably enough for any pre-OS X file out there, today.

I'd personally avoid using AppleSingle, as it was designed for A/UX and MacBinary III is much more robust. I also tend to avoid AppleDouble, because it's too easy to accidentally leave the metadata and resource fork behind: the companion files are invisible by default on OS X/macOS and on network file shares.

So, for myself at least, I try to use MacBinary III, and compress into 7z archives if compression is needed. If it's not for archival, I often just use Apple Zip, which stores the resource fork and metadata as AppleDouble entries inside a PKZip container. But that's mostly just a bad habit.
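If you've ever wondered where the forks go in one of those Finder-made zips, a plain listing makes it obvious (file names invented for illustration):

unzip -l SomeApp.zip
# expect three entries: SomeApp itself (the data fork), plus
# __MACOSX/ and __MACOSX/._SomeApp, the AppleDouble companion
# carrying the resource fork and Finder info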

So, following all that, it'd probably be good for us to create yet ANOTHER script that applies SetFile attributes, does a MacBinary encode, and then optionally applies compression. Bonus marks for creating a secondary file with the md5sum of the data and resource forks, and possibly the metadata contents or checksum as well.

Being able to use such a script to create/read/extract on HFS-friendly OSes would be wonderful, as would being able to read/edit the contents on filesystems that lack these features.
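Something along these lines, maybe. Every tool and file name below is a placeholder rather than a settled standard, and as far as I know macOS doesn't ship a MacBinary encoder, so "macbinary" here stands in for whatever MacBinary III encoder we'd pick:

F="SomeApp"                            # a file sitting on an HFS(+) volume
SetFile -t TEXT -c ttxt "$F"           # example type/creator values only
md5 "$F" > "$F.md5"                    # checksum of the data fork
md5 "$F/..namedfork/rsrc" >> "$F.md5"  # checksum of the resource fork
macbinary encode "$F" -o "$F.bin"      # placeholder encoder invocation
7z a "$F.7z" "$F.bin" "$F.md5"         # optional compression step

The output of GetFileInfo -a "$F" could be appended to the sidecar too, if we want the Finder metadata captured alongside the fork checksums.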
 