
1TB drive formatted to HFS

Syntho

An 8 kilobyte text file on an HFS+ drive shows up as roughly 30MB on a 1TB HFS drive (Mac OS Standard). That's what I get anyway. Large file sizes aren't too great to deal with but I'm more concerned about something else that may tie into this.

If I boot OS 7.6.1 from the 1TB HFS drive and open up a .txt file (showing as 30MB), will that take up 30MB of RAM? The same goes for opening a picture, playing an MP3, etc. That's not exactly how I'm utilizing the 1TB drive, however.

I'm booting from an 80GB drive and have the 1TB for storage. So if I play an MP3 or similar from the 1TB drive while booted from the 80GB drive, would that take up lots of RAM too?

I'm hoping the answer is no in both cases.

 
I'm not sure of the details, but I don't think you need 30MB of RAM to load an 8k file.  What you're seeing is how much space is being allocated per allocation block on disk.  Since the 8k file is smaller than the hard drive's allocation block size, that one file gets the whole block to itself.  But when it's loaded, only 8k of actual data gets sent over the cable to the CPU and RAM.

You're just wasting disk space, not RAM space.  

 
Aaaand, this is why you should be running OS 8 or later.

The short answer is no, your files aren't magically going to expand to fill the allocation block size; when the system reads one it'll stop when it gets to the EOF, so you're not going to see some huge jump in RAM use. Possibly more relevant, however: HFS limits you to a maximum of 65,535 files per volume. That's... possibly a genuine problem when you're talking about a 1TB+ hard disk. If you assume, for instance, a roughly average size of 3MB each, that's "only" about 200GB worth of MP3s...
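That back-of-the-envelope figure is easy to check (a quick sketch, assuming 3 MB per MP3 and counting a GB as 10^9 bytes):

```python
MAX_FILES = 65535      # HFS limit: 65,535 files per volume
AVG_MP3 = 3 * 10**6    # assumed average MP3 size: 3 MB

capacity = MAX_FILES * AVG_MP3
print(f"{capacity / 10**9:.0f} GB of MP3s before hitting the file limit")
```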

Which raises an interesting point, actually. The math suggests that if your minimum file size is coming out to 30MB-ish, either you have closer to a 2TB drive *or* your drive is just barely over 1TB and HFS has to round the minimum allocation block up to a power of 2. I'm too lazy to figure out which it is, actually. (The wording on Wikipedia's page on HFS says "the limit of 65,535 allocation blocks resulted in files having a "minimum" size equivalent 1/65,535th the size of the disk", and 1TB divided by 65,535 would be a bit over 15MB. However, I know that with the DOS FAT16 filesystem, which has similar limitations, cluster sizes have to be an exact power of two, i.e., if you have a 1GB disk you have 16k clusters, but at 1GB+32k it'll switch to 32k clusters.)
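For the lazy, both hypotheses are easy to sketch out (assuming "1TB" means 10^12 bytes; the 512-byte rounding reflects HFS allocation blocks being multiples of 512 bytes, while the power-of-two version follows the FAT16 analogy):

```python
import math

DISK_BYTES = 10**12   # assumption: "1 TB" = 10^12 bytes
MAX_BLOCKS = 65535    # HFS limit: 16-bit allocation block count
SECTOR = 512          # HFS allocation blocks are multiples of 512 bytes

# Smallest 512-byte multiple that keeps the block count under 65,535
block_512 = math.ceil(DISK_BYTES / MAX_BLOCKS / SECTOR) * SECTOR

# Alternative hypothesis from the FAT16 analogy: round up to a power of two
block_pow2 = 1 << math.ceil(math.log2(DISK_BYTES / MAX_BLOCKS))

print(f"512-byte rounding: {block_512 / 2**20:.2f} MiB per block")
print(f"power-of-two:      {block_pow2 / 2**20:.2f} MiB per block")
```

The 512-byte rounding comes out to about 14.55 MiB per block, while the power-of-two rule would give a flat 16 MiB, so the two hypotheses do give measurably different answers.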

I do suggest you read the other parts of the Wikipedia article on HFS, specifically the part where it talks about its fragility. Seriously, it's not a good basket to keep your eggs in.

 
I thought partition sizes were limited to 2 GB when it comes to HFS?

 
Since when is it possible to create a partition larger than 2 GB with HFS (Mac OS Standard)?
Since 7.6. Allowing for the ridiculously large minimum filesizes was basically a stopgap to compensate for the lateness of HFS+. (Which *should* have been introduced with Copland but we know how that turned out.)

 
I ran into the 65,535-file problem, actually. If my entire collection of, like, everything is on the same drive, I'll need to partition everything.

Turns out though, for some reason some files show up as 14.5MB and others show up as 29.1MB. It's one or the other.

 
Considering you're using a filesystem mode that was only really supported for two OS versions and is a half-***ed hack anyway, it wouldn't surprise me at all that you're seeing weird things happen. Personally I'd *really* recommend doing some soul-searching and figuring out whether whatever it is about 7.6 you like more than, I dunno, 8.1 or 8.6 is really worth using a dysfunctional and potentially buggy filesystem on a disk *way* bigger than it was ever intended for. (This was quite literally a stopgap to compensate for some high-end Macs starting to ship with drives in the 8GB ballpark, nowhere close to 1TB.)

 
To add to what Gorgonops said, 8.0 and 8.1 are not that much "heavier" on a system like a 9600 anyway, especially if it's a decked system.

I'll link him to this thread, it would be really interesting to know how defor handles this issue as he has a decked PowerTower Pro running a SATA card with a big disk on it.

One thing you may do if you're insistent on running 7.6 is store your actual data in diskcopy images. You can make those as big or little as you need and as far as I've ever been able to detect, they don't have much of an impact on performance. (In fact, if it were me I'd consider storing my big diskcopy images on a file server instead of bothering with SATA, but that depends on your priorities and whether or not you've added a 100 megabit or gigabit Ethernet card.)

The other nice thing is that you can partition your data that way as well, so that only the relevant projects or types of data you'd like to work with are "active" when you need them.

The other other other nice thing is that, of course, you can simply copy the entire dc image to your file server for backup.

 
Turns out though, for some reason some files show up as 14.5MB and others show up as 29.1MB. It's one or the other.
There’s a simple reason for that. At the lowest level on disk, each fork of a file is treated as a separate unit, so if your file has both a resource and a data fork… voilà, two allocation blocks, each of them around 15 MiB, for a total of 30.
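A quick sketch of that arithmetic (assuming a 10^12-byte disk split into 65,535 allocation blocks, each rounded up to a 512-byte multiple):

```python
import math

# ~14.55 MiB allocation block on a "1TB" HFS volume (assumption: 10^12 bytes)
block = math.ceil(10**12 / 65535 / 512) * 512

data_fork_only = 1 * block   # file with only a data fork: one block
both_forks = 2 * block       # data fork + resource fork: two blocks

print(f"one fork:  {data_fork_only / 2**20:.2f} MiB")
print(f"two forks: {both_forks / 2**20:.2f} MiB")
```

That lands at roughly 14.55 and 29.10 MiB, which lines up with the 14.5 and 29.1 figures reported above.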

 