Transfer C source file from modern Mac to Classic Mac (Basilisk II)

thisisamigaspeaking

Active member
So I have Symantec Think C++ 7 installed and working in System 7.5.3. I have a project folder in my shared ("Unix") folder in Basilisk II, and I can create a main.c. However, when I drag a .h file into the project folder from modern macOS, Think C doesn't seem to be able to use it. The file shows up in the file selection dialog if I try to open it in the editor, but it then says it cannot open the file, even though it is shown right there in the folder. If I try to #include the .h file from the main.c I am editing with Think C, the compiler says the file can't be found.

I am guessing this has something to do with the resource fork because, similarly, if I look on my modern Mac at the main.c that was created by Think C, it appears empty. How can I make a Unix .c or .h file available in System 7? Thanks.
 

joevt

Well-known member
I think you need to set the type of the file to TEXT
In classic macOS, you might use ResEdit or Resorcerer or an extension that modifies the Finder Get Info window to change the creator and type of a file (I forget the extension's name).

In modern macOS, the file type and creator are stored in a Finder info extended attribute, along with a resource fork extended attribute. There's a GetFileInfo command to get the file type and creator, and a SetFile command to set them. They might be part of the Xcode command line tools.

I don't think Basilisk II and SheepShaver use extended attributes? They store Finder info in an invisible folder named .finf

Maybe Think C has an option to ignore file type and just look at the file extension? In Metrowerks CodeWarrior, you would modify the file types settings for your project.

A .h file is not a compilable file, so it doesn't need to be added to a project. A .h file is #include'd by a .c or .cpp file. However, I understand that it is convenient to add .h files to the project to make them easier to open and modify.

You might use https://github.com/thejoelpatrol/fusehfs to mount an HFS disk image. Then you can copy files from modern macOS to that disk image. Mount the disk image in Basilisk and copy the file from there. Be sure to use SetFile to set the file type.
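Or, if you would rather fix the type and creator from inside the emulated System 7, something along these lines in THINK C should work. This is only a sketch using the System 7 FSSpec calls; 'KAHL' as THINK C's signature is my assumption, and any creator will do as long as the type is 'TEXT'.

#include <Types.h>
#include <Files.h>

/* Set a file's type/creator so THINK C will treat it as a text file.
   Sketch only: pass vRefNum = 0 and dirID = 0 for the default directory,
   e.g. MakeTextFile((ConstStr255Param) "\pglobe.h", 0, 0); the file name
   is hypothetical. */
static OSErr MakeTextFile(ConstStr255Param name, short vRefNum, long dirID)
{
    FSSpec spec;
    FInfo  info;
    OSErr  err;

    err = FSMakeFSSpec(vRefNum, dirID, name, &spec);
    if (err != noErr) return err;

    err = FSpGetFInfo(&spec, &info);
    if (err != noErr) return err;

    info.fdType    = 'TEXT';
    info.fdCreator = 'KAHL';   /* assumed THINK C signature */
    return FSpSetFInfo(&spec, &info);
}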
 

cheesestraws

Well-known member
I think you need to set the type of the file to TEXT

This is my bet too. I'd probably use ResEdit for this (use File -> Get File/Folder Info…).

I don't think Basilisk II and SheepShaver use extended attributes? They store Finder info in an invisible folder named .finf

At least on the version of B II I have here (for modern macOS), B II uses extended attributes, which is extremely helpful of it.

I am guessing this is something to do with the resource fork

.c and .h files in THINK C do not have a resource fork. You can tell this by attempting to open them in ResEdit.

if I look at the main.c that was created by Think C on my modern Mac, it's empty

I can't reproduce this: copying a THINK C .c or .h file to the Unix filesystem here from B II does exactly what it ought. It's not something like line endings making your editor show the whole file on one line, is it?
 

joevt

Well-known member
Some classic apps would put some info in the resource forks. For example, MPW had font and tab info.
 

LaPorta

Well-known member
I’d use ResEdit to look up the type/creator code of a .h made in System 7, and just put those into the one copied from OS X. Use "Get Info…" in ResEdit on each respective file.
 

thisisamigaspeaking

Active member
I have not been able to figure out what was going wrong in my Unix/shared folder with the project there. At first I thought I had made a mistake when creating the project and that it treated the project folder as one level above where it was supposed to be (I picked "don't create folder" and used an existing one), but even when I did successfully create a project in a new folder on the shared drive, it was still having trouble accessing files.

Creating the project on the system drive in the Development folder solved those issues, but now:

It's not something like line endings making your editor show the whole file on one line, is it?
How do I easily fix this to bring my Unix text (source) files over to Mac?

Is there a way I can have the files be compatible with both (the way a DOS file works on Unix but not the reverse, I believe)?

edit: I found a utility dos2unix (which also has variants mac2unix and unix2mac) and that seems to do the trick.
 
Last edited:

thisisamigaspeaking

Active member
OK, another question (but maybe I should research this first or create a new thread).

If I want to have a static array larger than 32K, is there any way to do that in Symantec Think C 7? Should I be switching to a compiler that generates native 020 code?
 

cheesestraws

Well-known member
with the project there

Oh, the whole project is on your Unix drive? There may be issues there. The project file does use the resource fork, and while I don't know the cause of this, I can tell you that other compilers certainly don't like their project files being on the Unix volume. I tend to personally use the Unix volume for transfer and keep the projects on the emulated HD.

edit: I found a utility dos2unix (which also has variants mac2unix and unix2mac) and that seems to do the trick.

Yes, this is probably the most sensible answer.
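If you ever find yourself without dos2unix, the conversion itself is just swapping LF for CR. A minimal sketch in plain C, meant to be built and run on the Unix side (the filenames in the usage comment are made up):

/* lf2cr.c: convert Unix (LF) line endings to classic Mac (CR).
   Usage (hypothetical filenames):  ./lf2cr < main.c > main_mac.c  */
#include <stdio.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(c == '\n' ? '\r' : c);
    return 0;
}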

If I want to have a static array of larger than 32k, is there any way to do that in Symantec Think C 7? Should I be switching to a compiler that generates native 020 code?

Does it absolutely have to be a static array? If you're allocating blocks of memory this big you probably should do it on the heap using the memory manager.
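For reference, the heap version is only a few lines. A sketch; the size and names are made up, and the header is <Memory.h> in THINK C's interfaces (later Universal Interfaces call it <MacMemory.h>):

#include <Types.h>
#include <Memory.h>   /* NewPtr, DisposePtr */

#define kTextureBytes  (96L * 1024L)   /* hypothetical size, well over 32K */

static unsigned char *gTexture = NULL; /* a small global pointer replaces the huge global array */

static Boolean AllocateTexture(void)
{
    gTexture = (unsigned char *) NewPtr(kTextureBytes);
    return (gTexture != NULL);
}

static void FreeTexture(void)
{
    if (gTexture != NULL) {
        DisposePtr((Ptr) gTexture);
        gTexture = NULL;
    }
}

NewHandle works too, if you can tolerate a relocatable block.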
 

thisisamigaspeaking

Active member
Oh, the whole project is on your Unix drive? There may be issues there. The project file does use the resource fork, and while I don't know the cause of this, I can tell you that other compilers certainly don't like their project files being on the Unix volume. I tend to personally use the Unix volume for transfer and keep the projects on the emulated HD.
Seems like that was the issue.

Does it absolutely have to be a static array? If you're allocating blocks of memory this big you probably should do it on the heap using the memory manager.
No, I could take a couple other approaches. I'll explain what I'm looking at.

I have a 3D demo ("testglobe" https://github.com/trguhq/testglobe/ ) with a bitmap (a series of mipmaps, really) that is fundamental to the operation of the program. I can actually fit the sizes that would most likely be useful on a classic Mac within 32K, but I wanted to include the full resolution of the original version of the demo for an apples-to-apples benchmark comparison.

So, I could break the bitmap up into multiple static arrays and then combine them into a complete one on the heap, or possibly just use them separately; another idea would be to put them in the resource fork as PICTs. Using the resource fork is my least preferred way because the base version of this demo doesn't load the bitmaps at all, they are just baked into the code as static arrays. I like the idea of an all-in-one demo, no separate files to load, nothing but the executable.

For a full explanation: the demo has two modes. One uses traditional hardware texture mapping to draw the surface of the earth; the other, which is relevant here, maps the textures to flat-shaded polygons to simulate texture mapping. I'm trying to convert this to use the CPU for the 3D transforms and QuickDraw polygon fills, to see if the fill rate of a SuperMac Thunder/24 can possibly be useful for this. I really don't know much about QuickDraw or the Thunder/24, so I could use feedback as to whether I'm barking up the wrong tree. I am assuming it does have accelerated fills.
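Roughly, the QuickDraw side I have in mind would be something like this per triangle. Just a sketch: the point and colour parameters are placeholders, it assumes Color QuickDraw and a colour port, and whether PaintPoly actually hits the Thunder/24's acceleration is exactly what I want to find out.

#include <Quickdraw.h>   /* THINK C's precompiled MacHeaders may already cover this */

/* Paint one flat-shaded, already-projected triangle with QuickDraw. */
static void PaintTriangle(Point a, Point b, Point c, RGBColor *shade)
{
    PolyHandle poly = OpenPoly();
    MoveTo(a.h, a.v);
    LineTo(b.h, b.v);
    LineTo(c.h, c.v);
    LineTo(a.h, a.v);
    ClosePoly();

    RGBForeColor(shade);
    PaintPoly(poly);
    KillPoly(poly);
}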

So this is what I'm going for as a baseline functionality on an 040 Mac with accelerated QuickDraw:
testglobe_low.png

And this is what the full resolution of polygons (not hardware texture mapped) looks like on a modern Mac:
testglobe_full.png

And this is what I can fit with a 32K static array holding the texture (which I would be surprised if an '040 could do at any reasonable speed), so if this is the most I can fit, it's fine, aside from not being able to have an apples-to-apples comparison against the full-blown version of the demo.
testglobe_mid.png

Feedback appreciated.
 
Last edited:

cheesestraws

Well-known member
Using the resource fork is my least preferred way because the base version of this demo doesn't load the bitmaps at all, they are just baked into the code as static arrays. I like the idea of an all-in-one demo, no separate files to load, nothing but the executable.

I think you may have a fairly fundamental misunderstanding of the way that 68k applications work here. The resource fork is not an external thing to the "executable". It is not even an integral thing to it: it's the other way around, all your executable code ends up as resources in the resource fork of the application. An "executable" is just a file that contains resources that contain code which conform to a convention that the Finder knows how to launch. And the resource fork of an executable is precisely to avoid "separate files to load"; it is a structured bundle of all the data that the application needs, including its code. In fact, most 68k applications etc have no data fork at all; the resource fork is the "executable".

So, the idiomatic way to do this would be to stuff the data you want into a resource, then use the resource manager to get access to it. You *could* stick it in as a PICT, if you want to use the PICT format, but if you have image data in some other format, use a different resource type. You don't need to use an "allocated" resource type within your application, you can just make one up. They're just chunks of data. But don't use a PICT resource for data that isn't in PICT format, it's antisocial.

This way, your memory management will be easier, because you just need to ask the resource manager for a handle to the resource and it will give you one. You don't need to worry about manually loading it, or doing anything like that - just ask and it shall be given.
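For concreteness, a minimal sketch of that pattern; the 'Timg' resource type and the ID 128 are made up, and the resource itself would be added to the project's resource file with ResEdit or Rez:

#include <Resources.h>   /* Get1Resource, ReleaseResource */
#include <Memory.h>      /* HLock, GetHandleSize */

/* Ask the Resource Manager for the texture; it loads it on demand. */
static Handle LoadTexture(long *outSize)
{
    Handle h = Get1Resource('Timg', 128);   /* hypothetical type and ID */
    if (h == NULL)
        return NULL;

    if (outSize != NULL)
        *outSize = GetHandleSize(h);

    HLock(h);    /* keep it put while you read the pixels; *h points at the bytes */
    return h;
}

When you're done, HUnlock and ReleaseResource hand it back; there is no file I/O code anywhere.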

If you're one of those developers who thinks that "real programmers" only work with text and no metadata nor human-consumable images will ever be a part of your project (sigh), you can use Rez and produce these resources from a text representation.

Feedback appreciated.

I think you are trying to run before you can walk here; classic MacOS is very different from nearly everything else in terms of things like how executable-contained data is managed (both on disk and in RAM), and how the lifecycle of chunks of code works. And your comments above about resource forks in general kind of suggest you have fundamental misconceptions about how core concepts of the operating system actually work.

So, you have two options here:

1. You can put effort into learning the fundamentals. If you enjoy taking OSes to bits to see what makes them tick, this is a good option, but it won't help you with your immediate problem.
2. You follow the conventions of the platform, which will get you 90% of the way there. But if you take this approach you can't do what you did above and go "I know better and I want to do something clever" based on experience with other platforms, because you probably don't know better, and you'll just end up making your own life a misery because you're fighting the system.
 

thisisamigaspeaking

Active member
I think you may have a fairly fundamental misunderstanding of the way that 68k applications work here. The resource fork is not an external thing to the "executable". It is not even an integral thing to it: it's the other way around, all your executable code ends up as resources in the resource fork of the application. An "executable" is just a file that contains resources that contain code which conform to a convention that the Finder knows how to launch. And the resource fork of an executable is precisely to avoid "separate files to load"; it is a structured bundle of all the data that the application needs, including its code. In fact, most 68k applications etc have no data fork at all; the resource fork is the "executable".

So, the idiomatic way to do this would be to stuff the data you want into a resource, then use the resource manager to get access to it. You *could* stick it in as a PICT, if you want to use the PICT format, but if you have image data in some other format, use a different resource type. You don't need to use an "allocated" resource type within your application, you can just make one up. They're just chunks of data. But don't use a PICT resource for data that isn't in PICT format, it's antisocial.

This way, your memory management will be easier, because you just need to ask the resource manager for a handle to the resource and it will give you one. You don't need to worry about manually loading it, or doing anything like that - just ask and it shall be given.

If you're one of those developers who thinks that "real programmers" only work with text and no metadata nor human-consumable images will ever be a part of your project (sigh), you can use Rez and produce these resources from a text representation.



I think you are trying to run before you can walk here; classic MacOS is very different from nearly everything else in terms of things like how executable-contained data is managed (both on disk and in RAM), and how the lifecycle of chunks of code works. And your comments above about resource forks in general kind of suggest you have fundamental misconceptions about how core concepts of the operating system actually work.

So, you have two options here:

1. You can put effort into learning the fundamentals. If you enjoy taking OSes to bits to see what makes them tick, this is a good option, but it won't help you with your immediate problem.
2. You follow the conventions of the platform, which will get you 90% of the way there. But if you take this approach you can't do what you did above and go "I know better and I want to do something clever" based on experience with other platforms, because you probably don't know better, and you'll just end up making your own life a misery because you're fighting the system.
Great, thanks for the explanation. In my mind I had equated the resource fork with Win32 resources, which I do use at times but very reluctantly, and it sounds like there are some similarities (I guess Windows drew on MacOS's design?).

I definitely appreciate what you are saying, but I am more of a Unix/portable coder, so I dislike having a different resource management system for every platform and would rather have the same ANSI C solution work for all, so the same code compiles with as few platform-specific changes shoehorned in as possible. If I am going to use a premade mechanism for it, I'd rather it be part of something like Qt (which I like quite a bit), which isn't an option for this lightweight demo.

The point here is just to draw the globe. Learning proper classic MacOS coding is a tertiary consideration at most. This is not a "learn classic Mac coding" project beyond what's absolutely necessary to get it up and running without it being a misbehaved program.
 

cheesestraws

Well-known member
The point here is just to draw the globe. Learning proper classic MacOS coding is a tertiary consideration at most. This is not a "learn classic Mac coding" project beyond what's absolutely necessary to get it up and running without it being a misbehaved program.

You realise this can be rephrased as "I don't want to learn how to do it, I just want to do it"?
 

thisisamigaspeaking

Active member
You realise this can be rephrased as "I don't want to learn how to do it, I just want to do it"?
No. It's "I just want it to work, whatever the path of least resistance to that is." Whatever way works to get the globe drawn on the screen is "how to do it". Delving into an antique operating system's non-portable and peculiar ways of doing this is antithetical to the intent of the demo to be as portable as possible, if you are going to insist on having a confrontation over it.

I'm not saying I won't use resources to store the mipmaps, just that the only reason I would do so is because it's necessary to make it work right.
 

thisisamigaspeaking

Active member
Good luck with your project!
...

If people can swap out the images with a resource editor, that's a good reason to do it that way.

This was intended to run in 640KB (or ideally less) all told on MS-DOS, so the idea was to have as little code as possible and use as little memory as possible. If it were going to be sophisticated, it wouldn't be in plain C at all; it'd be in C++, in Qt, and do a lot of additional things like supporting texture swapping, cartography and GIS, cloud cover, lighting, etc. etc. etc. Increasing the scope a little bit begs for more and more to be added, and then it's no longer an "IBM PGC DEMO".

To someone with a UNIX mindset, except for very large, sprawling applications, the executable file by itself should be able to run the program. So I have been going with that so far: no data files, no nothing. Nothing to get lost. There are a lot of possibilities for how to design it (load the texture from a file the user can replace easily, for example), but the intent was to be as simple as possible.

As always, I appreciate feedback, which I did ask for. I also suggested that maybe I should research the question before asking it, which it seems would have been a good idea, as you seem to have gotten annoyed.
 

joevt

Well-known member
I have a 3D demo ("testglobe" https://github.com/trguhq/testglobe/ ) with a bitmap (a series of mipmaps, really) that is fundamental to the operation of the program. I can actually fit the sizes that would most likely be useful on a classic Mac within 32K, but I wanted to include the full resolution of the original version of the demo for an apples-to-apples benchmark comparison.

So, I could break the bitmap up into multiple static arrays and then combine them into a complete one on the heap, or possibly just use them separately; another idea would be to put them in the resource fork as PICTs. Using the resource fork is my least preferred way because the base version of this demo doesn't load the bitmaps at all, they are just baked into the code as static arrays. I like the idea of an all-in-one demo, no separate files to load, nothing but the executable.

For a full explanation: the demo has two modes. One uses traditional hardware texture mapping to draw the surface of the earth; the other, which is relevant here, maps the textures to flat-shaded polygons to simulate texture mapping. I'm trying to convert this to use the CPU for the 3D transforms and QuickDraw polygon fills, to see if the fill rate of a SuperMac Thunder/24 can possibly be useful for this. I really don't know much about QuickDraw or the Thunder/24, so I could use feedback as to whether I'm barking up the wrong tree. I am assuming it does have accelerated fills.

So this is what I'm going for as a baseline functionality on an 040 Mac with accelerated QuickDraw:

And this is what the full resolution of polygons (not hardware texture mapped) looks like on a modern Mac:

And this is what I can fit with a 32K static array holding the texture (which I would be surprised if an '040 could do at any reasonable speed), so if this is the most I can fit, it's fine, aside from not being able to have an apples-to-apples comparison against the full-blown version of the demo.
Reminds me of a project I did in '96/'97 using the X-Sharp 22 library described/included in "Zen of Graphics Programming" by Michael Abrash.
You can find "Michael Abrash's Graphics Programming Black Book Special Edition" online which is a later work containing much of the info in "Zen of Graphics Programming".

I wrote this version in Think Pascal and MPW 68020 assembly for classic Mac OS.
It can use any PICT for the world map. I chose the PICT from the classic Map control panel.
I did not enable lighting in the texture map mode.
I think it's strictly 8 bit indexed color. No graphics acceleration. Fixed-point math.
I did benchmarks to compare various assembly drawing routines (polygons, etc) to QuickDraw.
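For anyone curious, the fixed-point part boils down to something like this; a sketch in C even though my version was Pascal and assembly. FixMul is the Toolbox's 16.16 multiply (declared in FixMath.h, or ToolUtils.h in older interfaces), and sinA/cosA would come from a lookup table.

#include <Types.h>
#include <FixMath.h>   /* Fixed (16.16) and FixMul */

/* Rotate a 2D point by an angle whose sine and cosine are already 16.16 Fixed. */
static void RotatePoint(Fixed x, Fixed y, Fixed sinA, Fixed cosA,
                        Fixed *outX, Fixed *outY)
{
    *outX = FixMul(x, cosA) - FixMul(y, sinA);
    *outY = FixMul(x, sinA) + FixMul(y, cosA);
}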
XSharp22.png
 

Attachments

  • XSharp22.app.zip
    106.7 KB

thisisamigaspeaking

Active member
Reminds me of a project I did in '96/'97 using the X-Sharp 22 library described/included in "Zen of Graphics Programming" by Michael Abrash.
You can find "Michael Abrash's Graphics Programming Black Book Special Edition" online which is a later work containing much of the info in "Zen of Graphics Programming".

I wrote this version in Think Pascal and MPW 68020 assembly for classic Mac OS.
It can use any PICT for the world map. I chose the PICT from the classic Map control panel.
I did not enable lighting in the texture map mode.
I think it's strictly 8 bit indexed color. No graphics acceleration. Fixed-point math.
I did benchmarks to compare various assembly drawing routines (polygons, etc) to QuickDraw.
View attachment 83417

Very cool! Definitely gives me some more ideas. Thanks for sharing this. Originally I was only interested in measuring performance on (old) accelerated hardware, but perhaps it wouldn't take too much to include CPU rendering alongside.

Interesting that your library was from Abrash; my idea was partly inspired by wondering whether Quake could be accelerated on hardware that had reasonable 3D polygon throughput but no (or very weak) texture mapping, by mapping the textures onto subdivided shaded/lit triangles (rather than just a simple flat-shaded version of glQuake). So I wanted to get a ballpark idea of what performance I would get. I believe the software renderer in Quake uses something of a similar technique at some distances, but don't quote me on that.
 