SevenTTY — local shell + terminal + SSH for classic Mac OS

Hello @Xero, I just wanted to thank you for this software, no matter how you did it, as I don't care. I have long dreamed of a tool that is easy to install and replicates the navigation and file manipulation commands of Unix/Linux in Mac OS Classic, in addition to having networking capabilities.
I'm sorry about the reception you're getting here, it's frankly ridiculous and impolite to say the least.
It makes me wonder if this place is even moderated by anyone. The release of new software for Mac OS Classic is quite a rare thing. I hope this doesn't discourage you from continuing development.
Cheers.
 
Without Claude you wouldn't have been able to do this unless you actually sat down and learned how programs are written.
This is not yours. It's Claude's

And to Rodin, concerning his statues:
"Without the chisel you wouldn't have been able to do this unless you actually sat down and learned how marble is removed.
This is not yours. It's the chisel's"

Curious; in your opinion, does refactoring in an IDE also count as "not yours"? How about autocompletion? How about using intangible charges in invisible silicon structures rather than punching holes in a paper tape? That was *real*!!!

Anyway - AI is a tool. It may be a crappy tool in many cases (my retrocomputing attempts seem to be too far out of AI's comfort zone, or I'm too old to learn new tricks with AI), but if some people can use that tool to be more productive, then more power to them. Of the many areas that AI has invaded, adding features to a retrocomputing SSH client seems to be a harmless one.

My opinion is that the level of disclosure by the OP was quite sufficient, and if the resulting code works and abides by the original license of ssheven, then it's great to have a new useful piece of software. It also makes me wonder if AI could be used to add acceleration support to the codebase - I do have a few boards with FPGA that would make the experience of SSH on '030 and '040 faster...
 
It also makes me wonder if AI could be used to add acceleration support to the codebase - I do have a few boards with FPGA that would make the experience of SSH on '030 and '040 faster...
Might be worth a collaboration with the OP - @Xero - after your previous QuickDraw patch, would you consider helping @Melkhior add more extensive graphics acceleration to his FPGA-based graphics hardware?

It would be interesting to see whether it could weigh optimisation of performance under specific load cases against the effort / real estate in the FPGA.

It would also be interesting for validation: you could define the full range of inputs as test cases, then have it bulk-test the non-optimised behaviour vs the optimised behaviour, to go some way towards validating it.
 
It would be interesting to see whether it could weigh optimisation of performance under load cases against the effort / real estate in the FPGA.

For QD acceleration, at the moment the real estate cost would be fairly low. The *FPGA uses a RISC-V core and some firmware to implement acceleration, not custom hardware - much less area-efficient, but easier to implement. The only cost would be additional registers where the Mac can put parameters (effectively a control structure, but in hardware; accel_le is a pointer to the area in the FPGA with those registers) if more are needed, and storage of the firmware - which could be moved out of the FPGA to the Flash where the declaration ROM lives.

The primary problem is that the high-level traps are complex to implement, and the simpler low-level traps (like the _BitBlt trap my code is accelerating) are undocumented (and they peek and poke in memory, including the stack). If someone could (with or without AI) figure out how to replace more of those traps with clean C code in an INIT, it would be fairly easy to then offload that C code onto the RISC-V core in the FPGA.
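To make the register idea concrete, here is a hedged sketch of what such a "control structure in hardware" might look like from the Mac side. The field names and layout are invented for illustration; the real *FPGA register map is whatever Melkhior's firmware defines, and accel_le would point at the base of this block.

```c
#include <stdint.h>

/* Hypothetical parameter-register block for an FPGA-side blit engine.
 * Layout is illustrative only - the actual *FPGA map is firmware-defined. */
typedef volatile struct {
    uint32_t src_addr;   /* source address in framebuffer memory      */
    uint32_t dst_addr;   /* destination address                       */
    uint32_t rowbytes;   /* stride of both source and destination     */
    uint32_t width;      /* blit width in pixels                      */
    uint32_t height;     /* blit height in rows                       */
    uint32_t go;         /* Mac writes 1 to kick the RISC-V firmware  */
    uint32_t busy;       /* firmware clears this when the op is done  */
} AccelRegs;

/* The Mac-side trap patch would then do roughly:
 *   AccelRegs *r = (AccelRegs *)accel_le;
 *   r->src_addr = ...; r->dst_addr = ...; r->go = 1;
 *   while (r->busy) ;   // or yield to the thread scheduler
 */
```

The `volatile` qualifier matters here: every field access must hit the hardware registers rather than be cached or reordered by the compiler.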
 
@Xero, just wanted to point out that if you want to handle the Unicode conversion, you may want to look at how The Unarchiver does it -- it contains a nice heuristic converter that automatically takes Unicode and converts to MacRoman. I've found the logic used there useful in a number of scripts I've written that take Unicode text and filter it for a MacRoman environment.
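To illustrate the idea, here is a minimal sketch in C of a heuristic Unicode-to-MacRoman fold. This is my own illustration, not The Unarchiver's actual code; the MacRoman byte values below come from Apple's published mapping table, while the function name and the fallback behaviour are assumptions.

```c
/* Map a Unicode codepoint to a MacRoman byte. Exact slots are used
 * where MacRoman has them; everything unmappable degrades to '?'.
 * Sketch only - a real converter carries the full ~128-entry table
 * and smarter ASCII approximations. */
static unsigned char unicode_to_macroman(unsigned long cp)
{
    if (cp < 0x80)
        return (unsigned char)cp;    /* ASCII passes straight through */

    switch (cp) {
    case 0x00E9: return 0x8E;        /* e-acute: exact MacRoman slot  */
    case 0x00FC: return 0x9F;        /* u-umlaut: exact slot          */
    case 0x2013: return 0xD0;        /* en dash                       */
    case 0x2014: return 0xD1;        /* em dash                       */
    case 0x201C: return 0xD2;        /* left double quotation mark    */
    case 0x201D: return 0xD3;        /* right double quotation mark   */
    case 0x2018: return 0xD4;        /* left single quotation mark    */
    case 0x2019: return 0xD5;        /* right single quotation mark   */
    /* ... full table omitted ... */
    default:     return '?';         /* no reasonable equivalent      */
    }
}
```

The heuristic part in a real converter is the fallback tier: before giving up with `'?'`, it tries a decomposed ASCII approximation (e.g. stripping the accent) so filtered text stays readable on a MacRoman-only system.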
 
@luRaichu, I know you have your own opinion and agenda you are pushing, but seriously you really don't understand this. AI code is "slop" in the same way that unreviewed code, code without unit tests, or code from an overly excited employee that has been pushed to main is. If you haven't exhaustively reviewed the code you've put out, it will probably fail under scrutiny.

I don't know if this app has been exhaustively tested, but assuming it is, there should not be any meaningful difference to the end user. You probably would not have even commented on this thread, or noticed it was AI had he not said it explicitly. And yet, his honesty has set you off into a tailspin.

All design, whether it is software, art, or architecture, is subject to taste. The "boiler plate" aspects of software design, i.e., getting something to a working state, have little to do with how good it is. You'll find if you actually use these tools, they get you to the 90% point on a very difficult project, and then they force you to use your brain and taste to get what you really want. It's impossible to avoid the human element being intertwined with the design; we can't help it.

I think you'll be surprised to learn that most code being written now in the largest companies is happening using AI agents. Do you think they are pushing "AI slop" to main? No, they are reviewing it like any other code. And the software design/taste/intuition, which is actually what senior engineers are paid the big bucks for, is not going anywhere.
 
Hello @Xero, I just wanted to thank you for this software, no matter how you did it, as I don't care. I have long dreamed of a tool that is easy to install and replicates the navigation and file manipulation commands of Unix/Linux in Mac OS Classic, in addition to having networking capabilities.
I'm sorry about the reception you're getting here, it's frankly ridiculous and impolite to say the least.
It makes me wonder if this place is even moderated by anyone. The release of new software for Mac OS Classic is quite a rare thing. I hope this doesn't discourage you from continuing development.
Cheers.
Thank you - I appreciate the support. I felt the same way leading up to this! I want to actually use my old Macs to do modern tasks, and if AI can help me achieve this faster, why not use the tools I have available to do so?
And to Rodin, concerning his statues:
"Without the chisel you wouldn't have been able to do this unless you actually sat down and learned how marble is removed.
This is not yours. It's the chisel's"

Curious; in your opinion, does refactoring in an IDE also count as "not yours"? How about autocompletion? How about using intangible charges in invisible silicon structures rather than punching holes in a paper tape? That was *real*!!!

Anyway - AI is a tool. It may be a crappy tool in many cases (my retrocomputing attempts seem to be too far out of AI's comfort zone, or I'm too old to learn new tricks with AI), but if some people can use that tool to be more productive, then more power to them. Of the many areas that AI has invaded, adding features to a retrocomputing SSH client seems to be a harmless one.

My opinion is that the level of disclosure by the OP was quite sufficient, and if the resulting code works and abides by the original license of ssheven, then it's great to have a new useful piece of software. It also makes me wonder if AI could be used to add acceleration support to the codebase - I do have a few boards with FPGA that would make the experience of SSH on '030 and '040 faster...
There's so much irony with all this, considering I would have been equally hating on it all a year or so ago, too. I likewise was thinking about making a similar comparison to IDEs - it's like back in the day when "use notepad or bust" was almost a flex. Like, syntax highlighting is too good for them, lmfao. Or, "frontpage 98/2000? naw I use notepad" lmfao. These were all real sentiments of the past that I remember very well.

Indeed, even to some of my friends that don't like AI, they thought what I've been doing with my retro projects, like coding for IE5 mac, and/or this project, are almost like, anti-projects or "forbidden" projects, because I'm not here looking for venture capital, or profit, or putting on some kind of show about AI. I'm doing it to bring new life to old things, not tout idealism about the future or what not. If anything, it's the haters who have been putting on quite the show for us...
What about libraries? I rarely program from scratch 😆
Indeed. Ironically there's even some odd parallels to music there as well, like, hey, come up with a totally original chord or scale. I have more mixed feelings about the AI music/art stuff, though, I think AI mastering is mostly harmless.
Might be worth a collaboration with the OP - @Xero - after your previous QuickDraw patch, would you consider helping @Melkhior add more extensive graphics acceleration to his FPGA-based graphics hardware?

It would be interesting to see whether it could weigh optimisation of performance under specific load cases against the effort / real estate in the FPGA.

It would also be interesting for validation: you could define the full range of inputs as test cases, then have it bulk-test the non-optimised behaviour vs the optimised behaviour, to go some way towards validating it.
I'm totally down to try and help! I think I saw some of the projects trying to create new FPGA cards for these old machines, but they seemed a bit unobtainable. If I could help make this stuff more accessible, I'm all for it.
AI is the antithesis to our hobby, and it drives genuine people with the spirit of curiosity who wish to create and collaborate away.

The original author of this software does not desire this. So you should feel bad.

It's unfortunate he feels that way. I was actually commenting on his github issues and even submitted a work-around, a little under a year ago now, but the project seemed to have stalled out. And, FWIW, that was all before I was using AI. I think I stated in my original post how there were some bugs I'd found and they hadn't gotten fixed. I didn't start this project thinking I was competing with anything under active development. I would have reached out and tried to collaborate more, had I seen evidence of such. However, if his own attitude towards AI is so bleak, we likely wouldn't have seen eye-to-eye anyway. If he's worried about AI replacing him at his job, as stated in the blog post, that should be seen as a sign to get ahead of the curve. Frankly, I'd suggest the same to any of you. I don't think people should look at it that way, though. It helps you do your job faster. It only takes your job if you let everyone else get ahead of you. That was never not the case in this industry, though. Things have always been like that.

I think there's also some misrepresentation in his post. I did a commit to make the naming of files more generic, but it still literally says "Based on ssheven by cy384" on the splash screen and throughout every file. A simple grep is proof enough: there are well over 20 references to his name, and the original project name appears at least 14 times in various notices/comments/etc. I consider it best practice not to hard-code stuff like that in file names, especially since it's no longer *just* an SSH client; it just didn't make sense. It was never intended to remove credit where credit's due, and his name and credit are still very much all over the project, literally on the splash screen, and I don't have any plans to change that.

However, I can also understand it's a weird feeling when someone forks your project and imparts their stylistic choices on it. I had similar feelings when one of my home automation projects got forked many years ago. It wasn't the greatest feeling, even though I'm all about open source. I don't want him to feel like this project is a slight towards him, and I empathize with his sentiments, despite not agreeing with everything in his post.

Had that project not existed, I would have just started from scratch, it might have taken more time up front, but we'd likely still be here regardless. Should I feel bad? I tried to help. It'd been nearly a year with no updates. I definitely won't be guilt-tripped because I decided to do something about it. Life moves too fast to sit still. AI just makes it move faster, and I don't see that slowing down just because some people are angry on a forum about it. The world would have stopped a long time ago if that were the case every time that happened!
For QD acceleration, at the moment the real estate cost would be fairly low. The *FPGA uses a RISC-V core and some firmware to implement acceleration, not custom hardware - much less area-efficient, but easier to implement. The only cost would be additional registers where the Mac can put parameters (effectively a control structure, but in hardware; accel_le is a pointer to the area in the FPGA with those registers) if more are needed, and storage of the firmware - which could be moved out of the FPGA to the Flash where the declaration ROM lives.

The primary problem is that the high-level traps are complex to implement, and the simpler low-level traps (like the _BitBlt trap my code is accelerating) are undocumented (and they peek and poke in memory, including the stack). If someone could (with or without AI) figure out how to replace more of those traps with clean C code in an INIT, it would be fairly easy to then offload that C code onto the RISC-V core in the FPGA.
Hmm, this definitely sounds like a lot of the same kinda stuff that I was touching with desktopfix, I'd be curious to see where it could go. Heck, AI is even more than happy to disassemble 68k and do assembler patches for me. If there's a clear enough path, and a good way to debug, I suspect I could help!

@Xero, just wanted to point out that if you want to handle the Unicode conversion, you may want to look at how The Unarchiver does it -- it contains a nice heuristic converter that automatically takes Unicode and converts to MacRoman. I've found the logic used there useful in a number of scripts I've written that take Unicode text and filter it for a MacRoman environment.
Ahh, wish I'd seen that before! I actually mostly have this done as of last night! It's a two-fold approach: for anything with close-enough MacRoman characters, it does something similar to what you said - maps them to the equivalents. However, for doing crazy stuff like this (Byte Knight's BBS splash screen), it needed more font glyphs. Now there's a Python script which generates custom glyphs in all the font sizes used, and that gets embedded as a new font in the resource fork.
[screenshot: 2026-02-22_18-01.png]
Also, Claude now looks a lot better within it! Dual-monitor Quadra 700 with Claude sessions, vibing with a vibe. Haha! Admittedly it's a lot faster from QEMU than on the real Quadra, but I'm hoping to do some more speed optimizations. Also, xterm-256color support! All mostly working!
[screenshot: 2026-02-23_11-20.png]
Right now I'm having codex and claude go back and forth doing code review, I'm hoping to get all that wrapped up before I do the next release, but, should be soon!
 
AI is the antithesis to our hobby, and it drives genuine people with the spirit of curiosity who wish to create and collaborate away.

The original author of this software does not desire this. So you should feel bad.

If you notice, the author's primary complaint is not the use of AI, but rather that forking the project split off valuable resources in a niche circle. The license of the project gives @Xero the rights to do what he did. Again, would any of this discussion have happened if @Xero had done a conventional fork and added like 3 basic features? Probably not.
 
@luRaichu, I know you have your own opinion and agenda you are pushing, but seriously you really don't understand this. AI code is "slop" in the same way that unreviewed code, code without unit tests, or code from an overly excited employee that has been pushed to main is. If you haven't exhaustively reviewed the code you've put out, it will probably fail under scrutiny.

I don't know if this app has been exhaustively tested, but assuming it is, there should not be any meaningful difference to the end user. You probably would not have even commented on this thread, or noticed it was AI had he not said it explicitly. And yet, his honesty has set you off into a tailspin.
Admittedly, I spent a lot of last night doing a bunch of reviews, and I'm still going through that now before I cut the next release. There's some stuff codex found that was interesting, and I like to make them fight it out on reviews until neither finds anything, plus my own extensive testing: making sure netcat/telnet are doing the right thing, numpad Enter vs regular Enter, all sorts of fun little edge cases I've had to go through to get this to a point I'm happy with. It may not ever be perfect, but what software is??
All design, whether it is software, art, or architecture, is subject to taste. The "boiler plate" aspects of software design, i.e., getting something to a working state, have little to do with how good it is. You'll find if you actually use these tools, they get you to the 90% point on a very difficult project, and then they force you to use your brain and taste to get what you really want. It's impossible to avoid the human element being intertwined with the design; we can't help it.

I think you'll be surprised to learn that most code being written now in the largest companies is happening using AI agents. Do you think they are pushing "AI slop" to main? No, they are reviewing it like any other code. And the software design/taste/intuition, which is actually what senior engineers are paid the big bucks for, is not going anywhere.
That last statement is more truth than a lot of people here seem to want to admit; this is the way a lot of businesses, both large and small, are already operating. Perhaps not all, but a damn good amount. And it's only going to grow as agent capabilities get better.
If you notice, the author's primary complaint is not the use of AI, but rather that forking the project split off valuable resources in a niche circle. The license of the project gives @Xero the rights to do what he did. Again, would any of this discussion have happened if @Xero had done a conventional fork and added like 3 basic features? Probably not.
Yeah, he even admits it's almost more a personal issue than a real issue. It's honestly the same feeling I had when my own project got forked, and it didn't even matter if it was AI or not. Like, I totally get that feeling. I could have also just hidden the fact I used AI, but I really didn't think it would matter, let alone stir up the shit-show that it has. I'm hardly trying to be some AI evangelist, but the thread has got me defending it as if I am. To me, it's just cool new tech doing cool old stuff, and making old things work in new ways has been something I've always done, whether I used AI or not.
 
Admittedly, I spent a lot of last night doing a bunch of reviews, and I'm still going through that now before I cut the next release. There's some stuff codex found that was interesting, and I like to make them fight it out on reviews until neither finds anything, plus my own extensive testing: making sure netcat/telnet are doing the right thing, numpad Enter vs regular Enter, all sorts of fun little edge cases I've had to go through to get this to a point I'm happy with. It may not ever be perfect, but what software is??

That last statement is more truth than a lot of people here seem to want to admit; this is the way a lot of businesses, both large and small, are already operating. Perhaps not all, but a damn good amount. And it's only going to grow as agent capabilities get better.

Yeah, he even admits it's almost more a personal issue than a real issue. It's honestly the same feeling I had when my own project got forked, and it didn't even matter if it was AI or not. Like, I totally get that feeling. I could have also just hidden the fact I used AI, but I really didn't think it would matter, let alone stir up the shit-show that it has. I'm hardly trying to be some AI evangelist, but the thread has got me defending it as if I am. To me, it's just cool new tech doing cool old stuff, and making old things work in new ways has been something I've always done, whether I used AI or not.
I for one am happy you've been transparent with it; combined with the previous "I built this with AI" thread on here, it shows both extremes of how the technology can be used. If people started hiding the fact that they used a particular tool when developing something, that would be bad, IMO. Best to always cite your sources, whether it be an AI model, a compiler, a particular library, or even a binary that you reversed.

[Adespoton's random pontifications]

The challenge tends to be that we've now seen a bunch of AI slop, and when many people see "AI" they now automatically append "slop" without actually investigating whether that's how it's been used.

The other challenge, of course, is that when used correctly, it does what you'd otherwise task to a team of junior coders, just making different types of mistakes. This means that it enables more experienced coders to operate as a one person team for projects like this one, but in businesses where there used to be whole junior coding departments, now there's junior coders who are vibe coding to push content faster -- which means they never learn what they need to learn to become senior coders. So it's a double-edged sword.

As far as those users hyper-focused on the AI angle... I'll just say that AI is the new Microsoft. Almost all the same arguments etc. apply to AI today that applied to the Microsoft of the late 90s. Microsoft's still here in a slightly altered form today, and it's likely AI will still be around in a slightly altered form in 30 years. What will change is how it is perceived and (ab)used.

Of course, I've been using AI since the 90s in the form of general and expert systems, LLMs since around 2008. So this whole AI bubble that's been blowing up since 2020 has been kind of crazy to me, as a bunch of people have been showing a lot of hubris about LLMs and diffusion models on both the pro and anti sides, often without understanding (or while willfully misrepresenting) the real capabilities of these tools, leading to cultural misunderstanding and misuse as well.

So in summary, 1) I've seen LLMs abused to pump out worthless garbage by people proud of their "accomplishments" and 2) I've seen LLMs used to do some of the drudge work, connecting dots that would take way too long for a single person doing a hobby project in their spare time, leaving them to work on the interesting bits of novel code and design. It'd be lovely if we could have the second without the first. It'd also be lovely if we could have the second without people claiming it's always the first. I've peer reviewed enough human code to know that slop is not limited to stuff generated by an AI.

In my view, the only real downside in this current example is that AI has helped move this project along so quickly that there's no way for @cy384 to merge all the desired changes without also fundamentally changing the way they've instrumented their own project. This is hard to process; I've had this happen to work I've done myself, where the project moved on to other maintainers, sometimes with issues I understood and could fix still in place, but they were downstream from me and I didn't know the rest of their codebase (and they were no longer consuming my fixes).


So. With all that, I hope these projects continue to bring people joy and we survive the next round of tool use wars.
 
Next release is now available. This one's got a good mix of new features and bug fixes, including a pesky key-repeat bug when scrolling. The big one is 256-color support and a custom glyph font for doing tons of special characters. Plus, the results of my back-and-forth review sessions between claude/codex.


SevenTTY v1.1.0​


Symbol Font & 256-Color Support​

  • Custom bitmap font for box drawing (─│┌┐└┘├┤┬┴┼), block elements (▀▄█░▒▓), shading, geometric shapes, and CP437 characters — renders at all 7 font sizes
  • Full 256-color palette (xterm cube + grayscale ramp) and true-color RGB via Color QuickDraw
  • TERM=xterm-256color, COLORTERM=truecolor for modern TUI applications
  • Fixed vertical glyph alignment: block elements and box drawing now tile seamlessly with no gaps between rows
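For reference, the standard xterm color cube and grayscale ramp mentioned above can be computed as follows. This is a sketch: the function name is mine, and SevenTTY's actual implementation may differ.

```c
typedef struct { unsigned char r, g, b; } RGB;

/* Convert an xterm-256color index to RGB using the standard formula. */
static RGB xterm256_to_rgb(int idx)
{
    static const int lvl[6] = {0, 95, 135, 175, 215, 255};
    RGB c = {0, 0, 0};

    if (idx >= 16 && idx <= 231) {          /* 6x6x6 color cube */
        int v = idx - 16;
        c.r = (unsigned char)lvl[(v / 36) % 6];
        c.g = (unsigned char)lvl[(v / 6) % 6];
        c.b = (unsigned char)lvl[v % 6];
    } else if (idx >= 232 && idx <= 255) {  /* 24-step grayscale ramp */
        unsigned char g = (unsigned char)(8 + 10 * (idx - 232));
        c.r = c.g = c.b = g;
    }
    /* 0..15 are the classic ANSI colors; their RGB values vary by
     * terminal, so they're omitted from this sketch. */
    return c;
}
```

On Color QuickDraw these RGB triples would then be widened to the 16-bit-per-channel `RGBColor` the Toolbox expects.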

Key Repeat / Escape Sequence Fix​

  • Holding arrow keys in modern TUI apps (e.g. claude --resume) no longer causes erratic behavior like jumping to the wrong UI element
  • Root cause: each key repeat sent a separate tiny SSH write; remote TUIs could split reads mid-escape-sequence and misinterpret ESC as a standalone Escape keypress
  • Fix: batch up to 5 pending key repeat events into a single coalesced write
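A sketch of the batching idea described above. The names and buffer sizes are hypothetical, not SevenTTY's actual API; the point is that the batch goes out in one write, so the remote side can never split a read mid-escape-sequence.

```c
#include <string.h>

#define MAX_BATCH 5        /* release notes: up to 5 repeat events   */
#define SEQ_MAX   8        /* room for the longest sequence plus NUL */

/* Pack up to MAX_BATCH NUL-terminated escape sequences into one
 * contiguous buffer; the caller then issues a single SSH write.
 * Returns the number of bytes packed. */
static int coalesce_key_repeats(const char seqs[][SEQ_MAX], int nseqs,
                                char *out)
{
    int len = 0, i;

    if (nseqs > MAX_BATCH)
        nseqs = MAX_BATCH;       /* leftovers wait for the next flush */
    for (i = 0; i < nseqs; i++) {
        int slen = (int)strlen(seqs[i]);
        memcpy(out + len, seqs[i], slen);
        len += slen;
    }
    return len;
}
```

With per-event writes, a remote TUI could read `ESC` alone and treat it as a standalone Escape keypress; with coalescing, each `ESC [ A` arrives intact.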

Session Stability & Thread Safety​

  • Thread lifecycle tracking: ThreadID per session with safe disposal via GetThreadState/DisposeThread
  • SSH disconnect race condition fixed: wait for thread confirmation, leak rather than use-after-free
  • Session slot reuse requires confirmed thread death (no more ghost sessions)
  • Event loop reaps detached sessions periodically, freeing leaked resources
  • ssh_write_s handles LIBSSH2_ERROR_EAGAIN (retry instead of fatal disconnect)
  • Partial write handling: ssh_write_s now drains full buffer instead of silently dropping bytes
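A sketch of such a drain-the-buffer loop. This is illustrative only: the writer callback stands in for `libssh2_channel_write`, `ERR_EAGAIN` reuses the value of `LIBSSH2_ERROR_EAGAIN` (-37), and the mock writer at the bottom exists purely to demonstrate the retry and partial-write handling.

```c
#include <string.h>

#define ERR_EAGAIN (-37)   /* same value as libssh2's LIBSSH2_ERROR_EAGAIN */

/* Writer callback: returns bytes written, ERR_EAGAIN to retry, or
 * another negative value on a real error. */
typedef long (*chan_write_fn)(void *chan, const char *buf, size_t len);

/* Drain the whole buffer: retry on EAGAIN, loop on short writes,
 * propagate genuine errors. */
static int ssh_write_all(void *chan, const char *buf, size_t len,
                         chan_write_fn writer)
{
    size_t sent = 0;
    while (sent < len) {
        long n = writer(chan, buf + sent, len - sent);
        if (n == ERR_EAGAIN)
            continue;          /* not ready; a real client would yield here */
        if (n < 0)
            return (int)n;     /* genuine error: propagate */
        sent += (size_t)n;     /* partial write: keep draining */
    }
    return 0;
}

/* --- mock writer: short writes plus periodic EAGAIN, for demo only --- */
static char   mock_buf[64];
static size_t mock_len;
static int    mock_tick;

static long mock_write(void *chan, const char *buf, size_t len)
{
    (void)chan;
    if (mock_tick++ % 2)
        return ERR_EAGAIN;     /* every other call pretends to block */
    if (len > 3)
        len = 3;               /* force partial writes */
    memcpy(mock_buf + mock_len, buf, len);
    mock_len += len;
    return (long)len;
}
```

Without the `sent` accumulator, a short write would silently drop the tail of the buffer, which is exactly the bug the release notes describe.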

Bug Fixes​

  • Numpad Enter no longer sends Ctrl+C (now matched on vkeycode 0x4C instead of the conflicting charcode 0x03)
  • Right-Ctrl via QEMU now works for Ctrl+C, Ctrl+D in local shell and nc disconnect
  • Telnet sessions restore correct output callback after key batching (was incorrectly using SSH path)
  • ~15 safety fixes: null checks, buffer leak plugs, overflow guards, host-key rejection bypass, copy selection memory leak
 
Hmm, this definitely sounds like a lot of the same kinda stuff that I was touching with desktopfix, I'd be curious to see where it could go. Heck, AI is even more than happy to disassemble 68k and do assembler patches for me. If there's a clear enough path, and a good way to debug, I suspect I could help!

I think the usual path for optimization and acceleration would work, in our case:

(a) identify the hot spots - where the most time is spent in QD calls under normal use. That's what should be targeted first; no need to accelerate something that takes no time! I've already done the internal trap _BitBlt for the *FPGA, though not necessarily cleanly or comprehensively - it could be an interesting first attempt, as there's something to compare against
(b) reverse engineer the trap(s) so they can work when fully substituted by an INIT (mostly for reference purposes; that code is not really needed, we want the one from the next step)
(c) refactor the reverse-engineered version to expose a "clean" function (per trap, maybe more than one per trap if needed) that does the bulk of the FB updates; by "clean" I mean:
* all the required inputs are either (1) by-value parameters or (2) pointers to framebuffer areas (so no peeking and poking in the stack, the QD globals, etc.; all such accesses must be done prior to entry and passed as by-value parameters)
* no QD calls, library calls, system calls, etc.; just local C code updating the FB based on the parameters
* there should be no outputs other than the FB update; if some updates to e.g. QD structures are needed, they must be handled pre- or post- the "clean" function
* ... and it should take enough time that acceleration is worth it (i.e. if it takes more time to send the parameters than to do the actual update, it's not very useful!), which may be depth-dependent

That way, the "clean" function should be directly usable via recompilation on an accelerator (no dependency on MacOS), and the parameters can easily be passed via e.g. hardware registers or some dedicated parameter area in FB memory or some other accessible memory, while the QD housekeeping stays on the Mac side of things.
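As a concrete (and hypothetical) example of the shape such a "clean" function could take, here is a trivial 1-bpp copy blit: only by-value geometry plus framebuffer pointers, no Toolbox or OS dependencies, so the identical source could be recompiled for the RISC-V core. The name and parameters are mine, for illustration.

```c
#include <stdint.h>
#include <string.h>

/* Copy a rectangle of rows between two 1-bpp framebuffers. All
 * geometry arrives by value; the only pointers are into FB memory.
 * The Mac-side trap patch computes byte offsets from the QD
 * structures before calling (or before poking the FPGA registers). */
static void clean_blit_rows(uint8_t *dst, int32_t dst_rowbytes,
                            const uint8_t *src, int32_t src_rowbytes,
                            int32_t byte_width, int32_t height)
{
    int32_t y;
    for (y = 0; y < height; y++)
        memcpy(dst + (size_t)y * (size_t)dst_rowbytes,
               src + (size_t)y * (size_t)src_rowbytes,
               (size_t)byte_width);
}
```

A real _BitBlt substitute also needs the transfer mode, bit-level source/destination alignment, and masking at the rectangle edges, but the constraint set stays the same: everything in, by value; everything out, through the FB pointers.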
 
@Melkhior - you've reminded me of a thing I read here once - some of SuperMac's acceleration was actually just host-side QuickDraw optimisations.

More generally, since the community isn't a hardware vendor, we could potentially someday look into QuickDraw optimisation in general, rather than just hardware optimisation. Especially as we usually have more RAM than these machines did on release, I'm sure we can afford a few more unrolled loops!
 
Next release is now available. This one's got a good mix of new features and bug fixes, including a pesky key-repeat bug when scrolling. The big one is 256-color support and a custom glyph font for doing tons of special characters. Plus, the results of my back-and-forth review sessions between claude/codex.
Thank you for the CP437 character support! CQ II BBS looks much better now, although dark gray (ANSI color code 08) characters appear black - you can see the brackets when I change the background to white.

[screenshot: Screenshot 2026-02-24 at 9.03.37 AM.png] [screenshot: Screenshot 2026-02-24 at 9.19.18 AM.png]
 
I think the usual path for optimization and acceleration would work, in our case:

(a) identify the hot spots - where the most time is spent in QD calls under normal use. That's what should be targeted first; no need to accelerate something that takes no time! I've already done the internal trap _BitBlt for the *FPGA, though not necessarily cleanly or comprehensively - it could be an interesting first attempt, as there's something to compare against
(b) reverse engineer the trap(s) so they can work when fully substituted by an INIT (mostly for reference purposes; that code is not really needed, we want the one from the next step)
(c) refactor the reverse-engineered version to expose a "clean" function (per trap, maybe more than one per trap if needed) that does the bulk of the FB updates; by "clean" I mean:
* all the required inputs are either (1) by-value parameters or (2) pointers to framebuffer areas (so no peeking and poking in the stack, the QD globals, etc.; all such accesses must be done prior to entry and passed as by-value parameters)
* no QD calls, library calls, system calls, etc.; just local C code updating the FB based on the parameters
* there should be no outputs other than the FB update; if some updates to e.g. QD structures are needed, they must be handled pre- or post- the "clean" function
* ... and it should take enough time that acceleration is worth it (i.e. if it takes more time to send the parameters than to do the actual update, it's not very useful!), which may be depth-dependent

That way, the "clean" function should be directly usable via recompilation on an accelerator (no dependency on MacOS), and the parameters can easily be passed via e.g. hardware registers or some dedicated parameter area in FB memory or some other accessible memory, while the QD housekeeping stays on the Mac side of things.
And here's where it gets fun. I fed the context of what we've been talking about here into my desktopfix prompt... yeah... it's ready to rock on this, ahahahah...

[screenshot: 1771953585534.png]


Thank you for the CP437 character support! CQ II BBS looks much better now, although dark gray (ANSI color code 08) characters appear black - you can see the brackets when I change the background to white.

I swore it looked like the skull had a busted tooth, but I didn't drill into it further at the time! That's good debug info, though; I'll probably dig into this later today when I have some spare time to dive deeper.
 
I'm only going to say this once: I find the unrelenting vitriol toward @Xero over his use of AI quite disheartening, especially given his honesty and transparency.

And although I don't feel comfortable with AI, I do feel that in the proper context and with adequate guardrails, it can be a useful tool because, for as sophisticated as it is, it's really just a collection of algorithms and routines just like any other software.

That said, if someone uses it as a tool for creating their own software, that's OK, provided there's full disclosure and said someone goes through the effort to review and refine what the AI generates to ensure it's sensible.

 