This prompted me (no pun intended) to have a look. Naturally, I saw some familiar operations like "construct a directory's HFS path", but the implementation is different from my own, and the general architecture doesn't match MacRelix either. Still, I suppose it has to come from somewhere: if not my project, then someone else's. Code doesn't magically appear from nothing, and if it seems to, that merely means we don't understand the process.
They're not even written in the same language (C vs. C++), which makes the accusations even more asinine, frankly, but I digress. I do think my approach is fundamentally different: I mentioned in my original post that I took some inspiration from things like BusyBox, a single binary that represents tons of Unix commands, but it's not based on that either. It really can't be, for numerous reasons; it's just a similar approach at a conceptual level.
They're models, so it comes from training data: a glorified search engine on steroids. Stack Exchange+++, lol. Vector databases. Long before all this, I watched how databases evolved, having used and configured many over the years in various capacities: SQL, then NoSQL (MongoDB, etc.), key-value stores, then Elasticsearch was all the rage, and now there are columnar DBs as well. But this stuff is heavy on vector DBs and GPU optimizations. There's plenty you can learn about it; it's not magic. It's just far outside the realm of a normal individual to buy an 8xH100 box for your house.
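To make the vector-DB point a bit more concrete: retrieval over embeddings is, at its core, just nearest-neighbor search under a similarity metric. Here's a toy Python sketch with made-up document names and embedding numbers, using exact cosine similarity — real systems use learned embeddings with thousands of dimensions and approximate indexes (HNSW, IVF, etc.) rather than a brute-force scan like this:

```python
import math

def cosine(a, b):
    # cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# toy "index": document id -> embedding vector (invented numbers)
index = {
    "doc_sql":    [0.9, 0.1, 0.0],
    "doc_vector": [0.1, 0.8, 0.3],
    "doc_gpu":    [0.0, 0.2, 0.9],
}

def nearest(query, index, k=1):
    # brute-force scan: rank every stored vector by similarity to the query
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(nearest([0.05, 0.9, 0.2], index))  # → ['doc_vector']
```

The GPU angle is that both the embedding step and the similarity math are just batched linear algebra, which is exactly what GPUs are good at — hence the hardware arms race.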
I wrote an article comparing the price of AI hardware to the SGI workstations of the '90s and early '00s, kind of as a joke. I remember going on their site when I was a teenager and trying to configure the most expensive machine possible, and they were similarly unobtainable for any normal individual:
Ultimately, an SGI render farm was also out of reach for most of us, though a pair of high-end GPUs could give you a nice little playground running some older models, if you wanted a home lab.
My biggest concern at the moment is that many of the new commits are monstrously large (and hence unreviewable), and all of them are tagged "Co-Authored-By: Claude Opus 4.6" — even
https://github.com/LXXero/SevenTTY/commit/a34134f8e674d76c164d8fdd51c8bfc4d3e900ed, which is a three-line edit to constants.r that could have been done by a simple search-and-replace, or even faster by hand. I fear that the LLM user, and therefore SevenTTY's users in turn, are utterly dependent on the LLM service. Even now, the cost of using LLMs is exorbitant (although it's been partly externalized onto innocent consumers of RAM, SSDs, and GPUs, not to mention water and electricity), and I expect even the end-user price will eventually rise.
By an individual, maybe. But set Codex loose in extra-high thinking mode as my reviewer against Claude's output, go back and forth a few rounds, then do another extensive round of UI/UX testing, and, if need be, write automated tests if you want to take it further — which would be interesting to do for a retro Mac setup. The thought has crossed my mind to build an automated E2E test suite for 68k/PPC development.
It's that attachment to the process that slows people down the most when it comes to learning and adapting to this technology. You have to think more about the results now, and about testing those results. If you're joined at the hip to the process, it's going to drive you crazy. Tabs-vs-spaces arguments, code styling, all that stuff I might have cared about in a past life just isn't the point anymore. You validate results and do tons of UX testing; it changes your role a bit.
I'll try to avoid getting deeply into the politics, but I do think what we're experiencing is a bit like the search-engine wars of the late '90s and early '00s, where Google eventually won out. Right now you have more players in the mix, with big names backing many of them; ironically, Google's model is often the weaker one for coding, though I do like NotebookLM for summarizing YouTube videos and the like. Things can and will change, but these tools aren't going away any time soon, the same way IDEs and other developer tools haven't gone away; they certainly evolved over the years, and so will these models. Heck, I think many people started by using AI as autocomplete within their IDE and then moved into deeper uses. I know that's how it started for me, anyway.
As for the three-line edit, I'm still a hardcore Vim guy at heart, but why should I fire up Vim just to change those three lines and then manually git add, git commit, and git push, when I can say "update my version and push" and it just does it for me? It's not like I can't still do it the old way, but I assure you, it's not faster by hand once you get good at this.