Most of my machines are sitting around for weeks / months / years compiling.
There's "niche" and then there's "you might actually be the only one on the forum or in the scene doing that." This is closer to the latter, I think.
Anecdotally, I've had really good experiences with regular SSDs in my modern computers. The two SSDs I've had longest are still kicking, getting relocated from machine to machine whenever I find a use case for something bigger in their current home and put the money into an upgrade.
Once I'm done with The Big Torrent I'll probably put my 180GB Intel 520 in the Mac mini, replacing the 2.5" 2TB disk I bought and installed in it.
My ThinkPad T400 still has a 2010-era 128GB Toshiba SSD in it that came out of a MacBook Pro.
Around ten years ago there was some math pertaining to the then-current SSDs that basically suggested most modern regular-duty SSDs should be capable of a couple of decades of endurance even under heavy use (like rewriting the full capacity of the SSD every day).
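The flavor of that back-of-envelope math looks something like the sketch below. All the numbers here are illustrative assumptions (a hypothetical 250GB drive, 3000 P/E cycles, 1.5x write amplification), not the figures from whatever article that was, and the answer swings wildly depending on what you plug in:

```python
# Back-of-envelope SSD endurance estimate. Every number here is an
# illustrative assumption, not a spec for any particular drive.
capacity_gb = 250            # drive capacity
pe_cycles = 3000             # rated program/erase cycles for the NAND
write_amplification = 1.5    # controller overhead factor

# Total host writes the NAND can absorb before wearing out (TBW).
tbw_gb = capacity_gb * pe_cycles / write_amplification

# Heavy use: rewriting the entire drive capacity once per day.
daily_writes_gb = capacity_gb * 1

years = tbw_gb / daily_writes_gb / 365
print(f"TBW: {tbw_gb / 1000:.0f} TB, ~{years:.1f} years at one full drive write/day")
```

With these made-up inputs you get around five and a half years at a full drive write per day; lighter workloads and higher-endurance NAND stretch that out a lot further, which is presumably how the "decades" figure was reached.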
The one SSD I've personally used that has died was a SanDisk in my Mac mini. It was kind of tough because I applied some patches and the machine never came back. It's absolutely my own fault for not running more frequent backups, and the one most critical piece of data that seemed lost was eventually found in an image backup I had made the previous year.
Otherwise, I largely trust my SSDs to keep truckin' on, but I'm moving away from storing data directly on client computers, with the exception of anything stationary enough to keep a daily backup running on.
I thought the whole point of modern SSDs was that the controller managed all of the TRIM-related stuff internally so that the host didn't have to be concerned with it.
There's a lot more going on behind the scenes these days, but I've never heard it explicitly said that having the OS issue TRIM is obsolete as a concept.
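The reason host-issued TRIM still matters, as I understand it, is that the controller alone can't know which logical blocks the filesystem has deleted. A toy sketch of a flash translation layer (nothing like a real controller, just the concept) shows what the hint buys you:

```python
# Toy flash translation layer sketch. A real controller is vastly more
# complex; this only illustrates why TRIM helps garbage collection.
mapping = {}  # logical block address -> physical flash page

def write(lba, page):
    mapping[lba] = page

def trim(lba):
    # The host tells the drive this logical block no longer holds live
    # data, so garbage collection can skip copying it when erasing.
    mapping.pop(lba, None)

write(0, "pageA")
write(1, "pageB")
trim(1)               # file deleted; OS issues TRIM for its blocks
live = len(mapping)   # pages garbage collection must preserve
print(live)           # 1 instead of 2
```

Without the trim() call, the drive would keep copying "pageB" around forever, because from its perspective the block was never freed.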
High-end CF cards do this too, don't they?
Not that I'm aware of. CFast might, but that's an entirely new (and more modern) standard.
That was always a big selling point for CF over SD: the internal controller was usually faster and more intelligent than that used for the average SD card and thus you'd get better performance and longevity from CF vs SD.
I can't actually say that I've ever heard this. Ten years ago, when cameras still shipped with CF, it was performance and capacity. Most of the use cases for both kinds of cards involved a lot less random read and write than exists today. The cycle was almost always "write until full", then swap cards, read, delete, repeat.
Ideally a drive could just be told "hey, don't use this sector anymore" and skip it rather than try to remap it, but apparently that's too much to ask of "modern" computers.
Can't speak to "flash" media such as USB pen/key flash drives or CF/SD/CFast cards, but on SATA SSDs there's a percentage of the NAND that has been over-provisioned for exactly this purpose.
Given that SSD media generally features 0 seek time, this shouldn't have any performance impact.
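For a rough sense of how much spare area that is: drives are often built from a power-of-two amount of raw flash but advertised in decimal gigabytes, and the gap is the over-provisioned pool. The numbers below are illustrative, not from any specific drive's datasheet:

```python
# Over-provisioning: the gap between raw NAND and advertised capacity.
# Illustrative numbers: a drive built from 256 GiB of raw flash but
# sold as "240 GB" keeps the difference as spare area for wear
# leveling and bad-block replacement.
raw_gib = 256
raw_gb = raw_gib * 2**30 / 10**9        # binary GiB -> decimal GB
advertised_gb = 240

spare_gb = raw_gb - advertised_gb
op_percent = spare_gb / advertised_gb * 100
print(f"spare: {spare_gb:.1f} GB (~{op_percent:.0f}% over-provisioning)")
```

A 256 GiB / 240 GB split works out to roughly 15% spare area; a 256 GiB / 256 GB split leaves only about 7%, which is closer to what cheaper consumer drives carry.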