Not exactly a Mac, but I picked up a distant cousin: an HP PA-RISC 712/100 workstation!

Huxley

Well-known member
Thanks to some extremely good Craigslist luck and a willingness to take a long drive, I’ve now joined a pretty cool club: I’m the owner of a Hewlett-Packard PA-RISC 712/100 UNIX workstation! Obviously this is not a classic Mac, but as one of the few "white box" machines that supported NeXTSTEP (apparently done in part to entice HP to consider buying NeXT before Apple gobbled them up), it can be seen as a sort of distant cousin to the Apple family tree...

This is a ~$15,000 (in 1995 dollars!) UNIX workstation built around HP's PA-7100LC CPU running at 100MHz. It featured HP’s insanely clever “Color Recovery” system for displaying photorealistic graphics on 8-bit graphics hardware. This particular model would've been the "top of the line" entry, sitting atop similar machines clocked at 60MHz and 80MHz; as the top-end version, it runs faster and has two extra RAM slots. Along with running HP’s own HP-UX variant of the UNIX operating system, these could also run Linux, OpenBSD, NetBSD and, coolest of all (and honestly the reason I’m so psyched), a special PA-RISC edition of NeXTSTEP!
Along with the CPU and RAM noted above, the machine also has:

- RS232 serial x 2
- Audio in/out
- VGA
- PS/2 keyboard & mouse support
- Parallel port
- Fast SCSI
- Twisted Pair Ethernet
- Cursed Ethernet

There were some language barriers between me and the original owner, but he assured me that it has a working install of HP-UX 11 on one of the internal SCSI drives, and 64MB of RAM. Until I get it running, I won't know how much VRAM it has, or what stuff may be found on those SCSI drives. I asked the original owner what kind of work he used to do on the machine, and he replied "computer work!" so it really is a mystery!

Interestingly, the second SCSI drive isn’t actually connected to the machine. The original owner was very insistent that it just needs a “SCSI Y-cable” but I’m unsure if that’s actually a thing. Either way, I’m excited to explore this machine!

I’ll end with a request: if anyone can share part numbers or info about the keyboard and mouse this would’ve used originally, I’d be grateful. Despite the presence of “standard” PS/2 ports, I’ve seen reports that these machines are very picky about needing the correct accessories, and I'd like to start searching for the right gear to use with this system. I'm also in need of an external SCSI CD-ROM drive which could work with this machine and/or my NeXT machines - if you happen to have one, let me know :D  

Juror22

Well-known member
if anyone can share part numbers or info about the keyboard and mouse this would’ve used originally, I’d be grateful. Despite the presence of “standard” PS/2 ports, I’ve seen reports that these machines are very picky about needing the correct accessories, and I'd like to start searching for the right gear to use with this system. I'm also in need of an external SCSI CD-ROM drive which could work with this machine and/or my NeXT machines
I'm headed back to Illinois this week, so I can take a look in my stash of keyboards/mice and at least come up with a part number for the keyboard/mouse (I don't think I have any spares, sorry).

I have given away more vintage HP workstations than I care to remember:

 - 360 (2 of these 68K machines)

 - 370 (1 of these; still sad I let this one go, it was chock-full of memory and fast)

 - 735 (at one time I had 20 or so of these, but I gave them away until only 2 maxed-out units were left, and I gave those to a friend who was learning UNIX)

 - C360 (1)

 - C3700 (I actually still have 2 of these, though I haven't booted them in years)

 - I had an Itanium workstation that I triple-booted with Windows/RHEL/HP-UX - fun to set up, but once it was done, what do you do with it?

I still have quite a lot of the software CDs, tapes and their codes, but I passed on a lot of those as well. I have some old memory modules too, but I'm not sure which ones without looking up the numbers, and those are packed away as well.

I am going to be installing NeXTSTEP on my Sun box at some point (realistically not until the holidays, when I'll have enough free time to work on it).

In my experience, quite a few Sun peripherals will work with HP machines, almost as well as their HP counterparts.

I am of course envious and think this is a fabulous find.  Enjoy!

 

Juror22

Well-known member
Sorry, I thought my HP keyboard was from the C360, but it is a newer one for the C3700 that has a USB connector. However, I found the following online:

http://ftp.parisc-linux.org/docs/platforms/712_service-handbook.pdf



Table 6–5. Keyboard and Mouse Model Numbers

Model Number   Description
A2840A #xx*    Keyboard
A2839A         Mouse

* xx represents the localization designator
There is a similar mouse: https://www.ebay.com/itm/HP-A2839B-M-S30-PS-2-mouse/252796536697

and keyboard: https://www.ebay.com/itm/Very-Nice-HP-Hewlett-Packard-A2840-60201-A2840B-PS2-Wired-Keyboard-Clicky/143627535457


 

Franklinstein

Well-known member
Nice. Running NeXTSTEP is the only interesting thing I could think of to do with one of these; I was never terribly interested in PA-RISC because it seemed like an also-ran architecture with an even smaller niche than its contemporaries (except maybe the 88k and AT&T's attempts with CRISP and Hobbit), along with a very high price. For *nix-based A/V workstations I prefer SGI's MIPS offerings, and for general-purpose *nix I prefer Alpha, which is probably another reason I dislike PA-RISC (and its successor, Itanium): Compaq/HP cancelled Alpha development when they thought Itanium would be the next big thing. Sadly, the mythical compilers that were supposed to make Itanium not terrible never appeared, and that wholly disappointing architecture languished for years before Intel stopped development around 2017. Before it was axed, Alpha was a mature platform with proven performance (used in several TOP500 supercomputers, and the first 64-bit CPU to reach 1GHz) and a clear roadmap to future improvements. What a waste.

Anyway, "SCSI Y-cables" are kind of a thing, only not by that name: they're normally just known as multi-connector SCSI cables. It looks like that box uses narrow SCSI so you could have up to 8 devices (including the computer) on the one bus. It looks like that chassis would take either two HDs, a standard MFM floppy and a HD, only a FD, or possibly no drives at all if used as a network client. The drives currently installed look pretty power-hungry though so you'd probably want to check you power supply's output before running both simultaneously.

 

Cory5412

Daring Pioneer of the Future
Staff member
Just as some thoughts on RISC UNIX meta: 

I always consider the loss of Alpha to be a bummer, but in reality, while it was on the TOP500 for a while, its use in HPC was almost entirely mooted by the Pentium 4 and the switch from scale-up to scale-out clustering, and it spent its last few years being used almost exclusively for money-counting. Every other workload is splittable over a network.

Pentium 4 and the Athlon chips of the time were thoroughly outperforming almost every RISC UNIX architecture in almost every meaningful way except the ability to take a very large amount of memory. AMD64 went a long way toward addressing that, but more than, say, 256 gigs of memory didn't become reasonable on a single x86 system until 2012 or so with the Sandy Bridge EP systems. A terabyte would take a couple more years, and so on. (At least in the middle 2-processor band; there are/have been 4-8 CPU x86 servers, but those usually aren't scaling up *enough* to meet the demand for, say, money counting.)

I suspect the reason it took Itanium until 2017 to die off (for new versions to stop coming out) was mostly that HP was bankrolling it while working on acquiring the technology to build an SGI-style giant architecture off of SGI's old IP, which they acquired just around then, and then build a multi-box scale-up x86 system. I now wonder if Intel would have stopped development on Itanium even earlier if it hadn't been for HP. Mildly hilariously, it looks like you may, at least on paper, still be able to buy an HP Integrity with an Itanium processor. Every now and again I consider writing in and asking what it would take to buy an rx2800 i4 or something (I guess it's the i6 now) like that, mostly for laughs, as that system is roughly comparable to a regular 2-socket PC server like an HP DL380 or a Dell R7x0.

And now you can buy 32-socket Xeon servers from HP, to run your multi-terabyte SAP HANA database on (same link as multi-box scale-up).

SPARC and POWER survived, mostly out of having been less behind than the others and out of sheer force of will on IBM's and Sun's parts, although how much longer SPARC lasts at this point is, I would argue, debatable. Oracle could kill SPARC development entirely and port its big-memory products to HP's big scale-up Xeon boxes. IBM, for its part, has a couple of OSes with a large legacy installed base (banks, business) that is willing to pay for them, whereas academic, research, science, et al. jumped ship from RISC UNIX literally the instant it was possible. I think the only reason things like the Xserve G5 cluster were ever under consideration was that they were cheap relative to what SPARC, MIPS, Alpha, and POWER had been costing. General-purpose IT infrastructure moved to x86 as soon as it was reasonably possible, for basically the same reason. Even for, say, Solaris shops, it often made more sense to buy Sun's/Oracle's x86/amd64 Solaris servers rather than the SPARC ones unless you were doing something like scale-up or you had some legacy commercial application for which there was only a SPARC binary.

For someone into vintage UNIXes in general, HP-UX would probably be a more interesting/relevant use of one of these machines. For someone into NeXT stuff, it's the weakest of the architectures that run it. 68k and x86 got the most software and the most versions; SPARC was next behind, getting some software and a SPARC-based OPENSTEP 4.2 release; but HPPA pretty much only got NeXTSTEP 3.3, and my suspicion is that that only happened because that was the era when NeXT and HP were working on porting openstep-the-frameworks to hpux-on-hppa, but I might be misremembering that.

 

rplacd

Well-known member
Just as some thoughts on RISC UNIX meta: 

I always consider the loss of Alpha to be a bummer
There's a slight historical irony here: IIRC, Dan Dobberpuhl, who led the Alpha design team and then the StrongARM team, eventually ended up founding PA Semi, which was in turn acquired by Apple – and the rest is history.

 

macdoogie

Well-known member
Haha! I used these in college and at my first internship! That HP-UX install is likely to have my favorite UNIX desktop environment: CDE!

Also, SCSI is always daisy-chained, never "Y-cabled". All you have to do is find a long enough 50-pin ribbon cable and add another IDC-50F connector in the middle, "pointing" the same direction as the one that goes to the end drive. Also, the last drive on the cable is the only one that should be terminated or have its active termination enabled...

 

paws

Well-known member
Isn't a daisy chain when devices have an in and an out and you connect one to the other? In SCSI all devices are connected in parallel; even on external cases that have two ports, the ports are just paralleled together. I think that's what's meant by a Y-cable.

 

johnklos

Well-known member
its use in HPC was almost entirely mooted by the Pentium 4 and the switch from scale-up to scale-out clustering


Ha ha ha... No, no. This is completely wrong.

Alpha died because it was killed. Compaq bought Digital, then HP bought Compaq, and HP made a deal with Intel to kill off both Alpha and PA-RISC in favor of Itanic. This isn't speculation - it's confirmed fact.

Even after Compaq, and then HP, turned the dial way down on Alpha development, it still went on to be a very popular platform for scientific and supercomputing uses and beat the asses of everyone else. We can only imagine how impressive it'd have been had they actually tried to improve it.

My Sun Fire V245, which is a dual processor 1.5 GHz UltraSPARC IIIi system with DDR2 from 2006 or so, gets handily spanked by my AlphaServer DS25, which is a dual processor 1 GHz Alpha 21264C (EV68) system with DDR from 2002.

The Pentium 4 was a marketing dog and pony show. They slowed it down significantly - it was substantially slower in terms of instructions per clock - so they could raise the clock speed for marketing purposes. It wasn't a serious contender in any contest except pure clock speed.

 

johnklos

Well-known member
In SCSI all devices are connected in parallel, even on external cases that have two ports, they're just paralleled together. I think that's what's meant by a Y-cable.


SCSI is parallel, yes, but each SCSI device is part of the daisy chain. Nobody calls it "Y", though.

Things like LocalTalk adapters are sometimes called "Y" because of the pigtail that goes from the part that allows daisy-chaining to the computer, but that doesn't change the fact that it's still daisy-chained. SCSI doesn't have anything like that, aside from edge cases like short cables to go from HBA to a box which adapts differential and LVD.

 

johnklos

Well-known member
Oops. I made a mistake - my Sun Fire V245 has 400 MHz DDR, and my AlphaServer DS25 has 125 MHz SDR.

 

Cory5412

Daring Pioneer of the Future
Staff member
Ha ha ha... No, no. This is completely wrong.


So, there were two things going on:

1)

HPC and multimedia computing were moving toward doing things in clusters, because it turns out that 64 CPUs and a terabyte of RAM in the year 1999 weren't really necessary: most HPC and multimedia tasks can be done in parallel. (Apple's pitch for this at the time was to hook Compressor up to Xgrid, for example.) This played out in Alpha's entries in the HPC space: I looked at a few of the TOP500 lists from the early 2000s, and they were all HP SC45s (I believe this is just a "cluster node" version of the ES45) lashed up in the hundreds/thousands, working independently on different parts of a large dataset.

I'm not wrong about that part of it. That every UNIX vendor was still selling scale-up systems like the Origin (and later the Altix) and the GS means either that they were all trying to capture the money-counter market, that they all misunderstood their classic markets (SGI in particular), or both.

Heck, lots of early incursions into using clusters for problem solving were based on AlphaStations and PWS-series machines. (Probably because that was happening in 1996, when an Alpha at whatever clock was legitimately faster than a Pentium at everything.)

2)

And, no matter the reason (and I also wish Alpha hadn't died, because I have aesthetic preferences for it), Alpha died, and when it died it was a legacy platform that was significantly slower than everything else around it.

Even if you don't want to admit Pentium 4 (1) was gaining ground, Pentium III definitely had, and Pentium M and Core* absolutely did. (Granted, Core Duo is a 2006 thing, and it appears Alpha development stopped somewhere between 2000 and 2004.) And even if each Pentium III wasn't faster than a contemporary Alpha, HPC had moved to clusters, and the PIII used less energy, shipped in smaller systems, and cost a fraction of what Alpha did.

Wikipedia quotes some SPEC bench numbers from 2000 suggesting that an Alpha/833 is roughly 3x faster at floating point than a Pentium III at 1000MHz, which is great if you have the money and a heavy floating-point workload. But, to the point above, it's my understanding that you could buy a handful of PIII boxes and some networking gear for a fair bit less than what a single Alpha/833 cost in the year two thousand. (And, in 2000, the only system with the 21264/833 in it was the ES40, so you're talking about a pretty high bar for entry, relative to later on when the DS20E and DS20L became available.) (2)

 The TL;DR here is: 

If you are working for an institution in the early 2000s and you are building a high-performance computer using grant money to get some piece of work done, and not to wave your e-wang around, it's your responsibility to build as much computer as you need, or as much as is possible within the constraints of a dollar amount, floor space, and energy envelope. By 2000, if not earlier, Pentium III was the right choice for that.

=====

I, too, have a fanfiction delusion that Alpha could have stayed relevant if only Compaq hadn't been bought and HP hadn't brought it out back and pointed a shotgun at it (and/or if they had dumped a raftload of money into it), but, as with Itanium after it, "staying relevant" is far from winning, and history is unkind to anything that isn't the victors. The unfortunate thing here is that, industry-wide, there wasn't enough money to do this for every CPU architecture that was rapidly hitting the wall. Itanium was the most palatable possible solution in the sense that when it was being floated in the late '90s, there were HPPA, MIPS, SPARC, Alpha, and POWER/PowerPC, and probably a couple I'm forgetting, all of whose users would need a home on some platform that supported the scale-up things x86 just didn't at the time. (Although this was all being planned before separate-system clustering was as well known, so in 1997 it wasn't unreasonable per se for SGI to believe that in the year 2003 HPC would still be a thousand CPUs running on a single system image, just that the CPUs would be Itaniums instead of an R10k derivative.) (In reality Itanium didn't pan out, but how much of that is because literally everybody who wanted to do HPC had jumped off once it turned out you could stuff 8 gigs of RAM into a late P4 box and stick a thousand of those into racks for a quarter of what the equivalent FLOPS would cost in Itaniums, and do the same work?)

As noted here, Itanium is literally still on sale and in use in banks all over, counting money. It's just that counting money isn't an exciting task, and the size of the computing market needed to sustain it is minuscule relative to the HPC market Intel thought it would capture. (Although Itanium in 2020 is in the same place Alpha was in 2002-2007, where it's on sale purely for people to get their last orders in before it gets discontinued and receives a further couple of years of security updates.)

But Itanium killed everybody's favorite homegrown RISC UNIX platforms, so nobody's out here trying to talk about its benefits relative to the Pentium 4 and Pentium D.

I'd love a newer Alpha, but I'm under no delusion that it's faster or more practical than what was on sale alongside it. (Especially the later you get: that last generation of Alpha systems was on sale long enough that they... just weren't practical computers for anything other than legacy applications at banks and mega-corps.)

=====

(1) The Pentium 4, and NetBurst in general, had quite a long life, and the overall PC platform basically went from "vintage" to "modern" under its watch, with the last NetBurst chips actually running with SATA storage, DDR2 memory, and PCIe connectivity on the 965 chipset. By the end, Pentium 4 was pretty much neck-and-neck (like, within a couple percent on almost all tests) with contemporary AMD platforms, and was consistently beating the RISC UNIX platforms (incl. Mac) at real workloads -- and then Core happened.

(2) Incidentally, when the DS20L launched, the base price for one was $18,000 for a config with 1x833 and 512MB/18GB. An SC20 with just 8 CPUs was expected to sell for $290,000. HP's press release doesn't list the exact configuration except to say that it has 4GB of memory, so I'm expecting four dual-833 nodes with 1GB of RAM each, or perhaps just eight 1P units. But that would be weird shenanigans, right? Because 8 base DS20Ls at 1x833/512/18 should cost you around $144,000 -- so I don't really know what HP was doing with the SC20 in as-sold configs for, you know, twice that. Is there really $145,000 worth of networking and management in the base SC20 loadout? The CS20 QuickSpecs implies that a CS20 can consist of, well, hundreds, probably thousands, of individual DS20Ls. Perhaps HP planned on selling the CS20 as a half-rack, then reneged on that plan, and the minimum viable config was two full racks of equipment. That's tough to tell without spending a few more hours on this, and that's a project for Future Cory, if ever.

EDIT: w/re (2) - it turns out $18,000 was for 2x833/512/18, which makes slightly more sense. I'm looking at an October 2001 Computer Shopper (since that's closer to when the 833MHz DS20L and DS20E were available) for what an equivalent amount of ProLiant or PowerEdge might have cost.

EDIT 2 - In October 2001, a ProLiant DL360, a 1U PIII/933 with 128MB of RAM, was $2,040. Add a disk, multiply by six, and you're probably still below $18,000, though admittedly you're now on the hook for a couple more Us than you were a year ago, when the only Alpha system with an 833 in it was the ES40, which is like a 10U box. There are much cheaper ProLiants, like the ML330e with a PIII/933 for under a thousand, but that's an SMB tower.

Dell has a PowerEdge with 1x933 and 128MB/9GB for $999, although that's a minitower, and there's no word on what the second 933 costs. These should support faster PIIIs (the 1.4 might have been out by then), but the cheapest PowerEdges are usually aimed, in their default configs, at file/print/mail duty for small businesses, so focus is given to storage and disk upgradeability below a certain price. (For example, the PowerEdge 500SC: a Celeron/800, 128/10 (IDE), for $699.)

In 2000-2001 you would have had the advantage of being able to call either of these companies to get pricing details on the exact config you wanted.

I'll have to see if I can find a print catalog from one of these PC OEMs; that'll likely show some more suitable configs, rather than just the starting config and the most popular disk upgrade.

EDIT 3: It occurs to me that I'm of course not an early-2000s IT manager, right? So this is mostly speculation based on what's quickly available on the Internet, but I'm confident that in 2001, when HP announced an $18,000 server that was only 3x faster than what Dell was selling for like $2,000, it's not because they thought they could win; it's because the server was a good deal compared to whatever its bigger and older sibling had cost, for people who wanted to upgrade from older versions of the same platform because their time and/or their ability to use a legacy app was more important than their money. And, to be totally fair here, in the kind of org that bought Alpha, business continuity was more important than $18,000. (This is the same thing that's happening with big organizations and AWS/Azure today.)

 

johnklos

Well-known member
If you are working for an institution in the early 2000s and you are building a high-performance computer using grant money to get some piece of work done, and not to wave your e-wang around, it's your responsibility to build as much computer as you need, or as much as is possible within the constraints of a dollar amount, floor space, and energy envelope. By 2000, if not earlier, Pentium III was the right choice for that.


That's not how anything works. It didn't work that way then, and it doesn't work that way now. In the scientific and academic worlds, people will buy the best equipment for the problems they primarily want to solve. Clustering already existed and worked in DEC's OSes. We still, in 2020, don't have the kind of clustering robustness that DEC had in the 1990s.

Sure, for the money, people then could've just bought cheap PCs and gotten more performance per $ than if they had bought Alpha, but trying to get those PCs to form a robust and reliable cluster that just works without lots of extra work? Nope. There's no competition at all. Those are two totally different worlds you're talking about.

And Itanic didn't kill anything. Intel tried to use their position in the market to force adoption of the Itanic, and it failed miserably. The fact that people use them, and that you can still buy them now, doesn't really mean anything beyond being evidence that Intel was dumb and wasteful. Itanium was never a serious contender in any market.

HP's PA-RISC was a good CPU, but the platform was attractive because of HP-UX. Alpha was an excellent CPU with a kick-ass OS that allowed people to stand up and run supercomputers without needing tons of extra junk. Itanic had nothing going for it, and by the time it picked up the pieces of the platforms which were actively deprecated for its sake, it still had too little going for it.

 

Cory5412

Daring Pioneer of the Future
Staff member
DEC's clustering stuff (VMS clustering in particular) was reliability-oriented, for things like railway signalling and 911 emergency operations centers. The idea was to fail single-machine-sized tasks over to other computers when a machine died, not to actually do HPC.

Any HPC that happened on it was probably incidental, and it wasn't long before "Linux on AlphaStations and PWSes" was the way budget-oriented HPC was going. Without any of DEC's special-sauce hardware, or even DEC's special-sauce software, and without VMS's special-sauce clustering. Just... Linux, a bunch of computers, and a couple of Ethernet switches.

That both things were achievable on the same hardware is basically a coincidence of the fact that DEC-then-Compaq used to sell workstations and small servers competitively priced against Pentium Pros. By 2000, the cheapest Alpha machines cost 10x what comparably capable Pentium III/Xeon workstations and small servers cost.

Those things, and what I've been describing as money-counters - big single machines that exist at rack scale (see also: mainframes) - are three different needs that don't really need to be served by the same hardware.

Even when Alpha was viable, Compaq and HP were marketing different machines to the three different needs. The ES/GS series were for big ERP and money-counting deployments, where running many-gig databases in RAM is more important than out-and-out FLOPS, and the SC series were for splittable HPC tasks, where the overall result for a particular dataset doesn't really depend on an individual sub-task succeeding in one go. (Lots of the utterly wild things that desktop, laptop, and tablet-scale computers can achieve today, that would have needed a full rack in previous decades, rely on basically the same thing.)

That the SC series was made out of the DS20L and the ES40/ES45 pretty much tells you what you need to know. Ultimately, the extreme interpretation of this is to go the SETI/Folding route and just build out so much power that you can sanity-check the results a handful of times before calling them confirmed. Just this year, Folding@home surpassed the top entry on the November 2020 TOP500 by over 100 TFLOPS. At an individual scale, an SC20 or SC45 wouldn't have had anything over a Xeon server of the time, so most of what Compaq/HP were making money on with that series was pre-rolled management software to do... the same stuff Linux people had been doing with their own time and effort for a few years already.

At one point Compaq sold a reliability-oriented ProLiant, and the way they architected it was to have two identical ProLiants running together, connected to the same SCSI storage and running all the same tasks in lockstep; if one of them failed physically, your work still completed. Those ran Windows NT.

The "railway signalling and 911 call center" market will basically run on whatever it can get. And, the hardware has been good enough for it for a very long time ago. HP NonStop now runs on x86 hardware. You can even run it in VMware. So, DEC's innovation on that front lives on and the whole point of it, arguably, was that mini computers (now microcomputers) could substitute for mainframes in some tasks so long as you architected the software right.

If you can make a task run reliably on three medium-cost computers instead of laying out for one high-cost one, why wouldn't you? Heck, if you, as a vendor, can integrate three low- or medium-cost computers and sell the lot for a bit less than one high-cost one, why wouldn't you?

 

Cory5412

Daring Pioneer of the Future
Staff member
To focus things a little bit, so that we can move beyond "well you don't know" -- what HPC (academic/scientific stuff) tasks require single large system images and never-stopping execution? 

 

Cory5412

Daring Pioneer of the Future
Staff member
I badly misread the AnandTech article: "beats the first TOP500 machine by 100 TFLOPS" is for Windows GPUs only. In aggregate, all of the Folding@home computing power is closer to twice what the top TOP500 machine is doing.

The article lists TFLOPS and x86 TFLOPS separately, which I think means they're counting graphics and x86 cores separately, which would make the full total 3-4x the top entry in the TOP500.

Granted, interest has waned, and Folding now has, in total, similar performance to that top TOP500 machine.

Of course, as a volunteer-oriented project, you end up running units twice, and the amount of computer time varies with the utilization of endpoint computers. One of the things Apple was proposing with NetBoot+Xgrid back in the day was overnight rendering and compute using lab and office computers, the idea being that if nobody was logged in at a certain time, a machine would reboot to a compute image over the network and do work out of an Xgrid queue.

That's a cute idea, but I think faster HPC machines and faster desktop machines, plus stuff like GPGPU, obviated it a little bit. Plus, it must have been a management nightmare. And if you're doing it within a more formalized HPC environment where the nodes accurately report success/failure/errors, you can reduce the need to re-run specific work units.

 

johnklos

Well-known member
Last note: Linux on Alpha wasn't a player. You're forgetting that Tru64 could run on one system or it could run on the #2 supercomputer in the world. Nothing on x86 at that time could even come close to that kind of spread. Reliability in the context of scientific computing wasn't about train switching and 911 operations, but I think you know that.

I'd love to hear if the HP PA-RISC this thread is about will eventually run NeXTStep. Astr0baby runs lots of older OSes and even gets modern toolchains and software running on them. Check out the site :)

What're your plans with it, @Huxley?

 

Huxley

Well-known member
I'd love to hear if the HP PA-RISC this thread is about will eventually run NeXTStep. Astr0baby runs lots of older OSes and even gets modern toolchains and software running on them. Check out the site :)

What're your plans with it, @Huxley?


No solid plans at the moment, beyond the following:

1. My son (now ~10.5 years old) has recently started expressing an interest in "hacking," which I believe is heavily informed by the age-appropriate adventure shows he's been watching. He's got a vague awareness that "hacking" involves "typing code to unlock stuff," so I'm going to talk him through some simple CLI commands on the HP machine to reset the root password so we can get past the CDE login window. I think he'll be thrilled to be a "real hacker" for an evening :) (I've sketched the usual reset recipe just after this list.)

2. I really want to get NeXTSTEP running on this machine, because I think it would be cool to compare performance against my NeXT TurboColor slab, and because I believe NeXTSTEP on HP PA-RISC is the only way (or one of the only ways?) to actually use HP's bizarre / clever "Color Recovery" graphics system, which apparently produces "millions of colors"-type quality on 8-bit graphics hardware. That is such an over-simplification that it's almost certainly better described as "wrong," but still - it sounds awesome, and I've put a toy sketch of the idea after this list. More info here: https://bytecellar.com/2005/02/09/my_hp_9000_7126/
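
For item 1, the usual recipe on these PA-RISC boxes goes something like the sketch below. I'm writing this from memory; the exact prompts and commands vary by firmware and HP-UX revision (and a "trusted system" install needs extra steps), so treat it as a starting point rather than gospel and double-check against the 712 service handbook linked earlier in the thread:

[CODE]
(power on, then press a key when the firmware offers to stop autoboot)

Main Menu: Enter command > boot pri            <- boot from the primary disk
Interact with IPL (Y, N, or Cancel)?> y        <- drop to the ISL prompt
ISL> hpux -is                                  <- boot the kernel single-user

(you now have a root shell with no password asked)
# passwd root                                  <- set a new root password
# reboot
[/CODE]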
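
And for item 2, here's a toy sketch in Python/NumPy of the general idea as I understand it. This is emphatically not HP's actual Color Recovery algorithm (that lives in the display hardware and is reportedly much smarter about preserving edges); it just shows the trade at the heart of it: ordered-dither the true-color image down to 8-bit 3-3-2 on the way into the framebuffer, then average small neighborhoods on the way out, buying back color resolution with a little spatial resolution.

[CODE]
import numpy as np

# 4x4 Bayer matrix, scaled to [0, 1), for ordered dithering.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dither_to_332(rgb):
    # Ordered-dither a float RGB image (H, W, 3, values in 0..1) down to
    # 3-3-2 bits per channel, i.e. what an 8-bit framebuffer would store.
    h, w, _ = rgb.shape
    t = BAYER4[np.arange(h)[:, None] % 4, np.arange(w)[None, :] % 4]
    levels = np.array([7.0, 7.0, 3.0])  # top level per channel for 3/3/2 bits
    return np.floor(rgb * levels + t[..., None]).clip(0, levels) / levels

def recover(img, k=4):
    # Crude stand-in for the "recovery" filter: average each k-by-k block,
    # trading spatial resolution back for color resolution at scan-out.
    h, w, c = img.shape
    d = img[:h - h % k, :w - w % k]
    return d.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

# Demo: a smooth gradient, which would band badly at 8 bits, survives the
# dither-then-filter round trip with only a small average error.
grad = np.linspace(0.0, 1.0, 256)[None, :, None] * np.ones((64, 1, 3))
roundtrip = recover(dither_to_332(grad))
print(abs(roundtrip - recover(grad)).mean())  # prints a small mean error
[/CODE]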

All that said, I'm completely unfamiliar with these PA-RISC systems, and I'll definitely be leaving a bootable install of HP-UX on one of the two SCSI drives, so if anyone has suggestions for anything cool or interesting or fun to do in that OS, please share!

Huxley

 