All Activity

  1. Yesterday
  2. Doh. Obviously I meant to include that the part I'm referring to starts on page 332. There are references to other pieces of documentation in Inside Macintosh, technotes, etc. I still imagine it'd be a very tall order to implement a QuickDraw accelerator without some source code to reference. I am curious how other vendors that made accelerated cards pulled it off; the fact that they did does imply there's some kind of reference implementation? The question might be whether it required an NDA to see it.
  3. I never upgraded the RAM in my 4400 (a side effect of having too many Macs), but a similar problem exists on a Beige G3, where the machine only saw half of the RAM. I never had a problem with that anomaly, though.
  4. Franklinstein

    PB 2400c USB: cannot boot with card installed

    I generally remove cards on boot anyway, unless they're used for booting (which USB isn't on something that old). The 2400 (and 3400 on which it's based) is not supposed to have CardBus so there are probably some software routines that don't run properly. I don't consider it a problem big enough to find a solution.
  5. Franklinstein

    Boxed 4400/200

    Yeah I guess their marketing team decided that Vimage had brand recognition outside of Japan or something, so Interware used that name on their processor upgrades sold overseas (previously it was specifically applied to their video cards, while their processor upgrades were sold under the Booster name). I prefer OS 8.6 to 8.1 for a number of reasons, but the main reason to use it here is the FW/USB combo card which doesn't have support on 8.1.
  6. PotatoFi

    Brightness Knob on Macintosh SE

    I took the analog board back out, and gave the knob a bit of a twist to line it up better... a lot smoother now. I think I'm going to call this one fixed! We'll see if the old potentiometer is still bad after a good cleaning when I repair the next SE in the lineup.
  7. jessenator

    Potentially Stupid RAM Hardware Mod Question

    Sweet. I'll give 'em a look-see tonight/this weekend.
  8. Newertech's RAMometer, which was later subsumed into their Gauges set, has a continuous memory test. Run it for as long/as many iterations as you like. I think it must alter the data patterns from one run to the next, because back when I was buying new DIMMs, I'd run it on new memory and usually faulty stuff would fail within 10 - 15 iterations, but there was some stuff that only failed around the 1200 - 1500 iteration mark. And it failed consistently in the same place. Cell leakage in capacitive memory cells based on data patterns in surrounding cells is a whole other topic... I keep a copy here: https://www.prismnet.com/~trag/Ramometer.sea.hqx and https://www.prismnet.com/~trag/NewerTech/ or maybe: https://www.prismnet.com/~trag/gaugepro1.1.sea.hqx https://www.prismnet.com/~trag/gaugepro.hqx
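     A toy C sketch of that kind of pattern-varying memory test, just to illustrate the idea the post describes; this is not RAMometer's actual algorithm, and the region size and pattern constants are arbitrary:

         #include <stdint.h>
         #include <stdio.h>
         #include <stdlib.h>

         #define WORDS (1u << 20)              /* arbitrary 4 MB test region */

         int main(void) {
             uint32_t *buf = malloc(WORDS * sizeof *buf);
             if (!buf) return 1;

             for (uint32_t iter = 1; iter <= 1500; iter++) {
                 /* Vary the pattern each iteration so marginal cells that one
                  * fixed pattern would never provoke eventually show up. */
                 uint32_t pattern = 0xA5A5A5A5u ^ (iter * 0x9E3779B9u);

                 for (uint32_t i = 0; i < WORDS; i++)      /* write phase  */
                     buf[i] = pattern ^ i;
                 for (uint32_t i = 0; i < WORDS; i++)      /* verify phase */
                     if (buf[i] != (pattern ^ i)) {
                         printf("fail at word %lu on iteration %lu\n",
                                (unsigned long)i, (unsigned long)iter);
                         free(buf);
                         return 1;
                     }
             }
             puts("no errors");
             free(buf);
             return 0;
         }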
  9. Nathanplus

    Help screen shrinking mac plus

    So I have come across a problem with my Macintosh Plus: over time, the screen starts to shrink horizontally. I don't know what could cause this and I'm not home to do research. It might be the flyback, but I don't know.
  10. The only remotely possible case I can think of for that being true might be the Macintosh Display Card 8•24 GC (which you mentioned is well documented) doing block transfers of frame buffer/CLUT data from, and then back to, a second NuBus card for which it is providing QuickDraw acceleration. I can't imagine it isn't hijacking the frame buffer input and CLUT at the CPU's QuickDraw output level for that card, then distributing the accelerated buffer data as block transfers to the unaccelerated card's frame buffer/CLUT it would be supporting. Thought it might be worth mentioning in case someone digs up that documentation. I'll go quiet again now.
  11. I was going to point you to a pared down summary that's floating around the web but I think I found the thing it's based on. Which still isn't anywhere near enough to write your own Quickdraw implementation, but it at least mentions some of the conceptual bugaboos. https://vintageapple.org/develop/pdf/develop-03_9007_July_1990.pdf For just black and white the shift register is fine. But if you need to suck up doing indexed colors anyway I think the sliding window thing is the way some real video cards do it. Note another way this "sliding window" could be accomplished is essentially making the buffer you load the data you've read into *itself* kind of a shift register. IE, if we go back to that 16 color example, on the first pixel clock you read the first 4 bits and mask the rest of the bits with zeros. Before the next read you simply rotate the data 4 bits over, thereby assigning the data you've already rendered to the ash heap of history, so the next read gets the next four bits, etc. (And how far you shift depends on your color depth.) Or to put it simply, treat your read buffer as a stack and POP each pixel's worth of data off, however many bits you need, in parallel. Okay, this is actually better, because now you don't need to load multiple copies of your palette in the CLUT, you can always just use the lowest slots. So... Offhand observation: Technically speaking you still might want the CLUT in the path even in a 1-bit mode. I do *not* know if Macs support this, but I think both EGA and VGA (and presumably their descendants for a while) and many other systems with similar hardware technically let you change the palette for 1-bit mode. Granted that's a totally weird edge case.
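     A minimal C model of that "rotate the buffer and pop pixels off the low end" idea, using the two-4-bit-pixel 10010011 example that appears further down this stream; the CLUT contents here are placeholder values, not anything from the thread:

         #include <stdint.h>
         #include <stdio.h>

         /* One 8-bit memory read holds two 4-bit pixels.  After each pixel we
          * shift the buffer right, so the next pixel's bits land in the low
          * nibble and every lookup uses only the lowest CLUT slots. */
         int main(void) {
             uint32_t clut[16];                       /* 4-bit mode: slots 0-15 */
             for (unsigned i = 0; i < 16; i++)
                 clut[i] = i * 0x111111u;             /* placeholder grey ramp  */

             uint8_t buf = 0x93;           /* the 10010011 example: pixels 3, 9 */
             for (int px = 0; px < 2; px++) {
                 unsigned index = buf & 0x0Fu;       /* low nibble = this pixel */
                 printf("pixel %d -> CLUT[%u] = 0x%06lX\n",
                        px, index, (unsigned long)clut[index]);
                 buf >>= 4;                /* "pop" the pixel we just rendered  */
             }
             return 0;
         }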
  12. Yes. And a shift register is a much better/cleaner idea.
  13. The way the real hardware did it back in the day was generally to feed your byte (or word) size memory read into a parallel-load shift register and simply clock out a bit on every pixel clock. With your HDMI output I imagine you could just effectively do the same thing, IE, you read 8 or 16 bits (depending on the width of your video buffer), stuff it into a shift register, and as you clock it out you effectively just multiply the 0 or 1 you get into the appropriate bit mask that'll generate an all white or all black pixel. (Or, if you want to be stupid, design it so you can choose what color is used for lit pixels so you can emulate a green or amber monitor. That'd be a totally useful feature.) For 2 and 4 bit indexed modes you'll sort of deal with a similar problem, IE, each byte read will have two or four pixels in it. My suggestion might be to load the pixel data read from RAM into a register that implements a "sliding window" that moves in appropriate size chunks down the register and only supplies the bits in the window as the address for the CLUT. IE, in an 8-bit mode if you have a pixel that looks like this: 10010011 Then you simply read the 147th position in the CLUT for the output, but if you have the same value representing two 4 bit pixels on the first read you mask it like this: 0011 and the color is item #3 in the CLUT, and the second read: 1001 directs you to color #9. Strictly speaking B&W could be a special case of this; the 1 bit selects from a 1-bit CLUT load where 0 is black and 1 is white. I don't know if this would strictly be kosher, we'd have to see if Quickdraw ever *reads* the CLUT off a card instead of just loading it, but here's an optimization: For every color depth less than 8-bit have the driver load the 256 item CLUT with 16, 64 or 128 identical copies of the palette. Then when the hardware is processing each pixel it simply masks all the bits belonging to all the other pixels in the read buffer with zeros, thereby automatically applying only the data that matters to its own positional copy of the palette. Then you don't even have to move/shift/"right justify" the pixels you're using, the same CLUT hardware is used unchanged for all indexed pixel depths. Hopefully what I just said there makes some rational sense...
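     A matching C model of the 1-bit parallel-load shift register path described above: load one byte, shift out a bit per pixel clock, and multiply it into a 24-bit mask. The MSB-first ordering and the black/white polarity here are assumptions (on a real Mac a set bit is black, so you would invert):

         #include <stdint.h>
         #include <stdio.h>

         int main(void) {
             uint8_t fb_byte = 0xB1;               /* one framebuffer read: 10110001 */
             uint8_t shreg = fb_byte;              /* parallel load                  */

             for (int clk = 0; clk < 8; clk++) {
                 uint32_t bit = (shreg >> 7) & 1u; /* tap the MSB                    */
                 /* Multiply the 0/1 into an all-black/all-white pixel; swap the
                  * constant for 0x00FF00 and you have the "green monitor" option. */
                 uint32_t rgb = bit * 0xFFFFFFu;
                 printf("pixel %d -> 0x%06lX\n", clk, (unsigned long)rgb);
                 shreg <<= 1;                      /* advance on the pixel clock     */
             }
             return 0;
         }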
  14. Yeah. Well, a big part of what I want to get out of this is to learn about FPGAs and FPGA logic design. Any plans with ARM would be way down the line. Perhaps soft cores might be worth looking into for QuickDraw acceleration later? Where might one read more about that? Yes. It might be worth exploring if SDRAM becomes intractable. It is, thanks. That's roughly what I expected for a CLUT. I don't think B&W support would be hard either? I imagine (naively) you'd probably want to read a byte (for example) every eight pixel clock cycles, and on every pixel clock cycle perform some bit masking to determine the state of the current 1-bit pixel, then output the appropriate 24-bit pixel. That is more or less the same progression I see, except HDMI and SDRAM instead of VGA and DDR2, respectively. I used "streaming" kind of loosely, sorry. There is no primitive (some sort of DMA?) for grabbing bytes from memory and pumping them to the TMDS encoder that I am aware of. I agree CLUT is not a big deal, at least in theory. I was simply lacking imagination at 4am last night for how it might be done in an FPGA but BMoW's pseudocode totally makes sense. Yeah, I briefly looked at the equivalent ADV7123 DAC, since that's what's used on the optional daughter board (which I did not purchase) of the QMTECH FPGA dev board I have. QMTECH ships an example design to drive that IC. The link I shared earlier about hacking 1080p @ 60Hz output from a Spartan 6 appeared to imply an HDMI TX IC would be able to work around the serialization bandwidth limitations of that FPGA, so I also looked at the TFP410 and ADV7513. --- BTW thanks again for all the input and discussion. I'm definitely going to need all the help I can get to have any hope of realizing this project.
  15. Trash80toHP_Mini

    Farallon ETHERMAC LC NSC w/NuBus drivers in the SE/30 PDS?

    Thanks for that crystal clear explanation. That hack was my reason for suggesting a vertical slot adapter/card LC NIC installation, should such become a feasible proposition. What are your thoughts on /NuBus and the chances of using it with the NuBus drivers as a workaround, should the need arise? edit: also wondering about the 8MHz clock issue in the SE. C16M is present on its PDS, but will the NIC clock asynchronously?
  16. The original SE has a different chassis without the cutout and mounting tabs to allow vertical expansion cards. Later SEs and the SE/30 share the same chassis with the cutout and the tabs to properly mount vertical cards. I have modified a MacCon SE to mount vertically because I have an accelerator mounted on top of the 68000 which blocks the use of a standard horizontal SE upgrade card.
  17. Trash80toHP_Mini

    Farallon ETHERMAC LC NSC w/NuBus drivers in the SE/30 PDS?

    That would be my prediction as well, but I'm still holding out some hope for the NuBus driver working. I've yet to mention the fourth header in the address line block on the board. That's for the /NuBus line of the 68030/IIsi PDS. The SE/30 is BusMaster hostile where the IIsi is not. The LC NIC/all LC slot cards sound like they're NonMaster implementations? The explanation of the LC NIC's /SLOTIRQ address line above (hardwired $E implementation) as a NonMaster NuBus IRQ interrupt request presents a possible detour around SE/30 PDS driver addressing issues when used in conjunction with the address location of the IIsi NuBus Adapter Slot. It was an interesting blurb in the LC II DevNote; it may be applicable, may not, but implementation on the board is trivial, being but one unused pin on the 030 PDS. Wire wrap may not be the most current form of prototyping, but rework for connection corrections/modifications to wire wrap circuits on a prototype board is trivial.
  18. jessenator

    Potentially Stupid RAM Hardware Mod Question

    Thanks for the detailed explanation, trag. One day I hope to understand it all. Is there a test, say in MacBench 4, that I could run it through to see if I'm overloading the data bus? Or for that matter, any test that I can use to give the RAM a workout? I did forget to attempt popping the 64MB DIMM into the 1st slot, so I'll try that tonight. If that doesn't work, boo-hoo < /s > I think at any rate I want to replace that extra tall DIMM, so I'll probably end up getting 2x 64MB DIMMs from that eBay listing at the top. 160 isn't bad, but I'll see if 192 is possible.
  19. Those early PPC601s can overheat within tens of seconds, so don't try to operate it without the cooler. Clean off the crusty old heat sink compound, apply fresh compound, and refit the heat sink.
  20. maceffects

    Farallon ETHERMAC LC NSC w/NuBus drivers in the SE/30 PDS?

    I have looked into the issue more, and I think there is a reasonable solution we can implement. As a result, we need not be dissuaded from pursuing that goal as well. Let's just wait and see what the final wire wrap reveals. I bet that when it's complete, TattleTech will see it, but the drivers won't work.
  21. If you're seeing the full RAM you should be fine. The currents involved are tiny. They could cause signal integrity/ringing problems if not balanced properly, but they won't blow the board. I suppose in a ridiculously sensitive machine, it might be possible to destroy the output buffers on the memory controller. The bits of logic that output the actual signals to the DRAM modules are rated to output a certain level of current. If you connect a whole bunch of extra RAM chips to them, then, for example, the address output buffers might see too low of a resistance (too much current drawn) and this could in effect be the equivalent of being exposed to a short to GND or 5V. But that's really far fetched. The main thing I was concerned with, when I read your first message, was memory that seems to work but isn't actually reliable. This would only be a concern if your two-bank memory was seen as 1/2 capacity. If the board was truly one bank only, then it would be activating all the /RAS signals to a DIMM socket for every transaction. On a dual bank board, half the RAM is connected to two of the /RAS lines and the other half of the RAM is connected to the other two /RAS lines. The computer selects between them by selectively activating the /RAS lines. Otherwise all the signals between the two banks are common. So, when a machine accesses a double-banked DIMM/SIMM as if it were single banked, it is actually performing all the reads and writes to both banks simultaneously. This overloads the data bus on reads (a little) but has the bigger issue that tiny variations in process (manufacturing process) might cause the RAM chips to try to drive the data bus at different levels, and the contention can cause all kinds of signal noise. In practice, I think this rarely happens.
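     A small C sketch of the /RAS selection described above, just to make the failure mode concrete; the four-/RAS-lines-per-socket, two-per-bank arrangement is the one the post describes, and everything else is illustrative:

         #include <stdio.h>

         /* Dual-bank module: /RAS0+/RAS1 go to one bank's chips, /RAS2+/RAS3 to
          * the other's.  A bank-aware controller asserts only one pair per
          * access; a single-bank controller asserts all four, so both banks
          * drive the data bus at once on reads -- the contention noted above. */
         static unsigned ras_mask(int bank_aware, int bank) {
             if (!bank_aware)
                 return 0xFu;               /* all four /RAS lines asserted     */
             return bank ? 0xCu : 0x3u;     /* 1100 for bank 1, 0011 for bank 0 */
         }

         int main(void) {
             printf("bank-aware, bank 0: /RAS = 0x%X\n", ras_mask(1, 0));
             printf("bank-aware, bank 1: /RAS = 0x%X\n", ras_mask(1, 1));
             printf("single-bank board : /RAS = 0x%X (both banks respond)\n",
                    ras_mask(0, 0));
             return 0;
         }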
  22. The challenge with SDRAM is that typical SDRAM controller logic presents a transaction interface, and doesn't have any fixed or guaranteed address-to-data timing. It's a black box with variable timing depending on what other memory transactions are in progress. That's no good for a streaming framebuffer where you need to be constantly reading out pixel data at some fixed rate, while also interleaving reads and writes from the CPU. To accomplish that, you may need to jettison the canned or wizard-created SDRAM controller and write your own from scratch, which is a decent-sized project all by itself. This was my downfall in my attempts to build a DIY graphics card, years ago. In contrast, doing it with SRAM is trivial. Some FPGA dev boards do have SRAM, such as the Altera DE1 that I used to build Plus Too. But SDRAM is certainly fine too if you can solve the interface challenges easily enough.
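     One common way to reconcile a fixed-rate scanout with that kind of variable-latency controller (not necessarily what either project did) is to burst-fill a line FIFO from SDRAM and drain it at the pixel clock. A toy C fill/drain simulation of that budget, with entirely made-up numbers:

         #include <stdio.h>

         int main(void) {
             int fifo = 64;                 /* words currently buffered           */
             const int fifo_max = 64;
             const int burst = 8;           /* words delivered per SDRAM burst    */
             int underflows = 0;

             for (int tick = 0; tick < 640; tick++) {   /* one "scanline"         */
                 if (fifo > 0)
                     fifo--;                /* scanout consumes one word per tick */
                 else
                     underflows++;          /* this would be a visible glitch     */

                 /* Pretend the controller is free one tick in four (the rest is
                  * refresh plus CPU reads/writes) and tops the FIFO up then.    */
                 if ((tick % 4) == 0 && fifo <= fifo_max - burst)
                     fifo += burst;
             }
             printf("underflows this line: %d\n", underflows);
             return 0;
         }

     As long as the worst-case gap between refill opportunities never drains more words than the FIFO holds, the pixel side never notices the SDRAM's variable timing.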
  23. A couple comments: #1: If you look at the advertising flyers for Apple's *own products* (I looked up a few when I was checking to see what grayscale depth things like the Portrait card support) Apple actually lists as a feature in some of them that the card can let you use fewer colors for greater performance. I *really* think you might need to take a step back and think about how fast these machines actually are. The Quadra 950 went to 24 bit color at 832x624 with PDS-speed VRAM and even at the time I don't think anyone would have described that mode as "fast". The counter argument is, of course, that cards like SuperMac's wares that went up to 1600x1200 did exist, and in fact existed over NuBus, but they *were* accelerated. #2: Handwaving games are great and all, but the fact is you *will* be sacrificing software compatibility if your main display is locked only into True Color mode, and while it may be all right for some people to say "oh, well for that stuff I keep around this old Multisync I connect to this other card/motherboard video" that's not a super helpful suggestion for a lot of other use cases. #3: Seriously, a CLUT isn't hard and I wish I hadn't mentioned it. The sample driver code in the manual *does* lay out the boilerplate for what functions you need to support for handling the Quickdraw transactions that write values to the CLUT; the part that was missing was the code for actually writing updates onto the Toby hardware, which strictly speaking isn't important unless the goal is to make a register-level compatible hardware clone of the Toby. The reason I bemoaned that is at least when I skimmed it the first time it was unclear to me exactly where the hardware registers (including those to set the CLUT table, but a lot of the others too) are mapped in slot space and, possibly more importantly, how the division between the "RAM space" and "Control Space" is handled when both need to be crammed into a 1MB slice while running in 24 bit mode. This is a thing you're going to have to at least minimally figure out even if you have a card lacking a CLUT. And I'm sure the information is there somewhere, I just didn't really grok it the first time. But, *shrug*, whatever. The dev board that's been bandied around for this has 32MB of (DDR?) SDRAM with a 16 bit bus width on it, and the assumption so far has been that the framebuffer will live in that. (Which of course is going to necessitate a read/write buffer for the Mac to be able to reach it, but that shouldn't be a huge deal.) But, yeah, the CLUT should definitely live internally. It only needs to hold 256 24 bit words of memory, IE, less than 1K, so it shouldn't be a big deal. One question about that? Does the TMDS encoder operate on "a word at a time", or is there some kind of streaming function with it? IE, are there primitives hard-coded into the FPGA board's hardware design that accelerate grabbing bytes straight off the DRAM memory? Unless something like that is in play then I don't see the problem with inserting a CLUT; as BMoW's pseudocode shows, you can basically think of the CLUT as if it were a 256 pixel long framebuffer, with the pixel value the output circuitry reads from the "clutbuffer" determined by using the data value fetched from the actual framebuffer as the address. Even if there is some kind of "streaming", where "streaming" is a FIFO of some size on the FPGA, that still shouldn't be a problem; you can load that FIFO with the results of the indirection described above, right?
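     The indirection in that last paragraph, spelled out as a few lines of C; the buffer names are made up, and the grey-ramp CLUT is just a placeholder:

         #include <stdint.h>
         #include <stdio.h>

         /* The CLUT treated as a 256-entry "framebuffer": the byte fetched from
          * video memory is used as the address into the CLUT, and the 24-bit
          * word found there is what goes to the encoder (or into a scanout
          * FIFO, if one sits in the path). */
         static void scanout_line(const uint8_t *framebuffer, /* 8-bit indexed */
                                  const uint32_t *clut,       /* 256 entries   */
                                  uint32_t *fifo,             /* words to TMDS */
                                  int npixels)
         {
             for (int i = 0; i < npixels; i++)
                 fifo[i] = clut[framebuffer[i]];
         }

         int main(void) {
             uint8_t fb[4] = {0, 255, 147, 3};          /* a few indexed pixels  */
             uint32_t clut[256], out[4];
             for (unsigned i = 0; i < 256; i++)
                 clut[i] = i * 0x010101u;               /* placeholder grey ramp */
             scanout_line(fb, clut, out, 4);
             printf("pixel 2 -> 0x%06lX\n", (unsigned long)out[2]);  /* 0x939393 */
             return 0;
         }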
  24. PotatoFi

    Brightness Knob on Macintosh SE

    Ok, I recapped the entire analog board (excluding that 3.9 uF cap and the power supply caps), and it didn't fix it. So I desoldered a working pot from my other SE, cleaned it out thoroughly with alcohol, and soldered it in... problem fixed! Except now, I have a new problem: the knob is very stiff due to it's interaction with the front panel (it turned normally before the analog board was installed). So I gotta figure that out. I am also going to try reinstalling the old potentiometer after cleaning it with alcohol.
  25. Ultimately, for VGA output, I was looking at something in this family for a, geeze, I'm having vocabulary failure today, the thingamajig that acts as the D-to-A converter and changes a stream of pixel data into video output data. ths8135.pdf ths8200.pdf THS8133.pdf ths8134b.pdf The THS8135 is less than $10 at Digikey. But to start with, using the VGA output provided on the/a development system simplifies matters tremendously. Leave all the CLUT stuff for later, unless the Macintosh requires it and won't work without it.
  26. Trash80toHP_Mini

    SCSI to SC cable

    There are threads in the Hacks Forum on building that cable, and I think a PCB version might be available from a member? I took a look at doing a very simple IDC edge-card-to-edge-card adapter PCB (think old school PC FDD cables) that would allow a stealth installation of the adapter alone underneath the stock HDD replacement. There could be enough overhead in that cubic for a stealth installation of the entire SCSI2SD setup? IDC edge connectors are a bit shy of 1/2" thick if memory serves and would definitely fit, but measuring the available height for comparison to the SCSI2SD spec is easy enough.