
Homebrew microcomputers & video generation

Bunsen

Admin-Witchfinder-General
MiniB SUX!
Certainly not "just as good", but not entirely useless, either. The main thing missing from it vs a real Beeb is video output. I corresponded with sprow once about that, and he indicated that adding it would be "possible", but he has no interest in doing it himself.

That cypress psoc looks fantastic. Very interesting. I had seen the freesoc
Yes, they do look interesting. I'm planning to buy a couple for poking at when I have a functional workshop again.

On the topic of OSes...
Check out Minix. It's an OS designed to be part of an educational course in OS design (and, incidentally, the inspiration for Linux). The course books (by Andrew Tanenbaum) are available - if not free/online, at least via Amazon & ebay. And they are highly recommended by people who appear to know what they're talking about.

Speaking of which, I wonder if anyone's ported it to the R-Pi yet? If not, it could be an interesting project; one that could be tackled a step at a time, using the existing Minix courseware and the R-Pi Foundation's docs.

Well, if I could list the projects I've thought about but haven't finished... :-/
HAH! *fistbump*

Lately, I've been trying to rein in my brainfarts by writing them down someplace, forgetting about them, and getting back to the "having a functional workshop again" project which will, in theory, make all the other projects possible. Though even that has been severely backburnered due to Life and Stuff[tm].

FPGA and smaller ARM SoC
PiXi-200 is a cheapish FPGA addon/piggyback board for the R-Pi.

 

Bunsen

Admin-Witchfinder-General
9 Chip CP/M Machine
That's pretty cool. I see he has a DIY Jupiter Ace clone too, if one felt like having a self-hosting FORTH system instead of BASIC or CP/M.

Originally I had looked at dual ported RAMs with the intention of using them with some sort of external video generator, but they seem uncommon and the ones I could find (on Digikey) were way too expensive to even consider.
There are cheap ones on ebay if you only need a handful.

a 500k Xilinx Spartan-3E board, so I'm going to see if I can put some sort of soft core on there.
Pardon me for not remembering or googling for names, but there are some CPU softcores out there which, rather than being direct copies of known hard-CPUs, are original designs optimized for synthesis on FPGAs in the least gate-space. Poking around opencores.org should turn them up.

Still, there should be plenty of space on a 3E for a core or two of some description.

Note that in most computers that *don't* use the 6502 and have to use bus arbitration for DMA video, it's generally transparent to the programmer as well; it just has some side effects with regard to instruction timings. Broadly speaking, the system is going to run slightly slower than its clock speed would indicate compared to a system that doesn't share RAM with video.
The ZX-81 had a "fast" mode where it would stop updating the screen to run a chunk of code at double speed.

a parallel channel for "tinkering" / it looks like some of the "Stellaris" ARM Cortex CPUs do essentially that, offering a glob of GPIO pins that can be configured to run as a bus called "EPI".
Interesting, I'll have to check them out.

what I want is a small physical computing device that is entirely self hosting, and it should be able to reasonably utilize standard keyboards, mice, and displays.
Pardon me again for dropping bare unexplained searchterms, but ...

(propterm, playpower, http://www.8052.com/, http://www.6502.org)

 

onlyonemac

Well-known member
MiniB SUX!
Certainly not "just as good", but not entirely useless, either. The main thing missing from it vs a real Beeb is video output.
The main thing missing from it vs a real Beeb is the fact that a real Beeb is a real Beeb!
(It's a bit like BMOW's "Plus Too": it's not a real Mac so it won't take the place of one! That's not to say it's useless though.)

 

Gorgonops

Moderator
Staff member
The main thing missing from it vs a real Beeb is the fact that a real Beeb is a real Beeb!
That's not a particularly constructive attitude. Why participate in a conversation about building your own computer if all you're going to do is trash homebuilds for not being as good as something that cost (in inflation-adjusted terms) ten or twenty times as much? (And by definition isn't going to be as collectable as the original because, well, it's not collectable, it's brand new.)

Certainly not "just as good", but not entirely useless, either. The main thing missing from it vs a real Beeb is video output.
I would say the lack of similar graphics capability *does* disqualify the thing from counting as a "Beeb" in the minds of most users of the original. Sure, if you were using your BBC micro for some sort of process control function or whatever, the "recreation" can do that just as well without wasting up to 20k of RAM on a useless framebuffer, but I imagine most "Beeb"-ers will be sorely disappointed it doesn't play Elite or whatnot.

(What I suppose it does do is demonstrate how device-independence was a strength of the original BBC firmware/operating system. It's pretty impressive that even the BASIC interpreter doesn't rely on pounding directly on the underlying hardware, most of which is missing in that board. Of course, it also demonstrates my point that homebrewing your own computer that incorporates better than port-mapped terminal graphics is apparently "hard" because almost nobody does it.)

Note that in most computers that *don't* use the 6502 and have to use bus arbitration for DMA video, it's generally transparent to the programmer as well; it just has some side effects with regard to instruction timings. Broadly speaking, the system is going to run slightly slower than its clock speed would indicate compared to a system that doesn't share RAM with video.
The ZX-81 had a "fast" mode where it would stop updating the screen to run a chunk of code at double speed.
The ZX-81 (descendant of the ZX-80) is one of those really unusual cases where the CPU is *directly involved* with updating the screen, so it's sort of in its own category in terms of the magnitude of the performance hit that comes with rendering the display.

In a typical DMA video system the graphics hardware consists of a pile of counters designed to generate memory addresses to be fed into the RAM as needed to fetch the values needed to generate the display. (These can be discrete parts, as in the very early systems, or part of a dedicated video IC like the very common 6845 CRTC.) In a system incorporating a CPU like the Z-80, 68000, or whatever, anything that doesn't have as predictable a bus cycle as the 6502, the possibility exists for the video hardware and the CPU to want RAM at the same time. You can solve that either by letting the CPU win (which will cause a dropout on the display), or you can tell the CPU to wait. (Obviously you can't just let the video system "win" without pausing the CPU; that will cause a crash when the CPU fetches garbage instead of a valid data byte.) If you're gruesomely interested in the details you can flip to page 16/27 of the TRS-80 Model III technical reference and read how it does it; it's about as simple an example as you'll find.

Systems which use "main memory" instead of a dedicated block may use slightly more granular schemes; the TRS-80 only generates a wait if the processor accesses the contended area, but if it does, it's stuck halting the entire system until the end of a line of pixels is reached. If the CPU is always executing code from the same RAM as video then either the CPU will be halted for what usually amounts to about 1/3rd-1/2 of the time (IE, it only ever executes code during the blanking intervals), or, if the clock/memory speed is fast enough, it may only be halted "as needed", with reads from video going through latches that grab a byte FAST and leave as much time open for the CPU as possible. (This still wastes a fair amount of CPU time, the exact amount depending on the speed of the memory subsystem. That's why video cards with dedicated memory are still considered more desirable than the alternative. Worst case, even a single-ported DRAM-based card will only make you wait a few ticks now and again, and only when you're specifically writing to it.)
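To make the wait-state idea concrete, here's a toy Python model of the TRS-80 Model III-style scheme described above. All the cycle counts are made up for illustration (nothing here is measured from real hardware, though 0x3C00 really is where the TRS-80 text screen lives): an access to the shared video region during the active part of a scanline stalls the CPU until the beam reaches blanking; everything else is free.

```python
# Toy model of TRS-80 Model III-style video contention. Numbers are
# assumptions for illustration, not measurements from real hardware.

LINE_CYCLES   = 128     # hypothetical CPU cycles per scanline
ACTIVE_CYCLES = 102     # portion of each line spent fetching pixels
VIDEO_BASE    = 0x3C00  # contended region (TRS-80 text screen)
VIDEO_TOP     = 0x4000

def access_cost(addr, cycle_in_line):
    """Extra wait cycles charged for one memory access."""
    if VIDEO_BASE <= addr < VIDEO_TOP and cycle_in_line < ACTIVE_CYCLES:
        return ACTIVE_CYCLES - cycle_in_line  # stall to end of active area
    return 0

print(access_cost(0x3C00, 10))   # video write mid-scanline → 92-cycle stall
print(access_cost(0x8000, 10))   # main-RAM access → 0
print(access_cost(0x3C00, 110))  # video write during blanking → 0
```

A program hammering video RAM pays heavily; one that stays in main RAM (or touches the screen only during blanking) pays nothing, which is exactly the "only generates a wait if the processor accesses the contended area" behavior.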

What makes the Sinclair ZX-80/81 special is that they lack those address counters, and in fact lack dedicated video memory at all. Instead the video display is a linked list of arbitrary size, and to display video the CPU is jumped to the beginning of the linked list at the start of a line and is fed NOPs by the video hardware, which causes the CPU to advance its own address bus a byte at a time until it hits the "end of line" marker. (the bytes read during those NOP periods are fed to the character generator and shift register.) It's a clever way to get rid of some hardware, but it comes at the cost of monopolizing the CPU entirely while displaying a frame of video. (And all the CPU's state needs to be pushed onto the stack and restored when it's time to run user code.) The ZX-80 is always in "Fast Mode", because it "manually" has to keep track of when to take time out of its busy schedule to render the display and it's not at all sophisticated about it. (Basically it only shows a display when it's waiting for user input.) "Slow Mode" on the ZX-81 uses a little bit of additional hardware in the timing circuitry to generate a non-maskable interrupt to grab the CPU's attention when necessary, telling it to drop everything and go render the display. Rendering the display monopolizes *most* of the CPU's time, but it does let it juggle running a user program at the same time. Slowly. With considerably more pain than a more "sophisticated" system experiences.
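A heavily simplified Python sketch of that display-file walk, for flavor. The 0x76 end-of-line marker is the Z80 HALT opcode the real machine uses; the function shape and names are invented for illustration. On real hardware the fetched bytes are forced to NOP on the data bus while the character codes go to the character generator.

```python
# Simplified ZX-81 display-file walk (illustrative sketch only).

NEWLINE = 0x76  # Z80 HALT opcode marks the end of a display line

def render_line(display_file, pc):
    """Walk one line; return (glyph codes sent to the char gen, new pc)."""
    chars = []
    while display_file[pc] != NEWLINE:
        chars.append(display_file[pc] & 0x3F)  # low 6 bits pick the glyph
        pc += 1
    return chars, pc + 1  # resume past the end-of-line marker

print(render_line([0x26, 0x2A, 0x76], 0))  # → ([38, 42], 3)
```

Because lines end at a marker rather than a fixed length, a mostly-blank screen takes less memory (and less walking), which is how the machine squeezed a display out of 1K.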

But again, blawblawblaw. (I just like talking about these systems because I've been mulling over ways to substitute a quick MCU for all that timing hardware, because I badly want DMA video on *my* homebrew computer. The smallest flexible alternative is something like a 6845, but they still need a fair amount in the way of support chips and in theory if I can make something like a Propeller do what I want I can reconfigure the video layout just by loading a new firmware image.)

Right now I am more interested in creating the bare minimum of necessary firmware. I just want a monitor program to start. I think it's important to make the system unobtrusive and modular; like a set of tools rather than a guiding framework. That's somewhat intentionally vague... I haven't thought much about it yet, but my basic feeling is that "useful set of tools" is more enticing than "esoteric operating system."
Well, there is sort of a reason why people want a real OS kernel. If you don't have one you are, in essence, stuck with having to make every program you run on the system the OS. To a first or second order of approximation that's how, for instance, the original MacOS worked, and it was *not* a trivial system to program for. (And also, obviously, if your system lacks a real OS, even if it's just a simple DOS, it's going to be very difficult for it to be "self hosting".) Arguably architectures like that can work well for embedded systems that require real-time response, but offhand the only application I can think of that calls for rich graphics in that category is something like a game console. Which... would certainly be an interesting thing to build, and if it's cool enough you might be able to find people wanting to hack on it, but without Googling up an exact list I can say that there's already quite a few DIY game console projects out there...

I honestly had no idea that the blobs were so pervasive. I knew that the RPi had one and that my beagleboard's omap3530 had one, but I didn't know they were so deeply integrated into the low-level functions. I thought they were just for graphics and DSP stuff. Maybe it's not such a big deal. Owning every aspect of the system isn't a philosophical requirement of mine...
I'm sort of "eh" when it comes to blobs, as long as the blobs can be distributed and loaded from an arbitrary operating system and present a documented API once loaded. That way the hardware isn't useless under alternative OSes, you're just stuck accepting that you can't fix any bugs in the peripheral; it works or it doesn't. Hardware from the day before blobs could have bugs too, and you'd be just as screwed if it's bad. They're EVIL when they're locked completely inside a binary driver for a particular OS and only speak ancient Greek to it and it alone, rendering the thing a doorstop under any other circumstances. Unfortunately the latter is *not* rare.

I know I have limits, but I'm trying to learn as much as possible, and I can cut things out and/or relax my requirements as I find things that are beyond my abilities. I can't solder BGA by hand, but I can manage QFP and SSOP just fine, and those are still somewhat common. My high frequency PCB design skills are lacking and some system integration topics give me trouble, but app notes are my friend there. Plus, with my preferred PCB service (OSHPark) I can get high quality 4 layer boards for $10 per square inch, so prototyping costs aren't too high.
So, regarding your original goal of using a 68000-series CPU, have you seen this:

Project Kiwi

By any chance? It uses mostly late-80's vintage components so it doesn't meet your "modern" specification, but it would probably be educational to at least see what he's done there as from a software environment standpoint it looks a lot like what you might be after.

You might also, just for the heck of it, want to get yourself a Propeller Demo board to play with if you haven't seen one in action. (I have one myself and have been slowly working through the tutorial.) It's only 32k of core RAM and has some irritating gotchas here and there, but it's also a *really* cool toy for banging on hardware at a very low level. There are several projects on the Propeller forums that involve adding SPI RAM and an SD card interface to the plain old demo board and transforming it into a simple little self-hosted computer, IE, something you can program in BASIC and load .spin or assembly code files on without tethering to a Mothership. (And there's also all the emulator projects, but I sort of think those are a little silly other than demonstrating how *fast* the basic hardware is.) I know it's puny compared to your ultimate ambitions but if nothing else it could be a good "how graphics work" primer.

 

Grackle

Member
Check out Minix. It's an OS designed to be part of an educational course in OS design (and, incidentally, the inspiration for Linux). The course books (by Andrew Tanenbaum) are available - if not free/online, at least via Amazon & ebay. And they are highly recommended by people who appear to know what they're talking about.
Yeah, Minix! I remember Bill Buzbee got it running on his wire-wrapped TTL CPU "Magic-1." Seems like a good option.

Lately, I've been trying to rein in my brainfarts by writing them down someplace, forgetting about them, and getting back to the "having a functional workshop again" project which will, in theory, make all the other projects possible. Though even that has been severely backburnered due to Life and Stuff[tm].
Hah, "life and stuff," always getting in the way. Big things (good things) are happening with me at work these days, which keeps me quite busy, but in a couple weeks my project will finally end and I will have some significant time off. That sounds like my chance!

a 500k Xilinx Spartan-3E board, so I'm going to see if I can put some sort of soft core on there.
Pardon me for not remembering or googling for names, but there are some CPU softcores out there which, rather than being direct copies of known hard-CPUs, are original designs optimized for synthesis on FPGAs in the least gate-space. Poking around opencores.org should turn them up.

Still, there should be plenty of space on a 3E for a core or two of some description.
Yeah, there are a lot of soft core options. I see that the OpenRISC minsoc (minimal system on chip) runs successfully on the spartan 3e, so I think I'll give that a shot tonight.

 

Bunsen

Admin-Witchfinder-General
CPU softcores out there which / are original designs optimized for synthesis on FPGAs in the least gate-space.
Hah, speaking of bigmessofwires: TinyCPU. There's even a readymade PCB for it, though it should also be easily fitted into your existing device.

There's another one I can't recall right now which is loosely based on one of the ARM ISAs. And ARM have an official softcore too, though Zeus only knows what the licensing situation is with that.

/ETA/ Here's one: http://www.actel.com/products/mpu/coremp7/default.aspx#software

/ETA/ And another: http://embdev.net/articles/ZPU:_Softcore_implementation_on_a_Spartan-3_FPGA - which also implements the open-source Wishbone expansion bus. Maybe a hardware implementation of Wishbone fits in with your plans, Grackle?

But again, blawblawblaw.
I look forward to your coredumps, myself, Gorgonops. You have a knack for explaining hellishly tricky low-level stuff in a clear and understandable way.

I've been mulling over ways to substitute a quick MCU for all that timing hardware, because I badly want DMA video on *my* homebrew computer. The smallest flexible alternative is something like a 6845
Considered an SAA5050? Or possibly more useful, its descendant:

The SAA5050 was later superseded by the SAA5243, a similar teletext video generator chip, not only a character generator but a complete stand alone video generator, controlled through I²C.
 

Bunsen

Admin-Witchfinder-General
BTW, anybody mind if I change the topic heading to "Homebrew microcomputers"? Possibly "and video generation"?

/ETA/ Done.

 
Last edited by a moderator:

Bunsen

Admin-Witchfinder-General
Mentioned on Hackaday:

This is where FTDI’s new chip, EVE, comes in. It’s a single chip video engine designed for QVGA (320×240) and WQVGA (400×240) displays with a parallel RGB interface. From the diagrams up on FTDI’s site, getting a display running is as simple as connecting a microcontroller via an I2C or SPI interface, and then hooking up the video lines. There’s also support for touch screen interfaces and audio out, so if you’re looking to build a graphical home automation remote EVE might be the way to go.
Though looking at the Yamaha V9990 video controller used in the Kiwi computer Gorgonops linked above ... 640x480 at 32k colours is not bad.

 

commodorejohn

Well-known member
Yeah, the 99xx VDPs after the 9918 are all-round pretty decent. The only irritating thing about them on the MSX is the whole access via dedicated I/O ports, but in a homebrew system with more than 64KB address space, it would be pretty doable to simply map VRAM right into main memory; then you'd really be set.

 

Bunsen

Admin-Witchfinder-General
Reading up further, it turns out that VRAM is dual-ported RAM, so maybe a stick from an old Mac/PC is a cheaper source than finding the bare ICs, and easier to work with.

Here's a homebrew VGA GPU board for a modular, backplane-based 6502 computer: http://hackaday.com/2013/01/11/veronica-vga-board-finalized/

Here's a self-hosting Propeller computer:

http://propellerpowered.us/index.php?route=product/product&path=25_73&product_id=55

And a FORTH-based 8 bitter:

https://sites.google.com/site/libby8dev/fignition

A different homebrew Kiwi, also 68k (DragonBall SoC)

http://scanlime.org/2011/07/kiwi/

StickOS® BASIC is an entirely MCU-resident patented interactive programming environment

http://www.cpustick.com/

 

commodorejohn

Well-known member
That's a modern, specialized definition of the term "VRAM," though. Dual-ported memory for video didn't become standard until much, much later on; heck, even getting dual-ported palette memory is a nice feature on, say, ISA VGA cards.

 

Bunsen

Admin-Witchfinder-General
I'm not saying dual-ported VRAM is "historically authentic"; I'm saying it's reasonably available, today, for homebrew projects.

was once commonly used / invented in 1980 / first commercial use in 1986 / Prior to the development of VRAM, dual-ported memory was quite expensive / Through the 1990s, many graphic subsystems used VRAM
The linked article also goes into some depth about how to use it.

Meanwhile, here's DVI-D video out from an FPGA: http://hackaday.com/2012/08/03/moving-an-fpga-project-from-vga-to-dvi-d/

Perhaps outputting DVI-D is a worthwhile approach, whatever the base platform. Fewer pins, less messing about in the analog domain, and same signals as HDMI, so a simple adapter would let you hook up to a modern TV.

 

Bunsen

Admin-Witchfinder-General
http://www.pliner.com/macminix/

Why MacMinix?
In an educational environment, MacMinix is ideal. It's easy to install, it runs on top of the Mac OS, and it starts up very quickly. (Just like MachTen.) You can recompile the OS, quit it, and restart the MacMinix application, very simply, without restarting the whole computer. And if you mess something up, you can easily revert to an old version. This is one advantage MacMinix has over the other versions of MINIX. Plus, it utilizes 68K assembly code, which may be more suitable for an educational environment than PowerPC, or even Intel instruction sets.
There's also an official, active but unfinished ARM port of Minix3.

 

Bunsen

Admin-Witchfinder-General
Here is a DIY GPU to drive a classic Mac display, using a cheap 72MHz ARM over USB.

http://spritesmods.com/?art=macsearm&page=5

[ Related thread ]

Here's a roundup of DIY 68k systems:

http://hackaday.com/2012/09/04/homebrew-68k-extravaganza/

/ETA/ from the comment thread:

The student manual for The Art of Electronics provide a series of labs that result in a stand alone computer based upon the 68k series of chips…
For real old-school homebrewing of the 68000, you have to read the original, early-80's "DTACK Grounded" newsletters, written by one of the first hobbyists to try to use it in a simple system, not a heavy Unix or fancy GUI box. Heavy with the hardware and software details. Be prepared to set some time aside! All written by one engineer with very strong opinions, right in the middle of the 80's PC craze.
http://www.easy68k.com/paulrsm/dg/
Also, from one of the linked projects:

Now the one truly remarkable feature the 68K CPU S-100 board has going for it is the fact that back in 1987 a guy named Alan Wilcox wrote a whole book describing a complete S-100 based 68K CPU board. The book is the only one of its kind -- describing the construction of an S-100 CPU board, and goes into considerable detail chapter by chapter building up a completely functional board. The book is an absolute "must read" for anybody building a 68K system. The title is "68000 Microcomputer Systems Designing & Troubleshooting" by Alan D. Wilcox. Prentice-Hall Inc. Publishers, 1987. Copies can be obtained from time to time on eBay and Amazon.
 

Gorgonops

Moderator
Staff member
For the record, if you were porting Minix to a homebrew 68k machine MacMinix probably would not be the best place to start for the very reason that it *is* encapsulated inside of a MacOS program. There were other 68k versions (Atari ST and Amiga, for instance) that were more "bare metal" and thus would provide more in the way of useful bootloader/initialization/etc. code.

For anything more powerful like an ARM system... not to be cruel to it, but Minix is pretty straight-up a waste of time. Minix is slower than Linux, by design, doesn't exactly have much of a track record when it comes to multi-platform-ity, and suffers from an absolute dearth of device drivers even on its native 386 platform. If you absolutely abhor Linux but still want something UNIX-like there are ARM ports of FreeBSD and NetBSD, they'd probably be your best options. (But peripheral support is sorely lacking at this point for most "multimedia" SoCs, in large part because of "blob issues". But drivers *may* actually happen someday, unlike the prospects for Minix.)

Here is a DIY GPU to drive a classic Mac display, using a cheap 72MHz ARM over USB.http://spritesmods.com/?art=macsearm&page=5

[ Related thread ]
This project (which I'd forgotten about) is indeed interesting, if for no other reason than it demonstrates the sort of overhead you can expect from "completely emulating" video hardware with a small 70-ish MHz ARM slaved to an external RAM via GPIO pins. You would have to optimize this a *lot* to squeeze color out of it, let alone substantially higher resolutions.

I look forward to your coredumps, myself, Gorgonops. You have a knack for explaining hellishly tricky low-level stuff in a clear and understandable way.
Thank you. Again, a lot of it is just thinking out loud. Occasionally when you do that you'll manage to have someone with a good idea or two overhear it. ;)

I've been mulling over ways to substitute a quick MCU for all that timing hardware, because I badly want DMA video on *my* homebrew computer. The smallest flexible alternative is something like a 6845
Considered an SAA5050? Or possibly more useful, its descendant:

The SAA5050 was later superseded by the SAA5243, a similar teletext video generator chip, not only a character generator but a complete stand alone video generator, controlled through I²C.
The SAA5050 still requires something like a 6845 for video address generation. It's still a semi-interesting chip; it combines a "character generator" (which in most systems was literally nothing more than a ROM full of character glyphs) with the actual pixel generation hardware/shift register, so if you're happy with the text-mode character set it's a way to save a few chips. (Its successor basically has the 6845 built in. Again, if you're happy with a simple text display it'll do but if you're after graphics or want your own character set it's not ideal.)

The outline of my Propeller idea starts with the assumption that the Propeller itself is a *little* too slow to just act like a memory device itself, IE, present itself so a "master CPU" can just shove bytes into its internal HUB RAM. Projects like the Prop6502 deal with this by actually making the "master" CPU a slave from a clocking perspective, but from what I've seen of that approach it just doesn't seem to scale too well. My idea is to instead have the "master" CPU talk to plain old SRAM and have the Prop regularly copy segments into its internal buffer. For "text/tile" systems like the Commodore PET or TRS-80 doing that would actually cut the memory bandwidth required for video a *lot*. For instance, the TRS-80's 64x16 video system only uses 1K of RAM, but each character is 12 scanlines tall, so each 64 bytes on each character line is scanned 12 times, making for a total of 12K's worth of memory reads for each frame. At 60 FPS NTSC that's 720KB/sec. If you only had to read the 1K once per frame then you've cut the speed requirement to 60KB/sec. That's so low that you could easily allow the Prop to only steal a RAM cycle every ten CPU cycles or so and operate somewhat asynchronously. In *theory*.
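The bandwidth arithmetic above, written out with the post's own numbers (64x16 text screen, 12 scanlines per character row, 60 frames per second):

```python
# The caching argument in numbers: a 64x16 text screen is 1 KB, but a
# hardwired scan-out re-reads each character row 12 times per frame.

COLS, ROWS = 64, 16
SCANLINES  = 12  # scanlines per character row
FPS        = 60

screen_bytes    = COLS * ROWS                # 1024 bytes (1 KB)
reads_per_frame = screen_bytes * SCANLINES   # 12288 bytes (12 KB)
uncached_bps    = reads_per_frame * FPS      # hardwired scan-out
cached_bps      = screen_bytes * FPS         # read once per frame
print(uncached_bps // 1024, cached_bps // 1024)  # 720 60 (KB/s)
```

The 12x reduction is what makes the "steal a RAM cycle every ten CPU cycles or so" pacing plausible.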

Obviously the limitation is that if you're doing full-screen graphics instead of characters/tiling pulling a whole framebuffer puts the bandwidth requirements back to even with "hardwired". My thought there is for graphics applications the Prop could be set up to accept *commands* written to some area in RAM and behave basically like a 9918A or similar. (Code already exists to emulate sprite engines like the Commodore VIC-II.) Unused HUB RAM on the Prop could be used as a tile cache and sprite buffer, meaning after an initial load the screen could be manipulated very quickly with minimal bandwidth requirements. (The design could include the capability for the Prop to write data back to RAM, in order to update things like sprite position, collisions, mouse cursors?, etc. That would also, again in theory, make it possible to do things like let a PS/2 keyboard on the Prop emulate a memory-mapped matrix with no additional hardware.)

Anyway. Again, blawblaw.

 

Bunsen

Admin-Witchfinder-General
For the record, if you were porting Minix to a homebrew 68k machine MacMinix probably would not be the best place to start for the very reason that it *is* encapsulated inside of a MacOS program.
Ah, yeah. Didn't explain my train of thought there. I was thinking (for myself, anyway) that as an educational venture, building and tweaking Minix inside an app would be more appealing than doing so on DIY hardware that I would have to concurrently debug. And possibly useful in terms of understanding what the heck is involved when it comes time to do it in hardware.

not to be cruel to it, but Minix is pretty straight-up a waste of time.
It's mostly building it in conjunction with the Tanenbaum courseware that appeals to me, not setting it up as a rock-solid end-user experience.

My idea is to instead have the "master" CPU talk to plain old SRAM and have the Prop regularly copy segments into its internal buffer.
OK, pardon me if I've got this sideways -- why copy it into the Prop? So you can use the Prop's dedicated video timers/libraries and whatnot?

I'm imagining something where the Prop tickles the SRAM in just the right way to make it clock the data straight out to video, without having to ingest and regurgitate it.

My thought there is for graphics applications the Prop could be set up to accept *commands* written to some area in RAM
Sort of like -- "You there! I need a circle yea big, here and here, and move sprite #3 a smidge to the left. You handle the details, I have stuff to do." ?

 

Gorgonops

Moderator
Staff member
OK, pardon me if I've got this sideways -- why copy it into the Prop? So you can use the Prop's dedicated video timers/libraries and whatnot?
I'm imagining something where the Prop tickles the SRAM in just the right way to make it clock the data straight out to video, without having to ingest and regurgitate it.
So, my reasoning goes that if you're "tickling the SRAM directly" so it spits out bytes on cue then you're again having to make the memory system a slave to the very rigid timing requirements necessary for the video system. Which... on one hand doesn't seem like it should be a big deal, since that's how those 80's 8 bit computers were designed originally, but on the other it means that you're not really gaining much by using the Prop; you'll basically be using it as if it were something like a 6845.

The "caching" idea is that you can take a more relaxed approach: the Prop needs to grab one or two K inside of every 1/60th of a second window, but it could do it by, say, halting the CPU *once* and doing the transfer in one big chunk at the end of every vertical refresh cycle. IE, instead of having to master the intricate cadence of tossing the memory bus back and forth between CPU and video once every X cycles at precisely the right time, you just halt everything for a thousand cycles once you're "offscreen", suck the contents of the RAM locations allocated to video into the Prop at warp speed (or the closest it can achieve), and then let everything go about its merry way again. That does have some implications for timing issues if you were explicitly trying to emulate an existing system, but I don't see it being a huge deal, and the "pauses" would at least be quite predictable.
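A quick back-of-the-envelope check of the burst-copy idea. Every number here is an assumption (bus speed, blanking window, cycles per read are guesses, not measurements), but it shows the kind of margin involved:

```python
# Does a once-per-frame burst copy fit inside vertical blanking?
# All figures below are assumptions for illustration.

BUS_HZ          = 4_000_000  # assumed shared-bus clock
VBLANK_S        = 1.3e-3     # assumed vertical blanking window per frame
BYTES           = 2048       # worst-case text buffer to copy
CYCLES_PER_BYTE = 2          # assumed cycles per burst read

cycles_available = BUS_HZ * VBLANK_S        # ~5200
cycles_needed    = BYTES * CYCLES_PER_BYTE  # 4096
print(cycles_needed <= cycles_available)    # True: the burst fits
```

Under these assumptions the 2K worst case squeaks in, which is consistent with "halt everything for a thousand-odd cycles once you're offscreen".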

Anyway, how well it would actually work in practice depends on just how quickly the Prop could suck bytes in from an SRAM. One thought I had would be you'd supplement the Prop with an 8 bit latch and a binary counter like a 74HC590; for a typical 16 bit address bus application, before starting a transfer you'd load the high byte (let's call that the page address) into the latch (which sets the top 8 bits of the address bus) and reset the counter to zero. An actual transfer would consist of setting the 8 GPIO pins on the prop to inputs (or outputs, depending on whether you're writing or reading) and then just *tearing* through 256 memory locations at a time by triggering the increment line on the counter (which is wired to the bottom 8 bits of the address bus) for each transaction. Hardware-assisted block transfers like this are essentially how the optional SRAM card for the "Hydra" game system works, and the literature for that claims that you can read from that card into a COG as fast as a COG can access HUB memory. (Which means in theory you could clock the RAM *really fast* when the Prop controls it, while using a more leisurely clock for the master CPU. Another bonus of using the latch/counter system is you only need about a dozen GPIO lines out of the Prop's 32, instead of nearly all of them, as you'd burn trying to use them directly as address/data lines, and it's still less complicated than the alternative of using multiple latches and multiplexing. That leaves you with lots of I/O lines to use for other things.)
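A pin-level sketch of that transfer scheme, written as Python pseudo-hardware rather than real Spin/PASM (the class and function names are invented; the Latch and Counter stand in for the 8-bit latch and 74HC590-style counter):

```python
# Latch + counter burst transfer: the latch holds the page (A8..A15),
# the counter drives the low address byte (A0..A7), and the Prop just
# pulses the count line 256 times per page. Illustrative model only.

class Latch:
    def __init__(self):
        self.value = 0
    def store(self, v):          # latch the page address (A8..A15)
        self.value = v & 0xFF

class Counter:                   # stands in for the 74HC590
    def __init__(self):
        self.value = 0
    def reset(self):             # counter clear -> A0..A7 = 0
        self.value = 0
    def pulse(self):             # one clock on the increment line
        self.value = (self.value + 1) & 0xFF

def read_page(sram, page, latch, counter):
    """Burst-read one 256-byte page out of the external SRAM."""
    latch.store(page)
    counter.reset()
    buf = bytearray()
    for _ in range(256):
        buf.append(sram[(latch.value << 8) | counter.value])
        counter.pulse()
    return bytes(buf)
```

The point of the structure is visible in the loop: the Prop never drives the address bus directly, so the transfer costs one latch write, one counter reset, and 256 pulses, using only a dozen or so GPIO lines.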

Again, that's totally just the *idea*.

Sort of like -- "You there! I need a circle yea big, here and here, and move sprite #3 a smidge to the left. You handle the details, I have stuff to do." ?
Basically, yeah. The truly classic 8 bit graphics chips (9918A, VIC-II, GTIA/ANTIC, etc.) all have capabilities like that. ("Okay, move the space invaders left two pixels, let me know if the missile fired by the player hit any of the invaders, and if not go ahead and move it up a few pixels.") The cool thing about the Propeller, of course, is it's a fully general purpose CPU, so in theory you could order it to do things like line/circle drawing, vectors, etc. (Also, you could wire a mouse to the Propeller and let the Prop completely handle the cursor generation, tracking, and even some of the high-level aspects of a GUI like window drawing and text handling. Think of it as a sort of Fisher-Price My First NeXT.)
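For flavor, a hypothetical command-buffer protocol of the sort being described: the master CPU drops small command records into shared RAM and the Prop polls and executes them. The opcodes, record layout, and names are all invented for illustration.

```python
# Invented command-buffer protocol sketch for a "Prop as graphics
# coprocessor" design. Nothing here corresponds to a real chip's API.

MOVE_SPRITE = 0x01
DRAW_CIRCLE = 0x02

def dispatch(cmd_queue, sprites):
    """Execute queued commands against a sprite table {id: (x, y)}."""
    for cmd in cmd_queue:
        op = cmd[0]
        if op == MOVE_SPRITE:
            _, n, dx, dy = cmd
            x, y = sprites[n]
            sprites[n] = (x + dx, y + dy)
        elif op == DRAW_CIRCLE:
            pass  # would rasterize into the tile cache; left as a stub

sprites = {3: (100, 60)}
dispatch([(MOVE_SPRITE, 3, -1, 0)], sprites)  # "a smidge to the left"
print(sprites[3])  # → (99, 60)
```

The win is the same as with the 9918A-style chips: the master CPU writes a few bytes per frame instead of pushing pixels, and the coprocessor keeps the state (sprite positions, tiles) on its side of the bus.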

 

onlyonemac

Well-known member
Also, you could wire a mouse to the Propeller and let the Prop completely handle the cursor generation, tracking, and even some of the high-level aspects of a GUI like window drawing and text handling. Think of it as a sort of Fisher-Price My First NeXT.
Now that would be cool! :cool:
 

James1095

Well-known member
How about looking at the video generation circuitry used in the B&W compact Macs? Schematics are out there; it's a simple, elegant design, built from commodity parts. It would make a reasonable starting point.

Another option is to use one of the various CRT controllers that were used in early video cards and some embedded appliances like word processors and terminals. Or you could try driving an ISA video card.

Seems like a giant project though, you might have better luck starting with something a bit simpler, maybe make a single board computer with just a serial port for a console to start with. If you expose the CPU bus on an expansion connector, you can interface anything you like to it later on.

 

Grackle

Member
Hey all, I haven't popped in here in a while, so I wanted to give a quick update.

I have my Digilent Nexys 2 FPGA board set up, and I've got all the development tools installed in linux. I tested out some VGA signal generation code from a friend, and it works! I'm not going to use his, though; I need to spend a little time to learn verilog on my own, and then I'll write my own implementation. Luckily VGA isn't exactly complex, so I think I'll be able to do that over the weekend (or maybe sooner).
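For reference, the standard 640x480@60 timing those Verilog counters have to implement (these are the usual industry values, stated here as a sanity check rather than taken from the friend's code):

```python
# Standard 640x480@60 VGA timing: the horizontal and vertical counters
# need to wrap at 800 pixel clocks and 525 lines respectively.

H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33
PIXEL_CLOCK = 25_175_000  # Hz

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK  # pixel clocks per line
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK  # lines per frame
refresh = PIXEL_CLOCK / (h_total * v_total)
print(h_total, v_total, round(refresh, 2))       # 800 525 59.94
```

Once the two counters wrap correctly, hsync/vsync are just comparisons against the front-porch and sync-width boundaries.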

The current plan is to let the FPGA handle video generation and framebuffer, and use an atmel SAM7SE as the CPU. $7 for the FPGA (which will be capable of VGA and HDMI) and $6 for the CPU. Not bad!

Hooray! I was afraid I wouldn't like playing with the FPGA, but this is pretty cool. :approve:

 