
Homebrew microcomputers & video generation

Grackle

Member
Hey all, I don't know if any of you remember me, but quite a while back I was interested in building my own 68k microcomputer. The best way to understand what I was shooting for is to imagine what an Apple II would be like if it had a 68k processor. Unfortunately, that path kinda petered out; 68k processors are expensive now, and I wasn't particularly happy with Freescale's ColdFire offerings. The ones with the peripherals I wanted were expensive and complex, and the architecture seemed a mess from an assembly language programmer's standpoint. I found those prospects uninspiring, and I lost interest.

Anyway, now quite a while later, I have picked up the project again. I took a liking to ARM, and I decided to use the NXP LPC2420. It's a ROM-less ARM7TDMI CPU in a 208-pin LQFP package, with an external memory interface that supports both static memories and SDRAM. It also has four external interrupts and a bunch of handy peripherals. Perfect!

Right now I'm trying to settle down on the details of the design and for the most part I have been able to make progress, but I'm stuck on the video generation hardware. Video framebuffers are big and they have to be moved around a lot. Maybe I'm just making a big deal out of something that isn't, but it seems daunting even with my 72MHz ARM processor, and that makes me wonder how Apple did it with their much slower 68k machines.

Consider 640x480@60Hz, 8bpp. If you update the framebuffer on every vertical refresh, that's 640 * 480 pixels * 1 byte per pixel * 60Hz, which comes to approximately 17.5MB/s. NuBus can do 40MB/s in bursts, but it can only handle an average of 10 to 20MB/s. On an old NuBus Macintosh, that 17.5MB/s would be a considerable chunk of total bus bandwidth. So how did Apple do it? Did they just deal with that throughput fine, or was something else going on, like lower framerates and dirty rects and other trickery as necessary?
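Spelling the arithmetic out, in case anyone wants to check it:

```python
# Back-of-the-envelope framebuffer bandwidth for a full redraw every frame.
width, height = 640, 480
bits_per_pixel = 8
refresh_hz = 60

bytes_per_frame = width * height * bits_per_pixel // 8   # 307,200 bytes
bytes_per_second = bytes_per_frame * refresh_hz          # 18,432,000 B/s
mib_per_second = bytes_per_second / (1024 * 1024)        # ~17.6 MiB/s
print(f"{mib_per_second:.1f} MiB/s")
```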

 

commodorejohn

Well-known member
Bus bandwidth would only come into it if the video generator was pulling its framebuffer from system memory over the expansion bus, which I've never, ever seen. A NuBus video card would almost certainly have its own memory, and onboard video wouldn't pull it over NuBus. And MacOS certainly doesn't rewrite the whole framebuffer every refresh - most games don't even do that.

Also, trying to generate video with a CPU (which, if I read your post correctly, is what you're doing) is a much heavier affair than using a proper video generator IC. You can pretty much do video generation with a DMA controller and a shift register.

 

Bunsen

Admin-Witchfinder-General
Dual-ported RAM is one oldschool solution. Usually it has a parallel (8 bit, for example) port on one side, and a serial port on the other. The CPU can address and alter bits/bytes randomly from the parallel side, while the RAM is also outputting an appropriately clocked serial stream to the video circuitry on the serial side.

But I get the impression that you are planning to have a single RAM space and a single type of RAM on the board?

BTW, direct video generation from micros is totally possible. Even a lowly PIC-18. Admittedly, that's still more powerful than a 128k Mac.

You might also investigate the Parallax Propeller, which has timers and whatnot, and prebuilt libraries, set up to do PAL, NTSC and VGA at thousands of colours. There are various projects around which use it as a sort of multipurpose I/O buffer to another CPU/micro.

Regarding the original Macs, it's worth remembering that they were one bit per pixel ;)

Potentially useful threads:

viewtopic.php?f=7&t=11040

 

Gorgonops

Moderator
Staff member
To reiterate: yes, the NuBus bandwidth doesn't count here; it's the bandwidth of the dedicated memory on the video card itself that does. Many late 386/early 486 machines had 1MB video cards capable of 1024x768x8 8514/a resolutions stranded on an ISA bus only capable of doing 4-5MB/sec with the aid of a stiff tailwind. With a good 2D accelerator those machines could handle tasks like word processing acceptably, but trying to display video streams at anything more than postage-stamp size turned things into a slideshow. Even Macs that used main memory for video refresh (the original black-and-white toasters, the Macintosh IIsi/ci, the Power Macintosh 6100/7100, etc.) don't waste the CPU's time stuffing bytes into the output DAC. The video hardware has its own address counters and "steals the bus" as needed to grab bytes from RAM on its own. (see below.)

Dual-ported RAM is one oldschool solution. Usually it has a parallel (8 bit, for example) port on one side, and a serial port on the other. The CPU can address and alter bits/bytes randomly from the parallel side, while the RAM is also outputting an appropriately clocked serial stream to the video circuitry on the serial side.
Actually, most dual-ported RAM isn't "serial" on the output side (that would make it difficult to use the same RAM chip for different video geometries), but it wasn't unusual for the output port to have an increment-on-read address generator so successive words could be read "in a serial fashion" without having to feed an address for every cycle. Most inexpensive video systems just used normal SRAM or DRAM and would either time multiplex it (this was extremely common on 6502-based machines, where the CPU only used the address/data bus every other clock cycle and thus allowed another device to share the same memory pretty easily) or just force the CPU to wait until the end of a scanline (or even until the vertical blanking interval) if it tried to access RAM at the same time as the video hardware.
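A toy model of that 6502-style time multiplexing, assuming an Apple II-like 40 bytes fetched per scanline (the number is just for illustration):

```python
# Toy model of time-multiplexed video RAM sharing: the video address
# counter fetches on phase 0 of every clock cycle and the CPU gets
# phase 1, so the two never collide and no arbitration is needed.
SCANLINE_BYTES = 40   # assumed bytes per scanline, Apple II text-row style

def run_scanline(cpu_wants_access):
    video_fetches, cpu_accesses = [], []
    addr = 0
    for cycle in range(SCANLINE_BYTES):
        # Phase 0: video side, increment-on-read address counter.
        video_fetches.append(addr)
        addr += 1
        # Phase 1: CPU side, free to touch the same RAM without conflict.
        if cpu_wants_access(cycle):
            cpu_accesses.append(cycle)
    return video_fetches, cpu_accesses

# Even if the CPU hits RAM on half its cycles, video still gets every byte:
fetches, cpu = run_scanline(lambda c: c % 2 == 0)
```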

(In some very old systems, like the TRS-80 Model I or the original IBM CGA card, the CPU actually has priority, and if it writes to the video area during screen generation it will cause a dropout, described as "snow" or "static", in the resulting picture.)

BTW, direct video generation from micros is totally possible. Even a lowly PIC-18. Admittedly, that's still more powerful than a 128k Mac.
"Hybrid" systems where the video timing itself was generated in hardware but the video data was fed to it by the CPU during every refresh weren't terribly uncommon in the early years of computing. (For instance, the Sinclair ZX-80 and the Atari 2600 both do this to differing degrees.)

But... yeah, in any case, back to the original topic: Rolling your own video output is probably the hardest part of making a homebrew computer, full stop. If you're going to try to build this yourself, the first thing you're going to have to do is define what your target is. You mention the Apple II; that machine in its original form only used 7k for its framebuffer in high-res mode. (It's really basically a monochrome machine that uses some ugly tricks to "colorize" the output, which gives the ][ its distinctive messy graphics display.) Refreshing that display 60 times per second requires less than 500KB/s of bandwidth, which coincidentally can essentially be had for "free" when a 1MHz-clocked memory is paired up with a 6502 CPU. Clearly you're hoping for more than that, but be realistic: what do you actually want/NEED to achieve? Are you trying to make a simple homebrew programming/interfacing machine, or are you aiming at a roll-your-own graphics workstation? And if it's the latter, are you talking mid-1980s-quality graphics or something that wouldn't get laughed off the stage today?
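The Apple II numbers, roughly (ignoring the screen holes in the hi-res memory map):

```python
# Apple II hi-res: 280x192 at 1 bit per pixel.
width, height = 280, 192
bytes_per_frame = width * height // 8    # 6,720 bytes -- the "7k" figure
bytes_per_second = bytes_per_frame * 60  # ~403 KB/s at a 60Hz refresh
```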

Most of the "simple" homebrew systems you see out there don't bother with video at all. They either use a serial text console or they go one step further and interface an MCU like an AVR, PIC, or Propeller which can reasonably easily do video output with minimal hardware and use that like a built-in terminal. (From a logical standpoint the difference is slight.) Doing something like that does *not* give you easy access to any graphics capability. Your video device will be interfaced as one or more simple 8 bit ports, which means if you do try to work any graphics into the design any pixel setting or line drawing you do from the main CPU will have to be executed as a series of commands shoved byte-by-byte into the slave processor.

(Technically you can still manage some "pretty good" graphics that way. The TMS 9918A graphics chip used in machines like the ColecoVision game console and the Japanese MSX computers didn't allow direct memory access to its "private" frame buffer from the main CPU. Of the super-cheap ways to do video the Propeller is probably the most capable as it can, with almost no hardware, manage super-VGA-sized monitors with a palette of 64 colors, but it only has 32k of RAM, some of which will be required for code, and is thus mostly limited to "tile-based" video displays. You can't really do a full-screen Mac-style bitmap at anything but fairly low resolutions. Best you could manage would be something around 512x384 monochrome, much lower in color. Even a 320x200 8 bit VGA screen requires 63k of RAM, and it's *not trivial* to use external RAM on the Propeller for anything, let alone video generation. But if you're happy with 80's game-console-quality graphics and TTY text you could do worse.)
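The Propeller RAM budget works out like this (32k hub RAM, ignoring the chunk you'd need for code):

```python
# Which full-screen bitmaps fit in the Propeller's 32KB of hub RAM?
HUB_RAM = 32 * 1024

def framebuffer_bytes(w, h, bpp):
    return w * h * bpp // 8

mono_512x384 = framebuffer_bytes(512, 384, 1)   # 24,576 bytes -- fits
vga_320x200x8 = framebuffer_bytes(320, 200, 8)  # 64,000 bytes -- doesn't
```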

The next option would be to interface a DAC and use the main CPU to shove bytes into it under software control. Those bandwidth numbers you tossed out there don't sound that intimidating for a 70MHz ARM, honestly. For reasons that are difficult to explain (and in some cases over my head) an ARM CPU is probably *not* a good choice to try to "software emulate" the entire video stream, but... glancing at the datasheet for the part you named, maybe you could do it with some of the GPIO pins. Alternatively you could use external circuitry (like maybe a Propeller?) to generate the video timings and have that circuitry fire off an interrupt at the start of each scanline to tell the main CPU to start fetching bytes from the framebuffer and shoving them into a FIFO leading to a DAC, where they'll be combined with the timing signals and sent to the monitor. I imagine this would be totally doable... if somewhat inefficient (your CPU will be occupied roughly half the time shovelling bits from RAM out the door) and something of a nightmare to program.
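A rough sanity check on that "roughly half the time" figure, assuming an optimistic two cycles per byte moved (a made-up number for illustration, not a measurement -- real loop overhead and wait states would make it worse):

```python
# Rough CPU-load estimate for feeding a FIFO from the framebuffer in
# software: 640x480@60Hz at 8bpp on a 72MHz ARM.
CPU_HZ = 72_000_000
CYCLES_PER_BYTE = 2                  # assumed: one load + one store per byte

bytes_per_second = 640 * 480 * 60    # 18,432,000 B/s
copy_cycles = bytes_per_second * CYCLES_PER_BYTE
cpu_load = copy_cycles / CPU_HZ      # ~0.51 -> "roughly half the time"
```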

The final step up would be to build your own DMA video system. As described by commodorejohn, a simple monochrome system *could* consist of not much more than a handful of logic, a few crystals, and some analog voodoo. The analog voodoo is the hard part, and the fact that you're looking at a CPU with a 70+MHz clock speed means this will be a pretty intimidating homebuild. (Electronics start getting "hard" at double-digit MHz speeds. Something like the original Mac's 512x342 resolution you could get away with wire-wrapping, but even 640x480x256 requires pushing data about *20 times faster*.) This would get *really* complicated if you wanted to use SDRAM controlled by the on-chip RAM controller; your homebuilt "video card" would have to operate synchronously with it and essentially have to "bus master" and do arbitration at the full speed of SDRAM, which, again, is going to be *really fast* by homebrew standards. Your life would probably be easier if you went with a dedicated framebuffer, but even then you're looking at building a high-speed VGA card from scratch.

Frankly if big colorful bitmaps are your priority I'd say dump your current target chip and pick an ARM SoC that has a VGA or LCD controller onboard. They're a dime a dozen and solve the whole problem. (Heck, you're not even stuck with analog VGA; you can get HDMI easily enough.) But... if you're going to do this, why not just pay $30 for a Raspberry Pi and call it a day?

Out of curiosity, have you built a homebrew computer before or would this be your first?

 

Grackle

Member
Hi guys, thanks for all the responses! You've given me a lot to think about.

To start off I'll try to explain my goal a little better:

I'm taking inspiration from the old 80s microcomputers. I want to build something simple & hackable with some external bus expansion slots, video output, sound output, keyboard, serial, etc, and a ROM toolbox with monitor and high level language interpreter and useful routines and drivers for the on-board hardware.

That's what comes to mind when I think of the Apple II... It's a highly expandable yet relatively simple computer that has everything you need to get started built right into the ROM. I want to build something like that, but with common and inexpensive modern components.

Gorgonops asked if this is my first homebrew computer project. Yeah, it is. This is something I've wanted to do for a long time, and I've done a lot of reading about computer system architecture stuff, but that and some computer and microcontroller programming experience are all I have going for me. At best, I'm familiar with some of the concepts involved.

Anyway, getting back to video. Originally I had looked at dual ported RAMs with the intention of using them with some sort of external video generator, but they seem uncommon and the ones I could find (on Digikey) were way too expensive to even consider. So, I went off trying to figure out how to manage with just the CPU and system memory, and when I figured out the throughput required, I panicked and came here for help. :|

It seems that using the CPU to generate the signal is pretty much out. It's entirely possible to do it, either with lots of "analog voodoo" (hah) like Gorgonops described, or using a video DAC with a built-in timing generator like the TI THS8200 (something I had looked at earlier), but doing so would be a monumental waste of CPU time. Even with DMA like Commodorejohn suggested, the memory/expansion bus will have a lot of work to do. Overall... not a good way to do it.

So if I can't use the typical method with dual ported RAM (because it's too expensive,) and I can't use the CPU and system memory (because that's just plain terrible,) what's left?

The RPi, or devices like it... Very tempting. My biggest argument against that option is that I really want an expansion bus on the external memory interface. If I made my own board, there's the added complexity of package-on-package devices and BGA devices in general, even higher clock speeds and faster edge rates than what I'm dealing with now, etc.

Next up... an FPGA. It could handle the timing and talk to a simple DAC (or DACs), and it could interface with its own framebuffer memory. With a fast enough device and fast enough memory and proper synchronization, my CPU could talk to the FPGA as if it were an external memory. Plus there's all sorts of flexibility there, like ian.finder said, I could even use an FPGA CPU core. Hell, I could probably do everything on the FPGA. I'm a little hesitant to pick this option though, it would be a steep learning curve for me.

At this point I was going to talk about using a second CPU to generate video signals, but while I was searching around I discovered a couple other NXP processors that sound promising. First there's this Cortex M4 with M0 coprocessor and LCD controller in an LQFP208 package. Then there's this ARM720T SoC with LCD controller in an LQFP208 or LQFP176 package. Somehow I missed these devices in my previous searches. I'll have to mull over the datasheets; maybe they're a better fit than the LPC2420.

 

commodorejohn

Well-known member
An FPGA could certainly work, if you know how to create hardware from programmable logic or are inclined to learn. A second CPU could also work, but is kind of a waste of resources. If you really want to roll your own video card, then I'd say you're more than likely going to have to adjust your expectations down from the SVGA world. You can get some decent oldschool video generators in parts form (the TMS9918 VDP is a fairly capable little NTSC chip, and if you get the 9938 instead you even get expanded capabilities and RGB output instead of composite.) Or you could get really hardcore and build a video generator out of logic. Analog video really isn't that complicated to understand - composite video only gets complicated once you start adding color into the mix, and RGB/VGA doesn't even have that.

In any case, though, if you're really attached to the idea of high-res and high-color, you're probably better off going for a microcontroller with built-in VGA or HDMI.

 

Gorgonops

Moderator
Staff member
Okay...

For what you want to do I'd probably seriously recommend getting a Raspberry Pi or similar to use as a development platform and tackling this from a software angle. I was about to suggest something else, but then I noticed that the spec was to use "modern components", and... to be honest, if that's in your desired feature set I don't think you *need* an expansion bus based on the memory interface. Strictly speaking, why would you? Many of the sorts of devices today you'd interface to a homebrew computer communicate via I2C or SPI. The GPIO connector on a Raspberry Pi has pins pre-defined for both sorts of bus, and by adding a demultiplexer onto some of the remaining GPIO pins you could add a *lot* of SPI chip selects. (On the far end, breaking SPI out into an 8 bit parallel port requires little more than a single chip, so it's a perfectly legitimate way to interface "simple" devices.) And for faster peripherals you have USB 2.0. Yes, it's "only" good for 40-50-ish MB/sec, but that's faster than the memory bus of most 486-class computers, more than good enough for things like sound cards, storage devices, image capture devices, etc, all of which can be had off the shelf for the price of writing a driver. You're not going to do much better than that with a parallel expansion bus unless you make it fast, wide, and *way* too difficult to interface inexpensive homebrew devices to.
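The demultiplexer trick in sketch form: a 74HC138-style 3-to-8 decoder (the part choice is just an example) turns 3 GPIO pins into 8 active-low chip selects:

```python
# Model of a 3-to-8 line decoder used to expand SPI chip selects:
# n select pins give you 2**n chip-select outputs, only one low at a time.
def decode_3to8(sel):
    """Return the 8 active-low outputs for a 3-bit select value."""
    assert 0 <= sel < 8
    return [0 if i == sel else 1 for i in range(8)]

# Driving the select pins to 5 pulls only chip-select 5 low:
outputs = decode_3to8(5)
```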

(As I alluded to earlier, you can be pretty sloppy when you're playing with one or two Mhz signals like those present on an Apple II bus connector. You just can't do that when you're looking at the speeds involved with the memory bus on a remotely modern system. Again, if we take outperforming USB 2.0 as our baseline specification you're talking about needing to put together a parallel bus that runs faster than NuBus, Microchannel, or EISA, all of which required a big investment in transceiver logic for every card in the system and very careful design to avoid electrical noise on the card edge connectors. The other option would be to run the bus much slower than system memory, doing something like ISA for the slots, but if you do that then you're not gaining much over simply using SPI. )

Strictly speaking, if you're looking at SoCs you're not really talking about building up a system "from scratch" anyway. The one reason I could see for pursuing making your own board would be to get the maximum number of GPIO pins. If the point of the system is to do real-world I/O and the "expansion bus" is simply a means to that end, then if you can get the GPIO pins directly you dispense with the need for an expansion bus. Win!

Anyway. Personally I'd probably suggest, if you do want to go all out with making your own parallel bus system, starting with something more accessible, like a microcontroller or Z-80 based system. For most of what a home experimenter would want to do an ARM system is probably *vast* overkill from a computational standpoint anyway, and it's going to be *much* easier to prototype something that only requires 40 or so wires for a completely non-multiplexed bus.

 

Grackle

Member
Man.

Everything you're saying is sensible, and that really bums me out. I suppose bare microprocessors don't really make sense for manufacturers to make anymore. Like you said, using a SoC isn't really building a system "from scratch," but it's all I have to work with if I want to use a modern CPU. But again, like you said, all the interesting peripheral chips now are SPI and I2C anyway. Higher bandwidth stuff gets integrated into the microcontroller, so you don't really see much in the way of parallel devices anymore.

But... Just to defend my idea for one last moment... I don't think a fast bus is necessarily impractical for a hobbyist system. My plan was to use a couple universal bus transceivers on a card with a prototyping area, so you'd have a 32 pin IO port along with the other signals each card gets (the noteworthy ones being I2C and an interrupt line). That would give you a nice and safe way to talk to the CPU with just about any little bits of hardware you might plan to use, and the price would be very low (those chips are less than a dollar in quantity.)

Side note: My friends used to call me the "dream crusher" because I would bring harsh reality to their wild plans. So... I guess this is fair. Dream crusher. :p :lol:

 

Grackle

Member
Oh, re: continuing this as a software project. I think that's what I'll do. Well, sort of. The only development board I have right now is a 500k Xilinx Spartan-3E board, so I'm going to see if I can put some sort of soft core on there. I'm hoping it's not as difficult as I imagined.

 

onlyonemac

Well-known member
The BBC Micro used a system where the microprocessor was clocked on only every second clock pulse (or something to that effect), so the RAM was accessed, on one pulse, by the processor, and on the alternate pulse, by the video circuitry.

I think it's kind of like the "Double-Port" thing everyone's on about.

All I know is: it worked beautifully, and didn't interfere with programming, as it was completely transparent to the processor.

(BTW, I've been designing some sort of bare-bones computer for a while now, and I'm planning to use this system for the video output.)

 

Gorgonops

Moderator
Staff member
I think it's kind of like the "Double-Port" thing everyone's on about.
No, that's the 50% bus duty cycle feature/quirk/whatever of the 6502 I mentioned earlier. Practically *every* 6502-based home computer uses it, the BBC Micro included. Dual-Ported VRAM is something *completely* different.

Note that in most computers that *don't* use the 6502 and have to use bus arbitration for DMA video it's generally transparent to the programmer as well, it just has some side effects with regard to instruction timings. Broadly speaking the system is going to run slightly slower than its clock speed would indicate compared to a system that doesn't share RAM with video. Whether the impact on instruction timing is 100% predictable or not and of what magnitude depends on the hardware design. For example: the Mac Plus benchmarks slightly slower than a Mac SE despite both using the same CPU at an identical clock speed, and the reason is that the simplistic method used in the Plus to share RAM between CPU and video hands the bus over to the video output 50% of the time while the SE trims that closer to the ~25% it actually needs. The SE isn't half again faster than the Plus because the 68000 itself only needs the bus some of the time, thus the greater utilization doesn't impact it quite as much as the numbers would suggest.
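A toy model of that effect; the bus shares come from the Plus/SE description above, but the 60% figure for how often the 68000 itself wants the bus is an assumption for illustration, not a measured number:

```python
# Why giving the CPU 75% of the bus (SE) instead of 50% (Plus) doesn't
# make it 1.5x faster: the 68000 only wants the bus part of the time.
def relative_cpu_speed(bus_available, cpu_demand=0.6):
    # CPU runs at full speed unless the bus share it wants isn't there.
    return min(1.0, bus_available / cpu_demand)

plus = relative_cpu_speed(0.50)  # ~0.83 of nominal speed
se = relative_cpu_speed(0.75)    # 1.0 -- bus no longer the bottleneck
speedup = se / plus              # ~1.2x, well short of the 1.5x bus ratio
```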

Anyway. Blawblawblaw.

But... Just to defend my idea for one last moment... I don't think a fast bus is necessarily impractical for a hobbyist system. My plan was to use a couple universal bus transceivers on a card...
Heh. Well... that's certainly a nice part, but scanning the data sheet it doesn't look like it's much more than a buffer/latch with some provisions for automatically cycling the latch based on a clock input. To make it into a bus you need to design a bus protocol and a controller to implement it. Buffered pins are a start, but you'll need a clock source and to work out how you're going to phase data transfers, i.e., are you going to need an address for every transfer or are you going to favor "bursting" bytes/words across it? With only 30-something pins you'll obviously have to multiplex data and addressing over the same pins, you'll need to work out the timing for that, and what about interrupts? Or arbitration? Etc, etc. A bus controller is a complicated piece of kit, and it'll have to be fast if you're going to hang it off the RAM bus of a 70+MHz SoC. And of course, on each card you'll need an IC to demultiplex your complicated and fast bus. Honestly that really doesn't sound very "hobby-friendly" to me. The reason systems like S-100 and the Apple II were successful and "easy to use" was that they presented a bus which didn't have the complication of asynchronous burst transfer modes, multiplexed address/data lines, etc, etc. Notice how *nobody* is turning out NuBus cards as a hobby these days? That's because NuBus is "hard", and I can't help thinking your bus sounds "hard" too. Maybe I'm just too intimidated by the "big and fast" that seems to naturally come along with 32 bit CPUs.

If you were using a SoC with a lot of GPIO pins you could, I suppose, just slap some buffers/line drivers in front of them and provide the cycling for a sort of "synthetic" bus in software. Obviously that's going to consume CPU cycles and you won't get DMA, but if what you really want is a parallel channel for "tinkering" that could work pretty well. (Actually, doing a quick Google, it looks like some of the "Stellaris" ARM Cortex CPUs do essentially that, offering a glob of GPIO pins that can be configured to run as a bus called "EPI".) But... do you really have in mind a solid idea of what sort of bandwidth numbers you hope to achieve and what your real-world use cases are? Most of what I think of when I think of "modern hobby computer" is people doing cool things with easy-to-use MCU frameworks like Arduino, and in terms of bandwidth and computational ability those compare pretty evenly with the hobby computers of the 1970s and 1980s. The big breakthrough is that instead of needing a $2000 Apple II and a wire-wrapped PIO interface card to interface to and control "X", you can simply fit the programming, computer, and I/O controller into a single chip so cheap it's practically disposable. (And if you want a graphical interface to control X then you provide it by tethering your device to a laptop over serial/USB/Bluetooth/whatever.) For that sort of fiddling an ARM SoC is serious overkill and a bus actually gets in your way. You want cheapness, accessibility, and GPIO pins; the rest is fluff.

Side note: My friends used to call me the "dream crusher" because I would bring harsh reality to their wild plans. So... I guess this is fair. Dream crusher. :p :lol:
*snicker* I'm sorry I've sounded so down on your Dream. I do think, however, that you might be having an issue with defining what your dream actually *is*, since it seems like it's somewhat all over the map.

Are you chasing the dream of coming up with your own super-cool hardware platform, or is it all about the software framework idea? Is this something just for you or something intended to be shared with the world as an open platform... or what? You mention the Apple II as inspiration, but then seem to hyper-focus on performance (without setting an actual goal) and sort of freaked out at the idea that you might be "wasting" CPU power if you did X instead of Y. Hobby computers aren't about maximal performance, CPU efficiency, or even necessarily design cleanliness. Heck, the Apple II is a poster child for a brand of engineering that actually trades gobs of theoretical performance for minimal hardware requirements, even at the cost of relying on ugly, arcane, and difficult-to-understand software to drive parts of it. (Like the disk controller.)

One of the things that makes the Raspberry Pi interesting is it actually seems to provide the hardware you really want, i.e., something that *does* include the peripherals you need for a graphical operating environment but is still designed to allow some simple real-world interfacing. The Pi is designed to be an educational toy; the inspiration is actually the BBC Micro and its successor, the "Archimedes", and in fact Risc OS and the accompanying BBC BASIC are available for it, for free. (Some of the examples for using the GPIO pins on the project's wiki are written in BASIC.) That's why I suggested you might be best off tackling the software side first on the Raspberry hardware, since it seems other than this "parallel port bus" thing it's a darn close fit for what you want. I assume you have notes, or at least an idea in your head, of what your "toolbox ROM" is going to consist of? Something like an ARM port of Applesoft BASIC around a simple monitor/kernel of some sort? The Pi appears to have sufficient documentation available to write "bare metal" OSes for it, and if you make it go on the Pi and later decide you simply must make your own board featuring the expansion bus of your dreams, it'll probably be easier to port your working code to a different SoC than it would be to start with a naked chip and have to bring it all up from scratch. (Most computers are bootstrapped by writing code on other computers early in their development, after all.)

But again, that's just my best guess. If I'm totally wrong then don't let me discourage you from attacking this your own way.

 

Grackle

Member
I do think, however, that you might be having an issue with defining what your dream actually *is*, since it seems like it's somewhat all over the map.
I won't argue with that, haha.

Edited to add: Part of the reason I seem all over the place is because I'm thinking very long and hard about what it is I really want to do. I've been sitting here writing and thinking much longer than I would like to admit (hah), but I think this post is a big step in the right direction.

Are you chasing the dream of coming up with your own super-cool hardware platform, or is it all about the software framework idea? Is this something just for you or something intended to be shared with the world as an open platform... or what?
Yeah, it would be a dream come true if I could develop an open platform that others wanted to use. In reality it's probably something that only I will ever play with, but part of the fun is trying to come up with a design that satiates my desires while being practical and valuable to a broader audience.

In the simplest possible terms, what I want is a small physical computing device that is entirely self hosting, and it should be able to reasonably utilize standard keyboards, mice, and displays. The latter part is what appeals to me about the old microcomputers. You plunk it down and plug it into the TV you already have, and you're ready to go. Of course nowadays we have PCs, so we tend to have mice and keyboards and fancier displays lying around, so it should plug into those. The physical computing bit is just to specify that it should be easy to connect to "the outside world." Switches, sensors, whatever.

Now, I know that basically describes the Raspberry Pi, but I think there's a lot of room for improvement there. As a purist, and from my own (perhaps unique) perspective, I just don't like the fact that there are so many levels of abstraction. Firmware, an operating system kernel, a C library, a high level language interpreter, libraries in that language, and finally your own user code. Ridiculous! And then there's all the cruft and complexity around that, entirely unrelated to whatever it is you're trying to do. What a mess!

Of course, there are advantages to a platform that is basically a regular PC. Linux is ostensibly robust, and there is a huge wealth of already-written programs and libraries out there, plus you get support for all sorts of devices and programming languages. Yet... from what I have seen, and from my experience as a newbie playing with various development boards in the past, little linux devices can be overwhelming. Relative to a PC, they're slow and awkward to use, so they're not always a whole lot of fun to play with. If you have a project in mind, the complexity and breadth of the system can be off-putting, and finding help is often hard because of the vast variety of hardware in use. If you do finish your project, it can be difficult to share with others who are running different distributions or lack the packages you've installed or modifications you've made. I'll say it again: it's a mess.

So, what if I eschew linux in favor of a much simpler system, one that is more straightforward and has fewer barriers to entry? I would lose the flexibility and breadth of software that you get with linux, but I think there are plenty of cases where it would be a worthwhile tradeoff.

That brings me to a point you've mentioned a couple times now: this is mostly a software issue. I guess I don't really need to come up with my own board. Maybe I wouldn't be able to fully realize the potential of the RPi, but at $25, it's probably not a bad place to start. I haven't bought one yet though, and I have my FPGA board, so I'm going to see what it takes to run a soft core.

 

commodorejohn

Well-known member
Linux is more rotund than robust, if you get my drift. And I completely sympathize on the complexities of what are ostensibly hobbyist systems - I'm exceedingly frustrated by how long the Raspberry Pi foundation is taking to get the rest of the hardware documentation out (I know, I know, it wasn't initially intended for the tinkerer market, but the hobbyists were a huge part of the initial funding, and it'd be nice if there was a little more respect for that.) I haven't taken an in-depth look at the PandaBoard or BeagleBoard systems and the extent to which they're documented, but they might be good options. Then again, they also cost $100+ per unit.

 

CelGen

Well-known member
If you can shoot for co-processors, there WAS that 68008 card you could build for the Apple II.

http://www.harrowalsh.de/Elektronik/APPLEBOX/68008CPUcard/68008translation.pdf

It's all 74LS logic and the MC68008 is amazingly cheap on ebay. They even include etch templates so you can fire them off to the etcher of your choice for a nice authentic board.

 

Bunsen

Admin-Witchfinder-General
Still catching up with this thread - it's kind of information dense.

I just wanted to drop in another IC that you might like to check out. The Cypress Semi PSoC series - Programmable System On Chip. They're a CPU core (your choice of ARM or 8051) with a wad of FPGA/CPLD-like glue logic surrounding them, and looots of IO pins. Relatively cheap. There's a low-cost dev board on Kickstarter called FreeSoC - it may have launched by now.

Also, if you have your heart set on a 6502-like flat memory-mapped IO system, have you checked out Sprow's MiniB - a floppy-sized SBC based on the BBC Micro?

In any case, if bigmessofwires can build and design a CPU and OS from scratch out of TTL logic, I say hang the practicalities and chase that dream! If the purpose is self-education and amusement, do it your way.

But if, like me, you are easily discouraged and distracted, treading a path already partly worn by others may well help, at least for Mark 1.

 

Gorgonops

Moderator
Staff member
In the simplest possible terms, what I want is a small physical computing device that is entirely self hosting...


As a purist, and from my own (perhaps unique) perspective, I just don't like the fact that there are so many levels of abstraction. Firmware, an operating system kernel, a C library, a high level language interpreter, libraries in that language, and finally your own user code. Ridiculous! And then there's all the cruft and complexity around that, entirely unrelated to whatever it is you're trying to do. What a mess!
I know this is a somewhat silly observation, but don't you need an OS for a computer to be considered "self-hosting"? Granted, "OS" is a broad term, but unless you're talking about building a machine like an IMSAI or Altair that includes a switch-driven front panel for manually punching in machine-language programs byte-by-byte, you're going to have to have an "OS", even if it's just a machine-language monitor that offers a simple ML editor/debugger and some routines to save programs and data to some sort of storage media, before you can call a machine "self-hosted". Merely getting a video display, keyboard, and storage device wired up to the CPU doesn't get you anywhere if you don't have *some sort* of boot ROM/firmware to drive them at some minimal level when you flip them on.

As noted, the Pi doesn't *have* to run Linux if you think it's too fat. There are several alternative codebases running on it already (like RiscOS), and the foundation has a basic course up talking about how to bootstrap up the Pi from bare metal. (And it sounds like they're genuinely trying to provide as much documentation as they can, but see below.) It would be completely, 100% possible to write your own machine language monitor for it that provides for whatever level of interactivity, built-in languages and driver facilities, and other "hand-holding" you desire. However:

... I'm exceedingly frustrated by how long the Raspberry Pi foundation is taking to get the rest of the hardware documentation out (I know, I know, it wasn't initially intended for the tinkerer market, but the hobbyists were a huge part of the initial funding, and it'd be nice if there was a little more respect for that.) I haven't taken an in-depth look at the PandaBoard or BeagleBoard systems and the extent to which they're documented, but they might be good options...
If you're a purist and you really want to own everything that goes on with your computer you have no choice but to stay away from ARM SoCs. Period. Nearly every SoC manufactured* uses a custom DSP to drive the video and sound hardware they package alongside the ARM CPU and *nobody* releases sufficient information to write your own driver for it. You are stuck with having to use a binary firmware blob to drive those features if you intend to use them, and the PandaBoard/BeagleBoard are no different from the Raspberry Pi in that respect. On all of those systems even if you're "bare metal" on the ARM side you're talking through a firmware blob to the SoC. The Raspberry Pi is actually superior to some of the other development boards because the Pi foundation was able to work with the manufacturer to create a "blob" providing an API that can be used by "generic" code running at a bare metal level to provide at least partial functionality; from what I can glean from the Wiki, for instance, the Pandaboard's graphics hardware is basically completely useless unless you're running Linux or Android on it. Beagleboard does have a RISC OS port but also requires a binary blob and it appears that its blobs are also OS specific.

(One "scary" aspect of many SoC's is it's often technically the undocumented DSP that's in control on power-on, with it providing at the lowest level the basic storage device/file system support to grab the necessary blobs and an ARM bootloader off the flash filesystem. So again, another point for staying away from SoC's if dealing with "firmware" isn't something you want to have to do. There's no way to escape it.)

In any case, if bigmessofwires can build and design a CPU and OS from scratch out of TTL logic, I say hang the practicalities and chase that dream! If the purpose is self-education and amusement, do it your way.
Honestly I think someone would probably be better off trying to build a trivial CPU out of TTL logic as a first project than they would be trying to make from scratch a computer that's not a replica of some preexisting architecture using an FPGA. An FPGA is a gigantic wad of electronic tinkertoys that *can* be turned into a computer but to do it you need to be able to describe one in meticulous detail. If you're fuzzy on what you really want you're going to be spending a lot of time scratching your head and thinking "Huh, okay, I downloaded the code for a soft-68k CPU... now what?".

But if, like me, you are easily discouraged and distracted, treading a path already partly worn by others may well help, at least for Mark 1.
+1

If you'd like to laugh at it, here's my dream for making a "from scratch" computer. My plan, if I ever have the time to do it, is first to copy this:

9 Chip CP/M Machine

Frankly I don't even like CP/M but this design manages to provide with just 9 chips (and a Compact Flash card) everything you need to make a fully functional and hackable computer with a mass storage device. The only thing it's missing is a built-in I/O console, but once I have my copy working my follow-up plan is to create a console for it using a Propeller MCU; I have some ideas in mind for doing "pseudo-DMA" video using a Prop and this looks just about perfect for trying them out on. (The goal is actually to massage the system from being a CP/M compatible to a TRS-80 Model I/III semi-clone, which will either involve patching the ROM and TRS-DOS to make direct use of the Compact Flash device, or seeing if I can make the Propeller emulate the original disk controller for those machines. Or both. Next step after that is I'm thinking of redoing the whole thing with a 6502 instead and aiming at Commodore PET emulation...) Obviously that's not particularly ambitious but it's something I might have a chance of actually carrying out without having to learn a *whole pile* of accessory skills. (Multi-layer PCB design, surface mount or BGA soldering, etc. And, of course, the amount of firmware code I'm looking at having to write, hack and debug is measured in kilobytes, not megabytes.)

But again, hey, if you're really up for pulling off the creation of a completely modern high-speed 1.8v logic machine with your current skillset don't let anyone stop you.

* Note: this "binary blob" problem extends far beyond ARM SoCs. Many peripherals of all sorts depend on "blobs" usually embedded into OS-specific drivers, and even x86 SoC chipsets like Geode have them lurking in their BIOS code.

 

commodorejohn

Well-known member
(One "scary" aspect of many SoC's is it's often technically the undocumented DSP that's in control on power-on, with it providing at the lowest level the basic storage device/file system support to grab the necessary blobs and an ARM bootloader off the flash filesystem. So again, another point for staying away from SoC's if dealing with "firmware" isn't something you want to have to do. There's no way to escape it.)
Yeah, that's something I'm beginning to grasp. Incredibly frustrating. And you're right, the Raspberry Pi foundation is better than most, I just wish they could take it the rest of the way...

 

onlyonemac

Well-known member
have you checked out Sprow's MiniB - a floppy-sized SBC based on the BBC Micro?
MiniB SUX! I've been on the lookout for the real thing - dad tried to tell me that MiniB was cheaper and just as good!
 

Grackle

Member
That cypress psoc looks fantastic. Very interesting. I had seen the freesoc before but I didn't think it was anything more than a funky arduino clone.

Linux is more rotund than robust, if you get my drift.
Hah, yes. I suppose I was trying not to seem completely self-serving in my characterization of linux systems, but I do agree with you there.

On the topic of OSes... I haven't looked too deeply into existing ones, but they're in the back of my mind. FreeRTOS or Contiki or something? I'm not sure. Right now I am more interested in creating the bare minimum of necessary firmware. I just want a monitor program to start. I think it's important to make the system unobtrusive and modular; like a set of tools rather than a guiding framework. That's somewhat intentionally vague... I haven't thought much about it yet, but my basic feeling is that "useful set of tools" is more enticing than "esoteric operating system." While everybody likes their own pet operating systems, far fewer people want to work on someone else's. Ramble ramble. Like I said, I haven't put a whole lot into this topic yet.

Anyway even if I do end up with something more complex, I know I need to start small if I want to get anywhere, so the first goal is just the monitor.

If you're a purist and you really want to own everything that goes on with your computer you have no choice but to stay away from ARM SoCs. Period.
...

(One "scary" aspect of many SoC's is it's often technically the undocumented DSP that's in control on power-on, with it providing at the lowest level the basic storage device/file system support to grab the necessary blobs and an ARM bootloader off the flash filesystem.
I honestly had no idea that the blobs were so pervasive. I knew that the RPi had one and that my beagleboard's omap3530 had one, but I didn't know they were so deeply integrated into the low-level functions. I thought they were just for graphics and DSP stuff.

Maybe it's not such a big deal. Owning every aspect of the system isn't a philosophical requirement of mine, but it may end up being a practical one (if working around the blobs turns out to be more mess/effort than just finding a lesser device that doesn't have them.) I'll have to do some digging and see what options are out there, as the topic is new ground for me.

In the meantime I'm considering the option of using a combination of FPGA and smaller ARM SoC... The FPGA could handle the video interface and some cheap/large/fast DDR2 memory to act as the system memory and video framebuffer. Microsemi (formerly Actel) even makes some FPGAs with integrated hard ARM cores, but they're a little on the expensive side.

In any case, if bigmessofwires can build and design a CPU and OS from scratch out of TTL logic, I say hang the practicalities and chase that dream! If the purpose is self-education and amusement, do it your way.
Honestly I think someone would probably be better off trying to build a trivial CPU out of TTL logic as a first project than they would be trying to make from scratch a computer that's not a replica of some preexisting architecture using an FPGA. An FPGA is a gigantic wad of electronic tinkertoys that *can* be turned into a computer but to do it you need to be able to describe one in meticulous detail. If you're fuzzy on what you really want you're going to be spending a lot of time scratching your head and thinking "Huh, okay, I downloaded the code for a soft-68k CPU... now what?".

But if, like me, you are easily discouraged and distracted, treading a path already partly worn by others may well help, at least for Mark 1.
Well, if I could list the projects I've thought about but haven't finished... :-/

This is one I've been thinking about for a long time though, and each time I'm in a better position to make things happen.

But again, hey, if you're really up for pulling off the creation of a completely modern high-speed 1.8v logic machine with your current skillset don't let anyone stop you.
Indeed... I know I have limits, but I'm trying to learn as much as possible, and I can cut things out and/or relax my requirements as I find things that are beyond my abilities. I can't solder BGA by hand, but I can manage QFP and SSOP just fine, and those are still somewhat common. My high frequency PCB design skills are lacking and some system integration topics give me trouble, but app notes are my friend there. Plus, with my preferred PCB service (OSHPark) I can get high quality 4 layer boards for $10 per square inch, so prototyping costs aren't too high.

Those are my thoughts for now. No "real" updates as I've been very busy with work, but the weekend is coming up and I'll have time to get back to the fun stuff.

 