
TashKM: ADB Daisy-Chained Keyboard/Mouse Controller

tashtari

PIC Whisperer
I wrote a replacement to Synergy
This is cool! Unfortunately, part of the reason that I gave up on Synergy/Barrier is that I couldn't find a way to make absolute mouse positioning work on ADB, and I'd be even less likely to be able to make it work on PS/2. That's not to entirely preclude something like this in the future, but right now I think I have to deprioritize it.
One hack you could attempt is to use the UART to do a kind of 1b/2b encoding
Oh, if I only knew of a way to make the PIC's UART do this, TashTalk would have been a far simpler project. =D

One thought that did occur to me is that I could use the PIC's curious "CCP" (Capture/Compare/PWM) peripheral to shoulder some of the work of an inversion-based protocol. It can detect an inversion on the line and asynchronously record the value off a timer when it happens; I'd just have to service it once per bit instead of once per byte. I did consider using this for TashTalk, but the bit rate used by LocalTalk was too fast for it to be viable - in this case, though, I could use a slower bit rate. The protocol would be unidirectional, but for a KM situation, that just means no num/caps/scroll lock LEDs...
 

cheesestraws

Well-known member
Oh, if I only knew of a way to make the PIC's UART do this, TashTalk would have been a far simpler project

Well, I meant more that you don't tell the UART you're doing it - you do it as a data transformation after/before the event. 4 or 5 instructions per bit? I don't know enough about the PIC to be sure :| perhaps it isn't possible.
 

tashtari

PIC Whisperer
Hm. Something where I set the UART's baud rate to twice the 'clock' rate? i.e.:

Payload: 0xAB 0xCD = 0b10101011 0b11001101
DME encoded payload: 11010010110100110011010100110100
UART reads: 0b11010010 0b11010011 0b00110101 0b00110100
...and decodes it in its own time?

I'd love to be able to do that; unfortunately, there isn't a way to configure the PIC's UART not to care about start/stop bits and just self-time based on transitions as they come - it times everything off the start bit in a character. It'd end up being more like:

UART reads: S11010010P (I) S10011001P S10100110P S0...
Where S is a 0 interpreted as a start bit, P is a 1 bit interpreted as a stop bit, and I is a 1 bit interpreted as an idle bus. It didn't happen with this sequence of bits, but if a 0 shows up when the UART expects a stop bit, it'd be taken as a framing error.

Is that what you were proposing? Forgive me if I've misinterpreted.
 

gsteemso

Well-known member
Summarized, your original goal was a cross between "the poor man's lots-of-monitors" - which enslaves external computers to run them, rather than hosting an implausible number of on-board video cards - and a normal KVM. The combination would allow you simultaneous control of many machines via a single mouse/keyboard/etc., and simultaneously give _all_ of them a hugely extended desktop.

Some further observations:

(1) You have cut so much from your initially proposed scope that it is now merely a very inefficient KVM, with a great deal of work invested to - in effect - eliminate important pieces of its functionality while distributing the remainder amongst high-cost peripherals, placed one per machine.

(2) With your original concept, which was a useful and entirely plausible target, there are some design considerations that I don't believe you had completely analyzed for your proposed implementation. For example, "input goes to the machine currently hosting the mouse pointer" is a bit of a disaster with respect to both ergonomics and control logic. While I was thinking on this I also tripped over a few others that are not immediately coming to mind, but as the design needs to be changed anyway, they are not terribly important.

(3) As I analyze the original concept, I see the following as natural design partitions:

- In terms of hardware:

H1. The master unit, which connects to the user's I/O peripherals (keyboard, mouse, joystick(s), headset, thumb-drive / SD-card / etc. reader, disability-assistance thingumabobs, ...) and controls the entire system.

H2. The per-machine slave - "dongle" - unit, which must be as simple as possible while still supporting whatever I/O is to be included. Ideally, it would be bus-powered and/or leech off of the host machine it's attached to; this could be achieved by, for example, leeching just enough bus power to listen for a "Hey! You! Fire up your host machine!" signal whenever said CPU isn't already running.

H3. Some sort of network connecting all of the dongle units to the master unit.

- In terms of software:

S1. The master unit needs to be able to:

a: Interface to whichever I/O peripherals (see examples at point (3)H1 above) are to be supported.

b: Send instructions/data to (and ideally receive data from) the dongle units, both individually and en masse. (Ideally, the latter would accommodate multicasting [to multiple individual targets] as well as broadcasting [to all dongles].)

c: Maintain sufficient understanding of the user session's overall state to be able to send the correct commands to the correct dongles' hosts.

d: Have a well-defined conceptual structure behind the (high-level) user session and (low-level) per-host commands mentioned in point (3)S1.c above. Without that, it will be much harder to design the software in the first place, and an intolerable headache to make major changes during development.

S2. The dongle units need to be able to:

a: Emulate (any subset of) the user I/O peripherals chosen for point (3)H1 above.

b: Interpret whatever control signals from the master unit are ultimately defined within this system. Ideally, data transfers to and from the master unit would be supported; that would enable a range of functionality from keyboard LEDs and force-feedback game controllers (at the low end) to SD cards and thumb drives (at the high end).

S3. The control network needs to be:

a: As cheap and simple as possible; ideally, using only two physical wires and very simple, freely-available protocols.

b: Indifferent to the state and/or level of functionality of individual connected units - with the possible exception of the Master Unit, as the system has by definition fallen over _anyway_ if that one is not running.

(4) If you ignore the proposed two-wire cabling restriction, the foregoing analysis bears a startling resemblance to a USB device tree, which a lot of microcontrollers natively support to some extent - and I suspect most of us already own a good number of old 1.1-era hubs and such besides. It isn't like vintage hardware has much use for higher speeds than that anyway, is it?

(5) Point (4) is a bit of a double-edged knife. If you borrow off-the-shelf USB functionality for the interface between the master unit and the dongles, on the software side you need only concern yourself with the actual architecture, command, and protocol definitions, and on the hardware side emulating the user peripherals for each dongle-host machine needs no more work than you were already going to be doing; but USB support requires much more capable microcontrollers in the dongle units than would be the case with a simpler choice.
 

tashtari

PIC Whisperer
I'm really confused.

You're talking of SD cards and thumbdrives, force-feedback in game controllers, multicast and broadcast... this is all well beyond the scope of this project. These aren't features I cut, this was only ever meant to be keyboard and mouse - hence the "KM". Regarding the USB device... maybe a USB peripheral that emulates an ADB keyboard and mouse would be a useful piece of hardware, but that's not what this project is.

Further, I don't see how it's a "very inefficient" KVM, nor how the hardware, which I estimate will come in at under $10 per machine, is "high-cost".

I think you want something different than I'm building here. I'm sorry to disappoint you.
 

Scott Squires

Well-known member
(1) You have cut so much from your initially proposed scope that it is now merely a very inefficient KVM, with a great deal of work invested to - in effect - eliminate important pieces of its functionality while distributing the remainder amongst high-cost peripherals, placed one per machine.

This take makes no sense to me at all. KVM was never involved, only KM. I can't see any "inefficiency," or relatively "great deal of work," or "high cost peripherals." The reduction in scope makes this a more realistic, less complicated, more affordable design.

the foregoing analysis bears a startling resemblance to a USB device tree

The basis of this whole design is a bus topology, whereas USB is a star topology. USB is fundamentally an opposite design. I find it highly odd to call the idea "a great deal of work" and then follow up by suggesting USB.
 

gsteemso

Well-known member
Ah, I see. That many people telling me "you misread that!" is a pretty solid hint. :¬)

I also hasten to agree, you're each entirely correct; it was always a KM, not a KVM. I absent-mindedly used the wrong initialism as shorthand for the "dingus that remotely operates several machines from one set of controls" that I was thinking in terms of.

It's pretty obvious I misconstrued the original intent here. For the sake of morbidly quantifying the bucketsful of egg I'm scraping off my face :¬), what exactly _was_ the goal?
 

gsteemso

Well-known member
I'd also like to clarify, the whole "absurdly bloated list of peripherals" thing was meant as _examples_ - I kind of got stuck on the minor question, "just what _could_ I want to have portably hooked up to my KM switch?" Despite what I now see were appearances, I didn't mean to imply anything in there was included in the actual scope of the project; it was merely my train of thought getting sidetracked (a nuisance to which I am, vexingly, entirely too prone).
 

tashtari

PIC Whisperer
what exactly _was_ the goal?
The goal was/is easy KM switching for a bunch of Macs.

Under the current design, it piggybacks off of an existing PhoneNet network by using the inner pair of wires while PhoneNet uses the outer pair. Basically you have a PhoneNet dongle hanging off your printer port and a TashKM dongle hanging off your ADB port, but they're strung together with phone cabling like two units on the same PhoneNet network.

A single control unit (possibly an RPi with a hat, possibly something else) somewhere on the network, with a mouse and keyboard attached to it, sends signals to the TashKM units to move their emulated mice and type on their emulated keyboards when selected, probably by some special key combination.

Future plans can include TashKM dongles for other platforms, but it's just ADB right now.
 

tashtari

PIC Whisperer
It seems I'm running up against my lack of any real electrical engineering background.

I came up with a simple inversion-based protocol: an inversion starts a data sequence, an inversion 100 us after the previous is a zero, an inversion 200 us after the previous is a one. Worked fine when directly connected, but when connected through PhoneNet isolation transformers, no good. I think what's happening is that the protocol's transfer rate, which is about 34 times slower than LocalTalk (by design, I didn't want either end to have to work too hard), isn't fast enough to count as alternating current for purposes of the transformers and thus gets filtered out.
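In rough terms, the decode side is just classifying the interval between successive captured inversion times, e.g. off the CCP's timer. Here's a Python sketch of the idea (timestamps and the tolerance value are made up for illustration, not taken from the firmware):

```python
def decode_inversions(timestamps_us, tolerance_us=30):
    # Inversion-timing protocol: the first inversion starts a data
    # sequence; an inversion ~100 us after the previous one is a zero,
    # ~200 us after is a one. `timestamps_us` would be capture-timer
    # values recorded at each inversion.
    bits = []
    for prev, cur in zip(timestamps_us, timestamps_us[1:]):
        delta = cur - prev
        if abs(delta - 100) <= tolerance_us:
            bits.append(0)
        elif abs(delta - 200) <= tolerance_us:
            bits.append(1)
        else:
            raise ValueError(f"unexpected inversion interval: {delta} us")
    return bits

# Inversions at 0, 100, 300, 400, 600 us decode to 0, 1, 0, 1
print(decode_inversions([0, 100, 300, 400, 600]))
```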

Pulses seem to come through, though. Maybe I could adapt the same idea to a pulse-based protocol without significantly changing the firmware?
 

cheesestraws

Well-known member
Handwavy numbers here (from someone who doesn't really know what they're talking about).

200 µs = 0.2 ms, i.e. a 5 kHz square wave, so to reconstruct a series of zeroes you'd need to pass 5 kHz accurately.

LocalTalk is 230ish kbit/sec, but because of FM-0 a run of zeroes produces two transitions per bit - which works out to a 230ish kHz square wave. Call it 250 kHz.

So you're trying to pass a signal through the transformer at roughly 1/50 of the frequency for which it's actually intended. I don't ... know much about this, but purely on the basis of how far apart those numbers are, that does sound like it might be a recipe for "not working".
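Spelling out that arithmetic (my numbers, so take with salt; a square wave needs two transitions per period, so two transitions per bit puts the square wave at the bit rate):

```python
# Inversion-timing protocol: a run of zeroes inverts every 100 us,
# so one full square-wave period is 200 us.
proto_hz = 1e6 / 200
print(proto_hz)                  # -> 5000.0, i.e. 5 kHz

# LocalTalk: 230.4 kbit/s FM-0; a run of zeroes has two transitions
# per bit, i.e. one full square-wave period per bit time.
localtalk_hz = 230_400
print(localtalk_hz / proto_hz)   # roughly a factor of 46
```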

Pulses would probably be a better bet. (edit: your other option, of course, is just to use different transformers: and since presumably you don't want to be cannibalising PhoneNet boxes forever, that might be a better idea...)

(I am willing to be corrected on any of this by anyone who actually knows about analogue stuff)
 

tashtari

PIC Whisperer
Pulse-based protocol works over PhoneNet transceivers!

I think the firmware for the ADB devices, at least, is in a solid enough place that I'd like to make a PCB. The challenge is going to be in finding an isolation transformer. Ideally, it'd be one that exactly matches those in the PhoneNet boxes, but I'm not really sure how to make that happen.

"The transformer is a 1:1 turns ratio transformer with tight coupling between primary and secondary, and electrostatic shielding to give excellent common mode isolation. The primary is wound as two windings of #32 AWG wire in series with one wound below the secondary and one above it.

Core Material: Siemens B65651-K000-R030 (or equivalent)
Bobbin: Siemens B65652-PC1,L (or equivalent)
Retaining Clip: Siemens B65653-T (or equivalent)
Magnetizing Inductance: 20 mH minimum
Leakage Inductance: 15 uH max
Capacitance: 5 pF max (primary to secondary with electrostatic shield and core guarded)"

...I know what some of those words mean. Also, there's the fact that the actual transformer used in the PhoneNet boxes I have appears to be slightly different from the LocalTalk spec - it's the only component on the board in the box, so it does whatever differentiates the TxD lines from the RxD lines inside itself... not really sure how to ask for that.

Here're some photos of the inside:

PXL_20210903_031817115.jpg
PXL_20210903_031851076.jpg

Anyone have any tips on transformer-purchasing?
 

bdurbrow

Well-known member
Uhh... can I ask the stupid question? Given the existence of the TashTalkHat... why not do it all in software? You'd need an INIT/System Extension on each Mac on the network, and a server on the Pi; but no other hardware would be needed?
 

cheesestraws

Well-known member
Ideally, it'd be one that exactly matches those in the PhoneNet boxes, but I'm not really sure how to make that happen.

Why does it need to be one of these specifically? Can't you just choose a cheap and available transformer and fiddle with the pulse/transition duration to fit?

why not do it all in software?

For the fun of it, I assume :). Also avoids the bootstrapping problem.
 

tashtari

PIC Whisperer
Are your pulses differential
Yeah, I'm using the same driver IC as I was using with TashTalk, the SN65HVD08.
why not do it all in software?
@cheesestraws has it - bitbanging protocols is fun, writing software for vintage Macs feels like work. =D
Why does it need to be one of these specifically? Can't you just choose a cheap and available transformer and fiddle with the pulse/transition duration to fit?
Well, it doesn't - I just lack knowledge of which characteristics of a transformer are important and how they bear on the characteristics of the protocol, which makes it difficult to make that choice. Having taken a look at DigiKey and friends, I can't seem to find anything that matches the characteristics of the LocalTalk transformer, so I think I'm going to have to find something else...
 

bdurbrow

Well-known member
have to do some unholy things

I was under the impression that the LocalTalk stack was tied into an interrupt so that packets didn't get dropped when something in the foreground went into a tight loop? Or am I remembering that wrong?

I'd have to go review the relevant sections of Inside Macintosh, but I think that there are some well-defined points to hook to make this work... and even if they aren't "official" - well, I don't really think Apple is going to release a System update that will break the hypothetical INIT anytime soon, if you know what I mean. ;)

I'd also have a look at the source code to some emulators - they've already solved this problem; but I'm not sure if they are emulating the relevant quadrature-mouse-and-keyboard or ADB hardware, or if they've just patched out a driver (hmm... probably varies by emulator).

As for setting the absolute position of the mouse cursor; over the years I've been responsible for machines with digitizing tablets from CalComp (attached via serial port) and Wacom (via ADB) and they all supported absolute-positioning tracking: pick the pen up from place A on the tablet, put it down at place B, and the cursor would jump on the screen to the correct location. This worked while using the menu system as well. Ergo, there must be a call or memory location somewhere to inject that data.

bitbanging protocols is fun, writing software for vintage Macs feels like work.
Or, I suppose, you could do both - bitbang a protocol on a vintage Mac. 😜
 