Search results

  1. physics32

    MacMind: a neural network in a HyperCard stack

Vocab size is 10 (digits 0–9), sequence length is 8. No BOS/EOS tokens; sequences are fixed-length, so the model always knows it's looking at exactly 8 positions. That keeps things simpler and avoids needing the model to learn when to stop. I'd love to hear more about your MNIST work. What was the...
  2. physics32

    MacMind: a neural network in a HyperCard stack

    MacMind is a complete transformer neural network implemented entirely in HyperTalk. Embeddings, positional encoding, self-attention, backpropagation, and gradient descent, all running inside a HyperCard stack. The task is learning the bit-reversal permutation, the opening step of the FFT. The...
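For context on the task the snippet mentions: bit-reversal is the index-reordering step at the start of an iterative radix-2 FFT, where each position's index has its bits reversed. A minimal sketch in Python (not the HyperTalk implementation, just an illustration of the permutation the model is asked to learn):

```python
def bit_reverse(i, bits):
    """Reverse the low `bits` bits of integer i (e.g. 001 -> 100)."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)  # shift the lowest bit of i onto r
        i >>= 1
    return r

def bit_reversal_permute(seq):
    """Apply the bit-reversal permutation to a sequence whose
    length is a power of two -- the FFT's opening reorder step."""
    n = len(seq)
    bits = n.bit_length() - 1  # number of address bits, e.g. 3 for n=8
    return [seq[bit_reverse(i, bits)] for i in range(n)]

# For length 8 (3 address bits) the index mapping is [0, 4, 2, 6, 1, 5, 3, 7]:
print(bit_reversal_permute(list(range(8))))  # → [0, 4, 2, 6, 1, 5, 3, 7]
```

With sequence length 8 and a digit vocabulary, the model's job is to map an input sequence to this fixed shuffle of itself, which makes it a clean, fully learnable toy task.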