
Charles Michael Vaughn

TensorFlow for Commodore 64s

Given that TF Lite for Microcontrollers runs on some heavily resource-constrained devices, I got to wondering whether or not I could run inferences against these models on a Commodore 64.

To do this, I chose not to use an interpreter. The TF Lite Micro team explains in their paper why they chose one (portability, maintainability), and that was a good choice for their project, but I'm dealing with nearly 40-year-old hardware, so I can't afford the overhead of an interpreter. Instead, I modified the TF Lite Micro source code so that, when a model runs through the interpreter on the host computer, it emits all of the important details about the model: the operations to perform, filter values, biases, etc. I also analyzed the source code for every operation involved in running the model so that I could reproduce the functionality myself.
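To give a sense of what that dump contains, here is a minimal sketch that pulls the same kind of information out of a .tflite file using the standard Python `tf.lite.Interpreter` API, rather than my modified C++ source; the model filename is a placeholder.

```python
# Sketch: dump the constant tensors (filters/biases) from a .tflite
# model with the stock Python interpreter. The post's actual approach
# modifies the TF Lite Micro C++ source; this just shows the idea.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="sine_model.tflite")  # placeholder name
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    name, index, shape = detail["name"], detail["index"], detail["shape"]
    try:
        values = interpreter.get_tensor(index)
    except ValueError:
        continue  # tensor has no stored data (e.g. dynamic buffers)
    # Constant tensors hold the trained weights/biases; activation
    # buffers exist too but contain nothing meaningful before invoke().
    print(f"{name} shape={tuple(shape)}")
    print(np.array2string(values, precision=6))
```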

I then parsed that output with a Python script to turn it into C64-compatible BASIC (this could be updated to produce 6502 assembly code, but for this proof of concept, BASIC was actually fast enough).
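As a rough illustration (not my actual script), a generator along these lines can turn the extracted values into DATA statements that the C64 program READs into arrays; the layer format and line-numbering scheme here are assumptions.

```python
# Sketch: turn extracted weights/biases into Commodore 64 BASIC
# DATA statements. `layers` is assumed to be a list of
# (weight_matrix, bias_vector) pairs parsed from the host-side dump.

def emit_basic(layers):
    lines, n = [], 1000  # hypothetical starting line number
    for weights, biases in layers:
        # One DATA line per weight row, then one for the biases
        for row in weights:
            lines.append(f"{n} DATA {','.join(f'{w:.6f}' for w in row)}")
            n += 10
        lines.append(f"{n} DATA {','.join(f'{b:.6f}' for b in biases)}")
        n += 10
    return "\n".join(lines)

# Tiny example: a 1x2 layer with two biases
print(emit_basic([([[0.12, -0.53]], [0.05, 0.91])]))
```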

To test things out, I built TensorFlow's Hello World example, which trains a small, three-layer neural network to approximate the sine function. After running it on the host computer and emitting the model info, I used my parser to create this BASIC code, which can run arbitrary inferences against the neural network on a Commodore 64. Each inference takes a few seconds on a physical C64.
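For reference, the arithmetic the generated BASIC reproduces is just a chain of fully connected layers: multiply the input by a weight matrix, add the biases, apply ReLU (the final layer skips the activation). Here is a sketch in Python with placeholder weights; the 1 → 16 → 16 → 1 layer sizes match the Hello World example as I've seen it, but treat them as an assumption.

```python
# Sketch of the forward pass the generated BASIC performs: dense
# layers with ReLU, then a linear output. Weights are random
# placeholders, not the trained sine model.
import numpy as np

def dense(x, w, b, relu=True):
    y = x @ w + b
    return np.maximum(y, 0.0) if relu else y

def predict(x, params):
    (w1, b1), (w2, b2), (w3, b3) = params
    h = dense(np.array([x]), w1, b1)
    h = dense(h, w2, b2)
    return dense(h, w3, b3, relu=False)[0]

rng = np.random.default_rng(0)
params = [
    (rng.normal(size=(1, 16)), rng.normal(size=16)),
    (rng.normal(size=(16, 16)), rng.normal(size=16)),
    (rng.normal(size=(16, 1)), rng.normal(size=1)),
]
print(predict(0.5, params))  # meaningless output until real weights are loaded
```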

Since the code running on the C64 is logically the same as what runs on the host computer (or a microcontroller), it performs equally well in every environment: there is no accuracy reduction from running on the C64.
