[-]x0x70(0|0)

Symmetric ternary is an interesting way to quantize AI models. In fact, AI can sort of be described as a machine built with a super-set of logic gates, and symmetric ternary is a more natural sub-set of that than binary is.
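As a sketch of what symmetric ternary quantization means here: every weight gets snapped to -1, 0, or +1. The dead-band width `delta` below is an arbitrary illustrative choice, not from any particular scheme.

```python
def quantize_ternary(weights, delta=0.05):
    # Map each weight to -1, 0, or +1: zero inside a dead-band of
    # width 2*delta around zero, otherwise the sign of the weight.
    def q(w):
        if w > delta:
            return 1
        if w < -delta:
            return -1
        return 0
    return [q(w) for w in weights]

print(quantize_ternary([0.8, -0.02, -0.6, 0.03, 0.0]))  # [1, 0, -1, 0, 0]
```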

The most basic gates, AND and OR, can be thought of as the same operation: "add and threshold." AND has a threshold of 2 and OR has a threshold of 1. Then if you allow negation you can build everything else.
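That "add and threshold" view is easy to show directly; only the threshold distinguishes AND from OR, and adding negation gives NAND, which is universal on its own:

```python
def threshold_gate(bits, threshold):
    # "Add and threshold": sum the input bits, fire if the sum
    # reaches the threshold.
    return 1 if sum(bits) >= threshold else 0

def AND(a, b): return threshold_gate([a, b], 2)  # threshold 2
def OR(a, b):  return threshold_gate([a, b], 1)  # threshold 1
def NOT(a):    return 1 - a                      # negation

# Negation plus AND gives NAND, from which anything can be built:
def NAND(a, b): return NOT(AND(a, b))
```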

Then if we broaden further and allow weighted sums and different thresholds, we get a broader, even more capable class of gates. Note that the prior sub-set is already enough to create things that are Turing complete (i.e., it should be able to compute anything). We are still technically in binary, because the outputs of our gates are still 1 or 0.
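A quick sketch of such a weighted-sum-and-threshold unit, with XOR (which no single such unit can compute) built from two layers of them, the weights below being one illustrative choice among many:

```python
def unit(inputs, weights, threshold):
    # Weighted sum followed by a hard threshold; the output is still 0 or 1.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# XOR is beyond any single threshold unit, but two layers manage it:
def XOR(a, b):
    h_or  = unit([a, b], [1, 1], 1)         # fires if at least one input is 1
    h_and = unit([a, b], [1, 1], 2)         # fires only if both inputs are 1
    return unit([h_or, h_and], [1, -1], 1)  # "OR but not AND"
```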

But we can broaden further still. If we allow the inputs themselves to be continuous instead of discrete, and we allow a softer transition between 0 and 1 on the output, we now have the core unit that the first multi-layer neural networks were made of: the sigmoid perceptron.
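In code, that soft unit is only a small change from the hard-threshold gate; with large enough weights (the 10s and -15 below are just an illustration) it approximates a hard AND:

```python
import math

def sigmoid(z):
    # Smooth 0-to-1 transition in place of a hard threshold.
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_perceptron(inputs, weights, bias):
    # Continuous inputs, weighted sum, soft output: the core unit of
    # the early multi-layer networks.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# With large weights it behaves almost like a hard AND gate:
both_on = sigmoid_perceptron([1.0, 1.0], [10.0, 10.0], -15.0)  # near 1
one_on  = sigmoid_perceptron([1.0, 0.0], [10.0, 10.0], -15.0)  # near 0
```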

Everything else in AI is just a nuance on that concept: different activation functions, the choice to interpret arrays of these units through a linear-algebra lens. It doesn't change the fact that AI is basically a continuous version of computer hardware modeled in software, or really a super-set model of computer hardware. Because it's continuous, it can be trained. But if you wanted to re-hardware-ize it and discretize it, symmetric ternary would be a good fit. Then again, maybe there is an argument for not discretizing it at all. You could just use analog.
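The linear-algebra lens is just bookkeeping: a whole array of these units is one matrix-vector product, a bias add, and an elementwise activation. A small sketch (the weights and inputs are arbitrary), which also hints at why ternary weights suit hardware, since multiplying by -1, 0, or +1 degenerates to subtract, skip, or add:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, W, b):
    # One "layer": matrix-vector product, bias add, elementwise
    # activation. Each row of W is one perceptron's weights.
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# With weights restricted to {-1, 0, +1}, every multiply in the inner
# sum is just an add, a subtract, or a skip.
out = layer([0.7, 0.2], [[1, -1], [0, 1]], [0.0, -0.1])
```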

Though with analog it would be hard to get a transistor to match your software activation function exactly. But if the models work okay when quantized, some loss of fidelity is already shown to be acceptable. You could just swap in the activation function your transistors actually give you and assume it doesn't matter. Or you could do a final retune of the model in software with an activation function that matches your transistors.
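A sketch of what that mismatch looks like; `hardware_act` below is a made-up stand-in for whatever curve an analog transistor stage might actually give, not a model of any real circuit:

```python
import math

def software_act(z):
    # The activation the model was trained with (a standard sigmoid).
    return 1.0 / (1.0 + math.exp(-z))

def hardware_act(z):
    # Hypothetical stand-in for a transistor's transfer curve: still
    # monotonic and 0-to-1, but a different shape than the sigmoid.
    return 0.5 * (z / (1.0 + abs(z))) + 0.5

# The two agree at the extremes and disagree most in the transition
# region; a final software retune with hardware_act in place would let
# the weights absorb that mismatch before committing to silicon.
mismatch = max(abs(software_act(z / 10.0) - hardware_act(z / 10.0))
               for z in range(-50, 51))
```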