Quantization
In ML, quantization means converting real-valued parameters (weights, activations, etc.), which are usually represented as 32-bit or 16-bit floating-point numbers, into lower-precision integer or fixed-point representations such as 8-bit integers. This reduces memory and bandwidth costs and speeds up inference, which in turn speeds up proving inference. Moreover, a floating-point model cannot realistically be proven inside a ZKP: encoding floats directly would explode circuit size and complexity.
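As a concrete illustration, the sketch below shows symmetric per-tensor quantization of float32 weights to int8. It is a minimal example of the general technique (the function names and the choice of symmetric per-tensor scaling are illustrative assumptions, not the exact scheme used by our models):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float weights to int8.

    Minimal sketch: one scale for the whole tensor, mapping the largest
    absolute value onto the int8 range [-127, 127].
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4).astype(np.float32)
    q, s = quantize_int8(w)
    print("float weights:", w)
    print("int8 weights: ", q)
    print("reconstructed:", dequantize(q, s))
```

After quantization, the circuit only has to reason about small integers and a single scale factor, which is far cheaper to encode in a ZKP than floating-point arithmetic.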
The downside of quantization is that it may degrade the accuracy of the model. However, the loss of accuracy is often insignificant, as described in https://eprint.iacr.org/2024/1018.pdf:
"Most existing models are trained with 32-bit floating points (FP32), which provides greater precision than needed. Model pruning and quantization techniques have been developed to address these issues by transforming dense, high-precision parameters (e.g., FP32) into sparse, lower-bit representations (e.g., 8-bit integers, INT8)."
Currently, our models are quantized to 8-bit integers.