Q2D2: A geometry-aware audio codec leveraging two-dimensional quantization

0. Contents

1. Abstract
2. Comparison of Codecs on Audio Reconstruction: LibriSpeech and LJSpeech
3. Comparison of Codecs on Audio Reconstruction: LibriTTS-test-clean and LibriTTS-test-other

1. Abstract

Recent neural audio codecs have achieved impressive reconstruction quality, typically relying on quantization methods such as Residual Vector Quantization (RVQ), Vector Quantization (VQ), and Finite Scalar Quantization (FSQ). However, these quantization techniques constrain the geometric structure of the latent space and make it harder to capture correlations between features, leading to inefficiencies in representation learning, codebook utilization, and token rate. In this paper we introduce Two-Dimensional Quantization (Q2D2), a quantization scheme in which feature pairs are projected onto structured 2D grids, such as hexagonal, rhombic, or rectangular tilings, and quantized to the nearest grid points. This yields an implicit codebook defined by the product of grid levels, with codebook sizes comparable to those of conventional methods. Despite its simple geometric formulation, Q2D2 improves audio compression efficiency, achieving low token rates and high codebook utilization while maintaining state-of-the-art (SOTA) reconstruction quality. Specifically, Q2D2 achieves competitive to superior performance on a range of objective and subjective reconstruction metrics in extensive experiments in the speech domain, compared with SOTA models. Comprehensive ablation studies further confirm the effectiveness of our design choices.

Figure 1: Visualization of quantization grids used in Q2D2
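To make the grid quantization concrete, the sketch below performs nearest-point quantization of feature pairs onto rectangular and hexagonal grids of the kind pictured in Figure 1. It is a minimal NumPy illustration rather than the paper's implementation: the function names (`quantize_rect`, `quantize_hex`) and the `step` spacing are assumptions, and it omits details such as bounding each axis to a finite number of levels (which is what makes the implicit codebook the product of grid levels) and the straight-through gradient typically used to train through the rounding.

```python
import numpy as np

def quantize_rect(xy, step=1.0):
    """Nearest point on a rectangular (square) grid with spacing `step`."""
    return np.round(xy / step) * step

def quantize_hex(xy, step=1.0):
    """Nearest point on a hexagonal lattice with nearest-neighbor distance `step`.

    The hexagonal lattice can be written as the union of two rectangular
    sublattices: one with spacing (step, step*sqrt(3)) and a copy shifted by
    half a cell along both axes. Rounding to each sublattice and keeping the
    closer candidate gives the exact nearest lattice point.
    """
    scale = np.array([step, step * np.sqrt(3.0)])
    shift = scale / 2.0
    cand_a = np.round(xy / scale) * scale
    cand_b = np.round((xy - shift) / scale) * scale + shift
    d_a = np.sum((xy - cand_a) ** 2, axis=-1, keepdims=True)
    d_b = np.sum((xy - cand_b) ** 2, axis=-1, keepdims=True)
    return np.where(d_a <= d_b, cand_a, cand_b)

# Toy usage: split a latent frame into feature pairs and quantize each pair.
latent = np.random.randn(8)            # hypothetical 8-dim latent frame
pairs = latent.reshape(-1, 2)          # 4 feature pairs
codes = quantize_hex(pairs, step=0.5)  # nearest hexagonal-grid points
```

As an illustration only (not necessarily the paper's configuration), bounding such a grid to 16 levels per axis would give an implicit codebook of 16 × 16 = 256 entries per feature pair.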

2. Comparison of Codecs on Audio Reconstruction: LibriSpeech and LJSpeech

Audio samples: GT · WavTokenizer @ 0.9 kbps · Q2D2 @ 1 kbps · Q2D2 @ 3.3 kbps · Encodec @ 6 kbps · DAC @ 9 kbps

3. Comparison of Codecs on Audio Reconstruction: LibriTTS-test-clean and LibriTTS-test-other

Audio samples: GT · WavTokenizer @ 0.9 kbps · Q2D2 @ 1 kbps · Q2D2 @ 3.3 kbps · Q2D2 @ 6.9 kbps