\NameXue Chen \Emailxuechen1989@ustc.edu.cn
\addrUniversity of Science and Technology of China, Hefei 230026, China; Hefei National Laboratory, Hefei 230088, China.
and \NameWenxuan Shu \Emailwxshu@mail.ustc.edu.cn
\addrUniversity of Science and Technology of China, Hefei 230026, China.
and \NameZhaienhe Zhou \Emailzhaienhezhou@gmail.com
\addrUniversity of Science and Technology of China, Hefei 230026, China.
Algorithms for Sparse LPN and LSPN Against Low-noise
Abstract
We consider sparse variants of the classical Learning Parities with random Noise (LPN) problem. Our main contribution is a new algorithmic framework that provides learning algorithms against low noise for both the Learning Sparse Parities (LSPN) problem and the sparse LPN problem. Different from previous approaches to LSPN and sparse LPN (Grig11; valiant2012finding; KKK18; RRS17; GKM), this framework has a simple structure without fast matrix multiplication or tensor methods, so that its algorithms are easy to implement and run in polynomial space. Let $n$ be the dimension, $k$ denote the sparsity, and $\eta$ be the noise rate, such that each label gets flipped with probability $\eta$.
As a fundamental problem in computational learning theory (Feldman09), Learning Sparse Parities with Noise (LSPN) assumes the hidden parity is $k$-sparse instead of a potentially dense vector. While the simple enumeration algorithm takes $\binom{n}{k} = O(n/k)^k$ time, previously known results still need at least $(n/k)^{k/2}$ time for any noise rate $\eta$ (Grig11; valiant2012finding; KKK18). Our framework provides an LSPN algorithm that runs in time $O(\eta \cdot n/k)^k$ for any noise rate $\eta$, which improves the state-of-the-art of LSPN whenever $\eta \in (k/n, \sqrt{k/n})$.
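To see why this is the relevant range, compare the two bounds above: up to the constants hidden in the $O(\cdot)$,
\[
O(\eta \cdot n/k)^k < (n/k)^{k/2} \quad \Longleftrightarrow \quad \eta < \sqrt{k/n},
\]
while for $\eta \le k/n$ the new bound degenerates to $2^{O(k)}$, a regime where repeatedly running Gaussian elimination on $n$ fresh samples already succeeds in time $e^{O(\eta n)} \cdot \mathrm{poly}(n) = e^{O(k)} \cdot \mathrm{poly}(n)$.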
The sparse LPN problem is closely related to the classical problem of refuting random $k$-CSPs (FKO06; RRS17; GKM) and has been widely used in cryptography as a hardness assumption (e.g., Alekhnovich; ABW10; ADINZ18; DIJL_sparseLPN). Different from the standard LPN that samples random vectors in $\mathbb{F}_2^n$, it samples random $k$-sparse vectors. Because the number of $k$-sparse vectors is $\binom{n}{k} < n^k$, sparse LPN has learning algorithms in polynomial time when the number of samples $m = \Omega(n^{k/2})$. However, much less is known about learning algorithms for constant $k$ like 3 and $m = n^{1+\epsilon}$ samples, except the Gaussian elimination algorithm of time $e^{O(\eta n)}$. Our framework provides a learning algorithm in time $e^{O(\eta \cdot n^{\frac{\delta+1}{2}})}$ given $\delta \in (0,1)$ and $m \approx n^{1+(1-\delta)\cdot \frac{k-1}{2}}$ samples. This improves previous learning algorithms in a wide range of parameters. For example, in the classical setting of $k=3$ and $m = n^{1.4}$ (FKO06; ABW10), our algorithm would be faster than $2^{O(n^{0.2})}$ for any $\eta < n^{-0.6}$.
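To unpack the $k=3$ example under this trade-off (the exponents below follow directly from the parameterization just stated): solving
\[
m = n^{1+(1-\delta)\cdot \frac{k-1}{2}} = n^{2-\delta} = n^{1.4} \quad \Longrightarrow \quad \delta = 0.6,
\]
the running time becomes $e^{O(\eta \cdot n^{(\delta+1)/2})} = e^{O(\eta \cdot n^{0.8})}$, which is smaller than $2^{O(n^{0.2})}$ exactly when $\eta \cdot n^{0.8} = O(n^{0.2})$, i.e., when $\eta < n^{-0.6}$.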
Since these two algorithms are based on the same algorithmic framework, our conceptual contribution is a connection between sparse LPN and LSPN.
keywords:
computational learning theory, Learning Parities with Noise (LPN), Learning Sparse Parities with Noise (LSPN), sparse LPN

Xue Chen is supported by Innovation Program for Quantum Science and Technology 2021ZD0302901, NSFC 62372424, and CCF-HuaweiLK2023006.
1 Introduction
The Learning Parities with Noise (LPN) problem and its variants are ubiquitous in learning theory. LPN is equivalent to the famous problem of decoding random linear codes in coding theory, and it has been widely used in cryptography as a security assumption. In a dimension-$n$ LPN problem with noise rate $\eta$, the algorithm tries to learn a hidden vector in $\mathbb{F}_2^n$, called the secret $\vec{s}$ in this work. However, the algorithm only has access to an oracle that generates random vectors with labels, i.e., pairs $(\vec{a}, y)$, where $\vec{a}$ is uniformly distributed in $\mathbb{F}_2^n$ and $y$ equals the inner product $\langle \vec{a}, \vec{s} \rangle$ with probability $1 - \eta$.
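In symbols, each call to the example oracle returns (spelled out here for concreteness; this is exactly the Bernoulli noise model described above)
\[
(\vec{a}, y) \quad \text{with} \quad \vec{a} \sim \mathrm{Uniform}(\mathbb{F}_2^n), \qquad y = \langle \vec{a}, \vec{s} \rangle + e \pmod{2}, \qquad e \sim \mathrm{Bernoulli}(\eta),
\]
so that $y$ agrees with $\langle \vec{a}, \vec{s} \rangle$ with probability $1-\eta$, independently across samples.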
While there is a long line of research on this problem