
\coltauthor{\Name{Xue Chen} \Email{xuechen1989@ustc.edu.cn}\\
\addr University of Science and Technology of China, Hefei 230026, China; Hefei National Laboratory, Hefei 230088, China.
\AND \Name{Wenxuan Shu} \Email{wxshu@mail.ustc.edu.cn}\\
\addr University of Science and Technology of China, Hefei 230026, China.
\AND \Name{Zhaienhe Zhou} \Email{zhaienhezhou@gmail.com}\\
\addr University of Science and Technology of China, Hefei 230026, China.}

Algorithms for Sparse LPN and LSPN Against Low-noise

Abstract

We consider sparse variants of the classical Learning Parities with Noise (LPN) problem. Our main contribution is a new algorithmic framework that provides low-noise learning algorithms for both the Learning Sparse Parities with Noise (LSPN) problem and the sparse LPN problem. Unlike previous approaches to LSPN and sparse LPN (Grig11; valiant2012finding; KKK18; RRS17; GKM), this framework has a simple structure that avoids fast matrix multiplication and tensor methods, so its algorithms are easy to implement and run in polynomial space. Let $n$ be the dimension, $k$ the sparsity, and $\eta$ the noise rate, so that each label is flipped with probability $\eta$.

As a fundamental problem in computational learning theory (Feldman09), LSPN assumes the hidden parity is $k$-sparse instead of a potentially dense vector. While the simple enumeration algorithm takes ${n\choose k}=O(n/k)^{k}$ time, previously known results still need at least ${n\choose k/2}=\Omega(n/k)^{k/2}$ time for any noise rate $\eta$ (Grig11; valiant2012finding; KKK18). Our framework provides an LSPN algorithm that runs in time $O(\eta\cdot n/k)^{k}$ for any noise rate $\eta$, which improves the state of the art for LSPN whenever $\eta\in(k/n,\sqrt{k/n})$.
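To see where the $\sqrt{k/n}$ endpoint comes from (an illustrative calculation of ours, not a statement from the paper), compare the new running time with the previous $\Omega(n/k)^{k/2}$ barrier:
\[
O\!\left(\frac{\eta\,n}{k}\right)^{k}\;\le\;\left(\frac{n}{k}\right)^{k/2}
\iff
\frac{\eta\,n}{k}\;\le\;\left(\frac{n}{k}\right)^{1/2}
\iff
\eta\;\le\;\sqrt{k/n}.
\]
At the other end, once $\eta\le k/n$ the bound $O(\eta\cdot n/k)^{k}$ is already $2^{O(k)}$, so $\eta\in(k/n,\sqrt{k/n})$ is the regime where the comparison is meaningful.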

The sparse LPN problem is closely related to the classical problem of refuting random $k$-CSPs (FKO06; RRS17; GKM) and has been widely used in cryptography as a hardness assumption (e.g., Alekhnovich; ABW10; ADINZ18; DIJL_sparseLPN). Unlike standard LPN, which samples random vectors in $\mathbf{F}_{2}^{n}$, it samples random $k$-sparse vectors. Because the number of $k$-sparse vectors is ${n\choose k}<n^{k}$, sparse LPN admits polynomial-time learning algorithms when $m>n^{k/2}$. However, much less is known about learning algorithms for constant $k$ (such as $k=3$) and $m<n^{k/2}$ samples, beyond the Gaussian elimination algorithm of running time $e^{\eta n}$. Our framework provides a learning algorithm of running time $e^{\tilde{O}(\eta\cdot n^{\frac{\delta+1}{2}})}$ given $\delta\in(0,1)$ and $m=\max\{1,\frac{\eta\cdot n^{\frac{\delta+1}{2}}}{k^{2}}\}\cdot n^{1+(1-\delta)\cdot\frac{k-1}{2}}$ samples. This improves upon previous learning algorithms over a wide range of parameters. For example, in the classical setting of $k=3$ and $m=n^{1.4}$ (FKO06; ABW10), our algorithm is faster than $e^{\eta n}$ for any $\eta<n^{-0.6}$.
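As a back-of-the-envelope check of the $k=3$, $m=n^{1.4}$ example (our own arithmetic instantiating the bounds above; the particular choice of $\delta$ is ours), write $\eta=n^{-c}$ with $c\in(0.6,0.8)$ and take $\delta=2.2-2c\in(0.6,1)$, so that $\frac{\delta+1}{2}=1.6-c$. The sample bound and running time then become
\[
m=\Theta\!\Big(\tfrac{\eta\, n^{\frac{\delta+1}{2}}}{k^{2}}\Big)\cdot n^{1+(1-\delta)\cdot\frac{k-1}{2}}
=\Theta\big(n^{1.6-2c}\big)\cdot n^{2-\delta}
=\Theta\big(n^{1.4}\big),
\qquad
e^{\tilde{O}(\eta\, n^{\frac{\delta+1}{2}})}=e^{\tilde{O}(n^{1.6-2c})},
\]
which beats $e^{\eta n}=e^{n^{1-c}}$ precisely because $1.6-2c<1-c$ whenever $c>0.6$.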

Since these two algorithms are based on the same algorithmic framework, our conceptual contribution is a connection between sparse LPN and LSPN.

keywords:
computational learning theory, Learning Parities with Noise (LPN), Learning Sparse Parities with Noise (LSPN), sparse LPN
Acknowledgments

Xue Chen is supported by Innovation Program for Quantum Science and Technology 2021ZD0302901, NSFC 62372424, and CCF-HuaweiLK2023006.

1 Introduction

The Learning Parities with Noise (LPN) problem and its variants are ubiquitous in learning theory. LPN is equivalent to the famous problem of decoding random linear codes in coding theory and has been widely used in cryptography as a security assumption. In a dimension-$n$ LPN problem with noise rate $\eta$, the algorithm tries to learn a hidden vector in $\mathbf{F}_{2}^{n}$, called $\mathbf{secret}$ in this work. However, the algorithm only has access to an oracle that generates random vectors ${\mathbf{x}}_{i}$ with labels $\boldsymbol{y}_{i}$, where ${\mathbf{x}}_{i}$ is uniformly distributed in $\mathbf{F}_{2}^{n}$ and $\boldsymbol{y}_{i}\in\mathbf{F}_{2}$ equals the inner product $\langle\mathbf{secret},{\mathbf{x}}_{i}\rangle$ with probability $1-\eta$.
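For concreteness, here is a minimal sketch of such a sample oracle (our own illustration, assuming NumPy; the name \texttt{lpn\_oracle} and all parameter values are hypothetical):

\begin{verbatim}
import numpy as np

def lpn_oracle(secret, eta, rng):
    """One LPN sample: x is uniform in F_2^n; the label equals <secret, x>
    over F_2, flipped independently with probability eta."""
    x = rng.integers(0, 2, size=len(secret))   # uniform vector in F_2^n
    flip = rng.random() < eta                  # Bernoulli(eta) noise
    y = (int(x @ secret) + int(flip)) % 2      # noisy inner product over F_2
    return x, y

# example usage
rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=20)
samples = [lpn_oracle(secret, eta=0.05, rng=rng) for _ in range(100)]
\end{verbatim}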

While there is a long line of research on this problem