TY - GEN
T1 - Near-optimal erasure list-decodable codes
AU - Ben-Aroya, Avraham
AU - Doron, Dean
AU - Ta-Shma, Amnon
N1 - Publisher Copyright:
© Avraham Ben-Aroya, Dean Doron, and Amnon Ta-Shma; licensed under Creative Commons License CC-BY. 35th Computational Complexity Conference (CCC 2020).
PY - 2020/7/1
Y1 - 2020/7/1
N2 - A code C ⊆ {0, 1}^n̄ is (s, L) erasure list-decodable if for every word w, after erasing any s symbols of w, the remaining n̄ − s symbols have at most L possible completions into a codeword of C. Non-explicitly, there exist binary ((1 − τ)n̄, L) erasure list-decodable codes with rate approaching τ and tiny list-size L = O(log(1/τ)). Achieving either of these parameters explicitly is a natural open problem (see, e.g., [26, 24, 25]). While partial progress on the problem has been achieved, no prior nontrivial explicit construction achieved rate better than Ω(τ^2) or list-size smaller than Ω(1/τ). Furthermore, Guruswami showed no linear code can have list-size smaller than Ω(1/τ) [24]. We construct an explicit binary ((1 − τ)n̄, L) erasure list-decodable code having rate τ^(1+γ) (for any constant γ > 0 and small τ) and list-size poly(log(1/τ)), answering simultaneously both questions, and exhibiting an explicit non-linear code that provably beats the best possible linear code. The binary erasure list-decoding problem is equivalent to the construction of explicit, low-error, strong dispersers outputting one bit with minimal entropy-loss and seed-length. For error ε, no prior explicit construction achieved seed-length better than 2 log(1/ε) or entropy-loss smaller than 2 log(1/ε), which are the best possible parameters for extractors. We explicitly construct an ε-error one-bit strong disperser with near-optimal seed-length (1 + γ) log(1/ε) and entropy-loss O(log log(1/ε)). The main ingredient in our construction is a new (and almost-optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length n and tiny min-entropy O(log log n) and the other source has length O(log n) and arbitrarily small constant min-entropy rate. When instantiated as a balanced two-source extractor, it improves upon Raz's extractor [39] in the constant error regime. The construction incorporates recent components and ideas from extractor theory with a delicate and novel analysis needed in order to solve dependency and error issues that prevented previous papers (such as [27, 9, 13]) from achieving the above results.
AB - A code C ⊆ {0, 1}^n̄ is (s, L) erasure list-decodable if for every word w, after erasing any s symbols of w, the remaining n̄ − s symbols have at most L possible completions into a codeword of C. Non-explicitly, there exist binary ((1 − τ)n̄, L) erasure list-decodable codes with rate approaching τ and tiny list-size L = O(log(1/τ)). Achieving either of these parameters explicitly is a natural open problem (see, e.g., [26, 24, 25]). While partial progress on the problem has been achieved, no prior nontrivial explicit construction achieved rate better than Ω(τ^2) or list-size smaller than Ω(1/τ). Furthermore, Guruswami showed no linear code can have list-size smaller than Ω(1/τ) [24]. We construct an explicit binary ((1 − τ)n̄, L) erasure list-decodable code having rate τ^(1+γ) (for any constant γ > 0 and small τ) and list-size poly(log(1/τ)), answering simultaneously both questions, and exhibiting an explicit non-linear code that provably beats the best possible linear code. The binary erasure list-decoding problem is equivalent to the construction of explicit, low-error, strong dispersers outputting one bit with minimal entropy-loss and seed-length. For error ε, no prior explicit construction achieved seed-length better than 2 log(1/ε) or entropy-loss smaller than 2 log(1/ε), which are the best possible parameters for extractors. We explicitly construct an ε-error one-bit strong disperser with near-optimal seed-length (1 + γ) log(1/ε) and entropy-loss O(log log(1/ε)). The main ingredient in our construction is a new (and almost-optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length n and tiny min-entropy O(log log n) and the other source has length O(log n) and arbitrarily small constant min-entropy rate. When instantiated as a balanced two-source extractor, it improves upon Raz's extractor [39] in the constant error regime. The construction incorporates recent components and ideas from extractor theory with a delicate and novel analysis needed in order to solve dependency and error issues that prevented previous papers (such as [27, 9, 13]) from achieving the above results.
KW - Dispersers
KW - Erasure codes
KW - List decoding
KW - Ramsey graphs
KW - Two-source extractors
UR - http://www.scopus.com/inward/record.url?scp=85089403244&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.CCC.2020.1
DO - 10.4230/LIPIcs.CCC.2020.1
M3 - Conference contribution
AN - SCOPUS:85089403244
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 35th Computational Complexity Conference, CCC 2020
A2 - Saraf, Shubhangi
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 35th Computational Complexity Conference, CCC 2020
Y2 - 28 July 2020 through 31 July 2020
ER -