Title: Zero Deletion/Insertion Codes and Zero Error Capacity
In this paper, the theory and design of codes capable of correcting t insertions/deletions of the symbol 0 in each and every bucket of zeros (i.e., the zeros between two consecutive ones) are studied. It is shown that this problem is related to zero-error-capacity-achieving codes for the limited-magnitude error channel. Close-to-optimal non-systematic code designs and the associated encoding/decoding algorithms are described.
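As a rough illustration of the bucket view described in the abstract (a hypothetical sketch, not the paper's actual construction): a binary word can be represented by the lengths of its runs of zeros between consecutive ones, and a 0-insertion or 0-deletion changes one run length by one. The helper names below are invented for this example.

```python
# Sketch only: represent a binary word by its "buckets of zeros" -- the
# lengths of the 0-runs delimited by the 1s (including the leading and
# trailing runs).  A 0-insertion/0-deletion changes one bucket length by
# +/-1, which is why t 0-errors per bucket behave like magnitude-t errors
# on the run-length vector.

def zero_buckets(word: str) -> list:
    """Lengths of the zero runs between consecutive ones."""
    return [len(run) for run in word.split("1")]

def from_buckets(buckets: list) -> str:
    """Inverse map: rebuild the binary word from its bucket lengths."""
    return "1".join("0" * b for b in buckets)

w = "0010001101"
b = zero_buckets(w)           # [2, 3, 0, 1, 0]
b2 = b.copy()
b2[1] -= 1                    # delete one 0 from the second bucket
w2 = from_buckets(b2)         # "001001101"
```

The round trip `from_buckets(zero_buckets(w)) == w` holds for every binary word, so correcting 0-errors is equivalent to correcting bounded changes of the bucket vector.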
Award ID(s): 2006571
PAR ID: 10562485
Publisher / Repository: IEEE
ISBN: 978-1-6654-2159-1
Page Range / eLocation ID: 986 to 991
Subject(s) / Keyword(s): Codes, Systematics, Symbols, Process control, Turbo codes, Parity check codes, Encoding
Format(s): Medium: X
Location: Espoo, Finland
Sponsoring Org: National Science Foundation
More Like this
  1. We analyze metrics for how close an entire function of genus one is to having only real roots. These metrics arise from truncated Hankel matrix positivity-type conditions built from power series coefficients at each real point. Specifically, if such a function satisfies our positivity conditions and has well-spaced zeros, we show that all of its zeros have to (in some explicitly quantified sense) be far away from the real axis. The obvious interesting example arises from the Riemann zeta function, where our positivity conditions yield a family of relaxations of the Riemann hypothesis. One might guess that as we tighten our relaxation, the zeros of the zeta function must be close to the critical line. We show that the opposite occurs: any potential non-real zeros are forced to be farther and farther away from the critical line.
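A sketch of the kind of condition involved (the paper's precise matrices and truncations may differ): expand the function about a real point and form Hankel matrices from the power-series coefficients.

```latex
% Sketch only; the paper's exact positivity conditions may differ.
% Expand f about a real point t:
f(t+z) = \sum_{k \ge 0} a_k(t)\, z^k .
% The order-n truncated Hankel matrix of the coefficients is
H_n(t) = \bigl( a_{j+k}(t) \bigr)_{0 \le j,k \le n},
% and a positivity-type condition asks that
H_n(t) \succeq 0 \quad \text{for all } t \in \mathbb{R}.
```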
  2. Abstract: Zero-inflated and hurdle models are widely applied to count data possessing excess zeros, where they can simultaneously model the process from which the zeros were generated and potentially help mitigate the effects of overdispersion relative to the assumed count distribution. Which model to use depends on how the zeros arise: zero-inflated models add an additional probability mass on zero, while hurdle models are two-part models consisting of a degenerate distribution for the zeros and a zero-truncated distribution. Developing confidence intervals for such models is challenging since no closed-form function is available to calculate the mean. In this study, generalized fiducial inference is used to construct confidence intervals for the means of zero-inflated Poisson and Poisson hurdle models. The proposed methods are assessed by an intensive simulation study. An illustrative example demonstrates the inference methods.
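The means of the two models mentioned above have standard closed forms when the parameters are known (the inferential difficulty in the paper concerns interval estimation, not these formulas). A minimal sketch, using the textbook parameterizations: a zero-inflated Poisson with inflation probability pi has mean (1 - pi) * lambda, and a Poisson hurdle model with zero probability pi0 and a zero-truncated Poisson(lambda) positive part has mean (1 - pi0) * lambda / (1 - exp(-lambda)).

```python
import math

def zip_mean(pi: float, lam: float) -> float:
    """Mean of a zero-inflated Poisson: extra mass pi on zero,
    Poisson(lam) with probability 1 - pi."""
    return (1.0 - pi) * lam

def hurdle_poisson_mean(pi0: float, lam: float) -> float:
    """Mean of a Poisson hurdle model: zero with probability pi0,
    otherwise a zero-truncated Poisson(lam) draw."""
    return (1.0 - pi0) * lam / (1.0 - math.exp(-lam))

print(zip_mean(0.3, 2.0))  # 1.4
```

Note that when pi0 equals the Poisson zero probability exp(-lambda), the hurdle mean reduces to lambda, matching an ordinary Poisson.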
  3. Abstract: Researchers view the vast numbers of zeros in single-cell RNA-seq data differently: some regard zeros as biological signals representing no or low gene expression, while others regard zeros as missing data to be corrected. To help address the controversy, here we discuss the sources of biological and non-biological zeros; introduce five mechanisms for adding non-biological zeros in computational benchmarking; evaluate the impacts of non-biological zeros on data analysis; benchmark three input data types: observed counts, imputed counts, and binarized counts; discuss the open questions regarding non-biological zeros; and emphasize the importance of transparent analysis.
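One hypothetical mechanism for adding non-biological zeros in a benchmark (not necessarily one of the five in the paper) is binomial downsampling, where each unit of each count survives with a fixed probability; low counts are then frequently thinned to zero. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_dropout_zeros(counts: np.ndarray, keep_prob: float) -> np.ndarray:
    """Binomial downsampling of a count matrix: each unit of each count
    survives independently with probability keep_prob.  Small true counts
    are often thinned all the way to non-biological zeros."""
    return rng.binomial(counts, keep_prob)

true_counts = rng.poisson(1.0, size=(100, 50))        # genes x cells
observed = add_dropout_zeros(true_counts, keep_prob=0.3)
# Thinning can only reduce counts, so the zero fraction can only grow:
print((observed == 0).mean() >= (true_counts == 0).mean())  # True
```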
  4. This paper presents the first proof of polarization for the deletion channel with a constant deletion rate and a regular hidden-Markov input distribution. A key part of this work involves representing the deletion channel using a trellis and describing the plus and minus polar-decoding operations on this trellis. In particular, the plus and minus operations can be seen as combining adjacent trellis stages to yield a new trellis with half as many stages. Using this viewpoint, we prove a weak polarization theorem for standard polar codes on the deletion channel. To achieve strong polarization, we modify this scheme by adding guard bands of repeated zeros between various parts of the codeword. Using this approach, we obtain a scheme whose rate approaches the mutual information and whose probability of error decays exponentially in the cube-root of the block length. 
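The guard-band idea above can be sketched very simply: split the codeword into pieces and separate consecutive pieces with runs of repeated zeros. This toy helper is illustrative only; the paper chooses the number and length of guard bands as functions of the block length to obtain strong polarization.

```python
def add_guard_bands(codeword: str, num_blocks: int, guard_len: int) -> str:
    """Split codeword into num_blocks pieces and insert a run of
    guard_len zeros between consecutive pieces (illustrative sketch)."""
    n = len(codeword)
    size = -(-n // num_blocks)  # ceiling division
    pieces = [codeword[i:i + size] for i in range(0, n, size)]
    return ("0" * guard_len).join(pieces)

print(add_guard_bands("11111111", 4, 3))  # 11000110001100011
```

The rate cost of the guard bands vanishes asymptotically when guard_len grows more slowly than the piece length.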
  5. This paper gives some theory and efficient designs of binary block codes capable of controlling the deletions of the symbol "0" (referred to as 0-deletions) and/or the insertions of the symbol "0" (referred to as 0-insertions). This problem of controlling 0-deletions and/or 0-insertions (referred to as 0-errors) is shown to be equivalent to the efficient design of L1-metric asymmetric error control codes over the natural alphabet, ℕ. In this way, it is shown that t 0-insertion correcting codes are actually capable of controlling much more; namely, they can correct t 0-errors, detect (t+1) 0-errors and, simultaneously, detect all occurrences of only 0-deletions or only 0-insertions in every received word (briefly, they are t-Symmetric 0-Error Correcting/(t+1)-Symmetric 0-Error Detecting/All Unidirectional 0-Error Detecting (t-Sy0EC/(t+1)-Sy0ED/AU0ED) codes). From the relations with the L1-distance error control codes, new improved bounds are given for the optimal t 0-error correcting codes. Optimal non-systematic code designs are given. Decoding can be efficiently performed by algebraic means using the Extended Euclidean Algorithm (EEA).
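The equivalence with L1-metric codes over ℕ claimed above can be made concrete in a small sketch (illustrative helpers, not the paper's notation): map each binary word with a fixed number of ones to its vector of zero-run lengths; then the total number of 0-errors between two words equals the L1 distance between their run-length vectors.

```python
def runs_of_zeros(word: str) -> list:
    """Run-length vector over the naturals: lengths of the 0-runs
    delimited by the 1s (including leading and trailing runs)."""
    return [len(r) for r in word.split("1")]

def l1(u: list, v: list) -> int:
    """L1 distance between two equal-length integer vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

x = "0100100"   # run vector [1, 2, 2]
y = "0101000"   # one 0-deletion in run 2, one 0-insertion in run 3: [1, 1, 3]
print(l1(runs_of_zeros(x), runs_of_zeros(y)))  # 2 == total number of 0-errors
```

Under this map, asymmetric/unidirectional error control in the L1 metric translates directly into the t-Sy0EC/(t+1)-Sy0ED/AU0ED capabilities described in the abstract.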