Shashi Kiran Chilappagari

Publications


Recent Publications

  • S. K. Chilappagari, M. G. Stepanov, M. Chertkov and B. Vasic, "Analysis of Error Floors of LDPC Codes under LP Decoding over the BSC", submitted to ISIT 2009

    We consider Linear Programming (LP) decoding of a fixed Low-Density Parity-Check (LDPC) code over the Binary Symmetric Channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm, termed the Instanton Search Algorithm (ISA), which, given a random input, generates a set of flips called the BSC-instanton. We prove that: (a) the LP decoder fails for any set of flips whose support vector includes an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. Repeated a sufficient number of times, the ISA yields a collection of unique instantons of different sizes. We use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC. (An illustrative sketch of the related notion of a minimal failing set of flips is given after the publication list.)
  • S. K. Chilappagari, M. Chertkov, M. G. Stepanov and B. Vasic, "Instanton-based Techniques for Analysis and Reduction of Error Floors of LDPC Codes", to appear in IEEE JSAC on Capacity Approaching Codes

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders operating over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code over different channels and under different decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures and thus have less significant error floors.
  • S. K. Chilappagari, M. Chertkov and B. Vasic, "Provably efficient instanton search algorithm for LP decoding of LDPC codes over the BSC", submitted to IEEE Transactions on Information Theory, September 2008

    We consider Linear Programming (LP) decoding of a fixed Low-Density Parity-Check (LDPC) code over the Binary Symmetric Channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We design an efficient algorithm, termed the Instanton Search Algorithm (ISA), which, given a random input, generates a set of flips called the BSC-instanton. We prove that: (a) the LP decoder fails for any set of flips whose support vector includes an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. Repeated a sufficient number of times, the ISA yields a collection of unique instantons of different sizes.
  • S. K. Chilappagari, D. V. Nguyen, B. Vasic and M. W. Marcellin, "On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes", under second round of review at IEEE Transactions on Information Theory

    The relation between the girth and the guaranteed error correction capability of $\gamma$-left-regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least $3\gamma/4$ is found based on the Moore bound. This bound, combined with well-known expander-based arguments, leads to a lower bound on the guaranteed error correction capability. The decoding failures of the bit flipping algorithms are characterized using the notions of trapping sets and fixed sets. The relation between fixed sets and a class of graphs known as cage graphs is studied. Upper bounds on the guaranteed error correction capability are then established based on the order of cage graphs. The results are extended to left-regular and right-uniform generalized LDPC codes. It is shown that this class of generalized LDPC codes can correct a number of worst-case errors that grows linearly in the code length under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. It is also shown that the bound on the required expansion cannot be improved when the left degree is even. A lower bound on the size of variable node sets which have the required expansion is established. (A sketch of the parallel bit flipping rule is given after the publication list.)
  • S. K. Chilappagari, D. V. Nguyen, B. Vasic and M. W. Marcellin, "Error Correction Capability of Column-Weight-Three LDPC Codes: Part II", submitted to IEEE Transactions on Information Theory

    The relation between the girth and the error correction capability of column-weight-three LDPC codes is investigated. Specifically, it is shown that the Gallager A algorithm can correct $g/2-1$ errors in $g/2$ iterations on a Tanner graph of girth $g \geq 10$.
  • S. K. Chilappagari and B. Vasic, "Error Correction Capability of Column-Weight-Three LDPC Codes", to appear in IEEE Transactions on Information Theory, May 2009

    In this paper, the error correction capability of column-weight-three LDPC codes when decoded using the Gallager A algorithm is investigated. It is proved that a necessary condition for a code to correct all error patterns with up to $k \geq 5$ errors is to avoid cycles of length up to $2k$ in its Tanner graph. As a consequence of this result, it is shown that for any $\alpha > 0$ there exists an $N$ such that for all $n > N$, no code in the ensemble of column-weight-three codes can correct all $\alpha n$ or fewer errors. The results are extended to the bit flipping algorithms. (A sketch of the Gallager A message-passing rules is given after the publication list.)
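
The two ISA abstracts above hinge on the idea of a failing set of flips that cannot be made smaller without the LP decoder succeeding. The sketch below is not the ISA itself and carries none of its step guarantees; it is only a brute-force illustration of shrinking a failing flip set to an inclusion-minimal one, with the decoder abstracted behind a hypothetical decoder_fails callback that a caller would implement around an actual LP decoder.

    from typing import Callable, FrozenSet

    def minimal_failing_subset(flips: FrozenSet[int],
                               decoder_fails: Callable[[FrozenSet[int]], bool]) -> FrozenSet[int]:
        """Greedily shrink a failing set of flipped bit positions until removing
        any single flip would make the decoder succeed.  Only an illustration of
        an inclusion-minimal failing flip set; NOT the paper's Instanton Search
        Algorithm and without its step bound."""
        assert decoder_fails(flips), "start from a set of flips on which the decoder fails"
        current = set(flips)
        shrunk = True
        while shrunk:
            shrunk = False
            for bit in sorted(current):
                candidate = frozenset(current - {bit})
                if candidate and decoder_fails(candidate):
                    current = set(candidate)   # still a decoding failure: keep the smaller set
                    shrunk = True
                    break
        return frozenset(current)

Here decoder_fails(flips) would flip the given positions of a transmitted codeword, run the LP decoder on the result, and report whether the output is a pseudo-codeword rather than a codeword.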
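
For the trapping set and guaranteed error correction paper, the object of study is the parallel bit flipping decoder. The sketch below implements the common variant in which every bit connected to a strict majority of unsatisfied checks is flipped simultaneously; the exact threshold analyzed in the paper may differ, so treat this as a generic illustration rather than the paper's decoder.

    import numpy as np

    def parallel_bit_flipping(H: np.ndarray, y: np.ndarray, max_iter: int = 50) -> np.ndarray:
        """Generic parallel bit-flipping decoding of a binary LDPC code.
        H: m x n parity-check matrix over GF(2); y: length-n hard-decision word.
        In each iteration every bit involved in more unsatisfied than satisfied
        checks is flipped simultaneously."""
        x = y.copy() % 2
        col_deg = H.sum(axis=0)                      # degree of each variable node
        for _ in range(max_iter):
            syndrome = (H @ x) % 2                   # 1 marks an unsatisfied check
            if not syndrome.any():
                break                                # all checks satisfied: codeword reached
            unsat = H.T @ syndrome                   # per-bit count of unsatisfied checks
            flip = unsat > col_deg / 2               # strict-majority flipping rule
            if not flip.any():
                break                                # no bit qualifies: decoder is stuck
            x = (x + flip.astype(int)) % 2
        return x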
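
The two column-weight-three papers analyze the Gallager A algorithm. The sketch below spells out its standard hard-decision message-passing rules: a check node sends the parity of the other incoming messages, and a variable node sends its received bit unless all other incoming check messages agree on the complement. The final decision rule coded here (flip the received bit only when every incoming check message disagrees with it) is one common variant and is an assumption, not necessarily the rule used in the papers.

    import numpy as np

    def gallager_a(H: np.ndarray, y: np.ndarray, max_iter: int = 20) -> np.ndarray:
        """Gallager A hard-decision message passing on the Tanner graph of H.
        H: m x n parity-check matrix over GF(2); y: length-n received word."""
        m, n = H.shape
        checks_of = [list(map(int, np.flatnonzero(H[:, v]))) for v in range(n)]  # checks adjacent to bit v
        vars_of = [list(map(int, np.flatnonzero(H[c, :]))) for c in range(m)]    # bits adjacent to check c
        v2c = {(c, v): int(y[v]) for v in range(n) for c in checks_of[v]}        # initial messages = received bits
        c2v = {}
        x = y.copy() % 2
        for _ in range(max_iter):
            # check-to-variable: parity (XOR) of the other incoming variable messages
            for c in range(m):
                total = sum(v2c[(c, v)] for v in vars_of[c]) % 2
                for v in vars_of[c]:
                    c2v[(c, v)] = (total - v2c[(c, v)]) % 2
            # tentative decision: flip y[v] only if every incoming check message disagrees with it
            x = np.array([1 - int(y[v]) if all(c2v[(c, v)] != y[v] for c in checks_of[v]) else int(y[v])
                          for v in range(n)], dtype=int)
            if not ((H @ x) % 2).any():
                break                                # valid codeword reached
            # variable-to-check: received bit unless all other checks agree on its complement
            for v in range(n):
                for c in checks_of[v]:
                    others = [c2v[(c2, v)] for c2 in checks_of[v] if c2 != c]
                    if others and all(msg != y[v] for msg in others):
                        v2c[(c, v)] = 1 - int(y[v])
                    else:
                        v2c[(c, v)] = int(y[v])
        return x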