Guessing random additive noise decoding with soft detection symbol reliability information (SGRAND)
Assuming hard detection from an additive noise channel, we recently introduced two new channel decoding algorithms that can be used with any code-book construction. For uniform sources using arbitrary code-books, Guessing Random Additive Noise Decoding (GRAND) identifies a Maximum Likelihood (ML) decoding, while GRAND with abandonment (GRANDAB) either identifies an ML decoding or declares an error after a fixed number of computations. Both algorithms exhibit the unusual feature that their complexity decreases as the code-book rate increases. With an appropriately chosen abandonment threshold for GRANDAB, we have previously established that both decoding algorithms are capacity achieving when used with random code-books for a broad class of noise processes. Here we extend the GRAND approach to the setting where soft detection symbol reliability information is available at the receiver. In particular, we assume that each symbol received from the channel is declared to be either error free or potentially subject to independent additive noise. We introduce variants of GRAND and GRANDAB that readily incorporate this soft detection information: Soft GRAND (SGRAND) identifies an ML decoding, and Soft GRANDAB (SGRANDAB) either identifies an ML decoding or declares an error. These algorithms inherit desirable properties of their hard detection equivalents, such as being capacity achieving when used with random code-books and having complexity that decreases as the code-book rate increases. With this additional symbol reliability information, the new algorithms have reduced complexity relative to their hard detection counterparts and can achieve higher rates with lower error probabilities.
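As a rough illustration of the guessing-noise idea, and not the paper's precise algorithm, the following Python sketch assumes a binary channel, a hypothetical codebook-membership oracle `is_codeword`, and i.i.d. noise for which increasing Hamming weight matches decreasing likelihood order. The soft detection information enters as the set `unreliable` of positions that may have been corrupted, and `max_queries` plays the role of the SGRANDAB abandonment threshold.

```python
from itertools import combinations

import numpy as np


def sgrand(y, is_codeword, unreliable, max_queries=None):
    """Sketch of soft-detection GRAND on a binary additive-noise channel.

    Putative noise patterns are queried in decreasing likelihood order
    (here: increasing Hamming weight, the ML order for an i.i.d. binary
    symmetric channel with crossover probability below 1/2), with flips
    restricted to the positions the soft information marks as potentially
    noisy. Returns an ML codeword, or None once `max_queries` guesses
    have been made without success (abandonment, as in SGRANDAB).
    """
    queries = 0
    for weight in range(len(unreliable) + 1):
        for flips in combinations(unreliable, weight):
            if max_queries is not None and queries >= max_queries:
                return None  # abandon: declare a decoding error
            queries += 1
            candidate = y.copy()
            candidate[list(flips)] ^= 1  # subtract the guessed noise
            if is_codeword(candidate):
                return candidate  # first hit is an ML decoding
    return None


# Toy usage with a length-8 single parity-check code: only positions
# 2 and 5 are flagged as possibly noisy, so at most 4 guesses are needed.
is_spc = lambda c: c.sum() % 2 == 0
y = np.array([1, 0, 1, 1, 0, 0, 1, 1])
print(sgrand(y, is_spc, unreliable=[2, 5]))
```

Restricting guesses to the unreliable positions is what makes the soft variants cheaper than their hard detection counterparts: the number of candidate noise patterns grows with the size of `unreliable`, not with the block length.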