Reed Solomon codes part 2
Asymptotic complexity of the best decoding algorithm known (to me ;) )
O(n log n + t log^2 t) for an (n,k) RS code over GF(n+1), where t = n-k
The proof for this is quite easy: syndrome calculation is just evaluating a polynomial at n-k points, and evaluating a polynomial over GF(n+1) at all points can be done with the GFFT; actually, evaluation at all points is the GFFT of the polynomial. Multiplying 2 polynomials is just 2 GFFTs + componentwise multiplication + 1 inverse GFFT. Finding the roots of a polynomial can likewise be done by just evaluating it at all points. The only non-trivial operation left for normal RS decoding is solving the key equation, which is equivalent to Euclid's GCD algorithm as well as Schönhage's GCD algorithm; the latter has O(t log^2 t) complexity (log^2 t == (log t)^2 in case that's unclear).
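To make the syndrome step concrete, here is a small sketch of my own (a toy field and a naive O(n^2) transform, not code from this post) showing that the syndromes are simply the first t entries of the GFFT of the received word:

```python
# Sketch (my own illustration): over GF(17) with primitive root alpha = 3,
# evaluating the received polynomial r(x) at alpha^1 .. alpha^t gives the
# syndromes; evaluating it at ALL powers of alpha is exactly the GFFT of r.

P = 17          # field size, GF(2^4 + 1)
ALPHA = 3       # primitive root mod 17 (order 16)

def gf_eval(poly, x):
    """Horner evaluation of poly (lowest coefficient first) in GF(P)."""
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % P
    return acc

def gfft(poly):
    """Naive O(n^2) GFFT: evaluate poly at all 16 powers of ALPHA."""
    return [gf_eval(poly, pow(ALPHA, i, P)) for i in range(P - 1)]

r = [5, 0, 11, 3, 7, 2]     # received word as polynomial coefficients
t = 2                       # number of syndromes for a code with 2 parity symbols
spectrum = gfft(r)
syndromes = [gf_eval(r, pow(ALPHA, i, P)) for i in range(1, t + 1)]
assert syndromes == spectrum[1:t + 1]   # syndromes = a slice of the GFFT
```

A fast decoder would of course replace the naive loop with an actual FFT over the field; the point here is only the identity between syndromes and spectrum entries.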
An alternative to GF(2^x)
Normally RS codes are built over GF(2^x); that way the elements of an RS codeword have a nice 1:1 mapping to x bits, which can then be stored or transmitted. But this has a big disadvantage: the GFFT for GF(y) needed for fast RS decoding is done over y-1 points, so it cannot use the well-known power-of-2 style FFT algorithms, as 2^x - 1 is not a multiple of 2. The solution is to use GF(2^x+1), though note that GF(2^x+1) does not exist for all integer values of x; it only exists if 2^x+1 is a power of a prime, that is p^j. 2 obvious choices using Fermat primes are GF(2^8+1) and GF(2^16+1)
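A minimal sketch of why the Fermat-prime fields are convenient: in GF(2^16+1) = GF(65537) the multiplicative group has order exactly 2^16, so the textbook radix-2 FFT recursion works unchanged as a transform over the field. This is my own illustration (3 is a known primitive root of 65537), not code from the post:

```python
# Number-theoretic transform over the Fermat prime 2^16 + 1 (my own sketch).
# Because 65537 - 1 = 2^16, power-of-two transform sizes divide the group
# order and the classic Cooley-Tukey recursion applies directly.

P = 2**16 + 1    # the Fermat prime 65537
G = 3            # a primitive root mod 65537

def ntt(a, omega):
    """Recursive radix-2 Cooley-Tukey transform over GF(P)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % P)
    odd  = ntt(a[1::2], omega * omega % P)
    out, w = [0] * n, 1
    for i in range(n // 2):
        out[i]          = (even[i] + w * odd[i]) % P
        out[i + n // 2] = (even[i] - w * odd[i]) % P
        w = w * omega % P
    return out

def intt(a, omega):
    """Inverse transform: run ntt with omega^-1, then divide by n."""
    n_inv = pow(len(a), P - 2, P)
    res = ntt(a, pow(omega, P - 2, P))
    return [x * n_inv % P for x in res]

n = 8
omega = pow(G, (P - 1) // n, P)   # a primitive n-th root of unity
data = [1, 2, 3, 4, 5, 6, 7, 8]
assert intt(ntt(data, omega), omega) == data   # exact round trip, no rounding
```

Unlike a floating-point FFT there is no rounding error anywhere; everything stays exact in the field.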
How do you store 2^x+1 values in 2^x values
Trivial ;)
The data part of our RS code is specified by the user, so it simply doesn't use the 2^x+1-th symbol; actually it would be messy to use it. So the only problem left are the n-k parity symbols, which can trivially be transformed to not contain the annoying 2^x+1-th symbol while at the same time maintaining the property of being an RS code
Let us assume that we have a symbol (at position y, with value yv) among our k input symbols which is guaranteed to have a value yv < 2^x - n + k; in practice that is less than one unused bit. Let p be the RS codeword with all k-1 other data symbols 0 and the symbol at position y equal to 1. The next step is to find all the values of the y element in our original codeword which would cause no parity symbol to take that annoying 2^x+1-th value; for encoding we simply select the yv-th element of this list as the new yv element. For decoding we choose the number of elements of the list which are smaller than yv as our new element. As a last step we just need to add a scaled version of p so as to actually have the wanted yv element while avoiding the nasty too-large elements and still having an RS code
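The whole remapping trick can be sketched end to end in a toy setting. Everything below is my own choice for illustration, not fixed by the post: a systematic (6,4) RS code over GF(17) = GF(2^4+1), reserved position y = 0, and the requirement yv < 16 - 6 + 4 = 14 so the list of safe values is always long enough:

```python
# Toy end-to-end sketch of the remapping scheme (my own illustration).
# All symbols of the final codeword avoid the value 16, so they fit in 4 bits.

P, N, K = 17, 6, 4
T = N - K                  # 2 parity symbols
BAD = P - 1                # the symbol value 16 that does not fit in 4 bits
G = [1, 5, 10]             # g(x) = (x-3)(x-9) mod 17, roots alpha^1, alpha^2
Y = 0                      # reserved data position (a design decision)

def parity(data):
    """Remainder of data(x) * x^T modulo g(x): systematic RS parity."""
    num = list(data) + [0] * T
    for i in range(K):
        q = num[i]                       # g is monic
        for j in range(T + 1):
            num[i + j] = (num[i + j] - q * G[j]) % P
    return [(-c) % P for c in num[K:]]

def safe_values(data):
    """All v for position Y that leave every codeword symbol < 16.
    Parity is linear in v, so each parity symbol rules out at most one v."""
    base = parity([0 if i == Y else d for i, d in enumerate(data)])
    unit = parity([1 if i == Y else 0 for i in range(K)])
    return [v for v in range(P - 1)      # v itself must also stay below 16
            if all((b + v * u) % P != BAD for b, u in zip(base, unit))]

def encode(data):                        # requires data[Y] = yv < 14
    mapped = list(data)
    mapped[Y] = safe_values(data)[data[Y]]   # pick the yv-th safe value
    return mapped + parity(mapped)

def decode(codeword):
    data = list(codeword[:K])
    # yv = number of safe values smaller than the stored symbol
    data[Y] = safe_values(data).index(data[Y])
    return data

data = [13, 9, 0, 15]                    # 4 user symbols, data[Y] < 14
cw = encode(data)
assert all(s != BAD for s in cw)         # every symbol fits in 4 bits
assert decode(cw) == data                # the original yv is recovered
```

Note how `safe_values` only depends on the other data symbols, so the decoder can rebuild exactly the same list without any side information.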
As far as I understand, your scheme for storing 2^x+1 values in 2^x bits has many problems.
First, it will not work if all the input data consists of binary 1’s. And if you are planning to use this scheme to encode real-world data you cannot guarantee that there will be no such input packets.
Second, the position needs to be stored with the RS codeword for the decoder, but it will not be protected by the code if transmitted in the same packet, so you need to encode this position as well and transmit it in some other packet (and so on recursively).
Third, is there any guarantee that you will _always_ find at least yv values of the selected input symbol that will not give 2^x+1 codes as parity symbols? Are there any proofs of this?
So, it’s not that trivial to store these 2^x+1 codes, I think…
Comment by Anonymous — 2009-08-10 @ 10:23
> First, it will not work if all the input data consists of binary 1’s. And if you are
> planning to use this scheme to encode real-world data you cannot guarantee that there
> will be no such input packets.
This specific scheme needs one 0 bit per packet in the worst case, which does not seem to be a big issue; any real-world case will need many bits for header data, and one bit more isn't much.
> Second, the position needs to be stored with the RS codeword for the decoder, but it
> will not be protected by the code if transmitted in the same packet, so you need to
> encode this position as well and transmit it in some other packet (and so on recursively).
The position is fixed; it's a design decision where to place it. It's not transmitted and thus not affected by any possible damage.
> Third, is there any guarantee that you will _always_ find at least yv values of the
> selected input symbol that will not give 2^x+1 codes as parity symbols? Are there any
> proofs of this?
Yes, you can prove this by looking at the opposite, that is, how many values of a single symbol can produce a 2^x+1 for a specific parity symbol (the answer is 1). So for n-k parity symbols, only n-k values of any specific symbol can produce a 2^x+1 anywhere
Comment by Michael — 2009-10-29 @ 00:28
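The counting argument in the reply above can also be checked exhaustively in a toy field. Each parity symbol is an affine function b + u*v of the chosen data symbol v, and an affine function over a field takes any given value for at most one v (exactly one when u != 0). A brute-force check over GF(17), my own sketch:

```python
# Exhaustive check (my own sketch, in the toy field GF(17)) of the claim
# that each parity symbol rules out at most one value of the chosen symbol:
# for every affine map v -> (b + u*v) mod 17 with u != 0, exactly one v
# hits the forbidden value 16.

P, BAD = 17, 16
for b in range(P):
    for u in range(1, P):   # u = 0 would mean the parity ignores v entirely
        hits = [v for v in range(P) if (b + u * v) % P == BAD]
        assert len(hits) == 1
```

So with n-k parity symbols, at most n-k values of the symbol are excluded, which is exactly why yv < 2^x - n + k suffices.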