Lair Of The Multimedia Guru

2006-10-29

Integer precision conversion

You have an 8bit grayscale value (or an audio sample) and want to convert it to 16bit, so without thinking about it you write x<<8. That generally isn't correct, because the new least significant bits are all zero, so the largest 8bit value maps to 255<<8 = 65280, which is 255 below the largest 16bit value (65535).

So what is correct?

This depends on your definition of what the values mean. If 0 is black, then simply multiplying by the new value for white and dividing by the old value for white, with rounding to nearest, will do the trick.
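In C, that rule might look like the following sketch; the function and parameter names are mine, not from the post, and old_white/new_white are whatever values represent white at the old and new precision:

/* scale x from 0..old_white to 0..new_white with rounding to nearest
   (assumes the product x*new_white fits in an unsigned int) */
static unsigned rescale(unsigned x, unsigned old_white, unsigned new_white)
{
    return (x * new_white + old_white / 2) / old_white;
}

For example, rescale(255, 255, 65535) gives 65535 and rescale(128, 255, 65535) gives 32896, which is 128*257.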

For the specific case where the largest representable value is white, increasing precision can be done by setting the new least significant bits to the most significant ones. For example, for 8->16bit: x+(x<<8) = x*65535/255 = x*257; for 8->10bit: (x<<2)+(x>>6), or if you are pedantic, (x*1023+127)/255 = (x*341+42)/85; and for 2->8bit: x+(x<<2)+(x<<4)+(x<<6) = x*85.
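Written out in C these mappings might look like this; the function names are mine, the arithmetic is from the post:

#include <stdint.h>

static uint16_t gray8_to_16(uint8_t x) { return (uint16_t)((x << 8) + x); }        /* == x*257 */
static uint16_t gray8_to_10(uint8_t x) { return (uint16_t)((x << 2) + (x >> 6)); } /* low 2 bits from the top 2 */
static uint8_t  gray2_to_8 (uint8_t x) { return (uint8_t)(x * 85); }               /* x in 0..3 */
static uint8_t  gray1_to_8 (uint8_t x) { return (uint8_t)(x * 255); }              /* x in 0..1, monochrome */

All of them map the all-ones input to the all-ones output, e.g. gray8_to_16(255) == 65535 and gray8_to_10(255) == 1023.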

For the specific case where largest+1 (256 in the case of 8bit) is white, just setting the low bits to 0 is indeed correct. But at least for low precision this is not how grayscale values are defined; just think about 1bit monochrome: converting that to 8bit by setting the low bits to 0 will give you 128 for 1, while white is 255 (or 256).

What about decreasing precision?

Well, that's simply the inverse, so for x*257 it is (x+128)/257, which can be approximated by (x*255+(1<<15))>>16.
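How close that approximation is can be checked exhaustively; a small sketch in C (function names are mine, the arithmetic is from the post):

#include <stdint.h>
#include <stdio.h>

static uint8_t gray16_to_8_exact (uint32_t x) { return (uint8_t)((x + 128) / 257); }
static uint8_t gray16_to_8_approx(uint32_t x) { return (uint8_t)((x * 255 + (1u << 15)) >> 16); }

int main(void)
{
    /* count the 16bit inputs where the multiply/shift approximation differs
       from the exact rounded division (it can come out one lower right at
       the rounding boundary) */
    unsigned differ = 0;
    for (uint32_t x = 0; x <= 0xFFFF; x++)
        if (gray16_to_8_exact(x) != gray16_to_8_approx(x))
            differ++;
    printf("exact and approximation differ on %u of 65536 inputs\n", differ);
    return 0;
}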

Filed under: Uncategorized — Michael @ 01:48

11 Comments »

  1. When increasing the dynamic range without dithering it can lead to banding and/or audio artefacts.
    Dithering can be really cheap with a pseudo random table.

    (Ps : Precission ?)

    Comment by mean — 2006-11-02 @ 08:42

  2. Yes, when the number of bits is decreased some sort of dithering should be added. Random dither as suggested is an option, though random dither is probably the worst choice: ordered dither, where the distortion is shifted to less visible high frequencies (or less audible frequencies), is always superior and has the same computational complexity at runtime. Even better are error diffusion style dithers for images …

    Comment by Michael — 2006-11-02 @ 12:30

  3. Dither (or similar) is more needed when increasing the number of bits than when decreasing.

    Comment by Rich — 2006-11-03 @ 01:51

  4. Dither is adding noise to reduce the quantization error of specific parts of the signal (that's what my memory says and Wikipedia agrees). There is no quantization and no quantization error when increasing the number of bits, so no dither is possible.

    What is possible is adding random noise, but that will always increase the error when increasing the number of bits (the error is 0 relative to what we have without noise).
    When decreasing the number of bits, adding noise can reduce the error of specific parts of the spectrum, though the average error of course increases too.

    Another thing that is possible is to guess the original signal behind a quantized signal and then dither that to a higher number of bits, but I think that wasn't what you meant?

    Comment by Michael — 2006-11-03 @ 02:36

  5. Oops, yes, it was decreasing, not increasing.

    Correct me if I'm wrong, but error diffusion is more expensive than simple pseudo random noise, which is almost free.
    The thing being that the eye is very sensitive to banding (for images).
    The error induced by the dithering is 1, whatever the dynamic is.

    Furthermore, the per-pixel-error is the same as without dithering but the average error is lowered.

    Taking an example, if you go from dynamic 0–255 to 0–1 (black and white)
    Let’s say that a block of pixels is 128, the per-pixel error is 128 without dithering and the cumulative error is 128*nb_pixel

    If you do pseudo random dithering, the per-pixel error is still 128 but the cumulative error is null if enough pixels are present

    (cumulative error = sum of errors without abs nor ^2) and noticeably more pleasing to the eye.

    It is a simple example, but it works with more complicated ones. The thing is that the pseudo random values must be chosen so that the probability is related to the value, and the dithering is not added after quantization without taking the quantization into account.

    For info, the pseudo random dithering I have in mind and usually use is the following (sketched more concretely after the comment thread):

    random_table[]= random numbers between 0 and table_size-1, each appearing only once.
    if( quant_error > random_table[x]) target++;

    x being computed to be as random as possible, the simplest way being
    static int x;
    x = (x+1) & (table_size-1); /* assumes table_size is a power of two */

    So it is not pure random, but correlated with the probability = the quant_error.

    Not sure it is that clear :)

    Comment by mean — 2006-11-05 @ 12:13

    > Correct me if I'm wrong, but error diffusion is more expensive than simple pseudo
    > random noise, which is almost free.

    correct

    > The thing being that the eye is very sensitive to banding (for images)

    sure sure

    > The error induced by the dithering is 1, whatever the dynamic is.

    No, this is not true in general: take an image of constant color 250 in 0..255 and convert it to 0..1.

    > Furthermore, the per-pixel-error is the same as without dithering

    No, the per-pixel error increases with dither.

    > but the average error is lowered.

    Depends on how you define average error; the normal sum of squares or sum of absolute values is not lowered.
    If you do some psychovisual stuff or some frequency-dependent weighting then yes, the error can and likely will decrease with dither.

    > Taking an example, if you go from dynamic 0–255 to 0–1 (black and white)
    > Let’s say that a block of pixels is 128, the per-pixel error is 128 without
    > dithering and the cumulative error is 128*nb_pixel

    No, 255-128 = 127, and the cumulative error as defined by you is 127*nb_pixel.

    > If you do pseudo random dithering, the per-pixel error is still 128 but the

    No, it will alternate between 127 and 128.

    > cumulative error is null if enough pixels are present

    No, with the definition below the error will increase and diverge from 0 for random dither as you add more pixels.

    > (cumulative error = sum of errors without abs nor ^2) and noticeably more pleasing
    > to the eye.
    […]

    Comment by Michael — 2006-11-05 @ 16:14

  7. The error was not normalized.

    Let me put it this way: the average value is closer when using dithering.

    Let's take a sample: source 0–255, target 0–1.

    Source: 100 100 100 100 100; average normalized value = 100/255 ≈ 0.4, average normalized error = 0 (since it is the source :) )
    No dithering: 0 0 0 0 0; average normalized value = 0, average normalized error = 100/255 ≈ 0.4
    With dithering: 0 0 1 0 1; average normalized value = 2/5 = 0.4, average normalized error = (3*100+2*155)/(5*255) ≈ 0.48

    So you slightly increase the error, but the average value is closer to what it should be.
    The main point is to avoid banding.

    Comment by mean — 2006-11-05 @ 19:50

  8. And (x

    Comment by TjOeNeR — 2006-12-20 @ 19:19

  9. 16

    Comment by Ivan Kalvachev — 2006-12-31 @ 14:48

  10. 8 to 16
    y=(x

    Comment by Ivan Kalvachev — 2006-12-31 @ 14:50

  11. I'm starting to hate WordPress. I can't write any decent expression. There is no preview or help for the expressions.

    Last try.

    Converting from 8bit to 16bit can be done by shifting x left by 8 bits and then adding the unshifted x back in: 0 stays 0, and 0xff becomes 0xffff.

    y=(x shl 8) | x;

    I’ve seen this done in the old Voodoo cards that stored textures as 16 bit, but worked on 32 bit internally.

    Comment by Ivan Kalvachev — 2006-12-31 @ 14:57
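For what it is worth, here is a small self-contained sketch of the table-based dither described in comment 5, applied to an 8bit to 1bit reduction. The table contents, its size of 16 entries, and the constant-100 test are my choices; only the idea, bumping the quantized value whenever the quantization error exceeds a pseudo-random threshold, is from the comment:

#include <stdio.h>

/* 16 thresholds spread over 0..255, listed in a scrambled order */
static const int random_table[16] = {
      8, 136, 200,  72, 168,  40, 232, 104,
     24, 216,  88, 152,  56, 248, 120, 184
};

/* reduce an 8bit value (0..255) to 1bit (0..1) with a dithered threshold */
static int dither_8_to_1(int x, int i)
{
    int target      = x / 255;          /* truncating quantization: 0 unless x == 255 */
    int quant_error = x - target * 255; /* 0..254 */
    if (quant_error > random_table[i & 15])
        target++;                       /* set roughly quant_error/256 of the pixels */
    return target;
}

int main(void)
{
    /* the constant-100 example from comment 7 */
    int n = 1 << 16, sum = 0;
    for (int i = 0; i < n; i++)
        sum += dither_8_to_1(100, i);
    printf("average dithered value %.3f, ideal 100/255 = %.3f\n", (double)sum / n, 100.0 / 255);
    return 0;
}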
