Lair Of The Multimedia Guru

May 11, 2007

Tic Tac Toe

Java sucks, all remotely sane humans know that nowadays, but long long ago I did the unimaginable and voluntarily wrote some small Java programs (until I realized how badly Java sucks). Today's, or actually tonight's, blog entry is dedicated to one of these, a little Tic-Tac-Toe game, which you should be able to see below in the unlikely case that you have Java enabled in your browser.

The 8(?) year old GPL source is available too, and in case you are wondering, the images were created with POV-Ray, and yes, patches to improve it are welcome ;)

PS: if you win, send me a bug report ;)

Filed under: Off Topic,Uncategorized — Michael @ 1:38

March 25, 2007

Resampling

Resampling is the problem of converting a signal which is sampled at a given samplerate into one which is sampled at another samplerate. This is the same as converting a discretely sampled signal to a continuous one and then converting it back to a discretely sampled one.
There are many ways to look at this problem, and they do not all lead to the same solutions.

The optimization point of view

The problem can be seen as one of optimization: choose the function which converts the discrete samples to a continuous signal so that the error relative to the original correct signal is minimized, and then select the function which converts the continuous signal to discrete samples so that, together with an optimal back conversion, the error is minimized.
While this is a nice generic definition of the problem, it doesn't point to an obvious solution; also, the optimal solution will almost certainly not be remotely close to a common-sense definition of what samples represent, it might be more similar to a vector quantizer or some other odd non-linear thingy …

The linear optimization point of view

Like in the previous point we again try to find continuous->discrete->continuous conversion functions which minimize the error relative to the original, but we restrict ourselves to linear convolutional filters. Note that a continuous signal here means one which is either given analytically (like a polynomial) or discretely sampled at a higher resolution.
Here it is a matter of linear algebra and a training set of signals to find the
optimal discrete->continuous filter for a given continuous->discrete filter, and the optimal continuous->discrete filter for a given discrete->continuous filter.
In this framework it is also possible to consider output and input devices. For example a monitor converts the discretely sampled data into very tiny fuzzy colorful dots which, when they are all added together, give a continuous signal. The monitor here is a discrete->continuous filter, for which we can find the optimal linear least squares filter that converts a given signal to a discretely sampled one.

The bandlimited point of view

It is possible to look at an amplitude-per-time signal also as an amplitude-per-frequency signal. If our signal happens to contain no non-zero frequency component above a specific frequency X then this signal is said to be bandlimited. If we now sample such a signal with a sampling rate >2X then we can exactly restore the original continuous signal from these samples by simple convolution with the sinc filter (sin(x)/x) (the discrete samples here would be at x=…, -pi, 0, pi, 2pi, …).
And to remove all frequencies above a given frequency X while retaining all below, again simple convolution with sinc is the answer. So resampling becomes a simple matter of using the sinc filter with the correct scaling to find the new samples. Sadly there are a few problems and misunderstandings with this approach.

Summing an infinite number of samples fallacy

Some people say sinc cannot be used because it would require summing an infinite number of samples. This is of course not true, as images and audio on computers tend not to contain infinitely many samples to begin with. Or more precisely, a signal of N samples can simply be converted into the frequency domain with an FFT, the too-high frequency coefficients can then be set to 0, and an inverse FFT can be applied. This is exactly equivalent to infinite periodic extension and application of the sinc filter. Direct application of the sinc filter in the time domain is trickier.
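
As an illustration of that FFT route, here is a minimal sketch; FFTW is used purely as an example library (an assumption, any FFT implementation would do) and bandlimit() is a made-up name:

#include <fftw3.h>

/* sketch: band-limit n samples in place by zeroing all frequency bins >= cutoff;
   this matches infinite periodic extension + convolution with the sinc filter */
void bandlimit(double *samples, int n, int cutoff){
    fftw_complex *spec = fftw_malloc(sizeof(*spec) * (n/2 + 1));
    fftw_plan fwd = fftw_plan_dft_r2c_1d(n, samples, spec, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_c2r_1d(n, spec, samples, FFTW_ESTIMATE);

    fftw_execute(fwd);
    for(int i = cutoff; i <= n/2; i++)      /* zero everything at and above the cutoff bin */
        spec[i][0] = spec[i][1] = 0.0;
    fftw_execute(bwd);
    for(int i = 0; i < n; i++)              /* FFTW's c2r transform is unnormalized */
        samples[i] /= n;

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    fftw_free(spec);
}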

Sinc's latency

As the sinc filter has infinite support (there's no X after which all values are 0) it cannot be applied in realtime, it needs all input samples before it can output anything.
That also means all applications and libraries which claim to use sinc for resampling in realtime lie.

Causality and sinc

Sinc is not causal either: most output samples depend on most (practically all) future input samples.

Shooting sinc and the whole bandlimited theory down

Imagine absolute silence, open space, no obstacles to reflect sound, and a gun; there's no doubt that the correct continuous sound amplitude will be exactly 0 (the air pressure will not change). Now the gun is fired, which causes a very loud “bang”, pretty much a very sharp spike in the amplitude and air pressure, and then perfect silence again. Now this signal is clearly not bandlimited, so let's make it bandlimited by convolving it with sinc; the resulting signal will of course look like the sinc filter itself (convolution with a single spike is simply an identity operation). At that point we have a bandlimited signal of a gunshot. Let's assume our signal is sampled at 8kHz:
in that case the amplitude 1 second before the gunshot will be 1/(pi*8000) of the loudness of the gunshot. While this seems very little, given a loud enough gunshot it is audible, and this simply isn't correct. Also for shorter distances, like for example 5 samples before the gunshot, it is 1/(pi*5) ~ 1/16, which for example when resampling an image with a sharp edge leads to terrible ringing artifacts.

Windowed sinc

An alternative to the infinite support sinc filter are windowed sinc filters, which are simply sinc multiplied componentwise by a window function. These filters have compact support (they are zero everywhere except for a few samples). For them the size of the transition band, where frequencies are neither left at their amplitude nor attenuated to near 0, cannot be 0. Its size, the length of the filter, the stopband attenuation and all the other parameters depend on each other, and a tradeoff must be made …
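
To make that concrete, here is a minimal sketch of such a resampler, using the Lanczos window as an example window function; the names and parameters (a = number of lobes, ratio = output rate / input rate) are made up for illustration and border handling is just clamping:

#include <math.h>

#define PI 3.14159265358979323846

/* Lanczos-windowed sinc: sinc(x) * sinc(x/a) for |x| < a, 0 otherwise */
static double lanczos(double x, int a){
    if(x == 0.0)     return 1.0;
    if(fabs(x) >= a) return 0.0;
    return a*sin(PI*x)*sin(PI*x/a) / (PI*PI*x*x);
}

/* sketch: resample in[0..in_len-1] to out[0..out_len-1] for ratio >= 1 (upsampling);
   downsampling would additionally need the kernel stretched by 1/ratio to lower the cutoff */
void resample(const float *in, int in_len, float *out, int out_len, double ratio, int a){
    for(int i=0; i<out_len; i++){
        double pos = i/ratio;                 /* position in input-sample units */
        int    c   = (int)floor(pos);
        double sum = 0.0;
        for(int j=c-a+1; j<=c+a; j++){
            int idx = j<0 ? 0 : (j>=in_len ? in_len-1 : j);   /* clamp at the borders */
            sum += in[idx] * lanczos(pos - j, a);
        }
        out[i] = (float)sum;
    }
}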

Filed under: Uncategorized — Michael @ 3:50

December 29, 2006

A few days ago chkrootkit told me “Enye LKM found”

My first thought was, shit …, but there was something odd: it wasn't reproducible, subsequent runs of chkrootkit found nothing, was it just a false positive?
Also mutt had died with an odd error message, “Real-time signal 24” to be precise, was there a relation? I decided to look at the chkrootkit source to clarify all that, and I found the following:

   /* Check for Enye LKM */
   if (kill (12345, 58) >= 0)
      printf("Enye LKM found\n");
      retdir+= errno;

chkrootkit simply sends a “random” signal to a “random” pid, with no check whether anything normal is running at that pid, not to mention the likelihood that an intruder would leave the pid/signal id at the default.

Filed under: Uncategorized — Michael @ 23:25

December 12, 2006

Hadamard transform

In the last blog entry I've mentioned the hadamard transform, so it's high time to discuss it.
The hadamard transform is a linear transform, and all linear transforms can be written as a vector-matrix multiplication; here the matrix is orthogonal, symmetric and all its elements are either a or -a. Actually this is commonly written as a matrix of 1 and -1 which is multiplied by a, where a is 1/sqrt(size of the transform); you also might want to take a look at Wikipedia/Hadamard_transform.
The hadamard transform is also somewhat similar to the DCT but simpler, that's also why it's used as part of a SATD motion estimation comparison function; the DCT would be slightly better quality-wise but significantly slower …

How to calculate it

Well there are many ways
a slow and simple one is:

for(i=0; i<n; i++){
    for(j=0; j<n; j++){
        int sign= parity(i&j) ? -1 : 1; /* parity() = number of set bits mod 2 */
        dst[i] += src[j] * sign;        /* dst[] must be zeroed beforehand */
    }
}

a faster one is:

void transform(int *data, int step, int n){
    int i, j;
    if(2*step > n)
        return;                          /* all butterfly stages done */
    for(i=0; i<n; i+=2*step){
        for(j=i; j<i+step; j++){
            int a= data[j];
            data[j]      = a + data[j+step];
            data[j+step] = a - data[j+step];
        }
    }
    transform(data, 2*step, n);
}

You should of course not implement this recursively, but instead remove the recursion, unroll the loops where cleanly possible and replace step and n by constants if you can.
I've just written it like above as it's IMHO easier to understand that way than 2 pages of unrolled loops.
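
For example, assuming the recursive version above, the 2-D transform of a 4x4 block stored in row-major order (as it is used for SATD in the motion estimation entry) could be computed like this:

/* sketch: 2-D hadamard transform of a 4x4 block, using transform() from above */
void hadamard_4x4(int block[16]){
    int i;
    for(i=0; i<4; i++)
        transform(block + 4*i, 1, 4);  /* 4-point transform of each row */
    transform(block, 4, 16);           /* step 4 makes the butterflies operate on columns,
                                          so this transforms all 4 columns at once */
}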

Filed under: Uncategorized — Michael @ 1:41

Motion Estimation (1. comparison functions)

This month's ;) today's blog entry is about motion estimation for block-based video encoders, more precisely integer-pel style, no half-pel or qpel, to keep this reasonably simple. In practice the sub-pel motion estimation (hpel/qpel…) is normally done after normal integer-pel anyway, so we can nicely ignore sub-pel today.

Now how do we estimate the motion of a square or rectangular block of pixels?
Nothing easier than that, you will likely say: just find the best matching block in the reference frame for each block we need to encode. But what is “best” and how do we find it?

Compare functions

To find the best matching block we first need to give each pair of blocks a score; there are several popular ways to do that:

SAD Sum of absolute differences
      for(x=0; x<SIZE; x++){
          for(y=0; y<SIZE; y++){
              score += abs(reference[y][x] - current[y][x]);
          }
      }
SSD Sum of squared differences
      for(x=0; x<SIZE; x++){
          for(y=0; y<SIZE; y++){
              error= reference[y][x] - current[y][x];
              score += error*error;
          }
      }
SATD Sum of absolute hadamard transformed differences
      for(x=0; x<SIZE; x++)
          for(y=0; y<SIZE; y++)
              difference[y][x]= reference[y][x] - current[y][x];
      horizontal_hadamard_transform(difference); // vertical and horizontal can be exchanged
      for(x=0; x<SIZE; x++)
          for(y=0; y<SIZE; y++)
              score += abs(difference[y][x]);

And the theoretically ideal comparison function is to completely encode the block and then use, as the score, the weighted sum of the number of bits and the distortion, be it SSD or a psychovisual one.
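
As a rough sketch, with the distortion, bit count and lambda assumed to come from the encoder (its rate control and entropy coder), such a score would look like:

/* sketch: rate-distortion score; lambda controls the tradeoff between bits and distortion */
static int rd_score(int distortion, int bits, int lambda){
    return distortion + lambda*bits;
}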

How to find the best block with such a comparison function will be the subject of another blog entry; don't fear, I am just trying to keep the blog entries small, self-contained and easily digestible.

Filed under: Uncategorized — Michael @ 0:17

October 29, 2006

Integer precision conversion

You have an 8bit grayscale value (or an audio sample) and want to convert it to 16bit. So without thinking about it you write x<<8, but that generally isn't correct: the new least significant bits are all zero, so the largest 8bit value will become a value 255 below the largest 16bit value.

So what is correct?

This depends on your definition of what all the values mean. If 0 is black, then simply multiplying by the new value for white and dividing by the old value for white, with rounding to nearest, will do the trick.

For the specific case where the largest representable value is white, increasing precision can be done by setting the new least significant bits to the most significant ones, for example for 8->16bit x+(x<<8)=x*65535/255=x*257, for 8->10bit (x<<2)+(x>>6) or, if you are pedantic, (x*1023+127)/255=(x*341+42)/85, and for 2->8bit x+(x<<2)+(x<<4)+(x<<6)=x*85.

For the specific case where the largest value + 1 (256 in case of 8bit) is white, just setting the low bits to 0 is indeed correct, but at least for low precision this is not how grayscale values are defined. Just think about 1bit monochrome: converting that to 8bit by setting the low bits to 0 will give you 128 for 1, while white is 255 (or 256).

What about decreasing precision?

Well, that's simply the inverse, so for x*257 it's (x+128)/257, which can be approximated by (x*255+(1<<15))>>16.
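
As a minimal sketch of the 8 <-> 16 bit case described above (the function names are made up):

#include <stdint.h>

uint16_t expand_8to16(uint8_t x){
    return x*257;                      /* == (x<<8) + x, maps 255 to 65535 */
}

uint8_t reduce_16to8(uint16_t x){
    return (x*255 + (1<<15)) >> 16;    /* approximates (x+128)/257 */
}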

Filed under: Uncategorized — Michael @ 1:48

August 28, 2006

The high-level structure of the mp3 bitstream

Today's blog entry describes the mp3 bitstream structure; it won't describe the actual mp3 decoding and neither will it describe where and how each variable is precisely stored …
First, the mp3 bitstream is made of packets. These packets vary in size, yes they can even vary for “CBR”, though only by +-1 byte.

Each such packet begins with an 11bit “startcode” followed by 21 header bits which encode all of the most important info like samplerate, bitrate, number of channels, layer, padding, …. From these 21 bits the packet size can be calculated.
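
For illustration, assuming an MPEG-1 Layer III stream where the bitrate (in bits per second) and samplerate (in Hz) have already been looked up from those header bits via the standard tables, the packet size calculation is roughly:

/* sketch: packet (frame) size in bytes; the padding bit is what makes "CBR" packets vary by +-1 byte */
int frame_size(int bitrate, int samplerate, int padding){
    return 144*bitrate/samplerate + padding;
}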

The remainder of the packet is split into 2 parts: the first contains everything except the scale factors and coefficients, the second contains the scale factors and coefficients, which makes it the larger part. The first part is just encoded normally from byte position 4 (0-3 is the header) to its end.

The second part though is encoded much more stupidly. First we have (from the first part of the packet) a backstep value, which is the number of bytes from an internal buffer which we must prepend to the second part of the packet before we can begin to decode it; we will describe what is in that internal buffer in a moment. The second part is now split into channels and granules, and for each we have a length in bits. We decode them normally until we either have decoded all scale factors and coefficients or we hit or run over the amount of bits assigned for that channel-granule. If we don't hit it precisely but run over it, because the end of the bit budget doesn't coincide with the end of a vlc, then we must step back by one vlc, undo its effects and skip that half vlc. Whatever is left of the second part of our packet (which can include part of the internal buffer) will be put into the internal buffer to be used by future packets.

Filed under: Uncategorized — Michael @ 21:50

August 17, 2006

ddrescue

ddrescue is a nice and simple tool to rescue data from a half-broken disk or cd. It does that by simply repeatedly trying to copy chunks: in the first pass it will try big chunks, which is fairly fast, in the second pass it will try to split the damaged chunks into damaged and undamaged areas, and in the following passes it will just repeatedly try to read the still damaged chunks.

This takes quite some time if the disk/cd contains many damaged areas, and recovery may or may not be successful in the end, but one thing which had a significantly positive effect on recovery of an old cdr (no, there was nothing important on it, I was just playing around with ddrescue …) was changing the cdrom speed with hdparm every 10 secs. Yes, that's probably not healthy for the drive, so kids, try it with someone else's drive, not your own :)

While playing with ddrescue I added some ASCII visualization of the recovery process.

Filed under: Uncategorized — Michael @ 0:57

July 21, 2006

Division by zero

Have you ever been curious why you can't divide by zero?

The intuitive explanation with division of positive integers

To divide a by b means to remove b as often as possible from a, assign the number of times you could do that to a/b, and keep what's left as the remainder of the division.
If b is 0 you can remove it as often or as rarely as you want, nothing will ever change.

The intuitive explanation with division as the inverse of multiplication

If you have a number a and multiply it by b, then divide
by b, so a*b/b, then you should get a again if division reverses what multiplication did. But if b is 0 then a*b is a*0, which is 0 no matter what a was, so there's no information left about the value a had and consequently no way to reverse it.

The Analytical explanation

If a is a non-zero constant, then a/b diverges to infinity as b approaches 0, and it converges to x if a=xb.
Note it's important to remember here that the limit of a/b as b approaches 0 is not the same as a/0.

The Algebraic explanation

All the above really just deals with the common numbers like integers, reals, complex numbers, …
So you might ask yourself if it's not maybe possible to invent some other algebra (a set of elements with some operations like addition and multiplication) in which we can divide by zero. Well, the answer is of course you can, the problem is just that you have to give up some of the well known properties of the common numbers.
Let's see which set of properties is incompatible with division by zero.
To speak about division by zero we need at least:

  1. a set of elements S for our algebra A
  2. an additive identity element (x + 0 = 0 + x = x for all x)
     we use “0” here for simplicity even though the element doesn't need to be related to our well known zero
  3. of course a + and * operation, which again of course don't have to be the same as common addition and multiplication
  4. 0 to have an inverse ((x*0)*0^-1 = x for all x)

With that set of rules one can easily find many algebras in which you can divide by zero; for example just define the * operation to be identical to + and that identical to the common addition, then division by zero is just subtraction by zero, totally useless but it works :)
Another property we likely want is a multiplicative identity element (x * 1 = 1 * x = x for all x) and the distributive law ((a+b)*c = a*c + b*c and c*(a+b) = c*a + c*b); sadly that already starts causing problems:

0*0 + 0 = 0*0      (definition of the identity element of +)
0*0 + 1*0 = 0*0    (definition of the identity element of *)
(0 + 1)*0 = 0*0    (distributive law)
1*0 = 0*0          (definition of the identity element of +)
1 = 0              (inverting the multiplication by 0)

so our algebra would need the identity element of + and the identity element of * to be identical, hmm …

and another odd result:

x*1 = x            (definition of the identity element of *)
x*(1+1) = x        (definition of the identity element of + and 1=0)
x+x = x            (distributive law)

Now the question is, is there actually a non-trivial (more than 1 element) algebra left at all?
It seems there is: assume our algebra has 2 elements, “0” and “#”,
and we define addition and multiplication like:

+/*  0  #
 0   0  #
 #   #  #

It's immediately obvious that the rules for the 2 identity elements hold.
Does the distributive law hold too? It does, which can be proven by simply looking at the 8 possible cases. And an inverse element for multiplication by 0? Well, 0 is its own inverse, obviously.
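
Those 8 cases are few enough to check mechanically; a small throwaway C program, mapping 0 to the number 0 and # to the number 1 so that both operations from the table above become a logical OR, could look like this:

#include <stdio.h>

static int add(int a, int b){ return a|b; }   /* the + table above */
static int mul(int a, int b){ return a|b; }   /* the * table above, identical */

int main(void){
    for(int a=0; a<2; a++)
        for(int b=0; b<2; b++)
            for(int c=0; c<2; c++)
                if(mul(add(a,b),c) != add(mul(a,c),mul(b,c)) ||
                   mul(c,add(a,b)) != add(mul(c,a),mul(c,b)))
                    printf("distributive law fails for %d %d %d\n", a, b, c);
    return 0;                                 /* prints nothing: the law holds in all 8 cases */
}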

Next, we drop the multiplicative identity element again and instead try to add a unique multiplicative inverse for every element, not just for zero (for every a and b there is an x so that a*x=b); without that we would either just turn the division-by-zero problem into a division-by-foobar problem or we wouldn't be able to reach some elements. Sadly only the trivial 1-element algebra is left then:

a*b = a*(b+0)      (definition of the identity element of +)
a*b = a*b + a*0    (distributive law)
0 = 0 + a*0        (choose b so that a*b=0)
0 = a*0            (definition of the identity element of +)
0*0^-1 = a         (inverting the multiplication by zero)

The last line means: every algebra with the properties above will have exactly one element and be useless.
Alternatively, if we replace the property that every element must have a unique multiplicative inverse by requiring addition to be invertible ((a+b) + (-b) = a), then we too end up with the useless 1-element algebra:

a = (a*0)*0^-1               (definition of the inverse of 0)
a = (a*(0+0))*0^-1           (definition of the identity element of +)
a = (a*0 + a*0)*0^-1         (distributive law)
a = (a*0)*0^-1 + (a*0)*0^-1  (distributive law)
a = a + a                    (substituting from row 1)
0 = a                        (inverting the addition of a)

I guess that's enough math for today, and enough chances to embarrass myself with a trivial typo somewhere …

Filed under: Uncategorized — Michael @ 11:25

July 8, 2006

UTF-8

UTF-8 is the most common encoding used to store Unicode; the following table summarizes how it works:

bytes  byte1      byte2      byte3      byte4      range of unicode symbols
1      0XXX XXXX                                   0 - 127
2      110X XXXX  10XX XXXX                        128 - 2047
3      1110 XXXX  10XX XXXX  10XX XXXX             2048 - 65535
4      1111 0XXX  10XX XXXX  10XX XXXX  10XX XXXX  65536 - 2097151


  • 0-127 ASCII is stored as is in UTF-8
  • bytewise substring matching works as is; in other words, one UTF-8 symbol will never match anything but that same UTF-8 symbol, never just, say, the trailing bytes of some other symbol
  • UTF-8 is easy to distinguish from a random byte sequence
  • when starting at a random byte, resynchronization is trivial and always happens at the next symbol


S1D-7

The biggest problem of UTF-8 is that it's not very compact; it needs more bytes than other (non-Unicode) encodings. Can we do better without losing its advantages? Well, apparently yes, and it's sad because you'd expect the standards committees to do better …
Why “S1D-7”? Well, 1 “stop” bit to determine if the byte is the last one and 7 data bits in each byte.

bytes  byte1      byte2      byte3      range of unicode symbols
1      0XXX XXXX                        0 - 127
2      1XXX XXXX  0XXX XXXX             128 - 16383
3      1XXX XXXX  1XXX XXXX  0XXX XXXX  16384 - 2097151

Compactness differences

  • In all cases S1D-7 is able to store as many or more symbols per number of bytes
  • where UTF-8 needs up to 4 bytes to store all symbols, S1D-7 never needs more than 3
  • Many languages which need 3 bytes per symbol in UTF-8 can be encoded with 2 bytes per symbol; just look at the Basic Multilingual Plane at Wikipedia, all the pages which are in the range of 0x0800 to 0x3FFF need 3 bytes per symbol in UTF-8 and 2 bytes per symbol in S1D-7


Well, the argumentation at the Wikipedia page to justify UTF-8's bloatedness is that its “advantages outweigh this concern”, so let's see if our simpler encoding, which requires fewer bytes per symbol and is much more compact, loses any features.

0-127 ASCII is stored as is in UTF-8

same, no change here

Substring matching

In UTF-8 we can simply use bytewise substring matching, while in S1D-7 we need to write a slightly different substring matching function, though both are very simple, see below.

int match_utf8_or_ascii(uint8_t *heap, uint8_t *needle){
    int i,j;

    for(i=0; heap[i]; i++){
        for(j=0; needle[j]; j++){
            if(heap[i+j] != needle[j])
                break;
        }
        if(!needle[j])
            return i;
    }
    return -1;
}

int match_S1D8(uint8_t *heap, uint8_t *needle){
    int i,j;

    for(i=0; heap[i]; i++){
        for(j=0; needle[j]; j++){
            if(heap[i+j] != needle[j])
                break;
        }
        if(!needle[j] && !(i && heap[i-1]>127))
            return i;
    }
    return -1;
}

Distinguishing S1D-7 from random bytes

Due to the lower overhead there is less to check, and it consequently needs more text to be detected reliably, but the check is very simple: just check if there are any 3 consecutive bytes which are each larger than 127 and you know it's not S1D-7. To distinguish it from UTF-8 just run your UTF-8 detection over it, it's very unlikely that S1D-7 will be parsed correctly by a UTF-8 parser.
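
A minimal sketch of that check (the function name is made up):

#include <stdint.h>

/* returns 0 if buf[] cannot be S1D-7, i.e. if it contains 3 consecutive bytes > 127 */
int could_be_s1d7(const uint8_t *buf, int len){
    int run = 0;
    for(int i=0; i<len; i++){
        run = buf[i] > 127 ? run + 1 : 0;
        if(run >= 3)
            return 0;
    }
    return 1;
}
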
Not to mention, the encoding should be specified somehow anyway and not guessed by such encoding analysis.


Resynchronization

Resynchronization is even more trivial than in UTF-8: in UTF-8 you search for a byte < 128 or a byte > 192, and when you find it you know it is the first byte of a symbol.
In S1D-7 you search for a byte < 128, and when you find it you know that the next byte is the first byte of a symbol.
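
A minimal sketch of both (the names are made up, and both assume a complete symbol follows somewhere in the buffer):

#include <stdint.h>

/* return a pointer to the start of the next (or current) UTF-8 symbol */
uint8_t *resync_utf8(uint8_t *p){
    while((*p & 0xC0) == 0x80)   /* skip continuation bytes (10XX XXXX) */
        p++;
    return p;
}

/* return a pointer to the start of the next S1D-7 symbol */
uint8_t *resync_s1d7(uint8_t *p){
    while(*p > 127)              /* skip bytes whose stop bit is not yet set */
        p++;
    return p + 1;                /* the byte after a stop byte starts a symbol */
}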


Parsing S1D-7 is much easier than UTF-8, see:

int read_utf8_symbol(uint8_t **b){
    int val= *((*b)++);
    int ones=0;

    while(val & (128>>ones))          /* count the leading 1 bits of the first byte */
        ones++;

    if(ones==1 || ones>4)
        return -1;
    val&= 127>>ones;
    while(--ones > 0){
        int tmp= *((*b)++) - 128;
        if(tmp < 0 || tmp > 63)       /* not a valid continuation byte (10XX XXXX) */
            return -1;
        val= (val<<6) + tmp;
    }
    return val;
}

int read_s1d7_symbol(uint8_t **b){
    int i, val=0;

    for(i=0; i<3; i++){
        int tmp= *((*b)++) - 128;
        val= (val<<7) + tmp;
        if(tmp < 0)
            return val + 128;
    }
    return -1;
}
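
For completeness, an encoder is just as simple; a minimal sketch (write_s1d7_symbol is a made-up name, not from an existing library):

#include <stdint.h>

/* encode one symbol (0..2097151) as S1D-7, most significant 7-bit group first;
   the high bit is set on every byte except the last; returns the number of bytes written */
int write_s1d7_symbol(unsigned val, uint8_t *out){
    int n = val < 128 ? 1 : val < 16384 ? 2 : 3;
    for(int i=0; i<n; i++){
        int shift = 7*(n-1-i);
        out[i] = ((val>>shift) & 127) | (i < n-1 ? 128 : 0);
    }
    return n;
}
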
zero byte occurrences (new 2006-07-21)

This issue has been raised by a comment …
While UTF-8 guarantees that a zero byte in the encoding will occur only in a zero symbol, S1D-7 does not guarantee that; for example the value 256 is encoded as 0x82 0x00. UTF-16 also has this problem.
Zero bytes are problematic if the encoded strings are passed through code which has been written for one-byte-per-symbol zero-terminated strings. Of course you should never pass something through code which is designed for a different encoding, but if for whatever odd reason you still do that, then one possible solution is to simply change the value before encoding and after decoding to avoid zero bytes; this is actually very easy:

int avoid_zero(int x){
    x+= x<<7;
    return (x + (x>>14))>>7;
}

int reverse_avoid_zero(int x){
    return x - (x>>7);
}
these transforms have almost no effect on the space efficiency of S1D-7

occurrence of specific bytes (new 2006-07-21)

This issue has been raised by a comment …
While UTF-8 guarantees that no ASCII bytes below 128 will occur in the encoding of symbols larger than 128, S1D-7 does not guarantee that; UTF-16 also has this “problem”.
Again, this “problem” is limited to the case of passing encoded strings through code which expects some other encoding, and that's something you should never ever do; not only will the code not be able to make sense of at least some symbols, it's likely going to lead to bugs and exploits. If strings have to be passed through code which expects another encoding, then you should convert your strings to that encoding and escape the problematic symbols somehow.
S1D-7 can also easily be changed to avoid specific low (<128) bytes in its encoded representation, although that's just for demonstration and shouldn't be done in practice, as it's very bad to pass strings through code which expects another encoding; if your filesystem for example expects filenames in ASCII without specific letters, then that's what the filename should be encoded with, and letters which cannot be represented should be encoded with some sort of escape sequence.

int avoid_some_bytes(int x){
    if(x>127) x= (x<<1) ^ table1[x&63];   /* table1/table2 are lookup tables not shown in this post */
    return x;
}

int reverse_avoid_some_bytes(int x){
    if(x>127) x= (x ^ table2[x&127])>>1;
    return x;
}

With these transforms any 64 low ASCII bytes can be avoided at the cost of 1 bit per symbol, which makes S1D-7 somewhat worse than before but still much better than UTF-8 in space efficiency.

Filed under: Uncategorized — Michael @ 12:40