Lair Of The Multimedia Guru

2006-09-27

X86 assembly instructions you always wanted but intel didn't give you

Everyone who has written assembly code has probably cursed the lack of some instructions in the instruction set of their target architecture, be it due to the work needed to work around it or due to the complexity, and consequently speed loss, of the resulting code

Here's a list of what i think is lacking in the x86 instruction set … (a small C sketch of the semantics of some of these follows the list)

btw, i had to use shl and shr instead of the two less-than / greater-than symbols, as wordpress randomizes the html code if & lt ; or & gt ; occur in it (i don't know why, that did work in the past)

cadd/csub (conditional add / sub)
While there are conditional move instructions, there are no conditional add/subtract instructions, even though code like if(...) x+=... is pretty common
max/min
While there are mmx and sse maximum and minimum instructions there are sadly no normal integer equivalents
negative index addressing
addressing stuff like base[constant+index] is possible, base[constant-index] is not; not only would that be useful on its own, it would also allow subtracting a left-shifted register from another, like: lea eax, [eax-8*ebx]
imulfix
For fixed point calculations the mul and imul instructions are very hard or slow to use, as the low 32bit result often would overflow and the high 32bit limits the coefficient to -0.5 … 0.5; an instruction which does (a*b + (1 shl 23)) shr 24 would solve this
sarr (shift arithmetic right with rounding)
Shift right is nice but often you need to do it with rounding; an (a + (0.5 shl s)) shr s instruction would come in handy in these cases
random
a pseudo random number generator and a true random number generator would be nice
abs
While there is pabs for mmx there's no equivalent for normal integers
select
An a&c | b&(~c) instruction would come in handy for combining bits from 2 registers
readbits
a hardware based bitstream reader which takes a base pointer, an index in bits and the number of bits to read would be very useful for many audio and video formats; bitstream & vlc decoding generally takes 1/3 of the time spent decoding
nand/nor
(a&~b) and (a|~b) and maybe others
pavgb_nornd (packed byte average without rounding)
while there's an average with rounding there's none without; sadly mpeg needs the latter too
integer instructions like sarr and select for mmx/sse too
pcmpneq / pcmplt / pcmple / pcmpge
Just the missing compare instructions for mmx
punpckh/l0
unpack instructions which unpack the source with an implicit 0; this would avoid 1 move per 2 unpacks, and unpacking is very common
packsrr (pack with shift right and rounding)
like pack* but with rounding and shift right integrated
plea
An mmx/sse instruction similar to lea which does the extremely common operation of adding a left-shifted value to another; and if this isn't able to do a=2*a+-b then an additional instruction for that, as it would reduce the very common a-b, a+b butterfly operation to 2 instructions instead of 3
p(add/sub/mul)unpackh/l
having to unpack 2 source operands into 4 registers to perform 1 operation with them sucks (= needs 6-8 instructions), so combined foobar+unpack high/low instructions would be pretty nice, of course only if they are fast
ps(r/l)lb (packed right/left shift unsigned byte)
the missing packed byte shifts
pdabs (packed absolute difference)
absolute value of the difference of 2 values
paddsusb
add a signed into an unsigned (byte) value with unsigned saturation; useful for brightness correction and loop/postprocessing filters
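
to pin the semantics down, here is a small C sketch of some of the instructions above; the names and the readbits interface are of course made up:

#include <stdint.h>

/* shift arithmetic right with rounding (s >= 1) */
static inline int32_t sarr(int32_t a, int s){
    return (a + (1 << (s-1))) >> s;
}

/* the fixed point multiply described above */
static inline int32_t imulfix(int32_t a, int32_t b){
    return ((int64_t)a*b + (1 << 23)) >> 24;
}

/* select: combine bits from 2 registers, c chooses (1 -> a, 0 -> b) */
static inline uint32_t select32(uint32_t a, uint32_t b, uint32_t c){
    return (a & c) | (b & ~c);
}

/* conditional add */
static inline int32_t cadd(int32_t x, int32_t y, int cond){
    return cond ? x + y : x;
}

/* readbits: read n (1..24) bits starting at bit position index; note that
   like most software bitstream readers this loads a few bytes beyond the
   bits it actually needs */
static inline uint32_t readbits(const uint8_t *base, int index, int n){
    uint32_t v = ((uint32_t)base[ index>>3   ] << 24) |
                 ((uint32_t)base[(index>>3)+1] << 16) |
                 ((uint32_t)base[(index>>3)+2] <<  8) |
                  (uint32_t)base[(index>>3)+3];
    return (v << (index & 7)) >> (32 - n);
}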
Filed under: Optimization — Michael @ 13:16

2006-08-28

The highlevel structure of the mp3 bitstream

Today's blog entry describes the mp3 bitstream structure; it won't describe the actual mp3 decoding and neither will it describe where and how each variable is precisely stored …
First, the mp3 bitstream is made of packets; these packets vary in size, yes they can even vary for “CBR”, though only by +-1 byte

Each such packet begins with an 11bit “startcode” followed by 21 header bits which encode all of the most important info like samplerate, bitrate, number of channels, layer, padding, …. From these 21 bits the packet size can be calculated
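
a sketch of that calculation for MPEG-1 layer 3 (the bitrate and samplerate tables are the ones from the MPEG-1 spec; free format is not handled):

#include <stdint.h>

/* packet size in bytes from the 32bit header, -1 if free format / invalid */
int mp3_packet_size(uint32_t header){
    static const int bitrates[16]   = {  0,  32,  40,  48,  56,  64,  80,  96,
                                       112, 128, 160, 192, 224, 256, 320,   0}; /* kbit/s */
    static const int samplerates[4] = {44100, 48000, 32000, 0};

    int bitrate_index    = (header >> 12) & 15;
    int samplerate_index = (header >> 10) &  3;
    int padding          = (header >>  9) &  1;

    if(!bitrates[bitrate_index] || !samplerates[samplerate_index])
        return -1;
    return 144000 * bitrates[bitrate_index] / samplerates[samplerate_index] + padding;
}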

The remainder of the packet is split into 2 parts: the first contains everything except the scale factors and coefficients, the second contains the scale factors and coefficients, which makes it the larger part. The first part is just encoded normally from position 4 (0-3 is the header) to the end

The second part though is encoded much more stupidly: first we have (from the first part of the packet) a backstep value, which is an amount of bytes from an internal buffer that we must prepend to the second part of the packet before we can begin to decode it; what's in that internal buffer will be described in a moment. The second part is split into channels and granules, and for each of these we have a length in bits. We decode them normally until we either have decoded all scale factors and coefficients or we hit or run over the amount of bits assigned for that channel-granule; if we don't hit it precisely but run over it, because the end of the bit-amount doesn't coincide with the end of a vlc, then we must step back by one vlc, undo its effects and skip the partial vlc. What's left of the second part of our packet (which can include part of the internal buffer) is put into the internal buffer to be used by future packets
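
a rough sketch of that buffer handling; the names and the exact reservoir update are illustrative, not from the spec, but the backstep field is 9 bits so 511 bytes is the most a packet can reach back:

#include <string.h>
#include <stdint.h>

#define RESERVOIR_MAX 511           /* the backstep field is 9 bits */

typedef struct {
    uint8_t buf[RESERVOIR_MAX];
    int     size;
} Reservoir;

/* build the buffer from which the scale factors and coefficients of the
   current packet are decoded; returns its size, or -1 on a reference to
   data we never received */
int assemble_part2(Reservoir *r, const uint8_t *packet, int packet_size,
                   int part1_end, int backstep, uint8_t *out){
    int out_size;

    if(backstep > r->size)
        return -1;

    memcpy(out, r->buf + r->size - backstep, backstep);
    memcpy(out + backstep, packet + part1_end, packet_size - part1_end);
    out_size = backstep + packet_size - part1_end;

    /* keep the tail around for future packets */
    r->size = out_size < RESERVOIR_MAX ? out_size : RESERVOIR_MAX;
    memcpy(r->buf, out + out_size - r->size, r->size);
    return out_size;
}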

Filed under: Uncategorized — Michael @ 21:50

2006-08-17

ddrescue

ddrescue is a nice and simple tool to rescue data from a half-broken disk or cd. It does that by simply repeatedly trying to copy chunks: in the first pass it will try big chunks, which is fairly fast; in the second pass it will try to split the damaged chunks into damaged and undamaged areas; and in the following passes it will just repeatedly try to read the still-damaged chunks

This takes quite some time if the disk/cd contains many damaged areas, and recovery may or may not be successful in the end, but one thing which had a significantly positive effect on recovery of an old cdr (no, there was nothing important on it, i was just playing around with ddrescue …) was changing the cdrom speed with hdparm every 10 secs; yes that's probably not healthy for the drive, so kids, try it with someone else's drive, not your own :)

While playing with ddrescue i added some ascii visualization of the recovery process

Filed under: Uncategorized — Michael @ 00:57

2006-07-21

Division by zero

Have you ever been curious why you can't divide by zero?

The intuitive explanation with division of positive integers

To divide a by b means to remove b as often as possible from a and assign the number of times you could do that to a/b; what's left is the remainder of the division
if b is 0 you can remove it as often or as rarely as you want, nothing will ever change

The intuitive explanation with division as the inverse of multiplication

If you have a number a and multiply it by b, then divide by b, so a*b/b, you should get a again if division reverses what multiplication did; but if b is 0 then a*b is a*0, which is 0 no matter what a was, so there's no information left about the value a had and consequently no way to reverse it

The Analytical explanation

if a is a non-zero constant, then a/b diverges to infinity as b approaches 0, while if a=xb then a/b converges to x as b approaches 0
note, it's important to remember here that the limit of a/b as b approaches 0 is not the same as a/0

The Algebraic explanation

All the above really just deals with the common numbers like integers, reals, complex numbers, …
so you might ask yourself if it's not maybe possible to invent some other algebra (a set of elements with some operations like addition and multiplication) in which we can divide by zero. Well, the answer is of course you can; the problem is just that you have to give up some of the well-known properties of common numbers
Let's see which set of properties is incompatible with division by zero
to speak about division by zero we need at least

  1. a set of elements S for our algebra A
  2. an additive identity element (x + 0 = 0 + x = x for all x)
    we use “0” here for simplicity even though the element doesn't need to be related to our well-known zero
  3. of course a + and * operation, which again of course don't have to be the same as common addition and multiplication
  4. 0 to have an inverse ((x*0)*0^-1 = x for all x)

with that set of rules one can easily find many algebras in which you can divide by zero, for example just define the * operation to be identical to + and that identical to the common addition; division by zero is then just subtraction of zero, totally useless but it works :)
another property we likely want is a multiplicative identity element (x * 1 = 1 * x = x for all x) and the distributive law (a+b)*c = a*c + b*c and c*(a+b) = c*a + c*b, sadly that already starts causing problems:

0*0 + 0   = 0*0    definition of the identity element of +
0*0 + 1*0 = 0*0    definition of the identity element of *
(0 + 1)*0 = 0*0    distributive law
1*0       = 0*0    definition of the identity element of +
1         = 0      inverting the multiplication by 0

so our algebra would need the identity element of + and the identity element of * to be identical, hmm …

and another odd result:

x*1     = x    definition of the identity element of *
x*(1+1) = x    definition of the identity element of + and 1=0
x+x     = x    distributive law

now the question is, is there actually a non-trivial (more than 1 element) algebra left at all?
it seems there is: assume our algebra has 2 elements, “0” and “#”,
and we define addition and multiplication like:

+/*   0   #
 0    0   #
 #    #   #

it's immediately obvious that the rules for the 2 identity elements hold
does the distributive law hold too? it does, which can be proven by simply looking at the 8 possible cases; and an inverse element for multiplication by 0, well, 0 is its own inverse obviously
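
if you don't want to check the 8 cases by hand, a few lines of C can do it; element “0” is 0, element “#” is 1, and both + and * are the table above, which is just a bitwise OR:

#include <stdio.h>

static int op(int a, int b){ return a | b; }  /* 0 op 0 = 0, everything else = # */

int main(void){
    for(int a = 0; a < 2; a++)
        for(int b = 0; b < 2; b++)
            for(int c = 0; c < 2; c++)
                if(op(op(a,b),c) != op(op(a,c),op(b,c)) ||
                   op(c,op(a,b)) != op(op(c,a),op(c,b)))
                    return printf("distributive law violated\n"), 1;
    printf("distributive law holds in all 8 cases\n");
    return 0;
}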

next, we drop the multiplicative identity element again and instead try to add a unique multiplicative inverse element x for every element, not just for zero (a*x=b for all a,b); without that we would either just turn the division-by-zero problem into a division-by-foobar problem or we wouldn't be able to reach some elements. Sadly only the trivial 1-element algebra is left then:

a*b    = a*(b+0)      definition of the identity element of +
a*b    = a*b + a*0    distributive law
0      = 0 + a*0      choose b so that a*b=0
0      = a*0          definition of the identity element of +
0*0^-1 = a            invert multiplication by zero

the last line means: every algebra with the properties above will have exactly one element and be useless
alternatively, if we replace the property that every element must have a unique multiplicative inverse by requiring addition to be invertible ((a+b) + (-b) = a), then we too end up with the useless 1-element algebra:

a = (a*0)*0^-1              definition of the inverse of 0
a = (a*(0+0))*0^-1          definition of the identity element of +
a = (a*0 + a*0)*0^-1        distributive law
a = (a*0)*0^-1 + (a*0)*0^-1 distributive law
a = a + a                   substituting from row 1
0 = a                       inverting the addition of a

i guess that's enough math for today, and enough chances to embarrass myself with a trivial typo somewhere …

Filed under: Uncategorized — Michael @ 11:25

2006-07-08

UTF-8

UTF-8 is the most common encoding used to store unicode; the following table summarizes how it works

number of bytes  byte1      byte2      byte3      byte4      range of unicode symbols
1                0XXX XXXX                                   0 - 127
2                110X XXXX  10XX XXXX                        128 - 2047
3                1110 XXXX  10XX XXXX  10XX XXXX             2048 - 65535
4                1111 0XXX  10XX XXXX  10XX XXXX  10XX XXXX  65536 - 2097151

Features

  • 0-127 ASCII is stored as is in UTF-8
  • bytewise substring matching works as is; in other words one UTF-8 symbol will never match anything but one and the same UTF-8 symbol, never just the last byte of a symbol
  • UTF-8 is easy to distinguish from a random byte sequence
  • when starting at a random byte, resynchronization is trivial and always happens at the next symbol

S1D-7

The biggest problem of UTF-8 is that it's not very compact, it needs more bytes than other (non-unicode) encodings. Can we do better without losing its advantages? Well, apparently yes, which is sad because you'd expect the standard committees to do better …
Why “S1D-7”? Well, 1 “stop” bit to determine if the byte is the last one and 7 data bits per byte

number of bytes  byte1      byte2      byte3      range of unicode symbols
1                0XXX XXXX                        0 - 127
2                1XXX XXXX  0XXX XXXX             128 - 16383
3                1XXX XXXX  1XXX XXXX  0XXX XXXX  16384 - 2097151
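
writing an encoder for this is equally trivial; a sketch (write_s1d7_symbol is a made-up name to match the read function further below):

#include <stdint.h>

int write_s1d7_symbol(uint8_t **b, int val){
    if(val < 0 || val > 2097151)
        return -1;
    if(val >= (1<<14))
        *((*b)++) = 128 + ( val>>14      );  /* stop bit set: more bytes follow */
    if(val >= (1<< 7))
        *((*b)++) = 128 + ((val>> 7) & 127);
    *((*b)++) = val & 127;                   /* stop bit clear: last byte */
    return 0;
}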

Compactness differences

  • In all cases S1D-7 is able to store as many or more symbols per number of bytes
  • where UTF-8 needs up to 4 bytes to store all symbols, S1D-7 never needs more than 3
  • Many languages which need 3 bytes per symbol in UTF-8 can be encoded with 2 bytes per symbol; just look at the Basic Multilingual Plane at wikipedia, all the pages which are in the range of 0x07FF to 0x3FFF need 3 bytes per symbol in UTF-8 and 2 bytes per symbol in S1D-7

Features

Well, the argumentation at the wikipedia page to justify UTF-8's bloatedness is that its “advantages outweigh this concern”, so let's see if our simpler encoding, which requires fewer bytes per symbol and is much more compact, loses any features

0-127 ASCII is stored as is in UTF-8

same, no change here

Substring matching

in UTF-8 we can simply use bytewise substring matching, while in S1D-7 we need to write a slightly different substring matching function; though both are very simple, see below

/* bytewise matching is enough for UTF-8 and ASCII, as one symbol can never
   match part of a different symbol */
int match_utf8_or_ascii(uint8_t *heap, uint8_t *needle){
    int i,j;

    for(i=0; heap[i]; i++){
        for(j=0; needle[j]; j++){
            if(heap[i+j] != needle[j])
                break;
        }
        if(!needle[j])
            return i;
    }
    return -1;
}

int match_S1D8(uint8_t *heap, uint8_t *needle){
    int i,j;

    for(i=0; heap[i]; i++){
        for(j=0; needle[j]; j++){
            if(heap[i+j] != needle[j])
                break;
        }
        /* accept a match only if it starts at a symbol boundary, that is
           if the preceding byte is a final (stop bit = 0) byte <128 */
        if(!needle[j] && !(i && heap[i-1]>127))
            return i;
    }
    return -1;
}

Distinguishing S1D-7 from random bytes

Due to the lower overhead there is less to check and it consequently needs more text to detect reliably, but the check is very simple: just check if there are any 3 consecutive bytes which are each larger than 127, and you know it's not S1D-7; to distinguish it from UTF-8 just run your UTF-8 detection over it, it's very unlikely that S1D-7 will be parsed correctly by a UTF-8 parser
not to mention, the encoding should be specified somehow anyway and not guessed at by such encoding analysis
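
a sketch of that check:

#include <stdint.h>

/* returns 0 if the buffer cannot be S1D-7, that is if it contains 3
   consecutive bytes which are each larger than 127 */
int maybe_s1d7(const uint8_t *s, int len){
    int run = 0;
    for(int i = 0; i < len; i++){
        run = s[i] > 127 ? run + 1 : 0;
        if(run > 2)
            return 0;
    }
    return 1;
}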

resynchronization

is even more trivial than in UTF-8: in UTF-8 you search for a byte <128 or a byte >192, and when you find it you know it's the first byte of a symbol
in S1D-7 you search for a byte <128, and when you find it you know that the next byte is the first byte of a symbol

parsing

Parsing S1D-7 is much easier than UTF-8, see:

int read_utf8_symbol(uint8_t **b){
    int val= *((*b)++);
    int ones=0;

    /* count the leading one bits of the first byte */
    while(val&(128>>ones))
        ones++;

    if(ones==1 || ones>4)
        return -1;
    val&= 127>>ones;
    while(--ones > 0){
        int tmp= *((*b)++) - 128;
        if(tmp>>6)              /* not a 10XX XXXX continuation byte */
            return -1;
        val= (val<<6) + tmp;
    }
    return val;
}

int read_s1d7_symbol(uint8_t **b){
    int i, val=0;

    for(i=0; i<3; i++){
        int tmp= *((*b)++) - 128;
        val= (val<<7) + tmp;
        if(tmp < 0)             /* byte <128: last byte of the symbol */
            return val + 128;   /* undoes the -128 of that last byte */
    }
    return -1;
}

zero byte occurrences (new 2006-07-21)

This issue has been raised by a comment …
While UTF-8 guarantees that a zero byte in the encoding will occur only in a zero symbol, S1D-7 does not guarantee that; for example the value 256 is encoded as 0x82 0x00. UTF-16 also has this problem
zero bytes are problematic if the encoded strings are passed through code which has been written for one-byte-per-symbol zero-terminated strings. Of course you should never pass something through code which is designed for a different encoding, but if for whatever odd reason you still do, then one possible solution is to simply change the value before encoding and after decoding to avoid zero bytes; this is actually very easy:

int avoid_zero(int x){
    x+= x<<7;                  /* x *= 129 */
    return (x + (x>>14))>>7;   /* ~ x*129/128, skips every nonzero multiple of 128
                                  so the final encoded byte can never be 0 */
}
int reverse_avoid_zero(int x){
    return x - (x>>7);
}
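
for example the value 256 from above, whose plain encoding 0x82 0x00 contains a zero byte, becomes avoid_zero(256) = 258, which encodes as 0x82 0x02, and reverse_avoid_zero(258) gives 256 back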

these transforms have almost no effect on the space efficiency of S1D-7

occurrence of specific bytes (new 2006-07-21)

This issue has been raised by a comment …
While UTF-8 guarantees that no ascii bytes below 128 will occur in the encoding of symbols larger than 128, S1D-7 does not guarantee that; UTF-16 also has this “problem”
again this “problem” is limited to the case of passing encoded strings through code which expects some other encoding, and that's something you should never ever do; not only will the code not be able to make sense of at least some symbols, it's likely going to lead to bugs and exploits. If strings have to be passed through code which expects another encoding then you should convert your strings to that encoding and escape the problematic symbols somehow
S1D-7 can also easily be changed to avoid specific low <128 bytes in its encoded representation, although that's just for demonstration and shouldn't be done in practice, as it's very bad to pass strings through code which expects another encoding; if your filesystem for example expects filenames in ascii without specific letters then that's what the filename should be encoded with, and letters which cannot be represented should be encoded with some sort of escape sequence

int avoid_some_bytes(int x){
    if(x>127) x= (x<<1) ^ table1[x&63];  /* table1/table2 are not given in this
                                            post; they must map the freed low bit
                                            such that the 64 forbidden byte values
                                            never occur in the result */
    return x;
}
int reverse_avoid_some_bytes(int x){
    if(x>127) x= (x ^ table2[x&127])>>1;
    return x;
}

with these transforms any 64 low ascii bytes can be avoided at the cost of 1 bit per symbol, which makes S1D-7 somewhat worse than before but still much better than UTF-8 in space efficiency

Filed under: Uncategorized — Michael @ 12:40

2006-06-09

Deinterlacing filters

Below you will find a comparison of various deinterlacing filters; the interlaced source was created with mplayer, using the phase=t and tinterlace=1 filters, from the well-known foreman video

all pictures were exported with mplayer -vo pnm and ppmtojpg --optimize --quality 90. Why not -vo jpeg? Well, i didn't have -vo jpeg compiled in, and -vo jpeg only accepts rgb input, so there wouldn't have been any advantage in using it …

Most deinterlacing filters only output one frame for every 2 fields; that, combined with the fact that some always choose the even field, some always the odd and some always the first no matter if that's even or odd, made this comparison less fun than i hoped originally. Due to this little issue several filters (pp=ci and kerndeint IIRC) had to be fed with vertically flipped video, and tomsmocomp even needed -vf phase=b to output a frame based on the correct field; and yes, i of course did specify the top vs. bottom field first flag correctly as long as the filter had such an option …

  • Original non-interlaced image
  • interlaced image (mplayer -vf phase=t,tinterlace=1)
  • mplayer -vf pp=lb (linear blend)
  • mplayer -vf pp=l5 (5-tap lowpass filter)
  • transcode -J smartyuv
  • mplayer -vf pp=fd
  • mplayer -vf pp=md (median deinterlacer)
  • mplayer -vf pp=li (linear interpolate)
  • mplayer -vf pp=ci (cubic interpolate)
  • transcode -J dilyuvmmx
  • mplayer -vf kerndeint (Donald Graft’s adaptive kernel deinterlacer)
  • transcode -J smartdeinter (VirtualDub’s smart deinterlacer)
  • transcode -J tomsmocomp (Tom’s Motion Compensation deinterlacing filter)
  • mplayer -vf yadif=1:1
  • mplayer -vf yadif=3:1
  • mplayer -vf yadif=1:1,mcdeint=2:1:10
  • mplayer -vf yadif=3:1,mcdeint=2:1:10
Filed under: Uncategorized — Michael @ 16:48

2006-06-02

Optimal (A)DPCM encoding

ADPCM in general encodes differences between samples using a small number of possible difference values and adapts which values are allowed depending upon past difference values
( … we all know that)

ADPCM encoders normally work based on the principle that they select the best possible difference between the last sample and the current sample while ignoring both future and past; that's not optimal at all

a viterbi-based encoder would select the optimal sequence of differences which minimizes some distortion measure like the sum of squares; it achieves this by successively finding the optimal encodings up to a specific state. In other words,
instead of keeping a single sequence of encoded differences of the past samples and then encoding the next difference,
we keep track of the optimal sequence of differences / encoded bitstream up to the current sample for every state (0..88 step + a few sample values around the ideal one); next we just calculate the optimal bitstreams up to the next sample using the current sample and the optimal ones up to the last sample

maybe a concrete example will help; we use non-adaptive DPCM here but the principle is the same …

let's say we have the input samples 0,2,6,4,-4
and our example encoder can encode +4,+1,-1,-4 differences
a conventional encoder would output 0,1,5,4, 0 (distortion=18)
optimal would be 1,2,6,2,-2 (distortion=9)
if we forced the first output to 1 and then used a conventional encoder: 1,2,6,5, 1 (distortion=27)

viterbi encoder:

1. iteration
-2              distortion 4
-1              distortion 1
 0              distortion 0
 1              distortion 1
 2              distortion 4
2. iteration
 1, 0           distortion 5
 0, 1           distortion 1
 1, 2           distortion 1
-1, 3           distortion 2
 0, 4           distortion 4
3. iteration
-1, 3, 4        distortion 6
 0, 1, 5        distortion 2
 1, 2, 6        distortion 1
-1, 3, 7        distortion 3
 0, 4, 8        distortion 8
4. iteration
 1, 2, 6, 2     distortion 5
-1, 3, 7, 3     distortion 4
 0, 1, 5, 4     distortion 2
 1, 2, 6, 5     distortion 2
 0, 1, 5, 6     distortion 6
5. iteration
not possible,-6
not possible,-5
not possible,-4
not possible,-3
 1, 2, 6, 2,-2  distortion 9
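
and the same search as a rough C sketch; like the trace above it keeps the 5 values around each ideal sample as states and assumes the predictor starts at 0:

#include <stdio.h>
#include <limits.h>

#define N 5                                    /* number of samples in the demo */
#define W 5                                    /* states per sample: ideal-2 .. ideal+2 */

static const int diffs[4] = {+4, +1, -1, -4};  /* the encodable differences */

int main(void){
    const int in[N] = {0, 2, 6, 4, -4};
    int  val [N][W];                           /* candidate reconstructed values */
    long cost[N][W];                           /* least total squared error per state */
    int  back[N][W];                           /* best previous state, for backtracking */

    for(int i = 0; i < N; i++)
        for(int s = 0; s < W; s++){
            int  v = in[i] - W/2 + s;
            long e = (long)(v - in[i])*(v - in[i]);
            val [i][s] = v;
            cost[i][s] = LONG_MAX;
            back[i][s] = -1;
            for(int p = 0; p < (i ? W : 1); p++){
                int prev = i ? val[i-1][p] : 0;      /* predictor starts at 0 */
                if(i && cost[i-1][p] == LONG_MAX)
                    continue;                        /* unreachable state */
                for(int d = 0; d < 4; d++)
                    if(prev + diffs[d] == v){
                        long c = (i ? cost[i-1][p] : 0) + e;
                        if(c < cost[i][s]){
                            cost[i][s] = c;
                            back[i][s] = p;
                        }
                    }
            }
        }

    /* pick the cheapest final state and backtrack */
    int s = 0;
    for(int k = 1; k < W; k++)
        if(cost[N-1][k] < cost[N-1][s])
            s = k;
    long total = cost[N-1][s];
    int out[N];
    for(int i = N-1; i >= 0; i--){
        out[i] = val[i][s];
        s = back[i][s];
    }
    for(int i = 0; i < N; i++)
        printf("%d ", out[i]);
    printf(" distortion=%ld\n", total);        /* prints: 1 2 6 2 -2  distortion=9 */
    return 0;
}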
Filed under: Uncategorized — Michael @ 20:18

2006-05-23

PT_GNU_STACK

You upgrade your linux distro and suddenly things are failing with error messages like "error while loading shared libraries: libfoobar.so.123: cannot enable executable stack as shared object requires: Permission denied". What happened?

Well, you just have become yet another witness of the decay of the GNU toolchain. Besides gcc and glibc becoming more and more bloated every day, gcc bug reports being randomly closed without fixing them and so on, we have for some time now had the beautiful PT_GNU_STACK feature in every GNU tool, which tries very hard to make the stack of applications executable. Without grsec your system will be more vulnerable; with grsec stuff will fail hard, as grsec will not give apps an executable stack if they don't need one, and the redhat, ehm, i mean gnu libc/loader/… will then simply refuse to continue if the unnecessary executability isn't provided

what exactly is PT_GNU_STACK?

PT_GNU_STACK is an entry in the elf file format which contains the access rights (read, write, execute) of the stack. So far so good, nothing bad here; the problem is how these access rights are chosen by the tools …
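
for illustration, a little C program which prints that entry of an elf file; a sketch assuming a 64bit elf and doing no sanity checks, PT_GNU_STACK and the PF_* flags come from elf.h:

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv){
    Elf64_Ehdr eh;
    FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;

    if(!f || fread(&eh, sizeof(eh), 1, f) != 1)
        return 1;
    for(int i = 0; i < eh.e_phnum; i++){
        Elf64_Phdr ph;
        fseek(f, (long)(eh.e_phoff + i*eh.e_phentsize), SEEK_SET);
        if(fread(&ph, sizeof(ph), 1, f) != 1)
            return 1;
        if(ph.p_type == PT_GNU_STACK)
            printf("stack: %s%s%s\n", ph.p_flags & PF_R ? "R" : "",
                                      ph.p_flags & PF_W ? "W" : "",
                                      ph.p_flags & PF_X ? "X" : "");
    }
    fclose(f);
    return 0;
}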

The idea behind PT_GNU_STACK / choosing the access rights

The stack is always readable and writable, obviously, that's needed; so we are left with the question of it being executable. When linking objects together into a library, or libraries into an application, a single object requiring an executable stack will cause the final application and all libraries which contain that object to need an executable stack; if no object needs an executable stack, the final application won't need one either

The design and implementation of PT_GNU_STACK

Every tool in the gnu toolchain will, from source over assembly, objects and libraries to the final application, merge the PT_GNU_STACK access rights so that a single object needing an executable stack will cause the final application to need one; in case of doubt (no PT_GNU_STACK entry) the default is that the specific object needs an executable stack

The problem with this / the total failure of the PT_GNU_STACK idea

Undecidability at the compilation and assembly level

Deciding if a piece of code needs an executable stack is provably not possible; it is indeed equivalent to solving the halting problem
so gcc, gas and so on guess if a piece of code needs an executable stack, and if they are unsure, they claim that it does

Undecidability at the global level

For example, take calling a function through a global pointer: the compiler compiling that file can't decide about the stack access rights, as it doesn't know all the code which changes the pointer; the linker later would have to disassemble the object files again just to have the info needed to decide it …

several definitions of executability

If you execute code on the stack, does it need to be executable? If you use redhat's execshield then AFAIK yes; if you use grsec it depends upon the code, as grsec can detect and emulate some common and harmless sequences of instructions on the stack (trampolines)

Even if an application needs an executable stack for some feature

There's no need to have an executable stack if that feature isn't used; actually, applications might explicitly try to enable / disable executability of a specific part of memory depending upon need, and gcc will in that case almost certainly claim that the app always needs an executable stack …

All these things cause applications to claim that they require an executable stack when they simply do not, which makes your system much more vulnerable to attacks

The alternative

Mark final applications which need an executable stack as such, and make the default to not require an executable stack

what's the difference?

Well, in practice more than 99% of all applications don't need an executable stack, and if an application needs an executable stack but doesn't have one then that causes a clear failure which will be noticed immediately and can be fixed by marking that application as needing an executable stack
OTOH the current PT_GNU_STACK system, with its pass-the-flags-from-source-to-final-app design, ideally needs corrections to be done for each source file which the gnu tools can't deal with automatically, and there are many more source files than applications, not to mention that the error rate at the application level seems to be far in excess of 1%
furthermore, marking an app as requiring an executable stack while it actually doesn't won't produce any warning, error or other hint that something is wrong, while security is greatly impaired

Filed under: Off Topic — Michael @ 14:29

2006-05-19

Coding Style / Code Conventions

There are many coding style guides, IMO actually significantly too many … their existence is often only justified by the single hypothesis that having everything in the same style is more readable than having one style per file and letting the original author / maintainer decide it. Now maybe my memory sucks, but i don't remember a single case where the difference in coding style between different parts of a project caused me any difficulty in understanding. It was rather the lack of any indentation, the exclusive use of single-letter variables or simply the complexity of the code, the lack of comments, and my stupidity of course, which caused difficulty understanding something …

hmm, mphq cvs is still dead :( so let's try to write the ultimate coding style guide and hope mphq works when i'm done, so i can finally commit a few minor bugfixes

The ultimate coding (style) guide

Rule 1

Every rule must be justifiable, and the justification must not apply to a contradicting rule too
yes, i'm trying to avoid the “for consistency” justification, which can be applied to literally anything

Rule 2a

Write consistent code / Changes should be approximately in the same style as the surrounding code
Justification: having the coding and indentation style change every few lines makes code somewhat harder to read, while an overly strict style restriction will cause some volunteers (or employees) to leave

Rule 2b

Write indented code (code within a block should be at the same indentation level, code in parent and child blocks should be at different levels); placement and indentation of stuff like {} is not restricted here, just the actual code
Justification: see mplayer.c for what happens without that rule :)

Rule 2c

Avoid constructs which IDEs/editors have difficulty with; common examples are non-ASCII letters, tab / space mixes, differing tab lengths in different files, dos line endings, …
Justification: obvious

Rule 2d

avoid non-english (in identifiers and comments)
Justification: not everyone speaks every language; english is understood by most people. Of course if every programmer you care about understands hungarian then it's a perfect choice too

Rule 3

maximize useful information at minimum size and complexity (simple code preferred)

so a variable name like szMyCompleteName is normally worse than simply name, as the type is probably obvious and “complete” adds no new info … of course a function which converts zero-terminated strings to length + chars style would benefit from having the type in the names, it all depends upon how and where a variable is used
comments should not say what's already obvious from the code, like int i=sqrt(v); //i is the square root of v, but instead provide useful info like //i is the largest factor v can contain, so we only need to test integers less than i, but then in that case maybe renaming i to largest_factor would have provided more info at less size …
justification: the idea is to provide the reader with the info she is interested in, not to duplicate it and spread the important stuff out between noise
also, less code means fewer places where a bug could be
Note: this rule also forbids code duplication and other bloatedness

Rule 4

Write efficient code
Note: efficient here means little memory, disk, cpu, …
Justification: well this one should be obvious

Rule 5

Write portable code
Justification: well this one should be obvious too

Rule 6

Write localized / internationalized applications (user-visible stuff in her native language)
Note: maybe error messages should be dealt with differently, as the developers need to understand them in bug reports …
Justification: well this one should be obvious too

Rule 7

Write compatible code (don't break compatibility of file formats, your last library release, …)
Justification: well this one should be obvious too

Rule 8

Write secure code (don't write over the end of an array, check untrusted input carefully, be careful with malloc(w*h) due to overflows of the *, … see the sketch below)
Justification: well this one should be obvious too
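
a sketch of the malloc(w*h) case from rule 8:

#include <stdlib.h>
#include <stdint.h>

/* allocate w*h bytes, failing cleanly if the multiplication would overflow */
void *alloc_image(size_t w, size_t h){
    if(h && w > SIZE_MAX / h)
        return NULL;
    return malloc(w * h);
}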

Rule 9

Write modular code / split code into independent parts
Justification: many dependencies make code very hard to understand and maintain

Rule 10

If some rules contradict, use common sense (which goal is more important in the specific case)

Filed under: Off Topic — Michael @ 19:49

2006-05-15

TV sets

It's been a month since my last blog entry; i'm not sure if that's bad or not. Anyway, here's some more halfway off-topic crap: i want(ed) to buy a new TV set, as the one i have here is _old_ and doesn't really work very well …

First question CRT or LCD?

CRTs are old; many shops don't have any CRT based TV sets anymore, and everyone seems to know they are old and inferior
LCDs: shiny new technology, flatter and better. Well, wait, why exactly does every LCD TV below 1000€ have a _vastly_ inferior picture quality to my mother's el-cheapo noname CRT TV, which probably cost < 200€ … and i'm not saying the ones above 1000€ were better, they were just out of my acceptable price range, and many were showing high-resolution HDTV, which made fair comparison hard …

Size

for CRTs, size seems strongly correlated with quality and features, no 55cm 100hz TV sets for example; it also seemed like larger isn't really more expensive. For LCDs of course larger is much more expensive …

100hz CRT vs 50hz

50hz flickers, and actually i somehow can't get rid of the feeling that it flickers more strongly than 50hz TV sets did a few years ago
100hz doesn't flicker. Wait, actually i found just one 100hz TV set <300€ which didn't have at least part of the image flickering; all the others had parts of the TV channel's logo flickering. Also, with 100hz you can choose between some residual interlacing artifacts or “staircase” artifacts on moving objects; i've not found any 100hz tv set which didn't show some artifacts, although in the end i prefer 100hz over 50hz, the artifacts are less annoying than the flickering IMHO

Comparing TV sets

Well, you walk into a shop with TV sets, many TV sets … all showing different stuff, and if you are lucky and they show the same thing then it's a crappy dark low-quality music video from MTV, completely unsuitable for comparing. But wait, one shop had remote controls “attached” to each tv set so customers could change settings and channels as they liked; sadly less than half the remotes were working …

Summary, conclusion and personal opinion (yeah, as if the whole blog was anything but subjective personal opinion)

LCDs have improved, more in the written numbers than in reality; CRTs seem to have become worse, but still LCDs are behind CRTs both in quality and in quality per price
many TV sets, be it CRT or LCD, show terrible artifacts
50hz CRTs: flickering
100hz CRTs: staircase or interlacing artifacts on moving objects
CRTs in general: some cases of brightness-size dependence; most have oversharpened and too strongly saturated images (yeah, the last 2 are purely an adjustment thing, probably done as most customers are idiots and prefer oversharpened and too strongly saturated images)
LCDs: washed-out blurry images, colors more or less wrong, artifacts caused by slow reaction times, not to mention 2-4 times as expensive as CRTs

Filed under: Off Topic — Michael @ 11:56