Decrypting VideoCrypt
To complement my last blog entry, here's a description of how VideoCrypt works and how it can be broken with reasonable quality.
PAL
First, before we can talk about VideoCrypt, we need to understand how PAL works.
Video information in PAL (and NTSC and SECAM) is stored line-wise from top to bottom, and interlaced, so one field carries the even lines and the next field the odd lines (313 lines in even fields, 312 in odd ones, of which ~288 contain a visible picture for PAL).
Each line has a sync pulse to mark the end of a line, a horizontal blanking interval (HBI) during which the electron beam in CRTs "moves" back to the left after the end of the previous line at the right, and obviously the visible part, in which brightness = amplitude.
In case of color, the U and V components of the YUV color value are stored quadrature amplitude modulated at the color subcarrier frequency, and the V component has its sign inverted in every 2nd line of a field; there's also a short reference color burst added in the HBI.
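As a minimal sketch of this modulation (the subcarrier frequency is the nominal PAL value; the rest is my own illustration, not broadcast-exact):

    #include <math.h>

    /* Sketch of PAL chroma modulation: U and V are quadrature amplitude
     * modulated onto the color subcarrier, and V flips its sign on every
     * second line (the "PAL switch"). */
    double pal_chroma(double u, double v, int line, double t)
    {
        const double fsc = 4433618.75;           /* PAL subcarrier in Hz */
        double v_sign = (line & 1) ? -1.0 : 1.0; /* V sign alternates per line */
        return u * sin(2.0 * M_PI * fsc * t)
             + v_sign * v * cos(2.0 * M_PI * fsc * t);
    }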
VideoCrypt
the PRNG seed
Binary data from the vertical blanking interval is fed into a smartcard, which then, among other things, produces a PRNG seed for the VideoCrypt decoder, but none of this matters to us here.
the actual decryption
The PRNG (pseudo random number generator) outputs a sequence of 8-bit values (cutpoints), one for each line of video. Each of these lines is then split into 4 parts: parts 0 and 3 are outside the visible area and aren't touched by VideoCrypt; parts 1 and 2 are the visible area and are exchanged. The location which separates parts 1 and 2 is the cutpoint. VideoCrypt does nothing special with the color information, it just exchanges raw digitally sampled data.
Because this exchange operation cannot be done with zero delay, the VideoCrypt decoder will also output parts 1 and 2 one line later than when they were transmitted.
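A minimal sketch of undoing that exchange for one line of 8-bit samples (buffer layout and parameter names are mine, and the 1-line output delay is ignored):

    #include <string.h>
    #include <stdint.h>

    /* Sketch: undo the cut-and-exchange for one encrypted line.  The
     * visible area is [start, end), cut is the cutpoint within it; the
     * encrypted line carries part 2 first, then part 1. */
    static void decrypt_line(uint8_t *dst, const uint8_t *src, int line_len,
                             int start, int end, int cut)
    {
        int len1 = cut - start;  /* part 1: [start, cut) in the clear image */
        int len2 = end - cut;    /* part 2: [cut, end)   in the clear image */
        memcpy(dst,         src,                start);          /* part 0 */
        memcpy(dst + start, src + start + len2, len1);  /* part 1 was last  */
        memcpy(dst + cut,   src + start,        len2);  /* part 2 was first */
        memcpy(dst + end,   src + end, line_len - end);          /* part 3 */
    }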
Breaking VideoCrypt
Lacking information about the PRNG, we can only guess the cutpoints, based on the assumption that color/luminance values of pixels which are close together will be strongly correlated, while distant pixels will differ more.
Still, a naive brute force would require comparing 256^288 images of w*288 pixels each.
Depending on how we test an image for how good it is, we can significantly reduce the complexity. If we use a score function which is the sum of some compare function over consecutive line pairs and their cutpoints, then the dynamic programming / Viterbi algorithm reduces the complexity to 256*256*288 such 2-line checks. If we further limit ourselves to the common cross correlation plus some edge detector, and only consider a few cutpoint differences close to the best cross correlation result, the complexity becomes w*log(w)*288 + 9*w*288, which is what Markus Kuhn's antisky.c did. antisky was far from realtime decoding though, mainly because it was inefficiently written (floating point code everywhere, at a time when no CPU had a reasonably fast FPU, …).
Note, if you think comparing 256^288 images would be stupid, it's not: think of choosing the image which compresses best with JPEG or another codec; this might (or might not) work better than the line-wise method.
Step 1 (downsampling)
To speed up the cross correlation and other things, the whole image can optionally be downsampled first.
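For illustration, a 2x horizontal box downsample of a luma line (the factor is my choice):

    #include <stdint.h>

    /* Sketch: 2x box downsample of one line of 8-bit luma samples. */
    static void downsample2x(uint8_t *dst, const uint8_t *src, int w)
    {
        for (int x = 0; x < w / 2; x++)
            dst[x] = (src[2 * x] + src[2 * x + 1] + 1) >> 1;
    }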
Step 2 (luminance cross correlation)
antisky.c used an FFT based cross correlation, which, while asymptotically quite fast, is really not fast at all for finding the best matching cutpoint difference in practice. 2010 uses an adaptive comparison which first tries a few coarsely spaced cutpoint differences and then recursively tries more around the better ones; it's simpler and faster.
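A sketch of such a coarse-to-fine search; score() is a placeholder for whatever line-pair compare function is used, not 2010's actual code:

    #include <stdint.h>

    /* Sketch: adaptive search for the best cutpoint difference between two
     * lines a and b.  score(a, b, w, d) rates how well the lines match at
     * difference d, higher = better. */
    static int best_shift(const uint8_t *a, const uint8_t *b, int w,
                          double (*score)(const uint8_t *, const uint8_t *,
                                          int, int))
    {
        int best = 0;
        double best_score = -1e300;
        for (int d = 0; d < 256; d += 16) {        /* coarse pass */
            double s = score(a, b, w, d);
            if (s > best_score) { best_score = s; best = d; }
        }
        for (int step = 8; step >= 1; step /= 2) { /* refine around winner */
            int center = best;
            for (int d = center - step; d <= center + step; d += 2 * step) {
                if (d < 0 || d > 255) continue;
                double s = score(a, b, w, d);
                if (s > best_score) { best_score = s; best = d; }
            }
        }
        return best;
    }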
Step 3 (mismatched lines detection)
Lines which simply can't be matched by cross correlation (mean below some threshold, or variance below some threshold, or the best matching score from the cross correlation not much better than the average score) must be marked, otherwise the (random) cross correlation result will trash the surrounding lines.
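A sketch of such a test; the concrete thresholds are arbitrary illustration values:

    #include <stdint.h>

    /* Sketch: decide whether a line carries enough signal to be matched.
     * best_score/avg_score come from the cross correlation of step 2. */
    static int line_matchable(const uint8_t *l, int w,
                              double best_score, double avg_score)
    {
        double mean = 0, var = 0;
        for (int x = 0; x < w; x++) mean += l[x];
        mean /= w;
        for (int x = 0; x < w; x++) var += (l[x] - mean) * (l[x] - mean);
        var /= w;
        if (mean < 16.0 || var < 4.0)     /* nearly black or nearly flat */
            return 0;
        if (best_score < 1.1 * avg_score) /* correlation peak not distinct */
            return 0;
        return 1;
    }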
Step 4 (PAL phase detection)
As already described above, PAL flips the sign of the V component every second line, and the first line may or may not be flipped. If the capture hardware doesn't provide us with this info, we need to find it ourselves if we want to decode color. One way is to let the user set the flag and then just flip it after every frame. A much more robust method is to compare the colors of the first and last pixels of the encrypted line that will be exchanged: these are guaranteed to be adjacent in the decrypted image, so there they will have almost the same color, while in the encrypted image their difference depends on the PAL phase.
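A sketch of the robust variant, assuming demodulated (U, V) values for the first and last visible pixels of the encrypted line; the exact distance metric is my guess:

    /* Sketch: test the PAL V-switch phase of a line.  The first and last
     * visible pixels of the encrypted line are adjacent in the clear
     * image, so whichever phase hypothesis makes them match better is
     * probably the right one. */
    static int v_phase_looks_wrong(double u0, double v0, /* first pixel */
                                   double u1, double v1) /* last pixel  */
    {
        double as_is   = (u0 - u1) * (u0 - u1) + (v0 - v1) * (v0 - v1);
        double flipped = (u0 - u1) * (u0 - u1) + (v0 + v1) * (v0 + v1);
        return flipped < as_is; /* V-flipped matches better -> wrong phase */
    }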
Step 5 (finding the Chroma phase difference)
We know the approximate cutpoint difference from the luma cross correlation, so we can simply find the chroma phase/hue difference by calculating the complex correlation coefficient (with U and V as real and imaginary parts). (Note: a little care needs to be taken here due to various things I don't remember exactly …)
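A sketch using C99 complex numbers; the normalization is one of the places where that care is needed:

    #include <complex.h>
    #include <math.h>

    /* Sketch: estimate the chroma phase difference between two overlapping
     * line segments a and b of length n, with U as real and V as imaginary
     * part; the argument of the correlation is the hue rotation. */
    static double chroma_phase_diff(const double complex *a,
                                    const double complex *b, int n)
    {
        double complex corr = 0;
        double norm = 0;
        for (int i = 0; i < n; i++) {
            corr += a[i] * conj(b[i]);
            norm += cabs(a[i]) * cabs(b[i]);
        }
        if (norm < 1e-9)
            return 0;       /* not enough chroma to measure anything */
        return carg(corr);  /* phase difference in radians */
    }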
Step 6 (edge detection)
Simply run your favorite edge detector over the 256 possible cutpoints to detect picture edges, for example log(abs(l[x-2] + l[x-1] - l[x+0] - l[x+1])).
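In code form, directly following that formula (the +1 to keep the log finite is my addition):

    #include <math.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch: edge strength at cutpoint candidate x of luma line l. */
    static double edge_score(const uint8_t *l, int x)
    {
        int d = l[x - 2] + l[x - 1] - l[x] - l[x + 1];
        return log(abs(d) + 1);  /* +1 avoids log(0) on flat areas */
    }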
Step 7 (dynamic programming search)
Here we simply combine all the previous stuff: to find the best cutpoint sequence reaching each cutpoint of line X, we assume we already know the best sequence to every cutpoint of line X-1.
Lines which couldn't be matched to their previous line are a special case, handled like the first line (every cutpoint gets the same score). The same is done with yet another special case, the one where no cutpoint is possible anymore. That can happen because we restrict ourselves to a few relative cutpoints around what the cross correlation guessed to be correct, and there's a small dead zone in which no cutpoints lie, so not every sequence of relative cutpoints is possible, depending on how you define "relative".
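A sketch of the search over the full 0..255 cutpoint range, without the special cases above (2010 restricts the search to a window around the correlation guess; pair_score() is a placeholder bundling the correlation and edge scores):

    /* Sketch: Viterbi-style search for the best cutpoint sequence.
     * pair_score(y, p, c) rates matching line y-1 at cutpoint p against
     * line y at cutpoint c.  Assumes h <= 576 lines. */
    #define N 256

    static void find_cutpoints(int h, double (*pair_score)(int, int, int),
                               int *out)
    {
        static double score[2][N];
        static int    from[576][N];
        int cur = 0;

        for (int c = 0; c < N; c++)
            score[0][c] = 0;           /* first line: every cutpoint equal */

        for (int y = 1; y < h; y++) {
            cur ^= 1;
            for (int c = 0; c < N; c++) {
                double best = -1e300;
                int best_p = 0;
                for (int p = 0; p < N; p++) {
                    double s = score[cur ^ 1][p] + pair_score(y, p, c);
                    if (s > best) { best = s; best_p = p; }
                }
                score[cur][c] = best;
                from[y][c]    = best_p;
            }
        }
        /* pick the best final cutpoint and walk back through the frame */
        int c = 0;
        for (int i = 1; i < N; i++)
            if (score[cur][i] > score[cur][c]) c = i;
        for (int y = h - 1; y >= 0; y--) {
            out[y] = c;
            if (y > 0) c = from[y][c];
        }
    }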
Step 8 (cutpoint sequence cache)
Over some time period, cutpoint sequences were repeating after an hour or so, so I added a cache to 2010, which led to perfect decoding until the repetitions disappeared …
Step 8a (finding the current sequence)
We select several consecutive relative cutpoint pairs and look them up in a hash table; we also try adding ±1 to both, to compensate for errors in our cutpoint guess.
For each pair we get a list of pointers to cutpoint sequences.
We count the number of occurrences of each pointer and choose the most often occurring one as the correct sequence if its count is above some threshold, or return no match otherwise.
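A sketch of the voting; the data structures are made up for illustration, and a real implementation would count with a small hash instead of the quadratic loops here:

    #include <stddef.h>

    typedef struct Seq Seq;  /* opaque: one cached cutpoint sequence */

    /* Sketch: lists[i] is the NULL-terminated candidate list the hash
     * table returned for pair i (including the ±1 fuzzed lookups).
     * Pick the sequence occurring most often, NULL if below threshold. */
    static const Seq *vote(const Seq **lists[], int npairs, int threshold)
    {
        const Seq *best = NULL;
        int best_count = 0;
        for (int i = 0; i < npairs; i++) {
            for (int j = 0; lists[i][j]; j++) {
                const Seq *cand = lists[i][j];
                int count = 0;
                for (int k = 0; k < npairs; k++)
                    for (int m = 0; lists[k][m]; m++)
                        if (lists[k][m] == cand)
                            count++;
                if (count > best_count) { best_count = count; best = cand; }
            }
        }
        return best_count >= threshold ? best : NULL;
    }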
Step 8b (merging the current cutpoint estimates)
We add the current relative cutpoints to the sequence, so that each line has a list of relative cutpoints; if that list gets too long, the least often occurring entry is discarded.
For each line the most often occurring relative cutpoint is chosen and used in later steps; the hash table is updated with it as well.
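A sketch of such a per-line candidate list with eviction of the rarest entry (MAXCAND and the eviction rule are my choices):

    /* Sketch: per-line list of relative cutpoint candidates with counts. */
    #define MAXCAND 4

    typedef struct {
        int cutpoint[MAXCAND];
        int count[MAXCAND];
        int n;
    } LineCand;

    static void add_candidate(LineCand *lc, int rel_cut)
    {
        for (int i = 0; i < lc->n; i++)
            if (lc->cutpoint[i] == rel_cut) { lc->count[i]++; return; }
        if (lc->n < MAXCAND) {
            lc->cutpoint[lc->n] = rel_cut;
            lc->count[lc->n++]  = 1;
        } else {
            int min = 0;  /* list full: evict the least often seen entry */
            for (int i = 1; i < MAXCAND; i++)
                if (lc->count[i] < lc->count[min]) min = i;
            lc->cutpoint[min] = rel_cut;
            lc->count[min]    = 1;
        }
    }

    /* The winner is what later steps use and what updates the hash table. */
    static int best_candidate(const LineCand *lc)
    {
        int best = 0;
        for (int i = 1; i < lc->n; i++)
            if (lc->count[i] > lc->count[best]) best = i;
        return lc->cutpoint[best];
    }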
Step 9 (cut and exchange)
This is simple: just 2-3 memcpys, plus rotating the UV values to compensate for the effects of the shift and the 1-line delay on the color.
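The UV rotation in sketch form; how much phase to apply depends on the subcarrier frequency, the sample rate and the shift distance, which I leave as an input here:

    #include <math.h>

    /* Sketch: rotate a (U, V) pair by the subcarrier phase shift caused
     * by moving samples and delaying the line by one. */
    static void rotate_uv(double *u, double *v, double phase)
    {
        double c = cos(phase), s = sin(phase);
        double nu = c * *u - s * *v;
        double nv = s * *u + c * *v;
        *u = nu;
        *v = nv;
    }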