Vinayak Tiwary, Sr. Manager, Technical, UPTV-Bharat Samachar, Lucknow

Video compression is a very interesting topic. Video is a temporal sequence of frames, and each frame can be considered an image comprising a spatial arrangement of pixels. Before looking at how compression works, it is worth understanding why it is necessary.

If we choose any frame of video at 720×576 resolution, it consumes 720×576×24 bits (24 is the color depth, 8 bits for each color, i.e., RGB) = 9,953,280 bits, which is about 10 megabits per frame. There are 25 or 30 frames in one second of video, which means one second of uncompressed video amounts to roughly 250 to 300 megabits. Sending such uncompressed video through satellite would demand far more capacity than a standard 36 MHz transponder can carry. It would cost too much, and that is why we send videos after compression, so that a transponder of modest bandwidth can carry them easily, or one transponder can carry many videos at a time.
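
As a quick check of this arithmetic, here is a small Python sketch; the resolution, frame rates and 24-bit RGB depth are simply the figures assumed above:

```python
def uncompressed_bitrate(width, height, fps, bits_per_pixel=24):
    """Raw video bit rate in bits per second (no subsampling, no compression)."""
    return width * height * bits_per_pixel * fps

bits_per_frame = 720 * 576 * 24
print(f"Bits per frame: {bits_per_frame:,}")  # 9,953,280 (~10 Mbit)
print(f"At 25 fps: {uncompressed_bitrate(720, 576, 25) / 1e6:.0f} Mbit/s")  # ~249
print(f"At 30 fps: {uncompressed_bitrate(720, 576, 30) / 1e6:.0f} Mbit/s")  # ~299
```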

This is not the only reason behind video compression. We also need to minimize the storage such videos occupy, and for that too we need video compression.

Uncompressed 1080 HD video footage takes about 10.5 GB of space for each minute of video. Using a smartphone to shoot video in 1080p resolution takes about 130 MB of storage for each minute of footage, while 4K video takes about 375 MB of space for each minute filmed. Because it takes up so much space, video must be compressed before uploading to the web. 'Compressed' just means that the information is packed into a smaller space.
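
The uncompressed figure can be estimated the same way; the 130 MB and 375 MB smartphone figures depend on the codec and bit rate used, so those are quoted values rather than something we can derive here:

```python
def uncompressed_gb_per_minute(width, height, fps, bits_per_pixel=24):
    """Storage for one minute of raw video, in gigabytes (10**9 bytes)."""
    bytes_per_second = width * height * bits_per_pixel * fps / 8
    return bytes_per_second * 60 / 1e9

# 1080p at 30 fps, 24-bit RGB: about 11 GB per minute, in the same
# ballpark as the ~10.5 GB quoted above.
print(f"{uncompressed_gb_per_minute(1920, 1080, 30):.1f} GB/min")
```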

The question then arises: what method can compress video so that the video data does not change even after compression?

Basically there are two types of compression – lossless and lossy.

  • Lossless – as the name suggests, no information is lost after compression (a round-trip sketch follows this list).
  • Lossy – some information is lost after compression (the output is not exactly the same), but we feel that we are getting the entire information. It is nothing but an illusion of our eyes.
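
As a quick illustration of the lossless case, here is a minimal Python sketch using the standard-library zlib module; the decompressed bytes come back exactly identical to the input:

```python
import zlib

data = b"example video data " * 100    # any repetitive byte string
packed = zlib.compress(data)
assert zlib.decompress(packed) == data  # lossless: exact round trip
print(len(data), "->", len(packed), "bytes")
```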

In video compression, both methods, lossless and lossy, are used.

I am describing here the basics of video compression. In satellite communication, one of the sources of video is the camera. Two basic principles are used – JPEG (Joint Photographic Experts Group), used to compress images by removing the spatial redundancy that exists within each frame, and MPEG (Moving Picture Experts Group), used to remove the temporal redundancy across a set of frames.
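
To see what temporal redundancy means, compare two consecutive frames: most pixels are unchanged, so storing only the difference takes far fewer significant values. A minimal NumPy sketch with synthetic stand-in frames:

```python
import numpy as np

# Two synthetic consecutive frames: a bright patch shifts 2 pixels right.
frame1 = np.zeros((576, 720), dtype=np.int16)
frame1[100:200, 100:200] = 200
frame2 = frame1.copy()
frame2[100:200, 102:202] = 200
frame2[100:200, 100:102] = 0

diff = frame2 - frame1                 # temporal prediction residual
changed = np.count_nonzero(diff)
print(f"Pixels changed: {changed} of {diff.size} "
      f"({100 * changed / diff.size:.2f}%)")   # only a tiny fraction
```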

JPEG involves the following four steps – block preparation, discrete cosine transform, quantization, and compression.

The video is first digitized into frames of, say, 640×480 pixels, each pixel having RGB components. The process of digitizing the domain is called sampling. Next, each pixel is converted into YUV or YCbCr components; Y is the luminance, which matters most because our eyes are more sensitive to luminance than to color.
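
A minimal sketch of the RGB-to-YCbCr conversion, using the BT.601 coefficients that JPEG uses; the pixel values below are made-up examples:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr, as used in JPEG (values 0-255)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))       # pure red: low Y, high Cr
print(rgb_to_ycbcr(128, 128, 128))   # grey: Cb = Cr = 128 (no color)
```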

Actually, the video streams as a sequence: the sequence is first divided into groups of pictures, groups into pictures (I, B and P frames, in a pattern such as IBBP), pictures into slices, and slices into blocks. After block preparation, the blocks are sent to the discrete cosine transform step, where each block of pixel values is converted into a block of frequency coefficients.
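
A minimal NumPy sketch of the 2D DCT on one 8×8 block; the flat block below is a made-up example, and, as expected for a flat block, all its energy lands in the single DC (top-left) coefficient:

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT-II basis matrix.
C = np.array([[np.cos((2 * j + 1) * i * np.pi / (2 * N)) for j in range(N)]
              for i in range(N)]) * np.sqrt(2 / N)
C[0, :] /= np.sqrt(2)

block = np.full((N, N), 100.0) - 128   # flat 8x8 block, level-shifted as in JPEG
coeffs = C @ block @ C.T               # 2D DCT: transform rows, then columns

print(np.round(coeffs).astype(int))    # only the top-left (DC) entry is non-zero
```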

After the DCT step, blocks are sent for the quantization process. The coefficients are divided by different numbers, drawn from a quantization table, and the fractional parts are removed. The process of digitizing the range is called quantization. MPEG is lossy because of the quantization step; however, our eyes are hardly sensitive to the differences it introduces.
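
A minimal sketch of quantization, using the example luminance table from Annex K of the JPEG standard; the input coefficients are random stand-ins for a real DCT output:

```python
import numpy as np

# Example luminance quantization table from the JPEG standard (Annex K).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

rng = np.random.default_rng(0)
coeffs = rng.normal(0, 50, (8, 8))      # stand-in DCT coefficients
quantized = np.round(coeffs / Q).astype(int)

# Dividing by large values and rounding sends most high-frequency
# coefficients to 0 -- this is where information is irreversibly lost.
print(quantized)
```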

Run-length coding – a zigzag pattern is used to read out the quantized block so that all the 0s are concentrated together. A run of 0s can then be replaced by a single count (say, 38 zeros).
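
A minimal sketch of the zigzag read-out and the run-length coding of zeros; the 8×8 block below is a made-up example with a few non-zero low-frequency coefficients:

```python
import numpy as np

def zigzag(block):
    """Read an 8x8 block along anti-diagonals in JPEG zigzag order."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [int(block[i, j]) for i, j in order]

def rle_zeros(seq):
    """Replace every run of zeros with a ('Z', run_length) pair."""
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
            continue
        if run:
            out.append(('Z', run))
            run = 0
        out.append(v)
    if run:
        out.append(('Z', run))
    return out

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0] = 42, 7, -3   # low-frequency corner
print(rle_zeros(zigzag(block)))   # [42, 7, -3, ('Z', 61)]
```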

The Moving Picture Experts Group has produced standards such as MPEG-1, MPEG-2 and MPEG-4, and each standard has its own set of techniques for compressing video, data, and audio. MPEG provides the tools for compression.