Zeeshan Rahman , Sr. Broadcast Executive , Times Television Network


The ubiquitous nature of cellular 3G/4G networks and the Internet has made broadcasting video over these IP networks very attractive. However, compared to traditional video transmission, the best-effort nature of these networks poses many challenges. The Internet does not offer the quality of service (QoS) required to stream real-time video.

In streaming applications, video is played out while parts of the content are still being received and decoded. Because of this real-time nature, streaming video has bandwidth, delay, and timing constraints. The audio and video must be played out continuously and remain synchronized; any pause or loss of sync is annoying to end users.

Streaming video over the Internet basically consists of the broadcast content; an encoder to digitize and compress it; and protocols to transmit it over a content delivery network, such as a cellular network, a Wi-Fi hotspot, or the Internet, so that the content is distributed and delivered in a way that meets the QoS requirements.

Raw video must be processed, graded, and compressed before transmission to achieve efficiency. Scalable video coding can achieve high compression efficiency with low complexity. Error-resilient encoding can deal with packet loss, and delay-cognizant video coding is effective in dealing with delay variations. Combining these techniques can provide a range of solutions to the problem of QoS fluctuations.

Adaptive Bit Rate Video Encoding Using H.264/AVC

The H.264 and more advanced H.265 encoding standards provide good video quality at substantially lower bit rates. To maximize video performance over IP networks, H.264 encoding is used with adaptive bit rate (ABR) encoding control, which dynamically adjusts the H.264 encoding to the network. The H.264 standard also supports quarter-pixel precision, doubling the motion prediction accuracy of half-pixel precision.

ABR encoding maximizes video-over-IP performance by dynamically adjusting the real-time rate control parameters of H.264. A maximum video encoding rate for the network is set as a cap value; below this cap, the encoding rate can be dynamically throttled up and down depending on the available network bandwidth.
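The cap-and-throttle behaviour can be sketched in a few lines of Python. This is only an illustration: the 20 percent headroom, the cap value, and the encoder call named in the comment are assumptions, not part of any particular encoder's API.

```python
# Hypothetical ABR control loop: pick an encoder target rate from a bandwidth
# estimate, never exceeding the configured cap for this network.
MAX_RATE_KBPS = 4000            # cap value configured for the network (illustrative)

def target_encoder_rate(estimated_bandwidth_kbps):
    """Throttle the H.264 encoding rate with the network, never above the cap."""
    # Leave ~20% headroom for audio, retransmissions and protocol overhead.
    return min(MAX_RATE_KBPS, int(estimated_bandwidth_kbps * 0.8))

for bw in (6000, 3000, 1200):                    # simulated bandwidth estimates
    print(target_encoder_rate(bw))               # 4000 (capped), 2400, 960
    # encoder.set_bitrate(target_encoder_rate(bw))   # hypothetical encoder call
```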

Streaming Network Protocols

Network protocols such as the Transmission Control Protocol (TCP), Internet Protocol (IP), and User Datagram Protocol (UDP) are used to stream video to end users on cell phones, laptops, or tablets. Congestion control and error control are usually needed to cope with packet loss and delay in these wired or wireless IP networks.

Looking in more depth, various protocols and combinations of protocols have been designed to provide a complete streaming service between end users over IP networks. Protocols used for streaming applications can be broadly classified into session control protocols, network-layer protocols, and transport protocols. Session control protocols such as the Real-Time Streaming Protocol (RTSP) and the Session Initiation Protocol (SIP) provide control over the delivery of video during an established session. Network-layer protocols such as IP support network addressing of the senders and receivers.

Transport protocols provide end-to-end functions for streaming applications. There are two fundamental transport protocols in use today: TCP and UDP.

Transmission Control Protocol (TCP)

TCP provides a connection-oriented, full-duplex service between the sender and receiver and is widely used to transport data over the Internet. TCP requires a three-way handshake to establish a connection.

After the connection has been established, TCP manages a reliable transmission in which the receiver receives data in the same order in which the source sent it. It detects whether any packet has been discarded by the network, duplicated, corrupted, or reordered. The sender keeps a copy of every transmitted packet until it receives an acknowledgement from the receiver confirming its correct delivery.

TCP employs congestion control to avoid sending too much traffic and adapts to the load conditions within the network, so there is no constant data transfer rate in TCP. If the sender senses that more network capacity is available, it increases its transfer rate until some form of network saturation is reached; conversely, if it experiences congestion, the sender lowers its transfer rate to allow the network to recover. TCP also employs a flow control mechanism to prevent the receiver buffer from overflowing. Thus TCP tries to achieve the highest possible data transfer rate without causing significant data loss.
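From the application's point of view, all of this happens inside the TCP stack: the sender simply opens a connection and writes bytes. A minimal Python sketch, assuming a receiver is already listening on the illustrative loopback address and port:

```python
import socket

# Minimal TCP sender: the three-way handshake, retransmission, in-order delivery,
# congestion control and flow control all happen inside the kernel's TCP stack.
HOST, PORT = "127.0.0.1", 5000          # illustrative address/port

with socket.create_connection((HOST, PORT)) as sock:
    for chunk_no in range(10):
        payload = f"video-chunk-{chunk_no}".encode()
        sock.sendall(payload)           # blocks if the receiver's window is full
```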

User Datagram Protocol (UDP)

UDP provides a connectionless, best-effort service to transfer data within a network. It is unreliable, as it provides no guarantee of packet delivery and no protection from packet duplication. It does not provide congestion control and can choke the network by sending data at a higher rate than the available network capacity.
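By contrast, a UDP sender simply fires datagrams at the receiver's address. Nothing in the sketch below (address and port again illustrative) guarantees that any datagram arrives, arrives once, or arrives in order:

```python
import socket

HOST, PORT = "127.0.0.1", 5004          # illustrative address/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for chunk_no in range(10):
    payload = f"video-chunk-{chunk_no}".encode()
    # Best effort: no handshake, no acknowledgement, no retransmission.
    sock.sendto(payload, (HOST, PORT))
sock.close()
```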

Both TCP and UDP have attractive features for broadcasting video over the Internet. However, TCP may not be efficient, as it incurs considerable overhead and delay, and retransmitted packets might miss their playout time, whereas UDP is not reliable and has no flow control mechanism. Transport-layer protocols such as the Stream Control Transmission Protocol (SCTP) and Reliable UDP (RUDP) attempt to combine the simple transmission model of UDP with the congestion control and error correction mechanisms of TCP. Another approach is to use an upper-layer protocol such as the Real-time Transport Protocol (RTP), which works on top of UDP/TCP, to provide end-to-end transport functions for streaming applications.
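RTP adds a small header (sequence number, timestamp, payload type, and source identifier) in front of each media payload so the receiver can reorder packets and reconstruct timing. The sketch below packs the fixed 12-byte RTP header defined in RFC 3550 and sends it over UDP; the payload type 96 and the 90 kHz video clock implied by the timestamp step are common conventions but are assumptions here, not requirements.

```python
import socket
import struct

def rtp_packet(payload, seq, timestamp, ssrc, payload_type=96):
    """Build a version-2 RTP packet: 12-byte fixed header + payload (RFC 3550)."""
    v_p_x_cc = 2 << 6                  # version 2, no padding/extension/CSRC
    m_pt = payload_type & 0x7F         # marker bit 0, 7-bit payload type
    header = struct.pack("!BBHII", v_p_x_cc, m_pt, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq, frame in enumerate([b"frame-0", b"frame-1", b"frame-2"]):
    pkt = rtp_packet(frame, seq, timestamp=seq * 3000, ssrc=0x12345678)
    sock.sendto(pkt, ("127.0.0.1", 5004))   # illustrative destination
```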

Quality of Service (QoS) Control

Excessive delay and the bursty nature of packet losses in the Internet can have a major impact on the performance of real-time video streaming applications. In the absence of QoS support from the network, end systems must employ application-layer QoS control, such as rate (flow) control and error control.

Rate Control

Rate control is necessary at the end system to help reduce packet loss and excessive delay. It determines the sending rate of the video stream based on the available network capacity in order to avoid congestion and packet loss. Source-based rate control and receiver-based rate control are the two main types of rate control.
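Source-based rate control typically reacts to receiver feedback (for example, loss fractions reported in RTCP receiver reports) by raising the sending rate gently when the path is clean and cutting it sharply when loss is reported. A hedged sketch of that loop follows; the thresholds and step sizes are illustrative and not taken from any particular standard.

```python
def adjust_send_rate(rate_kbps, loss_fraction,
                     increase_step=50, decrease_factor=0.85,
                     loss_threshold=0.01, max_kbps=5000, min_kbps=200):
    """Additive-increase / multiplicative-decrease source-based rate control."""
    if loss_fraction > loss_threshold:
        rate_kbps *= decrease_factor      # congestion reported: back off
    else:
        rate_kbps += increase_step        # clean interval: probe for more
    return max(min_kbps, min(max_kbps, rate_kbps))

rate = 1000
for loss in [0.0, 0.0, 0.03, 0.0]:        # simulated feedback reports
    rate = adjust_send_rate(rate, loss)
    print(round(rate))                    # 1050, 1100, 935, 985
```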

Error Control

Error control schemes are necessary for reliable communication as they detect and correct packet errors. There are three types of error control schemes.

Forward error correction (FEC). FEC adds redundant information to the original message so that it can be reconstructed correctly at the receiver in the presence of packet loss (a minimal parity sketch follows these three schemes).

Retransmission schemes. Retransmission is one of the main methods of recovering lost packets. However, it is often unsuitable for real-time video streaming, as retransmitted packets might miss their playout time.

Error concealment. Error concealment aims to minimize the deterioration caused by transmission loss or errors by masking the corrupted data.
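To make the FEC idea concrete, the sketch below protects a group of equal-length packets with a single XOR parity packet, so any one lost packet in the group can be rebuilt at the receiver. Real systems use stronger codes (for example Reed-Solomon), so this is only a minimal illustration.

```python
def xor_parity(packets):
    """Build one parity packet by XOR-ing a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet (None entry) from the parity packet."""
    rebuilt = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, byte in enumerate(pkt):
                rebuilt[i] ^= byte
    return bytes(rebuilt)

group = [b"pkt-A---", b"pkt-B---", b"pkt-C---"]      # equal-length packets
parity = xor_parity(group)
print(recover([group[0], None, group[2]], parity))   # b'pkt-B---'
```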

Other transport protocols such as SCTP and RUDP have been designed as a trade-off between TCP and UDP for streaming real-time video over IP networks. Rate control and error control schemes are crucial because of the excessive delay and bursty packet loss of wired and wireless packet networks. Rate control determines the sending rate of the video stream based on the available network capacity and thus avoids congestion and packet loss. Error control schemes can detect and correct data errors and are therefore necessary for reliable video broadcasting.

These attributes are crucial for broadcasting video over bandwidth-constrained wired or wireless IP networks such as cellular 3G/4G networks.