The video compression industry is on the cusp of technical disruption and uncertainty. On the one hand, there is continued pressure on broadcasters and pay-TV service providers to deliver more channels at improved quality and higher resolutions, in order to meet customer expectations and retain subscriber loyalty amidst growing competition. On the other hand, profitability is falling as costs increase and the ability to charge additional fees is limited. Thus, investing in new infrastructure for additional delivery capacity is a difficult proposition. The critical need is to achieve better compression such that more channels or better resolution can be delivered over existing infrastructure or with only modest expansions in infrastructure.

High Efficiency Video Coding (HEVC), or H.265, is the next-generation video compression technology, designed to provide a nearly 2× improvement in compression efficiency. The challenge with HEVC for now is that implementations are not fully mature. Additional restraints to adoption include the lack of end-to-end workflow components, the lack of decoders, and uncertainty in patent licensing terms. Until HEVC comes of age, there is thus an urgent and critical gap in meeting customer needs for improved compression efficiency.
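To put a roughly 2× efficiency gain in concrete terms, the back-of-envelope sketch below compares how many HD channels a fixed link budget could carry under AVC versus HEVC. The link capacity and per-channel bitrate are illustrative assumptions, not measured figures.

```python
# Back-of-envelope sketch: channel capacity under AVC vs. HEVC.
# The bitrates below are illustrative assumptions, not benchmark results.

LINK_CAPACITY_MBPS = 38.0      # assumed payload of a typical satellite transponder
AVC_HD_BITRATE_MBPS = 8.0      # assumed AVC bitrate for one 1080p channel
HEVC_EFFICIENCY_GAIN = 2.0     # the "nearly 2x" improvement cited above

hevc_hd_bitrate = AVC_HD_BITRATE_MBPS / HEVC_EFFICIENCY_GAIN

avc_channels = int(LINK_CAPACITY_MBPS // AVC_HD_BITRATE_MBPS)
hevc_channels = int(LINK_CAPACITY_MBPS // hevc_hd_bitrate)

print(f"AVC:  {avc_channels} HD channels in {LINK_CAPACITY_MBPS} Mbps")
print(f"HEVC: {hevc_channels} HD channels in {LINK_CAPACITY_MBPS} Mbps")
```

Under these assumptions, the same infrastructure carries roughly twice as many channels, which is exactly the pressure-relief the industry is looking for.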

This gap is also restraining sales of video encoding/transcoding equipment. At the same time, there is slow-but-steady forward movement, particularly in the areas of 4K, next-generation broadcast, and HDR. With the future roadmaps and product strategies of many classes of companies hinging heavily on video codec strategy, there is a need for reliable and objective strategic direction around this new codec.

The predominant technology used for video compression today is H.264, or AVC. AVC implementations are widely considered to be mature, with a wide range of options across hardware and software technologies leveraging custom silicon, commodity silicon, FPGAs, or general-purpose CPUs. A select set of companies continue to innovate in AVC compression, improving performance by leveraging state-of-the-art CPU advances and improved algorithm designs. Competing options for next-generation video compression technology are placed in context below, for both M&E (media and entertainment) and enterprise applications.

Compression Options

Modern video delivery systems rely on some form of compression, whether it is traditional broadcast TV, a cable service to a set-top box, or a streaming service that terminates on a gaming console, smartphone, or tablet. Though several compression types have preceded it, MPEG is one of the most widely used compression standards. This compression type has predominantly required hardware encoding, but high-quality software-based solutions have become more prevalent due to the steady increase in the computational capabilities of general-purpose compute platforms.

Another influential factor in determining when to utilize hardware-assisted or software-based compression is the recent shift in the way video content is distributed and consumed. Media companies are now confronted by the need to produce more content than ever before and to distribute it to an ever-growing diversity of receiving devices. These demands require media companies to support a growing assortment of protocols, formats, and resolutions, which can require frequent upgrades and code revisions.

Despite the flexibility and cost-efficiency benefits of moving compression operations to software, media companies must weigh an assortment of variables. Any shift from hardware-based to software-based encoding will depend on a number of factors: the cost of replacement equipment (one-time purchase plus licensing and support); the existing infrastructure (what can be reused, and how complex the conversion will be); distribution requirements (which distribution formats and resolutions are needed); and technical factors such as video quality and latency.
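One way to make those factors concrete is to capture them in a simple migration-assessment structure, as in the sketch below. The field names and example values are hypothetical placeholders introduced for illustration, not figures from the text.

```python
# Minimal sketch of a hardware-to-software migration assessment.
# Field names and example values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MigrationAssessment:
    replacement_cost: float                       # one-time purchase + licensing + support
    reusable_infrastructure: list = field(default_factory=list)
    conversion_complexity: str = "unknown"        # e.g., "low" / "medium" / "high"
    required_formats: list = field(default_factory=list)
    required_resolutions: list = field(default_factory=list)
    max_latency_ms: int = 0                       # latency budget for the service
    min_quality_target: str = ""                  # e.g., an objective or subjective target

assessment = MigrationAssessment(
    replacement_cost=150_000,
    reusable_infrastructure=["IP switching", "storage"],
    conversion_complexity="medium",
    required_formats=["HLS", "DASH", "MPEG-TS"],
    required_resolutions=["1080p", "720p"],
    max_latency_ms=5_000,
    min_quality_target="broadcast-grade HD",
)
print(assessment)
```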

Hardware Acceleration for Cloud-Based Streaming

Dedicated hardware encoding has, in general, been regarded as the better option for high video quality compression platforms because these encoders use processors that are specifically designed for the job at hand – they only process the algorithms required.

Hardware compression uses ASICs (application specific integrated circuits) for encoding processing and FPGAs (field programmable gate arrays) to support any additional transport stream (TS) processing or video analysis. But the higher processing performance and power efficiency come at a cost.

Dedicating CPUs to server-based encoding made sense back when most video was SD and represented a smaller percentage of the overall workload. In the near future, when video accounts for 80 percent of network traffic and codec complexity is 1,000× higher, a new class of specialized compute accelerator will be needed to handle the job of encoding and processing video prior to streaming. FPGAs are inherently well suited to video acceleration because of the flexibility they provide, which is a main reason why hardware acceleration companies were invited to join the Alliance for Open Media.

FPGAs are expected to accelerate AV1 encoding by at least a factor of 10 compared to software-based encoders that run on a CPU. Their programmable and reconfigurable capabilities allow for multiple optimizations across a wide range of encoding profiles in addition to enabling optimization for nonvideo workloads. As any codec evolves and improves over time, FPGAs ensure cloud data centers always have a state-of-the-art video acceleration solution.
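To put a 10× acceleration factor in perspective, the arithmetic below assumes a software AV1 encoder that runs at only a fraction of real time on a CPU server (an assumed baseline, not a benchmark) and shows what such a speedup would mean for live channel density.

```python
# What a 10x encode speedup means for channel density.
# The CPU baseline throughput is an assumed figure for illustration.

cpu_realtime_factor = 0.5    # assumed: software AV1 encodes at 0.5x real time per server
fpga_speedup = 10            # acceleration factor cited in the text

fpga_realtime_factor = cpu_realtime_factor * fpga_speedup
print(f"CPU-only:      {cpu_realtime_factor:.1f}x real time -> cannot sustain one live channel")
print(f"FPGA-assisted: {fpga_realtime_factor:.1f}x real time -> roughly "
      f"{int(fpga_realtime_factor)} live channels per server")
```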

Additional benefits of hardware encoding include lower latency. In hardware compression, latency depends on the encoding type and profile. Distribution encoding latencies are typically higher, but deliver better bitrate efficiency. Hardware encoders are often used in more static, point-to-point systems that are reconfigured rarely or not at all during their operation.
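A simple way to reason about this tradeoff is a latency budget. The sketch below sums hypothetical per-stage contributions for a low-latency contribution-style encode versus a distribution encode with deeper lookahead and segmented delivery; every number is an assumption for illustration only.

```python
# Hypothetical end-to-end latency budgets in milliseconds; all values are assumed.

contribution = {                       # low-latency, point-to-point style encoding
    "capture/ingest": 20,
    "encode (low-latency profile)": 60,
    "transport": 30,
    "decode": 40,
}

distribution = {                       # higher latency, better bitrate efficiency
    "capture/ingest": 20,
    "encode (distribution profile, B-frames + lookahead)": 1500,
    "packaging/segmenting": 2000,
    "transport/CDN": 500,
    "decode + player buffer": 4000,
}

for name, stages in (("Contribution", contribution), ("Distribution", distribution)):
    print(f"{name}: {sum(stages.values())} ms total")
```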

Advances in compression technologies, such as H.264 and HEVC, have enabled the delivery of more and higher-quality services over existing infrastructures like terrestrial, satellite, and cable.

While dedicated ASICs tend to be low cost for the performance they provide and require less power – two key considerations in the selection process for any encoding solution – a significant disadvantage is their lack of flexibility and the fact that many key functions cannot be reprogrammed. Every time a new codec is introduced, new hardware is needed. An alternative for hardware encoding is an FPGA-based encoder, which provides the flexibility of being reprogrammable and has a faster time to market. One key drawback, however, is lower density and a greater cost tradeoff compared to an ASIC-only implementation – not just in expense but in power consumption.

Hardware-based encoding often has higher upfront costs but lower running costs, such as maintenance and power consumption, than a software-based approach. The costs of a software-based approach, however, depend on whether the encoder is local or virtualized in the cloud. Outside of a cloud model, the upfront costs of purchasing the necessary servers can be comparable to hardware, with ongoing operational costs on top. A cloud model offers low initial costs but higher operating costs.
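One hedged way to frame that tradeoff is a break-even calculation. The sketch below compares cumulative costs of an on-premises deployment (high upfront, lower monthly) against a cloud deployment (low upfront, higher monthly); all dollar figures are placeholders chosen only to illustrate the method.

```python
# Sketch: break-even point between on-premises encoding servers and cloud encoding.
# All dollar figures are hypothetical placeholders.

ONPREM_UPFRONT = 120_000   # servers + setup (assumed)
ONPREM_MONTHLY = 4_000     # power, space, maintenance (assumed)
CLOUD_UPFRONT = 5_000      # onboarding/integration (assumed)
CLOUD_MONTHLY = 9_000      # instances + egress fees (assumed)

for month in range(1, 61):                     # five-year window
    onprem = ONPREM_UPFRONT + month * ONPREM_MONTHLY
    cloud = CLOUD_UPFRONT + month * CLOUD_MONTHLY
    if onprem <= cloud:
        print(f"On-premises breaks even with cloud around month {month}")
        break
else:
    print("Cloud remains cheaper over the five-year window")
```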

Video compression, with newer codecs such as HEVC and higher resolutions such as 4K (3840 × 2160 pixels), is very compute-intensive, particularly for live content.
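A quick pixel-rate calculation shows why. Assuming 4K at 60 frames per second with 10-bit 4:2:0 sampling (assumptions for illustration), the raw data an encoder must process looks like this:

```python
# Why live 4K HEVC is compute-intensive: the raw pixel throughput the encoder must process.
# The frame rate and bit depth are assumed for illustration.

width, height = 3840, 2160
fps = 60
bits_per_pixel = 10 * 1.5        # 10-bit 4:2:0 sampling: 1.5 samples per pixel

pixels_per_second = width * height * fps
raw_gbps = pixels_per_second * bits_per_pixel / 1e9

print(f"Pixels per second: {pixels_per_second:,}")        # ~498 million
print(f"Uncompressed input: ~{raw_gbps:.1f} Gbit/s")       # ~7.5 Gbit/s
```

Every one of those pixels must be analyzed, predicted, transformed, and entropy-coded in real time, which is why acceleration is so attractive for live 4K workloads.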

To achieve reasonable power, space, cabling, and reliability requirements, hardware acceleration is often used. This can be in the form of dedicated ASICs, FPGAs, GPUs, or embedded encoding hardware inside Intel CPUs or integrated smartphone chipsets. Therefore, video compression systems often utilize one or several of these technologies to achieve the necessary performance.

Software Compression

The main difference between hardware and software encoding is that software encoding uses standard commercial-off-the-shelf (COTS) server platforms that are widely available through major IT providers – such as Hewlett Packard Enterprise and Microsoft – and can simultaneously process a multitude of computations.

Software-based encoders have historically been regarded as unable to deliver the same quality as hardware encoders in real time, but those deficiencies are disappearing. The video quality shortfalls of the past were not due to the software itself but to the compute and processing power of the servers, which did not meet the needs of real-time performance. The processing power of IT servers is rising dramatically, meaning software can now do in real time more and more of what it could not do before. We are reaching the point where software is cost-effective and meets the channel counts, quality levels, and performance targets that service providers need.
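As a concrete, hedged example of software encoding on a COTS server, the snippet below shells out to the open-source FFmpeg tool with its libx265 HEVC encoder. FFmpeg and libx265 are used here only as a stand-in for a production software encoder, and the input path, preset, and CRF value are illustrative choices.

```python
# Minimal sketch: a software HEVC encode on a general-purpose server using FFmpeg's
# libx265 encoder. Paths, preset, and CRF are illustrative assumptions.

import subprocess

cmd = [
    "ffmpeg",
    "-i", "input_1080p.mp4",   # hypothetical source file
    "-c:v", "libx265",         # software HEVC encoder running entirely on the CPU
    "-preset", "medium",       # speed/quality tradeoff knob
    "-crf", "26",              # constant-quality target
    "-c:a", "copy",            # pass audio through untouched
    "output_hevc.mp4",
]
subprocess.run(cmd, check=True)
```

The same job can be scaled out across commodity servers or cloud instances simply by running more processes, which is the flexibility argument in a nutshell.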

Running compression operations on generic server platforms offers significant flexibility and versatility. Software-based encoding is also well suited to cloud-based and OTT applications, which require the transcoding of multiple ABR (adaptive bitrate) streams of the same content to accommodate changes in transmission bandwidth and/or differing receiving devices. For applications such as file-based compression, or for live content with lower-resolution video or simpler codecs, pure software-based implementations can offer acceptable performance in a virtual environment.
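To illustrate the ABR workload, the sketch below defines a hypothetical rendition ladder. The resolutions and bitrates are common ballpark values used only for illustration, not a recommendation.

```python
# Hypothetical ABR ladder: each source is transcoded into several renditions so
# players can switch with bandwidth and device capability. Values are illustrative.

ladder = [
    {"name": "1080p", "resolution": (1920, 1080), "bitrate_kbps": 5000},
    {"name": "720p",  "resolution": (1280, 720),  "bitrate_kbps": 3000},
    {"name": "480p",  "resolution": (854, 480),   "bitrate_kbps": 1500},
    {"name": "360p",  "resolution": (640, 360),   "bitrate_kbps": 800},
]

total_kbps = sum(r["bitrate_kbps"] for r in ladder)
print(f"One source becomes {len(ladder)} renditions, "
      f"~{total_kbps / 1000:.1f} Mbps of combined output per channel")
```

Multiply that by every channel and every codec a service supports and the appeal of elastic, software-based transcoding capacity becomes clear.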

Asking whether software or hardware is better for video compression is the wrong question. Rather, one should ask: what is the best technology to solve my video compression problem? Depending on the use case, the choice of technology can differ.

Hybrid Software and Hardware Compression

Combining software and hardware compression solutions in a hybrid approach often gives media professionals the ability to customize a compression strategy that most efficiently balances video quality, flexibility, and costs. A hybrid approach, for example, may rely on ASIC-based hardware to manage the complexities of encoding and on software to manage other requirements, including TS processing or video analysis, removing the need for FPGAs.

A hybrid solution often gives media companies the ability to deliver denser solutions while at the same time simplifying the process of adding new features – all while retaining video quality thanks to the advanced computational power of the hardware used. The hybrid approach allows the right tools to be applied quickly to a given requirement at any time. While moving to software-based systems on standardized IT platforms allows for a more flexible and cost-effective approach to the distribution of content, the cost and operational disruption of a wholesale move to an all-IP environment make it infeasible for most media companies. The move to an IP-based infrastructure, with rare exceptions, will be gradual.
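A hedged sketch of how such a hybrid split might be expressed in an orchestration layer is shown below: live, high-density channels are routed to hardware encoders while file-based and experimental jobs go to software pools. The job fields, thresholds, and backend names are hypothetical, not taken from a real system.

```python
# Hypothetical routing logic for a hybrid encoding farm.
# Job attributes and thresholds are illustrative assumptions.

def choose_backend(job):
    """Pick an encoding backend for a job in a hybrid hardware/software farm."""
    if job["live"] and job["resolution"] >= 2160:
        return "asic"        # dense, power-efficient, fixed-function encoding
    if job["live"] and job["codec"] in ("av1", "experimental"):
        return "fpga"        # reprogrammable acceleration for newer codecs
    return "software"        # file-based / lower-resolution work on COTS servers

jobs = [
    {"live": True,  "resolution": 2160, "codec": "hevc"},
    {"live": True,  "resolution": 1080, "codec": "av1"},
    {"live": False, "resolution": 1080, "codec": "h264"},
]
for job in jobs:
    print(job, "->", choose_backend(job))
```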

Whether the goal is to provide high-quality linear services over cable and satellite or to deliver hundreds of thousands of services to multiple devices, a comprehensive and optimized compression strategy has never been more important.

Way Forward

Achieving a compression strategy that strikes the perfect balance between hardware and software depends on a number of variables. Broadcasters, content distributors, and others will need to prioritize the importance of flexibility, density, video quality, and price-performance on a service-by-service basis.
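One simple way to operationalize that service-by-service prioritization is a weighted scoring matrix, sketched below. The weights and 1-to-5 scores are placeholders meant only to illustrate the method; each organization would substitute its own.

```python
# Hypothetical weighted scoring of encoding approaches for a given service.
# Weights and 1-5 scores are placeholders to illustrate the method.

weights = {"flexibility": 0.2, "density": 0.3, "video_quality": 0.3, "price_performance": 0.2}

options = {
    "ASIC":     {"flexibility": 2, "density": 5, "video_quality": 4, "price_performance": 4},
    "FPGA":     {"flexibility": 4, "density": 4, "video_quality": 4, "price_performance": 3},
    "Software": {"flexibility": 5, "density": 2, "video_quality": 4, "price_performance": 3},
}

for name, scores in options.items():
    total = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: weighted score {total:.2f}")
```

Re-running the exercise with different weights per service (live sports versus VOD catalog, for example) is what turns the generic hardware-versus-software debate into a concrete deployment plan.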

Next-generation video compression formats are likely to start hitting the mainstream this year. These technologies will significantly improve the performance of online video by using complex mathematics to throw away superfluous data. Until now, the barriers to adoption for these technologies have been significant; a large installed base of devices that do not support these modern compression technologies is one of the main challenges. However, with these new next-generation formats arriving, things might change this year.

Today the most common video compression standard remains H.264, which is used for almost all online video as well as on other platforms such as Blu-ray. Media companies will need to closely monitor additional factors and market drivers, including codec maturity, advances in the computational power of generic platforms, and shifts in video consumption practices.

In today's dynamic market, a well-tuned compression strategy is a competitive imperative. The ability to flexibly and cost-efficiently repurpose existing content to OTT and ABR (adaptive bitrate streaming) platforms can increase revenue and market share for both content owners and distributors.