T-Mobile recently announced "Binge On", a feature that allows users of mobile data plans to watch as much video as they like from popular providers such as Netflix, without using up their data plan budget. The catch? Video is delivered to your device at "DVD quality", i.e. around 480p standard definition (SD). If you want to watch higher resolution video (HD or above), you have to pay for it.
This is an interesting step that seems to go against the recent push towards higher resolution video. But with some commentators questioning the merits of 4K, the Binge On service could be seen as a large-scale experiment to try to answer a question that has been debated for over 20 years.
Would you pay for higher resolution video?
Thursday, November 26, 2015
Friday, September 25, 2015
Are we compressed yet? How to cut through codec hype.
Are you confused by contradictory claims about video codec technologies? Finding out which codec is "best" can be surprisingly difficult.
Luckily, the team developing the open-source Daala video codec have a solution: AreWeCompressedYet.com, an online resource that lets you compare the performance of popular codecs.
How it works:
Codec developers submit "jobs" to the site. Each job involves coding and decoding a set of common test videos at a range of bitrates. This can be a very time-consuming process, but the site collects and stores the results from many previous test runs. The results of each job can be viewed as a rate-distortion curve, with bitrate on the x-axis and "quality" on the y-axis:
This example compares the performance of five video codecs, under the following conditions:
- Test video set: "ntt-short-1", a set of 1-second video clips with varying characteristics.
- Video resolution: 720p, 24 frames per second
- Picture quality metric: Peak SNR (PSNR)
In this example, HEVC and VP9 have the "best" performance (highest curves), followed by Thor, H.264 and Daala. There are a few things you should be aware of:
- Measuring picture quality is a notoriously inexact science. Changing the metric from PSNR to a different metric such as SSIM may, and often will, change the ranking of the codecs.
- Changing the video set or resolution may affect the results, since each codec's performance depends on the actual video content.
- The online tool is comparing specific implementations of each codec. For example, different software implementations of HEVC will probably perform differently.
- There is probably significant scope for further improvements to the HEVC, VP9, Thor and Daala codecs, since these are all relatively new (or still-developing) formats.
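To make the metric question concrete, PSNR itself is simple to compute. Here is a minimal sketch in Python, assuming 8-bit samples; this is just an illustration, not AWCY's actual measurement code:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    ref = reference.astype(np.float64)
    dist = distorted.astype(np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)

# A uniform error of 1 per sample gives MSE = 1,
# so PSNR = 20 * log10(255), about 48.13 dB
frame = np.zeros((720, 1280), dtype=np.uint8)
noisy = frame + 1
print(round(psnr(frame, noisy), 2))
```

Note that PSNR is a pure signal-error measure; it says nothing directly about how the errors look to a human viewer, which is exactly why metrics like SSIM can rank codecs differently.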
With all this in mind, arewecompressedyet.com provides a quick and easy way to compare codec technologies under identical experimental conditions. Will it resolve the endless arguments about which codec is best? Maybe!
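For a rough sense of how two RD curves can be compared numerically, here is a sketch with made-up, purely illustrative rate-quality points (not real codec measurements), loosely in the spirit of the Bjøntegaard-delta metric:

```python
import numpy as np

# Illustrative (made-up) rate-distortion points for two hypothetical codecs:
# (bitrate in kbps, PSNR in dB), i.e. the points behind each RD curve.
codec_a = [(200, 32.0), (400, 35.0), (800, 38.0), (1600, 41.0)]
codec_b = [(200, 33.0), (400, 36.2), (800, 39.1), (1600, 41.8)]

def mean_gain_db(curve_a, curve_b, n=100):
    """Average quality difference (B minus A) over the shared bitrate range,
    a much-simplified cousin of the Bjontegaard-delta metric."""
    ra, qa = zip(*sorted(curve_a))
    rb, qb = zip(*sorted(curve_b))
    lo, hi = max(ra[0], rb[0]), min(ra[-1], rb[-1])
    grid = np.logspace(np.log10(lo), np.log10(hi), n)  # log-spaced bitrates
    return float(np.mean(np.interp(grid, rb, qb) - np.interp(grid, ra, qa)))

gain = mean_gain_db(codec_a, codec_b)
print(round(gain, 2))  # positive means codec B's curve sits above codec A's
```

A single averaged number like this hides a lot, of course: two codecs can trade places at different bitrates, which is why looking at the full curves is always more informative.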
Thursday, June 18, 2015
USPTO invited talk, "structures in video coding"
This week I had the honour of presenting an invited talk to examiners at the US Patent and Trademark Office in Washington DC:
The topic of my talk was "structures in video coding". I explained how video codec structures have evolved from simple, repetitive 16x16 macroblocks in early standards such as MPEG-1...
...to complex hierarchies of blocks in recent standards such as HEVC.
I examined the effects of increasingly complex block structures. These changes have led to dramatic improvements in compression performance but also increasing computational demands. I left the audience with a question: after 25+ years of intensive research and development, why do mainstream video codecs still rely on rectangular block structures?
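The flavour of these hierarchical structures can be captured in a toy quadtree sketch: keep quartering a block while its content is "busy". This is a deliberate simplification for illustration, using sample variance as the split criterion, and is not HEVC's actual mode-decision process:

```python
import numpy as np

def split_block(frame, x, y, size, min_size, threshold, leaves):
    """Recursively quarter a block while its sample variance exceeds a
    threshold: a toy stand-in for a quadtree coding-tree decision."""
    region = frame[y:y + size, x:x + size]
    if size > min_size and region.var() > threshold:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                split_block(frame, x + dx, y + dy, half, min_size, threshold, leaves)
    else:
        leaves.append((x, y, size))
    return leaves

# A 64x64 block that is flat except for one busy 32x32 quadrant:
frame = np.zeros((64, 64))
frame[:32, :32] = np.random.default_rng(0).integers(0, 255, (32, 32))
leaves = split_block(frame, 0, 0, 64, min_size=8, threshold=10.0, leaves=[])
print(len(leaves))  # flat quadrants stay large; the busy one splits down to 8x8
```

Even this toy version shows the trade-off: adaptive splitting spends bits and computation only where the content demands it, but the encoder must now search a much larger space of possible structures.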
- Iain Richardson
Tuesday, June 2, 2015
7 important video compression concepts that are more than 20 years old
The latest MPEG / ITU video compression standard, H.265 or HEVC, was published in 2013. HEVC is a significant technical achievement, but it's partly based on fundamental work carried out many decades ago.
An HEVC video codec includes the basic building blocks of:
- Prediction : create an estimate or prediction of a current block of video data
- Transform : convert a block of samples into a spatial frequency representation
- Entropy coding : encode video information into a compressed bitstream
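The transform step above is usually a variant of the Discrete Cosine Transform. As a sketch of the underlying idea (an orthonormal 2-D DCT-II built from the DCT matrix, written for clarity rather than speed):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal NxN DCT-II matrix: D[u, x] = s(u) * cos(pi * (2x + 1) * u / 2N)."""
    u = np.arange(n)[:, None]   # frequency index (rows)
    x = np.arange(n)[None, :]   # sample index (columns)
    d = np.cos(np.pi * (2 * x + 1) * u / (2 * n)) * np.sqrt(2.0 / n)
    d[0, :] = np.sqrt(1.0 / n)  # DC row gets the smaller scale factor
    return d

def dct2(block):
    """2-D DCT of a square block, computed as D @ B @ D^T."""
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

block = np.ones((8, 8))   # a perfectly flat 8x8 block
coeffs = dct2(block)
# All the energy collects in the DC coefficient coeffs[0, 0]
print(round(coeffs[0, 0], 6))
```

The compression value of the transform is exactly this energy compaction: flat or slowly varying blocks end up with a handful of significant coefficients, which prediction and entropy coding can then exploit.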
Here are seven important research papers and patents, dating back to the 1950s, that helped to shape present-day video coding technology.
1. “A Method for the Construction of Minimum Redundancy Codes”, D A Huffman, Proceedings of the I.R.E., September 1952
- Variable-length binary codes for data compression.
2. “Transform coding of image difference signals”, M R Schroeder, US Patent 3679821, 1972
- Coding moving images using frame differencing, i.e. simple inter-frame prediction.
3. “Discrete Cosine Transform”, Ahmed, Natarajan and Rao, IEEE Transactions on Computers, January 1974
- The classic paper on the DCT, widely used in image and video compression.
4. “Generalized Kraft inequality and arithmetic coding”, J J Rissanen, IBM J. Res. Dev. 20, May 1976
- Arithmetic coding, a forerunner of H.264 and HEVC’s CABAC.
5. “Displacement measurement and its application in interframe image coding”, J R Jain and A K Jain, IEEE Trans. Communications, December 1981
- An early description of motion-compensated prediction for video coding.
6. “Variable size block matching motion compensation with applications to video coding”, M H Chan, Y B Yu and A G Constantinides, IEE Proceedings Vol 137, August 1990
- Motion-compensated prediction with variable-size blocks.
7. “MPEG: A video compression standard for multimedia applications”, D Le Gall, Communications of the ACM, Vol 34 No 4, April 1991
- Bidirectional prediction as used in the MPEG-1 video compression standard.
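The oldest idea on the list, Huffman's 1952 code construction, is still easy to demonstrate. A minimal Python sketch, for illustration only (real codecs use far more elaborate entropy coders):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code: a prefix-free variable-length binary code
    in which more frequent symbols receive shorter codewords."""
    freq = Counter(text)
    # Heap entries: (weight, tie-break id, {symbol: codeword-so-far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, lo = heapq.heappop(heap)  # merge the two least-frequent subtrees
        w2, _, hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo.items()}
        merged.update({s: "1" + c for s, c in hi.items()})
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

codes = huffman_code("aaaabbc")
print(codes)  # 'a' (most frequent) gets a 1-bit codeword; 'b' and 'c' get 2 bits
```

Because no codeword is a prefix of another, the bitstream can be decoded unambiguously without separators, which is the property every entropy coder in this lineage relies on.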
Thursday, May 21, 2015
Talk: The Ultra HD Codecs - HEVC and VP9
I'm looking forward to giving this talk to broadcast professionals in Stornoway, Isle of Lewis, one of Scotland's beautiful Western Isles.
-----
The Ultra HD Codecs: HEVC and VP9
When: Friday 22nd May 2015, 11am
Where: MGAlba Studios, Seaforth Road, Stornoway
Like it or hate it, Ultra HD or 4K is making a big impact on the broadcast industry. 4K content has four times the number of pixels of full HD, making storage, transfer and streaming very demanding. This has significant implications for workflows.
The new HEVC/H.265 and VP9 video compression codecs are designed to help handle the challenges of UHD / 4K video. This talk will introduce you to these new codecs. You will learn:
- how the codecs compress and deliver 4K video
- what’s changed from older codecs such as H.264
- how the new codecs perform
- what software and hardware support is available.
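The raw numbers behind "4K is demanding" are easy to check. A back-of-the-envelope sketch, assuming 8-bit 4:2:0 sampling and an assumed 50 frames per second:

```python
# Rough uncompressed data rates. 4:2:0 sampling stores one pair of chroma
# samples per four luma samples, so 8-bit 4:2:0 needs about 12 bits per pixel.
def raw_mbps(width, height, fps, bits_per_pixel=12):
    return width * height * fps * bits_per_pixel / 1e6

FPS = 50  # an assumed broadcast frame rate
hd = raw_mbps(1920, 1080, FPS)    # full HD
uhd = raw_mbps(3840, 2160, FPS)   # 4K / UHD
print(round(hd), round(uhd))      # UHD needs 4x the raw bit rate of full HD
```

Raw UHD runs to several gigabits per second, which is exactly why codecs like HEVC and VP9, with compression ratios in the hundreds, are essential to making 4K delivery practical.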
-----
If you are interested in arranging a specialist lecture on video coding or streaming technology, please get in touch.
Monday, April 27, 2015
The challenges facing a new codec
I was asked recently to comment on how easy or difficult it might be to introduce a new video compression codec. Here's a summary of my opinion - you can read the full article by John Moulding here.
At present, the market is dominated by standards-based and open source solutions, including H.264 / AVC, VP8, HEVC and VP9. What challenges might be faced by a new entrant to the market?
1. Interoperability: One of the key motivations for the development of standards has been interoperability. Streaming provider X needs to be compatible with playback client Y. Building a critical mass of support for a new codec requires widespread adoption of encoders and decoders that interoperate with each other.
2. Performance: The development of standards such as H.264 or HEVC involved rigorous and thorough testing using agreed protocols for measurement of quality, bitrate, computational requirements, etc. Performance testing needs to be repeatable, such that multiple organisations can check performance and reach the same conclusions.
3. Intellectual property: An open source or standardised solution provides a degree of transparency about who might own intellectual property that is used in a compression solution. For example, implementors of the H.264/AVC standard can take a license to several hundred patents that may be essential to the standard, via the MPEG-LA patent pool.
4. Hardware support: When you play back or stream a video on a consumer device such as a cellphone or tablet, the computationally intensive process of decoding video is assisted by dedicated hardware, enabling smoother playback and better battery life. Support for existing formats such as H.264 and VP8 is built in to chipsets, operating systems and protocols.
A new codec technology has to overcome many hurdles if it is to be widely adopted. However, I am always interested in genuinely new and disruptive approaches to video compression. Is there a challenger out there to the 25-year-old codec model that has been the basis of video compression standards from MPEG-1 to HEVC?