Thursday, September 4, 2014

How to stream better quality video: Part 1 - Basics and good practices

by Iain Richardson and Abharana Bhat

You're trying to stream a video to a work colleague who is complaining that it's very, very slow to load. Or perhaps you are trying to embed a video clip in your web page and are not satisfied with the quality of the result. Is there anything that can be done to improve the situation?

Video streaming sends your video (source) to play back on a computer or mobile device (client), using an internet connection to transport the video. Streaming video is a complex process involving many different components. In this series of articles, we help you understand the basics of streaming video and the various factors that make a big difference to the quality of your video clip.

In this first article, we look at the basics of streaming video and discuss good and bad practice.


1. Streaming video: the basics
These are the basic components involved in streaming video from source to destination.

Source - A video clip you've recorded or created.

Editing and compression - Once you have edited your media file, it is converted into a form suitable for sending over a network. This involves compressing or encoding the video and audio information.

Server - Stores the media file and sends it to the client on demand.

Network - Transports (streams) the video from the server to the client.

Client - Receives the media file, extracts (decodes) video and audio information and plays back the video clip.  

"Streaming" means that the client can start playing the video clip before it's fully downloaded, as soon as there is enough data to start decoding and playback. In this screenshot, the client has received and stored enough data to decode and play the clip, even though the whole clip hasn't been downloaded yet. The video clip plays smoothly, while the client continues to download and store ("buffer") the remaining data.

Figure 2: Streaming playback
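
To make this idea concrete, here is a minimal Python sketch (not a real player) that simulates buffered playback: the client starts playing once a few seconds of video have been buffered, and downloading continues in the background while the clip plays. The rates and durations below are made-up illustrative numbers.

```python
# A toy simulation of streaming playback: playback begins as soon as a
# small buffer is filled, and continues while downloading carries on.
# All numbers here are illustrative assumptions, not real measurements.

DOWNLOAD_KBPS = 800       # assumed network connection speed
VIDEO_KBPS = 500          # assumed bitrate of the compressed clip
CLIP_SECONDS = 60         # clip duration
START_BUFFER_SECONDS = 3  # start playing once 3 s of video are buffered

downloaded_seconds = 0.0  # how many seconds of video have arrived
play_position = 0.0       # how many seconds have been played
playing = False

for t in range(CLIP_SECONDS * 2):  # simulate second by second
    # Each second of wall-clock time downloads DOWNLOAD/VIDEO seconds of video.
    if downloaded_seconds < CLIP_SECONDS:
        downloaded_seconds += min(DOWNLOAD_KBPS / VIDEO_KBPS,
                                  CLIP_SECONDS - downloaded_seconds)
    # Start playback once enough data is buffered.
    if not playing and downloaded_seconds >= START_BUFFER_SECONDS:
        playing = True
        print(f"t={t}s: playback starts ({downloaded_seconds:.1f}s buffered)")
    # Advance playback, but never past what has actually been downloaded.
    if playing:
        play_position = min(play_position + 1, downloaded_seconds)
    if play_position >= CLIP_SECONDS:
        print(f"t={t}s: playback finished")
        break
```

If you lower DOWNLOAD_KBPS below VIDEO_KBPS in this sketch, playback catches up with the download and stalls, which is exactly the "buffering" pause viewers complain about.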


2. Compression and codecs

2.1 The basics of compression

A codec is an encoder and a decoder. An encoder compresses audio or video so it takes up less disk space. A decoder extracts audio or video information from the compressed file. Video and audio compression is a complex technical process, but the basic aim of a codec is quite straightforward:
(a) Reduce the size of the compressed media file as much as possible, but...
(b) Keep the quality of the decoded audio and video as good as possible.
Most codecs work by removing information that is not noticeable to the viewer / listener and by exploiting similarities in the audio / video data.
Example 1: Small, subtle variations in texture will tend to be "masked" by strong shadows and edges. A video codec can exploit this by keeping stronger image features and throwing away some of the fine detail.
Example 2: A video clip contains 25 or more frames per second. Instead of sending the entire frame every time, a video codec can save a lot of data by only sending the differences between frames. 
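
To make Example 2 concrete, here is a toy Python sketch of difference coding, using tiny one-dimensional "frames" of brightness values. A real codec works on full two-dimensional frames and compresses the differences much further, but the principle is the same.

```python
# Toy illustration of Example 2: store the first frame in full, then
# store only the per-pixel differences for the frames that follow.

frames = [
    [10, 10, 10, 200, 200, 10],   # frame 1
    [10, 10, 10, 201, 200, 10],   # frame 2: one pixel changed slightly
    [10, 10, 10, 201, 201, 10],   # frame 3: another small change
]

# Encode: first frame as-is, then only the frame-to-frame differences.
encoded = [frames[0]]
for prev, cur in zip(frames, frames[1:]):
    encoded.append([c - p for p, c in zip(prev, cur)])

# Most difference values are zero, so they compress very well.
print(encoded[1])  # [0, 0, 0, 1, 0, 0]
print(encoded[2])  # [0, 0, 0, 0, 1, 0]

# Decode: rebuild each frame by adding the differences back.
decoded = [encoded[0]]
for diff in encoded[1:]:
    decoded.append([p + d for p, d in zip(decoded[-1], diff)])
assert decoded == frames
```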
Any device that records or plays back video contains a codec, either as part of a chip (hardware) or in software. When you record video on your mobile handset, watch anything on TV, or play video on the web, you're using a codec.

2.2 What's in a compressed media file?

Video or audio files on your computer or mobile device are usually stored in a compressed form. Common media file formats have extensions such as .mp4, .mov, .avi, etc.

There are three main components of a compressed media file: compressed video, compressed audio and the structure of the file itself.

The compressed video will typically use one of a number of standard video codec formats, such as MPEG-2, MPEG-4, H.264 or VP8. The compressed audio will use one of a number of audio codec formats, such as MP3 or AAC. The compressed video and audio are typically packaged into a container format, i.e. a special type of file that is designed to carry video and audio along with other information known as metadata. Examples of container formats include MP4, MOV, MKV and AVI.

Many combinations of video codec, audio codec and container format are possible. One challenge you may face is that not all combinations are supported by every computer or mobile client.
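
If you want to check which codecs and container a particular file uses, one option is the free ffprobe tool (part of the FFmpeg suite). Here is a short Python sketch that calls ffprobe and prints the result; it assumes ffprobe is installed and on your PATH, and the file name is hypothetical.

```python
# Inspect the container format and the video/audio codecs of a media
# file using ffprobe (part of the free FFmpeg suite).
import json
import subprocess

def describe_media(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams",
         "-of", "json", path],
        capture_output=True, text=True, check=True)
    info = json.loads(result.stdout)
    print("Container:", info["format"]["format_name"])
    for stream in info["streams"]:
        print(f"{stream['codec_type']} codec: {stream['codec_name']}")

describe_media("myclip.mp4")  # hypothetical file name
```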

3. Important concepts

Bitrate and bandwidth - The information-carrying capacity of a network connection is commonly known as bitrate or bandwidth and is measured in giga/mega/kilobits per second. A 3G mobile connection might be capable of carrying a few tens of kilobits per second; a home broadband connection may support hundreds of kilobits up to a few megabits per second; a fibre or leased line may be capable of carrying tens of megabits or more per second.

To stream video smoothly, it is important that the bitrate or bandwidth of the network connection is greater than the bitrate of the streaming media file. Even after compression, video and audio can require a significant bitrate capacity.
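
Here is a quick back-of-the-envelope check as a Python sketch, with illustrative numbers: estimate the average bitrate of a media file from its size and duration, then compare it with the connection speed.

```python
# Will this clip stream smoothly over this connection?
# The numbers below are illustrative assumptions.

file_size_megabytes = 15      # size of the compressed media file
clip_duration_seconds = 120   # length of the clip
connection_kbps = 2000        # connection speed (2 Mbit/s)

# Average bitrate of the file in kilobits per second:
# megabytes -> megabits (x8) -> kilobits (x1000), divided by duration.
file_kbps = file_size_megabytes * 8 * 1000 / clip_duration_seconds
print(f"File bitrate: {file_kbps:.0f} kbit/s")  # 1000 kbit/s

if connection_kbps > file_kbps:
    print("Connection is faster than the file bitrate: should stream smoothly.")
else:
    print("Connection is too slow: expect buffering pauses.")
```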

Video resolution - Most cameras, even on mobile devices, are capable of recording video in High Definition (HD) resolution. However, significantly more bandwidth or bitrate is required to handle HD video compared with lower resolution video.

A single frame of "full HD" or "1080p" video (1920x1080 pixels) has more than twice the pixels of a single frame of "720p" video (1280x720 pixels), which in turn is more than twice the size of a single frame of "Standard Definition" or "SD" video (720x576 pixels or smaller).
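
The arithmetic behind that comparison, as a short Python sketch:

```python
# Pixels per frame at each resolution mentioned above.
resolutions = {
    "1080p (full HD)": (1920, 1080),
    "720p":            (1280, 720),
    "SD (576 lines)":  (720, 576),
}
for name, (width, height) in resolutions.items():
    print(f"{name}: {width * height:,} pixels per frame")

# 1080p: 2,073,600 pixels, 2.25x the 921,600 pixels of 720p,
# which is in turn about 2.2x the 414,720 pixels of SD.
```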

Transcoding - If your video/audio material is not suitable for streaming, you may need to transcode it (see the example after this list). This may involve:
Changing resolution: For example, resample from 1080p HD down to 720p or SD.
Changing codec or container: For example, change the video codec from MPEG-2 to H.264, or change the file format (container) from AVI to MP4.
Improving streaming performance: For example, reorganising the audio and video samples in the container file, or converting to an adaptive streaming format.
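
As an illustration of all three kinds of transcoding in a single step, here is a Python sketch that calls the free ffmpeg tool (assuming it is installed and on your PATH; the file names are hypothetical): it converts an AVI into an MP4 container, changes the video codec to H.264, resamples the picture to 720p, and moves the container's index to the front ("faststart") so playback can begin before the download completes.

```python
# Transcode an AVI (e.g. MPEG-2 video) into a streaming-friendly MP4
# with H.264 video and AAC audio, downscaled to 720p.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "source.avi",         # hypothetical input file
    "-c:v", "libx264",          # video codec: H.264
    "-vf", "scale=-2:720",      # resample to 720 lines (width kept in proportion)
    "-c:a", "aac",              # audio codec: AAC
    "-movflags", "+faststart",  # move the MP4 index to the front for streaming
    "output.mp4",
], check=True)
```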

4. Tips for better streaming performance

  • Keep the camera as steady as possible. More movement means more data after compression, which means that you will need more network bandwidth to achieve good quality video playback.

  • Use good lighting where possible. Poor lighting can produce increased camera “noise”, which leads to more data after compression. For example, compare a well-lit scene with a dark scene that is "grainy" in appearance: the poorly-lit scene will require more network bandwidth to transmit.

  • Choose your video codec carefully. Older codec formats such as MPEG-2 are less efficient, i.e. the coded video file will be considerably larger than with a newer codec such as H.264. The latest codecs, such as HEVC and VP9, may give the best compression, but there is limited software and playback support for these codecs at present (see Part 3).

  • Use a lower resolution (e.g. SD rather than HD) or consider adaptive streaming (see Part 2).

  • Be aware that video content has a significant effect on the bandwidth required for streaming and/or on the video playback quality. For example, if two SD video clips are both compressed at the same bitrate (400 kilobits per second), the clip with more movement will look worse, even though the video resolution and the streaming bitrate are the same.

  • Does your editing software have options such as “slow” or "multi-pass" encoding? If so, consider selecting these options (see the sketch below). Processing the file will take longer because the software will spend more time choosing the best compression options for each video frame. However, the end result may be better video quality and/or a smaller compressed file.
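
As an illustration, here is a Python sketch of two-pass encoding using the free ffmpeg tool (assuming it is installed; the file names and the 400 kbit/s bitrate are illustrative). The first pass analyses the whole clip; the second pass uses that analysis to spend the bit budget where it matters most.

```python
# Two-pass H.264 encoding with ffmpeg: pass 1 gathers statistics,
# pass 2 encodes using them. On Windows, replace /dev/null with NUL.
import subprocess

common = ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libx264", "-b:v", "400k"]

# Pass 1: analyse only; no audio, video output discarded.
subprocess.run(common + ["-pass", "1", "-an", "-f", "null", "/dev/null"],
               check=True)

# Pass 2: encode using the statistics gathered in pass 1.
subprocess.run(common + ["-pass", "2", "-c:a", "aac", "output.mp4"],
               check=True)
```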

5. Going further

Part 2 of this series of articles will cover more advanced topics such as Adaptive Streaming. Part 3 will compare the latest video codecs in depth.

To find out more about video compression and streaming, visit Iain Richardson's website: vcodex.com


Copyright (c) Vcodex Limited / Onecodec Limited, 2014
