Design of Broadcast Video Basic System Using FPGA


The proliferation of HDTV content creation, and the delivery of that content over bandwidth-constrained broadcast channels, continues to spawn new video compression standards and related video image processing devices. In the past, only cable and satellite TV operators provided video transmission services; now telecom companies are also entering this field, using the latest video coder/decoders (CODECs) and video processing technology to deliver digital video to users via IPTV.

The digital broadcast infrastructure begins with video content creation in television or film studios. The production equipment uses the Serial Digital Interface (SDI) to send raw video to a storage device or a non-linear editor (NLE) for editing and enhancement. The edited video is compressed with MPEG-2, JPEG2000, or H.264 at encoding time and then sent to the user over cable, satellite, terrestrial broadcast, or the latest IPTV networks. Figure 1 shows the block diagram of a broadcast system infrastructure.

Video and image processing trends

Many exciting new technologies, such as HDTV and digital cinema, are tied to video and image processing, and these technologies are still evolving rapidly. Jumps in capture and display resolution, advanced compression technology, and video intelligence are the driving forces behind them.

Advanced compression technologies are comprehensively replacing their predecessors: they offer better stream handling, higher compression ratios at a given quality, and shorter delays. JPEG2000 has also matured in storage and digital cinema applications, and the standards committees continue to enhance H.264 and JPEG2000 as these new compression solutions enter practical use.

For the past decade, standard-definition television (SDTV) digital broadcasting has used the MPEG-2 standard. H.264/AVC (MPEG-4 Part 10) and Microsoft's VC-1 will eventually replace MPEG-2 as the preferred video coding methods for SDTV and HDTV. To meet current and future needs, broadcast equipment manufacturers must support a variety of coding standards. In addition to the core video CODEC standards, many different video pre- and post-processing algorithms can be used to enhance overall image quality.

As resolution and compression rates continue to increase, the industry demands high performance while keeping the architecture flexible for rapid upgrades. In addition, as the technology matures and volumes grow, costs must fall continuously. Programmable logic devices (PLDs) can meet these needs, so they can play an important role in emerging digital video broadcast infrastructure systems.


Figure 1: Schematic diagram of the broadcast infrastructure system

Video content generation

The first link in the video broadcast chain is capturing audio and video content with professional digital video cameras. The video can be SD or HD. Such cameras typically have an SDI output as defined by the Society of Motion Picture and Television Engineers (SMPTE). SDI carries an uncompressed video stream at 270 Mbps (standard definition), 1.485 Gbps (high definition), or 2.97 Gbps (1080p HD). Altera's Stratix II GX FPGAs include serializer/deserializer (SERDES) and clock/data recovery (CDR) circuitry that can process the video stream on the camera's SDI output.
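As a sanity check on those line rates, the sketch below (not part of any Altera design; the raster dimensions are the standard SMPTE totals, including blanking) shows how the 270 Mbps and 1.485 Gbps figures fall out of 10-bit 4:2:2 sampling over the full raster:

```python
# Sketch: where the SMPTE SDI line rates quoted above come from.
# Total samples per line x total lines x frame rate gives the luma
# sample clock; 4:2:2 interleaves one chroma sample per luma sample,
# and each sample is 10 bits wide on the serial link.

def sdi_bit_rate(samples_per_line, lines, frame_rate, bits=10):
    """Serial bit rate in bits/s for interleaved 10-bit 4:2:2 video."""
    luma_clock = samples_per_line * lines * frame_rate
    return luma_clock * 2 * bits  # x2: Cb/Cr interleaved with Y

# SD-SDI (525-line system, 858 total samples per line):
sd = sdi_bit_rate(858, 525, 30000 / 1001)   # ~270e6
# HD-SDI (1080i raster: 2200 total samples, 1125 total lines, 30 fps):
hd = sdi_bit_rate(2200, 1125, 30)           # 1.485e9
# 3G-SDI (1080p60) simply doubles the HD rate to 2.97 Gbps.
print(round(sd / 1e6), "Mbps,", hd / 1e9, "Gbps")
```

The same function also shows why 1080p60 needs the 2.97 Gbps link: doubling the frame rate doubles the serial rate.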

Video pre-processing / post-processing

The NTSC channel broadcast standard used in North America has a fixed bandwidth of 6 MHz per channel, and the PAL standard used in Europe and elsewhere allocates 8 MHz per channel. These limits predate digital television, yet the analog bandwidth allocations still constrain current digital broadcast standards. Digital video quality is much better than traditional analog video, but the higher the resolution, the more bandwidth is required to transmit the video data. Sending high-quality video therefore requires preprocessing the video source.

Compressing video too aggressively produces blocking ("mosaic") artifacts, a consequence of the block-based DCT used by these CODECs. After pre-/post-processing, the video compresses more easily, improving image quality and reducing transmission bandwidth requirements. This capability is particularly important for the cable, satellite, telecom, and IPTV broadcast business models, where high quality must be achieved within very tight bandwidth constraints. Pre-processing may include two-dimensional filtering to remove certain high-frequency components before the video enters the encoder, effectively reducing blocking noise. Altera's video and image processing suite includes two-dimensional finite impulse response (FIR) and median filter functions, which provide a flexible and efficient way to perform two-dimensional FIR filtering with a 3x3, 5x5, or 7x7 constant-coefficient matrix. To achieve optimal performance in a bandwidth-constrained environment, pre/post processing is thus a critical differentiator for any video compression method.
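To make the pre-filtering idea concrete, here is a minimal software sketch of a 3x3 constant-coefficient 2D FIR low-pass of the kind described above. This is illustrative Python, not Altera's MegaCore implementation, and the clamped-border policy is an assumption:

```python
# Sketch: 3x3 constant-coefficient 2D FIR filter, the kind of pre-filter
# used to attenuate high frequencies before the encoder. Pure Python
# for clarity; a hardware version would pipeline the multiply-adds.

def fir_2d(image, kernel):
    """Apply a 3x3 kernel to a 2D list of pixels (borders clamped)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in (-1, 0, 1):
                for kx in (-1, 0, 1):
                    py = min(max(y + ky, 0), h - 1)  # clamp at edges
                    px = min(max(x + kx, 0), w - 1)
                    acc += image[py][px] * kernel[ky + 1][kx + 1]
            out[y][x] = acc
    return out

box = [[1 / 9] * 3 for _ in range(3)]  # normalized box blur (sums to 1)
noisy = [[0] * 5 for _ in range(5)]
noisy[2][2] = 90                       # one bright noise pixel
smooth = fir_2d(noisy, box)
print(round(smooth[2][2]))             # the 90 spike is spread to ~10
```

Because the coefficients sum to one, flat regions pass through unchanged while isolated high-frequency spikes, the pixels that cost the encoder the most bits, are spread out.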

Video compression

The next step is to compress the preprocessed raw video data before sending it to the end user. Since MPEG-1 there have been multiple generations of compression standards, built on four basic methods: discrete cosine transform (DCT), vector quantization (VQ), fractal compression, and discrete wavelet transform (DWT).

In digital television, the MPEG-2 standard dominates worldwide: digital cable, satellite, and terrestrial broadcasts all use it. As the broadcast industry moves toward higher-definition content, a given transmission channel comes under increasing pressure to fit more data into the same allotted bandwidth. And with the rise of IPTV over traditional telecom networks, delivering video programs with MPEG-2 is no longer economically feasible.

The ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) jointly developed the MPEG-4 Part 10 standard, also known as H.264. H.264 provides high image quality at much lower bit rates than previous standards, without a large increase in complexity. Another goal was to make the standard flexible enough to accommodate a wide range of applications (low and high bit rates, low- and high-resolution video) and to work well on a variety of networks and systems. Other compression standards, such as JPEG2000, are based on state-of-the-art wavelet algorithms.

Video transmission

The compressed video can be transmitted over short distances within the broadcast facility using the ASI standard, while the industry trend is to use IP video technology to send video data over long distances. Altera's IP video reference design can send MPEG-2 Transport Streams (TS) over IP networks, bridging one or more compressed video streams into IP packets on 100 Mbps or 1 Gbps Ethernet. Altera also provides ASI encoding and decoding reference designs. Digital Video Broadcasting Asynchronous Serial Interface (DVB-ASI) is a serial data transmission protocol for transporting MPEG-2 packets over copper or fiber-optic networks.
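The bridging step the reference design performs in hardware can be sketched in a few lines of software. This is an illustrative model, not the reference design itself; it assumes the common practice of carrying seven 188-byte TS packets per UDP datagram, which keeps the 1316-byte payload under the 1500-byte Ethernet MTU:

```python
# Sketch: grouping an MPEG-2 Transport Stream into UDP-sized payloads
# for transmission over IP. TS packets are a fixed 188 bytes and start
# with the 0x47 sync byte.

TS_PACKET = 188
TS_PER_DATAGRAM = 7     # 7 x 188 = 1316 bytes < 1500-byte Ethernet MTU
SYNC_BYTE = 0x47

def ts_to_datagrams(stream: bytes):
    """Split a TS byte stream into UDP payloads of 7 packets each."""
    assert len(stream) % TS_PACKET == 0, "truncated TS stream"
    # Verify sync on every packet boundary before bridging
    for i in range(0, len(stream), TS_PACKET):
        assert stream[i] == SYNC_BYTE, "lost TS sync"
    chunk = TS_PACKET * TS_PER_DATAGRAM
    return [stream[i:i + chunk] for i in range(0, len(stream), chunk)]

# 14 dummy TS packets -> two 1316-byte datagram payloads
stream = bytes([SYNC_BYTE] + [0] * (TS_PACKET - 1)) * 14
payloads = ts_to_datagrams(stream)
print(len(payloads), len(payloads[0]))  # 2 1316
```

Each payload would then be handed to a UDP socket (or, in the FPGA, to the Ethernet MAC) for delivery.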

Video scaling and deinterlacing

Studios and head-end equipment often need to perform video scaling and deinterlacing, for example when converting between SD and HD. Other applications include filters for edge detection, vertical motion filters, and inter-field motion filters.

For many professional studios, one of the most common requirements is displaying various standard SDTV or HDTV signals on one or more display devices. Switching easily among these different video sources under remote control is critical to a professional, easy-to-use system, so video scaling and deinterlacing are important for video switchers/routers: they allow the device to handle different video resolutions for switching, routing, and local display.
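The two simplest deinterlacing strategies such a switcher might apply can be sketched as follows. This is an illustrative model only: "weave" interleaves the two fields (sharp, but combs on motion), while "bob" line-doubles a single field (no combing, but half the vertical detail); real designs blend the two based on the motion filters mentioned above:

```python
# Sketch: weave vs. bob deinterlacing on tiny 2-line fields.

def weave(top_field, bottom_field):
    """Interleave top/bottom field lines into one progressive frame."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)   # even line from the top field
        frame.append(b)   # odd line from the bottom field
    return frame

def bob(field):
    """Line-double one field by repeating each line."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line[:])  # simple repeat; real HW interpolates
    return frame

top = [[1, 1], [3, 3]]      # lines 0 and 2 of a 4-line frame
bottom = [[2, 2], [4, 4]]   # lines 1 and 3
print(weave(top, bottom))   # [[1, 1], [2, 2], [3, 3], [4, 4]]
print(len(bob(top)))        # 4 lines from a 2-line field
```

A motion-adaptive deinterlacer, like the one in the video and image processing suite, chooses per pixel between these two behaviors.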

Chroma space conversion and video format

Because broadcasters must provide different video formats depending on the end user's geographic location, the broadcast studio must be able to convert between different color spaces and video formats. Colors are generally represented in different color space domains, each suited to a different application depending on system requirements. In YCbCr, the color information is carried by two independent chrominance signals, Cb and Cr, alongside a third signal, the luminance (luma) signal Y. The RGB color space is defined by three color components: red, green, and blue. When transferring data between devices that use different color space models, a color space conversion is required. For example, displaying a television image on a computer monitor requires converting from the YCbCr color space to RGB; conversely, sending a computer image to a television requires converting from RGB to YCbCr. Altera's Color Space Converter MegaCore function can implement these conversions in a variety of applications.
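The conversion itself is a small matrix multiply plus an offset. The sketch below uses the full-range BT.601 coefficients for illustration (it is not the MegaCore implementation; broadcast equipment often uses the limited-range "studio swing" variant, but the matrix idea is identical):

```python
# Sketch: RGB <-> YCbCr conversion with full-range BT.601 coefficients.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

# Gray carries no chroma, so Cb and Cr sit at their 128 midpoint:
print([round(v) for v in rgb_to_ycbcr(128, 128, 128)])  # [128, 128, 128]
```

In an FPGA the same math becomes three fixed-point multiply-accumulate chains per output channel, which is why a constant-coefficient matrix maps so well onto DSP blocks.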

Video and image processing system architecture

The system architect can choose among standard-cell ASICs, ASSPs, and programmable solutions such as DSPs, media processors, and FPGAs. Each approach has advantages and disadvantages; the choice ultimately depends on the final equipment requirements and the availability of the solution. From the trends discussed above, an ideal architecture has the following characteristics: high performance, flexibility, ease of upgrade, low development cost, and costs that fall as the application matures and volumes grow.

1. High performance

Performance is not only about compression but also about the pre- and post-processing functions; in many practical applications these functions consume more resources than the compression algorithm itself. They include scaling, deinterlacing, filtering, and color space conversion. The high performance the broadcast market requires rules out processor-only solutions, because no single such device can meet the performance requirements: even the most advanced DSP running at 1 GHz cannot decode H.264 HD in real time, and H.264 HD encoding is roughly ten times more complex than decoding. FPGAs are the only programmable solution to this problem; in some cases the best solution combines an FPGA with an external DSP processor.

2. Flexibility can speed up time to market and facilitate upgrades

Because the technology is evolving rapidly, the architecture must be quite flexible and easy to upgrade. Standard-cell ASICs and ASSPs lack this flexibility, so they are unsuitable here. ASSPs, typically designed for very large consumer markets, quickly become obsolete, so for most applications the risk of using them is too high.

3. Low development costs

Taking into account the cost of masks and wafers, software, design verification, and layout, the development cost of a typical 90 nm standard-cell ASIC can reach $30 million. Only the highest-volume consumer markets can absorb such development costs. FPGAs are the best choice for lower-volume equipment: such designs rarely match the exact functionality of an ASSP, and even the best off-the-shelf solutions carry a high risk of quickly becoming outdated.

Altera Video and Image Processing Solutions

For these reasons, FPGAs are particularly well suited to many video and image processing devices. Altera's FPGAs offer high performance, flexibility, low development cost, protection against obsolescence, and a low-cost structured-ASIC migration path. Altera's video and image processing solutions include the DSP design flow, the video and image processing suite, interface and third-party video compression intellectual property, and video reference designs.

1. Implement ASSP-like functionality on an FPGA/Structured ASIC

As the number of available solutions grows, Altera and its partners can already offer ASSP-like functionality on FPGAs or structured ASICs. ATEME's H.264 Main Profile standard-definition encoder is a good example: with this product, users can treat the FPGA like an ASSP, but unlike the traditional ASSP approach, the FPGA solution can be updated quickly and carries no risk of obsolescence.

2. DSP design process

Altera provides an optimized DSP design flow for custom development that supports several design entry methods, including VHDL/Verilog, model-based design, and C-based design. Altera's video and image processing suite can be combined with any of these design flow options.

Altera and The MathWorks have teamed up to create a comprehensive DSP development flow that lets designers take full advantage of MathWorks' modeling tool, Simulink. Altera's DSP Builder connects Simulink to Altera's Quartus II development software, providing a seamless design flow in which designers develop algorithms in MATLAB, do system-level design in Simulink, and then export the design as a hardware description language (HDL) file for Quartus II. Tightly integrated with the SOPC Builder tool, DSP Builder helps users build systems that integrate Simulink designs, Altera's embedded processors, and intellectual property cores. The flow is intuitive and easy to use, even for designers with little experience in programmable logic design software.

3. Video and Image Processing Suite

The video and image processing suite consists of nine functions whose parameters can be configured statically and, in some cases, changed dynamically. Figure 2 shows a typical video system built with the suite.


Figure 2: A block diagram of a typical video system using a video and image processing suite

4. Video Development Kit

Altera has two new video development kits. The first, the Audio Video Development Kit, Stratix II GX Edition, provides 2-channel composite video output, VGA output, 96 kHz audio I/O, 256 MB of DDR2 DRAM, and Cyclone II devices. The second, the Video Development Kit, Stratix II GX Edition, supports 4-channel HD-SDI, ASI, DVI, HDMI, USB, Gigabit Ethernet, 1394, and DDR2 SDRAM; it also includes a video reference design built with the video and image processing suite, DSP Builder, and SOPC Builder. In addition to these kits, several Altera third-party development kits target video solutions.
