
Commodore transformer platform

The volumes and costs of video storage and transmission are soaring.

This situation can only be ameliorated by massive investments in infrastructure, by technological breakthroughs, or both. Commodore has developed and demonstrated patent-protected technology that reduces the size of any video file compressed by any existing video codec (e.g. MPEG-4, H.264, DivX, VC-1) by 70% to 85% of the original compressed size, without any loss in the video quality that results when the codec decompresses and displays its compressed file.

The unique Commodore pre-transformer prepares the frames of the original video file before passing them on to a given codec. The codec then processes the received frames in its usual way and produces a much smaller compressed video file than it would without the initial Commodore processing. The compressed video can then be stored and/or transmitted to any device. For playback, the codec decompresses the compressed video frames in its usual way and passes them on to the Commodore post-transformer for final processing before display. The resulting quality is indistinguishable from that produced by the codec alone, without the Commodore pre- and post-processing.

The mathematical principles behind the Commodore algorithms are those of the Wavelet Transform (WT). A picture (or video frame) contains an enormous quantity of digital data, but the human visual system can only process part of that spectrum. This matters because a WT makes it possible to discard the perceptually irrelevant information. The Commodore invention applies the WT to visual data through its own algorithms in a new and unique way. The first ideas were patented in 2008, and a new patent application covering more powerful algorithms has recently been filed. These new algorithms for video data pre- and post-processing, usable with any given video codec without modifying the codec in any way, are the basis for the typical results shown in the following table.
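For orientation, the sketch below shows where such pre- and post-transform hooks sit relative to an unmodified codec. It is not the patented Commodore algorithm, which is not disclosed here: pre_transform and post_transform are hypothetical stand-ins built from a one-level Haar split, and codec_encode/codec_decode are placeholders for whatever encoder and decoder the system already uses.

import numpy as np

def _scale_horizontal_detail(frame: np.ndarray, gain: float) -> np.ndarray:
    """One-level horizontal Haar split: scale the detail band by `gain`,
    then re-synthesize. gain < 1 damps fine detail; applying 1/gain
    afterwards undoes it exactly (in floating point)."""
    a, b = frame[:, 0::2], frame[:, 1::2]
    low, high = (a + b) / 2.0, (a - b) / 2.0 * gain
    out = np.empty_like(frame, dtype=np.float64)
    out[:, 0::2] = low + high
    out[:, 1::2] = low - high
    return out

def pre_transform(frame):   # runs before the existing codec's encoder
    return _scale_horizontal_detail(frame, gain=0.5)

def post_transform(frame):  # runs after the existing codec's decoder
    return _scale_horizontal_detail(frame, gain=2.0)

def encode_pipeline(frames, codec_encode):
    """codec_encode stands for the unmodified existing encoder (H.264 etc.);
    it simply receives pre-transformed frames instead of the originals."""
    return codec_encode([pre_transform(f) for f in frames])

def decode_pipeline(bitstream, codec_decode):
    """codec_decode stands for the unmodified existing decoder."""
    return [post_transform(f) for f in codec_decode(bitstream)]

if __name__ == "__main__":
    frame = np.random.default_rng(0).integers(0, 256, (4, 8)).astype(np.float64)
    restored = post_transform(pre_transform(frame))
    print(np.allclose(frame, restored))   # True: this toy round trip is lossless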


Commodore Improvement of Video Codec Compression Capabilities
(Without loss of video quality for any given codec)
Typical Results for MPEG-4, H.264, VC-1 HD 1080p Videos

Video               Original Size   Codec-Compressed   Codec + Commodore   Codec + Commodore
                                    (6 Mbps)           (1.5 Mbps)          (0.75 Mbps)
Living Landscapes   36.6 GB         226 MB             61 MB               35 MB
Casino Royale       73 GB           451 MB             121 MB              65 MB
Planet Earth        76 GB           460 MB             125 MB              66 MB
Yellowstone         38 GB           238 MB             64 MB               36 MB
Over California     82 GB           508 MB             140 MB              73 MB
Steelers            79 GB           490 MB             132 MB              69 MB
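As a quick sanity check on the table, the short Python snippet below (sizes copied from the table) computes the size reduction relative to the codec-only 6 Mbps files; it comes out at roughly 73% and 85% for the two Commodore bitrates.

# Sizes (in MB) copied from the table above: codec-only at 6 Mbps versus
# codec + Commodore at 1.5 Mbps and 0.75 Mbps.
clips = {
    "Living Landscapes": (226, 61, 35),
    "Casino Royale":     (451, 121, 65),
    "Planet Earth":      (460, 125, 66),
    "Yellowstone":       (238, 64, 36),
    "Over California":   (508, 140, 73),
    "Steelers":          (490, 132, 69),
}

for name, (codec_mb, commodore_1_5, commodore_0_75) in clips.items():
    r1 = 1 - commodore_1_5 / codec_mb       # reduction at 1.5 Mbps
    r2 = 1 - commodore_0_75 / codec_mb      # reduction at 0.75 Mbps
    print(f"{name:18s} {r1:.0%} smaller at 1.5 Mbps, {r2:.0%} smaller at 0.75 Mbps")
# Prints reductions of roughly 72-73% and 85-86% across all six clips.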

These results can be achieved by adding a small piece of software to existing systems. This software becomes part of the multimedia platform of Windows, Android, Xbox, PS3, Symbian and other devices. As the table above demonstrates, the current costs of storing and transmitting high-quality video, including High Definition video, can be reduced by 70% to 85%. Users can keep their existing infrastructure without any degradation in quality of service. The process flow of the data is shown below.

If you have any questions or are interested, please don't hesitate to contact us: E-Mail: j.bruyn@commodorecorp.com


Introduction to Standard Wavelet Video Coding

Wavelet video coding is an emerging technology that could pave the way for highly adaptable video. It provides the same functionality as the scalable video coding presently being developed within H.264, and its further potential is being investigated within the MPEG group.

Motivation

The demand for higher mobility of video content across different platforms requires a solution that provides high degrees of scalability in the spatial, temporal and quality domains. As wavelet-based coding technologies provide full embeddedness in these three domains, they are obviously strong candidates for meeting such requirements.

Overview of the technology

Wavelet video coding provides a framework for highly scalable video coding, enabled by the multi-resolution properties of the wavelet transform. It is based on two key technologies: the spatial wavelet transform and motion compensated temporal filtering (MCTF). It has been shown that different orderings of these two decomposition steps bring different advantages to coding efficiency. Fig. 1 shows a general framework that includes wavelet transforms in the spatial and temporal directions. The main difference between possible wavelet-based schemes is the chosen number of pre- and post- 2D spatial decompositions. Omitting the pre-spatial decomposition yields high compression gain and provides excellent quality for applications that do not need high-quality spatial scalability; on the other hand, employing a pre-spatial transform enhances the visual quality at lower resolutions.
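As an illustration of the two building blocks named above, the following minimal NumPy sketch (written for exposition, not taken from any standard codebase) implements a one-level 2-D spatial Haar transform and Haar temporal filtering; a real MCTF stage would additionally apply motion compensation before the temporal filter.

import numpy as np

def haar2d(frame: np.ndarray):
    """One level of 2-D Haar wavelet analysis: returns the LL approximation
    band and the (LH, HL, HH) detail bands. Frame dimensions must be even."""
    f = frame.astype(np.float64)
    lo = (f[:, 0::2] + f[:, 1::2]) / 2.0          # horizontal averages
    hi = (f[:, 0::2] - f[:, 1::2]) / 2.0          # horizontal details
    ll, lh = (lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0
    hl, hh = (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)

def temporal_haar(frame_a: np.ndarray, frame_b: np.ndarray):
    """Haar temporal filtering of two consecutive frames -- the core idea of
    MCTF, shown here without motion compensation: the low band is a temporal
    average, the high band carries only the frame-to-frame difference."""
    a, b = frame_a.astype(np.float64), frame_b.astype(np.float64)
    return (a + b) / 2.0, (a - b) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f0 = rng.integers(0, 256, (144, 176)).astype(np.float64)   # QCIF-sized frame
    f1 = f0 + rng.normal(0, 2, f0.shape)                        # slightly changed frame
    t_lo, t_hi = temporal_haar(f0, f1)           # temporal decomposition first...
    ll, details = haar2d(t_lo)                   # ...then spatial, as in Fig. 1
    print(ll.shape, [d.shape for d in details])  # each band is at half resolution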

Fig. 1: General coding framework

H.264 (market standard) encoding and decoding

Because H.264 encoding and decoding require significant computing power for specific types of arithmetic operations, software implementations that run on general-purpose CPUs are typically less power efficient. However, the latest quad-core general-purpose x86 CPUs have sufficient computation power to perform real-time SD and HD encoding. Compression efficiency depends on the algorithmic implementation, not on whether a hardware or software implementation is used. The difference between hardware- and software-based implementations therefore lies mainly in power efficiency, flexibility and cost.

To improve power efficiency and reduce the hardware form factor, special-purpose hardware may be employed, either for the complete encoding or decoding process or for acceleration assistance within a CPU-controlled environment. A CPU-based solution is known to be much more flexible, particularly when encoding must be done concurrently in multiple formats, multiple bit rates and resolutions (multi-screen), possibly with additional features such as container format support, advanced integrated advertising features, etc. A CPU-based software solution also generally makes it much easier to load-balance multiple concurrent encoding sessions on the same CPU.

The 2nd generation Intel Core i3/i5/i7 processors (code named "Sandy Bridge"), introduced at the January 2011 CES (Consumer Electronics Show), offer an on-chip hardware full-HD H.264 encoder. The Intel marketing name for this on-chip H.264 encoder feature is "Intel Quick Sync Video". A hardware H.264 encoder can also be an ASIC or an FPGA. An FPGA is a general programmable chip; to use an FPGA as a hardware encoder, an H.264 encoder design is required to customize the chip for the application. By 2009 a full-HD H.264 encoder (High profile, level 4.1, 1080p, 30 fps) could run on a single low-cost FPGA chip.
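As a concrete illustration of software versus hardware encoding, the hypothetical snippet below drives ffmpeg from Python, once with the libx264 software encoder and once with the h264_qsv (Intel Quick Sync) hardware encoder. It assumes an ffmpeg build with both encoders available, suitable Intel hardware and drivers, and an input file named clip.mp4.

import subprocess

def encode(input_path: str, output_path: str, encoder: str, bitrate: str = "6M"):
    """Encode a clip with ffmpeg using the given H.264 encoder.
    Assumes ffmpeg is on PATH with libx264 and, for 'h264_qsv',
    Intel Quick Sync support compiled in and working drivers."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", input_path,
         "-c:v", encoder, "-b:v", bitrate,
         "-an",                     # drop audio to compare video streams only
         output_path],
        check=True,
    )

# Software encode on the CPU (x264) vs. hardware encode via Quick Sync
encode("clip.mp4", "clip_x264.mp4", "libx264")
encode("clip.mp4", "clip_qsv.mp4", "h264_qsv")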


ASIC encoders with H.264 functionality are available from many different semiconductor companies, but the core design used in the ASIC is typically licensed from one of a few companies such as Chips & Media, On2 (formerly Hantro, now part of Google) and Imagination. Some companies offer both FPGA and ASIC products. Texas Instruments manufactures a line of ARM + DSP cores on which the DSP performs H.264 Baseline Profile encoding at 1080p and 30 fps. This permits flexibility with respect to codecs (which are implemented as highly optimized DSP code) while being more efficient than software on a generic CPU.

Copyright: Commodore Technology AG
