
LECTURE NOTES

Course: Video 1
Course Code: COMM2303
Program Code: BP203
Semester: Semester 1, 2012
Lecturer: Dr Shaun Wilson

OVERVIEW

Video is a moving image recording, broadcast system and format used for many years as a cheap and instantaneous alternative to film. It has been used extensively over the past 45 years in both the commercial and creative industries and is now accepted as a viable, and in many cases standard, replacement for 16mm and 35mm film stock in moving image capture and recording systems.

The origins of the medium lie in research and development by the United States military in the 1940s and, later, in the efforts of NASA to develop a practical image capture system for the moon landings. However, the first consumer version was released by Sony in the 1960s with the now iconic Sony Portapak. Since then the technologies engaged with the video industries have grown from these early beginnings into a multi-trillion-dollar market which now extends to High Definition and beyond (known as 2K to 12K, and UHD or Ultra High Definition).

Video has, since its early development, been an instantaneous method of recording moving images onto magnetic tape and, in recent years, as a digital file acquired on either solid-state memory cards or external hard drives.

Traditionally, video was produced and transmitted in three main distribution formats designated by the TV broadcast standards of PAL (Europe, Asia, Australia, etc.), NTSC (Japan, USA, Canada, etc.) and SECAM (Russia, parts of Africa, France, etc.).

The two consumer-level milestones in recent video are the release of Digital Video (DV) in 1995 and High Definition Video (HDV) in 2003. Both formats use an image sensor chip called a CCD to aid image recording. Most video cameras these days have three CCDs that are assigned to capture the red, green and blue (otherwise known as RGB) streams.

Today video is used extensively in everyday life, crossing over from movie cameras to the Internet, security and surveillance, the visual and computational-based arts, communication and transport, entertainment, and digital archives. It has proven to be one of the most significant image capture inventions of the twentieth century and the main driver of motion pictures in the twenty-first century.

BROADCAST

The broadcasting of primitive pre-video systems has been in use since the 1930s, but mainstream commercial use occurred from the 1950s onwards in the USA, Europe, Africa, Oceania, and Asia. The three main traditional analogue standards (PAL, NTSC, SECAM) broadcast signals that translate and receive colour in different ways, but all are being replaced worldwide by digital standards such as ATSC over the next 10 years. This has been referred to in broadcast communities as the [next] core of culture (i), a phrase adapted from English author Roger Scruton.

NTSC (National Television System Committee) was originally devised as an operational broadcast format in black and white in the USA in 1941 and in colour in 1953. Sometimes affectionately referred to as Never The Same Colour by editors and film makers alike, it is designed to broadcast moving images on an electrical grid running at 60 Hz. NTSC runs at 480 visible lines (525 in total) and broadcasts at 29.97 frames per second (nominally 30fps). Discussion as to the performance and quality of this standard led the European sector to devise two other standards

(PAL and SECAM) that were thought of as attempted improvements in both signal and picture quality.

PAL (Phase Alternating Line) was originally devised in 1963 as a European response aimed at improving on NTSC broadcast. Sometimes affectionately called Perfect At Last by editors and film makers alike, it is designed to broadcast moving images on an electrical grid running at 50 Hz. PAL runs at 625 lines and broadcasts at 25 frames per second (25fps). Because PAL images are captured and broadcast at 25fps, the images are more closely matched to a film look (24fps). Note that PAL Standard Definition (SD) DVDs and DVD players are encoded to and run at 25fps.

SECAM (Sequential Colour with Memory) was the first European colour TV standard, developed by France and Russia in reaction to NTSC tint problems running on a European broadcast system (and power grid). While it also broadcasts at 25fps, the difference is in the complex way in which colour signals are encoded and stored before and during transmission. The advantage of this standard is that there are no colour or saturation losses during amplitude or phase errors that are otherwise common in NTSC or PAL transmission.

ATSC (Advanced Television Systems Committee) is responsible for the DTV (Digital Television) Standard, which covers both High Definition and Standard Definition broadcasts. DTV will eventually replace the analogue NTSC and PAL standards and their variants incrementally. It is already an active format in several of the larger viewer markets and is continually becoming more widespread. HDTV is part of this standard and is designed for HD screens at both 720 and 1080 scan sizes. (ii)

An overview of these broadcast formats can be seen below:

SDTV: 480i (NTSC, 720x480 split into two 240-line fields)
SDTV: 576i (PAL, 720x576 split into two 288-line fields)
EDTV: 480p (NTSC, 720x480)
HDTV: 720p (1280x720)
HDTV: 1080i (1280x1080, 1440x1080, or 1920x1080 split into two 540-line fields)
HDTV: 1080p (1920x1080 progressive scan) (iii)
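To make the relationship between these standards concrete, the short Python sketch below is an illustrative example only (not part of any broadcast specification); the values are transcribed from the list above, and the HD entries assume PAL-region frame rates. It stores each format's resolution, frame rate and scan type, and derives how many fields or frames are transmitted per second.

    # Illustrative sketch: broadcast format parameters transcribed from the list above.
    # HD frame rates here assume PAL-region rates (25/50fps).
    BROADCAST_FORMATS = {
        "SDTV 480i (NTSC)": {"width": 720,  "height": 480,  "fps": 29.97, "interlaced": True},
        "SDTV 576i (PAL)":  {"width": 720,  "height": 576,  "fps": 25.0,  "interlaced": True},
        "EDTV 480p (NTSC)": {"width": 720,  "height": 480,  "fps": 29.97, "interlaced": False},
        "HDTV 720p":        {"width": 1280, "height": 720,  "fps": 50.0,  "interlaced": False},
        "HDTV 1080i":       {"width": 1920, "height": 1080, "fps": 25.0,  "interlaced": True},
        "HDTV 1080p":       {"width": 1920, "height": 1080, "fps": 25.0,  "interlaced": False},
    }

    def fields_per_second(fmt):
        """Interlaced formats send two half-height fields for every full frame."""
        spec = BROADCAST_FORMATS[fmt]
        return spec["fps"] * (2 if spec["interlaced"] else 1)

    print(fields_per_second("SDTV 576i (PAL)"))   # 50.0 fields per second
    print(fields_per_second("HDTV 720p"))         # 50.0 full frames per second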

ASPECT RATIO

Aspect ratio is simply the relationship between width and height, measured in units of space, and is often used to gauge what type of screening medium you will use to play or broadcast your video. For example, the traditional 4:3 ratio (otherwise known as a 1.33:1 aspect ratio) is what early digital video cameras natively captured images with and was also, originally, the shape of conventional theatre screens until the 1950s.

Video is shot in two main aspect ratios, 4:3 and 16:9, but other formats such as 2.35:1 (Panavision), for example, are used by high-end video cameras when the output of the image sequence is either cinema or widescreen TV. There are approximately 15 other kinds of aspect ratios that video can accommodate, but for your short film the accepted (and expected) aspect ratio will be 16:9 (1.78:1).

The main advantages of 16:9 are that the image can reveal more of the captured frame and is more appealing to natural human vision. Given that broadcast TV stations and computer monitors are all shifting towards a native widescreen format, it makes sense to film your video in 16:9; 16:9 is also the international standard format of many HDTV countries and of the European Digital TV Standard.

4:3 aspect ratio

16:9 aspect ratio

Historically, 4:3 was the standard video aspect ratio because all mainstream TV sets were manufactured with a 4:3 screen. However, with the new ATSC standards switching to widescreen, the 4:3 format is now considered obsolete and reserved for niche markets such as Media Arts, where two- and three-channel video projections are comprised of several 4:3 sized images put together. 16:9 images can be viewed in 4:3 formats by using letterboxing (black bars top and bottom) and, vice versa, 4:3 can be viewed in 16:9 format by using pillarboxing (black bars left and right hand side).
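As a rough worked example (an illustrative sketch only, not drawn from these notes), the Python snippet below computes the size of the letterbox or pillarbox bars needed to fit one frame shape inside another:

    # Illustrative sketch: fitting a source frame inside a destination frame with black bars.
    def fit_with_bars(src_w, src_h, dst_w, dst_h):
        """Scale the source to fit inside the destination and report the bar sizes."""
        scale = min(dst_w / src_w, dst_h / src_h)
        scaled_w, scaled_h = round(src_w * scale), round(src_h * scale)
        pillar = (dst_w - scaled_w) // 2   # black bars left/right
        letter = (dst_h - scaled_h) // 2   # black bars top/bottom
        return scaled_w, scaled_h, pillar, letter

    # A 16:9 image (1920x1080) shown in a 4:3 frame (1440x1080): letterboxed.
    print(fit_with_bars(1920, 1080, 1440, 1080))   # (1440, 810, 0, 135)
    # A 4:3 image (1440x1080) shown in a 16:9 frame (1920x1080): pillarboxed.
    print(fit_with_bars(1440, 1080, 1920, 1080))   # (1440, 1080, 240, 0)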

Aspect ratio variants according to pixel dimension and resolution.

OVERVIEW OF FORMATS

Video formats are the overall method by which video is composed, at the following standards:

PAL (25 or 50fps, 625 lines, 720x576, 4:3 or 16:9, 576i or 576p)
NTSC (29.97 or 59.94fps, 525 lines, 704x480, 4:3 or 16:9, 480i or 480p)
HD720 (24, 25, 29.97, 30, 50, 59.94, or 60fps, 720 lines, 1280x720, 16:9, 720p)
HD1080 (24, 25, 29.97, 30, 50, 59.94, or 60fps, 1080 lines, 1440x1080 or 1920x1080, 16:9, 1080i or 1080p)

PROGRESSIVE AND INTERLACE SCAN

Video images are recorded using a scanning method that is either Progressive (one entire frame) or Interlaced (two alternating half frames combined). Progressive scan achieves a more film-like quality but has trouble capturing high-speed motion without stutter, while interlace scan achieves a lesser film look but is better at capturing high-speed motion (most sports video cameras use interlaced). These are indicated by the letters i or p (e.g. 1080i, 1080p).

In non-linear editing systems, progressive scan footage shot at 25fps will consist of 25 frames per second in your timeline, but interlaced scan footage shot at 25fps will actually be 50 half-frames per second in your timeline. This means that a non-linear editing system will weave these fields together, which sometimes produces sharp comb-like edges that look like tiny fingers jutting in and out of the captured image. To put these fingers back together you need to de-interlace the footage using a process called interpolation. See diagram above. (A simple sketch of this interpolation idea appears after the Frame Size note below.)

FRAME SIZE

Frame sizes are the pixel dimensions used in each frame. They affect the quality and dimension of how an image will be seen on a respective screen size. See the diagram below for reference:
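Picking up the de-interlacing point above: one common interpolation approach is to keep one field and rebuild the missing lines by averaging the lines above and below. The sketch below is an illustrative example only (not the method of any particular editing system) and represents a frame as a plain list of scanlines.

    # Illustrative sketch of line-averaging de-interlacing on one frame.
    # A frame is a list of scanlines; each scanline is a list of pixel values.
    def deinterlace_keep_top_field(frame):
        """Keep the top field (even lines) and interpolate the odd lines between them."""
        out = [row[:] for row in frame]
        for y in range(1, len(frame) - 1, 2):          # odd lines belong to the other field
            above, below = frame[y - 1], frame[y + 1]
            out[y] = [(a + b) // 2 for a, b in zip(above, below)]
        return out                                      # the final odd line is left as-is for simplicity

    # Tiny 4x4 example: even lines from field A (value 10), odd lines from field B (value 30).
    frame = [[10] * 4, [30] * 4, [10] * 4, [30] * 4]
    print(deinterlace_keep_top_field(frame))
    # Line 1 becomes the average of lines 0 and 2, removing the "finger" between the fields.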

HIGH DEFINITION VIDEO

High Definition video was developed by JVC and Sony to make improvements on the existing DV format and was released commercially in 2003. It has recently become available in consumer markets and consists of two popular formats, 720p and 1080i. More recently, with rapid advances in HD video technologies, 1080p is becoming increasingly available and in some cases is replacing 1080i. There are two types of consumer-level standards: High Definition Video (HDV: 1280x720 and 1440x1080) and HD (1920x1080); however, it must be noted that not all of these variations record the same thing. The difference between each variant lies in the way the camera compresses the images onto its acquisition medium, how the camera processes and records colour, the size and capacity of the CCD, and how much information the camera can store on the tape or in the digital file.

HDV 1 is a video codec which shoots natively at 1280x720 in progressive scan and uses MPEG-2 encoding in a GOP (Group of Pictures) structure on either tape or memory card/hard drive. It records at a colour structure of 4:2:0 component, sound at 8-bit, a pixel aspect ratio of 1.0, and video at 19.7 Mbit/s. It has a universal file extension of either .m2v or a variant of QuickTime.

HDV 2 is a video codec which shoots natively at 1440x1080 in interlaced or progressive scan and uses native MPEG-2 encoding in a GOP (Group of Pictures) structure on either tape or memory card/hard drive. It records at a colour structure of 4:2:0 component, sound at 8-bit, a pixel aspect ratio of 1.33, and video at 25 Mbit/s. It has a universal file extension of either .m2v or a variant of QuickTime.

HD video works slightly differently:

DVCPRO HD is a video codec which shoots natively at 720 or 1080 sizes in progressive or interlace scan and, unlike HDV, uses DV-based intra-frame compression (each frame compressed independently rather than in a long GOP) on either tape (720) or P2 memory card/hard drive (1080). It records at a colour structure of 4:2:2 component, sound from 8-bit onwards, and video between 40-100 Mbit/s. Its files are usually handled as a variant of QuickTime.

AVCHD is a video codec for non-tape cameras only which shoots natively at 1280x720 or 1440x1080 in progressive or interlace scan and uses MPEG-4 AVC/H.264 encoding in a GOP (Group of Pictures) structure on memory card or hard drive. It records at a colour structure of 4:2:0 component, sound from 8-bit, and video between 17-24 Mbit/s. It has a universal file extension of either .mts or .m2ts.
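Bit rates like these translate directly into storage requirements. The short sketch below is an illustrative calculation only (not drawn from any camera manual); it estimates the file size of a recording from its video bit rate and duration, ignoring audio and container overhead.

    # Illustrative sketch: estimating recording size from a video bit rate.
    def recording_size_gb(bitrate_mbit_per_s, minutes):
        """Approximate file size in gigabytes, ignoring audio and container overhead."""
        bits = bitrate_mbit_per_s * 1_000_000 * minutes * 60
        return bits / 8 / 1_000_000_000        # bits -> bytes -> GB (decimal)

    print(recording_size_gb(25, 60))     # one hour of HDV 2 at 25 Mbit/s  -> 11.25 GB
    print(recording_size_gb(100, 60))    # one hour of DVCPRO HD at 100 Mbit/s -> 45.0 GB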

It is important to note that, with Blu-ray technologies fast replacing all SD formats, H.264 (the standard compression format for Blu-ray) will soon replace MPEG-2 compression in HD video. The HDV 1 and 2 formats will probably either be replaced by HD or modify their existing compression standards. All HD cameras still come in either NTSC or PAL models.

FRAME RATES

60i/p - 60 frames per second in progressive or interlace scan, used in HDV 1 and 2 and HD cameras. PAL and NTSC formats. Excellent for slow motion.
59.94i/p - 59.94 frames per second in progressive or interlace scan, used in HDV 1 and 2 and HD cameras. NTSC formats.
50i/p - 50 frames per second in progressive or interlace scan, used in HDV 1 and 2 and HD cameras. PAL formats. Good for slow motion.
30i/p - 30 (in some cases really 29.97) frames per second in progressive or interlace scan, used in HDV 1 and 2. NTSC formats.
25i/p - 25 frames per second in progressive or interlace scan, used in HDV 1 and 2 and HD cameras. PAL formats. Close to a cine look.
24p - 24 frames per second in progressive scan, used in HDV 1 and 2 and HD cameras. PAL and NTSC formats. Cine look.
23.98p - 23.98 frames per second in progressive scan, used in HDV 1 and 2 and HD cameras. NTSC formats. Cine look; also captured in JVC HD cameras.

VIDEO IMPORT

Video can be imported into an editing system in various ways. For analogue SD formats this can be tape-to-tape recording using BETA, VHS or U-Matic. In digital systems this can be done through a digital capture or ingest (importing a digital file not on tape) via a FireWire cable, USB cable, memory card or hard drive. In Final Cut Pro use the Log and Capture option for tape and Log and Transfer for digital files.

SD analogue video: tape recorder (analogue system)
SD digital video: FireWire 400 (digital system)
HD video: FireWire 400/800, memory card or hard drive (digital system)

HANDY TIP

Remember this as a guide when filming (the shutter speed denominator is roughly double the frame rate): 24fps uses a shutter speed of 1/48, 25fps uses 1/50, 30fps uses 1/60, and 50fps uses 1/100. Use your ISO (for DSLRs or RED), gain and aperture for optimal lighting control.
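A minimal sketch of this rule of thumb (illustrative only): the shutter denominator is simply double the frame rate.

    # Illustrative sketch of the "shutter speed = double the frame rate" guideline.
    def suggested_shutter(fps):
        """Return the suggested shutter speed denominator for a given frame rate."""
        return round(fps * 2)

    for fps in (24, 25, 30, 50):
        print(f"{fps}fps -> shutter 1/{suggested_shutter(fps)}")
    # 24fps -> 1/48, 25fps -> 1/50, 30fps -> 1/60, 50fps -> 1/100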

THE FUTURE

Broadcasters and screen manufacturers are now moving away from HD-sized screen resolutions (1920 x 1080) and transitioning to 4K2K (3840 x 2160), which is four times the pixel count of HD (four HD quadrants), otherwise known as digital cinema. While this size is technically not the 4K used in cinematic cameras and projection systems, it is now being adopted as a prosumer broadcast format. The BBC is trialling another format, 8K (7680 x 4320), equivalent to sixteen HD quadrants. We will have a clearer understanding of which way commercial broadcasters will go over the next three to four years, and this will undoubtedly feed back into other screen devices such as mobile phones, tablets and computer screens.

What this signifies for digital media such as optical discs is two things. First, the death of the DVD is heralded by its replacement with electronic files stored on external devices, online or inside the monitor itself, which are too large in data size to fit on a DVD or Blu-ray disc. The second is what is known as a Red Ray and a Black Ray optical disc that can reportedly hold 200TB of data each. The limitations of digital cinema files are that broadband restrictions and processor power still lack a viable method of transfer, and current HDMI cables can only transfer data at a capped rate (note the new JVC 4K2K camcorder can deliver a 4K signal spread across four HDMI cables). These, of course, are considerations in current development. While the idea of convenience regarding the disposal of optical disc storage in favour of digital file uploads/downloads straight into a monitor is a sound argument, the idea of downloading 8K format movies over current broadband infrastructure presents a series of solid obstacles to it becoming a viable financial option en masse.

Another facet is aspect ratio: as 4K and 8K resolution becomes apparent, so too does pressure to replace the standard 16:9 ratio. While formats like the new 4K2K are based around HD quadrants in the existing 16:9 format, the 2.35 and 2.40 (scope) aspect ratios have also been proposed to replace 16:9. With regards to scope monitors, this can present clever savings on data size, whereby what is gained in horizontal pixels is lost in vertical pixels, making a scope image slightly smaller in file size. In simple terms, the fewer vertical pixels a frame has, the smaller it is: in HD terms, a 1920 x 800 file (2.40 scope) will be smaller than the same footage framed at 1920 x 1080 (16:9). A saving of 280 vertical pixels means less rendering time, output time, storage space, and so forth (see the short worked example below). We are nonetheless living in exciting times, and the next 5 years will represent a massive change in workflows, production, entertainment and data acquisition.
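A quick worked illustration of that last point (an illustrative calculation only): comparing the total pixel count per frame shows roughly how much data a 2.40 scope frame saves over 16:9 at the same width.

    # Illustrative sketch: per-frame pixel counts for 16:9 versus 2.40 scope at the same width.
    def pixel_count(width, height):
        return width * height

    full_16x9 = pixel_count(1920, 1080)   # 2,073,600 pixels
    scope_240 = pixel_count(1920, 800)    # 1,536,000 pixels
    print(f"Scope frame is {scope_240 / full_16x9:.0%} the size of the 16:9 frame")
    # -> roughly 74%, i.e. about a quarter less data to render, output and store per frame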

REFERENCES
(i) Scruton, R (2005), Modern Culture, Continuum, London.
(ii) Wikipedia, http://en.wikipedia.org/Atsc
(iii) Wikipedia, http://en.wikipedia.org/Video

