
INTRODUCTION: The reasons for the evolution of streaming stored audio and video are:
1) The decrease in the cost of disk storage
2) Improvements in Internet infrastructure
3) Enormous pent-up demand for high-quality video on demand

OVERVIEW: In the network there are a client and a server. The client sends the request message, and the server responds to that request. Two different transport-layer sockets are available for transmission: TCP and UDP. Before being sent into the network, the audio/video file is segmented and encapsulated with special headers. The protocol used for encapsulation is RTP (Real-time Transport Protocol). The protocol used for additional features such as pause/play and jumping within the file is RTSP (Real-Time Streaming Protocol). The request passes through the web client (i.e., the browser). Helper application -> media players (e.g., Windows Media Player, RealPlayer).

FUNCTIONS OF THE MEDIA PLAYER: The media player performs several functions:
Decompression
Jitter removal
Error correction

Decompression: saves disk storage and network bandwidth.
Jitter removal: smooths out variations in source-to-destination delays (packet jitter); it is done by using a buffer at the receiver.
Error correction: compensates for packet loss due to unpredictable congestion in the network.
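The jitter-removal function described above can be sketched as a fixed playout delay: each packet is scheduled to play a constant interval after it was sent, and packets that arrive after their playout point are discarded. This is a minimal sketch; the delay value and timestamps below are illustrative assumptions (a real player would use RTP timestamps).

```python
# Minimal sketch of jitter removal with a fixed playout delay.
# PLAYOUT_DELAY and the timestamps are assumed, illustrative values.

PLAYOUT_DELAY = 0.2  # seconds of buffering added at the receiver

def playout_schedule(packets, delay=PLAYOUT_DELAY):
    """Given (send_time, arrival_time) pairs, compute when each packet
    plays and whether it arrived in time for its playout point."""
    schedule = []
    for send_t, arrive_t in packets:
        play_t = send_t + delay          # fixed playout point
        on_time = arrive_t <= play_t     # late packets are discarded
        schedule.append((play_t, on_time))
    return schedule

# Three packets; the second suffers a large network delay.
packets = [(0.00, 0.05), (0.02, 0.30), (0.04, 0.12)]
sched = playout_schedule(packets)
```

The second packet arrives at t=0.30, after its playout point of 0.22, so it is dropped; a larger playout delay would save it at the cost of more start-up latency.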

MANY STREAMING SYSTEMS ATTEMPT TO RECOVER FROM LOSSES BY:
1) Reconstructing lost packets through the transmission of redundant packets
2) Having the client explicitly request retransmission of lost packets
3) Masking the loss by interpolating the missing data from the received data
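The first recovery technique above (redundant packets) can be sketched with a simple XOR parity scheme: for every group of packets, the sender also transmits their bytewise XOR, and the receiver can reconstruct any single lost packet in the group. This is a minimal sketch of the idea, not the scheme used by any particular streaming system.

```python
# Sketch of loss recovery via one redundant (XOR parity) packet per group.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group):
    """Sender side: XOR of all packets in the group."""
    p = group[0]
    for pkt in group[1:]:
        p = xor_bytes(p, pkt)
    return p

def recover(received, parity):
    """Receiver side: received has exactly one entry replaced by None
    (the lost packet); XOR-ing the parity with the survivors rebuilds it."""
    p = parity
    for pkt in received:
        if pkt is not None:
            p = xor_bytes(p, pkt)
    return p

group = [b'\x01\x02', b'\x0f\x00', b'\xaa\x55']
parity = make_parity(group)
lost = [group[0], None, group[2]]   # middle packet was lost in transit
```

The cost is one extra packet per group of bandwidth; the benefit is that no retransmission round trip is needed.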

TYPES OF SERVICE:
1. Accessing audio and video through a web server
2. Sending multimedia from a streaming server to a helper application
3. Streaming with the Real-Time Streaming Protocol (RTSP)

1. ACCESSING AUDIO AND VIDEO THROUGH A WEB SERVER:


The web server delivers the audio/video to the client over HTTP (or a non-HTTP protocol). In the case of audio streaming, the audio files reside on the web server just as HTML and JPEG files do. The client sends an HTTP request message, and the server encapsulates the audio file in an HTTP response message and sends the response back over the TCP connection.

In the case of video streaming, the audio and video may be stored in two separate files, so two separate HTTP requests and two separate TCP connections are used. The audio and video files arrive at the client in parallel, and it is up to the client to manage the synchronization of the two streams.

A NAIVE ARCHITECTURE FOR AUDIO AND VIDEO STREAMING:
The browser process establishes a TCP connection with the web server.
The web server sends the audio/video file to the browser in an HTTP response message.
The content-type header line in the HTTP response message indicates a specific audio/video encoding, so the browser can launch the associated media player.
The media player then renders the audio/video file.
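The dispatch step in this naive architecture can be sketched as follows: the browser parses the Content-Type header of the HTTP response and uses it to choose a helper application. The handler mapping here is an illustrative assumption; real browsers consult a configurable MIME-type table.

```python
# Sketch of content-type dispatch in the naive architecture.
# The PLAYERS mapping is an assumed, illustrative MIME-type table.

PLAYERS = {
    "audio/mpeg": "media-player",
    "video/mp4": "media-player",
    "text/html": "browser",
}

def handler_for(response_bytes):
    """Extract Content-Type from a raw HTTP response and pick a handler."""
    headers = response_bytes.split(b"\r\n\r\n", 1)[0].decode()
    for line in headers.split("\r\n")[1:]:       # skip the status line
        name, _, value = line.partition(":")
        if name.strip().lower() == "content-type":
            return PLAYERS.get(value.strip(), "unknown")
    return "unknown"

resp = (b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: audio/mpeg\r\n"
        b"Content-Length: 4\r\n\r\n"
        b"data")
```

Note that the handler is only chosen after the response headers arrive; in this architecture the whole body is still downloaded before the player starts.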

Fig 1: A naive implementation for audio streaming

DRAWBACKS OF THIS APPROACH:
The media player must interact with the server through the web browser as an intermediary.
The entire object must be downloaded before the browser passes it to the helper application, which results in a delay before playout can begin.
To overcome this, the server can send the audio/video file directly to the media player. This is done by making use of a META file, which provides information about the audio/video that is to be streamed.

Fig 2. Web server sends audio/video directly to the media player

A direct TCP connection between the server and the media player is obtained as follows:
1) The user clicks a hyperlink. The hyperlink points to the meta file, not to the audio/video file itself.
2) The server sends the meta file in an HTTP response message that includes a content-type header line.
3) This content-type header line causes the browser to launch the associated media player and pass it the meta file.
4) The media player sets up a TCP connection directly with the server.
5) The server streams the audio/video file out to the media player.
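The meta-file indirection in the steps above can be sketched simply: the meta file's body is little more than the URL of the actual media object, which the media player then contacts directly. The one-line file format and URL below are illustrative assumptions (RealPlayer's .ram files work in a similar spirit).

```python
# Sketch of meta-file parsing by the media player.
# The file format and URL are assumed, illustrative examples.

def parse_meta(meta_text):
    """A minimal meta file: a single line holding the media URL."""
    url = meta_text.strip()
    scheme, _, rest = url.partition("://")
    host = rest.split("/", 1)[0]     # the server the player must contact
    return scheme, host, url

meta = "rtsp://audio.example.com/songs/track1.mp3\n"
scheme, host, url = parse_meta(meta)
```

Because the meta file is tiny, the HTTP download completes quickly, and the potentially large media transfer happens on the player's own direct connection.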

2. SENDING MULTIMEDIA FROM A STREAMING SERVER TO A HELPER APPLICATION:


Audio/video can be sent over UDP (rather than TCP). This architecture requires two servers: 1) an HTTP server and 2) a streaming server. The media player requests the file from the streaming server rather than from the web server, and the media player and streaming server can interact using their own protocols.
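The UDP delivery path can be sketched with two sockets on the loopback interface: the "streaming server" sends media segments as datagrams and the "media player" receives them. The chunk contents are stand-ins for real media data, and the code is a minimal local sketch; UDP offers no delivery or ordering guarantee on a real network.

```python
# Minimal sketch of UDP media delivery on the loopback interface.
# Chunks are stand-ins for media segments; no real encoding is used.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "media player"
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "streaming server"
chunks = [b"frame1", b"frame2", b"frame3"]
for c in chunks:
    send.sendto(c, addr)             # UDP: no delivery guarantee, no ACKs

# On loopback, loss is effectively nil, so all chunks arrive.
received = [recv.recvfrom(1500)[0] for _ in chunks]
send.close()
recv.close()
```

Unlike the TCP-based web-server approach, nothing here retransmits a lost datagram; that is exactly why the media player needs its own jitter and loss handling.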

Fig 3. Streaming from a streaming server to a media player.

The figure above shows the options for delivering audio/video from the streaming server to the media player. A partial list of options:
Audio/video is sent over UDP at a constant rate equal to the drain rate of the receiver; for example, audio compressed using GSM at a rate of 13 kbps.
The media player delays playout for 2-5 seconds in order to eliminate network-induced jitter. Once the client has prefetched a few seconds of media, it begins to drain the buffer.
Let the fill rate of the buffer be x(t) and the drain rate be d. After the 2-5 second delay, the media player reads from its buffer. If the drain rate d exceeds the fill rate x(t) for long enough, the client buffer empties; this condition is known as STARVATION. The amount of media that can be prefetched also depends on the buffer size, so the client buffer should be large.
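The fill-versus-drain dynamics above can be sketched with a step-by-step simulation: the buffer gains x(t) each step, and once playout begins it also loses d per step; starvation is detected when the occupancy hits zero. The rates below are made-up illustrative numbers (kilobits per one-second step), not taken from any real codec trace.

```python
# Sketch of client-buffer dynamics: varying fill rate x(t) vs. constant
# drain rate d. Starvation = the buffer empties after playout starts.
# All rates are assumed, illustrative values.

def simulate(fill_rates, d, prefetch_steps=2):
    """Return True if the client buffer starves during playout."""
    buf, starved = 0.0, False
    for t, x in enumerate(fill_rates):
        buf += x                      # fill rate x(t) for this step
        if t >= prefetch_steps:       # playout starts after the prefetch delay
            buf -= d                  # constant drain rate d
            if buf <= 0:
                buf, starved = 0.0, True
    return starved

# Fill drops to 10 while drain stays at 13: the buffer eventually empties.
starves = simulate([13, 13] + [10] * 9, d=13)
# Fill keeps pace with drain: no starvation.
keeps_up = simulate([13, 13, 13, 13, 13, 13], d=13)
```

The first run starves only after several steps: the prefetched data masks the deficit for a while, which is why a larger buffer (and longer prefetch) makes starvation less likely.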

3. REAL-TIME STREAMING PROTOCOL (RTSP)


This protocol is mainly used to control playback, much as with a DVD player. Before getting into the details of RTSP, let us first indicate what RTSP does not do:
o It does not define the compression schemes.
o It does not define the encapsulation.
o It does not define how the media is transported (UDP or TCP).
o It does not restrict how the media player buffers the audio/video (i.e., whether or not the entire file is downloaded first).
What, then, does RTSP do? It controls the transmission of the media stream:
o Pause/resume
o Fast-forward
o Rewind
RTSP is an out-of-band protocol: the control messages travel separately from the media stream itself. FTP also uses this out-of-band notion.
RTSP PROCESS: As usual, an HTTP request/response delivers the META file and a TCP connection is established. Then the control messages are exchanged: SETUP, PLAY, PAUSE, TEARDOWN.

All requests are in ASCII text. The RTSP server keeps track of the state of the client for each ongoing RTSP session, and the session number is fixed throughout the entire session. The SETUP message also includes the client's port number and the transport type (UDP). An additional feature of RTSP is that it allows the user to record the ongoing media (e.g., RealPlayer).
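Since RTSP requests are plain ASCII text, the SETUP/PLAY/TEARDOWN sequence described above can be sketched by building the request strings directly. The header layout follows the spirit of RFC 2326, but the URL, client ports, and session number below are illustrative assumptions.

```python
# Sketch of ASCII RTSP requests (SETUP, PLAY, TEARDOWN).
# URL, ports, and session id are assumed, illustrative values.

def rtsp_request(method, url, cseq, extra_headers=()):
    """Build one ASCII RTSP request; CSeq numbers the requests in order."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}", *extra_headers, "", ""]
    return "\r\n".join(lines)

url = "rtsp://audio.example.com/twister/audio"

# SETUP carries the client's chosen transport and port numbers.
setup = rtsp_request("SETUP", url, 1,
                     ["Transport: RTP/AVP;unicast;client_port=3056-3057"])
# PLAY and TEARDOWN carry the session number the server assigned.
play = rtsp_request("PLAY", url, 2, ["Session: 4231", "Range: npt=0-"])
teardown = rtsp_request("TEARDOWN", url, 3, ["Session: 4231"])
```

The fixed Session header in every post-SETUP request is what lets the server track per-client state; the incrementing CSeq keeps requests and responses matched up.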
