

Something I end up explaining relatively often has to do with all the various ways you can stream video encapsulated in the Real-time Transport Protocol, or RTP, and still claim to be standards compliant. In my opinion, the standards are a mess in this area. It should be possible to meet all the various requirements of your streaming video product development project with one or at most two different methods for streaming.

Some background: RTP is used primarily to stream either H.264 or MPEG-4 video. RTP – which you can read about in great detail in RFC 3550 – is a system protocol that provides mechanisms to synchronize the presentation of different streams – for instance audio and video. As such, it performs some of the same functions as an MPEG-2 transport or program stream.
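
To make that concrete, here is a minimal sketch – my own illustration, not taken from the RFC or any particular library – of the twelve-byte fixed header that RFC 3550 puts on every packet. The `parse_rtp_header` helper name is mine; the fields are the standard ones. The sequence number, timestamp, and SSRC are the hooks RTP gives a receiver for detecting loss, reordering packets, and (together with RTCP sender reports) lining audio up against video.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Unpack the 12-byte fixed RTP header defined in RFC 3550."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")

    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,             # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),      # e.g. last packet of a video frame
        "payload_type": b1 & 0x7F,      # identifies the codec in use
        "sequence_number": seq,         # detects loss and reordering
        "timestamp": ts,                # media clock, drives playout timing
        "ssrc": ssrc,                   # identifies the stream source
    }
```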

RTP is codec-agnostic. For each codec, the IETF defines an RTP profile that specifies any codec-specific details of mapping data from the codec into RTP packets. Profiles are defined for H.264, MPEG-4 video and audio, and many more; even VC-1 – the “standardized” form of Windows Media Video – has an RTP profile. This means it is possible to carry a large number of codec types inside RTP.
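
As a sketch of what “mapping data from the codec into RTP packets” means in the simplest case, the H.264 payload format (RFC 6184) lets a single NAL unit, minus its start code, become the payload of a single RTP packet. The `packetize_h264_nal` helper and the payload type 96 below are my own illustrative choices; in a real session the dynamic payload type would be bound to the codec in the accompanying SDP, e.g. `a=rtpmap:96 H264/90000`.

```python
import struct

def packetize_h264_nal(nal_unit: bytes, seq: int, timestamp: int,
                       ssrc: int, payload_type: int = 96,
                       marker: bool = False) -> bytes:
    """Wrap one H.264 NAL unit in an RTP packet (single NAL unit mode)."""
    header = struct.pack(
        "!BBHII",
        2 << 6,                                   # V=2, no padding/extension, CC=0
        (0x80 if marker else 0) | (payload_type & 0x7F),
        seq & 0xFFFF,
        timestamp & 0xFFFFFFFF,                   # 90 kHz clock for H.264 video
        ssrc & 0xFFFFFFFF,
    )
    return header + nal_unit                      # NAL unit carried without start code
```

NAL units too large for one packet would instead be split with FU-A fragmentation – exactly the kind of codec-specific detail each payload specification pins down.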
