I am trying to understand the full pipeline of live video streaming, from broadcaster to viewer, as YouTube does it.
I've read several docs but some things are still unclear. Here is what I know (broad strokes):
1. The broadcaster encodes their video game output and transmits the live stream (normally via RTMP) to a YouTube ingest/encoding server. The broadcaster's output could be H.264 video + AAC audio at 1280x720, 30 fps (RTMP actually wraps this in an FLV-style container rather than MP4).
2. The YT encoding server segments the incoming stream into 1-second chunks of video. Each segment starts with a keyframe. The server then transcodes the original segment into lower resolutions (eg. 480p, 360p, 240p, 144p). An HDS / HLS / DASH (or other) manifest is also created. This segment group + manifest is then passed on to the Origin Server.
3. The viewer's Flash client makes a request to the YT CDN for the video manifest. Since nothing is cached yet, the request is passed on to the Origin Server, which returns the manifest file. The viewer's Flash client analyzes the manifest, then requests the first segment (or a collection of segments to fill its buffer) at the lowest display resolution (eg. 240p). The client starts a timer to measure how long the file(s) take to be returned. All subsequent requests to the CDN for the same data are served by the CDN, since it now keeps a cache, though this cache can expire or be refreshed at some point in the future. The manifest file, however, may not be cached.
4. The viewer's Flash client begins playing the buffered segments and refers back to its timer / bandwidth metrics to decide whether to keep requesting segments at the current resolution or switch to another one (up if the user's speeds are very good, down if they are very bad). Once the client nears the end of the manifest, it makes a fresh request for an updated manifest. The manifest grows as new segments come in from the broadcaster.
5. If the viewer scrubs the seek bar, the Flash client first checks its existing buffer of segments to see if it already has that particular time (segment chunk). If so, it plays it; otherwise it makes a request for the needed segment(s).
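To make step 1 concrete, here is a minimal sketch (in Python) of assembling an ffmpeg command that encodes a source to H.264 + AAC at 720p30 and pushes it to an RTMP ingest point. The ingest URL and stream key are hypothetical placeholders, not YouTube's real endpoints:

```python
# Hypothetical ingest point and stream key -- placeholders only.
INGEST_URL = "rtmp://a.rtmp.example.com/live2"
STREAM_KEY = "xxxx-xxxx-xxxx-xxxx"

def build_ffmpeg_push_cmd(source: str) -> list:
    """Return an argv list for ffmpeg that pushes an RTMP live stream."""
    return [
        "ffmpeg",
        "-re", "-i", source,            # read input at its native frame rate
        "-s", "1280x720", "-r", "30",   # 720p at 30 fps, as in step 1
        "-c:v", "libx264",              # H.264 video
        "-preset", "veryfast",
        "-b:v", "2500k",
        "-g", "60",                     # keyframe every 60 frames (2 s @ 30 fps)
        "-c:a", "aac", "-b:a", "128k",  # AAC audio
        "-f", "flv",                    # RTMP carries an FLV-style mux
        f"{INGEST_URL}/{STREAM_KEY}",
    ]

cmd = build_ffmpeg_push_cmd("game_capture.mp4")
```

The fixed `-g` keyframe interval matters because the encoding server can only cut segments at keyframes, as described in step 2.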
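The manifest side of step 2 can be sketched as a minimal *live* HLS media playlist covering the most recent segments. Segment file names here are hypothetical; a real service would also produce DASH/HDS manifests and one playlist per rendition:

```python
import math

def live_media_playlist(first_seq: int, durations: list) -> str:
    """Build a live HLS media playlist for segments first_seq, first_seq+1, ..."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{math.ceil(max(durations))}",
        f"#EXT-X-MEDIA-SEQUENCE:{first_seq}",
    ]
    for i, d in enumerate(durations):
        lines.append(f"#EXTINF:{d:.3f},")
        lines.append(f"segment{first_seq + i}.ts")   # hypothetical segment name
    # No #EXT-X-ENDLIST tag: the stream is live, so clients must re-poll
    # this playlist for new segments, as described in step 4.
    return "\n".join(lines) + "\n"

playlist = live_media_playlist(120, [1.0, 1.0, 1.0])
```

The absence of `#EXT-X-ENDLIST` is exactly why the manifest "grows" from the client's point of view: each re-fetch returns a later window of segments.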
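The client-side rate decision in steps 3-4 can be sketched as: measure how fast the last segment downloaded, estimate throughput, then pick the highest rendition whose bitrate fits under a safety margin. The bitrate ladder below is illustrative, not YouTube's actual one:

```python
# Illustrative rendition ladder: name -> nominal bitrate in bits/s.
RENDITIONS = {
    "240p": 400_000,
    "360p": 750_000,
    "480p": 1_500_000,
    "720p": 3_000_000,
}

def estimate_throughput(bytes_received: int, seconds: float) -> float:
    """Observed bits per second for the last segment download."""
    return bytes_received * 8 / seconds

def pick_rendition(throughput_bps: float, safety: float = 0.8) -> str:
    """Highest rendition whose bitrate fits within safety * throughput."""
    budget = throughput_bps * safety
    best = "240p"  # always fall back to the lowest rung, as in step 3
    for name, bps in sorted(RENDITIONS.items(), key=lambda kv: kv[1]):
        if bps <= budget:
            best = name
    return best
```

For example, a 250 kB segment fetched in 1 s gives 2 Mbit/s of throughput; with a 0.8 safety factor the budget is 1.6 Mbit/s, which fits 480p but not 720p.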
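The seek logic in step 5 is simple when segments have a fixed length: the target segment index is just floor(seek time / segment duration), and the client serves it from its buffer if present, otherwise fetches it. A sketch, assuming the 1-second chunks described above:

```python
SEGMENT_DURATION = 1.0  # seconds, matching the 1 s chunks in step 2

def segment_for_time(seek_seconds: float) -> int:
    """Index of the segment containing the given playback time."""
    return int(seek_seconds // SEGMENT_DURATION)

def resolve_seek(seek_seconds: float, buffered: dict):
    """Return ('buffer', data) if the segment is cached, else ('fetch', index)."""
    idx = segment_for_time(seek_seconds)
    if idx in buffered:
        return ("buffer", buffered[idx])
    return ("fetch", idx)
```

Real players also have to land on the preceding keyframe and may re-request at a different rendition, but the buffer-first lookup is the core of it.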
The above is my understanding of how live streaming on YouTube operates. Is it correct, or did I miss a few VITAL steps?
My issues are:
1. Is Google / YT using the Adobe Media Server (Flash Media Server) and HDS for live streaming?
2. How do they handle iOS, X360, Android, and Desktop requests? The transport protocol for streaming isn't the same across these platforms.
3. I'm looking to build my own live streaming service operating similarly to YT. Is the Flash Media Server even necessary for handling live stream requests from various platforms -- iOS, Android, Desktop -- given that it claims to dynamically repackage? If that's all it does, couldn't one do the same thing with Apache and one's own code?
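On question 3: once the segments and manifests exist on disk, HLS/DASH delivery is plain HTTP file serving, so in principle any web server (Apache included) can do it; the "dynamic repackaging" part is generating per-platform manifests over the same segments. A minimal sketch of an HLS *master* playlist pointing at per-rendition media playlists (file names hypothetical):

```python
def master_playlist(renditions: dict) -> str:
    """renditions: name -> (bandwidth_bps, 'WIDTHxHEIGHT' resolution string)."""
    lines = ["#EXTM3U"]
    for name, (bw, res) in renditions.items():
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bw},RESOLUTION={res}")
        lines.append(f"{name}/index.m3u8")  # hypothetical per-rendition playlist
    return "\n".join(lines) + "\n"

master = master_playlist({
    "240p": (400_000, "426x240"),
    "480p": (1_500_000, "854x480"),
})
```

An iOS client would consume this HLS tree natively, while a DASH client would need an equivalent MPD generated from the same segment data; that manifest generation is the part you would write yourself.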
Last edited by quantass; 9th Feb 2014 at 12:43.