Let’s talk a bit about HTTP adaptive streaming and GStreamer: what it is and how it works. The implementation in GStreamer in particular is not exactly trivial and can be a bit confusing at first sight.
If you’re just interested in knowing whether GStreamer supports any HTTP adaptive streaming protocols and which ones, you can stop after this paragraph: yes, there are currently elements for handling HLS, MPEG DASH and Microsoft SmoothStreaming.
What is it?
So what exactly do I mean when talking about HTTP adaptive streaming? There are a few streaming protocols out there that basically work the following way:
- You download a Manifest file with metadata about the stream via HTTP. This Manifest contains the location of the actual media, possibly in multiple different bitrates and/or resolutions and/or languages and/or separate audio or subtitle streams or any other different type of variant. It might also contain the location of additional metadata or sub-Manifests that provide more information about a specific variant.
- The actual media is also downloaded via HTTP and split into fragments of a specific duration, usually 2 to 10 seconds. Depending on the actual protocol these separate fragments can be played standalone or need additional information from the Manifest. The actual media is usually a container format like MPEG TS or a variant of ISO MP4.
Examples of this are HLS, MPEG DASH and Microsoft SmoothStreaming.
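To make this a bit more concrete, here is a made-up and heavily simplified example of what this looks like for HLS: a master playlist that references two variants at different bitrates, and the media playlist of one of those variants that lists the actual media fragments. All URIs, bitrates and durations are purely illustrative.

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=640x360
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/playlist.m3u8

And the media playlist of one of the variants:

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fragment0.ts
#EXTINF:10.0,
fragment1.ts
#EXT-X-ENDLIST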
This kind of protocol is used for both video-on-demand and “live” streaming, but obviously can’t provide low-latency live streams. When used for live streaming it is usually required to add more than one fragment of latency: the client downloads one or more fragments, reloads the playlist to get the location of the next fragments and then downloads those.
Why?
You might wonder why one would implement such a complicated protocol on top of HTTP that can’t even provide low-latency live streaming, and why one would choose it over other streaming protocols like RTSP, any RTP-based protocol, or simply serving a stream over HTTP at a single location.
The main reason is that these other approaches can’t make use of the HTTP-based CDNs that are deployed on thousands of servers all around the world, nor of their caching mechanisms. One would need to deploy a specialised CDN just for this other kind of streaming protocol.
A secondary reason is that UDP-based protocols like RTP in particular, but also anything else not based on HTTP, can be a bit complicated to deploy because of all the middleboxes (e.g. firewalls, NATs, proxies) that exist in almost any network out there. HTTP(S) generally works everywhere; in the end it’s the only protocol most people consciously use or know about.
And yet another reason is that splitting the streams into fragments makes switching between bitrates or any other stream alternatives at fragment boundaries trivial to implement.
GStreamer client-side design
All three protocols mentioned above are implemented in GStreamer, and after trying several different approaches for implementing this kind of protocol, all of them converged on a single design. This is what I’m going to describe now.
Source
The naive approach would be to implement all of this as a single source element, because that’s how network protocols are usually implemented in GStreamer, right?
While this seems to make sense, there’s one problem: there is no separate URI scheme defined for such HTTP adaptive streams. They all use normal HTTP URIs that point to the Manifest file. And GStreamer chooses the source element to use based on the URI scheme alone, not based on the data that would be received from that URI.
So what are we left with? HTTP URIs are handled by the standard GStreamer elements for HTTP, and using such a source element gives us a stream containing the Manifest. To make any sense of this collection of bytes and detect its type, we additionally have to implement a typefinder that provides the media type based on looking at the data and actually tells us that this is e.g. an MPEG DASH Manifest.
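As a rough illustration, a typefinder for DASH Manifests could look something like the following sketch. The real typefinder that ships with GStreamer does more thorough checks, so treat the function names and the 512 byte peek here as assumptions made for the example.

#include <gst/gst.h>

static void
mpd_type_find (GstTypeFind * tf, gpointer user_data)
{
  /* Peek at the beginning of the stream and look for the <MPD root element */
  const guint8 *data = gst_type_find_peek (tf, 0, 512);

  if (data && g_strstr_len ((const gchar *) data, 512, "<MPD"))
    gst_type_find_suggest_simple (tf, GST_TYPE_FIND_MAXIMUM,
        "application/dash+xml", NULL);
}

static gboolean
plugin_init (GstPlugin * plugin)
{
  /* Register the typefinder together with the usual file extension */
  return gst_type_find_register (plugin, "dash_mpd", GST_RANK_PRIMARY,
      mpd_type_find, "mpd", NULL, NULL, NULL);
}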
Demuxer
This Manifest stream is usually rather short and is followed by the EOS event like every other stream. We now need another element that does something with this Manifest and implements the specific HTTP adaptive streaming protocol. In GStreamer terminology this acts like a demuxer: it has one stream of input and outputs one or more streams based on that. Strictly speaking it’s not really a demuxer though, as it does not demultiplex the input stream into separate streams, but that’s just an internal detail in the end.
This demuxer has to wait until it has received the EOS event of the Manifest, then parse it and do whatever the protocol defines. Specifically, it starts a second thread that handles the control flow of the protocol. This thread downloads any additional resources specified in the Manifest, decides which media fragment(s) to download next, downloads them and makes sure they leave the demuxer on the source pads in a meaningful way.
The demuxer also has to handle the SEEK event and, based on the time specified in it, jump to a different media fragment.
Examples of such demuxers are hlsdemux, dashdemux and mssdemux.
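To give an idea of what the control thread of such a demuxer does, here is a very rough sketch of such a loop, started once the Manifest has been parsed. Everything prefixed with my_ is a hypothetical helper and not actual GStreamer API; the real hlsdemux/dashdemux/mssdemux code is considerably more involved.

#include <gst/gst.h>

static void
download_loop (MyAdaptiveDemux * demux)
{
  MyFragment *fragment;

  /* Decide, based on the parsed Manifest and the measured bandwidth,
   * which fragment of which alternative should be downloaded next */
  fragment = my_demux_choose_next_fragment (demux);
  if (fragment == NULL) {
    /* Nothing left: either the stream is over or, for live streams,
     * the Manifest has to be reloaded first */
    gst_pad_push_event (demux->srcpad, gst_event_new_eos ());
    gst_task_pause (demux->download_task);
    return;
  }

  /* Download the fragment and push its data out on the source pad */
  my_demux_download_and_push (demux, fragment);
}

static void
start_download_task (MyAdaptiveDemux * demux)
{
  /* download_lock is assumed to be a GRecMutex in the demuxer struct */
  demux->download_task =
      gst_task_new ((GstTaskFunction) download_loop, demux, NULL);
  gst_task_set_lock (demux->download_task, &demux->download_lock);
  gst_task_start (demux->download_task);
}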
Downloading of data
For downloading additional resources listed in the Manifest that are not actual media fragments (sub-Manifests, reloading the Manifest, headers, encryption keys, …) there is a helper object called GstURIDownloader, which basically provides a blocking (but cancellable) API like this: GstBuffer * buffer = fetch_uri(uri)
Internally it creates a GStreamer source element based on the URI, starts it with that URI, collects all buffers and then returns all of them as a single buffer.
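As a minimal sketch of what such a blocking fetch could look like if you built it yourself from a source element and an appsink, consider the following. This is not the actual GstURIDownloader code, which additionally supports cancellation and proper error reporting.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstBuffer *
fetch_uri (const gchar * uri)
{
  GError *error = NULL;
  GstElement *pipeline, *src, *sink;
  GstBuffer *result = NULL;
  GstSample *sample;

  /* Create a source element based on the URI scheme */
  src = gst_element_make_from_uri (GST_URI_SRC, uri, NULL, &error);
  if (src == NULL) {
    g_clear_error (&error);
    return NULL;
  }

  pipeline = gst_pipeline_new (NULL);
  sink = gst_element_factory_make ("appsink", NULL);
  g_object_set (sink, "sync", FALSE, NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Collect all downloaded buffers until EOS and merge them into one */
  while ((sample = gst_app_sink_pull_sample (GST_APP_SINK (sink)))) {
    GstBuffer *buf = gst_buffer_ref (gst_sample_get_buffer (sample));

    gst_sample_unref (sample);
    result = result ? gst_buffer_append (result, buf) : buf;
  }

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return result;
}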
Initially GstURIDownloader was also used to download the media fragments, but it turned out not to be ideal: the demuxer has to download a complete fragment before it can pass anything downstream, and the resulting huge buffers cause any downstream buffering elements to mostly jump between 0% and 100% all the time.
Instead, thanks to the recent work of Thiago Santos, the media fragments are now downloaded differently. The demuxer element is actually a GstBin now and it has a child element that is connected to the source pad: a source element for downloading the media fragments.
This allows the data to be forwarded downstream immediately as it is downloaded, so downstream elements can handle it one block after another instead of getting it in multi-second chunks. In particular it also means that playback can start much sooner, as you don’t have to wait for a complete fragment but can already fill up your buffers with a partial one.
Internally this is a bit tricky, as the demuxer has to catch EOS events from the source (we don’t want to stop streaming just because a fragment is done), catch errors and other messages (maybe instead of forwarding the error we want to retry downloading this media fragment), and switch between different source elements (or just switch the URI of the existing one) during streaming once a media fragment is finished. I won’t describe this here; it’s best to look at the code for that.
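For illustration, catching the per-fragment EOS could be done with a pad probe on the internal source element’s source pad, roughly as in the sketch below. schedule_next_fragment() is a hypothetical helper, and the real elements handle quite a few more cases than this.

#include <gst/gst.h>

static GstPadProbeReturn
src_event_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  if (GST_EVENT_TYPE (event) == GST_EVENT_EOS) {
    /* The fragment is finished: schedule the next download instead of
     * letting the EOS escape downstream and end the whole stream */
    schedule_next_fragment (user_data);
    return GST_PAD_PROBE_DROP;
  }

  return GST_PAD_PROBE_OK;
}

static void
watch_internal_source (gpointer demux, GstPad * internal_src_pad)
{
  gst_pad_add_probe (internal_src_pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      src_event_probe, demux, NULL);
}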
Switching between alternatives
Now what happens if the demuxer wants to switch between different alternative streams, e.g. because it has noticed that it can barely keep up with downloading a high-bitrate stream and wants to switch to a lower bitrate, or even to an audio-only alternative? Here the demuxer has to select the next media fragment of the chosen alternative stream and forward that downstream.
But it’s of course not that simple, because currently we don’t support renegotiation of decoder pipelines. It can easily happen that codecs change between different alternatives, or that the topology changes (e.g. the video stream disappears). Note that not supporting automatic renegotiation for cases like this in decodebin and related elements is not a design deficit of GStreamer but just a limitation of the current implementation.
A similar case is already handled in GStreamer: chained Ogg files (i.e. different Ogg files concatenated to each other). These should also behave like a single stream, but the codecs or the topology could change. Here the demuxer just has to add new pads for the following streams after having emitted the no-more-pads signal, and then remove all the old pads. decodebin and playbin then first drain the old streams and then handle the new ones, while making sure their times align perfectly with the old ones.
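A rough sketch of that pattern, applied to an HTTP adaptive streaming demuxer, could look like the following. MyAdaptiveDemux, src_template and the fixed "alternative" stream id are assumptions made for the example; a real implementation would derive them properly.

#include <gst/gst.h>

static void
switch_to_new_alternative (MyAdaptiveDemux * demux, GstCaps * new_caps)
{
  GstPad *old_pad = demux->srcpad;
  GstPad *new_pad;
  gchar *stream_id;

  /* First create and announce the pad for the new alternative ... */
  new_pad = gst_pad_new_from_static_template (&src_template, NULL);
  gst_pad_set_active (new_pad, TRUE);
  stream_id = gst_pad_create_stream_id (new_pad, GST_ELEMENT (demux),
      "alternative");
  gst_pad_push_event (new_pad, gst_event_new_stream_start (stream_id));
  g_free (stream_id);
  gst_pad_set_caps (new_pad, new_caps);
  gst_element_add_pad (GST_ELEMENT (demux), new_pad);
  gst_element_no_more_pads (GST_ELEMENT (demux));

  /* ... then drain and remove the old one */
  gst_pad_push_event (old_pad, gst_event_new_eos ());
  gst_element_remove_pad (GST_ELEMENT (demux), old_pad);

  demux->srcpad = new_pad;
}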
(uri)decodebin and playbin
If we now look at the complete (source) pipeline that is created by uridecodebin for this case we come up with the following (simplified):
We have a source element for the Manifest, which is added by uridecodebin. uridecodebin also uses a typefinder to detect that this is actually an HTTP adaptive streaming Manifest. This is then connected to a decodebin instance, which is configured to do buffering after the demuxers with the multiqueue element. For normal HTTP streams, buffering is instead done between the source element and decodebin with the queue2 element.
decodebin then selects an HTTP adaptive streaming protocol demuxer for the current protocol, waits until it has decided on what to output and then connects it to a multiqueue element like every other demuxer. However, it uses different buffering settings, as this demuxer is going to behave a bit differently.
After that comes the actual demuxer for the media fragments, which could for example be tsdemux if they are MPEG TS media fragments. If an elementary stream is used for the media fragments, decodebin will instead insert a parser for that elementary stream, which chunks the stream into codec frames and also puts timing information on them.
Then comes another multiqueue element, which would not be placed after a parser if no HTTP adaptive streaming demuxer were used. decodebin configures this multiqueue element to handle buffering and to send buffering messages to the application, allowing it to pause playback until the buffer is full.
In playbin and playsink no special handling is necessary. Now let’s take a look at a complete pipeline graph for an HLS stream that contains one video and one audio stream inside MPEG TS containers. Note: this is huge, like all playbin pipelines. Everything on the right half is for converting raw audio/video and postprocessing it, and then outputting it on the display and speakers.
Keep in mind that the audio and video streams, and also subtitle streams, could also be in separate media fragments. In that case the HTTP adaptive streaming demuxer would have multiple source pads, each of them followed by a demuxer or parser and multiqueues. And decodebin would aggregate the buffering messages of each of the multiqueues to give the application a consistent view of the buffering status.
Possible optimisations
Now all of this sounds rather complex and probably slow compared to the naive approach of just implementing all of this in a single source element without so many different elements involved. As time has shown, it actually is not slow at all, and if you design such a protocol in a single element you will notice that all the different components that are separate elements here also show up in your design. But as GStreamer is like Lego and we like having lots of generic components that are put together to build a more complex whole, the current design follows the idea of GStreamer in a more consistent way. In particular it makes it possible to reuse lots of existing elements and to replace individual elements with custom implementations, transparently thanks to GStreamer’s autoplugging mechanisms.
So let’s talk about a few possible optimisations here, some of which are already implemented in the default GStreamer elements and should be kept in mind when replacing elements, and some of which could be implemented on top of the existing elements.
Keep-alive Connections and other HTTP features
All these HTTP adaptive streaming protocols require lots of HTTP requests, which traditionally meant creating a new TCP connection for every single request. This involves quite some overhead and increases latency because of TCP’s handshake protocol, and even more so if you use HTTPS and also have to handle the SSL/TLS handshake protocol on top of that. We’re talking about several hundred milliseconds to seconds per connection setup here. HTTP 1.1 allows connections to be kept alive for some period of time and reused for multiple HTTP requests. Browsers have been using this for a long time to efficiently show you websites composed of many different files with low latency.
Previously all GStreamer HTTP source elements closed their connection(s) when going back to the READY state, and setting them to the READY state is required to switch URIs. This basically means that although HTTP 1.1 allows connections to be reused for multiple requests, we were not able to make use of this. Now the souphttpsrc HTTP source element keeps connections open until it goes back to the NULL state if the keep-alive property is set to TRUE, and other HTTP source elements could implement this too. The HTTP adaptive streaming demuxers make use of this “implicit interface” to reuse connections for as many requests as possible.
Compression
HTTP also defines a way for clients and servers to negotiate the encodings that both support. In particular, this allows them to negotiate that the actual data (the response body) should be compressed with gzip (or another method) instead of being transferred as plaintext. For media fragments this is not very useful, but for Manifests it can be, especially in the case of HLS, where the Manifest is a plaintext ASCII file that can easily be a few hundred kilobytes in size.
The HTTP adaptive streaming demuxers are using another “implicit interface” on the HTTP source element to enable compression (if supported by the server) for Manifest files. This is also currently only handled in the souphttpsrc element.
Other minor features
HTTP defines many other headers, and the HTTP adaptive streaming demuxers make use of two more if supported by the HTTP source element. The “implicit interface” for setting more headers in the HTTP request is the extra-headers property, which can be set to arbitrary headers.
The HTTP adaptive streaming demuxers currently set the Referer header to the URI of the Manifest file, which to my knowledge is not mandated by any standard, but there are streams out there that actually refuse to serve media fragments without it. The demuxers also set the Cache-Control header to a) tell caches/proxies to update their internal copy of the Manifest file when redownloading it and b) tell caches/proxies that some requests must not be cached (if indicated so in the Manifest). The latter can of course be ignored by the caches.
If you implement your own HTTP source element it is probably a good idea to copy the interface of the souphttpsrc element at least for these properties.
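In practice this “implicit interface” simply means checking whether the source element has these properties and setting them if it does. A sketch, using the property names of souphttpsrc (configure_http_source() itself is just a made-up helper):

#include <gst/gst.h>

static void
configure_http_source (GstElement * src, const gchar * manifest_uri)
{
  GObjectClass *klass = G_OBJECT_GET_CLASS (src);

  /* Reuse the HTTP connection for multiple requests if possible */
  if (g_object_class_find_property (klass, "keep-alive"))
    g_object_set (src, "keep-alive", TRUE, NULL);

  /* Negotiate compression; only worth it for Manifest downloads */
  if (g_object_class_find_property (klass, "compress"))
    g_object_set (src, "compress", TRUE, NULL);

  /* Set the Referer header to the Manifest URI */
  if (g_object_class_find_property (klass, "extra-headers")) {
    GstStructure *headers = gst_structure_new ("extra-headers",
        "Referer", G_TYPE_STRING, manifest_uri, NULL);

    g_object_set (src, "extra-headers", headers, NULL);
    gst_structure_free (headers);
  }
}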
Caching HTTP source
Another area that could easily be optimised is implementing a cache for the downloaded media fragments and also Manifests. This is especially useful for video-on-demand streams, and even more when the user likes to seek in the stream. Without any cache it would be required to download all media fragments again after seeking, even if the seek position was already downloaded before.
A simple way of implementing this is a caching HTTP source element. This basically works like an HTTP cache/proxy such as Squid, only one level higher: it behaves as if it were a plain HTTP source element, but actually does magic inside.
From a conceptual point of view this caching HTTP source element would implement the GstURIHandler interface and handle the HTTP(S) protocols, and ideally also implement some of the properties of souphttpsrc as mentioned above. Internally it would actually be a GstBin that dynamically creates a pipeline for downloading a URI when transitioning from the READY to the PAUSED state. It could have the following internal configurations:
The only tricky bits here are proxying the different properties to the relevant internal elements, and error handling. You probably don’t want to stop the complete pipeline just because writing to or reading from your cache file fails. For that you could catch the error messages and also intercept data flow (to ignore error flow returns from filesink), and then dynamically reconfigure the internal pipeline as if nothing had happened.
Based on the Cache-Control header you could implement different functionality, e.g. for refreshing data stored in your cache already. Based on the Referer header you could correlate URIs of media fragments to their corresponding Manifest URI. But how to actually implement the storage, cache invalidation and pruning is a standard problem of computer science and covered elsewhere already.
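As a starting point, making such an element selectable for HTTP(S) URIs comes down to implementing the GstURIHandler interface, roughly as in this sketch. GstCacheHttpSrc and its uri field are hypothetical, and all the actual caching logic inside the bin is omitted.

#include <gst/gst.h>

static GstURIType
gst_cache_http_src_uri_get_type (GType type)
{
  return GST_URI_SRC;
}

static const gchar *const *
gst_cache_http_src_uri_get_protocols (GType type)
{
  static const gchar *const protocols[] = { "http", "https", NULL };

  return protocols;
}

static gchar *
gst_cache_http_src_uri_get_uri (GstURIHandler * handler)
{
  GstCacheHttpSrc *src = GST_CACHE_HTTP_SRC (handler);

  return g_strdup (src->uri);
}

static gboolean
gst_cache_http_src_uri_set_uri (GstURIHandler * handler, const gchar * uri,
    GError ** error)
{
  GstCacheHttpSrc *src = GST_CACHE_HTTP_SRC (handler);

  g_free (src->uri);
  src->uri = g_strdup (uri);
  return TRUE;
}

static void
gst_cache_http_src_uri_handler_init (gpointer g_iface, gpointer iface_data)
{
  GstURIHandlerInterface *iface = (GstURIHandlerInterface *) g_iface;

  iface->get_type = gst_cache_http_src_uri_get_type;
  iface->get_protocols = gst_cache_http_src_uri_get_protocols;
  iface->get_uri = gst_cache_http_src_uri_get_uri;
  iface->set_uri = gst_cache_http_src_uri_set_uri;
}

/* The element itself would be a GstBin that implements this interface:
 * G_DEFINE_TYPE_WITH_CODE (GstCacheHttpSrc, gst_cache_http_src, GST_TYPE_BIN,
 *     G_IMPLEMENT_INTERFACE (GST_TYPE_URI_HANDLER,
 *         gst_cache_http_src_uri_handler_init));
 */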
Creating Streams – The Server Side
And finally some words about the server side of things: how to create such HTTP adaptive streams with GStreamer and serve them to your users.
In general this is a relatively simple task with GStreamer and the standard tools and elements that are already available. We have muxers for all the relevant container formats (one for the MP4 variant used in MPEG DASH is available in Bugzilla), there is API to request keyframes at specific positions (so that you can actually start a new media fragment at that position), and the dynamic pipeline mechanisms allow switching muxers on the fly to start a new media fragment.
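The keyframe-request API mentioned here is the force-key-unit event from GStreamer’s video library. A minimal sketch of requesting a keyframe at a given running time, assuming muxer_sinkpad is some sink pad downstream of the video encoder:

#include <gst/video/video.h>

static void
request_keyframe (GstPad * muxer_sinkpad, GstClockTime running_time)
{
  GstEvent *event;

  /* The event travels upstream until it reaches the video encoder,
   * which then produces a keyframe (with all headers) at that time,
   * so a new media fragment can start there */
  event = gst_video_event_new_upstream_force_key_unit (running_time,
      TRUE /* all-headers */ , 0 /* count */ );
  gst_pad_push_event (muxer_sinkpad, event);
}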
On top of that you only need to create the Manifest files and then serve all of this with some HTTP server, of which there are already more than enough implementations out there; and in the end you want to hide your server behind a CDN anyway.
Additionally there are the hlssink and dashsink elements, the latter currently only available in Bugzilla. These already implement the creation of the media fragments with the GStreamer tools mentioned above and also generate a Manifest file for you.
And then there is the GStreamer Streaming Server, which also has support for HTTP adaptive streaming protocols and can be used to easily serve multiple streams. But that one deserves its own article.
No mentions of AdobeHDS? 🙂
I never came across an AdobeHDS stream so far 🙂
Your blog is very good! Thank you for your information.
Thanks for the good article.
You said it is relatively simple to build a DASH server using gstreamer. I was wondering if you know the filters which can create DASH-compatible segments and MPD?
thanks
There are no elements for that yet (except for dashsink which is still in Bugzilla). You can easily create dash streams with dynamic pipelines and manually writing the MPD though.
Thanks.
Do you know which elements/filters I should look into for creating DASH streams?
Take a look at the dashsink code. Basically you need encoders that produce the audio/video streams you want and then the fragmented MP4 muxer. Whenever you want to finish one fragment and start the next you would 1) request a new keyframe from the video encoder, 2) when the keyframe arrives, send EOS to the old MP4 muxer, 3) unlink the old MP4 muxer (the fragment is complete) and link a new MP4 muxer, 4) let the keyframe travel to the new MP4 muxer. Around that you need to write code that generates the MPD for these fragments, and then an HTTP server to serve it all.
Thanks much for your answer..
See https://bugzilla.gnome.org/show_bug.cgi?id=736008 btw
Hi,
Is there any support on gstreamer 0.10 for Adaptive Streaming – Smooth Streaming, Dash etc. ?
Thanks
There is some initial support for HLS, but don’t use 0.10 for anything really. It hasn’t been maintained for more than 2 years and a lot has happened since then.
gstreamer-based encoder with hls output: http://github.com/i4tv/gstreamill
Hello, would you please explain how I can compile your code for imx6 devices (Wandboard)? When I try to compile it, it says no gstreamer1 found, but I have GStreamer 1 with all its dependencies.
Best Regards
hi slomo, thanks for the great article on this topic. I had a problem working with the GStreamer Streaming Server. I installed it on my PC (Ubuntu 12.04), but I am stuck at one point: I am unable to stream my own content. By default this server streams a few files in the content folder. Any idea about how to specify user content (videos)?
Thanks in advance
anil
I never used GSS myself. Best to ask on the GStreamer mailing list 🙂
Thank you Slomo for the quality of your article. is there any mean on how to find the currently implemented version of hls/dash/mss protocols, the supported/unsupported features ?
Many thanks
For HLS we support up to the latest version I think, except for alternative renditions (which are part of HLS version 4). For DASH/MSS I can’t remember, you’ll have to look at the source code to see if it supports the features you need. If you tell me which features you need, I can tell you if they are supported though 😉
Hi,
I need to know how we could segment “live” video streams, which are being received as UDP packets from a remote device, into the MPEG-DASH format.
Ask on the mailing list please, this is not a support forum 🙂 http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
There’s a dashsink element in Bugzilla though, which could make this task a bit easier.
Great article Sebastian, very helpful to get introduced into DASH.
I am currently trying to understand how to write a manifest.
Is there a possible configuration where the segment index information is specified only in the manifest and not in the manifest + segment? If that exists, how does the video need to be encoded for that configuration?
Thanks
Good blog 🙂
but if we just want to serve a media file without any adaptation, is it possible? In my case for example I have a libsoup server which sends movies using the “content-length” directly to the playbin on the client side with GST_PLAY_FLAGS_DOWNLOAD activated, but I get some errors… do you have some idea on how I can make it work?
Yes, see e.g. https://coaxion.net/blog/2013/10/streaming-gstreamer-pipelines-via-http/
For the errors, please try with the latest GStreamer version and if it still doesn’t work please file a bug with a debug log and testcase to reproduce.
thanks 🙂 will try this.
Very nice article, thanks.
Perhaps you can help me. I try to bridge an HLS input to a UDP multicast without transcoding. When I use a playbin, the HLS stream is played well, but if I use a uridecodebin it just plays the first segment. My pipeline looks like this:
gst-launch-1.0.exe ^
uridecodebin uri=http://artelive-lh.akamaihd.net/i/artelive_de@393591/master.m3u8 download=false ^
! queue ^
! x264enc ^
! mpegtsmux ^
! udpsink host=239.1.10.100 port=1234 auto-multicast=true
Do you or anyone else know whats wrong with that pipeline?
This has nothing to do with this specific article, please ask on the mailing list here: https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
Thanks 🙂
(Problem might be that this is changing bitrates, if it has multiple, and then a new pad is added to uridecodebin and you can’t handle that in gst-launch, only in code)
Hi Sebastian,
do you have one sample for a successful playback of DASH-Livestream – given pipeline for gst-launch?
e.g. playback http://vm2.dashif.org/livesim/testpic_2s/Manifest.mpd
This would be so cool 🙂
thanks Martin
You can just use it in e.g. playbin, for example with gst-launch-1.0 like this:
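gst-launch-1.0 playbin uri=http://vm2.dashif.org/livesim/testpic_2s/Manifest.mpd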