Concatenate multiple streams gaplessly with GStreamer

Earlier this month I wrote a new GStreamer element that is now integrated into core and will be part of the 1.6 release. It solves yet another commonly asked question from the mailing lists and IRC: how to concatenate multiple streams without gaps between them, as if they were a single stream. This is now solved by the concat element.

Here are some examples of how it can be used:

# 100 frames of the SMPTE test pattern, then 100 frames of the ball pattern
gst-launch-1.0 concat name=c ! videoconvert ! videoscale ! autovideosink  videotestsrc num-buffers=100 ! c.   videotestsrc num-buffers=100 pattern=ball ! c.

# Basically: $ cat file1 file2 > both
gst-launch-1.0 concat name=c ! filesink location=both   filesrc location=file1 ! c.   filesrc location=file2 ! c.

# Demuxing two MP4 files with h264 and passing them through the same decoder instance
# Note: this works better if both streams have the same h264 configuration
gst-launch-1.0 concat name=c ! queue ! avdec_h264 ! queue ! videoconvert ! videoscale ! autovideosink   filesrc location=1.mp4 ! qtdemux ! h264parse ! c.   filesrc location=2.mp4 ! qtdemux ! h264parse ! c.

If you run this in an application that also reports position and duration, you will see that concat preserves the stream time: the reported position goes back to 0 when switching to the next stream, and the duration is always that of the current stream. The running time, however, increases continuously from stream to stream.

Also, as you may notice, this only works for a single elementary stream (i.e. one video stream or one audio stream, not a container stream with both audio and video). To gaplessly concatenate inputs that each contain multiple streams (e.g. one audio and one video track), a more complex pipeline involving one concat element per stream and the streamsynchronizer element is necessary to keep everything synchronized.
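As a rough illustration, such a pipeline could look along these lines. This is an untested sketch: the explicit sink_0/src_0 pad names on streamsynchronizer are an assumption about how its request pads get picked up by gst-launch, and the test sources are stand-ins for real demuxed tracks.

# Hypothetical sketch: one concat per track, kept in lockstep by streamsynchronizer
gst-launch-1.0 streamsynchronizer name=sync \
  concat name=vc ! queue ! sync.sink_0 sync.src_0 ! videoconvert ! autovideosink \
  concat name=ac ! queue ! sync.sink_1 sync.src_1 ! audioconvert ! autoaudiosink \
  videotestsrc num-buffers=100 ! vc.  videotestsrc num-buffers=100 pattern=ball ! vc. \
  audiotestsrc num-buffers=100 ! ac.  audiotestsrc num-buffers=100 wave=saw ! ac.

Note that the two concat elements switch independently on EOS, so the audio and video segments should have matching durations for the result to stay in sync.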


The concat element has request sinkpads, and it concatenates streams in the order in which those sinkpads were requested. All streams except for the currently playing one are blocked until the currently playing one sends an EOS event, and then the next stream will be unblocked. You can request and release sinkpads at any time, and releasing the currently playing sinkpad will cause concat to switch to the next one immediately.

Currently concat only works with segments in GST_FORMAT_TIME and GST_FORMAT_BYTES format, and requires all streams to have the same segment format.

From the application side you could implement the same behaviour that concat provides by using pad probes (waiting for EOS) and pad offsets (gst_pad_set_offset()) to adjust the running times, but using the concat element makes this a lot easier to implement.

25 thoughts on “Concatenate multiple streams gaplessly with GStreamer”

    1. Thank you. This solved my problem of figuring out how to do gapless. Where can I find information about 1.6?

      1. There’s no information about 1.6 yet; you can build the latest code from GStreamer’s GIT master branch though, which at some point will become 1.6. The plan is to start doing “preview/testing” releases for 1.6 some time this month.

  1. I tried reading the code for your concatenation element, but some of what you do is still beyond me. I’m attempting to recreate what your element does based on the theory you provided. In a pad probe on the EOS event of a sink pad, I block the source pad of the pad before that sink pad and then I block the sink pad. Then, I unlink the same two pads. The element before the sink pad is a bin of many elements. I then remove the bin from the pipeline and put it in a linked list for disposal outside the probe. Then, I insert a prerolled bin of elements like the one that was removed from the pipeline. The new bin was created using the same function I used to create the part I removed. The song before it plays successfully. I lastly set the prerolled bin to a playing state from paused. All of it seems to work successfully to the point I described. The problem is when I try to link the pads of those two elements, the main thread locks up. I’m not sure why, and the debug info isn’t giving me any useful information. Here’s a graph of my pipeline directly before trying to link those two pads:

    The two blocked pads I’m trying to link are the RG (ReplayGain) element and the source pad of the bin, which is attached to that queue element. It’s cool if you don’t want to help me; I just don’t know of a better place to ask this question.

    1. Blocking the sinkpad that you unlink is not required, the srcpad would be enough.

      For the deadlock, can you get a backtrace of all threads when that happens (with gdb)? Also having your code would be useful, in theory this should work just fine this way. Also see this. The sink example is very similar to what you want to do.

  2. Hmm. I’m not sure how to get a backtrace in GDB when the program deadlocks like that. I did read that post yesterday while attempting to find what I am doing wrong. Maybe it’s because I didn’t “initialize a variable that protects our probe callback from multiple”? Would an atomic bool be suited for it? The file where I’m doing this is Playback/ The relevant code starts at line 556 and the linking that causes the deadlock is at 707. I’m sorry it’s such a mess.

    1. You would run it in gdb and then do
      thread apply all backtrace
      and then get all the output you can. You need to have debug symbols installed for everything relevant, though.

  3. I ended up using your concat element instead of implementing something like it myself. I have it working “gaplessly” fine, but I have no idea why the duration of the pipeline doesn’t change to that of the new stream once it starts.

    Could you possibly have any idea why that is happening and/or what I could do to solve this problem?

    1. Do you query the duration explicitly all the time, or do you wait for a duration-changed message before you query the duration again?

      1. I’m querying it all of the time. Why would that affect the duration? What should I watch the duration-changed message with? I can’t find any information about it. Also, seeking my pipeline doesn’t work right now: it always seeks past the position it should, and not by a completely predictable amount. That only started once I introduced the concat element.

      2. If you query it all the time, then everything’s fine. But it’s not going to change anyway unless you get the duration-changed message.

        The only way to find the reason for this is to check the debug logs to see what answers the duration query and why it uses that number.

      3. I solved the problem with the duration not being updated. It was a result of not removing the previous stream once the next one starts. If the first stream is not removed from the pipeline after it ends, the pipeline uses its duration. I’m assuming this is by design.

        I still have two problems, though. When I try to seek the pipeline, it still seeks farther than I choose to. I could do some more checks to completely verify it isn’t a problem I’m causing, but I’m fairly positive it is not my fault. Also, when trying to seek the pipeline in a paused state, the main thread locks up. Here is the backtrace: It seems to be deadlocking while waiting for a mutex lock.

        I can’t think of any reasons that could cause those two problems other than the addition of the concat element. Could it be the reason for the seeking being problematic? I didn’t have these problems before adding it to my pipeline.

        Other than those problems, the concat element works wonderfully. Maybe the fault is still mine, but so far it seems like the concat element might have a bug or two still. If not, I’m sorry to bother you with these problems. I just can’t seem to find any other reason that could cause those two problems.

      4. Best would be to report a bug against GStreamer with a testcase to reproduce the problem. I’m sure there are still bugs in concat 🙂 If you provide backtraces, please install debug symbols for all packages involved and also get backtraces of all threads.

  4. But how do I translate a.mp4 to b.ts with both video and audio?
    I use gst-launch-1.0 filesrc location=a.mp4 ! queue ! qtdemux name=de ! avdec_h264 ! x264enc ! mpegtsmux name=mux ! filesink location=b.ts de. ! aacparse ! mux.

    it says:
    Redistribute latency…

    Is streamsynchronizer needed here? Please help, thanks.

  5. I am trying to assemble time-lapse videos, AS EACH FRAME GETS CAPTURED, in a processor efficient way. My hope is that I could capture an image, have ‘gstreamer’ encode it into a H.264 video frame, then have ‘gstreamer’ merge that new frame with the “movie” that resulted from the previous frames. That way the “movie” is ALWAYS UP-TO-DATE (doesn’t need a lengthy encode process whenever you want to see the resulting movie).

    Your concatenate element seems like it might provide the needed functionality. My hope is that, as each “frame” is encoded the same way, no re-encoding has to happen… so that the 1000th frame gets essentially appended without re-encoding the previous ones. Only the file read/write operations would become more lengthy over time, and that can be lessened by setting up a ram-disk, it seems.

    Could you suggest a pipeline that might do that? All of this, hopefully would run on a Raspberry Pi with a USB Webcam. No audio in the movie necessary. And of course I am assuming that ‘concat’ doesn’t force a full re-encode. If it does then all bets are off (it won’t buy me anything, really).

    1. You don’t really need the concat element for that. Just have a normal encoding pipeline and pass each frame into it one by one as it becomes available. To be able to play the resulting file after each frame, you’ll need to a) configure the h264 encoder to always produce I-frames, b) configure the h264 encoder to have zero latency, and c) use a container format that allows this (not MP4; Matroska in streaming mode, MPEG-TS, or raw h264 in byte-stream format).
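      A starting point might look like this. This is an untested sketch: the v4l2src capture source and the property choices (tune=zerolatency and key-int-max=1 on x264enc, streamable=true on matroskamux) are assumptions to check against the plugin versions you have installed.

      # Hypothetical sketch: all-I-frame, zero-latency h264 in streamable Matroska
      gst-launch-1.0 v4l2src ! videoconvert ! \
        x264enc tune=zerolatency key-int-max=1 ! h264parse ! \
        matroskamux streamable=true ! filesink location=timelapse.mkv

      With key-int-max=1 every frame is a keyframe, so the file should stay playable up to the last frame written; for actual time-lapse capture you would additionally drop or rate-limit frames (e.g. with videorate) rather than encode the full camera framerate.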

  6. Thanks slomo for the rapid response. I didn’t want to “pollute” this thread if it is off the topic of ‘append’, but I couldn’t find another way to contact you. Perhaps it is easy to wipe this comment and email me directly?

    I’m a noob to gstreamer and wear too many hats, so I don’t always know the nitty gritty details. If you can suggest a pipeline example, that might get me to where I can kick it into shape. I’m a bit lost on how an efficient source and sink arrangement would be set up. Using a pipe is probably inefficient (copying to/from the kernel) and I’m not sure if using a localhost IP port would be any better. One could start ‘gstreamer’ and let it run in the background while feeding a filesystem node with images, I suppose, but I am just wildly musing now. So maybe you can see why I am asking for an example.

    And it comes to mind: wouldn’t the resulting file, while it is in an incomplete state, lack an “end-of-file” marker? And perhaps the file would be locked to the ‘gstreamer’ process? Too many questions, I know!

    Again, sorry to bomb the thread. Hopefully we can take it offline. I’d like to be able to share a solution with the Maker world as there are a bunch of enthusiasts, mostly Raspberry Pi users, right now, who could use this.

  7. Hi Sebastian,

    I am trying to use concat with a uridecodebin source.
    My problem is that I want to link another source to concat in 2nd position.
    So I think I need to link the 2nd source from the pad-added signal from uridecodebin, after uridecodebin has been linked.
    But if I do so, I get errors from the 2nd source saying “pad not activated yet” and “streaming stopped, reason not-linked (-1)”.

    Can you please tell me what I should do to be able to link concat with uridecodebin in 1st position?

  8. How about adding a signal to notify the user when the active pad switches? In my use case, I want to keep the pipeline running and feed clips dynamically and randomly to concat. I want such a signal so I have an opportunity to do some cleanup on a clip and its elements after it reaches EOS. If any other equivalent method already exists, please point it out. Thanks.
