VideoHelp Forum
  1. I have been struggling with this problem for around 2 weeks.

    So, this is what I'm trying to do:
    • Read from two input sources (in this case two UDP URL streams sent from the ffmpeg command line)
    • Apply a filter (e.g. something like `[0:v]scale=1024x768[scale];[scale][1:v]overlay`; the fully assembled graph string is sketched right after this list)
    • Decode the packet(s) into their respective frames
    • Use the `av_buffersrc_add_frame()` and `av_buffersink_get_frame()` functions to retrieve the final "filtered" frame to use later.
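
    For clarity, with two inputs the fully assembled string that gets handed to avfilter_graph_parse2() (built in CreateFilterGraph() below) ends up looking roughly like the following. The buffer arguments here are placeholder values, and note that the filter portion has to consume the [in_0]/[in_1] labels that the generated buffer sources emit:

    Code:
    buffer=video_size=1024x768:pix_fmt=0:time_base=1/90000:pixel_aspect=1/1[in_0]; buffer=video_size=320x240:pix_fmt=0:time_base=1/90000:pixel_aspect=1/1[in_1]; [in_0]scale=1024x768[scale];[scale][in_1]overlay[out]; [out] buffersink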

    When I try to call the av_buffersink_get_frame() function to retrieve the "filtered" frame, I constantly get a return value of -11 (AVERROR(EAGAIN)). According to the documentation, when this happens "more input frames must be added to the filtergraph to get more output". I assumed this meant I must repeat the process: decode more packets into frames and call av_buffersrc_add_frame() and av_buffersink_get_frame() again. But when I do, the result never changes; I keep getting -11.
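
    For reference, the overall feed/drain pattern I believe I should be following (a minimal sketch only, using the names from my code below) is:

    Code:
    // Simplified feed/drain loop (sketch). With a multi-input filter such as
    // overlay, the sink keeps returning AVERROR(EAGAIN) until every buffer
    // source has been fed frames whose timestamps can be lined up.
    while (true)
    {
        // Feed one decoded frame into EVERY buffer source first...
        bool fedAll = true;
        for (size_t i = 0; i < urlInputs.size(); ++i)
            fedAll = DecodeAndAddFrameToBuffersrc(urlInputs[i].VidCtx) && fedAll;
        if (!fedAll)
            break;

        // ...then drain the sink until it asks for more input.
        int ret;
        while ((ret = av_buffersink_get_frame(buffersinkCtx, filterFrame)) >= 0)
        {
            // ... use filterFrame here ...
            av_frame_unref(filterFrame);
        }
        if (ret != AVERROR(EAGAIN))
            break; // AVERROR_EOF or a real error
    }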

    The strange thing is that when I only provide one input source instead of two, and provide the appropriate filter string, I don't have a problem. The "filtered" frame is valid and I can use it just fine...

    Here is all of my code (using ffmpeg v4.3.2)

    Functions

    Code:
    typedef struct VideoContext
    {
        AVFormatContext* FormatCtx;
        int StreamIndex;
        AVCodecContext* CodecCtx;
        AVPacket Packet;
        AVFrame* Frame;
        AVFilterContext* BufferSrcCtx;
    
        VideoContext() :
            FormatCtx(nullptr),
            StreamIndex(-1),
            CodecCtx(nullptr),
            Packet(),
        Frame(nullptr),
        BufferSrcCtx(nullptr)
        {
        }
    } VideoContext;
    
    typedef struct UrlInput
    {
        std::string UrlStr;
        DataTypes::VideoContext VidCtx;
    } UrlInput;
    
    void CleanupVideoContext(DataTypes::VideoContext& vidCtx)
    {
        if (vidCtx.BufferSrcCtx)
        {
            avfilter_free(vidCtx.BufferSrcCtx);
            vidCtx.BufferSrcCtx = NULL;
        }
    
        if (vidCtx.Frame)
        {
            av_frame_free(&vidCtx.Frame);
            vidCtx.Frame = NULL;
        }
    
        if (vidCtx.CodecCtx)
        {
            avcodec_free_context(&vidCtx.CodecCtx);
            vidCtx.CodecCtx = NULL;
        }
    
        if (vidCtx.FormatCtx)
        {
            avformat_close_input(&vidCtx.FormatCtx);
            vidCtx.FormatCtx = NULL;
        }
    }
    
    void Cleanup(AVFilterGraph*& filterGraph, AVFilterContext*& buffersinkCtx,
        AVFrame*& filterFrame, std::vector<UrlInput>& inputs)
    {
        for (int i = 0; i < inputs.size(); ++i)
        {
            CleanupVideoContext(inputs[i].VidCtx);
        }
    
        if (buffersinkCtx)
        {
            avfilter_free(buffersinkCtx);
            buffersinkCtx = NULL;
        }
    
        if (filterGraph)
        {
            avfilter_graph_free(&filterGraph);
            filterGraph = NULL;
        }
    
        if (filterFrame)
        {
            av_frame_free(&filterFrame);
            filterFrame = NULL;
        }
    }
    
    bool OpenUrlVideo(UrlInput& input)
    {
        int attemptsLeft = 5;
    
        do
        {
            int result = avformat_open_input(&input.VidCtx.FormatCtx,
                input.UrlStr.c_str(), NULL, NULL);
            if (result < 0)
            {
                Utils::LogAVMessage(result, AV_LOG_ERROR,
                    "Cannot open url...");
                CleanupVideoContext(input.VidCtx);
                continue;
            }
    
            result = avformat_find_stream_info(input.VidCtx.FormatCtx, NULL);
            if (result < 0)
            {
                Utils::LogAVMessage(result, AV_LOG_ERROR,
                    "Cannot find stream information...");
                CleanupVideoContext(input.VidCtx);
                continue;
            }
    
            AVCodec* codec = nullptr;
            int streamIdx = av_find_best_stream(input.VidCtx.FormatCtx,
                AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
            const char* errMsg = "";
            if (!Utils::Ffmpeg::IsStreamIndexValid(streamIdx, errMsg))
            {
                Utils::LogAVMessage(errMsg, AV_LOG_ERROR);
                CleanupVideoContext(input.VidCtx);
                continue;
            }
    
            AVStream* stream = input.VidCtx.FormatCtx->streams[streamIdx];
            input.VidCtx.StreamIndex = streamIdx;
    
            if (!codec)
            {
                const AVCodec* tempCodec =
                    avcodec_find_decoder(stream->codecpar->codec_id);
                codec = const_cast<AVCodec*>(tempCodec);
                if (!codec)
                {
                    Utils::LogAVMessage("Could not find decoder...",
                        AV_LOG_ERROR);
                    CleanupVideoContext(input.VidCtx);
                    continue;
                }
            }
    
            input.VidCtx.CodecCtx = avcodec_alloc_context3(codec);
            if (!input.VidCtx.CodecCtx)
            {
                Utils::LogAVMessage("Could not create decoding context...",
                    AV_LOG_ERROR);
                CleanupVideoContext(input.VidCtx);
                continue;
            }
    
            result = avcodec_parameters_to_context(input.VidCtx.CodecCtx,
                stream->codecpar);
            if (result < 0)
            {
                Utils::LogAVMessage(result, AV_LOG_ERROR,
                    "Could not assign codec parameters...");
                CleanupVideoContext(input.VidCtx);
                continue;
            }
    
            result = avcodec_open2(input.VidCtx.CodecCtx, codec, NULL);
            if (result < 0)
            {
                Utils::LogAVMessage(result, AV_LOG_ERROR,
                    "Cannot open video decoder...");
                CleanupVideoContext(input.VidCtx);
                continue;
            }
    
            if (input.VidCtx.CodecCtx->width == 0 ||
                input.VidCtx.CodecCtx->height == 0)
            {
                Utils::LogAVMessage("Codec Context's width and/or height
                    is 0...", AV_LOG_ERROR);
                CleanupVideoContext(input.VidCtx);
                continue;
            }
            break;
    
        } while (--attemptsLeft > 0);
        
        // attemptsLeft is only decremented on failed attempts, so it is
        // still positive if and only if an attempt succeeded
        return attemptsLeft > 0;
    }
    
    inline void FreeFilterInOuts(AVFilterInOut** outs, AVFilterInOut** ins)
    {
        if (outs)
        {
            avfilter_inout_free(outs);  // frees the list and sets *outs to NULL
        }

        if (ins)
        {
            avfilter_inout_free(ins);   // frees the list and sets *ins to NULL
        }
    }
    
    bool CreateFilterGraph(AVFilterGraph*& filterGraph,
        AVFilterContext*& buffersinkCtx, std::vector<UrlInput>& inputs,
        std::string filterStr)
    {
        const AVFilter* buffsrc = avfilter_get_by_name("buffer");
        const AVFilter* buffsink = avfilter_get_by_name("buffersink");
    
        AVFilterInOut* filterOuts = avfilter_inout_alloc();
        AVFilterInOut* filterIns = avfilter_inout_alloc();
    
        int numOfInputs = static_cast<int>(inputs.size());
    
        std::vector<char*> buffsrcNames;
        const char* buffsinkName = "";
    
        filterGraph = avfilter_graph_alloc();
        if (!filterGraph)
        {
            Utils::LogAVMessage("Could not allocate filter graph...",
                AV_LOG_ERROR);
            FreeFilterInOuts(&filterOuts, &filterIns);
            filterGraph = NULL;
            return false;
        }
    
        std::string tempFilterStr = "";
    
        std::vector<char[512]> argsList(numOfInputs);
        for (int i = 0; i < numOfInputs; ++i)
        {
            // Start each argsList entry as an empty string
            argsList[i][0] = '\0';
    
            AVStream* stream = inputs[i].VidCtx.FormatCtx->
                streams[inputs[i].VidCtx.StreamIndex];
            int width = inputs[i].VidCtx.CodecCtx->width;
            int height = inputs[i].VidCtx.CodecCtx->height;
            AVPixelFormat pixFmt = inputs[i].VidCtx.CodecCtx->pix_fmt;
            AVRational timeBase = stream->time_base;
            AVRational aspectRatio =
                inputs[i].VidCtx.CodecCtx->sample_aspect_ratio;
    
            // Define input properties
            std::snprintf(argsList[i].data(), argsList[i].size(),
                "buffer=video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d[in_%d]; ",
                width, height, pixFmt, timeBase.num, timeBase.den,
                aspectRatio.num, aspectRatio.den, i);
    
            tempFilterStr += argsList[i].data();
        }
    
        filterStr = tempFilterStr + filterStr + "; [out] buffersink";
    
        Utils::LogAVMessage((std::string("\nFull Filter String:\n\t") +
            filterStr + "\n").c_str(), AV_LOG_WARNING);
    
        int result = avfilter_graph_parse2(filterGraph, filterStr.c_str(),
            &filterIns, &filterOuts);
        if (result < 0)
        {
            Utils::LogAVMessage(result, AV_LOG_ERROR,
                "Could not parse filter graph...");
            FreeFilterInOuts(&filterOuts, &filterIns);
            avfilter_graph_free(&filterGraph);
            return false;
        }
    
        result = avfilter_graph_config(filterGraph, NULL);
        if (result < 0)
        {
            Utils::LogAVMessage(result, AV_LOG_ERROR,
                "Could not configure filter graph...");
            FreeFilterInOuts(&filterOuts, &filterIns);
            avfilter_graph_free(&filterGraph);
            return false;
        }
    
        for (uint32_t i = 0; i < filterGraph->nb_filters; ++i)
        {
            if (Utils::ContainsString(filterGraph->filters[i]->name,
                "buffer_"))
            {
                buffsrcNames.push_back(filterGraph->filters[i]->name);
            }
    
            if (Utils::ContainsString(filterGraph->filters[i]->name,
                "buffersink_"))
            {
                buffsinkName = filterGraph->filters[i]->name;
            }
        }
    
        for (int i = 0; i < numOfInputs; ++i)
        {
            // Get the Parsed_buffer_x inputs(s)
            inputs[i].VidCtx.BufferSrcCtx =
                avfilter_graph_get_filter(filterGraph, buffsrcNames[i]);
            if (!inputs[i].VidCtx.BufferSrcCtx)
            {
                Utils::LogAVMessage(
                "avfilter_graph_get_filter() returned a NULL bufersrc context",
                AV_LOG_ERROR);
                FreeFilterInOuts(&filterOuts, &filterIns);
                avfilter_graph_free(&filterGraph);
                return false;
            }
        }
    
        buffersinkCtx = avfilter_graph_get_filter(filterGraph, buffsinkName);
        if (!buffersinkCtx)
        {
            Utils::LogAVMessage(
            "avfilter_graph_get_filter() returned a NULL buffersink context",
            AV_LOG_ERROR);
            FreeFilterInOuts(&filterOuts, &filterIns);
            avfilter_graph_free(&filterGraph);
            return false;
        }
    
        FreeFilterInOuts(&filterOuts, &filterIns);
        return true;
    }
    
    bool DecodeAndAddFrameToBuffersrc(DataTypes::VideoContext& vidCtx)
    {
        int result = 0;
        while (1)
        {
            result = av_read_frame(vidCtx.FormatCtx, &vidCtx.Packet);
            if (result < 0)
            {
                break;
            }
    
            if (vidCtx.Packet.stream_index == vidCtx.StreamIndex)
            {
                result = avcodec_send_packet(vidCtx.CodecCtx, &vidCtx.Packet);
                if (result < 0)
                {
                    Utils::LogAVMessage(result, AV_LOG_ERROR,
                        "Error while sending a packet to the decoder...");
                    break;
                }
    
                while (result >= 0)
                {
                    result = avcodec_receive_frame(vidCtx.CodecCtx,
                        vidCtx.Frame);
                    if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
                    {
                        break;
                    }
                    else if (result < 0)
                    {
                        Utils::LogAVMessage(result, AV_LOG_ERROR,
                        "Error while receiving a frame from the decoder...");
                        return false;
                    }
    
                    vidCtx.Frame->pts = vidCtx.Frame->best_effort_timestamp;
    
                    // push the decoded frame into the filtergraph
                    if (av_buffersrc_add_frame_flags(vidCtx.BufferSrcCtx,
                        vidCtx.Frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                    {
                        Utils::LogAVMessage(
                        "Error while feeding the filtergraph...",
                        AV_LOG_ERROR);
                        break;
                    }
    
                    av_packet_unref(&vidCtx.Packet);
                    return true;
                }
            }
            av_packet_unref(&vidCtx.Packet);
        }
    
        return false;
    }
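
    A note on the name lookups in CreateFilterGraph() above: from what I've observed (I haven't found this guaranteed anywhere in the docs), avfilter_graph_parse2() auto-names the parsed filter instances along the lines of Parsed_buffer_0 and Parsed_buffersink_2, which is why I search the filter names for "buffer_" and "buffersink_". Printing the names makes this easy to verify:

    Code:
    // List every filter instance the parser actually created in the graph
    for (uint32_t i = 0; i < filterGraph->nb_filters; ++i)
    {
        Utils::LogAVMessage(filterGraph->filters[i]->name, AV_LOG_WARNING);
    }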

    How the Above Code Gets Used

    Code:
    int main(int argc, char** argv)
    {
        ConfigManager configManager("ConfigFile.xml");
    
        if (!configManager.IsLoaded())
        {
            return -1;
        }
    
        std::vector<DataTypes::InputConfig> inputConfigs =
            configManager.GetInputConfigs();
        DataTypes::FilterConfig filterConfig = configManager.GetFilterConfig();
    
        if (!filterConfig.IsEnabled)
        {
            Utils::LogAVMessage("Forgot to provide a filter...", AV_LOG_ERROR);
            Logger::DestroyInstance();
            return -1;
        }
    
        // Create list of VideoContexts for inputs
        std::vector<UrlInput> urlInputs;
    
        for (int i = 0; i < inputConfigs.size(); ++i)
        {
            UrlInput input;
            input.UrlStr = inputConfigs[i].Option.Url.UrlString;
            if (!OpenUrlVideo(input))
            {
                Utils::LogAVMessage("Failed to Open Url Video...",
                    AV_LOG_ERROR);
                Logger::DestroyInstance();
                return -1;
            }
    
            input.VidCtx.Frame = av_frame_alloc();
            if (!input.VidCtx.Frame)
            {
                Utils::LogAVMessage("Failed to allocate frame...",
                    AV_LOG_ERROR);
                Logger::DestroyInstance();
                return -1;
            }
    
        // According to the ffmpeg doxygen 4.1 filtering_video.c example,
        //    the AVPacket doesn't get initialized
            // ...
    
            Utils::LogAVMessage("\nSuccessfully Opened Url Video\n",
                AV_LOG_WARNING);
    
            urlInputs.push_back(input);
        }
    
        AVFilterGraph* filterGraph = nullptr;
        AVFilterContext* buffersinkCtx = nullptr;
        AVFrame* filterFrame = nullptr;
    
        // Create and populate filter graph and filter contexts
        if (!CreateFilterGraph(filterGraph, buffersinkCtx, urlInputs,
            filterConfig.FilterString))
        {
            Utils::LogAVMessage(
                "Failed to create filter graph and filter contexts...",
                AV_LOG_ERROR);
            Cleanup(filterGraph, buffersinkCtx, filterFrame, urlInputs);
            Logger::DestroyInstance();
            return -1;
        }
    
        Utils::LogAVMessage(
            "Successfully Created Filter Graph and Filter Contexts",
            AV_LOG_WARNING);
    
        // Initialize Filter Frame
        filterFrame = av_frame_alloc();
        if (!filterFrame)
        {
            Utils::LogAVMessage("Failed to allocate filter frame...",
                AV_LOG_ERROR);
            Cleanup(filterGraph, buffersinkCtx, filterFrame, urlInputs);
            Logger::DestroyInstance();
            return -1;
        }
    
        // Decode Packets to Frames, Add them to Buffersrc Contexts,
        //      then Retrieve Filter Frame from Buffersink Context
    
        int attemptsLeft = 10;
        bool succeeded = false;
    
        int framesLeft = 100;
        bool gotFrame = false;
    
        do
        {
            for (int i = 0; i < urlInputs.size(); ++i)
            {
                if (!DecodeAndAddFrameToBuffersrc(urlInputs[i].VidCtx))
                {
                    // May need to free some memory here...
                    --attemptsLeft;
                    succeeded = false;
                    continue;
                }
            }
    
            int result = 0;
            // Retrieve Filter Frame from Buffersink Context
            while (framesLeft > 0)
            {
                result = av_buffersink_get_frame(buffersinkCtx, filterFrame);
                if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
                {
                    if (gotFrame)
                    {
                        --framesLeft;
                        // Prep for next iteration
                        gotFrame = false;
                    }
                    else
                    {
                        Utils::LogAVMessage(
                            "FAILED TO RETRIEVE FILTER FRAME...",
                            AV_LOG_ERROR);
                        succeeded = false;
                    }
                    
                    if (framesLeft <= 0)
                    {
                        succeeded = true;
                    }
                    break;
                }
                
                if (result < 0)
                {
                    succeeded = false;
                    // Make sure we exit out of both loops
                    attemptsLeft = 0;
                    break;
                }
    
                // Display frame info
                std::string frameInfoStr = std::string("Frame width:") +
                    std::to_string(filterFrame->width) +
                    " Frame height:" + std::to_string(filterFrame->height) +
                    " Frame Pixel Format:" +
                Utils::PixelFormatToString(
                    static_cast<AVPixelFormat>(filterFrame->format)) +
                    " Frame pts:" + std::to_string(filterFrame->pts) +
                    " Frame dts:" + std::to_string(filterFrame->pkt_dts);
    
                Utils::LogAVMessage(frameInfoStr.c_str(), AV_LOG_WARNING);
                av_frame_unref(filterFrame);
    
                gotFrame = true;
            }
            for (int i = 0; i < urlInputs.size(); ++i)
            {
                av_frame_unref(urlInputs[i].VidCtx.Frame);
            }
    
        } while (attemptsLeft > 0 && !succeeded);
    
    
        if (succeeded)
        {
            Utils::LogAVMessage("Successfully retrieved filtered frame(s)...",
                AV_LOG_WARNING);
        }
        else
        {
            Utils::LogAVMessage("Failed to retrieve all filtered frames...",
                AV_LOG_ERROR);
        }
    
        Cleanup(filterGraph, buffersinkCtx, filterFrame, urlInputs);
        Logger::DestroyInstance();
    
        return 0;
    }
    I found out how to use the avfilter_graph_parse2() function from this
    stackoverflow question:
    https://stackoverflow.com/questions/22414753/implementing-a-multiple-input-filter-grap...brary-in-andro

    I found out how to decode "filtered" frames from the ffmpeg doxygen
    example:
    http://www.ffmpeg.org/doxygen/4.1/filtering_video_8c-example.html

    I apologize for the amount of code, but I want to be thorough with this question. Again, when I call the av_buffersink_get_frame() function, I always get a result of -11 (AVERROR(EAGAIN)). I also didn't mention that within the CreateFilterGraph() function, after avfilter_graph_config() finishes, the linked list of filters doesn't appear to be linked together for some reason... Please let me know what I'm doing wrong that is potentially leading to this issue. Thank you in advance for your help.
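
    In case someone wants to inspect that linking for themselves, one option (a sketch; as I understand it, avfilter_graph_dump() returns a malloc'ed string describing the whole topology, which must be freed with av_free()) is:

    Code:
    // Print the configured graph's topology (filters, pads, and links)
    char* graphDump = avfilter_graph_dump(filterGraph, NULL);
    if (graphDump)
    {
        Utils::LogAVMessage(graphDump, AV_LOG_WARNING);
        av_free(graphDump);
    }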
  2. Yeah, I thought too that it might be about your 2 input streams [0:v] and [1:v] that you need for the overlay. My guess would have been that you somehow fail to mark your input frames with which source they come from. But quickly reading the docs and some code didn't give me any hint on what exactly is missing.
    However, the correct place to post such a question would be the ffmpeg user chat on Libera; you can usually find someone there who is able to help with how-to-use-the-API topics.
  3. Thank you so much for the reply. Could you provide a link to the user chat on Libera? The current link in your response doesn't point to a user chat.
  4. Originally Posted by N77:
    The current link in your response doesn't point to a user chat.
    That's because I actually didn't provide any link; videohelp decided on its own to highlight and link the word ffmpeg (kind of like it is in a wiki).

    Libera is an IRC chat network (https://libera.chat/); you need an IRC client to connect to it.
    I personally use a cloud service as my IRC client, https://www.irccloud.com/ (the free version, of course). Creating my account on irccloud and connecting to the #ffmpeg channel was pretty straightforward.
    Just avoid the #ffmpeg channel that is provided on the "freenode" servers; they are obsolete and you will not get a response. When searching for #ffmpeg channels in your IRC client, you might find both the libera and the freenode versions.

    It can be tedious to get started there, and you need to be patient. I'd post the question along with the code you have in the topic above; code is always good, and minimized versions are even better. But use a pastebin service to post your code (so you actually just post a link to it). I can select "post a text snippet" in my IRC client, which does the pastebin stuff automatically.
    You may need to wait an hour or so if nobody replies; in that case you can re-post your question, or get online at another time of day when more users are around.
    Don't expect it to work like a usual chat; people don't really "chat" there. Typically there are lots of people listening until something interesting to them appears... a message like "hi, someone here?" isn't interesting to anyone, so you get better results by just posting, in a few quick words along with code, what you want.

    Last but not least, here is the link to the ffmpeg page where they officially state that Libera is their official IRC channel host:
    https://ffmpeg.org/contact.html#IRCChannels
  5. Awesome, thank you. Before I head over there, I found a temporary "fix" for the code above.
    It turns out that the
    Code:
    vidCtx.Frame->pts = vidCtx.Frame->best_effort_timestamp;
    line was causing the problem. When I start both of my ffmpeg command-line apps (the ones sending the UDP URL streams) at around the same time, there's no issue. But when I launch them a few minutes apart and then run my app, av_buffersink_get_frame always returns -11 again. I figure that means there's an issue with how the ffmpeg API tries to reconcile the mismatched timestamps. So, I used a hack to replace
    Code:
    vidCtx.Frame->pts = vidCtx.Frame->best_effort_timestamp;
    with
    Code:
    vidCtx.Frame->pts = ++pts;
    where pts starts at 0. This allowed the application to run fine, but I found an issue: when one of the incoming UDP URLs comes from a "screen-grab" command, that portion of the output video slows down over time and continually lags behind the actual video speed. At this point, I'm working on finding out why that "screen-grab" video ends up lagging so much...
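
    In case it helps anyone, the direction I'm experimenting with instead of the counter hack is normalizing each input's timestamps so that both streams start at 0. This is only a sketch: FirstPts is a hypothetical int64_t member I'd add to VideoContext, initialized to AV_NOPTS_VALUE.

    Code:
    // Sketch: shift each stream so its first frame lands at pts 0 in its own
    // time base (FirstPts is a hypothetical new VideoContext member).
    if (vidCtx.FirstPts == AV_NOPTS_VALUE)
        vidCtx.FirstPts = vidCtx.Frame->best_effort_timestamp;
    vidCtx.Frame->pts = vidCtx.Frame->best_effort_timestamp - vidCtx.FirstPts;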


