VideoHelp Forum


Thread
  1. Member
    I would like to script on Linux the download of a video stream that is playing on a webpage.
    I have tried yt-dlp with the --downloader ffmpeg argument, and it works fine if I have the player URL available.
    But to get that URL I have to manually start the player, then right-click it and select "Copy Video URL".

    So is there a way (on Linux) to script the extraction of the actual video URL to use for yt-dlp to grab the stream?

    The source page I am trying to download from is this.
    Once I have the URL I can do this:

    Code:
    yt-dlp -S 'res:500' --downloader ffmpeg --downloader-args "ffmpeg:-t 540" -o NewsNow.mp4 videourl
    So what I need is a way to automatically decode the video URL used by the player on the website.
    Any ideas?
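
    For reference, this is roughly how I would wrap that command once the URL can be kept up to date in a file (the file path below is just a placeholder); the missing piece is how to fill that file automatically:

    Code:
    #!/bin/bash
    # Sketch only: read the player URL from a file that something else keeps current
    VIDEOURL=$(cat "$HOME/nbcnow/current_url.txt")
    yt-dlp -S 'res:500' --downloader ffmpeg --downloader-args "ffmpeg:-t 540" \
           -o NewsNow.mp4 "$VIDEOURL"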
  2. Member
    Does really no-one know how, or if, this can be done?

    Here is where I am now:

    The YouTube URL embedded on the webpage has these problems:
    1) It only lasts for a short time, say a day or so.
    So I must find a way to programmatically extract the current YouTube URL (or ID).

    2) I have tried using the URL with yt-dlp to download, but I have found no timeout argument for it, so I tried the following:
    - Manually stopping the download by hitting Ctrl-C on the keyboard after 30 s.
    This creates an mp4 file on disk, but its playing time is a bit longer than 30 s (not a big problem).

    - Starting the command like this, with the timeout utility, works for me:
    Code:
    timeout --signal=SIGINT 30 yt-dlp -o testnew.mp4 https://youtu.be/videoID
    In this case an mp4 file is written out when the command ends by itself!

    So I could use yt-dlp with timeout and it would solve my downloading problem!
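
    Wrapped into a tiny script it could look like this (the duration and file name are just placeholders; as far as I can tell GNU timeout exits with code 124 when it had to send the signal):

    Code:
    #!/bin/bash
    # Stop yt-dlp after a fixed number of seconds by letting timeout send SIGINT
    DURATION=30
    timeout --signal=SIGINT "$DURATION" yt-dlp -o testnew.mp4 "https://youtu.be/videoID"
    STATUS=$?
    if [ "$STATUS" -eq 124 ]; then
        echo "Recording stopped after ${DURATION}s (timeout sent SIGINT)"
    elif [ "$STATUS" -ne 0 ]; then
        echo "yt-dlp exited with code $STATUS" >&2
    fi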

    But I then need to be able to get the URL of the YouTube stream from the hosting site.
    How can that be done?
    Apparently the URL is not present in the original page source; only after I click the play button does it become available.
    And it seems to have a limited lifetime, so it needs to be retrieved close to the time it will be used.
  3. Don't use bash; it's fairly limited for this sort of thing. Use Python or php-cli - I prefer php-cli myself. Learn how to use curl and json_decode() in PHP; you'll need both of them for what you're trying to achieve.

    Almost all streaming sites have API endpoints which can be queried for the MPD manifests, Widevine license URLs etc.; these endpoints mostly return JSON.
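
    For example, querying such an endpoint from the shell could look like this (the endpoint URL and field names are made up purely for illustration; the same pattern works in php-cli with curl and json_decode()):

    Code:
    #!/bin/bash
    # Hypothetical JSON API endpoint; jq pulls the manifest URL out of the response
    curl -s 'https://example.com/api/playback/12345' | jq -r '.manifest.mpdUrl'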

    Sometimes there are no API endpoints and the JSON is served between <script></script> tags in the HTML of the video page; you can use a regex with preg_match() to extract it and then feed it through json_decode().
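
    A rough shell equivalent of that preg_match() plus json_decode() step, assuming the assignment sits on a single line (the page URL, variable name and JSON field are made up for illustration):

    Code:
    #!/bin/bash
    # Fetch the video page, cut the JSON blob out of the <script> block, read one field
    PAGE_URL='https://example.com/watch/12345'
    HTML=$(curl -s "$PAGE_URL")
    JSON=$(printf '%s\n' "$HTML" | sed -n 's/.*var playerConfig = \({.*}\);.*/\1/p')
    printf '%s\n' "$JSON" | jq -r '.manifestUrl'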

    Use Chrome dev tools to see what api endpoints are being queried.

    I've written scripts which query the api end points of some sites and download entire TV series, decrypt, mkvmerge and tag the whole lot in one go.
  4. Member
    I have tracked the URL for the last half day in order to see when it switches.
    It turns out it switched to a new one at about 7AM ET (between my time points 12:17 and 14:12)...

    I have also checked that the old URLs are still available, but now as an 11:54:55-long video file, so the player shows a duration of almost 12 hours.
    The live stream continues on a different URL at that point.

    My script now reads the URL from a file that I need to update as the stream changes.
    I am doing the updates manually, but that is really not feasible for recordings made at nighttime in my time zone (UTC+2)...
    And with a 12-hour change rate...
    Last edited by BosseB; 27th Oct 2023 at 11:29.
  5. code removed because it is not bash code
    Last edited by jack_666; 3rd Nov 2023 at 17:11.
  6. Member
    Thanks, but...
    I tried to run the command on the Linux command line but it returned this:

    Code:
    mpv 0.32.0 Copyright © 2000-2020 mpv/MPlayer/mplayer2 projects
     built on UNKNOWN
    ffmpeg library versions:
       libavutil       56.31.100
       libavcodec      58.54.100
       libavformat     58.29.100
       libswscale      5.5.100
       libavfilter     7.57.100
       libswresample   3.5.100
    ffmpeg version: 4.2.7-0ubuntu0.1+esm3
    
    Usage:   mpv [options] [url|path/]filename
    
    Basic options:
     --start=<time>    seek to given (percent, seconds, or hh:mm:ss) position
     --no-audio        do not play sound
     --no-video        do not play video
     --fs              fullscreen playback
     --sub-file=<file> specify subtitle file to use
     --playlist=<file> specify playlist file
    
     --list-options    list all mpv options
     --h=<string>      print options which contain the given string in their name
    I don't know what mpv is, though. But the arguments sent to it do not look OK (if I remove the last pipe and what follows).

    And I do not understand how anything gets piped when the first curl command has this:
    Code:
    curl --silent --output /dev/null
    Will this not stop any output from being piped?
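
    As far as I can tell from the curl man page, --output /dev/null sends the response body to /dev/null instead of stdout, so the pipe after it receives nothing:

    Code:
    # Body goes to /dev/null, so grep sees empty input (example.com as a stand-in)
    curl --silent --output /dev/null https://example.com | grep -c .
    # Without --output the body flows through the pipe as expected
    curl --silent https://example.com | grep -c .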

    By looking at the code I see that you had escaped a number of " quotes using \" so those will be OK, I guess...
    I also tried removing (from the end) the sections separated by pipes (|) so I could see intermediate results, but there was no sign of the YouTube video ID.

    I have been checking the URL of the stream regularly for a while. Until midnight Friday-Saturday the URL lasted just about 12 hours, but the one that started then is still active 37 hours later...
    Right now it is:
    Code:
    https://youtu.be/0NHsQWhsrcM
    I guess they have a different expire rule over the weekends.

    I can still only get the URL by right-clicking the playing stream and selecting "Copy video URL".
  7. Member
    Now the weekend, during which it kept the same URL for 61 hours, is over....
    After that it jumped back to about a 12-hour lifetime for the YT URL...
    So I would like a hint on how to extract the URL from the website in a script...

    Manual procedure
    On the page itself one has to refresh/load it, then wait for the player to appear fully.
    Next click the play link in the center of the player.
    Wait for the playback to start (and hit the mute button since the audio is set to max volume...)
    Now right-click the player and there will be a menu that contains a "Copy video url" item, click that.
    Now the YT URL is in the clipboard and can be pasted elsewhere.

    Can this be automated?
    Last edited by BosseB; 1st Nov 2023 at 10:03.
  8. Member
    Now confirmed: they (http://www.freeintertv.com/view/id-2563/1-1-2-1) switch the YouTube URL at 6 AM and 6 PM EDT every weekday, but during weekends it stays the same for the whole weekend.
    How can I automate the extraction of the currently used YouTube URL?
  9. Process explanation: open DevTools, filter on "php", and you will see this call:
    Code:
    curl 'http://www.freeintertv.com/modules/View/screen.php?nametv=NBC+Live+%28USA%29&idtv=2563&TVSESS=e89ghfri7kcov2ilclb52urdi4' \
    .
    .
      -H 'Cookie: TVSESS=e89ghfri7kcov2ilclb52urdi4; lastnew=2563' \
    and the response .....


    var $myPlayList=['https://www.youtube.com/watch?v=RHnWvQCS4Io','https://www.youtube.com/watch?v=RHnWvQCS4Io'];


    the only header that is important is

    -H 'Cookie: TVSESS=e89ghfri7kcov2ilclb52urdi4; lastnew=2563'

    Therefore the required code


    Code:
    curl -H 'Cookie: TVSESS=e89ghfri7kcov2ilclb52urdi4; lastnew=2563' 'http://www.freeintertv.com/modules/View/screen.php?nametv=NBC+Live+%28USA%29&idtv=2563&TVSESS=e89ghfri7kcov2ilclb52urdi4'
    The response of this call has to be parsed to capture

    https://www.youtube.com/watch?v=RHnWvQCS4Io

    The cookie TVSESS=e89ghfri7kcov2ilclb52urdi4 is the session id, which has to be generated.
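
    One way to generate/capture it from the shell, assuming the site hands the session id out via a Set-Cookie header on the main page, is to let curl write its cookie jar to stdout:

    Code:
    # The cookie-jar output is tab separated: field 6 is the cookie name, field 7 its value
    curl -s -o /dev/null -c - 'http://www.freeintertv.com/view/id-2563/1-1-2-1' \
      | awk '$6 == "TVSESS" { print $7 }'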

    Use your knowledge of bash scripting to generate the script.
    Last edited by jack_666; 3rd Nov 2023 at 17:11.
  10. Member
    RESOLVED!
    So I have now solved the problem using your input here and some research about sed etc...
    End result is this:

    Code:
    #!/bin/bash
    #Get the Youtube URL for the NBC NOW stream
    
    #Retrieving the URL is a 2-step operation; first get the session cookie:
    TVSESS=$(curl -s -c - 'http://www.freeintertv.com/view/id-2563/1-1-2-1' | sed -rn 's/.*TVSESS=([^"]+).*/\1/p')
    #Next use it to get the Playlist output
    CMD="curl -s -H 'Cookie: TVSESS=${TVSESS}; lastnew=2563' 'http://www.freeintertv.com/modules/View/screen.php?nametv=NBC+Live+%28USA%29&idtv=2563&TVSESS=${TVSESS}' | grep 'youtube.com'"
    YTTMP=$(eval "$CMD")
    # Now extract the Youtube ID string:
    YTID=$(echo ${YTTMP} | sed -n "s/^.*youtube.*=\(.*\)'.*$/\1/p")
    #Build the final URL
    YTURL="https://youtu.be/${YTID}"
    #Output the final result, can be written to file also
    echo "$YTURL"
    I have put a call to this script into cron (writing the result to a URL file) so it runs once per hour to keep the URL current.
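
    For completeness, the cron entry looks roughly like this (the script and output paths are specific to my setup, so treat them as placeholders):

    Code:
    # m h dom mon dow  command - refresh the URL file at the top of every hour
    0 * * * * /home/user/bin/get_nbc_url.sh > /home/user/nbcnow/current_url.txt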
  11. Member
    Many Many Thanks to the users here who are so helpful!!
    I have been asking strange, detailed questions for a long time now, and mostly there have been solutions I could never have come up with on my own!
    I am so grateful for all the help about ffmpeg command details and website data extraction scripts and all of that I have asked about!

    Just wanted to tell you how great a resource this forum is!
  12. I know this has nothing to do with the topic, but which is more compatible with websites: a video downloader or a download manager? The question is which one will work with any website I enter and download the media.
  13. Member
    I think you should start a new thread asking about this...
    It is off-topic here, where the discussion is about extracting a video URL from a website player for use with command-line ffmpeg downloading.


