VideoHelp Forum



  1. Hi there,

    Does anyone here know Python well? I need a py script to get all the watch links for a series from streamingcommunity. On this site I can't find the watch links in the Network tab or the Elements tab; a link only appears once I press play. My goal is to get all the watch links with a py script and bypass the browser.

    Video link (just an example):
    Code:
    https://streamingcommunityz.life/it/titles/4287-squid-game
    With the py script I need to get:
    Code:
    https://streamingcommunityz.life/it/watch/4287?e=25568
    Code:
    https://streamingcommunityz.life/it/watch/4287?e=25569
    and all the other episodes' watch links.
  2. Can no one find all the watch links (maybe it's only possible from the browser)? I can't find the links from the browser either ...
  3. Use Stream Detector
    Discord Sei#0555
  4. Start a loop with the id number of episode 1 .... 25568

    Execute this code ... conversion to Python is up to you.


    Code:
    curl -ks https://streamingcommunityz.life/it/watch/4287?e=xxxxx | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"


    The Windows cmd code will give you the episode id for the next episode.
    Loop the code with this new episode id number.


    Code:
    curl -ks https://streamingcommunityz.life/it/watch/4287?e=25568 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
    25569
    
    
    
    curl -ks https://streamingcommunityz.life/it/watch/4287?e=25569 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
    25570
    
    
    
    curl -ks https://streamingcommunityz.life/it/watch/4287?e=25570 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
    25571
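
    For reference, a minimal Python sketch of the same loop (a sketch for illustration; it assumes the requests package is installed and mirrors the sed pattern above, i.e. the 5-digit id sits 23 characters after the "nextEpisode" marker):
    Code:
    # Walk the series by repeatedly extracting the nextEpisode id,
    # mirroring the curl | sed pipeline above.
    import re
    import requests

    BASE = "https://streamingcommunityz.life/it/watch/4287?e={}"

    ep, seen = 25568, set()   # the first episode id must be found separately
    while ep not in seen:     # guard against looping forever on repeated ids
        seen.add(ep)
        url = BASE.format(ep)
        print(url)
        html = requests.get(url, timeout=30).text
        m = re.search(r"nextEpisode.{23}(\d{5})", html)
        if not m:
            break             # no nextEpisode id found: last episode reached
        ep = int(m.group(1))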
  5. Thanks Gromyko, but without the browser the first episode id, 25568, is an unknown value.

    Anyway, when I run
    Code:
    curl -ks https://streamingcommunityz.life/it/watch/4287?e=25568 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
    I get "sed" is not recognized as an internal or external command.

    I am a Windows user. Maybe it works on Linux, but not for me.
  6. ... without the browser the first episode id, 25568, is an unknown value

    Code:
    curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" | grep "episodes.quot;:...quot;id.quot;:" | sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"
    25568
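
    A hedged Python equivalent of that first-id lookup (it assumes, as the grep/sed pattern implies, that the title page embeds HTML-escaped JSON of the form episodes&quot;:[{&quot;id&quot;:NNNNN):
    Code:
    # Fetch the title page and pull the first episode id out of the
    # HTML-escaped JSON embedded in it (the same text grep/sed match above).
    import re
    import requests

    url = "https://streamingcommunityz.life/it/titles/4287-squid-game"
    html = requests.get(url, timeout=30).text
    m = re.search(r'episodes&quot;:\[\{&quot;id&quot;:(\d{5})', html)
    print(m.group(1) if m else "no episode id found")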



    i get "sed" is not recognized as an internal or external command

    I must admit that this statement surprised me. I would have expected you to google sed and windows. If something did not work, I would have expected you to try and solve it on your own and not throw up your hands saying it did not work. I did write ...

    The windows cmd code will give you the episode id for the next episode
    Have a look at https://www.cygwin.com/


    I have attached sed and grep to this post.
  7. Thanks for the explanation. Your last command is different from the previous one.
    However, I downloaded and extracted the zip file to C:, then ran your command and got this error:
    [Attachment 88218]

    Maybe I need to add grep and sed to the PATH environment variable??
  8. Another simple solution for me would be a py script that saves the page as an HTML file. I tried to do this with a py script, but the saved HTML file, opened in np++, does not reveal the links to the various watch URLs ... it's as if JavaScript (or something else) reveals the various watch URLs only afterwards ...

    This solution would be appreciated, provided that all the watch URL links end up in the HTML file.
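
    A minimal sketch of that idea, assuming Playwright is installed (pip install playwright, then playwright install chromium); it saves the DOM after JavaScript has run, so links injected by scripts should be in the file:
    Code:
    # Save the *rendered* page rather than the raw HTML, so that links
    # injected by JavaScript are present in the saved file.
    from playwright.sync_api import sync_playwright

    url = "https://streamingcommunityz.life/it/titles/4287-squid-game"

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()   # DOM after scripts have run
        browser.close()

    with open("page.html", "w", encoding="utf-8") as f:
        f.write(html)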
  9. Interesting problem.
    There is a PyPI package, https://pypi.org/project/streamingcommunity-unofficialapi/, which allows searching and retrieving titles from https://streamingcommunityz.life

    I put this together with the help of ChatGPT:

    Code:
    # pip install -U streamingcommunity-unofficialapi
    from urllib.parse import urljoin
    from scuapi import API
    
    BASE = "https://streamingcommunityz.life"
    
    sc = API("streamingcommunityz.life")  # <- no scheme, no trailing slash
    
    # robustly find the slug/id we need
    hits = sc.search("squid game")        # dict keyed by title names
    # pick the exact match with id 4287
    entry = next(v for v in hits.values() if v["id"] == 4287)
    
    slug = f'{entry["id"]}-{entry["slug"]}'  # "4287-squid-game"
    
    sc = API("streamingcommunityz.life/it")
    data = sc.load(slug)                # full series metadata incl. episodeList
    
    title_id = data["id"]
    links = [
        urljoin(BASE, f"/it/watch/{title_id}?e={ep['id']}")
        for ep in data["episodeList"]
        if ep.get("season") == 1
    ]
    
    for url in links:
        print(url)
    But it fails with a 404 Not Found error at the line sc.load(slug).

    Intercepting traffic in HTTP Toolkit reveals the slug link to be correct, and it gets a 200 OK response, but the next call the scuapi package makes - https://streamingcommunityz.life/it/api/titles/preview/4287 - gives a 404 error. Here I got bored, as the issue is with the scuapi package.

    The scuapi package has a GitHub repo, https://github.com/Blu-Tiger/streamingcommunity-unofficialapi - you could raise an issue with the maintainer, but there is already an outstanding issue on sc.load.
    Last edited by phased; 9th Aug 2025 at 05:31.
  10. It seems there is a yt-dlp plugin!! https://github.com/Blu-Tiger/StreamingCommunity-yt-dlp-plugin Go to the site, read the instructions, and install the plugin.

    Cheat sheet:
    Code:
    python3 -m pip install -U https://github.com/Blu-Tiger/StreamingCommunity-yt-dlp-plugin/archive/master.zip
    Then run

    Code:
    yt-dlp --verbose -F https://streamingcommunityz.life/it/titles/4287-squid-game
    That produces data for each episode:

    [Attachment 88220]


    Either just copy your links from that or save screen output to a text file with

    Code:
    yt-dlp --verbose -F https://streamingcommunityz.life/it/titles/4287-squid-game  >  squid.txt
    Then use this Python code - call it streamingcom.py
    Code:
    #!/usr/bin/env python3
    import argparse
    import re
    import sys
    from pathlib import Path

    # Match http/https URLs; keep it permissive, then trim trailing punctuation.
    URL_RE = re.compile(r"https?://[^\s<>'\"`]+")

    TRAILING_PUNCT = ".,;:!?)]}>"  # common trailing chars that cling to URLs in prose

    def read_text(path: Path) -> str:
        try:
            return path.read_text(encoding="utf-8", errors="ignore")
        except Exception as e:
            print(f"error: cannot read {path}: {e}", file=sys.stderr)
            sys.exit(1)

    def extract_links(text: str):
        links = []
        for m in URL_RE.finditer(text):
            url = m.group(0).rstrip(TRAILING_PUNCT)
            links.append(url)
        return links

    def remove_links(text: str):
        # simpler: just drop the matched spans
        return re.sub(URL_RE, "", text)

    def main():
        p = argparse.ArgumentParser(
            description="Print http(s) links found in a text file. Optionally remove them."
        )
        p.add_argument("file", type=Path, help="path to the input .txt file")
        p.add_argument("--dedupe", action="store_true", help="print unique links only")
        p.add_argument("--remove", action="store_true", help="also output text with links removed")
        p.add_argument("--out", type=Path, help="when using --remove, write cleaned text here; otherwise print to stdout")
        args = p.parse_args()

        text = read_text(args.file)
        links = extract_links(text)

        if args.dedupe:
            seen = set()
            unique = []
            for u in links:
                if u not in seen:
                    seen.add(u)
                    unique.append(u)
            links = unique

        # Print links (one per line)
        for u in links:
            print(u)

        if args.remove:
            cleaned = remove_links(text)
            if args.out:
                try:
                    args.out.write_text(cleaned, encoding="utf-8")
                except Exception as e:
                    print(f"error: cannot write {args.out}: {e}", file=sys.stderr)
                    sys.exit(1)
            else:
                # write cleaned text to stdout after the link list
                print("\n--- cleaned text (links removed) ---\n")
                sys.stdout.write(cleaned)

    if __name__ == "__main__":
        main()
    And when run
    Code:
    python streamingcom.py  squid.txt
    produces:

    Code:
    https://streamingcommunityz.life/it/titles/4287-squid-game
    https://streamingcommunityz.life/it/titles/4287-squid-game/season-1
    https://streamingcommunityz.life/it/watch/4287?e=25568
    https://streamingcommunityz.life/it/watch/4287?e=25569
    https://streamingcommunityz.life/it/watch/4287?e=25570
    https://streamingcommunityz.life/it/watch/4287?e=25571
    https://streamingcommunityz.life/it/watch/4287?e=25572
    https://streamingcommunityz.life/it/watch/4287?e=25573
    https://streamingcommunityz.life/it/watch/4287?e=25574
    https://streamingcommunityz.life/it/watch/4287?e=25575
    https://streamingcommunityz.life/it/watch/4287?e=25576
    https://streamingcommunityz.life/it/titles/4287-squid-game/season-2
    https://streamingcommunityz.life/it/watch/4287?e=83180
    https://streamingcommunityz.life/it/watch/4287?e=83179
    https://streamingcommunityz.life/it/watch/4287?e=83181
    https://streamingcommunityz.life/it/watch/4287?e=83182
    https://streamingcommunityz.life/it/watch/4287?e=83183
    https://streamingcommunityz.life/it/watch/4287?e=83184
    https://streamingcommunityz.life/it/watch/4287?e=83185
    https://streamingcommunityz.life/it/titles/4287-squid-game/season-3
    https://streamingcommunityz.life/it/watch/4287?e=92043
    https://streamingcommunityz.life/it/watch/4287?e=92042
    https://streamingcommunityz.life/it/watch/4287?e=92045
    https://streamingcommunityz.life/it/watch/4287?e=92044
    https://streamingcommunityz.life/it/watch/4287?e=92047
    https://streamingcommunityz.life/it/watch/4287?e=92046
    Quod erat demonstrandum - where shall I send the bill?

    But since there is a plugin,
    Code:
    yt-dlp https://streamingcommunityz.life/it/titles/4287-squid-game
    will get the whole lot. You will need to pass more arguments to the yt-dlp command, like selecting quality and language and specifying output details - I leave that to you. As a Windows user,
    do all your commands in PowerShell or Terminal from the MS Store.
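
    If you would rather stay in Python, here is a hedged sketch using yt-dlp's Python API (an illustration, not from the plugin docs; the format and output-template choices are example values, and it assumes the plugin above is installed):
    Code:
    # Drive yt-dlp from Python instead of the CLI; an installed
    # extractor plugin is picked up automatically.
    import yt_dlp

    ydl_opts = {
        # best video + best audio, falling back to best single file
        "format": "bv*+ba/b",
        # e.g. "Squid Game - S01E01.mp4" (available fields depend on the extractor)
        "outtmpl": "%(series)s - S%(season_number)02dE%(episode_number)02d.%(ext)s",
    }

    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        ydl.download(["https://streamingcommunityz.life/it/titles/4287-squid-game"])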
    Last edited by phased; 9th Aug 2025 at 06:46.
  11. Originally Posted by Gromyko View Post
    I must admit that this statement surprised me....
    Hi Jack.
    Noob Starter Pack. Just download every Widevine mpd! Not kidding!
    https://files.videohelp.com/u/301890/hellyes6.zip
  12. @phased

    Wow!! No words to thank you enough, so a BIG thanks.
    I didn't know about the existence of the unofficial API before, nor the plugin for yt-dlp. A very great discovery.
    OK, I can test all your tips soon. Thanks again, god bless you.
  13. Originally Posted by Gromyko View Post
    Code:
    curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" | grep "episodes.quot;:...quot;id.quot;:" | sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"
    25568
    Because I got an error, I split the command in two:
    1)
    Code:
    curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" > temp_output.txt
    2)
    Code:
    grep "episodes.quot;:...quot;id.quot;:" temp_output.txt | sed -e "s#episodes.quot;:...quot;id.quot;:$[0-9]\{5\}$.*$#\1#"
    But even in the temp_output.txt file I can't see (from np++) any watch URL (just as when I tried to save the page as an HTML file).
    Here is my output file: https://files.videohelp.com/u/301058/temp_output.txt
  14. You know that the videos on that site are all re-encoded, right? It's still better to look for torrents.
  15. @whs912km
    First, I would like to thank you for giving sed and grep a try.

    Have a second look at this ...

    My code:
    sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"

    Your code:
    sed -e "s#episodes.quot;:...quot;id.quot;:$[0-9]\{5\}$.*$#\1#"

    \( is not $

    They are not the same.

    Code:
    grep "episodes.quot;:...quot;id.quot;:" temp_output.txt | sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"
    25568
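
    For comparison, the same capture written with Python's re (a sketch; the point is that sed's basic regex uses \( \) for grouping where Python uses plain parentheses):
    Code:
    # sed BRE:   \([0-9]\{5\}\)  <- escaped parens group, escaped braces repeat
    # Python re: ([0-9]{5})      <- plain parens group, plain braces repeat
    import re

    line = 'episodes&quot;:[{&quot;id&quot;:25568,'  # sample input line
    m = re.search(r'episodes&quot;:\[\{&quot;id&quot;:([0-9]{5})', line)
    if m:
        print(m.group(1))  # prints 25568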
  16. I just copied your command from post #6,
    copy and paste. I didn't edit it ...

    But the question is: with the first command
    Code:
    curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" > temp_output.txt
    I can't get any watch link into temp_output.txt ... this is the real issue.
    Look at my post #13 to see the output.
  17. I understand you want to bypass the browser, but this site works through JS, so you can only get the watch URLs with a browser.
    So load the video URL, then in the Elements tab filter for "episode-slider-mobile" and you can find all the watch URLs.

    [Attachment 88242]

    Now copy the element <div data-v-d2be8733 data-v-be2814e2 class="episode-slider-mobile"> into a txt file (squid.txt),
    then run this script and you will get just the watch URLs:
    Code:
    import re

    file_name = input("add file name (no ext): ") + '.txt'

    # match watch URLs like https://streamingcommunityz.life/it/watch/4287?e=25568
    pattern = r'https://streamingcommunityz\.life/it/watch/(\d+)\?e=(\d+)'

    try:
        with open(file_name, 'r') as file:
            content = file.read()

        # findall returns (title_id, episode_id) tuples; rebuild each URL from them
        matches = re.findall(pattern, content)

        for match in matches:
            link = f'https://streamingcommunityz.life/it/watch/{match[0]}?e={match[1]}'
            print(link)

    except FileNotFoundError:
        print(f"The file '{file_name}' was not found")
    [Attachment 88243]
  18. Oh wow @lomero, you found where the links are in the Elements tab; I couldn't find them...
    Thanks for this. It's still via the browser ... but still, more great news. Thanks.


