hi there,
does anyone here know Python well? I need a Python script to get all the watch video links for a series from streamingcommunity. On this site I can't find the watch video links in the Network tab or in the Elements tab; the link only shows up when I press play. My goal is to get all the watch video links with a Python script and bypass the browser.
Video link (just an example):
Code:
https://streamingcommunityz.life/it/titles/4287-squid-game
With a Python script I need to get:
Code:
https://streamingcommunityz.life/it/watch/4287?e=25568
and all the other episode watch links:
Code:
https://streamingcommunityz.life/it/watch/4287?e=25569
Can no one find all the watch links (maybe they're visible from the browser only)? I can't find the links from the browser ...
Use Stream Detector
Start a loop with the id number of episode 1 .... 25568
Execute this code ... the conversion to Python is up to you
Code:
curl -ks https://streamingcommunityz.life/it/watch/4287?e=xxxxx | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
The Windows cmd code will give you the episode id of the next episode
Loop the code with this new episode id number
Code:
curl -ks https://streamingcommunityz.life/it/watch/4287?e=25568 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
25569
curl -ks https://streamingcommunityz.life/it/watch/4287?e=25569 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
25570
curl -ks https://streamingcommunityz.life/it/watch/4287?e=25570 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
25571
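If it helps, a rough Python version of that loop could look like this (an untested sketch: like the sed pattern above, it assumes each watch page embeds the string "nextEpisode" followed by 23 characters and then the 5-digit id of the next episode):
Code:
import re
import requests

WATCH = "https://streamingcommunityz.life/it/watch/4287?e={}"

ep = 25568  # id of episode 1
seen = set()
while ep is not None and ep not in seen:  # guard against looping forever
    seen.add(ep)
    print(WATCH.format(ep))
    html = requests.get(WATCH.format(ep), timeout=30).text
    # same extraction as: sed "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
    m = re.search(r"nextEpisode.{23}(\d{5})", html)
    ep = int(m.group(1)) if m else None  # stop when no next episode is found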
thanks Gromyko, but without a browser the first episode id 25568 is an unknown value
anyway, when I run
Code:
curl -ks https://streamingcommunityz.life/it/watch/4287?e=25568 | sed -e "s#nextEpisode#\nnextEpisode#" | grep "nextEpisode" | sed -e "s#nextEpisode.\{23\}\([0-9]\{5\}\).*$#\1#"
I get "sed" is not recognized as an internal or external command
I am a Windows user. Maybe it works on Linux, but not for me.
... without a browser the first episode id 25568 is an unknown value
Code:
curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" | grep "episodes.quot;:...quot;id.quot;:" | sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"
25568
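The same lookup in Python, if you prefer (again an untested sketch; it reuses the sed wildcard pattern, so it assumes the titles page embeds the episode list as HTML-escaped JSON along the lines of episodes&quot;:[{&quot;id&quot;:25568,...):
Code:
import re
import requests

url = "https://streamingcommunityz.life/it/titles/4287-squid-game"
html = requests.get(url, timeout=30).text
# same wildcard pattern as the sed above ('.' matches the escaped '&' etc.)
m = re.search(r"episodes.quot;:...quot;id.quot;:(\d{5})", html)
print(m.group(1) if m else "first episode id not found")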
i get "sed" is not recognized as an internal or external command
I must admit that this statement surprised me. I would have expected you to google sed and windows. If something did not work, I would have expected you to try and solve it on your own and not throw up your hands saying it did not work. I did write ...
Have a look at https://www.cygwin.com/
The windows cmd code will give you the episode id for the next episode
I have attached sed and grep to this post.
thanks for the explanation. your last command is different from the previous one
However, I downloaded and extracted the zip file to C:, then ran your command and got this error:
[Attachment 88218]
Maybe I need to add grep and sed to the PATH environment variable??
Another simple solution for me would be a Python script that saves the page as an HTML file. I tried to do this with a Python script, but the saved HTML file, opened with np++, does not reveal the links to the various watch URLs ... it's as if JavaScript (or something else) reveals the various watch URLs only afterwards ...
I would appreciate this solution, provided that all the watch URL links are present in the HTML file.
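For reference, what I tried was more or less this (a minimal sketch):
Code:
import requests

url = "https://streamingcommunityz.life/it/titles/4287-squid-game"
html = requests.get(url, timeout=30).text
with open("page.html", "w", encoding="utf-8") as f:
    f.write(html)
# opening page.html in np++ shows no literal /it/watch/... links;
# they only seem to appear after the page's JavaScript has run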
Interesting problem.
There is a pypi package https://pypi.org/project/streamingcommunity-unofficialapi/ which allows search and retrieval of https://streamingcommunityz.life
I put this together with the help of chatGPT
But it fails with a 404 not found error at the line sc.load(slug)
Code:
# pip install -U streamingcommunity-unofficialapi
from urllib.parse import urljoin

from scuapi import API

BASE = "https://streamingcommunityz.life"
sc = API("streamingcommunityz.life")  # <- no scheme, no trailing slash

# robustly find the slug/id we need
hits = sc.search("squid game")  # dict keyed by title names
# pick the exact match with id 4287
entry = next(v for v in hits.values() if v["id"] == 4287)
slug = f'{entry["id"]}-{entry["slug"]}'  # "4287-squid-game"

sc = API("streamingcommunityz.life/it")
data = sc.load(slug)  # full series metadata incl. episodeList

title_id = data["id"]
links = [
    urljoin(BASE, f"/it/watch/{title_id}?e={ep['id']}")
    for ep in data["episodeList"]
    if ep.get("season") == 1
]
for url in links:
    print(url)
Intercepting the traffic in httptoolkit reveals the slug link to be correct and it gets a 200 OK response, but the next call the scuapi package makes - https://streamingcommunityz.life/it/api/titles/preview/4287 - gives a 404 error. Here I got bored, as the issue is with the scuapi package.
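If you want to poke the failing endpoint yourself, a minimal probe looks like this (a sketch only: it assumes a plain GET, while the package may send a different method or headers):
Code:
import requests

# the next call scuapi makes, per the httptoolkit capture above
url = "https://streamingcommunityz.life/it/api/titles/preview/4287"
r = requests.get(url, timeout=30)
print(r.status_code)  # the package's own call to this URL comes back 404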
The scuapi package has a GitHub repo at https://github.com/Blu-Tiger/streamingcommunity-unofficialapi - you could raise an issue with the maintainer, but there is already an outstanding issue on sc.load.
It seems there is a yt-dlp plugin!! https://github.com/Blu-Tiger/StreamingCommunity-yt-dlp-plugin Go to the site; read the instructions and install the plugin.
Cheat sheet:
Code:
python3 -m pip install -U https://github.com/Blu-Tiger/StreamingCommunity-yt-dlp-plugin/archive/master.zip
Then run
Code:
yt-dlp --verbose -F https://streamingcommunityz.life/it/titles/4287-squid-game
That produces data for each episode
[Attachment 88220]
Either just copy your links from that, or save the screen output to a text file with
Code:
yt-dlp --verbose -F https://streamingcommunityz.life/it/titles/4287-squid-game > squid.txt
Then use this Python code - call it streamingcom.py
Code:
#!/usr/bin/env python3
import argparse
import re
import sys
from pathlib import Path
# Match http/https URLs; keep it permissive, then trim trailing punctuation.
URL_RE = re.compile(r"https?://[^\s<>'\"`]+")
TRAILING_PUNCT = ".,;:!?)]}>"  # common trailing chars that cling to URLs in prose
def read_text(path: Path) -> str:
try:
return path.read_text(encoding="utf-8", errors="ignore")
except Exception as e:
print(f"error: cannot read {path}: {e}", file=sys.stderr)
sys.exit(1)
def extract_links(text: str):
links = []
for m in URL_RE.finditer(text):
url = m.group(0).rstrip(TRAILING_PUNCT)
links.append(url)
return links
def remove_links(text: str) -> str:
    # drop every matched URL span from the text
    return URL_RE.sub("", text)
def main():
p = argparse.ArgumentParser(
description="Print http(s) links found in a text file. Optionally remove them."
)
p.add_argument("file", type=Path, help="path to the input .txt file")
p.add_argument("--dedupe", action="store_true", help="print unique links only")
p.add_argument("--remove", action="store_true", help="also output text with links removed")
p.add_argument("--out", type=Path, help="when using --remove, write cleaned text here; otherwise print to stdout")
args = p.parse_args()
text = read_text(args.file)
links = extract_links(text)
if args.dedupe:
seen = set()
unique = []
for u in links:
if u not in seen:
seen.add(u)
unique.append(u)
links = unique
# Print links (one per line)
for u in links:
print(u)
if args.remove:
cleaned = remove_links(text)
if args.out:
try:
args.out.write_text(cleaned, encoding="utf-8")
except Exception as e:
print(f"error: cannot write {args.out}: {e}", file=sys.stderr)
sys.exit(1)
else:
# write cleaned text to stdout after the link list
print("\n--- cleaned text (links removed) ---\n")
sys.stdout.write(cleaned)
if __name__ == "__main__":
main()
And when run
Code:
python streamingcom.py squid.txt
it produces..
Code:
https://streamingcommunityz.life/it/titles/4287-squid-game
https://streamingcommunityz.life/it/titles/4287-squid-game/season-1
https://streamingcommunityz.life/it/watch/4287?e=25568
https://streamingcommunityz.life/it/watch/4287?e=25569
https://streamingcommunityz.life/it/watch/4287?e=25570
https://streamingcommunityz.life/it/watch/4287?e=25571
https://streamingcommunityz.life/it/watch/4287?e=25572
https://streamingcommunityz.life/it/watch/4287?e=25573
https://streamingcommunityz.life/it/watch/4287?e=25574
https://streamingcommunityz.life/it/watch/4287?e=25575
https://streamingcommunityz.life/it/watch/4287?e=25576
https://streamingcommunityz.life/it/titles/4287-squid-game/season-2
https://streamingcommunityz.life/it/watch/4287?e=83180
https://streamingcommunityz.life/it/watch/4287?e=83179
https://streamingcommunityz.life/it/watch/4287?e=83181
https://streamingcommunityz.life/it/watch/4287?e=83182
https://streamingcommunityz.life/it/watch/4287?e=83183
https://streamingcommunityz.life/it/watch/4287?e=83184
https://streamingcommunityz.life/it/watch/4287?e=83185
https://streamingcommunityz.life/it/titles/4287-squid-game/season-3
https://streamingcommunityz.life/it/watch/4287?e=92043
https://streamingcommunityz.life/it/watch/4287?e=92042
https://streamingcommunityz.life/it/watch/4287?e=92045
https://streamingcommunityz.life/it/watch/4287?e=92044
https://streamingcommunityz.life/it/watch/4287?e=92047
https://streamingcommunityz.life/it/watch/4287?e=92046
Quod erat demonstrandum - where shall I send the bill?
But since there is a plugin,
Code:
yt-dlp https://streamingcommunityz.life/it/titles/4287-squid-game
will get the whole lot. You will need to specify more arguments to the yt-dlp command, like selecting quality and language and specifying output details - I leave that to you. As a Windows user, do all your commands in PowerShell or Terminal from the MS Store.
@phased
wow!! no words to say thank you enough, so a BIG thanks
I didn't know about the existence of the unofficial API before, or about the plugin for yt-dlp. A very great discovery.
OK, so I can test all your tips soon. Thanks again, god bless you
Because I get an error, I've split the command in two:
1)
Code:
curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" > temp_output.txt
but in the temp_output.txt file I can't see (from np++) any watch URL (just like when I tried to save the page as an HTML file)
2)
Code:
grep "episodes.quot;:...quot;id.quot;:" temp_output.txt | sed -e "s#episodes.quot;:...quot;id.quot;:$[0-9]\{5\}$.*$#\1#"
here is my output file: https://files.videohelp.com/u/301058/temp_output.txt
You know that the videos on that site are all re-encoded, right? It's still better to look for torrents.
@whs912km
First, I would like to thank you for giving sed and grep a try.
Have a second look at this ...
my code
sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"
your code
sed -e "s#episodes.quot;:...quot;id.quot;:$[0-9]\{5\}$.*$#\1#"
\( is not $
They are not the same
Code:
grep "episodes.quot;:...quot;id.quot;:" temp_output.txt | sed -e "s#episodes.quot;:...quot;id.quot;:\([0-9]\{5\}\).*$#\1#"
25568
I just copied your command from post #6
copy and paste; I didn't edit it ...
but the question is: with the first command
Code:
curl -ks "https://streamingcommunityz.life/it/titles/4287-squid-game" | sed -e "s#episodes#\nepisodes#g" > temp_output.txt
I can't get any watch link in temp_output.txt ... this is the real issue
look at my post #13 to see the output
I understand you want to bypass the browser, but this site works through JS, so you can get the watch URLs only with a browser.
So load the video URL, then in the Elements tab filter for "episode-slider-mobile" and you can find all the watch URLs.
[Attachment 88242]
Now copy the element <div data-v-d2be8733 data-v-be2814e2 class="episode-slider-mobile"> into a txt file (squid.txt),
then run this script and you get only the watch URLs:
Code:
import re

file_name = input("add file name (no ext): ") + '.txt'
pattern = r'https://streamingcommunityz\.life/it/watch/(\d+)\?e=(\d+)'
try:
    with open(file_name, 'r') as file:
        content = file.read()
    matches = re.findall(pattern, content)
    for match in matches:
        link = f'https://streamingcommunityz.life/it/watch/{match[0]}?e={match[1]}'
        print(link)
except FileNotFoundError:
    print(f"The file '{file_name}' was not found")
[Attachment 88243]
oh wow @lomero, you found out where the links are in the Elements tab, I couldn't find them...
thanks for this, but it's still via the browser ... oh well, another great piece of news. thanks