  1. Originally Posted by Sawyer View Post
    Nothing about how to make it work through proxy. Maybe it should be in the first post so people know from the start that it does not work.

    // Really sucks that it works in browsers but not through N_m3u8DL-RE
    I can't speak of A_n_g_e_l_a's script since I've not tried it, but I can tell you that Nord works perfectly well with ITV. If your IP is blocked, the request will not go through at all. If you get an HTML response, that points to an issue other than your VPN.
  2. Have just tried now

    HTML Code:
    ./N_m3u8DL-RE "https://itvpnpdotcom.blue.content.itv.com/2-4382-0001-001/34/2/VAR028-HD-s/2-4382-0001-001_34_2_VAR028-HD-s.ism/.mpd?filter=%28%28type%3D%3D%22video%22%26%26DisplayHeight%3C%3D720%29%7C%7C%28type%21%3D%22video%22%29%29&hdnea=st%3D1691703142~exp%3D1691724742~acl%3D/2-4382-0001-001/%2A~data%3Dnohubplus~hmac%3Da7db531de3ccd1e633018d00bfa6321a35867d08b557c03829f9366db4008da5"
    00:33:11.144 INFO : N_m3u8DL-RE (Beta version) 20230618
    00:33:11.150 INFO : Loading URL: https://itvpnpdotcom.blue.content.itv.com/2-4382-0001-001/34/2/VAR028-HD-s/2-4382-0001-001_34_2_VAR028-HD-s.ism/.mpd?filter=%28%28type%3D%3D%22video%22%26%26DisplayHeight%3C%3D720%29%7C%7C%28type%21%3D%22video%22%29%29&hdnea=st%3D1691703142~exp%3D1691724742~acl%3D/2-4382-0001-001/%2A~data%3Dnohubplus~hmac%3Da7db531de3ccd1e633018d00bfa6321a35867d08b557c03829f9366db4008da5
    00:33:11.848 INFO : New version detected! v0.2.0-beta
    00:33:11.953 INFO : Content Matched: Dynamic Adaptive Streaming over HTTP
    00:33:11.953 INFO : Parsing streams...
    00:33:11.969 WARN : Writing meta json
    00:33:11.970 INFO : Extracted, there are 6 streams, with 5 basic streams, 1 audio streams, 0 subtitle streams
    00:33:11.971 INFO : Vid *CENC 1280x720 | 3218 Kbps | video=3218722 | avc1.640028 | 460 Segments | ~46m00s
    00:33:11.971 INFO : Vid *CENC 1024x576 | 1979 Kbps | video=1979311 | avc1.64001F | 460 Segments | ~46m00s
    00:33:11.971 INFO : Vid *CENC 1024x576 | 1323 Kbps | video=1323915 | avc1.64001F | 460 Segments | ~46m00s
    00:33:11.971 INFO : Vid *CENC 704x396 | 757 Kbps | video=757946 | avc1.64001F | 460 Segments | ~46m00s
    00:33:11.971 INFO : Vid *CENC 704x396 | 478 Kbps | video=478477 | avc1.64001F | 460 Segments | ~46m00s
    00:33:11.971 INFO : Aud *CENC audio=96000 | 96 Kbps | mp4a.40.2 | 2CH | 460 Segments | ~46m00s
    00:33:17.982 INFO : Parsing streams...
    00:33:17.983 INFO : Selected streams:
    00:33:17.983 INFO : Vid *CENC 1280x720 | 3218 Kbps | video=3218722 | avc1.640028 | 460 Segments | ~46m00s
    00:33:17.983 INFO : Aud *CENC audio=96000 | 96 Kbps | mp4a.40.2 | 2CH | 460 Segments | ~46m00s
    00:33:17.983 WARN : Writing meta json
    00:33:17.984 INFO : Save Name: _2023-08-11_00-33-11
    00:33:17.985 INFO : Start downloading...Vid 1280x720 | 3218 Kbps | video=3218722 | avc1.640028
    00:33:17.985 WARN : When CENC encryption is detected, binary merging is automatically enabled
    00:33:21.476 WARN : Response status code does not indicate success: 403 (Forbidden).          
    00:33:21.477 ERROR: Download init file failed!
    I get ERROR: Download init file failed! If I try without the VPN, it fails faster.

    HTML Code:
    ./N_m3u8DL-RE "https://itvpnpdotcom.blue.content.itv.com/2-4382-0001-001/34/2/VAR028-HD-s/2-4382-0001-001_34_2_VAR028-HD-s.ism/.mpd?filter=%28%28type%3D%3D%22video%22%26%26DisplayHeight%3C%3D720%29%7C%7C%28type%21%3D%22video%22%29%29&hdnea=st%3D1691703142~exp%3D1691724742~acl%3D/2-4382-0001-001/%2A~data%3Dnohubplus~hmac%3Da7db531de3ccd1e633018d00bfa6321a35867d08b557c03829f9366db4008da5"
    00:35:20.992 INFO : N_m3u8DL-RE (Beta version) 20230618
    00:35:20.998 INFO : Loading URL: https://itvpnpdotcom.blue.content.itv.com/2-4382-0001-001/34/2/VAR028-HD-s/2-4382-0001-001_34_2_VAR028-HD-s.ism/.mpd?filter=%28%28type%3D%3D%22video%22%26%26DisplayHeight%3C%3D720%29%7C%7C%28type%21%3D%22video%22%29%29&hdnea=st%3D1691703142~exp%3D1691724742~acl%3D/2-4382-0001-001/%2A~data%3Dnohubplus~hmac%3Da7db531de3ccd1e633018d00bfa6321a35867d08b557c03829f9366db4008da5
    00:35:21.387 INFO : New version detected! v0.2.0-beta
    00:35:21.440 ERROR: One or more errors occurred. (Response status code does not indicate success: 403 (Forbidden).)
    Did you do something special?
  3. ITV requires additional headers in the N_m3u8DL-RE command. If you're downloading manually, add this and it should work:

    Code:
    N_m3u8DL-RE --append-url-params -H "cookie: hdntl=~data=hdntl~hmac=*"
    But that should all be taken care of in the script, so it won't solve your original issue.
  4. Member
    Originally Posted by Sawyer View Post
    00:33:11.848 INFO : New version detected! v0.2.0-beta
    Looks like you need to update N_m3u8DL-RE
  5. Originally Posted by stabbedbybrick View Post
    ITV requires additional headers in the N_m3u8DL-RE command. If you're downloading manually, add this and it should work:

    Code:
    N_m3u8DL-RE --append-url-params -H "cookie: hdntl=~data=hdntl~hmac=*"
    But that should all be taken care of in the script, so it won't solve your original issue.
    Something wicked happened: I tried this and it worked, then I tried the script and it also worked, but just once; now it's back to the JSON error.
    For some reason the script behaves differently each time.

    // Now worked again once, back to the error.
  6. Member
    Originally Posted by Sawyer View Post
    Something wicked happened: I tried this and it worked, then I tried the script and it also worked, but just once; now it's back to the JSON error.
    For some reason the script behaves differently each time.
    Talk to your VPN provider: the issue is theirs. Tell them you need TLS 1.3 on all transactions. My guess is that sometimes you connect to servers that only provide TLS 1.2 - and that will not work. You can downgrade your Linux system to only ask for TLS 1.2, but that is beyond my remit.
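    If you want to check which TLS version a given connection actually negotiates, a quick check with the Python standard library should show it (just a sketch; the hostname is only an example):
    Code:
    import socket
    import ssl

    # Print the TLS version negotiated with the server (e.g. 'TLSv1.3' or 'TLSv1.2')
    context = ssl.create_default_context()
    with socket.create_connection(("www.itv.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.itv.com") as tls:
            print(tls.version())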
  7. Originally Posted by Sawyer View Post
    Something wicked happened: I tried this and it worked, then I tried the script and it also worked, but just once; now it's back to the JSON error.
    For some reason the script behaves differently each time.

    // Now worked again once, back to the error.
    I've now run the script numerous times with Nord on the latest Linux Mint, and I can't reproduce your error. The only way I can get it is to use an invalid IP, so you might have connectivity issues or something similar. Which could explain why it does work sometimes.

    If you print out the status response and content from line 221, you'll see it more clearly.

    This is what you get if the IP is blocked, for example:

    Code:
    <Response [403 Forbidden]>
    b'<HTML><HEAD><TITLE>Error</TITLE></HEAD><BODY>\nAn error occurred while processing your request.<p>\nReference #219.85054917.1691745849.2177dbe8\n</BODY></HTML>\n'
    A TLS issue would give another error, I'm assuming.
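    For reference, a minimal way to dump that response before the script calls r.json() would be something like this (a sketch, assuming r is the httpx response object at that point):
    Code:
    # Debug sketch: show status and the start of the body before parsing JSON
    print(r)                  # e.g. <Response [200 OK]> or <Response [403 Forbidden]>
    print(r.status_code)
    print(r.content[:300])    # HTML here instead of JSON means the request was rejected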
  8. Originally Posted by A_n_g_e_l_a View Post
    Talk to your VPN provider: the issue is theirs. Tell them you need TLS 1.3 on all transactions. My guess is that sometimes you connect to servers that only provide TLS 1.2 - and that will not work. You can downgrade your Linux system to only ask for TLS 1.2, but that is beyond my remit.
    If I connect and keep that connection alive, isn't it the same all the time?
    Because that's what I did: connected and never disconnected, and ran the script multiple times; sometimes it works.

    Below, for example, I tried to get 4 episodes: the first one worked, and when it went for the second episode, it broke.

    HTML Code:
    python3 itv.py S
         ____  ______  _   __  _  __  
        /  _/ /_  __/ | | / / | |/_/  
       _/ /    / /    | |/ / _>  <    
      /___/   /_/     |___/ /_/|_|    
                                      
    
    Provide episode link for first video in a SERIES as direct input
    Start URL https://www.itv.com/watch/monster-carp/2a4382/2a4382a0004
    Number of episodes 4
    Monster_Carp_S01E01_Japan
    Using a remote CDM
    Keys found e17e984567a54322afd69be0f2e79f4d:25123b9dc5753778eba185873e774da5
    
    13:48:53.656 INFO : N_m3u8DL-RE (Beta version) 20230618
    13:48:53.658 INFO : Loading URL: https://itvpnpdotcom.blue.content.itv.com/2-4382-0004-001/34/1/VAR028-HD-s/2-4382-0004-001_34_1_VAR028-HD-s.ism/.mpd?filter=%28%28type%3D%3D%22video%22%26%26DisplayHeight%3C%3D720%29%7C%7C%28type%21%3D%22video%22%29%29&hdnea=st%3D1691750920~exp%3D1691772520~acl%3D/2-4382-0004-001/%2A~data%3Dnohubplus~hmac%3D078badd05840a3a6df2a9a3db8b114ecde2db53e715f98c1773b91af2caa3a9d
    13:48:54.162 INFO : Content Matched: Dynamic Adaptive Streaming over HTTP
    13:48:54.163 INFO : Parsing streams...
    13:48:54.178 WARN : Writing meta json
    13:48:54.181 INFO : Extracted, there are 6 streams, with 5 basic streams, 1 audio streams, 0 subtitle streams
    13:48:54.181 INFO : Vid *CENC 1280x720 | 3241 Kbps | video=3241104 | avc1.640028 | 461 Segments | ~46m02s
    13:48:54.181 INFO : Vid *CENC 1024x576 | 2063 Kbps | video=2063396 | avc1.64001F | 461 Segments | ~46m02s
    13:48:54.181 INFO : Vid *CENC 1024x576 | 1369 Kbps | video=1369972 | avc1.64001F | 461 Segments | ~46m02s
    13:48:54.181 INFO : Vid *CENC 704x396 | 795 Kbps | video=795170 | avc1.64001F | 461 Segments | ~46m02s
    13:48:54.181 INFO : Vid *CENC 704x396 | 482 Kbps | video=482695 | avc1.64001F | 461 Segments | ~46m02s
    13:48:54.182 INFO : Aud *CENC audio=96000 | 96 Kbps | mp4a.40.2 | 2CH | 461 Segments | ~46m02s
    13:48:54.182 INFO : Parsing streams...
    13:48:54.196 INFO : Selected streams:
    13:48:54.196 INFO : Vid *CENC 1280x720 | 3241 Kbps | video=3241104 | avc1.640028 | 461 Segments | ~46m02s
    13:48:54.196 INFO : Aud *CENC audio=96000 | 96 Kbps | mp4a.40.2 | 2CH | 461 Segments | ~46m02s
    13:48:54.197 WARN : Writing meta json
    13:48:54.197 INFO : Save Name: Monster_Carp_S01E01_Japan
    13:48:54.197 WARN : MuxAfterDone is detected, binary merging is automatically enabled
    13:48:54.198 INFO : Start downloading...Vid 1280x720 | 3241 Kbps | video=3241104 | avc1.640028
    13:48:54.198 INFO : Start downloading...Aud audio=96000 | 96 Kbps | mp4a.40.2 | 2CH
    13:48:54.266 INFO : New version detected! v0.2.0-beta
    13:48:54.369 WARN : Type: cenc                                                          
    13:48:54.369 WARN : PSSH(WV): CAESEOF+mEVnpUMir9ab4PLnn00iEjItNDM4Mi0wMDA0LTAwMV8zNDgB  
    13:48:54.369 WARN : KID: e17e984567a54322afd69be0f2e79f4d                               
    13:48:54.370 WARN : Reading media info...
    13:48:54.377 INFO : [0x1]: Video, h264 (avc1), 1280x720                                 
    13:48:54.474 WARN : Type: cenc                                                          
    13:48:54.474 WARN : PSSH(WV): CAESEOF+mEVnpUMir9ab4PLnn00iEjItNDM4Mi0wMDA0LTAwMV8zNDgB  
    13:48:54.474 WARN : KID: e17e984567a54322afd69be0f2e79f4d                               
    13:48:54.475 WARN : Reading media info...                                               
    13:48:54.480 INFO : [0x1]: Audio, aac (mp4a), 96 kb/s                                   
    13:49:07.684 INFO : Binary merging...                                                                       
    13:49:07.725 INFO : Decrypting...                                                                           
    13:49:42.372 INFO : Binary merging...                                                                       
    13:49:43.411 INFO : Decrypting...                                                                           
    13:49:48.814 WARN : Monster_Carp_S01E01_Japan.mp4
    13:49:48.814 WARN : Monster_Carp_S01E01_Japan.m4a
    13:49:48.814 WARN : Monster_Carp_S01E01_Japan.subs.srt
    13:49:48.814 WARN : Muxing to Monster_Carp_S01E01_Japan.MUX.mkv
    13:49:49.980 WARN : Cleaning files...
    13:49:50.073 WARN : Rename to Monster_Carp_S01E01_Japan.mkv
    13:49:50.073 INFO : Done
    [info] Monster_Carp_S01E01_Japan.mkv is in output
    Traceback (most recent call last):
      File "/home/user/Documents/har/itv.py", line 315, in <module>
        main()
      File "/home/user/Documents/har/itv.py", line 304, in main
        myITV.download(url)
      File "/home/user/Documents/har/itv.py", line 113, in download
        title, data = self.get_data(url)
      File "/home/user/Documents/har/itv.py", line 226, in get_data
        return title, r.json()
      File "/home/user/.local/lib/python3.10/site-packages/httpx/_models.py", line 755, in json
        return jsonlib.loads(self.content.decode(encoding), **kwargs)
      File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
        return _default_decoder.decode(s)
      File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
        raise JSONDecodeError("Expecting value", s, err.value) from None
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
    Originally Posted by stabbedbybrick View Post
    I've now run the script numerous times with Nord on the latest Linux Mint, and I can't reproduce your error. The only way I can get it is to use an invalid IP, so you might have connectivity issues or something similar. Which could explain why it does work sometimes.

    If you print out the status response and content from line 221, you'll see it more clearly.

    This is what you get if the IP is blocked, for example:

    Code:
    <Response [403 Forbidden]>
    b'<HTML><HEAD><TITLE>Error</TITLE></HEAD><BODY>\nAn error occurred while processing your request.<p>\nReference #219.85054917.1691745849.2177dbe8\n</BODY></HTML>\n'
    A TLS issue would give another error, I'm assuming.
    I'm also on Linux Mint. I get 403 when I'm not on the proxy; otherwise I get 200 all the time.

    // OK, so for some reason it does not let me download more than once. If I disconnect and connect again it works, so I have to download and then disconnect each time.
    The thing is, in the browser I can still access the site, so they don't ban the IP everywhere; they somehow restrict access for N_m3u8DL-RE after a download.

    // New tests: it works again without reconnecting, but only after some time, and just once. After that I get an error in the browser: "Not available. Sorry, this show isn't available right now." And if I wait a bit, it works again.
    So there is a limit between downloads.

    // Scratch that: now I could download 2 one after another before the JSON error, but the browser still works. Can't find a pattern. Last time I tried, it worked with the whole season, that's 4 episodes in a row.
    Last edited by Sawyer; 11th Aug 2023 at 07:20.
  9. After some testing, I can actually reproduce the error. Sometimes it's after 20+ requests, sometimes only after 2. But eventually, I'm met with a 403 until I change IP. And the problem seemingly lies with the httpx library being used in Angela's script.

    Whenever you intend to make several requests to a server, you want to create a session that retains a form of connection between requests so as not to overwhelm it. The Client in httpx does do that, but for whatever reason the server does not like it and blocks the IP relatively quickly. If I use the Requests library and set up a Session() instead, the script works every single time without halting. I literally just made 100+ requests in a short burst and all were <Response [200]>.

    I don't want to step on any toes and mess around with someone else's script, but it does seem to fix the issue.
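    For anyone wanting to try the same swap, the Session equivalent of the script's Client would look roughly like this (a sketch only; the header values are the ones the script already uses, and url stands in for whatever page or playlist is being requested):
    Code:
    import requests

    # requests Session with the same headers as the script's httpx Client
    session = requests.Session()
    session.headers.update({
        'authority': 'www.itv.com',
        'user-agent': 'Dalvik/2.9.8 (Linux; U; Android 9.9.2; ALE-L94 Build/NJHGGF)',
    })

    r = session.get(url)      # url = the page or playlist being requested
    print(r.status_code)
    The rest of the calls should carry over largely unchanged, since requests and httpx share the .get()/.post() method names.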
  10. Member
    Originally Posted by stabbedbybrick View Post
    After some testing, I can actually reproduce the error. Sometimes it's after 20+ requests, sometimes only after 2. But eventually, I'm met with a 403 until I change IP. And the problem seemingly lies with the httpx library being used in Angela's script.

    I don't want to step on any toes and mess around with someone else's script, but it does seem to fix the issue.
    at line 122, after "timeout = httpx.Timeout(10.0, connect=60.0)", add
    Code:
    limits = httpx.Limits(max_keepalive_connections=None, max_connections=None)
    then look for the line 'timeout=timeout,' about 6 lines further on...
    cls.client = Client(
        headers={
            'authority': 'www.itv.com',
            'user-agent': 'Dalvik/2.9.8 (Linux; U; Android 9.9.2; ALE-L94 Build/NJHGGF)',
        },
        timeout=timeout,

    And add this straight after, underneath and in line
    Code:
        limits=limits,
    See what that does.
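    Put together, the constructor should then end up looking something like this (just the two pieces above assembled in place):
    Code:
    limits = httpx.Limits(max_keepalive_connections=None, max_connections=None)

    cls.client = Client(
        headers={
            'authority': 'www.itv.com',
            'user-agent': 'Dalvik/2.9.8 (Linux; U; Android 9.9.2; ALE-L94 Build/NJHGGF)',
        },
        timeout=timeout,
        limits=limits,
    )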
    Last edited by A_n_g_e_l_a; 11th Aug 2023 at 08:24.
  11. Member
    Originally Posted by Sawyer View Post
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0).
    Errors like that may also indicate spurious characters in the clipboard paste. I suggest you first paste into gedit, or whatever text editor you use, to visually check, before copying them all back to the clipboard. Try to avoid leading and trailing spaces.
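    If you want the script itself to guard against that, something like this would strip stray whitespace and empty lines from whatever is read from the clipboard (a sketch; the script already uses pyperclip, so only the cleanup is new):
    Code:
    import pyperclip as PC

    # Read the clipboard and drop leading/trailing whitespace and blank lines
    urls = [u.strip() for u in PC.paste().splitlines() if u.strip()]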
  12. Originally Posted by A_n_g_e_l_a View Post
    Still get 403 after some requests. Sometimes after just a couple, sometimes it's 20+. I have basically zero experience with httpx, so I can't really give any helpful input on it either, but it's pretty interesting. I'm going to have to read up on the difference in how it works compared to Requests.
  13. Member
    Originally Posted by stabbedbybrick View Post

    Still get 403 after some requests. Sometimes after just a couple, sometimes it's 20+. I have basically zero experience with httpx, so I can't really give any helpful input on it either, but it's pretty interesting. I'm going to have to read up on the difference in how it works compared to Requests.
    Httpx times out and, unlike Requests, will not sit trying a connection for ever. It possibly might be taking longer than 10 seconds to make the connection, so the connection timeout could be upped, but it may not be httpx at all. I've no way of testing, so thanks for your input as always.
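    If it really were a slow connection, raising the timeouts in the script would look something like this (values are only examples):
    Code:
    import httpx

    # More generous read and connect timeouts than the current Timeout(10.0, connect=60.0)
    timeout = httpx.Timeout(30.0, connect=120.0)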
  14. As far as I can tell, the connection pooling with httpx and session is essentially the same. Yet, with httpx client, I will get blocked sooner or later. There are no timeouts, but an immediate block. Whereas with a session I can go on forever. I genuinely have no explanation for it.

    Also, if you're open to suggestions, you could loop over the NEXT_DATA and collect each episode as its own object and store it in a list. That way, you'll cut down significantly on page requests, since you only have to do it once per show instead of once per episode. You'll be able to access seasons and episodes with corresponding data without generating any URLs. It probably won't do anything for this particular issue, but it made a huge difference in my own programs, so I figured I'd mention it.
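    Roughly, the idea looks like this (a sketch built only around the seriesList/titles/playlistUrl fields the script already reads; any other per-episode fields would need checking against the real __NEXT_DATA__):
    Code:
    import json
    from selectolax.lexbor import LexborHTMLParser

    def collect_episodes(html: str) -> list:
        # Parse __NEXT_DATA__ once per show, then walk every series and its titles
        tree = LexborHTMLParser(html)
        data = json.loads(tree.root.css_first('#__NEXT_DATA__').text())
        episodes = []
        for series in data["props"]["pageProps"]["seriesList"]:
            for title in series["titles"]:
                episodes.append({"playlistUrl": title["playlistUrl"]})
        return episodes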
  15. Member
    Originally Posted by stabbedbybrick View Post
    As far as I can tell, the connection pooling with httpx and session is essentially the same. Yet, with httpx client, I will get blocked sooner or later. There are no timeouts, but an immediate block. Whereas with a session I can go on forever. I genuinely have no explanation for it.

    Also, if you're open to suggestions, you could loop over the NEXT_DATA and collect each episode as its own object and store it in a list. That way, you'll cut down significantly on page requests, since you only have to do it once per show instead of once per episode. You'll be able to access seasons and episodes with corresponding data without generating any URLs. It probably won't do anything for this particular issue, but it made a huge difference in my own programs, so I figured I'd mention it.
    NEXT_DATA seems to have grown in scope recently since ITVX improved its curation. I'd always found it incomplete when I looked, in that only the data for the current episode has everything you need, for example the mpd and subtitle link. The rest of the data is only partial, so you'd still need to do lookups for every programme link.

    I'm happy enough with it as it stands. For us in the UK (its intended audience), it works without issue. I have no idea why httpx, which is regarded as a better replacement for Requests, should apparently drop the connection. I still haven't discounted TLS levels. If Nord VPN has an older machine with TLS 1.2 rather than TLS 1.3 in the connection pathway, that would account for connections being dropped. I have met it before and solved the issue by making the underlying system use only TLS 1.2 for all connections. The fact that the error is random sort of suggests that a Nord VPN random route may indeed be the issue. The fact that a Linux machine is giving the problem sort of amplifies my thinking. Linux demands TLS 1.3 if that is in its configuration, whereas I believe Windows is happy to fall back to older protocols. But I'm not on secure ground here.
    Last edited by A_n_g_e_l_a; 11th Aug 2023 at 13:30.
  16. I tried other VPNs as well and got the same result. I'm stumped.

    And the DATA is the complete index of the show where you can fetch titles and playlists for every available episode. You'll need to GET request the playlist for each, but the initial parsing of the HTML only has to be done once.

    But it doesn't matter if you're happy with it. I tend to get carried away with details and optimization, and I sometimes forget other people aren't always as obsessive
  17. What I noticed is that it breaks if episodes are missing.
    Let's take this show: season 5 is missing and there are 7 seasons of 4 episodes each, so 28 episodes.
    If we enter the link ending in 0001 (which here is actually episode 2; episode one is 0004, they have ordered them wrong) and enter 28 episodes, then when season 4 ends at 0016 it breaks, because it goes to 0017 instead of 0021.
    Don't know if this is expected behavior or not, I just noticed it now.
  18. Member
    Originally Posted by Sawyer View Post
    What I noticed is that it breaks if episodes are missing.
    Let's take this show: season 5 is missing and there are 7 seasons of 4 episodes each, so 28 episodes.
    If we enter the link ending in 0001 (which here is actually episode 2; episode one is 0004, they have ordered them wrong) and enter 28 episodes, then when season 4 ends at 0016 it breaks, because it goes to 0017 instead of 0021.
    Don't know if this is expected behavior or not, I just noticed it now.
    Wow! I explain at the top of the script that doing a series from one link is a bit iffy. In the case where it doesn't work, you simply revert to either getting The Stream Detector to do the captures, or saving each video URL to a text editor and building a list. I, personally, don't find a click on each video an imposition at all. And while one series is downloading I can be preparing the next list.

    You are miles better off than you were, and yet 'thanks' has never entered your keyboard.
    Last edited by A_n_g_e_l_a; 12th Aug 2023 at 10:05. Reason: Tense changed. ' have removed'
  19. Member
    Originally Posted by stabbedbybrick View Post
    I tried other VPNs as well and got the same result. I'm stumped.

    And the DATA is the complete index of the show where you can fetch titles and playlists for every available episode. You'll need to GET request the playlist for each, but the initial parsing of the HTML only has to be done once.

    But it doesn't matter if you're happy with it. I tend to get carried away with details and optimization, and I sometimes forget other people aren't always as obsessive
    You started a small itch - you knew didn't you??
    Updated to parse for series.
  20. Yeah, that itch comes with the territory and never fully goes away
  21. Originally Posted by A_n_g_e_l_a View Post
    Wow! I explain at the top of the script that doing a series from one link is a bit iffy. In the case where it doesn't work, you simply revert to either getting The Stream Detector to do the captures, or saving each video URL to a text editor and building a list. I, personally, don't find a click on each video an imposition at all. And while one series is downloading I can be preparing the next list.

    You are miles better off than you were, and yet 'thanks' has never entered your keyboard.
    You'd better not do stuff just to get "thanks" from people over the internet. As you can see, you'll just get disappointed.
    If you share something, do it for yourself, not for others.
    Personally, if someone uses something I made, that's worth more than a "thank you".

    Sorry if I offended, and thank you.
  22. Help... I'm getting this with this script:

    C:\Users\Dannyboi\Desktop\My WEB-DL Tools\ITVX (1080p)>python bestITVX.py
         ____  ______  _   __  _  __
        /  _/ /_  __/ | | / / | |/_/
       _/ /    / /    | |/ / _>  <
      /___/   /_/     |___/ /_/|_|

    Press enter with PAGE urls in clipboard https://www.itv.com/watch/deep-fake-neighbour-wars/10a2895/10a2895a0001
    The URL list has 1 video(s)
    Deep_Fake_Neighbour_Wars_S01E01
    Traceback (most recent call last):
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\sre_parse.py", line 1051, in parse_template
    this = chr(ESCAPES[this][1])
    KeyError: '\\i'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\__main__.py", line 6, in <module>
    rv = cli(sys.argv[1:])
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\cli.py", line 100, in __call__
    self.main(argv)
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\cli.py", line 124, in main
    subs = SSAFile.from_file(infile, args.input_format, args.fps)
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\ssafile.py", line 152, in from_file
    impl.from_file(subs, fp, format_, fps=fps, **kwargs)
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\subrip.py", line 61, in from_file
    subs.events = [SSAEvent(start=start, end=end, text=prepare_text(lines))
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\subrip.py", line 61, in <listcomp>
    subs.events = [SSAEvent(start=start, end=end, text=prepare_text(lines))
    File "C:\Users\Dannyboi\AppData\Roaming\Python\Python39 \site-packages\pysubs2\subrip.py", line 51, in prepare_text
    s = re.sub(r"< *i *>", r"{\i1}", s)
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\re.py", line 210, in sub
    return _compile(pattern, flags).sub(repl, string, count)
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\re.py", line 327, in _subx
    template = _compile_repl(template, pattern)
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\re.py", line 318, in _compile_repl
    return sre_parse.parse_template(repl, pattern)
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\sre_parse.py", line 1054, in parse_template
    raise s.error('bad escape %s' % this, len(this))
    re.error: bad escape \i at position 1
    Using a remote CDM
    Keys found c665bd9086b145678da8ea5d046187c1:24377799b9f60b9b3575f6cb8a5b397d

    path empty or file not exists!

    Description:
    N_m3u8DL-RE (Beta version) 20230628

    Usage:
    N_m3u8DL-RE <input> [options]

    Arguments:
    <input> Input Url or File

    Options:
    --tmp-dir <tmp-dir> Set temporary file directory
    --save-dir <save-dir> Set output directory
    --save-name <save-name> Set output filename
    --base-url <base-url> Set BaseURL
    --thread-count <number> Set download thread count [default: 12]
    --download-retry-count <number> The number of retries when download segment error [default: 3]
    --auto-select Automatically selects the best tracks of all types [default: False]
    --skip-merge Skip segments merge [default: False]
    --skip-download Skip download [default: False]
    --check-segments-count Check if the actual number of segments downloaded matches the expected
    number [default: True]
    --binary-merge Binary merge [default: False]
    --del-after-done Delete temporary files when done [default: True]
    --no-date-info Date information is not written during muxing [default: False]
    --no-log Disable log file output [default: False]
    --write-meta-json Write meta json after parsed [default: True]
    --append-url-params Add Params of input Url to segments, useful for some websites, such as
    kakao.com [default: False]
    -mt, --concurrent-download Concurrently download the selected audio, video and subtitles [default:
    False]
    -H, --header <header> Pass custom header(s) to server, Example:
    -H "Cookie: mycookie" -H "User-Agent: iOS"
    --sub-only Select only subtitle tracks [default: False]
    --sub-format <SRT|VTT> Subtitle output format [default: SRT]
    --auto-subtitle-fix Automatically fix subtitles [default: True]
    --ffmpeg-binary-path <PATH> Full path to the ffmpeg binary, like C:\Tools\ffmpeg.exe
    --log-level <DEBUG|ERROR|INFO|OFF|WARN> Set log level [default: INFO]
    --ui-language <en-US|zh-CN|zh-TW> Set UI language
    --urlprocessor-args <urlprocessor-args> Give these arguments to the URL Processors.
    --key <key> Pass decryption key(s) to mp4decrypt/shaka-packager. format:
    --key KID1:KEY1 --key KID2:KEY2
    --key-text-file <key-text-file> Set the kid-key file, the program will search the KEY with KID from the
    file.(Very large file are not recommended)
    --decryption-binary-path <PATH> Full path to the tool used for MP4 decryption, like C:\Tools\mp4decrypt.exe
    --use-shaka-packager Use shaka-packager instead of mp4decrypt to decrypt [default: False]
    --mp4-real-time-decryption Decrypt MP4 segments in real time [default: False]
    -M, --mux-after-done <OPTIONS> When all works is done, try to mux the downloaded streams. Use "--morehelp
    mux-after-done" for more details
    --custom-hls-method <METHOD> Set HLS encryption method
    (AES_128|AES_128_ECB|CENC|CHACHA20|NONE|SAMPLE_AES |SAMPLE_AES_CTR|UNKNOWN)
    --custom-hls-key <FILE|HEX|BASE64> Set the HLS decryption key. Can be file, HEX or Base64
    --custom-hls-iv <FILE|HEX|BASE64> Set the HLS decryption iv. Can be file, HEX or Base64
    --use-system-proxy Use system default proxy [default: True]
    --custom-proxy <URL> Set web request proxy, like http://127.0.0.1:8888
    --custom-range <RANGE> Download only part of the segments. Use "--morehelp custom-range" for more
    details
    --task-start-at <yyyyMMddHHmmss> Task execution will not start before this time
    --live-perform-as-vod Download live streams as vod [default: False]
    --live-real-time-merge Real-time merge into file when recording live [default: False]
    --live-keep-segments Keep segments when recording a live (liveRealTimeMerge enabled) [default:
    True]
    --live-pipe-mux Real-time muxing to TS file through pipeline + ffmpeg (liveRealTimeMerge
    enabled) [default: False]
    --live-fix-vtt-by-audio Correct VTT sub by reading the start time of the audio file [default: False]
    --live-record-limit <HH:mms> Recording time limit when recording live
    --live-wait-time <SEC> Manually set the live playlist refresh interval
    --mux-import <OPTIONS> When MuxAfterDone enabled, allow to import local media files. Use
    "--morehelp mux-import" for more details
    -sv, --select-video <OPTIONS> Select video streams by regular expressions. Use "--morehelp select-video"
    for more details
    -sa, --select-audio <OPTIONS> Select audio streams by regular expressions. Use "--morehelp select-audio"
    for more details
    -ss, --select-subtitle <OPTIONS> Select subtitle streams by regular expressions. Use "--morehelp
    select-subtitle" for more details
    -dv, --drop-video <OPTIONS> Drop video streams by regular expressions.
    -da, --drop-audio <OPTIONS> Drop audio streams by regular expressions.
    -ds, --drop-subtitle <OPTIONS> Drop subtitle streams by regular expressions.
    --morehelp <OPTION> Set more help info about one option
    --version Show version information
    -?, -h, --help Show help and usage information



    [info] Deep_Fake_Neighbour_Wars_S01E01.mkv is in output

    C:\Users\Dannyboi\Desktop\My WEB-DL Tools\ITVX (1080p)>pause
    Press any key to continue . . .
  23. Member
    [Attachment 73267]

    Works fine. Still posting acres of rubbish Danny?
  24. Originally Posted by A_n_g_e_l_a View Post
    [Attachment 73267]

    Works fine. Still posting acres of rubbish Danny?
    NOT RUBBISH, as I copied everything you said on the other forum etc. If you read what the error is, you can fix it, yes?

    No, I'm getting this while running the script:

         ____  ______  _   __  _  __
        /  _/ /_  __/ | | / / | |/_/
       _/ /    / /    | |/ / _>  <
      /___/   /_/     |___/ /_/|_|

    Press enter with PAGE urls in clipboard https://www.itv.com/watch/deep-fake-neighbour-wars/10a2895/10a2895a0001
    The URL list has 1 video(s)
    Deep_Fake_Neighbour_Wars_S01E01
    Traceback (most recent call last):
    File "C:\Users\Dannyboi\AppData\Local\Programs\Python\P ython39\lib\sre_parse.py", line 1051, in parse_template
    this = chr(ESCAPES[this][1])
    KeyError: '\\i'

    During handling of the above exception, another exception occurred:
    It can get the keys, but then this:

    Keys found c665bd9086b145678da8ea5d046187c1:24377799b9f60b9b3575f6cb8a5b397d

    path empty or file not exists!
    Last edited by Dannyboi; 18th Aug 2023 at 13:42.
  25. Member
    Danny, if you feed rubbish into a script, you will get rubbish out. I strongly suspect that is what you are doing.

    There is no help for this software. Why would there be? It is a free resource for THOSE WITH THE ABILITY TO USE IT. Sort it out yourself.
  26. Originally Posted by A_n_g_e_l_a View Post
    Danny, if you feed rubbish into a script, you will get rubbish out. I strongly suspect that is what you are doing.

    There is no help for this software. Why would there be? It is a free resource for THOSE WITH THE ABILITY TO USE IT. Sort it out yourself.
    Whatever, Angela. Help is needed for scripts you made that don't work.

    This version of the script works (though it was updated not by you but by someone else); the script on page 1 DOESN'T WORK.

    # Angela 13:07:2023
    # reworked to match recent changes at ITVX
    # 2:08:2023 revision 2
    # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    # With grateful thanks to sk8ord13 for code
    # dealing with the remote CDM
    # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

    ## This program uses The Stream Detector to capture page URLs
    ## Stream Detector select 'options'
    ## in the box adjacent to 'user-defined-commands' enter --> %origin%
    ## In TSD window select copy stream url as User-Defined-Command 1

    ## In addition to not having to faff about with opening and saving text files
    ## this program downloads, converts and merges subtitles.

    ## added option for a sequence number to add to the videoname (this will be the order they
    ## are selected in TSD). It removes the chance of an overwrite of the video name when ITV
    ## uses a generic title like 'Inspector Morse' without a series or episode number.

    # @@@@@@@@@@@ IMPORTANT @@@@@@@@@@@@@@@@@@@@@
    ## subtitles need the pip install as below.
    ##
    ## pip install --pre ttconv

    # should you ever wish to run a convert subtitles routine from the command line:-
    ## tt convert -i <input .vtt file> -o <output .srt file>
    # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

    import re
    import requests
    import subprocess
    from base64 import b64encode
    from pathlib import Path
    import httpx
    from httpx import URL, Client
    from selectolax.lexbor import LexborHTMLParser
    import os
    import pyperclip as PC
    #from pywidevine.L3.cdm import deviceconfig
    #from pywidevine.L3.decrypt.wvdecryptcustom import WvDecrypt
    import pyfiglet as PF
    from termcolor import colored
    import json
    import shutil

    # GLOBALS
    OUT_PATH = Path('output')
    OUT_PATH.mkdir(exist_ok=True, parents=True)
    global count
    global SEQ
    global REMOTE

    #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    # There is a choice of CDM to use
    # local or remote;
    # yours or someone else's.
    # To use your local CDM in the WKS-KEYS
    # folder, set REMOTE=False.
    #
    REMOTE = True
    #
    # @@@@@@@@@@@@@@@@@@@@@@@@
    # NOTE
    # setting for an index number to preface the videoname, either True or False
    # If you want each video in your clipboard to be numbered by a preface to
    # the videoname, set the value of SEQ = True
    # This is useful for series without numbers but relies on the correct
    # order in the clipboard, so select video1, video2, etc. in sequence
    # @@@@@@@@@@@@@@@@@@@@@@@

    SEQ = False

    # Class configured as a singleton:
    # only one instance is created;
    # if the constructor is called again,
    # the original instance is returned
    class ITV:
        _instance = None

        def __init__(self):
            raise RuntimeError('Call instance() instead')

        @classmethod
        def instance(cls):
            if cls._instance is None:
                #print('Creating new instance of ITV Class')
                cls._instance = cls.__new__(cls)
                cls.host = 'itvpnpdotcom.blue.content.itv.com'
                timeout = httpx.Timeout(10.0, connect=60.0)

                cls.client = Client(
                    headers={
                        'authority': 'www.itv.com',
                        'user-agent': 'Dalvik/2.9.8 (Linux; U; Android 9.9.2; ALE-L94 Build/NJHGGF)',
                    },
                    timeout=timeout,
                )
            return cls._instance

        def download(self, url: str) -> None:
            global count
            title, data = self.get_data(url)

            video = data['Playlist']['Video']
            media = video['MediaFiles']
            illegals = "*'%$!(),.:;"
            replacements = {
                ' Episode ': 'E',
                ' Series ': '_S',
                'otherepisodes': '_extra',
                'ITVX': '',
                ' ': '_',
                '&': 'and',
                '?': '',
            }
            # replace extraneous title data and 'illegal' characters
            videoname = ''.join(c for c in title if c.isprintable() and c not in illegals)
            # standardize and compact videoname
            for rep in replacements:
                videoname = videoname.replace(rep, replacements[rep])
            videoname = re.sub(r"(\d+)", pad_number, videoname).lstrip('_').rstrip('_')
            result = re.search(r"(^.*S\d+)_(E\d*.*)", videoname)
            try:
                pre = result.group(1)
                post = result.group(2)
                videoname = pre+post
            except:
                pass

            print(videoname)
            subs_url = video['Subtitles'][0]['Href']
            subs = requests.get(subs_url)
            f = open(f"{videoname}.subs.vtt", "w")
            subtitles = subs.text
            f.write(subtitles)
            f.close()

            # convert subtitles
            #os.system(f"tt convert -i {videoname}.subs.vtt -o {videoname}.subs.srt > /dev/null 2>&1")
            #os.system(f"tt convert -i {videoname}.subs.vtt -o {videoname}.subs.srt ")
            global SEQ  # prepend a sequence number to anonymous videos
            if SEQ:
                myvideoname = format(count, "02d") + '_' + videoname
            else:
                myvideoname = videoname
            mpd_url = [f'{video["Base"]}{y}' for x in media if (y := URL(x['Href'])).path.endswith('.mpd')][0]
            lic_url = [x['KeyServiceUrl'] for x in media][0]

            pssh = self._get_pssh(mpd_url)
            key = self._get_key(pssh, lic_url)
            temp = URL(mpd_url).params['hdnea']
            temp = temp.replace('nohubplus', 'hdntl,nohubplus')
            cookie = f"cookie: {re.sub(r'^.*(?<=exp=)', 'hdntl=exp=', temp)}"

            m3u8dl = 'N_m3u8DL-RE'  # windows rename with .exe added
            subprocess.run([
                m3u8dl,
                mpd_url,
                '--append-url-params',
                '--header',
                cookie,
                '--header',
                f'host: {self.host}',
                '--header',
                f'user-agent: {self.client.headers["user-agent"]}',
                '--auto-select',
                '--save-name',
                myvideoname,
                '--save-dir',
                './',
                '--tmp-dir',
                './',
                '-mt',
                '--key',
                key,
                '-M',
                'format=mp4',
                '--no-log'
            ])

            command = [
                "mkvmerge",
                "-q",
                f"{myvideoname}.mp4",
                #f"{videoname}.subs.srt",
                f"{videoname}.subs.vtt",
                "-o",
                f"{myvideoname}.mkv"
            ]
            subprocess.run(command)
            shutil.move(f"{myvideoname}.mkv", f"{OUT_PATH}")
            #os.system(f"rm {myvideoname}.mp4 {videoname}.subs.vtt {videoname}.subs.srt")
            os.system(f"rm {myvideoname}.mp4 {videoname}.subs.vtt ")
            count = count-1

        def get_data(self, url: str) -> tuple:
            r = self.client.get(url)
            tree = LexborHTMLParser(r.text)
            jsondata = tree.root.css_first('#__NEXT_DATA__').text()
            myjson = json.loads(jsondata)
            title = myjson["props"]["pageProps"]["programme"]["title"]
            try:
                extendtitle = myjson["props"]["pageProps"]["episode"]["contentInfo"]
                title = f"{title}_{extendtitle}"
            except:
                pass
            try:
                magni_url = myjson["props"]["pageProps"]["episode"]["playlistUrl"]
            except:
                magni_url = myjson["props"]["pageProps"]["seriesList"][0]["titles"][0]["playlistUrl"]

            features = ['mpeg-dash', 'widevine', 'outband-webvtt', 'hd', 'single-track']
            payload = {
                'client': {'id': 'browser'},
                'variantAvailability': {
                    'featureset': {'min': features, 'max': features},
                    'platformTag': 'dotcom',
                }
            }
            r = self.client.post(magni_url, json=payload)
            return title, r.json()

        # REMOTE CDM or Local CDM
        def _get_key(self, pssh: str, lic_url: str, cert_b64=None) -> str:
            if REMOTE:
                print("Using a remote CDM")
                headers = {
                    'accept': 'application/json, text/plain, */*',
                    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36',
                }
                json_data = {
                    'password': 'password',
                    'license': lic_url,
                    'headers': 'Connection: keep-alive',
                    'pssh': pssh,
                    'buildInfo': '',
                    'cache': True,
                }

                r = self.client.post('https://wvclone.fly.dev/wv', headers=headers, json=json_data).text
                m = re.search(r">(.{32}:.{32})<", r)
                if m:
                    key = m.group(1)
                    print(f"Keys found {key}\n")
                    return key.lstrip()
            else:
                print("Using CDM on this machine")
                wvdecrypt = WvDecrypt(init_data_b64=pssh, cert_data_b64=cert_b64, device=deviceconfig.device_android_generic)
                widevine_license = httpx.post(url=lic_url, data=wvdecrypt.get_challenge(), headers=None)
                license_b64 = b64encode(widevine_license.content)
                wvdecrypt.update_license(license_b64)
                Correct, keyswvdecrypt = wvdecrypt.start_process()
                if Correct:
                    mykeys = ''
                    for key in keyswvdecrypt:
                        mykeys += key + ' '
                    print(f"Keys found {mykeys}\n")
                    return mykeys

        def _get_pssh(self, mpd_url: str) -> str:
            r = self.client.get(mpd_url)
            kid = (
                LexborHTMLParser(r.text)
                .css_first('ContentProtection')
                .attributes.get('cenc:default_kid')
                .replace('-', '')
            )
            s = f'000000387073736800000000edef8ba979d64acea3c827dcd51d21ed000000181210{kid}48e3dc959b06'
            return b64encode(bytes.fromhex(s)).decode()

    # add leading zero to series or episode
    def pad_number(match):
        number = int(match.group(1))
        return format(number, "02d")

    def main() -> int:
        input("Press enter with PAGE urls in clipboard ")
        urls = PC.paste().split('\n')
        global count
        count = len(urls)
        print(f"The URL list has {count} video(s)")
        myITV = ITV.instance()
        for url in urls:
            url = url.encode('ascii', 'ignore').decode()
            myITV.download(url)
        return 0

    if __name__ == "__main__":
        title = PF.figlet_format(' I T V X ', font='smslant')
        print(colored(title, 'green'))
        main()
        exit(0)
    Last edited by Dannyboi; 18th Aug 2023 at 14:10.
  27. Member
    Correction Danny, the script doesn't work for you. But it appears to work for everyone else - I wonder why that should be? Remember, you were the guy asking what the clipboard was this morning.
    Regrettably, some people cannot be helped, and my life is too short to try.

    You have got the latest version listed on page 1? And you did check before you started filling this thread with garbage? You did, didn't you?
    Last edited by A_n_g_e_l_a; 18th Aug 2023 at 14:55.
  28. Yes, I've just tried using Python 3.8 on a Win7 VM; it works fine, but you have to follow the SD instructions to the letter, as merely right-clicking on the .mpd in the list gave me an error with the script. So, as above, you must have the correct string format in the clipboard for it to do its stuff.
  29. Originally Posted by A_n_g_e_l_a View Post
    Correction Danny, the script doesn't work for you. But it appears to work for everyone else - I wonder why that should be? Remember, you were the guy asking what the clipboard was this morning.
    Regrettably, some people cannot be helped, and my life is too short to try.

    You have got the latest version listed on page 1? And you did check before you started filling this thread with garbage? You did, didn't you?
    yea but no but ya im always right
  30. Originally Posted by Dannyboi View Post
    im always right
    I said this to my best friend once.
    To this day he jokes with me about how wrong that statement is.


