VideoHelp Forum




  1. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Cool !

    Originally Posted by _Al_ View Post
    Another issue was that using frame counts for transitions, or referencing lengths of any sort in frames, was not a good idea, because if fpsnum/fpsden changes it is all wrong. So for transition lengths, image durations and audio length references, time in milliseconds is used as the reference. pydub also works in milliseconds, so it all just works.
    OK. I used frame numbers after converting all incoming clips (a) from VFR to CFR at the target FPS and (b) from off-target FPS to the target FPS, for the same reason. Then ms = (frames / target_fps) * 1000 and it all falls into place, since there's only one fps. I'm still "iffy" about the VFR-to-CFR methods; I need to look into them more.
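    Just to spell out the bookkeeping I mean (a minimal sketch; the single fixed target_fps is an assumption of my own workflow, not anything from your package):
    Code:
    from fractions import Fraction

    TARGET_FPS = Fraction(25, 1)  # everything has already been conformed to this one rate

    def frames_to_ms(frames, fps=TARGET_FPS):
        # with only one fps in play, frame counts convert to milliseconds exactly once
        return round(frames * 1000 / fps)

    def ms_to_frames(ms, fps=TARGET_FPS):
        return round(ms * fps / 1000)

    print(frames_to_ms(50))    # 2000 ms at 25 fps
    print(ms_to_frames(2000))  # 50 frames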

    Don't rely on ChatGPT to generate working code. I had a rather extensive play. It generates good ideas, but its code has lots of subtle and unsubtle bugs and red herrings, although some of it could work if you're lucky. For example, with pydub it generated incorrect calculations for frame-based insertion times time after time, used functions incorrectly (they did not do what it thought they did), used parameters that did not exist in functions it found, etc. Very often I'd specify a change and it made it while also forgetting an essential change it had made only minutes earlier. I asked it to simulate some calculations and it did try, but in one case got them very wrong, and even when this was pointed out it had trouble fixing them bit by tiny bit.
    As an ideas generator, cool; for generating working code, well, nice try but no cigar. Don't let it loose on coding nuclear power plant SCADA software just yet.

    Originally Posted by _Al_ View Post
    Another issue was with the vapoursynth bas source: it can have problems with some weird audio from some old camcorders etc., but ffmpeg will just load it. So pydub is used for video paths, and if there is a problem it falls back to making WAV audio using ffmpeg.
    Nice, I'm keen to see that. I had thought pydub used ffmpeg under the covers. Getting excited to see how you go about it.
    I use a pre-prepared, large-ish background audio file of free instrumentals etc. appended together into an .aac, which pydub should open and "overlay" small audios onto, and hopefully should create an .aac (I have not tested that bit).

    Originally Posted by _Al_ View Post
    Another thing to watch for was making sure the audio and video for a segment are exactly the same length. It extends some audio tracks with silence (that video surveillance footage of yours, I guess) because the audio was much shorter. This way it will not go out of sync (video and audio over the total length).
    Yes, for each segment I trimmed/padded it with silence as required before "overlaying" it onto the background with pydub, since I'd thought the overlay method kept the underlying duration as-is; I need to check that ... and will also happily look at your code.
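    In pydub terms the idea is roughly this (a sketch only; the file names, the 5000 ms clip duration and the 120 s insertion point are made-up placeholders, and since the .aac export is the untested bit this writes WAV):
    Code:
    from pydub import AudioSegment

    def fit_to_length(seg, target_ms):
        # trim if too long, pad with silence if too short, so the audio matches the video duration exactly
        if len(seg) > target_ms:
            return seg[:target_ms]
        return seg + AudioSegment.silent(duration=target_ms - len(seg))

    background = AudioSegment.from_file("background.aac")   # pre-prepared appended instrumentals
    clip_audio = AudioSegment.from_file("clip0001.mp4")     # pydub decodes via ffmpeg
    clip_audio = fit_to_length(clip_audio, 5000)            # this segment's video runs for 5000 ms

    # overlay() returns a segment the same length as the base, so the background duration is preserved
    mixed = background.overlay(clip_audio, position=120_000)  # drop the clip audio in at 120 s
    mixed.export("mixed.wav", format="wav")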

    Originally Posted by _Al_ View Post
    ... make it on the need bases, gradually ... So audio can be run at least after that, then muxed together.
    For video, I kludged chunks of 200 or so pics/clips (getting circa 23 fps) into temporary intermediate FFV1 files (oh, the disk space used!) and later transcoded those part-slideshow videos together ("-f concat" input, taking care of pts/dts issues at the same time) with ffmpeg into the final h.264 video; the concat step is sketched just below. I haven't checked whether this maintains colour metadata etc.
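    For reference, that concat step looks roughly like this when driven from Python (a sketch only; the chunk naming, frame rate and encoder choice are placeholders for whatever the run actually produced):
    Code:
    import subprocess
    from pathlib import Path

    # FFV1 intermediates written earlier, one per chunk of ~200 pics/clips, already in playback order
    chunks = sorted(Path("work").glob("part_*.mkv"))

    # the concat demuxer reads a small text file listing the inputs
    listing = Path("work") / "concat.txt"
    listing.write_text("".join(f"file '{p.as_posix()}'\n" for p in chunks), encoding="utf-8")

    # -fflags +genpts regenerates missing pts on the concatenated input (one way of handling the pts/dts issues)
    subprocess.run([
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-fflags", "+genpts", "-i", str(listing),
        "-c:v", "libx264", "-pix_fmt", "yuv420p", "-r", "25",
        "final_slideshow.mp4",
    ], check=True)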
    My audio handling sounds similar to yours, but different in the details.
    I don't touch audio in vapoursynth at all; since the video is parsed first, we know each clip's frame insertion point within its chunk, and from that we extrapolate an insertion point aligned with the final video based on the frame numbers in the chunks, since the target_fps is fixed.
    Last edited by hydra3333; 5th Jun 2023 at 03:51.
  2. Member
    Join Date
    Jun 2022
    Location
    Dublin
    Originally Posted by hydra3333 View Post
    Hello.

    I have a large number of old family .jpg pics with arbitrary filenames (although ordered) which I'd like to turn into a fast video slideshow at say 1080p (most TVs have this) with 1s/2s frame display duration ... or even dabble with 2160p.

    The thing is the .jpg's will have arbitrary dimensions (different cameras, settings) and so need to be resized (with some "quality" resizer) whilst maintaining aspect ratio during the process. Famous last words subject to change: speed isn't necessarily an issue.

    Unfortunately, I am also clueless about what (if anything) to do about colourspace conversions for this case.
    All I know about the .jpg files is that they are JPEGs, some old, some new, some "landscape", some "portrait".
    I guess the result would need to be Rec.709, but how to ensure it safely gets there prior to encoding is a question.

    So, could some kind souls please provide suggestions on how to use ffmpeg to
    1. do the "quality" resizing from arbitrarily dimensioned .jpg inputs whilst maintaining aspect ratios
    2. do any necessary colourspace conversions to ensure input to the h264_nvenc encoder is Rec.709 (or suggest a better alternative)

    Thanks !

    Context:
    have seen https://trac.ffmpeg.org/wiki/Slideshow
    have an nvidia "2060 Super" with an AMD 3900X cpu
    have an ffmpeg build which accepts vapoursynth (or even avisynth) input, if that helps
    prefer to use nvidia's h264_nvenc gpu encoding, probably with parameters including something like this (once I figure out how to force every frame to be an I-frame)
    Code:
    -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 1 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 0 -b_ref_mode:v 0 -rc:v vbr -cq:v 0 -b:v %bitrate_target% -minrate:v %bitrate_min% -maxrate:v %bitrate_max% -bufsize %bitrate_target% -profile:v high -level 5.2 -movflags +faststart+write_colr
    Have you considered using an NLE instead of ffmpeg to create the slideshows for each of the sets of folders of video clips and stills? Where the frame rates of the video clips are too far apart to combine in a single slideshow (25 to 30, or 30 to 25 fps you may get away with), use more than a single slideshow per folder.

    Within the NLE you can use pan, scan and cropping of stills as required to output a slideshow video file. Add music to the stills sections as required. Most modern NLEs should handle VFR to CFR, or do it before import. Of course it means a fairly non-automated, more hands-on process; it depends on how much time you have to do it all.
    Last edited by JN-; 5th Jun 2023 at 12:14.
  3. OOoooh, don't take our toy away ....
    Yes, sure, an NLE is fast and quick, though not fully automated. What comes to mind as handled by this processing, as opposed to an NLE:
    - automated custom transitions: image-video, video-image, image-image, video-video
    - automatic 601 to 709 colorspace conversion if the output is 709, or vice versa
    - specify an upscaler or downscaler (any code available in Python), meaning whatever suits, based on the input
    - deinterlacers can depend on the source; right now SD uses QTGMC and higher resolutions use Bwdif, but if QTGMC is too slow, or sources are known to be mostly bad, the deinterlacer could also be selected based on the video extension, which is a good idea too
    - an NLE tends to hiccup with weird audio, maybe video too; here, using ffmpeg for audio and specifying a selected source video plugin per extension works more reliably (think early or cheap camera videos)

    An NLE can add music comfortably, yes, no problem, but that could be done on the audio output here as well, with the audio exported as WAV.
    Manual pan and cropping is editing, which is a different category altogether; that's editing a video.
  4. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Originally Posted by JN- View Post
    Have you considered using an NLE instead of ffmpeg to create the slideshows for each of the sets of folders of video clips and stills? ...
    Thank you, yes I have, and tried one (Nero).
    With over 28,000 files in a variety of formats/codecs across hundreds of folders ... not practically doable.
    A thought: when trying to merge video clips of different framerates by forcing them to assume the same framerate without "transcoding", yes, that is doable (I tried it), however the audio will go out of sync ... e.g. phone cameras are a pain in that they generally produce VFR, sometimes with odd framerates like 16.1 or 31.4 fps (I'm in PAL 25 fps land).
    I'm not sure what a free NLE would do with VFR and a variety of CFR framerates; I suspect less than agreeable things.

    Thanks for the thought, though.
  5. Member
    Join Date
    Jun 2022
    Location
    Dublin
    You're welcome.
  6. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    OK.

    I usually run stuff in a (free) Windows Sandbox that comes with Windows 10/11, since it is dead easy.

    Provided you have enabled it in Windows settings, the web tells me you can stoke one up easily, and when you close it everything is automatically and permanently deleted.
    One could have a read-write host folder mapped and copy everything you want to keep into that before closing the sandbox.
    Be prepared to have (lots of) space used on your C: drive if/when you run one, I'm not sure how to tell WSB to run on another disk, only how to map a folder.

    Anyway, for posterity, and just in case someone else finds it useful to have an example to work from for other stuff:

    Here are the 2 files you need.
    You will need to edit both files, as they contain matching folder mappings, host <--> sandbox ... you can delete or modify bits in them to suit yourself.
    There are some dependencies within the files which you must delete or change (or comment out) as well.

    The 2 files do everything to start it:
    AI_sandbox_v01.wsb - double click on this to stoke up a Sandbox with the specified settings and host <--> sandbox folder mappings
    AI_Setup_v01.bat - this is auto-run once the sandbox is fired up, it creates desktop icons and handy files, etc
    Both of these files currently have a HARD dependency on being located on the host in C:\SOFTWARE\WindowsSandbox

    AI_Setup_v01.bat also creates 2 .bat files on the sandbox desktop:

    000-INIT.bat
    once the sandbox has finished firing up, double-click this to copy files etc.
    111-SAVE_AI.bat
    once you have finished playing, if you have write-enabled the correct mapped folder in the .wsb, this will save stuff back to it.

    There are dependencies in AI_Setup_v01.bat which you can comment out, and mappings in AI_sandbox_v01.wsb which you can delete (.wsb files crash on comments, so they cannot simply be commented out).
    C:\SOFTWARE must exist
    C:\SOFTWARE\WindowsSandbox must exist and contain the above 2 files
    C:\SOFTWARE\NPP must exist and contain notepad++ installation, i.e. notepad++.exe
    And the other folder mappings and stuff you see in both those 2 files.

    Here's the content of those files.
    1. AI_sandbox_v01.wsb
    Code:
    <Configuration>
      <VGpu>Enable</VGpu>
      <Networking>Default</Networking>
      <PrinterRedirection>Enable</PrinterRedirection>
      <ClipboardRedirection>Default</ClipboardRedirection>
      <MappedFolders>
        <MappedFolder>
          <HostFolder>C:\software</HostFolder>
          <SandboxFolder>C:\host_software</SandboxFolder>
          <ReadOnly>true</ReadOnly>
        </MappedFolder>
        <MappedFolder>
          <HostFolder>C:\software\WindowsSandbox</HostFolder>
          <SandboxFolder>C:\Users\WDAGUtilityAccount\Desktop\WindowsSandbox</SandboxFolder>
          <ReadOnly>true</ReadOnly>
        </MappedFolder>
        <MappedFolder>
          <HostFolder>D:\ssTEST\_AI_2023.06.05\AI</HostFolder>
          <SandboxFolder>C:\HOST_AI</SandboxFolder>
          <ReadOnly>true</ReadOnly>
        </MappedFolder>
        <MappedFolder>
          <HostFolder>D:\ssTEST\_AI_2023.06.05\000-tasmania-renamed</HostFolder>
          <SandboxFolder>C:\HOST_000-tasmania-renamed</SandboxFolder>
          <ReadOnly>true</ReadOnly>
        </MappedFolder>
      </MappedFolders>
      <LogonCommand>
        <Command>C:\host_software\WindowsSandbox\AI_Setup_v01.bat</Command>
      </LogonCommand>
    </Configuration>
    2. AI_Setup_v01.bat
    Code:
    @echo on
    mkdir C:\AI
    mkdir C:\000-tasmania-renamed
    mkdir C:\TEMP
    
    REM Create an initialization .bat to run after the sandbox has fired up, for the user to double-click on in the sandbox
    set "f=%USERPROFILE%\Desktop\000-INIT.bat"
    DEL "%f%">NUL 2>&1
    echo @ECHO ON >> "%f%"
    echo mkdir C:\AI >> "%f%"
    echo mkdir C:\000-tasmania-renamed\ >> "%f%"
    echo mkdir C:\TEMP >> "%f%"
    echo CD C:\TEMP >> "%f%"
    REM echo copy /Y C:\host_software\wget\wget.exe C:\AI\ >> "%f%"
    echo xcopy C:\HOST_AI\*.* C:\AI\ /e /v /f /h /r /y >> "%f%"
    echo xcopy C:\HOST_000-tasmania-renamed\*.* C:\000-tasmania-renamed\ /e /v /f /h /r /y >> "%f%"
    echo pause >> "%f%"
    echo goto :eof >> "%f%"
    
    REM Create a .bat to save any .py .vpy .bat results of running AI back in the sandbox system onto the host system
    REM won't work for the moment, as we set C:\AI to readonly 
    set "f=%USERPROFILE%\Desktop\111-SAVE_AI.bat"
    DEL "%f%">NUL 2>&1
    echo @ECHO ON >> "%f%"
    echo REM THE FOLLOWING IS TO SAVE AI .bat and .vpy back to the non-sandbox host>> "%f%"
    echo copy /Y C:\AI\*.bat  C:\HOST_AI\ >> "%f%"
    echo copy /Y C:\AI\*.py   C:\HOST_AI\ >> "%f%"
    echo copy /Y C:\AI\*.vpy  C:\HOST_AI\ >> "%f%"
    echo pause >> "%f%"
    echo goto :eof >> "%f%"
    
    REM Create a link on the sandbox desktop which is mapped to the read-only C:\SOFTWARE folder on the host system
    REM which contains our set of downloaded software that we use all the time. ie make it available in the sandbox.
    REM We rely on this for stuff like notepad++ ... if C:\SOFTWARE or any other mapped folders do not exist on the host, the sandbox won't start
    set create_shortcut_SCRIPT="%TEMP%\create_HOST_software_desktop_link_%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"
    del %create_shortcut_SCRIPT%
    echo Set oWS = WScript.CreateObject("WScript.Shell") >> %create_shortcut_SCRIPT%
    echo sLinkFile = "%USERPROFILE%\Desktop\HOST software.lnk" >> %create_shortcut_SCRIPT%
    echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %create_shortcut_SCRIPT%
    echo oLink.TargetPath = "C:\host_software\" >> %create_shortcut_SCRIPT%
    echo oLink.Arguments = "" >> %create_shortcut_SCRIPT%
    echo oLink.Description = "HOST software" >> %create_shortcut_SCRIPT%
    echo 'oLink.HotKey = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.IconLocation = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WindowStyle = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WorkingDirectory = "" >> %create_shortcut_SCRIPT%
    echo oLink.Save >> %create_shortcut_SCRIPT%
    cscript /nologo %create_shortcut_SCRIPT%
    del %create_shortcut_SCRIPT%
    
    REM Create a link on the sandbox desktop which is mapped to our readonly D:\ssTEST\_AI_2023.06.05\AI folder on the host system
    REM This is where we originally downloaded and extracted AI's lovely software on the host
    set create_shortcut_SCRIPT="%TEMP%\create_HOST_AI_desktop_link_%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"
    del %create_shortcut_SCRIPT%
    echo Set oWS = WScript.CreateObject("WScript.Shell") >> %create_shortcut_SCRIPT%
    echo sLinkFile = "%USERPROFILE%\Desktop\HOST AI.lnk" >> %create_shortcut_SCRIPT%
    echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %create_shortcut_SCRIPT%
    echo oLink.TargetPath = "C:\HOST_AI\" >> %create_shortcut_SCRIPT%
    echo oLink.Arguments = "" >> %create_shortcut_SCRIPT%
    echo oLink.Description = "HOST AI" >> %create_shortcut_SCRIPT%
    echo 'oLink.HotKey = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.IconLocation = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WindowStyle = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WorkingDirectory = "" >> %create_shortcut_SCRIPT%
    echo oLink.Save >> %create_shortcut_SCRIPT%
    cscript /nologo %create_shortcut_SCRIPT%
    del %create_shortcut_SCRIPT%
    
    REM Create a link on the sandbox desktop which is mapped to our read-only D:\ssTEST\_AI_2023.06.05\000-tasmania-renamed folder on the host system
    REM This is where our original images/videos are
    set create_shortcut_SCRIPT="%TEMP%\create_HOST_000-tasmania-renamed_desktop_link_%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"
    del %create_shortcut_SCRIPT%
    echo Set oWS = WScript.CreateObject("WScript.Shell") >> %create_shortcut_SCRIPT%
    echo sLinkFile = "%USERPROFILE%\Desktop\HOST_000-tasmania-renamed.lnk" >> %create_shortcut_SCRIPT%
    echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %create_shortcut_SCRIPT%
    echo oLink.TargetPath = "C:\HOST_000-tasmania-renamed\" >> %create_shortcut_SCRIPT%
    echo oLink.Arguments = "" >> %create_shortcut_SCRIPT%
    echo oLink.Description = "HOST_000-tasmania-renamed" >> %create_shortcut_SCRIPT%
    echo 'oLink.HotKey = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.IconLocation = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WindowStyle = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WorkingDirectory = "" >> %create_shortcut_SCRIPT%
    echo oLink.Save >> %create_shortcut_SCRIPT%
    cscript /nologo %create_shortcut_SCRIPT%
    del %create_shortcut_SCRIPT%
    
    REM Create a link on the sandbox desktop which is mapped to C:\AI in the local sandbox system
    REM This is where our sandbox copies of original folders of AI software is
    set create_shortcut_SCRIPT="%TEMP%\create_AI_desktop_link_%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"
    del %create_shortcut_SCRIPT%
    echo Set oWS = WScript.CreateObject("WScript.Shell") >> %create_shortcut_SCRIPT%
    echo sLinkFile = "%USERPROFILE%\Desktop\AI.lnk" >> %create_shortcut_SCRIPT%
    echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %create_shortcut_SCRIPT%
    echo oLink.TargetPath = "C:\AI\" >> %create_shortcut_SCRIPT%
    echo oLink.Arguments = "" >> %create_shortcut_SCRIPT%
    echo oLink.Description = "AI" >> %create_shortcut_SCRIPT%
    echo 'oLink.HotKey = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.IconLocation = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WindowStyle = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WorkingDirectory = "" >> %create_shortcut_SCRIPT%
    echo oLink.Save >> %create_shortcut_SCRIPT%
    cscript /nologo %create_shortcut_SCRIPT%
    del %create_shortcut_SCRIPT%
    
    REM Create a link on the sandbox desktop which is mapped to "C:\000-tasmania-renamed in the local sandbox system
    REM This is where our sandbox copies of original folders of images are
    set create_shortcut_SCRIPT="%TEMP%\create_000-tasmania-renamed_desktop_link_%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"
    del %create_shortcut_SCRIPT%
    echo Set oWS = WScript.CreateObject("WScript.Shell") >> %create_shortcut_SCRIPT%
    echo sLinkFile = "%USERPROFILE%\Desktop\000-tasmania-renamed.lnk" >> %create_shortcut_SCRIPT%
    echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %create_shortcut_SCRIPT%
    echo oLink.TargetPath = "C:\000-tasmania-renamed\" >> %create_shortcut_SCRIPT%
    echo oLink.Arguments = "" >> %create_shortcut_SCRIPT%
    echo oLink.Description = "000-tasmania-renamed" >> %create_shortcut_SCRIPT%
    echo 'oLink.HotKey = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.IconLocation = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WindowStyle = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WorkingDirectory = "" >> %create_shortcut_SCRIPT%
    echo oLink.Save >> %create_shortcut_SCRIPT%
    cscript /nologo %create_shortcut_SCRIPT%
    del %create_shortcut_SCRIPT%
    
    REM Create a link on the sandbox desktop which is mapped to a local TEMP folder in the local sandbox system
    set create_shortcut_SCRIPT="%TEMP%\create_TEMP_desktop_link_%RANDOM%-%RANDOM%-%RANDOM%-%RANDOM%.vbs"
    del %create_shortcut_SCRIPT%
    echo Set oWS = WScript.CreateObject("WScript.Shell") >> %create_shortcut_SCRIPT%
    echo sLinkFile = "%USERPROFILE%\Desktop\TEMP.lnk" >> %create_shortcut_SCRIPT%
    echo Set oLink = oWS.CreateShortcut(sLinkFile) >> %create_shortcut_SCRIPT%
    echo oLink.TargetPath = "C:\TEMP\" >> %create_shortcut_SCRIPT%
    echo oLink.Arguments = "" >> %create_shortcut_SCRIPT%
    echo oLink.Description = "TEMP" >> %create_shortcut_SCRIPT%
    echo 'oLink.HotKey = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.IconLocation = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WindowStyle = "" >> %create_shortcut_SCRIPT%
    echo 'oLink.WorkingDirectory = "" >> %create_shortcut_SCRIPT%
    echo oLink.Save >> %create_shortcut_SCRIPT%
    cscript /nologo %create_shortcut_SCRIPT%
    del %create_shortcut_SCRIPT%
    
    REM change explorer View settings to be as we like them
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V AlwaysShowMenus /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V SeparateProcess /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V NavPaneExpandToCurrentFolder /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V NavPaneShowAllFolders /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V HideFileExt /T REG_DWORD /D 00000000 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V Hidden /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V ShowSuperHidden /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V ShowEncryptCompressedColor /T REG_DWORD /D 00000001 /F
    REG Add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced /V ShowStatusBar /T REG_DWORD /D 00000001 /F
    
    REM add "Edit with Notepad++" to right click context pop-up
    REM ... Notepad++ is located in the host's SOFTWARE\NPP folder
    set fnpp="C:\TEMP\EDIT_WITH_NPP.REG"
    del %fnpp%
    ECHO Windows Registry Editor Version 5.00                     > %fnpp%
    ECHO.                                                        >> %fnpp%
    ECHO [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced] >> %fnpp%
    ECHO "UseCompactMode"=dword:00000001 >> %fnpp%
    ECHO [HKEY_CLASSES_ROOT\*\shell\Edit with Notepad++]         >> %fnpp%
    REM ECHO "Icon"="C:\\SOFTWARE\\NPP\\notepad++.exe"
    ECHO "Icon"="C:\\HOST_SOFTWARE\\NPP\\notepad++.exe"               >> %fnpp%
    ECHO [HKEY_CLASSES_ROOT\*\shell\Edit with Notepad++\command] >> %fnpp%
    REM ECHO @="\"C:\\SOFTWARE\\NPP\\notepad++.exe\" \"%%1\""
    ECHO @="\"C:\\HOST_SOFTWARE\\NPP\\notepad++.exe\" \"%%1\""        >> %fnpp%
    regedit /s %fnpp%
    
    REM restart explorer so that everything appears, including on the desktop
    taskkill /f /im explorer.exe
    start explorer.exe
    
    REM start an explorer window at the specified directory
    explorer.exe "C:\AI\"
    
    REM start an explorer window at the specified directory
    explorer.exe "C:\000-tasmania-renamed\"
    
    goto :eof
  7. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Apologies _Al_, I am a bit up against it ... real life intervenes, and I have just been focussing on trying to get stuff done for a time-pressured activity that needs this ... so I have not yet looked at your stuff.

    In the meantime I'm using this feeble 'brute-force', 'old-core', 'monolithic code' approach to just get something going: https://github.com/hydra3333/QN_Auto_Slideshow_Creator_for_Windows

    It's not a faded pale patch on what you're up to, but it does create something for me, and slideshows are what I am under pressure to produce.

    Cheers
  8. Member
    Join Date
    Apr 2018
    Location
    Croatia
    Already have a full ffmpeg solution that uses the xfade filter, and not vapoursynth.
  9. no problem at all, just do your workflow, your stuff, that's the fun,

    Just looked at that link; the first thing I noticed is the mention that you are not handling anamorphic video. I added that; it is just a couple of lines. You can add it to your boxing(clip, W, H) function, using boxing(clip, W, H, ar) instead, where ar is the mediainfo "AspectRatio" value (a float, I think), and add this to the beginning of your function (hopefully there is no mistake):
    Code:
    def boxing(clip, W, H, ar):
        # ensure aspect ratio of an original image/video is preserved by adding black bars where necessary
        source_width, source_height = clip.width, clip.height
        ## MODX = 1 << clip.format.subsampling_w
        ## MODY = 1 << clip.format.subsampling_h
        anamorphic = False
        if ar is not None and abs(source_width/source_height - ar) > 0.05 and abs(source_width/source_height - 1/ar) > 0.05:
            anamorphic = True
            if source_width > source_height:
                source_width = objSettings.MODX * round(source_height*ar/objSettings.MODX)
            else:
                source_height = objSettings.MODY * round(source_width*ar/objSettings.MODY)

        if W == source_width and H == source_height and not anamorphic:
            return clip

        if W/H > source_width/source_height:
            w = source_width*H/source_height
            x = int((W-w)/2)
    .
    .
    .
    This should resize anamorphic video to SAR 1:1 and letterbox or pillarbox it into the W,H size.
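    A hypothetical call site, just to show how it slots in (mi_aspect_ratio here stands for whatever you read from MediaInfo's "AspectRatio" field; pass None when nothing is reported so the anamorphic branch is skipped):
    Code:
    ar = float(mi_aspect_ratio) if mi_aspect_ratio else None
    clip = boxing(clip, 1920, 1080, ar)  # letterbox/pillarbox into 1920x1080, undoing any anamorphic squeeze first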
    Last edited by _Al_; 17th Jun 2023 at 11:48.
  10. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Originally Posted by richardpl View Post
    Already have a full ffmpeg solution that uses the xfade filter, and not vapoursynth.
    Thank you for your suggestion! Yes, FFmpeg was the very first thing I tried and persisted in trying for a week or two. I love FFmpeg.

    It did not handle many of the cases, e.g. some pics/videos needed rotation from various angles, most were of various sizes, etc.

    Getting advice that worked wasn't always easy, either. Most of the advice I saw on stackoverflow etc. worked to a point but not fully (specifically, resizing and padding looked easy but I could not get them to work in all cases; they looked like they should work but didn't, and some advice must have been untested because the resulting images were "not right"), and there were some problems with imagesource. Some advice tended to be general, e.g. "use this", perhaps without knowing it would not address the perceived need.

    I suppose I could have tried harder and perhaps converted each individual pic/video into a video clip and done something with them at the end; however, having background audio with each video clip's audio overlaid at the right place seemed a bit too hard. I did try another approach for pics only: https://github.com/hydra3333/QN_Auto_Slideshow_Creator_for_Windows/tree/main/superseded

    In summary, I did give it a pretty fair go (FFmpeg really is the best Swiss Army knife going around) but it could not do the job by itself.

    Interestingly, under the covers FFmpeg is the engine doing the video and audio encoding; however, tools like Python with pydub, Pillow and vapoursynth et al. do the smarts for the more challenging processing.

    I have just (automatically) done 2,000 last night, the biggest challenge being finding suitable background music for older people.
    Only about 27,000 to go ... no one could afford to spend the time looking at each image to determine rotations and resizing, separating them out, or doing them by hand, etc.

    Cheers
    Last edited by hydra3333; 17th Jun 2023 at 21:46.
  11. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Originally Posted by _Al_ View Post
    Just looked at that link; the first thing I noticed is the mention that you are not handling anamorphic video. I added that; it is just a couple of lines. ...
    This should resize anamorphic video to SAR 1:1 and letterbox or pillarbox it into the W,H size.
    Thanks !!! I'll have a look at it tonight.


    edit:
    ar is mediainfo "AspectRatio" value (float I think)
    Interestingly, with that not yet reviewed/implemented and the source AR ignored: if the target is 1080p the result looks OK even when it isn't, but if the target is 576p it looks squishy.
    Last edited by hydra3333; 17th Jun 2023 at 21:42.
  12. Member
    Join Date
    Apr 2018
    Location
    Croatia
    Originally Posted by hydra3333 View Post
    Thank you for your suggestion! Yes, FFmpeg was the very first thing I tried and persisted in trying for a week or two. I love FFmpeg. ...
    In summary, I did give it a pretty fair go (FFmpeg really is the best Swiss Army knife going around) but it could not do the job by itself.
    Looks like you are just ignorant and not at all interested in a full FFmpeg solution, and just look for crappy vapoursynth solutions.
    Do as you wish.
  13. Member
    Join Date
    Jun 2022
    Location
    Dublin
    “Thank you for your suggestion ! Yes FFmpeg was the very first thing I tried and persisted in trying for a week or two. I love FFmpeg.

    It did not handle many of the cases, eg some pic/videos needed rotation from various angles, most were of various sizes, etc.”

    I had a need to rotate a few video clips, but I found that Exiftool.exe worked best, not ffmpeg.
    https://exiftool.org/
  14. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Originally Posted by richardpl View Post
    Looks like you are just ignorant and not at all interested in a full FFmpeg solution, and just look for crappy vapoursynth solutions.
    Oh dear. I did say precisely otherwise in the text, if you care to re-read it.
    Given that you are an expert with lots of posts elsewhere, I value your views ... just not that one.
  15. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Hello. I do not know what I am doing and need some advice.

    In vapoursynth R62 I have opened a .gif with ffms2 (which seems more reliable in reporting clip properties) and it is RGB, which I need to convert to YUV420P8 Rec.709, but the code throws an error and I do not know why.

    Advice would be very greatly welcomed.

    Here's the debug code.
    Code:
    # ??????????????????????????????????????????????????????????????????????????????????????
    # .gif opened with:
    #	c = core.ffms2.Source(str(path), cachefile=ffcachefile)
    #
    # CONVERT RGB TO YUV
    A_DEBUG_IS_ON = True
    print_DEBUG(f'resize_clip: ABOUT TO CONVERT RGB TO YUV clip properties: c={objPrettyPrint.pformat(c)}')
    print_DEBUG(f'resize_clip: ABOUT TO CONVERT RGB TO YUV clip properties: c={c.format}')
    print_DEBUG(f'resize_clip: ABOUT TO CONVERT RGB TO YUV clip properties: c.format.name="{c.format.name}" c.format.color_family="{c.format.color_family}" c.format.sample_type="{c.format.sample_type}" c.format.bits_per_sample="{c.format.bits_per_sample}" c.format.bytes_per_sample="{c.format.bytes_per_sample}" c.format.num_planes="{c.format.num_planes}" c.format.subsampling_w="{c.format.subsampling_w}" c.format.subsampling_h="{c.format.subsampling_h}"')
    with c.get_frame(0) as f:
    	print_DEBUG(f'resize_clip: ABOUT TO CONVERT RGB TO YUV - FRAME PROPERTIES before RGB CONVERSION TO YUV: w={c.width} h={c.height} fps={c.fps} {c} FRAME PROPERTIES={objPrettyPrint.pformat(f.props)}')
    	pass
    #c = core.resize.Spline64(c, format=vs.YUV420P8, matrix_in_s="rgb", matrix=objSettings.TARGET_COLORSPACE_MATRIX_I, matrix_s="709")
    c = core.resize.Spline64(c, format=vs.YUV420P8, matrix_s='709')
    print_DEBUG(f'resize_clip: HAVE CONVERTED RGB TO YUV clip properties: c={objPrettyPrint.pformat(c)}')
    print_DEBUG(f'resize_clip: HAVE CONVERTED RGB TO YUV clip properties: c={c.format}')
    print_DEBUG(f'resize_clip: HAVE CONVERTED RGB TO YUV clip properties: c.format.name="{c.format.name}" c.format.color_family="{c.format.color_family}" c.format.sample_type="{c.format.sample_type}" c.format.bits_per_sample="{c.format.bits_per_sample}" c.format.bytes_per_sample="{c.format.bytes_per_sample}" c.format.num_planes="{c.format.num_planes}" c.format.subsampling_w="{c.format.subsampling_w}" c.format.subsampling_h="{c.format.subsampling_h}"')
    with c.get_frame(0) as f:	###### <<<--- this is where the error is thrown
    	print_DEBUG(f'resize_clip: HAVE CONVERTED RGB TO YUV - FRAME PROPERTIES after RGB CONVERSION TO YUV: w={c.width} h={c.height} fps={c.fps} {c} FRAME PROPERTIES={objPrettyPrint.pformat(f.props)}')
    	pass
    A_DEBUG_IS_ON = False
    # ??????????????????????????????????????????????????????????????????????????????????????
    Here's the debug output.

    Code:
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: ABOUT TO CONVERT RGB TO YUV clip properties: c=<vapoursynth.VideoNode object at 0x00000267313A5F40 format=RGB24, width=521, height=298, num_frames=100, fps=25>
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: ABOUT TO CONVERT RGB TO YUV clip properties: c=VideoFormat
    	ID: 537395200
    	Name: RGB24
    	Color Family: RGB
    	Sample Type: INTEGER
    	Bits Per Sample: 8
    	Bytes Per Sample: 1
    	Num Planes: 3
    	Subsampling W: None
    	Subsampling H: None
    
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: ABOUT TO CONVERT RGB TO YUV clip properties: c.format.name="RGB24" c.format.color_family="2" c.format.sample_type="0" c.format.bits_per_sample="8" c.format.bytes_per_sample="1" c.format.num_planes="3" c.format.subsampling_w="0" c.format.subsampling_h="0"
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: ABOUT TO CONVERT RGB TO YUV - FRAME PROPERTIES before RGB CONVERSION TO YUV: w=521 h=298 fps=25 VideoNode
    	Format: RGB24
    	Width: 521
    	Height: 298
    	Num Frames: 100
    	FPS: 25
     FRAME PROPERTIES=<vapoursynth.FrameProps {'_FieldBased': 0, '_PictType': b'I', '_DurationDen': 25, '_DurationNum': 1, '_AbsoluteTime': 0.0, '_Primaries': 5, '_ColorRange': 0, '_Matrix': 5, '_Transfer': 5}>
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: HAVE CONVERTED RGB TO YUV clip properties: c=<vapoursynth.VideoNode object at 0x00000267313A6240 format=YUV420P8, width=521, height=298, num_frames=100, fps=25>
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: HAVE CONVERTED RGB TO YUV clip properties: c=VideoFormat
    	ID: 805830913
    	Name: YUV420P8
    	Color Family: YUV
    	Sample Type: INTEGER
    	Bits Per Sample: 8
    	Bytes Per Sample: 1
    	Num Planes: 3
    	Subsampling W: 2x
    	Subsampling H: 2x
    
    2023-06-20.00:09:17.417785 DEBUG: resize_clip: HAVE CONVERTED RGB TO YUV clip properties: c.format.name="YUV420P8" c.format.color_family="3" c.format.sample_type="0" c.format.bits_per_sample="8" c.format.bytes_per_sample="1" c.format.num_planes="3" c.format.subsampling_w="1" c.format.subsampling_h="1"
    Script evaluation failed:
    Python exception: Resize error 1026: RGB color family cannot have YUV matrix coefficients
    
    Traceback (most recent call last):
    ...
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "G:\2021.06.14-Pat.and.Ted-Photos-SLIDESHOW\slideshow_ENCODER_legacy.vpy", line 980, in resize_clip
        with c.get_frame(0) as f:
             ^^^^^^^^^^^^^^
      File "src\cython\vapoursynth.pyx", line 2025, in vapoursynth.VideoNode.get_frame
    vapoursynth.Error: Resize error 1026: RGB color family cannot have YUV matrix coefficients
    Image attached: troopship_Empire_Ken.gif (85.4 KB)

    Last edited by hydra3333; 19th Jun 2023 at 10:09.
  16. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Originally Posted by JN- View Post
    I had a need to rotate a few video clips, but I found that Exiftool.exe worked best, not ffmpeg.
    https://exiftool.org/
    Ta, I found a way with irfanview as well for images.
  17. Originally Posted by hydra3333 View Post
    In vapoursynth R62 I have opened a .gif with ffms2 (which seems more reliable in reporting clip properties) and it is RGB which I need to convert to YUV420P8 rec.709, but the code throws an error and I do not know why.
    I loaded that gif with ffms2 OK; it got through the process-format conversion and the rest.
    Short answer:
    Try setting matrix_in=0; just that might help (not sure), because you have perhaps set the outgoing matrix but not the incoming one.

    Or did you set the matrix to 5? Looking carefully at your logs: for RGB, the matrix can only be 0.
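    In code, the short answer amounts to the commented-out line in your own snippet, i.e. tell resize what the source is as well as what you want out (matrix_in_s="rgb" is the string form of matrix_in=0):
    Code:
    c = core.resize.Spline64(c, format=vs.YUV420P8, matrix_in_s="rgb", matrix_s="709")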


    Long answer, with a solution for all scenarios:
    My conclusion, to avoid exactly those messages when format and resolution change, was to always provide all the incoming range, matrix, transfer and primaries arguments, and also all the outgoing ones.
    It is an elaborate process to get the mediainfo data, the vapoursynth props and the defaults, and to select among them with that order of preference for the vapoursynth resize function. That is done for all four suspects: range, matrix, transfer and primaries.
    Heck, I also set the vapoursynth props as well just to be sure, on the incoming clip before conversion and on the clip after conversion to the process format; maybe that is the safety net that prevents that error. I think that is what Selur always does in his scripts as well.

    Vapoursynth does not make things up or apply defaults behind the scenes as ffmpeg would, so if all those arguments are provided for the conversion it should work.

    sequence of those functions:
    Code:
            self.get_mediainfo()
            self.get_colors()
            self.to_process_format()
            
            #custom processes, order could be changed (but watch logic)  or processes could be added
            self.deinterlace()
            self.rotation()
            self.boxing()
            self.change_video_FPS()
            self.image_to_length()
    
            #mandatory change to output format
            self.to_output_format()
    posting only some functions, but all of it is in process_clip.py, in slideshow package (slideshow directory in that download)
    Code:
            #this  is desired output, delivery, final output format:
            self.range_out     = 'limited'
            self.matrix_out    = 1
            self.transfer_out  = 1
            self.primaries_out = 1
            self.colors_out = dict(range_s=self.range_out, matrix=self.matrix_out, transfer=self.transfer_out, primaries=self.primaries_out)
    
        def get_colors(self):
    
            def default_prop():
                return 1 if (self.clip.width > 720 or self.clip.height>720) else 5
            
            def default(v1, v2, v3=None):
                #using this if legit values could be zero, then simple "or" could not be used
                return v1 if v1 is not None else v2 if v2 is not None else v3
            
            range_vs      = util.get_prop(self.clip, '_ColorRange')
            matrix_vs     = util.get_prop(self.clip, '_Matrix')
            transfer_vs   = util.get_prop(self.clip, '_Transfer')
            primaries_vs  = util.get_prop(self.clip, '_Primaries')
           
            if self.clip.format.color_family==vs.RGB:
                '''
                RGB
                skipping mediainfo values for evaluating, I think mediainfo ignores these for RGB, could be wrong though'''
                range_mi_to_vs, matrix_mi_to_vs, transfer_mi_to_vs, primaries_mi_to_vs = None, None, None, None
                range_in     = default(range_vs,  1)
                matrix_in    = default(matrix_vs, 0)  #default rgb matrix is 0
                transfer_in  = transfer_vs  or default_prop()
                primaries_in = primaries_vs or default_prop()
    
            else:
                '''
                YUV
                Not evaluating HDR to SDR yet, that should be here too.                       
                Priority:  mediainfo, vapoursynth props, default.
                '''
                range_mi_to_vs     = util.mediainfo_val_to_vs(self.mi['colour_range'],            util.RANGE_MEDIAINFO_TO_VS)
                matrix_mi_to_vs    = util.mediainfo_val_to_vs(self.mi['matrix_coefficients'],     util.MATRIX_MEDIAINFO_TO_VS)
                transfer_mi_to_vs  = util.mediainfo_val_to_vs(self.mi['transfer_characteristics'],util.TRANSFER_MEDIAINFO_TO_VS)
                primaries_mi_to_vs = util.mediainfo_val_to_vs(self.mi['colour_primaries'],        util.PRIMARIES_MEDIAINFO_TO_VS)
    
                range_in     = default(range_mi_to_vs, range_vs, 1)
                matrix_in    = matrix_mi_to_vs    or matrix_vs    or default_prop()
                transfer_in  = transfer_mi_to_vs  or transfer_vs  or default_prop()
                primaries_in = primaries_mi_to_vs or primaries_vs or default_prop()
    
            '''setting prop values for a clip, preventing Resize Error while frame is requested.
            Technically, just setting props, should be enough without passing colors_in dict for conversion later, but heck doing both,
            at least it is necessary for range'''
            self.clip = util.write_props(self.clip, _Matrix=matrix_in, _Transfer=transfer_in, _Primaries=primaries_in)
            range_in_s = {0:'full',1:'limited'}[range_in]
            self.colors_in = dict(range_in_s=range_in_s, matrix_in=matrix_in, transfer_in=transfer_in, primaries_in=primaries_in)
    
            logger.debug(f'selected or default:      mediainfo:   vapoursynth:')
            logger.debug(f'range_in:     {range_in}, {range_in_s: <12}  {range_mi_to_vs}      {range_vs}')
            logger.debug(f'matrix_in:    {matrix_in}, {util.MATRIX[matrix_in]: <12}  {matrix_mi_to_vs}      {matrix_vs}')
            logger.debug(f'transfer_in:  {transfer_in}, {util.TRANSFER[transfer_in]: <12}  {transfer_mi_to_vs}      {transfer_vs}')
            logger.debug(f'primaries_in: {primaries_in}, {util.PRIMARIES[primaries_in]: <12}  {primaries_mi_to_vs}      {primaries_vs}')
    
    
        def to_process_format(self):
            if self.process_format is None:
                #not recommended, all clips would have to be the same
                return
            '''
            force process format conversion, just format, not resolution
            '''
            logger.debug(f'to process format: vs.{vs.PresetFormat(self.process_format).name}')
            self.clip = self.clip.resize.Bicubic(format = self.process_format, **self.colors_in, **self.colors_out)
            self.clip = util.write_props(self.clip,  _Matrix=self.matrix_out, _Transfer=self.transfer_out, _Primaries=self.primaries_out)
    
        def to_output_format(self):
            logger.debug(f'to output format: vs.{vs.PresetFormat(self.format).name}')
            self.clip = self.clip.resize.Bicubic(format = self.format)
    log says:
    Code:
    F:\test\troopship_Empire_Ken.gif
        selected or default:      mediainfo:   vapoursynth:
        range_in:     1, limited       None      None
        matrix_in:    0, rgb           None      None
        transfer_in:  5, 470bg         None      None
        primaries_in: 5, 470bg         None      None
        to process format: vs.YUV444P8
        upsizing 521x298 to 1890x1080
            resize func: vs.core.resize.Lanczos, other resize kwargs: {}
        pillarboxing with left=15 and right=15 borders to 1920x1080
        to output format: vs.YUV420P8
    so no properties were found (as sort of expected for RGB), so the defaults were used for the incoming props, as set in the app for that resolution
    Last edited by _Al_; 19th Jun 2023 at 19:46.
    Also, I just noticed a potential syntax problem: in the latest vapoursynth R63, vs.PresetFormat has to be vs.PresetVideoFormat instead; R62 tolerates it, not sure. If there are errors regarding this with the latest vapoursynth versions, that might be the reason.

    Just one more thing (as Columbo used to say): in your debug you can show that you just converted a clip into another clip and print the outgoing clip's properties, but the conversion has not actually happened yet, so that debug output might be misleading. The actual conversion happens only when you request a frame, and that could be somewhere later in the script, or not at all in that script. If no frame is requested, Python throws no error yet; it only does so when you pass the clip to a previewer or encoder, where frames are definitely requested.
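    A tiny illustration of that lazy behaviour, assuming an RGB clip c and a deliberately incomplete conversion:
    Code:
    bad = c.resize.Spline64(format=vs.YUV420P8, matrix_s="709")  # builds the node only; nothing is rendered yet
    print(bad.format.name)     # "YUV420P8" - just the node's metadata, still no conversion has run
    frame = bad.get_frame(0)   # the filter actually executes here, so this is where the Resize error surfaces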
    Last edited by _Al_; 19th Jun 2023 at 19:40.
  19. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    OK and thank you. All that is worth knowing.
    I'll stay with R62 for a while longer; I seem to be unaffected by any bugs in it (unless R63 fixes the long runtimes as discussed).
    Now I see I'll also need to put c.get_frame(0) just before the debugs too!

    I did as you suggested and set properties before and after using the bicubic resizer just to convert RGB -> YUV444 ("borrowed" some of your code from https://github.com/UniversalAl/load/blob/main/viewfunc.py#L157 and added _ColorRange).
    No errors now.

    A silly error on my part got me for a bit;
    Code:
    if w>=W or h>=H:	# the clip needs to be increased in size in at least one dimension, or is the same
    	resize = getattr(c.resize, objSettings.UPSIZE_KERNEL)	
    else:			# the clip needs to be reduced in size in at least one dimension
    	resize = getattr(c.resize, objSettings.DOWNSIZE_KERNEL)
    had been done at the top of a function using the incoming clip rather than on the newly converted clip. Oh well.

    Thanks again.

    Should be right in about 2 weeks or so to have a look at the download you kindly provided.
    I am a bit excited; your coding looks ... elegant.
  20. Originally Posted by hydra3333 View Post
    "borrowed" some of your code from https://github.com/UniversalAl/load/blob/main/viewfunc.py#L157 and added _ColorRange
    That viewfunc module works for API3 and API4; that is why I use it, because I still use both. As for setting props in API4, that write_props() function could be called with the same arguments directly as clip = clip.std.SetFrameProps(...). But write_props() and read_props() can also store and read dictionaries.
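    So on API4 the direct form is something like this (the values follow the usual integer codes: 1 = BT.709 for matrix/transfer/primaries, and _ColorRange 1 = limited):
    Code:
    clip = clip.std.SetFrameProps(_Matrix=1, _Transfer=1, _Primaries=1, _ColorRange=1)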
    Originally Posted by hydra3333 View Post
    A silly error on my part got me for a bit;
    resize = .....
    I decided to create a resize class in the end, which handles resizing and accepts upsize and downsize arguments, and those could even be modules. I was just thinking about modern ways of upscaling, using some "AI" scripts etc.
  21. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    OK. I liked how it checked some of the values.

    No doubt you're across this, which I just (re)saw: https://github.com/Irrational-Encoding-Wizardry/vs-tools
  22. I think I saw it before, yes: some utilities and functions that are used so the main code stays clearer. What I'm missing is some text, a summary, that says what's in it. Rather than going through all of it just to see if one needs something, it's better to just write it and make one's own utils.
    havsfunc, for example, uses the vsutil package (another word variant on vs and util); havsfunc needs it to be installed, so I included it in the slideshow package just to be sure.
  23. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Whilst not necessary, for people choosing to build their own FFmpeg, there's "MABS" which does it for you.
    https://github.com/m-ab-s/media-autobuild_suite

    You can include non-free codecs such as fdk-aac for best FFmpeg aac audio encoding and h264_nvenc (hardware accelerated nvidia gpu h.264 encoding) if you wish.

    If you'd rather run the build in the (free) native Windows Sandbox, this is an example:
    https://github.com/hydra3333/WindowsSandboxForMABS
    It's a bit involved and you need a grasp of the concept of mapping host/client folders, and of .bat files, and possibly notepad++, but it's straightforward after a bit of practice doing it until you get it the way you need it.
  24. Member
    Join Date
    Apr 2018
    Location
    Croatia
    Anyone reading this thread, use FFmpeg full solution, never pick suboptimal vapoursynth or avisynth solutions.
  25. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Originally Posted by richardpl View Post
    Anyone reading this thread, use FFmpeg full solution, never pick suboptimal vapoursynth or avisynth solutions.
    Cough, Richard, that is an outright fibbie. You have 100% failed to comprehend the requirements, and have also failed to present ANY detail of a working "full solution".

    Requirements including:
    - one slideshow produced from one or more folder trees of images and videos in sequence ... "hands free"
    - with auto-rotation, auto-resizing and colour-space conversion of images of various source dimensions and formats
    - with auto-rotation, auto-resizing and colour-space conversion of video clips of various source formats and dimensions
    - with optional auto-subtitling with image/video paths/filenames
    - having a background audio track
    - and including video clips' audio at the right places, overlaid on top of the background audio with the background audio semi-muted at those places
    I've just run it over folder trees with 2x lifetimes of home photos and home videos (>28,000 images and clips) and the output was to the satisfaction of those in their "golden years".

    I'd really love it if you were to share something that worked.
    Please do. Really ! Everyone including me would appreciate it.
    So far you have not, and have only posted unhelpful and less than informative troll-like comments.
    Buck up, mate, put your money where your mouth is in regard to saying "use FFmpeg full solution".

    edit: oh, I may have just wasted my time replying.
    I reviewed your post history and you (almost) never post any useful information nor (ever) any technical solution solving anything ... and tend to simply rubbish things and other people in brief one-sentence bites.
    Richard, you may be really good with ffmpeg, and more good karma to you for that if so, it is just that you do not display evidence of it on this forum.
    Please consider backing up your claim about ffmpeg (as the "full solution" in this use case) with evidence !
    Last edited by hydra3333; 30th Jun 2023 at 12:35. Reason: spelling