VideoHelp Forum
  1. Originally Posted by Alwyn View Post
    One more question: that was the first ever analogue AVI I've captured that ended up BFF. Why would that have been?
    It's due to the FlipVertical(). Likewise, your DV capture becomes TFF when flipped.

    P.S. Whether you prefer the DV or S-Video route is the old discussion about 4:2:2 (S-Video) vs. 4:2:0 (PAL DV) and lossless vs. lossy.
    (The DV may be easier (fewer pitfalls) regarding levels adjustments and the like.)
    Last edited by Sharc; 23rd Sep 2023 at 10:43.
  2. Originally Posted by Alwyn View Post
    What's your opinion on the black bars: I'm obviously going to chop them all off and make it 4:3, but do they affect/upset AVISynth functions? Should I get rid of them before running the script?
    You can crop and resize in the same script. I would crop and resize (mod2) after the QTGMC(), i.e. when the video is deinterlaced. One must not resize interlaced footage vertically without deinterlacing.
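    For example, a minimal sketch of that order (the filename and crop values are placeholders you would measure from your own footage):

    Code:
    AviSource("capture.avi")     # hypothetical source name
    AssumeTFF()
    QTGMC(Preset="Slow")         # deinterlace first
    Crop(8,4,-8,-4)              # placeholder border values (keep mod2)
    Spline36Resize(768,576)      # square-pixel 4:3, mod2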
    Last edited by Sharc; 23rd Sep 2023 at 11:24.
  3. Yes, the field order reversal was caused by FlipVertical(). I suspect Rotate(180) should have been used instead. Black bars usually don't cause problems for AviSynth filters. Some, like deshaking, may work better without them.
  4. BTW, just to throw one other thing into this discussion about interlaced and field order: it doesn't matter. When capturing film, each film frame is static while in the projector's gate and therefore whether or not the camera takes interlaced or progressive doesn't matter because both fields for the interlaced camera will be from the same moment in time and therefore regardless of what flag is shown, the video is actually progressive.

    The only exception to this are the frames where the camera captures a transition between frames. In this case, whether the camera is progressive or interlaced, you will want to discard that frame, if you can.

    In cases where a blended frame is the only record you have of a particular film frame, if you have two of these blends in a row, you can sometimes get two perfect progressive frames by combining the lower field of one frame with the upper field of the next frame (i.e., two fields that were actually taken while the same frame of film was at rest in the projector's gate).
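    A hedged AviSynth sketch of that trick (assuming a TFF clip where the bottom field of frame n and the top field of frame n+1 come from the same film frame; field parity handling may need adjusting for your source):

    Code:
    SeparateFields()   # TFF: t0 b0 t1 b1 ...
    Trim(1,0)          # drop the first field
    Weave()            # now pairs b(n) with t(n+1)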
  5. Originally Posted by johnmeyer View Post
    BTW, just to throw one other thing into this discussion about interlaced and field order: it doesn't matter. When capturing film, each film frame is static while in the projector's gate and therefore whether or not the camera takes interlaced or progressive doesn't matter because both fields for the interlaced camera will be from the same moment in time and therefore regardless of what flag is shown, the video is actually progressive.

    The only exception to this are the frames where the camera captures a transition between frames. In this case, whether the camera is progressive or interlaced, you will want to discard that frame, if you can.

    In cases where a blended frame is the only record you have of a particular film frame, if you have two of these blends in a row, you can sometimes get two perfect progressive frames by combining the lower field of one frame with the upper field of the next frame (i.e., two fields that were actually taken while the same frame of film was at rest in the projector's gate).
    Yes. Stepping through the frames using my script in post #54 visualizes in the middle part how the 2 fields advance over time: same point in time (a static film frame, where the order wouldn't matter) or different points in time (a transition, where the order matters).
  6. Member
    Join Date
    May 2005
    Location
    Australia-PAL Land
    Search Comp PM
    Thanks all, much appreciated, I've learnt a lot and also have some transferred cine that suits my needs terrifically without needing too much unique gear.

    I'm going to have another crack at my HV20 with HDMI, being mindful of the 25 FPS!
  7. By the way, you should shoot 4:3 to take better advantage of the limited SD resolution.
  8. Originally Posted by jagabo View Post
    By the way, you should shoot 4:3 to take better advantage of the limited SD resolution.
    +1

    I completely missed that. 100% agree.
  9. Originally Posted by Alwyn View Post
    I'm going to have another crack at my HV20 with HDMI, being mindful of the 25 FPS!
    Worth re-testing with the HD-resolution HV20, as SD video falls short of the equivalent resolution of Super 8 film (about 900x700 pixels as far as I know).
  10.
    Originally Posted by Jagabo
    By the way, you should shoot 4:3 to take better advantage of the limited SD resolution.
    I'm investigating that. I'm seeing 4:3 in the viewfinder but both analogue digitisers give me the squarish picture. I'll try another capture program.
  11. Originally Posted by Alwyn View Post
    Originally Posted by Jagabo
    By the way, you should shoot 4:3 to take better advantage of the limited SD resolution.
    I'm investigating that. I'm seeing 4:3 in the viewfinder but both analogue digitisers give me the squarish picture.
    Remember that the PAR (pixel aspect ratio) of 4:3 PAL is 59:54 (aka 12:11), rather than square pixels. You have to resize the 720x576 capture (including the black borders) to 786x576 square pixels for viewing.
    Or, when you crop to 704x576, the resized picture (still including the black bars) would be 768x576 square pixels.
    If you crop all sides as needed to remove the bars you have to do the maths, or simply encode with x264 using the MPEG-4 SAR of 12:11 and let the player do the resizing.
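    As a sketch of the two routes (the crop values here are just an example for symmetric 8-pixel bars):

    Code:
    # square-pixel route:
    Crop(8,0,-8,0)             # 720x576 -> 704x576
    Spline36Resize(768,576)    # 704 * 12/11 = 768
    # anamorphic route: keep 704x576 and encode with
    #   x264 --sar 12:11 ...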

    Edit:
    If you shoot at 4:3 and resize as above (for PAL) but still get a squished picture your camera lenses might have introduced their own anamorphic distortion which you would need to compensate for manually (trial and error).
    Last edited by Sharc; 25th Sep 2023 at 01:18.
  12.
    Sharc, I doubt the 704/720 VHS/CRT TV conundrum is an issue here. The projector looks like it is putting out 4:3 and I'm seeing that in the viewfinder. There's a signal hiccup somewhere after that in the capture chain (either the Composite Out on the cam, or the digitiser's interpretation of the incoming stream), but looking at the squarish capture file, when it's resized to 768x576 after cropping off all the black stuff, it looks correct, so that's what I've done.
  13. Originally Posted by Alwyn View Post
    , .... but looking at the squarish capture file, when it's resized to 768x576 after cropping off all the black stuff, it looks correct, ....
    "Looks correct...", yes. That's the main point. Whether it is really correct depends how you actually cropped. But it is certainly not far from being "technically correct" as well. We have been through all this DAR,PAR, ... resizing stuff before.
    To me it just looks like you captured the super 8 ~4:3 as 16:9 with your videocam. The Scenalizer variant explicitly flags the DAR 16:9, and when playing the scenalizer variant or the Lagarith variant as 16:9 the active picture of the super8 is undistorted (just with huge borders). This tells me that you shot the 4:3 as 16:9, and I dont' see a problem with the capture device.
    I might be wrong with this reverse-engineering though, so never mind. My initial note was just to mention that even if you capture at 4:3 the pixels will not be square and the captured super8 frame as stored in the camera will still be slightly squashed.
    Last edited by Sharc; 25th Sep 2023 at 06:09.
  14. Originally Posted by Alwyn View Post
    Sharc, I doubt the 704/720 VHS/CRT TV conundrum is an issue here. The projector looks like it is putting out 4:3 and I'm seeing that in the viewfinder. There's a signal hiccup somewhere after that in the capture chain (either the Composite Out on the cam, or the digitiser's interpretation of the incoming stream), but looking at the squarish capture file, when it's resized to 768x576 after cropping off all the black stuff, it looks correct, so that's what I've done.
    I don't have time to look back at all the posts so I apologize if you already provided the following information: did you ever identify this as 8mm, or is it Super 8? The aspect ratio for the two is different. Super 8 (the more modern 8mm standard) is almost perfect 4:3 and will exactly fill the frame of a standard video frame (at least that's true for NTSC). By contrast, regular 8mm (the standard introduced in the 1930s) is quite a bit more narrow than 4:3 video and you will end up with black bars on the sides. If you instead make it fit the 4:3 video, you will have stretched it horizontally.

    This is true even if you have a transfer system which includes the sprocket holes in the frame. I have such a system, and about 80% of the film I transfer actually has usable image between the sprocket holes. I can mask off the sprocket holes and deliver a result which has material no one who has looked at the film in the past has ever seen. I either just fill the sprocket hole with black or use some "delogo" filter to fill it in with surrounding pixels.
  15.
    John, my cine is Super 8 and Super 8 Sound.

    I have looked at removing the frame in the gate so I can get the "whole" film frame (a poster on the 8mm forum actually liked the effect sprocket holes whizzing past), but it looks like that will involve some "destructive" modifications, so I'm holding off on that for the moment! Not wanting to "wreck" my projector at this point.

    It would be interesting to see how much extra image there is.
  16. No need for an enlarged gate if you are capturing Super 8. You wouldn't get much additional image.
  17.
    Rojer dodger, thanks.
  18. @Alwyn: It is generally a challenge to accommodate the contrast of super 8 with SD video. You may however want to take care of the levels. Your capture has clipped superwhites. By tweaking the levels you can try to recover some of the lost detail in the overexposed brights (235...255 luma). Example attached (top=original, bottom=tweaked). If your TV plays superwhites it's less critical. Nothing you can do with the clipped luma though.

    Code:
    smoothlevels(26,1.0,255,16,235)
    [Attached images: original1.png, tweaked1.png]
  19. Originally Posted by Alwyn View Post
    John, my cine is Super 8 and Super 8 Sound.
    Yes, and when you resize your capture of your DV videocam according to the PAL DV pixel aspect ratio of 118:81 (you find this listed in the VDub Aspect Ratios selection as "118:81 pixel (PAL-DV wide)"), the active picture becomes 720x540 which is exactly 4:3. (The overall size including the bars is then 1048x576.)
    Still, better to shoot it directly as 4:3 rather than shooting 16:9 and then squeezing it at some stage into 4:3, which produces the narrow active picture that you then need to crop and resize.
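    For reference, the corresponding resize step for the 16:9 PAL DV capture would be something like (sizes per the PAR arithmetic above):

    Code:
    Spline36Resize(1048,576)   # 720 * 118/81 = 1048.9, rounded to mod2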
    Last edited by Sharc; 26th Sep 2023 at 06:35. Reason: Typo
  20.
    Sharc, thanks, "capturing" cine with a camcorder is the duck's guts because you have total control over the F stop/exposure: an external proc amp on steroids. I have since started doing some captures adjusting the exposure and it is working well. I will however use that code to adjust the levels.

    Re the framing, I've fixed the squashing issue by shooting in 4:3 (the 16:9 video was being squashed into a 4:3 frame on capture). I'm resizing on export to 768x576, square.

    With a light splash of Neat Video for denoising and flicker reduction, it's coming out nicely.
  21. Originally Posted by Alwyn View Post
    Sharc, thanks, "capturing" cine with a camcorder is the duck's guts because you have total control over the F stop/exposure: an external proc amp on steroids. I have since started doing some captures adjusting the exposure and it is working well. I will however use that code to adjust the levels.

    Re the framing, I've fixed the squashing issue by shooting in 4:3 (the 16:9 video was being squashed into a 4:3 frame on capture). I'm resizing on export to 768x576, square.

    With a light splash of Neat Video for denoising and flicker reduction, it's coming out nicely.
    Great
  22. Originally Posted by Alwyn View Post
    Sharc, thanks, "capturing" cine with a camcorder is the duck's guts because you have total control over the F stop/exposure: an external proc amp on steroids. I have since started doing some captures adjusting the exposure and it is working well. I will however use that code to adjust the levels.

    Re the framing, I've fixed the squashing issue by shooting in 4:3 (the 16:9 video was being squashed into a 4:3 frame on capture). I'm resizing on export to 768x576, square.

    With a light splash of Neat Video for denoising and flicker reduction, it's coming out nicely.
    "Total control over the F stop" isn't enough, unless you mean that you literally manually adjust exposure for each and every scene (which is pretty much impossible).

    I've transferred a LOT of movie footage and here's what I've learned.

    1. You MUST use the zebra control on your camera. If your camera doesn't provide zebras (which visually show the areas of overexposure), then get one that does.

    2. Your goal is to expose for the highlights, so the zebras just disappear. Your black will be too dark. You fix that, as best you can, in post. You can salvage underexposed shadows, but blown out highlights are gone forever, and are visually far more obvious. If you are lucky enough to have a high-end camera which permits gamma adjustment, then use it (VideoFred built one that has this feature).

    3. To help achieve the goal in #2, I use two features. The first is the "Spotlight" feature. This is found on many cameras. It is designed for taking video of a stage performance where the primary performer is lit by a follow spot, with the area surrounding the performer either pure black or very dark. It makes sure that the relatively small bright area of the frame controls the exposure, rather than the usual algorithm, which ignores small really bright areas and instead takes an average of the whole frame. The second feature I use is "AE Offset". This can be found on many higher end cameras, "prosumer" cameras, and some professional cameras. It adds or subtracts from the automatic exposure. This means you can dial back even further, as you watch the capture in progress, and have the exposure go even lower, until the zebras disappear, or almost disappear -- you have to use your judgement on how far to go. For instance, it is OK to blow out the highlights of the windows in a room where the indoor exposure is 6-8 stops less than the outdoor exposure. You have to remember to crank the AE offset back up as soon as you see a darker scene, or one without highlights.

    I have, BTW, tried multiple captures using different exposures and then later combining those together. This is HDR. It works, but you can't nudge anything because alignment has to be perfect between the captures. It did let me get some reasonable results from Polaroid's instant film (called "Polavision") which was the worst movie emulsion ever created, mostly because of the chemical residue that over the years has turned into intractable blotches, but also because it has extreme contrast.
  23. Originally Posted by johnmeyer View Post
    1. You MUST use the zebra control on your camera. If your camera doesn't provide zebras (which visually show the areas of overexposure), then get one that does.
    The OP's Canon HV20 has the ZEBRA control. Unfortunately it's the one with the CMOS sensor with the "rolling shutter" effect. I think the OP may want to retry using that camera as there were a couple of other issues which he can probably tackle better now with the experience he gained in the meantime. Just wondering whether using CMOS videocams at high shutter speed for this purpose should be generally discouraged unless the shutter of the projector can be removed.
    Last edited by Sharc; 28th Sep 2023 at 01:34.
  24.
    Thanks Gents, I'll look into the exposure issue a little deeper.
  25.
    Here's a later one I've done with Jagabo's script (GS 300 3CCD at 1/50th shutter speed, projector slowed down to zero-flicker), getting closer to production-standard! Neat Video applied for a bit of noise and fine-flicker reduction. I've tried to reduce the significant electronic projector noise (the audio came from the projector Line-Out).

    There's a few blended frames here and there so I just blink at the appropriate instant and I miss them.

    No exposure/levels adjustment yet.
    Image Attached Files
  26. Did you interpolate? (Broken poles, bad artifacts).
    If you upload the unprocessed captured .avi you might get more advice.
    Last edited by Sharc; 4th Oct 2023 at 12:59.
  27. Yeah, there's been a lot of processing done on that video.
  28.
    Here's the raw capture. The poles!

    Originally Posted by Sharc
    Did you interpolate? (Broken poles, bad artifacts).
    What do you mean, "Interpolate"? I used Jagabo's script (post 58).
    Image Attached Files
  29. I haven't looked at your latest sample yet but the script I gave in post #58 is probably not appropriate for this video. And yes, FrameRateConverter() in that script motion-interpolates new frames to increase the frame rate. This is responsible for all the artifacts in your video in post #85.
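    If the aim were only to conform the frame rate without inventing new frames, a hedged alternative to the FrameRateConverter() call would be:

    Code:
    ChangeFPS(25)     # duplicates/drops frames, no motion interpolation
    # or relabel the rate without touching the frames:
    # AssumeFPS(25)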
  30.
    Thanks Jagabo, when/if you've got time, I'd be interested to see the difference in the appropriate scripts for each clip to work out what to look for. I have a few more of these I'd like to do.
    Quote Quote  


