Last edited by Sharc; 21st Sep 2023 at 01:58.
It wasn't the lighting, it was my camera. I was using a Canon HV20 with a CMOS sensor. I changed to a Panasonic GS300 3CCD at a 1/1000 sec shutter speed and got full frames. Phew.
That's a bummer because I liked the 1920x1080 capture via HDMI. Now I'm stuck with Composite.
Now I'm after a script that will find and remove black or partially-black frames.
OK, now you're cooking with gas. Everything looks good except for one more thing: you either have to use manual exposure, or find a setting which slows down the auto-exposure response time. You will see that after each black frame, the next frame is over-exposed because the auto-exposure can't respond fast enough. This is unacceptable. It is why I removed the shutter from my transfer system. I briefly looked at the manual for your GS300, and it has only limited manual exposure control. You definitely can set the aperture and shutter speed, and that should work for any given scene, but since movie exposure wanders all over the place, that is probably not a viable option.
However, you are definitely on the correct path.
John, your wish is my command. Here is the canoe again at 1/1000 and F2.8.
I remember now why I removed the shutter from my projector. The problem is that because the projector and video camera are not synchronized, you can end up with a "beat" where the shutter closure and the instantaneous picture capture can coincide for a few frames. You end up with multiple black frames, and you can even miss one or more frames. In the extreme example, you would get nothing but black, and no image at all (see diagram below to see how this is possible).
The shutter also messes up the exposure because the camera can't react fast enough to the alternation between pure black and an image that is so bright that it almost overwhelms the camera.
You probably have a 3-blade projector shutter, which means that the shutter closes two times for each projected frame and then the film is pulled down during the third shutter closure. Having multiple blades increases the flicker rate so your eyes do not perceive flicker. You would otherwise get a headache if the picture flickered even at 24 fps, much less 18 or 16 for silent film.
You have to actually draw out a timeline showing how long each frame is projected, and how long each shutter closure lasts, and then determine how many fps you need to ensure that the camera always gets at least one clear capture of a film frame for each and every frame of film.
With a three-bladed shutter, you end up with six images for every frame of film (three duplicates of the frame, and three periods of black). So, if you've slowed down the projector to 15 fps, you get 90 images per second.
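The arithmetic above is easy to sanity-check. This is a sketch (the function name is mine); it just encodes the three-blade geometry described in the post:

```python
def images_per_second(projector_fps, blades):
    """Each film frame yields one projected image plus one black period
    per shutter blade, so a 3-blade shutter gives 6 "images" per frame."""
    return projector_fps * blades * 2

# 15 fps projector with a 3-blade shutter:
print(images_per_second(15, 3))  # 90 alternating light/dark periods per second
```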
The following crude timing diagram shows the issue. The bottom part shows one frame of film being projected three times, with the black parts being the period the shutter is closed and the clear part the period when the image is projected. It assumes that the angle of the shutter blade is exactly the same for the period that it is closed and the period it is open. That may or may not be the case on your projector. The thin vertical lines are meant to show the near instantaneous capture of your video camera using an unrealistic 1/100,000 second shutter speed. I did this just to make the drawing easier to understand. If you were to do a better job, those lines need to have a width equal to the actual duration of the shutter opening.
What you need to do is cut the diagram between the top and bottom part and slide the top part left and right to make sure that there is always one thin line which falls inside the white part of the diagram below. This means you will get one good capture for each frame of film. However, when you re-draw the diagram to represent your actual projector and camera speeds (18 fps and 50 fps), you may find that there is a situation where each video capture always lines up within the black part of the diagram below, which means you will have completely missed capturing that frame of film.
What you'd need to do is draw this more accurately: create a width for each video capture line; show the correct proportion between the shutter opening and closing; and use the correct duration of a frame. I used 60 fps because it is four in the morning, I can't sleep, and the math and measurement were easier. You'd need to draw it for 50 fps, and then adjust everything for the slowest fps at which you can set your projector.
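The slide-the-diagram experiment can also be simulated instead of drawn. This is only a sketch (the function name, the 50% open/closed duty, and treating each capture as instantaneous are my assumptions, matching the equal-angle assumption in the hand-drawn diagram):

```python
def frame_coverage(projector_fps, blades, camera_fps, duration=1.0, phase=0.0):
    """Count, for each film frame, how many instantaneous camera captures
    land while the projector shutter is open. Assumes equal open/closed
    blade angles (50% duty), as in the timing diagram."""
    frame_t = 1.0 / projector_fps      # how long one film frame sits in the gate
    blade_t = frame_t / blades         # one open+closed shutter cycle
    total_frames = int(duration * projector_fps)
    counts = [0] * total_frames
    for i in range(int(duration * camera_fps)):
        t = phase + i / camera_fps
        frame_idx = int(t / frame_t)
        if frame_idx >= total_frames:
            break
        pos = (t % blade_t) / blade_t  # position within the blade cycle
        if pos < 0.5:                  # shutter open during the first half-cycle
            counts[frame_idx] += 1
    return counts

# 18 fps projector, 3-blade shutter, 50 fps camera: the beat between the
# 54 Hz blade rate and the 50 Hz capture rate leaves some film frames
# with zero clear captures.
print(min(frame_coverage(18, 3, 50)))  # 0 -> at least one frame missed entirely
```

Sweeping `phase` over one blade period shows whether any projector/camera alignment avoids the missed frames; with a much faster camera, every frame gets several clear captures regardless of phase.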
The only thing you really have control over, unless you can get a 60 fps (or higher) camera, is the projector speed.
The other thing you can do is to invent a capture system which uses a signalling device slaved to the projector which tells your camera when to "click" and take a picture. This is what "VideoFred" did with his system, and is what Roger Evans does with his Moviestuff Workprinter units.
So, I apologize for not remembering all the steps I went through which led me to remove the shutter. With the shutter removed, the system works great, as you can see from this film, taken in 1928, that I captured over a decade ago.
1928 Oak Park River Forest High School Marching Band & Football
No apologies needed, John! Your knowledge is great and very helpful. I'd like to try to pull my projector apart, but that will have to wait for another time.
Now that I've swapped to the CCD camera, I'm finding that at 1/50 shutter speed, the non-flickering speed band for the projector is much larger than it was with my CMOS camera, and so I am using that. I've been experimenting with Cycle and CycleR and come up with 12 and 8, which gives movement on every frame (no dupes at all) and very few blends. This video looks good to me.
Ironically, my clip frame rate is 16.67, so I am back to square one, but now, for some reason, AssumeFPS(18) is working and my video speed looks good.
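The numbers work out: with TDecimate cycle=12 and cycleR=8, 8 of every 12 frames are dropped from the 50 fps stream, and AssumeFPS(18) then retimes what's left without dropping anything. A quick sanity check (a sketch; the variable names are mine):

```python
captured_fps = 50
cycle, cycle_r = 12, 8

# TDecimate keeps (cycle - cycleR) frames out of every cycle.
kept_fps = captured_fps * (cycle - cycle_r) / cycle
print(round(kept_fps, 2))   # 16.67 fps after decimation

# AssumeFPS(18) keeps every frame but plays them faster:
speedup = 18 / kept_fps
print(round(speedup, 2))    # 1.08, i.e. 8% faster than the 16.67 fps clip
```

That 8% speed-up is why AssumeFPS(18) on the 16.67 fps clip looks natural: 16 and 18 fps silent-era footage tolerates small retiming.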
I was intrigued by your 1928 footage. Of more interest to me was the 12FPS 1928 game of the boys, which has lots of close up movement and is quite jerky. I would love to see what Framerateconverter or RIFE could do in that situation.
Once again, thanks very much for your input.
Your decision to go back to the slow shutter speed is a good one. I didn't realize that the CMOS rolling shutter of your original camera was causing the problem. Now that you've solved that, if you get the correct slow shutter speed, you can minimize the flicker and get a reasonable capture. You won't, however, be able to do much post processing (such as dirt removal) without a true "frame-accurate" capture because the remaining blended frames screw that up.
BTW, if you want to see why the rolling shutter in your CMOS camera screwed things up, go to YouTube and enter
Rolling shutter CMOS propeller
you'll get a list of really good videos showing what a CMOS camera does when shooting through an airplane propeller. The shutter on your projector is pretty much the same thing: a rotating metal blade.
It sounds like you have something which is going to work well enough for you to get through the project.
As for changing frame rate on my 12 fps backyard football, I definitely considered that, and have done it on a few of my uploaded videos. However, you have to live with the unavoidable artifacts. Here is my capture of a 1940 parade in Flint, Michigan queued up to when the stuffed reindeer goes by. Look at the antlers. Ugly stuff.
1940 Flint Michigan Parade
The reason I used motion estimation to add in-between frames on this video was that there was a lot of panning, and at low shutter speeds the persistence of vision judder (the artifact is entirely in your head) was too distracting. If you look at other scenes from the clip, it does look nice and smooth.
P.S. The Flint, MI parade transfer was also done with my shutterless Eiki 16mm projector, using a method I was actually thinking of patenting. My method uses TFM to provide metrics followed by my own matching algorithms, which I prototyped using an Excel spreadsheet. My spreadsheet code uses the TFM metrics to decimate duplicates. It was actually even more complicated than you might think because it used a camera which took interlaced video, and I had to sometimes recombine the lower field from one frame with the upper field of the next frame, and vice versa. I then found out it was going to cost $50,000 just for the provisional patent (legal fees) and decided my vanity (of having a patent) wasn't worth that amount of money.
Last edited by johnmeyer; 21st Sep 2023 at 11:53.
I found just the antlers clip. It is unmodified from the original (i.e., it is my original capture, without any motion estimation applied, although I did apply dirt removal).
Flint Michigan Parade Antlers
If you want more of that video, I'll have to look around some more.
John, I ran your 1928 kid's football game 12fps Youtube video through framerate converter (24) and it came out really well, much smoother and very easy to watch. The secret is obviously having clean images to start with.
@johnmeyer: Thanks! That is enough, just like to collect samples to test interpolation methods.
Cu Selur
Originally Posted by Jagabo
Could you guys please explain how you're determining my "weird and inconsistent" field order? If I learn how to analyse, then I can do things more correctly.
The capture program was AmarecTV with a GV-USB2. What is the problem with capturing at 50fps?
Re the 25/50, AssumeFPS appears to work correctly with a 50fps source that has been TFM/TDecimated down to 16.67 speed. A 25fps source in AssumeFPS (TFMed down to 8.333 speed) produces a video that runs far too quickly (as I have mentioned upthread).
When MediaInfo reports 50i, or some manufacturer lists support for 50i, it's really 25i (25 interlaced frames per second, 50 fields per second). It's the same for 30i and 60i in NTSC areas.
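The field/frame relationship is easy to sanity-check. A sketch (the timestamps and labels are purely illustrative):

```python
# One second of PAL video: 50 fields per second, alternating top/bottom parity.
fields = [("top" if i % 2 == 0 else "bottom", i / 50) for i in range(50)]

# Weaving pairs each top field with the following bottom field into one
# interlaced frame, so one second of "50i" video holds only 25 frames.
frames = list(zip(fields[0::2], fields[1::2]))
print(len(frames))  # 25
```

Each woven frame still contains two distinct moments in time (its two fields are 1/50 s apart), which is why combing appears on motion.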
I see it!
I doubted that Alwyn's PAL camera could take 100 fields per second, but didn't really know for sure.
Whatever, one more confirmation that one should ALWAYS manually inspect the field sequence when "interlaced" is on the agenda.
Last edited by Sharc; 23rd Sep 2023 at 02:17.
Originally Posted by Sharc
I took a look at the manual. The GS300 (PAL) doesn't support 50 interlaced frames per second (aka 100 fields per second). It records 50 fields per second which get woven into 25 interlaced frames per second, so the format is the standard PAL 25i (= 25 interlaced frames per second, sometimes called 50i, referring to the 50 fields per second).
So something went wrong with the capture process it seems. (I refer to the file 'GS300 Canoe 1-000 F2-8.avi' of post#36).
The fields with changing parity can be matched using TFM() though, see below.
Here a script for visualizing:
AVISource("GS300 Canoe 1-000 F2-8.avi")
clip=last
clip=clip.ShowFrameNumber(scroll=true,x=60,y=260,size=36).ShowTime(x=100,y=286)
even=clip.SeparateFields().SelectEven().Subtitle("EVEN (TOP) field",size=36,align=5)
odd=clip.SeparateFields().SelectOdd().Subtitle("ODD (BOTTOM) field",size=36,align=5)
split=StackVertical(even,odd)
matched=clip.TFM(mode=1,pp=6,display=true).Subtitle("MATCHED",size=36,align=5) # pp=0 for no postprocessing of combed frames
return StackHorizontal(clip,split,matched)
Last edited by Sharc; 23rd Sep 2023 at 07:19.
Right. See my script above and set TFM(pp=0) for no postprocessing to see the orphans.
For example frame 64 with pp=0.
There's no point in dealing with this cap.
Last edited by Sharc; 23rd Sep 2023 at 07:35.
Thanks both, that particular file was only for John because he wanted a 1/1000th of a second shutter speed to freeze the frames.
Here's a more "conventional" file. I've also attached a USB-Live Stream DV capture. What's your impression of that DV for working on?
My current workflow: deinterlace with QTGMC to 50fps, then take that new file and apply TFM, TDecimate (cycle=12, cycleR=8), and FrameRateConverter to 25fps. If I try to combine all of that in one AVS, it doesn't turn out as nicely (in fact it's ugly). If I feed the second group with 50fps progressive, it comes out much better. Still a number of blends, but no dupes that I can see. I don't use AssumeFPS at all.
This worked for me:
LWLibavVideoSource("GV 25FPS LAGS(20230923-1956) Snip.avi")
FlipVertical()
AssumeBFF()
QTGMC(preset="fast")
ConvertToYV12()
SRestore(frate=17) # alternate: 16.66666667
FrameRateConverter(newnum=25000, newden=1000)
That's a killer, Jagabo, thanks. My plugins folder is ever increasing in size.
What's your opinion on the black bars: I'm obviously going to chop them all off and make it 4:3, but do they affect/upset AVISynth functions? Should I get rid of them before running the script?
One more question: that was the first ever analogue AVI I've captured that ended up BFF. Why would that have been?