I've asked groyal to be as quick as he can but given him free rein to choose whether to edit to a shorter file - I am sure he will notify the forum very soon.
The session with my friend went straightforwardly this morning. I left him alone to watch the processed clips and then we discussed what he had seen, with some replaying of the clips to clear up certain points. Our observations were very similar. The only point at issue was whether the disturbance at tens of Hz was just a faster version of the unpleasant, persistent 2Hz 'jolting' (perhaps better than 'blinking') effect. I think not, because I saw both in Bugster's clip. My friend suggested, after looking in slo-mo, that the 'jolting' picture sequence was going back a step before recommencing - at about 2Hz.
Here is the final story in tabular form. It's a pity that I can't preview the jpeg, as the print may be too small to be comfortably visible. However, I've saved it as a grey jpeg and you will hopefully be able to pick it off the screen and view it in a graphics application if it's not readable directly.
-
I've got it. It really can't be trimmed without losing the essence of what it is. Send me your e-mail in a PM and I'll send you the FTP details.
-
This is a much better test clip than what I had to work with.
Darryl -
<edit>
I had a big post here but need to correct some mistakes. I'll repost when done.
</edit>
I've figured out why the scripts that use a sequence like
SeparateFields()
BilinearResize(720,240) (or whatever resize filter you want to use)
SelectEvery(8, 0,1, 2,3,2, 5,4, 7,6,7)
Weave()
give bad results. Just to make sure everyone understands exactly what each command does I'll explain.
Opoman's VOB file was top-field-first so in all further discussions I'll use that field order. The VOB file was fully interlaced 50 fields per second -- each field was from a unique point in time.
SeparateFields() takes each frame of video and separates the two fields into individual pictures. The resulting pictures are half the height of the original and there are twice as many. A frame that started out with the following scan line sequence:
0,1,2,3,4,5,6,7..., 574,575
becomes the following two pictures:
0,2,4,6..., 574 (the top field)
and
1,3,5,7..., 575 (the bottom field)
SelectEvery(8, 0,1, 2,3,2, 5,4, 7,6,7) The first argument, 8, means "for every 8 input fields (numbered 0, 1, 2, 3, 4, 5, 6, 7), output the sequence indicated by the rest of the arguments: 0, 1, 2, 3, 2, 5, 4, 7, 6, 7." So for every 8 input fields, 10 fields are output, in the indicated sequence. This effectively converts a presumed 23.976 frame rate to 29.97 fps (23.976 * 10 / 8).
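Just to double-check the field arithmetic, here is a small Python simulation (my own illustration — AviSynth itself isn't involved) of what SelectEvery does to a run of field numbers:

```python
# Simulate AviSynth's SelectEvery(n, *pattern) on a list of field numbers.
def select_every(fields, n, pattern):
    out = []
    for base in range(0, len(fields) - n + 1, n):
        out.extend(fields[base + p] for p in pattern)
    return out

fields = list(range(8))                          # one input cycle of 8 fields
pattern = (0, 1, 2, 3, 2, 5, 4, 7, 6, 7)
print(select_every(fields, 8, pattern))          # -> [0, 1, 2, 3, 2, 5, 4, 7, 6, 7]
print(23.976 * len(pattern) / 8)                 # resulting frame rate (about 29.97)
```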
Weave() takes the 10 separated fields and weaves them together again into interlaced frames -- exactly the opposite of what SeparateFields did in the first step. (Note disclaimer below.) This then presents a series of interlaced frames (29.97 fps) to TMPGEnc.
Now the analysis: In short, the SelectEvery() examples that attempted to create interlaced frame-rate-converted video resulted in temporal or spatial distortions -- the fields are played out of sequence, or out of place, when watched on TV. Remember that you never see a full frame on television, you see a sequence of fields. By the time one field is being drawn the previous one has more or less faded away.
Let's examine the sequence from SelectEvery(8, 0,1, 2,3,2, 5,4, 7,6,7). First regroup them into pairs as they would be by the later Weave():
0,1, 2,3, 2,5, 4,7, 6,7
Remember that each field represents a picture from a unique point in time, the number of the field is the order in which it originally appeared on television, every even numbered field was a top field, every odd numbered field was a bottom field, and the original video was top-field-first. As you can see, top fields have remained top fields (each pair starts with an even number) and bottom fields have remained bottom fields (each pair ends in an odd number) but the final result has the fields played back in the wrong order: 0, 1, 2, 3, 2, 5, 4, 7, 6, 7.
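The parity and order claim is easy to verify mechanically. This little Python check (mine, purely illustrative) regroups the pattern into the pairs that Weave() will join and tests both properties:

```python
# SelectEvery output pattern, regrouped into the pairs Weave() will join.
pattern = [0, 1, 2, 3, 2, 5, 4, 7, 6, 7]
pairs = [tuple(pattern[i:i + 2]) for i in range(0, len(pattern), 2)]
print(pairs)  # -> [(0, 1), (2, 3), (2, 5), (4, 7), (6, 7)]

# Top-field-first source: even field numbers are top, odd are bottom.
parity_kept = all(t % 2 == 0 and b % 2 == 1 for t, b in pairs)
in_order = all(a <= b for a, b in zip(pattern, pattern[1:]))
print(parity_kept, in_order)  # -> True False (parity fine, play order broken)
```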
(As a side note, this pattern was originally from a conversion from 23.976 progressive frames per second to 29.97 interlaced frames per second. It works perfectly for that -- it is a 2:3 pulldown pattern.)
At first it seems that you could use a different order in SelectEvery() to prevent the fields from being played out of order:
SelectEvery(8, 0,1, 2,3,3, 4,5, 6,7,7)
but if you regroup (weave) this as
0,1, 2,3, 3,4, 5,6, 7,7
you can see that the fields are played in the right temporal order but some top fields have become bottom fields and some bottom fields have become top fields! This results in a spatial distortion in the 3,4 and 5,6 frames, and one frame which consists of two identical fields, 7,7.
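The opposite failure can be shown numerically too — a quick Python check (my own sketch, not AviSynth) of this alternative pattern:

```python
# The reordered pattern: temporal order is preserved, field parity is not.
pattern = [0, 1, 2, 3, 3, 4, 5, 6, 7, 7]
pairs = [tuple(pattern[i:i + 2]) for i in range(0, len(pattern), 2)]

in_order = all(a <= b for a, b in zip(pattern, pattern[1:]))
# Top-field-first source: even field numbers are top, odd are bottom.
bad_pairs = [(t, b) for t, b in pairs if t % 2 != 0 or b % 2 != 1]
print(in_order)   # -> True
print(bad_pairs)  # -> [(3, 4), (5, 6), (7, 7)]
```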
I have to toss in another disclaimer here. I had assumed Weave() would take each pair of fields and weave them together in the order they were presented to it. After a few hours of analysis, finding that the field pairs 3,4 and 5,6 were being played in the wrong temporal order, I finally went to avisynth.org and read what the manual has to say about Weave(). As it turns out, avisynth keeps track of which fields are top fields and which fields are bottom fields. So when it saw the pair 3,4 it knew that 3 was originally a bottom field and 4 was originally a top field. So instead of weaving them as 3,4 and 5,6 it wove them as 4,3 and 6,5! I found that I was able to override this behaviour by adding AssumeFrameBased, AssumeFieldBased, and another AssumeTFF() all before the Weave() command. So the final command sequence was:
(select source)
SeparateFields()
BilinearResize(720,240)
SelectEvery(8, 0,1, 2,3,2, 5,4, 7,6,7)
AssumeFrameBased()
AssumeFieldBased()
AssumeTFF()
Weave()
AssumeFPS(29.97)
The additional commands make avisynth "forget" whether the fields were originally top or bottom and reassemble them in the desired order. When watched you can see that all the fields appear in the correct temporal order, and motions were pretty smooth, but there is a 4 Hz vertical bounce -- most noticeable around the overlay text boxes. I thought it was more watchable than the first example (SelectEvery(8, 0,1, 2,3,2, 5,4, 7,6,7)) with the fields out of order.
Earlier I suggested the following pattern for converting directly from 25 fps to 30 fps:
SelectEvery(10, 0,1, 2,3,3, 4,5, 6,7, 8,9,9)
But this leads to a very similar situation. Regrouping the fields:
0,1, 2,3, 3,4, 5,6, 7,8, 9,9
has some fields switching position.
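For completeness, the same style of Python check (again mine, illustrative only) applied to the 25-to-30 pattern:

```python
# 10 input fields -> 12 output fields: 25 fps becomes 25 * 12 / 10 = 30 fps.
pattern = [0, 1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 9]
pairs = [tuple(pattern[i:i + 2]) for i in range(0, len(pattern), 2)]

# Top-field-first source: even field numbers are top, odd are bottom.
swapped = [(t, b) for t, b in pairs if t % 2 != 0 or b % 2 != 1]
print(swapped)        # -> [(3, 4), (5, 6), (7, 8), (9, 9)]
print(25 * 12 / 10)   # -> 30.0
```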
I haven't tried out dphirschler's script yet. The best results I've had so far are from groyal's corrected script:
(select source)
AssumeFrameBased()
SeparateFields()
SelectEvery(2,0)
LanczosResize(720,480)
AssumeFPS(23.976,true)
with 3:2 pulldown on the DVD player. This throws away every other field and fills in the missing field with data interpolated from the remaining field. In essence it converts interlaced video to progressive video with half the temporal and half the spatial resolution.
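In Python terms, the decimation step amounts to this (a toy model I wrote to illustrate, not the actual filter internals):

```python
# SelectEvery(2,0) after SeparateFields(): keep only the first field of
# each frame, i.e. only the top fields of a top-field-first source.
fields = [(i, 'top' if i % 2 == 0 else 'bot') for i in range(10)]
kept = fields[::2]
print(kept)           # only top fields survive, at half the temporal rate
print(25 / 23.976)    # AssumeFPS(23.976): playback slows by about 4.3%
```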
Actually, the smoothest, clearest, results I've had are from:
Crop (0, 48, 0, -48)
AssumeFPS (29.97)
Which simply crops 48 scanlines off the top and bottom (leaving 720x480) and tells TMPGEnc that the stream is interlaced 29.97 fps. Of course, this plays back 20 percent too fast, the top and the bottom of the picture are missing, and the final aspect ratio is wrong -- but it's very smooth! -
I spent a good deal of time playing with opoman's clip and the conversion scripts listed in his comparison. It was fascinating, because I had never considered the implications of standards conversion beyond purely subjective reasoning (if it looks good, do it).
To be fair, I've seen mechanical PAL-to-NTSC conversions (bootleg Gerry Anderson on VHS) that looked substantially worse than some of these, so I'm not inclined to believe that hardware is necessarily always better than software. It took me many viewings, direct and peripheral, with sound and without, progressive monitor and interlaced television, continuous play and frame advance, to understand what I was seeing well enough to describe them:
(1) Field reordering (à la SelectEvery(bunch,pattern)). Intuitively the simplest approach, but practically impossible because each field represents a different instant in time. It's extremely difficult to insert copies of fields at regular intervals to make up the rate differential while keeping the top/bottom relationship (and therefore the presentation order) of the surrounding fields intact. The visual effect is a kind of temporal precession, or cyclical vibration, that looks like Benny Hill's version of the Ministry of Silly Walks.
I don't know this as a fact, just an intuition, but I believe the reason pulldown works is because the source is progressive -- the frames have temporal precedence but the component fields do not. A new picture can start on the bottom field or the top without changing the temporal order of other pictures in the sequence, and that may be just the ticket.
(2) Frame synthesis (à la ChangeFPS()). This is similar to what hardware converters do, except the new frames are averages rather than motion-compensated interpolations. The visual effect is a blur (more like a smear) in the leading edge of moving objects with a wake following the trailing edge, with distortion proportional to velocity. A sports clip is probably the worst-case scenario for this method, but I don't see why it wouldn't work for a genre that is less kinetic.
(3) Video as Film (à la pulldown). In essence, this shifts the burden of rate conversion to the DVD player which has on-board means to perform an NTSC telecine (film to video) operation. The visual effect is that treating PAL like film makes it look like film, which is something I'd never have noticed if this were a clip from, say, East Enders.
Field-based video blurs moving objects along the leading edge by presenting it in different positions during the frame interval, and along the trailing edge by the object's movement during the field interval. When you drop one of the fields the leading-edge blur is eliminated but the trailing-edge blur is preserved, lending an unexpected but not objectionable "cine" look to the clip as a whole.
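The smear described in method (2) can be reproduced with a one-line blend — a toy numeric example I put together (not the actual ChangeFPS internals):

```python
# A synthesized in-between frame as a plain average of its two neighbours.
def blend(a, b, w):
    return [(1 - w) * x + w * y for x, y in zip(a, b)]

frame_a = [0, 0, 255, 255]    # a hard edge at pixel 2
frame_b = [0, 255, 255, 255]  # the edge has moved left by one pixel
print(blend(frame_a, frame_b, 0.5))  # -> [0.0, 127.5, 255.0, 255.0]
```

The half-grey pixel where the edge moved is exactly the leading-edge smear described above.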
Apart from the visual effects I described, I couldn't see anything that looked anomalous, particularly nothing around 10 Hz. I've never watched a multistandard TV, but my eyes are accustomed to a 60 Hz display so I'm not sure to what degree I'd notice 50 Hz as being "off," nor how long it would take my eyes to adjust to that frequency. Could the 10 Hz anomaly be a side-effect of watching a 60 Hz picture on an ordinarily 50 Hz display? -
I thought long and hard about the field reordering method. This script uses Bob() to "break" the temporal relationship between fields, then duplicates every 5th field, yielding 29.97 fps interlaced output:
Code:
AssumeFPS(24.975)           # Set field timing
Bob()                       # Convert top/bot to top/top
SelectEvery(5,0,1,2,3,4,4)  # Output 6 fields for every 5
LanczosResize(720,240)      # Resize for NTSC
AssumeFieldBased()          # Convert top/top to top/bot
Weave()                     # Stitch them back together
ConvertToRGB()              # [OPTIONAL] colorspace conversion
Video
Stream Type: MPEG-2
Size: 720 x 480
Aspect: 4:3 Display
Frame Rate: 29.97 fps
Rate Control: Constant (CBR)
Bitrate: 4000
VBV: automatic
Profile: MP@ML
Format: NTSC
Mode: Interlace
YUV: 4:2:0
Precision: 8 bits
Motion Search: Normal
Advanced
Type: Interlace
Order: Top field first (A)
Aspect: 4:3 525 line (NTSC)
Arrange: Full Screen
GOP
I: 1
P: 4
B: 2
Output interval: 1
Max frames in GOP: 15
Output bitstream for edit: check
Detect scene change: check
This about taps me out. I can't think of another way to do it that gives a good result. If you add this script to the other two known to work, you'll be able to choose the best of the three. None will be perfect, but one should be good enough. -
groyal, I was thinking along the same lines, using Bob instead of SeparateFields(). But now that you've essentially converted them both to top fields you still get a vertical bounce whenever one of them gets used as a bottom field.
What you really need to do is pull top/bottom fields out of the Bob'd frames, just as a regular 3:2 pulldown would do. But of course you can't use 3:2 pulldown because that doesn't give you the correct ratio. I don't think a filter exists to perform the proper pulldown pattern. Could be an interesting little programming project. Maybe I'll write it when I have time. -
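As an editorial aside, the filter imagined here could be sketched as a field schedule. This Python fragment (entirely hypothetical — no such plugin exists) works out which Bob'd picture each output field would sample:

```python
# Hypothetical pulldown from Bob'd frames: 50 full-height pictures per
# second become 59.94 output fields per second. Each output slot keeps a
# fixed top/bottom parity; the nearest source picture supplies the field.
src_rate, out_rate = 50.0, 59.94
schedule = [(round(k * src_rate / out_rate),   # source picture index
             'top' if k % 2 == 0 else 'bot')   # parity of the output slot
            for k in range(12)]                # first dozen output fields
print(schedule)
```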
Originally Posted by junkmalle
What you really need to do is pull top/bottom fields out of the Bob'd frames, just as a regular 3:2 pulldown would do. But of course you can't use 3:2 pulldown because that doesn't give you the correct ratio.
Code:
Frame Number: 01234 56789 abcde fghij klmno (50 fields/sec)
Fields/Frame: 23232 32323 23232 32323 23232 (62 fields/sec)
I don't think a filter exists to perform the proper pulldown pattern. Could be an interesting little programming project. Maybe I'll write it when I have time. -
groyal, I wasn't concerned about the 25.0 vs 23.976 fps difference (your 1.03 fps), but rather the fact that 3:2 pulldown is designed to work from 23.976 frames per second, not 47.952 frames per second (what you get from Bob()).
Your script will work fine but will have that slight vertical bounce. I was looking for a way to completely eliminate the bounce. I have reason to believe the method I propose is what hardware PAL to NTSC converters use. -
Oh, come on. Does EVERYTHING have to be a pissing contest around here? I wasn't rebuking you, junkmalle, I was trying to point out that the wheel has been invented already. -
I just noticed this:
http://neuron2.net/dgpulldown/dgpulldown.html
Might be worth a try.
Darryl -
My thanks for some of the further suggestions above. I confess that I have got behind with this thread as I have been seduced by the remarkable thing that scharfis_brain has done at:
http://forum.doom9.org/showthread.php?s=&postid=606487#post606487
This script/plugin produces excellent conversions of my PAL rugby clip.
At the moment I am trying to deal with the fact that my 1gig PIII takes about six hours to process the 3.25 minute clip with the full treatment!
Apart from the fact that, in due time, I'm sure that there will be faster mods, I am experimenting with cutting things out and seeing how the results look.
What is much clearer to me now is that watching is satisfactory if there is what may be called 'motion fidelity'. Decent image quality is a bonus but not essential for comfortable following of the action in the game. It is also clear that it is mainly the horizontal component of the motion which is critical for enjoying rugby.
With one exception, all the methods I reported on, and possibly some of those subsequently added, make the viewing uncomfortable because of irregularity in the motion - the 'bounce' which occurs when the field order is disturbed and the hesitation when 'change' replicates the fifth frame.
The exception is, of course, Junkmalle's tongue-in-cheek suggestion of cropping 96 rows and AssumeFPS(29.97). As he wrote, it gives great motion fidelity but the speed-up delivers 'super rugby' which doesn't quite convince - not to mention the headache of synching the sound.
If you are prepared to leave your machine alone for a few hours I recommend you try scharfis_brain's motion compensation. He/she insists on giving credit to those who went before but he/she has made the thing accessible to the likes of me by handing over a set of stuff to plant in your Avisynth plugin folder - and a very simple looking script - I mean the one that you write for yourself, not the one that you import (mvbob.avs).
From the point of smooth action, it sets a standard. Now we'll have to see if one can get something workable without putting five PIVs to work on it! -
DGPulldown:
I tried 25 to 29.97 with DGPulldown. Curiously, despite preparing my clip with a 720x480 resize, the output of DGPulldown was 720x576 and TMPGEnc DVDAuthor wouldn't have it. -
Originally Posted by opoman
I can deal with certain errors, but ghosting, jerkiness and jitter are not acceptable. I'm guessing we're talking the same thing here.
I'll have to try that script sometime. -
groyal's last script didn't work right as posted. Here's a mod that works:
Code:
(open source file)
AssumeTFF()
AssumeFPS(24.975)           # Set field timing
Bob()                       # Convert top/bot to top/top
SelectEvery(5,0,1,2,3,4,4)  # Output 6 fields for every 5
BilinearResize(720,240)     # Resize for NTSC
AssumeFrameBased()          # Make AVISynth forget T/B field
AssumeFieldBased()          # Convert top/top to top/bot
AssumeTFF()
Weave()                     # Stitch them back together
ConvertToRGB()              # [OPTIONAL] colorspace conversion
-
I'm far too new at this to be sure, but I am beginning to think, from my own as well as other results, that scripts don't always do the same for all sources, even if nominally similar (e.g. off-air PAL sport). I've had one example of a surprising change of behaviour in small (say 100-frame) sections of one converted clip.
I also note the occasional reference to the use of 'assumeframebased', 'assumefieldbased' as being important although, sometimes, not obviously, called for.
I know very little about the use of flags in the MPEG2 standard but I am beginning to wonder if some of this variability is associated with a reputedly cavalier use of flags by producers and processors of video, perhaps both human and computer.
Just a thought at this stage. -
Originally Posted by junkmalle
Originally Posted by opoman
I know very little about the use of flags in the MPEG2 standard but I am beginning to wonder if some of this variability is associated with a reputedly cavalier use of flags by producers and processors of video, perhaps both human and computer.
From the point of smooth action, [scharfis_brain's script] sets a standard. Now we'll have to see if one can get something workable without putting five PIVs to work on it!
Code:
setmemorymax(384)
LoadPlugin("c:\video\avisynth2\plugins\mpeg2dec\MPEG2DEC3.DLL")
import("c:\video\avisynth2\plugins\mvbob\mvbob2.avs")
mpeg2source("d:\opoman\rugby.d2v")
mvbob()
mvfps(59.94)
converttoyuy2()
lanczos4resize(width,480)
assumebff().separatefields().selectevery(4,0,3).weave()
I produced a revision of the script that discards everything that can be discarded, and motion-compensates the smallest possible picture area. The motion compensation is necessarily coarser, but the performance improvement is dramatic.
Code:
LoadPlugin("c:\video\avisynth2\plugins\mpeg2dec\MPEG2DEC3.DLL")
LoadPlugin("c:\video\avisynth2\plugins\mvbob\MVTools0962.dll")
mpeg2source("d:\opoman\rugby.d2v")
crop(12,6,696,568)
i=bob(height=240)
fwd=mvtools0962_mvanalyse(i,isb=false,lambda=400)
bwd=mvtools0962_mvanalyse(i,isb=true, lambda=400)
i.mvtools0962_mvconvertfps(bwd,fwd,fps=59.94)
AddBorders(12,0,12,0)
AssumeFieldBased()
Weave()
1. Apparently, SetMemoryMax(384) is like emptying the ocean with a thimble. I got better performance by deleting this line and letting Windows manage the memory.
2. Mvbob() puts out some purty frames, but it slows the script geometrically. Plain old Bob() isn't as fancy, but it's lightning-fast by comparison. Since we don't need the Mvbob() procedure we don't need the Mvbob() script, so I dumped the whole thing in favor of accessing the MVtools DLL directly.
3. A 60% reduction in area is achieved by cropping irrelevant information from the frame and resizing to NTSC height before motion compensation, then padding back to regulation width afterward.
<edit: typo in script> -
Extra Credit:
According to the MVtools documentation, there are several motion search options, one of which may be significantly faster. This IBM paper, Requirements for motion-estimation search range in MPEG-2 coded video (PDF), may be useful for finding the optimum parameters.
Snell & Wilcox is a manufacturer of fabulously expensive broadcast hardware (such as standards converters) who provide some useful "infotorials" on the motion-estimated conversion process:
The Engineer's Guide to Motion Compensation (PDF)
The Engineer's Guide to Standards Conversion (PDF)
Keep in mind that these are brainy sales brochures and not research products per se -- the section recommending motion compensation for NTSC telecine, for example, is justifiable but moot as virtually all motion pictures distributed in MPEG-2 format in North America are coded 23.976 progressive with 2-3 pulldown.