Some "housekeeping" issues - in either vegas or adobe - make sure you interpret the footage as progressive, and have the project properties set to progressive (or sequence settings in adobe). Basically you want to ensure that the software doesn't make a mistake and deinterlace somewhere - that would degrade the video unnecessarily.
Alpha paint reveal technique:
A short video demonstration is attached below. Sorry, no audio, but it's pretty easy to understand.
Pretend you are using tracks 1 & 2 only for this intro example, because they are already aligned temporally, and fairly good spatially. Looking at your samples, 99% are already covered by 1,2. But if you need to, you can add "coverage" layers from 3 & 4. And if they aren't covered - you still have the option of clone stamp or other compositing techniques right in AE, or photoshop if you need to
Think of it as combining parts of video 1 and 2. You just paint a rough stroke with the brush in the alpha channel along the defect (you can change parameters like brush size, width, hardness etc... but it really doesn't matter much and you don't have to be exact or precise - that's why it's fast. The results mostly depend on the characteristics of the secondary layer, like alignment). Often the most difficult part is actually identifying where the defects are. Big ones are easy to see; but you might miss "seeing" tiny ones.
There are several different variations on how you alpha paint; but to keep it simple for this example, I have the video1 base layer on track 1 , video2 is on track 2 is the secondary layer where you are taking fixes from. Think of it as "poking holes" in the top layer to "reveal" the bottom layer underneath. The alpha channel determines transparency, thus you are painting "transparent" holes , to see the layer underneath.
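To make the "poking holes" idea concrete: the alpha value acts as a per-pixel mix weight between the two layers. A minimal Python sketch of the compositing math only (this is not AE code; the function is just illustrative):

```python
# Conceptual sketch of alpha compositing: alpha = 1.0 means the top
# layer is fully opaque; painting alpha down to 0.0 "pokes a hole"
# that reveals the bottom layer. Pixel values here are 0-255 luma.

def composite(top, bottom, alpha):
    """Blend one pixel: alpha=1 shows top, alpha=0 shows bottom."""
    return round(alpha * top + (1.0 - alpha) * bottom)

# A defect (bright scratch, value 250) on the top layer, clean
# bottom-layer value 120:
print(composite(250, 120, 1.0))  # untouched: defect shows -> 250
print(composite(250, 120, 0.0))  # painted hole: bottom shows -> 120
print(composite(250, 120, 0.5))  # feathered brush edge -> 185
```

A soft brush simply writes intermediate alpha values near the stroke edge, which is why the hardness setting controls how visibly the patch blends.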
I'll assume you know the very basics of AE like how to open a composition, shortcut keys etc... I like to work in "square pixels" so if you're wondering why everything is horizontally "squished" that's why
1) right click layer 1 => open layer. This is the layer panel. You can drag and dock / re-arrange / place the panels wherever you want. The way I have it setup in the video is composition panel is on the left, layer panel is on the right. The comp panel is the "final" view. The layer panel is where you "paint".
2) Select the brush (or ctrl+b). The 2 important settings for the paint mode: it has to be set to channels: alpha, duration: single frame. The others you can experiment with, brush size etc... hardness is another one you'll want to play with - that controls the amount of feathering or edge blending between the 2 layers
3) Move to the defect, hold down LMB, paint over the defect, release LMB. Repeat. To advance to next frame , <page up> is the default shortcut key in AE. <page down> is backwards.
Anyways that's the basics. I've purposely done it slowly in the video, missed some spots and had to go over some lines twice (ok maybe not purposely for that, I just clumsily missed a few) - but I'm sure you can get the general impression that it's quite easy to do. You can combine it with some of the other techniques - your "base" layer might be a modified median stack, for example.
I took poisondeathray's idea from post #30, replacing duplicate frames in v3 with black frames, then replacing those black frames with frames taken from v4. Finally a median of v1, v2, and that video was taken. The result was even more clean frames. I didn't notice any frames that were worse than in the straight median of v1, v2 and v3 in post #22.
Code:
v1 = Mpeg2Source("D:\Downloads\median_test_sample1.demuxed.d2v")
v2 = Mpeg2Source("D:\Downloads\median_test_sample2.demuxed.d2v")
v3 = Mpeg2Source("D:\Downloads\median_test_sample3.demuxed.d2v")
v4 = Mpeg2Source("D:\Downloads\median_test_sample4.demuxed.d2v")

v3 = v3.Loop(2,0,0) # align v3 better with v1 and v2
v4 = v4.Loop(2,0,0) # align v4 better with v1 and v2

# first make a motion mask of v3: black at duplicate frames, some bright pixels otherwise
v3m = mt_motion(v3, thY1=10, thY2=10, thT=255)

# use v3m to make a video from v3 with duplicates replaced with a black frame
v3b = ConditionalFilter(v3m, v3, v3m, "AverageLuma()", "greaterthan", "10")

# replace black frames in v3b with frames from v4
v5 = ConditionalFilter(v3b, v3, v4, "AverageLuma()", "greaterthan", "50")

Median(v1,v2,v5, sync=1) # allow one frame forward or back

# show median, v1, v2, and v5
StackVertical(StackHorizontal(last.Subtitle("median"), v1.Subtitle("v1")), StackHorizontal(v2.Subtitle("v2"), v5.Subtitle("v5")))
-
Using ReplaceFramesSimple() I replaced defective median frames with clean(er) frames from v1 or v2 when available. Add these two lines after the call to median() in the previous script:
Code:
ReplaceFramesSimple(last, v1, Mappings="[62 62][178 178][247 247][264 264][303 303][315 315]")
ReplaceFramesSimple(last, v2, Mappings="[23 23][30 30][56 56][84 84][87 87][136 136][241 241][252 252][258 258][261 261][300 300][309 309][312 312][326 326][375 375][384 384]")
-
I don't often do exactly this sort of thing (replace frames from one video with those from another), but I do replace quite a few frames I've cleaned up in a photo editor, and your ReplaceFramesSimple calls can be written more efficiently. When only one frame is being replaced you don't have to write the same frame number twice or use the brackets. You can do it like this and save some time:
ReplaceFramesSimple(last, v1, Mappings="62 178 247 264 303 315")
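As an aside, if you keep your single-frame fix lists elsewhere (a text file, a spreadsheet), the short-form Mappings string can be generated rather than typed; a hypothetical Python helper (the function name is mine, not part of AviSynth):

```python
# Hypothetical helper: build a ReplaceFramesSimple Mappings string
# from a list of single frame numbers, using the short bare-number
# form described above (no brackets needed for single frames).
def mappings(frames):
    return " ".join(str(f) for f in sorted(frames))

print(mappings([62, 178, 247, 264, 303, 315]))
# -> "62 178 247 264 303 315"
```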
Since I do go through my captures frame by frame (three times each), I appreciate the work you're doing and I'm learning, too. -
Thanks poisondeathray, for the track 3 & 4 analysis and for the nice tutorial.
... and there are different unique frames from each 3&4 (so in that sense you're sort of "wasting" some frames with a simple 3 on 4 overlay , or a 4 on 3 overlay).
Your tutorial is excellent and to the point. I have done some fixing with this method now and it is indeed fast, since no great paint stroke accuracy is needed. How would you fix a frame on track 1 with parts of the frame on track 3 or 4? The logical way of doing this is to use a huge brush diameter that covers the whole frame and click on the mask of layer 2, and then paint the holes on the mask of layer 1. If we want to use layer 4 instead of 3 then we have to repeat the unmasking of layer 2 for layer 3 as well, and then paint over the mask of layer 1. Is there a faster way of doing this? -
Great job jagabo!
With your last script I think you have churned out the maximum from the median filter for this project. I never thought AviSynth and its filters could be so powerful. It's time for me to get familiar with it and with the median filter. I am not a video editor and never did such work before, but since nobody would sacrifice the amount of time needed to get the maximum out of these captures, I have to learn how to do it myself now.
It seems that now we have got all the tools and methods for fixing the video, and the rest is only putting them into practice. The median filter will save lots of unnecessary work, but the manual work of replacements - alpha painting and visual checking of each frame for all kinds of blemishes - is still needed (and most time consuming). Now I will read some pages about AviSynth and the median filter and then get to work.
In the meantime, if anybody has some more suggestions on how to further improve or speed up this blemish fixing process, please let us know.
Thanks to everybody for the contributions. -
Yes, you still have access to all the frames, but I meant for the purposes of constructing a 3rd or 4th instance to feed the median stack. That was the purpose of that little manual alignment test - to get a feel for what 3&4 are really offering. If you had a good 3rd instance, everything should be taken care of automatically for the most part using a median stack. Only in frames where the defect persists in the same location across 2 or 3 of 3 instances, will it persist after the median stack.
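The per-pixel logic of that last point can be sketched in a few lines of Python (illustration only - the actual Median() plugin works on whole frames): with three instances, an outlier present in one instance is voted out, but a defect shared by two of three wins the vote.

```python
from statistics import median

# One pixel across three captures. The clean value is ~120; a
# dropout/scratch spike is 255.
print(median([120, 122, 255]))  # defect in 1 of 3 -> rejected, gives 122
print(median([120, 255, 255]))  # defect in 2 of 3 -> persists, gives 255
```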
Your tutorial is excellent and to the point. I have done some fixing with this method now and it is indeed fast, since no great paint stroke accuracy is needed. How would you fix a frame on track 1 with parts of the frame on track 3 or 4? The logical way of doing this is to use a huge brush diameter that covers the whole frame and click on the mask of layer 2, and then paint the holes on the mask of layer 1. If we want to use layer 4 instead of 3 then we have to repeat the unmasking of layer 2 for layer 3 as well, and then paint over the mask of layer 1. Is there a faster way of doing this?
You're correct that the reason alpha painting can be fast is that you don't have to be so precise, i.e. "sloppy", quick, almost careless painting works. But the only reason it works is that the layer below is aligned and similar to the top layer, and the defects are not in the same spatial location.
The way you structure the workflow / setup in AE depends on how often you're going to access #3 or #4 or other layers. If it's infrequent, I would use coverage layers (a separate layer, trimmed to 1 frame duration, on the frame of interest)
A real example, where "sloppy" alpha painting doesn't work nicely because the defects are too close spatially on 1&2 is Frame 697. That's where you could "borrow" from other layers .
So I would drag a copy of video 4 just below layer 1 onto track 2. If you follow along, this will shift video 2 on layer 2, to layer 3. video4 is actually aligned on this frame already, but if you needed to align just drag the layer left or right, or use the shortcut alt+pageup or alt+pagedown to nudge a layer in time a single frame. With the video 4 layer selected and the playhead at frame 697, push alt+[ , alt+] . That will trim the layer to frame 697 only.
The layer panel should still be on the "main" layer, but if for some reason it isn't, then double click the layer (or right click, open layer). Then proceed as before. It's called a "coverage layer" because it only covers 697. When you advance to other frames, that inserted instance of video 4 isn't being used, because you trimmed it to 697
There is a "shy layers" function in AE that is a useful "housekeeping" function. You can hide that coverage layer by enabling shy on the layer of interest and the composition shy button. If you can imagine on large projects you might have many coverage layers - it makes for a cluttered messy workspace. Shy takes care of that, very useful.
If it doesn't make sense, just ask for clarification, I can post some screenshots or another short video
There are other ways to set it up - different arrangements, paint modes, masking, coverage masking, etc... but I believe this way is the "fastest" for what you want to do in terms of manual fixes. I didn't see anything that required photoshop
As you may have noticed, median also does a good job of reducing noise. This means you have to be careful where in the workflow you plan to paint or mask, otherwise the "paint patches" won't match. If you patch in an original version over a "clean" version, it will stick out like a sore thumb. This is what compositing is really about - doing stuff in a way that makes things blend in seamlessly. There are other manipulations you can do like matching grain / noise, color, but it's better to plan your workflow ahead of time so you don't have to do those other things and make it harder for yourself.
Last edited by poisondeathray; 23rd Jul 2016 at 11:57.
-
Thanks poisondeathray for the explanation.
So I would drag a copy of video 4 just below layer 1 onto track 2.
Another problem where I am stuck right now. I am trying to open avs files with Sony Vegas using avfs with no success. I am getting this error message:
MPEG2Source: Could not open one of the input files.
(C:\Volumes\median.avs, line 5)
where line 5 is:
v1 = Mpeg2Source("C:\Volumes\median_test_sample1.d2v")
Any idea How to fix this?
EDIT:
In the meantime I found the problem. The folder names which contained the .d2v files had to be modified and the space removed, otherwise the avfs command would not mount. But since the folder name changed, the .d2v files got spoiled for some reason (they must contain an absolute path name, which is awkward) and the median.avs could not be played anymore with any player. The solution was to regenerate the files with DGIndex.exe again, and now it works.
However, the median.avs file plays back very choppily in Vegas (the frame serving to Vegas might be too slow). Is it possible to speed this up?
Last edited by Zoltan Losonc; 24th Jul 2016 at 12:50.
-
I meant drag it from the clip bin. An original reference.
If you wanted to copy/paste from a different layer right on the timeline ctrl+d is the shortcut to duplicate a layer. Be careful when duplicating layers, because all effects/edits/paint etc.. will also be duplicated . So in general you want to drag a "fresh" clip reference from the clip bin when doing things like coverage layers / painting
Another problem where I am stuck right now. I am trying to open avs files with Sony Vegas using avfs with no success. I am getting this error message:
MPEG2Source: Could not open one of the input files.
(C:\Volumes\median.avs, line 5)
where line 5 is:
v1 = Mpeg2Source("C:\Volumes\median_test_sample1.d2v")
Any idea How to fix this?
Post the full script. In what directory location are the physical "median_test_sample1.d2v" and mpg? The c:\volumes will be a virtual directory, so you generally don't want to put your original clip or avs there
does the script, before you mount it with avfs, preview ok in something like vdub or avspmod ?
Also for avfs and vegas, you need to add either ConvertToRGB24() or ConvertToRGB32() at the end (vegas won't accept YUV input through avfs) -
Probably not - there is additional overhead when frameserving through avisynth and again through avfs, and if you add on top of that median filtering plus any other manipulations in the script, it bogs down. There is a GPU source filter (DGSource) that can offload the source decoding, but for SD MPEG2 it might only save 1-2%. And it's not free and requires a compatible Nvidia card. It would help with HD or UHD sources, where decoding is a larger % of CPU consumption
It's probably "safer" IMO to "bake" the script into a lossless intermediate. It requires more HDD space, but it's a more stable way of editing and faster/smoother on the timeline -
The original avs files were in another directory where they could not be mounted at first (due to folder naming problems), and after fixing the folder names they did not work because the .d2v files contained the old folder names. This is why I attempted to copy them into the Volumes folder to see if it would work there. Anyway, it works now after finding out the awkward behavior of these programs. Vegas also opened the avs file without enabling ConvertToRGB32().
I have opened the median.avs in Vegas and rendered it using lagarith with different settings for comparison. When rendered as RGB, the file size is 0.99GB; when using YUY2, the same video is only 565MB. RGB creates much larger files and there is no perceptible improvement or difference in colors. It is a pity if YUY2 cannot be used in AE and Photoshop. I think PP, Vegas, and Avisynth can all handle YUY2. Would it not be better to render into lossless format with lagarith in YUY2 and load it into PP, then link it to AE and Photoshop when necessary?
It indeed looks like the best start is to convert the videos into lossless format and then I can edit and tweak it with all necessary tools as many times as needed without losing quality due to compression losses. The present task is to test all possible formats and color spaces to see any quality and size differences, and then make the best choice.
To everybody: any suggestions about this lossless format choice? -
YV12 4:2:0 will give you the smallest filesize because there is more chroma subsampling. That's also your original pixel format for your mpeg files. YUY2 is 4:2:2, RGB is full color and a different colorspace. Technically only YV12 will be lossless when compared to the original
But as soon as you bring that YV12 video into vegas, or AE, it gets converted to uncompressed RGB internally . Once you're in a colorspace , stay there. Converting back and forth is lossy as a general rule (there are lossless transforms, and higher bitdepth conversions, but in practice you're going to incur loss). The more times you do it , the more loss. A lossless codec might prevent additional compression losses, but you can still incur quality loss from converting colorspaces. A lossless codec is only mathematically lossless in the same colorspace
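To make the round-trip loss concrete, here is a small Python sketch using the standard BT.601 studio-range conversion formulas with 8-bit rounding and clipping (the specific pixel values are made up for illustration):

```python
# Sketch of why repeated colorspace conversions are lossy: an 8-bit
# YUV -> RGB -> YUV round trip with BT.601 (studio range) formulas.
# Integer rounding and gamut clipping discard information each way.

def clamp(x):
    return max(0, min(255, round(x)))

def yuv_to_rgb(y, u, v):
    c, d, e = y - 16, u - 128, v - 128
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b

def rgb_to_yuv(r, g, b):
    y = clamp(0.257 * r + 0.504 * g + 0.098 * b + 16)
    u = clamp(-0.148 * r - 0.291 * g + 0.439 * b + 128)
    v = clamp(0.439 * r - 0.368 * g - 0.071 * b + 128)
    return y, u, v

original = (100, 50, 200)          # a strongly saturated YUV pixel
roundtrip = rgb_to_yuv(*yuv_to_rgb(*original))
print(original, "->", roundtrip)   # values do not come back exactly
```

Here the blue component clips to 0 on the way out, so the original chroma cannot be recovered on the way back - exactly the kind of loss a lossless codec cannot protect you from.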
Most compressed would be FFV1 in inter mode (long GOP) , but I would advise against it. It's very slow to edit, more suitable for archival purposes. Lagarith is a good choice, with good compression . UT Video will decode faster (better performance, but larger filesizes). MagicYUV is the fastest, but is reported to have some issues in PP (it works fine for me) . Lagarith and UT Video are "battle tested", very reliable used in many programs. UT is the lossless codec of choice recommended on the adobe forums. MagicYUV is better for vegas, if you have full range YUV video, because the full range option is accepted in vegas -
Also, if you're only wanting to encode median.avs, I recommend virtualdub to do it. It's going to be much slower frameserving through avfs and then through vegas. Also vegas will upsample to RGB internally, so your video is technically not "lossless"
In vdub, video=>fast recompress, audio=>direct stream copy (will frameserve uncompressed audio if you're using audio, otherwise set it to "no audio") ,video=> compression (choose your compression), file=>save as avi (choose location and filename)
Yes, the original mpeg is yuv420p, which is 4:2:0. That is the same as "YV12". I would only use RGB if you had to, e.g. when coming OUT of a program that works ONLY in RGB. AE works in RGB only, for example - if you want to use it, RGB is a necessary step. -
For the median filter virtualdub is a good choice to encode.
Do you recommend that the original mpeg files should be also first loaded into an avs script, and then opened and losselssly encoded in virtualdub?
Vegas can open the original mpeg files directly, without frameserving, and it can also encode them using lagarith. -
Not necessary for the mpeg files, they are supported as-is in all those programs (except photoshop)
Some people choose to do it anyways, because the mpeg files you have are long GOP (each frame isn't complete, but temporal compression is used - a group of pictures dependent on a single complete frame) . Editing long GOP has a higher risk of "mixing" up frames and other buggier issues. I wouldn't worry about it, personally I would use them as-is. -
After performing a bit deeper analysis of the videos, it came out that there are duplicate frames in videos 1 & 2 as well, therefore they cannot serve as a firm basis for temporal comparison without improving at least the first track in this regard. It was only a lucky choice that in this short 30s sample clip Track 1 did not have any duplicates. Therefore I will have to find all duplicates in Track 1 and replace them with non-duplicates from the other 3 Tracks.
The option of interpolating the missing frames has also been examined, and the quality of the interpolated frames is really bad. Instead of really moving shapes, these filters only smear out moving pixels, so this is not useful for my purpose. Perhaps in 10 years, with some more AI built into these interpolators with shape recognition, real interpolation can be done, but not for now.
Using parts of jagabo's code I have put together two experimental scripts to perform the duplicate detection and replacement operation. The first one FindDuplicates.avs finds all duplicates in Track 3 and records the frame numbers in a text file:
LoadPlugin("C:\V\DGDecode.dll")
LoadPlugin("C:\V\masktools2.dll")
v3 = Mpeg2Source("C:\V\median_test_sample3.d2v")
v3 = v3.Loop(2,0,0) # align v3 better with v1 and v2
# first make a motion mask of v3: black at duplicate frames, some bright pixels otherwise
v3m = mt_motion(v3, thY1=10, thY2=10, thT=255)
filename = "C:\V\duplicates_in_SampTrack3.txt"
spacer = " "
Tr = "True"
# this line is written when the script is opened
WriteFileStart(v3m, filename, """ "Type Bool"+Chr(13)+"Default False"+Chr(13)+" " """)
WriteFileIf(v3m, filename, "(AverageLuma(v3m)<0.22)", "current_frame", "spacer", "Tr")
The second ReplaceDuplicates.avs reads these frame numbers from the text file and replaces the duplicate frames in Track 3 with non duplicate frames from Track 1:
LoadPlugin("C:\V\DGDecode.dll")
v1 = Mpeg2Source("C:\V\median_test_sample1.d2v")
v3 = Mpeg2Source("C:\V\median_test_sample3.d2v")
v3 = v3.Loop(2,0,0) # align v3 better with v1 and v2
filename = "C:\V\duplicates_in_SampTrack3.txt"
ConditionalFilter(v3, v1, Greyscale(v3), "myvar", "==", "True")
ConditionalReader(last, "C:\V\duplicates_in_SampTrack3.txt", "myvar", True)
The reason for me posting this here is to report a possible bug in AviSynth or the filters. If you use the original text file generated by FindDuplicates.avs with ReplaceDuplicates.avs, then "true" will always be displayed, even when the variable is obviously "false". I wasted a great deal of time finding out what is wrong, so here is the solution: if you open the text file in WordPad and resave it, then ReplaceDuplicates.avs will display the values of the variable correctly. I suppose that WriteFileIf or WriteFileStart creates an incompatible file, or perhaps the file is not closed, and that causes the malfunction.
If you know how to produce a properly working output text file automatically (without resaving it with WordPad), please let me know.
If anybody wants to do something similar, then please note that v3 = v3.Loop(2,0,0) will not be needed for you if the tracks are properly aligned, and the value 0.22 has to be adjusted to your input videos, otherwise the script will either miss duplicates or treat good frames as duplicates. This can be done by modifying FindDuplicates.avs to record the AverageLuma value next to the detected frame number as well, and using a large value instead of 0.22. In my case even the number 1 was a large value which included many good frames indicated to be duplicates. Run this modified script for the whole video at normal speed (increased speed produces many incorrect detections). Then go through the list, choose sample frames with different values and verify their validity in Vegas (or equivalent). The goal is to find the threshold value above which false positives are reported and below which true duplicates are not detected.
I first sampled a value around 0.9, which was a false positive, then one around 0.5 with a similar result, then one around 0.1 which was a duplicate. So apparently the threshold should be between 0.5 and 0.1. By testing some more samples between these limits, I found that the threshold should be at about 0.22 for this video.
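The search described above is essentially a bisection between a known false positive and a known true duplicate; once the threshold is fixed, classification is a one-liner. A Python sketch of the idea (the logged values are invented for illustration):

```python
# Sketch of the duplicate classification: given per-frame AverageLuma
# values of the motion mask (low = little change = likely duplicate),
# return the frame numbers below the tuned threshold. Real values
# would come from the WriteFileIf log described in the post.
def find_duplicates(frame_diffs, threshold=0.22):
    """Return frame numbers whose motion-mask luma is below threshold."""
    return [n for n, d in enumerate(frame_diffs) if d < threshold]

# Hypothetical logged values: frames 2 and 5 are duplicates.
diffs = [0.90, 0.51, 0.10, 0.43, 0.95, 0.05, 0.78]
print(find_duplicates(diffs))  # -> [2, 5]
```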
Knowing the correct value, put it into the script above and re-run the duplicate detection. This will give the most accurate results one can get automatically. Life would be too easy if this final list were perfectly accurate, which is not the case. There are a few cases, for example when the video is fading out, which will confuse the detection logic. It is not too difficult to locate these frames in the first test run (using a large value like 1) because they are in regions where many consecutive frames are falsely indicated as duplicates, having AverageLuma values even above the threshold. These very few false positives should be manually deleted from the list in the text file before the final run of ReplaceDuplicates.avs. -
-
Thanks jagabo, you have found the cause of the bug.
Based on this definition:
Chr(13) = Carriage Return (moves the cursor to the leftmost side)
Chr(10) = Line Feed / New Line (drops the cursor down one line)
The combination of both is used to start typing at the beginning of a new line
Type Bool Default False
0 True
1 True
2 True
3 True
4 True
6 True
8 True
9 True
...
Type Bool
Default False
0 True
1 True
2 True
3 True
4 True
6 True
8 True
9 True
The best solution is to use only Chr(10) in which case we get the correct format in the text files, and also a correct variable value display.
So the complete line should read like this:
WriteFileStart(v3m, filename, """ "Type Bool"+Chr(10)+"Default False"+Chr(10) """) -
Some systems use only CR, some use only LF. Some require both. This is a longstanding issue with computers and text.
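For what it's worth, the generated file could also be normalized automatically instead of resaving in WordPad; a small Python sketch (any scripting language would do the same):

```python
# Normalize line endings in the WriteFile output: turn lone CR
# (Chr(13)) and CRLF into plain LF so the file parses consistently.
def normalize_newlines(text):
    return text.replace("\r\n", "\n").replace("\r", "\n")

broken = "Type Bool\rDefault False\r0 True\r"
print(normalize_newlines(broken).splitlines())
# -> ['Type Bool', 'Default False', '0 True']
```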
-
The second ReplaceDuplicates.avs reads these frame numbers from the text file and replaces the duplicate frames in Track 3 with non duplicate frames from Track 1.
Even if you identify the duplicates with 100% accuracy, the "automatic" replacement frame could still be the wrong one - or worse - shifted 1 or 2 frames forwards or backwards in time. Recall you have runs of 3,4 of the same frame in videos #3 & #4. So for your real case, trying to replace duplicates in track 1, your replacements might still be a duplicate or shifted the wrong way (for the latter, keeping a duplicate is even preferable because those are easier to "detect")
What is the pattern of duplicates later on for 1 & 2, after 30 seconds? Is it similar to your samples 3 & 4?
How long are the videos in total ? -
poisondeathray,
If you move tracks 3 & 4 one frame to the right relative to tracks 1 & 2, then all tracks will be correctly aligned. Proper alignment of two tracks can be done by choosing a segment where neither track contains duplicates for something like 5-10 frames. Then move the tracks so that all those 5-10 frames are aligned. When you still see a difference between a frame on track 3 and track 1, that is due to duplicates. A duplicate on tracks 3 & 4 can never be properly aligned with a frame on track 1 that is in a temporally correct position and not a duplicate.
The distribution of duplicates on these tracks is not uniform. There are long stretches without any duplicates, and then comes a section which is full of duplicates like every second, or with a pattern like this: GDDDGDGDDDGDGDDD... where G is a good original frame temporally well aligned and D is a duplicate. Luckily the bad sections don't overlap on all 4 tracks (well, at least the majority of them don't) therefore there is a good chance that I can replace most of the duplicates at least on track 1 with good ones from the other 3 tracks. This way the new "repaired" track 1 can serve as reference for the construction of the final product.
Removing as many duplicates as possible from all tracks is also advisable before feeding them through the median filter. This way the median result will be better, which can be finally compared with a duplicate-free track 1 during the final manual cleanup. Duplicates don't carry original information, therefore they are useless for the reconstruction of a final output. It is better to replace as many of them as possible from the other tracks.
Here is the logic and formula for replacing duplicates in track 1. The best quality is track 1, with the least number of duplicates, which is why it will be the reference and basis of the final output. The other 3 tracks follow each other in sequence as the quality decreases and the number of duplicates increases. Therefore track 4 is the worst in this regard, but it still contains good frames that are missing from the other 3 tracks, which is the reason for using it.
Therefore first find all duplicates in all 4 tracks, and manually make sure the list is accurate. Then replace all duplicates in track 1 with good ones from track 2. If track 2 also contains a duplicate at the same frame number, then use the good frame from track 3. If even that is a duplicate, then use the good frame from track 4. If even that is a duplicate, then just skip the replacement operation and leave the original duplicate on track 1 untouched. Here is the practical realisation of this operation in code:
v1 = AviSource("C:\V\Track1.avi", pixel_type="YV12")
v2 = AviSource("C:\V\Track2.avi", pixel_type="YV12")
v3 = AviSource("C:\V\Track3.avi", pixel_type="YV12")
v4 = AviSource("C:\V\Track4.avi", pixel_type="YV12")
last = v1
function BInt(Bool b)
{
return b ? 1 : 0
}
filename1 = "C:\V\duplicates_in_SampTrack1.txt"
filename2 = "C:\V\duplicates_in_SampTrack2.txt"
filename3 = "C:\V\duplicates_in_SampTrack3.txt"
filename4 = "C:\V\duplicates_in_SampTrack4.txt"
ConditionalSelect(v1, "BInt(var1)*(BInt(!var2)+BInt(var2)*(BInt(!var3)*2+BInt(var3)*(BInt(!var4)*3)))", v1, v2, v3, v4, true)
#ScriptClip("Subtitle(String(1+BInt(var1)*(BInt(!var2)+BInt(var2)*(BInt(!var3)*2+BInt(var3)*(BInt(!var4)*3)))))")
ConditionalReader(last, filename1, "var1", true)
ConditionalReader(last, filename2, "var2", False)
ConditionalReader(last, filename3, "var3", False)
ConditionalReader(last, filename4, "var4", False)
ShowFrameNumber(last, scroll=true, offset=0, text_color=$ff0000)
The replacement of duplicates in track 2 will use first the frames from track 3, if it is a duplicate then 4, then 1.
Duplicates in track 3 will be replaced first with frames from track 4, then 1, then 2. This is how we can collect as much unique original information into the input 3 tracks to be used by the median filter.
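The nested BInt() expression is a priority encoder, and its intent is easier to check written out sequentially. A Python sketch of the same decision (the function name is mine; dup1..dup4 mean "this track's frame is a duplicate", and the return value corresponds to the 0-based ConditionalSelect index):

```python
# Priority logic behind the ConditionalSelect expression: keep track 1
# unless its frame is a duplicate, then fall back to tracks 2, 3, 4;
# if all four are duplicates, leave track 1's frame untouched.
def pick_source(dup1, dup2, dup3, dup4):
    if not dup1:
        return 0          # track 1 frame is good -> keep it
    if not dup2:
        return 1          # replace from track 2
    if not dup3:
        return 2          # replace from track 3
    if not dup4:
        return 3          # replace from track 4
    return 0              # everything is a duplicate -> leave track 1

print(pick_source(False, False, False, False))  # -> 0
print(pick_source(True,  False, True,  True))   # -> 1
print(pick_source(True,  True,  False, True))   # -> 2
print(pick_source(True,  True,  True,  False))  # -> 3
print(pick_source(True,  True,  True,  True))   # -> 0
```

Evaluating this for all 16 input combinations and comparing against the BInt() formula is a quick way to convince yourself the two agree.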
The videos are about 75 minutes long. -
Yes, temporal alignment. But I'm thinking of this scenario where you have a string of "quads":
ABCDEEGHI
ABCCCCGHI
The duplicates you see are really dropped frames. "Placeholders" in time
Let's say the top track is the "base" track you're trying to "fix" by taking frames from the 2nd track. The top track's 2nd "E" will be replaced by the 2nd track's "C"
ABCDECGHI
This is "bad" because now you won't be able to detect the duplicate "C" which is displaced in time. -
The second E of track 1 will not be replaced by a C frame from the second track because that C frame is a duplicate. Duplicates are never used to replace other duplicates on other tracks, that would be meaningless and counterproductive.
In the case of your example there is no "good" frame in track 2 as a replacement for the duplicate E. Therefore the built in logic would have to find one in track 3, or if that does not have a good one (non-duplicate) either at this temporal position, then get one from track 4. If none of the tracks 2, 3, and 4 contain a non duplicate frame at this temporal position to replace the duplicate E in track 1, then don't replace the duplicate, but continue with the replacement operation at later temporal positions. -
Ahhh, ok I got it now, thanks. I understood the "why", but not exactly how the sequential logic part works
-
Hit a wall again with AviSynth. I have been programming in Fortran, Cobol, Basic, Visual Basic.NET, VBA for Excel, and assembler with no problems - they are all logical. But with Avisynth... it is a nightmare.
Here is a very simple problem to solve, which would be a piece of cake in any other (logical) programming language, if it could only use the same (or similar) functions that Avisynth uses to manipulate videos.
Assume that there are two videos which are supposed to be identical, but they are not, due to duplicate frames and temporal shifts. The aim is to detect where a temporal shift has happened on Track 2 compared to Track 1 (which is the reference). Check only for potential one-frame shifts left and right. I would do it this way:
Let's say the current frame number is 5.
Check for temporal left shift difference:
Take frame number 4 from track 2 and add to it frame number 5 from track1. Now using mt_motion detect difference between the 2 frames and store the average luma of the mt_motion filtered clip in variable al_leftShift
Check for temporal no shift difference:
Take frame number 5 from track 2 and add to it frame number 5 from track1. Now using mt_motion detect difference and store the average luma in al_noShift
Check for temporal right shift difference:
Take frame number 5 from track 1 and add to it frame number 6 from track2. Now using mt_motion detect difference and store the average luma in al_rightShift
Print these 3 values into a file after the appropriate frame numbers, but only if the current frame on track1 is not a duplicate. This can be checked by reading the duplicate frame numbers from a file already prepared.
If there is no temporal shift at the current frame, then both al_leftShift and al_rightShift must be greater than al_noShift. If al_leftShift is the smallest of the three numbers then there is a temporal left shift on track2. If al_rightShift is the smallest of the three numbers, then there is a temporal right shift on Track2.
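The decision procedure above can be sketched in ordinary sequential code; a Python illustration with frames reduced to single luma values (a real implementation would diff whole frames, as mt_motion does):

```python
# Sketch of the three-way shift test at frame n: compare track2's
# previous, current and next frame against track1's current frame,
# and see which difference is smallest. Frames are stand-in luma
# values; n must be an interior frame index.
def classify_shift(track1, track2, n):
    al_left  = abs(track2[n - 1] - track1[n])
    al_no    = abs(track2[n]     - track1[n])
    al_right = abs(track2[n + 1] - track1[n])
    best = min(al_left, al_no, al_right)
    if best == al_no:
        return "none"
    return "left" if best == al_left else "right"

t1 = [10, 20, 30, 40, 50]
print(classify_shift(t1, [10, 20, 30, 40, 50], 2))  # aligned -> "none"
print(classify_shift(t1, [0, 10, 20, 30, 40], 2))   # track2 lags -> "right"
print(classify_shift(t1, [20, 30, 40, 50, 60], 2))  # track2 leads -> "left"
```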
I would have to take into consideration what to do when the frame on track 2 we want to compare is a duplicate. But let's just ignore that for now and see how we could accomplish what is described above. I have attempted to put together some code, which did not work, therefore it is not of much use to post it.
Any suggestions how to solve this problem?
Thanks. -
-
After some digging in the online Avisynth Wiki and fiddling, I got it to do part of what needs to be done. So finally it looks like I will answer my own question. But I was very frustrated that there seems to be no way to have full control over looping through the frames and making decisions for every frame. Anyway, here is the code:
LoadPlugin("C:\V\masktools2.dll")
v1 = AviSource("C:\V\Track1_TemporalShift.avi", pixel_type="YV12")
v2 = Trim(v1, 1, 0)
v2 = v2.Loop(3,5,5)
filename = "C:\V\TemporalShiftList.txt"
spacer = " "
Lv2 = interleave(v1, v2)
Rv2 = interleave(v2, v1)
Mv2m = SelectOdd(mt_motion(Lv2, thY1=10, thY2=10, thT=255))
Lv2m = SelectEven(mt_motion(Lv2, thY1=10, thY2=10, thT=255))
Rv2m = SelectEven(mt_motion(Rv2, thY1=10, thY2=10, thT=255))
WriteFileIf(Mv2m, filename, "(1 < 2)", "current_frame", "spacer", "AverageLuma(Lv2m)", "spacer", "AverageLuma(Mv2m)", "spacer", "AverageLuma(Rv2m)")
The condition of (1<2) in WriteFileIf is just an arbitrary dummy condition that is always true, so that we get the 3 AverageLuma values for every frame to start with. Later on this condition should be set to allow writing of the values only when the current frame of v1 is not a duplicate.
Is there any way to take full control of the looping process through the video frames using a for-next cycle, and be able to implement simple programming practices used in sequential languages like Fortran or Basic? With such a straightforward method this problem could have been solved in a few minutes, instead of geeking about like a madman for hours to get somewhere... -
Have you reviewed the docs on Runtime Environment? There are a number of features for handling clips on a per-frame basis.