VideoHelp Forum

  1. An old VHS tape has been digitized into MPEG files, and there are 4 captures. The VCR and digitizer were not top quality, and there is no possibility of recapturing with better equipment (no access to the VHS tape anymore). The quality is quite bad and needs a lot of work to fix. There are static lines popping in and out on some frames; this is the first thing I have to fix. Since there are 4 captures, if I align them on 4 tracks it is possible to replace a bad frame on track 1 with a good frame from one of the other 3 tracks.

    This requires a lot of work (like making grass grow molecule by molecule), but there is no way around it, because the video is irreplaceable, extremely important and unique. I have started the job in Sony Vegas 10, and using a Wacom tablet I have managed to reduce the copy-paste operation to 3 clicks, which is quite good. Unfortunately, not every bad frame can be replaced with a good one; in some cases all 4 frames contain static lines. So in the end I will have to retouch the frames that cannot be replaced, using Photoshop's content-aware healing tools. My dilemma is that if I have to use Photoshop anyway, then why not do the frame copy-paste operation in Premiere Pro or After Effects as well? Unfortunately I am not familiar with these two programs, so here is my question:

    Is it possible to do the copy-paste operation described above, moving single frames from one track to another, as efficiently in the Adobe editors as in Sony Vegas? If not, what is the quickest way to do it in the Adobe products? Is it possible at all?

    The reason I am considering moving the job completely to Adobe products is that the original video quality is already bad enough; I don't want to make it even worse with repeated unnecessary renderings and accumulating compression losses. Since, besides replacing bad frames, a Photoshop retouch and other filtering, straightening, etc. operations will also be needed, it would be best to do everything in one software package, so that only a single final compression is applied when the complete job is finished and rendered.

    I know there is a separate forum category for restoration, and I will probably ask for advice there as well about the restoration part of the work. This is posted here simply to get some input on the feasibility of the copy-paste operation in the Adobe products, which is an editing question.

    Thanks for any advice in advance.
  2. racer-x (Member, joined Mar 2003):
    I don't know about Premiere; I haven't used it in many years. I would say your best plan of attack would be to export as a PNG image sequence, edit each frame as needed, then import the image sequence back into Vegas. PNG = no quality loss.
  3. Thanks racer-x for the advice; this is excellent!

    I considered earlier exporting only the frames that need retouching, one by one, but that would be too much work with all the exporting and importing. Your version seems to be a perfect solution if Premiere Pro is unable to handle the task in an efficient way: one export and one import, no quality loss.

    The original MPEG file is about 3.5 GB. If I export the whole video as a PNG image sequence, the total size will surely be much bigger. Any idea how much space I would need for the image sequence?

    I am still hoping to get some input from Adobe users on whether Premiere Pro (or, even better, Photoshop itself) would be up to the task, and how. If it can do it as well as Vegas, then we can skip the exporting and importing.

    Thanks again for the valuable advice.
  4. If the static is sparse, you can use a median filter in AviSynth to do it all automatically.

    https://forum.videohelp.com/threads/362361-Median%28%29-plugin-for-Avisynth
  5. Jagabo, that would be "life saving" if this could be done automatically.

    Unfortunately, I don't think it is possible in my case. First of all, only 2 captures would really be usable for that filter, because the other two have many frame dropouts; sometimes even 3-4 consecutive frames are identical on the same track and don't match the good ones. Taking a median over these frames would produce distorted images. On the other hand, if we used only the 2 reliably synchronized captures, that would not provide a sufficient basis for the filter.

    The second problem is that on my videos the static lines are much longer than on the sample image with the fish, and not always close to the extreme white and black values that could be easily recognized and dropped out. This filter could improve many frames, but also spoil other frames that can be fixed better manually. If this were "just another job" that needed to be improved a little, quickly and economically, it could be the right solution. But in this special case I want to bring out the best possible result from the material, so I don't see a realistic automatic solution.

    Thanks for the tip, it may come in handy in other projects.
  6. Originally Posted by Zoltan Losonc View Post
    The second problem is that on my videos the static lines are much longer than on the sample image with the fish, and not always close to the extreme white and black values that could be easily recognized and dropped out.
    It doesn't matter how long or prominent the dropouts are. A median filter doesn't recognize anything; it just picks the pixel with the median value. As long as fewer than half the frames have a problem at a particular pixel location, a good pixel will be selected.
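
To make that concrete, here is a minimal numpy sketch of the idea (this is only an illustration of per-pixel median voting, not the actual AviSynth plugin; the tiny grayscale "frames" are made-up values):

```python
import numpy as np

def median_of_captures(frames):
    """Pick the per-pixel median across a list of aligned frames."""
    stack = np.stack(frames)                  # shape: (n_captures, h, w)
    return np.median(stack, axis=0).astype(np.uint8)

# Three captures of the same 1x3-pixel frame; the third has a
# simulated static line (an all-white dropout).
good_a = np.array([[10, 20, 30]], dtype=np.uint8)
good_b = np.array([[11, 21, 31]], dtype=np.uint8)
static = np.array([[255, 255, 255]], dtype=np.uint8)

fixed = median_of_captures([good_a, good_b, static])
# At every pixel the dropout is outvoted by the two good captures.
```

With three samples per pixel, the 255 dropout never wins; the output lands on one of the good values at every position.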

    Originally Posted by Zoltan Losonc View Post
    only 2 captures would really be usable for that filter, because the other two have many frame dropouts; sometimes even 3-4 consecutive frames are identical on the same track and don't match the good ones.
    Even if the result of the median filter isn't perfect it may substantially cut down on the manual fixes later.
  7. How do you have your current setup in Vegas (what are your 3 clicks)? Yes, you can do something similar in Adobe. There are probably several ways to set it up, depending on what the bulk of your operations will be.

    Photoshop Extended can open video directly too (not MPG, but it has AVI and MOV support).

    You can eliminate additional compression losses by using a lossless codec, for example to move material between different programs. Adobe has good integration in that you can Dynamic Link between applications, so often you don't need a lossless intermediate (they require lots of HDD space). But lossless codecs are only lossless within the same colorspace. Most of the programs you're using actually work in RGB, but your source MPG is YUV. As soon as you import it, you've technically lost quality (I wouldn't worry about it too much on a VHS source; you won't notice the difference).

    For your other thread: you can interpolate over "bad", unusable frames using the surrounding "good" frames with the AviSynth mvtools2 functions. So if all 4 instances have a "bad" frame, you can usually recover it from the "good" frames before and after (stretches longer than 1 frame work too, but the longer the sequence of bad frames, the worse the interpolation). Often the repair is good enough to use as-is, but at the very least it minimizes the amount of work you would have needed to do in Photoshop.
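
As a rough illustration only: the crudest possible interpolation just blends the neighbouring good frames 50/50 (the sketch below; the frame values are hypothetical). mvtools2 goes further and shifts pixels along estimated motion vectors before blending, which is why it looks far better on moving scenes.

```python
import numpy as np

def blend_interpolate(prev_frame, next_frame):
    """Naive stand-in for motion-compensated interpolation: average
    the surrounding good frames. mvtools2 additionally compensates
    for motion, which this sketch deliberately omits."""
    # Widen to uint16 so the sum cannot overflow 8-bit range.
    avg = (prev_frame.astype(np.uint16) + next_frame.astype(np.uint16)) // 2
    return avg.astype(np.uint8)

prev_f = np.full((2, 2), 100, dtype=np.uint8)
next_f = np.full((2, 2), 120, dtype=np.uint8)
repaired = blend_interpolate(prev_f, next_f)  # every pixel becomes 110
```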

    You've got to be careful with your manipulations, because you're working with interlaced VHS (you shouldn't be working in "frames" but in "fields" for many manipulations).
  8. Member (joined Aug 2010, San Francisco, California):
    I would start with jagabo's advice and see what the AviSynth median filter comes up with. Then I'd use Vegas' multicamera layout to step through the frames and pick out the best "camera" (VCR take) for each one. Exporting and retouching PNGs is a last resort because of the time involved.
  9. Following racer-x's suggestion, here is an output file size estimate. Based on the size of a single exported PNG, the total size of the exported PNG sequence would be about 60 GB. There is space for it, but I am not sure how well Photoshop CS5 would handle this on a 32-bit XP laptop. Of course, it could be split into smaller segments if necessary.
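
The 60 GB figure can be sanity-checked with a quick back-of-the-envelope calculation. The per-frame size and tape duration below are hypothetical placeholders (the real numbers come from one exported test frame and the actual runtime), but with plausible values the result lands in the same ballpark:

```python
# Rough PNG-sequence size estimate. Both inputs are assumptions:
# substitute a measured per-frame PNG size and the real duration.
MB = 1024 * 1024
png_bytes_per_frame = 0.55 * MB      # one exported test frame (assumed)
fps = 25.0                           # PAL VHS capture
duration_s = 70 * 60                 # assume a ~70-minute tape

total_frames = fps * duration_s
total_gb = total_frames * png_bytes_per_frame / 1024 ** 3
# roughly 56 GB at these assumptions, close to the 60 GB estimate
```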

    Jagabo, you've got a good point that even if the median filter doesn't do a perfect job, it could still significantly reduce the manual work. I have just one dilemma in this regard. Most of the time, when I find a static line on a frame, that is the only disturbing blemish on the image; the rest is good. If I find a flawless frame in perfect alignment, the replacement yields a perfect result. If there is no good substitute, I can retouch only that single line segment and leave the rest of the frame unchanged in its original good condition. This is not a perfect fix, but almost as good.

    However, if the median filter has only two frames to choose from (since we have only 2 reliably aligned tracks without dropouts), the pixel value is reduced to the mean of the two values (because of the even number of samples). It might therefore reduce the intensity of the static line, but it would not remove it completely. Since this is not the maximum quality we can get out of the available material, I would still have to manually choose the good frame from the two (or 4, if all are in alignment) tracks and replace the slightly improved version from the filter.

    Also, the filter may create a frame that is a mixture of pixels from the input frames. This mixing is not desirable, because it may choose pixels from a frame of lower quality than the one that should be chosen. (There are other blemishes besides the static lines.) If there is a good-quality frame, then only that should be used, not diluted (mixed) with pixels from other frames. For this filter to work well it needs at least 3 perfectly aligned tracks without many dropouts, with at least 2 tracks having good pixels at the same place in corresponding frames, so that there is an odd number of samples and mean values are avoided. But in these videos there is often only one flawless frame, so the chance is high that the filter would choose the wrong one. Therefore, if the filter works as I have described, it would not really help.
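
The even-sample concern can be checked numerically: with two samples the conventional median is the mean of the pair, so a static-line pixel is only attenuated, while an odd count with a majority of good pixels outvotes it entirely (illustrative pixel values below):

```python
import numpy as np

good, also_good, static = 40, 42, 240   # hypothetical 8-bit pixel values

# Two samples: the median degenerates to the mean of the pair, so the
# static line is only halved in intensity, not removed.
two_way = float(np.median([good, static]))

# Three samples with two good values: the bad sample is outvoted and
# contributes nothing to the result.
three_way = float(np.median([good, also_good, static]))
```

This matches the concern above: with only 2 reliable tracks, the filter blends rather than votes.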

    I could do a test run on a short sample clip to see what it does, but I would first have to learn how to use AviSynth and the filter. If you have a suggestion for the best and shortest tutorial, that would speed things up.

    I will respond to poisondeathray and JVRaines in another post.
  10. Poisondeathray, the current setup in Vegas is as follows:

    The 4 videos are aligned on 4 tracks. The frame of the 1st track is visible on the display, and the 1st track is the one being repaired; that will be the output. On the Wacom tablet there is one button for stepping forward one frame, one for stepping backward, a third for Ctrl+C (copy), and a fourth for Ctrl+V (paste).

    With the 4 fingers of my left hand constantly over these buttons, I watch the display and step through the video frame by frame. When a serious blemish (usually a static line) appears, I click the solo button on track 2 to see if that frame is good enough as a replacement. If not, I solo the third and/or fourth track. If a better frame is found, I click on the good frame on the track that holds it. This selects the right track and the right frame to copy from.

    Next I press the copy button on the tablet, which is the 1st click of the copy-paste operation. Then I click on the frame on the 1st track that should be replaced (2nd click). Finally I press the paste button on the tablet (3rd click). Clicking the solo buttons and tracks can be done very fast with the pen, but that is sometimes unstable and does things I did not intend. Using the mouse is slower and takes more finger training, but it is completely stable and reliable. The following options are enabled:

    Auto ripple is on

    Options:
    Enable snapping
    Snap to grid
    Snap to markers
    Snap to all events

    Options>Preferences>Editing:
    Do not quantize to frames for audio-only edits
    New still image length 0.04 seconds (1 frame at 25fps)

    Only about 3 frames are displayed on the tracks, as seen in the attached image, in order to minimize horizontal arm movements.

    [Attached image: vegas_setup.png, showing the Vegas timeline setup described above]

    It would be nice to avoid a huge lossless intermediate, but since my MPEGs are YUV, which is not compatible with Photoshop, how can I avoid it? Which codec could convert this MPEG to a lossless format (as small as possible) that Photoshop CS5 can handle? Another limiting factor is that my laptop is a Dell Inspiron 1501 with 32-bit Windows XP. (It could run 64-bit Windows, but most of the old apps I like are 32-bit versions, and I am not keen to reinstall everything. The newer versions of Windows are also overt spy devices for the 3-letter US agencies, so I am not eager to upgrade at all.) Therefore the latest versions of Premiere Pro and After Effects I can run are CS4.

    I will check out mvtools2, which might offer a better alternative to manually retouching the bad frames in Photoshop with the healing tools. The description says "Only progressive YV12, YUY2 video is supported". This would require further conversion, and it also says the filter is very memory intensive (I've got 2 GB)... I have seen a filter somewhere that recognizes when two identical frames are next to each other and replaces the second one with an interpolated image. Is this the same? If some test runs of this filter produce good quality, it will spare a lot of manual retouching.

    The videos are progressive, not interlaced (the digitizer produced such output).
  11. JVRaines,

    The multi-camera layout is a good idea; it would make the process faster. The only problem is that we see the 4 displays side by side, and it is often hard to recognize this way when two frames are not perfectly aligned but shifted by a frame or two. In such a case a perfectly good frame is visible which is not temporally aligned with the frame that needs to be replaced. If I don't recognize this and replace the bad frame with it, a flicker will be visible in the video, which is a mistake.

    With the present setup the seeking and copy-paste operation is slower, but by toggling the solo button on and off it is very easy to recognize whether two frames are identical. This excludes the use of misaligned frames. Another thing I don't like is that after "expanding to multiple tracks" I don't get back the same tracks that were there before, and cannot use the solo buttons to verify the correctness of some replacements. This will also need some experimentation to decide which is better in this special case.

    The number of frames that will need retouching is not that great compared to the number of frames that need to be replaced with alternatives. So most of the time will be consumed by the replacement operation, not the retouching.

    Thanks everyone for the input, it helps a lot.
  12. Originally Posted by Zoltan Losonc View Post
    Poisondeathray, the current setup in Vegas is as follows:
    .
    .
    .

    It would be nice to avoid a huge lossless intermediate, but since my MPEGs are YUV, which is not compatible with Photoshop, how can I avoid it? Which codec could convert this MPEG to a lossless format (as small as possible) that Photoshop CS5 can handle? Another limiting factor is that my laptop is a Dell Inspiron 1501 with 32-bit Windows XP. (It could run 64-bit Windows, but most of the old apps I like are 32-bit versions, and I am not keen to reinstall everything. The newer versions of Windows are also overt spy devices for the 3-letter US agencies, so I am not eager to upgrade at all.) Therefore the latest versions of Premiere Pro and After Effects I can run are CS4.
    Your workflow would be a bit faster in multicam editing mode, at least in PP CC. What version of Vegas are you using? I'm not too familiar with the newer multicam mode in Vegas, but in PP it would be easier, and if you need to manually touch up frames you can Dynamic Link to Photoshop (no intermediates; changes are reflected directly on your timeline). This is a bit pointless if you're sticking to XP and x86, but in the newer PP multicam versions all tracks can be displayed in the program window (i.e. you can see a preview of them all instead of having to toggle visibility), and you can use the number keys to swap between them. Have a quick look at some multicam tutorials for both PP and Vegas on YouTube and weigh the pros/cons for your situation, to see if you're willing to risk the 3-letter agencies.

    You will probably crash editing large videos in x86 Photoshop. You can actually avoid a large physical lossless intermediate by frameserving with AviSynth and the AviSynth Virtual File System (AVFS). It's a "fake" AVI based on an AVS script that is only a few KB large. Otherwise, if you're not using PP / Dynamic Link, I would export still images (PNG) for Photoshop, do your edits, then reimport them as an overlay. So you will have 5 tracks.



    I will check out mvtools2, which might offer a better alternative to manually retouching the bad frames in Photoshop with the healing tools. The description says "Only progressive YV12, YUY2 video is supported". This would require further conversion, and it also says the filter is very memory intensive (I've got 2 GB)...

    Photoshop and AE are RGB only. PP can work in YUV. Vegas is mostly RGB, but when smart rendering, some segments are treated as YUV. It has 2 conversion-to-RGB modes: studio RGB and computer RGB. I wouldn't worry about it too much except when something is done incorrectly (for example, chroma upsampling interlaced content as progressive or vice versa; this bug was present in the CS4 you were thinking of using).

    Your original MPEG-2 is YUV, so AviSynth/mvtools doesn't necessarily add degradation through colorspace conversions. You might do some manipulations in AviSynth before importing into Vegas, for example fixing frames by interpolation or applying a median filter over some suitable sections. There are definitely manipulations in AviSynth that can save you lots of time. It's not one or the other; all these tools complement each other. There might be some things that can only be done in Photoshop or After Effects.

    I have seen a filter somewhere that recognizes when two identical frames are next to each other and replaces the second one with an interpolated image. Is this the same? If some test runs of this filter produce good quality, it will spare a lot of manual retouching.
    There are filters like that, which automatically detect duplicates according to a threshold and replace them, but I was actually referring to contiguous sections (1 or more frames) where you specify the range, because you didn't mention duplicates. Both use mvtools2.

    The videos are progressive, not interlaced (the digitizer produced such output).
    Are you sure? Even when a "digitizer" or recorder records an analog signal from VHS and outputs a progressive encoding, the actual content will be interlaced. Only if it forcefully drops a field, or specifically applies a deinterlace, will it be truly progressive; in that case you're missing half the information right off the bat. If you don't see "combing", your preview software may be applying a deinterlace, if only for the preview. For example, this happens in Vegas by default.
    Last edited by poisondeathray; 20th Jul 2016 at 17:32.
  13. Originally Posted by Zoltan Losonc View Post

    The multi-camera layout is a good idea; it would make the process faster. The only problem is that we see the 4 displays side by side, and it is often hard to recognize this way when two frames are not perfectly aligned but shifted by a frame or two. In such a case a perfectly good frame is visible which is not temporally aligned with the frame that needs to be replaced. If I don't recognize this and replace the bad frame with it, a flicker will be visible in the video, which is a mistake.

    With the present setup the seeking and copy-paste operation is slower, but by toggling the solo button on and off it is very easy to recognize whether two frames are identical. This excludes the use of misaligned frames. Another thing I don't like is that after "expanding to multiple tracks" I don't get back the same tracks that were there before, and cannot use the solo buttons to verify the correctness of some replacements. This will also need some experimentation to decide which is better in this special case.

    The number of frames that will need retouching is not that great compared to the number of frames that need to be replaced with alternatives. So most of the time will be consumed by the replacement operation, not the retouching.
    Yes, but the assumption is that you've already aligned them. Multicam only works when sources are aligned. It helps in that it makes it easier to select which track to choose from.

    Same with filters like median: it's critical that the sources are aligned first.
    Quote Quote  
  14. I believe the median filters in AviSynth now have some ability to automatically adjust to small temporal alignment errors.
  15. Member (joined Aug 2010, San Francisco, California):
    From the docs:

    Median (clip, clip, clip, ..., bool "chroma", int "sync", int "samples", bool "debug")

    int sync = 0
    If the clips are not exactly in sync, you can use this parameter to automatically line them up. The plugin will search up to this number of preceding and following frames to find the best match.
  16. Poisondeathray,

    I have tested multi-camera editing in Vegas Pro 10.0, and I would certainly use it if it were easy to detect temporal misalignments on the side-by-side displays. Unfortunately, in this mode there is no way to use the solo buttons to quickly swap back and forth between frames and recognize even slight movements on the same display.

    If the movement in the scene is fast, it is easy to recognize the difference between two adjacent frames even on side-by-side displays. It is also easy to see the difference when the movement is slow but the temporal misalignment is worse than just 1 or 2 frames. One starts to hesitate only when the movement is slow and the misalignment is only one or two frames.

    Visual recognition of a misalignment on side-by-side displays is based on comparing the relative positions of objects, which is not very reliable when the difference is small, and time consuming when one has to hesitate and doubt. This way one is not wasting time clicking buttons, but wasting time moving one's eyes left and right and thinking. The work is shifted from the fingers to the eyes and brain.

    Using a single display and the solo buttons, the work is mainly done by the fingers clicking and swapping frames, but there is no hesitation, and one can recognize even the slightest misalignment unmistakably. You just look at the display, toggle the solo button, and if you see any slight movement or flicker (apart from blemishes), the frames are not temporally identical and therefore cannot replace each other. The workload is shifted from the eyes and brain to the fingers.

    An additional, very useful benefit of this toggling is that one can recognize not only temporal misalignments but also subtle blemishes that are present on one frame and not on the other (and which would be extremely difficult to see on side-by-side frames). Sometimes, even when two frames are in alignment and the alternative frame does not contain the static line, it can contain other blemishes that are just as bad.

    The best solution is a combination of both: multicam and the single display simultaneously. If one is certain that two frames are identical on the side-by-side displays, then just work in multicam mode. When uncertainty sets in, use the single-display solo toggling to look for any movement.

    I have not used Premiere Pro CS4 yet, but judging by this video it can do exactly this combination: https://www.youtube.com/watch?v=MFcxfupCgyE
    To the right of the multicam displays there is an output display on which one can see the movement when toggling between two frames. So this looks promising so far. However, the manual copy-paste operation demonstrated in this YouTube video is very cumbersome for single-frame replacements at this volume, and Vegas does it in a much more user-friendly way.

    Is there any faster method for manual single-frame replacements in Premiere Pro CS4 (not in multicam mode)? Basically this was my original question at the start of this thread. Even if there is none, its multicam mode could still work more reliably and faster than that of Vegas. I will check this out. Unfortunately, I could not find a single tutorial showing how to do the single-frame copy-paste operation in Premiere Pro CS4, which is why I was asking here in the hope that someone has experience with it.

    You mention that one should use several different programs to manipulate the videos, because they complement each other. There is no question about that, and the best approach is to use a lossless format, to prevent degrading the video through successive imports into different colorspaces, conversions, renderings, and compressions. The main reason for starting this thread was my concern about unnecessarily degrading the quality through such multiple recompressions when moving from one program to another. Perhaps the best solution would be to convert the original files into a lossless format that could be used by all the software we plan to use, including Photoshop.

    A question for everybody: which format should that be, and which program and encoder do you recommend for it?

    I have checked the properties of the videos in Vegas, and it says they are progressive. MPC-HC gives the same information, and I could not see any combing. Therefore, I see no reason to doubt this. But if you can recommend a way to verify it with certainty, I can check.

    Since there is a bit of confusion about the temporal misalignments in my video setup, let me clarify. Two of the captures were made with identical settings at "medium quality", and the bitrate was low enough that the digitizer and data transfer lines could handle it properly; I have not found missing or duplicate frames in their first 5 minutes yet. But the 3rd capture was done at "HQ quality", and perhaps the bitrate was too high for the data transfer. The consequence is that some frames were lost, and in their place duplicates of the previous frame are present. Sometimes there are only 2 consecutive identical frames; sometimes, when the dropout is larger, even 4 consecutive frames are identical. The 4th video was captured in DVD format and has similar duplicate-frame issues. Therefore, these two videos can only be used when the temporal alignment is not spoiled by duplicate frames. In a nutshell: the 4 videos are properly aligned in general, but the alignment gets spoiled on the 3rd and 4th tracks due to repeated duplicate frames standing in for the lost frames. This is why the multicam setup is a bit tricky, and why I don't trust the median filter.
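
Locating where tracks 3 and 4 fall out of alignment could itself be semi-automated. A minimal sketch (the function name and threshold are hypothetical choices, with frames as grayscale numpy arrays) flags any frame whose mean absolute difference from its predecessor is near zero as a duplicate:

```python
import numpy as np

def find_duplicate_frames(frames, threshold=1.0):
    """Return indices of frames that are (nearly) identical to the
    previous frame. threshold is a made-up tuning value; real captures
    need some slack for analog noise."""
    dupes = []
    for i in range(1, len(frames)):
        # Signed int16 math so the subtraction cannot wrap around.
        diff = np.mean(np.abs(frames[i].astype(np.int16)
                              - frames[i - 1].astype(np.int16)))
        if diff < threshold:
            dupes.append(i)
    return dupes

# Synthetic 3-frame clip: frame 2 is a dupe standing in for a dropout.
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = np.full((4, 4), 50, dtype=np.uint8)
f2 = f1.copy()
dupes = find_duplicate_frames([f0, f1, f2])  # [2]
```

A list like this would at least tell you in advance which stretches of tracks 3 and 4 cannot be trusted.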

    If I have understood your suggestions properly, you recommend that I first replace the duplicate frames in the 3rd and 4th videos by interpolation in AviSynth, and then use these for the median filter. This could lead to situations where the median filter composes a frame from pixels of different frames, among which there could be 2 fake frames created by interpolation. There is a strong disclaimer on this site http://avisynth.org.ru/mvtools/mvtools2.html about the potentially bad quality of interpolated frames. If we combine the quality-degrading action of that filter with that of the median filter, we get too far from the goal of extracting the best quality (using the most original content) from the available videos. If there is a good alternative frame to replace a bad one on track 1, then that must be used (and that only, not a mixture of several, sometimes interpolated, frames). Besides creating a visually pleasing output, we are also trying to keep as much of the original content as possible.

    There is a good chance that when the edited video is watched at normal speed, viewers would not notice the artifacts mentioned above. But this video will also be paused and perhaps even examined frame by frame, so authenticity is very important too. Based on these considerations I still think the replacements should be done manually to achieve the best possible quality. If anybody thinks automatic processing could do an equally good job, please explain why and how.

    By the way, at the moment I don't plan to upgrade to spyware systems just for the sake of a few (not very necessary) convenience features. (You have to agree to being spied on, and to all your data being used against you, when you accept the terms and conditions of Windows 10.) Even if I do upgrade later to 64-bit, it will be a version of Linux. Microsoft has gone too far with its bullying, Big Brother attitude...
    Last edited by Zoltan Losonc; 21st Jul 2016 at 07:01.
  17. Originally Posted by jagabo View Post
    I believe the median filters in AviSynth now have some ability to automatically adjust to small temporal alignment errors.
    As described in my last response to poisondeathray, the temporal misalignment of some frames on tracks 3 and 4 is not due to my inability to perform a proper alignment, but because there are several consecutive duplicate frames on those tracks; consequently, the correct frames that the median filter would hope to find don't exist there.

    The same response applies to JVRaines' last comment.
  18. Originally Posted by Zoltan Losonc View Post
    As described in my last response to poisondeathray, the temporal misalignment of some frames on tracks 3 and 4 is not due to my inability to perform a proper alignment, but because there are several consecutive duplicate frames on those tracks; consequently, the correct frames that the median filter would hope to find don't exist there.
    Then you don't use those particular frames from the median filter; you replace them with frames from one of the two "good" captures when you do your side-by-side editing.

    Rather than speculating on how well it will work why don't you just try it. I'd try using the median of 5 -- maybe blending the four videos together (or the two better videos) to create a 5th. The basic script you start with will look something like:

    Code:
    v1 = Mpeg2Source("video1.d2v")
    v2 = Mpeg2Source("video2.d2v")
    v3 = Mpeg2Source("video3.d2v")
    v4 = Mpeg2Source("video4.d2v")
    v5 = Merge(Merge(v1,v2), Merge(v3,v4)) # all four videos blended together
    
    Median(v1, v2, v3, v4, v5)
    Of course, getting AviSynth set up and learning to use it is a bit of work. If you want, upload 4 short (30 second?) overlapping clips (with at least one clip having the duplicate frame problem) and someone here will test it for you. Then you can decide whether it's worth the effort or not. You can demux segments with DGIndex (part of the DGMPGDec package for AviSynth). Open a VOB file, mark in, mark out, File -> Save Project and Demux Video. Upload the resulting M2V files. Don't worry about getting the temporal alignment perfect. That can be adjusted in the AviSynth script.
  19. The most important step is planning your workflow. It's well worth your while investigating some of the other suggestions here, because they will save you hours or days of manual work.

    If the bulk of your work is selecting which track/video, I wouldn't bother converting the format. For now let's say you're happy with doing it in Vegas. Just export individual damaged frames as images (e.g. PNG) using the loop selection, fix them in Photoshop, and reimport them either on an overlay track or directly onto your main track. Photoshop Extended (even CS4) can handle video (AVI, MOV), but you might have problems with large videos on x86 (32-bit).

    If you want to experiment with lossless codecs - good lossless codecs with an RGB mode that I've used in the past (even in CS4) are Lagarith and UT Video. Lagarith is more compressed (smaller file sizes), but has worse decoding performance (slower to navigate on the timeline). Probably not an issue for SD. Common ways to use these are VirtualDub, or you can even export from Vegas directly.

    If you're using Vegas and importing MPG directly, beware that it uses "studio RGB" for MPG sources. The levels in an RGB file will look "washed out" or "low contrast" compared to 99% of other programs. When you export the final video, Vegas will automatically "fix" the levels appropriately for the export format. It probably doesn't make sense right now, but keep it in the back of your mind when you are using multiple programs and the levels/colors seem off. Vegas handles YUV<=>RGB conversions slightly differently than other programs.



    For mvtools2 - I only suggested it because you said there were parts where all 4 captures had a bad frame - "Unfortunately not all bad frames can be replaced with a good one; in some cases all 4 frames contain static lines." (i.e. you have no good data for that frame and were going to resort to Photoshop). If it's a very damaged frame, a spatial clone stamp / content fill is not a good option, because the current frame is defective. If you use adjacent frames as the source to clone or fill from, the timing will be "off" unless you spend massive amounts of time adjusting for it. But you can fill that gap with something that is temporally correct (1 or more in-between frames, depending on how long the "gap" is) using mvtools. I suggest you take a look at some examples or post a sample, because this is a huge time saver. Often single-frame gaps are fixed well enough that no other touchup is required. But even a "bad" interpolation with artifacts is a much better starting place than nothing; even if it only cuts the Photoshop time in half, that's a good thing.
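    A minimal AviSynth sketch of the idea (mvtools2 must be installed; the file name and frame number 150 are hypothetical) - delete the known-bad frame so its neighbours become adjacent, motion-interpolate between them, and splice the result back in its place:

    Code:
    c = Mpeg2Source("video1.d2v")
    gap = c.DeleteFrame(150)                # bad frame removed; 149 and 151 now adjacent
    super = MSuper(gap, pel=2)
    bv = MAnalyse(super, isb=true, delta=1)
    fv = MAnalyse(super, isb=false, delta=1)
    inter = MFlowInter(gap, super, bv, fv, time=50)  # frame n = midpoint of n and n+1
    Trim(c, 0, 149) + Trim(inter, 149, -1) + Trim(c, 151, 0)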

    Earlier I was suggesting manual identification, but an "automatic" fix. But then you mentioned duplicates. Not all "bad" frames are duplicates. The "bad" might be a static line, or horrendous noise, or a big scratch, for example. For automatic detection and automatic fixing of duplicates specifically - I'm only aware of ones that handle a single duplicate (i.e. 2 frames that are the same). So when you say you have sections with 4, those still need manual identification (or you can run a script to identify duplicate frames - a duplicate frame detector). But the "fix" part is often the most time-consuming operation if you want decent quality - even for Photoshop gurus. Thus the "fix" part is the massive time saver.
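    For that single-duplicate case, a sketch of the usual AviSynth approach looks something like this (mvtools2 required; the 0.1 "is a duplicate" threshold is a guess you would have to tune per source) - a frame that barely differs from its predecessor is treated as a duplicate and replaced with a motion-interpolated frame:

    Code:
    function FillDrops(clip c)
    {
      super = MSuper(c, pel=2)
      bv = MAnalyse(super, isb=true, delta=1)
      fv = MAnalyse(super, isb=false, delta=1)
      inter = MFlowInter(c, super, bv, fv, time=50)
      # since frame n duplicates n-1, interpolating n -> n+1 at 50% lands mid-way
      return ConditionalFilter(c, inter, c,
      \ "YDifferenceFromPrevious()", "lessthan", "0.1")
    }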



    I don't use CS4 anymore, and the multicam has changed - so I don't know how helpful or applicable this is for you. I do have it on an older computer, so if I have time I can check for you later. But in CC you have the preview for all tracks visible, including the "final" one, simultaneously. When you hit a number key (remap the keys to "cut to camera", or remap to different sets of keys instead of number keys if you want), that track is selected for the "final" multicam edit. i.e. You're toggling/selecting + copy/pasting in 1 click instead of more than 3, thus it's significantly faster. You asked what was the fastest way. That's the fastest way for sure.



    A file can be read or labelled as "progressive" yet have interlaced content, and vice versa. If you are certain that MPC-HC had deinterlacing disabled, then the file is progressive if there is no combing during movement. Or if you disable deinterlacing in the Vegas project properties (Properties => Deinterlace method: None) and still see no combing, then the content is progressive. If that's the case, then the project properties in post #10 are set incorrectly.
    Last edited by poisondeathray; 21st Jul 2016 at 11:57.
  20. Thanks jagabo for offering to perform the test.

    I wanted to test what median can do on these videos as well, but that would take some time for me to figure out how AviSynth and the filter work.

    The 30s clips are attached, and they are properly aligned as well.
    Image Attached Files
  21. Thanks for the samples. A video is worth 10,000 words. For some reason I was imagining something slightly different.

    Just curious - what is the device? I see a GHz marking.

    1)
    I just checked the CS4 multicam, and it will work as described. It's slightly different than CC, but you can still map the keys so they toggle track selection. So it's still 1 key to toggle and select for the multicam sequence (not counting navigation/scrubbing).

    But where this might not work as well for you is tracks 3 & 4 - they are not perfectly aligned. Recall that multicam requires alignment. e.g. You might want to select a frame that doesn't correspond to the same timecode or frame number (i.e. the position of the playhead) as 1 & 2, which are completely aligned temporally. You might want to do that because the entire frame is better, or only part of the frame is better. More on that later.

    Consider this fake example. These are frame numbers, tracks 1,2,3,4 in order

    Track 1 - 0 1 2 3 4 5
    Track 2 - 0 1 2 3 4 5
    Track 3 - 0 0 2 3 4 5
    Track 4 - 0 1 2 3 3 5

    Let's say when the playhead is at frame zero, you wanted to select track 3's second zero for whatever reason (maybe it's higher quality, or has fewer defects). That won't work by pushing "3" when the playhead is at frame zero, because it will select the 1st zero. (I'm sure you get the general idea of the potential problem with 3 & 4 using this method.)

    So if tracks 3 & 4 will be used much, multicam might not be very good for you, depending on how they are aligned.

    2)
    Unfortunately not all bad frames can be replaced with a good one; in some cases all 4 frames contain static lines
    Only if a defect is in the same spatial location on all 4 versions do you need to resort to something like Photoshop or interpolation methods. Many of your defects are spatially offset on the different versions, so the defect can usually be masked out (i.e. not toggling whole frames, but combining parts of a single frame from different tracks). This can be done by masking or similar techniques like alpha paint. Masking sucks in Premiere compared to After Effects and Photoshop, but the multicam sucks or is non-existent in AE and Photoshop compared to Premiere. (The median stack should do that pretty much automatically if the sources were aligned and appropriate - it won't work ideally here without other manipulations.)

    Adobe software has many issues, but this is where Adobe shines - the dynamic link and integration of apps. You could link your multicam edit to After Effects and/or Photoshop. The changes will be reflected in your main sequence back in PP, all without large intermediate files or wasted time exporting/importing. But having multiple applications open on an x86 OS might be problematic.
  22. Here's the result of a median of 3, samples 1, 2, 3. The median and the three source videos are all stacked in a 2x2 matrix. Some frames are made worse, but overall I think the output is an improvement over any one of the sources.

    Code:
    v1 = Mpeg2Source("D:\Downloads\median_test_sample1.demuxed.d2v") 
    v2 = Mpeg2Source("D:\Downloads\median_test_sample2.demuxed.d2v") 
    v3 = Mpeg2Source("D:\Downloads\median_test_sample3.demuxed.d2v") 
    
    Median(v1,v2,v3, sync=0)
    
    StackVertical(StackHorizontal(last.Subtitle("median"), v1.Subtitle("v1")), StackHorizontal(v2.Subtitle("v2"), v3.Subtitle("v3")))
    I didn't find any way to use a median of 5 effectively. Using one of the source videos twice tended to just let the defects of that video through. Using an average of two or four tended to let the average image through -- which makes sense because the average is often the median.

    In the cases where the median filter didn't work, but one of the frames is clean, you can always use a filter like ReplaceFramesSimple() to copy that good frame over the bad median frame. Of course, that means you have to go through the video frame by frame, note which frames to copy, then add those frames to the replacement list.
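    In script form that would look something like this sketch (ReplaceFramesSimple() comes from the RemapFrames plugin; the frame numbers in the mappings string are hypothetical):

    Code:
    med = Median(v1, v2, v3, sync=0)
    # copy frames 25, 100-105 and 230 of v1 over the corresponding median frames
    ReplaceFramesSimple(med, v1, mappings="25 [100 105] 230")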
    Image Attached Files
  23. Taking a closer look, your duplicates in 3 & 4 are pure duplicates; the only substantial differences are compression differences. They offer no additional data within a duplicate set (if there is a defect, it's the same across all of them in the same video). This implies randomly using one from each duplicate set will give the same result. Sometimes the duplicates come early. So I would delete the duplicates, keeping one unique frame from each set (you might be able to pick out a slightly better one from I, B, P frame differences), leave transparent gaps where the duplicate frames were, then align them properly. This makes your compositing job easier in Photoshop / After Effects if you want to use masking to combine partial frames; it also makes selecting whole frames easier by any method, and should also improve the median stack results. (I doubt the align feature in the AviSynth median plugin will be as good as a manual align.) You could combine 3+4 to cover the gaps and use that with 1, 2, or even some combinations using functions like min/max, blend modes etc. If I have time I'll do a little test.
  24. Part of the problem is that many of the "lines" in the video are actually parts of the previous frame (it looks like a dropout detector substituted pixels from the previous frame). So when there's a line in video 1 (or 2) a duplicate frame in video 3 or 4 often matches the line. So the median filter passes it to the output. And in videos 3 and 4 there are sometimes more duplicate frames than original frames.
  25. Poisondeathray,

    The exporting and importing back of individual frames one by one in Vegas could consume more time and work than the retouching in Photoshop. I don't see how this could be done quickly enough. This is why I appreciated the suggestion of racer-x to export the whole video in one go, fix the bad frames in Photoshop, and then either import the whole thing back into Vegas, or simply convert the image sequence into a lossless video format that can be used by the other software that is supposed to manipulate it further. Here is an example bad frame with a static line, which could not be replaced by a good alternative, and how it was retouched in Photoshop in about 20-25 seconds without any experience using the healing tool (a similar result can be achieved with the content-aware spot healing tool). After fixing a hundred of them, both the time required and the quality should improve.


    [Attached images: retouch3.png (before), retouch3_done.png (after)]


    Thanks for the lossless codec tip, I will check it out.

    How would you replace the duplicate frames on the 3rd and 4th tracks automatically with transparent frames? Is there any filter for this? It would be too much extra work to do it manually.

    The original plan was that in the first pass I will replace all the bad frames on track 1 with good ones from the other 3 if they exist. In the second pass I will have to deal with the remaining blemishes. Here two options exist: one is to retouch the remaining bad frames in Photoshop; the other is to replace the bad frames manually with a duplicate of the good frame preceding them, and then use the filter I saw somewhere which automatically replaces duplicate frames with a frame interpolated from the left and right neighbours. Which solution is the better choice will become clear only after I try both options on a small sample and compare the results (and the time required to make them).

    But now, after seeing the results of the median filter that jagabo provided, the process can get easier and faster. I will comment on his results in another post.

    After our discussions and jagabo's test the new plan is this: first I will generate an extra track using the median filter. Then I will import all 5 videos into Premiere Pro and use the multicam mode to pick obvious replacements for bad frames. In cases when it is hard to see possible temporal misalignment on the 3rd & 4th tracks, I will toggle the visibility like with the solo buttons in Vegas, and watch for any movement on the single output monitor. This way we can combine the advantages of both the multicam display and the single output monitor with solo toggling. When the replacement job is done, I will fix the remaining bad frames with either Photoshop retouching or the interpolation method described above. This will produce a video without major blemishes, and then we can start another thread in the restoration section of this forum to discuss how we can further improve the quality with level adjustment and different filters.

    Here is one of the restoration problems to be solved in advance: there is an ugly vertical wavy distortion in the video (not too disturbing, but better gone), which it might be possible to fix with a special filter that straightens the frames based on the curvature of the edge of the video. Perhaps the DeJitter filter of Vcmohan or something similar could fix (or at least improve) this issue. Here is an example where the vertical waving of straight edges is visible.


    [Attached image: wavy_distortion.png]


    If anybody has suggestions how to fix this distortion, please add some wisdom to this thread.

    I have disabled the deinterlace option in MPC-HC and did not see any combing, therefore it looks like the videos are not interlaced.

    The device on the sample video is a microwave signal generator with an attached passive amplifier.
  26. Originally Posted by Zoltan Losonc View Post
    Poisondeathray,

    The exporting and importing back of individual frames one by one in Vegas could consume more time and work than the retouching in Photoshop. I don't see how this could be done quickly enough. This is why I appreciated the suggestion of racer-x to export the whole video in one go, fix the bad frames in Photoshop, and then either import the whole thing back into Vegas, or simply convert the image sequence into a lossless video format that can be used by the other software that is supposed to manipulate it further.
    For exporting individual frames - I meant export ONLY the frames where you can't find a valid match with real data (spatially and temporally). I actually couldn't find any in the samples you uploaded, but there might be 1 or 2 out of 750, I'm guessing. If you're using the Adobe suite, you wouldn't have to export anything if you had enough memory on x64 (dynamic link - all applications are open, communicate with each other, and are accessible simultaneously).

    The potential issue with Photoshop is how you know what, or which frames, to "fix". In case you missed my earlier comment, you have coverage on practically all the frames, so instead of using a content/heal Photoshop fix which interpolates data, you can use real data if you have aligned frames. 1 & 2 are aligned (at least temporally, and mostly spatially). It's actually faster: instead of 20 seconds for a "line", it might take 1-2 seconds using the alpha paint technique in After Effects. AE is much better to work with on video. Only if you need complex repairs do you need to resort to Photoshop (I didn't see anything here that requires Photoshop, but I didn't examine too closely; there might be 1 or 2 frames that might need touchup).

    Here is an example bad frame with a static line, which could not be replaced by a good alternative, and how it was retouched in Photoshop in about 20-25 seconds without any experience using the healing tool (a similar result can be achieved with the content-aware spot healing tool). After fixing a hundred of them, both the time required and the quality should improve.
    No, that would be partially incorrect - there are valid spatial alternatives (mix & match parts of frames): masking or alpha paint. You're thinking of "whole frames". Think of it like a jigsaw puzzle where you mix and match parts of frames from 1, 2, 3, 4. Really you only need 1, 2 for 99% of it. Very few cases need to resort to 3, 4. If it doesn't make sense maybe I can explain it further or post a short video tutorial. A video is really worth 10,000 words.

    Having seen the videos now, that's the way I would go if you wanted a high quality repair: spatial repairs +/- full frame replacements. Another reason for this is that you will get temporal fluctuations if you mix & match from 1, 2, 3, 4 too much, because there are slight color, alignment, and noise pattern differences between the four videos, unless you do some filtering to even it out. If you take 1 video as your "base" and only touch up tiny spatial repairs, there will be fewer fluctuations.



    How would you replace the duplicate frames on the 3rd and 4th tracks automatically with transparent frames? Is there any filter for this? It would be too much extra work to do it manually.
    I don't know of any off hand that keep the "placeholder" with a transparent frame. I think I know how to do it if you only had duplicates - but the detection logic for triples, quads, or more is more complex. And that is only part of your problem - some of your duplicated frames in 3 & 4 come early, so the timing is "off". For example, if in a triplicate series you delete the last 2, the remaining unique frame might be temporally offset from 1 & 2's timing.
  27. Jagabo,

    Thanks for the testing, excellent work!

    Originally Posted by jagabo View Post
    Using an average of two or four tended to let the average image through -- which makes sense because the average is often the median.
    Yes, according to the definition of the median, when the number of samples is even, the output will always be the mean value of the two middle numbers. This is why we have to use an odd number of samples if we don't want to create averaged new pixel values, which don't exist in any of the originals.

    The result of the median filter is indeed a mixture of improved frames and spoiled ones. Very often the filter completely eliminates both static lines when one is in frame 1 and the other in frame 2, but at different positions. This is excellent help, saving me manual retouch work. Thanks for that! In other places it mixes parts of static lines and other blemishes from both frames into one, multiplying the problem that needs retouching. In yet other cases it passes blemishes through even when one completely good substitute frame is available. The overall result is indeed better than any of the input videos.

    The best result can be achieved by creating an extra track with the median filter and using all 5 for the manual selection process. This way the number of frames that need manual retouching will diminish, but I can still compensate for the spoiled frames, by simply choosing one from the originals.

    The quality of the median's product could be further improved if, say, we could replace some duplicate frames on track 3 with properly aligned frames from track 4 (if such frames exist at all). But I am afraid it is not possible to automate this operation. Manual replacement, on the other hand, would require more time and effort than the improvement it would produce.
  28. Originally Posted by poisondeathray View Post
    It's actually faster, instead of 20 seconds for a "line", it might take 1-2 seconds using alpha paint technique in after effects.
    That sounds great! Can you direct me to a tutorial for CS4 that shows how to do this? Or if you would record your own tutorial perhaps that would be even better. I think my laptop will be able to run PP and AE simultaneously. But perhaps that is not even necessary. I could just go through the whole video doing all the replacements possible first. Then transfer the job to AE and do the alpha paint or masking there for the remaining frames with blemishes. I hope it is possible to import the result of a PP project into AE even when the PP is not running.

    Having seen the videos now, that's the way I would go if you wanted a high quality repair. Spatial repairs +/- full frame replacements. Another reason for this is you will get temporal fluctuations if your mix & match from 1,2,3,4 too much because there are slight color, alignment, and noise pattern differences between the four videos; unless you do some filtering to even it out. If you take 1 video as your "base" and only touchup tiny spatial repairs, there will be fewer fluctuations
    Yes, but the feasibility of repairing each bad frame on track 1 in AE with masking or alpha paint depends on how much more time and work that would take compared to simply replacing all the bad frames for which a good alternative exists. If I use the multi-camera setup then the replacement process could indeed go very fast. Your suggested approach would also exclude the use of the median filter, because that also combines pixels from several frames which might not be in perfect temporal and spatial alignment. This subject will again become clearer after doing a test on a small sample, so I can speak from experience.
  29. Originally Posted by Zoltan Losonc View Post
    The quality of the median's product could be further improved, if say we could replace some duplicate frames on track 3 with properly aligned frames on track 4 (if such frames exist at all). But I am afraid it is not possible to automate this operation.
    Actually, it's probably possible to do that in AviSynth even without writing a custom filter, though I don't see a way off the top of my head. The basic algorithm would be something like: if v3[n] is equal to v3[n-1], and v4[n] is not equal to v4[n-1], replace v3[n] with v4[n]. Then you could use the result of that along with v1 and v2 in a median-of-three.
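    A rough AviSynth sketch of that algorithm (treating "equal" as "differs from the previous frame by almost nothing"; the 0.1 luma-difference threshold is a guess that would need tuning):

    Code:
    v3 = Mpeg2Source("video3.d2v")
    v4 = Mpeg2Source("video4.d2v")
    # take v4's frame when v3's frame duplicates its predecessor but v4's doesn't
    v3fixed = ConditionalFilter(v3, v4, v3,
    \ "(YDifferenceFromPrevious(v3) < 0.1) && (YDifferenceFromPrevious(v4) >= 0.1)",
    \ "equals", "true")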
  30. I've manually aligned 3 & 4 to 1, with transparent gaps, for the first 10 seconds (frames 0-250), to see if it would help, and whether it's worth exploring if it can be automated - a proof of concept for now. Just fiddling around with it, 3 & 4 are also less spatially aligned than 1 & 2, so my impression is it can actually make some things worse, not just the defect areas.

    A screenshot of an excerpt from the timeline showing how they are arranged looks like this; I'm sure you get the general idea - there isn't full coverage with 3+4 overlaid (there are still "holes"), and there are different unique frames from each of 3 & 4 (so in that sense you're sort of "wasting" some frames with a simple 3-on-4 overlay, or a 4-on-3 overlay). The more aligned capture sequences you have, the more "random" defects like those "lines" should be cancelled out. But true alignment is important if you want to use multicam, or even layer visibility or soloing, or any other compositing technique like alpha painting - otherwise you pick the wrong frame.

    The top is 4, middle is 1 (used for reference timing), bottom is 3.
    [Attached image: align.jpg]


    But there might be some manipulations, both spatial and math/layer operations and combinations, that you can do to improve the median stack outcome, so play with the samples. jagabo will probably come up with something. (The attached sample is Lagarith RGBA.)



    @Zoltan, I'll put together a better description of the alpha paint reveal technique later today, with a short video demonstration. The method is exactly the same in AE CS4.
    Image Attached Files


