Ok, that's not perfect but much better. And it could be even better if I used the whole image (there is no black border at the right-hand side of the frame).
I have some questions:
- Why do you need to convert to YUY2? I guess it changes the color format...
- You chose to put two clips on top of each other. But is it also possible to change the contrast of the whole frame, apply DeJitter() and restore the contrast afterwards? Will you lose quality processing this way?
- If I try to apply the script to the video, I get an error message on the Crop() function: "This function can only be used on images."
I'm really impressed by the scripting possibilities. The combination of AviSynth + VirtualDub seems to be very powerful! I'm just discovering all these things... Do professionals and amateurs use this software?
-
Starting from a JPG image isn't good either.
Your VOB source will come out of the MPEG 2 decoder as YV12. I think that will work with ColorYUV() and DeJitter(). I like working in YUY2 so that each scanline has its own U and V components (in YV12 pairs of scanlines share U and V). And later down the road, VirtualDub doesn't handle interlaced YV12 properly. You could leave the image RGB and use different color adjustment filters, like Tweak().
It depends on how much you have to adjust the brightness and contrast. I use a very large adjustment to make it easy for DeJitter() to find the edge of the picture. If you converted the whole image like that all the bright parts of the image would totally wash out. You would not be able to recover them by inverting the adjustments.
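A quick numeric sketch of that last point (hypothetical values and functions, just to illustrate the clipping): an 8-bit pixel pushed past 255 by a large stretch is clipped, so inverting the adjustment afterwards cannot recover it.

```python
# Hypothetical illustration: a big brightness/contrast stretch clips
# 8-bit values at 0 and 255, so it cannot be undone on bright pixels.

def stretch(v, offset=-32, gain=4.0):
    """Apply an offset then a large gain, clipped to the 8-bit range."""
    return max(0, min(255, int((v + offset) * gain)))

def unstretch(v, offset=-32, gain=4.0):
    """Invert the adjustment (only works where nothing was clipped)."""
    return max(0, min(255, int(v / gain - offset)))

dark = 40      # a pixel near the black border: survives the round trip
bright = 220   # a highlight: clipped to 255, then "recovered" wrongly
print(unstretch(stretch(dark)), unstretch(stretch(bright)))  # -> 40 95
```

This is why the heavy stretch is limited to a narrow strip and overlaid, rather than adjusting and later un-adjusting the whole frame.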
Your JPG image was 610x471. I needed to make it mod4 to convert to YUV. Your VOB will be 720x480 so you don't need to crop.
Last edited by jagabo; 21st Mar 2010 at 17:29.
-
Yes, but I still need a crop to increase the contrast near the edges...
v2=Crop(0,0,20,-0).ColorYUV(off_y=-32).ColorYUV(gain_y=2000)
-
Ah, I see. The error message said "This function can only be used on images."? I've never seen that. Did you open the video (or image) file first? Your script should look something like:
Mpeg2Source("filename.d2v")
ConvertToYUY2() #optional
v2=Crop(0,0,20,-0).ColorYUV(off_y=-32).ColorYUV(gain_y=2000)
Overlay(last,v2)
DeJitter()
-
Ok, it works. But there are still some big errors... (shifted blocks). Do you want to try on your side?
I can upload a video snippet if you want.
-
I tried to apply my script but I have a problem when I put the images together to build the new video.
I exported 105 interlaced images in .bmp format. After processing each image, I dropped them into VirtualDub and saved in .avi format. But I get a video twice as long and twice as slow. And the video player (Windows Media Player or Power DVD) is no longer able to deinterlace the video...
What am I doing wrong? -
I'll have time to play with your latest file later today. But for now...
VirtualDub defaults to 10 fps when you import an image sequence. Use Video -> Frame Rate... to set the desired frame rate. You didn't say what codec you used for export but most don't explicitly support the flagging of interlaced content. So WMP or Power DVD don't know the video is interlaced. Try exporting as DV with Cedocida. -
I guess I have to install Cedocida. I've downloaded and extracted the zip, but when I right-click on cedocida.inf I get a warning message from Windows XP: "The software has not passed Windows Logo testing to verify its compatibility with Windows XP". Should I click "Continue"?
-
Hi jagabo,
Here is what I managed to do with my program.
the original:
http://www.mediafire.com/?inmwnnntmwe
http://www.youtube.com/watch?v=mr6-mwhclvI
and the corrected version:
http://www.mediafire.com/?km0th4mrhww
http://www.youtube.com/watch?v=xa-w1OjCoE0
Hope you can do better on your side.
Last edited by mathmax; 22nd Mar 2010 at 20:05.
-
I don't think DeJitter is doing any better. Here's the script I came up with for your m2v extract:
Mpeg2Source("snippet.d2v")
ConvertToYUY2()
v2=Crop(0,0,32,-0).ColorYUV(off_y=-24).ColorYUV(gain_y=2000)
Overlay(last,v2)
SeparateFields()
DeJitter()
Weave()
#just general cleanup below
Crop(16,0,-48,-16)
ColorYUV(off_y=-40, gamma_y=40, gain_y=20)
AddBorders(32,8,32,8)
-
Thank you.
This is nice, but there are still some errors in the middle and at the end. Is it possible to correct them?
On my side, I don't get good results when I work on interlaced images. So I applied the Yadif filter before exporting the images. But maybe I have to try the option "Unfold fields side-by-side" and work separately on the two frames. And maybe time base errors will be more "logical" if I separate the two frames. I mean, they may be easier to detect. What do you think?
But what's the problem with exporting a deinterlaced video from VirtualDub? The result should be the same, as the video player will deinterlace the video itself anyway, shouldn't it? Interlacing is just used by television, isn't it? Actually, I'm not really sure I understand what interlacing is... there are two frames in one... but are they the same? -
You may have to work shot by shot and adjust the parameters (the width of the contrast enhanced section, the amount of darkening and contrast stretch, etc.) for each to get the best results with DeJitter.
The problem with interlaced video export from VirtualDub is that players don't know the video is interlaced unless you use a codec that supports interlace flagging. That's pretty much limited to DV with VFW codecs. -
Ok, and what do you think of my idea to use the option "Unfold fields side-by-side" and work separately on the two frames? I'm not very familiar with interlacing... but maybe it'll be cleaner to process this way.
-
If you're going to do any type of filtering that causes data to cross scan lines (and the filter doesn't specifically support interlaced video) you'll need to unfold before filtering then refold afterward. For example: blur, sharpen, rotate, many noise filters, etc.
Note that "unfold fields side by side" will cause data to cross from the right edge of one field to the left edge of the other. The extreme edges are usually not seen on TV because of overscan. You can avoid this by using the discard field mode with double frame rate. But then you need to avoid any temporal filters.
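As a sketch, the field-separated workflow described above might look like this in AviSynth (the filename and field order are placeholders):

```avisynth
AviSource("capture.avi")
AssumeTFF()       # declare the field order (TFF here; use AssumeBFF if not)
SeparateFields()  # each frame becomes a half-height field, 59.94 per second
# ...spatial-only filters go here; temporal filters would mix
# alternating top and bottom fields...
Weave()           # refold the fields into 29.97 fps interlaced frames
```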
Each interlaced frame contains two half pictures, called fields, each from a different point in time. One field contains only the even-numbered scanlines from one original whole picture, the other contains only the odd-numbered scanlines from the next whole picture. The fields are intended to be viewed one at a time, 59.94 fields per second on an NTSC TV (that is the only thing you ever saw on an interlaced TV, which is pretty much all TVs before HDTVs hit the market). The two fields are woven together and stored as a full frame when captured by a computer. In a still shot an interlaced frame looks just like a progressive frame. But if anything moves you'll see comb artifacts because the two half pictures don't "match".
Last edited by jagabo; 23rd Mar 2010 at 05:52.
-
Ok, that's much clearer now.
So if I want to use the "discard field" mode, I guess I have to make two exports: one with the "Keep top field" option and the other with the "Keep bottom field" option. But will I be able to put the two fields together after that? How?
And I'm not sure I understand why I need to avoid any temporal filter... but anyway, I don't use temporal filters yet. -
If you use "discard and double frame rate" each field becomes a frame, all fields are retained. You can weave them back together again with the Interlace filter. You can't use a temporal filter because successive frames are really fields -- you'll be mixing fields again.
-
I have my top fields and bottom fields in two different folders. I guess I have to put all the frames in a single folder and reorder the files, alternating a top frame with a bottom frame. Then, which source format should I use: "progressive images" or "alternate frame"?
-
As far as I know, VirtualDub will not let you weave fields from different files. You'll have to do that in AviSynth. I think this will work:
top=AviSource("path\to\topfield.avi")
bot=AviSource("path\to\bottomfield.avi")
Interleave(top,bot)
AssumeTFF()
AssumeFieldBased()
Weave()
-
Thank you
Let's compare the two versions:
original:
http://www.youtube.com/watch?v=RXVl2mLgweQ
corrected:
http://www.youtube.com/watch?v=xc6owbz50_o
What else would you improve to make this video the most pleasant to watch? Color/contrast corrections? Other ideas? -
I've got another idea to correct time base errors. Let me know what you think about it.
Until now, my script processed a single image, trying to detect where the scan lines begin in order to align them.
But after this process, I still have some small mistakes because my video is dark and the beginning of a scan line is difficult to detect.
You told me about temporal filters, and I thought that a scan line doesn't change much from one frame to the next. So I decided to check this. Here is an image in which the first line corresponds to a scan line of the first image of the video, the second line to the same line of the second image, etc., so you can visualize the evolution of a scan line over time.
As you can see, it is possible to move the scan lines to get a smoother evolution. The only problem is how to detect irregularities programmatically...
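One hypothetical way to "smooth the evolution" in code: treat the detected left-edge position of a scan line as a series over time and replace outliers with a sliding temporal median (names and data are illustrative, not mathmax's actual program):

```python
# Hypothetical sketch: smooth one scan line's detected horizontal offsets
# over time with a sliding median, so an isolated mis-detection is pulled
# back toward its temporal neighbours.

def smooth_offsets(offsets, radius=2):
    """Replace each offset by the median of a small temporal window."""
    smoothed = []
    for t in range(len(offsets)):
        window = sorted(offsets[max(0, t - radius):t + radius + 1])
        smoothed.append(window[len(window) // 2])
    return smoothed

jittery = [10, 10, 11, 25, 10, 11, 10]  # frame 3 was detected badly
print(smooth_offsets(jittery))          # the 25 outlier is gone
```

A change of take would show up as a sustained jump rather than a one-frame spike, so comparing the window medians before and after each point is one way to detect it instead of smoothing it away.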
And you can also notice that there are two big changes. They correspond to changes of take. But maybe there are ways to detect them too... ? -
Yes, I was thinking along the same lines -- using spatial and temporal information to improve the results. Maybe blur or sharpen filters along the edge, and some smarter algorithm rather than just "intensity > threshold". I downloaded the source for DeJitter and changed the limits on the threshold (0-255 instead of 40-255) and the default of wsyn (0 instead of 10). That added a little more flexibility. But nothing that couldn't be accomplished by adjusting the contrast beforehand.
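For reference, the kind of per-scanline search being discussed -- and the threshold whose limits were widened -- amounts to something like this (an illustrative Python sketch, not DeJitter's actual source):

```python
# Illustrative sketch of an "intensity > threshold" edge search: scan a
# line from the left and return the first pixel brighter than threshold.

def find_edge(line, threshold):
    """Index of the first pixel exceeding threshold, or -1 if none."""
    for x, value in enumerate(line):
        if value > threshold:
            return x
    return -1

scanline = [16, 16, 18, 20, 120, 200, 210]  # dark border, then picture
print(find_edge(scanline, 40))   # -> 4, the start of the picture
print(find_edge(scanline, 15))   # -> 0, too low a threshold fires early
```

Darkening the frame and boosting the contrast first, as in the scripts earlier in the thread, makes the jump at the picture edge much easier to hit reliably.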
-
Sorry for the delay.
I finally wrote my algorithm to smooth the movement of the scan lines over time. I got this:
http://www.youtube.com/watch?v=UNRekJfpPyQ&feature=player_embedded
What is surprising is that I get a better result when I work on both frames than when I work separately on the two frames. In the first case I put all the images together, alternating images of the top frame and of the bottom frame. It's not very clean to compare a scan line of the bottom frame with a scan line of the top frame... because they are not the same. But I think I get a better result this way because I work with 59.94 images per second and the scan lines change less between two pictures.
The best may be to interlace the two frames and re-export the images to apply the temporal filter. This way I'll benefit from both 59.94 images per second and full images. What do you think?
I have two choices for that:
- create video from the two frames and use your script to weave:
top=AviSource("top.avi")
bot=AviSource("bottom.avi")
Interleave(top,bot)
AssumeTFF()
AssumeFieldBased()
Weave()
Which solution is the best? Will the result be the same?
Last edited by mathmax; 30th Mar 2010 at 23:16.
-
I tried to interlace (put the two frames together), deinterlace and export the images with VirtualDub. But I only get 29.97 pictures/sec, and the script performed worse on these pictures...
jagabo, what do you think? Which is the best way to apply a temporal filter? -
Your temporal smoothing does seem to have improved the software TBC algorithm. I'm not sure what you mean by working with both frames -- do you mean both fields (because you've separated the fields into two videos)? I'm not sure if working with subsequent fields or frames should work better. The two fields are temporally closer (1/60 second apart vs 1/30 second apart), but spatially farther (different scanlines). I'm not sure exactly how this relates to the VCR heads -- i.e., is the same head reading the top scanline of the top and bottom fields, or is it different heads, and does that make a difference? I would just experiment with different videos and use whatever works best.
Either of your two methods to re-interlace the video should work. But VirtualDub will screw up the chroma channels if your video is YV12. It may be safer to use the AviSynth script I gave you -- and be sure the video is YUY2 before it's given to VirtualDub. If you're not getting the right frame rate from the AviSynth script you can use AssumeFPS() to change it (that tells AviSynth to ignore what it thinks the frame rate is and just assume the value you tell it). If you want 59.94 fps Yadif bob deinterlaced frames you should select the double frame rate option in VirtualDub's deinterlace filter, or use mode=1 in the AviSynth filter: Yadif(mode=1, order=1). order=1 means TFF, order=0 means BFF. -
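Put together, the bob-deinterlacing route jagabo describes might be sketched like this in AviSynth (filenames as earlier in the thread; AssumeFPS only as a fallback):

```avisynth
Mpeg2Source("snippet.d2v")
ConvertToYUY2()          # keep VirtualDub away from interlaced YV12 chroma
Yadif(mode=1, order=1)   # mode=1 bobs to 59.94 fps; order=1 = top field first
# AssumeFPS(59.94)       # uncomment only if the reported rate is wrong
```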
Thank you for your answer.
I'm not sure what you mean by working with both frames -- do you mean both fields (because you've separated the fields into two videos)?
The two fields are temporally closer (1/60 second apart vs 1/30 second apart), but spatially farther (different scanlines). ... I would just experiment with different videos and use whatever works best.
On top of that I can apply my temporal filter, working with subsequent frames.
So I get this:
Original:
First filter (working with the left black border):
Second filter (comparing lines of the picture):
Third filter (temporal filter: comparing a line with the lines of the next and preceding pictures):
My feeling is that the second and third filters shouldn't work independently, but I still don't have a fixed idea of how to make them work together... if you have any ideas...
Either of your two methods to re-interlace the video should work. But VirtualDub will screw up the chroma channels if your video is YV12. It may be safer to use the AviSynth script I gave you -- and be sure the video is YUY2 before it's given to VirtualDub.
After I drop the .avs into VirtualDub, I click on "Save as AVI" but I get a huge file. I want to have a lossless video because someone else will work on it after me. But exporting this way doesn't really make sense because the output file is a lot heavier than the original file... Is there any solution to export a file of a reasonable size without losing quality? -
jagabo
My video suffers from another problem. Sometimes the two interlaced fields shift vertically over one another. Is there a way to correct this?
I'm still looking for a good lossless codec, or a way to export a video as small as possible without losing quality.
Thank you