VideoHelp Forum
  1. Hello,
    I'm definitely a newbie to encoding and have used this forum repeatedly to get some help. I feel kind of bad about that, so if anybody knows of any comprehensive beginner's guides to filters, processes and such, I would be very grateful. Unfortunately, I don't really know where else to go to stock up on some knowledge; it all seems really overwhelming.

    But to get to my point: I've recently downloaded many stage performances. I noticed that they have decent bitrate (I think) and are in 1080i, but it seems like many of them have slight to strong artifacting (it varies strongly from file to file). In the most recent video, the one that got me to write here, I don't think it looks that bad, but it still noticeably reduces the "sharpness" of the picture. I've attached a screenshot of one randomly selected frame; I hope that gives an idea of what I'm talking about. Especially the arcs in the background seem to show some aliasing, if I'm not mistaken (both my eyesight and my knowledge of technical terms are pretty limited). Are there any filters and methods I could use to get a clearer, sharper picture? Neither CPU/GPU performance nor time should be an issue for me.
    To give an account of my knowledge of encoding: it's limited to using MeGUI and QTGMC to de-interlace those very performance videos (which I learned how to do here by following a guide, thanks for that!). Other than that, I've sometimes converted to different formats and resolutions using XMedia Recode, since I found it very easy to use (complex options aside).

    Some file info:
    General
    Format : MPEG-4
    Format profile : Base Media
    Codec ID : isom
    File size : 302 MiB
    Duration : 3mn 19s
    Overall bit rate : 12.7 Mbps

    Video
    Format : AVC
    Format/Info : Advanced Video Codec
    Format profile : High@L4.2
    Format settings, CABAC : Yes
    Format settings, ReFrames : 4 frames
    Codec ID : avc1
    Codec ID/Info : Advanced Video Coding
    Duration : 3mn 19s
    Bit rate : 12.7 Mbps
    Maximum bit rate : 26.9 Mbps
    Width : 1 920 pixels
    Height : 1 080 pixels
    Display aspect ratio : 16:9
    Frame rate mode : Constant
    Frame rate : 59.940 fps
    Color space : YUV
    Chroma subsampling : 4:2:0
    Bit depth : 8 bits
    Scan type : Progressive

    (This is the video after simply using QTGMC for de-interlacing, nothing else)

    If this is bothersome or too general, I completely understand, and I'm sorry for going into this so naively and uninformed. I'm trying to learn, though, since I find the matter quite interesting, but I haven't found a good entry point.
    [Attachment: Sequence 01.Still001.png]

  2. Originally Posted by bschneider View Post
    (This is the video after simply using QTGMC for de-interlacing, nothing else)
    You're showing a picture of something you've already filtered? How about a short (5-10 second) untouched sample from the video?

    You're referring to those mostly horizontal lines behind the girls? Looks to me like a video is being shown behind them as part of the performance and it's nothing to be removed or worried about.
  3. I was under the impression QTGMC does de-interlacing and doesn't change the quality much? But sure, that's a valid complaint; I will put up something from the original.

    The lines behind are one thing, but generally, if you look around the hair especially, I feel like the picture isn't quite as sharp as it could be. When the video's running, this is one of the better examples, but especially if you pause, you can see that the picture seems weird. But maybe this is normal. If you think I'm seeing things, I can go look for cases where the video quality has definitely been worse; just tell me. As I said, I'm a total rookie and always open to suggestions.

    I'm not sure how to cut something without re-encoding, so I followed a suggestion on here to use VideoReDo; I hope this leaves the quality nearly untouched. I uploaded to MediaFire, I hope that is okay: http://www.mediafire.com/download/2hm4dx4jdd974rn

    And thanks a lot for taking an interest in my problem
  4. Yes, the curves do show some aliasing. But the remedy (an anti-aliasing filter, or QTGMC's InputType=1 after it's been made progressive) will do more than just smooth them out, and I wouldn't recommend it. And I don't think an average of 12.7 Mbps is at all high for 1920x1080 interlaced video; it may not even be enough, and may be the cause of some of the problems you're noticing, especially the softness. I don't know if it was captured this way or re-encoded afterwards, before being made available for download.
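    For the record, that remedy would be something along these lines (a sketch only, since I don't recommend it):

    Code:
    QTGMC()             # deinterlace first
    QTGMC(InputType=1)  # second pass in progressive "repair" mode, smooths the aliasing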

    Maybe others will have some ideas but my own verdict is it already looks about as good as it's going to get. Pretty girls.
    Last edited by manono; 20th Jun 2014 at 15:39.
  5. Hehe, pretty indeed.

    Hmm, so it's a bitrate issue? That's a pain; I'm reliant on that source because everywhere else the quality is worse, and it's ripped from Korean TV, which I can't receive here. So unfortunately I can't just record it with a higher bitrate :/

    However, I'm very grateful for your support so far. Let me fetch another video that was really noticeably broken, with heavy artifacting (I think that's what you call it). Parts of the video just look terribly blurry. Well not even blurry, rather boxy (??). I think those are called artifacts? I'll give you another 10-second shot of the untouched video, as I downloaded it: http://www.mediafire.com/download/njx4nv89fjuw3bk

    If you had some thoughts to share with me on that too, you would have even more of my gratitude. And some more pretty girls
  6. Your MPEG2 source is overcompressed. You'll need to apply deblocking, deringing, and denoising filters. But those will cause some additional blurring and loss of small, low contrast detail. You'll never get a really great encoding out of it. Do you consider the attached video an improvement?
    [Attached file]
  7. Originally Posted by bschneider View Post
    Well not even blurry, rather boxy (??). I think those are called artifacts? I'll give you another 10-second shot of the untouched video


    Yes, they are compression artifacts in the source. Those blocks are called "macroblocking", common for MPEG2 compressed streams. There is also quite a bit of noise.

    You could apply deblocking and denoising filters, but those reduce even more fine detail. Maybe you can choose something in between, or low settings. You can experiment and adjust to your liking. There are pros/cons to whatever you do, but it's really up to personal taste.

    Nice girls - Chun Li flashback!
  8. Originally Posted by jagabo View Post
    Your MPEG2 source is overcompressed. You'll need to apply deblocking, deringing, and denoising filters. But those will cause some additional blurring and loss of small, low contrast detail. You'll never get a really great encoding out of it. Do you consider the attached video an improvement?
    I think I see what you mean that it's slightly blurry, but I think it's a definite improvement!! It's hard to describe, but the colours seem more... consistent? As in, there are no weird dark/bright spots mixed in there, as in the original, when looking at a still shot. Thanks a ton! Are there a lot of different filters? Maybe there is a good overview of which filters to use, or could you explain which you prefer? I definitely like what you did with the sample; I think it looks a lot better!

    Originally Posted by poisondeathray View Post
    Yes, they are compression artifacts in the source. Those blocks are called "macroblocking", common for MPEG2 compressed streams. There is also quite a bit of noise.

    You could apply deblocking and denoising filters, but those reduce even more fine detail. Maybe you can choose something in between, or low settings. You can experiment and adjust to your liking. There are pros/cons to whatever you do, but it's really up to personal taste.

    Nice girls - Chun Li flashback!
    Okay, I see. Is there anywhere I could go to read up on how best to apply these filters? In the UI of MeGUI there is a checkbox for Mpeg2 Deblocking and a dropdown menu for one denoise filter, but there isn't much in the sense of configuring the filters. I assume I could download other filters and apply them via script? I only really know some of the theoretical methods; I haven't exactly used any of that before. Thanks a ton to you as well!
    And yeah, hehe, the costumes do bring up some memories!
  9. Originally Posted by bschneider View Post
    I think I see what you mean that it's slightly blurry, but I think it's a definite improvement!! It's hard to describe, but the colours seem more... consistent? As in, there are no weird dark/bright spots mixed in there, as in the original, when looking at a still shot. Thanks a ton! Are there a lot of different filters?
    It wasn't a lot of filtering. I used DGDecode's deblocking and deringing filters:

    Code:
    Mpeg2Source("filename.d2v", CPU=6)
    CPU=6 enables deblocking and deringing of luma and chroma. Its deblocking is a little too strong -- it removes a fair amount of detail. But it's easy to use with interlaced video. You could try using Deblock_QED() instead which has controls for how much deblocking is applied.
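    On progressive material that can be as simple as, e.g. (a sketch; I believe the defaults are quant1=24 and quant2=26, and higher values mean stronger deblocking):

    Code:
    Deblock_QED(quant1=28, quant2=30)  # quant1/quant2 set normal/aggressive deblocking strength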

    I followed that with QTGMC() to deinterlace and Deen(thry=3, thruv=4) to denoise. Relatively low values like those retain a little more detail while still giving good denoising. You might try using QTGMC's built-in denoising filter instead, e.g. QTGMC(EZDenoise=2.0); the bigger the value, the more denoising. You can try applying some sharpening after all that. Maybe LSFMod() or aWarpSharp().

    Code:
    Mpeg2Source("SAMPLE After School - First Love (MBC Korean Music Festival 2013.12.31) SAMPLE.d2v", CPU=6, Info=3) 
    QTGMC()
    Deen(thry=3, thruv=4)
    I encoded that with the x264 cli encoder at the slow preset, CRF=18.
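    If you want to reproduce that from the command line, it was something along these lines (the output filename is just an example):

    Code:
    x264.exe --preset slow --crf 18 -o encoded.264 script.avs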
    Last edited by jagabo; 20th Jun 2014 at 13:29.
  10. Originally Posted by bschneider View Post

    Okay, I see. Is there anywhere I could go to read up on how best to apply these filters? In the UI of MeGUI there is a checkbox for Mpeg2 Deblocking and a dropdown menu for one denoise filter, but there isn't much in the sense of configuring the filters. I assume I could download other filters and apply them via script? I only really know some of the theoretical methods; I haven't exactly used any of that before.



    It can get as complex or simple as you want. There is a lot to learn and read about - it might take days / weeks / months to learn the basics.

    Start reading about avisynth, virtualdub, filtering. That's all preprocessing, and probably the most important part for what you're trying to do. The other part is how encoding settings and compression affect the end result. Most of it will be learned by YOU playing with filters and previewing the results. Reading about it is one thing, but playing with the settings, making adjustments and getting feedback is how you really learn.

    http://avisynth.nl/index.php/Main_Page
    http://avisynth.nl/index.php/Main_Page#New_to_AviSynth_-_start_here
    http://avisynth.nl/index.php/Main_Page#Filters.2C_external_plugins.2C_script_functions_and_utilities

    There are some older avisynth guides, old filtering guides with outdated plugins more geared towards anime, but they are nonetheless a useful resource to learn from (you can mouse over and see before/after images):
    http://www.aquilinestudios.org/avsfilters/

    The first step is probably identifying what is "wrong" with the video, or defining what adjustments you think need to be made to subjectively "improve" it. That's where the names and terms used to describe artifacts can be helpful, so that you can refine your search. Many of your questions have probably already been discussed extensively, or there are guides elsewhere, such as here on this site, or doom9, or other sites. Usually there are many ways to approach a problem, sometimes many with similar results.

    Megui is a good place to start, because it generates a script for you. But you're going to have to use the edit function to start adding filters. You might want to start learning about avspmod (it's a script editor, very useful) to compare scripts, preview scripts, and do more advanced things like use macros.

    Don't be afraid to ask for help if, after you've spent some time searching, you still have questions or can't find the answer - there are lots of helpful people lurking around, and everybody had to start somewhere. Everything was way over my head when I started; I was afraid of avisynth for years. But trust me - it is well worth the time if you want to do anything with video.

    Good luck
  11. Originally Posted by jagabo View Post
    It wasn't a lot of filtering. I used DGDecode's deblocking and deringing filters:

    Code:
    Mpeg2Source("filename.d2v", CPU=6)
    CPU=6 enables deblocking and deringing of luma and chroma. Its deblocking is a little too strong -- it removes a fair amount of detail. But it's easy to use with interlaced video. You could try using Deblock_QED() instead which has controls for how much deblocking is applied.

    I followed that with QTGMC() to deinterlace and Deen(thry=3, thruv=4) to denoise. Relatively low values like those retain a little more detail while still giving good denoising. You might try using QTGMC's built-in denoising filter instead, e.g. QTGMC(EZDenoise=2.0); the bigger the value, the more denoising. You can try applying some sharpening after all that. Maybe LSFMod() or aWarpSharp().

    Code:
    Mpeg2Source("SAMPLE After School - First Love (MBC Korean Music Festival 2013.12.31) SAMPLE.d2v", CPU=6, Info=3) 
    QTGMC()
    Deen(thry=3, thruv=4)
    I encoded that with the x264 cli encoder at the slow preset, CRF=18.

    Thank you for the precise explanation, I will keep this in mind. I tried opening the script (for now exactly how you did it) with VirtualDub but it gives an error. I'm still unsure where all the plugins need to go, but weirdly, VirtualDub tells me that there are filters missing, even when I don't use any. I went to the Wiki posted by poisondeathray, played around a bit, followed the first simple tutorials, and thought I'd just see if that works. Not even loading an untouched file (so just: AVISource("filename.avi")) works in VirtualDub. Opening it with MPC works fine. What's going on there?


    Originally Posted by poisondeathray View Post
    It can get as complex or simple as you want. There is a lot to learn and read about - it might take days / weeks / months to learn the basics.

    Start reading about avisynth, virtualdub, filtering. That's all preprocessing, and probably the most important part for what you're trying to do. The other part is how encoding settings and compression affect the end result. Most of it will be learned by YOU playing with filters and previewing the results. Reading about it is one thing, but playing with the settings, making adjustments and getting feedback is how you really learn.

    [...]

    Good luck
    Phew, that's quite a handful, but exactly what I was asking about! You two gave me a bit of a shove (okay, quite a big one) in the right direction; unfortunately, things are still not up and running, since I seem to be overlooking something with VirtualDub. It's generally a bit confusing where all the plugins are supposed to go and how the programs interact with each other (in this example avisynth and VirtualDub).

    Thanks sooooo much to the two of you already! I'm trying my best to get things started!

    EDIT: In VirtualDubMod it seems to work fine. Should I just use that instead?
    EDIT2: So following jagabo's "instructions" I tried to install DGDecode. I did that by downloading it from here and putting the .dll and .vfp files in the avisynth\plugins folder. The MPEG2Source command works, but QTGMC and Deen don't. QTGMC gives an error message "Script error: There is no function named 'mt_makediff'". Please excuse all these issues, I'm just trying to get started
    Last edited by bschneider; 20th Jun 2014 at 17:13.
  12. VirtualDub filters go in VirtualDub's plugins folder. Typically:

    Code:
    C:\Program Files\VirtualDub\Plugins\
    AviSynth plugins/filters can be loaded manually in your AVS script with:

    Code:
    LoadPlugin("C:\Path\To\WhateverFilter.dll")
    import("C:\Path\To\SomeFilter.avs")
    Or you can put dll files in AviSynth's plugins folder and they will auto load any time you open a script, usually:

    Code:
    C:\Program Files\AviSynth 2.5\Plugins\
    AVS filters (script based filters) will auto load if they are in the plugins folder and have .AVSI as the extension.

    If you are running 64 bit Windows you need to use all 32 bit components or all 64 bit components; the two environments can't "see" each other's components. I recommend you use 32 bit components because many filters aren't available in 64 bit versions. That means 32 bit AviSynth, 32 bit plugins/filters, and a 32 bit media player or 32 bit VirtualDub to view your scripts.

    Under 64 bit Windows 32 bit programs are usually installed in

    Code:
    C:\Program Files (x86)\
  13. Ah yes, the QTGMC plugin package mentioned something about not mixing 32 and 64 bit. I'm pretty sure I didn't. The weird thing though: as you said, plugins with the file extension .avsi should autoload if they are in the avisynth plugins folder. That's where I have QTGMC + all the requisite plugins, yet VirtualDubMod gives me the error message I posted above. A function named "mt_makediff" sounds like it's about multithreading (at least I noticed that those functions often seem to have "MT" in their name), but I didn't install anything for multithreading, nor did I tell QTGMC to use multithreading. Importing/loading QTGMC and DGDecode doesn't fix the problem, so it seems the issue lies with VirtualDubMod? My script looks as follows:

    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\DGDecode.dll")
    import("C:\Program Files (x86)\AviSynth 2.5\plugins\QTGMC-3.32.avsi")
    MPEG2Source("C:\Program Files (x86)\VirtualDubMod\tests\AfterSchool.d2v", CPU=6, info=3)
    QTGMC()
    As you said, the loading and importing should be done automatically, but I included it for safety. See anything wrong with it?
  14. Did you install the regular AviSynth? MeGUI installs its own private version in one of its subfolders. In that case you need to put the plugins in MeGUI's AviSynth plugins folder.
  15. Originally Posted by jagabo View Post
    Did you install the regular AviSynth? MeGUI installs its own private version in one of its subfolders. In that case you need to put the plugins in MeGUI's AviSynth plugins folder.
    I installed avisynth separately using the link on the Wiki, version 2.5.8
  16. mt_makediff is part of masktools. Find the QTGMC thread at doom9; there is a zip file with all the requisite dll's, plugins, avsi's, etc., along with instructions.

    In masktools, there are .dll's named
    mt_masktools-25.dll
    mt_masktools-26.dll

    Only use one or the other (not both). The 26 is for avisynth 2.6.x; the 25 is for avisynth 2.5.x. So if you installed 2.5.8 you would use mt_masktools-25.dll (you can place it in the plugins folder to autoload).
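    Or, if you'd rather load it explicitly in the script (the path is just an example; use wherever your plugins folder actually is):

    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\mt_masktools-25.dll")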
  17. Ah, I was dumb... I copied over the masktools but they were in a subfolder. Thanks; that question was entirely unnecessary, excuse my stupidity...

    I think I've got a good setup now, and I suppose it comes down to playing with filters and observing. I really have to applaud this forum; it has helped me greatly time and again, and everybody here is so supportive and helpful! So thanks again to you two. I feel like I'm ready to get into this matter somewhat deeper, which I've wanted to do for some time now. Thanks so much
  18. Some general information:

    When you don't specify a named video stream the name "last" is used.
    https://forum.videohelp.com/threads/350929-Progressive-to-Interlaced-covertion-adding-c...=1#post2304471
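    A quick illustration:

    Code:
    AviSource("clip.avi")  # no assignment, so the result goes into the implicit clip "last"
    Tweak(cont=1.2)        # same as last.Tweak(cont=1.2); the result replaces "last"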

    A good way to see the effect of a filter is to use Interleave() then step back and forth between filtered and unfiltered frames with the left and right arrow keys in VirtualDub:

    Code:
    WhateverSource()
    Interleave(last, filtered())
    I'm using "filtered()" here to mean any particular filter. You could also use a sequence like:

    Code:
    WhateverSource()
    source = last # remember the original source
    Tweak(cont=1.2) # just a few random filters
    FlipVertical()
    Interleave(source, last)
    You can stack videos side by side or over/under with StackHorizontal() and StackVertical() for comparison:

    Code:
    WhateverSource()
    StackHorizontal(last, filtered())
    Learn to use the Histogram() and/or VideoScope() filters. They have many uses, but maybe the most important is adjusting the black and bright levels:
    https://forum.videohelp.com/threads/340804-colorspace-conversation-elaboration?p=212156...=1#post2121568
    https://forum.videohelp.com/threads/326496-file-in-Virtualdub-has-strange-colors-when-o...=1#post2022085
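    For example (a minimal sketch; Histogram() wants planar YUV, so convert first if your source isn't already YV12):

    Code:
    WhateverSource()
    ConvertToYV12()  # Histogram() requires planar YUV
    Histogram()      # classic mode: a 256 pixel wide luma waveform along the right edge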
  19. Okay, so all this stuff about colours is still a bit confusing. I need to read up on the terms (like luma and chroma). But what boggles me the most at the moment is the graphs you posted with Histogram(). How do you know where Y=16 is? I can't see any numbers describing the graph; it seems to give only the graph itself, without the scale. Do you simply know from experience that that peak has to be at Y=16, because it roughly looks to be at the supposed position and this problem occurs often? Generally, I know next to nothing concerning colours, their values and their mechanics on computers. I'm not sure if I understood the underlying issue exactly either, basically, it sounded like YUV should only have values from 16 to 235 but some recording/converting software assigns values below 16, losing detail in the dark areas because YUV will display 0 to 15 equally as black as 16, although originally, 0 was darker than, say, 12? And the same for bright areas and values above 235. Is that it?

    So Interleave basically shows frames in comparison? I.e. I give the raw source, then a filter and then the video goes first frame original, first frame filtered, second frame original, second frame filtered... ? That sounds EXTREMELY handy, thank you, I will make sure to use that often.

    I'm using "filtered()" here to mean any particular filter. You could also use a sequence like:


    Code:
    WhateverSource()
    source = last # remember the original source
    Tweak(cont=1.2) # just a few random filters
    FlipVertical()
    Interleave(source, last)
    So in "Interleave(source, last)" last describes the video including the filters that were applied beforehand (namely FlipVertical() and Tweak(cont=1.2))?

    Oh yes, StackVertical() and StackHorizontal() were introduced in one of the first tutorials, but I didn't think of any use for it yet, your idea is definitely very good, that'll help a lot too! Thanks a ton for putting in such an effort to make my entry so much easier
  20. Another thing. I just tried what you suggested earlier, using QTGMC's EZDenoise feature. To really see the effects, I followed QTGMC's documentation and used ShowNoise to see how much noise would remain. What I find weird is that it says that a higher Denoise value means more denoising (which makes sense). Yet, when I set EZDenoise to merely 0.1 I see no noise at all, yet when I crank it up to 2.0 (as you estimated earlier) I see tons of noise. What's going on there?

    EDIT: I tried an example from the documentation:
    Code:
    QTGMC(Preset="Slower", NoiseProcess=1, NoiseRestore=0.0, Denoiser="dfttest", DenoiseMC=true, NoiseTR=2, Sigma=4.0, ShowNoise=8)
    This was mentioned as a pretty slow but powerful denoising process. It still shows noise, maybe I'm misinterpreting what the noise should look like in the view I force through ShowNoise? Should there be nothing at all or should it take a certain shape? Is it better to just look at the actual video and ignore this overlay completely?


    EDIT2: I tried using Interleave to see some changes, this time without looking at ShowNoise, but the actual output. Weirdly, when comparing different QTGMC presets and manual changes, there is no visible difference in the source material. What does that mean? I used this:
    Code:
    src = MPEG2Source("C:\Program Files (x86)\VirtualDubMod\tests\AfterSchool.d2v")
    Interleave(src, src.QTGMC(Preset="Medium"), src.QTGMC(Preset="Slower", NoiseProcess=1, NoiseRestore=0.0, Denoiser="dfttest", DenoiseMC=true, NoiseTR=2, Sigma=4.0))
    I even tried the Presets Slow, Medium and Very Fast, but they look completely identical to the third filter (the one with the many manual changes). Do you know why that could be? There's a very clear difference between source and QTGMC, but no difference between any of the QTGMC Presets.
    Last edited by bschneider; 21st Jun 2014 at 07:43.
  21. Originally Posted by bschneider View Post
    the graphs you posted with Histogram(). How do you know where Y=16 is?
    Histogram() highlights the 0-15 and 236-255 areas in yellow:

    [Attachment: hist.jpg]

    (I added black borders to that image to make room for the labels.) The total width of the graph is 256 pixels -- the number of values between 0 and 255 (inclusive).

    I usually prefer the traditional horizontal histogram so I use TurnRight().Histogram().TurnLeft(). That leaves the histogram at the top of the frame:

    [Attachment: hzhist.jpg]

    Here 0 is at the bottom of the graph, 255 at the top.

    Note that this isn't really a histogram, it's a waveform monitor. It's basically what you would see on an oscilloscope monitoring the luma signal.

    VideoScope() doesn't highlight the out-of-bounds areas, but if you enable the TickMarks, one row of dots marks Y=16. Unfortunately there's no marking for Y=235. But the scale is the same as Histogram(), 256 pixels, so you can estimate where it is.

    [Attachment: vscope.jpg]

    Originally Posted by bschneider View Post
    Generally, I know next to nothing concerning colours, their values and their mechanics on computers. I'm not sure if I understood the underlying issue exactly either, basically, it sounded like YUV should only have values from 16 to 235 but some recording/converting software assigns values below 16, losing detail in the dark areas because YUV will display 0 to 15 equally as black as 16, although originally, 0 was darker than, say, 12? And the same for bright areas and values above 235. Is that it?
    Yes. A properly calibrated display will show Y=0 and Y=16 as the same shade of black. If it doesn't, normal video will look washed out (not enough contrast) because blacks at Y=16 will be dark grey, not black. The top end is less critical in terms of obvious image quality.

    Originally Posted by bschneider View Post
    So Interleave basically shows frames in comparison? I.e. I give the raw source, then a filter and then the video goes first frame original, first frame filtered, second frame original, second frame filtered... ? That sounds EXTREMELY handy, thank you, I will make sure to use that often.
    Yes, it alternates frames from the two videos. I use it along with a screen magnifier to view small changes resulting from filters. Looking at images side by side, it's hard to see small differences; when you flip back and forth between the two, the differences are much more easily spotted.

    Originally Posted by bschneider View Post
    I'm using "filtered()" here to mean any particular filter. You could also use a sequence like:

    Code:
    WhateverSource()
    source = last # remember the original source
    Tweak(cont=1.2) # just a few random filters
    FlipVertical()
    Interleave(source, last)
    So in "Interleave(source, last)" last describes the video including the filters that were applied beforehand (namely FlipVertical() and Tweak(cont=1.2))?
    Yes.

    Originally Posted by bschneider View Post
    Oh yes, StackVertical() and StackHorizontal() were introduced in one of the first tutorials, but I didn't think of any use for it yet, your idea is definitely very good, that'll help a lot too! Thanks a ton for putting in such an effort to make my entry so much easier
    Some problems caused by filtering are only visible at full speed playback. This is one case where seeing both the original and the filtered videos at the same time is helpful.
  22. Originally Posted by bschneider View Post
    Another thing. I just tried what you suggested earlier, using QTGMC's EZDenoise feature. To really see the effects, I followed QTGMC's documentation and used ShowNoise to see how much noise would remain. What I find weird is that it says that a higher Denoise value means more denoising (which makes sense). Yet, when I set EZDenoise to merely 0.1 I see no noise at all, yet when I crank it up to 2.0 (as you estimated earlier) I see tons of noise. What's going on there?
    ShowNoise=true shows the noise that was removed, not the noise remaining. A flat grey image indicates no noise was removed. Variations from that represent the noise removed.
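    To see it in action, something like this (a sketch; per the QTGMC documentation, ShowNoise can also be given a number to amplify the display, like the ShowNoise=8 example you tried):

    Code:
    MPEG2Source("filename.d2v")
    QTGMC(Preset="Slower", EZDenoise=2.0, ShowNoise=true)  # flat grey = nothing was removed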
  23. I see no one has offered Deadrats' Law™ as a possible solution. DL states: "Don't Bother Wasting Your Time, Just Keep It As It Is".

    Once something is interlaced, you can never really de-interlace it, at least not with the tools available to the average user. All the solutions listed try to mask flaws with the various de-interlacing tools.

    And in all honesty, the space savings from transcoding a 17 Mbps source to a 13 Mbps target are negligible.

    Do yourself a favor: keep them as they are.
  24. Originally Posted by jagabo View Post
    Histogram() highlights the 0-15 and 236-255 areas in yellow:

    [Attachment: hist.jpg]


    (I added black borders to that image to make room for the labels.) The total width of the graph is 256 pixels -- the number of values between 0 and 255 (inclusive).
    So in the case of my sample, there are no issues with darks (no dots in the 0-15 area), but some with brights (plenty of dots from 236-255), correct? Actually, it looks like all the shades are a bit too bright; you could move it all down a bit and it would still not touch the area below 16. That way, you don't even need to warp the proportions of the brightness scale, right?

    Originally Posted by jagabo View Post
    Some problems caused by filtering are only visible at full speed playback.
    Oh yeah, I suppose that's true, that'll come in handy then!

    Originally Posted by jagabo View Post
    ShowNoise=true shows the noise that was removed, not the noise remaining.
    Ah, I figured as much. That still leaves some confusion concerning why the picture looks the same no matter which preset I use. Any idea what's causing that? Maybe my eyesight is just bad, at least ShowNoise signals that the noise removed is different for the respective Presets, but I don't see it much.
    AGAIN, you're taking so much time with this, I know I continue to ask questions, but if it's getting too much for you that's totally fine! Thanks so much

    Originally Posted by deadrats View Post
    I see no one has offered Deadrats' Law™
    Heh, that's a good one too. But it's not about saving space for me; I want to make the video more pleasant-looking (subjectively, at least). I think the sample that jagabo posted early on looked leagues better than my source sample, and so I want to try and work on it some more. Which isn't to say I don't appreciate your attempt at saving me some time; all opinions on the matter are very welcome!
  25. Originally Posted by bschneider View Post
    So in the case of my sample, there are no issues with darks (no dots in the 0-15 area), but some with brights (plenty of dots from 236-255), correct?
    Yes.

    Originally Posted by bschneider View Post
    Actually, it looks like all the shades are a bit too bright; you could move it all down a bit and it would still not touch the area below 16. That way, you don't even need to warp the proportions of the brightness scale, right?
    Yes, in the sample images I posted you could subtract 10 or so from all the Y values (ColorYUV(off_y=-10)) to bring the peaks down to 235. And since there are no pixels with luma below 26 you wouldn't have any over-dark pixels. But there are some other parts of the clip where the darks are down around 22. If you subtract 10 from those they would be too dark.
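    If you do try it, something like this (Limiter() here is just optional insurance; it clamps stray values into the 16-235 range but won't bring back detail that's already been crushed):

    Code:
    ColorYUV(off_y=-10)  # shift all luma values down by 10
    Limiter(16, 235)     # clamp anything that ends up outside 16-235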

    Originally Posted by bschneider View Post
    Originally Posted by jagabo View Post
    ShowNoise=true shows the noise that was removed, not the noise remaining.
    Ah, I figured as much. That still leaves some confusion concerning why the picture looks the same no matter which preset I use. Any idea what's causing that? Maybe my eyesight is just bad, at least ShowNoise signals that the noise removed is different for the respective Presets, but I don't see it much.
    I was able to see differences with your sample sequence. I used a 4x screen magnifier (Windows' built-in Start -> All Programs -> Accessories -> Ease of Access -> Magnifier).
  26. Originally Posted by jagabo View Post
    I was able to see differences with your sample sequence. I used a 4x screen magnifier (Windows' built-in Start -> All Programs -> Accessories -> Ease of Access -> Magnifier).
    Ah, you are right; I didn't expect the changes to be that hard to locate. There are noticeable differences, especially when magnifying.

    So, I played around with some filters for quite some time; I didn't want to go too crazy, so I stuck mostly to what you suggested earlier. When using aWarpSharp, I noticed that "warp" is indeed a good description, hehe; it seemed to distort the proportions somewhat, and it looked odd. This is the code I'm using now:

    Code:
    MPEG2Source("C:\Program Files (x86)\VirtualDubMod\tests\AfterSchool.d2v")
    ColorYUV(off_y=-6)
    QTGMC(Preset="Slower", EZDenoise=2.0)
    DeBlock_QED(quant1=20, quant2=22)
    asharp(1,3)
    I used both EZDenoise and DeBlock_QED; it seemed to give a good picture. It may be a placebo though; I'm getting a bit tired, and it might not actually be doing anything.

    What is still left are some large blocks, especially when magnified. This is at 300% magnification; I marked some spots that I found odd (is this still just blocking/artifacts?), and additionally, I would like the face to look a bit... smoother? Can you help me identify what the problem is?

    [Attachment: issues.png]

    I found it interesting to search through the avisynth wiki a bit, but I find myself baffled by all the terminology. Some of it is hard to look up; I couldn't find anything on contra-sharpening, for example, so I still don't know what it does, and couldn't find anything via Google either. Just to explain some of my issues with expanding on my own: I sometimes ran into dead ends.
    Last edited by bschneider; 22nd Jun 2014 at 03:03.
  27. Originally Posted by bschneider View Post
    When using aWarpSharp, I noticed that "warp" is indeed a good description, hehe; it seemed to distort the proportions somewhat, and it looked odd.
    Yes. Stick with values below 10 or so with real-world video.

    Originally Posted by bschneider View Post
    This is the code I'm using now:

    Code:
    MPEG2Source("C:\Program Files (x86)\VirtualDubMod\tests\AfterSchool.d2v")
    ColorYUV(off_y=-6)
    QTGMC(Preset="Slower", EZDenoise=2.0)
    DeBlock_QED(quant1=20, quant2=22)
    asharp(1,3)
    I used both EZDenoise and DeBlock_QED; it seemed to give a good picture. It may be a placebo though; I'm getting a bit tired, and it might not actually be doing anything.

    What is still left are some large blocks, especially when magnified.
    Deblock_QED doesn't work well after QTGMC(). The recommended way to use it with interlaced video is:

    http://forum.doom9.org/showpost.php?p=934083&postcount=884

    You'll also need stronger deblocking settings with that video. And try using MPEG2Source's deringing to remove DCT ringing artifacts at sharp edges:

    Code:
    MPEG2Source("C:\Program Files (x86)\VirtualDubMod\tests\AfterSchool.d2v", CPU2="ooooxx")
    AssumeTFF()
    ColorYUV(gain_y=-6, off_y=-3) #compromise setting for entire clip
    
    SeparateFields().PointResize(width,height)
    Deblock_qed(quant1=40, quant2=45).AssumeFrameBased().AssumeTFF()
    SeparateFields().SelectEvery(4,0,3).Weave()
    
    QTGMC(Preset="Slower", EZDenoise=2.0)
    asharp(1,3)
  28. Originally Posted by jagabo View Post
    Yes. Stick with values below 10 or so with real-world video.
    Oh okay, I will try that again.

    Originally Posted by jagabo View Post
    Deblock_QED doesn't work well after QTGMC(). The recommended way to use it with interlaced video is
    Okay, I'll try to go through this; there are two parts I don't quite understand. I read up on what SeparateFields does: basically, it gives me the video in its pure interlaced form? Meaning it would be at half height if I were to watch it in that state?

    Now comes the first confusing part, PointResize. Basically, I take this 540p video and make it 1080p? And only afterwards apply Deblocking? Why is that? Is it to keep the proportions right?

    Afterwards Deblock_QED, and I read that the AssumeFrameBased and AssumeTFF are to make sure it chooses the right field to deblock, correct? Since every second line would be empty after SeparateFields, deblocking BFF would basically do nothing?

    Second confusing part: SelectEvery. Reading up on the syntax, SelectEvery(4, 0, 3) would select all the frames in increments of 4, from the offsets 0 and 3. So it would select Frames 0, 3, 4, 7, 8, 11, 12 etc. My theory is that due to the interlacing and SeparateFields, there is always a two frame dependency, i.e. frames 3 and 4 belong together and are "merged" with Weave. But what happens to frames not met by this? Meaning, what happens to frames 1 and 2, 5 and 6, etc.?

    Originally Posted by jagabo View Post
    And try using MPEG2Source's deringing to remove DCT ringing artifacts at sharp edges
    Ah, you are totally right, I forgot to add that in again.

    I hope you can see that I am progressing, I'm doing my best to wrap my head around these things. I find it highly interesting, but also overwhelming on occasion. Also, I found the filter Autolevels() and it seemed to net some positive results. What is your take on that? I'm not very knowledgeable about colours and their underlying mechanics, so it seemed to save me a lot of work and gave a presentable picture. If there are problems with it, I'd be more than willing to learn more about that matter too, though.
  29. Originally Posted by bschneider View Post
    I read up on what SeparateFields does: basically, it gives me the video in its pure interlaced form? Meaning it would be at half height if I were to watch it in that state?
    Yes. It peels the two fields apart making each one a half height image and orders them sequentially.

    By the way, you can view the state of the video at any point with Return(last) -- or if you're using a named stream return using the name of that stream.

    Code:
    MyStream = Yada_yada()
    Return(MyStream)
    Originally Posted by bschneider View Post
    Now comes the first confusing part, PointResize. Basically, I take this 540p video and make it 1080p? And only afterwards apply Deblocking? Why is that?
    This type of deblocking is designed to work on the type of blocks you get from MPEG encoding. Frames are broken up into 8x8 pixel blocks for compression. That means the blocky artifacts are 8x8 pixels, aligned with the top left corner of the frame. After SeparateFields() the blocky artifacts are half height, 8x4 pixels. Deblock_QED() won't work properly with that since it's looking for 8x8 blocks. So PointResize() is used to restore the full frame height, and make the blocky artifacts 8x8 pixels again.

    Originally Posted by bschneider View Post
    Afterwards Deblock_QED, and I read that the AssumeFrameBased and AssumeTFF are to make sure it chooses the right field to deblock, correct?
    The video has already been deblocked after Deblock_QED(). AviSynth keeps track of whether the images are frames or separated fields. Since SeparateFields() was used earlier it remembers that they aren't full frames. It wouldn't allow you to SeparateFields() again. AssumeFrameBased() tells it to consider the separated fields as full frames, allowing SeparateFields() to work again.

    Originally Posted by bschneider View Post
    Since every second line would be empty after SeparateFields, deblocking BFF would basically do nothing?
    After SeparateFields() there are no black lines between the lines of the field. They were removed and the 540 lines packed together into a 1920x540 image.

    Originally Posted by bschneider View Post
    Second confusing part: SelectEvery. Reading up on the syntax, SelectEvery(4, 0, 3) would select all the frames in increments of 4, from the offsets 0 and 3. So it would select Frames 0, 3, 4, 7, 8, 11, 12 etc.
    Yes. SelectEvery(N, ...) means: out of every group of N frames, keep only the frames indicated in the following list (the frames are numbered 0 to N-1). So SelectEvery(4,0,1,2,3) doesn't change anything. SelectEvery(4,0,2,1,3) reorders the two middle frames of the group but keeps all the frames. SelectEvery(4,0,1) keeps the first two frames of the four and throws out the second two.
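    In script form:

    Code:
    SelectEvery(4, 0,1,2,3)  # keeps all four frames of each group, unchanged
    SelectEvery(4, 0,2,1,3)  # keeps all four, but swaps the middle two
    SelectEvery(4, 0,1)      # keeps the first two of every four, drops the rest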

    Originally Posted by bschneider View Post
    My theory is that due to the interlacing and SeparateFields, there is always a two frame dependency, i.e. frames 3 and 4 belong together and are "merged" with Weave. But what happens to frames not met by this? Meaning, what happens to frames 1 and 2, 5 and 6, etc.?
    This is kinda hard to explain. Consider the first 8 scan lines of your original video:

    Code:
    0
    1
    2
    3
    4
    5
    6
    7
    After SeparateFields you have two half height images (depicted side by side here but they're really sequential in the AviSynth stream):

    Code:
    0        1    
    2        3
    4        5
    6        7
    After PointResize() you have:

    Code:
    0        1    
    0        1    
    2        3
    2        3
    4        5
    4        5
    6        7
    6        7
    Of those duplicated scan lines, only the first of the duplicates is in the correct location (line) in the left image, and only the second of the duplicates is in the correct location in the right image. So the SeparateFields().SelectEvery() sequence is there to assure that the correct scan lines are taken from each field when reconstructing the original interlaced frame with Weave().


    Originally Posted by bschneider View Post
    I hope you can see that I am progressing,
    I think you are doing very well.

    Originally Posted by bschneider View Post
    I'm doing my best to wrap my head around these things. I find it highly interesting, but also overwhelming on occasion.
    Yes, there are so many details you have to understand to work at this level.

    Originally Posted by bschneider View Post
    Also, I found the filter Autolevels() and it seemed to net some positive results. What is your take on that?
    You have to be very careful when using automatic levels (brightness, contrast, saturation, etc.) filters. They are prone to brightness "pumping". Say for example you have a dark scene where all the pixels range from Y=16 to Y=126. An auto levels filter might brighten that so that Y ranges from 16 to 235. That would change a dark dingy shot to a nice bright sunny shot (which may or may not be what you want). But then someone walks into the frame wearing a bright white t-shirt at Y=235. Suddenly the auto levels filter will darken the shot to keep the t-shirt from blowing out. The background will return to its original dark dingy state.

    Originally Posted by bschneider View Post
    I'm not very knowledgeable about colours and their underlying mechanics, so it seemed to save me a lot of work and gave a presentable picture. If there are problems with it, I'd be more than willing to learn more about that matter too, though.
    Color is another whole big issue! I'll write a bit about that later.
  30. Yeah, you're doing well and have a ton of patience. My first few times, I gave up trying to absorb avisynth.

    My opinion - "auto" anything will give subpar results compared to doing it manually, scene by scene. So it depends on how much time/effort you want to spend; sometimes you don't have time to adjust everything. Because of different lighting situations per scene, you often have to apply different filters and different settings to different segments. There are several ways to do that, and those options are discussed in other threads.

    Instead of shifting the entire waveform down, another option is to preferentially adjust the top half, or apply a limit through smoothlevels (part of smoothadjust), so the overall brightness isn't reduced as much. Lmode=1 allows you to limit the change to darker pixels (so only the brighter pixels are affected). Look at the documentation and learn what the parameters mean for levels and smoothlevels (e.g. input low, gamma, input high, output low, output high). Those 5 parameters are universally the same in almost all programs (including Photoshop), just that adjusting in RGB is different than adjusting in YUV. First try adjusting those parameters with normal levels(), then try playing with the numbers in smoothlevels with the lmode limiter:

    Code:
    smoothlevels(5,1.1,255,0,240, lmode=1,ecurve=1, darkstr=200)
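    For comparison, plain Levels() takes the same five numbers; a rough equivalent without the limiting would be (coring=false so avisynth doesn't apply an extra 16-235 conversion on top):

    Code:
    Levels(5, 1.1, 255, 0, 240, coring=false)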
    Play with the parameters and see how they alter the final image and the waveform in the histogram - that's how you learn. Another good way to compare results is to use avspmod. You can put different versions of scripts in different tabs and toggle between them with the number keys. So tab 1 might have certain settings, tab 2 might have different ones, tab 3, tab 4, etc. It's a very fast way to get feedback on what your scripts/settings are doing, and to learn what settings do what.

    I know you're still working on the 1st clip, especially the macroblocking, but just to mention: the 2nd clip has other issues, like oversaturation. Oversaturation and high levels (>235) reduce the amount of detail that can be visible in bright and saturated regions when the video is rendered to RGB for display.

    RE: macroblocking - basically the details are not recoverable; you would need a better source. The stronger the deblocking settings and filters applied, the smoother the image and the fewer fine details will be retained. It's really a balance, and up to subjective taste.