VideoHelp Forum

  1. If the commentator's voice is in the center and the audience noise is stereo you can invert one of the stereo tracks and mix down to mono (easily done with Audacity, for example). But if the audio track is essentially mono that trick won't work. For example, it worked with the GER-BRA-2002.mkv file but not GRE-CZE-2004.mkv.

    Otherwise, you cannot remove a commentator's voice with a simple low-pass filter (unless you only want low bass sounds). It takes very sophisticated filtering to remove wide-spectrum sounds like a human voice. Maybe a program like iZotope RX.
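As a sketch of the invert-and-mix trick described above (a toy NumPy model, not Audacity itself; the signal names and frequencies are made up for illustration):

```python
import numpy as np

rate = 48000
t = np.arange(rate) / rate

# Made-up test signals: the "voice" is panned dead center (identical in both
# channels), while the "crowd" differs between channels (true stereo).
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
crowd_l = 0.3 * np.sin(2 * np.pi * 1000 * t)
crowd_r = 0.3 * np.sin(2 * np.pi * 1300 * t)

left = voice + crowd_l
right = voice + crowd_r

# Invert one channel and mix down to mono: in-phase, equal-volume center
# content cancels; only the stereo difference survives.
mono = left - right

residual = np.max(np.abs(mono - (crowd_l - crowd_r)))
print(residual)  # effectively zero: the center "voice" is gone
```

If the track is essentially mono (both channels nearly identical), `left - right` cancels everything, which is why the trick fails on files like that.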
  2. Originally Posted by jagabo View Post
    If the commentator's voice is in the center and the audience noise is stereo you can invert one of the stereo tracks and mix down to mono (easily done with Audacity, for example). But if the audio track is essentially mono that trick won't work. For example, it worked with the GER-BRA-2002.mkv file but not GRE-CZE-2004.mkv.

    Otherwise, you cannot remove a commentator's voice with a simple low-pass filter (unless you only want low bass sounds). It takes very sophisticated filtering to remove wide-spectrum sounds like a human voice. Maybe a program like iZotope RX.
    What does it mean that the commentator's voice is "in the center"?
  3. With a stereo track, sounds that seem to come from the middle -- between the two speakers -- are of equal volume and in phase on the two channels. That allows you to subtract and remove them.
  4. In that case I would remove the commentator's voice, but what would happen to the background sound? Wouldn't it be changed somehow?
  5. Originally Posted by Santuzzu View Post
    In that case I would remove the commentator's voice, but what would happen to the background sound? Wouldn't it be changed somehow?
    Yes. But anything else you do is going to change the sound to some extent too.
  6. Well, but isn't it expected that the two signals (channels) will be mostly in phase for soccer matches, so that after we invert one of them the mixed-down signal will be significantly diminished? I suppose this is a well-known trick with stereo audio rather than a sophisticated method, but I would like to be clear about what it can and cannot do.
  7. Like I said, sometimes it will work and sometimes it won't.
  8. Thanks jagabo, I will try both variants and see how they work.

    I tried to de-interlace the video attached to this post using QTGMC. I see that it is de-interlaced, but the top and bottom fields are not in the right places: top fields follow top fields, and bottom fields follow bottom fields. You will see it in the de-interlaced sample, which is attached as well. What's going on here?
    Image Attached Files
  9. You have the wrong field order. Call AssumeTFF() or AssumeBFF() before QTGMC(). Always post your full script when asking for help.
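Why the wrong field-order assumption produces that staggered look can be shown with a toy model (pure illustration: an object moving one pixel per field, frames pairing up two fields each):

```python
# Object position sampled once per field; an interlaced frame stores two
# fields shot at different times.
positions = list(range(8))

# Correct field-order assumption: fields play back in capture order.
tff = positions

# Wrong assumption (e.g. treating TFF material as BFF): the two fields
# inside every frame are swapped, so motion staggers backward and forward.
bff_on_tff = [p for pair in zip(positions[1::2], positions[0::2]) for p in pair]

print(tff)         # [0, 1, 2, 3, 4, 5, 6, 7]
print(bff_on_tff)  # [1, 0, 3, 2, 5, 4, 7, 6]
```

In real footage that back-and-forth sequence shows up as juddering on pans, which is why AssumeTFF()/AssumeBFF() must match the source before deinterlacing.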
  10. Originally Posted by jagabo View Post
    You have the wrong field order. Call AssumeTFF() or AssumeBFF() before QTGMC(). Always post your full script when asking for help.
    Beginner mistake... Thank you.

    It works with the "Slower" preset, but with "Very Slow" and "Placebo" the brightness is reduced a lot; the video is very dark. Any explanation for this?
  11. I've seen that too. I think it's some kind of incompatibility with one of the filters QTGMC uses. I hardly ever use anything slower than preset="fast", so I haven't bothered to track it down.
  12. I've seen that too.
    I haven't (using 32-bit Avisynth+ MT). I used:
    Code:
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\LoadDll.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\DGDecodeNV.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\AddGrainC.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\dfttest.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\EEDI2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\eedi3.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\FFT3DFilter.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\masktools2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\mvtools2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\nnedi.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\nnedi2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\SSE2Tools.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\TDeint.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\VerticalCleanerSSE2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\PlanarTools.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\MedianBlur2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\nnedi3.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\RgTools.dll")
    LoadCPlugin("I:\Hybrid\32bit\AVISYN~1\yadif.dll")
    LoadDLL("I:\Hybrid\32bit\AVISYN~1\libfftw3f-3.dll")
    Import("I:\Hybrid\32bit\avisynthPlugins\QTGMC.avsi")
    Import("I:\Hybrid\32bit\avisynthPlugins\SMDegrain.avsi")
    Import("I:\Hybrid\32bit\avisynthPlugins\AnimeIVTC.avsi")
    SetFilterMTMode("DEFAULT_MT_MODE", MT_MULTI_INSTANCE)
    # loading source: F:\TestClips&Co\files\interlaceAndTelecineSamples\interlaced\avc - interlaced.m2ts
    #  input color sampling YV12
    #  input luminance scale tv
    Source = DGSource(dgi="E:\Temp\m2ts_0a10d28b48f7fa511d013e46ccd03e3c_853323747.dgi",fieldop=2)
    # current resolution: 1920x1080
    SourceFiltered = Source
    # deinterlacing
    Source = Source.AssumeTFF()
    Source = Source.QTGMC(Preset="Fast", ediThreads=2)
    Source = Source.SelectEven()
    SourceFiltered = SourceFiltered.AssumeTFF()
    SourceFiltered = SourceFiltered.QTGMC(Preset="Very Slow", ediThreads=2)
    SourceFiltered = SourceFiltered.SelectEven()
    # filtering
    SourceFiltered = SourceFiltered.ConvertToRGB32(matrix="Rec709")
    Source = Source.ConvertToRGB32(matrix="Rec709")
    # interleaving for filter preview
    Source = Source.Subtitle("Fast")
    SourceFiltered = SourceFiltered.Subtitle("Very Slow") 
    Interleave(Source, SourceFiltered)
    PreFetch(4)
    return last
    while trying to reproduce this.

    Also tried with the BRA-FRA-98.mkv clip to check whether this depends on the source:
    Code:
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\LoadDll.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\DGDecodeNV.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\AddGrainC.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\dfttest.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\EEDI2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\eedi3.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\FFT3DFilter.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\masktools2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\mvtools2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\nnedi.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\nnedi2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\SSE2Tools.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\TDeint.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\VerticalCleanerSSE2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\PlanarTools.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\MedianBlur2.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\nnedi3.dll")
    LoadPlugin("I:\Hybrid\32bit\AVISYN~1\RgTools.dll")
    LoadCPlugin("I:\Hybrid\32bit\AVISYN~1\yadif.dll")
    LoadDLL("I:\Hybrid\32bit\AVISYN~1\libfftw3f-3.dll")
    Import("I:\Hybrid\32bit\avisynthPlugins\QTGMC.avsi")
    Import("I:\Hybrid\32bit\avisynthPlugins\SMDegrain.avsi")
    Import("I:\Hybrid\32bit\avisynthPlugins\AnimeIVTC.avsi")
    SetFilterMTMode("DEFAULT_MT_MODE", MT_MULTI_INSTANCE)
    # loading source: C:\Users\Selur\Desktop\BRA-FRA-98.mkv
    #  input color sampling YV12
    #  input luminance scale tv
    Source = DGSource(dgi="E:\Temp\mkv_fa12158968444d0e8c5c514bb7b56e2b_853323747.dgi",fieldop=2)
    # current resolution: 720x576
    SourceFiltered = Source
    # deinterlacing
    Source = Source.AssumeTFF()
    Source = Source.QTGMC(Preset="Fast", ediThreads=2)
    Source = Source.SelectEven()
    SourceFiltered = SourceFiltered.AssumeTFF()
    SourceFiltered = SourceFiltered.QTGMC(Preset="Very Slow", ediThreads=2)
    SourceFiltered = SourceFiltered.SelectEven()
    # filtering
    SourceFiltered = SourceFiltered.ConvertToRGB32(matrix="Rec601")
    Source = Source.ConvertToRGB32(matrix="Rec601")
    # interleaving for filter preview
    Source = Source.Subtitle("Fast")
    SourceFiltered = SourceFiltered.Subtitle("Very Slow") 
    Interleave(Source, SourceFiltered)
    PreFetch(4)
    return last
    For me, both 'Fast' and 'Very Slow' have the same colors.

    The Avisynth stuff (without DGDecNV) I use can be found [url=http://www.selur.de/sites/default/files/hybrid_downloads/avisynth/avisynthExtension_190314.7z]here[/url], in case one of you wants to compare the filters I used against their own.

    Cu Selur
    users currently on my ignore list: deadrats, Stears555
  13. I've seen it with 64-bit AviSynth+, not with 32-bit AviSynth 2.6 MT.
  14. Ah okay, I'm not using 64-bit Avisynth; for 64-bit I use Vapoursynth.
  15. Originally Posted by jagabo View Post
    I only looked at GER-BRA. There are obvious deinterlacing artifacts. It was broadcast as 1080i30.
    Please, can you write at which second you spotted artifacts in this video?
  16. Here are 4x point enlargements from two frames:

    [Attachment 48475]


    Be sure to view the image at full size. At the top, taken from frame 1935, the camera is not moving; notice how the horizontal line is pretty smooth. Below that is the same section of the field from frame 1977, where the camera is moving. You can see obvious jaggy deinterlacing artifacts, and similar artifacts whenever the camera is panning.
  17. Thanks jagabo. I understand that such artifacts are visible when the camera is moving fairly fast, because in such cases the two fields of the same frame differ more. BTW, what is the most obvious deinterlacing artifact that can be spotted during playback if the video has been deinterlaced well, using QTGMC for example? Are these jaggy artifacts actually the worst ones?

    Another question is about broadcasting. As far as I understand, all broadcasting in the last XY years has been done in interlaced form at either 25 or 30 fps depending on the region, except in the USA where 720p 60fps is also a standard. I'm curious what happens if the event takes place in Europe, for example: I guess in the USA they process the video and show it at 720p 60fps, but what happens in Japan and South Korea, where it should be broadcast at 29.97 fps? Do they convert 1080i 25fps to 1080i 30fps, or do they get the original in 1080i 30fps? Same for the USA: why don't they get the original in 720p 60fps instead of converting it before showing? Or perhaps it depends on the broadcasting rights? I hope someone can explain how this works.
  18. Originally Posted by Santuzzu View Post
    what is the most obvious deinterlacing artifact that can be spotted during playback if the video has been deinterlaced well, using QTGMC for example?
    QTGMC occasionally will leave jaggy artifacts like that, but not often. The most obvious artifacts with QTGMC are with closely spaced, thin, near-horizontal lines, like you might see with horizontal window blinds in the background of an image, or on the walls of a skyscraper in the distance. You will sometimes get moire artifacts. With animated content you sometimes get a little ghosting as an edge or line from a previous or next frame appears in the current frame. This type of artifact is barely visible in real-time playback.

    Originally Posted by Santuzzu View Post
    Another question is about broadcasting. As far as I understand, all broadcasting in the last XY years has been done in interlaced form at either 25 or 30 fps depending on the region, except in the USA where 720p 60fps is also a standard. I'm curious what happens if the event takes place in Europe, for example: I guess in the USA they process the video and show it at 720p 60fps, but what happens in Japan and South Korea, where it should be broadcast at 29.97 fps? Do they convert 1080i 25fps to 1080i 30fps, or do they get the original in 1080i 30fps? Same for the USA: why don't they get the original in 720p 60fps instead of converting it before showing? Or perhaps it depends on the broadcasting rights? I hope someone can explain how this works.
    I'm not an expert in broadcast video, but I believe video in different countries is normally shot in one of that country's usual formats. When shown in a country with different standards it will be converted locally. So a soccer game in the UK may be shot at 1080i25 for local broadcast, but converted to 1080i30 or 720p60 for the USA.
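The frame-count side of that 25 fps to 29.97 fps conversion can be sketched with the crudest possible method, repeating the nearest source frame; real broadcast standards converters use motion-compensated interpolation instead. Only the standard rates here come from the discussion; the mapping scheme is just for illustration:

```python
from fractions import Fraction

src_fps = Fraction(25, 1)
dst_fps = Fraction(30000, 1001)   # 29.97 fps

# For each output frame time, pick the nearest source frame.
n_out = 30                        # about one second of output
mapping = [round(i / dst_fps * src_fps) for i in range(n_out)]

print(mapping)
# 30 output frames draw on only 25 source frames, so some get shown twice.
print(len(mapping) - len(set(mapping)))
```

The duplicated frames are what cause the judder this naive method is known for.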
  19. Thanks jagabo.

    I was downloading a soccer match today and found one version at 1080i 25fps, but with repeated even frames. The scan type is "MBAFF" and the store method is "Separated fields". What does this mean in practice?
  20. Sorry, my mistake. Frames are not repeated and the video has 50 fps. It says it is 1080i 25fps; does this mean the video has effectively been deinterlaced, or something else?
  21. MBAFF is the type of interlaced encoding that x264 does:

    https://www.afterdawn.com/glossary/term.cfm/macroblock-adaptive_frame-field_coding

    Does your video show each interlaced frame twice? There's some confusion about field rate vs. frame rate with some videos and some decoders. If you're getting duplicate frames try forcing the frame rate in the source filter. Or use a different source filter.
  22. So, when there is motion the algorithm doesn't separate fields, and otherwise it separates them, since the quality will be better because in such cases interpolation gives more natural results? Does this mean that, in practice, this video has been partly deinterlaced during streaming or recording?
  23. Originally Posted by Santuzzu View Post
    So, when there is motion the algorithm doesn't separate fields, and otherwise it separates them, since the quality will be better because in such cases interpolation gives more natural results?
    Yes. Parts of the frame where there is no motion don't have comb artifacts -- so they can be encoded progressively. Parts of the frame with motion have comb artifacts and must be encoded interlaced. This is all internal to the codec. When it outputs the frame it's a normal interlaced frame of video.

    Originally Posted by Santuzzu View Post
    Does this mean that, in practice, this video has been partly deinterlaced during streaming or recording?
    Not really deinterlaced. The parts with no motion are effectively progressive to start with.
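The per-block frame/field decision described above can be sketched with a crude comb metric. This illustrates the idea only, not x264's actual rate-distortion-based decision; the block contents and threshold are invented:

```python
import numpy as np

def comb_energy(block):
    # Mean squared difference between adjacent rows of the woven block.
    # Interleaved fields that disagree (motion) make this large.
    rows = block.astype(float)
    return float(np.mean((rows[1:] - rows[:-1]) ** 2))

rng = np.random.default_rng(0)

# Two woven 16x16 "macroblocks": in the static one both fields agree;
# in the moving one the bottom field (odd rows) is shifted sideways.
static = np.tile(rng.integers(0, 256, (1, 16)), (16, 1))
moving = static.copy()
moving[1::2] = np.roll(moving[1::2], 4, axis=1)

THRESHOLD = 100.0   # arbitrary for this toy example
for name, blk in [("static", static), ("moving", moving)]:
    mode = "field" if comb_energy(blk) > THRESHOLD else "frame"
    print(name, "->", mode, "coding")
```

The static block has no comb artifacts, so it can be coded as a progressive frame; the moving block combs badly and is better coded as two fields, which is exactly the split jagabo describes.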
  24. Do you mean that those progressive parts are actually the originals captured by the camera, and that the video was first encoded using this method (selective interlacing)? In that case, does it mean that the quality of such videos is better than that of completely interlaced videos, because less original information has been lost (more original data has been carried through) during encoding?

    What about deinterlacing such videos, how would I do it with Avisynth? If it is possible, how does a deinterlacing algorithm recognize which frame is interlaced and which is not? Is there some pattern, or is information about the frame scan type perhaps stored inside the video file?
  25. As far as you are concerned MBAFF is interlaced video. Treat it exactly the same as any other interlaced video.
  26. So, MBAFF treats some parts of interlaced frames as progressive when there is no motion, in order to achieve a higher compression ratio? In that case, even in the best case it can't increase the quality of the input video (only preserve it); usually it actually decreases the quality, as all lossy compression algorithms do. It just helps compress the input video at a higher ratio at the expense of video quality. That's how I understood it; please correct me if something is wrong here.
    Last edited by Santuzzu; 25th Mar 2019 at 23:23.
  27. Yes, that is correct. Progressive encoding is more efficient than interlaced encoding, so if you can encode part of the frame progressively you get more compression and/or lose less quality.
  28. Originally Posted by jagabo View Post
    Progressive encoding is more efficient than interlaced encoding.
    ??

    I'd never heard that one and am scratching my head as to why it would be true. AFAIK, interlaced is encoded with the odd fields as one stream and the even fields as another, so they are two half-height progressive streams which are then woven back together. I don't think the encoder would ever try to use information from the odd fields when encoding an even field (or vice versa), since the two fields in an interlaced frame are spatially offset.

    So, since interlaced is really two progressive streams, I'm not sure whether the encoding would be less efficient. I am not saying you are wrong, but I am saying that this is the first time I've ever seen this statement.
  29. When you separate fields you increase aliasing and introduce more high-frequency components. That leads to higher bitrate requirements.
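That point can be made numerically: taking every other line doubles the vertical frequency of whatever detail is there, so a separated field has steeper line-to-line changes than the full frame. A toy 1-D model (the test signal and the energy metric are made up for illustration):

```python
import numpy as np

rows = np.arange(480)
# Smooth vertical detail: a sine with a 32-line period in the full frame.
frame = np.sin(2 * np.pi * rows / 32.0)

field = frame[::2]   # one field: same content at half the vertical resolution

def hf_energy(signal):
    # Mean squared adjacent-sample difference, a crude high-frequency proxy.
    return float(np.mean(np.diff(signal) ** 2))

print(hf_energy(frame))   # low: the detail changes slowly line to line
print(hf_energy(field))   # several times higher: same detail, steeper steps
```

Higher adjacent-line differences mean more high-frequency energy for the codec to spend bits on, which is one way to see why separated fields cost more bitrate.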
  30. Originally Posted by jagabo View Post
    When you separate fields you increase aliasing and introduce more high-frequency components. That leads to higher bitrate requirements.
    Not to be argumentative, but how would the encoder know the difference between 720x480 progressive video and 720x240 progressive video? Maybe what you are saying is that a diagonal line will have a lot more staircasing in the half-height field, and the encoder must work harder to encode that cleanly. If that is the point, I can see how that might be so.


