VideoHelp Forum
  1. Source is 25i.

    The question is: when is it recommended to add SelectEven() to output 25p from 25i? And in which cases is it better to keep 50p from 25i?

    Is it correct to assume that a football game should get 50p, while a documentary should get 25p?

    On what criterion do I base the decision to choose either 50p or 25p? Whether there are many scenes in which the camera pans quickly? Or many scenes with very fast-moving objects? In other words, how do I decide whether, for example, a certain documentary is better off at 50p instead of 25p, and vice versa?
  2. If you want to retain smooth motion use 50p. If you don't care about smooth motion and want to minimize file size or need 25p for some other reason (like a player that can't handle 50p) use 25p.
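    In AviSynth terms it is one extra line. A rough sketch, assuming QTGMC is installed and the source is top field first (the file name is a placeholder):
    Code:
    AVISource("E:\source.avi")
    AssumeTFF()
    QTGMC(preset="Slower")  # 25i -> 50p, keeps the smooth motion
    # SelectEven()          # uncomment to drop back to 25p (half the frames)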
  3. Are a larger file size and devices or players that can't handle 50p the only disadvantages of choosing 50p?

    For example, it is very rare to see video content on the web at 50p or 60p; almost all of it is at 30 fps or below. The same applies to television programs distributed through peer-to-peer networks; even football games are 25p. What would be the reason? Keeping the file size as small as possible?
  4. Yes. If you don't save your original interlaced source there is one more disadvantage: you are forever locking in the deinterlacing quality of QTGMC. If someone comes out with a better deinterlacer in the future you'll be out of luck.

    Regarding the file size disadvantage, it's not 2x as you might expect. For most material it's around 20 percent.
  5. And because of this 20 percent, almost all video content on the web is at or below 30 fps?

    I think I'll choose 50 fps for my 25i source, since it is standard definition. Maybe for 1080i sources it would make sense to keep the original frame rate.
  6. 50p will play just fine in Flash and MP4 players, but it is more likely to drop frames, and there are more problems on weaker computers.

    An ordinary, older smartphone will play 50p at SD resolution if you go easy on the profile and settings, maybe dropping some frames, but you would get "dropped frames" by encoding 25p anyway, so to speak. The other problem, which you do not mention, is deciding whether or not to resize to square pixels. I'd do it at the same time in Avisynth, right after the QTGMC line. Devices (the software players in them) might ignore the aspect ratio flag in your video. I assume you are talking about PAL TV soccer games; if that's not the case, disregard this.
  7. Of course I resize it to square pixel:
    Code:
    LanczosResize(720,404)
    At that resolution, if I send a 50 fps MP4 to others, their tablets or laptops are more than capable of playing it back without dropping frames. So it makes sense to choose the doubled frame rate and smoother motion.
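    The whole chain in the script is roughly this (a sketch; QTGMC is assumed to be installed and the file name is a placeholder). Deinterlacing comes first so the resizer works on full progressive frames:
    Code:
    AVISource("E:\clip.avi")
    AssumeTFF()
    QTGMC(preset="Slower")   # 25i -> 50p
    LanczosResize(720,404)   # anamorphic 16:9 720x576 -> roughly square-pixel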
  8. Yep. The madness continues. Transcoding and resizing, hey, no problems, it's just like WinZip. All standalone DVD players play Xvid, too. Isn't digital wonderful?

    Has anyone picked up on this bullshit thread, or what? I say leave 'em to it. Some people are just blind as bats, even when viewing tiny videos on their Dick Tracy wrist watches. This is the third thread or so from the O.P. on the same subject, and so little learned. Annoying.

    Sorry. Sometimes it's better to just vent openly.

    As you were.
  9. There is nothing wrong with QTGMC and resizing interlaced TV content; it is an executive decision that makes life easier for those who are going to watch it on devices. Nobody watches a soccer game repeatedly. Watch it and forget it; there is another game tomorrow.

    DVD is another story, and maybe that is all you are talking about, but it is not much of a concern for the OP. Sure, using the original captured footage would be preferable, but again this is not a home video with unique value.
  10. Originally Posted by sanlyn:
    Yep. The madness continues. Transcoding and resizing,
    It's not just transcoding. It's editing: filming, trimming, adding titles, fades, transitions, correcting tonality, color correction, removing unwanted sounds. This obviously requires transcoding (rendering to something lossless and then encoding to something lossy). It obviously requires resizing, since CRTs are 1930s technology. And obviously it requires deinterlacing, since the camcorder I bought in 2007 uses 1930s technology and isn't capable of recording progressive frames. Of course I had many questions about deinterlacing, since I could not capture progressive.
  11. It is your PAL DV footage and you edit it? Then you can make a DVD right from your editor's timeline. Leave it interlaced and do not resize it, as sanlyn suggests.
    I mean in addition to making the progressive version as well.
  12. Originally Posted by _Al_:
    Leave it interlaced and do not resize it, as sanlyn suggests.
    I was thinking I could encode to MPEG-2 from the Vegas Pro timeline, via Debugmode FrameServer, instead of H.264. That way, I can avoid deinterlacing and resizing. The player will deinterlace and upscale, but then I was thinking that, if the player deinterlaces in real time, it probably uses a much lower quality deinterlacer than QTGMC or even Yadif. And then there are video players that do not deinterlace automatically by default, like VLC, where you have to enable deinterlacing manually, and many users might not know this. Then there is the advantage of better compressibility for progressive frames, resulting in a smaller file size which takes less time to upload to and download from the file hosting cloud service, and the possibility of using higher quality encoders and newer, more efficient codecs.
  13. No player deinterlaces as well as QTGMC.
  14. Then the best solution is to buy a new camcorder that can shoot progressive, and the second-best solution is to keep using the old 2007 camcorder but deinterlace with QTGMC. At least now, in 2013, I see that there are Canon camcorders that can shoot 25 progressive frames per second, and Panasonic models with up to 50, but even now there are still others, like some Sony ones, that can't shoot progressive.
  15. Originally Posted by codemaster:
    I was thinking I could encode to mpeg 2, from vegas pro timeline, via debugmode frameserver. Instead of H.264. That way, I can avoid deinterlacing and resizing. The player will deinterlace and upscale, but then I was thinking that, if the player deinterlaces it in real time, then maybe it uses a much lower quality deinterlacer than qtgmc or even yadif. And then, there are video players that do not deinterlace automatically, by default, like vlc, where you have to manually set it to deinterlace, and many users might not know this. Then there is the advantage of better compressibility for progressive frames, resulting in lower file size, and the posibility of using higher quality encoders and newer and more efficient codecs.
    Yes, 50p from QTGMC encodes to a smaller file than the same material encoded directly as 25i.

    What I meant is to produce interlaced MPEG-2 at 720x576 in Vegas directly for the DVD, in addition to your 50p MP4. The MPEG-2 encoder in Vegas is not the worst, and it is fast, but I understand it is extra work to deal with more output options. Debugmode FrameServer is used when interlaced footage needs resizing, because Avisynth will do that better, and of course the QTGMC deinterlace too.
  16. Originally Posted by codemaster:
    Then, the best solution is to buy a new camcorder that can shoot progressive, and second best solution is to continue to use old 2007 camcorder but deinterlace with QTGMC. At least now, in 2013, I see that there are Canon camcorders that can shoot 25 progressive frames per second, or Panasonic with up to 50, but even now there are still others, like some Sony ones, that still can't shoot progressive.
    Panasonic, Sony, and even Canon (this year's models) now sell 50p camcorders. Not sure if it is true 50p or some fake deinterlace from 50i, but these days I'd expect it to be 50p coming straight from a progressive sensor and not processed to 50p afterwards.
  17. Originally Posted by codemaster:
    Originally Posted by sanlyn:
    Yep. The madness continues. Transcoding and resizing,
    It's not just transcoding. It's editing. It's filming, trimming, adding titles, fades, transitions, correcting tonality, color correction, removing unwanted sounds. This obviously requires transcoding (rendering to something lossless and encoding to something lossy). This obviously requires resizing, since crt's are 1930s technology. Obviously it requires deinterlacing, since the camcorder bought in 2007 is using 1930s technology, and isn't capable of recording progressive frames. Of course I had many questions about deinterlacing, since I could not capture progressive.
    No, the work you're doing should not be re-encoded and resized repeatedly. But enjoy yourself. I'd be the first to avoid watching anything that has been degraded by what you're doing. Otherwise, end of rant. If it looks OK to you, have at it.
  18. It looks OK to me, and I prefer to edit, then render to uncompressed AVI, then encode (QTGMC, downscale the height only, CRF 24, preset veryslow, AAC-LC VBR 160 kbps with neroAacEnc), and then send a 110 MB MP4 clip, instead of just sending 2.6 GB of unedited, untouched MPEG-2 clips. It looks a lot better than YouTube clips anyway. I realize that those clips, with those codecs at those bitrates, are a final product, not meant to be transcoded again and not made with pro gear, but I prefer to edit.
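    Roughly, the encode step looks like this on the command line (a sketch; paths and file names are placeholders, an x264 build with AviSynth input support is assumed, and the .avs script is assumed to do the QTGMC deinterlace and the resize):
    Code:
    :: video: feed the AviSynth script to the x264 command-line encoder
    x264 --crf 24 --preset veryslow -o video.264 script.avs
    :: audio: encode the WAV exported from Vegas (VBR, quality 0.5)
    neroAacEnc -q 0.5 -if audio.wav -of audio.m4a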
  19. IMO you're doing it backwards.

    Originally Posted by codemaster:
    It looks ok to me
    I never said it doesn't.
  20. Then what do you think not doing it backwards would mean? Leave the camcorder clips untouched? Edit them, but let TVs and software players do the resizing and deinterlacing? If you film something, even with a consumer camcorder, wouldn't you want to trim, or correct an overexposed clip, or correct the white balance, or lower the volume where wind blows into the mic?
  21. I'd go to lossless as step #1 rather than cut/edit first. Then use lossless for all cleanup and correction, then resize, then encode as the last step. The way you described it, you're using a different sequence. Perhaps I'm reading it incorrectly.
  22. The sequence is this:
    1. Ingest footage from camcorder into Vegas Pro (directly, no transcoding involved, just copying).
    2. Trim, titles, color correction, etc in Vegas Pro.
    3. Export an AVI signpost with the Debugmode FrameServer plugin, RGB24.
    4. Open the AVI signpost in AviSynth, convert to YV12, deinterlace, resize the height to 404 px, then encode with x264 using this avs script (CRF 24, preset veryslow).

    Code:
    AVISource("E:\signpost.avi", audio=false).AssumeFPS(25,1)
    Load_Stdcall_Plugin("D:\InstalledApps\MeGUI\tools\yadif\yadif.dll")
    AssumeTFF
    ConvertToYV12(interlaced=true)
    Yadif(order=1)
    LanczosResize(720,404)
    5. Export the audio from Vegas Pro as an uncompressed PCM WAV, then encode it with neroAacEnc (VBR, q=0.5).

    6. Mux to MP4 (sketched below).

    Transcoding is done only once, and resizing is done only once, so this should be a single generation of loss, in contrast to YouTube, where there are at least two generations of loss.
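    For step 6, the mux is roughly this (file names are placeholders):
    Code:
    :: mux the x264 video stream and the AAC audio into an MP4 container
    :: -fps 25 matches the single-rate Yadif above; use 50 for a double-rate deinterlace
    MP4Box -add video.264 -add audio.m4a -fps 25 output.mp4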
  23. I wouldn't know where to begin. YV12 -> RGB24 -> YV12, and still interlaced? Color correction and any compositing were re-encoded. I'd start by importing DV as DV (a copy), then decode to lossless with Avisynth in the original colorspace. Work in lossless YUV as much as possible (which includes deinterlacing) before going to lossless RGB for other filters. YUV to RGB also involves levels expansion/compression, so that has to be handled in YUV first. That's for a start. There's not enough room in one thread to explain all the issues involved. But your mind is set, so there's no sense pushing it.

    Since we have no idea what your before/after vids look like, it's all rather moot.
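    As a bare-bones sketch of that first decode step (assuming a VfW DV decoder and a lossless codec such as Lagarith are installed; the file name is a placeholder):
    Code:
    # the DV copy decodes as interlaced YUV
    AVISource("E:\tape01.avi")
    # only needed if the decoder returns something other than YV12
    ConvertToYV12(interlaced=true)
    Open that in VirtualDub, set Video -> Fast recompress so nothing gets converted to RGB, pick Lagarith as the compressor, and save the lossless AVI.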
  24. So, when YV12 clips are ingested into Vegas Pro, their colorspace is converted to RGB and their levels are expanded from 16-235 to 0-255?

    If so, then I must convert the YV12 clips to RGB in a way that doesn't compress or expand the levels, using AviSynth and a lossless codec such as Lagarith?

    Is this why Premiere Pro outputs in YV12 all frames that have no effects applied to them, and converts to RGB only the frames with effects applied?

    Then, with AviSynth, I have to convert the signpost.avi from RGB back to YV12, again in a way that doesn't compress, expand, or clip the levels?

    And before all of this, using AviSynth and VirtualDub, I should first cut (if needed), deinterlace, resize, and denoise (if needed) the source clips, before converting them to RGB Lagarith, and only then trim and color correct them in Premiere or Vegas?

    Did I understand correctly what you explained? I don't know what you meant exactly by "your mind is set", but I certainly want to do this properly and avoid mistakes. I'm definitely not interested in doing it "backwards" like you say.
  25. You edit footage in Vegas, color correct, etc., so you have to go through RGB. I wish I could load YUV into Vegas, edit in YUV, and export YUV, but there is that one RGB conversion in between: Vegas works in RGB. So you set the Debugmode FrameServer to RGB as well, and then you convert back to YUV in Avisynth.

    Coming from the Vegas Debugmode FrameServer you have to set this in Avisynth, because it's Vegas that shifts those levels. It took me a while to figure out, with help on this forum, that it is Vegas doing that shift and not Avisynth:
    Code:
    ConvertToYV12(interlaced=true, matrix="PC.709")
    Or you use a studio RGB to computer RGB conversion in Vegas, but that Avisynth solution is better. I'm talking about DV AVI; a different format may be handled differently in Vegas.


    Do the same test in Premiere: it supposedly stays in YUV all the way, so you can even set YUV output in the Debugmode FrameServer and check the difference.
  26. So what's happening is: Vegas expands to 0-255, and matrix="PC.709" in Avisynth keeps the full 0-255 range, while the default, matrix="Rec601", compresses or clips to TV range 16-235?

    And Premiere Pro outputs YUV even where you applied the Fast Color Corrector or the Three-Way Color Corrector?
  27. Oh, it is DV video, which is standard definition, so it should be:
    Code:
    ConvertToYV12(interlaced=true, matrix="PC.601")
    Sorry, 709 is for HD video.
    Avisynth corrects it back to 0-255.
    You have to watch out for some "weird stuff", though:
    - For example, you export the Debugmode FrameServer video and load it into VirtualDub (without the ConvertToYV12 line, of course), so VirtualDub loads RGB; you do some filtering, export uncompressed RGB, load it into Vegas, and the colorspace will be OK.
    - But if you load YUV-encoded video back into Vegas without that matrix="PC.601" conversion, the colors will be washed out.

    I used to smart-render DV AVI, and even now HDV. I prefer to set everything right while shooting and use smart rendering without color corrections, so you get the YUV colorspace unchanged. I still think that even 100 years from now everybody will want to know (talking about home videos) what the real colors were, and nobody will be interested in some coloring art that ruins reality...
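    A quick way to see the difference between the two conversions, as a sketch (the signpost path is a placeholder):
    Code:
    a = AVISource("E:\signpost.avi").ConvertToYV12(interlaced=true, matrix="PC.601")  # no levels scaling
    b = AVISource("E:\signpost.avi").ConvertToYV12(interlaced=true)                   # default Rec601 scaling
    StackHorizontal(a.Histogram("levels"), b.Histogram("levels"))
    The side whose luma histogram is squeezed away from the ends is the one getting the extra levels compression.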
  28. I'm not certain where all this wild info comes from, but I see some odd procedures here. For example, why are you resizing? At one point your source video appears to be standard def 4:3, at another point you resize to a 16:9 non-compliant frame size and encode to...to what? If your camera shoots 16:9 display ratio and stores it as 720x576, then that is the correct encoded frame size and aspect ratio for 16:9 PAL video. If your camera shoots for 4:3 display, then 720x576 is still the correct encoded frame size for 4:3 PAL video. Those standards should be universally compatible with stand-alone players, PC media players, and Web players. There is no need to resize.

    You seem to be under the impression that H.264 is only for "high definition" 16:9 encoding. That isn't true.

    Originally Posted by codemaster:
    It's not just transcoding. It's editing. It's filming, trimming, adding titles, fades, transitions, correcting tonality, color correction, removing unwanted sounds. This obviously requires transcoding (rendering to something lossless and encoding to something lossy). This obviously requires resizing, since crt's are 1930s technology. Obviously it requires deinterlacing, since the camcorder bought in 2007 is using 1930s technology, and isn't capable of recording progressive frames. Of course I had many questions about deinterlacing, since I could not capture progressive.
    Stuff like this sort of makes my head swim. CRTs might be old hat, but they're still used in pro mastering labs to make commercial DVDs and Blu-rays. As far as that goes, CRTs got their start in the 1890s, so they've had about 100 years' more tweaking than LCDs, which still can't handle motion properly. The overall imaging quality of phosphor-based displays is ahead of LCDs as well. Be that as it may, today's players and displays accept interlaced content. The broadcast industry still broadcasts interlaced; are you saying that all the new TVs in the world have problems with it? What about the billions of people who still use CRTs?

    I'll accept that misinformation and disinformation are rife in this hobby. I'll also concede that the way you're approaching this project is pretty much the way the average consumer goes at it. I'll assume you're starting with DV source, so all isn't lost; likely you have decent DV quality to start with, so you have some leeway in the number of missteps you can make before you hit the critical-damage point.

    I'll recommend a procedure that flies in the face of mass-marketing and which at first glance gets most average users bent out of shape. Others can modify as they wish. You are asking about "the best" way. There are shortcuts, and there's more than one way to do it, and there are lower-quality methods that are very popular but not entirely destructive. You can diverge from this procedure at will.

    1. Get your DV onto the computer as a copy, not as a capture or recording, as you've been doing. Save it as DV with no modifications.

    2. Open the DV in AviSynth and decode it to a lossless YV12 AVI. Use Lagarith to save drive space and keep the YV12 unmodified.

    3. Make preliminary color corrections in YUV, using histograms and vectorscopes to verify that you're within the acceptable broadcast range of 16-235 for luma and 16-240 for chroma (see the sketch after this list). If your target is web or PC display, the allowed range is RGB 0-255. You might be unaware that YUV media can exceed the equivalent of RGB 0-255, in which case you're in trouble whether you want broadcast or web/PC output. Converting YUV to RGB expands YUV 16-235 to RGB 0-255. Clipping occurs when the YUV range exceeds recommended values, which it often does with consumer cameras, and which it almost always does with home-video kamikazes who like to shoot into the sun and leave the autocolor and autowhite controls turned on.

    4. Leave RGB color correction for later (if you need it; a lot of correction can be done in YUV). Vegas and Adobe make really nice NLEs (I use one of them). But they're miserable failures as denoisers, and underachievers as encoders. If you want titles, superimpositions, compositing, dissolves, fades, create and work those in lossless media, and save the results as lossless. You can get superior cleanup in Avisynth and, when you get into RGB later, there are some pretty nice filters in VirtualDub as well. Deinterlacing isn't always necessary, but you've guessed by now that QTGMC is the hot competitor nowadays (wait until you see what it can get from Yadif output in one of the repair modes).

    5. Conversion to RGB gets better treatment in AviSynth. There are sophisticated plugins for that purpose, with often shockingly better results. The same is true for getting back to YV12 from RGB.

    6. Assemble your movie as lossless and save it that way. The encoders you mention aren't disasters, but there are better ones. Some of the better ones are from budget shops, some are free. Audio: always process audio as lossless PCM. Return to lossy audio encodes at the end. Your ears will like you for it.

    7. Encode to frame and fps standards for your MPEG or BD/AVCHD.

    8. Author separately. There are free authoring apps, and there are more fully-featured budget jobs.

    9. Burn separately. The best one nowadays is free: ImgBurn.
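    For step 3, a minimal way to inspect the YUV levels before any RGB conversion (a sketch; the file name is a placeholder):
    Code:
    AVISource("E:\lossless.avi")
    ColorYUV(analyze=true)        # prints min/max values for luma and chroma on each frame
    Histogram("levels")           # draws luma/chroma histograms alongside the frame
    # Limiter(16, 235, 16, 240)   # optional: clamp to broadcast-legal range after correcting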

    I would adhere to standard output formats. If many users of VLC Player can't tell interlaced from cole slaw, it's their problem, not yours. What about users who don't know that some editions of WMP don't play at the correct aspect ratio?
  29. Wow, this is nice, but let's be real: this is not some kind of restoration, and I certainly would not do all that. It is important to have a solid camcorder and shoot it clean, with colors balanced, so you don't have to bother with it later. I'd focus on creating the video, maybe give it a tone at the end, but just once, not fixing every shot later.

    He is going to be fine with what he is doing, except for that DVD. I'd produce it right from Vegas and encode the MPEG-2 there right away to save time; it is not that bad, and no resize or deinterlace is involved. And what's wrong with Neat Video, seriously? No knowledge is needed, and the noise can be fixed more than well enough (if he bothers at all).
  30. Originally Posted by _Al_:
    Wow, this is nice, but let's be real this is not some kind of restoration, I certainly would not do that, it is important to have solid camcorder, shoot it clean, colors balanced, not to bother with it later. I'd focus on creating video , maybe give it a tone at the end but just once, not fixing it later, every shot etc.

    He is going to be fine what he is doing, except that DVD. I'd produce it right from Vegas and encode mpeg2 in there right away, to save time, it is not that bad, there is no resize, deinterlace involved. And what's wrong with Neat Video, seriously, no knowledge needed and noise could be fixed more than good (if he bothers at all).
    What's wrong with NeatVideo is:
    a) It addresses many types of noise, but not all types of noise.
    b) It is usually overkill for DV source.
    c) It works only in RGB, properly converted from YUV ("properly" does not normally include Vegas).
    d) It is misused by owners who run NV at default settings and ruin video.

    Who said that editing and processing video in the manner described was "restoration"? If you get down to using NeatVideo, you're in the restoration business. It has its place and is sometimes essential with grungy material.

    Besides, we don't even know what the O.P.'s source looks like.