VideoHelp Forum
  1. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    How and when you crop borders depends on what you're doing. With YV12, Crop() requires multiples of 2. However, the YV12 -> colorspace conversions depend on which conversion you're making; I believe the conversions want multiples of 4. Converting and cropping are not the same thing (obviously).

    I often crop early on to prevent "auto" filters from being thrown off by big black borders. But I restore the borders and original frame size immediately (or re-establish the borders so that the image is more centered, if necessary). Many plugins want multiples of 8, sometimes 4, because many work with 8x8 pixel blocks.

    If my source video has borders, I keep them -- I often center the image, but I don't believe in resizing video, especially dirty crappy video that was difficult to fix to start with, because resizing distorts to one degree or another. That's just me. Many people do it. A classic film buff wouldn't tolerate that kind of resizing and possible distortion of aspect ratio on playback. They'd rather accept the borders.
    Last edited by sanlyn; 25th Mar 2014 at 20:06.
  2. Are you guys using "vanilla" 2.5.8 or one of the MT builds ?

    That error message pops up in the MT build
    http://forum.doom9.org/showthread.php?p=1374834#post1374834

    I think just about every computer from the last 10-15 years has the MMX instruction set, so that rules that part out
  3. I'm using a multithreaded 2.6 32 bit version (on Win7 64 bit). Occasional problems with some filters. But nothing like in that Doom9 link.
  4. BTW, I see absolutely zero reasons not to use avisynth 2.6.x

    It's fully backwards compatible, and offers improvements in other areas, such as other colorspaces, better and more chroma resampling options, and bugfixes.

    You can install it over 2.5.8 (takes a few seconds) or revert back to 2.5.8 anytime (takes a few seconds).

    Jagabo, do you get that error message with that code example? I'm using "vanilla" 2.6 alpha4 and there is no error
  5. Originally Posted by poisondeathray View Post
    Jagabo, do you get that error message with that code example? I'm using "vanilla" 2.6 alpha4 and there is no error
    The MMX error with ConvertToRGB32()? No. I'm running on a Core i5 2500K.
  6. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Originally Posted by poisondeathray View Post
    That code works fine here on your video sample... output video is 710x576 as expected, no errors
    Thanks for confirming. EDIT: I'm currently using vanilla 2.5.8.

    BTW , I see absolutely zero reasons not to use avisynth 2.6.x
    It's on my list!

    BTW, if one does end up correcting the brute-force way, if only for this project,

    Originally Posted by poisondeathray View Post
    Originally Posted by fvisagie View Post

    Just checking, but one brute-force way would be to create Trim()s to separate treatments? It would probably be very tedious but at least as accurate as the Trim()s?

    Yes, but the problem with that is there would be no keyframe interpolation. The changes will be abrupt, not smooth. There will be jumps as your settings switch to the next set, instead of gradual transitions. You have no control.
    this function for smoothing the changes between different filter strengths might just come in handy!
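    As a rough illustration (frame numbers and settings invented, not from this thread), the brute-force Trim() approach with a Dissolve() to ease the switch between treatments might look like:

    Code:
    a = Trim(0, 499).Tweak(sat=1.1)     # first treatment
    b = Trim(500, 999).Tweak(sat=1.3)   # second treatment
    # Dissolve crossfades the last 10 frames of a into the first 10 of b,
    # so the change is eased in rather than abrupt
    Dissolve(a, b, 10)
    Note that Dissolve() blends the frames themselves (and shortens the result by the overlap); it doesn't interpolate the filter settings.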

    Thanks for your inputs also, Sanlyn.
  7. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Originally Posted by sanlyn View Post
    I often crop early on to prevent "auto" filters from being thrown off by big black borders. But I restore the borders and original frame size immediately (or re-establish the borders so that the image is more centered, if necessary). Many plugins want multiples of 8, sometimes 4, because many work with 8x8 pixel blocks.
    Sanlyn, how do you typically do that in Avisynth? Overlay the filtered onto the original, taking care of correct placement etc?
  8. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    I use Crop(), and when I'm ready to restore borders I use AddBorders().
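    A minimal sketch of that pattern (border sizes are illustrative only):

    Code:
    Crop(16, 16, -16, -16)      # remove the black borders before filtering
    # ... "auto" filters run here, with no borders to throw them off ...
    AddBorders(16, 16, 16, 16)  # restore the original frame size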
    Last edited by sanlyn; 25th Mar 2014 at 20:06.
  9. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Thanks!
  10. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    @sanlyn,

    Just a comment on your colour correction (I've never had the patience to do shot-by-shot colour correction, so feel free to disagree from your greater experience)...

    If something is clipped at 235 or 255, or at whatever value the process that brought this video to my door managed to clip the white to, I think it should go at 235, never lower. I think it looks really wrong to have clipped white reduced to a shade of grey.

    Through decades of watching TV, we all know that sometimes a bright part of the image will be clipped to pure white. Even in professional productions, sometimes this is unavoidable (without extra lighting, which may not always be available; or a radically lowered contrast, which may look horrible). However, you almost never get clipped grey (i.e. flat clipped areas reduced to less than 235) unless something is badly wrong. It just looks weird.

    In some of your colour correction, unless I'm mistaken, it looks like you drag clipped white down to 220ish or even lower. I think this looks very strange and objectionable. I would (almost) never ever do this.


    It's a tough call if you get video with Y=255 and non-zero U and/or V, because even remapping peak Y=235 is going to leave illegal colours. In this instance, you're in "least bad result" rather than "best result" territory. However, that's not a big issue in this clip.
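    For what it's worth, bringing superwhites down to 235 in AviSynth can be sketched like this (illustrative values, not anyone's exact settings):

    Code:
    # Compress luma 16-255 into 16-235; coring=false so values outside
    # 16-235 are not clipped before the remap
    Levels(16, 1.0, 255, 16, 235, coring=false)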

    Cheers,
    David.
  11. Some notes on cropping and adding borders:

    Progressive YV12 can only be cropped mod 2. Interlaced YV12 can only be cropped mod 4 on the vertical axis, mod 2 on the horizontal axis. This is because YV12 encodes chroma at half the resolution of the luma -- odd widths and heights would require half pixels. Interlaced YV12 logically splits the frame into two half height images and each of those half height images must be mod 2, hence the mod 4 height requirement.

    If absolutely necessary you can convert to YUY2 to crop mod 1 on the vertical axis, or RGB to crop mod 1 on both axes. But be aware that conversion back to YV12 (and almost all high compression codecs only work in YV12) will cause the colors to blur as you lose the alignment of the chroma pixels. And cropping an odd number of lines from the top of interlaced video will reverse the field order.

    Perform any sharpening while there are no borders, to prevent oversharpening halos at the edge between the picture and the black border. Some other spatial filters can have problems with black borders too.

    When adding borders back, try to stick with mod 8 border sizes and alignments. That will reduce DCT ringing at the edge between picture and border when encoding with MPEG codecs, and it will also result in slightly better compression.
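    Those rules sketched out (sizes illustrative; 8 satisfies both the mod 4 interlaced height restriction and the mod 8 border preference):

    Code:
    Crop(8, 8, -8, -8)      # interlaced YV12: width crop mod 2, height crop mod 4
    # ... sharpening and other filtering happens here, borderless ...
    AddBorders(8, 8, 8, 8)  # mod 8 borders to limit DCT ringing when encoding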
  12. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Originally Posted by jagabo View Post
    Perform any sharpening while there are no borders, to prevent oversharpening halos at the edge between the picture and the black border. Some other spatial filters can have problems with black borders too.
    That's another thing I hadn't considered, thanks, jagabo.

    Cheers,
    Francois
  13. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Originally Posted by fvisagie View Post
    Ignoring the detail of my to-be-improved workflow above, in essence my aim with it is to:
    • process SD with as little getting lost as possible (taking into account your and the others' earlier inputs)
    • add HD to the same workflow with as little getting lost as possible
    The approach I took with that aim: the choices with the last item above are either to scale SD to square pixel HD losing some quality in the process, or scaling HD to anamorphic SD losing some quality in the process. With DV already grainy at SD on the one hand, and with HD's higher starting resolution on the other, the latter option sounded preferable to me hence the conversion of HD content to anamorphic SD format.
    This choice should be made on whether you might want an HD output, or only need an SD output. For the former you must use HD; for the latter, SD will do. Downscaling HD to SD inevitably loses quality, but with an SD output you're stuck with that. Upscaling SD to HD doesn't technically need to lose anything, though it's unlikely to be mathematically losslessly reversible. SD looks kind of poor upscaled to HD, but you can't help that either.
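    For the downscale route, the HD-to-anamorphic-SD step can be sketched as follows (PAL dimensions assumed; interlaced material would need field-aware resizing, ignored here):

    Code:
    # square-pixel HD (e.g. 1920x1080, deinterlaced) -> 720x576 anamorphic 16:9 PAL SD
    Spline36Resize(720, 576)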

    The colour conversion (one way or the other) is a tiny secondary issue compared with the resolution choice.

    I only edit all-SD projects in SD. Even then, AVIsynth will often be used to deinterlace the result (and maybe upscale it) for YouTube etc.


    That conversion would need to pay attention to ensuring the output comes out in the right destination non-square pixel aspect ratio, meaning the correct standards-based dimensions must be used. Then it merely becomes an issue of deciding whether to use 12/11 or 59/54 as you pointed out.

    Does this hopefully clear up something for you; otherwise, what is the big thing that I'm missing?
    Your calculations are all correct. ...

    Lastly, as to whether to base everything on a horizontal resolution of 720 or 704, I'd measured the outputs of both (correctly processed and encoded I assure you!) on all devices I could lay my hands on. This issue ultimately boils down (in my experience at least) to the choice between correct rendering on analogue DVD outputs and imperfect rendering on digital devices @ 704 pixels (but with loss of horizontal resolution on the latter compared to 720), vs. imperfect rendering on all devices but better horizontal resolution on digital ones @ 720 pixels. Since the rendering error is in all cases at most ~2.5%, in my view that makes the decision here a subjective and personal one, also largely influenced by intended audience etc.
    That is fine. I don't have these digital devices that mishandle 704 pixels - just PCs that handle it fine - so I can't see any benefit to using 720. You do not have any more "resolution" the way you're using it - you have 8 extra pixels each side. If you were encoding a PAR of 16:15, you'd have more resolution.

    The reason I prefer 704 is because there is no need to worry about PARs, (the DAR is 4x3), and no great need to crop away perfectly good picture information from 702x575 analogue (or early digital) sources and scale up what's left to get a full clean 720x576 with ITU PAR.
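    In AviSynth terms the two framings differ only by a crop or a pad (sketch):

    Code:
    Crop(8, 0, -8, 0)        # 720x576 -> 704x576, DAR 4:3, no PAR worries
    # or, going the other way:
    # AddBorders(8, 0, 8, 0) # 704x576 -> 720x576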

    I agree there's nothing wrong with what you have chosen to do (except the extra scaling of SD it forces you to do, which is hardly a big issue in the context of running it all through deshaker).

    Whichever way you choose, there are far bigger issues with impact on the final video: careful editing, SD/HD choice, levels + colour correction, deinterlacing, final encoding etc.

    Cheers,
    David.
  14. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    The nice thing about processing your own video is that you have plenty of choices and, ultimately, you please yourself. In this case I didn't arbitrarily select a value of 235 or 220, I just adjusted until I thought it looked accurate on a calibrated monitor. Poisondeathray's version of the same clouds hit a bright peak of 255 (you can test his post yourself with a pixel sampler). On a TV calibrated to D6500 standards, RGB 255 tends to "bloom"; on a PC, it looks OK. TV and PC don't display images in the same way, and print media use different standards as well.

    True, most people don't use calibrated displays. But that's not my problem or your problem. I prefer to adjust to established standards, which is the way pro and mastering labs do it. If you get into advanced display and correction controls with high-end apps like Premiere Pro, After Effects, or even Photoshop, you find handy controls for checking your work against the display standard you aim for. A "correct" setup should look "correct" on a "correct" display device, or at least reasonably viewable on a reasonably well adjusted display. Another reason for working to the intended standard: otherwise you spend many hours and much effort making a video look good on your PC, and it ends up looking entirely different on your TV -- a common occurrence, and one that drove me nuts for years until I finally concluded that I was trying to mix standards that just didn't like each other. So if my objective is TV display, that's what I set up for.
    Last edited by sanlyn; 25th Mar 2014 at 20:07.
  15. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Originally Posted by sanlyn View Post
    On a TV calibrated to D6500 standards, RGB 255 tends to "bloom"
    I believe that depends on the specific TV (and the technology type).

    Anyway, if I owned such a TV, I'd expect to see clipped whites "blooming". Making them not "bloom" would look really strange, and make the fact that they were clipped stick out far more.

    IMO. Subjective!

    Cheers,
    David.

    P.S. I'm talking about clipping that cannot be removed (i.e. clipped at the absolute limit, e.g. Y=255). Any "clipping" that can be removed (i.e. intact Y above 235), should be.
  16. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    @poisondeathray,

    Code:
    levels(0,1,255,0,237,false, dither=true)
    invert
    hdragc(max_gain=2, coef_sat=0.75)
    invert
    levels(12,1,255,0,255,false, dither=true)
    hdragc(max_gain=2, coef_sat=0.88)
    tweak(sat=1.2, coring=false)
    Very impressive results you got with that short snippet! Could you talk me through the sequence of events here, please?

    The first invert.hdragc I guess was for the bright regions, and the next for the darker areas?

    And then, why the two levels() statements? With the first you semi-legalised luma, but why did you choose 237 instead of the usual 235? And by using output_low=0 you allowed the original 16 to slip a little lower in that mapping? The second levels() deepens the blacks, throwing away anything under 12? Did you arrive at 12 by inspection, or is there some rule-of-thumb you used here?

    And I guess the 'dither' parameter got added in 2.6.0?
  17. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Originally Posted by 2Bdecided View Post
    I agree there's nothing wrong with what you have chosen to do (except the extra scaling of SD it forces you to do...).
    Thanks, I'll now rest well tonight! About the forced scaling you mention: if you are (hopefully) referring to scaling back to 720x576 after cropping away junk vertical pixels earlier on, that's no longer going to happen. It's now clear that the poor results I got from testing 704-pixel output on my TV were caused by the equipment. Correlating the aspect-ratio errors it gave for the various formats I tried with how those formats display on other equipment, and with how they are expected to display, any decently calibrated DVD/TV combo should have displayed my 704-pixel test content nearly perfectly. Therefore I am now also convinced that 704-pixel output is the best common format for displaying Rec.601 PAR, as you have been advocating.
  18. Originally Posted by fvisagie View Post
    @poisondeathray,

    Code:
    levels(0,1,255,0,237,false, dither=true)
    invert
    hdragc(max_gain=2, coef_sat=0.75)
    invert
    levels(12,1,255,0,255,false, dither=true)
    hdragc(max_gain=2, coef_sat=0.88)
    tweak(sat=1.2, coring=false)
    Very impressive results you got with that short snippet! Could you talk me through the sequence of events here, please?

    The first invert.hdragc I guess was for the bright regions, and the next for the darker areas?

    And then, why the two levels() statements? With the first you semi-legalised luma, but why did you choose 237 instead of the usual 235? And by using output_low=0 you allowed the original 16 to slip a little lower in that mapping? The second levels() deepens the blacks, throwing away anything under 12? Did you arrive at 12 by inspection, or is there some rule-of-thumb you used here?

    And I guess the 'dither' parameter got added in 2.6.0?





    Yes, the dither argument for Levels was added in 2.6.x; I mentioned that earlier as an alternative to smoothlevels.

    I just "eyeballed" it using histogram() as a rough guide. I use AvsPmod, so you can preview by pressing F5. Multiple versions of scripts can be stored in tabs and swapped with the number keys (easy to compare).

    Just comment out the lines and go step by step - it should make sense. Yes, the first invert + HDRAGC was for the cloud region, the next for the darker grass region. It's just a crappy workaround. I was hoping some of the other folks would suggest other avisynth methods, because it's easier, with more control (at least for me), in other programs.

    There are problems with HDRAGC: if the levels are beyond "legal" range, it won't work properly. You can use HDRAGC's "shift" parameter (it takes +/- values), but I prefer to use levels. See the HDRAGC documentation; there are many more settings you can tweak.

    I used 235 for the first levels call initially, but looking at the end result there was room to move a bit higher (try it out while watching histogram()). There might be 1 or 2 pixels that are too bright. You're allowed overshoots (it's in the spec).

    Yes, the 2nd levels is to bring down the black level, and yes there are some junk dark pixels that go below 16, but those are irrelevant. No scientific technique for that value; I just looked at the waveform in relation to the usable darkest areas (the shirt or dark rectangular "thingy"). I suppose you could have used input low with the 1st levels - it's probably smarter to do it that way (fewer calls, faster), but I broke it out into steps - it's easier to follow the thought process.

    These monitoring aids are just that - things to assist you - they are subject to interpretation and must be taken in context. E.g. a junk dark pixel may throw off your calculation if you only look at min/max values. I.e. don't make the mistake of reading only the waveform or histogram - it's the image that is important.
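    Putting that explanation back against the script, the steps read roughly like this (comments are my interpretation of the post above):

    Code:
    levels(0,1,255,0,237,false, dither=true)  # pull levels toward legal range so HDRAGC behaves
    invert                                    # flip: the bright clouds become "shadows"
    hdragc(max_gain=2, coef_sat=0.75)         # lift the (inverted) cloud region
    invert                                    # flip back
    levels(12,1,255,0,255,false, dither=true) # bring the black level down; junk below ~12 goes
    hdragc(max_gain=2, coef_sat=0.88)         # lift the darker grass region
    tweak(sat=1.2, coring=false)              # restore saturation lost along the way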
  19. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    I just want to second what pdr has said several times now: if you're doing this properly scene by scene, AVIsynth is not the way to do it. If you're happy to apply the same tweaks, or same range of automated tweaks, to a whole set of similar scenes, AVIsynth is OK.

    e.g. for a given picture mode on my camcorder, I have a script which gets me near enough on most indoor artificial lighting shots (heavy denoising, radical colour shift), another script for indoors daylight (gamma increasing, contrast reducing), and another script (which does very little in terms of colour/levels) for outdoors daylight. I still find myself tweaking problem shots individually in Sony Vegas. With all-HD content, I usually edit the original files, fix the worst in the NLE, render, then apply an overall look / correction in AVIsynth to the final output (e.g. uploads to YouTube need to be slightly brighter than things authored to DVD - IMO!).

    Cheers,
    David.
  20. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Originally Posted by poisondeathray View Post
    Code:
    levels(0,1,255,0,237,false, dither=true)
    invert
    hdragc(max_gain=2, coef_sat=0.75)
    invert
    levels(12,1,255,0,255,false, dither=true)
    hdragc(max_gain=2, coef_sat=0.88)
    tweak(sat=1.2, coring=false)
    I get an Avisynth error on dither=true. Is it 'cause I'm not using your later version of Avisynth? I'm on 2.5.8.
    Last edited by sanlyn; 25th Mar 2014 at 20:07.
  21. Originally Posted by sanlyn View Post
    I get an Avisynth error on dither=true. Is it 'cause I'm not using your later version of Avisynth? I'm on 2.5.8.
    Yes, added in 2.6.x (among many other things... why are you guys still using 2.5.8??? I've been using 2.6 for about 2 years and it's perfectly stable). It's not important on this type of footage - you can get similar results by removing dither=true; the waveform just won't look as "pretty".
  22. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Yes, I know, dithering smooths those rough peaks. Shucks. Guess I have to move with the times -- but will back up my OS first. I'll hit the 2.6 docs later and get the details about anything I'd have to change, and move back and forth if necessary.
    Last edited by sanlyn; 25th Mar 2014 at 20:07.
  23. Off the top of my head, masktools needs a different dll - the "26" version, not the "25" version. Don't keep both versions in the plugins folder.

    mt_masktools-26.dll

    I've switched back and forth many times over the last 2 years without issues (just reinstall over the top). It wasn't because of any specific problems, but to test things like chroma resampling quality and other miscellaneous things.
  24. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Yes. And it's the '25' version that lets me use QTGMC in YUY2; '26' won't let me do it. I've been using both, but by keeping them in different folders and loading the one I need in the script, not as "auto" from the default plugin folder. I'll just have to work around it.
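    That per-script loading might look like this (paths are illustrative):

    Code:
    # masktools dlls kept outside the autoload plugins folder;
    # load whichever build matches the installed AviSynth
    LoadPlugin("C:\avs\masktools25\mt_masktools-25.dll")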
    Last edited by sanlyn; 25th Mar 2014 at 20:07.
  25. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Before combining the prepared SD and HD content (the latter formatted to SD) onto the same timeline, how should I handle the 2-pixel top border cropped off the SD content? By this point it's 704x574 pixels, and I'm not sure how to make that call. From a technical point of view, the options available seem to be:

    1. scale to 704x576: aspect ratio needed for symmetry changes from 1.33_ to 704/574*59/54=1.340 & scaling involved
    2. crop 2 more horizontal pixels {2*(4/3)/(59/54)} and scale to 704x576: aspect ratio needed for symmetry changes from 1.33_ to 702/574*59/54=1.336, scaling involved & loss of pixels
    3. add black border: top 2 pixels will flash at transitions between SD & HD
    4. add black border and replace HD top 2 pixels with black: loss of pixels
    5. restore original border content: has similar colouring as rest of frame at least, but not as attractive as clean frame

    At this stage it seems to be the 4th option for me, but I'd appreciate your inputs. I feel it's important to get a feel for which approach one would 'generally' expect to come out best. I would like to (and probably need to from a practicality point of view) apply the same treatment to all SD footage landing up on the timeline.
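    Option 4 might be sketched like this (sd_clip and hd_clip are placeholder names; field-order caveats for interlaced material apply to any vertical padding):

    Code:
    sd = sd_clip.AddBorders(0, 2, 0, 0)  # pad 2 black lines back on top -> 704x576
    hd = hd_clip.Letterbox(2, 0)         # blank the HD clip's top 2 lines too, so
                                         # the border doesn't flash at transitions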

    Thanks,
    Francois
  26. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Originally Posted by poisondeathray View Post
    These monitoring aids are just that - things to assist you - they are subject to interpretation and must be taken in context. E.g. a junk dark pixel may throw off your calculation if you only look at min/max values. I.e. don't make the mistake of reading only the waveform or histogram - it's the image that is important.
    Fantastic explanation, just what I was hoping for. Thanks, poisondeathray.

    Originally Posted by 2Bdecided View Post
    I just want to second what pdr has said several times now: if you're doing this properly scene by scene, AVIsynth is not the way to do it. If you're happy to apply the same tweaks, or same range of automated tweaks, to a whole set of similar scenes, AVIsynth is OK.
    Noted, thanks. I'm aiming (realistically or otherwise, time will tell) for something fairly conservative that gets me near enough on most scenes, i.e. reduces the number of problem scenes that need individual attention.

    e.g. for a given picture mode on my camcorder, I have a script which gets me near enough on most indoor artificial lighting shots (heavy denoising, radical colour shift), another script for indoors daylight (gamma increasing, contrast reducing), and another script (which does very little in terms of colour/levels) for outdoors daylight.
    It would be very kind if you were to share those scripts. Just for educational use, I'm already anticipating the caution of 'you can't apply these as is'!
  27. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Originally Posted by fvisagie View Post
    Before combining the prepared SD and HD content (the latter formatted to SD) onto the same timeline, how should I handle the 2-pixel top border cropped off the SD content? By this point it's 704x574 pixels, and I'm not sure how to make that call.
    If you weren't using deshaker, and you were authoring to DVD, I'd have said don't crop them off to start with. Most of the 4x3 DVDs in the world (and most of the 4x3 broadcasts in the world) have those two half lines as half lines, i.e. with half the line blanked. Given the effort required just to reverse this (e.g. deinterlace, crop, rescale, re-interlace; or video inpainting), unless you're doing those things anyway, it's mad to go down that route.

    If you're uploading to YouTube and/or outputting HD, I'd suggest clean borders are an advantage, so stick with the cropping, and make sure every pixel is filled with real usable video after the resize - no little borders (just, if needed, large ones to preserve 4x3 and 16x9 as-is on the same project, if you ever choose to do that).


    Depending on which modes you use and how steady your footage is, deshaker may well remove those borders anyway, so you're gaining nothing by cropping them. Some of the questions you're asking make me suspect that you still haven't really played with deshaker. Try it, see just what it does to your video (i.e. depending on settings: how much it crops off, or how deep a nasty border it leaves), and then decide. If you feed it 704x576 it will return 704x576, and then you'll have no other decision to make.


    In short, if your workflow lets you keep the original fields intact, do that - even if it means putting up with a tiny aspect ratio error and/or a tiny border. If your workflow does not let you keep the original fields (deshaker involved, or going to progressive/YouTube, or going to HD, or vertically resizing a lot), then go for all clean pixels.

    If I was you, I'd take the best SD footage I had, and I'd do a split-screen comparison of a "keep original fields intact workflow" (i.e. just levels and horizontal crop) vs whatever you are thinking of using (levels, cropping, deshaker, scaling, ...?), and author it correctly to a DVD. Then watch it on a TV and a PC. Check that all the deshaking and scaling isn't unacceptably blurring the picture (especially in comparison with the presumably pristine HD clips you're including).

    Cheers,
    David.
    Last edited by 2Bdecided; 21st Mar 2013 at 09:52.
  28. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Originally Posted by fvisagie View Post
    Originally Posted by 2Bdecided View Post
    e.g. for a given picture mode on my camcorder, I have a script which gets me near enough on most indoor artificial lighting shots (heavy denoising, radical colour shift), another script for indoors daylight (gamma increasing, contrast reducing), and another script (which does very little in terms of colour/levels) for outdoors daylight.
    It would be very kind if you were to share those scripts. Just for educational use, I'm already anticipating the caution of 'you can't apply these as is'!
    I'm hoping you can't apply them at all, since it would be really unfortunate if your camcorder was as bad as my old SD one! Also, I've played fast and loose with aspect ratios, because my oldest camcorder put so much junk on various sides. I chose just to keep the clean pixels and resize them to 4x3, except when it would have made people at all fatter, or visibly thinner. I've discovered that no one notices if you make people fractionally thinner. Of course, if I were re-writing these today, I'd do it properly. These scripts range from 1 to 5 years old! I haven't sanity checked them, but they look OK.


    Anyway, indoors crappy lighting...
    http://forum.doom9.org/showthread.php?p=1397825#post1397825
    (output is square pixel 16:9 50p for subsequent editing)

    An old approach for indoors excellent lighting 4x3 upscaling...
    Code:
    AVISource("source.avi",pixel_type="yuy2")
    converttoyv12(interlaced=true)
    levels(32,1.0,255,16,235,coring=false)
    TempGaussMC_Alpha3()
    
    nnedi(field=0,dh=true).TurnRight().nnedi(field=0,dh=true).TurnLeft()
    crop(16,0,-16,0)
    rs=last
    BlindDeHalo3(rx=6.5,ry=6.5,strength=60)
    limitedsharpenfaster()
    spline36resize(1280,960)
    sharpen(0.75)
    grainfactory3(g1str=2, g2str=4, g3str=5)
    addgrainc(0,2)
    spline36resize(1440,1080)
    limitedsharpenfaster()
    spline36resize(960,720)
    overlay(rs.spline36resize(960,720),mode="Lighten",opacity=0.5)
    grainfactory3(g1str=2, g2str=4, g3str=5)
    letterbox(10,10)
    (Includes lots of grain at various resolutions to try to make it look slightly less artificial when upscaled. I hate dehaloing, but my old camcorder has some serious halos!)


    Outside HD footage may get this...
    Code:
    video=mpeg2source("welsh train.d2v")
    audio=mpasource("welsh train MPA PID 814 DELAY 0ms.mpa")
    audiodub(video,audio)
    
    QTGMC()
    
    tweak(sat=1.20)
    levels(8,1.0,245,0,255,coring=false)
    
    spline36resize(2560,1440)
    sharpen(1.0)
    limitedsharpenfaster()
    pointresize(1280,720)

    This was OK under our crappy halogen kitchen lights... (HD camcorder)
    Code:
    levels(20,1.15,145,0,255,coring=false)
    tweak(sat=1.2,coring=false)
    u=utoy()
    v=vtoy()
    u=u.Levels(16,1.0,230,0,255,coring=false)
    v=v.Levels(10,1.0,255,0,255,coring=false)
    ytouv(u,v,last)
    #levels(8,1.0,138,0,255,coring=false)
    
    hdragc()
    ...

    etc etc. The levels are no use to you, because your camcorder will be different. (My HD camcorder has a very different luma range to my SD one, and doesn't always push whites out to 255 or even 235, hence the levels commands that expand the range, rather than reduce it).

    You will see that I do horrible things to try to make murky footage sharper in quick and dirty ways. It looks a bit artificial, but things like sssharp are just too slow. With an SD output, it doesn't matter nearly so much, though I still tend to bodge it a little when downscaling from HD to get some sharpness in SD.


    This is the kind of horrible mess you get when you edit first, and then realise you have different sections with different problems that need fixing in AVIsynth... (HD camcorder, HD input, SD output)
    Code:
    video=mpeg2source("christmas.d2v")
    audio=mpasource("christmas MPA PID 814 DELAY 0ms.mpa")
    
    audiodub(video,audio)
    
    separatefields()
    
    spline36resize(880,1080) # reduce resolution to speed up denoising
    addborders(0,0,0,8) # must have used an intermediate file at some point that needed mod16 frame size
    
    #1 0-151 = 0-255 > 16-235
    #2 152-408=normal
    #3 409-724=grotto
    #4 725-1127=normal
    #5 1128-1192=photo
    #6 1193-1256=normal
    #7 1257-1321=photo
    #8 1322-4331=normal
    #9 4332ish-4509=grotto-ish
    #10 4510-4835=0-255 > 16-235
    
    
    b1=last.trim(0*2,151*2+1)
    b2=last.trim(152*2,408*2+1)
    b3=last.trim(409*2,723*2+1)
    b3a=last.trim(724*2,724*2+1)
    b4=last.trim(725*2,1127*2+1)
    b5=last.trim(1128*2,1192*2+1)
    b6=last.trim(1193*2,1256*2+1)
    b7=last.trim(1257*2,1321*2+1)
    b8=last.trim(1322*2,4331*2+1)
    b9=last.trim(4332*2,4509*2+1)
    b10=last.trim(4510*2,0)
    
    b1=b1.Levels(0, 1.0, 255, 16, 235, coring=false).limitedsharpenfaster()
    b2=b2.Levels(5, 1.0, 225, 0, 255, coring=false).tweak(sat=1.2,coring=false).limitedsharpenfaster()
    b3=b3.Levels(24, 2.1, 100, 0, 255, coring=false).mc_spuds() #.TemporalSoften(1,50,50)
    b3a=b3a.Levels(10, 1.0, 125, 0, 255, coring=false)
    b4=b4.Levels(5, 1.0, 225, 0, 255, coring=false).tweak(sat=1.2,coring=false).limitedsharpenfaster()
    b5=b5.Levels(0, 1.1, 255, 20, 230, coring=false).limitedsharpenfaster()
    b6=b6.Levels(5, 1.0, 225, 0, 255, coring=false).tweak(sat=1.2,coring=false).limitedsharpenfaster()
    b7=b7.Levels(0, 1.1, 255, 20, 230, coring=false).limitedsharpenfaster()
    b8=b8.Levels(5, 1.0, 225, 0, 255, coring=false).tweak(sat=1.2,coring=false).limitedsharpenfaster()
    b9=b9.Levels(20, 1.5, 160, 16, 255, coring=false).tweak(sat=0.8,coring=false).mc_spuds()
    b10=b10.Levels(0, 1.0, 255, 16, 235, coring=false).limitedsharpenfaster()
    
    
    b1+b2+b3+b3a+b4+b5+b6+b7+b8+b9+b10
    
    crop(0,0,0,-8)
    
    # Sharp resize rejected because it amplifies the noise!
    old_width=width(last)
    new_width=704
    old_height=height(last)
    new_height=576
    spline36resize(880,old_height).spline36resize(new_width*2,old_height).sharpen(1.0).spline36resize(new_width*2,new_height).pointresize(new_width,new_height)
    
    blur(0.0,1.0)
    sharpen(0.0,0.5)
    
    assumetff()
    separatefields()
    selectevery(4,0,3)
    weave()
    
    #converttorgb(interlaced=true, matrix="Rec709")
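
    For anyone puzzled by the *2 arithmetic in the trim() calls above: separatefields() doubles the frame count, so original frame N becomes fields 2N and 2N+1. A minimal sketch of the mapping (filename is a placeholder):
    Code:
    AviSource("example.avi")    # placeholder source
    AssumeTFF()
    separatefields()            # frame N -> fields 2N and 2N+1
    # To keep original frames 100-200 inclusive, take fields 200 through 401:
    trim(100*2, 200*2+1)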

    I'm posting these because you asked and I didn't want to rudely refuse, but my advice would be: don't copy them! Figure out what you need for your footage.

    Cheers,
    David.
  29. One technique you can use to interactively view the effects AviSynth filters have is to use the Animate() command. For example:

    Code:
    AviSource("video.avi")
    Trim(0,100)
    Animate(0,100, "Levels", 0,0.01,255,0,255,  0,3.0,255,0,255)
    That will sweep Levels' gamma setting from 0.01 to 3.0 over 100 frames, holding the other parameters fixed. Open the script in VirtualDub and scrub through the video to see the effect.
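
    The same trick works for any filter whose numeric arguments Animate() can interpolate. For example (values chosen purely for demonstration), sweeping Tweak's saturation from 0 to 2, passing its parameters positionally (hue, sat, bright, cont):
    Code:
    AviSource("video.avi")
    Trim(0,100)
    # hue stays 0, bright 0, cont 1.0; sat ramps from 0.0 to 2.0
    Animate(0,100, "Tweak", 0.0,0.0,0.0,1.0,  0.0,2.0,0.0,1.0)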
  30. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Search Comp PM
    Originally Posted by 2Bdecided
    Depending on which modes you use and how steady your footage is, deshaker will always remove those borders anyway so you're gaining nothing by cropping them.
    I've noticed that. My concern was what to do if I do end up using Deshaker and parts of the 2-pixel junk remain, causing unattractive results. I might not be in a position then to quickly run to the forum for help.

    But you've given me more than enough guidance to mull over here, much appreciated. Thanks also for the script examples.

    Originally Posted by jagabo
    One technique you can use to interactively view the effects AviSynth filters have is to use the Animate() command
    Thanks, that's a great tip.


