To add to jagabo's comment - also any denoise or cleaning filters applied after dithering will make the dithering less effective
Dithering is usually done at the very end, right before encoding. The other important note is that dithering will increase your bitrate requirements.
But I seriously doubt you need to be worried about dithering on this type of content - you probably care more about the image itself than a waveform tracing
-
I stumbled across this PDF with a good explanation of the differences in chroma sub-sampling:
http://www.compression.ru/download/articles/color_space/ch03.pdf -
You want 16 mapped to 16, and 255 mapped to 235, with a linear range in between (and below, to avoid clipping any blacker-than-black overshoots). If you do 0-255>0-235, then 16 gets mapped to 14.7. If you use 0-255>1-235 or 2-235 you avoid that minuscule bit of black crushing, with 16 mapped to 15.7 or 16.6 instead. I wasn't sure which way it was rounded, so used 2. It's just being picky
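For reference, here are the three mappings being compared, written as AviSynth Levels() calls - a quick sketch only, with the arithmetic assuming the standard Levels formula and coring off:
Code:
Levels(0, 1.0, 255, 0, 235, coring=false)   # 16 -> 16*235/255 = 14.7  (slight black crush)
Levels(0, 1.0, 255, 1, 235, coring=false)   # 16 -> 1 + 16*234/255 = 15.7
Levels(0, 1.0, 255, 2, 235, coring=false)   # 16 -> 2 + 16*233/255 = 16.6 (no crush)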
I have never had a clean enough signal from one of my SD camcorders to make this an issue - the noise in the original image self-dithers it and prevents banding. With VHS captures I sometimes do some extreme level changes, giving a terrible-looking histogram - but after subsequent denoising you'd never know. Denoising first and then applying extreme level changes gives easily visible banding, because there's no noise to dither the 8-bit level change.
I think I can see banding in blue skies from my HD camcorder if I use levels.
I'm sure Keith Jack will be delighted that a Russian website has ripped an entire chapter from his Video Demystified book. (some of it is freely and legally available from Google Books though).
Cheers,
David. -
-
-
Just the way I like it!
You guys have convinced me. Now I'm not scared of banding with this footage. In passing you guys implied the existence of potential "debanding or denoising first?" issues, and I've seen some anxious debates on that, so I'm actually quite relieved I don't have to concern myself with that.
-
-
Thanks for the rundown on dithering approaches, much appreciated.
That's right.
What do you mean by 'linear workflow' and 'linearize everything'? I suspect though that this might not apply here - you mention these in regard to corrections in RGB, while in this case I'll likely do colourspace corrections in the original YV12. Still, if you could spare an answer I'd be grateful. -
I, too, would be interested in knowing what 'linearize everything' actually means. What I generally do with the kind of wacky luma and chroma values that come from VHS captures (even when I capture with real-time brightness and contrast adjustments in VirtualDub Capture), is to use whatever YUV filters are available to bring extremes of dark and bright into line. That means ColorYUV, Levels, SmoothAdjust, Tweak, or whatever I can do to keep invalid values from smashing against histogram borders. The exact filters and settings depend on the video.
I don't always go to RGB for color correction if I can do it in YUV. There's always a "temporary" ConvertToRGB32 to be able to use a VirtualDub or AfterEffects histogram just for viewing purposes, to check what happens to that YUV when it's displayed as RGB. But not necessarily for RGB color work unless it's needed.
Anything else -- denoising, frame repair, IVTC, etc. -- comes after that initial balancing act.
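In script form, that first balancing pass tends to look something like this - a sketch only, since the numbers below are placeholders and the right values depend entirely on the clip:
Code:
ColorYUV(cont_y=-16, off_y=4)    # example values only: pull extremes in a little and lift the blacks slightly
Tweak(sat=0.95, coring=false)    # example values only: rein in chroma extremes a touch
Histogram(mode="levels")         # view-only check that nothing is smashed against the histogram borders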
-
Here's an image comparing the original to Levels(0, 1.0, 255, 2, 235, coring=false) to ColorYUV(gain_y=-20):
Bearing in mind that later in the workflow I'll be converting to RGB32 for working in other tools and back to YV12, what would be the result of using the above Levels() statement for luma correction beforehand? I must say I like the fact that it doesn't darken the rest of the picture as (much as) ColorYUV() does. -
It will prevent loss of detail in the brights. The left image in post #5 shows what happens with the standard rec.601 YUV to RGB conversion matrix.
You should look through a variety of shots that your camcorder puts out and adjust the levels accordingly. Don't assume Levels(0, 1.0, 255, 2, 235, coring=false) is right based on that one shot. Check what it delivers at the low end too. -
-
I can confirm it was a mistake, sorry everyone. Here's the correct comparison of original to Levels(0, 1.0, 255, 2, 235, coring=false) to ColorYUV(gain_y=-20):
My concern (caused by the mistake) was that a Levels() command like the one above ostensibly still resulted in many illegal values, but that was all nonsense of course.
Taken to heart, thanks. -
Also note that I didn't mean that as the be-all-end-all. Just a starting point. You can use a combination of gain_y, cont_y, and off_y to fine tune. Of course, most people find that less intuitive than Levels().
gain_y expands or contracts the range around zero, Y' = Y * (N + 256) / 256
cont_y expands or contracts the range around 128, Y' = (Y - 128) * (N + 256) / 256 + 128
off_y adds to or subtracts from Y, Y' = Y + N
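Working those formulas backwards, the Levels() call used earlier can be matched almost exactly with ColorYUV. The -22/+2 values below are a back-calculation, so treat them as approximate rather than exact:
Code:
Levels(0, 1.0, 255, 2, 235, coring=false)   # Y -> Y*233/255 + 2
# is roughly the same as
ColorYUV(gain_y=-22, off_y=2)               # Y -> Y*(256-22)/256 + 2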
-
First, 8bit RGB cannot hold all 8bit Y'CbCr values. It's impossible. Not all values of Y'CbCr "map" to RGB, but ALL values of RGB have a legal representation in Y'CbCr. You can see this pictorially in the 3rd diagram, the "color cube". You can think of Y'CbCr as a "wider" color model or color space:
http://software.intel.com/sites/products/documentation/hpc/ipp/ippi/ippi_ch6/ch6_color_models.html
For most purposes using a "PC" matrix is "good enough". David's testing raised the question of whether or not avisynth's PC matrix was standard, and revealed that it wasn't. It's not the same as studio RGB (there is a "studio RGB" filter by trevlac in one of those threads). Either way, PC and studio matrices are fairly close, but neither in 8bits can hold all values in 8bit Y'CbCr (there will be out-of-gamut values that don't map).
For the stuff you can easily see, like your overbrights, all the PC matrix is doing is mapping Y'CbCr 0-255 to RGB 0,0,0 - 255,255,255. The contrast will be "stretched" compared to the normal Rec matrix. The Rec matrix "maps" Y' 16-235 to RGB 0,0,0 - 255,255,255. Thus the areas 0-15 and 236-255 are clipped (241-255 for CbCr).
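In AviSynth terms the difference between the two conversions is just the matrix argument - a minimal sketch:
Code:
ConvertToRGB32(matrix="Rec601")   # Y' 16-235 -> RGB 0-255; Y' 0-15 and 236-255 get clipped
ConvertToRGB32(matrix="PC.601")   # Y' 0-255 -> RGB 0-255; superbrights/darks survive, contrast looks stretched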
What do you mean by 'linear workflow' and 'linearize everything'? I suspect though that this might not apply here - you mention these in regard to corrections in RGB, while in this case I'll likely do colourspace corrections in the original YV12. Still, if you could spare an answer I'd be grateful.
e.g.
http://www.4p8.com/eric.brasseur/gamma.html
This deals with scaling, but the concepts still apply
http://www.seazo.no/linear-workflow
Theory aside, in the real world, you can see the difference along gradients when making even simple changes with levels or curves. There is a cascade effect as errors are propagated; the more corrections and filters you use, the worse it gets. For 3D scenes and HDRI lighting there are major differences as well. Again, you won't see the difference on footage like DV. But with clean HD footage, CGI, anime, 3D renders, those sorts of things, it will be noticeable. Higher bit depths don't solve the problem alone (they help because of greater precision, e.g. instead of 256 steps, 16bit would have 65536 "steps", and 32bit would work in float with higher intermediate precision); you need both linear math and higher bit depths. It makes a difference, even on 8bit source footage.
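To make the idea concrete, here is an illustrative-only sketch of a roughly gamma-aware resize using plain 8bit built-ins. In 8 bits the two gamma steps will themselves introduce banding, so this shows the concept rather than a recommended workflow:
Code:
ConvertToRGB32(matrix="PC.601")
Levels(0, 0.4545, 255, 0, 255, coring=false)  # approx. gamma 2.2 -> linear light
Spline36Resize(960, 540)                      # do the geometry change on (approximately) linear values
Levels(0, 2.2, 255, 0, 255, coring=false)     # back to gamma 2.2
ConvertToYV12(matrix="PC.601")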
-
PDR is right about linear representation being required to do many operations correctly.
However, apart from a few specific filters that convert to (approximately) linear representation internally, do their thing in high bitdepth, and then convert back to normal sRGB gamma 2.2 video on their output, you can't currently make any meaningful use of linear processing in AVIsynth. The ubiquitous 8-bit filters just aren't good enough. You'll get hideous banding.
Forget about it.
FWIW I still use the PC matrix to keep extra headroom through RGB, even though I know it's not quite doing what I want it to. The only time I've been totally caught out is when I was using it to calibrate something. Naturally, the results were wrong!
Apart from certain colour correction, and DeShaker (my favourite VirtualDUB filter), I usually manage to avoid RGB entirely. Oh, and TMPGEnc - that wants RGB too. Stupid oversight by the authors of that otherwise excellent software.
BTW, if you choose the appropriate values for off_y and gain_y, the result of ColorYUV will be identical to that of levels.
Cheers,
David. -
pdr, I understand what you're saying about RGB work and have read much about it over the years. I realize that pros work along different lines and with more sophisticated gear and software than we mere mortals at home. Still, you've offered much to ponder and to work with. If I had a cool king's ransom to spend on these projects and the expertise to use it, that's what I'd be doing.
But given what hobbyists work with, there are few reasons to work carelessly beyond the fact that most users are simply in a hurry and don't know/don't care about the results. I look at video I processed a few years back, and I see that newer projects are quantum leaps ahead in quality. Some say that these techniques are just nitpicking. Maybe they're right, but it makes a big difference to anyone who prefers to handle video as best they can. Thanks for all the info.
-
Thanks for the exposition, poisondeathray.
This got me wondering that surely there must have been some science - or at least forethought - behind my camera's capturing system. I.e. the designers thinking something like "due to the characteristics of the sensor/storage mechanism/whatever, we'll preserve a better dynamic range if we use 'this' curve to scale luma to 'that' range". In other words they either planned - or it should be possible anyway - for the mapping used in capturing to be deterministically corrected afterwards.
My camera's manual doesn't say anything useful-looking:
Video signal: PAL colour, CCIR standards
Image device: 3mm (1/6 type CCD) Approx. 800 000 pixels (Effective: Approx. 400 000 pixels)
Colour temperature: Auto
Minimum illumination: 6 lx (F1.6), 0 lx (in the NightShot mode) -
My understanding is that most consumer digital photo and movie cameras capture video that is often clipped at both extremes. Isn't it "raw" video technology that doesn't do this? I wouldn't even begin to know how to handle that. I recall when using my Nikon film cameras that I had to learn about exposure, the limitations of film, and how to handle it. That was helpful when I got to working with video years later.
-
BTW, here's another outline of Y'CbCr and the respective uses of Rec.601 & Rec.709. SD & HD, as you guys said.
http://en.wikipedia.org/wiki/YCbCr -
e.g. Blender is free software with color management and a linear workflow.
You already have AE; it has a 32bit linear workflow and color management.
Honestly, don't worry about it. I wouldn't necessarily call it "careless" either - most people don't bother with higher bit depth or linear workflow for most things. The main reason is it's a LOT slower for processing various effects and filters. Personally, I use it only in situations where I KNOW it will make a noticeable difference, e.g. CG renders, titles or motion graphics elements with gradients, grading clean footage that actually has gradients in it (like a blue sky, or flat wall backgrounds). Even on live action "clean" HD footage, you're more likely to run into compression issues as the cause of "banding", long before 8bit depth or non-linear gamma processing error issues.
By the time you convert to Y'CbCr for an 8bit distribution format, even more banding will occur, especially if you don't dither (you are "squishing" 0-255 "shades" into 16-235 "shades"). It's just that in scenarios where you know manipulations in RGB will cause banding in the visible image, the end product will be significantly worse if you don't use higher bit depth and linear processing. Most people don't watch waveforms or histograms as they watch their content (well, maybe some of the weird ones do) - banding in the RGB histogram or Y' waveform tracing doesn't always correlate to being visible in the actual image.
It does a standard Rec601 conversion to RGB when you feed it YV12, same as vdub. If you don't correct for that in YUV before feeding it the YV12 (or feed it RGB using a full range matrix or the slightly different studio matrix), you will clip the superbrights/darks. With newer vdub versions, note some filters can work in YUV (IIRC, brightness, contrast) without incurring the RGB conversion. I don't know if this is the case for newer TMPGEnc versions. To test whether its "YUV" filters really work in YUV, use a full range YUV test video as input, do a tiny YUV filter adjustment, and export a YUV format. If it incurs a standard Rec601 conversion, those superbright/dark values will be clipped.
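A concrete way to run that check with AviSynth built-ins (the clip name is just a placeholder for anything known to contain Y' below 16 and above 235):
Code:
AviSource("fullrange_test.avi")      # placeholder: any clip with Y' below 16 / above 235
ConvertToYV12()
Histogram(mode="levels")             # before: the Y' histogram should extend past the 16/235 marks
# Run the same clip through the tool under test with a tiny "YUV" adjustment, export a YUV
# format, then inspect the result the same way. If its Y' now stops dead at 16/235, the tool
# went through a standard Rec601 RGB round trip and clipped the superbrights/darks.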
Regarding various RGB manipulations, is it "nitpicking"? IMO it really depends on the scenario and content (in those scenarios I listed above it definitely makes a difference). If you want to run some tests, try starting with a simple 8bit RGB greyscale gradient 0-255, because it's easier to see changes, so you know what to look for in other footage types
e.g
Original 8bit RGB PNG (0,0,0 - 255,255,255), 854x480
A simple RGB levels adjustment, input black = 16, input white = 235 (i.e. 16,1,235,0,255), in 8bit mode, exported as an 8bit PNG. You can do this in avisynth, vdub, almost any software; it will give the same results. Depending on your monitor, you might "view" this differently (some panels may display differently if they are 6bit or 8bit or 10bit, and some apply processing), but either way the image should show some degree of "banding".
Here is the same 8bit source image, but with the equivalent manipulation applied in float values and saved as an 8bit PNG. If TMPGEnc works in 8bit precision, it will yield an image like the previous one. If you examine with an RGB histogram, it should show gaps in the former, but not as bad in the latter. This is a contrived example (not many "normal" people watch gradients either), but it's easy to see where the RGB histogram will correlate with the viewer experience - the histogram "gaps" won't correlate to the normally viewed footage with something like DV because of the noise, which, as David said, acts like a natural dither.
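For anyone who wants to reproduce the 8bit half of that test in AviSynth, a minimal sketch (the PNG filename is a placeholder for the greyscale ramp described above):
Code:
ImageSource("gradient_0-255.png", start=0, end=0, pixel_type="RGB32")  # placeholder filename
Levels(16, 1.0, 235, 0, 255, coring=false)   # the 8bit stretch described above
# View with an RGB histogram (e.g. in VirtualDub): gaps appear because the whole
# adjustment was done in 8bit integer precision.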
-
Most consumer camcorders record Y' in the range 16-255. No special curve is applied (e.g. not a log curve), and most consumer-level cameras don't have user-selectable profiles and curves. The treatment was discussed earlier (either account for the superbrights in Y'CbCr, or convert using a full range matrix to RGB). You should be adjusting by scene.
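In script terms, the two treatments mentioned there look something like this (both variants appear earlier in the thread; pick one, not both):
Code:
Levels(0, 1.0, 255, 2, 235, coring=false)   # (a) squeeze the 16-255 superbrights into legal Y'CbCr range
# ConvertToRGB32(matrix="PC.601")           # (b) or convert to RGB with a full-range matrix so 236-255 isn't clipped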
Higher end cameras usually have user-selectable gamma profiles. Some record to a log format to increase the "dynamic range" (the curve is much flatter), and some even record some variation of "RAW". You can record maybe 12-14 stops in log on expensive cameras, but typical consumer cameras might only record 6-7. But this log data is not suitable for recording to 8bit formats, as there aren't enough "shades" to express the curve properly. It's the 8bit recording format that is the biggest limiting factor, and manufacturers want consumer cameras to be "plug and play". Consumer electronics like HDTVs don't have the proper LUTs to view these other types of recordings (it will look washed out). Everything has to be standardized (e.g. Rec601/709) -
The clipping occurs if you don't control exposure properly... j/k
Ultimately the sensor characteristics are what determine that dynamic range (how much range of light intensities, or "shades" from bright to dark, is usable). But often in-camera processing can limit that dynamic range. Even compression can limit the usable dynamic range (e.g. often shadow detail looks "mushy" when using low bitrate AVCHD compression, but when recording uncompressed through the HDSDI port those shadow details become evident).
"RAW" is often a misnomer. Strictly speaking, RAW means completely unadulterated sensor data - not debayered, processed in camera, or anything. Even cameras like the Red Epic, with Redcode, are not really "RAW"; it is compressed raw with some processing applied.
If you've ever had the chance to play with RAW (or some RAW variant) footage, it's incredible to grade - you can do just about anything, make it look any way you want. Incredible latitude. You can't compare with the VHS stuff we typically play with around here! It's another world -
What most users of still and video cameras don't realize is that professionals don't send a couple of guys out with Panavision cameras and tell them "Set it on auto and shoot whatever's out there." No way. The typical studio or on-location shot whether indoors or out has had hours of prep with 45 crew hanging around with shaders, reflectors, fill lights, baffles and whatnot, exposure and color temp meters, color sample cards, etc, etc., hours waiting around while test shots are made and/or techs wait for a cloud to pass (or to arrive), and lord knows what else. Just because the gear is "digital" nowadays doesn't mean it can make all the decisions for you or read your mind.
And zooms on amateur cameras should be against the law. But don't get me started . . .
-
So here's the outline of my workflow as it stands now. SD and HD inputs, and outputting to SD for this particular outline.
Just a note on the factor of 1.366 below. A consequence of Rec.601 pixel aspect ratios that often seems overlooked is that to display e.g. "4:3" images symmetrically, the display aspect ratio needs to be DAR = SAR * PAR = 720/576 * 59/54 = 1.366. In other words, if you're scaling to Rec.601 PAR, aim for a DAR of 1.366 for "4:3" images and 1.821 for "16:9" images. Here's an exhausting discussion of Rec.601 aspect ratios and outputting to DVD.
Preparing SD source (not all steps have been prototyped yet):
Code:
# <--- Input format: 720x576 4:3 59:54 PAR CFR 25 fps Rec.601 YV12 interlaced 1,536 Kbps stereo 48.0 KHz --->
source = "DVCAM.avi"
FFIndex(source=source, cachefile=source+".ffindex", indexmask=-1, dumpmask=0, errorhandling=3, overwrite=false)
Audio = FFAudioSource(source=source, track=-1, cache=true, cachefile=source+".ffindex", adjustdelay=-1)
Video = FFVideoSource(source=source, track=-1, cache=true, cachefile=source+".ffindex", seekmode=1, rffmode=0, width=-1, height=-1)
AudioDub(Video, Audio)
# Crop off static border junk before Deshake
# For DCR-TRV330E, crop 10-pixel junk off right, which necessitates 8-pixel vertical crop, which takes care of 2-pixel junk on top:
# 10/x = 1.366/(59/54) => x = 10/1.366*(59/54) = 8
# Before we can change vertical geometry, we must deinterlace
QTGMC(Preset="Slower")
Crop(0, 8, -10, 0)
Spline64Resize(720, 576)
# Deshake
# Pre-convert to RGB, using PC.601
# Deshake
# Convert back to YV12, using PC.601
# Normalise levels to prevent possible clipping (do it after Deshake to keep its input cleanest)
# Normalise
# No need for dithering with SD DV
# To balance sources:
# denoise
# sharpen
# KEEP AS 720X576 FOR DVD (i.e. don't scale to 704x576)! See https://forum.videohelp.com/threads/353770-Checking-my-DVD-player-s-aspect-ratio.
# <--- Output format: 720x576 4:3 59:54 PAR CFR 50 fps Rec.601 YV12 progressive 1,536 Kbps stereo 48.0 KHz --->
Code:
# <--- Input format: 1280x720 16:9 1:1 PAR VFR 50 fps Rec.709(?) YV12 progressive 129 Kbps stereo 48.0 KHz --->
source = "HDCAM.mp4"
FFIndex(source=source, cachefile=source+".ffindex", indexmask=-1, dumpmask=0, errorhandling=3, overwrite=false)
Audio = FFAudioSource(source=source, track=-1, cache=true, cachefile=source+".ffindex", adjustdelay=-1)
Video = FFVideoSource(source=source, track=-1, cache=true, cachefile=source+".ffindex", fpsnum = 50, fpsden = 1, seekmode=1, rffmode=0, width=-1, height=-1)
AudioDub(Video, Audio)
# Deshake - operate on full frame before cropping to SD
# Pre-convert to RGB, using PC.709
# Deshake
# Convert back to YV12, using PC.601
# Normalise levels to prevent possible clipping (do it after Deshake to keep its input cleanest)
# Normalise
# See if we can get away without dithering here
# Convert aspect ratio to "4x3" PAL 720x576 Rec.601 PAR
# 1. Dimensions must go from 1280x720 -> 720*1.366x720 = 984x720 => crop width by (1280-984)/2 each side = 148 each side
# 2. Then scale to 720x576
Spline64Resize(720, 576, 148, 0, -148, 0)
# To balance sources:
# denoise
# sharpen
# <--- Output format: 720x576 4:3 59:54 PAR CFR 50 fps Rec.601 YV12 progressive 1,536 Kbps stereo 48.0 KHz --->
Code:
# <--- Project input: 720x576 4:3 59:54 PAR CFR 50 fps Rec.601 YV12 progressive 1,536 Kbps stereo 48.0 KHz --->
# NLE
# WORK WITH EVEN FRAME NUMBERS!
# At end of project
# Choose between 4:3 and 16:9 aspect ratio output
(wideout == 0) ? last : Spline64Resize(720, 576, 0, 72, 0, -72)
# ...final touching up,
# denoise
# sharpen
# ...and reinterlace (& convert to 25 fps)
SeparateFields()
SelectEvery(4, 0, 3)
Weave()
# <--- Project output: 720x576 4:3/16:9 59:54/118:81 PAR CFR 25 fps Rec.601 YV12 interlaced 1,536 Kbps stereo 48.0 KHz --->
Cheers,
Francois -
I don't agree with that thread. It's yet another attempt to make your life harder while delivering sub-optimal results. I've replied in the thread. However, the advantage of keeping the original pixels untouched doesn't apply here because you're using deshaker. 704x576 is still the resolution to shoot for though.
There is nothing anywhere in your workflow that's complicated enough for you to need to worry about PARs. You have square pixel 16x9 HD, and full frame 4x3 and 16x9 SD.
I would use AVIsource with Cedocida as the codec. Nothing else will be better, but I don't know enough about FFsource to know for sure if it'll be worse. Unless it's using a bug-free version of Cedocida internally, it will be worse.
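If you do go the AVISource route, the top of the SD script becomes a one-liner. This assumes Cedocida is installed and selected as the VfW DV decoder on the system - an assumption, since codec selection happens outside the script:
Code:
AVISource("DVCAM.avi")   # decoded via VfW; with Cedocida installed this gives a clean DV decode (audio included)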
# For DCR-TRV330E, crop 10-pixel junk off right, which necessitates 8-pixel vertical crop, which takes care of 2-pixel junk on top:
# Before we can change vertical geometry, we must deinterlace
QTGMC(Preset="Slower")
Crop(0, 8, -10, 0)
Spline64Resize(720, 576)
# Deshake
# Pre-convert to RGB, using PC.601
# Deshake
# Convert back to YV12, using PC.601
# Normalise levels to prevent possible clipping (do it after Deshake to keep its input cleanest)
This bit is confusing...
Preparing HD source
(snip)
# Convert aspect ratio to "4x3" PAL 720x576 Rec.601 PAR
# 1. Dimensions must go from 1280x720 -> 720*1.366x720 = 984x720 => crop width by (1280-984)/2 each side = 148 each side
# 2. Then scale to 720x576
You want to keep the 4x3 bit from the middle of 1280x720? That 4x3 bit is 960x720, because your HD video has a 1:1 PAR.
So this...
Spline64Resize(720, 576, 148, 0, -148, 0)
Should be this...
Spline64Resize(704, 576, 160, 0, -160, 0)
Then in third script...
# NLE
# WORK WITH EVEN FRAME NUMBERS!
# At end of project
# Choose between 4:3 and 16:9 aspect ratio output
I haven't found a neat way of working with both formats simultaneously in an NLE that gives you the choice of optimal 4x3, 14x9, or 16x9 at the output. In Sony Vegas, for example, you have to crop the 4x3 footage appropriately for the other output formats, or the 16x9 footage appropriately for the other output formats, and you have to apply that to every 4x3 or 16x9 clip - i.e. you have to go through the timeline and change the properties of half the clips depending on output format. Maybe someone else has found a better way? Thankfully I no longer know anyone with a 4x3 TV, so I just do 16x9. I pillarbox SD 4x3 content to 16x9, not crop it. The resolution is bad enough as it is without blowing it up.
When re-interlacing, you should put an AssumeTFF() first to ensure the correct field order. If the HD footage is sharp, you may find you need a slight vertical blur before interlacing, otherwise the footage will look awful after interlacing.
I would save the 50fps progressive version myself, and then use that as a master for DVD (by interlacing), and YouTube (by dropping every other frame). Given that some of the footage is HD, I'd edit in HD, even if my initial target is SD, so if I ever need HD, I have it. You really don't want to have to go through all the pain again.
I'm sincerely grateful for all the contributions, and any further comments or suggestions are always welcome.
Hope some of it's helpful anyway.
All the best with all of this. Hope it turns out well.
Cheers,
David.
P.S. I think you have a far far bigger problem ahead: If you think you can just send all your camcorder footage through deshaker with one set of settings and everything will be fine, then I'm guessing that you probably haven't tried it yet.
-
Very much so. You spotted a few things I'm familiar with but missed, so thanks!
Let's put it this way - so far my VirtualDub .vcf with the DV Deshaker settings has survived unscathed! Hehe
But more seriously, you'll probably know better than me that a workflow like the one above is just conceptual. In practice it most likely won't be contiguous. There'll probably be intermediate files for more responsive editing, correcting levels by shot has been mentioned, deshaking similarly may be necessary, etc. -
In a little more detail,
This bit is confusing...
Preparing HD source
(snip)
# Convert aspect ratio to "4x3" PAL 720x576 Rec.601 PAR
# 1. Dimensions must go from 1280x720 -> 720*1.366x720 = 984x720 => crop width by (1280-984)/2 each side = 148 each side
# 2. Then scale to 720x576
Then in third script...
# NLE
# WORK WITH EVEN FRAME NUMBERS!
I haven't found a neat way of working with both formats simultaneously in an NLE that gives you the choice of optimal 4x3, 14x9, or 16x9 at the output.
When re-interlacing, you should put an AssumeTFF() first to ensure the correct field order. -
Thanks, I didn't consider that. But I'm not following the TFF? The SD DV input is BFF, which I assume means the first field comes out as the first frame after deinterlacing. Shouldn't I specify BFF for reinterlacing in this case then?
AssumeBFF()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()
OR
AssumeTFF()
SeparateFields()
SelectEvery(4, 1, 2)
Weave()
The other option is to use a 720x576p50 timeline to do the editing, reinterlacing at the end (since you've already bobbed to 50p for deshaker), so you don't have to worry about where to cut, or about internal interlaced scaling issues (I don't know what kind of project you're doing, but when you scale for whatever reason, e.g. overlays, PIP, whatever, NLEs typically do poor interlaced scaling). -
Yes, I see what you're doing: if you say that 704x576 is the equivalent of 4x3 (some people refer to this as the ITU PAR, for obvious reasons), then 720x576 is wider than 4x3, and you need more picture to fill it (or pad it with black bars). If that's what you're doing, your calculation is correct. It goes slightly wrong when PC software video players scale the whole 720x576 to 4x3. That's why I prefer 704x576.
When re-interlacing, you should put an AssumeTFF() first to ensure the correct field order.
If you are outputting to DV, it must be BFF. If you are outputting to MPEG-2, it can be either, as long as you correctly set the field order in the MPEG-2 encoder.
If there was some chance of keeping the original fields, then it would make sense to ensure that the input and output field order of the entire process matched (assuming the process did not swap fields) to enable this. Doing the opposite on such a process would keep only the interpolated/invented lines that weren't part of the original video signal, which would be less good (in theory). However:
QTGMC does not preserve the original fields (unless you tell it to, which you are not = good, because it produces poorer results when you force it to preserve the original fields).
Deshaker does not preserve the original fields, because it moves the video all over the place.
Your HD footage does not have fields (720p), and even if it did (1080i), rescaling it would move them, therefore there is no way of "preserving the original fields" from an HD source when generating SD (unless you converted by cropping the centre 768x576i of the 1920x1080i original, which would be silly!)
HD is always TFF, and commercial DVDs are always TFF, because SDI (professional video interface) is always TFF. BFF works equally well.
Cheers,
David.