VideoHelp Forum




  1. You didn't answer the questions about the final format goal. Why are you re-importing this into premiere? What do you plan to do with it? Is MP4 still the final destination, and where will it be played? Answer the questions because you might be doing this for nothing, or might encounter other problems later (e.g. when you or someone else (e.g. youtube) converts back to 4:2:0, edges will become more blurry)

    If you keep it in lossless RGB, yes it will remain the same, but the filesize will be much larger than the original subsampled YUV 4:2:0 recording. Lossless YUV (as the original colorspace) will be smaller in filesize. Some of the differences you are seeing in the shifting are from the different algorithms used to upsample the chroma to RGB for display. I guess in that respect, if you keep it RGB, you have control over the algorithm used.

    Your original recording did not have a 1.333:1 aspect ratio; it was slightly wider, 704x512, so a 1.375:1 AR. It looks like you cropped the left & right then scaled to 1440x1080 for a 1.333:1 AR

    You can do this in 1 minute or less with ffmpeg. Probably the version of huffyuv you're using isn't compatible with premiere (there are about a dozen different variants).

    Use ut video codec, it's compatible with premiere, offers multiple colorspace options, and you can encode right out of ffmpeg. It offers better compression and decoding speed than huffyuv

    edited video.avi (huffyuv RGB) - 225MB
    utvideo 15fps RGB - 135MB
    utvideo 16fps RGB - 144MB (duplicate frames)

    It's not a good idea to change the rate to 16; you just introduce duplicates or blends. File size will be larger, and playback won't be any smoother. I would keep the original 15FPS

    Code:
    ffmpeg -i "original video.mp4" -vf crop=w=704:h=512:x=368:y=194 -an -pix_fmt rgb24 -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -r 15 -s 704x512 -i - -vf scale=w=2112:h=1536:flags=neighbor,crop=w=2048:h=1536:x=32:y=0,scale=w=1440:h=1080:flags=neighbor -c:v utvideo -an ut_rgb.avi
    (The ffmpeg-to-ffmpeg pipe is required because ffmpeg does the nearest neighbor slightly differently with the U, V planes when working on YUV, and for some reason a linear chain of converting to RGB first didn't work. It's basically a workaround to get sharp pixels)

    There are slight differences in the point sampling algorithms and how you cropped. In some frames it might be better, others might be worse. For example, in the first frame, your version (on the left), has 2 pixels in the guy's right eye, whereas the ffmpeg version is even. This might reverse in other frames

    [Attachment: 0.png]
  2. Originally Posted by pandy View Post
    AFAIR FFV is supported on VfW by ffdshow.
    FFV1 v1 is supported by ffdshow VFW. The newest version in ffmpeg uses FFV1 v1.3, not supported by ffdshow. I suppose you could use an older ffmpeg version to encode
  3. Originally Posted by LigH.de View Post
    Ensure that the Huffyuv codec variant used for compression is also available for decompression. There is a difference between the original Huffyuv codec by BenRG and the enhanced implementation in ffdshow, e.g. YV12 support is new, and several bugs regarding higher resolutions have been fixed since. To use the ffdshow VfW codec for decompressing Huffyuv in AVIs, you have to activate it in ffdshow's "VFW Codec Configuration" in the Decoder tab; it may not be activated upon installation.
    It worked! Thank You!


    Originally Posted by poisondeathray View Post
    You didn't answer the questions about the final format goal. Why are you re-importing this into premiere? What do you plan to do with it? Is MP4 still the final destination, and where will it be played? Answer the questions because you might be doing this for nothing, or might encounter other problems later (e.g. when you or someone else (e.g. youtube) converts back to 4:2:0, edges will become more blurry)
    It's for YouTube. I am aware of the risks, and decided to take a chance. If I later find I've been wasting my time—lesson learned.

    Originally Posted by poisondeathray View Post
    If you keep it in lossless RGB, yes it will remain the same, but the filesize will be much larger than the original subsampled YUV 4:2:0 recording. Lossless YUV (as the original colorspace) will be smaller in filesize. Some of the differences you are seeing in the shifting are from the different algorithms used to upsample the chroma to RGB for display. I guess in that respect, if you keep it RGB, you have control over the algorithm used.
    I don't know how to record in RGB using OBS.


    Originally Posted by poisondeathray View Post
    Your original recording did not have a 1.333:1 aspect ratio, it was slightly wider 704x512 so 1.375:1 AR. It looks like you cropped the left & right then scaled to 1440x1080 for 1.333:1 AR
    I changed the aspect ratio when I decreased the canvas size. Is that what you mean?

    [Attachments: 1.gif, 2.gif]


    Originally Posted by poisondeathray View Post
    You can do this in 1 minute or less with ffmpeg.
    I struggle using FFmpeg. Usually all the tutorials assume you already know what you're doing, and thus are written in what I would consider complete and utter jargon.

    I'm lucky I've gotten this far.

    I found a single comprehensive FFmpeg tutorial for beginners, and it was behind a paywall.


    Originally Posted by poisondeathray View Post
    Use ut video codec, it's compatible with premiere, offers multiple colorspace options, and you can encode right out of ffmpeg. It offers better compression and decoding speed than huffyuv

    edited video.avi (huffyuv RGB) - 225MB
    utvideo 15fps RGB - 135MB
    utvideo 16fps RGB - 144MB (duplicate frames)
    I recently managed to import HuffYUV into Premiere, but I'll give this a try.


    Originally Posted by poisondeathray View Post
    It's not a good idea to change the rate to 16. You just introduce duplicates or blends. File size will be larger, and it won't be any smoother playback. I would keep the original 15FPS
    Yeah, I'll remember this next time I record.


    Originally Posted by poisondeathray View Post
    Code:
    ffmpeg -i "original video.mp4" -vf crop=w=704:h=512:x=368:y=194 -an -pix_fmt rgb24 -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -r 15 -s 704x512 -i - -vf scale=w=2112:h=1536:flags=neighbor,crop=w=2048:h=1536:x=32:y=0,scale=w=1440:h=1080:flags=neighbor -c:v utvideo -an ut_rgb.avi
    (The ffmpeg to ffmpeg pipe is required, because ffmpeg does the nearest neighbor slightly differently with the U, V planes when working on YUV. And for some reason, a linear chain of converting to rgb first didn't work - It's basically a workaround to get sharp pixels)
    The issue for me is, I cannot comprehend what most of that code actually means.

    I'm hoping to create a system of production wherein I record and process videos for later YouTube use. I might start a channel where I play pixel games or something.

    I'm sure the above code might solve my problem this time. But what about the next time? And the next?

    If I use expert opinion as a crutch now, I'll never learn to walk on my own.

    Many might find the process I've used thus far both elaborate and tangled. But for me, at least I understand it. At least it's something I can repeat again and again if necessary.


    Originally Posted by poisondeathray View Post
    There are slight differences in the point sampling algorithms and how you cropped. In some frames it might be better, others might be worse. For example, in the first frame, your version (on the left), has 2 pixels in the guy's right eye, whereas the ffmpeg version is even. This might reverse in other frames

    [Attachment 27502]
    I noticed this too. If I knew how to fix it, I would.


    Originally Posted by pandy View Post
    Code:
    @ffmpeg -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1 -vsync 0 %1_%%06d.png
    For enlarging pixel graphics I would suggest using one of the special filters like hqx or super2xsai - both are supported by ffmpeg, and they can be used to provide some antialiasing inside ffmpeg.

    -vf hqx=2 or -vf super2xsai
    I would just like to reiterate that FFmpeg has eclipsed my own understanding.

    If anyone knows of a simple yet comprehensive FFmpeg manual, I'd read through it.
  4. Originally Posted by TheUninformed View Post


    It's for YouTube. I am aware of the risks, and decided to take a chance. If I later find I've been wasting my time—lesson learned.
    Since the ultimate destination is YT, that conversion to RGB and back to YUV will make the borders / color edges more blocky than if you had kept YUV 4:2:0 all the way through, even to youtube. Each colorspace conversion is lossy; each trip makes the edges worse and worse, even if you use a lossless codec (no additional losses from compression, but there are losses from the colorspace conversions)

    Larger filesizes, AND worse quality. It's not a win-win. It's a loss-loss.



    I don't know how to record in RGB using OBS.
    Apparently you can't. There are other free options like camstudio. You can use a lossless RGB codec like lagarith, ut video etc...




    Originally Posted by poisondeathray View Post
    Your original recording did not have a 1.333:1 aspect ratio, it was slightly wider 704x512 so 1.375:1 AR. It looks like you cropped the left & right then scaled to 1440x1080 for 1.333:1 AR
    I changed the aspect ratio when I decreased the canvas size. Is that what you mean?
    That's what I mean. You cropped some pixels off and adjusted the AR slightly



    Originally Posted by poisondeathray View Post
    You can do this in 1 minute or less with ffmpeg.
    I struggle using FFmpeg. Usually all the tutorials assume you already know what you're doing, and thus are written in what I would consider complete and utter jargon.

    I'm lucky I've gotten this far.

    I found a single comprehensive FFmpeg tutorial for beginners, and it was behind a paywall.


    The issue for me is, I cannot comprehend what most of that code actually means.

    I'm hoping to create a system of production wherein I record and process videos for later YouTube use. I might start a channel where I play pixel games or something.

    I'm sure the above code might solve my problem this time. But what about the next time? And the next?

    If I use expert opinion as a crutch now, I'll never learn to walk on my own.

    Many might find the process I've used thus far both elaborate and tangled. But for me, at least I understand it. At least it's something I can repeat again and again if necessary.



    That's part of the reason for using ffmpeg. How much time did you waste converting it to PNG, processing in Photoshop, etc.? How much extra HDD space did you need? That clip actually took about 30 seconds to process; I just doubled the estimate since you said your computer was slow. You can even batch process with ffmpeg - e.g. process all the clips in a directory automatically

    For example, if all your recordings are the same format, with the same black border around the 704x512 area, etc., all you have to do is set up a batch file that you just double-click, and it will batch process all the files in a directory. I or someone here can do that for you, and you can learn what the commands actually do at another time, at your own pace
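    A minimal sketch of what such a batch step could look like, shown as a POSIX shell dry run (a Windows .bat would use `for %%f in (*.mp4)` instead). The filenames are placeholders and the crop values are just the ones discussed in this thread; pipe the output to `sh` to actually execute:

```shell
# Build the ffmpeg command line for one input file
# (crop values 704:512:368:194 are the ones from this thread).
make_cmd() {
  printf 'ffmpeg -i "%s" -vf crop=704:512:368:194 -pix_fmt rgb24 -c:v utvideo -an "%s_ut.avi"\n' \
    "$1" "${1%.mp4}"
}

# Dry run over every .mp4 in the current directory; prints the commands only.
for f in *.mp4; do
  [ -e "$f" ] || continue   # skip the literal "*.mp4" when nothing matches
  make_cmd "$f"
done
```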



    Originally Posted by poisondeathray View Post
    Use ut video codec, it's compatible with premiere, offers multiple colorspace options, and you can encode right out of ffmpeg. It offers better compression and decoding speed than huffyuv

    edited video.avi (huffyuv RGB) - 225MB
    utvideo 15fps RGB - 135MB
    utvideo 16fps RGB - 144MB (duplicate frames)
    I recently managed to import HuffYUV into Premiere, but I'll give this a try.
    ut video is actually the recommended codec of choice on the Adobe forums. It's faster (especially decoding speed), and more compressed. Win-Win

    And if you don't need mathematically lossless (e.g. for upload to youtube), use something visually lossless like x264. Even in RGB mode it compresses much better than intraframe formats (it uses temporal compression). But it's not compatible with programs like premiere, and it's so compressed that it would be difficult to edit anyway

    x264 lossless RGB 12.3MB
    x264 visually lossless RGB CRF15 4.98MB
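    For reference, a hedged sketch of the kind of x264 RGB encodes described above (command-line fragments only; they assume an ffmpeg build with the `libx264rgb` encoder, and the filenames are placeholders):

```shell
# Mathematically lossless RGB (qp 0) - larger, but no quality loss:
ffmpeg -i ut_rgb.avi -c:v libx264rgb -qp 0 -preset veryslow lossless_rgb.mkv

# Visually lossless RGB at CRF 15 - much smaller, as in the sizes quoted above:
ffmpeg -i ut_rgb.avi -c:v libx264rgb -crf 15 -preset slow visually_lossless_rgb.mkv
```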



    Originally Posted by poisondeathray View Post
    There are slight differences in the point sampling algorithms and how you cropped. In some frames it might be better, others might be worse. For example, in the first frame, your version (on the left), has 2 pixels in the guy's right eye, whereas the ffmpeg version is even. This might reverse in other frames

    [Attachment 27502]
    I noticed this too. If I knew how to fix it, I would.

    It's alternating (not just buggy eyes, other parts of the picture) because you're not scaling by even multiples, and because of your other cropping adjustments. So you "fix" it by scaling 2x exactly, or 4x exactly - that's how nearest neighbor works best. Not to 1440x1080, but to 1408x1024, because you started with 704x512. That doesn't help you with your goal of YT unless you have it letterboxed and pillarboxed. Can your game do other display dimensions?



    Also, it's not clear what you are using premiere for.

    It doesn't make sense to upscale and then do the edits in premiere. E.g. if your computer is old/slow, it will just become slower when you deal with a higher resolution video, especially with the large bitrates when using lossless codecs. So cropping, converting to RGB, and encoding to the ut video codec in ffmpeg is fine; then import at 704x512 into premiere for further editing, export, then upscale. Upscaling before premiere doesn't make a whole lot of sense, unless you have some overlays etc... but there are other, better ways to do that
    Last edited by poisondeathray; 16th Sep 2014 at 19:31.
  5. Originally Posted by TheUninformed View Post
    Originally Posted by LigH.de View Post
    VirtualDub can do all that with internal filters.
    I had previously tried VirtualDub several days ago. I didn't appreciate the way it scaled the pixels.
    It has a Nearest Neighbor (aka Point Resize) option, just like you've been using in other software.
  6. Originally Posted by poisondeathray View Post
    Since the ultimate destination is YT, that conversion to RGB and back to YUV will make the borders / color edges more blocky than if you had kept YUV 4:2:0 all the way through, even to youtube. Each colorspace conversion is lossy; each trip makes the edges worse and worse, even if you use a lossless codec (no additional losses from compression, but there are losses from the colorspace conversions)

    Larger filesizes, AND worse quality. It's not a win-win. It's a loss-loss.
    I'll probably be recording many games with various different frame complexities, meaning I'll have to know how to use FFmpeg to an efficient degree.
    I simply do not have the necessary information to effectively wield it.

    At present, what you've seen is the very best I'm capable of.


    Originally Posted by poisondeathray View Post
    How long did it take you to waste converting it to png, processing in photoshop, etc...? How much extra HDD space did you need?
    Three days. Sixty gigabytes.


    Originally Posted by poisondeathray View Post
    That clip actually took about 30 seconds to process, I just doubled the estimate since you said your computer was slow .
    What clip? Did you upload something?


    Originally Posted by poisondeathray View Post
    For example, if all your recordings are the same format , with the same black border 704x512, etc...all you have to do is setup a batch file that you just double click and it will batch process all the files in a directory. Myself or someone here can do that for you. And you can learn what the commands actually do at another time at your own pace
    That's fine I guess. I will at some point have to learn the commands though.

    Like I mentioned, I plan to create a system that works for me. If I just do something without actually learning anything, I will inevitably find myself in a similar predicament when I decide to record another video later on.


    Originally Posted by poisondeathray View Post
    It's alternating (not just buggy eyes, other parts of the picture) because you're not scaling by even multiples, and because of your other cropping adjustments. So you "fix" it by scaling 2x exactly, or 4x exactly. That's how nearest neighbor works best. Not to 1440x1080. 1408x1024 because you started with 704x512.
    I'm a little confused. I told Photoshop to constrain the picture proportions. Shouldn't that make a difference? When I magnified the image, it scaled to 1485x1080, not 1440x1080. It was only after I cropped the canvas slightly that it became its final 1440x1080. I wouldn't think cropping the canvas would change absolute pixel position at all.


    Originally Posted by poisondeathray View Post
    Can your game do other display dimensions ?
    Technically yes, but I think I'd have to rip it from the website.


    Originally Posted by poisondeathray View Post
    Also it's not clear what you are using premiere for ?
    I covered this earlier. I have a series of 29 videos that I have recorded and spliced together in Premiere Pro.

    The frame dimension I'm working with is 1440x1080. My plan is to carefully crop and scale all 29 videos and switch them with their original counterparts already loaded in Pro.

    [Attachment: Premier.png]


    Originally Posted by poisondeathray View Post
    It doesn't make sense to upscale then do the edits in premiere. e.g If your computer is old/slow, it will just become slower when you deal with a higher resolution video, especially with the large bitrates when using lossless codecs .
    It makes perfect sense to me. My editing in Pro is technically already done.


    Originally Posted by poisondeathray View Post
    So cropping , convert to RGB, encoding to ut video codec in ffmpeg is fine, then import at 704x512 into premiere for farther editing, export then upscale. Upscaling before premiere doesn't make a whole lot of sense, unless you have some overlays etc... but there are other better ways to do that
    Why would I want to import a series of videos with tiny frames into Premiere Pro? Once I export from Pro, it's off to YouTube!


    Originally Posted by jagabo View Post
    It has a Nearest Neighbor (aka Point Resize) option, just like you've been using in other software.
    For now I think I'll stick with FFmpeg. It's been good to me so far. If things don't work out I'll keep this in mind, though.
  7. Originally Posted by TheUninformed View Post

    Originally Posted by poisondeathray View Post
    How long did it take you to waste converting it to png, processing in photoshop, etc...? How much extra HDD space did you need?
    Three days. Sixty gigabytes.
    That's learning the hard way !



    Originally Posted by poisondeathray View Post
    That clip actually took about 30 seconds to process, I just doubled the estimate since you said your computer was slow .
    What clip? Did you upload something?
    No, I didn't upload it; the UT RGB version was 135MB and I didn't feel like uploading it.

    Just copy & paste the command line. You used ffmpeg for converting to PNG, so I assumed earlier that you knew how to use it at least on a basic level

    For this particular project, if you wanted to re-assemble the PNG's and encode to ut video, or huffyuv or some other codec, you could use ffmpeg as well


    Originally Posted by poisondeathray View Post
    For example, if all your recordings are the same format , with the same black border 704x512, etc...all you have to do is setup a batch file that you just double click and it will batch process all the files in a directory. Myself or someone here can do that for you. And you can learn what the commands actually do at another time at your own pace
    That's fine I guess. I will at some point have to learn the commands though.

    Like I mentioned, I plan to create a system that works for me. If I just do something without actually learning anything, I will inevitably find myself in a similar predicament when I decide to record another video later on.
    All I can say is just work through the commands and look at the documentation. It's tough and poorly documented; that's why many people learn by example and by asking questions



    Originally Posted by poisondeathray View Post
    It's alternating (not just buggy eyes, other parts of the picture) because you're not scaling by even multiples, and because of your other cropping adjustments. So you "fix" it by scaling 2x exactly, or 4x exactly. That's how nearest neighbor works best. Not to 1440x1080. 1408x1024 because you started with 704x512.
    I'm a little confused. I told Photoshop to constrain the picture proportions. Shouldn't that make a difference? When I magnified the image, it scaled to 1485x1080, not 1440x1080. It was only after I cropped the canvas slightly that became its final 1440x1080. I wouldn't think cropping the canvas would change absolute pixel position at all.


    It's because 704x512 is 1.375 AR, not 1.33. That's why photoshop gave you 1485x1080 (1485/1080 = 1.375)

    When you crop in RGB, how are you going to crop 0.5 pixels? You have an odd number. Three off the right, two off the left? Or vice versa? How do you decide? Subpixel scaling, or non-unit-pixel scaling with interpolation, will give you blurrier edges - that's the point of using "nearest neighbor". (And if you were doing this in YV12 4:2:0, you're not allowed to crop by single pixels, only by 2's, because the planes containing the color information are 1/4 the size: 1/2 in each dimension)

    The pixel interpolation is nearest neighbor, so some pixels will get doubled and some won't, unless you scale by exactly 2x, 4x, 8x. That's why you get the "double wide eye" phenomenon on some frames. If you scale by exactly 2x, 4x, etc., you have a 1:1 representation of pixels; if you scale by some fractional ratio, you don't - does that make sense? So on some frames certain pixels will be doubled, but on others they won't be. Interpolation methods other than nearest neighbor use various averaging and math formulas, so you don't get that effect at fractional ratios, but nearest neighbor preserves full pixels. Also, "nearest neighbor" isn't always exactly the same between programs, because the scaling center might be the top left in some programs and centered in others, and doing it in different colorspaces / subsampling has a major effect on the results (the U, V planes aren't full resolution)
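    The "some pixels doubled, some not" effect is easy to see with a little arithmetic. This sketch (plain Python, names my own) counts how many times each source column is repeated by a floor-based nearest-neighbor mapping - one common convention; as noted above, real scalers differ in rounding and centering:

```python
def nn_repeats(src_w, dst_w):
    """Count how many times each source column appears in the output
    when output pixel x maps to source pixel floor(x * src_w / dst_w)."""
    counts = [0] * src_w
    for x in range(dst_w):
        counts[x * src_w // dst_w] += 1
    return counts

# Integer 2x ratio: every source pixel is doubled evenly.
print(nn_repeats(4, 8))    # [2, 2, 2, 2]

# Fractional ratio: some pixels are wider than others (the "double eye").
print(nn_repeats(4, 9))    # [3, 2, 2, 2]
```

    Scaling 704 to 1408 (exactly 2x) repeats every column twice, while 704 to 1440 mixes 2-wide and 3-wide columns, which is exactly the unevenness visible in the screenshots.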
  8. Point resize scaling to non-integer multiples will result in obvious "seams" during slow panning shots. This is what poisondeathray is referring to. Compare:
    Image Attached Files
  9. Originally Posted by poisondeathray View Post
    Code:
    ffmpeg -i "original video.mp4" -vf crop=w=704:h=512:x=368:y=194 -an -pix_fmt rgb24 -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -r 15 -s 704x512 -i - -vf scale=w=2112:h=1536:flags=neighbor,crop=w=2048:h=1536:x=32:y=0,scale=w=1440:h=1080:flags=neighbor -c:v utvideo -an ut_rgb.avi
    Damn. . . that's perfect!

    May I ask how it was done?
  10. Originally Posted by TheUninformed View Post
    Originally Posted by poisondeathray View Post
    Code:
    ffmpeg -i "original video.mp4" -vf crop=w=704:h=512:x=368:y=194 -an -pix_fmt rgb24 -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -r 15 -s 704x512 -i - -vf scale=w=2112:h=1536:flags=neighbor,crop=w=2048:h=1536:x=32:y=0,scale=w=1440:h=1080:flags=neighbor -c:v utvideo -an ut_rgb.avi
    Damn. . . that's perfect!

    May I ask how it was done?


    It was done with ffmpeg.




    Just break down the commands

    You already know the basic ffmpeg syntax: -i is the input file

    -vf crop is, drumroll......you guessed it....crop. "-vf" in ffmpeg means video filter, "-af" means audio filter
    https://www.ffmpeg.org/ffmpeg-filters.html#crop
    How did I know to use those values for crop ?
    w = final width, so 704; h = final height, so 512. The x and y values might be more confusing: they give the origin (the 0,0 or top-left corner) of the area of interest. So x=368 because that's where the top-left corner of the "video in the middle" is. Or, if you like math, (1440-704)/2 = 368; another way of saying it is that 368 pixels are cropped from each of the left and right. Same logic for the height: the source is 900 px high and the "video box" is 512 px, so (900-512)/2 = 194, i.e. 194 px cropped from each of the top and bottom
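    The centered-crop arithmetic above can be checked in a couple of lines (plain Python; the function name is just for illustration):

```python
def centered_crop(in_w, in_h, out_w, out_h):
    """Return the (x, y) origin that centers an out_w x out_h crop
    inside an in_w x in_h frame."""
    return (in_w - out_w) // 2, (in_h - out_h) // 2

# 1440x900 recording with the 704x512 game area centered in it:
print(centered_crop(1440, 900, 704, 512))  # (368, 194)
```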

    -an means no audio

    -pix_fmt rgb24 means RGB24 colorspace

    -f rawvideo is for the raw video pipe; the " | " is the pipe. Notice there is a "-" before the pipe. Piping means sending data to another program - in this case it's a raw RGB video pipe to another ffmpeg instance. (I already explained why you have to pipe to itself; it doesn't make entire sense, but you have to do it because a linear filter chain didn't work properly for some reason.) When receiving a raw pipe, you always need to specify the pixel format, frame rate, and dimensions. That's what -pix_fmt, -r, and -s are for, respectively

    The -i is input (you already knew that), but notice there is a "-" after it - that is to receive the piped data

    -vf scale is scaling, but there are many options and arguments. It actually can be a very complex filter in ffmpegland
    https://www.ffmpeg.org/ffmpeg-filters.html#scale-1
    w and h are self-evident (width, height); flags=neighbor selects the scaling kernel (nearest neighbor, or point scaling)
    https://www.ffmpeg.org/ffmpeg-scaler.html#scaler_005foptions

    In this example I chose to scale to 2112x1536 because it's an integer 3x multiple, and 2048x1536 is 1.333333 (or 4/3) exactly, so it's easy to crop 32px off each of the left and right. In the photoshop example, recall the odd-numbered pixels. In the end it won't make a big difference, especially on youtube. For example, you can do it even faster by using 682x512 =~ 1.33203 AR (not 4/3 exactly) because you don't scale as high, but with slightly off AR (you'll get slightly different results, but they still might be acceptable to you)

    Code:
    -vf crop=w=682:h=512:x=11:y=0,scale=w=1440:h=1080:flags=neighbor
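    The aspect-ratio arithmetic behind both variants can be verified directly (plain Python; variable names are my own):

```python
from fractions import Fraction

# 3x nearest-neighbor upscale of 704x512:
scaled = (704 * 3, 512 * 3)            # (2112, 1536)

# Crop to an exact 4:3 frame (2048x1536), taking 32 px off each side:
target_ar = Fraction(2048, 1536)       # reduces to 4/3 exactly
side_crop = (scaled[0] - 2048) // 2    # 32 px per side

# The faster shortcut: crop to 682x512 first, which is slightly off 4:3.
shortcut_ar = 682 / 512                # ~1.33203 vs 1.33333
print(scaled, target_ar, side_crop, round(shortcut_ar, 5))
```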
    -c:v means video codec; in this case utvideo means ut video. The old syntax was -vcodec something. ffmpeg has many codecs available; if you want to print out a list of what's available for encoding and decoding, use

    Code:
    ffmpeg -codecs 1>ffmpeg_codecs.txt
    Last edited by poisondeathray; 16th Sep 2014 at 23:45.
  11. Well, everything is looking good! Thanks, everyone!

    I know some of you might want to see this. . .

    [Attachment: Lesson Learned.png]

    And it's not even half of the full load.
  12. Member - joined May 2014 - Memphis TN, US
    Originally Posted by poisondeathray View Post
    Originally Posted by TheUninformed View Post
    Originally Posted by poisondeathray View Post
    How long did it take you to waste converting it to png, processing in photoshop, etc...? How much extra HDD space did you need?
    Three days. Sixty gigabytes.
    That's learning the hard way !
    Yep.

    This resizing business is a headache. I keep asking why the original mp4 is 1440x900 and the original image is 1.375:1 aspect ratio. I don't use game consoles, but shouldn't the image be 4:3 to begin with? Don't know the O.P.'s software, though, and no time for gaming.

    In Avisynth I tried both resizing methods from poisondeathray, below:

    #1:
    Originally Posted by poisondeathray View Post
    Code:
    ffmpeg -i "original video.mp4" -vf crop=w=704:h=512:x=368:y=194 -an -pix_fmt rgb24 -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -r 15 -s 704x512 -i - -vf scale=w=2112:h=1536:flags=neighbor,crop=w=2048:h=1536:x=32:y=0,scale=w=1440:h=1080:flags=neighbor -c:v utvideo -an ut_rgb.avi
    #2:
    Originally Posted by poisondeathray View Post
    Code:
    -vf crop=w=682:h=512:x=11:y=0,scale=w=1440:h=1080:flags=neighbor
    Yep, I got some 2-pixel eyes on some frames, not on others. So I tried it in ffmpeg and got the same thing (only my second time out with ffmpeg, so maybe I did something wrong. No surprise).

    Two 2-pixel eyes show up in the attached video_newAR.mp4, which used #2 above.

    Since I don't care to alter image aspect ratios by chopping off pieces of 'em, I ran a similar Avisynth script but kept the entire original image intact and resized 2x with PointResize. Of course that meant I had to add some borders to get a 4:3 1440x1080 with the original 1.375:1 image inside it (attached video_OriginalAR_LB.mp4). On the other hand, nothing is lost from the original and it looks sharper (IMO). I realize people don't like borders, but you'll get some anyway on displays wider than 4:3. The script avoided YUV-RGB-YUV conversion. Original YV12 all the way, with help from the ColorMatrix plugin.

    Took maybe an hour to make several test versions -- nowhere near 3 days. Thanks to poisondeathray for good and useful notes on this project, and to jagabo for the cool panning demo.
    Image Attached Files
    - My sister Ann's brother
  13. Originally Posted by poisondeathray View Post
    Originally Posted by pandy View Post
    AFAIR FFV is supported on VfW by ffdshow.
    FFV1 v1 is supported by ffdshow VFW. The newest version in ffmpeg uses FFV1 v1.3, not supported by ffdshow. I suppose you could use an older ffmpeg version to encode
    You can specify which FFV1 version should be used - the new version provides support for more than 8 bits per component.
    https://trac.ffmpeg.org/wiki/Encode/FFV1
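    For example, a sketch based on the linked wiki (command-line fragments only; check the options your ffmpeg build actually supports, and the filenames are placeholders):

```shell
# FFV1 version 1 (the variant this thread says ffdshow VFW can decode):
ffmpeg -i input.avi -c:v ffv1 -level 1 output_ffv1v1.avi

# FFV1 version 3 (newer; >8 bits per component, slice CRCs for error detection):
ffmpeg -i input.avi -c:v ffv1 -level 3 -slicecrc 1 output_ffv1v3.avi
```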
  14. Originally Posted by TheUninformed View Post
    If anyone knows of a simple yet comprehensive FFmpeg manual, I'd read through it.
    In fact the documentation in ffmpeg itself is way better. For some problems you can still ask on VideoHelp, but also on the Zeranoe ffmpeg forum http://ffmpeg.zeranoe.com/forum/

    In the worst case, read the ffmpeg source...

    I believe that any paid manual for ffmpeg will quickly become outdated - don't waste time; use a decent forum and the site https://trac.ffmpeg.org/wiki