Anybody have any idea what technique and software were used for this:
Because I'm stumped.
I've had a project sitting here for almost 10 years, waiting for the day something like this would finally be feasible.
Also: How do you embed a YouTube video on VH?
Uh, very cool video. I love this stuff!
So again, I ask here.
The guy used a combination of tools. First he used a tool to stabilize the video, then he adjusted levels/saturation to clean it up, then he used a motion-flow tool to create new frames.
You can use free tools like Avisynth and/or SlowMoVideo to do most of that work. It's not so much complicated as it is time-consuming.
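To make "creating new frames" concrete, here's a toy sketch (my own illustration, not anything the author or those tools actually do internally) of the dumbest possible in-between generator: cross-fading between two existing frames. Real motion-flow tools like SlowMoVideo instead warp pixels along estimated motion vectors, which is why they look far better on moving content:

```python
import numpy as np

def blend_inbetweens(frame_a, frame_b, n):
    """Toy 'in-between' generator: cross-fades n new frames between
    two existing frames. Real motion-flow tools warp pixels along
    estimated motion vectors instead of blending."""
    out = []
    for i in range(1, n + 1):
        t = i / (n + 1)                       # 0 < t < 1
        out.append((1 - t) * frame_a + t * frame_b)
    return out

# two flat dummy "frames" (brightness 0 and 100)
a = np.zeros((4, 4))
b = np.full((4, 4), 100.0)
mids = blend_inbetweens(a, b, 3)   # 3 new frames -> 4x the frame count
```

On static content like a slideshow of stills this blending just ghosts, which is part of why the motion-tracking approach discussed below works better there.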
You can see some examples in this thread:
Last edited by racer-x; 25th Jun 2014 at 17:38.
One man's opinion is another man's toilet paper.......
I love the part in the Q/A where somebody bemoans not having a microphone/audio on the Rover. Duh! To have sound, one must have an atmosphere (or other medium of transmission).
I'm guessing the author would like to keep some of his technique in his back pocket. You never know, he might get hired because of this, and he wouldn't want to be upstaged by giving away his whole bag of tricks. I understand that.
Certain types of content and motion don't interpolate well with optical flow. If you tried it on this video, there are sections where it fails miserably and other sections where it's passable.
This is what I would do, and I'm 99% certain he did something similar, because there are peripheral artifacts/signatures that suggest this, not the typical optical-flow artifacts in the main sequence. If he used typical optical-flow tools, it was only on a few sections, and only in parts of some frames.
Since this is mostly static background content of Mars (no complex motions like people walking or talking), the "easy" way to do this is to use motion tracking to link the still images from the NASA site. That way there are no optical-flow morphing artifacts, because each interpolated frame is a "real" frame. Motion tracking reduces the amount of manual work required in something like Photoshop or AE many times over. Manually fiddling with it is also prone to error and "jitter", because the values aren't evenly spaced in time, so the motion isn't smooth. It would take you 100x as long to get the motion as clean and evenly spaced by hand as a motion-tracked sequence.
The interpolation for the "in-between" frames is linear, so the differences in the x, y, scale and rotation keyframes can easily be interpolated in After Effects if you have rock-stable tracking data. Think of it as aligning the still images in sequence by animating the x, y, rotation and scale parameters. E.g. if between frame 0 and frame 1 the motion is tracked from an x position of 400 to 500, for a delta of 100, you can create "n" in-between frames just by moving the keyframes apart to scale the timeline (like an accordion). A 2x interpolation would put the in-between frame's position at 450, automatically calculated, and this is done for all the parameters. If you scaled 8x, for 7 in-between frames, the 1st interpolated key would have the correct x value of 412.5, and so forth.
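The accordion arithmetic above is easy to sketch. This is a minimal stand-in (my own helper, not an AE API) for what the keyframe time-stretch does with the tracking data, using the exact numbers from the example:

```python
def stretch_keyframes(keys, factor):
    """Accordion-stretch tracked keyframes: the original keys land on
    every `factor`-th output frame, and the in-between values are
    linearly interpolated, like retiming a keyframed layer in AE."""
    out = []
    for k in range(len(keys) - 1):
        a, b = keys[k], keys[k + 1]
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.append(keys[-1])
    return out

x_track = [400.0, 500.0]            # tracked x position on frames 0 and 1
twice = stretch_keyframes(x_track, 2)    # 2x: [400.0, 450.0, 500.0]
eight = stretch_keyframes(x_track, 8)    # 8x: first in-between key = 412.5
```

The same stretch is applied to every tracked parameter (x, y, scale, rotation) so all of them stay in sync, which is exactly why evenly spaced tracking data matters.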
The drawback is that there is missing information at the peripheral edges with this method. There are ways to semi-automatically fill in the gaps, e.g. tracking backwards and forwards to composite data from more than one adjacent frame (not just the nearest one, but even frames dozens apart). There are programs designed for this; e.g. mocha Pro has a module that fills in the missing data from motion-tracked adjacent planar surfaces. Think of it as a jigsaw puzzle of frames in a sequence: it fills in the gaps by adjusting the scale, position, rotation and shear of adjacent frames to "fit" the pieces into the holes.
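As a crude sketch of that jigsaw idea (my own toy example; mocha Pro's actual fill is a much smarter planar-tracked version), assume the frames have already been aligned and the missing edge data is marked as NaN:

```python
import numpy as np

def fill_gaps(frames):
    """Composite a stack of already-aligned frames: wherever the first
    frame has missing data (NaN), borrow the first available pixel
    from a later frame in the stack."""
    stack = np.stack(frames)           # shape (n, h, w)
    filled = stack[0].copy()
    for f in stack[1:]:
        hole = np.isnan(filled)
        filled[hole] = f[hole]
    return filled

f0 = np.array([[np.nan, 2.0], [3.0, 4.0]])   # missing top-left pixel
f1 = np.array([[9.0, 9.0], [9.0, 9.0]])      # adjacent aligned frame
result = fill_gaps([f0, f1])                  # hole filled from f1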
The other embellishment is just levels, contrast and color correction. The side-by-side comparison is misleading in terms of detail because he used a low-quality NASA version.
Full res image sequence 1648x1200
Note that some of the images are mislabelled, so you might have "gaps" in the color sequence, but most of the "gaps" are actually present as the "E2" version.
Show me something like this on complex content, not just a mostly static background - e.g. objects passing and crossing in front of one another, people walking, circular motions, hard-to-predict motion vectors - then I'd be REALLY impressed, because those are the types of content where every method fails short of lots of manual work and lots of rotoscoping.
Last edited by poisondeathray; 26th Jun 2014 at 11:28.
@ poisondeathray, I was playing around with motion flow on the video you uploaded on another thread. The one that consists of 70 still images of a lunar landing: http://forum.videohelp.com/attachments/25710-1402639128/dewarp.mp4
It's similar, and there's no need to color-enhance. I threw it at SlowMoVideo and cropped out most of the motion-flow artifacts like the original author did in the Mars video. You can still see some, however, just like in his. Maybe someone else might want to try something. Here is my example:
The "smooth" Mars version in the YT comparison video noticeably lacks those morphing artifacts from optical flow, especially around the central region. Try Twixtor, SlowMoVideo, Kronos, Timewarp, AE, etc., or MVTools2, etc. - any optical-flow method - and you will notice massive distortions in the central region that you can't crop out. That's why I'm 99% certain he didn't use optical flow (or only used it for limited sections). The peripheral edges might have been slightly cropped (you can still see some of the artifacts) because of the motion-tracked fix / layered stack.
The moon video is the same idea, but that one is a much harder source to interpolate with either method:
1) The images are different resolutions and aspect ratios. The "dewarped" fix was just a quickie job, so it has errors.
2) There are marked illumination changes between each frame, and within each frame for the topographic features - you'll never be able to interpolate this without some degree of flicker.
3) There are lens distortions, and a non-linear scaling effect is occurring. The scaling center is not in the center (the left side of the image scales disproportionately larger than the right), so any motion-tracked repair won't work as well (it won't match up).
To do that one, I'd remove the guide markings first; the central ones should be semi-automatic in mocha, but the peripheral ones won't "match" because of the non-linear scaling.
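For the flicker in point 2, a global brightness normalization is the crudest possible starting point (my own sketch, not a real deflicker plugin). It only tames exposure-level flicker between frames; the per-feature illumination changes described above would still need local correction:

```python
import numpy as np

def deflicker(frames):
    """Crude global deflicker: scale every frame so its mean
    brightness matches the sequence average. Handles global exposure
    flicker only, not locally changing illumination."""
    means = [f.mean() for f in frames]
    target = sum(means) / len(means)
    return [f * (target / m) for f, m in zip(frames, means)]

# two flat frames flickering between brightness 80 and 120
seq = [np.full((2, 2), 80.0), np.full((2, 2), 120.0)]
out = deflicker(seq)   # both frames now average the same brightness
```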
Last edited by poisondeathray; 26th Jun 2014 at 13:45.
If you watch the "making of" video, he states that he used motion-flow software without naming what he used. He said he had to create 26 new frames for every second:
I downloaded the video and trimmed out the final sequences. It has a lot of motion artifacts in it. It still looks good, and I wouldn't waste my time trying to make it better.
If by "motion flow" he means optical flow like Twixtor etc., then he only used it for part of it. He did a lot of other post work. That "other work" is the motion tracking and patching (or manual alignment, but I doubt anyone has time for that). There are digital signatures where you can see the patch joins (he didn't do a good job of masking them out). The majority of the important parts in the center section don't have optical flow written all over them. Trust me - it's too clean for that type of motion on the frames I'm talking about.
Download the larger version. Yes, there are artifacts, but mostly on the periphery.
Unless he's using some super-duper new motion-interpolation technique, the central portion of this video is amazingly clean (too clean) in terms of optical-flow interpolation for this type of content and motion. If that were the case, he should sell the product - it would put everyone out of business.
Try the NASA image sequence with Twixtor, Kronos, MVTools2, msu_frc, etc., and you will get completely different results: many more artifacts, and it looks way, way worse. So yes, he might have used "motion flow" for some parts, but there is a lot of other work required for that interpolation.