I'm using Blender to experiment with overlaying a map of the East Coast of the USA on video of probe Ranger 9's impact in Alphonsus crater. The crater is 56 miles in diameter, which is roughly the radius of a circle covering New York City, parts of Long Island, New Jersey, and Connecticut. As you can see from the clip below, the crater fills the entire screen by the time of impact. Is there a way, within Blender, to zoom the overlaid map so that the distance on the map matching the size of Alphonsus stays consistent with the crater as it grows?
Here is a crude gif clip of what I've done so far:
I'm thinking maybe it can be done by using keyframes, but not sure how.
You're either going to have to calculate the non-linear acceleration of the moving, shaking image and apply that to your map -- or use the brute force method:
Match the first and last frames, add keyframes; match the center frame, add a keyframe; match the frames halfway between those points, add keyframes... etc. It's only 12 seconds, and it sounds like a lot of work, but it's going to be the quickest way when all is said and done.
You may get lucky and need fewer keyframes in the earlier part of the sequence. You may be able to track the center point automatically, but this is going to require so much manual manipulation anyway that that may be a wasted effort.
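The halving scheme above can be sketched in a few lines of Python (the frame range here is just an assumption for a 12-second clip at 30 fps, not taken from the actual footage):

```python
def bisection_keyframes(first, last, levels):
    """Return keyframe positions in the order you'd place them:
    endpoints first, then the midpoint, then quarter points, etc."""
    frames = [first, last]
    intervals = [(first, last)]
    for _ in range(levels):
        next_intervals = []
        for a, b in intervals:
            mid = (a + b) // 2
            if mid not in (a, b):  # stop when intervals can't split further
                frames.append(mid)
                next_intervals += [(a, mid), (mid, b)]
        intervals = next_intervals
    return frames

# Two rounds of halving on a 12-second, 30 fps clip (frames 1-360):
print(bisection_keyframes(1, 360, 2))  # -> [1, 360, 180, 90, 270]
```

You keep halving until the overlay stops visibly slipping between keyframes.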
Thanks! Would you say the best way to resize the image overlay is to add a Transform effect strip and then use the Uniform Scale feature?
Manually doing it is going to be a PITA, and it won't match precisely unless you spend many hours fiddling with it frame by frame.
Typically what you would do in other programs is motion track the moon footage and apply that tracking data to a null object or "placeholder". You then link the overlay layer to that null object - so as the moon footage scales, rotates, and translates in view, that motion data is applied to the overlay image, and it scales, rotates, and translates in the same manner. Blender has a motion tracker now, but I'm not sure of the exact steps in Blender - you should definitely ask in the Blender forums, they are typically very helpful over there.
Some problems I can foresee: that Ranger footage you linked to has duplicate frames and blended frames. This is more difficult to track accurately "automatically" (especially the blends, as object borders/characteristics are no longer clearly defined). You'll probably need some manual assist tracking if you attempt it on that video. If you can find the original footage, or higher quality footage, that would be a better starting point.
This one is clearer, with no blends (only duplicates), thus easier to track. You could even decimate the duplicates if you wanted to (and change the framerate accordingly).
You probably need to start with either a high resolution map image or a vector map image (vectors are infinitely scalable), because there is a fair bit of scaling in this sequence.
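As a back-of-the-envelope check of how big that source map needs to be (every pixel and mileage figure below is an assumption for illustration, not measured from the clip):

```python
end_px = 700          # crater diameter on screen at impact - assumed
map_width_miles = 500 # geographic width of the source map image - assumed
crater_miles = 56     # diameter quoted in the first post

# To avoid upscaling the map past 1:1 at maximum zoom, the 56-mile circle
# on the source map must already span at least end_px pixels, so the whole
# map image needs to be at least:
min_map_px = end_px * map_width_miles / crater_miles
print(round(min_map_px))  # -> 6250
```

With numbers like that, a vector map you can rasterize at any size is the safer bet.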
I would try setting both tracks to 50% composite level and just use one giant keyframe in conjunction with Track Motion or Pan/Crop tool on each track. Not sure how to do it in Blender.
There's no visual way to tell if the tracking is right or wrong. Like when you look out the window of a landing airplane. There's no frame of reference.
Like Einstein's Special Theory of Relativity, two points in motion while observing each other cannot know if they are coming or going.
So long as the start and end are synced up, I think you're good to go. But you do need a super high-res image.
Last edited by budwzr; 10th Jun 2014 at 18:41.
The "visual" way is to see that it no longer matches, i.e. the composite falls apart. In this context, an example would be "New York" scaling at a different rate than the lunar crater - so it's not "stuck on" properly.
A good track means it should "stick like glue", as if it was there in the first place. There should be no "slippage" or wobbling.
What you're saying is true, as usual, for high quality work - but I was thinking of just broadcast quality. I see a lot of fudged animations that still "work" and impart the desired result.
So now the OP has three offers, hahaha.
P.S. If the spacecraft is coming in naturally, like falling with gravity or a rocket boost, wouldn't the trajectory be pretty much straight in? So THAT curve would be predictable. You just have to get NY lined up, no?
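The straight-in idea does make the zoom curve predictable in principle: apparent size grows as 1/distance. A rough sketch with a hypothetical altitude and a constant closing speed (in reality the probe accelerates under gravity, which is exactly the non-linearity mentioned earlier):

```python
d0 = 120.0  # distance (km) at the first matched frame - assumed
v = 10.0    # closing speed (km/s) - hypothetical constant value

def scale(t):
    """Relative zoom factor t seconds after the first frame (1.0 at t=0).
    Apparent size grows as 1/distance for a straight-in approach."""
    return d0 / (d0 - v * t)

print(scale(0.0), scale(6.0), scale(11.0))  # -> 1.0 2.0 12.0
```

Even with constant speed the curve blows up near impact, so evenly spaced keyframes won't cut it.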
And NY is on a still, not a video right? So you have absolute control over where NY should be at any given time. Doesn't seem that tough. The tough part is getting decent footage. That YT sample is horrendous.
Here is a quick track using a crappy map from Google - I think this might be sort of what the OP wants. There is a bit of wobble & slippage, so it's not what I would call a high quality track, but it gets the point across. The important point is that the spatial relationship of the land mass to the crater is roughly the same per frame (i.e. it doesn't slip too much).
(ok maybe I missed New York, I think the crater is in Hicksville, but you get the point )
To do that manually in Blender (or any program) you could manually keyframe the scale and position parameters as mentioned earlier, but it's a lot of work, and you'll get more "slippage" compared to motion tracking. You end up fiddling back and forth endlessly (trust me, I've done lots of manual keyframes).
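To see why sparse manual keyframes slip, here's a toy comparison between straight linear interpolation between keyframes (Blender's default F-curve interpolation is actually Bezier, but the point is the same) and a hypothetical 1/distance zoom curve:

```python
def lerp_scale(keys, frame):
    """Evaluate scale at `frame` by linear interpolation between keyframes."""
    fs = sorted(keys)
    if frame <= fs[0]:
        return keys[fs[0]]
    if frame >= fs[-1]:
        return keys[fs[-1]]
    for a, b in zip(fs, fs[1:]):
        if a <= frame <= b:
            t = (frame - a) / (b - a)
            return keys[a] + t * (keys[b] - keys[a])

# Hypothetical "true" zoom curve, ~1/distance, over a 300-frame clip:
true_scale = lambda f: 120 / (120 - f / 3)

# Keyframing only the first and last frames matches the endpoints...
keys = {0: true_scale(0), 300: true_scale(300)}
# ...but mid-clip the overlay is way oversized - that's the slippage:
mid_error = lerp_scale(keys, 150) - true_scale(150)
print(round(mid_error, 2))  # -> 1.79
```

Each keyframe you add cuts the worst-case error, which is why the bisection approach converges but takes so many passes.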
I left the duplicates in that example, but to reduce the amount of work, you can decimate the duplicates with avisynth (fewer frames to deal with). Note that some of the "duplicates" are not 100% duplicates - there is a bit of x,y translation, and some have added grain, noise, or scratches.
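Decimation itself is simple in principle: drop a frame when it's (almost) identical to the last kept one. A minimal Python sketch with toy two-pixel "frames" (a real pass would work on full images, e.g. via avisynth as mentioned above; the fuzzy threshold is what handles the grain and scratches on the near-duplicates):

```python
def decimate(frames, threshold=0.02):
    """frames: list of flat pixel lists with values in [0, 1].
    Returns the indices of frames to keep."""
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        prev, cur = frames[kept[-1]], frames[i]
        # mean absolute pixel difference vs. the last kept frame
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            kept.append(i)
    return kept

# Frame 1 is a noisy near-duplicate of 0; frame 3 exactly duplicates 2:
frames = [[0.0, 0.0], [0.0, 0.01], [0.5, 0.5], [0.5, 0.5]]
print(decimate(frames))  # -> [0, 2]
```

After decimating, you'd lower the clip framerate by the same ratio so the timing still matches.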
Or another approach might be to stabilize the footage first, that will make manual keyframing easier.
Or another approach might be to redo the clip using a photo image sequence. I found some higher quality ones here, but the problem is some are slightly distorted (differing aspect ratios), and some are cropped or framed differently. So it's a bit of work making them match up, but the image quality is certainly better.
It's weird that I can't find the actual photos on NASA site or .gov sites (budget cutbacks ? )
Good example. Something else I notice is that these two media events don't seem very compatible. I mean, they don't convey the point very well. There's just too much going on.
New Takeaway: Maps and craters don't mix well
I thought the point was to highlight an existing old meteor crater in the NY area.
Haha - I was waiting for the part where the Moonmen send up spaceships with explosives that they drill and place into Ranger 9 to blow it up before it impacts the moon...or was that a different movie ?
Thanks for all the comments/advice.
I will try out some of these approaches. I'll use the higher quality clip that poisondeathray posted, and try to find a better map.
My purpose is purely to learn a new skill in Blender, and also because I've seen this method employed when comparing atomic bomb blasts to a radius on a map. I like the effect a lot, and find it educational. I thought it would be nice to watch Ranger 9's impact and know just how big Alphonsus crater is, since without some comparison it's difficult to tell whether it's the size of a football field or a state.
But there is (I'm 99% sure there should be) a way to do that in Blender without the manual steps. Unfortunately I'm not as comfortable with Blender (I REALLY dislike the GUI), and tend to use other programs for 3D and compositing work.
It should be the same or similar technique using Blender's motion tracker, but instead they call the "null object" an "empty". The act of linking something to that null object is called "parenting" (it's called "parenting" universally in just about every program). It's called "parenting" because the null object is the "adult" and controls all the children. So as the null moves one way, the children do too. It's like a control rig.
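The parenting idea boils down to composing transforms: the child's world transform is its local transform applied inside the parent's. A toy 2D version with just scale and translation (rotation omitted for brevity; all numbers are hypothetical):

```python
def compose(parent, child):
    """Each transform is (scale, tx, ty). Returns the child's world
    transform: the parent scales and then offsets the child's local
    placement."""
    s1, x1, y1 = parent
    s2, x2, y2 = child
    return (s1 * s2, x1 + s1 * x2, y1 + s1 * y2)

null = (2.0, 10.0, 5.0)   # tracked empty: 2x zoom, shifted (10, 5)
overlay = (1.0, 3.0, 0.0) # map's local offset relative to the empty
print(compose(null, overlay))  # -> (2.0, 16.0, 5.0)
```

This is why the overlay "sticks": you only set its local placement once, and the tracked empty supplies all the per-frame motion.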
The blender community is very helpful so you should ask and look for some tutorials there
So if anyone is interested, I "de-warped" the higher res still image sequence from the link above and re-aligned the images, assembling them into a video. This was done by motion tracking the grid overlay as guidepoints and corner pinning the edges. Certainly the resolution is higher and the details are clearer than the youtube crap - but unfortunately it runs like a fricken slideshow (because there are only 70 images, and it actually is a slideshow LOL). It will be tough to interpolate this type of motion to make it smoother without serious artifacts.