I can't seem to find an answer or a solution - here's what I need to do:
A big screen shows live video from a camera, and I need to create about 25 frames of fading ghosting (i.e., if somebody moves in the scene, a fading ghost trail of him is left behind for about a second). Can anybody here suggest what hardware or software I need to get that kind of effect? Sorry if the question has a really simple answer; I'm new to live video.
See the image to better get my idea:
Thank you, your help is appreciated!
You need something that has a loop-able delay device.
So instead of the output frames being just the input frames (1, 2, 3, 4, ...), each output frame is the current frame blended with decayed copies of the previous frames (1, 1+2, 1+2+3, ...).
And the brightness/transparency of the later (fed-back) images would be a function of the intensity of the feedback (there would also have to be some compensating reduction of overall brightness, however).
This can be done easily with files in most NLEs and with AVISynth (I have done this myself), but I don't know what devices do exactly like what you're asking - LIVE. You would want something like "video echo" or "video delay line" or "video frame delay", etc. This is similar to, but not having the same properties as, Optical feedback. Also similar to slow framerate light trails (both operate with fed-back frame-delayed images). It's possible that a standard Frame Synchronizer might have that feature as an optional artistic "effect".
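For anyone who wants to prototype the feedback math offline before hunting for a live device, here's a rough numpy sketch of the idea described above (this is just an illustration, not any particular product's algorithm; the function name, the decay constant, and the 25-echo default are all made up for the example):

```python
import numpy as np

def echo_trail(frames, n_echoes=25, decay=0.8):
    """Blend each frame with decayed copies of its n_echoes predecessors.

    frames: list of float arrays with values in [0, 1].
    decay:  per-frame opacity falloff for the fed-back images.
    The result is divided by the total weight so overall brightness stays
    in range (the "compensating reduction of overall brightness").
    """
    weights = [decay ** k for k in range(n_echoes + 1)]  # current frame gets weight 1
    total = sum(weights)
    out = []
    for i in range(len(frames)):
        acc = np.zeros_like(frames[0])
        for k, w in enumerate(weights):
            j = max(i - k, 0)          # clamp at the first frame
            acc += w * frames[j]
        out.append(acc / total)        # normalize overall brightness
    return out
```

A static scene passes through unchanged (the weights sum to 1 after normalization), while anything that moves leaves progressively dimmer copies behind, which is exactly the "video echo" behaviour you'd want from a live delay box.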
Last edited by Cornucopia; 21st Oct 2014 at 00:36.
Just throwing out some ideas, not sure if/how well they will work
You might be able to do it live with graphedit/graphstudio with ffdshow in the filter chain. Since ffdshow has an "avisynth" box, you can apply filters that way.
There is also this project you might want to explore
Can't you just lay down about 8-10 tracks of the same event and stagger the position? With everything blended as an "ADD".
Budwzr, from reading his original post again, it looks like he wants to do it live. Maybe the OP will chime in and say whether it's live or to be done in post.
The screenshot provided by the OP seems to be more than a simple frame overlay... a kind of pixellation/particle effect is present as well.
It is possible to do a simple frame overlay on mainstream machines in almost real time, but adding (heavy) particle effects is likely to lag unless you have a very powerful machine.
Some cameras have the ability to do slower than normal shutter speeds which may get you partway there. You probably want to look into video DJ software -- but I don't know any specific product well enough to make a recommendation.
Here is an example of a simple frame-delay overlay that I did in AVISynth (IIRC, 8 successive frames):
Note, it doesn't have quite the same SMEAR as the OP's - you can see each individual frame's contribution, even though it was shot at 60p. This would lead me to agree with MaverickTse's assessment.
Well, you gave it a yeoman's try. I think the OP mentioned 25 frames. There might be some unseen forces here too, like number of frames / speed of action × amount of blur... Something like that. You might need the perfect storm.
Or another possibility is a high framerate camera. That would require very little blur to look continuous.
Another possibility is that it was done with a very slow shutter speed. Slow shutters do produce ghosting, as in a waterfall shot. Notice there's no background visible. That suggests the lighting didn't reach that far because it's very dim, just enough to barely illuminate the dancers.
Last edited by budwzr; 21st Oct 2014 at 16:46.
Yes, I agree that example is not a single simple delay or offset - but not because of the "streaks". It's the timing of the frame accumulation that indicates this.
As the pair move off to either side (you can choose either), the most recent "echo" should be the next most distinct (next highest in opacity) if the falloff were linear. However, there are other images earlier in the sequence temporally that have higher opacity. This suggests at least some compositing was done.
The OP wasn't clear whether he wanted something exactly like that (much harder to do live, in realtime), or whether a simple offset would suffice.
Ah-ha. Interesting article. So it's a still image sequence composite. Does it animate, I wonder? That would be very cool too.
I don't think it is actually a still image sequence, per se. More of a composite of time exposures (aka slow shutter speed) plus flash at end.
Not animated, I'm pretty sure.
That's where mine is different, because mine works on video in AVISYNTH. But it gives me ideas...
So this is a composite of a sequence of long-exposure images with flash....
"Motion blur" can simulate the effect to some extent, but I can't get the idea of the flash... (I'm a noob in photography)
Is it working like this?
1> [start long exposure] (the trail) → [flash] (solid body outline) → [end exposure]
2> Repeat 1 in a different pose
3> Composite the results of 2
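The three steps above can be simulated numerically. Here's a rough numpy sketch of that exposure-plus-flash recipe (purely illustrative; the function names and the `flash_gain` value are invented for the example, and real photographic accumulation is not a simple mean):

```python
import numpy as np

def expose_with_flash(frames, flash_gain=0.6):
    """Simulate one long exposure that ends in a flash.

    frames: arrays in [0, 1] covering one pose. The exposure is the mean
    of all frames (the smeared trail); the flash adds extra weight to the
    final frame so the body there reads as a sharp, solid outline.
    """
    trail = np.mean(frames, axis=0)
    return np.clip(trail + flash_gain * frames[-1], 0.0, 1.0)

def composite_poses(exposures):
    """Step 3: 'lighten' blend, i.e. per-pixel maximum across exposures."""
    return np.maximum.reduce(exposures)
```

The per-pixel maximum works here because the background is near-black, so each pose's bright pixels survive the composite without the backgrounds adding up.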
So I attempted to make a simulation with 8 overlaps...
Each trailing image also has "(TV-)noise" applied.
Overlapping with more layers will give longer trails but may crash the program at some point...
Note that this is already quite heavy and not suitable for LIVE use.
Still, the smear is not quite the same.
The Red Mistress-Demo.avi : Video Source
RedMistress-MotionBlurTest01.zip : AviUtl project
RedMistress-MotionBlurTest01.avi : FX applied (1280x720, Lagarith AVI)
Yes, I believe that is how it was done. The long exposure accumulated the motion, providing the trails, and the flash at the end gave just enough more light to show up as the strongest image, and by being short and the final image in the exposure, it is sharp & fixed. Then, multiple sequences were composited.
I like your test. Will have to analyze it tonight...
Smells like a wiener.
Looks nice Maverick
There are probably several ways to do something like this in avisynth. This example combines a 16-frame delay accumulation with an 8-frame delay. Another overlay is put on top as the current frame acting as the "leading" frame, otherwise the accumulated frames tend to blend in too much - its role is similar to the flash used for the real thing. So two overlays are used, and it's slower to process. This method uses ClipBlend from StainlessS (an external avisynth plugin), but there are other ways.
You should be able to get realtime with avisynth with a single staggered layer on a decent system, but if you composite multiple layers, motion blurs, other effects, etc., you won't get real time. I think you should be able to use the ffdshow avisynth box, perhaps with a live feed in graphstudio, to apply the avisynth filters.
Overlay() is inefficient speed-wise in avisynth compared to mt_merge(), but layer blending options are more limited with masktools. There are a few user-made masktools functions that offer modes like "lighten", "screen", etc., but they have problems with YUV and chroma handling.
1) e.g. preview of a single 8-frame delay
2) 8+16 (you can see the 8-frame "echo" of the female dancer's hand, combined with the longer 16-frame "echo". This is a composite of sorts like the original example pic, because there is an older "echo" that has higher opacity, except done with video)
3) Another way to get "smoother" trails, instead of "discrete echoes" from frames, is to interpolate the frame range using mvtools. You're essentially cramming in more frames per unit time, so the trails are more contiguous. This one combines 8+16 as above with 2 interpolated versions. Notice there is a background dude in the white shirt (obviously you'd crop that out), but that's an "echo" from an earlier time range from the interpolated version. Another method is you could probably apply some motion blurs to smooth out the echoes.
AVISource("The Red Mistress-Demo.avi")
AssumeFPS(24000,1001)
main=last
main
clipblend(delay=8)
a=last
main
clipblend(delay=16)
b=last
mix=overlay(a,b, mode="lighten")
overlay(mix,main, mode="lighten")
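The structure of that avisynth script (two delay accumulations, "lighten"-blended, with the current frame on top as the leading "flash") can be sketched in numpy for anyone without avisynth handy. This is just an approximation of the idea; the function names are invented here, and a real ClipBlend(delay=N) is not necessarily a plain running mean:

```python
import numpy as np

def delay_blend(frames, delay):
    """Average each frame with its `delay` predecessors - a rough
    stand-in for an N-frame delay accumulation."""
    out = []
    for i in range(len(frames)):
        window = frames[max(0, i - delay): i + 1]
        out.append(np.mean(window, axis=0))
    return out

def lighten(a, b):
    """'Lighten' blend mode: per-pixel maximum."""
    return np.maximum(a, b)

def echo_8_16(frames):
    a = delay_blend(frames, 8)    # short echo
    b = delay_blend(frames, 16)   # long echo
    # the current frame on top plays the role of the flash
    return [lighten(lighten(x, y), f) for x, y, f in zip(a, b, frames)]
```

Because the current frame is lighten-blended on top at full opacity, the subject stays solid while both echo lengths contribute dimmer trails behind it.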
It's probably difficult to get the smear/trails exactly the same as in the pic as a video post effect (and that example was a still; we don't have motion samples to examine the falloff function's characteristics). But there are many things you can try, like different delay values as shown above, different effects, layer blending modes, motion vector blurs, many different things - just throwing out some ideas.
There are other ways to do even more stylistic trails (even more than the original example), such as using motion tracking for the particle emitter(s) and particle engines for effects, but of course not in realtime. Realistically only the single-layer "echo" of x frames could be done in realtime, perhaps two layers, but I doubt more.
Last edited by poisondeathray; 22nd Oct 2014 at 18:01.
Here's a crude method but it only works on a black background:
ffVideoSource("dance.mp4")
src=last.Trim(16,0)
Merge(SelectEven(),SelectOdd())
Merge(SelectEven(),SelectOdd())
Merge(SelectEven(),SelectOdd())
Merge(SelectEven(),SelectOdd())
ConvertFPS(29.97)
bmask=src.mt_binarize(50).Invert().Blur(1.0)
Overlay(src, last, mask=bmask)
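For reference, the trick in that script - repeatedly merging even/odd frame pairs to average ever larger groups of frames, then masking the blurred result onto the dark background so the subject stays sharp - looks roughly like this in numpy (an illustration only; the function names and threshold are made up, and the real script aligns timing with Trim rather than block indexing):

```python
import numpy as np

def merge_pairs(frames):
    """One round of Merge(SelectEven(), SelectOdd()): average frame
    pairs, halving the frame count."""
    return [(frames[i] + frames[i + 1]) / 2 for i in range(0, len(frames) - 1, 2)]

def jagabo_style(frames, rounds=4, threshold=0.2):
    blurred = frames
    for _ in range(rounds):
        blurred = merge_pairs(blurred)   # 2**rounds frames averaged per output
    # repeat each blurred frame to restore the original count (like ConvertFPS)
    stretched = [blurred[min(i // (2 ** rounds), len(blurred) - 1)]
                 for i in range(len(frames))]
    # where the current frame is bright (the subject on black), keep it
    # sharp; elsewhere show the accumulated blur (the mask/Overlay step)
    return [np.where(f > threshold, f, s) for f, s in zip(frames, stretched)]
```

Note why the black background matters: the mask is derived purely from brightness, so any bright background detail would punch holes in the trail.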
Mine was similar, more convoluted, but didn't have the light-accumulation problem (pardon my AVISynth inefficiencies):
clipA=DirectShowSource("W:\HighlightsPromo.avi").AssumeFPS("ntsc_double")
clipB=DeleteFrame(clipA,*end*)
clipB=DuplicateFrame(clipB,0)
clipC=DeleteFrame(clipB,*end*)
clipC=DuplicateFrame(clipC,0)
clipD=DeleteFrame(clipC,*end*)
clipD=DuplicateFrame(clipD,0)
clipE=DeleteFrame(clipD,*end*)
clipE=DuplicateFrame(clipE,0)
clipF=DeleteFrame(clipE,*end*)
clipF=DuplicateFrame(clipF,0)
clipG=DeleteFrame(clipF,*end*)
clipG=DuplicateFrame(clipG,0)
clipH=DeleteFrame(clipG,*end*)
clipH=DuplicateFrame(clipH,0)
clipAB=Merge(clipA, clipB)
clipCD=Merge(clipC, clipD)
clipEF=Merge(clipE, clipF)
clipGH=Merge(clipG, clipH)
clipABCD=Merge(clipAB,clipCD)
clipEFGH=Merge(clipEF,clipGH)
clipABCDEFGH=Merge(clipABCD,clipEFGH)
clipABCDEFGH=ConvertToYV12(clipABCDEFGH)
clipABCDEFGH=Dup(clipABCDEFGH)
return (clipABCDEFGH)
Last edited by Cornucopia; 22nd Oct 2014 at 20:23.
Last edited by jagabo; 22nd Oct 2014 at 22:53.
I like jagabo's the best by far. Even better than the OP's sample.
I think the real inspiration for this is Mortal Kombat
Everyone wants to do a Johnny Cage Shadow Kick live !!!!
So, there are a few possibilities here, but still nothing tested in a LIVE situation...
Anyone want to take the challenge?
In the past I've done something like what poisondeathray suggested in post #3. I opened a capture graph with DirectShowSource() in an AviSynth script, added effects with the script, and played the script with a media player. The biggest problem is that only some capture devices can be used this way. And if you want very long complex trails with high def video you may run into CPU power problems.