VideoHelp Forum




  1. OK, first of all, apologies if this should have been posted elsewhere, but I think this is the most appropriate section, as this is (mostly) an audio issue.

    So, here's the issue. I have camera footage of a band concert (video & audio), and I also have a recording of the concert taken directly from the mixing console (MP3, 320 kbps). What I want to do is mix both audio sources into one. My tools are Premiere and Audition, btw.

    The problem is that when I sync the two audio sources, they sound just fine for a while, but later on they drift out of sync.

    The footage was recorded with a home camera, and the only thing I can point to that might have something to do with this is that the camera audio was recorded at 32,000 Hz whereas the MP3 is at 44,100 Hz. I tried converting the MP3 to 32,000 Hz and syncing it again, but the result is the same.
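For what it's worth, resampling changes the sample rate but preserves duration, so converting the MP3 to 32,000 Hz can't remove drift by itself; the drift comes from the devices' clocks, not the nominal rates. A minimal sketch of why (illustrative numbers only):

```python
def duration_after_resample(n_samples: int, fs_in: int, fs_out: int) -> float:
    """The new sample count scales with the rate, so duration is unchanged."""
    n_out = round(n_samples * fs_out / fs_in)
    return n_out / fs_out

# A 60-second clip at 44,100 Hz resampled to 32,000 Hz is still 60 seconds:
print(duration_after_resample(60 * 44100, 44100, 32000))  # 60.0
```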

    Let me also add that there were no dropped frames during video capture.

    Any help would be greatly appreciated.

    Many thanks.
    A.
  2. Always Watching guns1inger's Avatar
    Join Date
    Apr 2004
    Location
    Miskatonic U
    Because the audio and video recorders weren't locked to a common time base, their clocks run at slightly different rates, so the recordings drift apart on playback. You get the same effect when filming the same event with multiple cameras that aren't synchronised throughout.
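To put numbers on this: consumer-gear clocks are typically off by some tens of parts per million, and the error accumulates linearly with recording time. A rough sketch (the 50 ppm figure is a hypothetical example, not a measured value):

```python
def drift_seconds(clock_error_ppm: float, elapsed_s: float) -> float:
    """Drift accumulated when one device's clock runs fast or slow by a given ppm."""
    return elapsed_s * clock_error_ppm / 1_000_000

# e.g. a 50 ppm clock error over a one-hour concert:
print(drift_seconds(50, 3600))  # 0.18
```

A fifth of a second is several video frames, which is exactly the order of drift people report over long recordings.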

    So what can you do? Basically, it comes down to working out where they start to drift, splitting the files, re-syncing, perhaps time-stretching one clip to match the other, and repeating as often as necessary until they stay aligned. Time-consuming and frustrating.
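The time-stretch factor doesn't have to be found by trial and error: it follows from the offsets measured at two sync points. A sketch, assuming you've noted how far ahead the drifting clip is at each point (the numbers here are hypothetical):

```python
def stretch_ratio(t1: float, offset1: float, t2: float, offset2: float) -> float:
    """Ratio to apply to the drifting clip between reference times t1 and t2.
    offsetN = (drifting clip's time) - (reference time), measured at tN."""
    return (t2 + offset2 - (t1 + offset1)) / (t2 - t1)

# In sync at 0 s, but 0.5 s ahead after 600 s -> stretch by ~0.083 %:
r = stretch_ratio(0.0, 0.0, 600.0, 0.5)
print(round((r - 1) * 100, 4))  # 0.0833
```

Apply the resulting percentage in your editor's time-stretch (with pitch preserved) to the drifting clip between the two points.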
  3. Member turk690's Avatar
    Join Date
    Jul 2003
    Location
    ON, Canada
    I also use Premiere Pro and Audition.
    When shooting events, I separately capture audio from the mixing console (insert jack shorts go into appropriate channels of an m-audio 1010LT card in a PC for just that purpose). Video comes from a mix of consumer and prosumer camcorders, the types of which vary depending on when and if I can get them (Canon XHS1, Sony CX12, CX550, etc.)
    A few years back, like many in this forum, I was initially driven to despair by the gradual loss of sync between video and audio among the different sources once all the media were transferred to the NLE PC's capture drive.
    That's one of the reasons I capture audio separately: it's the first thing I edit and equalize (in Audition) and lay down on the first audio track in Premiere Pro. This whole unbroken audio (2 hours or so; never more than 3, as Premiere becomes kvetchy) is the backbone of the whole project, akin to a tree trunk to which the branches and leaves in turn connect.

    Then I gradually lay down each video in the appropriate track for each camera, syncing it to this first audio track. Over long captures, as we've noticed, audio and video gradually drift out of sync by anywhere from 3 to 10 frames over an hour. So what I do is, for example, cut an hour's length of DV AVI down on the timeline into four or five chunks, and synchronize each chunk separately with the first-track audio. From the beginning to the end of a 15-minute AVI chunk, the sync error grows to no more than two or three frames at worst, which is generally not perceptible.

    This way, the loss of sync does not pile up to a murderous, teeth-gnashing length by the end of a long video file. It's still there, but largely imperceptible (experimenting, I find you begin to notice it when audio and video are off by 4 frames or more). Cutting the long video file into ever smaller chunks improves the apparent sync further, but requires more hassle. I have tried what others suggested, which is to find the actual difference between audio and video and stretch the audio by that percentage, but with sometimes five vidcams in a multi-camera set-up that is even more time-consuming and maddening, and I have always come back to cutting the video into chunks.
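The chunk sizes above follow directly from the drift rate and the threshold at which it becomes noticeable. A quick back-of-the-envelope calculation (illustrative figures in the same range as the post):

```python
def max_chunk_seconds(drift_frames_per_hour: float, max_frames: float) -> float:
    """Longest chunk that keeps accumulated drift under max_frames,
    assuming drift grows linearly with time."""
    return 3600 * max_frames / drift_frames_per_hour

# With 8 frames/hour of drift, staying under 2 frames of error means
# re-syncing roughly every 15 minutes:
print(max_chunk_seconds(8, 2) / 60)  # 15.0
```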
    When the video has been cut up and moved, depending on the situation, there may be a few frames' worth of gap between chunks. It's easy to stretch the video on either side of the gap and crossfade one into the other (you don't notice the crossfade unless someone points it out).
    With this, the camera audio tracks end up more or less in sync throughout, with one another and with the first audio track, so they're now usable for things like applause and ambience.
    The recent AVCHD camcorders are a boon because they record in 2 GB chunks, which equates to 15 or 16 minutes of AVCHD (the Sony CX12 at the FH setting, for example). The videos come pre-cut for me! All I have to do is sync each chunk to its appropriate point against the first-track audio.
    If a camera's audio has a different sampling frequency from the others, I open a separate project for it and export it to an AVI file with audio at the same sampling frequency (44.1 or 48 kHz) as the rest. I generally capture the mixing-board audio at 48 kHz because that's the common rate across DV (SP), DVD, AVCHD, etc.
    For the nth time: with the possible exception of certain Intel processors, I don't have, and have never owned, anything whose name starts with "i".


