Assume that the video stream is copied from the source file, the audio is re-encoded to PCM, the audio channels are remapped, and a different wrapper (container) is chosen:

ffmpeg -i input.dv -map 0:0 -map 0:1 -map 0:1 -map 0:2 -map 0:2 -c:v copy -c:a pcm_s16le -map_channel 0.1.0:0.1 -map_channel 0.1.1:0.2 -map_channel 0.2.0:0.3 -map_channel 0.2.1:0.4 output.mov
What are the possible causes of the audio drifting (it runs ahead of the video by roughly 250 ms),
and what are possible mitigations for this problem other than -itsoffset, e.g. async=1 to force alignment?
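For reference, here is a minimal sketch of the async=1 approach (the sample rate and filter placement are assumptions for this source, not a verified fix): running the audio through aresample with async=1 lets libswresample stretch or squeeze samples so they line up with their timestamps instead of free-running:

```shell
# Sketch: video copied, audio resampled with timestamp compensation.
# 48000 is assumed to be the source's nominal rate; adjust if it differs.
ffmpeg -i input.dv -map 0:0 -map 0:1 -map 0:2 \
       -c:v copy -c:a pcm_s16le \
       -af "aresample=48000:async=1" \
       output.mov
```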

What are the ways to keep the timecode from the initial file in the output file, and align the audio accordingly?
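One hedged approach to carrying the timecode over (assuming the DV source exposes a start timecode that ffprobe can read as a stream tag; the exact tag location varies by source): extract it with ffprobe, then write it back with ffmpeg's -timecode option, which QuickTime stores as a tmcd track in the MOV:

```shell
# Read the source start timecode (stream-tag location is an assumption;
# some files carry it in format tags instead).
TC=$(ffprobe -v error -select_streams v:0 \
     -show_entries stream_tags=timecode \
     -of default=noprint_wrappers=1:nokey=1 input.dv)

# Re-wrap and stamp the same timecode onto the output.
ffmpeg -i input.dv -map 0 -c copy -timecode "$TC" output.mov
```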

Would it be possible, and would there be any benefit, to influence the muxer so that the delay is reduced?
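On the muxer question: ffmpeg does expose -muxdelay and -muxpreload, but as far as I know they mainly affect the MPEG-PS/TS muxers; for MOV, normalizing timestamps before muxing is more likely to matter. A sketch to experiment with (these flags are assumptions to try, not a confirmed fix for this drift):

```shell
# Shift timestamps so the earliest one is zero before muxing,
# and zero the mux delay/preload (no-ops for muxers that ignore them).
ffmpeg -i input.dv -map 0 -c:v copy -c:a pcm_s16le \
       -avoid_negative_ts make_zero \
       -muxdelay 0 -muxpreload 0 \
       output.mov
```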

I noticed that the -r 25 switch on the output improves/lessens the drift, but what I am really asking is how to match the audio samples to the timecode,
and how to approach this same problem once resampling is being done. I found examples similar to this:

-filter:a aresample=48000:async=1:min_comp=0.01:comp_duration=1:max_soft_comp=100000000:min_hard_comp=0.1
I am open to any mechanism, from the smallest to the biggest influencing factor: dithering, stretching, squeezing, aligning chunks to match the timestamps, and initial alignment.
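Putting those pieces together, a combined sketch (every numeric value here is an assumption to tune against the actual 250 ms drift): gentle timestamp compensation via aresample, then forcing the 16-bit sample format, which is where swresample's dithering would apply if the intermediate precision is higher:

```shell
# Sketch: video copied; audio compensated toward timestamps with
# conservative min_comp/comp_duration values, then clamped to s16.
ffmpeg -i input.dv -map 0:0 -map 0:1 -map 0:2 \
       -c:v copy -c:a pcm_s16le \
       -af "aresample=48000:async=1:min_comp=0.01:comp_duration=1,aformat=sample_fmts=s16" \
       output.mov
```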