[Lumiera] Audio editing

Christian Thaeter ct at pipapo.org
Sun Jul 5 22:59:41 CEST 2009


Ichthyostega wrote:
> hendrik at topoi.pooq.com schrieb:
>> On Sun, Jul 05, 2009 at 05:15:38PM +0200, Christian Thaeter wrote:
>>> Ichthyostega wrote:
>>>> ... For everything I am aware of, the frame rate was fixed to a limited
>>>> set of values dictated by hardware, e.g. 8000, 44100, 48000, 96000,
>>>> 192000, etc.
> 
>>>> Christian Thaeter schrieb:
>>> Those are sample rates; dunno if one wants to call a single audio sample
>>> a frame :)...
> 
> yes, "audio sample frame" is a common term. Which seemed to have confused
> me ;-)  because Juan probably referred to the NTSC video framerate (23.976)
> -- thanks for pointing that out!
> 
> I think, for the most common video framerates, we should provide fixed
> identifiers on the API/GUI (and internally represent them as rational numbers).
> 
>>> .... Even worse, the clocks of a lot of (consumer) cameras drift depending
>>> on temperature, moon phase and mood; the audio sample rate is not exactly
>>> 48kHz but some Hz more or less (I'd guess frame rates may drift too). Some
>>> people told me that this gives noticeable audio desyncs depending on
>>> whether the camera was warm or cold. We'll face endless fun with such things :).
> Yes, can confirm that. Both image and sound framerate can drift on consumer
> cameras, which gives you endless headaches esp. when filming a musical event
> with several such cameras....
> 
>>> I think some automation for frame/sample rate fine adjustment controlling
>>> some noise gate (extend/compact silent phases) or frame drop (or doubling)
>>> might be a foreseeable tool to correct that.
> 
> we could try to find a solution here; but I'm rather sceptical with regards to
> getting sound automatically synched to the image. Maybe a semi-automatic tool
> could work. But even a tool of the sort "pin this (mouse click in waveform)
> sound elm. to that video frame and stretch sound if necessary" would be a great
> help for this tedious task...

I intend to do this semi-automatically: the user has to pin the time
points and events, but with some automation curves and other tools
Lumiera will be able to find the best positions where to
stretch/compress the footage, a kind of LaTeX for video :). Events you
want to sync (visible and audible cues, like someone strumming a guitar
or banging a drum) are also the points in time where distortion would be
most noticeable, so let the software figure out (with some manual
assistance) where best to do the stretching, away from these sync
points. That should be much better than completely manual fiddling.

I am thinking about some kind of automation curve which acts like a
frequency modulation, where one can increase/decrease the frame/sample
rate slightly (or even more). Side note: we might want such a curve
independently for the timecode too; long-term synchronization of footage
shot with different cameras needs reliable (corrected) timecodes which
have to be brought into sync.
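As an illustration of such a rate-modulation curve (again just a sketch
with invented names, using naive linear interpolation where a real tool
would use a proper band-limited resampler): an automation value near 1.0
scales how fast the input is consumed, and integrating it step by step
gives the read position for each output sample.

```python
# Hedged sketch: resample audio under an automation curve r(t) that
# modulates the effective playback ratio around 1.0.
def resample_with_curve(samples, rate_curve, sample_rate):
    """samples: input audio; rate_curve(t) -> ratio near 1.0, e.g.
    1.001 to consume input 0.1% faster (correcting a camera whose
    clock ran slow). Returns the corrected sample list."""
    out = []
    pos = 0.0               # fractional read position in the input
    t = 0.0
    dt = 1.0 / sample_rate
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # naive linear interpolation between adjacent input samples
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += rate_curve(t)  # advance by the (modulated) ratio
        t += dt
    return out

# A constant 0.1% correction shortens a one-second clip accordingly;
# a drifting camera would instead get a slowly varying curve.
corrected = resample_with_curve([0.0] * 48000, lambda t: 1.001, 48000)
```

Because `rate_curve` is a function of time, the same mechanism covers
both a constant offset and a temperature-dependent drift that changes
over the length of the recording.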

Caveat for the 'workflow' thinkers: does this need some special
preparation/GUI mode? Or is it enough to load all footage from the
different cameras roughly timecode-aligned below each other and then do
some semi-manual preprocessing to correct timecode offsets and
modulations for all clips before doing anything else?

	Christian

> 
> 	Hermann
> 
> _______________________________________________
> Lumiera mailing list
> Lumiera at lists.lumiera.org
> http://lists.lumiera.org/cgi-bin/mailman/listinfo/lumiera
