hi!
i took a pretty long break from being in the studio and am currently catching up with the developments regarding the sync methods. i didn't find this asked here before, so my question is: how does the U-SYNC method compare to the old plugin or the audio file from the file generator? is it any better than the old methods?
i am using RME Firefaces as audio interfaces and found the explanation that syncing with an audio signal from the same interface will always give the best sync results highly plausible, since all audio tracks reach the outputs with the same timing, down to the resolution of a sample. so using a USB connection instead of the audio connection over the Fireface feels kind of counterintuitive.
thanks in advance for your answer!
Usync vs. old methods
Re: Usync vs. old methods
I understand your thinking.
And you're not wrong: syncing with the audio method is very accurate.
But it's "only" sample accurate. If the pulse rate (your tempo at 24 PPQ) doesn't divide evenly into your sample rate, you will inherently have some jitter, since the audio pulses get "quantized" to the sample grid.
For example, let's say your sample rate is 48 kHz. 120 BPM at 24 PPQ is 48 Hz, and that's great: you simply send a pulse every 1000 samples. If your BPM is 121, at 24 PPQ that is 48.4 Hz, which means you need to send an audio pulse every 991.74 samples. That gets rounded to sometimes 991 samples and sometimes 992 samples, hence the jitter of +/- 1 sample.
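If you want to play with the numbers yourself, here's a minimal sketch of that arithmetic (just illustrative Python, not anything taken from the plugin or the Nome firmware):

```python
# Illustrative sketch: how the pulse period in samples comes out for a given
# tempo, and why a non-integer period means +/- 1 sample of jitter.

SAMPLE_RATE = 48_000  # Hz
PPQ = 24              # sync pulses per quarter note

def pulse_period_samples(bpm: float) -> float:
    """Number of samples between two consecutive sync pulses."""
    pulses_per_second = bpm / 60 * PPQ   # e.g. 120 BPM -> 48 Hz
    return SAMPLE_RATE / pulses_per_second

for bpm in (120, 121):
    period = pulse_period_samples(bpm)
    jitter = "none" if period.is_integer() else "+/- 1 sample"
    print(f"{bpm} BPM: {period:.2f} samples per pulse, jitter: {jitter}")

# 120 BPM: 1000.00 samples per pulse, jitter: none
# 121 BPM: 991.74 samples per pulse, jitter: +/- 1 sample
```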
U-SYNC does not use your sample rate but a much more complex mechanism, and it sends a lot more data to the Nome than simple audio pulses, at a much higher rate. It does indeed suffer from USB protocol jitter and latency, but so does your RME audio interface (which is connected to your DAW via USB), and having a higher resolution really helps compensate for this.
More data also means it is more resistant to errors and potential issues: for example, one of the features I absolutely love about U-SYNC is that if your computer or DAW crashes mid-show, the Nome will simply switch to master mode, seamlessly, without even losing the beat!
But all in all, none of this matters much; it's just tech stuff that nerds like me really like to investigate.
All you have to know is that:
- It's definitely at least as precise as audio sync
- It's soooo much simpler! Just load the plugin and... done. No complex audio routing, no solo issues, and it frees up that audio output on your interface