Latency is the perceived delay between taking an action and hearing the result. If you are using a software synth, you would notice latency as the gap between pressing a key on your MIDI keyboard and hearing the resulting audio. Anything under 20ms is considered very good, and under 10ms is essentially real-time.
For any given buffer size, the higher the sample rate, the lower the latency. For example, with a 256-sample buffer, switching from 44.1kHz to 96kHz drops the buffer latency from 5.8ms to 2.7ms (256/44100 ≈ 0.00580s vs. 256/96000 ≈ 0.00267s).
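The calculation above is simply buffer size divided by sample rate. A small sketch (the function name is illustrative, not part of any driver API):

```python
def latency_ms(buffer_frames, sample_rate):
    """Time one buffer of audio represents, in milliseconds.

    This is the minimum per-buffer latency; real-world latency also
    includes converter and driver overhead.
    """
    return buffer_frames / sample_rate * 1000.0

# 256-sample buffer at the two rates discussed above:
print(round(latency_ms(256, 44100), 2))  # 5.8
print(round(latency_ms(256, 96000), 2))  # 2.67
```

Note that doubling the sample rate halves the latency for a fixed buffer size, but it also roughly doubles the amount of data the CPU must process per second.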
As the latency is decreased, the CPU/application has less time to process each piece of audio. If the latency is decreased too far, the audio will begin to break up because the CPU/application was unable to process the audio before it was needed. The only solutions to this problem are to:
- Reduce the CPU load by turning off some plug-ins, tracks or busses.
- Increase the buffer size to give the CPU more time to process the audio.
- Decrease the sample rate of the project.
- Turn off other processes that may be stealing processing time away from the application.
Of course, both the LynxONE and the LynxTWO/L22/AES16 have direct monitoring. This means you can hear what you are recording without any delay, regardless of the buffer setting or CPU load.