Electronic music - notes on audio APIs


Why latency exists (the long version)

Latency and physics

Latency in the real world exists because of distance and the speed of sound.


For some context on how long a millisecond is, and what distance does to sound:

a mic shoved up against a guitar cab has maybe 1ms between speaker and mic
talking to someone at one or two meters means 3ms to 6ms
a small-to-moderate music practice space easily has ~15ms of delay from one wall to the other
opposite ends of a 15m bus would be roughly 40ms
two frames in a 24fps movie are 42ms apart, two frames at 60fps are 17ms apart
halfway across a sports field is easily 100ms

All of which is just physical distance divided by the speed of sound (roughly 343 m/s in air). Use Wolfram Alpha if you're lazy.
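
To sanity-check those figures, a small Python sketch (the distances are my own rough guesses for the scenarios above):

 # Delay is just distance divided by the speed of sound.
 # 343 m/s is air at roughly 20 degrees C; the exact figure varies a little
 # with temperature, which doesn't matter at this level of precision.
 SPEED_OF_SOUND = 343.0   # m/s

 def delay_ms(distance_m):
     return distance_m / SPEED_OF_SOUND * 1000.0

 # distances are rough guesses for the scenarios above
 for label, metres in [
     ("mic right on a guitar cab",       0.3),
     ("talking at one or two meters",    1.5),
     ("across a practice room",          5.0),
     ("opposite ends of a 15m bus",     15.0),
     ("halfway across a sports field",  35.0),
 ]:
     print(f"{label}: {delay_ms(metres):.1f} ms")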


So distance alone is

why bands may well watch their drummers
why in larger spaces you may want to use headphones instead (but not bluetooth ones)
one of a few reasons orchestras have conductors


In musical context

Hardware, and the nature of digital audio

Why larger buffers are generally useful

When smaller buffers are required

On drivers and APIs

Windows APIs

Some history

On custom ASIO drivers

In theory, you can write an ASIO driver for any hardware.

It won't be one installed automatically, and these days there may be less reason to do so (in that you can get similar latency with Core Audio/WASAPI).

...but e.g. the Kx project, which made a specific series of Sound Blaster cards more capable, also included ASIO support, and would let those cards get down to on the order of 5ms.


On ASIO wrappers

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)


ASIO usually means "this driver ignores Windows sound APIs and talks directly to the sound hardware", often exposing just the ASIO API, because much of the point is avoiding all the variation that comes from the Windows APIs.

This is also called native ASIO, mostly when contrasted with ASIO wrappers.


ASIO wrappers have a different goal.

ASIO wrappers open a sound card via a regular Windows sound API (in practice typically WDM/KS or WASAPI), force settings that lower latency (small buffer, exclusive mode if possible), and present it via the ASIO API.

Yes, this is counter to ASIO's shortest-path-to-the-hardware principle, but there's still good reason to do it.

ASIO wrappers are mostly about pushing the underlying hardware to its lowest latencies.

Yes, you will only get latencies that were always possible to get from that underlying sound API anyway.

So why add a layer?

Convenience, mostly.

  • it is easier for you to figure all this (small-buffer, possibly-exclusive) stuff out in one place,
namely the wrapper's settings, rather than for every DAW-plus-soundcard combination you have.
This not only saves work; the config details also vary between DAWs, which can be more fiddly and/or confusing.
  • it is easier for programs to talk ASIO than to expose all the very specific API tweaking that gets them something similar
  • not unimportantly, using such a wrapper can be easier to explain to people who care more about music than about decades of idiosyncratic programming history.
  • there are also some DAWs/software that speak mainly or only ASIO, because their approach is to leave figuring out low latency to something external, and talk to that.


There are a few more useful reasons hiding in the details, like

  • you can often force WASAPI cards down to maybe 5-10ms without exclusive mode, which means you don't have to dedicate a sound card to a DAW that only talks ASIO.
Which is good enough e.g. for playing some piano on a laptop on the go, so pretty convenient.
  • some ASIO wrappers can use different sound cards for input and output, at the cost of slightly higher latency (and it will probably glitch at the lowest settings), something that DAWs talking native ASIO will typically refuse to do (for latency and glitch reasons).
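
For a concrete sense of what "force lower-latency settings via a regular API" looks like from a program's point of view, here is a rough sketch using python-sounddevice (a PortAudio wrapper, so an illustration under that assumption, not how actual ASIO wrappers are implemented internally): open a device via WASAPI in exclusive mode with a small buffer. The sample rate and buffer size are just plausible example values.

 import sounddevice as sd

 SAMPLERATE = 48000
 BLOCKSIZE = 128          # 128 frames at 48kHz is ~2.7ms per buffer

 # WASAPI exclusive mode locks the device to this program,
 # roughly what a native ASIO driver would also do
 wasapi_exclusive = sd.WasapiSettings(exclusive=True)

 def callback(indata, outdata, frames, time, status):
     if status:
         print(status)    # under/overruns show up here when the buffer is too small
     outdata[:] = indata  # plain monitoring: copy input straight to output

 # This uses the default devices; you may need to pick WASAPI devices explicitly
 # (see sd.query_devices()), since these extra settings only apply to WASAPI.
 with sd.Stream(samplerate=SAMPLERATE, blocksize=BLOCKSIZE,
                channels=2, dtype="float32", latency="low",
                extra_settings=wasapi_exclusive,
                callback=callback) as stream:
     print("reported latency (s):", stream.latency)
     sd.sleep(5000)       # keep running for five seconds

The numbers that matter are the blocksize and the reported latency; whether a given card actually runs glitch-free at a given blocksize is something you find out by trying.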


As far as I can tell

  • ASIO4ALLv2 is a WDM/KS wrapper.
needs to force exclusive mode
can talk to different sound cards for input and output
  • FL Studio ASIO (a.k.a. FLASIO) is a WASAPI wrapper.
Comes with FL Studio (including the demo), also usable in other DAWs
can talk to different sound cards for input and output
  • "Generic Low Latency ASIO Driver" is similar to ASIO4ALL but with different options
Comes with Cubase
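
If you want to check which of these (and which native ASIO drivers) are actually visible on a given machine, one way is to enumerate host APIs and their devices, e.g. with python-sounddevice; note that whether ASIO shows up at all depends on the PortAudio build it ships with.

 import sounddevice as sd

 # list every host API (MME, DirectSound, WASAPI, WDM-KS, ASIO, ...)
 # and the devices it exposes
 for api in sd.query_hostapis():
     print(api["name"])
     for index in api["devices"]:
         dev = sd.query_devices(index)
         print(f"  [{index}] {dev['name']}"
               f" (in: {dev['max_input_channels']}, out: {dev['max_output_channels']})")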


There also appear to be ASIO multiclient wrappers, basically ASIO in ASIO.

ftp://ftp.steinberg.net/Download/Hardware/ASIO_multiclient_driver/


"So which API is best?"

Linux APIs

Kernel level

Higher level

OSX APIs

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

Because OSX was a relatively clean slate at the time, at lower levels there is primarily Core Audio (also present in iOS, slimmed down(verify)), which has been quite capable for a long while.


https://developer.apple.com/library/archive/documentation/MusicAudio/Conceptual/CoreAudioOverview/Introduction/Introduction.html

https://developer.apple.com/audio/

Lowering latency

In general

tl;dr

  • zero latency does not exist; a few milliseconds of relative offset happens all over the place
  • amounts of added latency can matter, though
  • latency matters when hearing yourself live, or syncing to something live (e.g. looper pedals)
  • digital input, output, and/or processing each add some latency
in ways that are (judging by forums) usually partly misunderstood; see the rough calculation after this list
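
As a rough idea of where the digital part comes from: each buffer of N frames at sample rate SR takes N/SR seconds, and a signal passes through a few such buffers on its way in and out. A back-of-the-envelope sketch (the "three buffers" round-trip guess is a simplification of mine; real drivers report their own figures):

 def buffer_ms(frames, samplerate):
     return frames / samplerate * 1000.0

 for frames in (64, 128, 256, 512, 1024):
     one_way = buffer_ms(frames, 48000)
     # very rough: assume one buffer's worth each for input, processing, and output
     print(f"{frames:5d} frames @ 48kHz: {one_way:5.1f} ms per buffer,"
           f" ~{3 * one_way:5.1f} ms as a crude round-trip guess")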


Decide how low is necessary

The basic steps

API stuff

Windows API tweaking

Use a sound API designed for lower latency

Use sound API settings that lower the latency: exclusive mode

Use sound API settings that lower the latency: smaller buffer sizes

Linux API tweaking

OSX API tweaking

DAW considerations

Considering effects

Higher sample rates?

"Delay compensation?"

End-to-end latency

Further considerations

Hardware bus?

On network latency

Unsorted