Electronics project notes/Audio notes - Digital sound communication


This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

This is mostly about hardware interconnects. For software media routing, see Local and network media routing notes

Typically external

S/PDIF

S/PDIF ("Sony/Philips Digital Interface") (a.k.a. IEC958) is purely the protocol, not a connector.

...but S/PDIF is often carried over either

  • fiber, typically used with TOSLINK connectors
  • a single RCA connector on (preferably) a coaxial cable


S/PDIF tends to carry either

  • raw PCM
  • surround (compressed, because of bandwidth limitations), often either Dolby Digital (AC-3) or DTS



Not to be confused with

  • AES/EBU (next section)

AES3

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)


AES3, also marked AES/EBU, is a digital audio protocol.

While AES/EBU works well, it's a little annoying to deal with these days, and you are more likely to run into S/PDIF anyway. (S/PDIF seems to be a consumer variant of AES3, which is why the two formats are largely the same at a low level, but not quite compatible in a "plug it in and it works" way.)


It seems balanced AES/EBU was typically carried on XLR connectors, and is not electrically compatible with unbalanced AES3 (often on BNC connectors) - you can't connect these two variants to each other directly.

Apparently AES3id refers to that unbalanced BNC variant (75Ω coax).



ADAT

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

ADAT has referred to two distinct things:


Historically, and now rarely, to the Alesis Digital Audio Tape, a way of storing eight digital audio tracks onto Super VHS tape.


These days it much more typically refers to the ADAT Optical Interface, more commonly known as ADAT Lightpipe or often just ADAT (or lightpipe), also from Alesis.

It uses the same fiber and TOSLINK connectors as optical S/PDIF, but speaks a different, somewhat faster protocol.


It carries audio channels that are always 24-bit (16-bit devices effectively just use the 16 highest bits).

Its speed lets it carry

  • up to eight of those channels at 48kHz

...or, with the common S/MUX extension,

  • up to four channels at 96kHz
  • up to two channels at 192kHz

(The total payload stays the same: S/MUX spreads one higher-rate channel over multiple 48kHz slots, as sketched below.)
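The arithmetic behind that tradeoff, as a trivial sketch (the function name is just for illustration):

  /* S/MUX trades channel count against sample rate at constant total
     bandwidth: eight 48kHz, 24-bit slots. */
  unsigned adat_channels_at(unsigned samplerate_hz) {
      return (8u * 48000u) / samplerate_hz;   /* 48000->8, 96000->4, 192000->2 */
  }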



Typically internal

I2S

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

(Note: no technical relation to I2C)

I2S (sometimes IIS), Inter-IC Sound, is meant as an easy and standard way to transfer PCM data between nearby chips.

It separates clock and data, so it can have slightly lower jitter (and indirectly latency) than buses that don't.


As I2S doesn't specify a connector, or how to deal with longer cables (impedance and such), it is mostly used within devices.

The main exceptions are audiophile setups that want to choose their DAC separately. Since I2S wasn't made for that, it takes more care to do right: cable impedance can cause synchronization issues, particularly at higher bitrates.


Lines and bits and interpretation

The lines are

  • bit clock (BCLK) (a.k.a. continuous serial clock (SCK))
  • left-right clock (LRCLK) (a.k.a. word clock, word select (WS), Frame sync (FS))
  • data
  • ground

BCLK pulses once for each bit, so should be samplerate * bitdepth * channelcount, e.g. 1411200 Hz for CD audio (44100 * 16 * 2).

LRCLK selects the left/right channel, so completes one full cycle per sample frame.

Some also add a master clock (MCLK), often 256 times the sample rate. This is not part of standard I2S.
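As a quick sanity check of those clock relationships (a minimal sketch; the 256x figure for MCLK is a common convention, not something I2S itself specifies):

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Clock rates implied by an I2S configuration:
     BCLK pulses once per bit, LRCLK completes one cycle per stereo frame. */
  int main(void) {
      uint32_t fs = 44100;                /* CD audio */
      uint32_t bclk = fs * 16 * 2;        /* samplerate * bitdepth * channelcount */
      uint32_t lrclk = fs;                /* one full cycle per frame */
      uint32_t mclk = 256 * fs;           /* common convention when MCLK is present */

      assert(bclk == 1411200);
      printf("BCLK=%u Hz, LRCLK=%u Hz, MCLK=%u Hz\n",
             (unsigned)bclk, (unsigned)lrclk, (unsigned)mclk);
      return 0;
  }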


Note that:

  • The protocol is fundamentally 2-channel (in part due to LRCLK's function)
If you functionally want to send mono, you could send zeroes on the other channel,
but if you have the sample anyway, it makes just as much sense to output it twice, i.e. in both channels: if a receiver implements mono by picking one channel, it doesn't matter which one, and stereo playback will be double mono rather than seeming to miss one channel (see the sketch after this list).
  • Sample rate is not configured, it is implicit in the sending speed(verify),
which is part of why software bit-banging I2S would probably never sound great
  • Bit depth is implied by when LRCLK switches (which works because the MSB goes first),
with some work left to the receiver
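For example, duplicating a mono buffer into the interleaved stereo frames that I2S expects could look like this (a minimal sketch; the names are just for illustration):

  #include <stddef.h>
  #include <stdint.h>

  /* Expand mono samples into interleaved stereo frames (L R L R ...),
     writing each sample to both channels, so a receiver that implements
     mono by picking one channel gets the same thing either way.
     'out' must have room for 2*count samples. */
  void mono_to_stereo_frames(const int16_t *mono, int16_t *out, size_t count) {
      for (size_t i = 0; i < count; i++) {
          out[2 * i]     = mono[i];  /* left slot  */
          out[2 * i + 1] = mono[i];  /* right slot */
      }
  }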


DIY abuse

Because I2S interfaces are fairly high-speed, and typically DMA-assisted, they have found other uses.


Because channels=2, the sample rate is controlled by the clock, and the bit depth is somewhat implied, you can vary some aspects of what you send without negotiating them.


For example, when feeding data into an I2S DAC, you do need to do the stereo interleaving as in the spec, and use the bit depth the DAC expects, but it doesn't need to know the sample rate - it will do what you ask of it, at the rate you ask it to.


For example, the ESP8266's and ESP32's I2S is actually run from a more generic piece of hardware, roughly a glorified shift register, used to implement I2S as well as LCD and camera peripherals.

It happens to go at ~1.4MHz for CD-style audio, presumably from a configured clock divider. But if you can control the output rate, you can produce other sorts of signals: DIYers have found it's fairly stable at 40MHz, which makes it possible to produce NTSC and VGA signals, and could even sample data at that rate.
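For instance, with the legacy ESP-IDF (v4.x) I2S driver you simply state a sample rate and the hardware clocks it out accordingly; an attached DAC just follows. A sketch under those assumptions (pin numbers and the rate are arbitrary examples; newer IDF versions have a different API):

  #include "freertos/FreeRTOS.h"
  #include "driver/i2s.h"   /* legacy ESP-IDF v4.x I2S driver */

  void i2s_init_example(void) {
      i2s_config_t cfg = {
          .mode = I2S_MODE_MASTER | I2S_MODE_TX,
          .sample_rate = 32000,    /* whatever you like; the DAC follows the clocks */
          .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
          .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
          .communication_format = I2S_COMM_FORMAT_STAND_I2S,
          .intr_alloc_flags = 0,
          .dma_buf_count = 4,
          .dma_buf_len = 256,
      };
      i2s_pin_config_t pins = {
          .bck_io_num = 26,        /* example pins - adjust for your board */
          .ws_io_num = 25,
          .data_out_num = 22,
          .data_in_num = I2S_PIN_NO_CHANGE,
      };
      i2s_driver_install(I2S_NUM_0, &cfg, 0, NULL);
      i2s_set_pin(I2S_NUM_0, &pins);
      /* then push interleaved frames with i2s_write(I2S_NUM_0, ...) */
  }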


Similarly, the RP2040 has Programmable I/O (PIO) blocks[1] [2] [3], a more general take on the same idea.


You could probably send PDM over these - which would be an ironic use of something already intended for audio, but might make sense if the receiving side isn't an I2S DAC(verify).
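If you did, most of the work is generating the bitstream, i.e. sigma-delta modulating the PCM before clocking the bits out. A minimal first-order version (a sketch; the packing assumes an MSB-first shift register):

  #include <stdint.h>

  /* First-order sigma-delta: turns 16-bit PCM into a 1-bit stream whose
     density of 1s tracks the input amplitude. The integrator keeps the
     running error between the input and the 1-bit output. */
  static int32_t integrator = 0;

  static int pdm_next_bit(int16_t pcm) {
      int bit = (integrator >= 0);
      integrator += pcm - (bit ? 32767 : -32767);
      return bit;
  }

  /* Pack 32 PDM bits per PCM sample (i.e. 32x oversampling), MSB-first,
     into words a DMA-fed shift register can send. */
  uint32_t pdm_next_word(int16_t pcm) {
      uint32_t word = 0;
      for (int i = 0; i < 32; i++)
          word = (word << 1) | (uint32_t)pdm_next_bit(pcm);
      return word;
  }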



On DACs