Descriptions used for sound and music

This page is in a collection about both human and automatic dealings with audio, video, and images, including:

  • Audio physics and physiology
  • Digital sound and processing
  • Image
  • Video
  • Stray signals and noise

For more, see Category:Audio, video, images

Physical effects and/or fairly well studied

Attenuation

A reduction in the energy (amplitude) of a signal.


Attenuation in the widest sense refers to the concept in physics where loss of energy (i.e. amplitude reduction) occurs in a medium (be it electronic equipment, a wall affecting your wifi signal, or what happens when you hear yourself chew).

Attenuation is often measured in decibels.

In some contexts it is given in decibels per unit length, for example to specify expected signal loss in electrical wiring, or in sound isolation.
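To put numbers on that, a minimal sketch (Python; the amplitudes and lengths are made-up illustration values) of turning an amplitude ratio into decibels, and into a per-length figure:

 import math

 def attenuation_db(a_in, a_out):
     """Attenuation in dB between input and output amplitude (positive = loss)."""
     return 20 * math.log10(a_in / a_out)

 # A signal that comes out at half its input amplitude:
 print(attenuation_db(1.0, 0.5))    # ~6.02 dB

 # Per-length figures are just that divided by the length it was measured over,
 # e.g. a loss measured over 100 m of cable, expressed in dB per km:
 print(attenuation_db(1.0, 0.7) / 0.1)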


In electrical signal transmission, it can refer to problems relating to analog transmission over larger distances, and can be related to the expected SNR (though there are more aspects to both signal and noise in transmission).


Physical attenuation often also varies with frequency, in which case you can make a graph, or give an average in the most relevant frequency region.

For example,

  • attenuation is the major reason we hear our own voice differently on recordings: we hear a good part of the lower frequencies through our body, while others only hear us through air (another reason is that some frequencies make it more directly to our ears)
  • microphones with stands made just of hard materials throughout are likely to pick up the vibrations of the things they stand on, which anything or anyone not in direct contact won't hear
  • materials used for sound insulation can be seen as bandstop filters (often relatively narrowband)


See also:

Tone versus noise content

Reflection, absorption, echo, reverb

Sound hitting a hard surface will be reflected.

Larger rooms are likely to have mostly hard surfaces (and also to have reverb).


An echo is an easily identifiable and usually quite singular copy of a sound, arriving later because it was reflected.

The delay is a significant aspect. Near walls it is minimal, and you may easily receive more energy from reflections than from the source directly (also note that localization is not affected much).


When many echoes combine to be blurred and hard to identify, this is called reverb.


Sound field descriptions

Note that:

  • These describe environments rather than sound qualities, yet often still relate to qualities - for example, many relate to reverb somehow.
  • 'Sound field' usually refers to a specific area (or rather volume)
  • Some of these are more standardized terms (see e.g. ISO 12001) than others.


A free field refers to environments where sound is free to propagate without obstruction. (In practice the most relevant objects are reflective surfaces (like walls), so 'free field' is often used to mean a lack of reverb - and the absence of other implied effects such as room modes.)


Direct field is the part of a sound field that has no reflections.

Reverberant field is the part of the sound field that has some reflections.

A diffuse field describes an environment with no preferred direction (usually because there are so many reflections that it's more or less uniform). (can also be used to refer to light and other EM)


Most rooms are reverberant / diffuse fields, with a world of variation. For example, empty rooms, cathedrals, and such have more noticeable reverb than furnished rooms, because they have few soft objects to scatter or absorb the sound.

Anechoic chambers are rooms that attempt to remove all echo and reverb, to simulate a space with only the source in question, and at the same time have the environment act as a free field. It is typical to see porous, wedge-shaped sound absorbers (in part because the alternative is to have a huge space - and still some absorption).


Near field is the area around an emitter close enough that the size of the emitter still matters (since all of it emits the sound), via interference and phase effects; physically, the sound pressure and particle velocity are not in phase there.

This also tends to mean the volume-per-distance dropoff (usually 6 dB per doubling of distance) goes a little funny close to an object.
The size of the near field varies with frequency and sound source size,
which is e.g. relevant for microphones specifically used for nearby voices.
A near-field monitor (which should arguably be called a direct field monitor, but studio engineers consider the two the same thing) means placing the speakers near you so that most of the sound you hear arrives without room reverb - which is important in mastering/mixing.


Far field is "far enough that the near field effect doesn't apply". Note that there will be a transition between the two, and where that is depends on frequency.
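A minimal sketch of that distance dropoff, assuming a point source under free-field conditions (the usual inverse-square idealization, hence roughly 6 dB per doubling of distance):

 import math

 def level_drop_db(d1, d2):
     """Far-field level difference when moving from distance d1 to d2 from a point source."""
     return 20 * math.log10(d2 / d1)

 print(level_drop_db(1.0, 2.0))    # ~6.02 dB per doubling of distance
 print(level_drop_db(1.0, 10.0))   # ~20 dB at ten times the distance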

Resonance

Diffraction

Amplitude modulation (a.k.a. tremolo)

Frequency modulation (a.k.a. vibrato)

Amplitude envelope (attack, decay, sustain, release)

(also in terms of attention)

http://en.wikipedia.org/wiki/ADSR_envelope
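A minimal sketch of such an envelope: a piecewise-linear ADSR shape applied to a sine, with all times and levels picked arbitrarily for illustration:

 import numpy as np

 def adsr(attack, decay, sustain_level, sustain_time, release, sr=44100):
     """Piecewise-linear ADSR amplitude envelope (times in seconds, levels 0..1)."""
     a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
     d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)
     s = np.full(int(sustain_time * sr), sustain_level)
     r = np.linspace(sustain_level, 0.0, int(release * sr))
     return np.concatenate([a, d, s, r])

 sr = 44100
 env = adsr(attack=0.02, decay=0.1, sustain_level=0.6, sustain_time=0.5, release=0.3, sr=sr)
 t = np.arange(len(env)) / sr
 tone = np.sin(2 * np.pi * 440 * t) * env    # a 440 Hz tone shaped by the envelope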


Harmonic content

Beat and tempo

The terminology around beat is often used a little fuzzily, and some of it matters more to performance or rhythmic feel, so in more basic description you care first about the pulse: the regularity of the beats, regardless of precise rhythmic use.


For a lot of techno and other electronic music, that is just every beat. For some other music styles it is a somewhat more complex thing, with short-term and longer-term patterns - which sometimes get so intricate that humans have trouble describing them, or even feeling them.


The tempo of most music lies within 40-200 beats per minute (BPM). The median varies with music style, but is often somewhere around 105 BPM.





Computing BPM

The simplest form is to detect just the beat, and the simplest form of that is to assume there is a bassy beat, do some heavy lowpassing (e.g. leave only sub-100 Hz), and look for onsets.

Onsets are primarily the start of a larger sound, specifically its sudden increase in amplitude. Research into human judgment of onsets is ongoing, though, and this approach is not robust for music types that don't have a punchy beat.

Onsets don't always match the perception of tempo - consider e.g. blues with guitars, where fast strumming would easily make algorithms decide on a tempo a factor higher than most humans would.
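A rough sketch of the lowpass-and-look-for-onsets idea (Python, using numpy and scipy); the filter settings and thresholds are arbitrary, and as noted it only has a hope on music with a punchy bass beat:

 import numpy as np
 from scipy.signal import butter, sosfilt

 def rough_bpm(samples, sr):
     """Very rough BPM guess: keep sub-100 Hz, find onsets, take the median inter-onset interval."""
     sos = butter(4, 100, btype='low', fs=sr, output='sos')
     low = sosfilt(sos, samples)

     # Amplitude (RMS) envelope in 10 ms hops
     hop = int(sr * 0.01)
     env = np.array([np.sqrt(np.mean(low[i:i + hop] ** 2))
                     for i in range(0, len(low) - hop, hop)])

     # Onsets = frames where the envelope rises well above its typical change
     diff = np.diff(env)
     onsets = np.where(diff > diff.mean() + 2 * diff.std())[0]

     # Keep onsets at least 200 ms apart, then turn the median gap into BPM
     picked = []
     for f in onsets:
         if not picked or f - picked[-1] >= 20:    # 20 frames = 200 ms
             picked.append(f)
     if len(picked) < 2:
         return None
     return 60.0 / (np.median(np.diff(picked)) * 0.01)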


Beyond just beats, you may wish to detect the pulse or other basic periodicity.

And if you're going to try to detect measures/bars, you also probably want to consider downbeat detection, detecting which beat is first in each measure.


Approaches include

  • Onset detection plus post-processing
Most onsets are easy to detect
Not all music has clear onsets
Not all tempo is defined by onsets
Changing tempo makes things harder


Autocorrelation of energy envelope(s)

Puts some emphasis on the overall energy envelope, which is poor information on its own; for it to work on more than techno you would probably want at least a few subbands. (A minimal single-band sketch follows below.)
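A minimal single-band sketch of the autocorrelation idea (so: techno-grade at best), picking the lag with the most autocorrelation energy within a plausible tempo range; the input is assumed to be an amplitude envelope at some known frame rate, like the 10 ms hops above:

 import numpy as np

 def bpm_from_envelope(env, frame_rate, bpm_range=(40, 200)):
     """Pick the autocorrelation lag with the strongest peak within a plausible tempo range."""
     env = np.asarray(env, dtype=float)
     env = env - env.mean()
     ac = np.correlate(env, env, mode='full')[len(env) - 1:]    # lags 0..N-1

     lag_min = int(frame_rate * 60.0 / bpm_range[1])    # fastest tempo -> shortest lag
     lag_max = int(frame_rate * 60.0 / bpm_range[0])    # slowest tempo -> longest lag
     best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
     return 60.0 * frame_rate / best_lag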


Resonators (of energy envelopes)

similar to autocorrelation, though can be more selective (verify)
can be made to deal with tempo changes
based on recent evidence, so the start of a song is always a poor guess due to lack of evidence (though there are ways around that, and in some applications it does not matter)
Related articles often cite Scheirer (1997), "Tempo and beat analysis of acoustic musical signals"
...notes that people typically still find the beat when you corrupt music into six subbands of noise that still follow the amplitude envelopes of the musical original (but not when you reduce it to a single band, i.e. just the overall amplitude), suggesting you could typically work on this much-simplified signal.
roughly: six amplitude envelopes, differentiated (to see changes in amplitude), half-wave rectified (to keep only increases), fed into comb filters used as tuned resonators (some of which will phase-lock), then somewhat informed peak-picking (a crude single-band sketch of the resonator idea follows below)
...the tuned resonator idea inspired by Large & Kolen (1994), "Resonance and the perception of musical meter"
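A crude single-band sketch of that resonator idea; a real implementation (like Scheirer's) uses several subbands, scales the feedback per delay so different tempi are comparable, and does more careful peak-picking:

 import numpy as np

 def resonator_bpm(env, frame_rate, bpm_candidates=range(60, 181), alpha=0.9):
     """Score each candidate tempo by how strongly a comb filter at that delay resonates."""
     env = np.asarray(env, dtype=float)
     onset_strength = np.maximum(np.diff(env), 0.0)    # differentiate, half-wave rectify

     best_bpm, best_score = None, -1.0
     for bpm in bpm_candidates:
         delay = int(round(frame_rate * 60.0 / bpm))
         y = np.zeros_like(onset_strength)
         for n in range(len(onset_strength)):           # y[n] = x[n] + alpha*y[n-delay]
             y[n] = onset_strength[n] + (alpha * y[n - delay] if n >= delay else 0.0)
         score = float(np.sum(y ** 2))
         if score > best_score:
             best_bpm, best_score = bpm, score
     return best_bpm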


Chroma changes

to deal with beat-less music (verify)


Goto & Muraoka (1994), "A Beat Tracking System for Acoustic Signals of Music"

suggests a sort of multi-hypothesis system, considering several candidate interpretations in parallel



Beatgraph

More of a visualization than a beat analysis?
a column is a single bar's worth of amplitude
Used e.g. in bpmdj
http://werner.yellowcouch.org/Papers/beatgraphs12/


Tempogram:

local autocorrelation of the onset strength envelope.
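If you just want such a representation to look at, librosa has the pieces (assuming it is installed; the filename is a placeholder):

 import librosa

 y, sr = librosa.load("song.mp3")                            # placeholder filename
 oenv = librosa.onset.onset_strength(y=y, sr=sr)             # onset strength envelope
 tgram = librosa.feature.tempogram(onset_envelope=oenv, sr=sr)
 # tgram has one column per frame; each column is the local autocorrelation over lags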


Cyclic tempogram

Grosche (2010), "Cyclic Tempogram - A Mid-Level Tempo Representation for Music Signals"



TODO: look at

Goto (2001), "An Audio-based Real-time Beat Tracking System for Music With or Without Drum-sounds"
Dixon (2001), "Automatic extraction of tempo and beat from expressive performances"
Dixon (2006), "Onset Detection Revisited"
Alonso et al. (2004), "Tempo and Beat Estimation of Musical Signals"
Collins (2012), "A Comparison of Sound Onset Detection Algorithms with Emphasis on Psychoacoustically Motivated Detection Functions"


Musical key

Computing musical key

Less studied, less well defined, and/or more perceptual qualities

Humans are quick to recognize and follow various other properties, better than algorithmic approaches do. They include:

(Timbre)

Timbre often appears in lists of sound qualities, but it is very subjective and has been used as a catch-all term; generally it means something like "whatever qualities allow us to distinguish two sounds (that are similar in pitch and amplitude)".

A large factor in this is the harmonic/overtone structure, but a lot more gets shoved in.


tonal contours/tracks (ridges in the spectrogram)

(particularly when continuous and followable)


Spectral envelope; its changes

microintonation

Some different sounds / categories

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

There are various typologies of sounds, but many are very subjective in that they are not unambiguously resolvable to signal properties -- they are often somewhat forced.


Consider:

  • continuous harmonic sounds, such as sines and other simple predictable signals
  • continuous noise (unpredictable in the time domain)
  • impulses (short lived)

Pulses, noises, and tones could be seen as simpler extremes in a continuum, where various in-betweens could be described, such as:

  • tonal pulses / wavelets
  • tonal/narrow-band noise
  • pulsed noise bursts
  • chirp
  • various real-world noises, such as
    • rustle noise [1]
    • babble noise

You can argue about the perceptual usefulness of these categories, as they do not distinguish sounds the same way we do.


Some useful-to-know music theory

On fingerprinting and identification


Analysis and/or fingerprinting

See also

http://en.wikipedia.org/wiki/Acoustic_fingerprint
[1] Cano et al. (2002) "A review of algorithms for audio fingerprinting"
[2] Wood (2005), "On techniques for content-based visual annotation to aid intra-track music navigation"


Software and ideas

This list focuses on software and ideas that a project of yours may have some hope of using. There are more (see links below) that are purely licensed services.


Acoustid notes

Acoustid is the overall project.

Chromaprint is the fingerprinting part. The standalone fingerprinter is called fpcalc (which hashes the start of a file).


Used by MusicBrainz (based on submission, e.g. via Picard, Jaikoz, or anything else that uses the API), making it interesting for music identification and tagging.

Licenses:

The client is LGPL
the server is MIT license
the data is Creative Commons Attribution-ShareAlike (verify)


See also:




Semi-sorted:


Things you can do offline:

  • calculate chromaprint
mostly meaningful for lookup in acoustid database



API calls: (limit to 3/sec, and see also http://acoustid.org/webservice )
  • look up chromaprint to acoustid track
will return a list of (acoustid track ID, certainty)
optionally further metadata (essentially adds the next call:)
  • look up metadata for acoustid track
often used either
when you didn't ask for that metadata in the chromaprint lookup,
or when you have previously resolved to a track and e.g. want to see whether its name or release details have changed since then
  • List AcoustIDs by MBID


  • submit acoustid
with or even without your file's tags. Basically for statistics of what's out there. (AcoustID fingerprinter is a program that makes this simpler)
you can wait for it to be processed (by default it will return when added to the queue, which is usually a few seconds of work)
(requires registration, mostly for statistics and quality filtering)
  • get status of submitted acoustid(s)
for when you submitted without waiting, but want to know
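Roughly how the offline fingerprinting and the lookup fit together, sketched in Python. This assumes fpcalc is installed and that you registered an application key; parameter names follow the webservice documentation linked above, but check there for current details:

 import json
 import subprocess
 import requests

 API_KEY = "your-application-key"    # placeholder; register one at acoustid.org

 def lookup(path):
     # 1. Fingerprint locally with fpcalc (ships with Chromaprint)
     out = subprocess.run(["fpcalc", "-json", path],
                          capture_output=True, text=True, check=True)
     fp = json.loads(out.stdout)

     # 2. Look the fingerprint up, also asking for recording metadata
     resp = requests.get("https://api.acoustid.org/v2/lookup", params={
         "client": API_KEY,
         "duration": int(fp["duration"]),
         "fingerprint": fp["fingerprint"],
         "meta": "recordings",
     })
     resp.raise_for_status()
     return resp.json().get("results", [])    # list of {id, score, recordings?}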



To understand the meaning and usefulness of AcoustIDs, you probably want to think from the perspective of MusicBrainz's and acoustid's data models.

Musicbrainz's[2]:

a recording is acoustically unique (a specific recording/mix)
a release is something you can buy (CD, LP, single, re-releases, etc.)
a release's tracks also have identifiers, to be able to tie recordings to releases
a recording is often present as tracks on multiple releases. So MusicBrainz has identifiers for recordings, tracks, and releases (note that a track is on exactly one release).


Acoustid's model centers around tracks (in MusicBrainz's model that would be a recording)

Different enumerated things:

  • tracks (acoustid track ID, a uuid)
  • recordings
  • fingerprints (fingerprint ID, basically enumerating unique fingerprint submissions)


For example, see https://acoustid.org/track/9ff43b6a-4f16-427c-93c2-92307ca505e0 - at the time of writing (different now),

  • is a single acoustid track
  • Has five (fingerprint, duration) pairs that people have submitted and that were assigned to this acoustid track
  • while this is a UUID, it is unrelated to MB's UUIDs.
  • has two musicbrainz recordings
the first being a (musicbrainz) track on one (musicbrainz) release
the second being five different musicbrainz tracks, each on their own musicbrainz release
in this case all with the same names, though that's not always exactly so


All this mostly matters because you can ask acoustid for MB details, and you have to decide to what degree you want to resolve this.

E.g. when tagging, you might choose to combine this with musicbrainz's metadata to see what release the combined set fits into best, by looking at other tag details. (Picard does a simple form of this)

When you are building a music player and just want to look up the artist and title text you can ignore the structure of the MB details you get back.


See also:

Echoprint notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Echonest is the whole company.

Echoprint is produced by its acoustic code generator (codegen), which was open sourced in 2011. Their metadata storage/searching server is also available.


Echonest's data is owned by them but publicly available - the license basically says "if you use our data and add to it, you must give us your additions".

They also have a lot of metadata and fingerprints, and had a public service to look up songs from ~20 seconds of audio, which would often work on microphone recordings.


In late 2014 (basically because Spotify had bought Echo Nest), Echo Nest closed that service.

Codegen is still available (being MIT-licensed code), and you can build your own system from their components, but you can no longer use their hosted data or lookup, so you would have to build your own database/search service.


The Echo Nest



See also:

pHash notes

A few algorithms, for image, video, audio. See http://www.phash.org/docs/design.html

Audioscout is built on the audio algorithm.

See also:


Audioscout

See also:

fdmf

http://www.w140.com/audio/


last.fm's fingerprinter

Combination of fingerprinter and lookup client. Available as source.

Fingerprinter is based on [3]


Fingerprinter license: GPL3

Client lookup: "Basically you can do pretty much whatever you want as long as it's not for profit."

See also:

Fooid notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Fooid is a fairly simple FOSS music fingerprinting library. Its fingerprint is mostly a simplified spectrogram, allowing fuzzy comparisons between songs, and it is pretty decent at near-duplicate detection.


While still available, it seems defunct now (the website has been dead for a while).


What a signature represents

To summarize: libfooid

takes the first 90 seconds (skipping silence at the start)
resamples to 8 kHz mono
(this should help reduce influence of sample-rate differences, high frequency noise, and some encoder peculiarities)
does an FFT
sorts that into 16 Bark bands


Since these strengths are about to be packed into a few bits (namely 2 bits), they are first rescaled so that the most typical variation will be relatively distinguishing (based on a bunch of real-world music).

Per frame, the information you end up with is:

  • the strength in 16 Bark bands (2-bit),
  • which band was dominant in this frame(verify).


Fingerprint and matching

A full fingerprint consists of 424 bytes (printable as 848-character hex):

  • A 10-byte header, recording
    • version (little-endian 2-byte integer, should currently be zero)
    • song length in hundredths of seconds (little-endian 4-byte integer)
    • average fit (little-endian 2 byte integer)
    • average dominant line (little-endian 2 bytes integer)
  • 414 bytes of data: 87 frames' worth of data (each frame totals 38 bits, so the last six bits of those 414 bytes are unused). For each frame, it stores:
    • fit: a 2-bit value for each of 16 bark bands
    • dom: a 6-bit value denoting the dominant spectral line

Fit and dom are non-physical units on a fixed scale (different in the averages), so that they are directly comparable between fingerprints.
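A sketch of unpacking just the header fields described above (assuming the byte layout is exactly as listed; verify against the libfooid source):

 import struct

 def parse_fooid_header(fp_bytes):
     """Unpack the 10-byte libfooid header: version, length (1/100 s), average fit, average dom."""
     version, length_cs, avg_fit, avg_dom = struct.unpack("<HIHH", fp_bytes[:10])
     return {
         "version": version,
         "length_seconds": length_cs / 100.0,
         "avg_fit": avg_fit,
         "avg_dom": avg_dom,
         "frames": fp_bytes[10:],    # 414 bytes: 87 frames of 38 bits each
     }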


The header is useful in itself for discarding likely negatives - if two things have a significantly different length, average fit, or average line, it's not going to be the same song (with some false-negative rate for different values of 'significantly').

You can:

  • do some mildly fuzzy indexing to select only those that have any hope of matching
  • quickly discard potentials based on just the header values
  • get a fairly exact comparison value by decoding the fingerprint data and comparing those values too.


With the detailed comparison, which yields a 0.0..1.0 value, it seems that(verify):

  • >0.95 means it's likely the same song
  • <0.35 means it's likely a different song
  • inbetween means it could be a remix, in a similar style, or just accidentally matches in some detail (long same-instrument intro)


See also

  • forks on github

MusicIP, MusicDNS, AmpliFIND

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Proprietary, and its latest lifecycle seems to be a licensed B2B service without public interface.


The company was first called Predixis, later and best known as MusicIP (~2000), died in 2008, relaunched as AmpliFIND(verify) Music Services (in 2009?), sold its intellectual property to Gracenote (2011? 2006?).


Probably best known for the MusicDNS service (which was at some point rebranded as AmpliFIND (verify)), which mostly consists of:

  • Their servers - for comparison, and which returned PUID (Portable Unique IDentifiers) on close-enough matches
  • a client library - which generates an acoustic summary and queries using it

When an acoustic query to their databases matches something closely enough, a PUID is returned, which seems to be a randomly generated identifier (not a fingerprint).

All the interesting parts are proprietary. The MusicDNS client library implements 'Open Fingerprinting Architecture', but this is only about the querying, which is sort of useless without the acoustical analysis, lookup method, or the data.

Relatable TRM

Proprietary.

Used by MusicBrainz for a while, which found it useful for finding duplicates, but its lookup had problems with collisions and scaling (meaning its server was unreliably slow), and Relatable did not seem to want to invest in it, so MusicBrainz replaced it.


http://www.relatable.com/tech/trm.html


MusicURI

See also




Unsorted

Moodbar

Assigns a single color to fragments within music, to produce a color-over-time bar that gives an impression of the sort of sound.


Mostly a CLI tool that reads audio files (using gstreamer) and outputs a file that essentially contains a simplified spectrogram.


Apparently the .mood generator's implementation:

  • mainly just maps energy in low, medium, and high frequency bands to blue, green, and red values.
  • always outputs 1000 fragments, which means
it is useful for telling apart parts of songs,
visual detail can be misleading if the songs' lengths are significantly different,
and it is not that useful for rhythmic detail, for similar reasons


Something else renders said .mood file into an image, e.g. Amarok, Clementine, Exaile, gjay (sometimes with some post-processing).

The file contains r,g,b uint8 triplets, one for each of the (filesize/3) fragments.
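Reading one is then trivial; a sketch assuming exactly the layout just described:

 def read_mood(path):
     """Return a list of (r, g, b) tuples, one per fragment of the song."""
     with open(path, "rb") as f:
         data = f.read()
     return [(data[i], data[i + 1], data[i + 2]) for i in range(0, len(data) - 2, 3)]

 colors = read_mood("song.mood")    # placeholder filename
 # e.g. paint each fragment as a one-pixel-wide column to get the familiar bar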


See also: