Local and network media routing notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Stuff vaguely related to storing, hosting, and transferring files and media:

LAN media sharing

Protocols tied to products

  • Chromecast [1]
Receivers: Chromecast, Chromecast Audio, Android TV
Senders: any Chrome, Android, or iOS apps using its SDK
note that that can include players reading from other protocols
Receiver apps are HTML5/JavaScript apps (and so may continue running independently from senders, based on design)
(for fairness: more open than most others, and the devices are cheap)
https://developers.google.com/cast/docs/developers



  • Airplay (Apple, proprietary) [2]
Receivers: AirPort Express, Apple TV, some licensed third parties, and some unlicensed ones.
Not all third parties (licensed or not) support the DRM, so some media will refuse to play to them.
  • DAAP, DACP, DPAP (Apple, proprietary) - audio, volume, photos (respectively)
similar story to AirPlay
  • Gramofon
basically a cheaper variant of Sonos
Receivers: Gramofon
Senders: supporting apps(verify)
  • Sonos
Receivers: Sonos products
Senders: (verify)
  • Intel WiDi(verify)
Receivers: Intel-only, driver-based, seems pretty proprietary?


Chromecast notes

Corporate networks (or any other that have more security) are more annoying.

There are two issues:

networking - to get (guest-mode-less) apps to work, you need to:
get it on a WiFi network - without 802.1X, as that's not supported
allow UDP port 1900 and multicast to 239.255.255.250 (SSDP), and UDP 5353 to 224.0.0.251 (mDNS) - both are used for discovery
don't do (which in enterprise networks often means 'disable, because you have it') things like isolating hosts via e.g. Cisco Peer to Peer Blocking[6]
get to the right chromecast, because the above easily means a bunch of discovered chromecasts
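As a sketch of the discovery side: DIAL-capable devices like the Chromecast answer SSDP M-SEARCH queries sent to 239.255.255.250:1900. The search target below is an assumption based on the DIAL protocol; this only builds the datagram, sending it is left as the comment shows.

```python
# Sketch: build an SSDP M-SEARCH request of the kind used for Chromecast/DIAL
# discovery. The search target (ST) "urn:dial-multiscreen-org:service:dial:1"
# is the DIAL service type (assumption; mDNS _googlecast._tcp is the other route).

SSDP_ADDR = "239.255.255.250"  # SSDP multicast group
SSDP_PORT = 1900

def build_msearch(st: str = "urn:dial-multiscreen-org:service:dial:1",
                  mx: int = 2) -> bytes:
    """Return the raw M-SEARCH datagram (HTTP-over-UDP, CRLF line endings)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",            # max seconds a device may wait before replying
        f"ST: {st}",            # search target
        "", "",                 # request ends with a blank line
    ]
    return "\r\n".join(lines).encode("ascii")

# To actually discover, you would sendto() this on a UDP socket and read replies:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
```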


Guest mode (added later) helps for some cases

sends wifi beacons that, for supporting devices, should get it listed under available devices, named "Nearby device"
does not need to be on the same wifi network
apps must support guest mode to use it.
Whether they can support it depends on whether you are only controlling the app. Streaming media from the controlling device won't work (fine if it comes from the internet)
example apps that work: Youtube, Spotify(verify)
example apps that don't seem to: Chrome (desktop nor android), Android screencast
tries to send the PIN via near-inaudible high-pitched tones, so you only have to type it in if the receiving device can't hear it


Ad hoc networks are sometimes sensible, e.g. in some dashboard-type fixed setups.

...assuming you have internet route/sharing on that ad-hoc net


See also:

More open protocols

  • Bluetooth audio
not the easiest to use, not hifi, though works well enough for portable speakers
  • Miracast
"HDMI over WiFi", from the WiFi Alliance
up to 1080p
supporting clients: Android 4.2+, Windows 8.1+ (each with some footnotes)
supporting receivers: Amazon Fire TV Stick
  • open source things, such as Pulse

Products / hardware

(note: list will never be complete or up to date)



Software

Network-controlled players

  • MPD
simple indexer-and-player, with a networked protocol so can be controlled remotely
  • Volumio
runs on some embedded devices (e.g. Raspberry Pi, Cubox, Udoo, Beaglebone)
nice interface, around what seems to be mpd at the core? (verify)
can play from UPNP servers
airplay target (verify)
  • XBMC, sort of

Network renderers / targets

Controllers / connectors

Apps like BubbleUPNP


Media servers

  • UPNP/DLNA servers include:
MediaTomb [8]
PS3MediaServer [9]
Twonky server (paid-for) [10]
uShare [11]
fuppes [12]

Shoutcast, Icecast (internet radio and derivatives)


Internet radio used to largely be shoutcast or icecast (these days it can also be web-embedded players, or both).


Many music players can play SHOUTcast streams (on account of it being little more than HTTP and MP3)

Icecast was mostly an extension with more options, and is often supported. There are many players, and HTML5 seems pretty comfortable with it too. The main caveat is that since icecast allows you to put out more codecs, that needs to line up with your player.

Optional (but not unusual) is to have title updates. This changes the returned data to a very basic fixed-size-chunked transfer system.[13]



Servers mainly include:

  • SHOUTcast
Is proprietary (by Nullsoft)
was made for MP3 (and AAC(verify))
the protocol is an extension of HTTP that mixes metadata into the stream
...but only on request by supporting clients; without that, you still get just the media(verify), though UAs that identify as known browsers are often redirected to an information-and-administration page.
Servers may be able to serve a playlist (e.g. m3u) listing themselves, to allow you to start listening via a browser and via whatever application was linked to open that playlist.
  • Icecast
is GPL software
over SHOUTcast protocol it can stream MP3, AAC, NSV. Over basic HTTP it can also stream Vorbis and Theora
added the concept of mount points, which are independent streams at further paths, meaning you can provide different streams, e.g. with different content, or with the same content at different bitrates/codecs, etc.
Since shoutcast does not support mount points, players that do not speak them will effectively use what icecast would call the mount point /(verify)
When using mount points, media sources are effectively independent streams pointed at specific mount points.
Sources are typically required to authenticate (default user/pass is often source/hackme).

https://wiki.archlinux.org/index.php/Streaming_With_Icecast


See also:



On the shoutcast and icecast protocol


With icecast being more complex, and generally supported, the below is biased to describe icecast more than basic shoutcast. There has been some cross-pollination anyway. (TODO: learn more about differences)


There's no real spec; reading the source is the closest thing. The basics go pretty far toward what you want, though. (TODO: read it a bit, summarize some.)


Both are (mild) extensions of some basic features of HTTP (e.g. allowing use of HTTP Basic auth). From a player-client's view, it can just output media data (often MPEG) as-is.

Icecast's title updates are only sent on request, and make the contents slightly more complex than plain content.


Both can start the HTTP response with some useful extra headers like:

  • icy-name - stream's name
  • icy-notice1 - (you may want to watch for HTML to strip out of the notices)
  • icy-notice2 - sometimes continuation, sometimes more technical info you may not care to show
  • icy-notice3, icy-notice4 - usable in theory, apparently not used very often
  • icy-genre - (free-form?(verify))
  • icy-url - often the homepage?
  • icy-pub - 1 for public, 0 for private ((verify), because I've only seen this at 1)
  • icy-br - bitrate in kbps, though this is mostly a hint since things can be VBR. Can contain something free-form indicating VBR setting.
  • ice-audio-info: ice-samplerate=44100;ice-bitrate=128;ice-channels=2 or similar. Seems to mostly be useful for radio lists getting this info
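As a sketch, pulling those headers out of a raw response could look like this (the response bytes below are fabricated for illustration, not from a real station):

```python
def parse_icy_headers(raw: bytes) -> dict:
    """Parse the status line + headers of an ICY/HTTP response into a dict.

    Header names are lowercased; anything after the blank line is ignored.
    """
    head = raw.split(b"\r\n\r\n", 1)[0]
    lines = head.decode("latin-1").split("\r\n")
    headers = {}
    for line in lines[1:]:          # lines[0] is the status line, e.g. "ICY 200 OK"
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
    return headers

# Fabricated example response:
raw = (b"ICY 200 OK\r\n"
       b"icy-name: Some Station\r\n"
       b"icy-br: 128\r\n"
       b"icy-metaint: 8192\r\n"
       b"\r\n")
hdrs = parse_icy_headers(raw)
# hdrs["icy-name"] == "Some Station", hdrs["icy-metaint"] == "8192"
```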


On in-stream metadata


The in-stream metadata is how players update titles (though some watch for common in-media metadata like ID3v1 and ID3v2)

Optional, requested with a header:

icy-metadata: 1

The response will mention

icy-metadata: 1
icy-metaint: number


The latter number says after how many bytes of media content there comes another metadata chunk.

The metadata block consists of:

  • 1 byte: uint8 length, in units of 16 bytes
  • that many bytes of metadata
    • which is plain text (probably with some non-ASCII in practice)
    • null-padded up to the coded size as necessary

Notes:

  • It's up to the client to keep track of how many data bytes have passed, read out the metadata block, and skip past it.
  • Because titles don't come very often, servers will send zero-byte metadata blocks most of the time.
  • The interval can be anything, but commonly seems to be 8192 or 24576, or one of a handful of other figures.


By far the most common bit of metadata is StreamTitle='some title', but there can be further variables in there. (TODO: figure out format of the metadata)
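A minimal sketch of de-interleaving such a stream, using synthetic bytes rather than a live connection: every metaint bytes of audio are followed by one length byte (in units of 16 bytes) and that much null-padded metadata.

```python
def demux_icy(stream: bytes, metaint: int):
    """Split an ICY stream with in-band metadata into (audio_bytes, titles).

    Layout: metaint audio bytes, then 1 length byte n, then 16*n metadata
    bytes (null-padded), repeating.
    """
    audio = bytearray()
    titles = []
    pos = 0
    while pos + metaint <= len(stream):
        audio += stream[pos:pos + metaint]
        pos += metaint
        if pos >= len(stream):
            break
        n = stream[pos] * 16            # length byte, in units of 16 bytes
        pos += 1
        meta = stream[pos:pos + n].rstrip(b"\x00")
        pos += n
        if meta:                        # zero-length blocks mean 'no change'
            titles.append(meta.decode("latin-1", "replace"))
    return bytes(audio), titles

# Synthetic example with metaint=4: one empty metadata block, one with a title
meta = b"StreamTitle='hi';".ljust(32, b"\x00")      # 17 bytes padded to 2*16
stream = b"AAAA" + b"\x00" + b"BBBB" + bytes([2]) + meta
audio, titles = demux_icy(stream, 4)
# audio == b"AAAABBBB"; titles == ["StreamTitle='hi';"]
```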



See also:

Source notes

Sources include

  • liquidsoap
http://liquidsoap.fm/
  • Oddcast
  • Muse
    • sends sound (ogg or mp3)
    • takes from sound card, file, or network streams
    • some control over mixing multiple channels
    • http://muse.dyne.org/
  • some players, sometimes natively, sometimes with a plugin. Examples:
    • mpd (natively)
    • Gnome music player (plugin)[14]


Unsorted

  • shout (from icecast)
  • Nicecast - sound [16]
  • Opticodec-PC - [17]

See also



Sound servers - lower-level routing

VLC streaming


The VLC client can also act as a server. This means you can fairly easily stream whatever VLC can read, to another VLC, but also to other video players, and can transcode in the process (can be useful to stream to devices and players that don't support as many formats).


ffserver


Part of the ffmpeg package. Not actively developed. Seems to work, and to be a little hairy.

  • can serve/send in quite a few ways, and you get transcoding in the process.
  • needs to be fed specifically from ffmpeg

Peercasting


Somewhat like multicast, but on-demand, so more fit for internet use.

http://en.wikipedia.org/wiki/Peercasting

Pulseaudio


PulseAudio (often abbreviated as PA or Pulse) can act as a local or networked sound source/sink and does various mixing and redirecting.

It can be used as a local system-wide daemon, and it can be used to separate sound for each user (and send it somewhere other than the shell server's sound card). See e.g. http://www.pulseaudio.org/wiki/SystemWideInstance

It is fairly easy to interface ALSA with Pulse, Pulse can act as a drop-in replacement for esound, and OSS programs can usually also be made to behave. This makes it a useful possibility for a local sound daemon.


While the configuration looks technical, most basic setups are not very hard, so try not to be too intimidated.



A pulseaudio(.exe) process can be tweaked via an interactive mode (pulseaudio -nC), but is usually initialized via configuration files (e.g. default.pa, also client.conf and daemon.conf).

The .pa files among those are all written in the same command set that the command-line interface and immediate commands to the daemon (e.g. using pacmd) also use.

You can connect to the daemon (assuming you have module-cli-protocol-unix or module-cli-protocol-tcp loaded(verify)) and tweak using that command set.


You can also configure local servers (via module-gconf) using paprefs.


See also:

Perhaps the most informative place to get started is the



Sources, sinks, and streams

Audio routing terminology: sources are things that produce audio, sinks are things that receive audio - that you can sink it into.


In general, pluggable parts of pulse may be a source, sink, or both.

Specific drivers/sound interfaces that plug into Pulse may contain one, such as an esound sink, or sine-wave source, or have both, as in the cases of ALSA, OSS, JACK, and such.

You can use filesystem FIFOs as sink and source, you can use sound daemons, network daemons (Pulse itself, RTP, other), and a few others.


There are also some special-function components, e.g. combining two sinks into one source, splitting a stream (e.g. for playback on two sound cards), remapping channels within a single stream, etc.


Authentication


The daemon (...primarily the native, esound, and simple modules...) does not accept just any connection.


Shared auth cookies

One option is using pulse cookies as a shared secret. These cookies are created when you first run pulse, at ~/.pulse-cookie for users (Documents and Settings\WindowsUsername\.pulse-cookie in windows) and /var/run/pulse/.pulse-cookie for --system pulses.(verify)


You can copy these over to all clients that should be allowed to inter-authenticate.

For example, scp the file to files on each client node called /etc/pulse-cookie, then edit /etc/pulse/client.conf and add:

cookie-file=/etc/pulse-cookie


X11 cookies
Alternatively, you can have X11 store and handle the cookies -- when this actually applies.


IP whitelist
A simpler but somewhat easier-to-abuse option is to whitelist IPs or networks, like:

auth-ip-acl=127.0.0.1;192.168.0.0/16
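A sketch of what that ACL matching amounts to (not pulse's actual implementation -- just the semantics of a semicolon-separated list of addresses and networks):

```python
import ipaddress

def ip_allowed(ip: str, acl: str) -> bool:
    """Check an address against a semicolon-separated ACL like auth-ip-acl's."""
    addr = ipaddress.ip_address(ip)
    for entry in acl.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        # a bare address is treated as a one-host network (/32 or /128)
        if addr in ipaddress.ip_network(entry, strict=False):
            return True
    return False

acl = "127.0.0.1;192.168.0.0/16"
# ip_allowed("192.168.4.20", acl) -> True
# ip_allowed("10.0.0.1", acl)     -> False
```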


Any access
Or allow all access via:

auth-anonymous=1


Servers and client, hosts and sockets


At its most basic, Pulse is not networked and acts mostly as a local sound mixer and server.

This can already be useful, as a sound daemon with sound cards/drivers that only allow one program to access the sound device at a time, since you can make Pulse that one program and make everything use Pulse -- assuming you can make everything use it (the caveat for most any sound daemon).


Pulse can be networked when you want it to be, which includes:

  • Pulse (or even non-pulse) clients connecting to a remote Pulse server
  • Pulse server that send things to other Pulse servers
  • Pulse servers sending and/or receiving RTP multicasts


You may want to consider the bandwidth requirements for various uses. Note, for example, that uncompressed 16-bit 44100Hz stereo sound is ~176KB/s (~1.4Mbps).

You may not want to multicast more than one or a few of those over your LAN unless it's a feature to you in some way (e.g. in-house radio, sharing speakers / microphones) and you have the bandwidth to spare. You may also want to keep this off your WiFi (particularly a/b/g), since that is more easily saturated.
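For reference, the bandwidth figure above works out as:

```python
# Uncompressed PCM bandwidth: sample_rate * channels * bytes_per_sample
rate, channels, sample_bytes = 44100, 2, 2      # 16-bit stereo, CD-style audio
bytes_per_sec = rate * channels * sample_bytes  # 176400 bytes/s (~172 KiB/s)
mbps = bytes_per_sec * 8 / 1e6                  # ~1.41 Mbps
```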



The fallback series of things a Pulse client will try for a basic to-a-pulse-server connection:

  • application settings (if applicable)
  • the $PULSE_SERVER environment variable
  • If there is an X server's DISPLAY, it will see if its environment has a $PULSE_SERVER.
  • default-server= setting in ~/.pulse/client.conf or /etc/pulse/client.conf
  • localhost via unix socket (on systems that support those)
  • localhost via TCP socket
  • The X DISPLAY's IP at the Pulse's port

This implies that Pulse clients do not make network attempts until you configure them that way.


A server specification can take various forms, including:

unix:/unix/socket/name
/unix/socket/name
tcp:
tcp4:hostname_or_ipv4[:port]
tcp6:hostname_or_ipv6[:port]

(You may (or may not) wish to let the resolver decide between IPv4 and IPv6)


You can have fallbacks and server-specific rules. For example:

{zeus}unix:/tmp/pulse-lennart/native tcp6:zeus.lan:4713 tcp:zeus.lan:4713 medusa

Means:

  • if the local hostname is zeus, use a unix socket. (If it is not zeus, ignore this part)
  • try zeus.lan:4713 with IPv6
  • try zeus.lan:4713 with IPv4(verify)
  • try host name medusa (port 4713, deciding IPv4/IPv6 yourself)
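A simplified sketch of how such a specification could be split into applicable entries -- this mirrors the description above, not pulse's actual parser:

```python
def parse_server_spec(spec: str, local_hostname: str):
    """Return the server entries applicable on this host, in order.

    Entries are whitespace-separated; an optional {hostname} prefix means
    'only try this entry when the local hostname matches'.
    """
    candidates = []
    for entry in spec.split():
        if entry.startswith("{"):
            host_filter, _, rest = entry[1:].partition("}")
            if host_filter != local_hostname:
                continue            # entry is for some other machine
            entry = rest
        candidates.append(entry)
    return candidates

spec = "{zeus}unix:/tmp/pulse-lennart/native tcp6:zeus.lan:4713 tcp:zeus.lan:4713 medusa"
# on zeus:       ['unix:/tmp/pulse-lennart/native', 'tcp6:zeus.lan:4713', 'tcp:zeus.lan:4713', 'medusa']
# on other hosts: ['tcp6:zeus.lan:4713', 'tcp:zeus.lan:4713', 'medusa']
```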


Modules

See also http://www.pulseaudio.org/wiki/Modules


Command interface

  • module-cli-protocol-tcp
  • module-cli-protocol-unix
  • module-http-protocol-tcp (proof of concept status inspector, port 4714)


Interfacing with audio drivers

  • module-alsa-sink
  • module-alsa-source
  • module-oss
  • module-waveout (win32 waveOut type device; note: both source and sink(verify))
  • module-jack-sink
  • module-jack-source
  • pipe-sink (filesystem FIFO)
  • module-null-sink
  • module-solaris


Notes:

  • For ALSA: Instead of using a numbered card (e.g. hw:0, hw:1), you can use names, e.g. hw:FM801AU instead, which can be handy if you have multiple and/or USB devices and don't want to rely on hardware ordering and/or pulse's detect or hal-detect. The names can be found in /proc/asound/cards


Networked audio

Pulse's network format:

  • module-native-protocol-tcp
  • module-native-protocol-unix

Pulse-server-to-Pulse-server tunnel:

  • module-tunnel-sink
  • module-tunnel-source

Esound:

  • module-esound-sink
  • module-esound-protocol-tcp
  • module-esound-protocol-unix
    • note that you may have errors related to ownership of /tmp/.esd (verify)


JACK

  • module-jack-sink
  • module-jack-source

Raw audio, meaning you can use things like netcat (note: no authentication):

  • module-simple-protocol-unix
  • module-simple-protocol-tcp
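For example, you could generate raw 16-bit PCM and pipe it to netcat. The port must match whatever port= the module was loaded with (4712 here, matching the Windows example elsewhere on this page), and the sample format/rate/channels must match the module's configuration -- all assumptions in this sketch:

```python
import math
import struct

def sine_pcm(freq: float = 440.0, seconds: float = 1.0,
             rate: int = 44100, amplitude: float = 0.3) -> bytes:
    """Generate mono 16-bit signed little-endian PCM of a sine tone."""
    n = int(rate * seconds)
    samples = (int(amplitude * 32767 * math.sin(2 * math.pi * freq * t / rate))
               for t in range(n))
    return b"".join(struct.pack("<h", s) for s in samples)

pcm = sine_pcm(seconds=0.1)
# 0.1s at 44100Hz mono 16-bit = 4410 samples = 8820 bytes
# pipe it, e.g.:  python gen.py | nc host 4712
```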

Multicast (RTP/SDP/SAP):

  • module-rtp-send (multicasts to 224.0.0.56)
  • module-rtp-recv (receives multicasts there)

(note: things like mplayer can listen to this as well)


mDNS Zeroconf:

  • module-zeroconf-publish
  • module-zeroconf-discover

Other

  • module-remap-sink: redirect channels within a stream
  • module-combine: combine sources, creating a new sink


  • module-x11-publish (publishes server info and credentials via your X root window)
  • module-x11-bell (intercepts X bell, plays sample)
  • module-ladspa-sink: redirect to various other processing via LADSPA, the Linux Audio Developer's Simple Plugin API
  • module-sine: sine generator


Volume setting:

  • module-volume-restore
  • module-match
  • module-mmkbd-evdev (via multimedia keyboard keys)
  • module-lirc (LIRC listener)


Automatic setup (for/via):

  • module-detect (ALSA, OSS, Win32)
  • module-hal-detect (HAL)
  • module-rescue-streams (redirect streams from old/invalid sinks to available sinks)


Other parts of the package

  • pacmd: basic shell to pulseaudio server
  • pactl: various pre-defined interaction with the pulseaudio server


  • pacat, paplay, parec: play and record raw audio (also to/from stdin/stdout)


  • pabrowse: list pulseaudio servers on the local network (uses mDNS(verify))


  • padsp: Intercepts OSS usage (of /dev/dsp, /dev/mixer, /dev/sndstat)
  • esdcompat: ESD wrapper -- used when you want a pulseaudio server to act like a drop-in for esd.


  • pasuspender
  • pax11publish is a helper related to X11-based token authentication

Errors

A Device or resource busy error often can be explained by:

  • loading the same module twice -- e.g. once via module-alsa-something and once implied through module-(hal-)detect.
  • a device that's used by some other application (run something like
    to see what it is.)


setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted
setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted

These are related to priorities you would want to set on the daemon if you want as low latency as possible.

Are these warnings or errors? You can lower these values in daemon.conf if you want these messages to go away.



Related software

linux GTK frontends for the local sound server:


Old:

  • http://www.clingman.org/winesd/ (windows driver that acts as client to esound, so can be used via pulse's esound interface. Doesn't seem to work on XP, though.)


Configuration notes and examples


Some systems already use Pulse as a sound server. Apparently GNOME in Ubuntu?(verify) If so, you'll probably want to use its means of configuration.


ALSA related notes

In the case of ALSA, setup can seem confusing - partly because ALSA is rather modular too, and because of the way ALSA lets you (with the help of some plugins) place Pulse between ALSA applications and ALSA hardware.

Applications that use ALSA use its API via alsalib, and there is an ALSA-to-Pulse plugin for ALSA that lets you redirect applications that use that API to Pulse (have ALSA send sound to Pulse, and have Pulse play to the sound card).

If you want to use Pulse as a local sound daemon, you'll not only want to send ALSA API use (and others, see e.g. the Pulse wiki) to Pulse, you'll also want to make Pulse play to (and record from) a hardware device. Most likely you'll want to use ALSA for this too, using Pulse's module-alsa-sink.


To use the pulseaudio plugin for alsalib, make sure it is installed (it's often part of a package called alsa-plugins), then modify /etc/asound.conf or ~/.asoundrc (as applicable to your setup. Note that it is relatively likely you don't have an /etc/asound.conf yet; see also #ALSA notes), and add:

pcm.pulse {
    type pulse
} 
ctl.pulse {
    type pulse
}

Apparently, if you want to set these as ALSA's default devices you need:

pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}


To sink pulse into local ALSA hardware, you use module-alsa-sink. Note that if you do change the default ALSA device to pulse (which can be generally handy), you'll want to either

  • manually tell pulse to output to the real card (so you'll want something like load-module module-alsa-sink device=hw:0 instead of just load-module module-alsa-sink), or
  • use HAL, which knows to use a real device(verify)

Local linux sound daemon

To set up Pulse as a local sound daemon on linux, you will often:

  • decide to add a system-wide Pulse daemon (there are other useful uses of Pulse, particularly on systems where multiple users may want different sound)
  • give the pulse daemon proper access to the device (user management, varies per distro)
  • point pulse at your sound hardware
  • point applications at Pulse (in the case of ALSA, you can do this via ALSA itself, using its Pulse plugin)

(For networking daemons, you'll change one of the last two points)


To test the local daemon, you'll probably want to try:

  • amixer
    (test of control channel)
  • paplay /some/file.wav
    (test whether PulseAudio works)
  • aplay /some/file.wav
    (test whether ALSA works)


If you see:

*** PULSEAUDIO: Unable to connect: Access denied

Then your syslog probably shows:

protocol-native.c: Denied access to client with invalid authorization data.

The latter error means your Pulse authentication (or, more likely, lack of it) is denying ALSA. This points to module-native-protocol-unix; try adding auth-anonymous=1 to see if this is the problem. (If it is, you may not want to leave it at that; anyone can send you sound now)

Windows target


Something-to-windows, probably linux→windows:

Windows side:

load-module module-waveout
load-module module-native-protocol-tcp listen=0.0.0.0 auth-ip-acl=127.0.0.1;192.168.0.0/24

...or whatever authentication options you chose. You may want to start pulseaudio.exe with --high-priority


(You can also use RTP, which can be played by things besides pulse on most any system)

Windows source


More bothersome than various other cases, because Windows isn't really a primary platform for PulseAudio. It won't compile as-is and has to be patched/ported to work.

Currently, the only download you'll find is an older and somewhat buggy one: Cendio's 0.9.6 build (from 2007), still the one available for download (as of late 2009).


  • You can use module-waveout for capture (you can't tell it what input to use, so this may be set or even stuck on microphone)
  • it seems that pulseaudio 0.9.6 has problems with module-tunnel-sink, primarily that it will likely/frequently bork out with:
pulsecore/pdispatch.c: Recieved unsupported command 63
modules/module-tunnel.c: invalid packet

...which seems to be some message about the buffers on the other end.

  • module-esound-sink may work - I saw some packets, but no sound. May be me.
  • RTP wasn't implemented - you won't find it in cendio's build.


This page suggests other solutions, including using LiveInCode and netcat for windows (or, preferably, a command line SSH client to replace netcat).


Send to the module-simple-protocol-tcp like:

linco.exe -B 16 -C 2 -R 44100 | nc.exe host 4712

Tunnel via SSH, play as if local:

linco.exe -B 16 -C 2 -R 44100 | ssh.exe user@host "cat - | pacat --playback"


RTP


Sending side:

load-module module-null-sink sink_name=rtp
load-module module-rtp-send source=rtp.monitor
# to make RTP the default sink:
set-default-sink rtp

Receiving sides:

load-module module-rtp-recv

See also


To-read list:



PortAudio


Intended to make cross-platform audio easier; currently supports Windows, Mac, Unix and a few more. Part of PortMusic, along with PortMidi.

Useful for basic audio transfer, e.g. for playing and recording, but can also be used for filters and such.

Less heavyweight than PulseAudio, though this does mean you may have to do things like sample rate conversion in your application.


See also

http://www.portaudio.com/

Apple's Remote Audio Output Protocol (RAOP), AirTunes, and AirPlay


Media streaming protocols from Apple.

RAOP does audio. (Apparently based loosely on PulseAudio's design)

AirPlay, apparently effectively an extension of RAOP, does audio, video, images, and their metadata. AirTunes is its old name.


They are RTSP/RTP based protocols with an added encryption step to verify the target is an Apple device, or licensed. (Hardware developers must become licensed partners to design Airplay-compatible devices).

This makes the protocols closed in usual practice. Few devices support it, and you need to send from an Apple player (mostly iTunes, and some iOS devices).


Relevant software/hardware seems to include:

  • Airport Express, Airport Extreme - as the concentration points of the stream (an easily found service)
    • AirPorts have a private key so cannot be easily replaced as the point of mediation (although the key has been extracted by someone)
  • Apple TV
  • AirPlay-compatible speakers
  • Some iHome and JBL products


Media sources include:

  • iTunes
  • iPhones
  • iPods
  • iPads (iOS ≤4.2 - on iOS, some apps are allowed audio streaming, but not video. Details have probably changed again)


There are some DIY solutions, but don't expect them to keep working - it seems Apple actively dislikes this.


See also:

Specific software

Airfoil

Mostly consists of Airfoil (a sender) and Airfoil Speaker (receiver). The receiver is free, the sender is a paid-for app.

Basically lets you send sound between windows, osx, and linux (and there's a receiver for iOS).


Seems to basically be AirTunes/AirPlay. Sends to such devices (and iOS devices running Airfoil Speaker Touch), but won't receive from AirPlay sources, apparently because of licensing stuff.

Hardware

In general, don't expect transparent compatibility with DRMed music, because this is rare to nonexistent in hardware players - even if they support one DRM scheme, they are usually so tied to it that they don't support another.

This varies with setup style, and sometimes details. For example, remote speakers often play already-decoded music.


Squeezebox (Logitech)

Music sources

Squeezeboxes are meant as independent pull-style players, fetching a stream from a local server, decoding and playing it.

  • All squeezebox hardware can play from your own squeezebox server (you control that server - sort of a playlist manager)
    • ...which can be run on Windows, Linux, BSD, Apple, and some NAS devices (partly because it's GPL software)
  • and from SqueezeNetwork, a.k.a. MySqueezebox.com
  • a good number can also play independently, from internet radio
  • ...and (since the SB3) from other sources like Spotify, Last.fm, and others

While the pull style is good for smooth, gapless playback, it's not so useful when you more or less want wireless speakers, streaming from your own PC music players (or like its way of choosing music), or such.


Less usual ways to get sound out (can be handy when you use a squeezebox server as the central place/provider for your music):

  • The Squeezebox Server software seems to provide shoutcast-style MP3 stream (transcoded where necessary)
  • SqueezePlay[19] - PC based player (Mac, Windows, Linux) playing from a squeezebox server (more or less replaces softsqueeze[20])

There are also a few ways to get sound in, but they seem pretty hacky


Hardware

It looks like all have an ethernet port, and all but the oldest SliMP3 have WiFi and digital out (coax and optical).

Models include:

  • SliMP3 (2001)
    • (Squeezeboxes are updated versions of this, adding WiFi, digital out, and some other things)
  • Squeezebox Controller and Squeezebox Receiver (2008) (sold as a pair, called the Duet)
  • Squeezebox Touch (2010)


See also:

Soundbridge


(Exists with both Roku (USA) and Pinnacle branding (elsewhere))


Sonos


A series of devices (mostly wireless), and its own (proprietary) protocol.

Relatively expensive (you'll probably need a few different devices for a full setup), though the ease of a wide setup may be worth it to you.


Controllers tell player devices what to do, and can be:

  • the sonos controller device - a remote with LCD screen
  • android app
  • iOS app (iPhone, iPad, iPod touch)
  • PC controller software (Mac and Windows; windows version can be run on linux via wine)


Players and other devices:

  • may be a player without amplifier/speakers (to play audio into existing hifi setups)
    • e.g. ZP80, ZP90
  • may be a player with amplifier (to add your own speakers)
    • e.g. ZP100, ZP120
  • may be a player and speakers (for easy placement)
    • e.g. Play:3, Play:5,
  • may provide inputs
    • there's also a specific model with an apple dock
  • the bridge seems to be a wireless range extender for players and controllers (but has no audio output or input itself)


Music sources, and interaction with PCs:

  • Players can play directly from SMB/CIFS shares (Windows, linux, mac), Time Capsule(verify)
  • Doesn't play various DRMed things (Apple FairPlay, DRMed WMA)
  • Seems to be able to use Spotify, Last.fm, Pandora, and a dozen other online sources

Things that do not currently seem to be possible:

  • Streaming to sonos from your favourite music player -- it seems you must use the desktop controller (verify)
    • ...though you could buy a Sonos device with an audio input
  • Using the music from an iphone/ipad/ipod, even if you are controlling from it
  • Using a PC as speaker for the Sonos system (verify)


See also:

Unsorted

http://www.barix.com/Exstreamer_Family/311/

Semi-sorted

Streaming protocols

RTP

Real-time Transport Protocol (RTP) is a packet format specification, which could be delivered over UDP (usually), TCP, or other things. It does not have a standard port associated with it.

Relatively frequently used for streaming, in combination with things like RTSP, H.323, or such.
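The fixed part of an RTP packet is a 12-byte header (version, marker, payload type, sequence number, timestamp, SSRC), followed by the payload. A minimal parsing sketch in Python (stdlib only; the example packet bytes are made up for illustration):

```python
import struct

def parse_rtp_header(data: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (per RFC 3550)."""
    if len(data) < 12:
        raise ValueError("RTP header is at least 12 bytes")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,            # should be 2
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # e.g. 96+ are dynamic types
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Hypothetical packet: version 2, payload type 96, seq 1, ts 0, ssrc 0x1234
pkt = struct.pack("!BBHII", 0x80, 96, 1, 0, 0x1234)
print(parse_rtp_header(pkt)["payload_type"])  # 96
```

Note this only covers the fixed header; CSRC entries and header extensions would follow it when the corresponding fields are nonzero.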


http://en.wikipedia.org/wiki/Real-time_Transport_Protocol


RTSP

Real Time Streaming Protocol (RTSP) is effectively a control channel meant to allow VCR-like access to video (time-based, random access). Its syntax resembles HTTP, though it is a separate protocol.


In practice it often uses RTP for data transfer, but combinations vary. For example, RealVideo uses RTSP for control but a proprietary data protocol for video transport.


See also:


SAP

Session Announcement Protocol (SAP) is a protocol for session publishing and gathering (multicast).

You can use it to announce streams.

SDP was initially made for SAP. (See also RFC 2974)
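A SAP packet is a small binary header (flags, auth length, message ID hash, originating source) followed by a payload-type string and the payload, which is usually SDP. A parsing sketch per RFC 2974, assuming an IPv4 origin, no authentication data, and a payload-type string being present (the example packet is constructed by hand):

```python
import socket
import struct

def parse_sap(data: bytes) -> dict:
    """Parse a SAP announcement header (RFC 2974). Assumes IPv4 origin, no auth."""
    flags, auth_len = data[0], data[1]
    msg_id_hash = struct.unpack("!H", data[2:4])[0]
    origin = socket.inet_ntoa(data[4:8])        # IPv4 only (A bit assumed 0)
    offset = 8 + auth_len * 4                   # skip auth data, if any
    end = data.index(b"\x00", offset)           # payload type is NUL-terminated
    return {
        "version": (flags >> 5) & 0x7,
        "delete": bool(flags & 0x04),           # announcement vs deletion
        "hash": msg_id_hash,
        "origin": origin,
        "payload_type": data[offset:end].decode("ascii"),
        "payload": data[end + 1:],
    }

# Hand-built example: version 1, hash 1, origin 192.0.2.1, SDP payload
pkt = (bytes([0x20, 0]) + struct.pack("!H", 1)
       + socket.inet_aton("192.0.2.1")
       + b"application/sdp\x00" + b"v=0\r\n")
msg = parse_sap(pkt)
```

In practice you would receive such packets by joining the well-known SAP multicast group and handing the payload to an SDP parser.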

See also:


SDP

SDP (Session Description Protocol) is used to describe media streams.

Seen used with SAP (where it originated), RTP, RTSP, SIP, and others. See e.g. RFC 4566
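An SDP description is plain text, one `key=value` field per line (v for version, o for origin, s for session name, c for connection, m for media, a for attributes). A minimal sketch with a made-up description (the addresses and ports are placeholders):

```python
# Hypothetical description: a stereo PCM audio stream announced on a multicast address
EXAMPLE_SDP = "\r\n".join([
    "v=0",
    "o=- 0 0 IN IP4 192.0.2.1",
    "s=Example stream",
    "c=IN IP4 239.255.12.42",
    "t=0 0",
    "m=audio 5004 RTP/AVP 96",
    "a=rtpmap:96 L16/44100/2",
])

def parse_sdp(text: str) -> list[tuple[str, str]]:
    """Split an SDP body into (type, value) pairs; order matters in real SDP."""
    fields = []
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields.append((key, value))
    return fields
```

A real parser would also group `a=` lines under the preceding `m=` line, since attributes apply to the most recent media section.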

See also:


MMS

Microsoft Media Server (MMS) is a proprietary streaming protocol used by Windows Media Services.

Microsoft later switched to RTSP but kept mms:// URLs; WMP still tries both MMS and RTSP for mms:// URLs.

See also:


MS-WMSP

Windows Media HTTP Streaming Protocol

(Also known as MMSH?(verify))

Uses HTTP for messages (both ways) and data. Can do both on separate connections (called non-pipelined), or on one (called pipelined). Adds sessions.


RTMP

Real Time Messaging Protocol (RTMP) is a protocol fairly specific to Flash - specifically Flash video and MP3. It used to be proprietary, but Adobe (which acquired it along with Macromedia) has since published the specification.

It carries both control data and media.
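An RTMP session opens with a handshake: the client sends C0 (a single version byte, 3) followed by C1 (1536 bytes: a 4-byte timestamp, 4 zero bytes, and random fill), and the server answers with the mirror-image S0/S1/S2. A sketch that just builds the client's opening bytes (no networking shown):

```python
import os
import struct
import time

def rtmp_c0c1() -> bytes:
    """Build the RTMP client handshake opener: C0 (version) + C1 (1536 bytes)."""
    c0 = b"\x03"  # protocol version 3
    c1 = (struct.pack("!II", int(time.time()) & 0xFFFFFFFF, 0)  # timestamp + 4 zero bytes
          + os.urandom(1528))                                   # random fill
    return c0 + c1

opener = rtmp_c0c1()  # 1 + 1536 = 1537 bytes, sent before any RTMP chunks
```

After the handshake completes, the actual control and media data flow as multiplexed "chunks" on the same connection.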

See also:


H.323

H.323 refers to a family of protocols meant for media delivery, usable for realtime applications, and used in some forms of VoIP and videoconferencing.

(Note: PBXes may support H.323 and/or SIP)


SIP

Session Initiation Protocol (SIP) allows multi-user sessions and focuses on delivery of live data. Currently known primarily for its use in open-standard VoIP.
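SIP messages are HTTP-like text: a request line, headers (Via, To, From, Call-ID, CSeq), and a blank line before an optional body. A sketch building an OPTIONS request (a common "are you there / what do you support" probe); all addresses and the branch tag here are placeholder values:

```python
def sip_options(to_uri: str, from_uri: str, call_id: str, cseq: int = 1) -> str:
    """Build a minimal SIP OPTIONS request (placeholder Via address and tags)."""
    return "\r\n".join([
        f"OPTIONS {to_uri} SIP/2.0",
        "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK-example",
        f"To: <{to_uri}>",
        f"From: <{from_uri}>;tag=1234",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} OPTIONS",
        "Max-Forwards: 70",
        "Content-Length: 0",
    ]) + "\r\n\r\n"

req = sip_options("sip:bob@example.com", "sip:alice@example.com", "abc123@192.0.2.10")
```

Setting up an actual call uses the same message shape with INVITE, and the body would then be an SDP description of the media the caller offers.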

VoIP / messaging protocols

Besides SIP there are various others, supporting or extending VoIP and messaging-style communication.





The possibly-handy-to-know-what-they-are list:

  • MMS - short for Microsoft Media Server, but referring to a proprietary (unicast) streaming protocol. Microsoft has (for Windows Media Player) moved to RTSP.

mopidy

see also:

Other software worth noting

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
  • Subsonic - Java-based web server that indexes music and can play it via the web page (and in other ways; hosting system, external player). Not very polished yet, but interesting.
  • VideoLan (as in VLC) - most things can be streamed, most of us just don't know how yet
  • Icecast2
    • edcast (previously oddcast)


And perhaps

  • Nullsoft Video [21]
  • Red5 is an open source flash server (as in streaming and such)

Unsorted

http://www.icecast.org/

http://muse.dyne.org/

http://www.cycling74.com/products/soundflower

http://www.rogueamoeba.com/nicecast/

DASH

https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP
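DASH serves media as short segments described by an XML manifest (an MPD), typically listing several bitrate variants so the player can switch between them. A sketch reading the representations out of a minimal, hand-written MPD (the IDs and bandwidths are made up):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal manifest: one video adaptation set with two bitrates
MPD = """<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" mediaPresentationDuration="PT30S">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="720p" bandwidth="3000000"/>
      <Representation id="360p" bandwidth="800000"/>
    </AdaptationSet>
  </Period>
</MPD>"""

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD)
reps = [(r.get("id"), int(r.get("bandwidth")))
        for r in root.findall(".//mpd:Representation", NS)]
# a player would pick one based on measured throughput, then fetch its segments
```

A real MPD would also carry segment URLs or templates per representation; this only shows pulling out the variant list a player chooses from.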

ALSA notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


See also: