The Sound class simplifies common tasks like audio input and visualization using a subset of the Web Audio API. It can be used with other audio libraries like tone.js, and extended to support additional web audio functions. See the guide to get started.
Construct a Sound instance. Usually, it's more convenient to use one of the static methods like Sound.load or Sound.from.
a SoundType string: "file", "input", or "gen"
If an analyzer is added (see the analyze function), get the number of frequency bins in the analyzer.
Get this Sound's AudioBuffer instance (if any) for advanced use-cases. See Sound.loadAsBuffer.
Get this Sound's AudioContext instance for advanced use-cases.
If the sound is generated, this sets and gets the frequency of the tone.
Get this Sound's AudioNode subclass instance for advanced use-cases.
Get this Sound's Output node AudioNode instance for advanced use-cases.
Indicates whether the sound is ready to play. When loading from a file, this corresponds to a "canplaythrough" event. You can also use this.source.addEventListener( 'canplaythrough', ...) if needed. See also MDN documentation.
Get this Sound's Audio element instance (if used) for advanced use-cases. See Sound.load.
Get this Sound's MediaStream instance (e.g., from microphone, if in use) for advanced use-cases. See Sound.input.
Get the type of input for this Sound instance: either "file", "input", or "gen".
Add an analyzer to this Sound. Call this once only.
the number of frequency bins
Optional minimum decibels (corresponds to AnalyserNode.minDecibels)
Optional maximum decibels (corresponds to AnalyserNode.maxDecibels)
Optional smoothing value (corresponds to AnalyserNode.smoothingTimeConstant)
Connect another AudioNode to this Sound instance's AudioNode. Using this function, you can extend the capabilities of this Sound instance for advanced use-cases such as filtering.
another AudioNode
Create or re-use an AudioBuffer. Only needed if you are using Sound.loadAsBuffer.
an AudioBuffer. Optionally, you can call this without parameters to re-use the existing buffer.
Get the raw frequency-domain data from the analyzer as unsigned 8-bit integers. An analyzer must be added before calling this function (see the analyze function).
Map the frequency-domain data from the analyzer to a range. An analyzer must be added before calling this function (see the analyze function).
map each data point [index, value] to [width, height]
Optionally, set a starting [x, y] position. Default is [0, 0]
Optionally, trim the start and end values by [startTrim, data.length-endTrim]
a Group containing the mapped values
form.point( s.freqDomainTo( space.size ) )
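The mapping itself can be sketched as a pure function. This is a simplified re-implementation for illustration only, not the library's actual code (the real method returns a Group of Pt objects); the helper name and the 0-255 normalization are assumptions based on the analyzer's unsigned 8-bit output:

```javascript
// Illustrative sketch (not the library's code): map analyzer bins,
// each a value from 0-255, into points within [width, height].
function mapDomainTo(bins, [width, height], [x0, y0] = [0, 0], [startTrim, endTrim] = [0, 0]) {
  const data = Array.from(bins).slice(startTrim, bins.length - endTrim);
  const step = width / data.length;
  // Each data point [index, value] becomes [x within width, y within height]
  return data.map((v, i) => [x0 + i * step, y0 + (v / 255) * height]);
}

const points = mapDomainTo(new Uint8Array([0, 255]), [100, 50]);
// points is [[0, 0], [50, 50]]
```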
Create a Sound given an AudioNode and an AudioContext from Web Audio API. See also this example using tone.js in the guide.
an AudioNode instance
an AudioContext instance
a string representing a type of input source: either "file", "input", or "gen".
Optionally include a MediaStream, if the type is "input"
a Sound instance
Create a Sound by generating a waveform using OscillatorNode.
a string representing the waveform type: "sine", "square", "sawtooth", "triangle", or "custom"
the frequency value in Hz to play, or a PeriodicWave instance if type is "custom".
a Sound instance
Sound.generate( 'sine', 120 )
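For context on the frequency parameter, here is a small equal-temperament helper that converts a MIDI note number to a frequency in Hz. This is illustrative arithmetic only, not part of the library:

```javascript
// Illustrative helper (not part of the library): MIDI note number
// to frequency in Hz, using A4 = MIDI note 69 = 440 Hz equal temperament.
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// e.g. Sound.generate( 'sine', midiToFreq(57) ) would play A3 at 220 Hz
```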
Create a Sound by streaming from an input device like a microphone. Note that this function returns a Promise which resolves to a Sound instance.
Optional constraints which can be used to select a specific input device. For example, you may use enumerateDevices to find a specific deviceId.
a Promise which resolves to a Sound instance
Sound.input().then( s => sound = s );
Create a Sound by loading from a sound file or an audio element.
either an url string to load a sound file, or an audio element.
whether to support loading cross-origin. Default is "anonymous".
a Sound instance
Sound.load( '/path/to/file.mp3' )
Create a Sound by loading from a sound file url as AudioBufferSourceNode. This method is cumbersome since it can only be played once. Use this method for now if you need to visualize sound in Safari and iOS. Once Apple has full support for FFT with streaming HTMLMediaElement, this method will likely be deprecated.
an url to the sound file
Removes the 'output' node added from setOutputNode. Note: if you start the Sound after calling this, it will play via the default node.
Sets the 'output' node for this Sound. This would typically be used after Sound.connect, if you are adding nodes in your chain for filtering purposes.
The AudioNode that should connect to the AudioContext
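A hedged sketch of a filtering chain combining connect and setOutputNode. The `ctx` accessor name is an assumption based on the AudioContext property described above; treat this as a pattern rather than verified library code:

```javascript
// Sketch: insert a lowpass BiquadFilterNode between this Sound's
// node and the destination. Assumes `sound` is a Sound instance and
// `sound.ctx` is its AudioContext accessor (an assumption).
function addLowpass(sound, cutoffHz) {
  const filter = sound.ctx.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = cutoffHz;
  sound.connect(filter);       // route this Sound's node through the filter
  sound.setOutputNode(filter); // play via the filter instead of the default node
  return filter;
}
```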
Start playing. Internally this connects the AudioNode to AudioContext's destination.
optional parameter to play from a specific time
Stop playing. Internally this also disconnects the AudioNode from AudioContext's destination.
Get the raw time-domain data from the analyzer as unsigned 8-bit integers. An analyzer must be added before calling this function (see the analyze function).
Map the time-domain data from the analyzer to a range. An analyzer must be added before calling this function (see the analyze function).
map each data point [index, value] to [width, height]
Optionally, set a starting [x, y] position. Default is [0, 0]
Optionally, trim the start and end values by [startTrim, data.length-endTrim]
a Group containing the mapped values
form.point( s.timeDomainTo( space.size ) )
Toggle between start and stop. This won't work if using Sound.loadAsBuffer, since AudioBuffer can only be played once. (See Sound.createBuffer to reset the buffer for replay.)
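Since an AudioBufferSourceNode plays only once, replaying a buffer-loaded Sound means re-creating the source first. A minimal sketch of that pattern, based on the createBuffer and start methods described above (the helper name is hypothetical):

```javascript
// Sketch: replay a Sound created via Sound.loadAsBuffer.
// createBuffer() with no argument re-uses the existing AudioBuffer,
// resetting the one-shot AudioBufferSourceNode so start() works again.
function replayBuffer(sound) {
  sound.createBuffer();
  sound.start();
}
```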