Sound

Audio and visual forms complement each other, giving us new opportunities to create unique expressions. Pts simplifies a subset of the Web Audio API to help you with common tasks like playback and visualization.

Before we dive in, let's look at a snippet that uses Pts' Sound functions. It's pretty straightforward.

// Load sound and attach an analyzer with the given number of bins
let bins = 256;
let sound;
Sound.load( "/assets/spacetravel.mp3" ).then( s => {
  sound = s.analyze( bins );
});

// ...

// Visualize frequencies (within animate loop)
if (sound && sound.playable) {
  sound.freqDomainTo( space.size ).forEach( (t, i) => {
    form.fill( colors[i%5] ).point( t, 30 );
  });
}

Here is the result. Click the play button to start.

js:sound_simple

Music snippet from Space Travel Clichés by Mr Green H.

How about something more elaborate? Let's try a silly and fun visualization.

js:sound_visual

Click the play button and move your pointer around the character. Music snippet from Space Travel Clichés by Mr Green H.

Input

Let's get some sounds to begin! Do you want to load from a sound file, receive microphone input, or generate audio dynamically? Pts offers four handy static functions for these.

  1. Use Sound.load to load a sound file with a URL or a specific <audio> element. The sound will play as soon as it has streamed enough data. You can check whether the audio file is ready to play via the .playable property.
Sound.load( "/path/to/hello.mp3" ).then( s => sound = s );
Sound.load( audioElem ).then( s => sound = s ); // load from <audio> element
  2. Use Sound.loadAsBuffer if you need to support Safari and iOS, since they currently don't reliably provide sound data for an <audio> element. See the discussion in the Advanced section below.
Sound.loadAsBuffer( "/path/to/hello.mp3" ).then( s => sound = s );
  3. Use Sound.generate to create a sound. You may also generate sounds with other libraries like Tone.js. Read more in the Advanced section below.
let sound = Sound.generate( "sine", 120 ); // sine oscillator at 120Hz
  4. Use Sound.input to get audio from the default input device (usually the microphone). This returns a Promise that resolves when the input device is ready. You can also pass custom constraints for advanced use cases (see the sketch after this list).
let sound;
Sound.input().then( s => sound = s ); // default input device
Sound.input( constraints ).then( s => sound = s ); // advanced use cases
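
The constraints argument is a standard MediaStreamConstraints object, which Pts forwards to the browser's getUserMedia. As a sketch (these particular fields are illustrative choices, not required by Pts), you might turn off audio processing to get a cleaner raw signal:

// A sketch: MediaStreamConstraints forwarded to getUserMedia
let constraints = {
  audio: {
    echoCancellation: false, // keep the raw signal
    noiseSuppression: false,
    autoGainControl: false
  }
};
Sound.input( constraints ).then( s => sound = s );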

Here's a basic demo of getting audio from microphone:

js:sound_mic

You may first need to allow this page to access your microphone, and then click the record button. We also stop the recording when the pointer leaves the demo area, so that your microphone isn't always on.

You can then start and stop playing the sound like this:

sound.start();
sound.stop();
sound.toggle(); // toggle between start and stop
sound.playing; // boolean to indicate if sound is playing

Note that current browsers no longer allow audio to autoplay. Users will need to express intent to play the sound (e.g., with a click or tap).
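
For example, here's a minimal sketch (assuming a button element with id "play" exists on the page) that ties playback to a user gesture:

// A sketch: start playback only after user interaction
document.querySelector( "#play" ).addEventListener( "click", () => {
  if (sound && sound.playable) sound.toggle();
});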

Analyze

Using the analyze function, we can attach an analyzer to keep track of the data in our Sound instance.

sound.analyze( 128 ); // Call once to initiate the analyzer

This will create an analyzer with 128 bins (more on that later) and default decibel range and smoothing values. See the analyze docs for a description of the advanced options.
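
As a sketch of those advanced options (the values here are only illustrative; check the analyze docs for the exact defaults), you can pass a decibel range and a smoothing factor along with the bin size:

// A sketch: bin size, min and max decibels, and smoothing factor
sound.analyze( 128, -100, -30, 0.8 );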

There are two common ways to analyze sound data. First, we can represent sounds as snapshots of sound waves, which correspond to variations in air pressure over time. This is called the time domain, as it measures the amplitudes of the "waves" over time steps.

To get the time domain data at the current time step, call the timeDomain function.

// get a Uint8Array of 128 values (corresponding to the bin size above)
let td = sound.timeDomain();

Optionally, use the timeDomainTo function to map the data to another range, such as a rectangular area. You can then apply various Pts functions to transform and visualize waveforms in a few lines of code.

// fit data into a 200x100 area, starting from position (50, 50)
let td = sound.timeDomainTo( [200, 100], [50, 50] );

form.points( td ); // visualize as points

In the following example, we map the data to a normalized circle and then re-map it to draw colorful lines.

sound.timeDomainTo( [Const.two_pi, 1] ).map( t => ... );
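
One way to flesh out that mapping (a sketch, not the demo's exact code; the radius of 100 and the centering are arbitrary choices here) is to treat t.x as an angle and t.y as a radius:

// A sketch: convert each (angle, radius) pair to an xy position on a circle
let pts = sound.timeDomainTo( [Const.two_pi, 1] ).map( t =>
  new Pt( t.y * 100, 0 ).rotate2D( t.x ).add( space.center )
);
form.strokeOnly( "#fff" ).line( pts );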

js:sound_time

Click to play and visualize the sounds of a drum, tambourine, and flute from the Philharmonia Orchestra.

In a similar way, we can access the frequency domain data via freqDomain and freqDomainTo. The frequency bins are calculated by an algorithm called the Fast Fourier Transform (FFT). The FFT size is 2 times the bin size, and it must be a power of 2. (Recall that we set the bin size to 128 earlier.) You can quickly test it with a single line of code:

form.points( sound.freqDomainTo( space.size ) );
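
To draw bars instead of points, a sketch (the color is arbitrary, and the bar width assumes the 128 bins set earlier) could extend each mapped point into a rectangle rising from the bottom edge:

// A sketch: draw each frequency bin as a vertical bar
let w = space.size.x / 128;
sound.freqDomainTo( space.size ).forEach( t => {
  form.fillOnly( "#0c9" ).rect( [[t.x, space.size.y - t.y], [t.x + w, space.size.y]] );
});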

The following is a basic frequency-domain example for your reference.

js:sound_frequency

The interplay of sounds and shapes offers many possibilities indeed. Make good use of your imagination to create something beautiful, fun, and unexpected!

Advanced

Currently Safari and iOS can play a streaming <audio> element, but they don't reliably provide time and frequency domain data for it. Hopefully Safari will have a fix soon, but for now you can use the AudioBuffer approach. It's a bit more clumsy, but it works (see loadAsBuffer).

Sound.loadAsBuffer( "/path/to/hello.mp3" ).then( s => sound = s );

AudioBuffer doesn't support streaming and can only be played once. To replay it, you need to recreate the buffer and reconnect the nodes. Call the convenient createBuffer function without a parameter to reuse the previous buffer.

// replay the sound by reusing previously loaded buffer
sound.createBuffer().analyze(bins);
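
For example, here's a sketch (assuming a replay button with id "replay" on the page, and that bins holds your bin size) that replays the buffered sound on each click:

// A sketch: rebuild the buffer, reattach the analyzer, and restart playback
document.querySelector( "#replay" ).addEventListener( "click", () => {
  sound.createBuffer().analyze( bins ).start();
});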

For custom use cases with other libraries, you can create an instance using the Sound.from static method. Here's an example using Tone.js:

let synth = new Tone.Synth(); 
let sound = Sound.from( synth, synth.context ); // create Pts Sound instance
synth.toMaster(); // play through Tone.js instead of Pts
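
Since the Pts Sound instance wraps the synth's output node, you can attach an analyzer to it and keep triggering notes through Tone.js. A sketch (the note and duration are arbitrary):

sound.analyze( 128 ); // attach analyzer to the wrapped node
synth.triggerAttackRelease( "C4", "8n" ); // play C4 for an eighth note via Tone.js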

The following demo generates audio using Tone.js and then visualizes it with Pts:

screenshot

Click the image to open the Tone.js demo and see its source code.

If needed, you can also directly access the following properties of a Sound instance to make full use of the Web Audio API: ctx (the AudioContext), node (the AudioNode), stream (the MediaStream from an input device), source (the HTMLMediaElement when loaded via Sound.load), buffer (the AudioBuffer when loaded via Sound.loadAsBuffer), and analyzer (the attached analyzer).
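
For instance, here's a sketch (the 0.5 gain is arbitrary; also note that start would connect node to the destination on its own, as described below) that uses ctx and node to route the sound through a gain node:

// A sketch: insert a gain node between the sound and the speakers
let gain = sound.ctx.createGain();
gain.gain.value = 0.5; // half volume
sound.node.connect( gain );
gain.connect( sound.ctx.destination );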

Also note that calling the start function will connect the AudioNode to the destination of the AudioContext, while stop will disconnect it.

Web Audio covers a wide range of topics. MDN's Web Audio API documentation is a good starting point if you want to dive deeper.

Cheatsheet

Creating and playing a Sound instance

Sound.load( "path/file.mp3" ).then( d => s = d ); // from file
Sound.loadAsBuffer( "path/file.mp3" ).then( d => s = d ); // using AudioBuffer instead
Sound.input().then( d => s = d ); // get microphone input
s = Sound.generate( "sine", 120 ); // sine wave at 120Hz
s = Sound.from( node, context ); // advanced use case

s.start();
s.stop();
s.toggle();

Getting time domain and frequency domain data

s.analyze( 256 ); // Create analyzer with 256 bins. Call once only.

s.timeDomain();
s.timeDomainTo( area, position ); // map to an area [w, h] starting from position [x, y]

s.freqDomain();
s.freqDomainTo( [10, 5] ); // map to a 10x5 area