Wednesday, June 22, 2016

Web Audio API: Why Compose When You Can Code?

The first draft of a web audio API appeared at the W3C in 2011. Although playing audio in webpages has been supported for a long time, a proper way of generating audio from within the browser arrived only quite recently. I personally attribute this to Google Chrome: once the browser became, in Google's interest, the most important part of the computer, things started to move. You may recall that the realm of the web browser didn't change much until Chrome appeared; back then, using sound in a webpage was usually a poor design decision. But with the rise of web experiments, web audio started to make sense again. Browsers today are another tool for artistic expression, and video and audio in the browser play a vital role in that.
The Web Audio API can be quite hard to use for some purposes, as it is still under development, but a number of JavaScript libraries already exist to make things easier. In this article I am going to show you how to get started with the Web Audio API using a library called Tone.js. With it, you will be able to cover most of your browser sound needs by learning only the basics.

HELLO WEB AUDIO API

Getting Started

We will begin without using the library. Our first experiment is going to involve making three sine waves. As this will be a simple example, we will create just one file named hello.html, a bare HTML file with a small amount of markup.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title> Hello web audio </title>
  </head>
  <body>
  </body>
  <script>
  </script>
</html>
The core of the Web Audio API is the audio context: an object that contains everything related to web audio. It's not considered good practice to have more than one audio context in a single project. We will begin by instantiating an audio context following the recommendations in Mozilla's Web Audio API documentation.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

Making an Oscillator

With an audio context instantiated, you already have one audio component: audioCtx.destination. This is like your speaker; to make a sound, you have to connect a sound source to it. Let's create an oscillator to produce that sound:
var sine = audioCtx.createOscillator();
Great, but not enough. It also needs to be started and connected to our audioCtx.destination:
sine.start();
sine.connect(audioCtx.destination);
With these four lines, you will have a pretty annoying webpage that plays a sine tone, but now you understand how modules connect with one another. In the following script, there are three sine-shaped tones connected to the output, each with a different pitch. The code is fairly self-explanatory:
//create the context for the web audio
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
//create, tune, start and connect each oscillator sinea, sineb and sinec
var sinea = audioCtx.createOscillator();
sinea.frequency.value = 440;
sinea.type = "sine";
sinea.start();
sinea.connect(audioCtx.destination);
var sineb = audioCtx.createOscillator();
sineb.frequency.value = 523.25;
sineb.type = "sine";
sineb.start();
sineb.connect(audioCtx.destination);
var sinec = audioCtx.createOscillator();
sinec.frequency.value = 698.46;
sinec.type = "sine";
sinec.start();
sinec.connect(audioCtx.destination);
Oscillators are not restricted to sine waves; they can also produce triangle, sawtooth, square, and custom waveforms, as stated in the MDN documentation.
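As a small sketch of that (reusing the audioCtx from above; the harmonic values below are only illustrative), you can either change the type property or build a custom waveform with createPeriodicWave:
// change the waveform of a new oscillator
var saw = audioCtx.createOscillator();
saw.type = "sawtooth"; // also "square", "triangle" or "custom"
saw.frequency.value = 220;
saw.start();
saw.connect(audioCtx.destination);
// build a custom waveform from a few harmonics
// (the arrays hold the cosine and sine coefficients of each harmonic)
var real = new Float32Array([0, 1, 0.5, 0.25]);
var imag = new Float32Array([0, 0, 0, 0]);
var custom = audioCtx.createOscillator();
custom.setPeriodicWave(audioCtx.createPeriodicWave(real, imag));
custom.start();
custom.connect(audioCtx.destination);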

Patching Logic of Web Audio

Next, we will add a gain module to our orchestra of Web Audio components. This module lets us change the amplitude of our sounds; it is akin to a volume knob. We have already used the connect function to connect an oscillator to the audio output, and we can use the same connect function to connect any audio component. If you are using Firefox and open the web audio tab of the developer tools, you will see the current patch drawn as a graph.
To change the volume, the oscillators should no longer connect directly to the audio destination, but to a gain module, and that gain module connects to the destination. It helps to imagine doing this with guitar pedals and cables. The code will look like this:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
// create the gain module, named volume, and connect it to the audio destination
var volume = audioCtx.createGain();
volume.connect(audioCtx.destination);
// these sines are the same as before, except for the last connect statement:
// now they are connected to the volume gain module and not to the audio destination
var sinea = audioCtx.createOscillator();
sinea.frequency.value = 440;
sinea.type = "sine";
sinea.start();
sinea.connect(volume);
var sineb = audioCtx.createOscillator();
sineb.frequency.value = 523.25;
sineb.type = "sine";
sineb.start();
sineb.connect(volume);
var sinec = audioCtx.createOscillator();
sinec.frequency.value = 698.46;
sinec.type = "sine";
sinec.start();
sinec.connect(volume);
volume.gain.value = 0.2;
You can find the solution at https://github.com/autotel/simple-webaudioapi/.
GainNode is the most basic effect unit, but there are also delay, convolver, biquad filter, stereo panner, and wave shaper nodes, among many others. You can also grab new effects from libraries such as Tone.js.
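As a quick, hedged sketch (assuming the sinea and volume variables from the example above), a low-pass BiquadFilterNode can be patched between an oscillator and the gain module like this:
// create a low-pass filter and insert it before the volume gain module
var filter = audioCtx.createBiquadFilter();
filter.type = "lowpass";
filter.frequency.value = 800; // cutoff frequency in Hz
filter.connect(volume);
// route an oscillator through the filter instead of straight into volume
sinea.connect(filter);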
Storing one of these sound patches in objects of their own will allow you to reuse them as needed, and create more complex orchestrations with less code. This could be a topic for a future post.
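As a rough illustration of that idea (not part of the original example), a small factory function could bundle an oscillator and its own gain into a reusable voice:
// a reusable "voice": one oscillator with its own gain, patched to the destination
function makeVoice(frequency) {
    var osc = audioCtx.createOscillator();
    var amp = audioCtx.createGain();
    osc.frequency.value = frequency;
    osc.connect(amp);
    amp.gain.value = 0.2;
    amp.connect(audioCtx.destination);
    osc.start();
    return { osc: osc, amp: amp };
}
// the three-sine example then becomes a single line
var voices = [440, 523.25, 698.46].map(makeVoice);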

MAKING THINGS EASIER WITH TONE.JS

Now that we have taken a brief look at how the vanilla Web Audio modules work, let us turn to Tone.js, an excellent Web Audio framework. With it (and NexusUI for user interface components), we can very easily build more interesting synths and sounds. To try things out, we will make a sampler, apply some user-controlled effects to it, and then add some simple controls for the sample.

Tone.js Sampler

We can start by creating a simple project structure:
simpleSampler
|-- js
    |-- nexusUI.js
    |-- Tone.js
|-- noisecollector_hit4.wav
|-- sampler.html
Our JavaScript libraries will reside in the js directory. For the purposes of this demo, we can use NoiseCollector's hit4.wav file, which can be downloaded from Freesound.org.
Tone.js provides sample playback through Player objects. The basic capability of a Player is to load a sample and play it, either once or in a loop. Our first step is to create a Player object in a sampler variable, inside the sampler.html file:
<!doctype html>
<html>
    <head>
        <title> Sampler </title>
        <script type="text/javascript" src="js/nexusUI.js" ></script>
        <script type="text/javascript" src="js/Tone.js" ></script>
        <script>
            var sampler = new Tone.Player("noisecollector_hit4.wav", function() {
                console.log("samples loaded");
            });
        </script>
    </head>
    <body>
    </body>
</html>
Note that the first parameter of the Player constructor is the name of the WAV file, and the second is a callback function. WAV is not the only supported file type; compatibility depends on the web browser more than on the library. The callback function runs when the player has finished loading the sample into its buffer.
We also have to connect our sampler to the output. The Tone.js way of doing this is:
sampler.toMaster();
… where sampler is the Tone.Player object we created above; the line goes right after the constructor call. The toMaster function is shorthand for connect(Tone.Master).
If you open the page with the developer console visible, you should see the “samples loaded” message, indicating that the player was created correctly. At this point you may want to hear the sample. To do that, we need to add a button to the webpage and program it to play the sample when pressed. We are going to use a NexusUI button in the body:
<canvas nx="button"></canvas>
You should now see a rounded button being rendered in the document. To program it to play our sample, we add a NexusUI listener, which looks like this:
button1.on('*', function(data) {
    console.log("button pressed!");
});
Something notable about NexusUI is that it creates a global variable for each NexusUI element. You can tell NexusUI not to do that, and instead keep these references only in nx.widgets[], by setting nx.globalWidgets to false. Since we are creating just a couple of elements here, we'll stick with the default behaviour.
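If you did want to turn that behaviour off, the setting would look something like this (just a sketch of the option mentioned above):
// optional: stop NexusUI from creating a global variable per element;
// widgets then remain available only through nx.widgets[]
nx.globalWidgets = false;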
Just as in jQuery, we can attach .on event listeners, where the first argument is the event name. Here we are assigning a function to whatever happens to the button; this “whatever” is written as “*”. You can learn more about the events of each element in the NexusUI API. To play the sample instead of logging a message when we press the button, we call the start function of our sampler:
nx.onload = function() {
    button1.on('*', function(data) {
        console.log("button pressed!");
        sampler.start();
    });
}
Also notice that the listener goes inside an onload callback. NexusUI elements are drawn on canvas, and you can't refer to them until nx calls the onload function, just as you would wait for DOM elements in jQuery.
The event is triggered both on mouse down and on release. If you want it to trigger only on press, you have to check whether the press field of the event data equals one.
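In that case, the listener from above would look something like this (a small variation; the data.press field is assumed from the event data NexusUI passes in):
nx.onload = function() {
    button1.on('*', function(data) {
        // react only when the button is pressed, not when it is released
        if (data.press === 1) {
            sampler.start();
        }
    });
}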
With this, you should have a button that plays the sample on each press. If you set sampler.retrigger to true, it will allow you to play the sample regardless of whether it is playing or not. Otherwise, you have to wait until the sample finishes to retrigger it.
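Enabling that is a one-line change, placed anywhere after the player is created:
// allow the sample to restart even if it is still playing
sampler.retrigger = true;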

Applying Effects

With Tone.js, we can easily create a delay:
var delay = new Tone.FeedbackDelay("16n", 0.5).toMaster();
The first argument is the delay time, which can be written in musical notation as shown here. The second is the wet level, meaning the mix between the original sound and the processed sound. For delays you usually don't want 100% wet, because a delay is interesting in relation to the original sound; the wet signal alone is not as appealing as both together.
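If I remember the Tone.js effect API correctly, the wet amount can also be adjusted after the effect is created, which is handy for experimenting:
// 0 is fully dry (original signal only), 1 is fully wet (effect only)
delay.wet.value = 0.5;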
The next step is to unplug our sampler from master and plug it into the delay instead. Change the line where the sampler is connected to master:
sampler.connect(delay);
Now try the button again and see the difference.
Next, we will add two dials to the body of our document:
<canvas nx="dial"></canvas>
<canvas nx="dial"></canvas>
And we apply the dials' values to the delay effect using NexusUI listeners:
dial1.on('*', function(data) {
    delay.delayTime.value = data.value;
});
dial2.on('*', function(data) {
    delay.feedback.value = data.value;
});
The parameters that you can tweak on each effect can be found in the Tone.js documentation; for the delay, see the FeedbackDelay reference. Now you are ready to try the example and tweak the delay parameters with the NexusUI dials. The same can easily be done with any NexusUI element, and is not limited to effects. For instance, try adding another dial and its listener as follows:
dial3.on('*', function(data) {
    sampler.playbackRate = data.value;
});
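Since NexusUI dials emit values between 0 and 1, you may want to scale them into a more useful range; the mapping below is only an illustrative choice:
dial3.on('*', function(data) {
    // map the 0..1 dial value to a playback rate between 0.5x and 2x
    sampler.playbackRate = 0.5 + data.value * 1.5;
});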
You can find these files at github.com/autotel/simpleSampler.

CONCLUSION

When I first went through these APIs, I felt overwhelmed by all the possibilities and ideas that came to mind. The big difference between this implementation of audio and traditional digital audio implementations is not the audio itself, but the context. There are no new synthesis methods here; rather, the innovation is that audio and music making now meet web technologies.
I am personally involved in electronic music making, and this area has always carried an ambiguity between actually performing music and simply pressing play on a recorded track. If you want to really make live electronic music, you must be able to create your own performative tools, or “music-making robots”, for live improvisation. But if performing electronic music amounts to tweaking parameters of pre-prepared music-making algorithms, then the audience can also be involved in that process. I have been working on small experiments around this integration of web and audio for crowdsourced music, and perhaps soon we will attend parties where the music comes from the audience through their smartphones. After all, it's not that different from the rhythmic jams we might have enjoyed back in the cave ages.
This article was written by Joaquín Aldunate.
